WorldWideScience

Sample records for modeling algorithm development

  1. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model-based development of engine control systems has several advantages. Development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both the engine and the control system. After optimizing the control algorithm, it can be executed ...

  2. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS

    Science.gov (United States)

    2015-09-30

    High-resolution simulations using nonhydrostatic models like SUNTANS are crucial for understanding multiscale processes that are unresolved ... Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS. Oliver B. Fringer, Dept. of Civil and Environmental Engineering, Stanford University

  3. DEVELOPMENT OF 2D HUMAN BODY MODELING USING THINNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Srinivasan

    2010-11-01

    Monitoring the behavior and activities of people through video surveillance has gained increasing application in computer vision. This paper proposes a new approach to modeling the human body in 2D view for activity analysis using a thinning algorithm. The first step of this work is background subtraction, which is achieved by a frame-differencing algorithm. The thinning algorithm is then used to find the skeleton of the human body. After thinning, thirteen feature points, such as terminating points, intersecting points, and shoulder, elbow, and knee points, are extracted. This research represents the body model in three different ways: a stick-figure model, a patch model, and a rectangle body model. The activities of humans are analyzed with the help of the 2D model for pre-defined poses from monocular video data. Finally, the time consumption and efficiency of the proposed algorithm are evaluated.
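
    The pipeline this record describes (frame differencing for background subtraction, thinning to a skeleton, then detection of terminating and intersecting points) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: scikit-image's skeletonize stands in for the thinning algorithm, and the threshold and toy frames are assumptions.

```python
# Sketch of the frame-differencing + thinning pipeline described above.
# skeletonize is a stand-in for the paper's thinning step; the threshold
# and synthetic frames are illustrative assumptions.
import numpy as np
from skimage.morphology import skeletonize

def extract_skeleton(frame, background, threshold=30):
    """Background subtraction by frame differencing, then thinning."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    foreground = diff > threshold          # binary silhouette
    return skeletonize(foreground)         # 1-pixel-wide skeleton

def feature_points(skeleton):
    """Terminating (1 neighbour) and intersecting (3+ neighbours) pixels."""
    points = {"terminating": [], "intersecting": []}
    rows, cols = skeleton.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if not skeleton[r, c]:
                continue
            n = skeleton[r-1:r+2, c-1:c+2].sum() - 1  # 8-neighbourhood count
            if n == 1:
                points["terminating"].append((r, c))
            elif n >= 3:
                points["intersecting"].append((r, c))
    return points

# Toy example: a vertical bar as the "body" against an empty background.
background = np.zeros((64, 64), dtype=np.uint8)
frame = background.copy()
frame[10:50, 30:34] = 255
skel = extract_skeleton(frame, background)
print(feature_points(skel))
```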

  4. Toward Developing Genetic Algorithms to Aid in Critical Infrastructure Modeling

    Energy Technology Data Exchange (ETDEWEB)

    2007-05-01

    Today's society relies upon an array of complex national and international infrastructure networks such as transportation, telecommunications, finance and energy. Understanding these interdependencies is necessary in order to protect our critical infrastructure. The Critical Infrastructure Modeling System, CIMS©, examines the interrelationships between infrastructure networks. CIMS© development is sponsored by the National Security Division at the Idaho National Laboratory (INL) in its ongoing mission of providing critical infrastructure protection and preparedness. A genetic algorithm (GA) is an optimization technique based on Darwin's theory of evolution. A GA can be coupled with CIMS© to search for optimum ways to protect infrastructure assets, including identifying the optimum assets to enforce or protect, testing an addition or change to infrastructure before implementation, and finding the optimum response to an emergency for response planning. This paper describes the addition of a GA to infrastructure modeling for infrastructure planning. It first introduces the CIMS© infrastructure modeling software used as the modeling engine to support the GA. Next, the GA techniques and parameters are defined. A test scenario then illustrates the integration with CIMS© and the preliminary results.
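
    The record does not publish the GA internals, so the following is a generic sketch of the kind of search loop described: a binary genome marks which assets to protect, and a hypothetical fitness function stands in for a CIMS© scenario evaluation (all model-specific details here are assumptions).

```python
# Minimal GA sketch: choose which infrastructure assets to protect under a
# budget. The fitness function is a hypothetical stand-in for a CIMS(c)
# scenario evaluation; criticality scores are toy data.
import random

N_ASSETS, BUDGET, POP, GENS = 20, 5, 40, 60
criticality = [random.uniform(0, 1) for _ in range(N_ASSETS)]  # toy data

def fitness(genome):
    # Reward protecting critical assets; penalize exceeding the budget.
    protected = sum(genome)
    value = sum(c for g, c in zip(genome, criticality) if g)
    return value - 10 * max(0, protected - BUDGET)

def crossover(a, b):
    cut = random.randrange(1, N_ASSETS)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(N_ASSETS)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]                 # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("protect assets:", [i for i, g in enumerate(best) if g])
```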

  5. Algorithm Development for the Two-Fluid Plasma Model

    Science.gov (United States)

    2009-02-17

    ... "Nonlinear full two-fluid study of m=0 sausage instabilities in an axisymmetric Z-pinch," Physics of Plasmas 13, 082310 (2006); A. Hakim and U. Shumlak, "Two-fluid physics and ..." ... accurate as the solution variables. The high-order representation of the solution variables satisfies the accuracy requirement to preserve the ... here [2]. It also illustrates the dispersive nature of the waves, which makes capturing the effect difficult in MHD algorithms. The electromagnetic ...

  6. Algorithm Development for the Multi-Fluid Plasma Model

    Science.gov (United States)

    2011-05-30

    ... The phase velocities of a Hall-MHD wave increase without bound with wave number. The large wave speeds increase the stiffness of the equation system, making accurate ... illustrates the dispersive nature of the waves, which makes capturing the effect difficult in MHD algorithms. The electromagnetic plasma shock serves to ... Nonlinear full two-fluid study of m = 0 sausage instabilities in an axisymmetric Z pinch. Physics of Plasmas, 13(8):082310, 2006. [5] A. Hakim and U. Shumlak ...

  7. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization comes increased mobility, leading to a greater amount of traffic-related activity on a global scale. ...

  8. Development of a multi-objective optimization algorithm using surrogate models for coastal aquifer management

    Science.gov (United States)

    Kourakos, George; Mantoglou, Aristotelis

    2013-02-01

    The demand for fresh water in coastal areas and islands can be very high due to increased local needs and tourism. A multi-objective optimization methodology is developed, involving minimization of economic and environmental costs while satisfying water demand. The methodology considers desalinization of pumped water and injection of treated water into the aquifer. Variable-density aquifer models are computationally intractable when integrated in optimization algorithms. In order to alleviate this problem, a multi-objective optimization algorithm is developed combining surrogate models based on modular neural networks [MOSA(MNNs)]. The surrogate models are trained adaptively during optimization based on a genetic algorithm. In the crossover step, each pair of parents generates a pool of offspring which are evaluated using the fast surrogate model. Then, the most promising offspring are evaluated using the exact numerical model. This procedure eliminates errors in the Pareto solution due to imprecise predictions of the surrogate model. The method offers important advances over previous methods, such as precise evaluation of the Pareto set and reduced propagation of errors due to surrogate-model approximations. The method is applied to an aquifer on the Greek island of Santorini. The results show that the new MOSA(MNN) algorithm offers a significant reduction in computational time compared to previous methods (in the case study it requires only 5% of the time required by other methods). Further, the Pareto solution is better than the solutions obtained by alternative algorithms.
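
    The key mechanism of the MOSA(MNN) crossover step, pre-screening a pool of offspring with a cheap surrogate and passing only the most promising candidates to the exact model, can be sketched as follows. The quadratic stand-in functions are assumptions; the paper's surrogate is a trained modular neural network and its exact model is a variable-density aquifer simulation.

```python
# Sketch of surrogate-filtered offspring evaluation: the cheap surrogate ranks
# a pool of offspring, and only the best few reach the expensive exact model.
# Both functions are placeholders, not the paper's aquifer model.
import random

def exact_model(x):          # stands in for the variable-density aquifer model
    return (x - 0.3) ** 2

def surrogate(x):            # stands in for the trained neural-network surrogate
    return (x - 0.28) ** 2 + 0.01 * random.random()

def offspring_pool(p1, p2, size=8):
    return [random.uniform(min(p1, p2), max(p1, p2)) for _ in range(size)]

parents = [random.random() for _ in range(10)]
next_gen = []
for p1, p2 in zip(parents[::2], parents[1::2]):
    pool = offspring_pool(p1, p2)
    pool.sort(key=surrogate)                  # cheap pre-screening
    promising = pool[:2]                      # only these hit the exact model
    next_gen.extend(sorted(promising, key=exact_model)[:1])

print("selected offspring:", next_gen)
```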

  9. Modelling Kara Sea phytoplankton primary production: Development and skill assessment of regional algorithms

    Science.gov (United States)

    Demidov, Andrey B.; Kopelevich, Oleg V.; Mosharov, Sergey A.; Sheberstov, Sergey V.; Vazyulya, Svetlana V.

    2017-07-01

    Empirical region-specific (RSM), depth-integrated (DIM) and depth-resolved (DRM) primary production models are developed based on data from the Kara Sea during autumn (September-October 1993, 2007, 2011). The models are validated using field and satellite (MODIS-Aqua) observations. Our findings suggest that RSM algorithms perform better than non-region-specific algorithms (NRSM) in terms of regression analysis, root-mean-square difference (RMSD) and model efficiency. In general, the RSM and NRSM underestimate or overestimate the in situ water-column integrated primary production (IPP) by factors of 2 and 2.8, respectively. Additionally, our results suggest that the model skill of the RSM increases when the chlorophyll-specific carbon fixation rate, the efficiency of photosynthesis and photosynthetically available radiation (PAR) are used as input variables. The parameterization of chlorophyll (chl a) vertical profiles is performed in Kara Sea waters with different trophic statuses. Model validation with field data suggests that the DIM and DRM algorithms perform equally well (RMSD of 0.29 and 0.31, respectively). No changes in the performance of the DIM and DRM algorithms are observed (RMSD of 0.30 and 0.31, respectively) when satellite-derived chl a, PAR and the diffuse attenuation coefficient (Kd) are applied as input variables.

  10. An Algorithm to Develop Lumped Model for Gunn-Diode Dynamics

    OpenAIRE

    Umesh Kumar

    1998-01-01

    A nonlinear lumped model can be developed for Gunn diodes to describe the diffusion effects as the domain travels from the cathode to the anode of a Gunn diode. The model describes the domain extinction and nucleation phenomena. It allows the user to specify an arbitrary nonlinear drift velocity V(E) and nonlinear diffusion D(E). The model simulates arbitrary Gunn-diode circuits operating in any mature high-field domain or in the LSA mode. Here we have constructed an algorithm to lead to development of t...

  11. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    Directory of Open Access Journals (Sweden)

    Christley Scott

    2010-08-01

    Background: Simulation of sophisticated biological models requires considerable computational power. These models typically integrate numerous biological phenomena such as spatially explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results: We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU, including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive for avoiding common pitfalls as well as for extracting performance from the GPU architecture. Conclusions: We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a ...

  12. Development of a 3D modeling algorithm for tunnel deformation monitoring based on terrestrial laser scanning

    Directory of Open Access Journals (Sweden)

    Xiongyao Xie

    2017-03-01

    Deformation monitoring is vital for tunnel engineering. Traditional monitoring techniques measure only a few data points, which is insufficient for understanding the deformation of the entire tunnel. Terrestrial laser scanning (TLS) is a newly developed technique that can collect thousands of data points in a few minutes, with promising applications to tunnel deformation monitoring. The raw point cloud collected from TLS cannot display tunnel deformation directly; therefore, a new 3D modeling algorithm was developed for this purpose. The 3D modeling algorithm includes modules for preprocessing the point cloud, extracting the tunnel axis, performing coordinate transformations, performing noise reduction and generating the 3D model. Measurement results from TLS were compared to the results of a total station and numerical simulation, confirming the reliability of TLS for tunnel deformation monitoring. Finally, a case study of the Shanghai West Changjiang Road tunnel is introduced, where TLS was applied to measure shield tunnel deformation over multiple sections. Settlement, segment dislocation and cross-section convergence were measured and visualized using the proposed 3D modeling algorithm.

  13. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    Science.gov (United States)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film does not reach the intended level, the experiments have to be repeated until the desired quality is met. This research proposes the Gravitational Search Algorithm (GSA) as the optimization model to reduce the time and cost spent in thin-film fabrication. The optimization model's engine has been developed using Java. The model is based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate and substrate temperature. The results are promising, and it can be concluded that the model's performance on this parameter optimization problem is satisfactory. Future work could compare GSA with other nature-inspired algorithms and test them with various data sets.
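
    For readers unfamiliar with GSA, the sketch below shows the canonical update loop (fitness-derived masses, a decaying gravitational constant, pairwise attractive forces) on a toy objective; the constants G0 and alpha and the sphere function are illustrative assumptions, not the sputtering-process model from this record.

```python
# Minimal Gravitational Search Algorithm sketch on a toy 2-D objective.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, iters = 20, 2, 100
X = rng.uniform(-5, 5, (n_agents, dim))      # agent positions
V = np.zeros((n_agents, dim))
G0, alpha, eps = 100.0, 20.0, 1e-12          # assumed constants

def objective(x):                            # stand-in for deposition quality
    return np.sum(x ** 2)

for t in range(iters):
    fit = np.array([objective(x) for x in X])
    worst, best = fit.max(), fit.min()
    m = (worst - fit) / (worst - best + eps)  # better fitness -> larger mass
    M = m / (m.sum() + eps)
    G = G0 * np.exp(-alpha * t / iters)       # gravitational "constant" decays
    A = np.zeros_like(X)
    for i in range(n_agents):
        for j in range(n_agents):
            if i == j:
                continue
            diff = X[j] - X[i]
            r = np.linalg.norm(diff)
            # acceleration = force / M_i, so M_i cancels and only M_j remains
            A[i] += rng.random() * G * M[j] * diff / (r + eps)
    V = rng.random((n_agents, dim)) * V + A
    X = X + V

print("best found:", X[np.argmin([objective(x) for x in X])])
```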

  14. Probabilistic models, learning algorithms, and response variability: sampling in cognitive development.

    Science.gov (United States)

    Bonawitz, Elizabeth; Denison, Stephanie; Griffiths, Thomas L; Gopnik, Alison

    2014-10-01

    Although probabilistic models of cognitive development have become increasingly prevalent, one challenge is to account for how children might cope with a potentially vast number of possible hypotheses. We propose that children might address this problem by 'sampling' hypotheses from a probability distribution. We discuss empirical results demonstrating signatures of sampling, which offer an explanation for the variability of children's responses. The sampling hypothesis provides an algorithmic account of how children might address computationally intractable problems and suggests a way to make sense of their 'noisy' behavior.
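
    The sampling account lends itself to a short illustration: weigh a handful of hypotheses by Bayes' rule, then respond by sampling rather than maximizing, so responses vary across trials. The hypothesis space and likelihoods below are invented for illustration only.

```python
# Toy illustration of the 'sampling' account: posterior over hypotheses via
# Bayes' rule, then sampled (rather than maximizing) responses.
import random

priors = {"red-blocks-work": 0.5, "blue-blocks-work": 0.3, "all-blocks-work": 0.2}
likelihood = {"red-blocks-work": 0.9, "blue-blocks-work": 0.1, "all-blocks-work": 0.7}

# Posterior after one piece of evidence (a red block activates the toy).
unnorm = {h: p * likelihood[h] for h, p in priors.items()}
Z = sum(unnorm.values())
posterior = {h: p / Z for h, p in unnorm.items()}

# Sampled responses vary across trials -- the signature of sampling behaviour.
responses = random.choices(list(posterior), weights=posterior.values(), k=10)
print(posterior)
print(responses)
```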

  15. Development of Genetic Algorithm Based Macro Mechanical Model for Steel Fibre Reinforced Concrete

    Directory of Open Access Journals (Sweden)

    Gopala Krishna Sastry, K.V.S.

    2014-01-01

    This paper presents the applicability of hybrid networks that combine an Artificial Neural Network (ANN) and a Genetic Algorithm (GA) for predicting the strength properties of Steel Fibre Reinforced Concrete (SFRC), with water-cement ratio (0.4, 0.45, 0.5, 0.55), aggregate-cement ratio (3, 4, 5), fibre percentage (0.75, 1.0, 1.5) and fibre aspect ratio (40, 50, 60) as input vectors. Strength properties of SFRC such as compressive strength, flexural strength, split tensile strength and compaction factor are considered as the output vector. The network has been trained with data obtained from experimental work. The hybrid neural network model learned the relation between input and output vectors in 1900 iterations. After successful learning, the GA-based BPN model predicted the strength characteristics, satisfying all the constraints, with an accuracy of about 95%. The various stages involved in the development of the genetic algorithm based neural network model are addressed at length in this paper.

  16. Analysis and Development of Walking Algorithm Kinematic Model for 5-Degree of Freedom Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Gerald Wahyudi Setiono

    2012-12-01

    A walking diagram and the kinematic calculations for a bipedal robot have been developed. The bipedal robot was designed and constructed with several kinds of servo brackets for the legs, two feet and a hip. Each leg of the bipedal robot has 5 degrees of freedom: three pitches (hip joint, knee joint and ankle joint) and two rolls (hip joint and ankle joint). The walking algorithm of this bipedal robot is based on the triangle formulation of the law of cosines to get the angle value at each joint. The hip height, the height of the swinging leg and the step distance are derived from linear equations. This paper discusses the kinematic model analysis and the development of the walking diagram of the bipedal robot. Kinematic equations were derived, and the joint angles were simulated and coded onto an Arduino board to be executed by the robot.
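
    The law-of-cosines step is easy to make concrete: given the thigh and shank lengths and the hip-to-ankle distance (derived from hip height and step distance), the interior knee angle follows directly. The link lengths below are assumed values, not the robot's actual dimensions.

```python
# Law-of-cosines sketch for a 2-link (thigh + shank) leg in the sagittal plane.
import math

def knee_angle(thigh, shank, hip_to_ankle):
    """Interior knee angle from the triangle (thigh, shank, hip-ankle line)."""
    cos_knee = (thigh**2 + shank**2 - hip_to_ankle**2) / (2 * thigh * shank)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_knee))))

# Hip-to-ankle distance from hip height and half the step distance.
thigh, shank = 10.0, 10.0                     # cm, assumed link lengths
hip_height, step = 17.0, 6.0
hip_to_ankle = math.hypot(hip_height, step / 2)
print(f"knee angle: {knee_angle(thigh, shank, hip_to_ankle):.1f} deg")
```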

  17. Models and Algorithms for Production Planning and Scheduling in Foundries – Current State and Development Perspectives

    Directory of Open Access Journals (Sweden)

    A. Stawowy

    2012-04-01

    Mathematical programming, constraint programming and computational intelligence techniques, presented in the operations research and production management literature, are generally inadequate for planning real-life production processes. These methods are in fact dedicated to solving standard problems such as shop floor scheduling or lot-sizing, or their simple combinations such as scheduling with batching, whereas many real-world production planning problems require the simultaneous solution of several problems: in addition to task scheduling and lot-sizing, these include cutting, workforce scheduling, packing and transport issues, as well as problems that are difficult to structure. The article presents examples and a classification of production planning and scheduling systems in the foundry industry described in the literature, and also outlines possible development directions for the models and algorithms used in such systems.

  18. Development of Serum Marker Models to Increase Diagnostic Accuracy of Advanced Fibrosis in Nonalcoholic Fatty Liver Disease: The New LINKI Algorithm Compared with Established Algorithms

    Science.gov (United States)

    Lykiardopoulos, Byron; Hagström, Hannes; Fredrikson, Mats; Ignatova, Simone; Stål, Per; Hultcrantz, Rolf; Ekstedt, Mattias

    2016-01-01

    Background and Aim: Detection of advanced fibrosis (F3-F4) in nonalcoholic fatty liver disease (NAFLD) is important for ascertaining prognosis. Serum markers have been proposed as alternatives to biopsy. We attempted to develop a novel algorithm for the detection of advanced fibrosis based on a more efficient combination of serological markers, and to compare this with established algorithms. Methods: We included 158 patients with biopsy-proven NAFLD. Of these, 38 had advanced fibrosis. The following fibrosis algorithms were calculated: NAFLD fibrosis score, BARD, NIKEI, NASH-CRN regression score, APRI, FIB-4, King's score, GUCI, Lok index, Forns score, and ELF. The study population was randomly divided into a training and a validation group. A multiple logistic regression analysis using bootstrapping methods was applied to the training group. Among the many variables analyzed, age, fasting glucose, hyaluronic acid and AST were included, and a model (LINKI-1) for predicting advanced fibrosis was created. Moreover, these variables were combined with platelet count in a mathematical way that exaggerates the opposing effects, and alternative models (LINKI-2) were also created. Models were compared using the area under the receiver operating characteristic curve (AUROC). Results: Of the established algorithms, FIB-4 and King's score had the best diagnostic accuracy, with AUROCs of 0.84 and 0.83, respectively. Higher accuracy was achieved with the novel LINKI algorithms: in the total cohort, the AUROC was 0.91 for LINKI-1 and 0.89 for the LINKI-2 models. Conclusion: The LINKI algorithms for detection of advanced fibrosis in NAFLD showed better accuracy than established algorithms and should be validated in further studies including larger cohorts. PMID:27936091
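
    A hedged sketch of the modelling-and-comparison workflow (fit a logistic model on a training split, report the validation AUROC) is shown below. The synthetic data stand in for the NAFLD cohort; the feature list mirrors the LINKI-1 inputs, but the coefficients and outcome generation are assumptions, not the published model.

```python
# Sketch: logistic model on a training split, AUROC on the validation split.
# Synthetic data replace the NAFLD cohort; everything numeric is assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 158
X = np.column_stack([
    rng.normal(55, 12, n),    # age
    rng.normal(6, 1.5, n),    # fasting glucose
    rng.normal(60, 30, n),    # hyaluronic acid
    rng.normal(40, 15, n),    # AST
])
# Synthetic outcome loosely driven by the features (advanced fibrosis yes/no).
logit = 0.04 * X[:, 0] + 0.3 * X[:, 1] + 0.02 * X[:, 2] + 0.03 * X[:, 3] - 7
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"validation AUROC: {auc:.2f}")
```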

  19. Successive smoothing algorithm for constructing the semiempirical model developed at ONERA to predict unsteady aerodynamic forces. [aeroelasticity in helicopters]

    Science.gov (United States)

    Petot, D.; Loiseau, H.

    1982-01-01

    Unsteady aerodynamic methods adopted for the study of aeroelasticity in helicopters are considered with focus on the development of a semiempirical model of unsteady aerodynamic forces acting on an oscillating profile at high incidence. The successive smoothing algorithm described leads to the model's coefficients in a very satisfactory manner.

  1. Inversion model validation of ground emissivity. Contribution to the development of SMOS algorithm

    CERN Document Server

    Demontoux, François; Ruffié, Gilles; Wigneron, Jean Pierre; Grant, Jennifer; Hernandez, Daniel Medina

    2007-01-01

    SMOS (Soil Moisture and Ocean Salinity) is the second 'Earth Explorer' mission developed within the 'Living Planet' program of the European Space Agency (ESA). This satellite, carrying the very first 2D interferometric radiometer at 1.4 GHz, will perform the first planetary-scale mapping of soil moisture and ocean salinity. Forests are relatively opaque, and knowledge of soil moisture beneath them remains problematic. The effect of the vegetation can be corrected with a simple radiative model. Nevertheless, simulations show that the effect of the litter on the emissivity of a litter + ground system is not negligible. Our objective is to highlight the effects of this layer on the total multi-layer system. This will make it possible to arrive at a simple analytical formulation of a litter model that can be integrated into the SMOS calculation algorithm. Radiometer measurements, coupled with dielectric characterizations of samples in the laboratory, can enable us to characterize...

  2. A novel hybrid classification model of genetic algorithms, modified k-Nearest Neighbor and developed backpropagation neural network.

    Science.gov (United States)

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods for classification problems in numerous studies. In the present study, the results of implementing a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies for the development of classification models. Such a combination creates an opportunity to exploit the strengths of each algorithm and to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, feature-ranking techniques such as the Fisher's discriminant ratio and class separability criteria were first used to prioritize features. Second, the resulting arrays of top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on the optimum arrays of features selected by genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models on seven datasets. Furthermore, statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that performance of the ...

  3. A simulation environment for modeling and development of algorithms for ensembles of mobile microsystems

    Science.gov (United States)

    Fink, Jonathan; Collins, Tom; Kumar, Vijay; Mostofi, Yasamin; Baras, John; Sadler, Brian

    2009-05-01

    The vision of the Micro Autonomous Systems Technologies (MAST) program is to develop autonomous, multifunctional, collaborative ensembles of agile, mobile microsystems to enhance tactical situational awareness in urban and complex terrain for small-unit operations. Central to this vision is the ability to have multiple, heterogeneous autonomous assets function as a single cohesive unit that is adaptable, responsive to human commands and resilient to adversarial conditions. This paper represents an effort to develop a simulation environment for studying control, sensing, communication, perception, and planning methodologies and algorithms.

  4. Rate control system algorithm developed in state space for models with parameter uncertainties

    Directory of Open Access Journals (Sweden)

    Adilson Jesus Teixeira

    2011-09-01

    Research in weightlessness above the atmosphere needs a payload to carry the experiments. To achieve weightlessness, the payload uses a rate control system (RCS) in order to reduce the centripetal acceleration within the payload. The rate control system normally has actuators that supply a constant force when they are turned on. The control algorithm developed for this rate control system is based on the minimum-time problem method in state space, to overcome uncertainties in the parameters of the payload and actuator dynamics. This control algorithm uses the initial conditions of optimal trajectories to create intermediate points, or to adjust existing points, of a switching function. Associated with an inequality constraint, this forms a decision function for turning the actuators on or off. For linear time-invariant systems in state space, the decision function needs only to test the payload state variables instead of spending effort solving differential equations, and it is tuned in real time to the payload dynamics. It is shown through simulations, for several cases of parameter uncertainties, that the rate control system algorithm reduces the payload centripetal acceleration below the μg level and keeps it there with no limit cycle.

  5. Development and validation of an algorithm to recalibrate mental models and reduce diagnostic errors associated with catheter-associated bacteriuria

    Science.gov (United States)

    2013-01-01

    Background: Overtreatment of catheter-associated bacteriuria is a quality and safety problem, despite the availability of evidence-based guidelines. Little is known about how guidelines-based knowledge is integrated into clinicians' mental models for diagnosing catheter-associated urinary tract infection (CA-UTI). The objectives of this research were to better understand clinicians' mental models for CA-UTI, and to develop and validate an algorithm to improve diagnostic accuracy for CA-UTI. Methods: We conducted two phases of this research project. In phase one, 10 clinicians assessed and diagnosed four patient cases of catheter-associated bacteriuria (n = 40 total cases). We assessed the clinical cues used when diagnosing these cases to determine whether the mental models were IDSA guideline compliant. In phase two, we developed a diagnostic algorithm derived from the IDSA guidelines. IDSA guideline authors and non-expert clinicians evaluated the algorithm for content and face validity. In order to determine whether diagnostic accuracy improved using the algorithm, we had experts and non-experts diagnose 71 cases of bacteriuria. Results: Only 21 (53%) of the diagnoses made by clinicians without the algorithm were guidelines-concordant, with fair inter-rater reliability between clinicians (Fleiss' kappa = 0.35, 95% confidence intervals (CIs) = 0.21 and 0.50). Evidence suggests that clinicians' mental models are inappropriately constructed, in that clinicians endorsed guidelines-discordant cues as influential in their decision-making: pyuria, systemic leukocytosis, organism type and number, weakness, and elderly or frail patient. Using the algorithm, inter-rater reliability between the expert and each non-expert was substantial (Cohen's kappa = 0.72, 95% CIs = 0.52 and 0.93 between the expert and non-expert #1, and 0.80, 95% CIs = 0.61 and 0.99 between the expert and non-expert #2). Conclusions: Diagnostic errors occur when clinicians' mental models for catheter...

  6. Genetic algorithm guided population pharmacokinetic model development for simvastatin, concurrently or non-concurrently co-administered with amlodipine.

    Science.gov (United States)

    Chaturvedula, Ayyappa; Sale, Mark E; Lee, Howard

    2014-02-01

    An automated model development was performed for simvastatin, co-administered with amlodipine either concurrently or non-concurrently (i.e., 4 hours later), in 17 patients with coexisting hyperlipidemia and hypertension. The single objective hybrid genetic algorithm (SOHGA) was implemented in the NONMEM software by defining the search space for structural, statistical and covariate models. Candidate models obtained from the SOHGA runs were further assessed for biological plausibility and the precision of parameter estimates, followed by a traditional backward elimination process for model refinement. The final population pharmacokinetic model shows that the elimination rate constant for simvastatin acid, the active form produced by hydrolysis of its lactone prodrug (i.e., simvastatin), is only 44% of that in the non-concurrent group when amlodipine is administered concurrently. The application of SOHGA for automated model selection, combined with traditional model selection strategies, appears to save time in model development and can also generate new hypotheses that are biologically more plausible.

  7. Development of a voltage-dependent current noise algorithm for conductance-based stochastic modelling of auditory nerve fibres.

    Science.gov (United States)

    Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J

    2016-12-01

    This study presents the development of an alternative noise current term and a novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the node of Ranvier's voltage-dependent sodium channels, whose intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they are unable to inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining low computational cost and ease of implementation compared to other conductance-based and Markovian-based stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics, such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.

  8. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States)

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  9. Developing algorithms for predicting protein-protein interactions of homology modeled proteins.

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Shawn Bryan; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Roe, Diana C.

    2006-01-01

    The goal of this project was to examine the protein-protein docking problem, especially as it relates to homology-based structures, identify the key bottlenecks in current software tools, and evaluate and prototype new algorithms that may be developed to improve these bottlenecks. This report describes the current challenges in the protein-protein docking problem: correctly predicting the binding site for the protein-protein interaction and correctly placing the sidechains. Two different and complementary approaches are taken that can help with the protein-protein docking problem. The first approach is to predict interaction sites prior to docking, using bioinformatics studies of protein-protein interactions to predict these interaction sites. The second approach is to improve validation of predicted complexes after docking, using an improved scoring function for evaluating proposed docked poses that incorporates a solvation term. This scoring function demonstrates significant improvement over current state-of-the-art functions. Initial studies on both of these approaches are promising and argue for full development of these algorithms.

  10. Development of Prediction Model for Endocrine Disorders in the Korean Elderly Using CART Algorithm

    Directory of Open Access Journals (Sweden)

    Haewon Byeon

    2015-09-01

    The aim of the present cross-sectional study was to analyze the factors that affect endocrine disorders in the Korean elderly. The data were taken from the Seoul Welfare Panel Study 2010. The subjects were 2,111 people (879 males, 1,232 females) aged 60 and older living in the community. The dependent variable was defined as the prevalence of endocrine disorders. The explanatory variables were gender, level of education, household income, employment status, marital status, drinking, smoking, BMI, subjective health status, physical activity, experience of stress, and depression. In the Classification and Regression Tree (CART) algorithm analysis, subjective health status, BMI, education level, and household income were significantly associated with endocrine disorders in the Korean elderly. The most preferentially involved predictor was subjective health status. The development of guidelines and health education to prevent endocrine disorders, taking multiple risk factors into account, is required.
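
    A minimal CART analysis of this shape can be reproduced with scikit-learn's DecisionTreeClassifier, as sketched below; the variable names follow the record, but the synthetic data and the resulting tree are illustrative only.

```python
# CART sketch on synthetic data; the variables mirror the record, the data
# and the fitted tree do not reproduce the study's results.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 2111
X = np.column_stack([
    rng.integers(1, 6, n),        # subjective health status (1-5)
    rng.normal(24, 3, n),         # BMI
    rng.integers(0, 4, n),        # education level
    rng.normal(200, 80, n),       # household income
])
# Synthetic prevalence loosely tied to health status and BMI.
p = 0.05 + 0.04 * (5 - X[:, 0]) + 0.01 * np.maximum(X[:, 1] - 25, 0)
y = rng.random(n) < p

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, y)
print(export_text(tree, feature_names=[
    "subjective_health", "BMI", "education", "income"]))
```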

  11. High-resolution computational algorithms for simulating offshore wind turbines and farms: Model development and validation

    Energy Technology Data Exchange (ETDEWEB)

    Calderer, Antoni [Univ. of Minnesota, Minneapolis, MN (United States); Yang, Xiaolei [Stony Brook Univ., NY (United States); Angelidis, Dionysios [Univ. of Minnesota, Minneapolis, MN (United States); Feist, Chris [Univ. of Minnesota, Minneapolis, MN (United States); Guala, Michele [Univ. of Minnesota, Minneapolis, MN (United States); Ruehl, Kelley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guo, Xin [Univ. of Minnesota, Minneapolis, MN (United States); Boomsma, Aaron [Univ. of Minnesota, Minneapolis, MN (United States); Shen, Lian [Univ. of Minnesota, Minneapolis, MN (United States); Sotiropoulos, Fotis [Stony Brook Univ., NY (United States)

    2015-10-30

    The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.

  12. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    Science.gov (United States)

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the work transformation matrix (WTM) model are discussed, and a tearing approach as well as an inner-iteration method is used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find optimal decoupling schemes. First, the tearing approach and the inner-iteration method are analyzed for solving coupled task sets. Second, a hybrid iteration model combining these two techniques is set up. Third, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted for problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify the model's reasonability and effectiveness.
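
    A compact sketch of the ABC search loop (employed bees, fitness-weighted onlookers, and scouts that replace exhausted sources) is given below on a toy objective; the objective is a placeholder, not the paper's design-iteration model.

```python
# Compact artificial bee colony (ABC) sketch on a toy objective.
import random

def objective(x):
    return sum(v * v for v in x)              # toy function to minimize

DIM, N_FOOD, LIMIT, ITERS = 4, 10, 20, 200
foods = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N_FOOD)]
trials = [0] * N_FOOD                         # stagnation counters

def neighbour(i):
    """Perturb one dimension of source i relative to a random source k."""
    k = random.randrange(N_FOOD)
    d = random.randrange(DIM)
    x = foods[i][:]
    x[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
    return x

def try_improve(i):
    cand = neighbour(i)
    if objective(cand) < objective(foods[i]):
        foods[i], trials[i] = cand, 0         # greedy replacement
    else:
        trials[i] += 1

for _ in range(ITERS):
    for i in range(N_FOOD):                   # employed-bee phase
        try_improve(i)
    fits = [1 / (1 + objective(f)) for f in foods]
    for _ in range(N_FOOD):                   # onlooker phase: pick by fitness
        try_improve(random.choices(range(N_FOOD), weights=fits)[0])
    for i in range(N_FOOD):                   # scout phase: reset stale sources
        if trials[i] > LIMIT:
            foods[i] = [random.uniform(-5, 5) for _ in range(DIM)]
            trials[i] = 0

print("best solution:", min(foods, key=objective))
```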

  13. Technical note: Boundary layer height determination from lidar for improving air pollution episode modeling: development of new algorithm and evaluation

    Science.gov (United States)

    Yang, Ting; Wang, Zifa; Zhang, Wei; Gbaguidi, Alex; Sugimoto, Nobuo; Wang, Xiquan; Matsui, Ichiro; Sun, Yele

    2017-05-01

    Predicting air pollution events in the low atmosphere over megacities requires a thorough understanding of tropospheric dynamics and chemical processes, involving, notably, continuous and accurate determination of the boundary layer height (BLH). Through intensive observations over Beijing (China) and an exhaustive evaluation of existing algorithms for BLH determination, persistent critical limitations are noticed, in particular during polluted episodes. Basically, under weak thermal convection with high aerosol loading, none of the retrieval algorithms is able to fully capture the diurnal cycle of the BLH, due to insufficient vertical mixing of pollutants in the boundary layer associated with the impact of gravity waves on the tropospheric structure. Consequently, a new approach based on gravity wave theory (the cubic root gradient method: CRGM) is developed to overcome this weakness and accurately reproduce the fluctuations of the BLH under various atmospheric pollution conditions. A comprehensive evaluation of CRGM highlights its high performance in determining BLH from lidar. In comparison with the existing retrieval algorithms, CRGM substantially reduces computational uncertainties and errors in BLH determination (the correlation coefficient increases strongly, from 0.44 to 0.91, and the root mean square error decreases significantly, from 643 to 142 m). This newly developed technique is expected to contribute to improving the accuracy of air quality modeling and forecasting systems.
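
    The core of the CRGM idea can be sketched in a few lines: take the cube root of the backscatter profile to temper aerosol spikes, then take the height of the steepest negative gradient as the BLH estimate. The synthetic profile and the exact form of the criterion below are assumptions for illustration.

```python
# Cubic-root-gradient sketch for BLH retrieval from a lidar-like profile.
import numpy as np

z = np.arange(100, 3000, 15.0)                         # height bins (m)
# Synthetic aerosol profile: well-mixed layer to ~1200 m, then a sharp drop.
signal = 1.0 / (1.0 + np.exp((z - 1200) / 80)) + 0.05
signal += np.random.default_rng(1).normal(0, 0.01, z.size)  # instrument noise

cube_root = np.cbrt(signal)                            # tempers aerosol spikes
gradient = np.gradient(cube_root, z)
blh = z[np.argmin(gradient)]                           # steepest decrease
print(f"estimated BLH: {blh:.0f} m")
```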

  14. Drowsiness/alertness algorithm development and validation using synchronized EEG and cognitive performance to individualize a generalized model.

    Science.gov (United States)

    Johnson, Robin R; Popovic, Djordje P; Olmstead, Richard E; Stikic, Maja; Levendowski, Daniel J; Berka, Chris

    2011-05-01

    A great deal of research over the last century has focused on drowsiness/alertness detection, as fatigue-related physical and cognitive impairments pose a serious risk to public health and safety. Available drowsiness/alertness detection solutions are unsatisfactory for a number of reasons: (1) lack of generalizability, (2) failure to address individual variability in generalized models, and/or (3) lack of a portable, un-tethered application. The current study aimed to address these issues, and determine if an individualized electroencephalography (EEG) based algorithm could be defined to track performance decrements associated with sleep loss, as this is the first step in developing a field deployable drowsiness/alertness detection system. The results indicated that an EEG-based algorithm, individualized using a series of brief "identification" tasks, was able to effectively track performance decrements associated with sleep deprivation. Future development will address the need for the algorithm to predict performance decrements due to sleep loss, and provide field applicability.

  15. Developing Scoring Algorithms

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  16. Mathematical algorithm development and parametric studies with the GEOFRAC three-dimensional stochastic model of natural rock fracture systems

    Science.gov (United States)

    Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.

    2014-06-01

    This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.

  17. Watershed model calibration framework developed using an influence coefficient algorithm and a genetic algorithm and analysis of pollutant discharge characteristics and load reduction in a TMDL planning area.

    Science.gov (United States)

    Cho, Jae Heon; Lee, Jong Ho

    2015-11-01

    Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to calibrate the parameters manually, individually and repeatedly. Automatic calibration has relative merits of time efficiency and objectivity, but shortcomings in capturing an understanding of indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and a genetic algorithm (WMCIG) to automatically calibrate distributed models. The optimization problem of minimizing the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of normalized residuals across all iterations. The WMCIG was applied to the Gomakwoncheon watershed, located in an area subject to a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall during the summer. Pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality during the frequent heavy rainfall of the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction...
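
    The calibration objective itself is compact: the sum of squares of the normalized residuals between observed and simulated values, which the GA then minimizes. In the sketch below, run_model is a hypothetical stand-in for an HSPF simulation run, and the observed values are toy numbers.

```python
# Sketch of the calibration objective minimized by the GA: sum of squares of
# normalized residuals. run_model is a hypothetical stand-in for HSPF.
import numpy as np

observed = np.array([12.0, 30.0, 8.0, 55.0])    # e.g. flows / loads (toy values)

def run_model(params):
    # Placeholder for invoking HSPF with a candidate parameter set.
    return observed * (1 + 0.1 * np.sin(np.sum(params)))

def objective(params):
    simulated = run_model(params)
    residuals = (observed - simulated) / observed   # normalized residuals
    return np.sum(residuals ** 2)

print(objective(np.array([0.5, 1.2])))
```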

  18. Development of an Experimental Model for a Magnetorheological Damper Using Artificial Neural Networks (Levenberg-Marquardt Algorithm)

    Directory of Open Access Journals (Sweden)

    Ayush Raizada

    2016-01-01

    This paper is based on an experimental study of the design and control of vibrations in automotive vehicles. The objective of this paper is to develop a model for the highly nonlinear magnetorheological (MR) damper to maximize passenger comfort in an automotive vehicle. The behavior of the MR damper is studied under different loading conditions and current values in the system. The input and output parameters of the system are used as training data to develop a suitable model using artificial neural networks. To generate the training data, a test rig similar to a quarter-car model was fabricated, in which the MR damper is loaded by a mechanical shaker that excites it externally. With the help of the test rig, the input and output parameter data points are acquired by measuring the acceleration and force of the system at different points with an impedance head and accelerometers. The model is validated by measuring the error for the testing and validation data points. The output of the model is the optimum current to be supplied to the MR damper, using a controller, to increase passenger comfort by minimizing the amplitude of vibrations transmitted to the passenger. Besides cars, bikes, and other automotive vehicles, the model can also be retrained and applied to civil structures to make them earthquake resistant.
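
    As a hedged illustration of Levenberg-Marquardt training for a small network mapping (displacement, velocity, current) to damper force, the sketch below uses scipy's least_squares with method="lm" as the LM optimizer; the data, network size, and weight layout are assumptions, not the authors' setup.

```python
# Levenberg-Marquardt fit of a tiny one-hidden-layer network to synthetic
# damper data. Everything numeric here is an illustrative assumption.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 3))                  # displacement, velocity, current
y = np.tanh(2 * X[:, 1]) * (0.5 + X[:, 2]) + 0.05 * rng.normal(size=200)

H = 5                                              # hidden neurons
def unpack(w):
    W1 = w[: 3 * H].reshape(3, H); b1 = w[3 * H : 4 * H]
    W2 = w[4 * H : 5 * H];         b2 = w[5 * H]
    return W1, b1, W2, b2

def residuals(w):
    W1, b1, W2, b2 = unpack(w)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return pred - y

w0 = rng.normal(0, 0.5, 5 * H + 1)
fit = least_squares(residuals, w0, method="lm")    # Levenberg-Marquardt
print("RMS error:", np.sqrt(np.mean(fit.fun ** 2)))
```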

  19. Developing an algorithm for enhancement of a digital terrain model for a densely vegetated floodplain wetland

    Science.gov (United States)

    Mirosław-Świątek, Dorota; Szporak-Wasilewska, Sylwia; Michałowski, Robert; Kardel, Ignacy; Grygoruk, Mateusz

    2016-07-01

    An airborne laser scanning survey was conducted with a scanning density of 4 points/m2 to accurately map the surface of a unique central European wetland complex: the lower Biebrza River valley (Poland). A method to correct the degrading effect of vegetation (the so-called "vegetation effect") on digital terrain models (DTMs) was applied utilizing remotely sensed images, real-time kinematic global positioning system elevation measurements, topographical surveys, and vegetation height measurements. Geographic object-based image analysis (GEOBIA) was performed to map the vegetation within the study area, and the resulting classes were used to derive the vegetation height information applied in the DTM correction. The final DTM was compared with a model in which the additional correction of the "vegetation effect" was neglected. The comparison between corrected and uncorrected DTMs demonstrated the importance of accurate topography by simply presenting the discrepancies that arise in flood features when various DTM products are used. An overall map classification accuracy of 80% was attained with GEOBIA. Correction factors developed for the various vegetation types reached values from 0.08 up to 0.92 m and were dependent on the vegetation type.
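
    The correction step itself reduces to a per-class lookup and subtraction, as sketched below; the class map and correction factors are toy values, though the record reports factors between 0.08 and 0.92 m depending on vegetation type.

```python
# Per-class vegetation-effect correction of a raw DTM; toy values throughout.
import numpy as np

raw_dtm = np.full((4, 4), 100.0)                 # metres a.s.l. (toy grid)
veg_class = np.array([[0, 0, 1, 1],
                      [0, 2, 2, 1],
                      [3, 3, 2, 1],
                      [3, 3, 0, 0]])             # e.g. from GEOBIA classification
correction = np.array([0.0, 0.08, 0.45, 0.92])   # metres per class (assumed)

corrected_dtm = raw_dtm - correction[veg_class]  # class-wise lookup & subtract
print(corrected_dtm)
```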

  1. Multiagent scheduling models and algorithms

    CERN Document Server

    Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur

    2014-01-01

    This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.

  2. Evaluation the Quality of Cloud Dataset from the Goddard Multi-Scale Modeling Framework for Supporting GPM Algorithm Development

    Science.gov (United States)

    Chern, J.; Tao, W.; Mohr, K. I.; Matsui, T.; Lang, S. E.

    2013-12-01

    With the recent rapid advancement in computational technology, the multi-scale modeling framework (MMF), which replaces conventional cloud parameterizations with a cloud-resolving model (CRM) in each grid column of a GCM, has been developed and improved at NASA Goddard. The Goddard MMF is based on the coupling of the Goddard Cumulus Ensemble (GCE), a CRM, and the Goddard GEOS global model. In recent years, new and improved microphysical schemes have been developed and implemented in the GCE based on observations from field campaigns, and these schemes have been incorporated into the MMF. The MMF has global coverage and can provide detailed cloud properties, such as cloud amount, hydrometeor types, and vertical profiles of water content, at the high spatial and temporal resolution of a cloud-resolving model. When coupled with the Goddard Satellite Data Simulation Unit (GSDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators, the MMF system can provide radiances and backscattering similar to what satellites directly observe. In this study, a one-year (2007) MMF simulation has been performed with the new 4-ice (cloud ice, snow, graupel and hail) microphysical scheme. The GEOS global model is run at 2° x 2.5° resolution, and each embedded two-dimensional GCE has 64 columns at 4 km horizontal resolution. The large-scale forcing from the GCM is nudged to the EC-Interim analysis to reduce the influence of MMF model biases on the cloud-resolving model results. The simulation provides more than 300 million vertical profiles of cloud data across different seasons, geographic locations, and climate regimes. This cloud dataset is used to supplement observations over data-sparse areas in support of GPM algorithm development. The model-simulated means and variability of surface rainfall and snowfall, cloud and precipitation types, cloud properties, radiances and backscattering are evaluated against satellite observations. We will assess the strengths...

  3. Development of Solution Algorithm and Sensitivity Analysis for Random Fuzzy Portfolio Selection Model

    Science.gov (United States)

    Hasuike, Takashi; Katagiri, Hideki

    2010-10-01

    This paper focuses on the formulation of a portfolio selection problem considering an investor's subjectivity, and on sensitivity analysis for changes in that subjectivity. Since the proposed problem is formulated as a random fuzzy programming problem, owing to both randomness and subjectivity represented by fuzzy numbers, it is not well defined. Therefore, by introducing the Sharpe ratio, one of the important performance measures of portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using sensitivity analysis for the fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.

  4. Development of a Simple Remote Sensing EvapoTranspiration model (Sim-ReSET): Algorithm and model test

    Science.gov (United States)

    Sun, Zhigang; Wang, Qinxue; Matsushita, Bunkei; Fukushima, Takehiko; Ouyang, Zhu; Watanabe, Masataka

    2009-10-01

    Remote sensing (RS) has been considered the most promising tool for evapotranspiration (ET) estimation at local, regional and global scales. Many studies have estimated ET using RS data; however, most of them are based partially on ground observations. In this study, we developed a new dual-source Simple Remote Sensing EvapoTranspiration model (Sim-ReSET) based only on RS data. One merit of this model is that the calculation of aerodynamic resistance can be avoided by means of a reference dry bare soil and the assumption that wind speed at the upper boundary of the atmospheric surface layer is homogeneous, while aerodynamic characteristics are still considered via canopy height. The other merit is that all inputs (net radiation, soil heat flux, canopy height, and variables related to land surface temperature) can potentially be obtained from remote sensing data, which allows a regular RS-driven ET product to be obtained. For the purposes of sensitivity analysis and performance evaluation of the Sim-ReSET model without the effect of potential uncertainties and errors from remote sensing data, the model was tested using only intensive ground observations at the Yucheng ecological station in the North China Plain from 2006 to 2008. Results show that the model performs well for instantaneous ET estimation, with a mean absolute difference (MAD) of 34.27 W/m2 and a root mean square error (RMSE) of 41.84 W/m2 under neutral or near-neutral atmospheric conditions. On 12 cloudless days, the MAD of daily ET accumulated from instantaneous estimates was 0.26 mm/day, and the RMSE was 0.30 mm/day.

  5. Carbon export algorithm advancements in models

    Science.gov (United States)

    Çağlar Yumruktepe, Veli; Salihoğlu, Barış

    2015-04-01

    The rate at which anthropogenic CO2 is absorbed by the oceans remains a critical question under investigation by climate researchers. Construction of a complete carbon budget requires a better understanding of air-sea exchanges and of the processes controlling the vertical and horizontal transport of carbon in the ocean, particularly the biological carbon pump. Improved parameterization of carbon sequestration within ecosystem models is vital to better understand and predict changes in the global carbon cycle. Due to the complexity of the processes controlling particle aggregation, sinking, and decomposition, existing ecosystem models necessarily parameterize carbon sequestration using simple algorithms, and the development of improved algorithms describing carbon export and sequestration, suitable for inclusion in numerical models, is ongoing work. Unique algorithms used in state-of-the-art ecosystem models, together with new experimental results obtained from mesocosm experiments and open-ocean observations, have been inserted into a common 1D pelagic ecosystem model for testing purposes. The model was implemented at time-series stations in the North Atlantic (BATS, PAP, and ESTOC) and evaluated against datasets of carbon export. The algorithms targeted plankton functional types (PFTs), grazing and vertical movement of zooplankton, and the remineralization, aggregation, and ballasting dynamics of organic matter. Ultimately, the intention is to feed the improved algorithms to the 3D modelling community for inclusion in coupled numerical models.

  6. Algorithmic Issues in Modeling Motion

    DEFF Research Database (Denmark)

    Agarwal, P. K; Guibas, L. J; Edelsbrunner, H.

    2003-01-01

    This article is a survey of research areas in which motion plays a pivotal role. The aim of the article is to review current approaches to modeling motion together with related data structures and algorithms, and to summarize the challenges that lie ahead in producing a more unified theory...

  7. Graphical model construction based on evolutionary algorithms

    Institute of Scientific and Technical Information of China (English)

    Youlong YANG; Yan WU; Sanyang LIU

    2006-01-01

    Using Bayesian networks to model promising solutions from the current population of an evolutionary algorithm can ensure an efficient and intelligent search for the optimum. However, constructing a Bayesian network that fits a given dataset is an NP-hard problem that also consumes massive computational resources. This paper develops a methodology for constructing a graphical model based on the Bayesian Dirichlet metric. Our approach is derived from a set of propositions and theorems obtained by studying the local metric relationship between networks and the dataset they match. The paper presents an algorithm that constructs a tree model from a set of potential solutions using this approach. The method is important not only for evolutionary algorithms based on graphical models, but also for machine learning and data mining. The experimental results show that the exact theoretical results and the approximations match very well.

  8. Models and Algorithm for Stochastic Network Designs

    Institute of Scientific and Technical Information of China (English)

    Anthony Chen; Juyoung Kim; Seungjae Lee; Jaisung Choi

    2009-01-01

    The network design problem (NDP) is one of the most difficult and challenging problems in transportation. Traditional NDP models are often posed as deterministic bilevel programs assuming that all relevant inputs are known with certainty. This paper presents three stochastic models for designing transportation networks under demand uncertainty. These three stochastic NDP models were formulated as the expected-value model, the chance-constrained model, and the dependent-chance model in a bilevel programming framework, using different criteria to hedge against demand uncertainty. Solution procedures based on the traffic assignment algorithm, genetic algorithm, and Monte Carlo simulations were developed to solve these stochastic NDP models. The nonlinear and nonconvex nature of the bilevel program was handled by the genetic algorithm and the traffic assignment algorithm, whereas the stochastic nature was addressed through simulations. Numerical experiments were conducted to evaluate the applicability of the stochastic NDP models and the solution procedures. Results from the three experiments show that the solution procedures are quite robust to different parameter settings.

  9. Direct Model Checking Matrix Algorithm

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hong Tao; Hans Kleine Büning; Li-Fu Wang

    2006-01-01

    During the last decade, Model Checking has proven its efficacy and power in circuit design, network protocol analysis, and bug hunting. Recent research on automatic verification has shown that no single model-checking technique has the edge over all others in all application areas. It is therefore very difficult to determine which technique is the most suitable for a given model, and it is sensible to apply different techniques to the same model. However, this is a very tedious and time-consuming task, for each algorithm uses its own description language. Applying model checking in software design and verification has also proved very difficult. Software architectures (SA) are engineering artifacts that provide high-level and abstract descriptions of complex software systems. In this paper, a Direct Model Checking (DMC) method based on Kripke structures and a matrix algorithm is provided. Combined and integrated with domain-specific software architecture description languages (ADLs), DMC can be used for computing consistency and other critical properties.
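
    As a rough illustration of checking properties of a Kripke structure with matrix operations (a generic sketch, not the paper's DMC method), the following encodes a three-state transition relation as an integer matrix and evaluates two reachability-style properties:

      import numpy as np

      # One-step transition relation of a tiny Kripke structure:
      # T[i, j] = 1 iff state j is a successor of state i.
      T = np.array([[0, 1, 0],
                    [0, 0, 1],
                    [1, 0, 0]])

      p = np.array([0, 0, 1])      # states labeled with atomic proposition p

      ex_p = (T @ p) > 0           # EX p: some successor satisfies p

      ef_p = p > 0                 # EF p: least fixed point of p OR EX(...)
      while True:
          new = ef_p | ((T @ ef_p) > 0)
          if np.array_equal(new, ef_p):
              break
          ef_p = new
      print(ex_p, ef_p)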

  10. Complex fluids modeling and algorithms

    CERN Document Server

    Saramito, Pierre

    2016-01-01

    This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.

  11. Load-balancing algorithms for climate models

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.T.; Toonen, B.R.

    1994-06-01

    Implementations of climate models on scalable parallel computer systems can suffer from load imbalances due to temporal and spatial variations in the amount of computation required for physical parameterizations such as solar radiation and convective adjustment. We have developed specialized techniques for correcting such imbalances. These techniques are incorporated in a general-purpose, programmable load-balancing library that allows the mapping of computation to processors to be specified as a series of maps generated by a programmer-supplied load-balancing module. The communication required to move from one map to another is performed automatically by the library, without programmer intervention. In this paper, we describe the load-balancing problem and the techniques that we have developed to solve it. We also describe specific load-balancing algorithms that we have developed for PCCM2, a scalable parallel implementation of the Community Climate Model, and present experimental results that demonstrate the effectiveness of these algorithms on parallel computers.

  12. Load-balancing algorithms for climate models

    Science.gov (United States)

    Foster, I. T.; Toonen, B. R.

    Implementations of climate models on scalable parallel computer systems can suffer from load imbalances due to temporal and spatial variations in the amount of computation required for physical parameterizations such as solar radiation and convective adjustment. We have developed specialized techniques for correcting such imbalances. These techniques are incorporated in a general-purpose, programmable load-balancing library that allows the mapping of computation to processors to be specified as a series of maps generated by a programmer-supplied load-balancing module. The communication required to move from one map to another is performed automatically by the library, without programmer intervention. In this paper, we describe the load-balancing problem and the techniques that we have developed to solve it. We also describe specific load-balancing algorithms that we have developed for PCCM2, a scalable parallel implementation of the community climate model, and present experimental results that demonstrate the effectiveness of these algorithms on parallel computers.

  13. Optimization in engineering models and algorithms

    CERN Document Server

    Sioshansi, Ramteen

    2017-01-01

    This textbook covers the fundamentals of optimization, including linear, mixed-integer linear, nonlinear, and dynamic optimization techniques, with a clear engineering focus. It carefully describes classical optimization models and algorithms using an engineering problem-solving perspective, and emphasizes modeling issues using many real-world examples related to a variety of application areas. Providing an appropriate blend of practical applications and optimization theory makes the text useful to both practitioners and students, and gives the reader a good sense of the power of optimization and the potential difficulties in applying optimization to modeling real-world systems. The book is intended for undergraduate and graduate-level teaching in industrial engineering and other engineering specialties. It is also of use to industry practitioners, due to the inclusion of real-world applications, opening the door to advanced courses on both modeling and algorithm development within the industrial engineering ...

  14. Crowd Behavior Algorithm Development for COMBAT XXI

    Science.gov (United States)

    2017-05-30

    TRAC-M-TR-17-027, 30 May 2017. Crowd Behavior Algorithm Development for COMBATXXI, by LTC Casey Connors, Dr. Steven Hall, Dr. Imre Balogh, and Terry Norbraten, TRADOC Analysis Center, 700 Dyer Road, Monterey, California 93943-0692. The report is organized into literature review, analysis, results, and ... Its aim is to reduce scenario development time for COMBATXXI scenario integrators.

  15. Development of rubber mixing process mathematical model and synthesis of control correction algorithm by process temperature mode using an artificial neural network

    Directory of Open Access Journals (Sweden)

    V. S. Kudryashov

    2016-01-01

    Full Text Available The article is devoted to the development of a correction control algorithm for the temperature mode of a periodic rubber mixing process at JSC "Voronezh tire plant". The algorithm is designed to run in the main controller of a rubber mixing section, a Siemens S7 CPU319F-3 PN/DP, which generates setpoints for the local temperature controllers (HESCH HE086 and Jumo dTRON304) operating the tempering stations. To compose the algorithm, a systematic analysis of the rubber mixing process as an object of control was performed, and a mathematical model of the process was developed based on heat balance equations describing heat transfer through the walls of the technological devices, the change of coolant temperature, and the temperature of the rubber compound during mixing until discharge from the mixer chamber. Owing to the complexity and nonlinearity of the control object (rubber mixers), and building on existing methods and wide industrial experience in controlling this device, the correction algorithm is implemented on the basis of a single-layer artificial neural network; it corrects the setpoints of the local controllers for the cooling water temperature and the air temperature in the workshop, which may vary considerably depending on the season, during prolonged operation of the equipment, or during its downtime. The tempering stations are controlled by changing the flow of cold water from the cooler and by on/off control of the heating elements. Analysis of the model experiments and of practical tests of the main controller, programmed in the STEP 7 environment at the enterprise, showed a decrease in the mixing time for different types of rubber by reducing the control error of the heat transfer process.

  16. Evolutionary algorithms in genetic regulatory networks model

    CERN Document Server

    Raza, Khalid

    2012-01-01

    Genetic Regulatory Networks (GRNs) play a vital role in the understanding of complex biological processes. Modeling GRNs is significantly important in order to reveal fundamental cellular processes, examine gene functions, and understand their complex relationships. Understanding the interactions between genes also supports the development of better methods for drug discovery and disease diagnosis, since many diseases are characterized by the abnormal behaviour of genes. In this paper we review various evolutionary-algorithm-based approaches for modeling GRNs and discuss various opportunities and challenges.

  17. A Multiple Model Approach to Modeling Based on LPF Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Input-output data fitting methods are often used for modeling nonlinear systems of unknown structure. Based on model-on-demand tactics, a multiple-model approach to modeling nonlinear systems is presented. The basic idea is to find, from vast historical system input-output data sets, the data sets that match the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. As the working point changes, multiple local models are built, which realizes exact modeling of the global system. Comparison with other methods in simulation shows good performance: the estimation is simple, effective, and reliable.

  18. Fast Algorithms for Model-Based Diagnosis

    Science.gov (United States)

    Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan

    2005-01-01

    Two new methods for the automated diagnosis of complex engineering systems involve novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of the system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behavior of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components whose abnormalities could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical, because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail amounts of computation that grow exponentially with the number of components of the system.
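
    To make the notion of a minimal diagnosis concrete, here is a brute-force sketch of the exhaustive search that the improved algorithms are designed to avoid; the consistency check is a caller-supplied placeholder, not part of the methods described above.

      from itertools import combinations

      def minimal_diagnoses(components, consistent):
          # Enumerate candidate fault sets by increasing size; keep a set if
          # assuming exactly its members faulty restores consistency between
          # the system description and the observations, and no smaller
          # diagnosis is contained in it.
          found = []
          for size in range(len(components) + 1):
              for cand in map(set, combinations(components, size)):
                  if any(d <= cand for d in found):
                      continue  # a subset already explains the symptoms
                  if consistent(cand):
                      found.append(cand)
          return found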

  19. Modelling and development of estimation and control algorithms: application to a bioprocess; Modelisation et elaboration d'algorithmes d'estimation et de commande: application a un bioprocede

    Energy Technology Data Exchange (ETDEWEB)

    Maher, M.

    1995-02-03

    Modelling, estimation, and control of an alcoholic fermentation process are the purpose of this thesis. A simple mathematical model of the fermentation process is established using experimental results obtained on the plant. This nonlinear model is used for numerical simulation and for the analysis and synthesis of estimation and control algorithms. The problem of nonlinear state and parameter estimation of bioprocesses is studied, and two estimation techniques are developed and proposed to bypass the lack of sensors for certain physical variables; their performances are studied by numerical simulation. One of these estimators is validated on experimental results of batch and continuous fermentations. An adaptive control law is proposed for the regulation and tracking of the substrate concentration of the plant by acting on the dilution rate. It is a nonlinear control strategy coupled with the previously validated estimator. The performance of this control law is evaluated by a real application to a continuous-flow fermentation process. (author) refs.

  20. Developing Scoring Algorithms (Earlier Methods)

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  1. Modeling and Engineering Algorithms for Mobile Data

    DEFF Research Database (Denmark)

    Blunck, Henrik; Hinrichs, Klaus; Sondern, Joëlle;

    2006-01-01

    In this paper, we present an object-oriented approach to modeling mobile data and algorithms operating on such data. Our model is general enough to capture any kind of continuous motion while at the same time allowing for encompassing algorithms optimized for specific types of motion. Such motion...

  2. Final Technical Report: High-resolution computational algorithms for simulating offshore wind turbines and farms: Model development and validation

    Energy Technology Data Exchange (ETDEWEB)

    Calderer, Antoni [Univ. of Minnesota, Minneapolis, MN (United States); Yang, Xiaolei [Stony Brook Univ., NY (United States); Feist, Christ [Univ. of Minnesota, Minneapolis, MN (United States); Guala, Michele [Univ. of Minnesota, Minneapolis, MN (United States); Angelidis, Dionysios [Univ. of Minnesota, Minneapolis, MN (United States); Ruehl, Kelley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guo, Xin [Univ. of Minnesota, Minneapolis, MN (United States); Boomsma, Aaron [Univ. of Minnesota, Minneapolis, MN (United States); Shen, Lian [Univ. of Minnesota, Minneapolis, MN (United States); Sotiropoulos, Fotis [Stony Brook Univ., NY (United States)

    2015-10-30

    The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on the aerodynamic performance and the structural stability and reliability of offshore wind turbines and farms. Laboratory-scale experiments have been carried out to derive data sets for validating the computational models. Subtask 1.1 Turbine Scale Model: A novel computational framework for simulating the coupled interaction of complex floating structures with large-scale ocean waves and atmospheric turbulent winds has been developed. This framework is based on a domain decomposition approach coupling a large-scale far-field domain, where realistic wind and wave conditions representative of offshore environments are developed, with a near-field domain, where wind-wave-body interactions can be investigated. The method applied in the near-field domain is based on a fluid-structure interaction (FSI) approach combining the curvilinear immersed boundary (CURVIB) method with a two-phase flow level set formulation, and is capable of solving free-surface flows interacting non-linearly with floating wind turbines. For coupling the far-field and near-field domains, a wave generation method for incorporating complex wave fields into Navier-Stokes solvers has been proposed. The wave generation method was validated for a variety of wave cases, including a broadband spectrum. The computational framework has been further validated for wave-body interactions by replicating the experiment of a floating wind turbine model subject to different sinusoidal wave forces (task 3). Finally, the full capabilities of the framework have been demonstrated by carrying out large eddy simulation (LES) of a floating wind turbine interacting with realistic ocean wind and wave conditions. Subtask 1.2 Farm Scale Model: Several actuator

  3. Methodology and basic algorithms of the Livermore Economic Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    Bell, R.B.

    1981-03-17

    The methodology and the basic pricing algorithms used in the Livermore Economic Modeling System (EMS) are described. The report explains the derivations of the EMS equations in detail, but it can also serve as a general introduction to the modeling system: a brief yet comprehensive explanation of what EMS is and does, and how it does it, is presented. The second part examines the basic pricing algorithms currently implemented in EMS. Each algorithm's function is analyzed, and a detailed derivation of the actual mathematical expressions used to implement the algorithm is presented. EMS is an evolving modeling system; improvements to existing algorithms are constantly under development and new submodels are being introduced. A snapshot of the standard version of EMS is provided, and areas currently under study and development are considered briefly.

  4. Nonparametric temporal downscaling with event-based population generating algorithm for RCM daily precipitation to hourly: Model development and performance evaluation

    Science.gov (United States)

    Lee, Taesam; Park, Taewoong

    2017-04-01

    It is critical to downscale temporally coarse GCM or RCM outputs (e.g., monthly or daily) to finer time scales, such as sub-daily or hourly. Recently, a temporal downscaling model employing a nonparametric framework (NTD) with k-nearest resampling and a genetic algorithm was developed to preserve key statistics as well as the diurnal cycle. However, this model's use in estimating precipitation for design storms or floods can be limited, because the key statistics of annual maximum precipitation (AMP), especially for longer hourly durations, show a systematic bias that cannot be removed, owing to the discontinuity of multiday consecutive precipitation events in the downscaling procedure. In the current study, we develop an approach that downscales consecutive daily precipitation at once, focusing on reproducing AMP totals for different durations instead of downscaling day by day. The proposed model was verified with precipitation datasets for 60 stations across South Korea over the period 1979-2005. Additionally, two validation studies were performed, with recent datasets covering 2006-2014 and with nearest-neighbor stations. The verification and the two validation tests conclude that the population-based NTD (PNTD) model proposed in the current study is superior to the existing NTD model in preserving the key statistics of the observed AMP series, and is suitable for downscaling future climate scenarios.
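
    A minimal sketch of the k-nearest-neighbor resampling idea underlying NTD-style downscaling, assuming a hypothetical data layout (historical daily totals paired with their 24-hour patterns); this is not the authors' PNTD code:

      import numpy as np

      def knn_resample_day(daily_total, history_daily, history_hourly, k=5, rng=None):
          # Pick one of the k historical days whose totals are closest to the
          # target, weighting neighbors inversely by rank (a common choice),
          # then rescale its 24-hour pattern to match the target daily total.
          rng = np.random.default_rng() if rng is None else rng
          candidates = np.argsort(np.abs(history_daily - daily_total))[:k]
          inv_rank = 1.0 / np.arange(1, k + 1)
          chosen = rng.choice(candidates, p=inv_rank / inv_rank.sum())
          pattern = history_hourly[chosen]
          total = pattern.sum()
          return pattern * (daily_total / total) if total > 0 else pattern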

  5. Models and algorithms for biomolecules and molecular networks

    CERN Document Server

    DasGupta, Bhaskar

    2016-01-01

    By providing expositions to modeling principles, theories, computational solutions, and open problems, this reference presents a full scope on relevant biological phenomena, modeling frameworks, technical challenges, and algorithms. * Up-to-date developments of structures of biomolecules, systems biology, advanced models, and algorithms * Sampling techniques for estimating evolutionary rates and generating molecular structures * Accurate computation of probability landscape of stochastic networks, solving discrete chemical master equations * End-of-chapter exercises

  6. LCD motion blur: modeling, analysis, and algorithm.

    Science.gov (United States)

    Chan, Stanley H; Nguyen, Truong Q

    2011-08-01

    Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast-moving objects in a scene are often perceived as blurred. This effect is known as LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitations of the human-eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human-eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l1-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and the Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms.
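
    The flavor of the optimization step can be sketched as plain subgradient descent on the l1-regularized least-squares objective; the paper's subgradient projection method adds a projection step, omitted here, and H is a generic blur matrix rather than the paper's operator:

      import numpy as np

      def l1_deblur(H, y, lam=0.1, step=1e-3, iters=500):
          # Minimize 0.5*||H x - y||^2 + lam*||x||_1 by subgradient descent.
          x = np.zeros(H.shape[1])
          for _ in range(iters):
              grad = H.T @ (H @ x - y) + lam * np.sign(x)  # a subgradient
              x -= step * grad
          return x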

  7. A Developed ESPRIT Algorithm for DOA Estimation

    Science.gov (United States)

    Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed

    2015-05-01

    A novel algorithm for estimating the direction of arrival (DOA) of a target has been developed, with the aim of increasing estimation accuracy and decreasing calculation costs. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity effect. The efficacy of the proposed algorithm is verified using Monte Carlo simulation, and the DOA estimation accuracy is evaluated against the closed-form Cramér-Rao bound (CRB), which reveals that the proposed algorithm's estimates are better than those of the normal ESPRIT methods, enhancing the estimator's performance.
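
    For orientation, here is a minimal sketch of the classical (single-resolution) ESPRIT estimator for a uniform linear array; the TS-ESPRIT multiresolution refinements described above are not reproduced:

      import numpy as np

      def esprit_doa(X, n_sources, d=0.5):
          # X: (n_sensors, n_snapshots) complex snapshots; d: spacing in wavelengths.
          R = X @ X.conj().T / X.shape[1]          # sample covariance
          eigval, eigvec = np.linalg.eigh(R)
          Es = eigvec[:, -n_sources:]              # signal subspace
          # Rotational invariance between the two overlapping subarrays.
          phi = np.linalg.pinv(Es[:-1]) @ Es[1:]
          omega = np.angle(np.linalg.eigvals(phi))
          return np.degrees(np.arcsin(omega / (2 * np.pi * d)))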

  8. A Topological Model for Parallel Algorithm Design

    Science.gov (United States)

    1991-09-01

    A Topological Model for Parallel Algorithm Design. Dissertation, AFIT/DS/ENG/91-02, by Jeffrey A. Simmers, Captain, USAF, Air Force Institute of Technology, September 1991. Approved for public release; distribution unlimited.

  9. Development and Evaluation of a New Air Exchange Rate Algorithm for the Stochastic Human Exposure and Dose Simulation Model

    Science.gov (United States)

    between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure and Dose Simulation (SHEDS) model is a population exposure model that uses a pro...

  10. An Analysis of Audio Features to Develop a Human Activity Recognition Model Using Genetic Algorithms, Random Forests, and Neural Networks

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2016-01-01

    Full Text Available This work presents a human activity recognition (HAR) model based on audio features. The use of sound as an information source for HAR models represents a challenge, because sound wave analyses generate very large amounts of data. However, feature selection techniques may reduce the amount of data required to represent an audio signal sample. Among the audio features analyzed were Mel-frequency cepstral coefficients (MFCC). Although MFCC are commonly used in voice and instrument recognition, their utility within HAR models had yet to be confirmed, and this work validates their usefulness. Additionally, statistical features were extracted from the audio samples to generate the proposed HAR model. The amount of information needed to build a HAR model directly impacts its accuracy, a problem that is also tackled in the present work. Our results indicate that the proposed HAR model recognizes a human activity with an accuracy of 85%, so only minimal computational costs are needed, allowing portable devices to identify human activities using audio as an information source.
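
    A toy sketch of the pipeline's shape, with simple statistical descriptors standing in for the paper's MFCC and statistical features, random placeholder data instead of recorded audio, and a random forest as in the article's title:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def statistical_features(signal):
          # A few simple statistical descriptors of an audio clip.
          s = np.asarray(signal, dtype=float)
          return np.array([s.mean(), s.std(), s.min(), s.max(),
                           np.mean(np.abs(np.diff(s)))])

      rng = np.random.default_rng(0)
      clips = [rng.normal(size=8000) for _ in range(20)]   # placeholder "audio"
      labels = rng.integers(0, 2, size=20)                 # placeholder activities

      X = np.vstack([statistical_features(c) for c in clips])
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)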

  11. An Automatic Registration Algorithm for 3D Maxillofacial Model

    Science.gov (United States)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition, and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm can achieve a good alignment between a partial and a whole maxillofacial model in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible-skull models demonstrate the efficiency and robustness of our algorithm.
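
    The final refinement stage, ICP, is standard enough to sketch; the following point-to-point version with an SVD-based rigid-transform solve is a generic textbook formulation, not the authors' implementation:

      import numpy as np

      def icp(source, target, iters=50):
          # source: (N, 3), target: (M, 3). Each iteration matches every source
          # point to its nearest target point, then solves the best rigid
          # transform (Kabsch/SVD) and applies it.
          src = source.copy()
          for _ in range(iters):
              d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
              matched = target[np.argmin(d, axis=1)]
              mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
              H = (src - mu_s).T @ (matched - mu_t)
              U, _, Vt = np.linalg.svd(H)
              R = Vt.T @ U.T
              if np.linalg.det(R) < 0:      # guard against reflections
                  Vt[-1] *= -1
                  R = Vt.T @ U.T
              t = mu_t - R @ mu_s
              src = src @ R.T + t
          return src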

  12. Scheduling Algorithm for Complex Product Development

    Institute of Scientific and Technical Information of China (English)

    LIUMin; ZHANGLong; WUCheng

    2004-01-01

    This paper describes the complex product development project scheduling problem (CPDPSP), which involves a great number of activities and complicated resource, precedence, and calendar constraints. By converting the precedence constraint relations, the CPDPSP is simplified. Then, following the predictive control principle, we propose a new scheduling algorithm based on prediction (the BoP procedure). To capture the problem characteristics arising from the resource status and precedence constraints at the scheduling time, a sub-project is constructed on the basis of a sub-AoN (Activity-on-Node) graph of the project. We then use the modified GDH procedure to solve the sub-project scheduling problem and to obtain the maximum feasible active subset, which determines the activity group that satisfies the resource, precedence, and calendar constraints and has the highest scheduling priority at the scheduling time. Additionally, we carry out a great number of numerical computations and compare the performance of the BoP-procedure algorithm with those of other scheduling algorithms. The computational results show that the BoP-procedure algorithm is well suited to the CPDPSP. Finally, we briefly discuss future research work on the CPDPSP.

  13. Adaptive Genetic Algorithm Model for Intrusion Detection

    Directory of Open Access Journals (Sweden)

    K. S. Anil Kumar

    2012-09-01

    Full Text Available Intrusion detection systems are intelligent systems designed to identify and prevent the misuse of computer networks and systems. Various approaches to intrusion detection are currently being used, but they are relatively ineffective. Thus, emerging network security systems need to be part of the life system, and this is possible only by embedding knowledge into the network. The Adaptive Genetic Algorithm Model - IDS comprises K-Means clustering, a genetic algorithm, and neural network techniques. The technique is tested using a multitude of background knowledge sets from the DARPA network traffic datasets.

  14. Development and Testing of Data Mining Algorithms for Earth Observation

    Science.gov (United States)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classifying objects, events, or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high-dimensional" problems require finding a minimal set of variables (called the Markov Blanket) sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented, and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm, developed and implemented in TETRAD IV for time series, elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms for the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer-duration climate measurements of temperature teleconnections.

  15. Model Checking Algorithms for CTMDPs

    DEFF Research Database (Denmark)

    Buchholz, Peter; Hahn, Ernst Moritz; Hermanns, Holger

    2011-01-01

    Continuous Stochastic Logic (CSL) can be interpreted over continuous-time Markov decision processes (CTMDPs) to specify quantitative properties of stochastic systems that allow some external control. Model checking CSL formulae over CTMDPs then requires the computation of optimal control strategies...

  17. Development of a data assimilation algorithm

    DEFF Research Database (Denmark)

    Thomsen, Per Grove; Zlatev, Zahari

    2008-01-01

    ... the data assimilation technique is applied. Therefore, it is important to study the interplay between the three components of the variational data assimilation techniques, as well as to apply powerful parallel computers in the computations. Some results obtained in the search for a good combination of numerical methods, splitting techniques, and optimization algorithms will be reported. Parallel techniques described in [V.N. Alexandrov, W. Owczarz, P.G. Thomsen, Z. Zlatev, Parallel runs of a large air pollution model on a grid of Sun computers, Mathematics and Computers in Simulation, 65 (2004) 557-577] are used in the runs. Modules from a particular large-scale mathematical model, the Unified Danish Eulerian Model (UNI-DEM), are used in the experiments. The mathematical background of UNI-DEM is discussed in [V.N. Alexandrov, W. Owczarz, P.G. Thomsen, Z. Zlatev, Parallel runs of a large air pollution model on a grid of Sun ...

  18. Fuzzy audit risk modeling algorithm

    Directory of Open Access Journals (Sweden)

    Zohreh Hajihaa

    2011-07-01

    Full Text Available Fuzzy logic has created suitable mathematics for making decisions in uncertain environments, including professional judgments. One such situation is the assessment of auditee risks. During recent years, risk-based audit (RBA) has been regarded as one of the main tools to fight fraud. The main issue in RBA is to determine the overall audit risk an auditor accepts, which impacts the efficiency of an audit. The primary objective of this research is to redesign the audit risk model (ARM) proposed by auditing standards. The proposed model uses fuzzy inference systems (FIS) based on the judgments of audit experts. The implementation of the proposed fuzzy technique uses triangular fuzzy numbers to express the inputs, and the Mamdani method along with the center of gravity is incorporated for defuzzification. The proposed model uses three FISs, for audit, inherent, and control risks, and there are five levels of linguistic variables for the outputs. The FISs include 25, 25, and 81 if-then rules, respectively, and Iranian audit experts confirmed all the rules.
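
    A minimal two-rule sketch of Mamdani inference with triangular memberships and center-of-gravity defuzzification; the membership parameters and rules are invented for illustration and are far smaller than the paper's 25- and 81-rule bases:

      import numpy as np

      def tri(x, a, b, c):
          # Triangular membership function with support [a, c] and peak at b.
          return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      z = np.linspace(0.0, 1.0, 201)               # output universe: audit risk
      low_out = tri(z, 0.0, 0.25, 0.5)
      high_out = tri(z, 0.5, 0.75, 1.0)

      def audit_risk(inherent, control):
          # Rule 1: IF inherent is high AND control is high THEN risk is high.
          r1 = min(tri(inherent, 0.5, 0.75, 1.0), tri(control, 0.5, 0.75, 1.0))
          # Rule 2: IF inherent is low THEN risk is low.
          r2 = tri(inherent, 0.0, 0.25, 0.5)
          agg = np.maximum(np.minimum(r1, high_out), np.minimum(r2, low_out))
          return np.sum(z * agg) / np.sum(agg)     # centroid defuzzification

      print(audit_risk(0.8, 0.7))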

  19. Dynamic exponents for Potts model cluster algorithms

    Science.gov (United States)

    Coddington, Paul D.; Baillie, Clive F.

    We have studied the Swendsen-Wang and Wolff cluster update algorithms for the Ising model in 2, 3 and 4 dimensions. The data indicate simple relations between the specific heat and the Wolff autocorrelations, and between the magnetization and the Swendsen-Wang autocorrelations. This implies that the dynamic critical exponents are related to the static exponents of the Ising model. We also investigate the possibility of similar relationships for the Q-state Potts model.
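
    For reference, a minimal Wolff cluster update for the 2D Ising model looks as follows; the lattice size, seed, and temperature are illustrative:

      import numpy as np

      def wolff_update(spins, beta, rng):
          # Grow a cluster from a random seed, adding aligned neighbors with
          # probability 1 - exp(-2*beta), then flip the whole cluster.
          L = spins.shape[0]
          p_add = 1.0 - np.exp(-2.0 * beta)
          seed = (int(rng.integers(L)), int(rng.integers(L)))
          s0 = spins[seed]
          cluster, frontier = {seed}, [seed]
          while frontier:
              i, j = frontier.pop()
              for ni, nj in ((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L):
                  if (ni, nj) not in cluster and spins[ni, nj] == s0 and rng.random() < p_add:
                      cluster.add((ni, nj))
                      frontier.append((ni, nj))
          for i, j in cluster:
              spins[i, j] *= -1

      rng = np.random.default_rng(1)
      spins = rng.choice([-1, 1], size=(16, 16))
      for _ in range(100):
          wolff_update(spins, beta=0.44, rng=rng)   # near the critical coupling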

  20. A Generic Design Model for Evolutionary Algorithms

    Institute of Scientific and Technical Information of China (English)

    He Feng; Kang Li-shan; Chen Yu-ping

    2003-01-01

    A generic design model for evolutionary algorithms is proposed in this paper. The model, which is described in detail in UML, focuses on the key concepts and mechanisms in evolutionary algorithms. The model not only achieves separation of concerns and encapsulation of implementations through classification and abstraction of those concepts, it also has a flexible architecture due to the application of design patterns. As a result, the model is reusable, extendible, easy to understand, easy to use, and easy to test. A large number of experiments applying the model to solve many different problems adequately illustrate its generality and effectiveness.

  1. Development and evaluation of a clinical model for lung cancer patients using stereotactic body radiotherapy (SBRT) within a knowledge-based algorithm for treatment planning.

    Science.gov (United States)

    Snyder, Karen Chin; Kim, Jinkoo; Reding, Anne; Fraser, Corey; Gordon, James; Ajlouni, Munther; Movsas, Benjamin; Chetty, Indrin J

    2016-11-08

    The purpose of this study was to describe the development of a clinical model for lung cancer patients treated with stereotactic body radiotherapy (SBRT) within a knowledge-based algorithm for treatment planning, and to evaluate the model's performance and applicability to different planning techniques, tumor locations, and beam arrangements. 105 SBRT plans for lung cancer patients previously treated at our institution were included in the development of the knowledge-based model (KBM). The KBM was trained with a combination of IMRT, VMAT, and 3D CRT techniques. Model performance was validated with 25 cases, for both IMRT and VMAT. The full KBM encompassed lesions located centrally vs. peripherally (43:62), upper vs. lower (62:43), and anterior vs. posterior (60:45). Four separate sub-KBMs were created based on tumor location, and their results were compared with the full KBM to evaluate its robustness. Beam templates were used in conjunction with the optimizer to evaluate the model's ability to handle suboptimal beam placements. Dose differences to organs at risk (OAR) were evaluated between the plans generated by each KBM. Knowledge-based plans (KBPs) were comparable to clinical plans with respect to target conformity and OAR doses. The KBPs resulted in a lower maximum spinal cord dose, by 1.0 ± 1.6 Gy compared to clinical plans, p = 0.007. Sub-KBMs split according to tumor location did not produce significantly better DVH estimates compared to the full KBM. For central lesions, compared to the full KBM, the peripheral sub-KBM resulted in lower dose to 0.035 cc and 5 cc of the esophagus, both by 0.4 ± 0.8 Gy, p = 0.025. For all lesions, compared to the full KBM, the posterior sub-KBM resulted in higher dose to 0.035 cc, 0.35 cc, and 1.2 cc of the spinal cord, by 0.2 ± 0.4 Gy, p = 0.01. Plans using template beam arrangements met target and OAR criteria, with an increase noted in maximum heart dose (1.2 ± 2.2 Gy, p = 0.01) and GI (0.2 ± 0.4, p = 0.01) for the nine

  2. Part style developing model based on CHNN clustering algorithm

    Institute of Scientific and Technical Information of China (English)

    钱素琴

    2009-01-01

    The automatic acquisition of style parts in an intelligent garment style design system is studied. A method based on a continuous Hopfield neural network (CHNN) clustering algorithm is used to build a part style generation model. Characteristic variables representing the structural properties of a part are extracted to form a point set in space, and the set is then classified by the CHNN. By analyzing the relations between part categories and garment styles, rules for part arrangement in style-oriented design are obtained. The model is applied to the coat-piece part, and the classification of coat pieces is realized. The experimental results show that the model is soundly designed, classifies clearly, and is extensible.

  3. Worm algorithm for the CPN−1 model

    Directory of Open Access Journals (Sweden)

    Tobias Rindlisbacher

    2017-05-01

    Full Text Available The CPN−1 model in 2D is an interesting toy model for 4D QCD, as it possesses confinement, asymptotic freedom, and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost of simulating the 2D CPN−1 model on the lattice is much lower than that of simulating 4D QCD. However, to our knowledge, no efficient algorithm that also works at finite density has so far been tested for simulating the lattice CPN−1 model for N>2. To this end, we propose a new type of worm algorithm, appropriate for simulating the lattice CPN−1 model in a dual representation based on flux variables, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves, where a defect is simply moved from one lattice site to the next, our algorithm also allows worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CPN−1 lattice actions and exhibit marked differences in their approach to the continuum limit.

  4. Algorithms and Models for the Web Graph

    NARCIS (Netherlands)

    Gleich, David F.; Komjathy, Julia; Litvak, Nelly

    2015-01-01

    This volume contains the papers presented at WAW2015, the 12th Workshop on Algorithms and Models for the Web-Graph held during December 10–11, 2015, in Eindhoven. There were 24 submissions. Each submission was reviewed by at least one, and on average two, Program Committee members. The committee dec

  6. Development of target-tracking algorithms using neural network

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dong Sun; Lee, Joon Whaoan; Yoon, Sook; Baek, Seong Hyun; Lee, Myung Jae [Chonbuk National University, Chonjoo (Korea)

    1998-04-01

    The utilization of remote-control robot systems in atomic power plants or nuclear-related facilities is growing rapidly, to protect workers from high-radiation environments. Such applications require complete stability of the robot system, so precisely tracking the robot is essential for the whole system. This research aims to accomplish that goal by developing appropriate algorithms for remote-control robot systems. A neural network tracking system is designed and tested to trace a robot endpoint. The model utilizes the excellent capabilities of neural networks: nonlinear mapping between inputs and outputs, learning capability, and generalization capability. The neural tracker consists of two networks, for position detection and prediction. Tracking algorithms are developed and tested for the two models. The results of the experiments show that both models are promising as real-time target-tracking systems for remote-control robot systems. (author). 10 refs., 47 figs.

  7. Robust Algorithm Development for Application of Pinch Analysis on HEN

    Directory of Open Access Journals (Sweden)

    Ritesh Sojitra

    2016-10-01

    Full Text Available Since its genesis, pinch analysis has been continuously evolving, and its application is widening, reaching new horizons. The original concept of the pinch approach is quite clear, and because of its flexibility, innumerable applications have been developed in industry. Consequently, a designer can get thoroughly muddled among these flexibilities. Hence, there was a need for a rigorous and robust model that could guide the optimisation engineer in deciding on the applicability of the pinch approach and direct the sequential steps of the procedure in a predefined workflow, so that the precision of the approach is ensured. Exploring the various options for a hands-on algorithm that can be coded and interfaced with a GUI, and keeping in mind the difficulties faced by designers, an effort was made to formulate a new algorithm for the optimisation activity. As such, the work aims at easing application hurdles and providing hands-on information for use during the preparation of new application tools. This paper presents a new algorithm whose application ensures that the developer does not violate basic pinch rules. To achieve this, intermittent check gates are provided in the algorithm, which eliminate violations of predefined basic pinch rules, design philosophy, and engineering standards, and ensure that constraints are adequately considered. In addition, its sequential instructions for developing the pinch analysis, with reiteration, promise Maximum Energy Recovery (MER).

  8. Weekly Fleet Assignment Model and Algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHU Xing-hui; ZHU Jin-fu; GONG Zai-wu

    2007-01-01

    A 0-1 integer programming model for weekly fleet assignment was put forward based on a linear network and weekly flight scheduling in China. In this model, the objective function is to maximize the total profit of the fleet assignment, subject to constraints on coverage, aircraft flow balance, fleet size, aircraft availability, aircraft usage, flight restrictions, aircraft seat capacity, and stopovers. A branch-and-bound algorithm based on special ordered sets was then applied to solve the model. Finally, a real-world case study on an airline with 5 fleets, 48 aircraft, and 1786 flight legs showed a profit increase of $1,591,276 per week with a running time of no more than 4 minutes, which indicates that the model and algorithm perform well for a domestic airline.
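
    A toy version of such a 0-1 assignment model can be sketched with the PuLP modeling library (an assumption; the paper does not name a solver), using invented profits and only the coverage and fleet-size constraints:

      import pulp

      flights = ["F1", "F2", "F3"]
      fleets = ["A320", "B737"]
      profit = {("F1", "A320"): 5, ("F1", "B737"): 4,
                ("F2", "A320"): 3, ("F2", "B737"): 6,
                ("F3", "A320"): 7, ("F3", "B737"): 2}

      prob = pulp.LpProblem("weekly_fleet_assignment", pulp.LpMaximize)
      x = pulp.LpVariable.dicts("x", (flights, fleets), cat="Binary")

      # Objective: total assignment profit.
      prob += pulp.lpSum(profit[f, k] * x[f][k] for f in flights for k in fleets)

      # Coverage: every flight leg is flown by exactly one fleet.
      for f in flights:
          prob += pulp.lpSum(x[f][k] for k in fleets) == 1

      # Fleet size: at most two legs assigned to the A320 fleet (illustrative).
      prob += pulp.lpSum(x[f]["A320"] for f in flights) <= 2

      prob.solve()
      print({(f, k): x[f][k].value() for f in flights for k in fleets})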

  9. Multi-level Algorithm for the Anderson Impurity Model

    Science.gov (United States)

    Chandrasekharan, S.; Yoo, J.; Baranger, H. U.

    2004-03-01

    We develop a new quantum Monte Carlo algorithm to solve the Anderson impurity model. Instead of integrating out the fermions, we work in the fermion occupation number basis and thus have direct access to the fermionic physics. The sign problem that arises in this formulation can be solved by a multi-level technique developed by Luscher and Weisz in the context of lattice QCD [JHEP 0109 (2001) 010]. We use the directed-loop algorithm to update the degrees of freedom. Further, this algorithm allows us to work directly in the Euclidean time continuum limit for arbitrary values of the interaction strength, thus avoiding time discretization errors. We present results for the impurity susceptibility and the properties of the screening cloud obtained using the algorithm.

  10. Computational Granular Dynamics Models and Algorithms

    CERN Document Server

    Pöschel, Thorsten

    2005-01-01

    Computer simulations not only belong to the most important methods for the theoretical investigation of granular materials, but also provide the tools that have enabled much of the expanding research by physicists and engineers. The present book is intended to serve as an introduction to the application of numerical methods to systems of granular particles. Accordingly, emphasis is placed on a general understanding of the subject rather than on the presentation of the latest advances in numerical algorithms. Although a basic knowledge of C++ is needed for the understanding of the numerical methods and algorithms in the book, it avoids usage of elegant but complicated algorithms to remain accessible for those who prefer to use a different programming language. While the book focuses more on models than on the physics of granular material, many applications to real systems are presented.

  11. Efficient Algorithms for Parsing the DOP Model

    CERN Document Server

    Goodman, J

    1996-01-01

    Excellent results have been reported for Data-Oriented Parsing (DOP) of natural language texts (Bod, 1993). Unfortunately, existing algorithms are both computationally intensive and difficult to implement. Previous algorithms are expensive due to two factors: the exponential number of rules that must be generated and the use of a Monte Carlo parsing algorithm. In this paper we solve the first problem by a novel reduction of the DOP model to a small, equivalent probabilistic context-free grammar. We solve the second problem by a novel deterministic parsing strategy that maximizes the expected number of correct constituents, rather than the probability of a correct parse tree. Using the optimizations, experiments yield a 97% crossing brackets rate and 88% zero crossing brackets rate. This differs significantly from the results reported by Bod, and is comparable to results from a duplication of Pereira and Schabes's (1992) experiment on the same data. We show that Bod's results are at least partially due to an e...

  12. A Building Model Framework for a Genetic Algorithm Multi-objective Model Predictive Control

    DEFF Research Database (Denmark)

    Arendt, Krzysztof; Ionesi, Ana; Jradi, Muhyiddine

    2016-01-01

    implemented in only a few buildings. The following difficulties hinder the widespread usage of MPC: (1) significant model development time, (2) limited portability of models, (3) model computational demand. In the present study, a new model development framework for an MPC system based on a Genetic Algorithm (GA

  13. Computational Fluid Dynamics. [numerical methods and algorithm development

    Science.gov (United States)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are presented, along with examples of results obtained with the most recent algorithm developments.

  14. Development of hybrid artificial intelligent based handover decision algorithm

    Directory of Open Access Journals (Sweden)

    A.M. Aibinu

    2017-04-01

    Full Text Available The possibility of seamless handover remains a mirage despite the plethora of existing handover algorithms. The underlying factor responsible for this has been traced to the handover decision module in the handover process. Hence, in this paper, a novel hybrid artificial-intelligence handover decision algorithm has been developed. The developed model is a hybrid of an Artificial Neural Network (ANN) based prediction model and fuzzy logic. On accessing the network, the Received Signal Strength (RSS) is acquired over a period of time to form a time series. The data are then fed to the newly proposed k-step-ahead ANN-based RSS prediction system to estimate the prediction model coefficients. The synaptic weights and adaptive coefficients of the trained ANN are then used to compute the k-step-ahead ANN-based RSS prediction model coefficients. The predicted RSS value is later codified as fuzzy sets and, in conjunction with other measured network parameters, fed into the fuzzy logic controller to finalize the handover decision process. The performance of the newly developed k-step-ahead ANN-based RSS prediction algorithm was evaluated using simulated and real data acquired from available mobile communication networks. Results obtained in both cases show that the proposed algorithm is capable of predicting the RSS value ahead to within about ±0.0002 dB. The cascaded effect of the complete handover decision module was also evaluated; the results show that the newly proposed hybrid approach was able to reduce the ping-pong effect associated with other handover techniques.

  15. Image processing algorithm acceleration using reconfigurable macro processor model

    Institute of Scientific and Technical Information of China (English)

    孙广富; 陈华明; 卢焕章

    2004-01-01

    The concept and advantages of reconfigurable technology are introduced. A processor architecture following the reconfigurable macro processor (RMP) model, based on an FPGA array and a DSP, is put forward and has been implemented. Two image algorithms were developed: template-based automatic target recognition and zone labeling. The first estimates motion direction against an infrared image background; the second is a line picking-up algorithm based on image zone labeling and a phase grouping technique. Each acts as a "hardware" function that can be called by the DSP from a high-level algorithm, that is, a hardware algorithm of the DSP. The experimental results show that reconfigurable computing technology based on the RMP is an ideal means of accelerating high-speed image processing tasks; high real-time performance was obtained in our two applications on the RMP.

  16. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  17. Genetic Algorithm Based Microscale Vehicle Emissions Modelling

    Directory of Open Access Journals (Sweden)

    Sicong Zhu

    2015-01-01

Full Text Available There is a need to match the accuracy of emission estimates to the outputs of transport models. The overall error rate in long-term traffic forecasts produced by strategic transport models is likely to be significant. Microsimulation models, whilst high-resolution in nature, may have similar measurement errors if they use the outputs of strategic models to obtain traffic demand predictions. At the microlevel, this paper discusses the limitations of existing emissions estimation approaches. Emission models for predicting pollutants other than CO2 are proposed. A genetic algorithm approach, capable of solving combinatorial optimization problems, is adopted to select the predictor variables for the black box model. Overall, the emission prediction results reveal that the proposed new models outperform conventional equations in terms of accuracy and robustness.

  18. Synaptic dynamics: linear model and adaptation algorithm.

    Science.gov (United States)

    Yousefi, Ali; Dibazar, Alireza A; Berger, Theodore W

    2014-08-01

In this research, temporal processing in brain neural circuitry is addressed by a dynamic model of synaptic connections in which the synapse model accounts for both pre- and post-synaptic processes determining its temporal dynamics and strength. Neurons, which are excited by the post-synaptic potentials of hundreds of synapses, form the computational engine capable of processing dynamic neural stimuli. Temporal dynamics in neural models with dynamic synapses are analyzed, and learning algorithms for synaptic adaptation of neural networks with hundreds of synaptic connections are proposed. The paper starts by introducing a linear approximate model for the temporal dynamics of synaptic transmission. The proposed linear model substantially simplifies the analysis and training of spiking neural networks. Furthermore, it is capable of replicating the synaptic response of the non-linear facilitation-depression model with an accuracy better than 92.5%. In the second part of the paper, a supervised spike-in-spike-out learning rule for synaptic adaptation in dynamic synapse neural networks (DSNN) is proposed. The proposed learning rule is a biologically plausible process capable of simultaneously adjusting both pre- and post-synaptic components of individual synapses. The last section of the paper presents a rigorous analysis of the learning algorithm in a system identification task with hundreds of synaptic connections, which confirms the learning algorithm's accuracy, repeatability and scalability. The DSNN is used to predict the spiking activity of cortical neurons and in pattern recognition tasks. The DSNN model is shown to be a generative model capable of producing different cortical neuron spiking patterns and CA1 pyramidal neuron recordings. A single-layer DSNN classifier on a benchmark pattern recognition task outperforms a 2-layer neural network and GMM classifiers while having fewer free parameters and

  19. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    Science.gov (United States)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

Algorithm Development Library (ADL) is a framework that mimics the operational IDPS (Interface Data Processing Segment) system currently used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), scheduled for launch in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results of the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  20. Development of antibiotic regimens using graph based evolutionary algorithms.

    Science.gov (United States)

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use, and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimens. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems.

  1. Developing and Implementing the Data Mining Algorithms in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Ramazan Sonat [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Daniel Patrick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

The RAVEN code is becoming a comprehensive tool for probabilistic risk assessment, uncertainty quantification, and verification and validation. It is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data; post-processing and analyzing such data can, in some cases, take longer than the initial software runtime. Data mining algorithms help in recognizing and understanding patterns in the data, and thus in discovering knowledge in databases. The methodologies used in dynamic probabilistic risk assessment and in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code models the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is analyzing the large number of scenarios generated. Data mining techniques are typically used to better organize and understand the data, i.e., to recognize patterns in it. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and on the application of these algorithms to different databases.

  2. Sparse modeling theory, algorithms, and applications

    CERN Document Server

    Rish, Irina

    2014-01-01

    ""A comprehensive, clear, and well-articulated book on sparse modeling. This book will stand as a prime reference to the research community for many years to come.""-Ricardo Vilalta, Department of Computer Science, University of Houston""This book provides a modern introduction to sparse methods for machine learning and signal processing, with a comprehensive treatment of both theory and algorithms. Sparse Modeling is an ideal book for a first-year graduate course.""-Francis Bach, INRIA - École Normale Supřieure, Paris

  3. Multiscale modeling for classification of SAR imagery using hybrid EM algorithm and genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    Xianbin Wen; Hua Zhang; Jianguang Zhang; Xu Jiao; Lei Wang

    2009-01-01

A novel method that hybridizes a genetic algorithm (GA) and the expectation maximization (EM) algorithm for the classification of synthetic aperture radar (SAR) imagery is proposed, based on the finite Gaussian mixture model (GMM) and the multiscale autoregressive (MAR) model. The algorithm improves the global optimality and consistency of the classification performance. Experiments on SAR images show that the proposed algorithm significantly outperforms the standard EM method in classification accuracy.
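For readers unfamiliar with the EM half of this hybrid, the following minimal sketch runs EM for a one-dimensional two-component Gaussian mixture; in the paper a GA supplies and refines initializations, and the data are multiscale SAR features rather than the synthetic 1-D sample used here.

```python
# Hedged sketch of EM for a 1-D, two-component GMM (initialization is a
# plain guess here, where the paper would use the GA).
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 0.5, 200)])

w, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    # E-step: responsibilities of each component for each point
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = w * pdf
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations
    n_k = resp.sum(axis=0)
    w = n_k / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

print(w.round(2), mu.round(2), sigma.round(2))
```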

  4. Oscillation Detection Algorithm Development Summary Report and Test Plan

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

Small signal stability problems are one of the major threats to grid stability and reliability in California and the western U.S. power grid. An unstable oscillatory mode can cause large-amplitude oscillations and may result in system breakup and large-scale blackouts. There have been several incidents of system-wide oscillations; of them, the most notable is the August 10, 1996 western system breakup, produced as a result of undamped system-wide oscillations. There is a great need for real-time monitoring of small-signal oscillations in the system. In power systems, a small-signal oscillation is the result of poor electromechanical damping. Considerable understanding and literature have been developed on the small-signal stability problem over the past 50+ years. These studies have been mainly based on a linearized system model and eigenvalue analysis of its characteristic matrix. However, their practical feasibility is greatly limited, as power system models have been found inadequate in describing real-time operating conditions. Significant efforts have been devoted to monitoring system oscillatory behaviors from real-time measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision time-synchronized data needed for estimating oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to estimate system oscillation modes and their damping. Low damping indicates potential system stability issues. Oscillation alarms can be issued when the power system is lightly damped. A good oscillation alarm tool can provide time for operators to take remedial action and reduce the probability of a system breakup resulting from a light damping condition. Real-time oscillation monitoring requires ModeMeter algorithms to have the capability to work with various kinds of measurements: disturbance data (ringdown signals), noise probing data, and ambient data. Several measurement

  5. An Extended Clustering Algorithm for Statistical Language Models

    CERN Document Server

    Ueberla, J P

    1994-01-01

Statistical language models frequently suffer from a lack of training data. This problem can be alleviated by clustering, because it reduces the number of free parameters that need to be trained. However, clustered models have the following drawback: if there is "enough" data to train an unclustered model, then the clustered variant may perform worse. On currently used language modeling corpora, e.g. the Wall Street Journal corpus, how do the performances of a clustered and an unclustered model compare? While trying to address this question, we develop the following two ideas. First, to get a clustering algorithm with potentially high performance, an existing algorithm is extended to deal with higher order N-grams. Second, to make it possible to cluster large amounts of training data more efficiently, a heuristic to speed up the algorithm is presented. The resulting clustering algorithm can be used to cluster trigrams on the Wall Street Journal corpus and the language models it produces can compete with exi...

  6. A new efficient Cluster Algorithm for the Ising Model

    CERN Document Server

Nyfeler, Matthias; Pepe, Michele; Wiese, Uwe-Jens

    2005-01-01

    Using D-theory we construct a new efficient cluster algorithm for the Ising model. The construction is very different from the standard Swendsen-Wang algorithm and related to worm algorithms. With the new algorithm we have measured the correlation function with high precision over a surprisingly large number of orders of magnitude.
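For context, the sketch below implements a standard Wolff-style single-cluster update for the 2-D Ising model, the kind of cluster algorithm this construction is set against; it is not the paper's D-theory algorithm, and the lattice size and temperature are arbitrary.

```python
# Standard Wolff single-cluster update for the 2-D Ising model (for
# orientation only; not the D-theory algorithm of the paper).
import numpy as np

rng = np.random.default_rng(2)
L, beta = 16, 0.44                     # lattice size, inverse temperature
spins = rng.choice([-1, 1], size=(L, L))
p_add = 1.0 - np.exp(-2.0 * beta)      # bond-activation probability

def wolff_step(spins):
    seed = tuple(rng.integers(0, L, 2))
    cluster, stack, s0 = {seed}, [seed], spins[seed]
    while stack:
        i, j = stack.pop()
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            ni, nj = ni % L, nj % L    # periodic boundary conditions
            if (ni, nj) not in cluster and spins[ni, nj] == s0 \
                    and rng.random() < p_add:
                cluster.add((ni, nj))
                stack.append((ni, nj))
    for i, j in cluster:               # flip the whole cluster at once
        spins[i, j] = -s0

for _ in range(100):
    wolff_step(spins)
print("magnetization per site:", spins.mean())
```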

  7. Link mining models, algorithms, and applications

    CERN Document Server

    Yu, Philip S; Faloutsos, Christos

    2010-01-01

    This book presents in-depth surveys and systematic discussions on models, algorithms and applications for link mining. Link mining is an important field of data mining. Traditional data mining focuses on 'flat' data in which each data object is represented as a fixed-length attribute vector. However, many real-world data sets are much richer in structure, involving objects of multiple types that are related to each other. Hence, recently link mining has become an emerging field of data mining, which has a high impact in various important applications such as text mining, social network analysi

  8. Genetic Algorithms Principles Towards Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2011-10-01

Full Text Available In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMM). A problem arises when experts assign probability values to an HMM: they use only limited inputs, so the assigned probability values might not be accurate enough to serve in other cases in the same domain. We introduce an approach based on GAs to find suitable probability values for the HMM so that it remains correct in more cases than those used to assign the original probability values.

  9. Models and Algorithms for Tracking Target with Coordinated Turn Motion

    Directory of Open Access Journals (Sweden)

    Xianghui Yuan

    2014-01-01

Full Text Available Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: the coordinated turn (CT) model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver centered circular motion model. Then, in the single model tracking framework, the tracking algorithms for the last four models are compared and suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple model (MM) framework, an algorithm based on the expectation maximization (EM) algorithm is derived, in both batch and recursive forms. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.
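For reference, the first model in this comparison, the CT model with known turn rate, has a standard closed-form transition. A common form (notation assumed here, not taken from the paper) for the state $[x,\ \dot x,\ y,\ \dot y]^{\mathsf T}$ with turn rate $\omega$, sampling period $T$, and process noise $w_k$ is:

```latex
% Standard known-turn-rate CT transition; sign conventions differ between
% references, so treat this as one common variant.
\[
x_{k+1} =
\begin{pmatrix}
1 & \frac{\sin\omega T}{\omega} & 0 & -\frac{1-\cos\omega T}{\omega}\\
0 & \cos\omega T & 0 & -\sin\omega T\\
0 & \frac{1-\cos\omega T}{\omega} & 1 & \frac{\sin\omega T}{\omega}\\
0 & \sin\omega T & 0 & \cos\omega T
\end{pmatrix} x_k + w_k
\]
```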

  10. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    Science.gov (United States)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  11. Connected-Health Algorithm: Development and Evaluation.

    Science.gov (United States)

    Vlahu-Gjorgievska, Elena; Koceski, Saso; Kulev, Igor; Trajkovik, Vladimir

    2016-04-01

Nowadays, there is growing interest in the adoption of novel ICT technologies in the field of medical monitoring and personal healthcare systems. This paper proposes the design of a connected-health algorithm inspired by the social computing paradigm. The purpose of the algorithm is to recommend a specific activity that will improve the user's health, based on the user's health condition and on knowledge derived from the history of that user and of users with similar attitudes. The algorithm can help users gain greater confidence in choosing physical activities that will improve their health. The proposed algorithm has been experimentally validated using real data collected from a community of 1000 active users. The results showed that a recommended physical activity that contributed towards a weight loss of at least 0.5 kg is found in the first half of the ordered list of recommendations generated by the algorithm, with probability > 0.6 at the 1% level of significance.

  12. A tuning algorithm for model predictive controllers based on genetic algorithms and fuzzy decision making.

    Science.gov (United States)

    van der Lee, J H; Svrcek, W Y; Young, B R

    2008-01-01

Model Predictive Control is a valuable tool for the process control engineer in a wide variety of applications. Because of this, the structure of an MPC can vary dramatically from application to application. A number of works have been dedicated to MPC tuning for specific cases; since MPCs can differ significantly, these tuning methods often become inapplicable, and a trial-and-error tuning approach must be used instead. This can be quite time consuming and can result in non-optimal tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. This approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantages of this approach are that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC. In addition, multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, and can use multiple inputs to determine tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases, where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study is presented to illustrate the use of the tuning algorithm, including how different definitions of "optimum" control can arise and how they are accounted for in the multi-objective decision-making algorithm. The resulting tuning parameters from each definition set are compared, showing that the tuning parameters vary to meet each definition of optimum control and thus that the generalized automated tuning approach for MPCs is feasible.

  13. A combined model reduction algorithm for controlled biochemical systems.

    Science.gov (United States)

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-02-13

Systems Biology continues to produce increasingly large models of complex biochemical reaction networks. In applications requiring, for example, parameter estimation, the use of agent-based modelling approaches, or real-time simulation, this growing model complexity can present a significant hurdle. Often, however, not all portions of a model are of equal interest in a given setting. In such situations, methods of model reduction offer one possible approach to the issue of complexity, by seeking to eliminate those portions of a pathway that can be shown to have the least effect upon the properties of interest. In this paper a model reduction algorithm bringing together the complementary aspects of proper lumping and empirical balanced truncation is presented. Additional contributions include the development of a criterion for the selection of state-variable elimination via conservation analysis and the use of an 'averaged' lumping inverse. This combined algorithm is highly automatable and particularly applicable in the context of 'controlled' biochemical networks. The algorithm is demonstrated via application to two examples: an 11-dimensional model of bacterial chemotaxis in Escherichia coli and a 99-dimensional model of extracellular signal-regulated kinase (ERK) activation mediated via the epidermal growth factor (EGF) and nerve growth factor (NGF) receptor pathways. In the case of the chemotaxis model the algorithm reduced the model to 2 state variables, producing a maximal relative error between the dynamics of the original and reduced models of only 2.8% whilst yielding a 26-fold speed-up in simulation time. For the ERK activation model the algorithm reduced the system to 7 state variables, incurring a maximal relative error of 4.8% and producing an approximately 10-fold speed-up in the rate of simulation. Indices of controllability and observability are additionally developed and demonstrated throughout the paper. These provide

  14. A Unified Approach for Developing Efficient Algorithmic Programs

    Institute of Scientific and Technical Information of China (English)

    薛锦云

    1997-01-01

A unified approach called partition-and-recur for developing efficient and correct algorithmic programs is presented. An algorithm (represented by a recurrence and an initiation) is separated from the program, and special attention is paid to algorithm manipulation rather than program calculus. An algorithm is exactly a set of mathematical formulae, which makes formal derivation and proof easier. After an efficient and correct algorithm is obtained, a trivial transformation yields the final program. The approach covers several known algorithm design techniques, e.g. dynamic programming, greedy, divide-and-conquer and enumeration. The techniques of partition and recurrence are not new: partition is a general approach for dealing with complicated objects and is typically used in divide-and-conquer; recurrence is used in algorithm analysis, in developing loop invariants, and in dynamic programming. The main contribution is combining two techniques used in typical algorithm development into a unified, systematic approach for developing general, efficient algorithmic programs, and presenting a new representation of algorithms that makes it easier to understand and demonstrate the correctness and ingenuity of algorithmic programs.
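A toy instance of the partition-and-recur style (my example, not the paper's): the maximum-segment-sum problem is partitioned over prefixes, captured by two recurrences, and the recurrences are then transcribed almost verbatim into a loop.

```python
# Partition over prefixes, then recur:
#   endat(i) = max sum of a segment ending exactly at i
#   best(i)  = max segment sum within a[0..i]
#   endat(i) = max(a[i], endat(i-1) + a[i])
#   best(i)  = max(best(i-1), endat(i))
def max_segment_sum(a):
    end_here = best = a[0]
    for x in a[1:]:                 # trivial transformation of the recurrences
        end_here = max(x, end_here + x)
        best = max(best, end_here)
    return best

print(max_segment_sum([3, -4, 5, -1, 2, -6, 4]))  # -> 6 (segment 5, -1, 2)
```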

  15. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  16. Warehouse Optimization Model Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Guofeng Qin

    2013-01-01

Full Text Available This paper takes the Bao Steel automated logistics warehouse system as an example. The design premise is to keep the center of gravity of each shelf below half of the shelf height; as a result, the time cost of storing or retrieving goods is reduced, and the distance between goods of the same kind is also reduced. A multiobjective optimization model is constructed and solved using a genetic algorithm, yielding a locally optimal solution. Before optimization, the average time to store or retrieve goods was 4.52996 s, and the average distance between goods of the same kind was 2.35318 m. After optimization, the average time is 4.28859 s, and the average distance is 1.97366 m. From this analysis we conclude that the model can improve the efficiency of cargo storage.

  17. Adaptive Numerical Algorithms in Space Weather Modeling

    Science.gov (United States)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2010-01-01

Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical

  18. Dynamical behavior of the Niedermayer algorithm applied to Potts models

    OpenAIRE

    Girardi, D.; Penna, T. J. P.; Branco, N. S.

    2012-01-01

    In this work we make a numerical study of the dynamic universality class of the Niedermayer algorithm applied to the two-dimensional Potts model with 2, 3, and 4 states. This algorithm updates clusters of spins and has a free parameter, $E_0$, which controls the size of these clusters, such that $E_0=1$ is the Metropolis algorithm and $E_0=0$ regains the Wolff algorithm, for the Potts model. For $-1

  19. A genetic algorithm for solving supply chain network design model

    Science.gov (United States)

    Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.

    2013-09-01

Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm for solving a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that, in addition to producing feasible solutions, also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.
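Since the paper's novel chromosome structure is not reproduced in the abstract, the skeleton below uses the plainest alternative, a binary open/closed facility encoding with one-point crossover and bit-flip mutation, just to fix ideas; all costs and parameters are invented.

```python
# Generic GA skeleton for a facility-location flavored network design
# (illustrative stand-in; not the paper's chromosome encoding).
import random

random.seed(5)
n_sites, pop_size, gens = 8, 30, 60
fixed = [random.uniform(10, 40) for _ in range(n_sites)]   # opening costs

def cost(chrom):
    open_sites = [i for i, g in enumerate(chrom) if g]
    if not open_sites:                     # infeasible: nothing is open
        return float("inf")
    # toy service cost: each of 20 customers served by the nearest open site
    service = sum(min(abs(c - i) for i in open_sites) for c in range(20))
    return sum(fixed[i] for i in open_sites) + service

pop = [[random.randint(0, 1) for _ in range(n_sites)] for _ in range(pop_size)]
for _ in range(gens):
    pop.sort(key=cost)
    elite = pop[: pop_size // 2]           # truncation selection
    children = []
    while len(elite) + len(children) < pop_size:
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, n_sites)          # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                   # bit-flip mutation
            k = random.randrange(n_sites)
            child[k] ^= 1
        children.append(child)
    pop = elite + children

best = min(pop, key=cost)
print("best design:", best, "cost:", round(cost(best), 2))
```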

  20. Routine Discovery of Complex Genetic Models using Genetic Algorithms.

    Science.gov (United States)

    Moore, Jason H; Hahn, Lance W; Ritchie, Marylyn D; Thornton, Tricia A; White, Bill C

    2004-02-01

    Simulation studies are useful in various disciplines for a number of reasons including the development and evaluation of new computational and statistical methods. This is particularly true in human genetics and genetic epidemiology where new analytical methods are needed for the detection and characterization of disease susceptibility genes whose effects are complex, nonlinear, and partially or solely dependent on the effects of other genes (i.e. epistasis or gene-gene interaction). Despite this need, the development of complex genetic models that can be used to simulate data is not always intuitive. In fact, only a few such models have been published. We have previously developed a genetic algorithm approach to discovering complex genetic models in which two single nucleotide polymorphisms (SNPs) influence disease risk solely through nonlinear interactions. In this paper, we extend this approach for the discovery of high-order epistasis models involving three to five SNPs. We demonstrate that the genetic algorithm is capable of routinely discovering interesting high-order epistasis models in which each SNP influences risk of disease only through interactions with the other SNPs in the model. This study opens the door for routine simulation of complex gene-gene interactions among SNPs for the development and evaluation of new statistical and computational approaches for identifying common, complex multifactorial disease susceptibility genes.

  1. Probabilistic structural analysis algorithm development for computational efficiency

    Science.gov (United States)

    Wu, Y.-T.

    1991-01-01

The PSAM (Probabilistic Structural Analysis Methods) program is developing a probabilistic structural risk assessment capability for the SSME components. An advanced probabilistic structural analysis software system, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), is being developed as part of the PSAM effort to accurately simulate stochastic structures operating under severe random loading conditions. One of the challenges in developing the NESSUS system is designing probabilistic algorithms that provide both efficiency and accuracy. The main probability algorithms developed and implemented in the NESSUS system are efficient but approximate in nature. Over the last six years, the algorithms have been improved significantly.

  2. Genetic Algorithm Approaches to Prebiotic Chemistry Modeling

    Science.gov (United States)

    Lohn, Jason; Colombano, Silvano

    1997-01-01

    We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.

  3. Firefly algorithm versus genetic algorithm as powerful variable selection tools and their effect on different multivariate calibration models in spectroscopy: A comparative study

    Science.gov (United States)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2017-01-01

For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models, namely concentration residual augmented classical least squares, artificial neural network, and support vector regression, on UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was carried out. The results revealed the superiority of this new, powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found among the models regarding their predictive ability. This confirms that simpler and faster models were obtained without any deterioration in the quality of the calibration.
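To make the mechanics concrete, here is a bare-bones firefly algorithm minimizing a toy continuous objective; in the paper the fireflies instead encode wavelength subsets for variable selection, and the parameter values below are conventional defaults rather than the authors' settings.

```python
# Minimal firefly algorithm on the sphere function (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
n, dim, iters = 20, 2, 100
beta0, gamma, alpha = 1.0, 1.0, 0.1   # attractiveness, absorption, noise scale

def objective(x):                      # sphere function: minimum at the origin
    return np.sum(x ** 2, axis=-1)

pos = rng.uniform(-5, 5, (n, dim))
for _ in range(iters):
    f = objective(pos)
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:            # move firefly i toward brighter j
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                pos[i] += beta * (pos[j] - pos[i]) \
                          + alpha * rng.uniform(-0.5, 0.5, dim)

print("best found:", pos[np.argmin(objective(pos))])
```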

  4. Bayesian online algorithms for learning in discrete Hidden Markov Models

    OpenAIRE

    Alamino, Roberto C.; Caticha, Nestor

    2008-01-01

    We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with the already known Baldi-Chauvin Algorithm. Using the Kullback-Leibler divergence as a measure of generalization we draw learning curves in simplified situations for these algorithms and compare their performances.

  5. Modelling Agro-Met Station Observations Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Prashant Kumar

    2014-01-01

Full Text Available The present work discusses the development of a nonlinear data-fitting technique based on a genetic algorithm (GA) for the prediction of routine weather parameters using observations from Agro-Met Stations (AMS). The algorithm produces the equations that best describe the temporal evolution of daily minimum and maximum near-surface (2.5 m height) air temperature and relative humidity, and of daily averaged wind speed (10 m height), at selected AMS locations. These enable forecasts of these weather parameters, which could be used in crop forecast models. The forecast equations developed in the present study use only past observations of the above-mentioned parameters. This approach, unlike other prediction methods, provides an explicit analytical forecast equation for each parameter. Predictions up to 3 days ahead have been validated using independent datasets unknown to the training algorithm, with impressive results. The power of the algorithm has also been demonstrated by its superiority over the persistence forecast used as a benchmark.

  6. Immune System Model Calibration by Genetic Algorithm

    NARCIS (Netherlands)

    Presbitero, A.; Krzhizhanovskaya, V.; Mancini, E.; Brands, R.; Sloot, P.

    2016-01-01

We aim to develop a mathematical model of the human immune system for advanced individualized healthcare, in which the medication plan is fine-tuned to fit a patient's condition through monitored biochemical processes. One of the challenges is calibrating the model parameters to satisfy existing experimental

  7. Bouc–Wen hysteresis model identification using Modified Firefly Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Zaman, Mohammad Asif, E-mail: zaman@stanford.edu [Department of Electrical Engineering, Stanford University (United States); Sikder, Urmita [Department of Electrical Engineering and Computer Sciences, University of California, Berkeley (United States)

    2015-12-01

The parameters of the Bouc–Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc–Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, a Genetic Algorithm and the Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying the Bouc–Wen model parameters. Finally, the proposed method is used to find the Bouc–Wen model parameters from experimental data; the obtained model is in good agreement with the measured data. - Highlights: • We describe a new method to find the Bouc–Wen hysteresis model parameters. • We propose a Modified Firefly Algorithm. • We compare our method with existing methods and find that the proposed method performs better. • We use our model to fit experimental results. Good agreement is found.
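For reference, one common parameterization of the Bouc–Wen model (conventions and sign choices vary between references) expresses the restoring force through an internal hysteretic variable $z$:

```latex
% One common Bouc-Wen form; A, alpha, beta, gamma, n, k are exactly the
% kind of parameters the identification procedure estimates.
\[
F(t) = \alpha k\,x(t) + (1-\alpha)k\,z(t),
\qquad
\dot z = A\,\dot x \;-\; \beta\,\lvert\dot x\rvert\,\lvert z\rvert^{\,n-1} z
\;-\; \gamma\,\dot x\,\lvert z\rvert^{\,n}
\]
```

The identification task is then to choose $A$, $\alpha$, $\beta$, $\gamma$, $n$, and $k$ so that simulated force-displacement loops match the measured data.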

  8. Motion Model Employment using interacting Motion Model Algorithm

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar

    2006-01-01

The paper presents a simulation study on tracking a maneuvering target using a selective approach for choosing the Interacting Multiple Model (IMM) algorithm, providing wider coverage for tracking such targets. Initially, there are two motion models in the system to track a target, and the probability of each model being correct is computed through a likelihood function for each model. The study presents a simple technique to introduce additional models into the system using a deterministic acceleration, which basically defines the dynamics of the system; based on this value, more motion models can be employed to increase the coverage. Finally, the combined estimate is obtained using the posterior probabilities from the different filter models. The implemented approach provides an adaptive scheme for selecting a varying number of motion models. Motion model description is important as it defines the kind

  9. Algorithm for Realistic Modeling of Graphitic Systems

    Directory of Open Access Journals (Sweden)

    A.V. Khomenko

    2011-01-01

    Full Text Available An algorithm for molecular dynamics simulations of graphitic systems using realistic semiempirical interaction potentials of carbon atoms taking into account both short-range and long-range contributions is proposed. Results of the use of the algorithm for a graphite sample are presented. The scalability of the algorithm depending on the system size and the number of processor cores involved in the calculations is analyzed.

  10. Modeling of higher order systems using artificial bee colony algorithm

    Directory of Open Access Journals (Sweden)

    Aytekin Bağış

    2016-05-01

Full Text Available In this work, the modeling of higher order systems based on the artificial bee colony (ABC) algorithm is examined. Model parameters for sample systems from the literature were obtained using the algorithm, and its performance is presented in comparison with other methods. Simulation results show that the ABC-based system modeling approach can be used as an efficient and powerful method for higher order systems.

  11. A new algorithm for designing developable Bézier surfaces

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xing-wang; WANG Guo-jin

    2006-01-01

A new algorithm is presented that generates developable Bézier surfaces through a Bézier curve called the directrix. The algorithm is based on the differential-geometric necessary and sufficient conditions for a surface to be developable, on the degree evaluation formula for parameter curves, and on the linear independence of the Bernstein basis. No nonlinear characteristic equations have to be solved. Moreover, the vertex of a cone and the edge of regression of a tangent surface can be obtained easily. Aumann's algorithm for developable surfaces is a special case of the one in this paper.

  12. Development of Educational Support System for Algorithm using Flowchart

    Science.gov (United States)

    Ohchi, Masashi; Aoki, Noriyuki; Furukawa, Tatsuya; Takayama, Kanta

Recently, information technology has become indispensable for business and industrial development. However, the shortage of software developers has become a social problem. To address this problem, it is necessary to develop and deploy environments for learning algorithms and programming languages. In this paper, we describe an algorithm study support system for programmers based on flowcharts. Since the proposed system uses a Graphical User Interface (GUI), it becomes easy for a programmer to understand the algorithm in a program.

  13. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)

    2014-02-01

A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed; for this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems.

  14. Further development of an improved altimeter wind speed algorithm

    Science.gov (United States)

    Chelton, Dudley B.; Wentz, Frank J.

    1986-01-01

    A previous altimeter wind speed retrieval algorithm was developed on the basis of wind speeds in the limited range from about 4 to 14 m/s. In this paper, a new approach which gives a wind speed model function applicable over the range 0 to 21 m/s is used. The method is based on comparing 50 km along-track averages of the altimeter normalized radar cross section measurements with neighboring off-nadir scatterometer wind speed measurements. The scatterometer winds are constructed from 100 km binned measurements of radar cross section and are located approximately 200 km from the satellite subtrack. The new model function agrees very well with earlier versions up to wind speeds of 14 m/s, but differs significantly at higher wind speeds. The relevance of these results to the Geosat altimeter launched in March 1985 is discussed.

  15. Enhanced hybrid search algorithm for protein structure prediction using the 3D-HP lattice model.

    Science.gov (United States)

    Zhou, Changjun; Hou, Caixia; Zhang, Qiang; Wei, Xiaopeng

    2013-09-01

Protein structure prediction in the hydrophobic-polar (HP) lattice model concerns the prediction of protein tertiary structure and is usually referred to as the protein folding problem. This paper presents an enhanced hybrid search algorithm for protein folding prediction using the three-dimensional (3D) HP lattice model. The enhanced hybrid search algorithm is a combination of the particle swarm optimizer (PSO) and tabu search (TS) algorithms. Since the PSO algorithm is easily trapped in local minima late in its evolution, we combined it with the TS algorithm, which has global optimization properties. Because crossover and mutation are applied many times within the PSO and TS algorithms, the enhanced hybrid search algorithm is called the MCMPSO-TS (multiple crossover and mutation PSO-TS) algorithm. Experimental results show that the MCMPSO-TS algorithm finds the best solutions so far for the listed benchmarks, which will facilitate comparison with future approaches. Moreover, real protein sequences and Fibonacci sequences are verified in the 3D HP lattice model for the first time. Compared with previous evolutionary algorithms, the new hybrid search algorithm is novel and can be used effectively to predict 3D protein folding structure. As amino acid sequence data continue to grow and change, the new algorithm will also contribute to the study of new protein sequences.

  16. Polynomial search and global modeling: Two algorithms for modeling chaos.

    Science.gov (United States)

    Mangiarotti, S; Coudret, R; Drapeau, L; Jarlan, L

    2012-10-01

Global modeling aims to build concise mathematical models of observed dynamical systems. Polynomial Model Search (PoMoS) and Global Modeling (GloMo) are two complementary algorithms (freely downloadable at http://www.cesbio.ups-tlse.fr/us/pomos_et_glomo.html) designed for modeling observed dynamical systems from a small set of time series. Models considered in these algorithms are ordinary differential equations built on a polynomial formulation. More specifically, PoMoS aims at finding polynomial formulations from a given set of 1 to N time series, whereas GloMo is designed for single time series and aims to identify the parameters for a selected structure. GloMo also provides basic features to visualize integrated trajectories and to characterize their structure when it is simple enough: one allows drawing the first return map for a chosen Poincaré section in the reconstructed space; another computes the Lyapunov exponent along the trajectory. In the present paper, global modeling from single time series is considered. A description of the algorithms is given and three examples are provided. The first example is based on the three variables of the Rössler attractor. The second comes from an experimental analysis of copper electrodissolution in phosphoric acid, for which a less parsimonious global model was obtained in a previous study. The third example is exploratory and concerns the cycle of rainfed wheat under semiarid climatic conditions, as observed through a vegetation index derived from a spaceborne sensor.

  17. Evaluating Multicore Algorithms on the Unified Memory Model

    Directory of Open Access Journals (Sweden)

    John E. Savage

    2009-01-01

Full Text Available One of the challenges to achieving good performance on multicore architectures is the effective utilization of the underlying memory hierarchy. While this is an issue for single-core architectures, it is a critical problem for multicore chips. In this paper, we formulate the unified multicore model (UMM) to help understand the fundamental limits on cache performance on these architectures. The UMM seamlessly handles different types of multiple-core processors with varying degrees of cache sharing at different levels. We demonstrate that our model can be used to study a variety of multicore architectures on a variety of applications. In particular, we use it to analyze an option pricing problem using the trinomial model and develop an algorithm for it that has near-optimal memory traffic between cache levels. We have implemented the algorithm on two Quad-Core Intel Xeon 5310 1.6 GHz processors (8 cores in total). It achieves a peak performance of 19.5 GFLOPs, which is 38% of the theoretical peak of the multicore system. We demonstrate that our algorithm outperforms compiler-optimized and auto-parallelized code by a factor of up to 7.5.
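For context on the application, the sketch below is a straightforward (deliberately unoptimized, cache-oblivious) backward-induction pricer for a European call on a recombining trinomial tree; the paper's contribution, blocking the traversal for near-optimal memory traffic between cache levels, is not shown, and all market parameters are illustrative.

```python
# Plain trinomial-tree pricer for a European call (illustrative baseline).
import math

def trinomial_call(S0, K, r, sigma, T, steps):
    dt = T / steps
    dx = sigma * math.sqrt(3 * dt)                # log-price step
    nu = r - 0.5 * sigma ** 2
    # risk-neutral branch probabilities (standard trinomial discretization)
    pu = 0.5 * ((sigma**2 * dt + nu**2 * dt**2) / dx**2 + nu * dt / dx)
    pd = 0.5 * ((sigma**2 * dt + nu**2 * dt**2) / dx**2 - nu * dt / dx)
    pm = 1.0 - pu - pd
    disc = math.exp(-r * dt)
    # terminal payoffs over the 2*steps + 1 log-price levels
    v = [max(S0 * math.exp((j - steps) * dx) - K, 0.0)
         for j in range(2 * steps + 1)]
    for n in range(steps, 0, -1):                 # backward induction
        v = [disc * (pu * v[j + 2] + pm * v[j + 1] + pd * v[j])
             for j in range(2 * (n - 1) + 1)]
    return v[0]

print(round(trinomial_call(100, 100, 0.05, 0.2, 1.0, 200), 4))
```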

  18. Electromagnetic Model and Image Reconstruction Algorithms Based on EIT System

    Institute of Scientific and Technical Information of China (English)

    CAO Zhang; WANG Huaxiang

    2006-01-01

An intuitive 2D model of a circular electrical impedance tomography (EIT) sensor with small electrodes is established based on the theory of analytic functions. The model is validated against the solution of the Laplace equation. Suggestions on electrode optimization and an explanation of the ill-conditioned sensitivity matrix are provided based on the model, which takes electrode distance into account and can be generalized to a sensor with any simply connected region through a conformal transformation. Image reconstruction algorithms based on the model are implemented to show its feasibility using experimental data collected from the EIT system developed at Tianjin University. In a simulation with a human chest-like configuration, electrical conductivity distributions are reconstructed using equipotential backprojection (EBP) and Tikhonov regularization (TR) based on a conformal transformation of the model. The algorithms based on the model are suitable for online image reconstruction, and the reconstructed results are good in both size and position.

  19. Load-balancing algorithms for the parallel community climate model

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.T.; Toonen, B.R.

    1995-01-01

    Implementations of climate models on scalable parallel computer systems can suffer from load imbalances resulting from temporal and spatial variations in the amount of computation required for physical parameterizations such as solar radiation and convective adjustment. We have developed specialized techniques for correcting such imbalances. These techniques are incorporated in a general-purpose, programmable load-balancing library that allows the mapping of computation to processors to be specified as a series of maps generated by a programmer-supplied load-balancing module. The communication required to move from one map to another is performed automatically by the library, without programmer intervention. In this paper, we describe the load-balancing problem and the techniques that we have developed to solve it. We also describe specific load-balancing algorithms that we have developed for PCCM2, a scalable parallel implementation of the Community Climate Model, and present experimental results that demonstrate the effectiveness of these algorithms on parallel computers. The load-balancing library developed in this work is available for use in other climate models.

  20. Multiobjective Route Planning Model and Algorithm for Emergency Management

    Directory of Open Access Journals (Sweden)

    Wen-mei Gai

    2015-01-01

Full Text Available In order to model the route planning problem for emergency logistics management, taking both route timeliness and safety into account, a multiobjective mathematical model is proposed based on theories of bounded rationality. Route safety is modeled as the product of the safety values of the arcs included in the path. To solve this model, we convert the multiobjective optimization problem into its equivalent deterministic form, taking into account the uncertainty of the weight coefficient of each objective function in actual multiobjective optimization. Finally, we develop an easy-to-implement heuristic that quickly finds an efficient, feasible solution together with an appropriate vector of weight coefficients. Simulation results show the effectiveness and feasibility of the models and algorithms presented in this paper.
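The multiplicative safety objective admits a compact statement; in notation of my own choosing (not the paper's), with per-arc safety values $s_{ij} \in (0,1]$:

```latex
% Safety of a path P as the product of per-arc safety values; taking
% logarithms turns it into an additive cost, so standard shortest-path
% machinery applies to the safety objective.
\[
S(P) \;=\; \prod_{(i,j)\in P} s_{ij},
\qquad
\max_P S(P) \;\Longleftrightarrow\; \min_P \sum_{(i,j)\in P} \bigl(-\log s_{ij}\bigr)
\]
```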

  1. High speed railway track dynamics models, algorithms and applications

    CERN Document Server

    Lei, Xiaoyan

    2017-01-01

    This book systematically summarizes the latest research findings on high-speed railway track dynamics, made by the author and his research team over the past decade. It explores cutting-edge issues concerning the basic theory of high-speed railways, covering the dynamic theories, models, algorithms and engineering applications of the high-speed train and track coupling system. Presenting original concepts, systematic theories and advanced algorithms, the book places great emphasis on the precision and completeness of its content. The chapters are interrelated yet largely self-contained, allowing readers to either read through the book as a whole or focus on specific topics. It also combines theories with practice to effectively introduce readers to the latest research findings and developments in high-speed railway track dynamics. It offers a valuable resource for researchers, postgraduates and engineers in the fields of civil engineering, transportation, highway & railway engineering.

  2. Critical dynamics of cluster algorithms in the dilute Ising model

    Science.gov (United States)

    Hennecke, M.; Heyken, U.

    1993-08-01

Autocorrelation times for thermodynamic quantities at T_C are calculated from Monte Carlo simulations of the site-diluted simple cubic Ising model, using the Swendsen-Wang and Wolff cluster algorithms. Our results show that for these algorithms the autocorrelation times decrease when reducing the concentration of magnetic sites from 100% down to 40%. This is of crucial importance when estimating static properties of the model, since the variances of these estimators increase with autocorrelation time. The dynamical critical exponents are calculated for both algorithms, observing pronounced finite-size effects in the energy autocorrelation data for the algorithm of Wolff. We conclude that, when applied to the dilute Ising model, cluster algorithms become even more effective than local algorithms, for which increasing autocorrelation times are expected.

  3. Comparison of evolutionary algorithms in gene regulatory network model inference.

    LENUS (Irish Health Repository)

    2010-01-01

ABSTRACT: BACKGROUND: The evolution of high-throughput technologies that measure gene expression levels has created a database for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods exist for discovering qualitative causal relationships between genes with high accuracy from microarray data, but large-scale quantitative analysis on real biological datasets cannot be performed to date, as existing approaches are not suitable for real microarray data, which are noisy and insufficient. RESULTS: This paper analyzes several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and to offer a comprehensive comparison of approaches under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and their ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. CONCLUSIONS: A comparison framework is presented for assessing evolutionary algorithms used to infer gene regulatory networks. Promising methods are identified and a platform for the development of appropriate model formalisms is established.

  4. "Updates to Model Algorithms & Inputs for the Biogenic ...

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observations. This has resulted in improved model evaluations for isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community not only in understanding and forecasting the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.

  5. Performance analysis of FXLMS algorithm with secondary path modeling error

    Institute of Scientific and Technical Information of China (English)

    SUN Xu; CHEN Duanshi

    2003-01-01

    Performance analysis of the filtered-X LMS (FXLMS) algorithm with secondary path modeling error is carried out in both the time and frequency domains. It is first shown that the effects of secondary path modeling error on the performance of the FXLMS algorithm are determined by the distribution of the relative error of the secondary path model over frequency. If the relative error is uniformly distributed, the secondary path modeling error has no effect on the performance of the algorithm. In addition, a limitation property of the FXLMS algorithm is proved, which implies that the negative effects of secondary path modeling error can be compensated for by increasing the adaptive filter length. Finally, some insights into the "spillover" phenomenon of the FXLMS algorithm are given.
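
    For intuition, a minimal filtered-x LMS loop might look like the sketch below (an illustrative textbook form, not the paper's implementation; the true secondary path is idealized so the control output acts directly on the error, and s_hat stands for the possibly mismatched secondary path model):

        import numpy as np

        def fxlms(x, d, s_hat, L=32, mu=1e-3):
            """Filtered-x LMS sketch: x is the reference signal, d the disturbance
            at the error sensor, s_hat the FIR impulse response of the secondary
            path model."""
            w = np.zeros(L)                          # adaptive filter weights
            xf = np.convolve(x, s_hat)[:len(x)]      # reference filtered through the model
            e = np.zeros(len(x))
            for n in range(L, len(x)):
                y = w @ x[n - L:n][::-1]             # control output
                e[n] = d[n] - y                      # residual error (idealized path)
                w += mu * e[n] * xf[n - L:n][::-1]   # filtered-x weight update
            return w, e

    A frequency band where s_hat has a large relative error adapts more slowly or less stably, which is exactly the dependence the paper analyzes.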

  6. Hospital Case Cost Estimates Modelling - Algorithm Comparison

    CERN Document Server

    Andru, Peter

    2008-01-01

    Ontario (Canada) Health System stakeholders support the idea of, and the necessity for, an integrated data source that would include both clinical (e.g., diagnosis, intervention, length of stay, case mix group) and financial (e.g., cost per weighted case, cost per diem) characteristics of Ontario healthcare system activities at the patient-specific level. At present, actual patient-level case costs in explicit form are not available in the financial databases for all hospitals. The goal of this research effort is to develop financial models that will assign each clinical case in the patient-specific data warehouse a dollar value, representing the cost incurred by the Ontario health care facility that treated the patient. Five mathematical models have been developed and verified using a real dataset. All models can be classified into two groups based on their underlying method: 1. models based on relative intensity weights of the cases, and 2. models based on cost per diem.

  7. Developer Tools for Evaluating Multi-Objective Algorithms

    Science.gov (United States)

    Giuliano, Mark E.; Johnston, Mark D.

    2011-01-01

    Multi-objective algorithms for scheduling offer many advantages over the more conventional single-objective approach. By keeping user objectives separate instead of combined, more information is available to the end user to make trade-offs between competing objectives. Unlike single-objective algorithms, which produce a single solution, multi-objective algorithms produce a set of solutions, called a Pareto surface, in which no solution is strictly dominated by another for all objectives. From the end-user perspective, a Pareto surface provides a tool for reasoning about trade-offs between competing objectives. From the perspective of a software developer, multi-objective algorithms pose an additional challenge: how can you tell whether one multi-objective algorithm is better than another? This paper presents formal and visual tools for evaluating multi-objective algorithms and shows how the developer's process of selecting an algorithm parallels the end-user's process of selecting a solution for execution out of the Pareto surface.

  8. Fireworks algorithm for mean-VaR/CVaR models

    Science.gov (United States)

    Zhang, Tingting; Liu, Zhifeng

    2017-10-01

    Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, the fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, the fireworks algorithm not only improves the optimization accuracy and speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. This suggests that the fireworks algorithm has more advantages than the genetic algorithm in solving the portfolio optimization problem, and that applying it in this field is feasible and promising.

  9. Kriging-approximation simulated annealing algorithm for groundwater modeling

    Science.gov (United States)

    Shen, C. H.

    2015-12-01

    Optimization algorithms are often applied to search for the best parameters of complex groundwater models. Running these complex models to evaluate the objective function can be time-consuming. This research proposes a Kriging-approximation simulated annealing algorithm. Kriging is a spatial statistics method used to interpolate unknown variables based on surrounding given data. In the algorithm, the Kriging method is used to approximate the complicated objective function and is incorporated into simulated annealing. The contribution of the Kriging-approximation simulated annealing algorithm is to reduce calculation time and increase efficiency.
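
    A minimal sketch of the idea, assuming a Gaussian-process (Kriging) surrogate from scikit-learn stands in for the expensive groundwater model inside a plain annealing loop (the function names, bounds and cooling schedule are illustrative assumptions, not the paper's setup):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def kriging_sa(expensive_f, lo, hi, n_init=20, n_iter=200, T0=1.0):
            rng = np.random.default_rng(0)
            X = rng.uniform(lo, hi, size=(n_init, len(lo)))
            y = np.array([expensive_f(p) for p in X])      # costly model runs
            gp = GaussianProcessRegressor().fit(X, y)      # Kriging surrogate
            x, fx = X[y.argmin()], y.min()
            for k in range(n_iter):
                T = T0 * 0.99 ** k                         # cooling schedule
                cand = np.clip(x + rng.normal(0, 0.1, x.shape) * (hi - lo), lo, hi)
                f_cand = gp.predict(cand[None])[0]         # cheap surrogate call
                if f_cand < fx or rng.random() < np.exp(-(f_cand - fx) / T):
                    x, fx = cand, f_cand                   # Metropolis acceptance
            return x, expensive_f(x)                       # confirm with one real run

    Most objective evaluations hit the surrogate rather than the groundwater model, which is where the claimed reduction in calculation time comes from.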

  10. Development of hybrid genetic algorithms for product line designs.

    Science.gov (United States)

    Balakrishnan, P V Sundar; Gupta, Rakesh; Jacob, Varghese S

    2004-02-01

    In this paper, we investigate the efficacy of artificial intelligence (AI) based meta-heuristic techniques, namely genetic algorithms (GAs), for the product line design problem. This work extends previously developed methods for the single product design problem. We conduct a large-scale simulation study to determine the effectiveness of such an AI-based technique for providing good solutions and benchmark its performance against the current dominant approach of beam search (BS). We investigate the potential advantages of developing hybrid models, and then implement and study such hybrid models using two very distinct approaches: seeding the initial GA population with the BS solution, and employing the BS solution as part of the GA operators' process. We go on to examine the impact of two alternative string representation formats on the quality of the solutions obtained by the proposed techniques. We also explicitly investigate a critical managerial factor, attribute importance, in terms of its impact on the solutions obtained by the alternative modeling procedures. The alternative techniques are then evaluated, using statistical analysis of variance, on a fairly large number of data sets, as to the quality of the solutions obtained with respect to the state-of-the-art benchmark and in terms of their ability to provide multiple, unique product line options.

  11. An implicit algorithm for a rate-dependent ductile failure model

    Science.gov (United States)

    Zuo, Q. H.; Rice, Jeremy R.

    2008-10-01

    An implicit numerical algorithm has been developed for a rate-dependent model of damage and failure in ductile materials under high-rate dynamic loading [F. L. Addessio and J. N. Johnson, J. Appl. Phys. 74, 1640 (1993)]. Over each time step, the algorithm first implicitly determines the equilibrium state on a Gurson surface, and then calculates the final state by solving the viscous relaxation equations, also implicitly. Numerical examples are given to demonstrate the key features of the algorithm. Compared to the explicit algorithm used previously, the current algorithm allows significantly larger time steps to be used in the analysis. As the viscosity of the material vanishes, the results of the rate-dependent model are shown to converge to those of the corresponding rate-independent model, a result not achieved with the explicit algorithm.

  12. Data mining concepts models methods and algorithms

    CERN Document Server

    Kantardzic, Mehmed

    2011-01-01

    This book reviews state-of-the-art methodologies and techniques for analyzing enormous quantities of raw data in high-dimensional data spaces to extract new information for decision making. The goal of this book is to provide a single introductory source, organized in a systematic way, that guides readers through the analysis of large data sets, explaining the basic concepts, models and methodologies developed in recent decades.

  13. Algorithm development for Maxwell's equations for computational electromagnetism

    Science.gov (United States)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.

  14. [A new algorithm for NIR modeling based on manifold learning].

    Science.gov (United States)

    Hong, Ming-Jian; Wen, Zhi-Yu; Zhang, Xiao-Hong; Wen, Quan

    2009-07-01

    Manifold learning is a new kind of algorithm, originating from the field of machine learning, for finding the intrinsic dimensionality of numerous and complex data and extracting the most important information from the raw data to develop a regression or classification model. The basic assumption of manifold learning is that high-dimensional data measured from the same object must reside on a manifold of much lower dimension, determined by a few properties of the object. Since NIR spectra are characterized by their high dimensionality and complicated band assignment, the authors assume that, in line with the above, the NIR spectra of the same kind of substance with different chemical concentrations should reside on a manifold of much lower dimension, determined by the concentrations. As one of the best-known manifold learning algorithms, locally linear embedding (LLE) further assumes that the underlying manifold is locally linear, so every data point on the manifold should be a linear combination of its neighbors. Based on these assumptions, the present paper proposes a new algorithm named least squares locally weighted regression (LS-LWR), which is a kind of LWR with weights determined by least squares instead of a predefined function. The NIR spectra of glucose solutions with various concentrations were measured using a NIR spectrometer, and LS-LWR was verified by quantitatively predicting the concentrations of the glucose solutions. Compared with existing algorithms such as principal component regression (PCR) and partial least squares regression (PLSR), LS-LWR has better predictability, measured by the standard error of prediction (SEP), and generates an elegant model with good stability and efficiency.
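
    One plausible reading of such a locally weighted prediction step is sketched below (illustrative only; the paper's exact LS-LWR formulation may differ): the query spectrum is reconstructed as a least-squares combination of its nearest calibration spectra, and the same weights then combine the reference concentrations.

        import numpy as np

        def ls_lwr_predict(X, y, x0, k=10):
            """X: (n, p) calibration spectra, y: (n,) concentrations,
            x0: (p,) query spectrum."""
            d = np.linalg.norm(X - x0, axis=1)
            idx = np.argsort(d)[:k]                            # k nearest neighbours
            w, *_ = np.linalg.lstsq(X[idx].T, x0, rcond=None)  # least-squares weights
            return w @ y[idx]                                  # locally weighted prediction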

  15. Engineering of Algorithms for Hidden Markov models and Tree Distances

    DEFF Research Database (Denmark)

    Sand, Andreas

    grown exponentially because of drastic improvements in the technology behind DNA and RNA sequencing, and focus on the research field has increased due to its potential to expand our knowledge about biological mechanisms and to improve public health. There has therefore been a continuously growing demand...... of the algorithms to exploit the parallel architecture of modern computers. In this PhD dissertation, I present my work with algorithmic optimizations and parallelizations in primarily two areas in algorithmic bioinformatics: algorithms for analyzing hidden Markov models and algorithms for computing distance...... measures between phylogenetic trees. Hidden Markov models is a class of probabilistic models that is used in a number of core applications in bioinformatics such as modeling of proteins, gene finding and reconstruction of species and population histories. I show how a relatively simple parallelization can...

  16. "Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...

  17. Model-Free Adaptive Control Algorithm with Data Dropout Compensation

    OpenAIRE

    Xuhui Bu; Fashan Yu; Zhongsheng Hou; Hongwei Zhang

    2012-01-01

    The convergence of the model-free adaptive control (MFAC) algorithm can be guaranteed when the system is subject to measurement data dropouts. The convergence speed of the system output slows as the dropout rate increases. This paper proposes an MFAC algorithm with data compensation. The missing data are first estimated using the dynamical linearization method, and the estimated values are then used to update the control input. The convergence analysis of the proposed MFAC algorithm is given, and the effe...

  18. Performance modeling and prediction for linear algebra algorithms

    OpenAIRE

    Iakymchuk, Roman

    2012-01-01

    This dissertation incorporates two research projects: performance modeling and prediction for dense linear algebra algorithms, and high-performance computing on clouds. The first project is focused on dense matrix computations, which are often used as computational kernels for numerous scientific applications. To solve a particular mathematical operation, linear algebra libraries provide a variety of algorithms. The algorithm of choice depends, obviously, on its performance. Performance of su...

  19. JPSS Cryosphere Algorithms: Integration and Testing in Algorithm Development Library (ADL)

    Science.gov (United States)

    Tsidulko, M.; Mahoney, R. L.; Meade, P.; Baldwin, D.; Tschudi, M. A.; Das, B.; Mikles, V. J.; Chen, W.; Tang, Y.; Sprietzer, K.; Zhao, Y.; Wolf, W.; Key, J.

    2014-12-01

    JPSS is a next-generation satellite system that is planned to be launched in 2017. The satellites will carry a suite of sensors that are already on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The NOAA/NESDIS/STAR Algorithm Integration Team (AIT) works within the Algorithm Development Library (ADL) framework, which mimics the operational JPSS Interface Data Processing Segment (IDPS). The AIT contributes to the development, integration and testing of the scientific algorithms employed in the IDPS. This presentation discusses cryosphere-related activities performed in the ADL. The addition of a new ancillary data set - NOAA Global Multisensor Automated Snow/Ice data (GMASI) - together with the corresponding ADL code modifications is described. The preliminary impact of GMASI on the gridded Snow/Ice product is estimated. Several modifications to the Ice Age algorithm, which mis-classifies ice type for certain areas/time periods, are tested in the ADL. Sensitivity runs for daytime, nighttime and the terminator zone are performed and presented. Comparisons between the original and modified versions of the Ice Age algorithm are also presented.

  20. IMPACT fragmentation model developments

    Science.gov (United States)

    Sorge, Marlon E.; Mains, Deanna L.

    2016-09-01

    The IMPACT fragmentation model has been used by The Aerospace Corporation for more than 25 years to analyze orbital altitude explosions and hypervelocity collisions. The model is semi-empirical, combining mass, energy and momentum conservation laws with empirically derived relationships for fragment characteristics such as number, mass, area-to-mass ratio, and spreading velocity as well as event energy distribution. Model results are used for several types of analysis including assessment of short-term risks to satellites from orbital altitude fragmentations, prediction of the long-term evolution of the orbital debris environment and forensic assessments of breakup events. A new version of IMPACT, version 6, has been completed and incorporates a number of advancements enabled by a multi-year long effort to characterize more than 11,000 debris fragments from more than three dozen historical on-orbit breakup events. These events involved a wide range of causes, energies, and fragmenting objects. Special focus was placed on the explosion model, as the majority of events examined were explosions. Revisions were made to the mass distribution used for explosion events, increasing the number of smaller fragments generated. The algorithm for modeling upper stage large fragment generation was updated. A momentum conserving asymmetric spreading velocity distribution algorithm was implemented to better represent sub-catastrophic events. An approach was developed for modeling sub-catastrophic explosions, those where the majority of the parent object remains intact, based on estimated event energy. Finally, significant modifications were made to the area-to-mass ratio distribution to incorporate the tendencies of different materials to fragment into different shapes. This ability enabled better matches between the observed area-to-mass ratios and those generated by the model. It also opened up additional possibilities for post-event analysis of breakups. The paper will discuss

  1. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.

    2017-03-13

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, the RBM is a deep learning algorithm used in this project; its runtime performance is increased through an efficient parallel implementation with the OpenACC tool, applying the best possible optimizations to the RBM to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful neural network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of the RBM are available using different models such as OpenMP and CUDA, but this project is the first attempt to apply the OpenACC model to the RBM.

  2. Gravitational Lens Modeling with Genetic Algorithms and Particle Swarm Optimizers

    CERN Document Server

    Rogers, Adam

    2011-01-01

    Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automa...

  3. Model-Free Adaptive Control Algorithm with Data Dropout Compensation

    Directory of Open Access Journals (Sweden)

    Xuhui Bu

    2012-01-01

    The convergence of the model-free adaptive control (MFAC) algorithm can be guaranteed when the system is subject to measurement data dropouts. The convergence speed of the system output slows as the dropout rate increases. This paper proposes an MFAC algorithm with data compensation. The missing data are first estimated using the dynamical linearization method, and the estimated values are then used to update the control input. The convergence analysis of the proposed MFAC algorithm is given, and its effectiveness is also validated by simulations. It is shown that the proposed algorithm can compensate for the effect of data dropout, and that better output performance can be obtained.
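
    A compact-form MFAC loop with a simple dropout compensator might look like the sketch below (illustrative parameter names and a toy plant are assumed; this is not the paper's algorithm): when a measurement is lost, the dynamical-linearization prediction y(k+1) ≈ y(k) + phi(k)·Δu(k) is used in its place.

        import numpy as np

        def mfac_dropout(plant, steps=200, y_ref=1.0, rho=0.6, eta=0.5,
                         lam=1.0, mu=1.0, p_drop=0.2, seed=1):
            rng = np.random.default_rng(seed)
            phi, u, y, du_prev = 1.0, 0.0, 0.0, 0.0
            for _ in range(steps):
                y_new = plant(u)
                if rng.random() < p_drop:          # measurement dropped
                    y_new = y + phi * du_prev      # compensate with the estimate
                if abs(du_prev) > 1e-8:            # pseudo-partial-derivative update
                    phi += eta * du_prev / (mu + du_prev ** 2) * (y_new - y - phi * du_prev)
                du = rho * phi / (lam + phi ** 2) * (y_ref - y_new)  # control law
                u, y, du_prev = u + du, y_new, du
            return u, y

        # e.g. mfac_dropout(lambda u: 0.8 * u + 0.1)  # toy static plant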

  4. A motion retargeting algorithm based on model simplification

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A new motion retargeting algorithm is presented, which adapts motion capture data to a new character. To make the resulting motion realistic, a physically based optimization method is adopted. However, the optimization process is difficult to converge to the optimal value because of the high complexity of the physical human model. To address this problem, an appropriate simplified model, automatically determined by a motion analysis technique, is utilized, and motion retargeting is implemented with this simplified model as an intermediate agent. The entire motion retargeting algorithm involves three steps of nonlinearly constrained optimization: forward retargeting, motion scaling and inverse retargeting. Experimental results show the validity of this algorithm.

  5. Quantum Monte Carlo methods algorithms for lattice models

    CERN Document Server

    Gubernatis, James; Werner, Philipp

    2016-01-01

    Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...

  6. Modeling for design process planning of product development and its solution using genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    Yan Lijun; Shen Qingming; Liu Min; Yang Jianmin

    2013-01-01

    In view of the complexity of the design process planning problem and the limitations of existing methods, this paper takes the total time and cost of all tasks in product development as objectives, considers the various uncertain factors present in practical product development, and describes design process planning as a simulation-based stochastic optimization problem. A new hybrid algorithm is proposed to solve this problem by introducing the optimal computing budget allocation technique into a genetic algorithm framework, improving search efficiency and the reliability of the results. Finally, the development of a turbine rotor-bearing system is used as an example to validate the proposed method. The results demonstrate the effectiveness of the modeling method and the high efficiency of the solution algorithm. The method can be extended to all kinds of product development processes and is of general applicability.

  7. Model predictive control algorithms and their application to a continuous fermenter

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-06-01

    In many continuous fermentation processes, the control objective is to maximize productivity per unit time. The optimal operating point at steady state can be obtained by maximizing the productivity rate, using the feed substrate concentration as the independent variable with the equations of the static model as constraints. In the present study, three model-based control schemes have been developed and implemented for a continuous fermenter. The first method modifies the well-known dynamic matrix control (DMC) algorithm by making it adaptive. The other two use nonlinear model predictive control (NMPC) algorithms to calculate control actions. The NMPC1 algorithm, which uses orthogonal collocation on finite elements, behaved similarly to NMPC2, which uses equidistant collocation. These algorithms are compared with DMC. The results obtained show the good performance of the nonlinear algorithms.
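
    For intuition, one unconstrained DMC move in its generic textbook form is sketched below (an assumption for illustration; the paper's adaptive DMC and the NMPC variants go beyond this):

        import numpy as np

        def dmc_move(step_resp, free_resp, r, lam=0.1, m=5):
            """step_resp: p step-response coefficients; free_resp: predicted
            output over the horizon p with no further input moves; r: setpoint
            trajectory. Returns the first control increment."""
            p = len(free_resp)
            G = np.zeros((p, m))                        # dynamic matrix
            for i in range(p):
                for j in range(min(i + 1, m)):
                    G[i, j] = step_resp[i - j]
            e = np.asarray(r) - np.asarray(free_resp)   # predicted error
            du = np.linalg.solve(G.T @ G + lam * np.eye(m), G.T @ e)
            return du[0]                                # receding horizon: apply first move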

  8. A predictor-corrector algorithm to estimate the fractional flow in oil-water models

    Energy Technology Data Exchange (ETDEWEB)

    Savioli, Gabriela B [Laboratorio de Ingeniería de Reservorios, IGPUBA and Departamento de Ingeniería Química, Facultad de Ingeniería, Universidad de Buenos Aires, Av. Las Heras 2214 Piso 3 C1127AAR Buenos Aires (Argentina); Berdaguer, Elena M Fernández [Instituto de Cálculo, Facultad de Ciencias Exactas y Naturales, UBA-CONICET and Departamento de Matemática, Facultad de Ingeniería, Universidad de Buenos Aires, 1428 Buenos Aires (Argentina)], E-mail: gsavioli@di.fcen.uba.ar, E-mail: efernan@ic.fcen.uba.ar

    2008-11-01

    We introduce a predictor-corrector algorithm to estimate parameters in a nonlinear hyperbolic problem. It can be used to estimate the oil-fractional flow function from the Buckley-Leverett equation. The forward model is nonlinear: the sought-for parameter is a function of the solution of the equation. Traditionally, the estimation of functions requires the selection of a fitting parametric model. The algorithm that we develop does not require a predetermined parametric model; therefore, the estimation problem is carried out over a set of parameters which are functions. The algorithm is based on the linearization of the parameter-to-output mapping. This technique is new in the field of nonlinear estimation and has the advantage of laying aside parametric models. The algorithm is iterative and of predictor-corrector type. We present theoretical results on the inverse problem and use synthetic data to test the new algorithm.

  9. New Model and Algorithm for Hardware/Software Partitioning

    Institute of Scientific and Technical Information of China (English)

    Ji-Gang Wu; Thambipillai Srikanthan; Guang-Wei Zou

    2008-01-01

    This paper focuses on the algorithmic aspects of hardware/software (HW/SW) partitioning, which searches for a composition of hardware and software components that not only satisfies the hardware area constraint but also optimizes the execution time. The computational model is extended so that all possible types of communication can be taken into account in HW/SW partitioning. A new dynamic programming algorithm is also proposed on the basis of this computational model, in which the source data of basic scheduling blocks, rather than the speedup used in previous work, are directly utilized to calculate the optimal solution. The proposed algorithm runs in O(n·A) for n code fragments and available hardware area A. Simulation results show that the proposed algorithm solves the HW/SW partitioning without an increase in running time, compared with the algorithm cited in the literature.
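
    The O(n·A) recursion can be pictured with a knapsack-style sketch (a generic formulation assumed for illustration; the paper's model additionally charges communication costs between fragments):

        def hwsw_partition(t_sw, t_hw, area, A):
            """dp[a] = minimal total execution time using hardware area a."""
            INF = float("inf")
            dp = [0.0] * (A + 1)
            for tsw, thw, ar in zip(t_sw, t_hw, area):
                new = [INF] * (A + 1)
                for a in range(A + 1):
                    if dp[a] == INF:
                        continue
                    new[a] = min(new[a], dp[a] + tsw)                # run in software
                    if a + ar <= A:
                        new[a + ar] = min(new[a + ar], dp[a] + thw)  # move to hardware
                dp = new
            return min(dp)

        # e.g. hwsw_partition([5, 9, 4], [2, 3, 1], [3, 5, 2], A=6)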

  10. An Ontology Based Reuse Algorithm towards Process Planning in Software Development

    Directory of Open Access Journals (Sweden)

    Shilpa Sharma

    2011-09-01

    The process planning task for specified design provisions in software development can be significantly improved by referencing a knowledge reuse scheme. Reuse is considered one of the most promising techniques for improving software quality and productivity. Reuse during software development depends greatly on the existing design knowledge in the meta-model, a "read only" repository of information. We propose an ontology-based reuse algorithm for process planning in software development. Based on the common conceptual base facilitated by the ontology and the characteristics of the knowledge, the concepts and entities are represented in meta-model and endeavor prospects. The relations between these prospects and their linkage knowledge are used to construct an ontology-based reuse algorithm. In addition, our experiment illustrates the realization of process planning in software development by applying this algorithm. Subsequently, its benefits are delineated.

  11. Development & Performance Analysis of Korean WADGPS Positioning Algorithm

    Institute of Scientific and Technical Information of China (English)

    Kim Do-yoon; Kee Chang-don

    2003-01-01

    Today, many countries are developing their own WADGPS-type systems. The U.S. WAAS is already available for non-aviation users, and its full operation is expected by the end of 2003. The European EGNOS and the Japanese MSAS are also in progress. China is now propelling the SNAS (Satellite Navigation Augmentation System) project, and India has made plans for its GAGAN (GPS And GEO Augmented Navigation) project. Recently, the Ministry of Maritime Affairs and Fisheries of Korea decided to develop a Korean WADGPS, a very first step toward the implementation of a practical system. So far, we have devised the algorithm for WADGPS in Korea & East Asia and evaluated its performance by simulations. In this paper, we complement the positioning algorithm for actual data processing and analyze its performance with actual data from the reference stations of the Korean NDGPS network, which covers almost the whole country.

  12. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well...... for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively, the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss......-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and in being trapped in local regions of attractions. The global SCE procedure is, in general, more effective...

  13. NONSMOOTH MODEL FOR PLASTIC LIMIT ANALYSIS AND ITS SMOOTHING ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    LI Jian-yu; PAN Shao-hua; LI Xing-si

    2006-01-01

    By means of the Lagrange duality theory of convex programming, a dual problem of Hill's maximum plastic work principle under the Mises yield condition is derived, whereby a non-differentiable convex optimization model for limit analysis is developed. With this model it is not necessary to linearize the yield condition, and its discrete form becomes a minimization problem of a sum of Euclidean norms subject to linear constraints. To resolve the non-differentiability of the Euclidean norms, a smoothing algorithm for the limit analysis of perfectly plastic continuum media is proposed. Its efficiency is demonstrated by computing the limit load factor and the collapse state for some plane stress and plane strain problems.
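
    The smoothing step can be illustrated by the standard epsilon-regularization of a sum of Euclidean norms (an assumed generic form; the paper's scheme may differ in detail): each non-differentiable term ||v|| is replaced by a smooth approximation that converges to it as eps goes to zero.

        import numpy as np

        def smoothed_norm_sum(V, eps=1e-6):
            """V: (m, d) array, one vector per term; returns the smoothed
            objective sum_i sqrt(||v_i||^2 + eps^2), differentiable at v_i = 0."""
            return np.sum(np.sqrt(np.sum(V ** 2, axis=1) + eps ** 2))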

  14. A Mining Algorithm for Extracting Decision Process Data Models

    Directory of Open Access Journals (Sweden)

    Cristina-Claudia DOLEAN

    2011-01-01

    The paper introduces an algorithm that mines logs of user interaction with simulation software. It outputs a model that explicitly shows the data perspective of the decision process, namely the Decision Data Model (DDM). In the first part of the paper, we focus on how the DDM is extracted by our mining algorithm. We introduce it as pseudo-code and then provide explanations and examples of how it actually works. In the second part of the paper, we use a series of small case studies to demonstrate the robustness of the mining algorithm and how it deals with the most common patterns found in real logs.

  15. Efficient Cluster Algorithm for CP(N-1) Models

    CERN Document Server

    Beard, B B; Riederer, S; Wiese, U J

    2006-01-01

    Despite several attempts, no efficient cluster algorithm has been constructed for CP(N-1) models in the standard Wilson formulation of lattice field theory. In fact, there is a no-go theorem that prevents the construction of an efficient Wolff-type embedding algorithm. In this paper, we construct an efficient cluster algorithm for ferromagnetic SU(N)-symmetric quantum spin systems. Such systems provide a regularization for CP(N-1) models in the framework of D-theory. We present detailed studies of the autocorrelations and find a dynamical critical exponent that is consistent with z = 0.

  16. Efficient cluster algorithm for CP(N-1) models

    Science.gov (United States)

    Beard, B. B.; Pepe, M.; Riederer, S.; Wiese, U.-J.

    2006-11-01

    Despite several attempts, no efficient cluster algorithm has been constructed for CP(N-1) models in the standard Wilson formulation of lattice field theory. In fact, there is a no-go theorem that prevents the construction of an efficient Wolff-type embedding algorithm. In this paper, we construct an efficient cluster algorithm for ferromagnetic SU(N)-symmetric quantum spin systems. Such systems provide a regularization for CP(N-1) models in the framework of D-theory. We present detailed studies of the autocorrelations and find a dynamical critical exponent that is consistent with z=0.

  17. Petri net model for analysis of concurrently processed complex algorithms

    Science.gov (United States)

    Stoughton, John W.; Mielke, Roland R.

    1986-01-01

    This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.

  18. An efficient algorithm for corona simulation with complex chemical models

    Science.gov (United States)

    Villa, Andrea; Barbieri, Luca; Gondola, Marco; Leon-Garzon, Andres R.; Malgesini, Roberto

    2017-05-01

    The simulation of cold plasma discharges is a leading field of applied science, with many applications ranging from pollutant control to surface treatment. Many of these applications call for the development of novel numerical techniques to implement fully three-dimensional corona solvers that can use complex and physically detailed chemical databases. This is a challenging task, since it multiplies the difficulties inherent in a three-dimensional approach by the complexity of databases comprising tens of chemical species and hundreds of reactions. In this paper a novel approach, capable of significantly reducing the computational burden, is developed. The proposed method is based on a time stepping algorithm capable of decomposing the original problem into simpler ones, each of which is then tackled with either finite element, finite volume or ordinary differential equation solvers. The last of these deals with the chemical model, and its efficient implementation is one of the main contributions of this work.

  19. A simple algorithm for optimization and model fitting: AGA (asexual genetic algorithm)

    Science.gov (United States)

    Cantó, J.; Curiel, S.; Martínez-Gómez, E.

    2009-07-01

    Context: Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, for twice-differentiable functions and problems with no constraints, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives, and other techniques must be employed, such as the steepest descent/ascent method and more sophisticated methods such as those based on evolutionary algorithms. Aims: We present a simple algorithm based on the idea of genetic algorithms (GAs) for optimization. We refer to this algorithm as AGA (asexual genetic algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail, and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two examples: the orbits of exoplanets from a set of radial velocity data, and the spectral energy distribution (SED) observed towards a YSO (Young Stellar Object). Methods: The algorithm AGA may also be called genetic, although it differs from standard genetic algorithms in two main aspects: a) the initial population is not encoded; and b) new generations are constructed by asexual reproduction. Results: Applying our algorithm to the optimization of some complicated functions, we find the global maxima within a few iterations. For model fitting to the orbits of exoplanets and the SED of a YSO, we estimate the parameters and their associated errors.
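
    A sketch of the flavor of such an algorithm is given below (an illustrative reading of "not encoded" and "asexual reproduction", not the authors' code): individuals are raw parameter vectors, and offspring arise from a single parent by Gaussian mutation.

        import numpy as np

        def aga(f, lo, hi, pop=40, gens=200, sigma0=0.1, seed=0):
            """Minimize f over the box [lo, hi] (e.g. a chi-square function)."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            X = rng.uniform(lo, hi, size=(pop, len(lo)))
            for g in range(gens):
                fit = np.array([f(p) for p in X])
                elite = X[np.argsort(fit)[:pop // 4]]       # keep the best quarter
                sigma = sigma0 * (1 - g / gens)             # shrink mutation scale
                kids = elite[rng.integers(0, len(elite), pop - len(elite))]
                kids = np.clip(kids + rng.normal(0, 1, kids.shape) * sigma * (hi - lo), lo, hi)
                X = np.vstack([elite, kids])                # asexual reproduction
            return X[np.argmin([f(p) for p in X])]          # best parameters found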

  20. Development of a robust algorithm to compute reactive azeotropes

    Directory of Open Access Journals (Sweden)

    M. H. M. Reis

    2006-09-01

    In this paper, a novel approach for establishing the route to process intensification is presented, through the application of two developed software tools to characterize reactive mixtures. A robust algorithm was developed to build reactive phase diagrams and to predict the existence and location of reactive azeotropes. The proposed algorithm does not depend on initial estimates and is able to compute all reactive azeotropes present in the mixture. It also makes it possible to verify that no azeotropes exist, which is a major difficulty in this kind of computation. An additional software tool was developed to calculate reactive residue curve maps. Results obtained with the developed program were compared with those published in the literature for several mixtures, showing the efficiency and robustness of the developed software.

  1. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  2. An efficient Cellular Potts Model algorithm that forbids cell fragmentation

    Science.gov (United States)

    Durand, Marc; Guesnet, Etienne

    2016-11-01

    The Cellular Potts Model (CPM) is a lattice-based modeling technique which is widely used for simulating cellular patterns such as foams or biological tissues. Despite its realism and generality, the standard Monte Carlo algorithm used in the scientific literature to evolve this model preserves the connectivity of cells over a limited range of simulation temperatures only. We present a new algorithm in which cell fragmentation is forbidden at all simulation temperatures. This significantly enhances the realism of the simulated patterns. It also increases computational efficiency compared with the standard CPM algorithm, even at the same simulation temperature, thanks to the time saved by not attempting unrealistic moves. Moreover, our algorithm restores the detailed balance equation, ensuring that the long-term state is independent of the chosen acceptance rate and the chosen path in temperature space.

  3. Dynamical behavior of the Niedermayer algorithm applied to Potts models

    Science.gov (United States)

    Girardi, D.; Penna, T. J. P.; Branco, N. S.

    2012-08-01

    In this work, we make a numerical study of the dynamic universality class of the Niedermayer algorithm applied to the two-dimensional Potts model with 2, 3, and 4 states. This algorithm updates clusters of spins and has a free parameter, E0, which controls the size of these clusters, such that E0=1 is the Metropolis algorithm and E0=0 regains the Wolff algorithm for the Potts model. For -1≤E0≤0, only clusters of equal spins can be formed: we show that the mean size of the clusters of (possibly) turned spins initially grows with the linear size of the lattice, L, but eventually saturates at a given lattice size L˜, which depends on E0. For L≥L˜, the Niedermayer algorithm is in the same dynamic universality class as the Metropolis one, i.e., they have the same dynamic exponent. For E0>0, spins in different states may be added to the cluster, but the dynamic behavior is less efficient than for the Wolff algorithm (E0=0). Therefore, our results show that the Wolff algorithm is the best choice for Potts models, when compared to Niedermayer's generalization.
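
    For reference, the E0=0 limit (the Wolff algorithm for the Potts model) can be sketched as follows (a standard textbook form, not the paper's code); equal-state neighbours join the cluster with probability p_add = 1 - exp(-beta*J):

        import numpy as np
        from collections import deque

        def wolff_step(spins, beta, J=1.0, q=3, rng=np.random.default_rng()):
            """One cluster update on an L x L q-state Potts lattice."""
            L = spins.shape[0]
            p_add = 1.0 - np.exp(-beta * J)
            i, j = rng.integers(0, L, 2)
            old = spins[i, j]
            new = (old + rng.integers(1, q)) % q        # a different random state
            queue, cluster = deque([(i, j)]), {(i, j)}
            while queue:
                x, y = queue.popleft()
                spins[x, y] = new                       # flip as the cluster grows
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = (x + dx) % L, (y + dy) % L
                    if (nx, ny) not in cluster and spins[nx, ny] == old \
                            and rng.random() < p_add:
                        cluster.add((nx, ny))
                        queue.append((nx, ny))
            return len(cluster)                         # cluster size

    Niedermayer's generalization replaces this fixed p_add with an E0-dependent acceptance rule, interpolating between this cluster move and single-spin Metropolis updates.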

  4. Challenges of the algorithms optimization and high performance arithmetic coprocessors development for numerical modeling of gas flow and heat transfer in the combustion problem

    Science.gov (United States)

    Aryashev, Sergey; Bobkov, Sergey; Zubkovskiy, Pavel; Ivasyuk, Eugene; Stepchenkov, Yuri

    2016-06-01

    Computer simulation of multiscale burning and detonation processes requires a supercomputer with exaflop-scale performance. The paper presents research from SRISA aimed at developing high-performance architectures of DSP extensions for burning process simulations. A number of solutions for the development of a dataflow coprocessor based on self-timed circuits are also proposed.

  5. Combining Diffusion and Grey Models Based on Evolutionary Optimization Algorithms to Forecast Motherboard Shipments

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2012-01-01

    It is important for executives to predict future trends; otherwise, their companies cannot make profitable decisions and investments. The Bass diffusion model can describe the empirical adoption curve for new products and technological innovations. The Grey model provides short-term forecasts using four data points. This study develops a combined model based on the rolling Grey model (RGM) and the Bass diffusion model to forecast motherboard shipments. In addition, we investigate evolutionary optimization algorithms to determine the optimal parameters. Our results indicate that the combined model using a hybrid algorithm outperforms other methods in the fitting and forecasting processes in terms of mean absolute percentage error.
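
    The Grey side of such a combination can be sketched with the classical GM(1,1) model applied inside a rolling window (a textbook formulation; the paper's RGM and its coupling with the Bass model involve further tuning):

        import numpy as np

        def gm11_forecast(x, steps=1):
            """x: short positive series (e.g. the last four shipment figures);
            returns the next `steps` restored forecasts."""
            x = np.asarray(x, dtype=float)
            x1 = np.cumsum(x)                                  # accumulated series
            z = 0.5 * (x1[1:] + x1[:-1])                       # background values
            B = np.column_stack([-z, np.ones(len(z))])
            a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]    # grey parameters
            k = np.arange(len(x), len(x) + steps)
            resp = lambda t: (x[0] - b / a) * np.exp(-a * t) + b / a
            return resp(k) - resp(k - 1)                       # restored forecasts

        # e.g. gm11_forecast([420, 451, 470, 488], steps=2)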

  6. Transmission function models of finite population genetic algorithms

    NARCIS (Netherlands)

    Kemenade, C.H.M. van; Kok, J.N.; La Poutré, J.A.; Thierens, D.

    1998-01-01

    Infinite population models show deterministic behaviour. Genetic algorithms with finite populations behave non-deterministically. For small population sizes, the results obtained with these models differ strongly from the results predicted by the infinite population model. When the population size i

  7. Genetic Algorithms for Development of New Financial Products

    OpenAIRE

    Eder de Oliveira Abensur

    2007-01-01

    New Product Development (NPD) is recognized as a fundamental activity that has a relevant impact on the performance of companies. Despite the relevance of the financial market there is a lack of work on new financial product development. The aim of this research is to propose the use of Genetic Algorithms (GA) as an alternative procedure for evaluating the most favorable combination of variables for the product launch. The paper focuses on: (i) determining the essential variables of the finan...

  8. DiamondTorre Algorithm for High-Performance Wave Modeling

    Directory of Open Access Journals (Sweden)

    Vadim Levchenko

    2016-08-01

    Effective algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth if implemented with traditional algorithms. The numerical solution of the wave equation is considered, using a finite difference scheme with a cross stencil and a high order of approximation. The DiamondTorre algorithm is constructed with regard to the specifics of the GPGPU's (general purpose graphical processing unit) memory hierarchy and parallelism. The advantages of this algorithm are a high level of data localization and the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved, which exceeds the result of the best traditional algorithm by a factor of five.

  9. Ripple-Spreading Network Model Optimization by Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiao-Bing Hu

    2013-01-01

    Small-world and scale-free properties are widely acknowledged in many real-world complex network systems, and many network models have been developed to capture these properties. The ripple-spreading network model (RSNM) is a newly reported complex network model, inspired by the natural ripple-spreading phenomenon on a calm water surface. The RSNM exhibits good potential for describing both spatial and temporal features in the development of many real-world networks, where the influence of a few local events spreads out through nodes and then largely determines the final network topology. However, the relationships between the ripple-spreading related parameters (RSRPs) of the RSNM and small-world and scale-free topologies are not as obvious or straightforward as in many other network models. This paper attempts to apply a genetic algorithm (GA) to tune the values of the RSRPs, so that the RSNM may generate these two most important network topologies. The study demonstrates that, once the RSRPs are properly tuned by the GA, the RSNM is capable of generating both network topologies and therefore has great flexibility for studying many real-world complex network systems.

  10. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, D.

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.

  11. Models and algorithms for stochastic online scheduling

    NARCIS (Netherlands)

    Megow, N.; Uetz, Marc Jochen; Vredeveld, T.

    We consider a model for scheduling under uncertainty. In this model, we combine the main characteristics of online and stochastic scheduling in a simple and natural way. Job processing times are assumed to be stochastic, but in contrast to traditional stochastic scheduling models, we assume that

  12. Developing NASA's VIIRS LST and Emissivity EDRs using a physics based Temperature Emissivity Separation (TES) algorithm

    Science.gov (United States)

    Islam, T.; Hulley, G. C.; Malakar, N.; Hook, S. J.

    2015-12-01

    Land Surface Temperature and Emissivity (LST&E) data are acknowledged as critical Environmental Data Records (EDRs) by the NASA Earth Science Division. The current operational LST EDR for the recently launched Suomi National Polar-orbiting Partnership's (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) payload utilizes a split-window algorithm that relies on previously-generated fixed emissivity dependent coefficients and does not produce a dynamically varying and multi-spectral land surface emissivity product. Furthermore, this algorithm deviates from its MODIS counterpart (MOD11) resulting in a discontinuity in the MODIS/VIIRS LST time series. This study presents an alternative physics based algorithm for generation of the NASA VIIRS LST&E EDR in order to provide continuity with its MODIS counterpart algorithm (MOD21). The algorithm, known as temperature emissivity separation (TES) algorithm, uses a fast radiative transfer model - Radiative Transfer for (A)TOVS (RTTOV) in combination with an emissivity calibration model to isolate the surface radiance contribution retrieving temperature and emissivity. Further, a new water-vapor scaling (WVS) method is developed and implemented to improve the atmospheric correction process within the TES system. An independent assessment of the VIIRS LST&E outputs is performed against in situ LST measurements and laboratory measured emissivity spectra samples over dedicated validation sites in the Southwest USA. Emissivity retrievals are also validated with the latest ASTER Global Emissivity Database Version 4 (GEDv4). An overview and current status of the algorithm as well as the validation results will be discussed.

  13. Tools and Algorithms to Link Horizontal Hydrologic and Vertical Hydrodynamic Models and Provide a Stochastic Modeling Framework

    Science.gov (United States)

    Salah, Ahmad M.; Nelson, E. James; Williams, Gustavious P.

    2010-04-01

    We present algorithms and tools we developed to automatically link an overland flow model to a hydrodynamic water quality model with different spatial and temporal discretizations. These tools run the linked models which provide a stochastic simulation frame. We also briefly present the tools and algorithms we developed to facilitate and analyze stochastic simulations of the linked models. We demonstrate the algorithms by linking the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model for overland flow with the CE-QUAL-W2 model for water quality and reservoir hydrodynamics. GSSHA uses a two-dimensional horizontal grid while CE-QUAL-W2 uses a two-dimensional vertical grid. We implemented the algorithms and tools in the Watershed Modeling System (WMS) which allows modelers to easily create and use models. The algorithms are general and could be used for other models. Our tools create and analyze stochastic simulations to help understand uncertainty in the model application. While a number of examples of linked models exist, the ability to perform automatic, unassisted linking is a step forward and provides the framework to easily implement stochastic modeling studies.

  14. Tools and Algorithms to Link Horizontal Hydrologic and Vertical Hydrodynamic Models and Provide a Stochastic Modeling Framework

    Directory of Open Access Journals (Sweden)

    Ahmad M Salah

    2010-12-01

    We present algorithms and tools we developed to automatically link an overland flow model to a hydrodynamic water quality model with different spatial and temporal discretizations. These tools run the linked models, which provide a stochastic simulation frame. We also briefly present the tools and algorithms we developed to facilitate and analyze stochastic simulations of the linked models. We demonstrate the algorithms by linking the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model for overland flow with the CE-QUAL-W2 model for water quality and reservoir hydrodynamics. GSSHA uses a two-dimensional horizontal grid while CE-QUAL-W2 uses a two-dimensional vertical grid. We implemented the algorithms and tools in the Watershed Modeling System (WMS), which allows modelers to easily create and use models. The algorithms are general and could be used for other models. Our tools create and analyze stochastic simulations to help understand uncertainty in the model application. While a number of examples of linked models exist, the ability to perform automatic, unassisted linking is a step forward and provides the framework to easily implement stochastic modeling studies.

  15. Guide to Selected Algorithms, Distributions, and Databases Used in Exposure Models Developed By the Office of Air Quality Planning and Standards

    Science.gov (United States)

    In the evaluation of emissions standards, OAQPS frequently uses one or more computer-based models to estimate the number of people who will be exposed to the air pollution levels that are expected to occur under various air quality scenarios.

  16. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    Full Text Available In fields which require finding the most appropriate value, optimization has become a vital approach for obtaining effective solutions. With the use of optimization techniques, many different fields of modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But over time, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an important role in providing software-based techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the employment of classical optimization solutions and Artificial Intelligence solutions, enabling readers to form an idea of the potential of intelligent optimization techniques. To this end, two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus 11th Edition, and the obtained results have been compared with classical optimization solutions.

  17. A NEW GENETIC SIMULATED ANNEALING ALGORITHM FOR FLOOD ROUTING MODEL

    Institute of Scientific and Technical Information of China (English)

    KANG Ling; WANG Cheng; JIANG Tie-bing

    2004-01-01

    In this paper, a new approach, Genetic Simulated Annealing (GSA), was proposed for optimizing the parameters of the Muskingum routing model. By integrating the simulated annealing method into the genetic algorithm, the hybrid method could avoid some troubles of traditional methods, such as the arduous trial-and-error procedure, premature convergence in the genetic algorithm, and search blindness in simulated annealing. The principle and implementation procedure of this algorithm were described. Numerical experiments show that the GSA can adjust the optimization population, prevent premature convergence, and seek the global optimal result. Applications to the Nanyunhe River and Qingjiang River show that the proposed approach offers higher forecast accuracy and practicability.
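
    A minimal sketch of the GSA idea, population-based search with a Metropolis acceptance test, applied to the Muskingum parameters K and x. The routing equations are standard, but the operator settings and data below are hypothetical and the paper's exact genetic operators are not reproduced:

    ```python
    import math, random

    def muskingum_route(inflow, K, x, dt=1.0):
        """Route an inflow hydrograph with the Muskingum model."""
        d = 2.0 * K * (1.0 - x) + dt
        c0 = (dt - 2.0 * K * x) / d
        c1 = (dt + 2.0 * K * x) / d
        c2 = (2.0 * K * (1.0 - x) - dt) / d
        out = [inflow[0]]
        for t in range(1, len(inflow)):
            out.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[-1])
        return out

    def sse(params, inflow, observed):
        K, x = params
        routed = muskingum_route(inflow, K, x)
        return sum((o - r) ** 2 for o, r in zip(observed, routed))

    def gsa(inflow, observed, pop=30, gens=200, T=1.0, cool=0.98):
        """Genetic-style population search with a simulated-annealing
        (Metropolis) acceptance test on each mutated candidate."""
        P = [(random.uniform(0.5, 5.0), random.uniform(0.0, 0.5)) for _ in range(pop)]
        for _ in range(gens):
            nxt = []
            for K, x in P:
                cand = (max(0.05, K + random.gauss(0.0, 0.1)),
                        min(max(x + random.gauss(0.0, 0.02), 0.0), 0.5))
                old, new = sse((K, x), inflow, observed), sse(cand, inflow, observed)
                # accept downhill moves always, uphill moves with Boltzmann probability
                if new < old or random.random() < math.exp((old - new) / T):
                    nxt.append(cand)
                else:
                    nxt.append((K, x))
            P, T = nxt, T * cool
        return min(P, key=lambda p: sse(p, inflow, observed))

    inflow = [10.0, 30.0, 68.0, 50.0, 40.0, 31.0, 23.0]
    observed = muskingum_route(inflow, K=2.0, x=0.25)   # synthetic "observations"
    print(gsa(inflow, observed))                        # should approach K ~ 2, x ~ 0.25
    ```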

  18. Implementing Modified Burg Algorithms in Multivariate Subset Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    A. Alexandre Trindade

    2003-02-01

    Full Text Available The large number of parameters in subset vector autoregressive models often leads one to procure fast, simple, and efficient alternatives or precursors to maximum likelihood estimation. We present the solution of the multivariate subset Yule-Walker equations as one such alternative. In recent work, Brockwell, Dahlhaus, and Trindade (2002) show that the Yule-Walker estimators can actually be obtained as a special case of a general recursive Burg-type algorithm. We illustrate the structure of this algorithm, and discuss its implementation in a high-level programming language. Applications of the algorithm in univariate and bivariate modeling are showcased in examples. Univariate and bivariate versions of the algorithm written in Fortran 90 are included in the appendix, and their use is illustrated.

  19. Datasets for radiation network algorithm development and testing

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S [ORNL; Sen, Satyabrata [ORNL; Berry, M. L. [New Jersey Institute of Technology; Wu, Qishi [University of Memphis; Grieme, M. [New Jersey Institute of Technology; Brooks, Richard R [ORNL; Cordone, G. [Clemson University

    2016-01-01

    The Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) program supported the development of networks of commercial-off-the-shelf (COTS) radiation counters for detecting, localizing, and identifying low-level radiation sources. Under this program, a series of indoor and outdoor tests were conducted with multiple source strengths and types, different background profiles, and various types of source and detector movements. Following the tests, network algorithms were replayed in various re-constructed scenarios using sub-networks. These measurements and algorithm traces together provide a rich collection of highly valuable datasets for testing current and next-generation radiation network algorithms, including those (to be) developed by broader R&D communities such as distributed detection, information fusion, and sensor networks. From this multi-terabyte IRSS database, we distilled and packaged the first batch of canonical datasets for public release. They include measurements from ten indoor and two outdoor tests which represent increasingly challenging baseline scenarios for robustly testing radiation network algorithms.

  20. Iterative learning control algorithm for spiking behavior of neuron model

    Science.gov (United States)

    Li, Shunan; Li, Donghui; Wang, Jiang; Yu, Haitao

    2016-11-01

    Controlling neurons to generate a desired or normal spiking behavior is a fundamental building block of the treatment of many neurological diseases. The objective of this work is to develop a novel control method, a closed-loop proportional-integral (PI)-type iterative learning control (ILC) algorithm, to control the spiking behavior of model neurons. In order to verify the feasibility and effectiveness of the proposed method, two single-compartment standard models of different neuronal excitability are specifically considered: the Hodgkin-Huxley (HH) model for class 1 neural excitability and the Morris-Lecar (ML) model for class 2 neural excitability. ILC has remarkable advantages for processes that are repetitive in nature. To further highlight the superiority of the proposed method, the performance of the iterative learning controller is compared to that of a classical PI controller. In both the classical PI control and the PI control combined with ILC, appropriate background noise is added to the neuron models to approach the problem under more realistic biophysical conditions. Simulation results show that the controller performance is more favorable when ILC is considered, no matter which excitability class the neuron belongs to and no matter what kind of firing pattern the desired trajectory represents. The error between the real and desired output is much smaller under the ILC control signal, which suggests that ILC of a neuron's spiking behavior is more accurate.
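
    The PI-type ILC update is trial-wise: after each repetition, the input is corrected with proportional and integral terms of that trial's tracking error. A sketch under strong simplifications, with a toy first-order plant standing in for the HH/ML neuron models and hypothetical gains:

    ```python
    import numpy as np

    def pi_ilc(plant, r, trials=30, kp=0.8, ki=0.2, dt=0.01):
        """PI-type ILC: u_{k+1}(t) = u_k(t) + kp*e_k(t) + ki * integral of e_k.
        `plant` maps an input sequence to an output sequence; `r` is the
        desired (reference) trajectory."""
        u = np.zeros_like(r)
        for _ in range(trials):
            y = plant(u)
            e = r - y
            u = u + kp * e + ki * np.cumsum(e) * dt
        return u

    # Toy first-order plant standing in for the neuron model (hypothetical):
    def plant(u, a=5.0, b=5.0, dt=0.01):
        y = np.zeros_like(u)
        for t in range(1, len(u)):
            y[t] = y[t - 1] + dt * (-a * y[t - 1] + b * u[t - 1])
        return y

    t = np.arange(0, 1, 0.01)
    reference = np.sin(2 * np.pi * t)    # desired trajectory, schematically
    u_learned = pi_ilc(plant, reference)
    ```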

  1. The Cosparse Analysis Model and Algorithms

    CERN Document Server

    Nam, Sangnam; Elad, Michael; Gribonval, Rémi

    2011-01-01

    After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations and appealing applications. Alongside this approach there is an analysis counterpart model which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model has not received similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments.

  2. Application of firefly algorithm to the dynamic model updating problem

    Science.gov (United States)

    Shabbir, Faisal; Omenzetter, Piotr

    2015-04-01

    Model updating can be considered a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multidimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention in the previous decade for solving such complex optimization problems. This study applies the novel Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem. To the authors' best knowledge, this is the first time the FA has been applied to model updating. The working of the FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned a brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model using the FA. The algorithm aimed at minimizing the difference between the natural frequencies and mode shapes of the structure. The performance of the algorithm is analyzed in finding the optimal solution in a multidimensional search space. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
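
    A sketch of the basic FA loop as commonly published: dimmer fireflies move toward brighter ones with distance-decaying attractiveness plus a damped random walk. The objective below is a toy stand-in for a frequency-mismatch function; the paper's actual FE objective and settings are not reproduced:

    ```python
    import numpy as np

    def firefly(objective, dim, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2):
        """Minimize `objective` with the basic firefly algorithm."""
        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, (n, dim))          # candidate updating parameters
        f = np.array([objective(x) for x in X])
        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if f[j] < f[i]:               # firefly j is brighter
                        r2 = np.sum((X[i] - X[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                        f[i] = objective(X[i])
            alpha *= 0.97                          # gradually damp the random walk
        best = np.argmin(f)
        return X[best], f[best]

    # Hypothetical stand-in objective: squared mismatch between "measured" and
    # predicted natural frequencies as a function of stiffness multipliers.
    measured = np.array([1.2, 3.4, 5.6])
    def freq_mismatch(theta):
        predicted = measured * (1 + 0.3 * np.tanh(theta))   # toy FE surrogate
        return np.sum((predicted - measured) ** 2)

    theta_opt, err = firefly(freq_mismatch, dim=3)
    ```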

  3. Multiple QoS modeling and algorithm in computational grid

    Institute of Scientific and Technical Information of China (English)

    Li Chunlin; Feng Meilai; Li Layuan

    2007-01-01

    Multiple QoS modeling and algorithms in grid systems are considered. Grid QoS requirements can be formulated as a utility function for each task, defined as a weighted sum of its per-dimension QoS utility functions. Multiple-QoS-constrained resource scheduling optimization in a computational grid is decomposed into two subproblems: optimization for the grid user and for the grid resource provider. Grid QoS scheduling can then be achieved by solving the subproblems via an iterative algorithm.

  4. A LOAD BALANCING MODEL USING FIREFLY ALGORITHM IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    A. Paulin Florence

    2014-01-01

    Full Text Available Cloud computing is a model that aims at streamlining the on-demand provisioning of software, hardware and data as services, providing end-users with flexible and scalable services accessible through the Internet. The main objective of the proposed approach is to maximize resource utilization and provide a well-balanced load among all the resources in cloud servers. Initially, a load model of every resource is derived based on several factors such as memory usage, processing time and access rate. Based on the newly derived load index, the current load is computed for all the resources shared in the virtual machines of cloud servers. Once the load index is computed for all the resources, a load balancing operation is initiated to use the resources effectively and dynamically, assigning resources to the corresponding nodes to reduce the load value. Assigning resources to proper nodes is thus an optimal distribution problem, for which optimization algorithms such as the genetic algorithm and the modified genetic algorithm have been utilized. These algorithms are not very effective in providing neighbour solutions, since they do not overcome the exploration-exploitation trade-off. Since the genetic algorithm is a traditional and comparatively old algorithm, utilizing a more effective optimization procedure can lead to better load balancing. Accordingly, a recent optimization algorithm, called the firefly algorithm, is utilized for the load balancing operation in the proposed work. At first, an index table is maintained by considering the availability of virtual servers and the sequence of requests. Then, the load index is computed based on the newly derived formulae. Based on the load index, the load balancing operation is carried out using the firefly algorithm. The performance analysis produced the expected results, proving that the proposed approach is efficient in optimizing schedules by balancing the load.

  5. Basic Research on Adaptive Model Algorithmic Control

    Science.gov (United States)

    1985-12-01

    Key reference: Richalet, J., A. Rault, J. L. Testud and J. Papon (1978). Model predictive heuristic control: applications to industrial processes.

  6. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation algorithms.

  7. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation algorithms.

  8. An Algorithm for Optimally Fitting a Wiener Model

    Directory of Open Access Journals (Sweden)

    Lucas P. Beverlin

    2011-01-01

    Full Text Available The purpose of this work is to present a new methodology for fitting Wiener networks to datasets with a large number of variables. Wiener networks have the ability to model a wide range of data types, and their structures can yield parameters with phenomenological meaning. There are several challenges to fitting such a model: model stiffness, the nonlinear nature of a Wiener network, possible overfitting, and the large number of parameters inherent with large input sets. This work describes a methodology to overcome these challenges by using several iterative algorithms under supervised learning and fitting subsets of the parameters at a time. This methodology is applied to Wiener networks that are used to predict blood glucose concentrations. The predictions of validation sets from models fit to four subjects using this methodology yielded a higher correlation between observed and predicted observations than other algorithms, including the Gauss-Newton and Levenberg-Marquardt algorithms.

  9. A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller

    Energy Technology Data Exchange (ETDEWEB)

    Tapp, P.A.

    1992-04-01

    A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms are discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs.

  10. A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller

    Energy Technology Data Exchange (ETDEWEB)

    Tapp, P.A.

    1992-04-01

    A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms are discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs.

  11. Development of a two wheeled self balancing robot with speech recognition and navigation algorithm

    Science.gov (United States)

    Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh

    2016-07-01

    This paper discusses the modeling, construction, and development of the navigation algorithm of a two-wheeled self-balancing mobile robot in an enclosure. We discuss the design of the two main controller algorithms, namely PID controllers, on the robot model. Simulation is performed in the SIMULINK environment. The controllers are developed primarily for self-balancing of the robot and also for its positioning. As for navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position. The navigation system needs to be calibrated before the navigation process starts. Almost all of the earlier template matching algorithms that can be found in the open literature can only trace the robot, whereas the algorithm proposed here can also locate the position of other objects in an enclosure, such as furniture and tables. This enables the robot to know the exact location of every stationary object in the enclosure. Moreover, some additional features, such as Speech Recognition and Object Detection, are added. For Object Detection, the single-board computer Raspberry Pi is used. The system is programmed to analyze images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
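
    A discrete PID loop of the kind used for both the balancing and positioning controllers might look as follows; the gains, sampling time, and sensor values are placeholders, not the paper's tuned values:

    ```python
    class PID:
        """Discrete PID controller, usable for both the balancing
        (tilt-angle) loop and the positioning loop."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Balancing loop: drive the measured tilt angle toward zero (gains hypothetical)
    balance = PID(kp=40.0, ki=1.5, kd=0.8, dt=0.01)
    tilt_deg = 2.3                       # e.g., from an IMU complementary filter
    motor_cmd = balance.update(0.0, tilt_deg)
    ```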

  12. On Models of Nonlinear Evolution Paths in Adiabatic Quantum Algorithms

    Institute of Scientific and Technical Information of China (English)

    SUN Jie; LU Song-Feng; Samuel L.Braunstein

    2013-01-01

    In this paper, we study two different nonlinear interpolating paths in adiabatic evolution algorithms for solving a particular class of quantum search problems where both the initial and final Hamiltonians are one-dimensional projector Hamiltonians on the corresponding ground state. If the overlap between the initial state and final state of the quantum system is not equal to zero, both of these models can provide a constant-time speedup over the usual adiabatic algorithms by increasing another corresponding "complexity". But when the initial state has a zero overlap with the solution state of the problem, the second model leads to an infinite time complexity of the algorithm no matter what interpolating functions are applied, while the first one can still provide a constant running time. However, inspired by a related reference, a variant of the first model can be constructed which also fails for the problem when the overlap is exactly equal to zero, if we want to make up for the "intrinsic" fault of the second model, namely an increase in energy. Two concrete theorems are given to explain why neither of these two models can improve the usual adiabatic evolution algorithms for the phenomenon above. These results indicate what should be noted when using certain nonlinear evolution paths in adiabatic quantum algorithms for some special kinds of problems.

  13. A simple algorithm for optimization and model fitting: AGA (asexual genetic algorithm)

    CERN Document Server

    Canto, J.; Martinez-Gomez, E.; DOI: 10.1051/0004-6361/200911740

    2009-01-01

    Context. Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, for twice-differentiable functions and problems with no constraints, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives and other types of techniques must be employed, such as the steepest descent/ascent method and more sophisticated methods such as those based on evolutionary algorithms. Aims. We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (Asexual Genetic Algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail, and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two e...
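
    In the spirit of the asexual GA described, a mutation-only scheme resamples candidates around the current best point inside a search box that gradually contracts. This sketch and all of its settings are illustrative, not the paper's exact procedure:

    ```python
    import numpy as np

    def aga(objective, lo, hi, n=40, gens=300, shrink=0.9, seed=0):
        """Asexual GA sketch: no crossover; each generation samples a
        population around the best point found so far inside a
        contracting search box."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        best = rng.uniform(lo, hi)
        best_f = objective(best)
        width = hi - lo
        for _ in range(gens):
            pop = best + rng.uniform(-0.5, 0.5, (n, lo.size)) * width
            pop = np.clip(pop, lo, hi)
            f = np.apply_along_axis(objective, 1, pop)
            i = np.argmin(f)
            if f[i] < best_f:
                best, best_f = pop[i], f[i]
            width *= shrink                    # contract the sampling box
        return best, best_f

    # Model fitting as chi-square minimization (toy straight-line example)
    x = np.linspace(0, 1, 20)
    y = 2.0 * x + 1.0 + 0.05 * np.random.default_rng(1).normal(size=20)
    chi2 = lambda p: np.sum((y - (p[0] * x + p[1])) ** 2)
    params, fit = aga(chi2, lo=[-10, -10], hi=[10, 10])
    ```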

  14. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation.

    Directory of Open Access Journals (Sweden)

    Zhipeng Gui

    Full Text Available Dust storm has serious disastrous impacts on environment, human health, and assets. The developments and applications of dust storm models have contributed significantly to better understand and predict the distribution, intensity and structure of dust storms. However, dust storm simulation is a data and computing intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain on different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinational optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compared the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical models.

  15. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation.

    Science.gov (United States)

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storm has serious disastrous impacts on environment, human health, and assets. The developments and applications of dust storm models have contributed significantly to better understand and predict the distribution, intensity and structure of dust storms. However, dust storm simulation is a data and computing intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain on different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinational optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compared the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical models.

  16. A general model for matroids and the greedy algorithm

    NARCIS (Netherlands)

    Faigle, U.; Fujishige, Saturo

    2009-01-01

    We present a general model for set systems to be independence families with respect to set families which determine classes of proper weight functions on a ground set. Within this model, matroids arise from a natural subclass and can be characterized by the optimality of the greedy algorithm.
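
    The greedy characterization is easy to state in code: scan elements in order of decreasing weight and keep each one whose addition preserves independence, queried through an oracle. A minimal sketch with a uniform matroid as the example (the oracle and weights are hypothetical):

    ```python
    def matroid_greedy(elements, weight, independent):
        """Greedy algorithm for a weighted matroid: `independent` is an
        oracle that decides whether a set of elements is independent."""
        basis = set()
        for e in sorted(elements, key=weight, reverse=True):
            if independent(basis | {e}):
                basis.add(e)
        return basis

    # Example with a uniform matroid U(2, n): any set of size <= 2 is independent.
    items = ["a", "b", "c", "d"]
    w = {"a": 5, "b": 9, "c": 1, "d": 7}.__getitem__
    print(matroid_greedy(items, w, lambda s: len(s) <= 2))   # {'b', 'd'}
    ```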

  17. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for th...

  18. DEVELOPMENT OF ALGORITHMS OF NUMERICAL PROJECT OPTIMIZATION FOR THE CONSTRUCTION AND RECONSTRUCTION OF ENGINEERING STRUCTURES

    Directory of Open Access Journals (Sweden)

    MENEJLJUK О. І.

    2016-08-01

    Full Text Available Problem statement. The paper analyzes numerical optimization methods for projects of construction and reconstruction of engineering structures. Purpose. Possible ways of modeling organizational and technological solutions in construction are presented. Based on the analysis, the most effective method of optimization, experimental and statistical modeling with the application of modern computer programs in the fields of project management and mathematical statistics, is selected. Conclusion. An algorithm for solving optimization problems by means of experimental and statistical modeling is developed.

  19. The Promise and Pitfalls of Algorithmic Governance for Developing Societies

    Directory of Open Access Journals (Sweden)

    Rick SEARLE

    2016-06-01

    Full Text Available Many democracies in an early stage of development, such as Nigeria, experience a period of endemic corruption and difficulty providing needed public services. The careful use of algorithms may be of use in helping new democracies transition to a more objective, equitable, and accountable form of governance, though technology should not be viewed as a panacea for structural problems or without challenges of its own.

  20. Development and application of unified algorithms for problems in computational science

    Science.gov (United States)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.

  1. A modified EM algorithm for estimation in generalized mixed models.

    Science.gov (United States)

    Steele, B M

    1996-12-01

    Application of the EM algorithm for estimation in the generalized mixed model has been largely unsuccessful because the E-step cannot be determined in most instances. The E-step computes the conditional expectation of the complete data log-likelihood and when the random effect distribution is normal, this expectation remains an intractable integral. The problem can be approached by numerical or analytic approximations; however, the computational burden imposed by numerical integration methods and the absence of an accurate analytic approximation have limited the use of the EM algorithm. In this paper, Laplace's method is adapted for analytic approximation within the E-step. The proposed algorithm is computationally straightforward and retains much of the conceptual simplicity of the conventional EM algorithm, although the usual convergence properties are not guaranteed. The proposed algorithm accommodates multiple random factors and random effect distributions besides the normal, e.g., the log-gamma distribution. Parameter estimates obtained for several data sets and through simulation show that this modified EM algorithm compares favorably with other generalized mixed model methods.

  2. Co-clustering models, algorithms and applications

    CERN Document Server

    Govaert, Gérard

    2013-01-01

    Cluster or co-cluster analyses are important tools in a variety of scientific areas. The introduction of this book presents a state of the art of already well-established, as well as more recent methods of co-clustering. The authors mainly deal with the two-mode partitioning under different approaches, but pay particular attention to a probabilistic approach. Chapter 1 concerns clustering in general and the model-based clustering in particular. The authors briefly review the classical clustering methods and focus on the mixture model. They present and discuss the use of different mixture

  3. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    Science.gov (United States)

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that satisfy the adaptation of the results of local search into the genetic algorithms with a minimum relocation operation of the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms.

  4. Impulsive Neural Networks Algorithm Based on the Artificial Genome Model

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2014-05-01

    Full Text Available To describe gene regulatory networks, this article adopts the framework of the artificial genome model and proposes an impulsive neural network algorithm based on it. First, gene expression and the cell division tree are applied to generate spiking neurons with specific attributes, the neural network structure, connection weights and the specific learning rules of each neuron. Next, the gene segment duplication and divergence model is applied to design the evolutionary algorithm of impulsive neural networks at the level of the artificial genome. The dynamic changes of the developmental gene regulatory networks are controlled during the whole evolutionary process. Finally, the food-collecting behavior of a nerve-driven autonomous intelligent agent is simulated. Experimental results demonstrate that the algorithm in this article can evolve large-scale impulsive neural networks.

  5. Differential Evolution algorithm applied to FSW model calibration

    Science.gov (United States)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid-state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial-and-error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
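
    For reference, the classic DE/rand/1/bin scheme looks as follows; the two-parameter error surface is a hypothetical stand-in for the simulation-versus-experiment mismatch of the weld model:

    ```python
    import numpy as np

    def differential_evolution(f, bounds, pop=20, F=0.8, CR=0.9, gens=150, seed=0):
        """Classic DE/rand/1/bin: mutate with a scaled difference of two
        random members, binomially cross with the target, keep the better."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        dim = lo.size
        X = rng.uniform(lo, hi, (pop, dim))
        fx = np.apply_along_axis(f, 1, X)
        for _ in range(gens):
            for i in range(pop):
                a, b, c = X[rng.choice([k for k in range(pop) if k != i], 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True      # ensure at least one gene crosses
                trial = np.where(cross, mutant, X[i])
                ft = f(trial)
                if ft < fx[i]:
                    X[i], fx[i] = trial, ft
        i = np.argmin(fx)
        return X[i], fx[i]

    # Hypothetical calibration error surface over two heat-transfer parameters
    err = lambda p: (p[0] - 0.7) ** 2 + (p[1] - 0.3) ** 2
    best, best_err = differential_evolution(err, bounds=[(0, 1), (0, 1)])
    ```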

  6. Software Model Checking for Verifying Distributed Algorithms

    Science.gov (United States)

    2014-10-28

    Verification proceeds as an intelligent exhaustive search of the state space of the design, applied here to verifying synchronous distributed applications (Sagar Chaki, Carnegie Mellon University, 2014). Tool usage information and a tutorial are available on the project webpage (http://mcda.googlecode.com).

  7. Economic Models and Algorithms for Distributed Systems

    CERN Document Server

    Neumann, Dirk; Altmann, Jorn; Rana, Omer F

    2009-01-01

    Distributed computing models for sharing resources such as Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. This book intends to discover fresh avenues of research and amendments to existing technologies, aiming at the successful deployment of commercial distributed systems.

  8. A tractable algorithm for the wellfounded model

    NARCIS (Netherlands)

    Jonker, C.M.; Renardel de Lavalette, G.R.

    In the area of general logic programming (negated atoms allowed in the bodies of rules) and reason maintenance systems, the wellfounded model (first defined by Van Gelder, Ross and Schlipf in 1988) is generally considered to be the declarative semantics of the program. In this paper we present a tractable algorithm for computing the wellfounded model.

  9. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    Science.gov (United States)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  10. Programming Non-Trivial Algorithms in the Measurement Based Quantum Computation Model

    Energy Technology Data Exchange (ETDEWEB)

    Alsing, Paul [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Fanto, Michael [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Lott, Capt. Gordon [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Tison, Christoper C. [United States Air Force Research Laboratory, Wright-Patterson Air Force Base

    2014-01-01

    We provide a set of prescriptions for implementing a quantum circuit model algorithm as a measurement-based quantum computing (MBQC) algorithm [1, 2] via a large cluster state. As a means of illustration we draw upon our numerical modeling experience to describe a large graph state capable of searching a logical 8-element list (a non-trivial version of Grover's algorithm [3] with feedforward). We develop several prescriptions based on analytic evaluation of cluster states and graph state equations which can be generalized into any circuit model operations. Such a resulting cluster state will be able to carry out the desired operation with appropriate measurements and feed-forward error correction. We also discuss the physical implementation and the analysis of the principal 3-qubit entangling gate (Toffoli) required for a non-trivial feedforward realization of an 8-element Grover search algorithm.

  11. Efficiency of Evolutionary Algorithms for Calibration of Watershed Models

    Science.gov (United States)

    Ahmadi, M.; Arabi, M.

    2009-12-01

    Since the promulgation of the Clean Water Act in the U.S. and similar legislation around the world over the past three decades, watershed management programs have focused on the nexus of pollution prevention and mitigation. In this context, hydrologic/water quality models have been increasingly embedded in the decision-making process. Simulation models are now commonly used to investigate the hydrologic response of watershed systems under varying climatic and land use conditions, and also to study the fate and transport of contaminants at various spatiotemporal scales. Adequate calibration and corroboration of models for various outputs at varying scales are an essential component of watershed modeling. The parameter estimation process can be challenging when multiple objectives are important. For example, improving streamflow predictions of the model at a stream location may degrade model predictions for sediments and/or nutrients at the same location or other outlets. This paper aims to evaluate the applicability and efficiency of single- and multi-objective evolutionary algorithms for parameter estimation of complex watershed models. To this end, the Shuffled Complex Evolution (SCE-UA) algorithm, a single-objective genetic algorithm (GA), and a multi-objective genetic algorithm (i.e., NSGA-II) were coupled with the Soil and Water Assessment Tool (SWAT) to calibrate the model at various locations within the Wildcat Creek Watershed, Indiana. The efficiency of these methods was investigated using different error statistics, including root mean square error, coefficient of determination and Nash-Sutcliffe efficiency coefficient, for the output variables as well as the baseflow component of the stream discharge. A sensitivity analysis was carried out to screen model parameters that bear significant uncertainties. Results indicated that while flow processes can be reasonably ascertained, parameterization of nutrient and pesticide processes
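
    The error statistics named above are quick to compute once simulated and observed series are aligned. A small sketch with made-up values (NSE = 1 is a perfect fit; NSE <= 0 means the model does no better than predicting the observed mean):

    ```python
    import numpy as np

    def calibration_stats(obs, sim):
        """Root mean square error, coefficient of determination, and the
        Nash-Sutcliffe efficiency coefficient for a model fit."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        rmse = np.sqrt(np.mean((obs - sim) ** 2))
        r = np.corrcoef(obs, sim)[0, 1]
        nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
        return {"RMSE": rmse, "R2": r ** 2, "NSE": nse}

    print(calibration_stats([3.1, 4.0, 5.2, 4.4], [3.0, 4.2, 5.0, 4.5]))
    ```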

  12. Genetic Algorithm Modeling with GPU Parallel Computing Technology

    CERN Document Server

    Cavuoti, Stefano; Brescia, Massimo; Pescapé, Antonio; Longo, Giuseppe; Ventre, Giorgio

    2012-01-01

    We present a multi-purpose genetic algorithm, designed and implemented with GPGPU/CUDA parallel computing technology. The model was derived from a multi-core CPU serial implementation, named GAME, already successfully tested and validated on massive astrophysical data classification problems through a web application resource (DAMEWARE) specialized in data mining based on machine learning paradigms. Since genetic algorithms are inherently parallel, the GPGPU computing paradigm allows the internal training features of the model to be exploited, permitting strong optimization in terms of processing performance and scalability.

  13. Development of CAD implementing the algorithm of boundary elements’ numerical analytical method

    Directory of Open Access Journals (Sweden)

    Yulia V. Korniyenko

    2015-03-01

    Full Text Available Until recently, the algorithms for the numerical-analytical boundary elements method had been implemented as programs written in the MATLAB environment language. Each program had a local character, i.e., it was used to solve one particular problem: calculation of a beam, frame, arch, etc. Constructing matrices in these programs was carried out “manually” and was therefore time-consuming. The research was aimed at a reasoned choice of programming language for the development of a new CAD system that implements the algorithm of the numerical-analytical boundary elements method and provides visualization tools for the initial objects and calculation results. The research conducted shows that among the wide variety of programming languages, the most efficient one for developing such a CAD system is Java. This language provides tools not only for the development of the calculating part of the CAD system, but also for building the graphic interface for constructing geometrical models and interpreting the calculated results.

  14. Forward and backward models for fault diagnosis based on parallel genetic algorithms

    Institute of Scientific and Technical Information of China (English)

    Yi LIU; Ying LI; Yi-jia CAO; Chuang-xin GUO

    2008-01-01

    In this paper, a mathematical model consisting of forward and backward models is built on parallel genetic algorithms (PGAs) for fault diagnosis in a transmission power system. A new method to reduce the scale of fault sections is developed in the forward model and the message passing interface (MPI) approach is chosen to parallel the genetic algorithms by the global single-population master-slave method (GPGAs). The proposed approach is applied to a sample system consisting of 28 sections, 84 protective relays and 40 circuit breakers. Simulation results show that the new model based on GPGAs can achieve very fast computation in online applications of large-scale power systems.

  15. Numerical algorithm of distributed TOPKAPI model and its application

    Institute of Scientific and Technical Information of China (English)

    Deng Peng; Li Zhijia; Liu Zhiyu

    2008-01-01

    The TOPKAPI (TOPographic Kinematic APproximation and Integration) model is a physically based rainfall-runoff model derived from the integration in space of the kinematic wave model. In the TOPKAPI model, rainfall-runoff and runoff routing processes are described by three nonlinear reservoir differential equations that are structurally similar and describe different hydrological and hydraulic processes. Equations are integrated over grid cells that describe the geometry of the catchment, leading to a cascade of nonlinear reservoir equations. To improve the model's computational precision, this paper provides the general form of these equations and describes their solution by means of a numerical algorithm, the variable-step fourth-order Runge-Kutta algorithm. For the purpose of assessing the quality of the comprehensive numerical algorithm, this paper presents a case study application to the Buliu River Basin, which has an area of 3 310 km2, using a DEM (digital elevation model) grid with a resolution of 1 km. The results show that the variable-step fourth-order Runge-Kutta algorithm for nonlinear reservoir equations is a good approximation of subsurface flow in the soil matrix, overland flow over the slopes, and surface flow in the channel network, allowing us to retain the physical properties of the original equations at scales ranging from a few meters to 1 km.
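
    Step doubling is one common way to make RK4 variable-step: take one full step and two half steps, compare, and shrink or grow the step size. A sketch on a schematic nonlinear reservoir equation (coefficients hypothetical; not the TOPKAPI formulation itself):

    ```python
    def rk4_step(f, t, v, h):
        k1 = f(t, v)
        k2 = f(t + h / 2, v + h * k1 / 2)
        k3 = f(t + h / 2, v + h * k2 / 2)
        k4 = f(t + h, v + h * k3)
        return v + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

    def integrate_reservoir(f, v0, t0, t1, h=60.0, tol=1e-6):
        """Variable-step RK4 via step doubling: compare one full step with
        two half steps and adapt h from the discrepancy."""
        t, v = t0, v0
        while t < t1:
            h = min(h, t1 - t)
            full = rk4_step(f, t, v, h)
            half = rk4_step(f, t + h / 2, rk4_step(f, t, v, h / 2), h / 2)
            if abs(half - full) < tol:
                t, v = t + h, half
                h *= 1.5            # accept and try a larger step
            else:
                h *= 0.5            # reject and retry with a smaller step
        return v

    # Hypothetical cell: constant inflow, nonlinear storage-discharge relation
    f = lambda t, v: 2.0 - 0.01 * v ** 1.67
    print(integrate_reservoir(f, v0=10.0, t0=0.0, t1=3600.0))
    ```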

  16. A study on the application of topic models to motif finding algorithms.

    Science.gov (United States)

    Basha Gutierrez, Josep; Nakai, Kenta

    2016-12-22

    Topic models are statistical algorithms which try to discover the structure of a set of documents according to the abstract topics contained in them. Here we try to apply this approach to the discovery of the structure of the transcription factor binding sites (TFBS) contained in a set of biological sequences, which is a fundamental problem in molecular biology research for the understanding of transcriptional regulation. Here we present two methods that make use of topic models for motif finding. First, we developed an algorithm in which a set of biological sequences is treated as text documents, and the k-mers contained in them as words, to then build a correlated topic model (CTM) and iteratively reduce its perplexity. We also used the perplexity measurement of CTMs to improve our previous algorithm based on a genetic algorithm and several statistical coefficients. The algorithms were tested with 56 data sets from four different species and compared to 14 other methods by the use of several coefficients, both at nucleotide and site level. The results of our first approach showed a performance comparable to the other methods studied, especially at site level and in sensitivity scores, in which it scored better than any of the 14 existing tools. In the case of our previous algorithm, the new approach with the addition of the perplexity measurement clearly outperformed all of the other methods in sensitivity, both at nucleotide and site level, and in overall performance at site level. The statistics obtained show that the performance of a motif finding method based on the use of a CTM is satisfactory enough to conclude that the application of topic models is a valid method for developing motif finding algorithms. Moreover, the addition of topic models to a previously developed method dramatically increased its performance, suggesting that this combined algorithm can be a useful tool to successfully predict motifs in different kinds of sets of DNA sequences.

  17. An exponential modeling algorithm for protein structure completion by X-ray crystallography.

    Science.gov (United States)

    Shneerson, V L; Wild, D L; Saldin, D K

    2001-03-01

    An exponential modeling algorithm is developed for protein structure completion by X-ray crystallography and tested on experimental data from a 59-residue protein. An initial noisy difference Fourier map of missing residues of up to half of the protein is transformed by the algorithm into one that allows easy identification of the continuous tube of electron density associated with that polypeptide chain. The method incorporates the paradigm of phase hypothesis generation and cross validation within an automated scheme.

  18. Hybrid model based on Genetic Algorithms and SVM applied to variable selection within fruit juice classification.

    Science.gov (United States)

    Fernandez-Lozano, C; Canto, C; Gestal, M; Andrade-Garda, J M; Rabuñal, J R; Dorado, J; Pazos, A

    2013-01-01

    Given the background of the use of Neural Networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machine (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as a fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.
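
    One way to realize the hybrid is to let each GA individual be a binary mask over variables and to score it with cross-validated SVM accuracy. A sketch using scikit-learn (an assumed dependency; the paper's implementation details are not specified here):

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def fitness(mask, X, y):
        """Cross-validated SVM accuracy on the selected variables only;
        this plays the role of the GA's fitness function."""
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(), X[:, mask], y, cv=5).mean()

    def ga_select(X, y, pop=20, gens=30, p_mut=0.05, seed=0):
        rng = np.random.default_rng(seed)
        n = X.shape[1]
        P = rng.random((pop, n)) < 0.5                     # random variable masks
        for _ in range(gens):
            f = np.array([fitness(m, X, y) for m in P])
            P = P[np.argsort(f)[::-1]][: pop // 2]         # truncation selection
            kids = []
            while len(kids) < pop // 2:
                a, b = P[rng.integers(len(P), size=2)]
                cut = rng.integers(1, n)
                child = np.concatenate([a[:cut], b[cut:]]) # one-point crossover
                child ^= rng.random(n) < p_mut             # bit-flip mutation
                kids.append(child)
            P = np.vstack([P, kids])
        f = np.array([fitness(m, X, y) for m in P])
        return P[np.argmax(f)]
    ```

    With a feature matrix X and labels y in hand, a call such as mask = ga_select(X, y) would leave X[:, mask] holding the selected variables.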

  19. Development of algorithm for single axis sun tracking system

    Science.gov (United States)

    Yi, Lim Zi; Singh, Balbir Singh Mahinder; Ching, Dennis Ling Chuan; Jin, Calvin Low Eu

    2016-11-01

    The output power from a solar panel depends on the amount of sunlight intercepted by the photovoltaic (PV) solar panel. The value of solar irradiance varies due to the changing position of the sun and the local meteorological conditions. This causes the output power of a PV-based solar electricity generating system (SEGS) to fluctuate as well. In this paper, the focus is on the integration of a solar tracking system with a performance analyzer system through the development of an algorithm for optimizing the performance of the SEGS. The proposed algorithm displays real-time processed data that enables users to understand the trend of the SEGS output for maintenance prediction and optimization purposes.
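
    A tracking algorithm of this kind starts from the standard solar-position formulas: Cooper's equation for declination and the hour angle for the sun's east-west motion. A sketch with illustrative thresholds and an assumed site (not the paper's hardware logic):

    ```python
    import math

    def solar_elevation(day_of_year, hour, latitude_deg):
        """Approximate solar position: declination via Cooper's equation,
        hour angle from solar time, then elevation. A single-axis tracker
        can servo to the hour angle."""
        decl = 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
        hour_angle = 15.0 * (hour - 12.0)           # degrees, solar noon = 0
        lat, d, h = map(math.radians, (latitude_deg, decl, hour_angle))
        elev = math.asin(math.sin(lat) * math.sin(d)
                         + math.cos(lat) * math.cos(d) * math.cos(h))
        return math.degrees(elev), hour_angle

    # Tracker command for a hypothetical low-latitude site on day 172, 10:00 solar time
    elev, axis_angle = solar_elevation(172, 10.0, 4.4)
    if elev > 5.0:                                   # only track when the sun is usefully high
        print(f"rotate panel axis to {axis_angle:.1f} deg")
    ```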

  20. Study on Fleet Assignment Problem Model and Algorithm

    Directory of Open Access Journals (Sweden)

    Yaohua Li

    2013-01-01

    Full Text Available The Fleet Assignment Problem (FAP) of aircraft scheduling in airlines is studied, and an optimization model of the FAP is proposed. The objective function of this model is revenue maximization, and it comprehensively considers the differences among scheduled flights and aircraft models in terms of flight areas and mean passenger flows. In order to solve the model, a self-adapting genetic algorithm is proposed, which uses natural number coding, dynamically adjusts the crossover and mutation operator probabilities, and adopts intelligent heuristic adjustment to quicken the optimization pace. A simulation with production data from an airline shows that the model and algorithms suggested in this paper are feasible and have good application value.

  1. Financial Data Modeling by Using Asynchronous Parallel Evolutionary Algorithms

    Institute of Scientific and Technical Information of China (English)

    Wang Chun; Li Qiao-yun

    2003-01-01

    In this paper, the high-level knowledge of financial data modeled by ordinary differential equations (ODEs) is discovered in dynamic data by using an asynchronous parallel evolutionary modeling algorithm (APHEMA). A numerical example of Nasdaq index analysis is used to demonstrate the potential of APHEMA. The results show that the dynamic models automatically discovered in dynamic data by computer can be used to predict the financial trends.

  2. A variational surface hopping algorithm for the sub-Ohmic spin-boson model

    CERN Document Server

    Yao, Yao

    2013-01-01

    The Davydov D1 ansatz, which assigns an individual bosonic trajectory to each spin state, is an efficient, yet extremely accurate trial state for time-dependent variation of the sub-Ohmic spin-boson model [J. Chem. Phys. 138, 084111 (2013)]. A surface hopping algorithm is developed employing the Davydov D1 ansatz to study spin dynamics with a sub-Ohmic bosonic bath. The algorithm takes into account both coherent and incoherent dynamics of the population evolution in a unified manner, and compared with semiclassical surface hopping algorithms, the hopping rates calculated in this work follow the Marcus formula more closely.

  3. Assessing the Graphical and Algorithmic Structure of Hierarchical Coloured Petri Net Models

    Directory of Open Access Journals (Sweden)

    George Benwell

    1994-11-01

    Full Text Available Petri nets, as a modelling formalism, are utilised for the analysis of processes, whether for explicit understanding, database design or business process re-engineering. The formalism, however, can be represented on a virtual continuum from highly graphical to largely algorithmic. The use and understanding of the formalism will, in part, therefore depend on the resultant complexity and power of the representation and, on the graphical or algorithmic preference of the user. This paper develops a metric which will indicate the graphical or algorithmic tendency of hierarchical coloured Petri nets.

  4. Stochastic gradient algorithm for a dual-rate Box-Jenkins model based on auxiliary model and FIR model

    Institute of Scientific and Technical Information of China (English)

    Jing CHEN; Rui-feng DING

    2014-01-01

    Based on the work in Ding and Ding (2008), we develop a modified stochastic gradient (SG) parameter estimation algorithm for a dual-rate Box-Jenkins model by using an auxiliary model. We simplify the complex dual-rate Box-Jenkins model to two finite impulse response (FIR) models, present an auxiliary model to estimate the missing outputs and the unknown noise variables, and compute all the unknown parameters of the system with colored noises. Simulation results indicate that the proposed method is effective.
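
    The generic SG update that underlies such estimators corrects the parameter vector along the regressor direction with a normalized step size; the sketch below is this textbook form, not the paper's dual-rate auxiliary-model variant:

    ```python
    import numpy as np

    def sg_fir(u, y, order=4, mu0=1.0):
        """Stochastic-gradient estimation of FIR coefficients theta in
        y(t) = phi(t)' theta + v(t), with a 1/r step size for convergence."""
        theta = np.zeros(order)
        r = 1.0
        for t in range(order, len(y)):
            phi = u[t - order + 1:t + 1][::-1]      # [u(t), u(t-1), ..., u(t-order+1)]
            r += phi @ phi                          # running normalization
            theta += (mu0 / r) * phi * (y[t] - phi @ theta)
        return theta

    rng = np.random.default_rng(0)
    true_theta = np.array([0.8, -0.5, 0.3, 0.1])
    u = rng.normal(size=2000)
    y = np.convolve(u, true_theta)[: len(u)] + 0.05 * rng.normal(size=len(u))
    print(sg_fir(u, y))   # slowly approaches true_theta
    ```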

  5. Methodology, models and algorithms in thermographic diagnostics

    CERN Document Server

    Živčák, Jozef; Madarász, Ladislav; Rudas, Imre J

    2013-01-01

    This book presents the methodology and techniques of thermographic applications, focusing primarily on medical thermography implemented for parametrizing the diagnostics of the human body. The first part of the book describes the basics of infrared thermography, the possibilities of thermographic diagnostics and the physical nature of thermography. The second half covers tools of intelligent engineering applied to the solving of selected applications and projects. Thermographic diagnostics was applied to the problems of paraplegia, tetraplegia and carpal tunnel syndrome (CTS). The results of the research activities were produced in cooperation with four projects within the Ministry of Education, Science, Research and Sport of the Slovak Republic entitled Digital control of complex systems with two degrees of freedom, Progressive methods of education in the area of control and modeling of complex object oriented systems on aircraft turbocompressor engines, Center for research of control of te...

  6. Computational modeling of red blood cells: A symplectic integration algorithm

    Science.gov (United States)

    Schiller, Ulf D.; Ladd, Anthony J. C.

    2010-03-01

    Red blood cells can undergo shape transformations that impact the rheological properties of blood. Computational models have to account for this deformability, and red blood cells are often modeled as elastically deformable objects. We present a symplectic integration algorithm for deformable objects. The surface is represented by a set of marker points obtained by surface triangulation, along with a set of fiber vectors that describe the orientation of the material plane. The various elastic energies are formulated in terms of these variables and the equations of motion are obtained by exact differentiation of a discretized Hamiltonian. The integration algorithm preserves the Hamiltonian structure and leads to highly accurate energy conservation, hence the method is expected to be more stable than conventional finite element methods. We apply the algorithm to simulate the shape dynamics of red blood cells.
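
    The paper's integrator is obtained by exact differentiation of a discretized Hamiltonian; as a generic illustration of why symplectic schemes conserve energy so well, here is the standard velocity-Verlet method (not the authors' marker-point scheme):

    ```python
    import numpy as np

    def velocity_verlet(x, v, force, mass, dt, steps):
        """Symplectic velocity-Verlet integration of dx/dt = v,
        m dv/dt = force(x). Preserves the Hamiltonian structure and
        gives excellent long-time energy behaviour."""
        f = force(x)
        for _ in range(steps):
            v += 0.5 * dt * f / mass      # half kick
            x += dt * v                   # drift
            f = force(x)
            v += 0.5 * dt * f / mass      # half kick
        return x, v

    # harmonic oscillator check: energy should stay very close to 0.5
    x, v = np.array([1.0]), np.array([0.0])
    x, v = velocity_verlet(x, v, lambda q: -q, 1.0, 0.05, 10_000)
    print(x, v, 0.5 * (v**2 + x**2))
    ```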

  7. An Efficient Cluster Algorithm for CP(N-1) Models

    CERN Document Server

    Beard, B B; Riederer, S; Wiese, U J

    2005-01-01

    We construct an efficient cluster algorithm for ferromagnetic SU(N)-symmetric quantum spin systems. Such systems provide a new regularization for CP(N-1) models in the framework of D-theory, which is an alternative non-perturbative approach to quantum field theory formulated in terms of discrete quantum variables instead of classical fields. Despite several attempts, no efficient cluster algorithm has been constructed for CP(N-1) models in the standard formulation of lattice field theory. In fact, there is even a no-go theorem that prevents the construction of an efficient Wolff-type embedding algorithm. We present various simulations for different correlation lengths, couplings and lattice sizes. We have simulated correlation lengths up to 250 lattice spacings on lattices as large as 640x640 and we detect no evidence for critical slowing down.

  8. Calibration of microscopic traffic simulation models using metaheuristic algorithms

    Directory of Open Access Journals (Sweden)

    Miao Yu

    2017-06-01

    Full Text Available This paper presents several metaheuristic algorithms for calibrating a microscopic traffic simulation model. The genetic algorithm (GA), Tabu Search (TS), and a combination of the GA and TS (i.e., warmed GA and warmed TS) are implemented and compared. A set of traffic data collected from the I-5 Freeway, Los Angeles, California, is used. Objective functions, built from flow and speed, are defined to minimize the difference between simulated and field traffic data. Several car-following parameters in VISSIM, which can significantly affect the simulation outputs, are selected for calibration. A better match to the field measurements is reached with the GA, TS, and warmed GA and TS than with only the default parameters in VISSIM. Overall, TS performs very well and can be used to calibrate parameters. Combining metaheuristic algorithms clearly performs better and is therefore highly recommended for calibrating microscopic traffic simulation models.
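
    A minimal sketch of GA-based calibration in this spirit, with a toy objective standing in for a VISSIM run (`run_simulation`, `TRUE` and the parameter bounds are illustrative assumptions, not the paper's setup):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy stand-in for a VISSIM run: in practice this would launch the
    # simulator with the candidate car-following parameters and return
    # simulated speeds/flows.
    TRUE = np.array([1.5, 2.0, 0.5])
    def run_simulation(params):
        return 60.0 - np.sum((params - TRUE) ** 2)  # best match at TRUE

    FIELD_SPEED = 60.0
    def objective(params):
        return abs(run_simulation(params) - FIELD_SPEED)

    def genetic_algorithm(obj, lo, hi, pop=30, gens=60, mut=0.1):
        P = rng.uniform(lo, hi, size=(pop, len(lo)))
        for _ in range(gens):
            fit = np.array([obj(p) for p in P])
            elite = P[np.argsort(fit)][: pop // 2]             # selection
            pairs = rng.integers(0, len(elite), size=(pop - len(elite), 2))
            children = elite[pairs].mean(axis=1)               # crossover
            children += mut * (hi - lo) * rng.standard_normal(children.shape)
            P = np.clip(np.vstack([elite, children]), lo, hi)  # mutation
        return P[np.argmin([obj(p) for p in P])]

    lo, hi = np.zeros(3), np.full(3, 5.0)
    print(genetic_algorithm(objective, lo, hi))  # should approach TRUE
    ```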

  9. Control of the Damped, Driven Pendulum, in both Numerical Models and Physical Apparatus to develop algorithms appropriate to the control chaotic formation of Taylor Vortex Pairs in Modified Taylor-Couette Flow

    Science.gov (United States)

    Douglass, Eric; Zhao, Yunjie; Hill, Lucas; Brenman, David; Olsen, Thomas; Wiener, Richard

    2008-11-01

    Chaos has been observed in the formation of Taylor Vortex pairs in Modified Taylor-Couette flow with hourglass geometry. Control of chaos has been demonstrated in this system employing the RPF algorithm. Seeking alternative algorithms, we are implementing the OGY algorithm in a numerical model of a damped, driven mechanical pendulum and in a physical apparatus. We report on both, and on future plans for the Modified Taylor-Couette system. Wiener et al, Phys. Rev. E 55, 5489 (1997). Rollins et al, Phys. Rev. E 47, R780 (1993). Wiener et al, Phys. Rev. Lett. 83, 2340 (1999). E. Ott, C. Grebogi, & J. A. Yorke, Phys. Rev. Lett. 64, 1196 (1990). G. L. Baker, Am. J. Phys. 63, 832 (1995). J. A. Blackburn et al, Rev. Sci. Instr. 60, 422 (1989).
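
    A minimal numerical model of the damped, driven pendulum of the kind used in such control studies; the parameters below are classic chaotic-regime textbook values, not those of the authors' apparatus:

    ```python
    import numpy as np

    def pendulum_rhs(t, s, gamma=0.5, amp=1.2, omega=2/3):
        """Damped, driven pendulum theta'' = -gamma*theta' - sin(theta)
        + amp*cos(omega*t), written as a first-order system."""
        theta, v = s
        return np.array([v, -gamma * v - np.sin(theta) + amp * np.cos(omega * t)])

    def rk4_step(f, t, s, dt):
        k1 = f(t, s)
        k2 = f(t + dt / 2, s + dt / 2 * k1)
        k3 = f(t + dt / 2, s + dt / 2 * k2)
        k4 = f(t + dt, s + dt * k3)
        return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    s, t, dt = np.array([0.2, 0.0]), 0.0, 0.01
    for _ in range(100_000):   # long run; chaotic for these parameters
        s = rk4_step(pendulum_rhs, t, s, dt)
        t += dt
    print(s)
    ```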

  10. Numerical algorithm of distributed TOPKAPI model and its application

    Directory of Open Access Journals (Sweden)

    Deng Peng

    2008-12-01

    Full Text Available The TOPKAPI (TOPographic Kinematic APproximation and Integration) model is a physically based rainfall-runoff model derived from the integration in space of the kinematic wave model. In the TOPKAPI model, rainfall-runoff and runoff routing processes are described by three nonlinear reservoir differential equations that are structurally similar and describe different hydrological and hydraulic processes. Equations are integrated over grid cells that describe the geometry of the catchment, leading to a cascade of nonlinear reservoir equations. To improve the model's computational precision, this paper provides the general form of these equations and describes their solution by means of a numerical algorithm, the variable-step fourth-order Runge-Kutta algorithm. To assess the quality of the comprehensive numerical algorithm, this paper presents a case study application to the Buliu River Basin, which has an area of 3 310 km2, using a DEM (digital elevation model) grid with a resolution of 1 km. The results show that the variable-step fourth-order Runge-Kutta algorithm for nonlinear reservoir equations is a good approximation of subsurface flow in the soil matrix, overland flow over the slopes, and surface flow in the channel network, allowing us to retain the physical properties of the original equations at scales ranging from a few meters to 1 km.
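
    The nonlinear reservoir equations have the generic form dV/dt = i(t) - C*V**b; a step-doubling variable-step fourth-order Runge-Kutta in that spirit (the coefficients are illustrative, not TOPKAPI's calibrated values):

    ```python
    def reservoir_rhs(t, V, inflow=2.0, C=0.8, b=1.67):
        """Generic nonlinear reservoir: storage V drains as C*V**b."""
        return inflow - C * max(V, 0.0) ** b

    def rk4(f, t, y, h):
        k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
        k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
        return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

    def adaptive_rk4(f, t, y, t_end, h=0.1, tol=1e-6):
        """Variable-step RK4 via step doubling: compare one full step
        with two half steps and adapt h to the estimated local error."""
        while t < t_end:
            h = min(h, t_end - t)
            y_full = rk4(f, t, y, h)
            y_half = rk4(f, t + h/2, rk4(f, t, y, h/2), h/2)
            err = abs(y_half - y_full)
            if err <= tol:                # accept the more accurate result
                t, y = t + h, y_half
            h *= 0.9 * min(4.0, max(0.1, (tol / (err + 1e-15)) ** 0.2))
        return y

    print(adaptive_rk4(reservoir_rhs, 0.0, 0.5, 24.0))  # nears equilibrium
    ```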

  11. Hammerstein Model Based RLS Algorithm for Modeling the Intelligent Pneumatic Actuator (IPA) System

    Directory of Open Access Journals (Sweden)

    Siti Fatimah Sulaiman

    2017-08-01

    Full Text Available An Intelligent Pneumatic Actuator (IPA) system is highly nonlinear, which makes precise position control of the actuator difficult to achieve. Thus, it is appropriate to model the system using a nonlinear approach, because a linear model is sometimes not sufficient to represent the nonlinearity of the system in the real process. This study presents a new model of an IPA system using a Hammerstein model with a Recursive Least Squares (RLS) algorithm. The Hammerstein model is one of the block-structured nonlinear models often used to model a nonlinear system; it consists of a static nonlinear block followed by a linear block of dynamic elements. In this study, the static nonlinear block was represented by the dead-zone of the pneumatic valve, while the linear block was represented by the dynamic elements of the IPA system. RLS was employed as the main algorithm to estimate the parameters of the Hammerstein model. The validity of the proposed model was verified by conducting a real-time experiment. The proposed Hammerstein model satisfied all of the criteria outlined in the system identification procedure, providing a stable system, a higher best fit, a lower loss function and a lower final prediction error than the linear model developed previously. The performance of the proposed Hammerstein model in controlling the IPA's positioning system is also considered good. Thus, this newly developed Hammerstein model is sufficient to represent the IPA system utilized in this study.
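
    A minimal sketch of RLS identification for a Hammerstein structure, assuming a known dead-zone width for the static block; the model orders, forgetting factor and dead-zone width are illustrative, not the study's identified values:

    ```python
    import numpy as np

    def deadzone(u, width=0.2):
        """Static nonlinear block: valve dead-zone."""
        return np.sign(u) * np.maximum(np.abs(u) - width, 0.0)

    def rls_hammerstein(u, y, na=2, nb=2, lam=0.99):
        """RLS on the linear dynamic block, with the regressor built
        from the dead-zone-transformed input x = f(u):
        y(t) = -a1*y(t-1)-...-a_na*y(t-na) + b1*x(t-1)+...+b_nb*x(t-nb)."""
        x = deadzone(u)
        n = na + nb
        theta = np.zeros(n)
        P = 1e4 * np.eye(n)                        # large initial covariance
        for t in range(max(na, nb), len(y)):
            phi = np.concatenate([-y[t - na:t][::-1], x[t - nb:t][::-1]])
            k = P @ phi / (lam + phi @ P @ phi)    # gain vector
            theta += k * (y[t] - phi @ theta)      # update estimate
            P = (P - np.outer(k, phi) @ P) / lam   # covariance update
        return theta
    ```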

  12. Parallel flow accumulation algorithms for graphical processing units with application to RUSLE model

    Science.gov (United States)

    Sten, Johan; Lilja, Harri; Hyväluoma, Jari; Westerholm, Jan; Aspnäs, Mats

    2016-04-01

    Digital elevation models (DEMs) are widely used in the modeling of surface hydrology, which typically includes the determination of flow directions and flow accumulation. The use of high-resolution DEMs increases the accuracy of flow accumulation computation, but as a drawback, the computational time may become excessively long if large areas are analyzed. In this paper we investigate the use of graphical processing units (GPUs) for efficient flow accumulation calculations. We present two new parallel flow accumulation algorithms based on dependency transfer and topological sorting and compare them to previously published flow-transfer and indegree-based algorithms. We benchmark the GPU implementations against industry standards, ArcGIS and SAGA. With the flow-transfer D8 flow routing model and binary input data, a speedup of 19 is achieved compared to ArcGIS and 15 compared to SAGA. We show that on GPUs the topological sort-based flow accumulation algorithm leads on average to a speedup by a factor of 7 over the flow-transfer algorithm. Thus a total speedup of the order of 100 is achieved. We test the algorithms by applying them to the Revised Universal Soil Loss Equation (RUSLE) erosion model. For this purpose we present parallel versions of the slope, LS factor and RUSLE algorithms and show that the RUSLE erosion results for an area of 12 km x 24 km containing 72 million cells can be calculated in less than a second. Since flow accumulation is needed in many hydrological models, the developed algorithms may find use in many other applications than RUSLE modeling. The algorithm based on topological sorting is particularly promising for dynamic hydrological models where flow accumulations are repeatedly computed over an unchanged DEM.
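
    A serial sketch of the topological-sorting idea (Kahn's algorithm over the D8 receiver graph); the paper's contribution is the GPU parallelization, which this plain-Python version does not attempt:

    ```python
    from collections import deque

    def flow_accumulation(receiver):
        """Flow accumulation by topological sorting. `receiver[i]` is
        the downstream cell of cell i under the D8 routing model, or
        -1 for an outlet."""
        n = len(receiver)
        acc = [1.0] * n                      # each cell contributes itself
        indegree = [0] * n
        for r in receiver:
            if r >= 0:
                indegree[r] += 1
        queue = deque(i for i in range(n) if indegree[i] == 0)  # ridge cells
        while queue:
            i = queue.popleft()
            r = receiver[i]
            if r >= 0:
                acc[r] += acc[i]             # pass accumulated flow downstream
                indegree[r] -= 1
                if indegree[r] == 0:
                    queue.append(r)
        return acc

    # tiny example: cells 0 and 3 drain into 1, which drains into outlet 2
    print(flow_accumulation([1, 2, -1, 1]))  # [1.0, 3.0, 4.0, 1.0]
    ```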

  13. Parallel and Distributed Genetic Algorithm with Multiple-Objectives to Improve and Develop of Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Khalil Ibrahim Mohammad Abuzanouneh

    2016-05-01

    Full Text Available In this paper we address the timetabling problem, i.e., the scheduling of university courses: a range of time periods and a group of instructors must be assigned to a set of lectures so that a set of hard constraints is satisfied and the cost of violating the remaining constraints is minimized. This is an NP-hard problem, which means, informally, that the number of operations necessary to solve it grows exponentially with the size of the problem. Timetable construction is one of the most complicated problems facing many universities, and its difficulty grows with the size of the university's data and with the overlap of disciplines between colleges. When a traditional evolutionary algorithm (EA) is unable to provide satisfactory results, a distributed EA (dEA), which deploys the population on distributed systems, offers an opportunity to solve extremely high-dimensional problems through distributed coevolution using a divide-and-conquer mechanism. Further, the distributed environment allows a dEA to maintain population diversity, thereby avoiding local optima and also facilitating multi-objective search. By employing different distributed models to parallelize the processing of EAs, we designed a genetic algorithm suitable for the university environment and the constraints faced when building a timetable for lectures.

  14. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems. © 2013 Elsevier Inc.
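
    A basic nested sampling loop in this spirit, with simple rejection sampling standing in for the paper's HMC-based constrained sampler (the toy likelihood and prior are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def nested_sampling(loglike, prior_sample, n_live=100, n_iter=1000):
        """Basic nested sampling estimate of the Bayesian evidence Z.
        The worst live point is repeatedly replaced by a new prior
        sample subject to the hard likelihood constraint."""
        live = np.array([prior_sample() for _ in range(n_live)])
        live_ll = np.array([loglike(p) for p in live])
        log_z = -np.inf
        log_w = np.log(1.0 - np.exp(-1.0 / n_live))   # shell weight
        for i in range(n_iter):
            worst = np.argmin(live_ll)
            log_x = -i / n_live                        # log prior volume
            log_z = np.logaddexp(log_z, log_x + log_w + live_ll[worst])
            threshold = live_ll[worst]
            while True:                                # constrained replacement
                p = prior_sample()
                if loglike(p) > threshold:
                    break
            live[worst], live_ll[worst] = p, loglike(p)
        return log_z

    # toy problem: standard Gaussian likelihood, uniform prior on [-5, 5]
    loglike = lambda x: -0.5 * float(x) ** 2 - 0.5 * np.log(2 * np.pi)
    prior = lambda: rng.uniform(-5, 5)
    print(nested_sampling(loglike, prior))  # log Z ~ log(0.1) ~ -2.3
    ```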

  15. Algorithm for automatic forced spirometry quality assessment: technological developments.

    Science.gov (United States)

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.

  16. Algorithm for automatic forced spirometry quality assessment: technological developments.

    Directory of Open Access Journals (Sweden)

    Umberto Melia

    Full Text Available We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.

  17. Epidemic Processes on Complex Networks: Modelling, Simulation and Algorithms

    NARCIS (Netherlands)

    Van de Bovenkamp, R.

    2015-01-01

    Local interactions on a graph will lead to global dynamic behaviour. In this thesis we focus on two types of dynamic processes on graphs: the Susceptible-Infected-Susceptible (SIS) virus spreading model, and gossip-style epidemic algorithms. The largest part of this thesis is devoted to the SIS model.
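
    A minimal discrete-time SIS simulation on a graph, as a generic illustration of the model studied in the thesis (the parameters and graph are illustrative):

    ```python
    import random

    random.seed(3)

    def sis_step(adj, infected, beta=0.3, delta=0.1):
        """One discrete-time step of the SIS model: each infected node
        infects each susceptible neighbour with probability beta, then
        recovers (becomes susceptible again) with probability delta."""
        new = set(infected)
        for i in infected:
            for j in adj[i]:
                if j not in infected and random.random() < beta:
                    new.add(j)
            if random.random() < delta:
                new.discard(i)
        return new

    # small ring graph seeded with one infected node
    n = 20
    adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    state = {0}
    for t in range(50):
        state = sis_step(adj, state)
    print(f"infected after 50 steps: {len(state)}")
    ```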

  18. Worm Algorithm for CP(N-1) Model

    CERN Document Server

    Rindlisbacher, Tobias

    2017-01-01

    The CP(N-1) model in 2D is an interesting toy model for 4D QCD as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost for simulating 2D CP(N-1) on the lattice is much lower than that for simulating 4D QCD. However, to our knowledge, no efficient algorithm for simulating the lattice CP(N-1) model has been tested so far, which also works at finite density. To this end we propose a new type of worm algorithm which is appropriate to simulate the lattice CP(N-1) model in a dual, flux-variables based representation, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves where a defect is just moved from one lattice site to the next, our algorithm additionally allows for worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CP(N-1) l...

  19. Worm algorithm for the CP(N-1) model

    Science.gov (United States)

    Rindlisbacher, Tobias; de Forcrand, Philippe

    2017-05-01

    The CP(N-1) model in 2D is an interesting toy model for 4D QCD as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost for simulating 2D CP(N-1) on the lattice is much lower than that for simulating 4D QCD. However, to our knowledge, no efficient algorithm for simulating the lattice CP(N-1) model for N > 2 has been tested so far, which also works at finite density. To this end we propose a new type of worm algorithm which is appropriate to simulate the lattice CP(N-1) model in a dual, flux-variables based representation, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves where a defect is just moved from one lattice site to the next, our algorithm additionally allows for worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CP(N-1) lattice actions and exhibit marked differences in their approach to the continuum limit.

  20. Evolving the Topology of Hidden Markov Models using Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Thomsen, Réne

    2002-01-01

    Hidden Markov models (HMM) are widely used for speech recognition and have recently gained a lot of attention in the bioinformatics community because of their ability to capture the information buried in biological sequences. Usually, heuristic algorithms such as Baum-Welch are used to estimate the model parameters...

  1. Developing an atrial activity-based algorithm for detection of atrial fibrillation.

    Science.gov (United States)

    Ladavich, Steven; Ghoraani, Behnaz

    2014-01-01

    In this study we propose a novel atrial activity-based method for atrial fibrillation (AF) identification that detects the absence of normal sinus rhythm (SR) P-waves from the surface ECG. The proposed algorithm extracts nine features from P-waves during SR and develops a statistical model to describe the distribution of the features. The Expectation-Maximization algorithm is applied to a training set to create a multivariate Gaussian Mixture Model (GMM) of the feature space. This model is used to identify P-wave absence (PWA) and, in turn, AF. An optional post-processing stage, which takes a majority vote of successive outputs, is applied to improve classifier performance. The algorithm was tested on 20 records in the MIT-BIH Atrial Fibrillation Database. Classification combining seven beats showed a sensitivity of 99.28% and a specificity of 90.21%. The presented algorithm has a classification performance comparable to current heart-rate-based algorithms, yet is rate-independent and capable of making an AF determination in a few beats.
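
    A sketch of the GMM-based detection idea, using synthetic stand-ins for the nine P-wave features (the real pipeline extracts them from the ECG; thresholding at the 1st percentile is an assumption, not the paper's trained decision rule):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(4)

    # Illustrative stand-ins: rows are 9-dimensional P-wave feature
    # vectors measured during normal sinus rhythm (training) and
    # during an unknown rhythm (test).
    train_features = rng.normal(0.0, 1.0, size=(500, 9))
    test_features = rng.normal(2.5, 1.0, size=(50, 9))   # atypical beats

    # Fit a Gaussian Mixture Model of the SR P-wave feature space
    # (the EM algorithm runs inside .fit), then flag beats whose
    # likelihood under the model is too low as "P-wave absent".
    gmm = GaussianMixture(n_components=3, covariance_type='full',
                          random_state=0).fit(train_features)
    threshold = np.percentile(gmm.score_samples(train_features), 1)
    pwa = gmm.score_samples(test_features) < threshold   # P-wave absence
    print(f"{pwa.mean():.0%} of test beats flagged as P-wave absent")
    ```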

  2. Genetic Algorithms for Development of New Financial Products

    Directory of Open Access Journals (Sweden)

    Eder Oliveira Abensur

    2007-06-01

    Full Text Available New Product Development (NPD) is recognized as a fundamental activity that has a relevant impact on the performance of companies. Despite the relevance of the financial market there is a lack of work on new financial product development. The aim of this research is to propose the use of Genetic Algorithms (GA) as an alternative procedure for evaluating the most favorable combination of variables for the product launch. The paper focuses on: (i) determining the essential variables of the financial product studied (an investment fund); (ii) determining how to evaluate the success of a new investment fund launch; and (iii) how GA can be applied to the financial product development problem. The proposed framework was tested using 4 years of real data from the Brazilian financial market and the results suggest that this is an innovative development methodology and useful for designing complex financial products with many attributes.

  3. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    DEFF Research Database (Denmark)

    Frydendall, Jan; Brandt, J.; Christensen, J. H.

    2009-01-01

    A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark....... In this paper, the algorithm and the results from experiments designed to find the optimal setup of the algorithm are described. The algorithm has been developed and optimized via eight different experiments where the results from different model setups have been tested against measurements from the EMEP...... configuration of the data assimilation algorithm, were found. The data assimilation algorithm will in the future be used in the operational THOR integrated air pollution forecast system, which includes the DEOM....
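
    A generic statistical (optimal) interpolation analysis step of the kind such assimilation schemes are built on (the covariances and toy values are illustrative, not DEOM's configuration):

    ```python
    import numpy as np

    def optimal_interpolation(x_b, y, H, B, R):
        """Statistical interpolation analysis step: blend a model
        background x_b with observations y using error covariances,
            x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^{-1},
        where H maps model space to observation space."""
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return x_b + K @ (y - H @ x_b)

    # toy example: 3 model grid points, observations at 2 of them
    x_b = np.array([10.0, 12.0, 14.0])         # background concentrations
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])            # obs operator
    B = 2.0 * np.exp(-np.abs(np.subtract.outer(range(3), range(3))))
    R = 0.5 * np.eye(2)                        # observation error covariance
    y = np.array([11.0, 13.0])
    print(optimal_interpolation(x_b, y, H, B, R))
    ```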

  4. Study on model and algorithm of inventory routing problem

    Science.gov (United States)

    Wan, Fengjiao

    The vehicle routing problem (VRP) is an important research topic in logistics systems. There have been many studies of the VRP, but they do not consider inventory cost, so their conclusions do not fully reflect reality. This paper studies the inventory routing problem (IRP) and uses a single objective function to describe these two conflicting problems, both of which are very important in logistics optimization. The paper establishes models of the inventory routing problem for a single client and for many clients. An iterative optimization algorithm is presented to solve the models. According to the model, we can determine the optimal quantity, frequency and route of delivery. Finally, an example is given to illustrate the effectiveness of the model and algorithm.

  5. Model-based Bayesian signal extraction algorithm for peripheral nerves

    Science.gov (United States)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model-based method that operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on these test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to threefold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of

  6. The mathematical model realization algorithm of high voltage cable

    OpenAIRE

    2006-01-01

    When implementing the algorithm of a mathematical model, it is very important to know the order in which the necessary relations are computed and how they are represented. Depending on the loads or signal sources connected at selected points of the mathematical model, it is very important to know how to formulate the equations at each point so that all unknown variables at that point can be determined. The number of equations describing a point must coincide with the number of unknown variables, and the matrix which describes the factor...

  7. A Business Intelligence Model to Predict Bankruptcy using Financial Domain Ontology with Association Rule Mining Algorithm

    CERN Document Server

    Martin, A; Venkatesan, Dr V Prasanna

    2011-01-01

    Today, in every organization, financial analysis provides the basis for understanding and evaluating the results of business operations and for communicating how well the business is doing. This means that organizations can control the operational activities primarily related to corporate finance. One way of doing this is through bankruptcy prediction analysis. This paper develops an ontological model from the financial information of an organization by analyzing the semantics of the financial statements of a business. One of the best bankruptcy prediction models is the Altman Z-score model, which uses financial ratios to predict bankruptcy. From the financial ontological model, the relations between financial data are discovered using a data mining algorithm. By combining the financial domain ontological model with an association rule mining algorithm and the Z-score model, a new business intelligence model is developed to predict bankruptcy.
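
    For reference, the classic Altman Z-score used as a component of the proposed model can be computed directly from five financial ratios (the 1968 public-manufacturer coefficients; the example figures below are invented):

    ```python
    def altman_z(working_capital, retained_earnings, ebit,
                 market_equity, sales, total_assets, total_liabilities):
        """Classic Altman Z-score (1968 model for public manufacturers):
        Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5."""
        x1 = working_capital / total_assets
        x2 = retained_earnings / total_assets
        x3 = ebit / total_assets
        x4 = market_equity / total_liabilities
        x5 = sales / total_assets
        z = 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5
        zone = "safe" if z > 2.99 else "distress" if z < 1.81 else "grey"
        return z, zone

    print(altman_z(working_capital=50, retained_earnings=80, ebit=30,
                   market_equity=400, sales=500, total_assets=600,
                   total_liabilities=200))   # (~2.49, 'grey')
    ```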

  8. Crime Busting Model Based on Dynamic Ranking Algorithms

    Directory of Open Access Journals (Sweden)

    Yang Cao

    2013-01-01

    Full Text Available This paper proposes a crime-busting model with two dynamic ranking algorithms to estimate the likelihood that an individual is a suspect and the possibility that a suspect is a leader in a complex social network. Notably, in order to obtain a priority list of suspects, an advanced network mining approach with a dynamic cumulative nominating algorithm is adopted, which is computationally far less expensive than most other topology-based approaches. Our method also greatly increases the accuracy of the solution through the enhancement of semantic learning filtering. Moreover, another dynamic algorithm, based on node contraction, is presented to help identify the leader among the conspirators. Test results are given to verify the theoretical results and show good performance on both small and large datasets.

  9. Threat Modeling-Oriented Attack Path Evaluating Algorithm

    Institute of Scientific and Technical Information of China (English)

    LI Xiaohong; LIU Ran; FENG Zhiyong; HE Ke

    2009-01-01

    In order to evaluate all attack paths in a threat tree, based on threat modeling theory, a weight distribution algorithm for the root node of a threat tree is designed, which computes the threat coefficients of leaf nodes in two ways: the threat occurrence possibility and the degree of damage. Besides, an algorithm for searching attack paths was also obtained in accordance with its definition. Finally, an attack path evaluation system was implemented which can output the threat coefficients of the leaf nodes in a target threat tree, the weight distribution information, and the attack paths. An example threat tree is given to verify the effectiveness of the algorithms.
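
    One plausible reading of the leaf-coefficient idea, sketched with an invented toy tree: each leaf combines occurrence possibility with damage degree, and an attack path is scored as an AND over its leaves (the combination scheme is an assumption, not the paper's exact weight distribution):

    ```python
    # Illustrative threat tree: the root is an OR over attack paths, and
    # each path is an AND over leaf threats.
    leaves = {
        "steal_badge":    {"possibility": 0.3, "damage": 0.6},
        "tailgate":       {"possibility": 0.5, "damage": 0.6},
        "crack_password": {"possibility": 0.2, "damage": 0.9},
    }
    paths = [("steal_badge", "crack_password"),
             ("tailgate", "crack_password")]

    def leaf_coefficient(name):
        leaf = leaves[name]
        return leaf["possibility"] * leaf["damage"]   # assumed combination

    def path_score(path):
        score = 1.0
        for name in path:            # AND node: every step must succeed
            score *= leaf_coefficient(name)
        return score

    for p in sorted(paths, key=path_score, reverse=True):
        print(p, round(path_score(p), 4))
    ```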

  10. Gray Cerebrovascular Image Skeleton Extraction Algorithm Using Level Set Model

    Directory of Open Access Journals (Sweden)

    Jian Wu

    2010-06-01

    Full Text Available The ambiguity and complexity of medical cerebrovascular images make the skeletons obtained by conventional skeleton algorithms discontinuous, sensitive at weak edges, poorly robust and prone to burrs. This paper proposes a cerebrovascular image skeleton extraction algorithm based on the Level Set model, using a Euclidean distance field and improved gradient vector flow to obtain two different energy functions. The first energy function controls the acquisition of the topological nodes from which the skeleton curve begins. The second energy function controls the extraction of the skeleton surface. This algorithm avoids locating and classifying the skeleton connection points that guide skeleton extraction. Because all of its parameters are obtained by analysis and reasoning, no manual intervention is needed.

  11. Time-Based Dynamic Trust Model Using Ant Colony Algorithm

    Institute of Scientific and Technical Information of China (English)

    TANG Zhuo; LU Zhengding; LI Kai

    2006-01-01

    Trust in a distributed environment is uncertain and varies with numerous factors. This paper introduces TDTM, a model for time-based dynamic trust. Every entity in the distributed environment is endowed with a trust vector, which records the trust intensity between this entity and the others. The trust intensity is dynamic due to time and to the inter-operation between two entities; a method is proposed to quantify this change based on the ideas of the ant colony algorithm, and an algorithm for the transfer of trust relations is also proposed. Furthermore, this paper analyses the influence on the trust intensity among all entities that is caused by a change of trust intensity between two entities, and presents an algorithm to resolve the problem. Finally, we show the process of trust change caused by the lapse of time and by inter-operation through an instance.
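
    A toy illustration of an ant-colony-style trust intensity update, with evaporation for the lapse of time and reinforcement for inter-operations (the update rule is illustrative, not TDTM's exact quantification):

    ```python
    def update_trust(trust, rho=0.1, reward=1.0):
        """Ant-colony-style trust update: trust decays over time like
        evaporating pheromone and is reinforced by successful
        inter-operations (reward in [0, 1])."""
        return (1.0 - rho) * trust + rho * reward

    # trust decays when no interaction occurs (reward = 0) ...
    t = 0.8
    for _ in range(5):
        t = update_trust(t, rho=0.1, reward=0.0)
    print(f"after idle time: {t:.3f}")

    # ... and recovers with positive inter-operations
    for _ in range(5):
        t = update_trust(t, rho=0.1, reward=1.0)
    print(f"after cooperation: {t:.3f}")
    ```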

  12. Development of a validation algorithm for 'present on admission' flagging

    Directory of Open Access Journals (Sweden)

    Cheng Diana

    2009-12-01

    Full Text Available Abstract Background The use of routine hospital data for understanding patterns of adverse outcomes has been limited in the past by the fact that pre-existing and post-admission conditions have been indistinguishable. The use of a 'Present on Admission' (or POA) indicator to distinguish pre-existing or co-morbid conditions from those arising during the episode of care has been advocated in the US for many years as a tool to support quality assurance activities and improve the accuracy of risk adjustment methodologies. The USA, Australia and Canada now all assign a flag to indicate the timing of onset of diagnoses. For quality improvement purposes, it is the 'not-POA' diagnoses (that is, those acquired in hospital) that are of interest. Methods Our objective was to develop an algorithm for assessing the validity of assignment of 'not-POA' flags. We undertook expert review of the International Classification of Diseases, 10th Revision, Australian Modification (ICD-10-AM) to identify conditions that could not be plausibly hospital-acquired. The resulting computer algorithm was tested against all diagnoses flagged as complications in the Victorian (Australia) Admitted Episodes Dataset, 2005/06. Measures reported include rates of appropriate assignment of the new Australian 'Condition Onset' flag by ICD chapter, and patterns of invalid flagging. Results Of 18,418 diagnosis codes reviewed, 93.4% (n = 17,195) reflected agreement on status for flagging by at least 2 of 3 reviewers (including 64.4% unanimous agreement; Fleiss' Kappa: 0.61). In tests of the new algorithm, 96.14% of all hospital-acquired diagnosis codes flagged were found to be valid in the Victorian records analysed. A lower proportion of individual codes was judged to be acceptably flagged (76.2%), but this reflected a high proportion of codes used ... Conclusion An indicator variable about the timing of occurrence of diagnoses can greatly expand the use of routinely coded data for hospital quality

  13. Developing a paradigm of drug innovation: an evaluation algorithm.

    Science.gov (United States)

    Caprino, Luciano; Russo, Pierluigi

    2006-11-01

    Assessment of drug innovation is a burning issue because it involves so many different perspectives, mainly those of patients, decision- and policy-makers, regulatory authorities and pharmaceutical companies. Moreover, the innovative value of a new medicine is usually an intrinsic property of the compound, but it also depends on the specific context in which the medicine is introduced and on the availability of other medicines for treating the same clinical condition. Thus, a model designed to assess drug innovation should be able to capture the intrinsic properties of a compound and/or the modification of its innovative value with time. Here we describe the innovation assessment algorithm (IAA), a simulation model for assessing drug innovation. IAA provides a score of drug innovation by assessing information generated during both the pre-marketing and the post-marketing authorization phases.

  14. Tuning, Diagnostics & Data Preparation for Generalized Linear Models Supervised Algorithm in Data Mining Technologies

    Directory of Open Access Journals (Sweden)

    Sachin Bhaskar

    2015-07-01

    Full Text Available Data mining techniques are the result of a long process of research and product development. In data mining, large amounts of data are searched to find trends and patterns that go beyond simple analysis; complex mathematical algorithms are used to segment the data and to evaluate the probability of future events. Each data mining model is produced by a specific algorithm, and some data mining problems are best solved by combining more than one algorithm. Data mining technologies can be used through Oracle. The Generalized Linear Models (GLM) algorithm is used in the Regression and Classification Oracle Data Mining functions. GLM is one of the most popular statistical techniques for linear modelling, and it is implemented by Oracle Data Mining for regression and binary classification. GLM provides row diagnostics as well as model statistics and extensive coefficient statistics, and it also supports confidence bounds. This paper outlines and analyses the GLM algorithm, which will help in understanding the tuning, diagnostics and data preparation process and the importance of the Regression and Classification supervised Oracle Data Mining functions, which are utilized in marketing, time series prediction, financial forecasting, overall business planning, trend analysis, environmental modelling, biomedical and drug response modelling, etc.

  15. Models of performance of evolutionary program induction algorithms based on indicators of problem difficulty.

    Science.gov (United States)

    Graff, Mario; Poli, Riccardo; Flores, Juan J

    2013-01-01

    Modeling the behavior of algorithms is the realm of evolutionary algorithm theory. From a practitioner's point of view, theory must provide some guidelines regarding which algorithm/parameters to use in order to solve a particular problem. Unfortunately, most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. However, in recent work (Graff and Poli, 2008, 2010), where we developed a method to practically estimate the performance of evolutionary program-induction algorithms (EPAs), we started addressing this issue. The method was quite general; however, it suffered from some limitations: it required the identification of a set of reference problems, it required hand-picking a distance measure in each particular domain, and the resulting models were opaque, typically being linear combinations of 100 features or more. In this paper, we propose a significant improvement of this technique that overcomes the three limitations of our previous method. We achieve this through the use of a novel set of features for assessing problem difficulty for EPAs, which are very general and essentially based on the notion of finite differences. To show the capabilities of our technique and to compare it with our previous performance models, we create models for the same two important classes of problems used in our previous work: symbolic regression on rational functions and Boolean function induction. We model a variety of EPAs. The comparison showed that for the majority of the algorithms and problem classes, the new method produced much simpler and more accurate models than before. To further illustrate the practicality of the technique and its generality (beyond EPAs), we have also used it to predict the performance of both autoregressive models and EPAs on the problem of wind speed forecasting, obtaining simpler and more accurate models that in all cases outperform our previous performance models.

  16. PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.

    Science.gov (United States)

    Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A

    2016-06-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim of significantly shortening the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. Explicit incremental-update algorithm for modeling crystal elasto-viscoplastic response in finite element simulation

    Institute of Scientific and Technical Information of China (English)

    LI Hong-wei; YANG He; SUN Zhi-chao

    2006-01-01

    Computational stability and efficiency are the key problems for numerical modeling of crystal plasticity, and they evidently limit its development and application in finite element (FE) simulation. Since implicit iterative algorithms are inefficient and have difficulty determining initial values, an explicit incremental-update algorithm for the elasto-viscoplastic constitutive relation was developed in the intermediate frame by using the second Piola-Kirchoff (P-K) stress and Green strain. The increments of stress and slip resistance were solved by a calculation loop of sets of linear equations. The reorientation of the crystal as well as the elastic strain can be obtained from a polar decomposition of the elastic deformation gradient. The user material subroutine VUMAT was developed to combine the crystal elasto-viscoplastic constitutive model with ABAQUS/Explicit. Numerical studies were performed on a cubic upset model with OFHC material (FCC crystal). The comparison of the numerical results with those obtained by an implicit iterative algorithm and with experiments demonstrates that the explicit algorithm is reliable. Furthermore, the effects of material anisotropy, the rate sensitivity coefficient (RSC) and loading speeds on the deformation were studied. The numerical studies indicate that the explicit algorithm is suitable and efficient for large-deformation analyses where anisotropy due to texture is important.

  18. Develop a Model Component

    Science.gov (United States)

    Ensey, Tyler S.

    2013-01-01

    During my internship at NASA, I was a model developer for Ground Support Equipment (GSE). The purpose of a model developer is to develop and unit test model component libraries (fluid, electrical, gas, etc.). The models are designed to simulate software for GSE (Ground Special Power, Crew Access Arm, Cryo, Fire and Leak Detection System, Environmental Control System (ECS), etc.) before they are implemented into hardware. These models support verifying local control and remote software for End-Item Software Under Test (SUT). The model simulates the physical behavior (function, state, limits and I/O) of each end-item and its dependencies as defined in the Subsystem Interface Table, Software Requirements & Design Specification (SRDS), Ground Integrated Schematic (GIS), and System Mechanical Schematic (SMS). The software of each specific model component is simulated through MATLAB's Simulink program. The intensive model development life cycle is as follows: identify source documents; identify model scope; update schedule; preliminary design review; develop model requirements; update model scope; update schedule; detailed design review; create/modify library components; implement library component references; implement subsystem components; develop a test script; run the test script; develop a user's guide; send the model out for peer review; the model is sent out for verification/validation; if there is empirical data, a validation data package is generated; if there is not, a verification package is generated; the test results are then reviewed; and finally, the user requests accreditation, and a statement of accreditation is prepared. Once each component model is reviewed and approved, the components are integrated into one model. This integrated model is then itself tested, through a test script and autotest, to confirm that all models work together for a single purpose. The component I was assigned, specifically, was a

  19. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Aarle, Wim van, E-mail: wim.vanaarle@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, Willem Jan, E-mail: willemjan.palenstijn@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); De Beenhouwer, Jan, E-mail: jan.debeenhouwer@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Altantzis, Thomas, E-mail: thomas.altantzis@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Bals, Sara, E-mail: sara.bals@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Batenburg, K. Joost, E-mail: joost.batenburg@cwi.nl [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); Mathematical Institute, Leiden University, P.O. Box 9512, NL-2300 RA Leiden (Netherlands); Sijbers, Jan, E-mail: jan.sijbers@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-10-15

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series.

  20. Modeling and performance analysis of GPS vector tracking algorithms

    Science.gov (United States)

    Lashley, Matthew

    This dissertation provides a detailed analysis of GPS vector tracking algorithms and the advantages they have over traditional receiver architectures. Standard GPS receivers use a decentralized architecture that separates the tasks of signal tracking and position/velocity estimation. Vector tracking algorithms combine the two tasks into a single algorithm. The signals from the various satellites are processed collectively through a Kalman filter. The advantages of vector tracking over traditional, scalar tracking methods are thoroughly investigated. A method for making a valid comparison between vector and scalar tracking loops is developed. This technique avoids the ambiguities encountered when attempting to make a valid comparison between tracking loops (which are characterized by noise bandwidths and loop order) and the Kalman filters (which are characterized by process and measurement noise covariance matrices) that are used by vector tracking algorithms. The improvement in performance offered by vector tracking is calculated in multiple different scenarios. Rule of thumb analysis techniques for scalar Frequency Lock Loops (FLL) are extended to the vector tracking case. The analysis tools provide a simple method for analyzing the performance of vector tracking loops. The analysis tools are verified using Monte Carlo simulations. Monte Carlo simulations are also used to study the effects of carrier to noise power density (C/N0) ratio estimation and the advantage offered by vector tracking over scalar tracking. The improvement from vector tracking ranges from 2.4 to 6.2 dB in various scenarios. The difference in the performance of the three vector tracking architectures is analyzed. The effects of using a federated architecture with and without information sharing between the receiver's channels are studied. A combination of covariance analysis and Monte Carlo simulation is used to analyze the performance of the three algorithms. The federated algorithm without

  1. A Software Pattern of the Genetic Algorithm -a Study on Reusable Object Model of Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The Genetic Algorithm (GA) has been a popular research field, but there has been little concern with GA from the viewpoint of software engineering, and this results in a series of problems. In this paper, we extract a GA software pattern, draw a model diagram of the reusable objects, analyze the advantages and disadvantages of the pattern, and give sample code at the end. We are then able to improve the reusability and expansibility of GA. The results make it easier to program a new GA by reusing existing successful operators, thereby reducing the difficulties and workload of programming a GA code, and facilitate GA applications.
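
    A minimal sketch of such a reusable object model: the GA skeleton is fixed while selection, crossover and mutation are interchangeable strategy objects (the class and function names are illustrative, not the paper's):

    ```python
    import random
    from dataclasses import dataclass
    from typing import Callable, List

    random.seed(5)

    @dataclass
    class GeneticAlgorithm:
        """Reusable GA skeleton: operators are pluggable strategies, so
        a new GA is assembled from existing components, not rewritten."""
        fitness: Callable[[List[int]], float]
        select: Callable[[list, list], list]
        crossover: Callable[[list, list], list]
        mutate: Callable[[list], list]

        def run(self, population, generations=100):
            for _ in range(generations):
                scores = [self.fitness(ind) for ind in population]
                parents = self.select(population, scores)
                population = [self.mutate(self.crossover(random.choice(parents),
                                                         random.choice(parents)))
                              for _ in population]
            return max(population, key=self.fitness)

    # interchangeable operators (onemax example: maximize number of 1s)
    def tournament(pop, scores, k=3):
        return [max(random.sample(list(zip(pop, scores)), k),
                    key=lambda t: t[1])[0] for _ in pop]

    def one_point(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def bit_flip(ind, p=0.02):
        return [g ^ (random.random() < p) for g in ind]

    ga = GeneticAlgorithm(fitness=sum, select=tournament,
                          crossover=one_point, mutate=bit_flip)
    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
    print(ga.run(pop))  # should approach all ones
    ```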

  2. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables carries the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps to simplify the set of factors required and improves knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
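
    A sketch of the filter-style (information gain) and embedded (Random Forest importance) rankings on a synthetic stand-in for the permafrost dataset; CFS is omitted, and the data are invented:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(6)

    # Illustrative stand-in: 500 samples, 20 terrain/climate features,
    # binary presence/absence target depending only on the first three.
    X = rng.standard_normal((500, 20))
    y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)

    # Filter-style ranking: information gain of each single predictor
    # with respect to permafrost presence/absence.
    ig = mutual_info_classif(X, y, random_state=0)

    # Embedded ranking: Random Forest importances account for feature
    # interactions while the model is being fit.
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    print("top by information gain:", np.argsort(ig)[::-1][:5])
    print("top by RF importance:  ",
          np.argsort(rf.feature_importances_)[::-1][:5])
    ```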

  3. Genetic algorithm-based multi-objective model for scheduling of linear construction projects

    OpenAIRE

    Senouci, Ahmed B.; Al-Derham, H.R.

    2007-01-01

    This paper presents a genetic algorithm-based multi-objective optimization model for the scheduling of linear construction projects. The model allows construction planners to generate and evaluate optimal/near-optimal construction scheduling plans that minimize both project time and cost. The computations in the present model are organized in three major modules. A scheduling module develops practical schedules for linear construction projects. A cost module computes the project's c...

  4. Improved Marquardt Algorithm for Training Neural Networks for Chemical Process Modeling

    Institute of Scientific and Technical Information of China (English)

    吴建昱; 何小荣

    2002-01-01

    Back-propagation (BP) artificial neural networks have been widely used to model chemical processes. BP networks are often trained using the generalized delta-rule (GDR) algorithm, but the application of such networks is limited because of the low convergence speed of the algorithm. This paper presents a new algorithm incorporating the Marquardt algorithm into the BP algorithm for training feedforward BP neural networks. The new algorithm was tested with several case studies and used to model the Reid vapor pressure (RVP) of stabilizer gasoline. The new algorithm has faster convergence and is much more efficient than the GDR algorithm.
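
    The core of the Marquardt modification is the damped Gauss-Newton weight update; a minimal numpy sketch on a toy least-squares problem (the network Jacobian computation itself is omitted):

    ```python
    import numpy as np

    def marquardt_step(J, e, weights, mu):
        """One Levenberg-Marquardt update for weights w:
            delta_w = (J^T J + mu * I)^(-1) J^T e
        Large mu approaches gradient descent; small mu approaches
        Gauss-Newton. J is the Jacobian of per-sample errors e."""
        A = J.T @ J + mu * np.eye(J.shape[1])
        return weights + np.linalg.solve(A, J.T @ e)

    # toy linear least-squares check: fit y = X w, so J = X, e = y - X w
    rng = np.random.default_rng(7)
    X = rng.standard_normal((50, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true
    w = np.zeros(3)
    for _ in range(20):
        w = marquardt_step(X, y - X @ w, w, mu=0.01)
    print(w)  # converges to w_true
    ```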

  5. An overview on recent radiation transport algorithm development for optical tomography imaging

    Energy Technology Data Exchange (ETDEWEB)

    Charette, Andre [Groupe de Recherche en Ingenierie des Procedes et Systemes, Universite du Quebec a Chicoutimi, Chicoutimi, QC, G7H 2B1 (Canada)], E-mail: Andre_Charette@uqac.ca; Boulanger, Joan [Laboratoire des Turbines a Gaz, Institut pour la Recherche Aerospatiale-Conseil National de Recherche du Canada, Ottawa, ON, K1A 0R6 (Canada); Kim, Hyun K [Department of Biomedical Engineering, Columbia University, New York, NY 10027 (United States)

    2008-11-15

    Optical tomography belongs to the promising set of non-invasive methods for probing semi-transparent media, covering a wide range of fields. Nowadays, it is mainly driven by medical imaging in search of new, less aggressive and affordable diagnostic means. This paper aims at presenting the most recent research accomplished in the authors' laboratories, as well as that of collaborating institutions, concerning the development of imaging algorithms. Light transport modelling is no longer the difficult question it used to be; research is now focused on data treatment and reconstruction. Since the turn of the century, the rapid expansion of low-cost computing has permitted the development of enhanced imaging algorithms with great potential. Some of these developments are already on the verge of clinical application. This paper presents these developments and also provides some insights into still unresolved challenges. Intrinsic difficulties are identified and promising directions for solutions are discussed.

  6. Computational Modeling of Teaching and Learning through Application of Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Richard Lamb

    2015-09-01

    Full Text Available Within the mind, there are a myriad of ideas that make sense within the bounds of everyday experience but are not reflective of how the world actually exists; this is particularly true in the domain of science. Classroom learning with teacher explanation is a bridge through which these naive understandings can be brought in line with scientific reality. The purpose of this paper is to examine how the application of a Multiobjective Evolutionary Algorithm (MOEA) can work in concert with an existing computational model to effectively model critical thinking in the science classroom. An evolutionary algorithm is an algorithm that iteratively optimizes machine learning based computational models. The research question is: does the application of an evolutionary algorithm provide a means to optimize the Student Task and Cognition Model (STAC-M), and does the optimized model sufficiently represent and predict teaching and learning outcomes in the science classroom? Within this computational study, the authors outline and simulate the effect of teaching on the ability of a "virtual" student to solve a Piagetian task. Using the Student Task and Cognition Model (STAC-M), a computational model of student cognitive processing in science class developed in 2013, the authors complete a computational experiment which examines the role of cognitive retraining on student learning. Comparison of the STAC-M and the STAC-M with inclusion of the Multiobjective Evolutionary Algorithm shows greater success in solving the Piagetian science tasks post cognitive retraining with the Multiobjective Evolutionary Algorithm. This illustrates the potential uses of cognitive and neuropsychological computational modeling in educational research. The authors also outline the limitations and assumptions of computational modeling.

  7. Development of a New Fractal Algorithm to Predict Quality Traits of MRI Loins

    DEFF Research Database (Denmark)

    Caballero, Daniel; Caro, Andrés; Amigo, José Manuel

    2017-01-01

    to analyze MRI could be another possibility for this purpose. In this paper, a new fractal algorithm is developed to obtain features from MRI based on fractal characteristics. This algorithm is called OPFTA (One Point Fractal Texture Algorithm). Three fractal algorithms were tested in this study: CFA (Classical Fractal Algorithm), FTA (Fractal Texture Algorithm) and OPFTA. The results obtained by means of these three fractal algorithms were correlated to the results obtained by means of physico-chemical methods. OPFTA and FTA achieved correlation coefficients higher than 0.75 and CFA reached low

  8. a Model-Based Autofocus Algorithm for Ultrasonic Imaging Using a Flexible Array

    Science.gov (United States)

    Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.

    2010-02-01

    Autofocus is a methodology for estimating and correcting errors in the assumed parameters of an imaging algorithm. It provides improved image quality and, therefore, better defect detection and characterization capabilities. In this paper, we present a new autofocus algorithm developed specifically for ultrasonic non-destructive testing and evaluation (NDE). We consider the estimation and correction of errors in the assumed element positions for a flexible ultrasonic array coupled to a specimen with an unknown surface profile. The algorithm performs a weighted least-squares minimization of the time-of-arrival errors in the echo data using assumed models for known features in the specimen. The algorithm is described for point and planar specimen features and demonstrated using experimental data from a flexible array prototype.
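
    The following sketch illustrates the weighted least-squares step the abstract describes, under one plausible setup: estimating per-element vertical position errors of a flexible array from pulse-echo times of arrival off an assumed planar back-wall. The geometry, wave speed, weights, and noise level are assumptions for illustration, not the paper's configuration.

```python
# Hedged sketch: estimate array-element position errors by weighted
# least-squares on time-of-arrival residuals from a known planar feature.
import numpy as np
from scipy.optimize import least_squares

c = 1500.0                      # assumed wave speed, m/s
x_el = np.linspace(0, 0.1, 16)  # nominal element x-positions, m
depth = 0.05                    # assumed depth of a planar back-wall, m

def toa_model(dz):
    # Pulse-echo time of arrival to the planar feature for each element,
    # given the unknown vertical position errors dz.
    return 2.0 * (depth - dz) / c

rng = np.random.default_rng(0)
dz_true = 0.002 * np.sin(2 * np.pi * x_el / 0.1)    # unknown surface profile
t_meas = toa_model(dz_true) + rng.normal(0, 1e-8, x_el.size)
w = np.ones_like(x_el)                               # per-echo confidence weights

res = least_squares(lambda dz: w * (toa_model(dz) - t_meas),
                    x0=np.zeros(x_el.size))
print("max residual position error (mm):", 1e3 * np.abs(res.x - dz_true).max())
```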

  9. A smoothing expectation and substitution algorithm for the semiparametric accelerated failure time frailty model.

    Science.gov (United States)

    Johnson, Lynn M; Strawderman, Robert L

    2012-09-20

    This paper proposes an estimation procedure for the semiparametric accelerated failure time frailty model that combines smoothing with an Expectation-Maximization-like algorithm for estimating equations. The resulting algorithm permits simultaneous estimation of the regression parameter, the baseline cumulative hazard, and the parameter indexing a general frailty distribution. We develop novel moment-based estimators for the frailty parameter, including a generalized method of moments estimator. Standard error estimates for all parameters are easily obtained using a randomly weighted bootstrap procedure. For the commonly used gamma frailty distribution, the proposed algorithm is very easy to implement using widely available numerical methods. Simulation results demonstrate that the algorithm performs very well in this setting. We re-analyze several previously analyzed data sets for illustrative purposes.

  10. Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Gregory H. [Univ. of California, Davis, CA (United States); Forest, Gregory [Univ. of California, Davis, CA (United States)

    2014-05-01

    We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
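
    As a flavor of the microscopic level, the sketch below integrates a polymer dumbbell SDE with the Euler-Maruyama scheme. A Hookean bead-spring connector is used here as a stand-in: the bead-rod model of the report would instead enforce a rigid-rod constraint implicitly at each step. All parameter values are assumptions.

```python
# Minimal Euler-Maruyama sketch of a polymer dumbbell SDE (bead-spring
# stand-in for the bead-rod model; parameters are illustrative assumptions).
import numpy as np

def dumbbell_trajectory(steps=10_000, dt=1e-4, k=1.0, zeta=1.0, kT=1.0, seed=0):
    rng = np.random.default_rng(seed)
    q = np.array([1.0, 0.0, 0.0])           # connector vector between beads
    for _ in range(steps):
        drift = -(2.0 * k / zeta) * q        # spring relaxation of the connector
        noise = np.sqrt(4.0 * kT * dt / zeta) * rng.standard_normal(3)
        q = q + drift * dt + noise           # explicit Euler-Maruyama update
    return q

q_final = dumbbell_trajectory()
```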

  11. LMI-Based Generation of Feedback Laws for a Robust Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, Behcet; Carson, John M., III

    2007-01-01

    This technical note provides a mathematical proof of Corollary 1 from the paper 'A Nonlinear Model Predictive Control Algorithm with Proven Robustness and Resolvability' that appeared in the 2006 Proceedings of the American Control Conference. The proof was omitted for brevity in that publication. The paper was based on algorithms developed for the FY2005 R&TD (Research and Technology Development) project for Small-body Guidance, Navigation, and Control [2]. The framework established by the Corollary is a robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. Additional details of the framework are available in the publication.

  12. Comparison of most adaptive meta model With newly created Quality Meta-Model using CART Algorithm

    Directory of Open Access Journals (Sweden)

    Jasbir Malik

    2012-09-01

    Full Text Available To ensure that the software developed is of high quality, it is now widely accepted that various artifacts generated during the development process should be rigorously evaluated using a domain-specific quality model. However, a domain-specific quality model should be derived from a generic quality model which is time-proven, well-validated and widely accepted. This thesis lays down a clear definition of a quality meta-model and then identifies various quality meta-models existing in the research and practice domains. The thesis then compares the existing quality meta-models, using a set of criteria, to identify which model is the most adaptable to various domains. The comparison employs the CART algorithm, whose tree architecture makes binary (true/false) decisions at each node: if the specified criteria are found in a category, it falls under the true branch, otherwise under the false branch.

  13. Development of algorithms for building inventory compilation through remote sensing and statistical inferencing

    Science.gov (United States)

    Sarabandi, Pooya

    Building inventories are one of the core components of disaster vulnerability and loss estimation models, and as such play a key role in providing decision support for risk assessment, disaster management and emergency response efforts. In many parts of the world, comprehensive building inventories suitable for use in catastrophe models cannot be found. Furthermore, there are serious shortcomings in the existing building inventories, including incomplete or out-dated information on critical attributes as well as missing or erroneous attribute values. In this dissertation a set of methodologies for updating spatial and geometric information of buildings from single and multiple high-resolution optical satellite images are presented. Basic concepts, terminology and fundamentals of 3-D terrain modeling from satellite images are first introduced. Different sensor projection models are then presented, and sources of optical noise such as lens distortions are discussed. An algorithm for extracting height and creating 3-D building models from a single high-resolution satellite image is formulated. The proposed algorithm is a semi-automated, supervised method capable of extracting attributes such as longitude, latitude, height, square footage, perimeter, irregularity index, etc. The errors associated with the interactive nature of the algorithm are quantified, and solutions for minimizing the human-induced errors are proposed. The height extraction algorithm is validated against independent survey data and the results are presented. The validation results show that an average height modeling accuracy of 1.5% can be achieved using this algorithm. Furthermore, the concept of cross-sensor data fusion for the purpose of 3-D scene reconstruction using quasi-stereo images is developed in this dissertation. The developed algorithm utilizes two or more single satellite images acquired from different sensors and provides the means to construct 3-D building models in a more

  14. Development of Navigation Control Algorithm for AGV Using D* search Algorithm

    Directory of Open Access Journals (Sweden)

    Jeong Geun Kim

    2013-06-01

    Full Text Available In this paper, we present a navigation control algorithm for Automatic Guided Vehicles (AGV) that move in industrial environments containing static and moving obstacles, using the D* algorithm. This algorithm is able to plan paths efficiently in unknown, partially known and changing environments. To apply the D* search algorithm, a grid map representing the known environment is generated. Using the LMS-151 laser scanner and the NAV-200 laser navigation sensor, the grid map is updated as the environment and obstacles change. When the AGV acquires new map information, such as previously unknown obstacles, it adds the information to its map and re-plans a new shortest path from its current coordinates to the given goal coordinates. It repeats the process until it reaches the goal coordinates. The algorithm is verified through simulation and experiment. The simulation and experimental results show that the algorithm can be used to move the AGV successfully to the goal position while avoiding unknown moving and static obstacles. [Keywords: navigation control algorithm; Automatic Guided Vehicles (AGV); D* search algorithm]
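
    The sense-update-replan loop can be sketched as below. True D* repairs its previous search incrementally when the map changes; for brevity this stand-in simply re-plans with A* on the updated occupancy grid, which gives the same behavior at higher cost. The grid and sensor model are hypothetical.

```python
# Sketch of the sense-update-replan loop on an occupancy grid. A full D*
# implementation would repair the search tree incrementally; this stand-in
# re-plans from scratch with A* after each map update.
import heapq

def astar(grid, start, goal):
    # 4-connected A* on a 0/1 occupancy grid; returns a list of cells or None.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set, came, g = [(h(start), start)], {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and not grid[nxt[0]][nxt[1]]):
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt], came[nxt] = ng, cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

def navigate(grid, start, goal, sense):
    # sense(pos) returns newly observed obstacle cells near pos (sensor stand-in).
    pos = start
    while pos != goal:
        for cell in sense(pos):
            grid[cell[0]][cell[1]] = 1       # update the map, then re-plan
        path = astar(grid, pos, goal)
        if path is None:
            return None                      # goal became unreachable
        pos = path[1] if len(path) > 1 else goal
    return pos
```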

  15. A dynamic model reduction algorithm for atmospheric chemistry models

    Science.gov (United States)

    Santillana, Mauricio; Le Sager, Philippe; Jacob, Daniel J.; Brenner, Michael

    2010-05-01

    Understanding the dynamics of the chemical composition of our atmosphere is essential to address a wide range of environmental issues from air quality to climate change. Current models solve a very large and stiff system of nonlinear advection-reaction coupled partial differential equations in order to calculate the time evolution of the concentration of over a hundred chemical species. The numerical solution of this system of equations is difficult and the development of efficient and accurate techniques to achieve this has inspired research for the past four decades. In this work, we propose an adaptive method that dynamically adjusts the chemical mechanism to be solved to the local environment and we show that the use of our approach leads to accurate results and considerable computational savings. Our strategy consists of partitioning the computational domain in active and inactive regions for each chemical species at every time step. In a given grid-box, the concentration of active species is calculated using an accurate numerical scheme, whereas the concentration of inactive species is calculated using a simple and computationally inexpensive formula. We demonstrate the performance of the method by application to the GEOS-Chem global chemical transport model.
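
    A toy version of the partitioning idea for a single grid box might look as follows: species whose concentrations exceed a threshold receive the accurate (here, implicit Euler) update, while the rest get a cheap closed-form value. The threshold, rates, and the inexpensive formula are illustrative assumptions, not the GEOS-Chem implementation.

```python
# Toy active/inactive partitioning for one grid box and one time step.
# The cheap update and the threshold are illustrative assumptions.
import numpy as np

def step_gridbox(conc, production, loss_rate, dt, threshold=1e-3):
    conc = conc.copy()
    active = conc > threshold                 # partition species for this step
    inactive = ~active
    # Cheap update for inactive species: steady-state-like relaxation.
    conc[inactive] = production[inactive] / np.maximum(loss_rate[inactive], 1e-30)
    # Accurate update for active species: simple implicit Euler here.
    conc[active] = (conc[active] + dt * production[active]) / \
                   (1.0 + dt * loss_rate[active])
    return conc
```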

  16. Underground water quality model inversion of genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    MA Ruijie; LI Xin

    2009-01-01

    The underground water quality model inversion is a non-linear, ill-posed problem that reduces to finding the minimum of a nonlinear function. Genetic algorithms are adopted to search iteratively, over a population of individuals, for the optimal solution of the problem, with encoded strings as the operational objects and the iterative calculation carried out by the genetic operators. This is an effective method for groundwater inverse problems, with notable advantages and practical significance.
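
    A minimal sketch of the approach, under stated assumptions: a simple genetic loop searches the parameters of a hypothetical stand-in forward model (exponential decay of concentration with distance) so as to minimize the misfit with observed concentrations.

```python
# Hedged sketch of GA-based inversion against a stand-in forward model.
import numpy as np

rng = np.random.default_rng(1)

def forward(params, x):
    # Hypothetical forward model: concentration decays with distance.
    c0, k = params
    return c0 * np.exp(-k * x)

x_obs = np.linspace(0, 10, 20)
c_obs = forward((5.0, 0.3), x_obs) + rng.normal(0, 0.05, x_obs.size)

def misfit(p):
    return np.sum((forward(p, x_obs) - c_obs) ** 2)

# Population of encoded parameter pairs (c0, k).
pop = rng.uniform([0.1, 0.01], [10.0, 1.0], size=(50, 2))
for _ in range(200):
    fit = np.array([misfit(p) for p in pop])
    parents = pop[np.argsort(fit)[:25]]                  # truncation selection
    children = parents[rng.integers(0, 25, 25)] + rng.normal(0, 0.05, (25, 2))
    pop = np.vstack([parents, np.abs(children)])         # keep parameters positive

best = pop[np.argmin([misfit(p) for p in pop])]
```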

  17. DR-model-based estimation algorithm for NCS

    Institute of Scientific and Technical Information of China (English)

    HUANG Si-niu; CHEN Zong-ji; WEI Chen

    2006-01-01

    A novel estimation scheme based on a dead reckoning (DR) model for networked control systems (NCS) is proposed in this paper. Both the detailed DR estimation algorithm and the stability analysis of the system are given. By using the DR estimate of the state, the effect of communication delays is overcome, so that a controller designed without considering delays remains applicable in an NCS. Moreover, the scheme can effectively solve the problem of data packet loss or timeout.

  18. A Ka-Band Backscatter Model Function and an Algorithm for Measurement of the Wind Vector Over the Sea Surface

    NARCIS (Netherlands)

    Nekrasov, A.; Hoogeboom, P.

    2005-01-01

    A Ka-band backscatter model and an algorithm for measurement of the wind speed and direction over the sea surface by a frequency-modulated continuous-wave radar demonstrator system operated in scatterometer mode have been developed. To evaluate the proposed algorithm, a simulation of the wind vector

  19. How effective and efficient are multiobjective evolutionary algorithms at hydrologic model calibration?

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2006-01-01

    Full Text Available This study provides a comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performance: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated surface-subsurface model application in the Shale Hills watershed in Pennsylvania. One challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 attained competitive to superior results for most of the problems tested in this study. The primary strengths of the SPEA2 algorithm lie in its search reliability and its diversity preservation operator. The biggest challenge in maximizing the performance of SPEA2 lies in specifying an effective archive size without a priori knowledge of the Pareto set. In practice, this would require significant trial-and-error analysis, which is problematic for more complex, computationally intensive calibration applications. ε-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration. ε-NSGAII's primary strength lies in its ease-of-use due to its dynamic population sizing and archiving, which lead to rapid convergence to very high quality solutions with minimal user input. MOSCEM-UA is best suited for hydrologic model calibration applications that have small

  20. Development of wind turbine control algorithms for industrial use

    Energy Technology Data Exchange (ETDEWEB)

    Van Engelen, T.G.; Van der Hooft, E.L; Schaak, P. [ECN Wind, Petten (Netherlands)

    2001-09-01

    A tool has been developed for design of industry-ready control algorithms. These pertain to the prevailing wind turbine type: variable speed, active pitch to vane. Main control objectives are rotor speed regulation, energy yield optimisation and structural fatigue reduction. These objectives are satisfied through individually tunable control loops. The split-up in loops for power control and damping of tower and drive-train resonance is allowed by the use of dedicated filters. Time domain simulation results from the design tool show high-performance power regulation by feed forward of the estimated wind speed and enhanced damping in sideward tower bending by generator torque control. The tool for control design has been validated through extensive test runs with the authorised aerodynamic code PHATAS-IV. 7 refs.

  1. Development of computer algorithms for radiation treatment planning.

    Science.gov (United States)

    Cunningham, J R

    1989-06-01

    As a result of an analysis of data relating tissue response to radiation absorbed dose, the ICRU has recommended a target accuracy of +/- 5% for dose delivery in radiation therapy. This is a difficult overall objective to achieve because of the many steps that make up a course of radiotherapy. The calculation of absorbed dose is only one of these steps, so to achieve an overall accuracy of better than +/- 5% the accuracy of the dose calculation must be better still. The physics behind the problem is sufficiently complicated that no exact method of calculation has been found, and consequently approximate solutions must be used. The development of computer algorithms for this task involves the search for better and better approximate solutions. To achieve the desired target of accuracy a fairly sophisticated calculation procedure must be used. Only when this is done can we hope to further improve our knowledge of the way in which tissues respond to radiation treatments.

  2. Experiments in Model-Checking Optimistic Replication Algorithms

    CERN Document Server

    Boucheneb, Hanifa

    2008-01-01

    This paper describes a series of model-checking experiments to verify optimistic replication algorithms based on the Operational Transformation (OT) approach used for supporting collaborative editing. We formally define, using the UPPAAL tool, the behavior and the main consistency requirement (i.e. the convergence property) of collaborative editing systems, as well as the abstract behavior of the environment in which these systems are supposed to operate. Due to data replication and the unpredictable nature of user interactions, such systems have infinitely many states. We therefore show how to exploit some features of the UPPAAL specification language to attenuate the severe state explosion problem. Two models are proposed. The first, called the concrete model, is very close to the system implementation but runs up against a severe explosion of states. The second, called the symbolic model, aims to overcome the limitation of the concrete model by delaying the effective selection and execution of editing operations until th...

  3. QAP collaborates in development of the sick child algorithm.

    Science.gov (United States)

    1994-01-01

    Algorithms which specify procedures for proper diagnosis and treatment of common diseases have been available to primary health care services in less developed countries for the past decade. Whereas each algorithm has usually been limited to a single ailment, children often present with the need for more comprehensive assessment and treatment. Treating just one illness in these children leads to incomplete treatment or missed opportunities for preventive services. To address this problem, the World Health Organization has recently developed a Sick Child Algorithm (SCA) for children aged 2 months-5 years. In addition to specifying case management procedures for acute respiratory illness, diarrhea/dehydration, fever, otitis, and malnutrition, the SCA prompts a check of the child's immunization status. The specificity and sensitivity of this SCA were field-tested in Kenya and the Gambia. In Kenya, the Malaria Branch of the US Centers for Disease Control and Prevention tested the SCA under typical conditions in Siaya District. The Quality Assurance Project of the Center for Human Services carried out a parallel facility-based systems analysis at the request of the Malaria Branch. The assessment, which took place in September-October 1993, took the form of observations of provider/patient interactions, provider interviews, and verification of supplies and equipment in 19 rural health facilities to determine how current practices compare to the actions prescribed by the SCA. This will reveal the type and amount of technical support needed to achieve conformity to the SCA's clinical practice recommendations. The data will allow officials to devise the proper training programs and will predict the quality improvements likely to be achieved through adoption of the SCA in terms of effective case treatment and fewer missed immunization opportunities. Preliminary analysis indicates that primary health care delivery in Siaya deviates in several significant respects from performance

  4. Developing a corpus to verify the performance of a tone labelling algorithm

    CSIR Research Space (South Africa)

    Raborife, M

    2011-11-01

    Full Text Available The authors report on a study that involved the development of a corpus used to verify the performance of two tone labelling algorithms, with one algorithm being an improvement on the other. These algorithms were developed for speech synthesis...

  5. Motion Model Employment using interacting Motion Model Algorithm

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar

    2006-01-01

    model being correct is computed through a likelihood function for each model. The study presented a simple technique to introduce additional models into the system using deterministic acceleration, which basically defines the dynamics of the system. Therefore, based on this value more motion models can...... be employed to increase the coverage. Finally, the combined estimate is obtained using the posterior probabilities from the different filter models. The implemented approach provides an adaptive scheme for selecting various numbers of motion models. Motion model description is important as it defines the kind...

  6. A proposed Fast algorithm to construct the system matrices for a reduced-order groundwater model

    Science.gov (United States)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2017-04-01

    Past research has demonstrated that a reduced-order model (ROM) can be two-to-three orders of magnitude smaller than the original model and run considerably faster with acceptable error. A standard method to construct the system matrices for a ROM is Proper Orthogonal Decomposition (POD), which projects the system matrices from the full model space onto a subspace whose range spans the full model space but has a much smaller dimension than the full model space. This projection can be prohibitively expensive to compute if it must be done repeatedly, as with a Monte Carlo simulation. We propose a Fast Algorithm to reduce the computational burden of constructing the system matrices for a parameterized, reduced-order groundwater model (i.e. one whose parameters are represented by zones or interpolation functions). The proposed algorithm decomposes the expensive system matrix projection into a set of simple scalar-matrix multiplications. This allows the algorithm to efficiently construct the system matrices of a POD reduced-order model at a significantly reduced computational cost compared with the standard projection-based method. The developed algorithm is applied to three test cases for demonstration purposes. The first test case is a small, two-dimensional, zoned-parameter, finite-difference model; the second test case is a small, two-dimensional, interpolated-parameter, finite-difference model; and the third test case is a realistically-scaled, two-dimensional, zoned-parameter, finite-element model. In each case, the algorithm is able to accurately and efficiently construct the system matrices of the reduced-order model.
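
    One way to read the proposed decomposition is that, for zoned parameters, the full system matrix is an affine sum A(θ) = Σ_i θ_i A_i, so the expensive projections Φ^T A_i Φ can be computed once offline and each new parameter sample assembled by scalar-matrix multiplications only. The sketch below illustrates this reading with toy dimensions; the affine structure is an assumption about the parameterization, not a quotation of the paper's formulation.

```python
# Offline/online split behind a fast reduced-order assembly: project each
# zone matrix once, then assemble the reduced system by scalar multiplies.
# Shapes and matrices are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n, r, n_zones = 500, 12, 3
Phi = np.linalg.qr(rng.standard_normal((n, r)))[0]   # POD basis (stand-in)
A_i = [rng.standard_normal((n, n)) for _ in range(n_zones)]

# Offline: one expensive projection per zone matrix.
A_i_reduced = [Phi.T @ A @ Phi for A in A_i]

def reduced_system(theta):
    # Online: cheap assembly, no n-by-n products needed per sample.
    return sum(t * Ar for t, Ar in zip(theta, A_i_reduced))

A_r = reduced_system([1.0, 0.5, 2.0])   # r-by-r matrix, e.g. for Monte Carlo
```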

  7. Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model

    Science.gov (United States)

    Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai

    2017-01-01

    Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises for training balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. The automatic fall detection system has opportunities to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges are also tackled in order to provide a reliable fall detection algorithm, including variability and ambiguity. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model is utilized, including free fall, impact, and rest phases for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall performances of the sensitivity, specificity, precision, and accuracy using a knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to the individual differences. PMID:28208694
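
    A bare-bones version of the threshold-based stage, using the free fall, impact, and rest phase sequence on an accelerometer magnitude signal, might look as follows. The thresholds and window lengths are illustrative guesses, not the paper's tuned values, and the knowledge-based stage is omitted.

```python
# Illustrative threshold pass over accelerometer magnitude using the
# free-fall -> impact -> rest phase sequence (parameters are assumptions).
import numpy as np

def detect_fall(acc_mag, fs=100, free_thr=0.5, impact_thr=2.5, rest_thr=1.2):
    # acc_mag: acceleration magnitude in g, sampled at fs Hz.
    for i in np.where(acc_mag < free_thr)[0]:        # candidate free-fall sample
        impact_win = acc_mag[i:i + fs]               # look ahead 1 s for impact
        hits = np.where(impact_win > impact_thr)[0]
        if hits.size == 0:
            continue
        j = i + hits[0]
        rest_win = acc_mag[j + fs : j + 3 * fs]      # expect 2 s of rest after impact
        if rest_win.size and np.all(rest_win < rest_thr):
            return True, j                           # fall event at sample j
    return False, None
```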

  8. A coupled model tree genetic algorithm scheme for flow and water quality predictions in watersheds

    Science.gov (United States)

    Preis, Ami; Ostfeld, Avi

    2008-02-01

    Summary: The rapid advance of information processing systems, along with increasing data availability, has directed research towards the development of intelligent systems that evolve models of natural phenomena automatically. This is the discipline of data-driven modeling, the study of algorithms that improve automatically through experience. Applications of data-driven modeling range from data mining schemes that discover general rules in large data sets to information filtering systems that automatically learn users' interests. This study presents a data-driven modeling algorithm for flow and water quality load predictions in watersheds. The methodology comprises a coupled model tree-genetic algorithm scheme: the model tree predicts flow and water quality constituents, while the genetic algorithm is employed for calibrating the model tree parameters. The methodology is demonstrated through base runs and sensitivity analysis for daily flow and water quality load predictions on a watershed in northern Israel. The method produced close fits in most cases, but was limited in estimating the peak flows and water quality loads.

  9. Final Report for DOE Grant DE-FG02-03ER25579; Development of High-Order Accurate Interface Tracking Algorithms and Improved Constitutive Models for Problems in Continuum Mechanics with Applications to Jetting

    Energy Technology Data Exchange (ETDEWEB)

    Puckett, Elbridge Gerry [U.C. Davis, Department of Mathematics]; Miller, Gregory Hale [U.C. Davis, Department of Chemical Engineering]

    2012-10-14

    published by Dr. Phillip Colella, the head of ANAG, and some of his colleagues. Chris Algieri is now employed as a staff member in Dr. Bill Collins' Climate Science Department in the Earth Sciences Division at LBNL working with computational models of climate change. Finally, it should be noted that the work conducted by Professor Puckett and his students Sarah Williams and Chris Algieri and described in this final report for DOE grant # DE-FC02-03ER25579 is closely related to work performed by Professor Puckett and his students under the auspices of Professor Puckett's DOE SciDAC grant DE-FC02-01ER25473 An Algorithmic and Software Framework for Applied Partial Differential Equations: A DOE SciDAC Integrated Software Infrastructure Center (ISIC). Dr. Colella was the lead PI for this SciDAC grant, which comprised several research groups from DOE national laboratories and five university PIs from five different universities. In theory Professor Puckett tried to use funds from the SciDAC grant to support work directly involved in implementing algorithms developed by members of his research group at UCD as software that might be of use to Puckett's SciDAC CoPIs. (For example, see the work reported in Section 2.2.2 of this final report.) However, since there is considerable lead time spent developing such algorithms before they are ready to become 'software' and research plans and goals change as the research progresses, Professor Puckett supported each member of his research group partially with funds from the SciDAC APDEC ISIC DE-FC02-01ER25473 and partially with funds from this DOE MICS grant DE-FC02-03ER25579. This has necessarily resulted in a significant overlap of project areas that were funded by both grants. In particular, both Sarah Williams and Chris Algieri were supported partially with funds from grant # DE-FG02-03ER25579, for which this is the final report, and in part with funds from Professor Puckett's DOE SciDAC grant # DE

  11. Modelling soil water retention using support vector machines with genetic algorithm optimisation.

    Science.gov (United States)

    Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L

    2014-01-01

    This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allowed for estimation of the soil water content for the specified soil water potentials: -0.98, -3.10, -9.81, -31.02, -491.66, and -1554.78 kPa, based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. Support Vector Machines (SVM) methodology was used for model development. A new methodology for elaboration of retention function models is proposed. As an alternative to previous attempts known from the literature, the ν-SVM method was used for model development and the results were compared with the formerly used C-SVM method. For the purpose of the models' parameter search, genetic algorithms were used as an optimisation framework. A new form of the aim function used for the parameter search is proposed, which allowed for the development of models with better prediction capabilities. This new aim function avoids overestimation by models, which is typically encountered when root mean squared error is used as an aim function. The elaborated models showed good agreement with measured soil water retention data; the achieved coefficients of determination were in the range 0.67-0.92. The studies demonstrated the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better-performing models than the other tested approaches.
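
    To make the pairing concrete, here is a minimal sketch of ν-SVR with a small genetic search over (C, ν, γ), in the spirit of the paper. The synthetic data, parameter ranges, and fitness function are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch: nu-SVR hyperparameters tuned by a tiny genetic search.
import numpy as np
from sklearn.svm import NuSVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(120, 4))                 # stand-in soil characteristics
y = X @ np.array([0.3, -0.2, 0.4, 0.1]) + rng.normal(0, 0.02, 120)

def fitness(p):
    C, nu, gamma = p
    return cross_val_score(NuSVR(C=C, nu=nu, gamma=gamma),
                           X, y, cv=3, scoring="r2").mean()

pop = rng.uniform([0.1, 0.05, 0.01], [100.0, 0.95, 10.0], size=(20, 3))
for _ in range(15):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]                  # keep the best half
    children = parents[rng.integers(0, 10, 10)] * rng.lognormal(0, 0.2, (10, 3))
    children[:, 1] = np.clip(children[:, 1], 0.05, 0.95)     # keep nu valid
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
```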

  12. Parameter Estimation for Traffic Noise Models Using a Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Deok-Soon An

    2013-01-01

    Full Text Available A technique has been developed for predicting road traffic noise for environmental assessment, taking into account traffic volume as well as road surface conditions. The ASJ model (ASJ Prediction Model for Road Traffic Noise, 1999), which is based on the sound power level of the noise emitted by the interaction between the road surface and tires, employs regression models for two road surface types: dense-graded asphalt (DGA) and permeable asphalt (PA). However, these models are not applicable to other types of road surfaces. Accordingly, this paper introduces a parameter estimation procedure for ASJ-based noise prediction models, utilizing a harmony search (HS) algorithm. Traffic noise measurement data for four different vehicle types were used in the algorithm to determine the regression parameters for several road surface types. The parameters of the traffic noise prediction models were evaluated using another measurement set, and good agreement was observed between the predicted and measured sound power levels.
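
    A compact illustration of harmony search applied to a regression of this kind is given below. The sound-power regression form L = a + b*log10(v) and all data are stand-ins for illustration, not the ASJ model itself.

```python
# Minimal harmony search sketch for fitting two regression coefficients.
import numpy as np

rng = np.random.default_rng(0)
v = rng.uniform(20, 120, 50)                       # vehicle speeds, km/h
L_obs = 46.0 + 30.0 * np.log10(v) + rng.normal(0, 0.5, 50)

def cost(p):
    return np.mean((p[0] + p[1] * np.log10(v) - L_obs) ** 2)

hm = rng.uniform([0, 0], [100, 60], size=(10, 2))  # harmony memory
hmcr, par, bw = 0.9, 0.3, 0.5                      # memory rate, pitch rate, bandwidth
for _ in range(2000):
    # Build a new harmony: draw each variable from memory or at random.
    new = np.array([hm[rng.integers(10), j] if rng.random() < hmcr
                    else rng.uniform(0, [100, 60][j]) for j in range(2)])
    if rng.random() < par:                         # pitch adjustment
        new += rng.uniform(-bw, bw, 2)
    worst = np.argmax([cost(p) for p in hm])
    if cost(new) < cost(hm[worst]):                # replace the worst harmony
        hm[worst] = new

a_hat, b_hat = hm[np.argmin([cost(p) for p in hm])]
```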

  13. Adjustment Criterion and Algorithm in Adjustment Model with Uncertainty

    Directory of Open Access Journals (Sweden)

    SONG Yingchun

    2015-02-01

    Full Text Available Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the functional model as a parameter. A new adjustment criterion and its iterative algorithm are given, based on the law of uncertainty propagation in the residual error, in which the maximum possible uncertainty is minimized. This paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment and total least-squares adjustment. Existing error theory is thus extended with a new method for processing observational data with uncertainty.

  14. Linguistically motivated statistical machine translation models and algorithms

    CERN Document Server

    Xiong, Deyi

    2015-01-01

    This book provides a wide variety of algorithms and models to integrate linguistic knowledge into Statistical Machine Translation (SMT). It helps advance conventional SMT to linguistically motivated SMT by enhancing the following three essential components: translation, reordering and bracketing models. It also serves the purpose of promoting the in-depth study of the impacts of linguistic knowledge on machine translation. Finally it provides a systematic introduction of Bracketing Transduction Grammar (BTG) based SMT, one of the state-of-the-art SMT formalisms, as well as a case study of linguistically motivated SMT on a BTG-based platform.

  15. Developing mathematical modelling competence

    DEFF Research Database (Denmark)

    Blomhøj, Morten; Jensen, Tomas Højgaard

    2003-01-01

    In this paper we introduce the concept of mathematical modelling competence, by which we mean being able to carry through a whole mathematical modelling process in a certain context. Analysing the structure of this process, six sub-competences are identified. Mathematical modelling competence...... cannot be reduced to these six sub-competences, but they are necessary elements in the development of mathematical modelling competence. Experience from the development of a modelling course is used to illustrate how the different nature of the sub-competences can be used as a tool for finding...... the balance between different kinds of activities in a particular educational setting. Obstacles of social, cognitive and affective nature for the students' development of mathematical modelling competence are reported and discussed in relation to the sub-competences....

  16. An adaptive correspondence algorithm for modeling scenes with strong interreflections.

    Science.gov (United States)

    Xu, Yi; Aliaga, Daniel G

    2009-01-01

    Modeling real-world scenes, beyond diffuse objects, plays an important role in computer graphics, virtual reality, and other commercial applications. One active approach is to project binary patterns in order to obtain correspondences and reconstruct a densely sampled 3D model. In such structured-light systems, determining whether a pixel is directly illuminated by the projector is essential to decoding the patterns. When a scene has abundant indirect light, this process is especially difficult. In this paper, we present a robust pixel classification algorithm for this purpose. Our method correctly establishes the lower and upper bounds of the possible intensity values of an illuminated pixel and of a non-illuminated pixel. Based on the two intervals, our method classifies a pixel by determining whether its intensity is within one interval but not the other. Our method performs better than the standard method because it avoids gross errors, caused by strong interreflections, during the decoding process. For the remaining uncertain pixels, we apply an iterative algorithm to reduce the interreflection within the scene. Thus, more points can be decoded and reconstructed after each iteration. Moreover, the iterative algorithm is carried out in an adaptive fashion for fast convergence.
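
    The interval test itself is simple, as the hedged sketch below shows: a pixel is labeled only when its intensity falls inside exactly one of the two bounds; everything else remains uncertain for the next iteration. Deriving the bounds, which is the substance of the paper, is not shown; the bounds here are given inputs.

```python
# Interval-based pixel classification: label a pixel "lit" or "unlit" only
# when its intensity lies in exactly one of the two intervals.
import numpy as np

def classify(intensity, lit_lo, lit_hi, unlit_lo, unlit_hi):
    in_lit = (intensity >= lit_lo) & (intensity <= lit_hi)
    in_unlit = (intensity >= unlit_lo) & (intensity <= unlit_hi)
    label = np.full(intensity.shape, "uncertain", dtype=object)
    label[in_lit & ~in_unlit] = "lit"        # unambiguously illuminated
    label[~in_lit & in_unlit] = "unlit"      # unambiguously not illuminated
    return label
```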

  17. Models based on "out-of Kilter" algorithm

    Science.gov (United States)

    Adler, M. J.; Drobot, R.

    2012-04-01

    In case of many water users along the river stretches, it is very important, in case of low flows and droughty periods to develop an optimization model for water allocation, to cover all needs under certain predefined constraints, depending of the Contingency Plan for drought management. Such a program was developed during the implementation of the WATMAN Project, in Romania (WATMAN Project, 2005-2006, USTDA) for Arges-Dambovita-Ialomita Basins water transfers. This good practice was proposed for WATER CoRe Project- Good Practice Handbook for Drought Management, (InterregIVC, 2011), to be applied for the European Regions. Two types of simulation-optimization models based on an improved version of out-of-kilter algorithm as optimization technique have been developed and used in Romania: • models for founding of the short-term operation of a WMS, • models generically named SIMOPT that aim to the analysis of long-term WMS operation and have as the main results the statistical WMS functional parameters. A real WMS is modeled by an arcs-nodes network so the real WMS operation problem becomes a problem of flows in networks. The nodes and oriented arcs as well as their characteristics such as lower and upper limits and associated costs are the direct analog of the physical and operational WMS characteristics. Arcs represent both physical and conventional elements of WMS such as river branches, channels or pipes, water user demands or other water management requirements, trenches of water reservoirs volumes, water levels in channels or rivers, nodes are junctions of at least two arcs and stand for locations of lakes or water reservoirs and/or confluences of river branches, water withdrawal or wastewater discharge points, etc. Quantitative features of water resources, water users and water reservoirs or other water works are expressed as constraints of non-violating the lower and upper limits assigned on arcs. Options of WMS functioning i.e. water retention/discharge in

  18. Hierarchical Stochastic Simulation Algorithm for SBML Models of Genetic Circuits

    Directory of Open Access Journals (Sweden)

    Leandro eWatanabe

    2014-11-01

    Full Text Available This paper describes a hierarchical stochastic simulation algorithm which has been implemented within iBioSim, a tool used to model, analyze, and visualize genetic circuits. Many biological analysis tools flatten out hierarchy before simulation, but there are many disadvantages associated with this approach. First, the memory required to represent the model can quickly expand in the process. Second, the flattening process is computationally expensive. Finally, when modeling a dynamic cellular population within iBioSim, inlining the hierarchy of the model is inefficient since models must grow dynamically over time. This paper discusses a new approach to handle hierarchy on the fly to make the tool faster and more memory-efficient. This approach yields significant performance improvements as compared to the former flat analysis method.
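
    As a baseline for the hierarchical variant described above, a minimal Gillespie direct-method SSA for a single birth-death species is sketched below; the reaction set and rates are illustrative assumptions, not an iBioSim model.

```python
# Minimal Gillespie direct-method SSA for one birth-death species
# (production at rate k_prod, degradation at rate k_deg * x).
import numpy as np

def ssa(k_prod=5.0, k_deg=0.1, x0=0, t_end=100.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x, times, states = 0.0, x0, [0.0], [x0]
    while t < t_end:
        a = np.array([k_prod, k_deg * x])      # reaction propensities
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)         # time to the next reaction
        x += 1 if rng.random() < a[0] / a0 else -1   # choose which reaction fires
        times.append(t)
        states.append(x)
    return times, states

times, states = ssa()
```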

  19. The Distance Field Model and Distance Constrained MAP Adaptation Algorithm

    Institute of Scientific and Technical Information of China (English)

    YUPeng; WANGZuoying

    2003-01-01

    Spatial structure information, i.e., the relative position information of phonetic states in the feature space, has long awaited careful research. In this paper, a new model named "Distance Field" is proposed to describe this spatial structure information. Based on this model, a modified MAP adaptation algorithm named distance constrained maximum a posteriori (DCMAP) is introduced. The distance field model imposes a large penalty when the spatial structure is destroyed; as a result, DCMAP preserves the spatial structure information during adaptation. Experiments show the distance field model improves the performance of MAP adaptation. Further results show DCMAP has strong cross-state estimation ability, which is used to train a well-performing speaker-dependent model with data from only part of the pho-

  20. A novel computer algorithm for modeling and treating mandibular fractures: A pilot study.

    Science.gov (United States)

    Rizzi, Christopher J; Ortlip, Timothy; Greywoode, Jewel D; Vakharia, Kavita T; Vakharia, Kalpesh T

    2017-02-01

    To describe a novel computer algorithm that can model mandibular fracture repair. To evaluate the algorithm as a tool to model mandibular fracture reduction and hardware selection. Retrospective pilot study combined with cross-sectional survey. A computer algorithm utilizing Aquarius Net (TeraRecon, Inc, Foster City, CA) and Adobe Photoshop CS6 (Adobe Systems, Inc, San Jose, CA) was developed to model mandibular fracture repair. Ten different fracture patterns were selected from nine patients who had already undergone mandibular fracture repair. The preoperative computed tomography (CT) images were processed with the computer algorithm to create virtual images that matched the actual postoperative three-dimensional CT images. A survey comparing the true postoperative image with the virtual postoperative images was created and administered to otolaryngology resident and attending physicians. They were asked to rate on a scale from 0 to 10 (0 = completely different; 10 = identical) the similarity between the two images in terms of the fracture reduction and fixation hardware. Ten mandible fracture cases were analyzed and processed. There were 15 survey respondents. The mean score for overall similarity between the images was 8.41 ± 0.91; the mean score for similarity of fracture reduction was 8.61 ± 0.98; and the mean score for hardware appearance was 8.27 ± 0.97. There were no significant differences between attending and resident responses. There were no significant differences based on fracture location. This computer algorithm can accurately model mandibular fracture repair. Images created by the algorithm are highly similar to true postoperative images. The algorithm can potentially assist a surgeon planning mandibular fracture repair. Level of Evidence: 4. Laryngoscope, 127:331-336, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  1. IIR Filter Modeling Using an Algorithm Inspired on Electromagnetism

    Directory of Open Access Journals (Sweden)

    Cuevas-Jiménez E.

    2013-01-01

    Full Text Available Infinite-impulse-response (IIR) filtering provides a powerful approach for solving a variety of problems. However, its design represents a very complicated task: since the error surface of IIR filters is generally multimodal, global optimization techniques are required in order to avoid local minima. In this paper, a new method based on the Electromagnetism-Like Optimization Algorithm (EMO) is proposed for IIR filter modeling. EMO originates from the electromagnetism theory of physics, treating potential solutions as electrically charged particles spread around the solution space, where the charge of each particle depends on its objective function value. This algorithm employs a collective attraction-repulsion mechanism to move the particles towards optimality. The experimental results confirm the high performance of the proposed method in solving various benchmark identification problems.

  2. Extraction of battery parameters of the equivalent circuit model using a multi-objective genetic algorithm

    Science.gov (United States)

    Brand, Jonathan; Zhang, Zheming; Agarwal, Ramesh K.

    2014-02-01

    A simple but reasonably accurate battery model is required for simulating the performance of electrical systems that employ a battery, for example an electric vehicle, as well as for investigating the battery's potential as an energy storage device. In this paper, a relatively simple equivalent-circuit-based model is employed for modeling the performance of a battery. A computer code utilizing a multi-objective genetic algorithm is developed for the purpose of extracting the battery performance parameters. The code is applied to several existing industrial batteries as well as to two recently proposed high-performance batteries which are currently in the early research and development stage. The results demonstrate that with the optimally extracted performance parameters, the equivalent-circuit-based battery model can accurately predict the performance of various batteries of different sizes, capacities, and materials. Several test cases demonstrate that the multi-objective genetic algorithm can serve as a robust and reliable tool for extracting the battery performance parameters.

  3. Modelling Soil Water Retention Using Support Vector Machines with Genetic Algorithm Optimisation

    Directory of Open Access Journals (Sweden)

    Krzysztof Lamorski

    2014-01-01

    Full Text Available This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allowed for estimation of the soil water content for the specified soil water potentials: –0.98, –3.10, –9.81, –31.02, –491.66, and –1554.78 kPa, based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. Support Vector Machines (SVM) methodology was used for model development. A new methodology for elaboration of retention function models is proposed. As an alternative to previous attempts known from the literature, the ν-SVM method was used for model development and the results were compared with the formerly used C-SVM method. For the purpose of the models' parameter search, genetic algorithms were used as an optimisation framework. A new form of the aim function used for the parameter search is proposed, which allowed for the development of models with better prediction capabilities. This new aim function avoids overestimation by models, which is typically encountered when root mean squared error is used as an aim function. The elaborated models showed good agreement with measured soil water retention data; the achieved coefficients of determination were in the range 0.67–0.92. The studies demonstrated the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better-performing models than the other tested approaches.

  4. Proposing an Algorithm for R&Q Inventory Control Model with Stochastic Demand Influenced by Shortage

    Directory of Open Access Journals (Sweden)

    Parviz fattahi

    2013-08-01

    Full Text Available In this article, the continuous-review inventory control system is studied. A new constraint, demand dependent on the average percentage of product shortage, has been added to the problem: the average demand has a direct relationship with shortage in a period. This constraint, which is related to the cost of the organization's loss of credibility due to product shortage, has been considered in the inventory model. In this paper, the mathematical model of this problem is presented and then two heuristic approaches based on genetic and simulated annealing algorithms are developed. Computational results indicate that the simulated annealing algorithm provides better results compared to the genetic algorithm.

  5. Interchanges Safety: Forecast Model Based on ISAT Algorithm

    Directory of Open Access Journals (Sweden)

    Sascia Canale

    2013-09-01

    Full Text Available The ISAT algorithm (Interchange Safety Analysis Tool), developed by the Federal Highway Administration (FHWA), provides design and safety engineers with an automated tool for assessing the safety effects of geometric design and traffic control features at an existing interchange and the adjacent roadway network. Concerning the default calibration coefficients and crash distributions by severity and type, users should modify these default values to more accurately reflect the safety experience of their local/state agency prior to using ISAT to perform actual safety assessments. This paper presents the calibration process of the FHWA algorithm to the local situation of eastern Sicily. The aim is to realize an instrument for accident forecast analyses, useful to highway managers, in order to identify those infrastructural elements that, if suitably calibrated, can contribute to improving the safety level of interchange areas.

  6. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real-time control of many plasma parameters will be an essential aspect of the development of reliable high-performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top-quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real-time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since the elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  7. RISK ANALYSIS DEVELOPED MODEL

    Directory of Open Access Journals (Sweden)

    Georgiana Cristina NUKINA

    2012-07-01

    Full Text Available Through the developed risk analysis model, one decides whether control measures are suitable for implementation. The analysis also determines whether the benefits of a given control option outweigh its implementation cost.

  8. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    Science.gov (United States)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.

  9. Model reduction using the genetic algorithm and routh approximations

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A new method of model reduction combining the genetic algorithm (GA) with the Routh approximation method is presented. It is suggested that a high-order system can be approximated by a low-order model with a time delay. The denominator parameters of the reduced-order model are determined by the Routh approximation method; the numerator parameters and time delay are then identified by the GA. The reduced-order models obtained by the proposed method will always be stable if the original system is stable, and produce a good approximation to the original system in both the frequency domain and the time domain. Two numerical examples show that the method is computationally simple and efficient.

  10. A hybrid multiview stereo algorithm for modeling urban scenes.

    Science.gov (United States)

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep

    2013-01-01

    We present an original multiview stereo reconstruction algorithm which allows the 3D modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first of segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model, and second of sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded in an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.

  11. A nonlinear regression model-based predictive control algorithm.

    Science.gov (United States)

    Dubay, R; Abu-Ayyad, M; Hernandez, J M

    2009-04-01

    This paper presents a unique approach for designing a nonlinear regression model-based predictive controller (NRPC) for single-input-single-output (SISO) and multi-input-multi-output (MIMO) processes that are common in industrial applications. The innovation of this strategy is that the controller structure allows nonlinear open-loop modeling to be conducted while closed-loop control is executed every sampling instant. Consequently, the system matrix is regenerated every sampling instant using a continuous function, providing a more accurate prediction of the plant. Computer simulations are carried out on nonlinear plants, demonstrating that the new approach is easily implemented and provides tight control. The proposed algorithm is also implemented on two real-time SISO applications, a DC motor and a plastic injection molding machine, and on a nonlinear MIMO thermal system comprising three temperature zones to be controlled with interacting effects. The experimental closed-loop responses of the proposed algorithm were compared to those of a multi-model dynamic matrix controller (MPC), with improved results for various set-point trajectories. Good disturbance rejection was attained, resulting in improved tracking of multi-set-point profiles in comparison to the multi-model MPC.

  12. Modelling river dune development

    NARCIS (Netherlands)

    Paarlberg, Andries; Weerts, H.J.T.; Dohmen-Janssen, Catarine M.; Ritsema, I.L; Hulscher, Suzanne J.M.H.; van Os, A.G.; Termes, A.P.P.

    2005-01-01

    Since river dunes influence flow resistance, predictions of dune dimensions are required to make accurate water level predictions. A model approach to simulate developing river dunes is presented. The model is set up to be appropriate, i.e. as simple as possible, but with sufficient accuracy for

  13. Developing a synergy algorithm for land surface temperature: the SEN4LST project

    Science.gov (United States)

    Sobrino, Jose A.; Jimenez, Juan C.; Ghent, Darren J.

    2013-04-01

    Land surface Temperature (LST) is one of the key parameters in the physics of land-surface processes on regional and global scales, combining the results of all surface-atmosphere interactions and energy fluxes between the surface and the atmosphere. An adequate characterization of LST distribution and its temporal evolution requires measurements with detailed spatial and temporal frequencies. With the advent of the Sentinel 2 (S2) and 3 (S3) series of satellites a unique opportunity exists to go beyond the current state of the art of single instrument algorithms. The Synergistic Use of The Sentinel Missions For Estimating And Monitoring Land Surface Temperature (SEN4LST) project aims at developing techniques to fully utilize synergy between S2 and S3 instruments in order to improve LST retrievals. In the framework of the SEN4LST project, three LST retrieval algorithms were proposed using the thermal infrared bands of the Sea and Land Surface Temperature Radiometer (SLSTR) instrument on board the S3 platform: split-window (SW), dual-angle (DA) and a combined algorithm using both split-window and dual-angle techniques (SW-DA). One of the objectives of the project is to select the best algorithm to generate LST products from the synergy between S2/S3 instruments. In this sense, validation is a critical step in the selection process for the best performing candidate algorithm. A unique match-up database constructed at University of Leicester (UoL) of in situ observations from over twenty ground stations and corresponding brightness temperature (BT) and LST match-ups from multi-sensor overpasses is utilised for validating the candidate algorithms. Furthermore, their performance is also evaluated against the standard ESA LST product and the enhanced offline UoL LST product. In addition, a simulation dataset is constructed using 17 synthetic LST images and the radiative transfer model MODTRAN run under 66 different atmospheric conditions. Each candidate LST

  14. Algorithm development for Prognostics and Health Management (PHM).

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Campbell, James E.; Doser, Adele Beatrice; Lowder, Kelly S.

    2003-10-01

    This report summarizes the results of a three-year LDRD project on prognostics and health management. 'Prognostics' refers to the capability to predict the probability of system failure over some future time interval (an alternative definition is the capability to predict the remaining useful life of a system). Prognostics are integrated with health monitoring (through inspections, sensors, etc.) to provide an overall PHM capability that optimizes maintenance actions and results in higher availability at a lower cost. Our goal in this research was to develop PHM tools that could be applied to a wide variety of equipment (repairable, non-repairable, manufacturing, weapons, battlefield equipment, etc.) and require minimal customization to move from one system to the next. Thus, our approach was to develop a toolkit of reusable software objects/components and architecture for their use. We have developed two software tools: an Evidence Engine and a Consequence Engine. The Evidence Engine integrates information from a variety of sources in order to take into account all the evidence that impacts a prognosis for system health. The Evidence Engine has the capability for feature extraction, trend detection, information fusion through Bayesian Belief Networks (BBN), and estimation of remaining useful life. The Consequence Engine involves algorithms to analyze the consequences of various maintenance actions. The Consequence Engine takes as input a maintenance and use schedule, spares information, and time-to-failure data on components, then generates maintenance and failure events, and evaluates performance measures such as equipment availability, mission capable rate, time to failure, and cost. This report summarizes the capabilities we have developed, describes the approach and architecture of the two engines, and provides examples of their use.
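
    In miniature, the trend-detection and remaining-useful-life estimation performed by an engine of this kind can be illustrated as follows; this is a simple linear-degradation sketch in Python under assumed data, not the LDRD toolkit itself, and the threshold and signal are made up for illustration.

        import numpy as np

        def remaining_useful_life(times, health, threshold):
            """Estimate RUL by linear trend extrapolation of a degrading health index."""
            slope, intercept = np.polyfit(times, health, 1)
            if slope >= 0:
                return np.inf                         # no degradation trend detected
            t_fail = (threshold - intercept) / slope  # time when trend crosses threshold
            return max(t_fail - times[-1], 0.0)

        rng = np.random.default_rng(0)
        t = np.arange(0, 100.0)
        h = 1.0 - 0.004 * t + 0.01 * rng.normal(size=t.size)  # noisy degradation signal
        print("estimated RUL:", remaining_useful_life(t, h, threshold=0.5), "hours")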

  15. Forecasting of the development of professional medical equipment engineering based on neuro-fuzzy algorithms

    Science.gov (United States)

    Vaganova, E. V.; Syryamkin, M. V.

    2015-11-01

    The purpose of the research is to develop evolutionary algorithms for assessing promising scientific directions. The study focuses on evaluating the potential of foresight for identifying technological peaks and emerging technologies in professional medical equipment engineering in Russia and worldwide, on the basis of intellectual property items and neural network modeling. An automated information system has been developed, consisting of modules that implement various classification methods to improve forecast accuracy, together with an algorithm for constructing a neuro-fuzzy decision tree. According to the results, future trends in this field will focus on personalized smart devices, telemedicine, biomonitoring, and «e-Health» and «m-Health» technologies.

  16. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers. This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  17. Hybrid Swarm Algorithms for Parameter Identification of an Actuator Model in an Electrical Machine

    Directory of Open Access Journals (Sweden)

    Ying Wu

    2011-01-01

    Full Text Available Efficient identification and control algorithms are needed when active vibration suppression techniques are developed for industrial machines. In the paper a new actuator for reducing rotor vibrations in electrical machines is investigated. Model-based control is needed in designing the algorithm for the voltage input, and therefore proper models for the actuator must be available. In addition to the traditional prediction error method, a new knowledge-based Artificial Fish-Swarm optimization Algorithm (AFA) with crossover, CAFAC, is proposed to identify the parameters in the new model. Then, in order to obtain fast convergence of the algorithm in the case of a 30 kW two-pole squirrel cage induction motor, we combine the CAFAC and Particle Swarm Optimization (PSO) to identify the parameters of the machine and construct a linear time-invariant (LTI) state-space model. Besides that, the prediction error method (PEM) is also employed to identify the induction motor, producing a black-box model corresponding to input-output measurements.

  18. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    Science.gov (United States)

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
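
    Stripped to its essentials, the fly-inspired idea for a complete network can be sketched in a few lines of Python (an illustrative simulation, not the authors' code): in each round every remaining candidate broadcasts a one-bit message with some probability, and any silent candidate that hears a message withdraws.

        import random

        def elect_leader(n, p=0.5, rng=random):
            """One-bit leader election on a complete network of n anonymous nodes.

            Each round, every candidate broadcasts a one-bit signal with probability p.
            A candidate that stays silent but hears at least one signal withdraws.
            Rounds repeat until exactly one candidate (the leader) is left.
            """
            candidates = set(range(n))
            rounds = 0
            while len(candidates) > 1:
                rounds += 1
                speakers = {v for v in candidates if rng.random() < p}
                if speakers:  # silent candidates hear a message and withdraw
                    candidates = speakers
            return candidates.pop(), rounds

        leader, rounds = elect_leader(64)
        print(f"node {leader} elected after {rounds} rounds")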

  19. Modeling the Swift BAT Trigger Algorithm with Machine Learning

    Science.gov (United States)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2015-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≳97% (≲3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ≈ 0.48 (+0.41/−0.23) Gpc^-3 yr^-1 with power-law indices of n_1 ≈ 1.7 (+0.6/−0.5) and n_2 ≈ −5.9 (+5.7/−0.1) for GRBs above and below a break point of z_1 ≈ 6.8 (+2.8/−3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.
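
    As an illustration of the approach (with made-up stand-in features rather than the Lien et al. catalogue), a surrogate trigger classifier of this kind takes only a few lines with scikit-learn:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # Hypothetical stand-in features per simulated burst: peak flux, duration, redshift
        X = rng.lognormal(size=(10000, 3))
        # Toy "triggered" label standing in for the expensive trigger simulation
        y = (X[:, 0] * np.sqrt(X[:, 1]) / (1 + X[:, 2]) > 1.0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print(f"surrogate accuracy: {clf.score(X_te, y_te):.3f}")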

  20. Modeling the Swift Bat Trigger Algorithm with Machine Learning

    Science.gov (United States)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2016-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≥97% (≤3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ≈ 0.48 (+0.41/−0.23) Gpc^-3 yr^-1 with power-law indices of n_1 ≈ 1.7 (+0.6/−0.5) and n_2 ≈ −5.9 (+5.7/−0.1) for GRBs above and below a break point of redshift z_1 ≈ 6.8 (+2.8/−3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.

  1. Near infrared spectrometric technique for testing fruit quality: optimisation of regression models using genetic algorithms

    Science.gov (United States)

    Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.

    2016-02-01

    Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools in non-destructive quality testing of foodstuffs, from measurement to data analysis and interpretation. NIR spectral data are interpreted through means often involving multivariate statistical analysis, sometimes associated with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GA) can be used to enhance model development for predicting fruit quality. Apple fruits were used, and NIR spectra in the range from 12000 to 4000 cm^-1 were acquired on both bruised and healthy tissues, with different degrees of mechanical damage. GAs were used in combination with partial least squares (PLS) regression methods to develop bruise severity prediction models, which were compared to PLS models developed using the full NIR spectrum. A classification model was developed, which clearly separated bruised from unbruised apple tissue. GAs helped improve prediction models by over 10% in comparison with full-spectrum-based models, as evaluated in terms of error of prediction (root mean square error of cross-validation). PLS models to predict internal quality, such as sugar content and acidity, were developed and compared to the versions optimized by genetic algorithm. Overall, the results highlighted the potential of the GA method to improve the speed and accuracy of fruit quality prediction.
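
    A minimal sketch of GA-driven wavelength selection wrapped around PLS regression, on synthetic stand-in spectra (the data sizes, GA settings and fitness choice here are illustrative assumptions, not the study's):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 200))              # stand-in NIR spectra (120 samples x 200 bands)
        y = X[:, 40:60].sum(axis=1) + 0.1 * rng.normal(size=120)  # toy quality attribute

        def rmse_cv(mask):
            """Fitness: cross-validated RMSE of a PLS model on the selected bands."""
            if mask.sum() < 5:
                return np.inf
            scores = cross_val_score(PLSRegression(n_components=3), X[:, mask], y,
                                     cv=5, scoring="neg_root_mean_squared_error")
            return -scores.mean()

        pop = rng.random((30, X.shape[1])) < 0.3     # initial random band subsets
        for gen in range(40):
            fit = np.array([rmse_cv(ind) for ind in pop])
            pop = pop[np.argsort(fit)]               # elitist sort: best first
            for i in range(15, 30):                  # replace worst half with offspring
                a, b = pop[rng.integers(15)], pop[rng.integers(15)]
                cut = rng.integers(1, X.shape[1])
                child = np.concatenate([a[:cut], b[cut:]])
                child ^= rng.random(X.shape[1]) < 0.01   # bit-flip mutation
                pop[i] = child
        print("best CV RMSE:", rmse_cv(pop[0]))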

  2. Development of new flux splitting schemes. [computational fluid dynamics algorithms

    Science.gov (United States)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1992-01-01

    Maximizing both accuracy and efficiency has been the primary objective in designing a numerical algorithm for computational fluid dynamics (CFD). This is especially important for solutions of complex three dimensional systems of Navier-Stokes equations which often include turbulence modeling and chemistry effects. Recently, upwind schemes have been well received for their capability in resolving discontinuities. With this in mind, presented are two new flux splitting techniques for upwind differencing. The first method is based on High-Order Polynomial Expansions (HOPE) of the mass flux vector. The second new flux splitting is based on the Advection Upwind Splitting Method (AUSM). The calculation of the hypersonic conical flow demonstrates the accuracy of the splitting in resolving the flow in the presence of strong gradients. A second series of tests involving the two dimensional inviscid flow over a NACA 0012 airfoil demonstrates the ability of the AUSM to resolve the shock discontinuity at transonic speed. A third case calculates a series of supersonic flows over a circular cylinder. Finally, the fourth case deals with tests of a two dimensional shock wave/boundary layer interaction.
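
    The core AUSM idea, splitting the interface Mach number and pressure separately and upwinding the convected quantities, can be sketched for the 1D Euler equations of a perfect gas as follows (a textbook-style sketch, not the authors' implementation):

        import numpy as np

        GAMMA = 1.4

        def ausm_flux(rhoL, uL, pL, rhoR, uR, pR):
            """AUSM flux at one cell interface for the 1D Euler equations (perfect gas)."""
            def sound(rho, p):
                return np.sqrt(GAMMA * p / rho)

            def split(M, p):
                # Subsonic: polynomial splittings; supersonic: fully one-sided.
                if abs(M) <= 1.0:
                    Mp = 0.25 * (M + 1.0) ** 2
                    Mm = -0.25 * (M - 1.0) ** 2
                    pp = 0.25 * p * (M + 1.0) ** 2 * (2.0 - M)
                    pm = 0.25 * p * (M - 1.0) ** 2 * (2.0 + M)
                else:
                    Mp, Mm = max(M, 0.0), min(M, 0.0)
                    pp = p if M > 0 else 0.0
                    pm = p if M < 0 else 0.0
                return Mp, Mm, pp, pm

            aL, aR = sound(rhoL, pL), sound(rhoR, pR)
            MpL, _, ppL, _ = split(uL / aL, pL)
            _, MmR, _, pmR = split(uR / aR, pR)
            M_half = MpL + MmR                      # interface Mach number
            p_half = ppL + pmR                      # interface pressure

            # Upwinded convective vector Phi = (rho*a, rho*a*u, rho*a*H)
            HL = GAMMA / (GAMMA - 1.0) * pL / rhoL + 0.5 * uL * uL
            HR = GAMMA / (GAMMA - 1.0) * pR / rhoR + 0.5 * uR * uR
            PhiL = np.array([rhoL * aL, rhoL * aL * uL, rhoL * aL * HL])
            PhiR = np.array([rhoR * aR, rhoR * aR * uR, rhoR * aR * HR])
            flux = M_half * (PhiL if M_half >= 0.0 else PhiR)
            flux[1] += p_half                       # pressure enters the momentum flux only
            return flux

        print(ausm_flux(1.0, 0.5, 1.0, 0.8, 0.4, 0.9))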

  3. Stochastic geometry, spatial statistics and random fields models and algorithms

    CERN Document Server

    2015-01-01

    Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.

  4. Space resection model calculation based on Random Sample Consensus algorithm

    Science.gov (United States)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection is one of the most important tasks in photogrammetry. It aims to recover the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that, by using the RANSAC method with the direct linear transformation (DLT) model, avoids the difficulty of determining initial values required by the collinearity equations. The results show that this strategy can exclude gross errors and provides an accurate and efficient way to obtain the elements of exterior orientation.
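
    The RANSAC loop itself is generic: repeatedly fit from a minimal sample, count inliers, and keep the largest consensus set. The sketch below uses simple line fitting as an illustrative stand-in for the DLT resection model:

        import numpy as np

        def ransac_line(points, n_iter=500, tol=0.05, rng=None):
            """Fit y = a*x + b robustly; returns params of the largest consensus set."""
            rng = rng or np.random.default_rng(0)
            best_params, best_inliers = None, 0
            for _ in range(n_iter):
                i, j = rng.choice(len(points), size=2, replace=False)
                (x1, y1), (x2, y2) = points[i], points[j]
                if x1 == x2:
                    continue                      # degenerate minimal sample
                a = (y2 - y1) / (x2 - x1)
                b = y1 - a * x1
                residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
                inliers = (residuals < tol).sum()
                if inliers > best_inliers:        # keep largest consensus set
                    best_params, best_inliers = (a, b), inliers
            return best_params, best_inliers

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 1, 100)
        pts = np.column_stack([x, 2 * x + 1 + 0.01 * rng.normal(size=100)])
        pts[:20] = rng.uniform(0, 3, (20, 2))     # inject gross errors
        print(ransac_line(pts))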

  5. Quantification of distention in CT colonography: development and validation of three computer algorithms.

    Science.gov (United States)

    Hung, Peter W; Paik, David S; Napel, Sandy; Yee, Judy; Jeffrey, R Brooke; Steinauer-Gebauer, Andreas; Min, Juno; Jathavedam, Ashwin; Beaulieu, Christopher F

    2002-02-01

    Three bowel distention-measuring algorithms for use at computed tomographic (CT) colonography were developed, validated in phantoms, and applied to a human CT colonographic data set. The three algorithms are the cross-sectional area method, the moving spheres method, and the segmental volume method. Each algorithm effectively quantified distention, but accuracy varied between methods. Clinical feasibility was demonstrated. Depending on the desired spatial resolution and accuracy, each algorithm can quantitatively depict colonic diameter in CT colonography.

  6. Development and Implementation of a Hardware In-the-Loop Test Bed for Unmanned Aerial Vehicle Control Algorithms

    Science.gov (United States)

    Nyangweso, Emmanuel; Bole, Brian

    2014-01-01

    Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.

  7. Outcomes analysis in epistaxis management: development of a therapeutic algorithm.

    Science.gov (United States)

    Shargorodsky, Josef; Bleier, Benjamin S; Holbrook, Eric H; Cohen, Jeffrey M; Busaba, Nicolas; Metson, Ralph; Gray, Stacey T

    2013-09-01

    This study explored the outcomes of epistaxis treatment modalities to optimize management and enable the development of a therapeutic algorithm. Case series with chart review. Tertiary care hospital. Adult patients presenting between 2005 and 2011 with epistaxis underwent cauterization, tamponade, and/or proximal vascular control. Outcomes of treatment modalities were compared. Multivariate logistic regression was used to calculate odds ratios (ORs) and 95% confidence intervals (CIs), adjusting for coagulopathy, hypertension, and bleeding site. The population included 147 patients (94 men, 53 women). For initial epistaxis, nondissolvable packing demonstrated the highest initial treatment failure rate of 57.4% (OR, 3.37; 95% CI, 1.33-8.59 compared with cautery). No significant differences were noted among initial posterior epistaxis treatment modalities. Length of nondissolvable pack placement for 3, 4, or 5 days had no significant impact on recurrence. Among patients who failed initial management, those who next underwent cautery or proximal vascular control required a significantly shorter inpatient stay of 5.3 vs 6.8 days compared with those who underwent packing (OR, 0.16; 95% CI, 0.04-0.68). There were no treatment failures following surgical arterial ligation. Initial management of anterior epistaxis with chemical cautery had a higher success rate and a lower number of total required interventions than did nondissolvable packing. Duration of packing did not affect recurrence. In patients who failed initially, progression to cautery or proximal vascular control led to significantly shorter inpatient stays than did packing.

  8. Ant Colony Optimization Algorithm for Continuous Domains Based on Position Distribution Model of Ant Colony Foraging

    OpenAIRE

    Liqiang Liu; Yuntao Dai; Jinyu Gao

    2014-01-01

    Ant colony optimization algorithm for continuous domains is a major research direction for ant colony optimization algorithm. In this paper, we propose a distribution model of ant colony foraging, through analysis of the relationship between the position distribution and food source in the process of ant colony foraging. We design a continuous domain optimization algorithm based on the model and give the form of solution for the algorithm, the distribution model of pheromone, the update rules...

  9. An Improved Technique Based on Firefly Algorithm to Estimate the Parameters of the Photovoltaic Model

    Directory of Open Access Journals (Sweden)

    Issa Ahmed Abed

    2016-12-01

    Full Text Available This paper presents a method to enhance the firefly algorithm by coupling it with a local search. The constructed technique is applied to identify the parameters of the photovoltaic model, where it proves able to obtain the model parameters. The standard firefly algorithm (FA), the electromagnetism-like (EM) algorithm, and the electromagnetism-like algorithm without local search (EMW) are all compared with the suggested method to test its capability to solve this model.
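
    A minimal sketch of the underlying idea, a standard firefly update combined with a simple greedy local search, applied to a toy objective standing in for the photovoltaic-model fitting error (all parameter values below are illustrative assumptions):

        import numpy as np

        def firefly_minimize(f, bounds, n=20, iters=100, beta0=1.0, gamma=1.0,
                             alpha=0.1, seed=0):
            """Basic firefly algorithm with a greedy local-search polish on the best."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            X = rng.uniform(lo, hi, (n, len(lo)))
            for _ in range(iters):
                F = np.array([f(x) for x in X])
                for i in range(n):
                    for j in range(n):
                        if F[j] < F[i]:           # move i towards brighter firefly j
                            r2 = np.sum((X[i] - X[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)
                            X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=len(lo))
                            X[i] = np.clip(X[i], lo, hi)
                            F[i] = f(X[i])
                # local search: random perturbations around the current best
                b = np.argmin(F)
                for _ in range(10):
                    cand = np.clip(X[b] + 0.01 * rng.normal(size=len(lo)), lo, hi)
                    if f(cand) < F[b]:
                        X[b], F[b] = cand, f(cand)
            return X[np.argmin(F)], F.min()

        # Toy stand-in for a photovoltaic-model fitting error
        sphere = lambda x: float(np.sum(x ** 2))
        x_best, f_best = firefly_minimize(sphere, [(-5, 5)] * 4)
        print(x_best, f_best)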

  10. A DIFFERENTIAL EVOLUTION ALGORITHM DEVELOPED FOR A NURSE SCHEDULING PROBLEM

    Directory of Open Access Journals (Sweden)

    Shahnazari-Shahrezaei, P.

    2012-11-01

    Full Text Available Nurse scheduling is a type of manpower allocation problem that tries to satisfy hospital managers' objectives and nurses' preferences as much as possible by generating fair shift schedules. This paper presents a nurse scheduling problem based on a real case study, and proposes two meta-heuristics, a differential evolution (DE) algorithm and a greedy randomised adaptive search procedure (GRASP), to solve it. To investigate the efficiency of the proposed algorithms, two problems are solved. Furthermore, some comparison metrics are applied to examine the reliability of the proposed algorithms. The computational results in this paper show that the proposed DE outperforms the GRASP.
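
    For reference, the classic DE/rand/1/bin loop at the heart of such an approach looks as follows; the continuous toy objective stands in for the (discrete, constraint-laden) nurse scheduling fitness:

        import numpy as np

        def de_minimize(f, bounds, np_=30, F=0.7, CR=0.9, gens=200, seed=0):
            """Classic DE/rand/1/bin on a box-constrained continuous problem."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            d = len(lo)
            pop = rng.uniform(lo, hi, (np_, d))
            fit = np.array([f(x) for x in pop])
            for _ in range(gens):
                for i in range(np_):
                    a, b, c = rng.choice([k for k in range(np_) if k != i], 3, replace=False)
                    mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
                    cross = rng.random(d) < CR
                    cross[rng.integers(d)] = True         # guarantee one mutant gene
                    trial = np.where(cross, mutant, pop[i])
                    ft = f(trial)
                    if ft <= fit[i]:                      # greedy one-to-one selection
                        pop[i], fit[i] = trial, ft
            best = np.argmin(fit)
            return pop[best], fit[best]

        # Toy continuous stand-in for a (discrete) scheduling cost
        cost = lambda x: float(np.sum((x - 1.5) ** 2))
        print(de_minimize(cost, [(0, 3)] * 8))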

  11. Semi-Implicit Algorithm for Elastoplastic Damage Models Involving Energy Integration

    Directory of Open Access Journals (Sweden)

    Ji Zhang

    2016-01-01

    Full Text Available This study aims to develop a semi-implicit constitutive integration algorithm for a class of elastoplastic damage models where calculation of damage energy release rates involves integration of free energy. The constitutive equations with energy integration are split into the elastic predictor, plastic corrector, and damage corrector. The plastic corrector is solved with an improved format of the semi-implicit spectral return mapping, which is characterized by constant flow direction and plastic moduli calculated at initial yield, enforcement of consistency at the end, and coordinate-independent formulation with an orthogonally similar stress tensor. The tangent stiffness consistent with the updating algorithm is derived. The algorithm is implemented with a recently proposed elastoplastic damage model for concrete, and several typical mechanical tests of reinforced concrete components are simulated. The present semi-implicit algorithm proves to achieve a balance between accuracy, stability, and efficiency compared with the implicit and explicit algorithms and calculate free energy accurately with small time steps.

  12. Model-checking mean-field models: algorithms & applications

    NARCIS (Netherlands)

    Kolesnichenko, Anna Victorovna

    2014-01-01

    Large systems of interacting objects are highly prevalent in today's world. In this thesis we primarily address such large systems in computer science. We model such large systems using mean-field approximation, which allows one to compute the limiting behaviour of an infinite population of identical objects.

  13. Wolff algorithm and anisotropic continuous-spin models: An application to the spin-van der Waals model

    Science.gov (United States)

    D'onorio de Meo, Marco; Oh, Suhk Kun

    1992-07-01

    The problem of applying Wolff's cluster algorithm to anisotropic classical spin models is resolved by modifying a part of the Wolff algorithm. To test the effectiveness of our modified algorithm, the spin-van der Waals model is investigated in detail. Our estimate of the dynamical exponent of the model is z=0.19+/-0.04.

  14. Metaheuristic Algorithm for Solving Biobjective Possibility Planning Model of Location-Allocation in Disaster Relief Logistics

    Directory of Open Access Journals (Sweden)

    Farnaz Barzinpour

    2014-01-01

    Full Text Available Thousands of people are killed and millions are affected by natural disasters every year. Therefore, it is essential to prepare proper response programs that consider the early activities of disaster management. In this paper, a multiobjective model is offered for distribution centers that are located and allocated periodically to the damaged areas in order to distribute relief commodities. The main objectives of this model are minimizing the total costs and maximizing the minimum satisfaction rate, so that the items are distributed fairly. The model simultaneously determines the location of relief distribution centers and the allocation of affected areas to relief distribution centers. Furthermore, an efficient solution approach based on a genetic algorithm has been developed in order to solve the proposed mathematical model. The results of the genetic algorithm are compared with the results provided by a simulated annealing algorithm and the LINGO software. The computational results show that the proposed genetic algorithm provides relatively good solutions in a reasonable time.

  15. Epidemic Modelling by Ripple-Spreading Network and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Jian-Qin Liao

    2013-01-01

    Full Text Available Mathematical analysis and modelling is central to infectious disease epidemiology. This paper, inspired by the natural ripple-spreading phenomenon, proposes a novel ripple-spreading network model for the study of infectious disease transmission. The new epidemic model naturally has good potential for capturing many spatial and temporal features observed in the outbreak of plagues. In particular, a stochastic ripple-spreading process simulates well the effect of random contacts and movements of individuals on the probability of infection, which is usually a challenging issue in epidemic modeling. Some ripple-spreading related parameters, such as the threshold and amplifying factor of nodes, are well suited to describing the importance of individuals’ physical fitness and immunity. The new model is rich in parameters to incorporate many real factors, such as public health services and policies, and it is highly flexible to modifications. A genetic algorithm is used to tune the parameters of the model by referring to historical data of an epidemic. The well-tuned model can then be used for analysis and forecasting purposes. The effectiveness of the proposed method is illustrated by simulation results.

  16. A MATLAB GUI based algorithm for modelling Magnetotelluric data

    Science.gov (United States)

    Timur, Emre; Onsen, Funda

    2016-04-01

    The magnetotelluric method is an electromagnetic survey technique that images the electrical resistivity distribution of subsurface layers. It simultaneously measures the components of the total electromagnetic field, i.e. the time-varying magnetic field B(t) and the induced electric field E(t). Forward modelling of the magnetotelluric method is valuable for survey planning, for understanding the method (especially for students), and as part of the iteration process when inverting measured data. The MTINV program can be used to model and to interpret geophysical electromagnetic (EM) magnetotelluric (MT) measurements using a horizontally layered earth model. This program uses either the apparent resistivity and phase components of the MT data together, or the apparent resistivity data alone. Parameter optimization, based on a linearized inversion method, can be utilized in 1D interpretations. In this study, a new MATLAB GUI-based algorithm has been written for the 1D forward modelling of the magnetotelluric response of multiple layers, for use in educational studies. The code also includes an option to add Gaussian noise at a requested ratio. Numerous applications were carried out and are presented for 2-, 3- and 4-layer models, and the obtained theoretical data were interpreted using MTINV in order to evaluate the initial parameters and the effect of noise. Keywords: Education, Forward Modelling, Inverse Modelling, Magnetotelluric
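
    The 1D forward problem such a tool evaluates is the standard layered-earth impedance recursion. A compact sketch is given below (the paper's tool is in MATLAB; this is an independent Python illustration with made-up layer values):

        import numpy as np

        MU0 = 4e-7 * np.pi

        def mt1d_forward(freqs, resistivities, thicknesses):
            """Apparent resistivity and phase for a 1D layered earth.

            resistivities: one value per layer (last = half-space), ohm-m
            thicknesses:   one value per layer above the half-space, m
            """
            rho_a, phase = [], []
            for f in np.atleast_1d(freqs):
                w = 2 * np.pi * f
                Z = np.sqrt(1j * w * MU0 * resistivities[-1])     # half-space impedance
                for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
                    k = np.sqrt(1j * w * MU0 / rho)               # layer wavenumber
                    zl = 1j * w * MU0 / k                         # intrinsic impedance
                    t = np.tanh(k * h)
                    Z = zl * (Z + zl * t) / (zl + Z * t)          # recurse upward
                rho_a.append(abs(Z) ** 2 / (w * MU0))
                phase.append(np.degrees(np.angle(Z)))
            return np.array(rho_a), np.array(phase)

        rho, ph = mt1d_forward([0.01, 0.1, 1, 10], [100, 10, 1000], [500, 200])
        print(rho, ph)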

  17. New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration

    Science.gov (United States)

    Keshavarz, Kasra; Alizadeh, Hossein

    2017-04-01

    Water resource models are powerful tools to support the water management decision making process and are developed to deal with a broad range of issues, including land use and climate change impact analysis, water allocation, systems design and operation, and waste load control and allocation. These models are divided into the two categories of simulation and optimization models, whose calibration has been widely addressed in the literature. Efforts in recent decades have led to two main categories of auto-calibration methods: uncertainty-based algorithms such as GLUE, MCMC and PEST, and optimization-based algorithms, including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms that benefit from the capabilities of both types, such as SUFI-2, have also been developed, this paper proposes a new auto-calibration algorithm which is capable both of finding optimal parameter values with respect to multiple objectives, like optimization-based algorithms, and of providing interval estimations of parameters, like uncertainty-based algorithms. The algorithm is in fact developed to improve the quality of SUFI-2 results. Based on a single objective, e.g. NSE or RMSE, SUFI-2 proposes a routine to find the best point and interval estimation of parameters and the corresponding prediction intervals (95PPU) of the time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures, the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty; the analyst then has to select the point and interval estimation of parameters which are non-dominated with respect to both uncertainty measures. Based on the described properties of SUFI-2, two important questions are raised, the answering of which is our research motivation: Given that in SUFI-2, final selection is based on the two measures or objectives and on the other

  18. Solar Flare Prediction Model with Three Machine-Learning Algorithms Using Ultraviolet Brightening and Vector Magnetogram

    CERN Document Server

    Nishizuka, N; Kubo, Y; Den, M; Watari, S; Ishii, M

    2016-01-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 h. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010-2015, such as vector magnetogram, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions from the full-disk magnetogram, from which 60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine learning algorithms: the support vector machine (SVM), k-nearest neighbors (k-NN), and ...

  19. Mathematical Model and Algorithm for the Reefer Mechanic Scheduling Problem at Seaports

    Directory of Open Access Journals (Sweden)

    Jiantong Zhang

    2017-01-01

    Full Text Available With the development of seaborne logistics, the international trade of goods transported in refrigerated containers is growing fast. Refrigerated containers, also known as reefers, are used in the transportation of temperature-sensitive cargo, such as perishable fruits. This trend brings new challenges to terminal managers, namely how to efficiently arrange mechanics to plug and unplug power for the reefers (i.e., tasks at yards). This work investigates the reefer mechanic scheduling problem at container ports. To minimize the sum of the total tardiness of all tasks and the total working distance of all mechanics, we formulate a mathematical model. For the resolution of this problem, we propose a DE algorithm combined with efficient heuristics, local search strategies, and a parameter adaptation scheme. The proposed algorithm is tested and validated through numerical experiments. Computational results demonstrate the effectiveness and efficiency of the proposed algorithm.

  20. A simple algorithm to estimate genetic variance in an animal threshold model using Bayesian inference

    Directory of Open Access Journals (Sweden)

    Heringstad Bjørg

    2010-07-01

    Full Text Available Abstract Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative" or "non-informative" with respect to genetic (co)variance components. The "non-informative" individuals are characterized by their Mendelian sampling deviations (deviations from the mid-parent mean) being completely confounded with a single residual on the underlying liability scale. For threshold models, residual variance on the underlying scale is not identifiable. Hence, the variance of fully confounded Mendelian sampling deviations cannot be identified either, but can be inferred from the between-family variation. In the new algorithm, breeding values are sampled as in a standard animal model using the full relationship matrix, but genetic (co)variance components are inferred from the sampled breeding values and relationships between "informative" individuals (usually parents) only. The latter is analogous to a sire-dam model (in cases with no individual records on the parents). Results When applied to simulated data sets, the standard animal threshold model failed to produce useful results since samples of genetic variance always drifted towards infinity, while the new algorithm produced proper parameter estimates essentially identical to the results from a sire-dam model (given the fact that no individual records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to

  1. An improved fiber tracking algorithm based on fiber assignment using the continuous tracking algorithm and two-tensor model

    Institute of Scientific and Technical Information of China (English)

    Liuhong Zhu; Gang Guo

    2012-01-01

    This study tested an improved fiber tracking algorithm based on fiber assignment using a continuous tracking algorithm and a two-tensor model. Different models and tracking decisions were applied according to the type of tensor estimate in each voxel, which should solve the fiber-crossing problem. This study included eight healthy subjects, two axonal injury patients and seven demyelinating disease patients. The new algorithm clearly exhibited differences in nerve fiber direction between the axonal injury and demyelinating disease patients and the healthy control subjects. Compared with fiber assignment using a continuous tracking algorithm alone, our novel method can track more and longer nerve fibers, and it can also solve the fiber crossing problem.

  2. Integer programming model for optimizing bus timetable using genetic algorithm

    Science.gov (United States)

    Wihartiko, F. D.; Buono, A.; Silalahi, B. P.

    2017-01-01

    Bus timetables give passengers information that ensures the availability of bus services. A timetable is optimal when the bus trip frequency adapts to passenger demand. At peak times, the number of bus trips should be larger than at off-peak times. If trips are more frequent than the optimal condition requires, the operating cost for the bus operator is high. Conversely, if there are fewer trips than the optimal condition requires, service quality for passengers deteriorates. In this paper, the bus timetabling problem is solved by an integer programming model with a modified genetic algorithm. Modifications were made to the chromosome design, the initial population recovery technique, chromosome reconstruction, and chromosome extermination at specific generations. The model gave the optimal solution with an accuracy of 99.1%.

  3. An Intelligent Model for Pairs Trading Using Genetic Algorithms.

    Science.gov (United States)

    Huang, Chien-Feng; Hsu, Chi-Jen; Chen, Chi-Chung; Chang, Bao Rong; Li, Chen-An

    2015-01-01

    Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.

  4. An Algorithm for Solution of an Interval Valued EOQ Model

    Directory of Open Access Journals (Sweden)

    Susovan CHAKRABORTTY

    2013-01-01

    Full Text Available This paper deals with the problem of determining the economic order quantity (EOQ) in the interval sense. A purchasing inventory model with shortages and lead time is considered, whose carrying cost, shortage cost, setup cost, demand quantity and lead time are treated as interval numbers instead of real numbers. First, a brief survey of the existing works on comparing and ranking any two interval numbers on the real line is presented. A common algorithm for the optimum production quantity (economic lot-size) per cycle of a single product (so as to minimize the total average cost) is developed, which works well on the interval number optimization under consideration. A numerical example is presented for better understanding of the solution procedure. Finally, a sensitivity analysis of the optimal solution with respect to the parameters of the model is carried out.

  6. The development of Advanced robotic technology - Development of target-tracking algorithm for remote-control robot system

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dong Sun; Lee, Joon Whan; Kim, Hyong Suk; Yoon, Sook; Lee, Jin Ho; Han, Jeong Soo; Baek, Seong Hyun; Choi, Gap Chu [Chonbuk National University, Chonju (Korea, Republic of)

    1996-07-01

    The utilization of remote-control robot systems in atomic power plants or nuclear-related facilities is growing rapidly, to protect workers from high-radiation environments. Such applications require complete stability of the robot system, so precise tracking of the robot is essential for the whole system. This research aims to accomplish that goal by developing appropriate algorithms for remote-control robot systems. The research consists of two different approaches: target-tracking systems using Kalman filters and neural networks. The tracking system under study uses vision sensors to obtain features of targets. A Kalman filter model using the moving-position estimation technique is designed and tested for tracking an object moving in a circle. Attributes of the tracked object are investigated and the best features are extracted from the input imagery for the Kalman filter model. A neural network tracking system is designed and tested to trace a robot end-effector. This model aims to utilize the excellent capabilities of neural networks: nonlinear mapping between inputs and outputs, learning capability, and generalization capability. The neural tracker consists of two networks for position detection and prediction. Tracking algorithms are developed and tested for the two models. The experimental results show that both models are promising as real-time target-tracking systems for remote-control robot systems. 20 refs., 34 figs. (author)
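
    A minimal constant-velocity Kalman filter for 2D target tracking, the kind of moving-position estimator described above, can be sketched as follows (the matrices and noise levels are illustrative assumptions, not the report's values):

        import numpy as np

        def make_cv_kalman(dt=0.1, q=0.1, r=0.5):
            """Constant-velocity Kalman filter matrices for 2D tracking.

            State x = [px, py, vx, vy]; measurement z = [px, py] from the vision sensor.
            """
            F = np.eye(4); F[0, 2] = F[1, 3] = dt          # state transition
            H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # position measurement
            Q = q * np.eye(4)                              # process noise (assumed)
            R = r * np.eye(2)                              # measurement noise (assumed)
            return F, H, Q, R

        def kf_step(x, P, z, F, H, Q, R):
            # predict
            x = F @ x
            P = F @ P @ F.T + Q
            # update
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        F, H, Q, R = make_cv_kalman()
        x, P = np.zeros(4), np.eye(4)
        rng = np.random.default_rng(0)
        for t in range(50):                                # target on a circular path
            truth = np.array([np.cos(0.1 * t), np.sin(0.1 * t)])
            z = truth + 0.1 * rng.normal(size=2)
            x, P = kf_step(x, P, z, F, H, Q, R)
        print("final position estimate:", x[:2])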

  7. [Study on the Application of NAS-Based Algorithm in the NIR Model Optimization].

    Science.gov (United States)

    Geng, Ying; Xiang, Bing-ren; He, Lan

    2015-10-01

    In this paper, the net analysis signal (NAS)-based concept was introduced to the analysis of multi-component Ginkgo biloba leaf extracts. The NAS algorithm was utilized for the preprocessing of spectra, and NAS-based two-dimensional correlation analysis was used to optimize NIR model building. Simultaneous quantitative models for three flavonol aglycones, quercetin, kaempferol and isorhamnetin, were established. The NAS vectors calculated using two algorithms, introduced from Lorber and from Goicoechea and Olivieri (HLA/GO), were applied in the development of calibration models, and the reconstructed spectra were used as input for PLS modeling. For the first time, NAS-based two-dimensional correlation spectroscopy was used for wavenumber selection. The regions appearing on the main diagonal were selected as useful regions for model building. The results implied that the two NAS-based preprocessing methods were successfully used for the analysis of quercetin, kaempferol and isorhamnetin, with a decrease in the number of factors and an improvement in model robustness. The NAS-based algorithm proved to be a useful tool for the preprocessing of spectra and for the optimization of model calibration. This research shows the practical application value of NIRS in the analysis of complex multi-component medicines with unknown interference.

  8. Application of stochastic weighted algorithms to a multidimensional silica particle model

    Energy Technology Data Exchange (ETDEWEB)

    Menz, William J. [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom); Patterson, Robert I.A.; Wagner, Wolfgang [Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstrasse 39, Berlin 10117 (Germany); Kraft, Markus, E-mail: mk306@cam.ac.uk [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom)

    2013-09-01

    Highlights: • Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. • An implementation of SWAs with the transition kernel is presented. • The SWAs’ solutions converge to the direct simulation algorithm’s (DSA) solution. • The efficiency of SWAs is evaluated for this multidimensional particle model. • It is shown that SWAs can be used for coagulation problems in industrial systems. -- Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high dimensional particle models to simulate real-world systems.

  9. Identification of Hammerstein Model Based on Quantum Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Hai Li

    2013-07-01

    Full Text Available Nonlinear system identification is a main topic of modern identification. A new method for nonlinear system identification is presented using the Quantum Genetic Algorithm (QGA). The problem of nonlinear system identification is cast as function optimization over parameter space, and the Quantum Genetic Algorithm is adopted to solve the optimization problem. Simulation experiments show that, compared with the genetic algorithm, the quantum genetic algorithm is an effective swarm intelligence algorithm: its salient features are few algorithm parameters, a small population size, and the use of quantum gates to update the population, which greatly improve the speed and accuracy of the identification. Simulation results show the effectiveness of the proposed method.

  10. Performance of a distributed DCA algorithm under inhomogeneous traffic modelled from an operational GSM network

    NARCIS (Netherlands)

    Kennedy, K.D.; Vries, E.T. de; Koorevaar, P.

    1998-01-01

    This paper presents results obtained from two different Dynamic Channel Allocation (DCA) algorithms, namely the Timid and Persistent Polite Aggressive (PPA) algorithms, simulated under both static homogeneous and dynamic inhomogeneous traffic. The dynamic inhomogeneous traffic is modelled upon real

  11. Correlation of thermal mathematical models for thermal control of space vehicles by means of genetic algorithms

    Science.gov (United States)

    Anglada, Eva; Garmendia, Iñaki

    2015-03-01

    The design of the thermal control system of space vehicles, needed to maintain the equipment components within their admissible range of temperatures, is usually developed by means of thermal mathematical models. These thermal mathematical models need to be correlated with the equipment's real behavior registered during the thermal test campaign, in order to adapt them to the real state of the vehicle "as built". The correlation of this type of mathematical models is a very complex task, usually based on manual procedures, which requires a big effort in time and cost. For this reason, the development of methodologies able to perform this correlation automatically would be a key aspect in the improvement of space vehicle thermal control design and validation. The implementation, study and validation of a genetic algorithm able to perform this type of correlation in an automated way are presented in this paper. The study and validation of the algorithm have been performed based on a simplified model of a real space instrument. The algorithm is able to correlate thermal mathematical models in steady state and transient analyses, and it is also able to perform the simultaneous correlation of several cases, as for example hot and cold cases.

  12. Contact Modelling in Resistance Welding, Part I: Algorithms and Numerical Verification

    DEFF Research Database (Denmark)

    Song, Quanfeng; Zhang, Wenqi; Bay, Niels

    2006-01-01

    Finite element analysis of resistance welding involves contact problems between different parts. The contact problem in resistance welding includes not only mechanical contact but also thermal and electrical contact. In this paper a contact model based on the penalty method is developed for simulation of resistance spot and projection welding. After a description of the algorithms, several numerical examples are presented to validate the mechanical contact algorithm.

  13. Development of a new time domain-based algorithm for train detection and axle counting

    Science.gov (United States)

    Allotta, B.; D'Adamio, P.; Meli, E.; Pugi, L.

    2015-12-01

    This paper presents an innovative train detection algorithm able to localise the train and, at the same time, estimate its speed, its crossing times at a fixed point on the track, and its axle count. The proposed solution uses the same approach to evaluate all these quantities, starting from generic track inputs directly measured on the track (for example, the vertical forces on the sleepers, the rail deformation and the rail stress). More particularly, all the inputs are processed through cross-correlation operations to extract the required information in terms of speed, crossing time instants and axle count. This approach has the advantage of being simple and less invasive than standard ones (it requires less equipment) and represents a more reliable and robust solution against numerical noise because it exploits the whole shape of the input signal and not only the peak values. A suitable and accurate multibody model of a railway vehicle and flexible track has also been developed by the authors to test the algorithm when experimental data are not available and, in general, under any operating conditions (fundamental to verify the algorithm accuracy and robustness). The railway vehicle chosen as benchmark is the Manchester Wagon, modelled in the Adams VI-Rail environment. The physical model of the flexible track has been implemented in the Matlab and Comsol Multiphysics environments. A simulation campaign has been performed to verify the performance and the robustness of the proposed algorithm, and the results are quite promising. The research has been carried out in cooperation with Ansaldo STS and ECM Spa.
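
    The cross-correlation core of the method is easy to illustrate: with two sensors a known distance apart, the lag of the correlation peak gives the transit time and hence the speed. The signals, spacing and sampling rate below are synthetic stand-ins, not the paper's measurements:

        import numpy as np

        def estimate_speed(sig_a, sig_b, fs, sensor_spacing):
            """Train speed from two track sensors via the cross-correlation peak lag."""
            a = sig_a - sig_a.mean()
            b = sig_b - sig_b.mean()
            xcorr = np.correlate(b, a, mode="full")
            lag = np.argmax(xcorr) - (len(a) - 1)     # samples by which b trails a
            transit_time = lag / fs
            return sensor_spacing / transit_time

        fs, spacing, v_true = 1000.0, 5.0, 25.0       # Hz, m, m/s
        t = np.arange(0, 2, 1 / fs)
        pulse = lambda t0: np.exp(-((t - t0) / 0.01) ** 2)
        sig_a = pulse(0.5) + pulse(0.62)              # two axles passing sensor A
        delay = spacing / v_true
        sig_b = pulse(0.5 + delay) + pulse(0.62 + delay)
        print("estimated speed:", estimate_speed(sig_a, sig_b, fs, spacing), "m/s")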

  14. The effect of different log P algorithms on the modeling of the soil sorption coefficient of nonionic pesticides.

    Science.gov (United States)

    dos Reis, Ralpho Rinaldo; Sampaio, Silvio César; de Melo, Eduardo Borges

    2013-10-01

    Collecting data on the effects of pesticides on the environment is a slow and costly process. Therefore, significant efforts have been focused on the development of models that predict physical, chemical or biological properties of environmental interest. The soil sorption coefficient normalized to the organic carbon content (Koc) is a key parameter that is used in environmental risk assessments. Thus, several log Koc prediction models that use the hydrophobic parameter log P as a descriptor have been reported in the literature. Often, algorithms are used to calculate the value of log P due to the lack of experimental values for this property. Despite the availability of various algorithms, previous studies fail to describe the procedure used to select the appropriate algorithm. In this study, models that correlate log Koc with log P were developed for a heterogeneous group of nonionic pesticides using different freeware algorithms. The statistical qualities and predictive power of all of the models were evaluated. Thus, this study was conducted to assess the effect of the log P algorithm choice on log Koc modeling. The results clearly demonstrate that the lack of a selection criterion may result in inappropriate prediction models. Seven algorithms were tested, of which only two (ALOGPS and KOWWIN) produced good results. A sensible choice may result in simple models with statistical qualities and predictive power values that are comparable to those of more complex models. Therefore, the selection of the appropriate log P algorithm for modeling log Koc cannot be arbitrary but must be based on the chemical structure of compounds and the characteristics of the available algorithms.
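
    The models in question are simple univariate regressions of log Koc on log P. The sketch below, with illustrative made-up data points, shows why the choice of log P algorithm matters: it changes the x-values fed to the fit and hence the model itself:

        import numpy as np

        # Hypothetical (log P, log Koc) pairs for nonionic pesticides -- illustrative only
        log_p   = np.array([1.2, 2.1, 2.8, 3.3, 4.0, 4.6, 5.2])
        log_koc = np.array([1.5, 2.0, 2.4, 2.7, 3.2, 3.5, 4.0])

        slope, intercept = np.polyfit(log_p, log_koc, 1)   # log Koc = a*log P + b
        pred = slope * log_p + intercept
        r2 = 1 - np.sum((log_koc - pred) ** 2) / np.sum((log_koc - log_koc.mean()) ** 2)
        print(f"log Koc = {slope:.2f} * log P + {intercept:.2f}  (r^2 = {r2:.3f})")
        # Swapping in a different log P algorithm changes the x-values and therefore
        # the fitted slope and intercept -- the effect the study quantifies.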

  15. Imaging metallic samples using electrical capacitance tomography: forward modelling and reconstruction algorithms

    Science.gov (United States)

    Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.

    2016-11-01

    Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within the sensing region. So far, ECT has been used primarily to image non-conductive media, since if the conductivity of the imaged object is high, the capacitance measuring circuit is almost short-circuited by the conduction path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples with conventional ECT systems by investigating the two main aspects of image reconstruction, namely the forward problem and the inverse problem. For the forward problem, two different methods of modelling the region of high conductivity in ECT are presented. For the inverse problem, three different algorithms to reconstruct the high-contrast images are examined. The first two, the linear single-step Tikhonov method and the iterative total variation regularization method, use two sets of ECT data to reconstruct the image in time-difference mode. The third, the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward model, especially the level set algorithm, which finds the boundary of the metal.
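
    As a rough illustration of the linear single-step Tikhonov method mentioned above, the sketch below solves the regularized normal equations (JᵀJ + αI)x = JᵀΔc for a permittivity change. The sensitivity matrix, noise level and regularization weight are invented stand-ins, not values from the paper.

    ```python
    import numpy as np

    def tikhonov_single_step(J, dc, alpha):
        """One-step linearized reconstruction: solve
        (J^T J + alpha I) x = J^T dc for the permittivity change x."""
        n = J.shape[1]
        return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ dc)

    # Toy example: a random matrix stands in for the ECT forward
    # model's Jacobian (m capacitance measurements, n pixels)
    rng = np.random.default_rng(0)
    m, n = 66, 400                      # e.g. 12-electrode ECT: 66 pairs
    J = rng.standard_normal((m, n))
    x_true = np.zeros(n); x_true[180:220] = 1.0   # "metal" inclusion
    dc = J @ x_true + 0.01 * rng.standard_normal(m)

    x_rec = tikhonov_single_step(J, dc, alpha=1e-1)
    print("reconstruction correlation:",
          np.corrcoef(x_true, x_rec)[0, 1].round(3))
    ```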

  16. A self-organizing algorithm for modeling protein loops.

    Directory of Open Access Journals (Sweden)

    Pu Liu

    2009-08-01

    Full Text Available Protein loops, the flexible short segments connecting two stable secondary structural units in proteins, play a critical role in protein structure and function. Constructing chemically sensible conformations of protein loops that seamlessly bridge the gap between the anchor points without introducing any steric collisions remains an open challenge. A variety of algorithms have been developed to tackle the loop closure problem, ranging from inverse kinematics to knowledge-based approaches that utilize pre-existing fragments extracted from known protein structures. However, many of these approaches focus on the generation of conformations that mainly satisfy the fixed end point condition, leaving the steric constraints to be resolved in subsequent post-processing steps. In the present work, we describe a simple solution that simultaneously satisfies not only the end point and steric conditions, but also chirality and planarity constraints. Starting from random initial atomic coordinates, each individual conformation is generated independently by using a simple alternating scheme of pairwise distance adjustments of randomly chosen atoms, followed by fast geometric matching of the conformationally rigid components of the constituent amino acids. The method is conceptually simple, numerically stable and computationally efficient. Very importantly, additional constraints, such as those derived from NMR experiments, hydrogen bonds or salt bridges, can be incorporated into the algorithm in a straightforward and inexpensive way, making the method ideal for solving more complex multi-loop problems. The remarkable performance and robustness of the algorithm are demonstrated on a set of protein loops of length 4, 8, and 12 that have been used in previous studies.

  17. The production-distribution problem with order acceptance and package delivery: models and algorithm

    Directory of Open Access Journals (Sweden)

    Khalili Majid

    2016-01-01

    Full Text Available Production planning and distribution are among the most important decisions in the supply chain. Classically, it is assumed that all orders have to be produced and delivered separately, while in practice an order may be rejected if the cost it brings to the supply chain exceeds its revenue. Moreover, orders can be delivered in batches to reduce the related costs. This paper considers the production planning and distribution problem with order acceptance and package delivery to maximize profit. First, a new mathematical model based on mixed integer linear programming is developed. Using commercial optimization software, the model can be solved optimally for small and even medium-sized instances. For large instances, a solution method based on the imperialist competitive algorithm is also proposed. The proposed model and algorithm are evaluated through numerical experiments.

  18. The Integration of Cooperation Model and Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In photogrammetry, some researchers have applied genetic algorithms to aerial image texture classification and to reducing hyper-spectral remote sensing data. A genetic algorithm can rapidly find solutions that are close to the optimal solution, but it does not easily find the optimal solution itself. To address this problem, a cooperative evolution approach integrating the genetic algorithm and the ant colony algorithm is presented in this paper. Building on the strengths of the ant colony algorithm, the integrated method overcomes this drawback of the genetic algorithm. Moreover, the paper takes the design of texture classification masks for aerial images as an example to illustrate the integration theory and procedure.

  19. GLASS Daytime All-Wave Net Radiation Product: Algorithm Development and Preliminary Validation

    Directory of Open Access Journals (Sweden)

    Bo Jiang

    2016-03-01

    Full Text Available Mapping surface all-wave net radiation (Rn) is critically needed for various applications. Several existing Rn products from numerical models and satellite observations have coarse spatial resolutions and their accuracies may not meet the requirements of land applications. In this study, we develop the Global LAnd Surface Satellite (GLASS) daytime Rn product at a 5 km spatial resolution. Its algorithm for converting shortwave radiation to all-wave net radiation using the Multivariate Adaptive Regression Splines (MARS) model is determined after comparison with three other algorithms. The validation of the GLASS Rn product based on high-quality in situ measurements in the United States shows a coefficient of determination value of 0.879, an average root mean square error value of 31.61 Wm−2, and an average bias of −17.59 Wm−2. We also compare our product/algorithm with another satellite product (CERES-SYN) and two reanalysis products (MERRA and JRA55), and find that the accuracy of the much higher spatial resolution GLASS Rn product is satisfactory. The GLASS Rn product from 2000 to the present is operational and freely available to the public.

  20. Efficient decoding algorithms for generalized hidden Markov model gene finders

    Directory of Open Access Journals (Sweden)

    Delcher Arthur L

    2005-01-01

    Full Text Available Abstract Background The Generalized Hidden Markov Model (GHMM) has proven a useful framework for the task of computational gene prediction in eukaryotic genomes, due to its flexibility and probabilistic underpinnings. As the focus of the gene finding community shifts toward the use of homology information to improve prediction accuracy, extensions to the basic GHMM model are being explored as possible ways to integrate this homology information into the prediction process. Particularly prominent among these extensions are those techniques which call for the simultaneous prediction of genes in two or more genomes at once, thereby increasing significantly the computational cost of prediction and highlighting the importance of speed and memory efficiency in the implementation of the underlying GHMM algorithms. Unfortunately, the task of implementing an efficient GHMM-based gene finder is already a nontrivial one, and it can be expected that this task will only grow more onerous as our models increase in complexity. Results As a first step toward addressing the implementation challenges of these next-generation systems, we describe in detail two software architectures for GHMM-based gene finders, one comprising the common array-based approach, and the other a highly optimized algorithm which requires significantly less memory while achieving virtually identical speed. We then show how both of these architectures can be accelerated by a factor of two by optimizing their content sensors. We finish with a brief illustration of the impact these optimizations have had on the feasibility of our new homology-based gene finder, TWAIN. Conclusions In describing a number of optimizations for GHMM-based gene finders and making available two complete open-source software systems embodying these methods, it is our hope that others will be more enabled to explore promising extensions to the GHMM framework, thereby improving the state-of-the-art in gene prediction.
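
    The decoding task these architectures optimize is, at its core, Viterbi-style dynamic programming. The sketch below is a plain-HMM Viterbi decoder in log space, given as a minimal illustration only: a GHMM gene finder additionally models explicit state durations and content sensors, which this sketch omits, and all probabilities here are invented.

    ```python
    import numpy as np

    def viterbi(obs, log_pi, log_A, log_B):
        """Most probable state path for a plain HMM (log-space).
        GHMM gene finders generalize this with explicit state durations."""
        T, S = len(obs), log_pi.shape[0]
        delta = np.empty((T, S)); psi = np.zeros((T, S), dtype=int)
        delta[0] = log_pi + log_B[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + log_A     # S x S predecessor scores
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
        path = np.empty(T, dtype=int)
        path[-1] = delta[-1].argmax()
        for t in range(T - 2, -1, -1):                 # backtrace
            path[t] = psi[t + 1, path[t + 1]]
        return path

    # Toy 2-state example (hypothetical "intergenic"/"exon" states over
    # a 4-letter alphabet); probabilities are illustrative only
    pi = np.log(np.array([0.7, 0.3]))
    A  = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
    B  = np.log(np.array([[0.3, 0.3, 0.2, 0.2], [0.1, 0.2, 0.3, 0.4]]))
    obs = np.array([0, 1, 3, 3, 2, 3, 0, 1])
    print(viterbi(obs, pi, A, B))
    ```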

  1. [A Hyperspectral Imagery Anomaly Detection Algorithm Based on Gauss-Markov Model].

    Science.gov (United States)

    Gao, Kun; Liu, Ying; Wang, Li-jing; Zhu, Zhen-yu; Cheng, Hao-bo

    2015-10-01

    With the development of spectral imaging technology, hyperspectral anomaly detection is becoming more widely used in remote sensing image processing. The traditional RX anomaly detection algorithm neglects the spatial correlation of images, and it does not effectively reduce the data dimensionality, which makes it computationally expensive and poorly suited to hyperspectral data. Hyperspectral images follow a Gauss-Markov Random Field (GMRF) model in the spatial and spectral dimensions. The inverse of the covariance matrix can be calculated directly from the Gauss-Markov parameters, which avoids a huge computation over the hyperspectral data. This paper proposes an improved RX anomaly detection algorithm based on a three-dimensional GMRF. The hyperspectral imagery is modelled with the GMRF, and the GMRF parameters are estimated with the approximated maximum likelihood method. The detection operator is constructed from the estimated GMRF parameters. Each pixel under test is taken as the centre of a local optimization window, called the GMRF detection window. The degree of abnormality is calculated from the mean vector and inverse covariance matrix, both computed within the window, and the image is processed pixel by pixel as the GMRF window moves. The traditional RX detection algorithm, a regional hypothesis detection algorithm based on the GMRF, and the algorithm proposed in this paper are tested on AVIRIS hyperspectral data. The results show that the proposed anomaly detection method improves detection efficiency and reduces the false alarm rate. Operation-time statistics for the three algorithms in the same computing environment show that the proposed algorithm reduces the operation time by 45.2%, demonstrating good computational efficiency.
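
    For context, the classical (global) RX detector that the paper improves on can be sketched in a few lines: it scores each pixel by the Mahalanobis distance of its spectrum from the scene mean. The GMRF variant replaces the sample covariance inverse with one built from estimated GMRF parameters and evaluates it in a moving local window; the sketch below shows only the classical baseline on synthetic data.

    ```python
    import numpy as np

    def rx_detector(cube):
        """Classical (global) RX anomaly detector: Mahalanobis distance of
        each pixel spectrum from the global mean. The paper's GMRF variant
        instead builds the inverse covariance from model parameters and
        evaluates it within a sliding local window."""
        h, w, bands = cube.shape
        X = cube.reshape(-1, bands)
        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        d = X - mu
        scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)
        return scores.reshape(h, w)

    # Synthetic cube with one anomalous pixel
    rng = np.random.default_rng(1)
    cube = rng.normal(size=(32, 32, 20))
    cube[16, 16] += 4.0                   # spectral anomaly
    scores = rx_detector(cube)
    print("anomaly at:", np.unravel_index(scores.argmax(), scores.shape))
    ```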

  2. Semi-Implicit Algorithm for Elastoplastic Damage Models Involving Energy Integration

    OpenAIRE

    Ji Zhang; Jie Li

    2016-01-01

    This study aims to develop a semi-implicit constitutive integration algorithm for a class of elastoplastic damage models where calculation of damage energy release rates involves integration of free energy. The constitutive equations with energy integration are split into the elastic predictor, plastic corrector, and damage corrector. The plastic corrector is solved with an improved format of the semi-implicit spectral return mapping, which is characterized by constant flow direction and plas...

  3. A genetic algorithm for optimizing multi-pole Debye models of tissue dielectric properties

    Science.gov (United States)

    Clegg, J.; Robinson, M. P.

    2012-10-01

    Models of tissue dielectric properties (permittivity and conductivity) enable the interactions of tissues and electromagnetic fields to be simulated, which has many useful applications in microwave imaging, radio propagation, and non-ionizing radiation dosimetry. Parametric formulae are available, based on a multi-pole model of tissue dispersions, but although they give the dielectric properties over a wide frequency range, they do not convert easily to the time domain. An alternative is the multi-pole Debye model which works well in both time and frequency domains. Genetic algorithms are an evolutionary approach to optimization, and we found that this technique was effective at finding the best values of the multi-Debye parameters. Our genetic algorithm optimized these parameters to fit to either a Cole-Cole model or to measured data, and worked well over wide or narrow frequency ranges. Over 10 Hz-10 GHz the best fits for muscle, fat or bone were each found for ten dispersions or poles in the multi-Debye model. The genetic algorithm is a fast and effective method of developing tissue models that compares favourably with alternatives such as the rational polynomial fit.
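
    A minimal sketch of the multi-pole Debye model and a GA-style fitness function is given below, assuming the standard form ε(ω) = ε∞ + Σₖ Δεₖ/(1 + jωτₖ) + σ/(jωε₀); the parameter packing and all numerical values are illustrative, not the paper's tissue parameters.

    ```python
    import numpy as np

    EPS0 = 8.854e-12  # vacuum permittivity [F/m]

    def debye_permittivity(f, eps_inf, d_eps, tau, sigma):
        """Multi-pole Debye model:
        eps(w) = eps_inf + sum_k d_eps[k]/(1 + j*w*tau[k]) + sigma/(j*w*EPS0)"""
        w = 2 * np.pi * np.asarray(f)
        poles = sum(de / (1 + 1j * w * t) for de, t in zip(d_eps, tau))
        return eps_inf + poles + sigma / (1j * w * EPS0)

    def fitness(params, f, eps_target, n_poles):
        """Candidate fitness for a GA: misfit between model and target
        permittivity (smaller is better). The packing is one choice among
        many: [eps_inf, d_eps_1..n, tau_1..n, sigma]."""
        eps_inf, sigma = params[0], params[-1]
        d_eps = params[1:1 + n_poles]
        tau = params[1 + n_poles:1 + 2 * n_poles]
        eps = debye_permittivity(f, eps_inf, d_eps, tau, sigma)
        return np.mean(np.abs(eps - eps_target) ** 2)

    # Illustrative 2-pole target (values are not real tissue parameters)
    f = np.logspace(2, 9, 50)
    target = debye_permittivity(f, 4.0, [50.0, 1000.0], [8e-12, 1e-7], 0.2)
    guess = np.array([5.0, 40.0, 900.0, 1e-11, 2e-7, 0.1])
    print("misfit of guess:", fitness(guess, f, target, n_poles=2))
    ```

    A GA would evolve a population of such parameter vectors, selecting on this misfit; a ten-pole model, as the paper found optimal over 10 Hz-10 GHz, simply widens the chromosome.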

  5. Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Forest, Mark Gregory [University of North Carolina at Chapel Hill

    2014-05-06

    The team for this Project made significant progress on modeling and algorithmic approaches to hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the longtime behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions, to small volumes and the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids are extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.

  6. Evaluation of Species Distribution Model Algorithms For Fine-Scale Container Breeding Mosquito Risk Prediction

    Science.gov (United States)

    Khatchikian, C.; Sangermano, F.; Kendell, D.; Livdahl, T.

    2010-01-01

    The present work evaluates the use of species distribution model (SDM) algorithms to classify areas with a high density of small-container-breeding Aedes mosquitoes at a fine scale in the Bermuda islands. Weekly ovitrap data collected by the Health Department of Bermuda (UK) for the years 2006 and 2007 were used for the models. The algorithms evaluated were Bioclim, Domain, GARP, logistic regression, and MaxEnt. Models were evaluated according to performance and robustness. The area under the Receiver Operating Characteristic (ROC) curve was used to evaluate each model's performance, and robustness was assessed by considering the spatial correlation between classification risks for the two datasets. Relative to the other algorithms, logistic regression was the best model for classifying high-risk areas, and the maximum entropy approach (MaxEnt) presented the second-best performance. We report the importance of covariables for these two models, and discuss the utility of SDMs for vector control efforts and the potential for the development of scripts that automate the task of creating risk assessment maps. PMID:21198711

  7. Identification of Hammerstein Model Based on Quantum Genetic Algorithm

    OpenAIRE

    Zhang Hai Li

    2013-01-01

    Nonlinear system identification is a main topic of modern identification. A new method for nonlinear system identification is presented using the Quantum Genetic Algorithm (QGA). The problems of nonlinear system identification are cast as function optimization over parameter space, and the Quantum Genetic Algorithm is adopted to solve the optimization problem. Simulation experiments show that, compared with the genetic algorithm, the quantum genetic algorithm is an effective swarm intelligence algorith...

  8. Developing a Learning Algorithm-Generated Empirical Relaxer

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Wayne [Univ. of Colorado, Boulder, CO (United States). Dept. of Applied Math; Kallman, Josh [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Toreja, Allen [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gallagher, Brian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jiang, Ming [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Laney, Dan [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-03-30

    One of the main difficulties when running Arbitrary Lagrangian-Eulerian (ALE) simulations is determining how much to relax the mesh during the Eulerian step. This determination is currently made by the user on a simulation-by-simulation basis. We present a Learning Algorithm-Generated Empirical Relaxer (LAGER) which uses a regressive random forest algorithm to automate this decision process. We also demonstrate that LAGER successfully relaxes a variety of test problems, maintains simulation accuracy, and has the potential to significantly decrease both the person-hours and computational hours needed to run a successful ALE simulation.
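
    As a rough sketch of the regression setup described above, the example below trains scikit-learn's RandomForestRegressor as a stand-in for LAGER's internal learner; the features (mesh distortion, vorticity, CFL number) and training labels are hypothetical, not the features LAGER actually uses.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical training set: per-zone mesh/flow features collected
    # from previous ALE runs, labelled with the relaxation amount that a
    # user (or a successful run) applied. Feature names are illustrative.
    rng = np.random.default_rng(0)
    X = rng.random((500, 3))              # [mesh_distortion, vorticity, cfl]
    y = 0.8 * X[:, 0] + 0.2 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(500)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, y)

    # At run time, predict how much to relax each zone this Eulerian step
    zones = rng.random((4, 3))
    print(model.predict(zones).round(3))
    ```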

  9. Development of the Algorithm for Energy Efficiency Improvement of Bulk Material Transport System

    Directory of Open Access Journals (Sweden)

    Milan Bebic

    2013-06-01

    Full Text Available The paper presents a control strategy for a system of belt conveyors with adjustable speed drives based on the principle of optimum energy consumption. Different algorithms are developed for generating the reference speed of the belt conveyor system in order to achieve the maximum material cross-section on the belts and thus a reduction of the required electrical drive power. The control structures presented in the paper are developed and tested on a detailed mathematical model of the drive system with the rubber belt. The analyses performed indicate that an algorithm based on fuzzy logic control (FLC) which incorporates drive torque as an input variable is the proper solution. Therefore, this solution is implemented on a new variable-speed belt conveyor system with remote control at an open-pit mine. Measurements on the system prove that the applied fuzzy logic control algorithm provides minimum electrical energy consumption of the drive under the given constraints. The paper also presents an additional analytical verification of the achieved results through a method based on sequential quadratic programming for finding the minimum of a nonlinear function of multiple variables under given constraints.
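
    The verification step mentioned at the end relies on sequential quadratic programming. A minimal stand-in using SciPy's SLSQP solver is sketched below; the power cost function, throughput constraint and speed bounds are placeholders, not the paper's drive model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Placeholder cost: total electrical power as a nonlinear function of
    # two conveyor speed references (NOT the paper's actual drive model)
    def power(v):
        v1, v2 = v
        return 2.0 * v1 ** 2 + 1.5 * v2 ** 2 + 0.5 * v1 * v2

    # Constraints: combined throughput must be met; speeds within limits
    cons = [{"type": "ineq", "fun": lambda v: v[0] + v[1] - 3.0}]  # >= 0
    bounds = [(0.5, 4.0), (0.5, 4.0)]

    res = minimize(power, x0=[2.0, 2.0], method="SLSQP",
                   bounds=bounds, constraints=cons)
    print("optimal speeds:", res.x.round(3), " power:", res.fun.round(3))
    ```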

  10. Modeling of genetic algorithms with a finite population

    NARCIS (Netherlands)

    Kemenade, C.H.M. van

    1997-01-01

    Cross-competition between non-overlapping building blocks can strongly influence the performance of evolutionary algorithms. The choice of the selection scheme can have a strong influence on the performance of a genetic algorithm. This paper describes a number of different genetic algorithms, all in

  11. Modelling and genetic algorithm based optimisation of inverse supply chain

    Science.gov (United States)

    Bányai, T.

    2009-04-01

    (Recycling of household appliances with emphasis on reuse options). The purpose of this paper is to present a possible method for avoiding unnecessary environmental risk and land use caused by an unjustifiably large supply chain in the collection systems of recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to wastes of electric and electronic products), and in the second part a genetic algorithm based optimisation method is demonstrated, by the aid of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economic, ecological and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In this static model it is assumed that the technical conditions are constant. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material handling related costs. The most important factors determining the level of the environmental risk cost are the number of products recycled (treated or reused) too late, the number of supply chain objects and the length of transportation routes. The objective function is the minimization of the total cost, taking the constraints into consideration. Although a lot of research has discussed the design of supply chains [8], most of it concentrates on linear cost functions. In this model non-linear cost functions were used. The non-linear cost functions and the potentially high number of objects in the inverse supply chain led to the problem of choosing a

  12. Elastic-plastic model identification for rock surrounding an underground excavation based on immunized genetic algorithm.

    Science.gov (United States)

    Gao, Wei; Chen, Dongliang; Wang, Xu

    2016-01-01

    To compute the stability of an underground excavation, a constitutive model of the surrounding rock must be identified. Many constitutive models for rock masses have been proposed. In this model identification study, a generalized constitutive law for an elastic-plastic constitutive model is applied. Using the generalized constitutive law, the problem of model identification is transformed into a problem of parameter identification, which is a typical but complicated optimization problem. To improve on the efficiency of traditional optimization methods, an immunized genetic algorithm proposed by the authors is applied in this study. In this new algorithm, the principle of the artificial immune algorithm is combined with the genetic algorithm, improving the overall computational efficiency of model identification. Using this new model identification method, a numerical example and an engineering example are used to verify the computing ability of the algorithm. The results show that this new model identification algorithm can significantly improve computational efficiency and the quality of the results.

  13. A hidden Markov model-based algorithm for identifying tumour subtype using array CGH data

    Directory of Open Access Journals (Sweden)

    Zhang Ke

    2011-12-01

    Full Text Available Abstract Background The recent advancement in array CGH (aCGH) research has significantly improved tumor identification using DNA copy number data. A number of unsupervised learning methods have been proposed for clustering aCGH samples. Two of the major challenges for developing aCGH sample clustering are the high spatial correlation between aCGH markers and the low computing efficiency. A mixture hidden Markov model based algorithm was developed to address these two challenges. Results The hidden Markov model (HMM) was used to model the spatial correlation between aCGH markers. A fast clustering algorithm was implemented and real data analysis on glioma aCGH data has shown that it converges to the optimal cluster rapidly and the computation time is proportional to the sample size. Simulation results showed that this HMM based clustering (HMMC) method has a substantially lower error rate than NMF clustering. The HMMC results for glioma data were significantly associated with clinical outcomes. Conclusions We have developed a fast clustering algorithm to identify tumor subtypes based on DNA copy number aberrations. The performance of the proposed HMMC method has been evaluated using both simulated and real aCGH data. The software for HMMC in both R and C++ is available on the ND INBRE website http://ndinbre.org/programs/bioinformatics.php.

  14. Nonlinear Random Effects Mixture Models: Maximum Likelihood Estimation via the EM Algorithm.

    Science.gov (United States)

    Wang, Xiaoning; Schumitzky, Alan; D'Argenio, David Z

    2007-08-15

    Nonlinear random effects models with finite mixture structures are used to identify polymorphism in pharmacokinetic/pharmacodynamic phenotypes. An EM algorithm for maximum likelihood estimation is developed; it uses sampling-based methods to implement the expectation step, which results in an analytically tractable maximization step. A benefit of the approach is that no model linearization is performed and the estimation precision can be arbitrarily controlled by the sampling process. A detailed simulation study illustrates the feasibility of the estimation approach and evaluates its performance. Applications of the proposed nonlinear random effects mixture model approach to other population pharmacokinetic/pharmacodynamic problems will be of interest for future investigation.

  15. Product Development Process Modeling

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    The use of Concurrent Engineering and other modern methods of product development and maintenance requires that a large number of time-overlapped "processes" be performed by many people. However, successfully describing and optimizing these processes is becoming ever more difficult. The perspective of industrial process theory (the definition of process) and the perspective of process implementation (process transition, accumulation, and inter-operations between processes) are used to survey the method used to build a multi-view base process model.

  16. Physics Based Model for Cryogenic Chilldown and Loading. Part I: Algorithm

    Science.gov (United States)

    Luchinsky, Dmitry G.; Smelyanskiy, Vadim N.; Brown, Barbara

    2014-01-01

    We report progress in the development of a physics-based model for cryogenic chilldown and loading. Chilldown and loading are modeled as a fully separated, non-equilibrium two-phase flow of cryogenic fluid thermally coupled to the pipe walls. The solution closely follows the nearly-implicit and semi-implicit algorithms developed by Idaho National Laboratory for the autonomous control of thermal-hydraulic systems. Special attention is paid to the treatment of instabilities. The model is applied to the analysis of chilldown in the rapid loading system developed at NASA Kennedy Space Center. A nontrivial characteristic feature of the analyzed chilldown regime is its active control by dump valves. The numerical predictions are in reasonable agreement with the experimental time traces. The obtained results pave the way to the development of autonomous loading operations on the ground and in space.

  17. The Loop-Cluster Algorithm for the Case of the 6 Vertex Model

    CERN Document Server

    Evertz, H G

    1993-01-01

    We present the loop algorithm, a new type of cluster algorithm that we recently introduced for the F model. Using the framework of Kandel and Domany, we show how to generalize the algorithm to the arrow-flip symmetric 6 vertex model. We propose the principle of least possible freezing as the guide to choosing the values of free parameters in the algorithm. Finally, we briefly discuss the application of our algorithm to simulations of quantum spin systems. In particular, all necessary information is provided for the simulation of spin-$\frac{1}{2}$ Heisenberg and $XXZ$ models.

  18. A memory-efficient staining algorithm in 3D seismic modelling and imaging

    Science.gov (United States)

    Jia, Xiaofeng; Yang, Lu

    2017-08-01

    The staining algorithm has been proven to generate high signal-to-noise ratio (S/N) images in poorly illuminated areas in two-dimensional cases. In the staining algorithm, the stained wavefield relevant to the target area and the regular source wavefield forward propagate synchronously. Cross-correlating these two wavefields with the backward propagated receiver wavefield separately, we obtain two images: the local image of the target area and the conventional reverse time migration (RTM) image. This imaging process costs massive computer memory for wavefield storage, especially in large scale three-dimensional cases. To make the staining algorithm applicable to three-dimensional RTM, we develop a method to implement the staining algorithm in three-dimensional acoustic modelling in a standard staggered grid finite difference (FD) scheme. The implementation is adaptive to the order of spatial accuracy of the FD operator. The method can be applied to elastic, electromagnetic, and other wave equations. Taking the memory requirement into account, we adopt a random boundary condition (RBC) to backward extrapolate the receiver wavefield and reconstruct it by reverse propagation using the final wavefield snapshot only. Meanwhile, we forward simulate the stained wavefield and source wavefield simultaneously using the nearly perfectly matched layer (NPML) boundary condition. Experiments on a complex geologic model indicate that the RBC-NPML collaborative strategy not only minimizes the memory consumption but also guarantees high quality imaging results. We apply the staining algorithm to three-dimensional RTM via the proposed strategy. Numerical results show that our staining algorithm can produce high S/N images in the target areas with other structures effectively muted.

  19. Calibration of Uncertainty Analysis of the SWAT Model Using Genetic Algorithms and Bayesian Model Averaging

    Science.gov (United States)

    In this paper, the Genetic Algorithms (GA) and Bayesian model averaging (BMA) were combined to simultaneously conduct calibration and uncertainty analysis for the Soil and Water Assessment Tool (SWAT). In this hybrid method, several SWAT models with different structures are first selected; next GA i...

  20. Model Versions and Fast Algorithms for Network Epidemiology

    Institute of Scientific and Technical Information of China (English)

    Petter Holme

    2014-01-01

    Network epidemiology has become a core framework for investigating the role of human contact patterns in the spreading of infectious diseases. In network epidemiology, one represents the contact structure as a network of nodes (individuals) connected by links (sometimes as a temporal network where the links are not continuously active) and the disease as a compartmental model (where individuals are assigned states with respect to the disease and follow certain transition rules between the states). In this paper, we discuss fast algorithms for such simulations and also compare two commonly used versions: one where there is a constant recovery rate (the number of individuals who stop being infectious per unit time is proportional to the number of infectious individuals); the other where the duration of the disease is constant. The results show that, for most practical purposes, these versions are qualitatively the same.
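
    The two recovery conventions are easy to contrast in code. The sketch below runs a simple discrete-time SIR process on a random contact structure under both conventions; the network, rates and time step are invented, and a production simulator would use the faster event-driven algorithms the paper discusses.

    ```python
    import numpy as np

    def sir_on_network(adj, beta, recovery, T=100, seed=0):
        """Discrete-time SIR on a contact network. `recovery` is either
        ("rate", mu): recover each step with probability mu, or
        ("fixed", d): recover exactly d steps after infection."""
        rng = np.random.default_rng(seed)
        n = len(adj)
        state = np.zeros(n, dtype=int)        # 0=S, 1=I, 2=R
        t_inf = np.full(n, -1)
        state[rng.integers(n)] = 1; t_inf[state == 1] = 0
        for t in range(1, T):
            infectious = np.flatnonzero(state == 1)
            for i in infectious:              # transmission along links
                for j in adj[i]:
                    if state[j] == 0 and rng.random() < beta:
                        state[j] = 1; t_inf[j] = t
            if recovery[0] == "rate":
                rec = infectious[rng.random(infectious.size) < recovery[1]]
            else:
                rec = infectious[t - t_inf[infectious] >= recovery[1]]
            state[rec] = 2
            if not (state == 1).any():
                break
        return np.bincount(state, minlength=3)    # final S, I, R counts

    # Crude random directed graph standing in for the contact structure
    rng = np.random.default_rng(42)
    n = 200
    adj = [list(np.flatnonzero(rng.random(n) < 0.03)) for _ in range(n)]
    print("constant rate :", sir_on_network(adj, 0.3, ("rate", 0.25)))
    print("fixed duration:", sir_on_network(adj, 0.3, ("fixed", 4)))
    ```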

  1. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    Directory of Open Access Journals (Sweden)

    J. Frydendall

    2009-08-01

    Full Text Available A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark. In this paper, the algorithm and the results from experiments designed to find its optimal setup are described. The algorithm has been developed and optimized via eight experiments in which the results from different model setups were tested against measurements from the EMEP (European Monitoring and Evaluation Programme) network covering a half-year period, April-September 1999. The best-performing setup of the data assimilation algorithm for surface ozone concentrations was found to combine determining the covariances using the Hollingsworth method, varying the correlation length according to the number of adjacent observation stations, and applying the assimilation routine at three successive hours during the morning. Improvements in the correlation coefficient in the range of 0.1 to 0.21 were found between the reference results and those from the optimal configuration of the data assimilation algorithm. The data assimilation algorithm will in the future be used in the operational THOR integrated air pollution forecast system, which includes the DEOM.
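
    A statistical-interpolation analysis step of the kind described above can be sketched compactly as x_a = x_b + BHᵀ(HBHᵀ + R)⁻¹(y − Hx_b). The toy grid, covariances and observations below are invented; the paper's setup additionally estimates the covariances with the Hollingsworth method.

    ```python
    import numpy as np

    def optimal_interpolation(x_b, y, H, B, R):
        """Statistical-interpolation analysis step:
        x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b)"""
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return x_b + K @ (y - H @ x_b)

    # Toy 1-D "ozone field" on 10 grid points, observed at 3 stations
    n = 10
    x_b = np.full(n, 40.0)                       # background [ppb]
    H = np.zeros((3, n)); H[0, 1] = H[1, 4] = H[2, 8] = 1.0
    y = np.array([48.0, 35.0, 42.0])             # station observations

    # Gaussian background-error covariance with an assumed correlation length
    i = np.arange(n)
    B = 16.0 * np.exp(-((i[:, None] - i[None, :]) ** 2) / (2 * 2.0 ** 2))
    R = 4.0 * np.eye(3)                          # observation-error covariance
    print(optimal_interpolation(x_b, y, H, B, R).round(2))
    ```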

  2. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    Directory of Open Access Journals (Sweden)

    Gonglin Yuan

    Full Text Available Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. Both methods possess some good properties: (1) β_k ≥ 0; (2) the search direction has the trust-region property without the use of any line search; (3) the search direction has the sufficient descent property without the use of any line search. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
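
    For reference, a standard PRP conjugate gradient iteration with the clamp β_k = max(0, β_PRP) is sketched below. This is not the paper's method: the sketch uses a simple Armijo backtracking step for concreteness, whereas the proposed algorithms achieve descent and trust-region properties without any line search.

    ```python
    import numpy as np

    def prp_cg(f, grad, x0, tol=1e-8, max_iter=500):
        """Polak-Ribiere-Polyak conjugate gradient with the clamp
        beta_k = max(0, beta_PRP), matching property (1) above."""
        x, g = np.asarray(x0, dtype=float), grad(x0)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            if g @ d >= 0:        # safeguard: restart with steepest descent
                d = -g
            t = 1.0               # Armijo backtracking step length
            while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
                t *= 0.5
            x_new = x + t * d
            g_new = grad(x_new)
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PRP+ beta
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    # Small smooth test problem
    f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2 + x[0] ** 4
    grad = lambda x: np.array([2.0 * (x[0] - 1.0) + 4.0 * x[0] ** 3,
                               20.0 * (x[1] + 2.0)])
    print(prp_cg(f, grad, np.array([5.0, 5.0])).round(4))
    ```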

  3. Integrated Computational Model Development

    Science.gov (United States)

    2014-03-01

    The alloy density and Vickers microhardness were ρ = 8.23 ± 0.01 g/cm3 and Hv = 5288 ± 1 MPa. Techniques to mechanically test materials at smaller scales were developed to better inform the deformation models. An in situ microscale tension testing technique was adapted to enable microscale fatigue testing on tensile dog-bone specimens.

  4. Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

    NARCIS (Netherlands)

    Jansen, R.C.

    A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical

  5. An adaptive turbo-shaft engine modeling method based on PS and MRR-LSSVR algorithms

    Institute of Scientific and Technical Information of China (English)

    Wang Jiankang; Zhang Haibo; Yan Changkai; Duan Shujing; Huang Xianghua

    2013-01-01

    In order to establish an adaptive turbo-shaft engine model with high accuracy, a new modeling method based on a parameter selection (PS) algorithm and a multi-input multi-output recursive reduced least squares support vector regression (MRR-LSSVR) machine is proposed. Firstly, the PS algorithm is designed to choose the most reasonable inputs of the adaptive module. During this process, a wrapper criterion based on a least squares support vector regression (LSSVR) machine is adopted, which can not only reduce computational complexity but also enhance generalization performance. Secondly, with the input variables determined by the PS algorithm, a mapping model of engine parameter estimation is trained off-line using MRR-LSSVR, which has a satisfying accuracy within 5‰. Finally, based on a numerical simulation platform of an integrated helicopter/turbo-shaft engine system, an adaptive turbo-shaft engine model is developed and tested in a certain flight envelope. Under the condition of single or multiple engine components being degraded, many simulation experiments are carried out, and the simulation results show the effectiveness and validity of the proposed adaptive modeling method.

  6. Algorithms for extraction of structural attitudes from 3D outcrop models

    Science.gov (United States)

    Duelis Viana, Camila; Endlein, Arthur; Ademar da Cruz Campanha, Ginaldo; Henrique Grohmann, Carlos

    2016-05-01

    The acquisition of geological attitudes on rock cuts using a traditional field compass survey can be a time-consuming, dangerous, or even impossible task depending on the conditions and location of the outcrops. The importance of this type of data in rock-mass classification and structural geology has led to the development of new techniques, in which photogrammetric 3D digital models have seen increasing use. In this paper we present two algorithms for the extraction of attitudes of geological discontinuities from virtual outcrop models: ply2atti and scanline, implemented in the Python programming language. The ply2atti algorithm allows the virtual sampling of planar discontinuities appearing on the 3D model as individually exposed surfaces, while the scanline algorithm allows the sampling of discontinuities (surfaces and traces) along a virtual scanline. Application to digital models of a simplified test setup and a rock cut demonstrated a good correlation between surveys undertaken using traditional field compass readings and virtual sampling on 3D digital models.
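
    The core geometric step in this kind of virtual sampling, fitting a plane to the points of an exposed surface and converting its normal into an attitude, can be sketched as below. The axis convention (x east, y north, z up) and the synthetic point cloud are assumptions, and the real ply2atti algorithm includes segmentation and filtering steps omitted here.

    ```python
    import numpy as np

    def plane_attitude(points):
        """Fit a plane to 3-D points (least squares via SVD) and return
        (dip direction, dip) in degrees. Convention assumed: x=east,
        y=north, z=up; dip direction measured clockwise from north."""
        centered = points - points.mean(axis=0)
        # Normal = right singular vector with the smallest singular value
        normal = np.linalg.svd(centered)[2][-1]
        if normal[2] < 0:                 # force upward-pointing normal
            normal = -normal
        dip = np.degrees(np.arccos(normal[2]))
        dip_dir = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
        return dip_dir, dip

    # Synthetic "exposed surface": plane dipping 30 deg toward azimuth 090
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1, 1, (200, 2))
    z = -np.tan(np.radians(30.0)) * xy[:, 0] + 0.01 * rng.standard_normal(200)
    pts = np.column_stack([xy, z])
    print(plane_attitude(pts))   # expected roughly (90.0, 30.0)
    ```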

  7. Developing a Model Component

    Science.gov (United States)

    Fields, Christina M.

    2013-01-01

    The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) was a Space Shuttle Orbiter support piece of the Ground Servicing Equipment (GSE). The initial purpose of the UCTS was to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The UCTS is designed with the capability of servicing future space vehicles, including all Space Station requirements necessary for the MPLM Modules. The Simulation uses GSE models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at Kennedy Space Center (KSC), my assignment was to develop a model component for the UCTS. I was given a fluid component (dryer) to model in Simulink, and completed training for UNIX and Simulink. The dryer is a Catch All replaceable-core type filter-dryer. The filter-dryer provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system, and also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. The filter-dryer was modeled by determining the effects it has on the pressure and velocity of the system. I used Bernoulli's equation to calculate the pressure and velocity differential through the dryer. I created my filter-dryer model in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements. I participated in Simulation meetings, was involved in the subsystem design process and team collaborations, and gained valuable work experience and insight into a career path as an engineer.

  8. Evaluation of kinetic models for industrial acetic fermentation: proposal of a new model optimized by genetic algorithms.

    Science.gov (United States)

    González-Sáiz, José M; Pizarro, Consuelo; Garrido-Vidal, Diego

    2003-01-01

    The most important kinetic models developed for acetic fermentation were evaluated to study their ability to explain the behavior of the industrial acetification process. Each model was introduced into a simulation environment capable of replicating the conditions of the industrial plant. In this paper, it is shown, by comparing the simulation results with an average sequence calculated from the industrial data, that these models are not suitable for predicting the evolution of the industrial fermentation. Therefore, a new kinetic model for the industrial acetic fermentation was developed. The kinetic parameters of the model were optimized by a specifically designed genetic algorithm; only the representative sequence of industrial concentrations of acetic acid was required. The main novelty of the algorithm is the four-component desirability function that serves as the response to maximize. The new model is capable of explaining the behavior of the industrial process, and its predictive ability has been compared with that of the other models studied.

  9. An implementation of continuous genetic algorithm in parameter estimation of predator-prey model

    Science.gov (United States)

    Windarto

    2016-03-01

    A genetic algorithm is an optimization method based on the principles of genetics and natural selection in living organisms. The main components of this algorithm are the chromosome (individual) population, parent selection, crossover to produce new offspring, and random mutation. In this paper, a continuous genetic algorithm was implemented to estimate parameters in a predator-prey model of Lotka-Volterra type. For simplicity, all genetic algorithm parameters (selection rate and mutation rate) are set to be constant throughout the run. It was found that, by selecting a suitable mutation rate, the algorithm can estimate these parameters well.
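
    A sketch of the estimation setup is given below. For brevity it uses SciPy's differential_evolution, a readily available real-coded evolutionary optimizer, in place of the paper's continuous genetic algorithm; the "observations" are synthesized from known parameters rather than taken from real data.

    ```python
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import differential_evolution

    def lotka_volterra(state, t, a, b, c, d):
        """Predator-prey model: x' = a*x - b*x*y, y' = -c*y + d*x*y."""
        x, y = state
        return [a * x - b * x * y, -c * y + d * x * y]

    # Synthetic "observations" from known parameters (a, b, c, d)
    true = (1.0, 0.5, 0.75, 0.25)
    t = np.linspace(0, 15, 60)
    obs = odeint(lotka_volterra, [4.0, 2.0], t, args=true)
    obs += 0.05 * np.random.default_rng(0).standard_normal(obs.shape)

    def sse(params):
        """Fitness: sum of squared errors between simulation and data."""
        sim = odeint(lotka_volterra, [4.0, 2.0], t, args=tuple(params))
        return np.sum((sim - obs) ** 2)

    result = differential_evolution(sse, bounds=[(0.1, 2.0)] * 4,
                                    seed=0, maxiter=100)
    print("estimated:", result.x.round(3), " true:", true)
    ```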

  10. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    Science.gov (United States)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.

  11. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case

  12. Developing predictive models for return to work using the Military Power, Performance and Prevention (MP3) musculoskeletal injury risk algorithm: a study protocol for an injury risk assessment programme.

    Science.gov (United States)

    Rhon, Daniel I; Teyhen, Deydre S; Shaffer, Scott W; Goffar, Stephen L; Kiesel, Kyle; Plisky, Phil P

    2016-11-24

    Musculoskeletal injuries are a primary source of disability in the US Military, and low back pain and lower extremity injuries account for over 44% of limited work days annually. A history of prior musculoskeletal injury increases the risk for future injury. This study aims to determine the risk of injury after returning to work from a previous injury. The objective is to identify criteria that can help predict the likelihood of future injury or re-injury. A total of 480 active duty soldiers will be recruited from across four medical centres. These will be patients who have sustained a musculoskeletal injury in the lower extremity or lumbar/thoracic spine, and have been cleared to return to work without any limitations. Subjects will undergo a battery of physical performance tests and fill out sociodemographic surveys. They will be followed for a year to identify any musculoskeletal injuries that occur. Prediction algorithms will be derived using regression analysis from performance and sociodemographic variables found to be significantly different between injured and non-injured subjects. Due to the high rates of injury, injury prevention and prediction initiatives are growing. This is the first study looking at predicting re-injury rates after an initial musculoskeletal injury. In addition, multivariate prediction models appear to have more value than models based on only one variable. This approach aims to validate, in individuals recovering from a recent musculoskeletal injury, a multivariate model originally developed in healthy non-injured individuals, refining the variables that best predict the ability to return to work with a lower risk of injury. NCT02776930. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  13. An algorithm for continuum modeling of rocks with multiple embedded nonlinearly-compliant joints

    Science.gov (United States)

    Hurley, R. C.; Vorobiev, O. Y.; Ezzedine, S. M.

    2017-08-01

    We present a numerical method for modeling the mechanical effects of nonlinearly-compliant joints in elasto-plastic media. The method uses a series of strain-rate and stress update algorithms to determine joint closure, slip, and solid stress within computational cells containing multiple "embedded" joints. This work facilitates efficient modeling of nonlinear wave propagation in large spatial domains containing a large number of joints that affect bulk mechanical properties. We implement the method within the massively parallel Lagrangian code GEODYN-L and provide verification and examples. We highlight the ability of our algorithms to capture joint interactions and multiple weakness planes within individual computational cells, as well as its computational efficiency. We also discuss the motivation for developing the proposed technique: to simulate large-scale wave propagation during the Source Physics Experiments (SPE), a series of underground explosions conducted at the Nevada National Security Site (NNSS).

  14. A robust model predictive control algorithm for uncertain nonlinear systems that guarantees resolvability

    Science.gov (United States)

    Acikmese, Ahmet Behcet; Carson, John M., III

    2006-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees resolvability. With resolvability, initial feasibility of the finite-horizon optimal control problem implies future feasibility in a receding-horizon framework. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. The feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of the initial states for which the finite-horizon optimal control problem is feasible. The controller design is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.

  15. Study about Interpretation Models and Algorithm of Water-Flooded Formation Based on Resistivity

    Institute of Scientific and Technical Information of China (English)

    WANG Yinghui; TAN Dehui; WANG Qiongfang; CAI Hongjie

    2005-01-01

    Many oil fields around the world are developed by water injection, and such fields are difficult to interpret from well-logging information. EPT and C/O logging can identify residual oil saturation or movable oil, but they are only suitable for reservoirs with porosity over 20% and are limited by borehole conditions. Additionally, the Archie model fits static rather than dynamic reservoirs. It is therefore especially difficult to interpret water-flooded (WF) oil zones (dynamic reservoirs) with low porosity and low permeability (LPP). Resistivity logging series are the dominant tools for WF formations, so it is important to research new interpretation models and algorithms based on resistivity well-logging for WF oil zones with LPP. A set of new interpretation models for the water-flooded zone (WFZ) is established according to the "U"-type curve from experiments as well as mathematical analysis. The well-known Archie model is just one special case of these new models. Importantly, these new models are applicable from the exploration stage through the development stage of an oil field. Finally, the algorithmic procedure and application results of these models are described.

  16. Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm

    Science.gov (United States)

    Wang, Qimei; Yang, Zhihong; Wang, Yong

    In this paper, an improved approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is applied alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.

  17. Modelling and control algorithms of the cross conveyors line with multiengine variable speed drives

    Science.gov (United States)

    Cheremushkina, M. S.; Baburin, S. V.

    2017-02-01

    The paper deals with the problem of developing a control algorithm that meets the technical requirements of mine belt conveyors and enables energy and resource savings, taking into account randomly varying traffic. The most effective way to solve these tasks is to construct control systems using variable-speed drives for asynchronous motors. The authors designed a mathematical model of the system ‘variable-speed multi-engine drive – conveyor – conveyor control system’ that takes into account the dynamic processes occurring in the elements of the transport system and provides an assessment of the energy efficiency of the developed algorithms, which allow the dynamic overload in the belt to be reduced to 15-20%.

  18. A Cost-Effective Tracking Algorithm for Hypersonic Glide Vehicle Maneuver Based on Modified Aerodynamic Model

    Directory of Open Access Journals (Sweden)

    Yu Fan

    2016-10-01

    Full Text Available In order to defend against the hypersonic glide vehicle (HGV), a cost-effective single-model tracking algorithm using the Cubature Kalman filter (CKF) is proposed in this paper, based on a modified aerodynamic model (MAM) as the process equation and a radar measurement model as the measurement equation. In the existing aerodynamic model, the two control variables, attack angle and bank angle, cannot be measured by existing radar equipment, and their control laws are not known to defenders. To establish the process equation, the MAM for HGV tracking is proposed by using additive white noise to model the rates of change of the two control variables. For ease of comparison, several multiple-model algorithms based on the CKF are presented, including the interacting multiple model (IMM) algorithm, the adaptive grid interacting multiple model (AGIMM) algorithm and the hybrid grid multiple model (HGMM) algorithm. The performances of these algorithms are compared and analyzed according to the simulation results. The simulation results indicate that the proposed tracking algorithm based on the modified aerodynamic model has the best tracking performance, with the best accuracy and least computational cost among all the tracking algorithms in this paper. The proposed algorithm is cost-effective for HGV tracking.
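
    The CKF machinery underlying all of these trackers starts from the same point set. The sketch below generates the 2n equally weighted cubature points and runs one time update through a placeholder dynamics function; the modified aerodynamic model itself is not reproduced here.

    ```python
    import numpy as np

    def cubature_points(mean, cov):
        """Generate the 2n cubature points of the CKF:
        X_i = mean + sqrt(n) * S * (+/- e_i), with S the Cholesky factor
        of cov and e_i the unit coordinate vectors (equal weights 1/(2n))."""
        n = mean.size
        S = np.linalg.cholesky(cov)
        units = np.sqrt(n) * np.concatenate([np.eye(n), -np.eye(n)])
        return mean + units @ S.T                 # shape (2n, n)

    def ckf_predict(mean, cov, f, Q):
        """CKF time update: push points through dynamics f, re-estimate
        the predicted mean and covariance, then add process noise Q."""
        pts = cubature_points(mean, cov)
        prop = np.array([f(p) for p in pts])
        m = prop.mean(axis=0)
        d = prop - m
        P = d.T @ d / pts.shape[0] + Q
        return m, P

    # Placeholder dynamics (NOT the paper's modified aerodynamic model)
    f = lambda x: np.array([x[0] + 0.1 * x[1],
                            0.99 * x[1] + 0.01 * x[0] ** 2])
    m, P = ckf_predict(np.array([1.0, 2.0]), np.eye(2), f, 0.01 * np.eye(2))
    print(m.round(4), "\n", P.round(4))
    ```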

  19. Melanoma prognostic model using tissue microarrays and genetic algorithms.

    Science.gov (United States)

    Gould Rothberg, Bonnie E; Berger, Aaron J; Molinaro, Annette M; Subtil, Antonio; Krauthammer, Michael O; Camp, Robert L; Bradley, William R; Ariyan, Stephan; Kluger, Harriet M; Rimm, David L

    2009-12-01

    As a result of the questionable risk-to-benefit ratio of adjuvant therapies, stage II melanoma is currently managed by observation because available clinicopathologic parameters cannot identify the 20% to 60% of such patients likely to develop metastatic disease. Here, we propose a multimarker molecular prognostic assay that can help triage patients at increased risk of recurrence. Protein expression for 38 candidates relevant to melanoma oncogenesis was evaluated using the automated quantitative analysis (AQUA) method for immunofluorescence-based immunohistochemistry in formalin-fixed, paraffin-embedded specimens from a cohort of 192 primary melanomas collected during 1959 to 1994. The prognostic assay was built using a genetic algorithm and validated on an independent cohort of 246 serial primary melanomas collected from 1997 to 2004. Multiple iterations of the genetic algorithm yielded a consistent five-marker solution. A favorable prognosis was predicted by ATF2 ln(non-nuclear/nuclear AQUA score ratio) of more than -0.052, p21(WAF1) nuclear compartment AQUA score of more than 12.98, p16(INK4A) ln(non-nuclear/nuclear AQUA score ratio) of < or = -0.083, beta-catenin total AQUA score of more than 38.68, and fibronectin total AQUA score of < or = 57.93. Primary tumors that met at least four of these five conditions were considered a low-risk group, and those that met three or fewer conditions formed a high-risk group (log-rank P < .0001). Multivariable proportional hazards analysis adjusting for clinicopathologic parameters shows that the high-risk group has significantly reduced survival on both the discovery (hazard ratio = 2.84; 95% CI, 1.46 to 5.49; P = .002) and validation (hazard ratio = 2.72; 95% CI, 1.12 to 6.58; P = .027) cohorts. This multimarker prognostic assay, an independent determinant of melanoma survival, might be beneficial in improving the selection of stage II patients for adjuvant therapy.

  20. Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging.

    Science.gov (United States)

    Liu, Liqiang; Dai, Yuntao; Gao, Jinyu

    2014-01-01

    Ant colony optimization algorithm for continuous domains is a major research direction for the ant colony optimization algorithm. In this paper, we propose a position distribution model of ant colony foraging, through analysis of the relationship between the position distribution and the food source in the process of ant colony foraging. We design a continuous domain optimization algorithm based on the model and give the form of solution for the algorithm, the distribution model of the pheromone, the update rules for ant colony positions, and the processing method for constraint conditions. The performance of the algorithm was tested on a set of unconstrained optimization test functions and a set of constrained optimization test functions; the test results were compared with those of other algorithms and analyzed to verify the correctness and effectiveness of the proposed algorithm.

  1. Ant Colony Optimization Algorithm for Continuous Domains Based on Position Distribution Model of Ant Colony Foraging

    Directory of Open Access Journals (Sweden)

    Liqiang Liu

    2014-01-01

    Full Text Available Ant colony optimization algorithm for continuous domains is a major research direction for the ant colony optimization algorithm. In this paper, we propose a position distribution model of ant colony foraging, through analysis of the relationship between the position distribution and the food source in the process of ant colony foraging. We design a continuous domain optimization algorithm based on the model and give the form of solution for the algorithm, the distribution model of the pheromone, the update rules for ant colony positions, and the processing method for constraint conditions. The performance of the algorithm was tested on a set of unconstrained optimization test functions and a set of constrained optimization test functions; the test results were compared with those of other algorithms and analyzed to verify the correctness and effectiveness of the proposed algorithm.
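
    The record does not reproduce the paper's position distribution model, but the general archive-based pattern for continuous-domain ant colony optimization (Gaussian sampling around good solutions, which act as pheromone) can be sketched as follows; parameter names and defaults are illustrative.

```python
import numpy as np

def continuous_aco(obj, bounds, n_ants=20, archive=10, iters=200, seed=0):
    """Minimize obj over a box; new ants sample Gaussians centred on
    archived good solutions (the 'pheromone' distribution)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    sols = rng.uniform(lo, hi, (archive, lo.size))
    fit = np.array([obj(s) for s in sols])
    for _ in range(iters):
        sigma = sols.std(axis=0) + 1e-12           # spread of the archive
        guides = sols[rng.integers(0, archive, n_ants)]
        ants = np.clip(rng.normal(guides, sigma), lo, hi)  # box constraints
        afit = np.array([obj(a) for a in ants])
        pool = np.vstack([sols, ants])
        pfit = np.concatenate([fit, afit])
        keep = np.argsort(pfit)[:archive]          # keep the best
        sols, fit = pool[keep], pfit[keep]
    return sols[0], fit[0]

# e.g. continuous_aco(lambda x: float((x ** 2).sum()), [(-5, 5), (-5, 5)])
```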

  2. An Iterative Algorithm to Build Chinese Language Models

    CERN Document Server

    Luo, X; Luo, Xiaoqiang; Roukos, Salim

    1996-01-01

    We present an iterative procedure to build a Chinese language model (LM). We segment Chinese text into words based on a word-based Chinese language model. However, the construction of a Chinese LM itself requires word boundaries. To get out of this chicken-and-egg problem, we propose an iterative procedure that alternates two operations: segmenting text into words and building an LM. Starting with an initial segmented corpus and an LM based upon it, we use a Viterbi-like algorithm to segment another set of data. Then we build an LM based on the second set and use the resulting LM to segment the first corpus again. The alternating procedure provides a self-organized way for the segmenter to automatically detect unseen words and correct segmentation errors. Our preliminary experiment shows that the alternating procedure not only improves the accuracy of our segmentation, but also discovers unseen words surprisingly well. The resulting word-based LM has a perplexity of 188 for a general Chinese corpus.
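
    The segmentation half of the alternating loop is a standard dynamic program. A minimal sketch under a unigram word model follows; the paper's language model and unseen-word handling are richer than this.

```python
import math

def segment(text, logprob, max_len=4):
    """Best segmentation of `text` under a unigram LM; logprob(word)
    returns log P(word) (a penalty value for unseen words)."""
    n = len(text)
    best = [0.0] + [-math.inf] * n      # best[i]: score of text[:i]
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            score = best[j] + logprob(text[j:i])
            if score > best[i]:
                best[i], back[i] = score, j
    words, i = [], n
    while i > 0:                         # recover the best split
        words.append(text[back[i]:i])
        i = back[i]
    return words[::-1]
```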

  3. Spatial optimum collocation model of urban land and its algorithm

    Science.gov (United States)

    Kong, Xiangqiang; Li, Xinyun

    2007-06-01

    Optimizing the allocation of urban land means laying out and positioning the various types of land use in space so as to maximize the overall benefits of urban space (economic, social and environmental) using appropriate methods and techniques. Two problems need to be dealt with when optimizing the allocation of urban land: one is the quantitative structure, the other is the spatial structure. To address these problems, and following the principle of spatial coordination, a new optimum collocation model for urban land is put forward in this paper. In the model, we give a target function and a set of "soft" constraint conditions, and the area proportions of the various types of land use are restricted to their corresponding allowed ranges. A spatial genetic algorithm, in which the three basic operations of reproduction, crossover and mutation all operate on the space, is used to manipulate urban land in space so that the optimum spatial collocation scheme can be approached gradually. Taking the built-up areas of Jinan as an example, we carried out a spatial optimum collocation experiment for urban land; the spatial aggregation of the various land-use types was good, and satisfactory results were obtained.

  4. Application of ANN and fuzzy logic algorithms for streamflow modelling of Savitri catchment

    Indian Academy of Sciences (India)

    Mahesh Kothari; K D Gharde

    2015-07-01

    Streamflow prediction is an essential aspect of any watershed modelling. Black box models (soft computing techniques) have proven to be an efficient alternative to physical (traditional) methods for simulating the streamflow and sediment yield of catchments. The present study focusses on the development of models using ANN and fuzzy logic (FL) algorithms for predicting the streamflow of the Savitri River Basin catchment. The input vectors to these models were daily rainfall, mean daily evaporation, mean daily temperature and lagged streamflow. In the present study, 20 years (1992–2011) of rainfall and other hydrological data were considered, of which 13 years (1992–2004) were used for training and the remaining 7 years (2005–2011) for validation of the models. Model performance was evaluated using the R, RMSE, EV, CE and MAD statistical parameters. It was found that ANN model performance improved with an increasing number of input vectors. The fuzzy logic models predicted the streamflow better with rainfall as a single input than with multiple input vectors. Comparing the ANN and FL algorithms for streamflow prediction, the ANN model performance is clearly superior.

  5. A Cluster Algorithm for the 2-D SU(3) × SU(3) Chiral Model

    Science.gov (United States)

    Ji, Da-ren; Zhang, Jian-bo

    1996-07-01

    To extend the cluster algorithm to SU(N) × SU(N) chiral models, a variant version of Wolff's cluster algorithm is proposed and tested for the 2-dimensional SU(3) × SU(3) chiral model. The results show that the new method can reduce the critical slowing down in the SU(3) × SU(3) chiral model.

  6. Advanced Models and Algorithms for Self-Similar IP Network Traffic Simulation and Performance Analysis

    Science.gov (United States)

    Radev, Dimitar; Lokshina, Izabella

    2010-11-01

    The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, and efficient algorithms that are used to simulate the self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.
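
    A standard diagnostic for the self-similarity discussed here is the Hurst exponent; a minimal aggregated-variance estimate is sketched below (the paper's generator models are not reproduced).

```python
import numpy as np

def hurst_aggregated_variance(x, scales=(1, 2, 4, 8, 16, 32)):
    """Estimate H from Var(X^(m)) ~ m^(2H - 2), where X^(m) is the
    series aggregated over non-overlapping blocks of size m."""
    x = np.asarray(x, dtype=float)
    variances = []
    for m in scales:
        n = len(x) // m
        agg = x[:n * m].reshape(n, m).mean(axis=1)   # block means
        variances.append(agg.var())
    slope = np.polyfit(np.log(scales), np.log(variances), 1)[0]
    return 1.0 + slope / 2.0   # H ~ 0.5 for Poisson-like, > 0.5 self-similar
```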

  7. Model and algorithm for optimization of rescue center location of emergent catastrophe

    Institute of Scientific and Technical Information of China (English)

    WANG Ding-wei; ZHANG Guo-xiang

    2006-01-01

    The location of rescue centers is a key problem in optimal resource allocation and logistics in emergency response. We propose a mathematical model for rescue center location that takes into account the emergency occurrence probability, a catastrophe diffusion function and a rescue function. Because the catastrophe diffusion and rescue functions are both nonlinear and time-varying, the model cannot be solved by common mathematical programming methods. We therefore develop a heuristic embedded genetic algorithm for solving this special model. Computation on a large number of examples with practical data has shown satisfactory results.

  8. A Developed Algorithm of Apriori Based on Association Analysis

    Institute of Scientific and Technical Information of China (English)

    LI Pingxiang; CHEN Jiangping; BIAN Fuling

    2004-01-01

    A method for mining frequent itemsets by evaluating their probability of support based on association analysis is presented. The probability of every 1-itemset is obtained by scanning the database; the probabilities of every 2-itemset, every 3-itemset and, in general, every k-itemset are then evaluated from the frequent 1-itemsets, yielding all the candidate frequent itemsets. The database is then scanned to verify the support of the candidate frequent itemsets, and finally the frequent itemsets are mined. The method greatly reduces the time spent scanning the database and shortens the computation time of the algorithm.
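
    A minimal sketch of the idea as described: estimate a candidate k-itemset's support as the product of its 1-item probabilities (an independence assumption), prune, then verify survivors with one scan per level. Correlated items can be under-estimated by this product, so a looser estimation threshold may be preferable in practice.

```python
import math
from itertools import combinations

def probable_frequent_itemsets(transactions, min_support):
    """Frequent-itemset mining with probability-based candidate pruning."""
    n = len(transactions)
    counts = {}
    for t in transactions:                       # one scan for 1-itemsets
        for item in t:
            counts[item] = counts.get(item, 0) + 1
    p1 = {i: c / n for i, c in counts.items()}
    frequent = {frozenset([i]) for i, p in p1.items() if p >= min_support}
    result, k = set(frequent), 2
    while frequent:
        items = sorted({i for s in frequent for i in s})
        # estimated support under independence prunes candidates cheaply
        cands = [frozenset(c) for c in combinations(items, k)
                 if math.prod(p1[i] for i in c) >= min_support]
        # one database scan verifies the surviving candidates
        frequent = {c for c in cands
                    if sum(c <= set(t) for t in transactions) / n >= min_support}
        result |= frequent
        k += 1
    return result
```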

  9. Update on Development of Mesh Generation Algorithms in MeshKit

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Vanderzee, Evan [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-09-30

    MeshKit uses a graph-based design for coding all its meshing algorithms, which includes the Reactor Geometry (and mesh) Generation (RGG) algorithms. This report highlights the developmental updates of all the algorithms, results and future work. Parallel versions of algorithms, documentation and performance results are reported. RGG GUI design was updated to incorporate new features requested by the users; boundary layer generation and parallel RGG support were added to the GUI. Key contributions to the release, upgrade and maintenance of other SIGMA1 libraries (CGM and MOAB) were made. Several fundamental meshing algorithms for creating a robust parallel meshing pipeline in MeshKit are under development. Results and current status of automated, open-source and high quality nuclear reactor assembly mesh generation algorithms such as trimesher, quadmesher, interval matching and multi-sweeper are reported.

  10. Modeling of genetic algorithms with a finite population

    NARCIS (Netherlands)

    C.H.M. van Kemenade

    1997-01-01

    textabstractCross-competition between non-overlapping building blocks can strongly influence the performance of evolutionary algorithms. The choice of the selection scheme can have a strong influence on the performance of a genetic algorithm. This paper describes a number of different genetic

  11. Editorial Commentary: The Importance of Developing an Algorithm When Diagnosing Hip Pain.

    Science.gov (United States)

    Coleman, Struan H

    2016-08-01

    The differential diagnosis of groin pain is broad and complex. Therefore, it is essential to develop an algorithm when differentiating the hip as a cause of groin pain from other sources. Selective injections in and around the hip can be helpful when making the diagnosis but are only one part of the algorithm.

  12. Development of Online Cognitive and Algorithm Tests as Assessment Tools in Introductory Computer Science Courses

    Science.gov (United States)

    Avancena, Aimee Theresa; Nishihara, Akinori; Vergara, John Paul

    2012-01-01

    This paper presents the online cognitive and algorithm tests, which were developed in order to determine if certain cognitive factors and fundamental algorithms correlate with the performance of students in their introductory computer science course. The tests were implemented among Management Information Systems majors from the Philippines and…

  13. Design patterns for the development of electronic health record-driven phenotype extraction algorithms.

    Science.gov (United States)

    Rasmussen, Luke V; Thompson, Will K; Pacheco, Jennifer A; Kho, Abel N; Carrell, David S; Pathak, Jyotishman; Peissig, Peggy L; Tromp, Gerard; Denny, Joshua C; Starren, Justin B

    2014-10-01

    Design patterns, in the context of software development and ontologies, provide generalized approaches and guidance to solving commonly occurring problems, or addressing common situations typically informed by intuition, heuristics and experience. While the biomedical literature contains broad coverage of specific phenotype algorithm implementations, no work to date has attempted to generalize common approaches into design patterns, which may then be distributed to the informatics community to efficiently develop more accurate phenotype algorithms. Using phenotyping algorithms stored in the Phenotype KnowledgeBase (PheKB), we conducted an independent iterative review to identify recurrent elements within the algorithm definitions. We extracted and generalized recurrent elements in these algorithms into candidate patterns. The authors then assessed the candidate patterns for validity by group consensus, and annotated them with attributes. A total of 24 electronic Medical Records and Genomics (eMERGE) phenotypes available in PheKB as of 1/25/2013 were downloaded and reviewed. From these, a total of 21 phenotyping patterns were identified, which are available as an online data supplement. Repeatable patterns within phenotyping algorithms exist, and when codified and cataloged may help to educate both experienced and novice algorithm developers. The dissemination and application of these patterns has the potential to decrease the time to develop algorithms, while improving portability and accuracy.

  14. Evaluating Knowledge Structure-Based Adaptive Testing Algorithms and System Development

    Science.gov (United States)

    Wu, Huey-Min; Kuo, Bor-Chen; Yang, Jinn-Min

    2012-01-01

    In recent years, many computerized test systems have been developed for diagnosing students' learning profiles. Nevertheless, it remains a challenging issue to find an adaptive testing algorithm to both shorten testing time and precisely diagnose the knowledge status of students. In order to find a suitable algorithm, four adaptive testing…

  16. A new Gibbs sampling based algorithm for Bayesian model updating with incomplete complex modal data

    Science.gov (United States)

    Cheung, Sai Hung; Bansal, Sahil

    2017-08-01

    Model updating using measured system dynamic response has a wide range of applications in system response evaluation and control, health monitoring, or reliability and risk assessment. In this paper, we are interested in model updating of a linear dynamic system with non-classical damping based on incomplete modal data including modal frequencies, damping ratios and partial complex mode shapes of some of the dominant modes. In the proposed algorithm, the identification model is based on a linear structural model where the mass and stiffness matrix are represented as a linear sum of contribution of the corresponding mass and stiffness matrices from the individual prescribed substructures, and the damping matrix is represented as a sum of individual substructures in the case of viscous damping, in terms of mass and stiffness matrices in the case of Rayleigh damping or a combination of the former. To quantify the uncertainties and plausibility of the model parameters, a Bayesian approach is developed. A new Gibbs-sampling based algorithm is proposed that allows for an efficient update of the probability distribution of the model parameters. In addition to the model parameters, the probability distribution of complete mode shapes is also updated. Convergence issues and numerical issues arising in the case of high-dimensionality of the problem are addressed and solutions to tackle these problems are proposed. The effectiveness and efficiency of the proposed method are illustrated by numerical examples with complex modes.

  17. Algorithms for a parallel implementation of Hidden Markov Models with a small state space

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Sand, Andreas

    2011-01-01

    Two of the most important algorithms for Hidden Markov Models are the forward and the Viterbi algorithms. We show how formulating these using linear algebra naturally lends itself to parallelization. Although the obtained algorithms are slow for Hidden Markov Models with large state spaces, they require very little communication between processors, and are fast in practice on models with a small state space. We have tested our implementation against two other implementations on artificial data and observe a speed-up of roughly a factor of 5 for the forward algorithm and more than 6 for the Viterbi algorithm. We also tested our algorithm in the Coalescent Hidden Markov Model framework, where it gave a significant speed-up.
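
    The linear-algebra formulation amounts to replacing the per-state recursion with one matrix-vector product per observation, which is what parallelizes naturally. A minimal scaled-forward sketch:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of obs for an HMM with initial distribution pi (k,),
    transition matrix A (k, k) and emission matrix B (k, m)."""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    logp, alpha = np.log(s), alpha / s
    for o in obs[1:]:
        alpha = (A.T @ alpha) * B[:, o]   # one step = one matrix product
        s = alpha.sum()                   # rescale to avoid underflow
        logp += np.log(s)
        alpha /= s
    return logp
```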

  18. A Distributed and Deterministic TDMA Algorithm for Write-All-With-Collision Model

    CERN Document Server

    Arumugam, Mahesh

    2008-01-01

    Several self-stabilizing time division multiple access (TDMA) algorithms are proposed for sensor networks. In addition to providing a collision-free communication service, such algorithms enable the transformation of programs written in abstract models considered in distributed computing literature into a model consistent with sensor networks, i.e., write all with collision (WAC) model. Existing TDMA slot assignment algorithms have one or more of the following properties: (i) compute slots using a randomized algorithm, (ii) assume that the topology is known upfront, and/or (iii) assign slots sequentially. If these algorithms are used to transform abstract programs into programs in WAC model then the transformed programs are probabilistically correct, do not allow the addition of new nodes, and/or converge in a sequential fashion. In this paper, we propose a self-stabilizing deterministic TDMA algorithm where a sensor is aware of only its neighbors. We show that the slots are assigned to the sensors in a concu...

  19. Optimization of Land Use Suitability for Agriculture Using Integrated Geospatial Model and Genetic Algorithms

    Science.gov (United States)

    Mansor, S. B.; Pormanafi, S.; Mahmud, A. R. B.; Pirasteh, S.

    2012-08-01

    In this study, a geospatial model for land use allocation was developed from the viewpoint of simulating biological autonomous adaptability to the environment and infrastructural preference. The model was developed based on a multi-agent genetic algorithm. The model was customized to accommodate the constraints set for the study area, namely resource saving and environmental friendliness. The model was then applied to solve practical multi-objective spatial optimization allocation problems of land use in the core region of the Menderjan Basin in Iran. The first task was to study the dominant crops and the economic suitability evaluation of land. The second task was to determine the fitness function for the genetic algorithm. The third objective was to optimize the land use map using economic benefits. The results indicate that the proposed model has much better performance for solving complex multi-objective spatial optimization allocation problems and is a promising method for generating land use alternatives for further consideration in spatial decision-making.

  20. Parallel algorithms for interactive manipulation of digital terrain models

    Science.gov (United States)

    Davis, E. W.; Mcallister, D. F.; Nagaraj, V.

    1988-01-01

    Interactive three-dimensional graphics applications, such as terrain data representation and manipulation, require extensive arithmetic processing. Massively parallel machines are attractive for this application since they offer high computational rates, and grid connected architectures provide a natural mapping for grid based terrain models. Presented here are algorithms for data movement on the massive parallel processor (MPP) in support of pan and zoom functions over large data grids. It is an extension of earlier work that demonstrated real-time performance of graphics functions on grids that were equal in size to the physical dimensions of the MPP. When the dimensions of a data grid exceed the processing array size, data is packed in the array memory. Windows of the total data grid are interactively selected for processing. Movement of packed data is needed to distribute items across the array for efficient parallel processing. Execution time for data movement was found to exceed that for arithmetic aspects of graphics functions. Performance figures are given for routines written in MPP Pascal.

  1. Development of a Collins-type cryocooler floating piston control algorithm

    Science.gov (United States)

    Hogan, Jake; Hannon, Charles L.; Brisson, John

    2012-06-01

    The Collins-type cryocooler uses a floating piston design for the working fluid expansion. The piston floats between a cold volume, where the working fluid is expanded, and a warm volume. The piston is shuttled between opposite ends of the closed cylinder by opening and closing valves connecting several reservoirs at various pressures to the warm volume. Ideally, these pressures should be distributed between the high and low system pressure to gain good control of the piston motion. In this work, a numerical quasi-steady thermodynamic model is developed for the piston cycle. The model determines the steady state pressure distribution of the reservoirs for a given control algorithm. The results are then extended to show how valve timing modifications can be used to overcome helium leakage past the piston during operation.

  2. Development of a Low-Lift Chiller Controller and Simplified Precooling Control Algorithm - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Gayeski, N.; Armstrong, Peter; Alvira, M.; Gagne, J.; Katipamula, Srinivas

    2011-11-30

    KGS Buildings LLC (KGS) and Pacific Northwest National Laboratory (PNNL) have developed a simplified control algorithm and prototype low-lift chiller controller suitable for model-predictive control in a demonstration project of low-lift cooling. Low-lift cooling is a highly efficient cooling strategy conceived to enable low or net-zero energy buildings. A low-lift cooling system consists of a high efficiency low-lift chiller, radiant cooling, thermal storage, and model-predictive control to pre-cool thermal storage overnight on an optimal cooling rate trajectory. We call the properly integrated and controlled combination of these elements a low-lift cooling system (LLCS). This document is the final report for that project.

  3. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    Science.gov (United States)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key parameter (Mooney viscosity), which is used to evaluate the property of the product, can only be obtained offline with a 4-6 h delay. It would be quite helpful for the industry if the parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing. However, they often do not function well due to the multiphase and nonlinear properties of the process. The purpose of this paper is to develop an efficient soft sensing algorithm to solve this problem. Based on the proposed GMMD local sample selection criterion, the phase information is extracted in the local modeling. Using the Gaussian local modeling method within a just-in-time (JIT) learning framework, the nonlinearity of the process is well handled. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors, using samples from a real industrial rubber mixing process.
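
    A minimal sketch of the just-in-time local modeling pattern, with plain Euclidean distance standing in for the paper's GMMD sample-selection criterion (which is not detailed in the record):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def jit_gpr_predict(X_hist, y_hist, x_query, n_local=50):
    """Fit a local Gaussian process on the samples nearest the query,
    then predict; a fresh local model is built for every query."""
    d = np.linalg.norm(X_hist - x_query, axis=1)
    idx = np.argsort(d)[:n_local]             # select the local modelling set
    gpr = GaussianProcessRegressor(normalize_y=True)
    gpr.fit(X_hist[idx], y_hist[idx])
    mean, std = gpr.predict(x_query[None, :], return_std=True)
    return mean[0], std[0]
```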

  4. Development of Algorithms for Control of Motor Boat as Multidimensional Nonlinear Object

    Directory of Open Access Journals (Sweden)

    Gaiduk Anatoliy

    2015-01-01

    Full Text Available In this paper the authors develop and study a system for motor boat control that allows the boat to move along stated paths at a given speed. It is assumed that the boat is equipped with a measuring system that provides its current coordinates and linear and angular velocities. The control system is based on the mathematical model presented earlier (see references). In order to find the necessary controls analytically, all equations were transformed into the Jordan controllable form. Besides enabling the solution, this transformation also makes it possible to handle model nonlinearities and obtain the required quality of movement along the stated paths. The control system includes algorithms for controlling the longitudinal velocity and the boat course. The proposed control system was studied by simulation in MATLAB, taking into account the boat design limitations on the values of the control variables. The results of two experiments, differing in the value of the required velocity, are discussed.

  5. Unified C/VHDL Model Generation of FPGA-based LHCb VELO algorithms

    CERN Document Server

    Muecke, Manfred

    2007-01-01

    We show an alternative design approach for signal processing algorithms implemented on FPGAs. Instead of writing VHDL code for implementation and maintaining a C-model for algorithm simulation, we derive both models from one common source, allowing generation of synthesizable VHDL and cycle- and bit-accurate C code. We have tested our approach on the LHCb VELO pre-processing algorithms and report on experiences gained during the course of our work.

  6. Target Impact Detection Algorithm Using Computer-aided Design (CAD) Model Geometry

    Science.gov (United States)

    2014-09-01

    Technical Report ARMET-TR-13024: Target Impact Detection Algorithm Using Computer-Aided Design (CAD) Model Geometry. This report documents a method and algorithm to export geometry from a three-dimensional, computer-aided design (CAD) model in a format that can be

  7. Using memristor crossbar structure to implement a novel adaptive real time fuzzy modeling algorithm

    OpenAIRE

    Afrakoti, Iman Esmaili Paeen; Shouraki, Saeed Bagheri; Merrikhbayat, Farnood

    2013-01-01

    Although fuzzy techniques promise fast yet accurate modeling and control abilities for complicated systems, various difficulties have been revealed in real-world implementations. Usually there is no escape from iterative optimization based on crisp-domain algorithms. Recently, memristor structures have appeared promising for implementing neural network structures and fuzzy algorithms. In this paper a novel adaptive real-time fuzzy modeling algorithm is proposed which uses active learning me...

  8. A fast algorithm for a three-dimensional synthetic model of intermittent turbulence

    CERN Document Server

    Malara, Francesco; Nigro, Giuseppina; Sorriso-Valvo, Luca

    2016-01-01

    Synthetic turbulence models are useful tools that provide realistic representations of turbulence, necessary to test theoretical results, to serve as background fields in some numerical simulations, and to test analysis tools. Models of 1D and 3D synthetic turbulence previously developed still required large computational resources. A new wavelet-based model of synthetic turbulence, able to produce a field with tunable spectral law, intermittency and anisotropy, is presented here. The rapid algorithm introduced, based on the classic $p$-model of intermittent turbulence, allows a broad spectral range to be reached with modest computational effort. The model has been tested against the standard diagnostics for intermittent turbulence, i.e. the spectral analysis, the scale-dependent statistics of the field increments, and the multifractal analysis, all showing an excellent response.

  9. Parameter identification of ZnO surge arrester models based on genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bayadi, Abdelhafid [Laboratoire d' Automatique de Setif, Departement d' Electrotechnique, Faculte des Sciences de l' Ingenieur, Universite Ferhat ABBAS de Setif, Route de Bejaia Setif 19000 (Algeria)

    2008-07-15

    The correct and adequate modelling of ZnO surge arrester characteristics is very important for insulation coordination studies and system reliability. In this context, many researchers have devoted considerable effort to the development of surge arrester models that reproduce the dynamic characteristics observed in their behaviour when subjected to fast-front impulse currents. The difficulties with these models reside essentially in the calculation and adjustment of their parameters. This paper proposes a new technique based on a genetic algorithm to obtain the best possible series of parameter values for ZnO surge arrester models. The validity of the predicted parameters is then checked by comparing the predicted results with the experimental results available in the literature. Using the ATP-EMTP package, an application of the arrester model to network system studies is presented and discussed. (author)
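
    A sketch of the parameter-identification loop; SciPy's differential evolution stands in here for the paper's genetic algorithm (both are population-based searches), and `simulate` is a placeholder for the user's arrester model.

```python
import numpy as np
from scipy.optimize import differential_evolution

def identify_arrester_params(simulate, i_applied, v_measured, bounds):
    """Fit surge-arrester model parameters by minimizing the error
    between simulated and measured residual voltage.

    simulate(params, i) -> simulated voltage trace (user-supplied model)
    bounds : list of (low, high) pairs, one per parameter
    """
    def cost(params):
        v_sim = simulate(params, i_applied)
        return float(np.mean((v_sim - v_measured) ** 2))
    result = differential_evolution(cost, bounds, seed=1)
    return result.x, result.fun
```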

  10. A Development of Self-Organization Algorithm for Fuzzy Logic Controller

    Energy Technology Data Exchange (ETDEWEB)

    Park, Y.M.; Moon, U.C. [Seoul National Univ. (Korea, Republic of). Coll. of Engineering; Lee, K.Y. [Pennsylvania State Univ., University Park, PA (United States). Dept. of Electrical Engineering

    1994-09-01

    This paper proposes a complete design method for an on-line self-organizing fuzzy logic controller that requires no plant model. By mimicking the human learning process, the control algorithm finds control rules for a system about which little is known. To realize this, the concept of a Fuzzy Auto-Regressive Moving Average (FARMA) rule is introduced. In conventional fuzzy logic control, knowledge of the system supplied by an expert is required for developing control rules. The proposed fuzzy logic controller, however, needs no expert to make control rules. Instead, rules are generated using the history of input-output pairs, and new inference and defuzzification methods are developed. The generated rules are stored in the fuzzy rule space and updated on-line by a self-organizing procedure. The validity of the proposed fuzzy logic control method has been demonstrated numerically in controlling an inverted pendulum. (author). 28 refs., 16 figs.

  11. Adaptation of an Evolutionary Algorithm in Modeling Electric Circuits

    Directory of Open Access Journals (Sweden)

    J. Hájek

    2010-01-01

    Full Text Available This paper describes the influence of setting the control parameters of a differential evolution algorithm (DE), and of adapting these parameters, on the simulation of electric circuits and their components. Various DE algorithm strategies are investigated, as well as the influence of adapting the control parameters (Cr, F) during simulation and the effect of sample size. Optimizing an equivalent circuit diagram is chosen as the test task. Several strategies and settings of the DE algorithm are evaluated according to their convergence to the correct solution.

  12. Genetic Algorithms for Optimization of Machine-learning Models and their Applications in Bioinformatics

    KAUST Repository

    Magana-Mora, Arturo

    2017-04-29

    Machine-learning (ML) techniques have been widely applied to solve different problems in biology. However, biological data are large and complex, which often result in extremely intricate ML models. Frequently, these models may have a poor performance or may be computationally unfeasible. This study presents a set of novel computational methods and focuses on the application of genetic algorithms (GAs) for the simplification and optimization of ML models and their applications to biological problems. The dissertation addresses the following three challenges. The first is to develop a generalizable classification methodology able to systematically derive competitive models despite the complexity and nature of the data. Although several algorithms for the induction of classification models have been proposed, the algorithms are data dependent. Consequently, we developed OmniGA, a novel and generalizable framework that uses different classification models in a tree-like decision structure, along with a parallel GA for the optimization of the OmniGA structure. Results show that OmniGA consistently outperformed existing commonly used classification models. The second challenge is the prediction of translation initiation sites (TIS) in plant genomic DNA. We performed a statistical analysis of the genomic DNA and proposed a new set of discriminant features for this problem. We developed a wrapper method based on GAs for selecting an optimal feature subset, which, in conjunction with a classification model, produced the most accurate framework for the recognition of TIS in plants. Finally, results demonstrate that despite the evolutionary distance between different plants, our approach successfully identified conserved genomic elements that may serve as the starting point for the development of a generic model for prediction of TIS in eukaryotic organisms. Finally, the third challenge is the accurate prediction of polyadenylation signals in human genomic DNA. To achieve

  13. Efficient Fourier based Algorithm Development for Airborne Moving Target Indication

    NARCIS (Netherlands)

    Lidicky, L.; Hoogeboom, P.

    2009-01-01

    This paper shows how the signal model that is commonly used as a starting point in multi-channel Space Time Adaptive Processing (STAP) for airborne Moving Target Indication (MTI) formally corresponds to a model that can be derived from a bi-static Synthetic Aperture Radar (SAR) model extended for

  14. A model-based circular binary segmentation algorithm for the analysis of array CGH data

    Directory of Open Access Journals (Sweden)

    Tu Shih-Hsin

    2011-10-01

    Full Text Available Abstract Background: Circular Binary Segmentation (CBS) is a permutation-based algorithm for array Comparative Genomic Hybridization (aCGH) data analysis. CBS accurately segments data by detecting change-points using a maximal-t test, but extensive computational burden is involved in evaluating the significance of change-points using permutations. A recent implementation utilizing a hybrid method and early stopping rules (hybrid CBS) to improve the performance in speed was subsequently proposed. However, a time analysis revealed that a major portion of the computation time of the hybrid CBS was still spent on permutation. In addition, what the hybrid method provides is an approximation of the significance upper bound or lower bound, not an approximation of the significance of change-points itself. Results: We developed a novel model-based algorithm, extreme-value based CBS (eCBS), which limits permutations and provides robust results without loss of accuracy. Thousands of aCGH datasets under the null hypothesis were simulated in advance based on a variety of non-normal assumptions, and the corresponding maximal-t distribution was modeled by the Generalized Extreme Value (GEV) distribution. The modeling results, which associate characteristics of aCGH data with the GEV parameters, constitute lookup tables (the eXtreme model). Using the eXtreme model, the significance of change-points can be evaluated in constant time complexity through a table lookup process. Conclusions: A novel algorithm, eCBS, was developed in this study. The current implementation of eCBS consistently outperforms the hybrid CBS 4× to 20× in computation time without loss of accuracy. Source codes, supplementary materials, supplementary figures, and supplementary tables can be found at http://ntumaps.cgm.ntu.edu.tw/eCBSsupplementary.
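
    The constant-time lookup step can be sketched with SciPy's GEV distribution; how the table parameters are obtained (fitting simulated maximal-t values) is an assumption shown in the comment.

```python
from scipy.stats import genextreme

def changepoint_pvalue(max_t, shape, loc, scale):
    """Significance of an observed maximal-t statistic under a fitted
    Generalized Extreme Value null model; replaces permutation testing
    with a constant-time evaluation."""
    return genextreme.sf(max_t, shape, loc=loc, scale=scale)

# The null model would be fitted once, offline, per data characteristic:
# shape, loc, scale = genextreme.fit(simulated_max_t_values)
```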

  15. Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models

    KAUST Repository

    Vignal, Philippe

    2016-02-11

    Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly as done in sharp interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allow phase-field models to overcome problems found by predecessors. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can many times lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation. This model was put forward to model microstructure evolution. The algorithm developed conserves, guarantees energy stability and is second order accurate in time. The second part of the thesis presents two numerical schemes that generalize literature regarding energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy-stable, and second order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme. This time-adaptive strategy relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis. The codes are

  16. The use of machine learning algorithms to design a generalized simplified denitrification model

    Directory of Open Access Journals (Sweden)

    F. Oehler

    2010-04-01

    Full Text Available We designed generalized simplified models using machine learning algorithms (ML) to assess denitrification at the catchment scale. In particular, we designed an artificial neural network (ANN) to simulate total nitrogen emissions from the denitrification process. Boosted regression trees (BRT), another ML technique, were also used to analyse the relationships and the relative influences of different input variables on total denitrification. To calibrate the ANN and BRT models, we used a large database obtained by collating datasets from the literature. We developed a simple methodology to give confidence intervals for the calibration and validation process. Both ML algorithms clearly outperformed a commonly used simplified model of nitrogen emissions, NEMIS, which is based on denitrification potential, temperature, soil water content and nitrate concentration. The ML models used soil organic matter % in place of a denitrification potential, and pH as a fifth input variable. The BRT analysis reaffirms the importance of temperature, soil water content and nitrate concentration. The generality of the ANN model may also be improved if pH is used to differentiate between soil types. Further improvements in model performance can be achieved by lessening dataset effects.

  17. Application of micro-genetic algorithm for calibration of kinetic parameters in HCCI engine combustion model

    Institute of Scientific and Technical Information of China (English)

    Haozhong HUANG; Wanhua SU

    2008-01-01

    The micro-genetic algorithm (μGA), a highly effective optimization method, is applied to calibrate a newly developed reduced chemical kinetic model (40 species and 62 reactions) for the homogeneous charge compression ignition (HCCI) combustion of n-heptane, to improve its autoignition predictions for different engine operating conditions. The seven kinetic parameters of the calibrated model are determined using a combination of the micro-genetic algorithm and the SENKIN program of the CHEMKIN chemical kinetics software package. Simulation results show that the autoignition predictions of the calibrated model agree better with those of the detailed chemical kinetic model (544 species and 2446 reactions) than the original model over the range of equivalence ratios from 0.1 to 1.3 and temperatures from 300 to 3000 K. The results of this study demonstrate that the μGA is an effective tool to facilitate the calibration of a large number of kinetic parameters in a reduced kinetic model.

  18. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    Science.gov (United States)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-07-01

    This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of the aerosol optical properties, in which major principal components (PCs) for surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of Jacobians are validated against the finite-difference calculations with relative error less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield the reconstruction of spectral surface reflectance with errors less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and root-mean-square error of less than 0.003), which suggests self-consistency in the

  19. PM Synchronous Motor Dynamic Modeling with Genetic Algorithm ...

    African Journals Online (AJOL)

    Adel

    Abstract fragments only: artificial-intelligence techniques such as neural networks and genetic algorithms are applied to PM synchronous motor dynamic modeling (El Shahat and El Shewy, …); maximum power factor is reported to have the most powerful effect on the machine parameters. Keywords: Artificial Intelligence, Renewable Energy, Power System, Control Systems, PV.

  20. Image Encryption Algorithm Based on Chaotic Economic Model

    Directory of Open Access Journals (Sweden)

    S. S. Askar

    2015-01-01

    Full Text Available In the literature, chaotic economic systems have received much attention because of their complex dynamic behaviors such as bifurcation and chaos. Recently, a few studies on the use of these systems in cryptographic algorithms have been conducted. In this paper, a new image encryption algorithm based on a chaotic economic map is proposed. An implementation of the proposed algorithm on a plain image based on the chaotic map is performed. The obtained results show that the proposed algorithm can successfully encrypt and decrypt images with the same security keys. The security analysis is encouraging and shows that the encrypted images have good information entropy and very low correlation coefficients, and the distribution of the gray values of the encrypted image has random-like behavior.
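
    A minimal sketch of the chaotic-map stream-cipher pattern the abstract describes; the logistic map below stands in for the paper's chaotic economic map, which the record does not specify.

```python
import numpy as np

def chaotic_keystream(n, x0=0.3141, r=3.9999):
    """Byte keystream from iterating a chaotic map (logistic map here,
    as a stand-in for the chaotic economic map); (x0, r) is the key."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def encrypt(image, key=(0.3141, 3.9999)):
    """XOR a uint8 image with the keystream; applying encrypt again
    with the same key decrypts."""
    flat = image.ravel()
    return (flat ^ chaotic_keystream(flat.size, *key)).reshape(image.shape)
```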

  1. Combinatorial Clustering Algorithm of Quantum-Behaved Particle Swarm Optimization and Cloud Model

    Directory of Open Access Journals (Sweden)

    Mi-Yuan Shan

    2013-01-01

    Full Text Available We propose a combinatorial clustering algorithm of cloud model and quantum-behaved particle swarm optimization (COCQPSO) to solve the stochastic problem. The algorithm employs a novel probability model as well as a permutation-based local search method. We set the parameters of COCQPSO based on design of experiments. In a comprehensive computational study, we scrutinize the performance of COCQPSO on a set of widely used benchmark instances. By benchmarking the combinatorial clustering algorithm against state-of-the-art algorithms, we show that its performance compares very favorably. The fuzzy combinatorial optimization algorithm of cloud model and quantum-behaved particle swarm optimization (FCOCQPSO) in vague sets (IVSs) is more expressive than other fuzzy sets. Finally, numerical examples show the remarkable clustering effectiveness of the COCQPSO and FCOCQPSO algorithms.

  2. FHSA-SED: Two-Locus Model Detection for Genome-Wide Association Study with Harmony Search Algorithm.

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    Full Text Available The two-locus model is a typical significant disease model to be identified in genome-wide association studies (GWAS). Due to the intensive computational burden and the diversity of disease models, existing methods suffer from low detection power, high computation cost, and a preference for certain types of disease models. In this study, two scoring functions (the Bayesian-network-based K2-score and the Gini-score) are used to characterize two-SNP loci as candidate models; the two criteria are adopted simultaneously to improve identification power and to tackle the preference problem among disease models. The harmony search algorithm (HSA) is improved to quickly find the most likely candidate models among all two-locus models; a local search algorithm with a two-dimensional tabu table is presented to avoid repeatedly evaluating disease models that have a strong marginal effect. Finally, the G-test statistic is used to further test the candidate models. We investigate our method, named FHSA-SED, on 82 simulated datasets and a real AMD dataset, and compare it with two typical methods (MACOED and CSE) which have been developed recently based on swarm intelligent search algorithms. The results of the simulation experiments indicate that our method outperforms the two compared algorithms in terms of detection power, computation time, evaluation times, sensitivity (TPR), specificity (SPC), positive predictive value (PPV) and accuracy (ACC). Our method has identified two SNPs (rs3775652 and rs10511467) that may also be associated with disease in the AMD dataset.

  3. The Evaluation Model About the Result of Enterprise Technological Innovation Based on DAGF Algorithm

    Institute of Scientific and Technical Information of China (English)

    Like Mao; Zigang Zhang

    2004-01-01

    Based on the DAGF algorithm, an evaluation model for the results of enterprise technological innovation is proposed. Furthermore, the establishment of its system of evaluation indicators and the DAGF algorithm are discussed in detail. The result of a case study shows that the model is suitable for evaluating the results of enterprise technological innovation.

  4. Dual geometric worm algorithm for two-dimensional discrete classical lattice models

    Science.gov (United States)

    Hitchcock, Peter; Sørensen, Erik S.; Alet, Fabien

    2004-07-01

    We present a dual geometrical worm algorithm for two-dimensional Ising models. The existence of such dual algorithms was first pointed out by Prokof’ev and Svistunov [N. Prokof’ev and B. Svistunov, Phys. Rev. Lett. 87, 160601 (2001)]. The algorithm is defined on the dual lattice and is formulated in terms of bond variables and can therefore be generalized to other two-dimensional models that can be formulated in terms of bond variables. We also discuss two related algorithms formulated on the direct lattice, applicable in any dimension. These latter algorithms turn out to be less efficient but of considerable intrinsic interest. We show how such algorithms quite generally can be “directed” by minimizing the probability for the worms to erase themselves. Explicit proofs of detailed balance are given for all the algorithms. In terms of computational efficiency the dual geometrical worm algorithm is comparable to well known cluster algorithms such as the Swendsen-Wang and Wolff algorithms; however, it is quite different in structure and allows for a very simple and efficient implementation. The dual algorithm also allows for a very elegant way of calculating the domain wall free energy.

  5. Exponential Gaussian approach for spectral modeling: The EGO algorithm I. Band saturation

    Science.gov (United States)

    Pompilio, Loredana; Pedrazzi, Giuseppe; Sgavetti, Maria; Cloutis, Edward A.; Craig, Michael A.; Roush, Ted L.

    2009-06-01

    Curve fitting techniques are a widespread approach to spectral modeling in the VNIR range [Burns, R.G., 1970. Am. Mineral. 55, 1608-1632; Singer, R.B., 1981. J. Geophys. Res. 86, 7967-7982; Roush, T.L., Singer, R.B., 1986. J. Geophys. Res. 91, 10301-10308; Sunshine, J.M., Pieters, C.M., Pratt, S.F., 1990. J. Geophys. Res. 95, 6955-6966]. They have been successfully used to model reflectance spectra of powdered minerals and mixtures, natural rock samples and meteorites, and unknown remote spectra of the Moon, Mars and asteroids. Here, we test a new decomposition algorithm to model VNIR reflectance spectra and call it Exponential Gaussian Optimization (EGO). The EGO algorithm is derived from and complementary to the MGM of Sunshine et al. [Sunshine, J.M., Pieters, C.M., Pratt, S.F., 1990. J. Geophys. Res. 95, 6955-6966]. The general EGO equation has been especially designed to account for absorption bands affected by saturation and asymmetry. Here we present a special case of EGO and apply it to modeling saturated electronic transition bands. Our main goals are: (1) to recognize and model band saturation in reflectance spectra; (2) to develop a basic approach for the decomposition of rock spectra, where effects due to saturation are most prevalent; (3) to reduce the uncertainty related to quantitative estimation when band saturation is occurring. In order to accomplish these objectives, we simulate flat bands starting from pure Gaussians and test the EGO algorithm on those simulated spectra first. Then we test the EGO algorithm on a number of measurements acquired on powdered pyroxenes having different compositions and average grain size and binary mixtures of orthopyroxenes with barium sulfate. The main results arising from this study are: (1) the EGO model is able to numerically account for the occurrence of saturation effects on reflectance spectra of powdered minerals and mixtures; (2) the systematic dilution of a strong absorber using a bright neutral material is not
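
    A minimal curve-fitting sketch in the MGM spirit from which EGO is derived: log-reflectance as a linear continuum plus one Gaussian band. EGO's exponential-Gaussian term for saturation and asymmetry is not reproduced here; names and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def mgm_band(wavelength, c0, c1, s, center, width):
    """Log-reflectance: linear continuum plus one Gaussian absorption
    (band strength s is negative for an absorption)."""
    return c0 + c1 * wavelength + s * np.exp(
        -0.5 * ((wavelength - center) / width) ** 2)

# Fit to a measured spectrum (wl in nm, refl as reflectance):
# popt, _ = curve_fit(mgm_band, wl, np.log(refl),
#                     p0=[0.0, 0.0, -0.5, 1000.0, 100.0])
```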

  6. Ice classification algorithm development and verification for the Alaska SAR Facility using aircraft imagery

    Science.gov (United States)

    Holt, Benjamin; Kwok, Ronald; Rignot, Eric

    1989-01-01

    The Alaska SAR Facility (ASF) at the University of Alaska, Fairbanks is a NASA program designed to receive, process, and archive SAR data from ERS-1 and to support investigations that will use this regional data. As part of ASF, specialized subsystems and algorithms to produce certain geophysical products from the SAR data are under development. Of particular interest are ice motion, ice classification, and ice concentration. This work focuses on the algorithm under development for ice classification, and the verification of the algorithm using C-band aircraft SAR imagery recently acquired over the Alaskan arctic.

  7. A decision tree algorithm for investigation of model biases related to dynamical cores and physical parameterizations.

    Science.gov (United States)

    Soner Yorgun, M; Rood, Richard B

    2016-12-01

    An object-based evaluation method using a pattern recognition algorithm (i.e., classification trees) is applied to the simulated orographic precipitation for idealized experimental setups using the National Center of Atmospheric Research (NCAR) Community Atmosphere Model (CAM) with the finite volume (FV) and the Eulerian spectral transform dynamical cores with varying resolutions. Daily simulations were analyzed and three different types of precipitation features were identified by the classification tree algorithm. The statistical characteristics of these features (i.e., maximum value, mean value, and variance) were calculated to quantify the difference between the dynamical cores and changing resolutions. Even with the simple and smooth topography in the idealized setups, complexity in the precipitation fields simulated by the models develops quickly. The classification tree algorithm using objective thresholding successfully detected different types of precipitation features even as the complexity of the precipitation field increased. The results show that the complexity and the bias introduced in small-scale phenomena due to the spectral transform method of CAM Eulerian spectral dynamical core is prominent, and is an important reason for its dissimilarity from the FV dynamical core. The resolvable scales, both in horizontal and vertical dimensions, have significant effect on the simulation of precipitation. The results of this study also suggest that an efficient and informative study about the biases produced by GCMs should involve daily (or even hourly) output (rather than monthly mean) analysis over local scales.
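
    A minimal sketch of the object-based step: classify detected features from their summary statistics with a decision tree. The rows and labels below are synthetic placeholders, not the paper's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# One row per detected precipitation feature: (maximum, mean, variance)
X = np.array([[20.0, 6.1, 9.3],
              [3.2, 1.0, 0.4],
              [11.5, 4.2, 5.0],
              [2.8, 0.9, 0.3]])
y = ["orographic", "background", "orographic", "background"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["max", "mean", "variance"]))
```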

  8. Model Algorithm Research on Cooling Path Control of Hot-rolled Dual-phase Steel

    Institute of Scientific and Technical Information of China (English)

    Xiao-qing XU; Xiao-dong HAO; Shi-guang ZHOU; Chang-sheng LIU; Qi-fu ZHANG

    2016-01-01

    With the development of advanced high-strength steel, especially dual-phase steel, the model algorithm for cooling control after hot rolling has to achieve the targeted coiling temperature control at the location of the downcoiler whilst maintaining cooling path control based on strip microstructure along the whole cooling section. A cooling path control algorithm was proposed for the laminar cooling process as a solution to practical difficulties associated with the realization of the thermal cycle during the cooling process. The heat conduction equation coupled with the carbon diffusion equation with a moving boundary was employed in order to simulate temperature change and phase transformation kinetics, making it possible to observe the temperature field and the phase fraction of the strip in real time. On this basis, an optimization method was utilized for valve settings to ensure minimum deviations between the predicted and actual cooling paths of the strip, taking into account the constraints of the cooling equipment's specific capacity, cooling line length, etc. Results showed that the model algorithm was able to achieve online cooling path control for dual-phase steel.

  9. Plant development models

    NARCIS (Netherlands)

    Chuine, I.; Garcia de Cortazar-Atauri, I.; Kramer, K.; Hänninen, H.

    2013-01-01

    In this chapter we provide a brief overview of plant phenology modeling, focusing on mechanistic phenological models. After a brief history of plant phenology modeling, we present the different models which have been described in the literature so far and highlight the main differences between them,

  10. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem in the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of test data sets are staged for the algorithm once and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, new data sets are continually run through the algorithm, which requires significant effort to stage each of those data sets without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each static LUT file in such a way that the correct set of LUTs required by each algorithm is automatically provided without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  11. A comparison of computational efficiencies of stochastic algorithms in terms of two infection models.

    Science.gov (United States)

    Banks, H Thomas; Hu, Shuhua; Joyner, Michele; Broido, Anna; Canter, Brandi; Gayvert, Kaitlyn; Link, Kathryn

    2012-07-01

    In this paper, we investigate three particular algorithms: a stochastic simulation algorithm (SSA) and explicit and implicit tau-leaping algorithms. To compare these methods, we used them to analyze two infection models: a Vancomycin-resistant enterococcus (VRE) infection model at the population level, and a Human Immunodeficiency Virus (HIV) within-host infection model. While the first has a low species count and few transitions, the second is more complex, with a comparable number of species involved. The relative efficiency of each algorithm is determined based on computational time and the degree of precision required. The numerical results suggest that all three algorithms have similar computational efficiency for the simpler VRE model, and the SSA is the best choice due to its simplicity and accuracy. In addition, we found that with the larger and more complex HIV model, implementations and modifications of tau-leaping methods are preferred.
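
    The SSA itself fits in a few lines. The sketch below simulates a toy two-reaction colonization model in the spirit of the VRE example; the rate constants and initial counts are invented for illustration.

```python
# Sketch: Gillespie stochastic simulation algorithm (SSA) for a toy
# colonization model: susceptible patients become colonized at rate
# beta*S*I and are decolonized at rate gamma*I. Parameters are assumed.
import numpy as np

rng = np.random.default_rng(1)
beta, gamma = 0.002, 0.5
S, I, t, t_end = 95, 5, 0.0, 50.0

times, infected = [t], [I]
while t < t_end and I > 0:
    rates = np.array([beta * S * I, gamma * I])   # reaction propensities
    total = rates.sum()
    t += rng.exponential(1.0 / total)             # time to next event
    if rng.random() < rates[0] / total:           # choose which event fires
        S -= 1; I += 1                            # colonization event
    else:
        S += 1; I -= 1                            # decolonization event
    times.append(t); infected.append(I)

print(f"final colonized count at t={times[-1]:.1f}: {infected[-1]}")
```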

  12. Prediction and Research on Vegetable Price Based on Genetic Algorithm and Neural Network Model

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Considering the complexity of vegetable price forecasting, a prediction model for vegetable prices was set up by applying a neural network based on a genetic algorithm, exploiting the characteristics of both genetic algorithms and neural networks. Taking mushrooms as an example, the parameters of the model are analyzed through experiment. Finally, the results of the genetic algorithm approach and the BP neural network are compared. The results show that the absolute error of the prediction data is on the order of 10%, and the accuracy of the genetic-algorithm-based neural network is higher than that of the BP neural network model, especially where the absolute error of the prediction data is within 15-20%. The genetic-algorithm-based neural network is clearly better than the BP neural network model, which demonstrates the favorable generalization capability of the model.
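
    One common way to combine the two techniques is to let a genetic algorithm evolve the weights of a small network, as in the following sketch. The network size, GA settings, and synthetic price series are illustrative assumptions, not details from the paper.

```python
# Sketch: a genetic algorithm searching the weights of a tiny feed-forward
# network for one-step-ahead price prediction. All settings are invented.
import numpy as np

rng = np.random.default_rng(2)
prices = 10 + np.sin(np.arange(60) / 5.0) + rng.normal(0, 0.1, 60)
X = np.column_stack([prices[:-2], prices[1:-1]])   # two lagged inputs
y = prices[2:]

H = 5                                              # hidden units
n_w = 2*H + H + H + 1                              # total weight count

def predict(w, X):
    W1 = w[:2*H].reshape(2, H); b1 = w[2*H:3*H]
    W2 = w[3*H:4*H]; b2 = w[4*H]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)      # negative MSE

pop = rng.normal(0, 0.5, (40, n_w))
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]        # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(n_w) < 0.5               # uniform crossover
        kids.append(np.where(mask, a, b) + rng.normal(0, 0.05, n_w))  # mutate
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(w) for w in pop])]
print("final MSE:", -fitness(best))
```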

  13. RECONFIGURING POWER SYSTEMS TO MINIMIZE CASCADING FAILURES: MODELS AND ALGORITHMS

    Energy Technology Data Exchange (ETDEWEB)

    Bienstock, Daniel

    2014-04-11

    The main goal of this project was to develop new scientific tools, based on optimization techniques, for controlling and modeling cascading failures of electrical power transmission systems. We have developed a high-quality tool for simulating cascading failures. The problem of how to control a cascade was addressed, with the aim of stopping the cascade with a minimum of load lost. Yet another aspect of cascades is the investigation of which events would trigger a cascade or, more appropriately, the computation of the most harmful initiating event given some constraint on the severity of the event. One common feature of the cascade models described (indeed, of several of the cascade models found in the literature) is that we study thermally induced line tripping. We have produced a study that accounts for exogenous randomness (e.g., wind and ambient temperature) that could affect the thermal behavior of a line, with a focus on controlling the power flow of the line while maintaining a safe probability of line overload. This was done by means of a rigorous analysis of a stochastic version of the heat equation. We also incorporated a model of randomness in the behavior of wind power output, again modeling an OPF-like problem that uses chance constraints to maintain a low probability of line overloads; this work has been continued so as to account for generator dynamics as well.

  14. Algorithm for Modeling Wire Cut Electrical Discharge Machine Parameters using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    G.Sankara Narayanan

    2014-03-01

    Full Text Available Unconventional machining processes find many applications in the aerospace and precision industries. They are preferred over conventional methods because of the advent of composite and high strength-to-weight-ratio materials and complex parts, and also because of their high accuracy and precision. Usually in unconventional machine tools, a trial-and-error method is used to fix the values of the process parameters, which increases production time and material wastage. A mathematical model functionally relating the process parameters and operating parameters of a wire cut electric discharge machine (WEDM) is developed incorporating an artificial neural network (ANN); the workpiece material is SKD11 tool steel. This is accomplished by training a feed-forward neural network with the back-propagation Levenberg-Marquardt learning algorithm. The data used for training and testing the ANN were obtained by conducting trial runs on a wire cut electric discharge machine in a small-scale industry in South India. The programs for training and testing the neural network were developed using the MATLAB 7.0.1 package. In this work, parameters such as thickness, time, and wear are considered as the input values, from which the values of the process parameters are related and an algorithm is derived. Hence, the proposed algorithm reduces the time taken by trial runs to set the input process parameters of the WEDM and thus reduces production time along with material wastage. Thus the cost of the machining process is reduced, thereby increasing overall productivity.

  15. Development of Automatic Cluster Algorithm for Microcalcification in Digital Mammography

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Seok Yoon [Dept. of Medical Engineering, Korea University, Seoul (Korea, Republic of); Kim, Chang Soo [Dept. of Radiological Science, College of Health Sciences, Catholic University of Pusan, Pusan (Korea, Republic of)

    2009-03-15

    Digital mammography is an efficient imaging technique for the detection and diagnosis of breast pathological disorders. Six mammographic criteria, namely the number of clusters and the number, size, extent, and morphologic shape of microcalcifications, as well as the presence of a mass, were reviewed, and their correlation with the pathologic diagnosis was evaluated. It is very important to find breast cancer early, when treatment can reduce deaths from breast cancer and the need for breast incision. In breast cancer screening, mammography is typically used to view the internal organization of the breast. Clustering microcalcifications on mammography represent an important feature of breast masses, especially of intraductal carcinoma. Because microcalcification has a high correlation with breast cancer, a cluster of microcalcifications can be very helpful for the clinician in predicting breast cancer. For this study, three steps of quantitative evaluation are proposed: DoG filtering, adaptive thresholding, and expectation maximization. Through the proposed algorithm, the number of calcifications and the length of each cluster in the microcalcification distribution can be measured, and these can be used as indicators for the primary diagnosis, allowing breast cancer to be diagnosed automatically.
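
    The first two steps can be sketched compactly: a difference-of-Gaussians (DoG) filter enhances spot-like structures, a statistics-based threshold binarizes the result, and connected-component labeling counts candidate clusters. The synthetic image and all parameter values below are illustrative assumptions.

```python
# Sketch: DoG filtering, thresholding, and cluster labeling for bright
# spot-like structures in a mammogram-like image. All values are invented.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
img = rng.normal(0.0, 0.05, (256, 256))          # background noise
for _ in range(12):                               # synthetic "calcifications"
    r, c = rng.integers(20, 236, 2)
    img[r-1:r+2, c-1:c+2] += 1.0

dog = ndimage.gaussian_filter(img, 1.0) - ndimage.gaussian_filter(img, 3.0)
thresh = dog.mean() + 3.0 * dog.std()             # simple adaptive threshold
mask = dog > thresh

labels, n_clusters = ndimage.label(mask)          # connected components
sizes = ndimage.sum(mask, labels, range(1, n_clusters + 1))
print(f"detected {n_clusters} candidate calcifications, sizes: {sizes}")
```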

  16. QSAR modeling for quinoxaline derivatives using genetic algorithm and simulated annealing based feature selection.

    Science.gov (United States)

    Ghosh, P; Bagchi, M C

    2009-01-01

    With a view to the rational design of selective quinoxaline derivatives, 2D- and 3D-QSAR models have been developed for the prediction of anti-tubercular activities. Successful implementation of a predictive QSAR model largely depends on the selection of a preferred set of molecular descriptors that can signify the chemico-biological interaction. Genetic algorithm (GA) and simulated annealing (SA) are applied as variable selection methods for model development. 2D-QSAR modeling using GA- or SA-based partial least squares (GA-PLS and SA-PLS) methods identified some important topological and electrostatic descriptors as important factors for anti-tubercular activity. Kohonen networks and counter-propagation artificial neural networks (CP-ANN) with GA- and SA-based feature selection have also been applied to the QSAR modeling of quinoxaline compounds. Out of a variable pool of 380 molecular descriptors, predictive QSAR models were developed on the training set and validated on the test set compounds, and the relative effectiveness of linear and non-linear approaches was investigated. Further analysis using the 3D-QSAR technique identifies two models, obtained by the GA-PLS and SA-PLS methods, for anti-tubercular activity prediction. The influences of the steric and electrostatic field effects shown by the contribution plots are discussed. The results indicate that SA is a very effective variable selection approach for such 3D-QSAR modeling.
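
    Simulated-annealing variable selection of this kind follows a simple accept/reject loop over descriptor subsets. The sketch below scores subsets by cross-validated R² of a linear model on a synthetic descriptor pool; the data, move operator, and cooling schedule are illustrative assumptions, not the paper's PLS setup.

```python
# Sketch: simulated-annealing feature selection for a regression model,
# scoring subsets by cross-validated R^2. Synthetic stand-in for QSAR data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n, p, p_true = 80, 50, 5
X = rng.normal(size=(n, p))
y = X[:, :p_true] @ rng.normal(size=p_true) + 0.1 * rng.normal(size=n)

def score(mask):
    if mask.sum() == 0:
        return -np.inf
    return cross_val_score(LinearRegression(), X[:, mask], y, cv=5).mean()

mask = rng.random(p) < 0.2            # random initial descriptor subset
cur, T = score(mask), 1.0
for step in range(300):
    cand = mask.copy()
    cand[rng.integers(p)] ^= True     # flip one descriptor in or out
    s = score(cand)
    # accept improvements always, worse subsets with Boltzmann probability
    if s > cur or rng.random() < np.exp((s - cur) / T):
        mask, cur = cand, s
    T *= 0.99                         # geometric cooling schedule

print("selected descriptors:", np.flatnonzero(mask))
print("cross-validated R^2:", round(cur, 3))
```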

  17. A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials

    Energy Technology Data Exchange (ETDEWEB)

    Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A

    2008-12-04

    We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
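
    The time-integration strategy can be illustrated on a much smaller problem. Below, SciPy's BDF integrator (which, like the algorithm described, combines a BDF discretization with Newton iterations on the nonlinear system at each step) advances a semidiscretized 1-D Allen-Cahn equation, used here as a minimal stand-in for the full polycrystalline phase-field system; all parameters are illustrative.

```python
# Sketch: implicit BDF integration of a semidiscretized phase-field
# (Allen-Cahn) equation as a toy analogue of the PFM system described above.
import numpy as np
from scipy.integrate import solve_ivp

n, L, eps = 200, 1.0, 0.01
dx = L / n
x = np.linspace(0, L, n)
phi0 = np.tanh((x - 0.5) / np.sqrt(2 * eps))      # initial interface profile

def rhs(t, phi):
    lap = np.empty_like(phi)
    lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
    lap[0] = lap[1]; lap[-1] = lap[-2]            # crude no-flux boundaries
    return eps * lap + phi - phi**3               # Allen-Cahn right-hand side

sol = solve_ivp(rhs, (0.0, 1.0), phi0, method="BDF", rtol=1e-6, atol=1e-8)
print("time steps taken:", sol.t.size)
print("interface position ~", x[np.argmin(np.abs(sol.y[:, -1]))])
```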

  18. A hybrid model for bankruptcy prediction using genetic algorithm, fuzzy c-means and mars

    CERN Document Server

    Martin, A; Saranya, G; Gayathri, P; Venkatesan, Prasanna

    2011-01-01

    Bankruptcy prediction is very important for all organizations, since bankruptcy affects the economy and raises many social problems with high costs. A large number of techniques have been developed to predict bankruptcy, which help decision makers such as investors and financial analysts. One of the bankruptcy prediction models is the hybrid model using fuzzy c-means clustering and MARS, which uses static ratios taken from the bank financial statements for prediction and has its own theoretical advantages. The performance of the existing bankruptcy model can be improved by selecting the best features dynamically, depending on the nature of the firm. This dynamic selection can be accomplished by a genetic algorithm, and it improves the performance of the prediction model.

  19. Supplier selection based on a neural network model using genetic algorithm.

    Science.gov (United States)

    Golmohammadi, Davood; Creese, Robert C; Valian, Haleh; Kolassa, John

    2009-09-01

    In this paper, a decision-making model was developed to select suppliers using neural networks (NNs). This model used historical supplier performance data for the selection of suppliers. Inputs and outputs were designed in a unique manner for training purposes. The managers' judgments about suppliers were simulated by using a pairwise comparison matrix for output estimation in the NN. To obtain the benefit of a search technique for the model structure and training, a genetic algorithm (GA) was applied to select the initial weights and architecture of the network. The suppliers' database information (input) can be updated over time to change the suppliers' score estimation based on their performance. The case study presented shows how the model can be applied to supplier selection.

  20. A HYBRID MODEL FOR BANKRUPTCY PREDICTION USING GENETIC ALGORITHM, FUZZY C-MEANS AND MARS

    Directory of Open Access Journals (Sweden)

    A.Martin

    2011-05-01

    Full Text Available Bankruptcy prediction is very important for all organizations, since bankruptcy affects the economy and raises many social problems with high costs. A large number of techniques have been developed to predict bankruptcy, which help decision makers such as investors and financial analysts. One of the bankruptcy prediction models is the hybrid model using fuzzy c-means clustering and MARS, which uses static ratios taken from the bank financial statements for prediction and has its own theoretical advantages. The performance of the existing bankruptcy model can be improved by selecting the best features dynamically, depending on the nature of the firm. This dynamic selection can be accomplished by a genetic algorithm, and it improves the performance of the prediction model.

  1. Modeling of Energy Demand in the Greenhouse Using PSO-GA Hybrid Algorithms

    Directory of Open Access Journals (Sweden)

    Jiaoliao Chen

    2015-01-01

    Full Text Available Modeling of energy demand in an agricultural greenhouse is very important for maintaining an optimum inside environment for plant growth while decreasing energy consumption. This paper deals with the identification of parameters for a physical model of energy demand in the greenhouse using a hybrid particle swarm optimization and genetic algorithm technique (HPSO-GA). HPSO-GA is developed to estimate the indistinct internal parameters of the greenhouse energy model, which is built on a thermal balance. Experiments were conducted to measure environment and energy parameters in a cooled greenhouse with a surface water source heat pump system, located in mid-east China. System identification experiments estimate model parameters such as inertias and heat transfer constants using HPSO-GA. The performance of HPSO-GA on the parameter estimation is better than that of GA and PSO alone. The algorithm improves estimation accuracy while speeding up convergence, and it avoids premature convergence. The system identification results prove that HPSO-GA is reliable for solving parameter estimation problems when modeling energy demand in the greenhouse.
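
    The hybrid idea, PSO exploration with GA-style crossover and mutation applied to part of the swarm, can be sketched on a toy energy-balance model with two unknown constants. Everything below (the model, the "measurements", and the hybrid settings) is an illustrative assumption, not the authors' greenhouse model.

```python
# Sketch: hybrid PSO-GA parameter search fitting a thermal inertia C and a
# heat-loss coefficient k of a toy energy-balance model to noisy data.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 100)
true = np.array([2.0, 0.3])                       # [inertia C, loss coeff k]

def simulate(p):
    C, k = p
    T, T_out, Q = 20.0, 5.0, 1.0                  # indoor, outdoor, heating
    out = []
    for _ in t:
        T += (Q - k * (T - T_out)) / C * 0.1      # Euler step of C dT/dt
        out.append(T)
    return np.array(out)

data = simulate(true) + rng.normal(0, 0.05, t.size)

def cost(p):
    return np.mean((simulate(p) - data) ** 2)

n = 30
pos = rng.uniform(0.1, 5.0, (n, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pcost = np.array([cost(p) for p in pos])
for it in range(100):
    g = pbest[pcost.argmin()]                     # global best particle
    vel = 0.7 * vel + 1.5 * rng.random((n, 2)) * (pbest - pos) \
                    + 1.5 * rng.random((n, 2)) * (g - pos)
    pos = np.clip(pos + vel, 0.01, 10.0)
    c = np.array([cost(p) for p in pos])
    improved = c < pcost
    pbest[improved], pcost[improved] = pos[improved], c[improved]
    worst = np.argsort(pcost)[-5:]                # GA step on stragglers:
    for i in worst:                               # crossover of two pbests
        a, b = pbest[rng.integers(n, size=2)]     # plus Gaussian mutation
        pos[i] = np.where(rng.random(2) < 0.5, a, b) + rng.normal(0, 0.05, 2)

print("estimated [C, k]:", pbest[pcost.argmin()].round(3), "true:", true)
```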

  2. Solving inverse problem for Markov chain model of customer lifetime value using flower pollination algorithm

    Science.gov (United States)

    Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji

    2015-12-01

    Customer lifetime value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditure for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV under a customer retention/migration classification scheme. A fairly new class of these models, described in this paper, uses Markov chain models (MCM). This class of models has the major advantage of being flexible enough to be modified to several different cases/classification schemes. In this model, the probabilities of customer retention and acquisition play an important role. Following Pfeifer and Carraway (2000), the final CLV formula obtained from an MCM usually contains a nonlinear form of the transition probability matrix. This nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding approximate transition probabilities for the customers, by applying the Flower Pollination Algorithm, a metaheuristic optimization algorithm developed by Yang (2013). The main purpose of obtaining the transition probabilities is to set goals for marketing teams for the relative frequencies of customer acquisition and customer retention.
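
    A minimal version of this inverse problem is sketched below: a simplified flower pollination algorithm recovers the retention probability of a two-state customer chain from a target CLV, where CLV is computed as the discounted expected margin over a finite horizon. Gaussian steps stand in for the Lévy flights of the full algorithm, and the margin, discount rate, and horizon are invented.

```python
# Sketch: simplified flower pollination algorithm (FPA) for the CLV inverse
# problem on a two-state (active/lost) customer Markov chain.
import numpy as np

rng = np.random.default_rng(6)
m, d, T = 100.0, 0.1, 30                          # margin, discount, horizon

def clv(p_ret):
    state = np.array([1.0, 0.0])                  # [active, lost]
    P = np.array([[p_ret, 1 - p_ret], [0.0, 1.0]])
    total = 0.0
    for k in range(T):
        total += state[0] * m / (1 + d) ** k      # discounted expected margin
        state = state @ P
    return total

target = clv(0.8)                                 # pretend this is observed

def err(p):
    return (clv(p) - target) ** 2

pop = rng.uniform(0.01, 0.99, 20)
for it in range(200):
    best = pop[np.argmin([err(p) for p in pop])]
    for i in range(pop.size):
        if rng.random() < 0.8:                    # "global pollination"
            cand = pop[i] + rng.normal(0, 0.1) * (best - pop[i])
        else:                                     # "local pollination"
            j, k = rng.integers(pop.size, size=2)
            cand = pop[i] + rng.random() * (pop[j] - pop[k])
        cand = np.clip(cand, 0.01, 0.99)
        if err(cand) < err(pop[i]):               # greedy replacement
            pop[i] = cand

print("recovered retention probability:",
      round(float(pop[np.argmin([err(p) for p in pop])]), 4))
```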

  3. Choosing processor array configuration by performance modeling for a highly parallel linear algebra algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Littlefield, R.J.; Maschhoff, K.J.

    1991-04-01

    Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape of array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error approximately 2%) over a wide range of matrix sizes (10 × 10 through 200 × 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab.
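
    A model-based decision procedure of this kind reduces to evaluating a predicted-time function over the feasible grid shapes and taking the minimum. The sketch below uses a generic compute-plus-communication cost model; the machine constants and the cost terms are invented placeholders for the measured parameters of the original study.

```python
# Sketch: choosing a P x Q processor-grid shape by minimizing a simple
# performance model T(P, Q). Machine constants are illustrative only.
import numpy as np

n = 200                  # matrix order
nprocs = 512             # processors available
t_flop, t_start, t_word = 1e-7, 1e-4, 1e-6   # assumed machine constants

def predicted_time(P, Q):
    compute = 2.0 * n**3 * t_flop / (P * Q)       # ideal speedup term
    rows, cols = n / P, n / Q                     # local block shape
    comm = (P + Q) * t_start + n * (rows + cols) * t_word
    return compute + comm

shapes = [(P, nprocs // P) for P in range(1, nprocs + 1) if nprocs % P == 0]
best = min(shapes, key=lambda pq: predicted_time(*pq))
print("best grid shape for n=%d on %d processors: %s" % (n, nprocs, best))
```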

  4. A Novel Algorithmic Cost Estimation Model Based on Soft Computing Technique

    Directory of Open Access Journals (Sweden)

    Iman Attarzadeh

    2010-01-01

    Full Text Available Problem statement: Software development effort estimation is the process of predicting the most realistic effort required to develop software, based on some parameters. It has been one of the biggest challenges in computer science for decades, because time and cost estimates at the early stages of software development are the most difficult to obtain and are often the least accurate. Traditional algorithmic techniques such as regression models, Software Life Cycle Management (SLIM), the COCOMO II model, and function points require a lengthy estimation process, which nowadays is not acceptable for software developers and companies. Newer soft computing approaches to effort estimation based on non-algorithmic techniques such as fuzzy logic (FL) may offer an alternative for solving the problem. This work aims to propose a new, realistic fuzzy logic model to achieve more accuracy in software effort estimation. The main objective of this research was to investigate the role of the fuzzy logic technique in improving effort estimation accuracy by characterizing input parameters using two-sided Gaussian functions, which give a superior transition from one interval to another. Approach: The methodology adopted in this study was the use of a fuzzy logic approach rather than classical intervals in COCOMO II. Using the advantages of fuzzy logic, such as fuzzy sets, input parameters can be specified by distributions of their possible values, and these fuzzy sets are represented by membership functions. In this study, to get a smoother transition in the membership functions for the input parameters, their associated linguistic values were represented by two-sided Gaussian membership functions (2-D GMF) and rules. Results: After analyzing the results attained by applying COCOMO II and the proposed fuzzy logic model to the NASA dataset and an artificially created dataset, it was found that the proposed model was performing
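
    The two-sided Gaussian membership function mentioned above has Gaussian shoulders with independent centers and spreads and a flat top between them. A minimal sketch, mirroring the common gauss2mf definition; the parameter values are illustrative:

```python
# Sketch: a two-sided Gaussian membership function of the kind used to
# fuzzify COCOMO II input parameters. Parameter values are illustrative.
import numpy as np

def gauss2mf(x, sigma1, c1, sigma2, c2):
    """Left Gaussian shoulder (sigma1, c1), right shoulder (sigma2, c2),
    membership equal to 1 on the flat core between c1 and c2."""
    y = np.ones_like(x, dtype=float)
    left = x < c1
    right = x > c2
    y[left] = np.exp(-0.5 * ((x[left] - c1) / sigma1) ** 2)
    y[right] = np.exp(-0.5 * ((x[right] - c2) / sigma2) ** 2)
    return y

x = np.linspace(0.5, 1.5, 11)          # e.g., a normalized effort multiplier
print(gauss2mf(x, 0.1, 0.9, 0.15, 1.1).round(3))
```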

  5. Model of stacked long Josephson junctions: Parallel algorithm and numerical results in case of weak coupling

    Science.gov (United States)

    Zemlyanaya, E. V.; Bashashin, M. V.; Rahmonov, I. R.; Shukrinov, Yu. M.; Atanasova, P. Kh.; Volokhova, A. V.

    2016-10-01

    We consider a model of a system of long Josephson junctions (LJJ) with inductive and capacitive coupling. The corresponding system of nonlinear partial differential equations is solved by means of a standard three-point finite-difference approximation in the spatial coordinate, with the Runge-Kutta method used for the solution of the resulting Cauchy problem. A parallel algorithm is developed and implemented on the basis of the MPI (Message Passing Interface) technology. The effect of the coupling between the junctions on the properties of the LJJ system is demonstrated. Numerical results are discussed from the viewpoint of the effectiveness of the parallel implementation.

  6. New Algorithm Model for Processing Generalized Dynamic Nonlinear Data Derived from Deformation Monitoring Network

    Institute of Scientific and Technical Information of China (English)

    LIN Xiangguo; LIANG Yong

    2005-01-01

    The processing of nonlinear data has been one of the hot topics in the surveying and mapping field in recent years. As a result, many linear and nonlinear methods have been developed. But methods for processing generalized nonlinear surveying and mapping data, especially for different data types and for unknown parameters that may be random or nonrandom, have seldom been considered. A new algorithm model is presented in this paper for processing nonlinear dynamic multiple-period and multiple-accuracy data derived from a deformation monitoring network.

  7. Development and validation of an algorithm to identify planned readmissions from claims data

    Science.gov (United States)

    Horwitz, Leora I.; Grady, Jacqueline N.; Cohen, Dorothy; Lin, Zhenqiu; Volpe, Mark; Ngo, Chi; Masica, Andrew L.; Long, Theodore; Wang, Jessica; Keenan, Megan; Montague, Julia; Suter, Lisa G.; Ross, Joseph S.; Drye, Elizabeth E.; Krumholz, Harlan M.; Bernheim, Susannah M.

    2017-01-01

    Background It is desirable not to include planned readmissions in readmission measures because they represent deliberate, scheduled care. Objectives To develop an algorithm to identify planned readmissions, describe its performance characteristics and identify improvements. Design Consensus-driven algorithm development and chart review validation study at 7 acute care hospitals in 2 health systems. Patients For development, all discharges qualifying for the publicly-reported hospital-wide readmission measure. For validation, all qualifying same-hospital readmissions that were characterized by the algorithm as planned, and a random sampling of same-hospital readmissions that were characterized as unplanned. Measurements We calculated weighted sensitivity and specificity, and positive and negative predictive values of the algorithm (version 2.1), compared to gold standard chart review. Results In consultation with 27 experts, we developed an algorithm that characterizes 7.8% of readmissions as planned. For validation we reviewed 634 readmissions. The weighted sensitivity of the algorithm was 45.1% overall; 50.9% in large teaching centers and 40.2% in smaller community hospitals. The weighted specificity was 95.9%, positive predictive value was 51.6% and negative predictive value was 94.7%. We identified 4 minor changes to improve algorithm performance. The revised algorithm had a weighted sensitivity 49.8% (57.1% at large hospitals), weighted specificity 96.5%, positive predictive value 58.7%, and negative predictive value 94.5%. Positive predictive value was poor for the two most common potentially planned procedures: diagnostic cardiac catheterization (25%) and procedures involving cardiac devices (33%). Conclusions An administrative claims-based algorithm to identify planned readmissions is feasible and can facilitate public reporting of primarily unplanned readmissions. PMID:26149225

  8. An Algorithm to Identify the Development of Lymphedema After Breast Cancer Treatment

    Science.gov (United States)

    Yen, Tina W.F.; Laud, Purushuttom W.; Sparapani, Rodney A.; Li, Jianing; Nattinger, Ann B.

    2014-01-01

    Purpose Large, population-based studies are needed to better understand lymphedema, a major source of morbidity among breast cancer survivors. One challenge is identifying lymphedema in a consistent fashion. We sought to develop and validate an algorithm using Medicare claims to identify lymphedema after breast cancer surgery. Methods From a population-based cohort of 2,597 elderly (65+) women who underwent incident breast cancer surgery in 2003 and completed annual telephone surveys through 2008, two algorithms were developed using Medicare claims from half of the cohort and validated in the remaining half. A lymphedema-positive case was defined by patient report. Results A simple two ICD-9 code algorithm had 69% sensitivity, 96% specificity, positive predictive value >75% if prevalence of lymphedema is >16%, negative predictive value >90%, and area under receiver operating characteristic curve (AUC) of 0.82 (95% CI: 0.80 – 0.85). A more sophisticated, multi-step algorithm utilizing diagnostic and treatment codes, logistic regression methods, and a reclassification step performed similarly to the two-code algorithm. Conclusions Given the similar performance of the two validated algorithms, the ease of implementing the simple algorithm and the fact that the simple algorithm does not include treatment codes, we recommend that this two-code algorithm be validated in and applied to other population-based breast cancer cohorts. Implications for Cancer Survivors This validated lymphedema algorithm will facilitate the conduct of large, population-based studies in key areas (incidence rates, risk factors, prevention measures, treatment and cost/economic analyses) that are critical to advancing our understanding and management of this challenging and debilitating chronic disease. PMID:25187004

  9. A Convex Optimization Model and Algorithm for Retinex

    Directory of Open Access Journals (Sweden)

    Qing-Nan Zhao

    2017-01-01

    Full Text Available Retinex is a theory simulating and explaining how the human visual system perceives colors under different illumination conditions. The main contribution of this paper is to put forward a new convex optimization model for Retinex. Different from existing methods, the main idea is to rewrite the multiplicative form such that the illumination variable and the reflection variable are decoupled in the spatial domain. The resulting objective function involves three terms: the Tikhonov regularization of the illumination component, the total variation regularization of the reciprocal of the reflection component, and the data-fitting term among the input image, the illumination component, and the reciprocal of the reflection component. We develop an alternating direction method of multipliers (ADMM) to solve the convex optimization model. Numerical experiments demonstrate the advantages of the proposed model, which can decompose an image into its illumination and reflection components.

  10. Split Bregman Iteration Algorithm for Image Deblurring Using Fourth-Order Total Bounded Variation Regularization Model

    Directory of Open Access Journals (Sweden)

    Yi Xu

    2013-01-01

    Full Text Available We propose a fourth-order total bounded variation regularization model which can reduce undesirable effects effectively. Based on this model, we introduce an improved split Bregman iteration algorithm to obtain the optimal solution. The convergence property of our algorithm is provided. Numerical experiments show the superior visual quality of the proposed model compared with the second-order total bounded variation model proposed by Liu and Huang (2010).

  11. Sensitivity-based finite element model updating using constrained optimization with a trust region algorithm

    Science.gov (United States)

    Bakir, Pelin Gundes; Reynders, Edwin; De Roeck, Guido

    2007-08-01

    The use of changes in dynamic system characteristics to detect damage has received considerable attention in recent years. Within this context, the FE model updating technique, which belongs to the class of inverse problems in classical mechanics, is used to detect, locate, and quantify damage. In this study, a sensitivity-based finite element (FE) model updating scheme using a trust region algorithm is developed and implemented on a complex structure. A damage scenario is applied to the structure in which the stiffness values of the beam elements close to the beam-column joints are decreased by stiffness reduction factors. A worst-case, complex damage pattern is assumed, such that the stiffnesses of adjacent elements are decreased by substantially different stiffness reduction factors. The objective of the model updating is to minimize the eigenfrequency and eigenmode residuals. The updating parameters of the structure are the stiffness reduction factors, and the changes in these parameters are determined iteratively by solving a nonlinear constrained optimization problem. The FE model updating algorithm is also tested in the presence of two levels of noise in the simulated measurements. In all three cases, the updated MAC values are above 99% and the relative eigenfrequency differences improve substantially after model updating. In the cases without noise and with moderate noise, detection, localization, and quantification of damage are successfully accomplished. In the case with substantially noisy measurements, detection and localization of damage are successfully realized. Damage quantification is also promising in the presence of high noise, as the algorithm can still predict 18 out of 24 damage parameters relatively accurately in that case.
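
    The core idea, adjusting stiffness reduction factors until predicted modal quantities match measured ones under bound constraints, maps directly onto a bounded trust-region least-squares solver. The sketch below updates a two-DOF spring-mass toy model against eigenfrequency residuals; the structure, targets, and bounds are invented for illustration.

```python
# Sketch: sensitivity-based model updating with a trust-region solver.
# A two-DOF spring-mass model is updated by finding stiffness reduction
# factors that reproduce "measured" eigenfrequencies.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import least_squares

m = np.diag([1.0, 1.0])                     # mass matrix (kg)
k1, k2 = 1000.0, 800.0                      # nominal stiffnesses (N/m)

def frequencies(theta):
    a1, a2 = theta                          # stiffness reduction factors
    K = np.array([[a1*k1 + a2*k2, -a2*k2],
                  [-a2*k2,         a2*k2]])
    lam = eigh(K, m, eigvals_only=True)     # generalized eigenproblem
    return np.sqrt(lam) / (2 * np.pi)       # natural frequencies (Hz)

target = frequencies([0.7, 0.9])            # "damaged" reference state

def residual(theta):
    return frequencies(theta) - target      # eigenfrequency residuals

res = least_squares(residual, x0=[1.0, 1.0], bounds=(0.05, 1.0),
                    method="trf")           # trust-region reflective
print("identified reduction factors:", res.x.round(3))   # ~ [0.7, 0.9]
```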

  12. Development of MODIS data-based algorithm for retrieving sea surface temperature in coastal waters.

    Science.gov (United States)

    Wang, Jiao; Deng, Zhiqiang

    2017-06-01

    A new algorithm was developed for retrieving sea surface temperature (SST) in coastal waters using satellite remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua platform. The new SST algorithm was trained using the artificial neural network (ANN) method and tested using 8 years of remote sensing data from the MODIS Aqua sensor and in situ sensing data from US coastal waters in Louisiana, Texas, Florida, California, and New Jersey. The ANN algorithm can be utilized to map SST in both deep offshore and, particularly, shallow nearshore waters at a high spatial resolution of 1 km, greatly expanding the coverage of remote sensing-based SST data from offshore to nearshore waters. Applications of the ANN algorithm require only the remotely sensed values from the two MODIS Aqua thermal bands 31 and 32 as input data. Application results indicated that the ANN algorithm was able to explain 82-90% of the variation in observed SST in US coastal waters. While the algorithm is generally applicable to the retrieval of SST, it works best for nearshore waters, where important coastal resources are located and existing algorithms either are not applicable or do not work well, making the new ANN-based SST algorithm unique and particularly useful for coastal resource management.
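
    A two-input ANN regression of this shape is easy to sketch with scikit-learn. The band values and their relation to SST below are synthetic stand-ins; real work would use matched MODIS band 31/32 observations and in situ SST pairs.

```python
# Sketch: ANN regression from two thermal-band observations to SST,
# mirroring the two-input structure described above. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000
b31 = rng.uniform(280, 300, n)               # band-31 values (arbitrary units)
b32 = b31 - rng.uniform(0.2, 1.5, n)         # band-32, slightly lower
sst = 1.02 * b31 + 1.8 * (b31 - b32) - 285.0 + rng.normal(0, 0.2, n)

X = np.column_stack([b31, b32])
X_tr, X_te, y_tr, y_te = train_test_split(X, sst, random_state=0)

model = make_pipeline(
    StandardScaler(),                         # scale inputs for the MLP
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```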

  13. Discrete channel modelling based on genetic algorithm and simulated annealing for training hidden Markov model

    Institute of Scientific and Technical Information of China (English)

    Zhao Zhi-Jin; Zheng Shi-Lian; Xu Chun-Yun; Kong Xian-Zheng

    2007-01-01

    Hidden Markov models (HMMs) have been used to model the burst error sources of wireless channels. This paper proposes a hybrid method using a genetic algorithm (GA) and simulated annealing (SA) to train HMMs for discrete channel modelling. The proposed method is compared with a pure GA, and experimental results show that the HMMs trained by the hybrid method describe the error sequences better, owing to SA's ability to facilitate hill-climbing at the later stage of the search. The burst error statistics of the HMMs trained by the proposed method and of the corresponding error sequences are also presented to validate the proposed method.

  14. Improving Computational Efficiency of Model Predictive Control Genetic Algorithms for Real-Time Decision Support

    Science.gov (United States)

    Minsker, B. S.; Zimmer, A. L.; Ostfeld, A.; Schmidt, A.

    2014-12-01

    Enabling real-time decision support, particularly under conditions of uncertainty, requires computationally efficient algorithms that can rapidly generate recommendations. In this paper, a suite of model predictive control (MPC) genetic algorithms is developed and tested offline to explore their value for reducing CSOs during real-time use in a deep-tunnel sewer system. The MPC approaches include the micro-GA, the probability-based compact GA, and domain-specific GA methods that reduce the number of decision variable values analyzed within the sewer hydraulic model, thus reducing the algorithm search space. Minimum fitness and constraint values achieved by all GA approaches, as well as the computational times required to reach those minimum values, are compared against runs with large population sizes and long convergence times. Optimization results for a subset of the Chicago combined sewer system indicate that genetic algorithm variations with coarse decision variable representation, eventually transitioning to the entire range of decision variable values, are most efficient at addressing the CSO control problem. Although diversity-enhancing micro-GAs evaluate a larger search space and exhibit shorter convergence times, these representations do not reach the minimum fitness and constraint values. The domain-specific GAs prove to be the most efficient and are used to test CSO sensitivity to energy costs, CSO penalties, and pressurization constraint values. The results show that CSO volumes are highly dependent on the tunnel pressurization constraint, with reductions of 13% to 77% possible with less conservative operational strategies. Because current management practices may not account for varying costs at CSO locations and electricity rate changes in the summer and winter, the sensitivity of the results is evaluated for variable seasonal and diurnal CSO penalty costs and electricity-related system maintenance costs, as well as different sluice gate constraint levels. These findings indicate

  15. A multiple time stepping algorithm for efficient multiscale modeling of platelets flowing in blood plasma

    Science.gov (United States)

    Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny

    2015-03-01

    We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm considerably improves computational efficiency without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between DPD and CGMD, which require time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle the 3-4 orders of magnitude disparity in the temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time step sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at an intermediate time step size, and the nonbonded and bonded potentials of the platelet structural system at the two smallest time step sizes. Additionally, we introduce parameters to study the relationship between accuracy and computational complexity. The numerical experiments demonstrated a 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving a particle-based system at multiple scales and performing efficient multiscale simulations.
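
    The four-level stepping structure can be sketched as a nested impulse (r-RESPA-style) integrator: each level applies half-kicks from its force term around the substeps of the next-faster level. The toy spring forces and the substep ratios below are illustrative, not the DPD/CGMD force fields of the paper.

```python
# Sketch: a four-level multiple time-stepping (MTS) loop, with the slowest
# force at the largest step and progressively faster forces at substeps.
import numpy as np

dt_fluid = 1e-2                    # largest step ("fluid system")
n_iface, n_nonb, n_bond = 4, 2, 2  # substeps per enclosing level

x = np.array([1.0, 0.0])           # toy positions
v = np.zeros(2)

def f_fluid(x): return -0.1 * x    # slowest force term
def f_iface(x): return -1.0 * x    # interface force term
def f_nonb(x):  return -5.0 * x    # nonbonded force term
def f_bond(x):  return -50.0 * x   # fastest (bonded) force term

for step in range(100):                   # outer loop at dt_fluid
    v += 0.5 * dt_fluid * f_fluid(x)
    dt1 = dt_fluid / n_iface
    for _ in range(n_iface):              # interface level
        v += 0.5 * dt1 * f_iface(x)
        dt2 = dt1 / n_nonb
        for _ in range(n_nonb):           # nonbonded level
            v += 0.5 * dt2 * f_nonb(x)
            dt3 = dt2 / n_bond
            for _ in range(n_bond):       # bonded level: kick-drift-kick
                v += 0.5 * dt3 * f_bond(x)
                x += dt3 * v
                v += 0.5 * dt3 * f_bond(x)
            v += 0.5 * dt2 * f_nonb(x)
        v += 0.5 * dt1 * f_iface(x)
    v += 0.5 * dt_fluid * f_fluid(x)

# total stiffness 0.1 + 1 + 5 + 50 = 56.1 for the energy-like diagnostic
print("energy-like diagnostic:", 0.5 * v @ v + 0.5 * 56.1 * x @ x)
```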

  16. A spatially constrained generative model and an EM algorithm for image segmentation.

    Science.gov (United States)

    Diplaros, Aristeidis; Vlassis, Nikos; Gevers, Theo

    2007-05-01

    In this paper, we present a novel spatially constrained generative model and an expectation-maximization (EM) algorithm for model-based image segmentation. The generative model assumes that the unobserved class labels of neighboring pixels in the image are generated by prior distributions with similar parameters, where similarity is defined by entropic quantities relating the neighboring priors. In order to estimate model parameters from observations, we derive a spatially constrained EM algorithm that iteratively maximizes a lower bound on the data log-likelihood, where the penalty term is data-dependent. Our algorithm is very easy to implement and is similar to the standard EM algorithm for Gaussian mixtures, with the main difference being that the label posteriors are "smoothed" over pixels between each E- and M-step by a standard image filter. Experiments on synthetic and real images show that our algorithm achieves competitive segmentation results compared to other Markov-based methods, and is in general faster.
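
    Because the method amounts to EM for a Gaussian mixture plus a smoothing step on the responsibilities, it can be sketched directly. Below, a two-component mixture segments a toy image, with each responsibility map smoothed by a Gaussian filter and renormalized between the E- and M-steps; the image and filter width are illustrative assumptions.

```python
# Sketch: EM for a two-component Gaussian mixture on a toy image, with the
# posterior (responsibility) maps smoothed between the E- and M-steps.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(8)
img = rng.normal(0, 0.3, (64, 64))
img[16:48, 16:48] += 1.0                      # bright square on dark background

mu = np.array([0.0, 1.0]); var = np.array([0.2, 0.2]); pi = np.array([0.5, 0.5])
for it in range(30):
    # E-step: per-pixel responsibilities under each Gaussian component
    lik = np.stack([pi[k] / np.sqrt(2*np.pi*var[k]) *
                    np.exp(-(img - mu[k])**2 / (2*var[k])) for k in range(2)])
    post = lik / lik.sum(axis=0)
    # spatial constraint: smooth each responsibility map, then renormalize
    post = np.stack([gaussian_filter(post[k], sigma=1.5) for k in range(2)])
    post /= post.sum(axis=0)
    # M-step: update mixture parameters from the smoothed responsibilities
    Nk = post.sum(axis=(1, 2))
    mu = (post * img).sum(axis=(1, 2)) / Nk
    var = (post * (img - mu[:, None, None])**2).sum(axis=(1, 2)) / Nk
    pi = Nk / img.size

labels = post.argmax(axis=0)
print("segmented foreground fraction:", labels.mean().round(3))
```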

  17. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    Science.gov (United States)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  18. Robust Optimization Model and Algorithm for Railway Freight Center Location Problem in Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Xing-cai Liu

    2014-01-01

    Full Text Available The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Because the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed. The robust optimization model takes the expected cost and the deviation value across scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the cloud model, which improves the convergence rate. The design of the encoding and the steps of the algorithm are described. Results for an example demonstrate that the model and algorithm are effective. Compared with the expected value case, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable.

  19. Single-cluster algorithm for the site-bond-correlated Ising model

    Science.gov (United States)

    Campos, P. R. A.; Onody, R. N.

    1997-12-01

    We extend the Wolff algorithm to include correlated spin interactions in diluted magnetic systems. This algorithm is applied to study the site-bond-correlated Ising model on a two-dimensional square lattice. We use a finite-size scaling procedure to obtain the phase diagram in the temperature-concentration space. We have also verified that the autocorrelation time diminishes in the presence of dilution and correlation, showing that the Wolff algorithm performs even better in such situations.

  20. The Fuzzy Modeling Algorithm for Complex Systems Based on Stochastic Neural Network

    Institute of Scientific and Technical Information of China (English)

    李波; 张世英; 李银惠

    2002-01-01

    A fuzzy modeling method for complex systems is studied. The notion of a general stochastic neural network (GSNN) is presented, and a new modeling method is given based on the combination of the modified Takagi-Sugeno (MTS) fuzzy model and a first-order GSNN. Using the expectation-maximization (EM) algorithm, parameter estimation and model selection procedures are given. This avoids the shortcomings of other methods such as the BP algorithm: when the number of parameters is large, the BP algorithm is difficult to apply directly without fine tuning and subjective tinkering. Finally, a simulated example demonstrates the effectiveness of the method.