WorldWideScience

Sample records for modeling algorithm development

  1. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed ...

  2. Algorithm Development for the Two-Fluid Plasma Model

    National Research Council Canada - National Science Library

    Shumlak, Uri

    2002-01-01

    A preliminary algorithm based on the two-fluid plasma model is developed to investigate the possibility of simulating plasmas with a more physically accurate model than the MHD (magnetohydrodynamic) model...

  3. A Developed Artificial Bee Colony Algorithm Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Ye Jin

    2018-04-01

    The Artificial Bee Colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is a kind of uncertainty conversion model between a qualitative concept T̃ expressed in natural language and its quantitative expression, which integrates probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and avoid getting trapped in local optima by introducing a new selection mechanism, replacing the onlooker bees’ search formula and changing the scout bees’ updating formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud-model-based ABC variants.
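
    The record above does not spell out its modified search formulas, but the normal cloud generator at the heart of such hybrids is standard. The Python sketch below shows the forward normal cloud generator C(Ex, En, He) that cloud-model ABC variants typically use to scatter candidate solutions around a food source; the function name and the way it would be wired into ABC are assumptions, not the paper's code.

      import numpy as np

      def normal_cloud_drops(Ex, En, He, n, rng=None):
          # Forward normal cloud generator C(Ex, En, He): draw a randomized
          # spread En' ~ N(En, He^2), then a drop x ~ N(Ex, En'^2).
          rng = np.random.default_rng() if rng is None else rng
          En_prime = rng.normal(En, He, size=n)
          return rng.normal(Ex, np.abs(En_prime))  # abs() guards bad draws

      # Illustrative use: scatter 20 trial solutions around a food source
      # located at 1.2, with spread 0.1 and spread-uncertainty 0.01.
      trials = normal_cloud_drops(1.2, 0.1, 0.01, n=20)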

  4. Development and evaluation of thermal model reduction algorithms for spacecraft

    Science.gov (United States)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is of major concern here, which restricts the useful application of these methods. Additional model reduction methods have been developed which respect these constraints. The Matrix Reduction method approximates the differential equation to reference values exactly, except for numerical errors. The Summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.

  5. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm that captures the operation of the facility. An algorithmic model is a formalized description of a subject specialist's scenario for the simulated process, whose structure mirrors the causal and temporal relationships between events of the process being modeled, together with all information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are normally defined as loaded finite directed graphs whose vertices are mapped to operators and whose arcs are variables bound by the operators. The language of algorithmic networks is highly expressive in the class of algorithms it can represent. Existing systems for automated modeling based on algorithmic networks mainly use operators working with real numbers. Although this reduces their expressiveness, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. Many systems exist for computing network graphs; however, their monitoring is based on analysis of gaps and deadlines, with no predictive analysis of schedule execution. The library described here is designed to build such predictive models: specifying the source data yields a set of projections, from which one is chosen and adopted as the new plan.

  6. Toward Developing Genetic Algorithms to Aid in Critical Infrastructure Modeling

    Energy Technology Data Exchange (ETDEWEB)

    2007-05-01

    Today’s society relies upon an array of complex national and international infrastructure networks, such as transportation, telecommunications, finance and energy. Understanding these interdependencies is necessary in order to protect our critical infrastructure. The Critical Infrastructure Modeling System, CIMS©, examines the interrelationships between infrastructure networks. CIMS© development is sponsored by the National Security Division at the Idaho National Laboratory (INL) in its ongoing mission of providing critical infrastructure protection and preparedness. A genetic algorithm (GA) is an optimization technique based on Darwin’s theory of evolution. A GA can be coupled with CIMS© to search for optimum ways to protect infrastructure assets. This includes identifying the optimum assets to reinforce or protect, testing an addition or change to the infrastructure before implementation, or finding the optimum response to an emergency for response planning. This paper describes the addition of a GA to infrastructure modeling for infrastructure planning. It first introduces the CIMS© infrastructure modeling software used as the modeling engine to support the GA. Next, the GA techniques and parameters are defined. Then a test scenario illustrates the integration with CIMS© and the preliminary results.

  7. Development of modelling algorithm of technological systems by statistical tests

    Science.gov (United States)

    Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.

    2018-03-01

    The paper tackles the problem of the economic assessment of design efficiency for various technological systems at the stage of their operation. The modelling algorithm for a technological system, implemented using statistical tests and taking the reliability index into account, allows estimating the level of machinery technical excellence and assessing the efficiency of design reliability against performance. The economic feasibility of its application is determined from the service quality of the technological system, with further forecasting of the volumes and range of spare parts supply.

  8. Development and performance analysis of model-based fault detection and diagnosis algorithm

    International Nuclear Information System (INIS)

    Kim, Jung Taek; Park, Jae Chang; Lee, Jung Woon; Kim, Kyung Youn; Lee, In Soo; Kim, Bong Seok; Kang, Sook In

    2002-05-01

    It is important to note that an effective means to assure the reliability and security of a nuclear power plant is to detect and diagnose faults (failures) as soon and as accurately as possible. The objective of the project is to develop a model-based fault detection and diagnosis (FDD) algorithm for the pressurized water reactor and to evaluate the performance of the developed algorithm. The scope of the work can be classified into two categories. The first is a state-space model-based FDD algorithm built on the interacting multiple model (IMM) algorithm. The second is an input-output model-based FDD algorithm built on the ART neural network. Extensive computer simulations are carried out to evaluate the performance in terms of speed and accuracy.

  9. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS

    Science.gov (United States)

    2015-09-30

  10. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, increased mobility is leading to a higher amount of traffic-related activity on a global scale. ...

  11. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Jae, Moosung

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) from the mathematical calculation and that from the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systematic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.
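
    The validation step above compares a hand-calculated Fussell-Vesely importance with the algorithm's output. As a reference point, this Python sketch computes the standard Fussell-Vesely measure for a toy top-event function under the rare-event approximation; the function names and the example tree are illustrative, not the paper's plant model.

      def fussell_vesely(top_event_prob, q, i):
          # FV_i = (Q(q) - Q(q with q_i = 0)) / Q(q): the fraction of the
          # top-event probability contributed by cut sets containing event i.
          Q = top_event_prob(q)
          q0 = list(q)
          q0[i] = 0.0
          return (Q - top_event_prob(q0)) / Q

      # Toy example: top event = A OR (B AND C), rare-event approximation.
      Q = lambda q: q[0] + q[1] * q[2]
      print(fussell_vesely(Q, [1e-3, 1e-2, 1e-2], 0))  # ~0.909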

  12. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    OpenAIRE

    Keller Alevtina; Vinogradova Tatyana

    2017-01-01

    The article considers the issue of allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop the algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Implementation of terms of the...

  13. DEVELOPMENT OF A HYBRID FUZZY GENETIC ALGORITHM MODEL FOR SOLVING TRANSPORTATION SCHEDULING PROBLEM

    Directory of Open Access Journals (Sweden)

    H.C.W. Lau

    2015-12-01

    There has been increasing public demand for passenger rail service in recent times, leading to a strong focus on the need for effective and efficient use of resources and on managing increasing passenger requirements, service reliability and variability by railway management. Whilst shortening passengers’ waiting and travelling time is important for commuter satisfaction, lowering operational costs is equally important for railway management. Hence, effective and cost-optimised train scheduling based on dynamic passenger demand is one of the main issues for passenger railway management. Although the passenger railway scheduling problem has received attention in operations research in recent years, there is limited literature investigating practical approaches that capitalize on the merits of mathematical modeling and search algorithms for effective cost optimization. This paper develops a hybrid fuzzy-logic-based genetic algorithm model to solve the multi-objective passenger railway scheduling problem, aiming to optimize total operational costs at a satisfactory level of customer service. This hybrid approach integrates a genetic algorithm with fuzzy logic, using a fuzzy controller to determine the crossover rate and mutation rate in the optimization process. The numerical study demonstrates the improvement of the proposed hybrid approach: the fuzzy genetic algorithm generates better results than the standard genetic algorithm and other traditional heuristic approaches, such as simulated annealing.
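
    The abstract states that a fuzzy controller sets the crossover and mutation rates during the GA run but does not give its rule base. The Python sketch below therefore only illustrates the general shape of such a controller: a normalized population-diversity measure is fuzzified into low/high grades and defuzzified into the two rates. All membership functions and output ranges here are assumptions.

      import numpy as np

      def adapt_ga_rates(fitnesses):
          # Diversity proxy: coefficient of variation of population fitness,
          # clipped to [0, 1].
          f = np.asarray(fitnesses, dtype=float)
          d = min(f.std() / (abs(f.mean()) + 1e-12), 1.0)
          low, high = 1.0 - d, d               # grades for "diversity is low/high"
          crossover = 0.6 * low + 0.9 * high   # recombine more while diverse
          mutation = 0.10 * low + 0.01 * high  # mutate more once converged
          return crossover, mutation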

  14. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    The article considers the issue of allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop the algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Meeting the adequacy conditions of such an algorithm allows evaluating the appropriateness of investments in fixed assets and studying the final financial results of an industrial enterprise depending on management decisions in the depreciation policy. It should be noted that the model in question is always degenerate for the enterprise. This is caused by the presence of zero rows in the matrix of capital expenditures for those structural elements unable to generate fixed assets (part of the service units, households, corporate consumers). The paper presents the algorithm for the allocation of depreciation costs for the model. This algorithm was developed by the authors and served as the basis for the further development of a flowchart for subsequent software implementation. The construction of such an algorithm and its use for dynamic input-output models of industrial enterprises is motivated by the internationally accepted effectiveness of input-output models for national and regional economic systems. This is what allows us to consider that the solutions discussed in the article are of interest to economists of various industrial enterprises.

  15. Development of an Algorithm for Automatic Analysis of the Impedance Spectrum Based on a Measurement Model

    Science.gov (United States)

    Kobayashi, Kiyoshi; Suzuki, Tohru S.

    2018-03-01

    A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept and the complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampled data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial impedance spectra presented by a resistor, an inductor, a resistor connected in parallel to a capacitor, and a resistor connected in parallel to an inductor. The adequacy of the model is determined by a simple artificial-intelligence function applied to the output of the Levenberg-Marquardt module. By iterating model modifications, the program finds an adequate equivalent-circuit model without any user input.
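
    The abstract describes composing the circuit from four element types and refining parameters with a Levenberg-Marquardt module. This Python sketch reproduces only the parameter-optimization step for one assumed circuit (a resistor in series with a parallel RC element), stacking real and imaginary parts for the complex least-squares fit; the automatic model generation and the adequacy check are not shown.

      import numpy as np
      from scipy.optimize import least_squares

      def z_model(p, w):
          # Impedance of R0 in series with (R1 parallel C1).
          R0, R1, C1 = p
          return R0 + R1 / (1.0 + 1j * w * R1 * C1)

      def residuals(p, w, z_meas):
          # Stack real and imaginary residuals for the complex fit.
          r = z_model(p, w) - z_meas
          return np.concatenate([r.real, r.imag])

      w = 2 * np.pi * np.logspace(-1, 5, 60)        # angular frequencies
      z_meas = z_model([10.0, 100.0, 1e-6], w)      # synthetic "measurement"
      fit = least_squares(residuals, x0=[1.0, 10.0, 1e-7],
                          args=(w, z_meas), method="lm")
      print(fit.x)  # recovers approximately [10, 100, 1e-6]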

  16. Modelling Kara Sea phytoplankton primary production: Development and skill assessment of regional algorithms

    Science.gov (United States)

    Demidov, Andrey B.; Kopelevich, Oleg V.; Mosharov, Sergey A.; Sheberstov, Sergey V.; Vazyulya, Svetlana V.

    2017-07-01

    Empirical region-specific (RSM), depth-integrated (DIM) and depth-resolved (DRM) primary production models are developed based on data from the Kara Sea during the autumn (September-October 1993, 2007, 2011). The model is validated by using field and satellite (MODIS-Aqua) observations. Our findings suggest that RSM algorithms perform better than non-region-specific algorithms (NRSM) in terms of regression analysis, root-mean-square difference (RMSD) and model efficiency. In general, the RSM and NRSM underestimate or overestimate the in situ water column integrated primary production (IPP) by a factor of 2 and 2.8, respectively. Additionally, our results suggest that the model skill of the RSM increases when the chlorophyll specific carbon fixation rate, efficiency of photosynthesis and photosynthetically available radiation (PAR) are used as input variables. The parameterization of chlorophyll (chl a) vertical profiles is performed in Kara Sea waters with different trophic statuses. Model validation with field data suggests that the DIM and DRM algorithms perform equally (RMSD of 0.29 and 0.31, respectively). No changes in the performance of the DIM and DRM algorithms are observed (RMSD of 0.30 and 0.31, respectively) when satellite-derived chl a, PAR and the diffuse attenuation coefficient (Kd) are applied as input variables.

  17. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    Directory of Open Access Journals (Sweden)

    Christley Scott

    2010-08-01

    Background: Simulation of sophisticated biological models requires considerable computational power. These models typically integrate numerous biological phenomena such as spatially explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost of simulating those models. Results: We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU, including the memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive for avoiding common pitfalls as well as for extracting performance from the GPU architecture. Conclusions: We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a ...

  18. A sonification algorithm for developing the off-roads models for driving simulators

    Science.gov (United States)

    Chiroiu, Veturia; Brişan, Cornel; Dumitriu, Dan; Munteanu, Ligia

    2018-01-01

    In this paper, a sonification algorithm for developing off-road models for driving simulators is proposed. The aim of this algorithm is to overcome the difficulty of identifying the heuristics best suited to a particular off-road profile built from measurements. The sonification algorithm is based on stochastic polynomial chaos analysis, suitable for solving equations with random input data. The fluctuations are generated by incomplete measurements leading to inhomogeneities of the cross-sectional curves of off-roads before and after deformation, the unstable contact between the tire and the road, and the unrealistic distribution of contact and friction forces in the unknown contact domains. The approach is exercised on two particular problems and the results compare favorably to existing analytical and numerical solutions. The sonification technique represents a useful multiscale analysis able to build a low-cost virtual reality environment with increased degrees of realism for driving simulators and higher user flexibility.

  19. Cloud Model Bat Algorithm

    OpenAIRE

    Yongquan Zhou; Jian Xie; Liangliang Li; Mingzhi Ma

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformati...

  20. Development of a 3D modeling algorithm for tunnel deformation monitoring based on terrestrial laser scanning

    Directory of Open Access Journals (Sweden)

    Xiongyao Xie

    2017-03-01

    Deformation monitoring is vital for tunnel engineering. Traditional monitoring techniques measure only a few data points, which is insufficient to understand the deformation of the entire tunnel. Terrestrial Laser Scanning (TLS) is a newly developed technique that can collect thousands of data points in a few minutes, with promising applications to tunnel deformation monitoring. The raw point cloud collected from TLS cannot display tunnel deformation directly; therefore, a new 3D modeling algorithm was developed for this purpose. The 3D modeling algorithm includes modules for preprocessing the point cloud, extracting the tunnel axis, performing coordinate transformations, performing noise reduction and generating the 3D model. Measurement results from TLS were compared to the results of a total station and numerical simulation, confirming the reliability of TLS for tunnel deformation monitoring. Finally, a case study of the Shanghai West Changjiang Road tunnel is introduced, where TLS was applied to measure shield tunnel deformation over multiple sections. Settlement, segment dislocation and cross-section convergence were measured and visualized using the proposed 3D modeling algorithm.

  1. Development of Mathematical Models for Investigating Maximal Power Point Tracking Algorithms

    Directory of Open Access Journals (Sweden)

    Dominykas Vasarevičius

    2012-04-01

    Solar cells generate maximum power only when the load is optimized according to insolation and module temperature. This function is performed by MPPT systems. While developing MPPT, it is useful to create a mathematical model that allows the simulation of different weather conditions affecting solar modules. Solar insolation, cloud cover imitation and solar cell models have been created in the Matlab/Simulink environment. Comparing the simulation of solar insolation on a cloudy day with measurements made using a pyranometer shows that the model generates signal changes according to laws similar to those of a real-life signal. The model can generate solar insolation values in real time, which is useful for predicting the amount of electrical energy produced from solar power. The model can also operate from a stored signal, so that different MPPT algorithms can be compared. Article in Lithuanian.
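
    The insolation model above is meant for exercising MPPT algorithms, but the abstract does not name one. A common baseline for such comparisons is perturb-and-observe; here is a simplified stateless variant in Python, where measure_pv is a hypothetical interface to the simulated module.

      def perturb_and_observe(measure_pv, v_ref, dv=0.5):
          # One hill-climbing step: probe the current operating point and a
          # perturbed one, then move the voltage reference toward more power.
          _, p0 = measure_pv(v_ref)
          _, p1 = measure_pv(v_ref + dv)
          if p1 > p0:
              return v_ref + dv   # power rose: keep perturbing upward
          return v_ref - dv       # power fell: reverse direction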

  2. Cloud model bat algorithm.

    Science.gov (United States)

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on function optimization.

  3. Cloud Model Bat Algorithm

    Directory of Open Access Journals (Sweden)

    Yongquan Zhou

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: “bats approach their prey.” Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on function optimization.

  4. Development of Predictive QSAR Models of 4-Thiazolidinones Antitrypanosomal Activity using Modern Machine Learning Algorithms.

    Science.gov (United States)

    Kryshchyshyn, Anna; Devinyak, Oleg; Kaminskyy, Danylo; Grellier, Philippe; Lesyk, Roman

    2017-11-14

    This paper presents novel QSAR models for the prediction of antitrypanosomal activity among thiazolidines and related heterocycles. The performance of four machine learning algorithms (Random Forest regression, stochastic gradient boosting, multivariate adaptive regression splines and Gaussian processes regression) has been studied in order to reach better levels of predictivity. The results for Random Forest and Gaussian processes regression are comparable and outperform the other studied methods. The preliminary descriptor selection with the Boruta method improved the outcome of the machine learning methods. The two novel QSAR models developed with the Random Forest and Gaussian processes regression algorithms have good predictive ability, which was proved by external evaluation on the test set with corresponding Q²ext = 0.812 and Q²ext = 0.830. The obtained models can be used further for in silico screening of virtual libraries in the same chemical domain in order to find new antitrypanosomal agents. Thorough analysis of descriptor influence in the QSAR models and interpretation of their chemical meaning allow a number of structure-activity relationships to be highlighted. The presence of phenyl rings with electron-withdrawing atoms or groups in the para-position, an increased number of aromatic rings, high branching but short chains, high HOMO energy, and the introduction of a 1-substituted 2-indolyl fragment into the molecular structure have been recognized as prerequisites for trypanocidal activity. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
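
    For readers wanting to reproduce the external-validation statistic, the Python sketch below computes Q²ext for a random forest on synthetic stand-in data. The descriptor matrix and split are invented for illustration, scikit-learn's RandomForestRegressor stands in for the paper's exact pipeline, and the training-mean convention for Q²ext is one of several in use.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      def q2_ext(y_train, y_test, y_pred):
          # Q2_ext = 1 - sum((y - yhat)^2) / sum((y - mean(y_train))^2)
          press = np.sum((y_test - y_pred) ** 2)
          ss = np.sum((y_test - y_train.mean()) ** 2)
          return 1.0 - press / ss

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 10))                  # stand-in descriptors
      y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=120)
      Xtr, Xte, ytr, yte = X[:90], X[90:], y[:90], y[90:]
      model = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xtr, ytr)
      print(q2_ext(ytr, yte, model.predict(Xte)))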

  5. Algorithm development and verification of UASCM for multi-dimension and multi-group neutron kinetics model

    International Nuclear Information System (INIS)

    Si, S.

    2012-01-01

    The Universal Algorithm of Stiffness Confinement Method (UASCM) for neutron kinetics model of multi-dimensional and multi-group transport equations or diffusion equations has been developed. The numerical experiments based on transport theory code MGSNM and diffusion theory code MGNEM have demonstrated that the algorithm has sufficient accuracy and stability. (authors)

  6. Analysis and Development of Walking Algorithm Kinematic Model for 5-Degree of Freedom Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Gerald Wahyudi Setiono

    2012-12-01

    A walking diagram design and the calculations for a bipedal robot have been developed. The bipedal robot was designed and constructed with several kinds of servo brackets for the legs, two feet and a hip. Each leg of the bipedal robot had 5 degrees of freedom: three pitches (hip joint, knee joint and ankle joint) and two rolls (hip joint and ankle joint). The walking algorithm of this bipedal robot was based on the law of cosines, used to obtain the angle value at each joint. The hip height, the height of the swinging leg and the step distance are derived from linear equations. This paper discusses the kinematic model analysis and the development of the walking diagram of the bipedal robot. Kinematics equations were derived, the joint angles were simulated and coded into an Arduino board, and the code was executed on the robot.
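
    The joint-angle step of such a walking algorithm reduces to the law of cosines on the hip-knee-ankle triangle. A minimal Python sketch with illustrative link lengths:

      import math

      def knee_angle(L1, L2, d):
          # Interior knee angle (rad) for a planar two-link leg with thigh L1,
          # shank L2 and hip-to-ankle distance d, from the law of cosines:
          # d^2 = L1^2 + L2^2 - 2*L1*L2*cos(theta).
          c = (L1 * L1 + L2 * L2 - d * d) / (2 * L1 * L2)
          return math.acos(max(-1.0, min(1.0, c)))  # clamp for numeric safety

      print(math.degrees(knee_angle(0.10, 0.10, 0.15)))  # ~97.2 degrees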

  7. Models and Algorithms for Production Planning and Scheduling in Foundries – Current State and Development Perspectives

    Directory of Open Access Journals (Sweden)

    A. Stawowy

    2012-04-01

    Mathematical programming, constraint programming and computational intelligence techniques, presented in the literature in the field of operations research and production management, are generally inadequate for planning real-life production processes. These methods are in fact dedicated to solving standard problems such as shop floor scheduling or lot-sizing, or their simple combinations such as scheduling with batching. Many real-world production planning problems, however, require the simultaneous solution of several problems (in addition to task scheduling and lot-sizing: cutting, workforce scheduling, packing and transport issues), including problems that are difficult to structure. The article presents examples and a classification of production planning and scheduling systems in the foundry industry described in the literature, and also outlines possible development directions for the models and algorithms used in such systems.

  8. Development of effluent removal prediction model efficiency in septic sludge treatment plant through clonal selection algorithm.

    Science.gov (United States)

    Ting, Sie Chun; Ismail, A R; Malek, M A

    2013-11-15

    This study aims at developing a novel effluent removal management tool for septic sludge treatment plants (SSTP) using a clonal selection algorithm (CSA). The proposed CSA articulates the idea of utilizing an artificial immune system (AIS) to identify the behaviour of the SSTP, which uses sequencing batch reactor (SBR) technology for its treatment processes. The novelty of this study is the development of a predictive SSTP model for effluent discharge based on the human immune system. Septic sludge from individual septic tanks and package plants is desludged and treated in an SSTP before the wastewater is discharged into a waterway. Sarawak, on the island of Borneo, is selected as the case study. Currently, there are only two SSTPs in Sarawak, namely the Matang SSTP and the Sibu SSTP, both using SBR technology. Monthly effluent discharges from 2007 to 2011 at the Matang SSTP are used in this study. Cross-validation is performed using data from the Sibu SSTP from April 2011 to July 2012. Both chemical oxygen demand (COD) and total suspended solids (TSS) in the effluent were analysed. The model was validated and tested before forecasting future effluent performance. The CSA-based SSTP model was simulated using MATLAB 7.10. The root mean square error (RMSE), mean absolute percentage error (MAPE), and correlation coefficient (R) were used as performance indexes. In this study, it was found that the proposed prediction model was successful up to 84 months for COD and 109 months for TSS. In conclusion, the proposed CSA-based SSTP prediction model is beneficial as an engineering tool to forecast the long-run performance of the SSTP and, in turn, prevent infringement of future environmental balance in other towns in Sarawak. Copyright © 2013 Elsevier Ltd. All rights reserved.
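
    The three performance indexes named above are standard; a minimal Python sketch of their computation (MAPE in percent, R as the Pearson correlation, observations assumed nonzero):

      import numpy as np

      def performance_indexes(obs, pred):
          obs, pred = np.asarray(obs, float), np.asarray(pred, float)
          rmse = np.sqrt(np.mean((pred - obs) ** 2))
          mape = 100.0 * np.mean(np.abs((pred - obs) / obs))
          r = np.corrcoef(obs, pred)[0, 1]
          return rmse, mape, r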

  9. Prediction Model for Object Oriented Software Development Effort Estimation Using One Hidden Layer Feed Forward Neural Network with Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Chandra Shekhar Yadav

    2014-01-01

    The budget computation for software development is affected by the prediction of software development effort and schedule. Software development effort and schedule can be predicted precisely on the basis of past software project data sets. In this paper, a model for object-oriented software development effort estimation using a one hidden layer feed forward neural network (OHFNN) has been developed. The model has been further optimized with the help of a genetic algorithm, taking the weight vector obtained from the OHFNN as the initial population for the genetic algorithm. Convergence has been obtained by minimizing the sum of squared errors over each input vector, and the optimal weight vector has been determined to predict the software development effort. The model has been empirically validated on the PROMISE software engineering repository dataset. The performance of the model is more accurate than that of the well-established constructive cost model (COCOMO).

  10. Assessment of numerical optimization algorithms for the development of molecular models

    Science.gov (United States)

    Hülsmann, Marco; Vrabec, Jadran; Maaß, Astrid; Reith, Dirk

    2010-05-01

    In the pursuit of studying the parameterization problem of molecular models from a broad perspective, this paper focuses on an isolated aspect: it is investigated by which algorithms parameters can best be optimized simultaneously to different types of target data (experimental or theoretical) over a range of temperatures with the lowest number of iteration steps. As an example, nitrogen is considered, where the intermolecular interactions are well described by the quadrupolar two-center Lennard-Jones model, which has four state-independent parameters. The target data comprise experimental values for saturated liquid density, enthalpy of vaporization, and vapor pressure. For the purpose of testing algorithms, molecular simulations are entirely replaced by fit functions of vapor-liquid equilibrium (VLE) properties from the literature, in order to efficiently assess the diverse numerical optimization algorithms investigated, which are state-of-the-art gradient-based methods with very good convergence properties. Additionally, artificial noise was superimposed onto the VLE fit results to mimic the calculation of molecular simulation data. Large differences in the behavior of the individual optimization algorithms are found, and some are identified as capable of handling noisy function values.

  11. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 4: Preliminary nonscanner models and count conversion algorithms

    Science.gov (United States)

    Halyo, Nesim; Choi, Sang H.

    1987-01-01

    Two count conversion algorithms and the associated dynamic sensor model for the M/WFOV nonscanner radiometers are defined. The sensor model provides and updates the constants necessary for the conversion algorithms, though the frequency with which these updates were needed was uncertain. This analysis therefore develops mathematical models for the conversion of irradiance at the sensor field of view (FOV) limiter into data counts, derives from this model two algorithms for the conversion of data counts to irradiance at the sensor FOV aperture and develops measurement models which account for a specific target source together with a sensor. The resulting algorithms are of the gain/offset and Kalman filter types. The gain/offset algorithm was chosen since it provided sufficient accuracy using simpler computations.
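
    The gain/offset algorithm is a linear count conversion whose two constants the dynamic sensor model supplies and periodically updates. A minimal sketch with hypothetical names:

      def counts_to_irradiance(counts, gain, offset):
          # Irradiance E at the sensor FOV aperture from data counts C,
          # E = gain * C + offset; gain and offset come from the sensor model.
          return gain * counts + offset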

  12. A novel hybrid classification model of genetic algorithms, modified k-Nearest Neighbor and developed backpropagation neural network.

    Directory of Open Access Journals (Sweden)

    Nader Salari

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods for classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strengths of each algorithm and is an approach to make up for their deficiencies. To develop the proposed model with the aim of obtaining the best array of features, feature ranking techniques such as Fisher's discriminant ratio and class separability criteria were first used to prioritize features. Second, the obtained results, consisting of arrays of the top-ranked features, were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved backpropagation neural network method, the classification process was advanced based on the optimum arrays of features selected by the genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models on seven datasets. Furthermore, statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance than all 13 classification methods. Finally, the performance of the proposed model was benchmarked against the best results reported for state-of-the-art classifiers in terms of classification accuracy on the same data sets. The substantial findings of the comprehensive comparative study revealed that ...
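
    The first-stage ranking above uses Fisher's discriminant ratio, which has a closed form per feature for a two-class problem. A minimal Python sketch (the class separability criteria and the later GA/kNN/BPNN stages are not reproduced):

      import numpy as np

      def fisher_ratio(X, y):
          # F_j = (mu1_j - mu2_j)^2 / (var1_j + var2_j) per feature; larger
          # values mean the feature separates the two classes better.
          X, y = np.asarray(X, float), np.asarray(y)
          X1, X2 = X[y == 0], X[y == 1]
          num = (X1.mean(axis=0) - X2.mean(axis=0)) ** 2
          den = X1.var(axis=0) + X2.var(axis=0)
          return num / den

      # Rank features, best first: order = np.argsort(fisher_ratio(X, y))[::-1]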

  13. A Novel Hybrid Classification Model of Genetic Algorithms, Modified k-Nearest Neighbor and Developed Backpropagation Neural Network

    Science.gov (United States)

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods for classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strengths of each algorithm and is an approach to make up for their deficiencies. To develop the proposed model with the aim of obtaining the best array of features, feature ranking techniques such as Fisher's discriminant ratio and class separability criteria were first used to prioritize features. Second, the obtained results, consisting of arrays of the top-ranked features, were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved backpropagation neural network method, the classification process was advanced based on the optimum arrays of features selected by the genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models on seven datasets. Furthermore, statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance than all 13 classification methods. Finally, the performance of the proposed model was benchmarked against the best results reported for state-of-the-art classifiers in terms of classification accuracy on the same data sets. The substantial findings of the comprehensive comparative study revealed that the performance of the ...

  14. Feedback model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we present a further development of an algorithm for stochastic disturbance rejection in model predictive control with input constraints, based on randomized algorithms. The algorithm presented in our work can solve the problem of stochastic disturbance rejection approximately but with ...

  15. A Robustly Stabilizing Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  16. Developing algorithms for predicting protein-protein interactions of homology modeled proteins.

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Shawn Bryan; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Roe, Diana C.

    2006-01-01

    The goal of this project was to examine the protein-protein docking problem, especially as it relates to homology-based structures, identify the key bottlenecks in current software tools, and evaluate and prototype new algorithms that may be developed to relieve these bottlenecks. This report describes the current challenges in the protein-protein docking problem: correctly predicting the binding site for the protein-protein interaction and correctly placing the sidechains. Two different and complementary approaches are taken that can help with the protein-protein docking problem. The first approach is to predict interaction sites prior to docking, using bioinformatics studies of protein-protein interactions to predict these interaction sites. The second approach is to improve the validation of predicted complexes after docking, using an improved scoring function, incorporating a solvation term, for evaluating proposed docked poses. This scoring function demonstrates significant improvement over current state-of-the-art functions. Initial studies on both these approaches are promising, and argue for full development of these algorithms.

  17. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  18. High-resolution computational algorithms for simulating offshore wind turbines and farms: Model development and validation

    Energy Technology Data Exchange (ETDEWEB)

    Calderer, Antoni [Univ. of Minnesota, Minneapolis, MN (United States); Yang, Xiaolei [Stony Brook Univ., NY (United States); Angelidis, Dionysios [Univ. of Minnesota, Minneapolis, MN (United States); Feist, Chris [Univ. of Minnesota, Minneapolis, MN (United States); Guala, Michele [Univ. of Minnesota, Minneapolis, MN (United States); Ruehl, Kelley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guo, Xin [Univ. of Minnesota, Minneapolis, MN (United States); Boomsma, Aaron [Univ. of Minnesota, Minneapolis, MN (United States); Shen, Lian [Univ. of Minnesota, Minneapolis, MN (United States); Sotiropoulos, Fotis [Stony Brook Univ., NY (United States)

    2015-10-30

    The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.

  19. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    Science.gov (United States)

    2014-01-01

    Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the classic work transformation matrix (WTM) model are discussed, and a tearing approach as well as an inner iteration method is used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find the optimal decoupling schemes. Firstly, the tearing approach and inner iteration method are analyzed for solving coupled task sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given to verify its reasonability and effectiveness. PMID:25431584
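
    For background on the classic model being extended, the sketch below shows the standard work transformation matrix iteration and its closed-form total work; the coupling values are illustrative, not the paper's case study.

      import numpy as np

      # Classic WTM iteration: u_{k+1} = A u_k, where A[i, j] is the fraction
      # of task j's work that rework creates for task i and u_0 is the initial
      # work vector. If the spectral radius of A is < 1, the total work
      # summed over all iterations is (I - A)^{-1} u_0.
      A = np.array([[0.0, 0.3],
                    [0.4, 0.0]])      # assumed coupling strengths
      u0 = np.array([1.0, 1.0])
      total = np.linalg.solve(np.eye(2) - A, u0)
      print(total)  # total work per task, approximately [1.477, 1.591]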

  20. Developing Scoring Algorithms

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  1. Development of an algorithm as an implementation model for a wound management formulary across a UK health economy.

    Science.gov (United States)

    Stephen-Haynes, J

    2013-12-01

    This article outlines a strategic process for the evaluation of wound management products and the development of an algorithm as an implementation model for wound management. Wound management is an increasingly complex process given the variety of interactive dressings and other devices available. This article discusses the procurement process, access to wound management dressings and the use of wound management formularies within the UK. We conclude that the current commissioners of tissue viability within healthcare organisations need to adopt a proactive approach to ensure appropriate formulary evaluation and product selection, in order to achieve the most beneficial clinical and financial outcomes.

  2. Genetic algorithm guided population pharmacokinetic model development for simvastatin, concurrently or non-concurrently co-administered with amlodipine.

    Science.gov (United States)

    Chaturvedula, Ayyappa; Sale, Mark E; Lee, Howard

    2014-02-01

    An automated model development was performed for simvastatin, co-administered with amlodipine concurrently or non-concurrently (i.e., 4 hours later) in 17 patients with coexisting hyperlipidemia and hypertension. The single-objective hybrid genetic algorithm (SOHGA) was implemented in the NONMEM software by defining the search space for structural, statistical and covariate models. Candidate models obtained from the SOHGA runs were further assessed for biological plausibility and the precision of parameter estimates, followed by a traditional backward elimination process for model refinement. The final population pharmacokinetic model shows that the elimination rate constant for simvastatin acid, the active form produced by hydrolysis of its lactone prodrug (i.e., simvastatin), is only 44% as large in the concurrent amlodipine administration group as in the non-concurrent group. The application of SOHGA for automated model selection, combined with traditional model selection strategies, appears to save time in model development and can also generate new hypotheses that are biologically more plausible. © 2013, The American College of Clinical Pharmacology.

  3. Watershed model calibration framework developed using an influence coefficient algorithm and a genetic algorithm and analysis of pollutant discharge characteristics and load reduction in a TMDL planning area.

    Science.gov (United States)

    Cho, Jae Heon; Lee, Jong Ho

    2015-11-01

    Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to calibrate the parameters manually, individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity, but shortcomings regarding understanding of indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and a genetic algorithm (WMCIG) to automatically calibrate distributed models. The optimization problem of minimizing the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of normalized residuals among all iterations. The WMCIG was applied to the Gomakwoncheon watershed, located in an area subject to a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall during the summer. Pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality during the frequent heavy rainfall of the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction ...
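
    The GA's fitness in this framework is the sum of squares of normalized residuals. A minimal sketch of that objective, with run_model as a hypothetical wrapper around the watershed model (HSPF in the paper) and strictly positive observations assumed:

      import numpy as np

      def calibration_objective(params, run_model, observed):
          # Run the model with candidate parameters and score the candidate by
          # the sum of squared normalized residuals, which the GA minimizes.
          simulated = run_model(params)
          resid = (simulated - observed) / observed
          return np.sum(resid ** 2)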

  4. Development of a Quasi-3D Multiscale Modeling Framework: Motivation, basic algorithm and preliminary results

    Directory of Open Access Journals (Sweden)

    Joon-Hee Jung

    2010-11-01

    A new framework for modeling the atmosphere, which we call the quasi-3D (Q3D) multi-scale modeling framework (MMF), is developed with the objective of including cloud-scale three-dimensional effects in a GCM without necessarily using a global cloud-resolving model (CRM). It combines a GCM with a Q3D CRM whose horizontal domain consists of two perpendicular sets of channels, each of which contains a locally 3D grid-point array. For computational efficiency, the widths of the channels are chosen to be narrow. Thus, it is crucial to select a proper lateral boundary condition to realistically simulate the statistics of clouds and cloud-associated processes. Among the various possibilities, a periodic lateral boundary condition is chosen for the deviations from the background fields, which are obtained by interpolation from the GCM grid points. Since the deviations tend to vanish as the GCM grid size approaches that of the CRM, the whole Q3D MMF system can converge to a fully 3D global CRM. Consequently, the horizontal resolution of the GCM can be freely chosen depending on the objective of the application, without changing the formulation of the model physics. To evaluate the newly developed Q3D CRM in an efficient way, idealized experiments have been performed using a small horizontal domain. In these tests, the Q3D CRM uses only one pair of perpendicular channels with only two grid points across each channel. Comparing the simulation results with those of a fully 3D CRM, it is concluded that the Q3D CRM can reproduce most of the important statistics of the 3D solutions, including the vertical distributions of cloud water and precipitants, the vertical transports of potential temperature and water vapor, and the variances and covariances of dynamical variables. The main improvement over a corresponding 2D simulation appears in the surface fluxes and the vorticity transports that cause the mean wind to change. A comparison with a simulation using a coarse ...

  5. Multiagent scheduling models and algorithms

    CERN Document Server

    Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur

    2014-01-01

    This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.

  6. Development of simulators algorithms of planar radioactive sources for use in computer models of exposure

    International Nuclear Information System (INIS)

    Vieira, Jose Wilson; Leal Neto, Viriato; Lima Filho, Jose de Melo; Lima, Fernando Roberto de Andrade

    2013-01-01

    This paper presents an algorithm for a planar, isotropic radioactive source, built from a standard Gaussian probability density function (PDF) subjected to a translation method that displaces its maximum across the field, changes its intensity, and makes the dispersion around the mean right-asymmetric. The algorithm was used to generate samples of photons emerging from a plane and reaching a semicircle enclosing a voxel phantom. The PDF describing this problem is already known, but the random number generating function (FRN) associated with it cannot be deduced by direct MC techniques. This is a significant problem because the model can be adapted to simulations involving natural terrestrial radiation, or accidents in medical establishments or industries where radioactive material spreads over a plane. Some attempts to obtain an FRN for the PDF of this problem have already been implemented by the Research Group in Numerical Dosimetry (GND) of Recife-PE, Brazil, always using the MC rejection sampling technique. This article followed the methodology of that previous work, except on one point: the PDF of the problem was replaced by a translated normal PDF. To perform dosimetric comparisons, we used two computational models of exposure (MCEs): the MSTA (MASH standing, composed of the adult male voxel phantom MASH (male mesh) in orthostatic position, available from the Department of Nuclear Energy (DEN) of the Federal University of Pernambuco (UFPE), coupled to the MC EGSnrc code and the GND planar source based on the rejection technique) and the MSTA NT. The two MCEs are identical except for the FRN used in the planar source. The results presented and discussed in this paper establish the new algorithm for a planar source to be used by GND
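    A minimal sketch of the MC rejection-sampling step described above: drawing samples from a right-asymmetric, translated Gaussian-like PDF whose inverse CDF is not available in closed form. The skew-normal form, its parameters, and the sampling interval are illustrative assumptions, not the GND group's actual PDF.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(1)

def target_pdf(x, loc=2.0, skew=4.0):
    # Translated Gaussian made right-asymmetric (unnormalized skew-normal form).
    z = x - loc
    return np.exp(-0.5 * z**2) * (1.0 + erf(skew * z / np.sqrt(2.0)))

def rejection_sample(n, lo=-1.0, hi=8.0):
    # Envelope: a uniform proposal scaled to the PDF's maximum on [lo, hi].
    grid = np.linspace(lo, hi, 2001)
    m = target_pdf(grid).max() * 1.05            # safety margin on the bound
    samples = []
    while len(samples) < n:
        x = rng.uniform(lo, hi, size=4 * n)      # propose in batches
        u = rng.uniform(0.0, m, size=4 * n)
        samples.extend(x[u < target_pdf(x)][: n - len(samples)])
    return np.asarray(samples)

photons = rejection_sample(10_000)   # e.g., emission abscissas across the plane
```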

  7. Development of a thermal control algorithm using artificial neural network models for improved thermal comfort and energy efficiency in accommodation buildings

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Jung, Sung Kwon

    2016-01-01

    Highlights: • An ANN model for predicting the optimal start moment of the cooling system was developed. • An ANN model for predicting the amount of cooling energy consumption was developed. • An optimal control algorithm was developed employing the two ANN models. • The algorithm showed advanced thermal comfort and energy efficiency. - Abstract: The aim of this study was to develop a control algorithm to demonstrate the improved thermal comfort and building energy efficiency of accommodation buildings in the cooling season. For this, two artificial neural network (ANN)-based predictive and adaptive models were developed and employed in the algorithm. One model predicted the cooling energy consumption during the unoccupied period for different setback temperatures, and the other predicted the time required for restoring the current indoor temperature to the normal set-point temperature. Using numerical simulation methods, the prediction accuracy of the two ANN models and the performance of the algorithm were tested. The test result analysis showed that the two ANN models achieved acceptable prediction error rates when applied in the control algorithm. In addition, the algorithm based on the two ANN models provided a more comfortable and energy-efficient indoor thermal environment than the two conventional control methods, which respectively employed a fixed set-point temperature for the entire day and a setback temperature during the unoccupied period. The operating range was 23–26 °C during the occupied period and 25–28 °C during the unoccupied period. Based on the analysis, it can be concluded that the optimal algorithm with the two predictive and adaptive ANN models can be used to design a more comfortable and energy-efficient indoor thermal environment for accommodation buildings in a comprehensive manner.
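    A minimal sketch, on synthetic data, of one of the two ANN roles described above: predicting unoccupied-period cooling energy for candidate setback temperatures so a controller can pick the most efficient one. The feature names and the toy energy relation are assumptions, not the paper's model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n = 2000
setback = rng.uniform(25, 28, n)       # candidate setback temperature [deg C]
outdoor = rng.uniform(22, 35, n)       # outdoor air temperature [deg C]
hours = rng.uniform(1, 10, n)          # length of unoccupied period [h]
# Toy relation: energy grows with indoor-outdoor gap and period length.
energy = np.maximum(outdoor - setback, 0) * hours * 0.9 + rng.normal(0, 0.5, n)

X = np.column_stack([setback, outdoor, hours])
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                     random_state=0).fit(X, energy)

# Controller step: evaluate candidate setbacks and choose the cheapest one.
candidates = np.column_stack([np.arange(25.0, 28.5, 0.5),
                              np.full(7, 30.0), np.full(7, 6.0)])
best_setback = candidates[model.predict(candidates).argmin()]
```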

  8. Parallel Algorithms for Model Checking

    NARCIS (Netherlands)

    van de Pol, Jaco; Mousavi, Mohammad Reza; Sgall, Jiri

    2017-01-01

    Model checking is an automated verification procedure, which checks that a model of a system satisfies certain properties. These properties are typically expressed in some temporal logic, like LTL and CTL. Algorithms for LTL model checking (linear time logic) are based on automata theory and graph

  9. Algorithm development and simulation outcomes for hypoxic head and neck cancer radiotherapy using a Monte Carlo cell division model

    International Nuclear Information System (INIS)

    Harriss, W.M.; Bezak, E.; Yeoh, E.

    2010-01-01

    Full text: A temporal Monte Carlo tumour model, 'Hyp-RT', simulating hypoxic head and neck cancer has been updated and extended to model radiotherapy. The aim is to provide a convenient radiobiological tool for clinicians to evaluate radiotherapy treatment schedules based on many individual tumour properties, including oxygenation. FORTRAN95 and JAVA have been utilised to develop the efficient algorithm, which can propagate 10^8 cells. Epithelial cell kill is affected by dose, oxygenation and proliferative status. Accelerated repopulation (AR) has been modelled by increasing the symmetrical stem cell division probability, and reoxygenation (ROx) has been modelled using random incremental boosts of oxygen to the cell population throughout therapy. Results: The stem cell percentage and the degree of hypoxia dominate tumour growth rate. For conventional radiotherapy, 15-25% more dose was required for hypoxic versus oxic tumours, depending on the time of AR onset (0-3 weeks after the start of treatment). ROx of hypoxic tumours resulted in tumour sensitisation and therefore a dose reduction, of up to 35%, varying with the time of onset. Fig. 1 shows results for all combinations of AR and ROx onset times for the moderate hypoxia case. Conclusions: In hypoxic tumours, accelerated repopulation and reoxygenation affect cell kill in the same manner as when the effects are modelled individually; however, the degree of the effect is altered, and therefore the combined result is difficult to predict, providing evidence for the usefulness of computer models. Simulations have quantitatively
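    An illustrative sketch (not Hyp-RT itself) of the core Monte Carlo step: per-fraction cell kill that depends on dose and oxygenation through a linear-quadratic model with an oxygen-enhancement factor. The parameter values, the pO2-to-enhancement mapping, and the crude repopulation/reoxygenation steps are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

ALPHA, BETA = 0.3, 0.03          # Gy^-1, Gy^-2 (assumed LQ parameters)

def oxygen_factor(po2, k=3.0, oer_max=3.0):
    # Saturating curve: near 1/3 relative effect when anoxic, 1 when well oxygenated.
    return (oer_max * po2 + k) / (po2 + k) / oer_max

def deliver_fraction(po2, dose=2.0):
    # Hypoxic cells see a reduced effective dose, hence higher survival.
    d_eff = dose * oxygen_factor(po2)
    survival = np.exp(-(ALPHA * d_eff + BETA * d_eff**2))
    return po2[rng.random(po2.size) < survival]   # surviving cells' pO2 values

cells = rng.gamma(shape=2.0, scale=5.0, size=100_000)   # pO2 distribution [mmHg]
for fraction in range(35):                               # 7-week schedule
    cells = deliver_fraction(cells)
    cells = np.append(cells, cells[rng.random(cells.size) < 0.02])  # repopulation
    cells += rng.uniform(0, 1, cells.size)   # crude incremental reoxygenation boost
```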

  10. Evaluation the Quality of Cloud Dataset from the Goddard Multi-Scale Modeling Framework for Supporting GPM Algorithm Development

    Science.gov (United States)

    Chern, J.; Tao, W.; Mohr, K. I.; Matsui, T.; Lang, S. E.

    2013-12-01

    With the recent rapid advancement in computational technology, the multi-scale modeling framework (MMF), which replaces conventional cloud parameterizations with a cloud-resolving model (CRM) in each grid column of a GCM, has been developed and improved at NASA Goddard. The Goddard MMF is based on the coupling of the Goddard Cumulus Ensemble (GCE), a CRM, and the Goddard GEOS global model. In recent years, a few new and improved microphysical schemes have been developed and implemented in the GCE based on observations from field campaigns, and these schemes have been incorporated into the MMF. The MMF has global coverage and can provide detailed cloud properties, such as cloud amount, hydrometeor types, and vertical profiles of water content, at the high spatial and temporal resolution of a cloud-resolving model. When coupled with the Goddard Satellite Data Simulation Unit (GSDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators, the MMF system can provide radiances and backscattering similar to what satellites directly observe. In this study, a one-year (2007) MMF simulation has been performed with the new 4-ice (cloud ice, snow, graupel and hail) microphysical scheme. The GEOS global model is run at 2° x 2.5° resolution, and the embedded two-dimensional GCEs each have 64 columns at 4 km horizontal resolution. The large-scale forcing from the GCM is nudged to the ERA-Interim analysis to reduce the influence of MMF model biases on the cloud-resolving model results. The simulation provides more than 300 million vertical cloud profiles across different seasons, geographic locations, and climate regimes. This cloud dataset is used to supplement observations over data-sparse areas in support of GPM algorithm development. The model-simulated mean and variability of surface rainfall and snowfall, cloud and precipitation types, cloud properties, radiances and backscattering are evaluated against satellite observations. We will assess the strengths

  11. Algorithmic Issues in Modeling Motion

    DEFF Research Database (Denmark)

    Agarwal, P. K; Guibas, L. J; Edelsbrunner, H.

    2003-01-01

    This article is a survey of research areas in which motion plays a pivotal role. The aim of the article is to review current approaches to modeling motion together with related data structures and algorithms, and to summarize the challenges that lie ahead in producing a more unified theory...

  12. Development of algorithms for tsunami detection by High Frequency Radar based on modeling tsunami case studies in the Mediterranean Sea

    Science.gov (United States)

    Grilli, Stéphan; Guérin, Charles-Antoine; Grosdidier, Samuel

    2015-04-01

    Where coastal tsunami hazard is governed by near-field sources, submarine mass failures (SMFs) or earthquakes, tsunami propagation times may be too short for detection based on deep- or shallow-water buoys. To offer sufficient warning time, it has been proposed by others to implement early warning systems relying on High Frequency Surface Wave Radar (HFSWR) remote sensing, which has dense spatial coverage far offshore. A new HFSWR, referred to as STRADIVARIUS, has recently been deployed by Diginext Inc. to cover the "Golfe du Lion" (GDL) in the Western Mediterranean Sea. This radar, which operates at 4.5 MHz, uses a proprietary phase-coding technology that allows detection up to 300 km in a bistatic configuration (with a baseline of about 100 km). Although the primary purpose of the radar is vessel detection in relation to homeland security, it can also be used for ocean current monitoring. The current caused by an arriving tsunami will shift the Bragg frequency by a value proportional to a component of its velocity, which can easily be obtained from the Doppler spectrum of the HFSWR signal. Using state-of-the-art tsunami generation and propagation models, we modeled tsunami case studies in the western Mediterranean basin (both seismic and SMF) and simulated the HFSWR backscattered signal that would be detected over the entire GDL and beyond. Based on the simulated HFSWR signal, we developed two types of tsunami detection algorithms: (i) one based on standard Doppler spectra, for which we found that, to be detectable within the environmental and background current noise, the Doppler shift requires tsunami currents of at least 10-15 cm/s, which typically occur only on the continental shelf in fairly shallow water; (ii) to allow earlier detection, a second algorithm computes correlations of the HFSWR signals at two distant locations, shifted in time by the tsunami propagation time between these locations (easily computed from the bathymetry). We found that this
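    A minimal sketch of detection idea (ii) on synthetic data: correlate the radar-derived current time series at two cells, lagging one by the assumed tsunami travel time between them. A sustained correlation peak at that specific lag flags an arriving tsunami. The signal shapes, noise levels, and threshold below are illustrative stand-ins for real HFSWR output.

```python
import numpy as np

rng = np.random.default_rng(4)

dt = 10.0                                   # sampling interval [s]
travel_time = 600.0                         # assumed tsunami travel time A -> B [s]
lag = int(travel_time / dt)

t = np.arange(0.0, 7200.0, dt)
arrival = 3000.0                            # tsunami reaches site A at t = 3000 s
def wave(t0):                               # tsunami-induced current [m/s]
    return 0.12 * np.sin(2 * np.pi * (t - t0) / 900.0) * (t > t0)

site_a = wave(arrival) + 0.05 * rng.standard_normal(t.size)
site_b = wave(arrival + travel_time) + 0.05 * rng.standard_normal(t.size)

def lagged_correlation(a, b, lag, window):
    # Sliding normalized correlation of a(t) against b(t + lag).
    scores = []
    for i in range(0, a.size - lag - window, window // 2):
        scores.append(np.corrcoef(a[i:i + window], b[i + lag:i + lag + window])[0, 1])
    return np.array(scores)

alarm = lagged_correlation(site_a, site_b, lag, window=60).max() > 0.6
```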

  13. To develop a universal gamut mapping algorithm

    International Nuclear Information System (INIS)

    Morovic, J.

    1998-10-01

    When a colour image from one colour reproduction medium (e.g. nature, a monitor) needs to be reproduced on another (e.g. on a monitor or in print) and these media have different colour ranges (gamuts), it is necessary to have a method for mapping between them. If such a gamut mapping algorithm can be used under a wide range of conditions, it can also be incorporated in an automated colour reproduction system and considered to be in some sense universal. In terms of preliminary work, a colour reproduction system was implemented, for which a new printer characterisation model (including grey-scale correction) was developed. Methods were also developed for calculating gamut boundary descriptors and for calculating gamut boundaries along given lines from them. The gamut mapping solution proposed in this thesis is a gamut compression algorithm developed with the aim of being accurate and universally applicable. It was arrived at by way of an evolutionary gamut mapping development strategy for the purposes of which five test images were reproduced between a CRT and printed media obtained using an inkjet printer. Initially, a number of previously published algorithms were chosen and psychophysically evaluated whereby an important characteristic of this evaluation was that it also considered the performance of algorithms for individual colour regions within the test images used. New algorithms were then developed on their basis, subsequently evaluated and this process was repeated once more. In this series of experiments the new GCUSP algorithm, which consists of a chroma-dependent lightness compression followed by a compression towards the lightness of the reproduction cusp on the lightness axis, gave the most accurate and stable performance overall. The results of these experiments were also useful for improving the understanding of some gamut mapping factors - in particular gamut difference. In addition to looking at accuracy, the pleasantness of reproductions obtained

  14. Complex fluids modeling and algorithms

    CERN Document Server

    Saramito, Pierre

    2016-01-01

    This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.

  15. Optimization in engineering models and algorithms

    CERN Document Server

    Sioshansi, Ramteen

    2017-01-01

    This textbook covers the fundamentals of optimization, including linear, mixed-integer linear, nonlinear, and dynamic optimization techniques, with a clear engineering focus. It carefully describes classical optimization models and algorithms using an engineering problem-solving perspective, and emphasizes modeling issues using many real-world examples related to a variety of application areas. Providing an appropriate blend of practical applications and optimization theory makes the text useful to both practitioners and students, and gives the reader a good sense of the power of optimization and the potential difficulties in applying optimization to modeling real-world systems. The book is intended for undergraduate and graduate-level teaching in industrial engineering and other engineering specialties. It is also of use to industry practitioners, due to the inclusion of real-world applications, opening the door to advanced courses on both modeling and algorithm development within the industrial engineering ...

  16. Information Dynamics in Networks: Models and Algorithms

    Science.gov (United States)

    2016-09-13

    In this project, we investigated how network structure interplays with higher-level processes in online social networks. Reported paper: A Note on Modeling Retweet Cascades on Twitter, Workshop on Algorithms and Models for the Web Graph, 09-DEC-15.

  17. Development of hybrid genetic-algorithm-based neural networks using regression trees for modeling air quality inside a public transportation bus.

    Science.gov (United States)

    Kadiyala, Akhil; Kaur, Devinder; Kumar, Ashok

    2013-02-01

    The present study developed a novel approach to modeling the indoor air quality (IAQ) of a public transportation bus through the development of hybrid genetic-algorithm-based neural networks (also known as evolutionary neural networks) with input variables optimized using regression trees, referred to as the GART approach. This study validated the applicability of the GART modeling approach to solving complex nonlinear systems by accurately predicting the monitored contaminants of carbon dioxide (CO2), carbon monoxide (CO), nitric oxide (NO), sulfur dioxide (SO2), 0.3-0.4 μm sized particle numbers, 0.4-0.5 μm sized particle numbers, particulate matter (PM) concentrations less than 1.0 μm (PM1.0), and PM concentrations less than 2.5 μm (PM2.5) inside a public transportation bus operating on 20% grade biodiesel in Toledo, OH. First, the important variables affecting each monitored in-bus contaminant were determined using regression trees. Second, analysis of variance was used as a sensitivity analysis complementary to the regression tree results to determine a subset of statistically significant variables affecting each monitored in-bus contaminant. Finally, the identified subsets of statistically significant variables were used as inputs to develop three artificial neural network (ANN) models: a regression tree-based back-propagation network (BPN-RT), a regression tree-based radial basis function network (RBFN-RT), and the GART model. Performance measures were used to validate the predictive capacity of the developed IAQ models. The results from this approach were compared with the results obtained from a theoretical approach and from a generalized practicable approach to modeling IAQ that considered additional independent variables when developing the aforementioned ANN models. The hybrid GART models were able to capture the majority of the variance in the monitored in-bus contaminants. The genetic-algorithm
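    A minimal sketch of the GART idea on synthetic data: rank candidate inputs with a regression tree, keep the most influential ones, and train a neural network on that subset. The genetic-algorithm search over network topology used in the paper is replaced here by a fixed topology for brevity; the feature set and toy CO2 relation are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n = 3000
X = rng.standard_normal((n, 8))        # e.g., speed, air exchange, RH, T, wind, ...
co2 = 400 + 50 * X[:, 0] - 30 * X[:, 2] + 5 * rng.standard_normal(n)  # ppm (toy)

# Step 1: regression tree ranks the candidate input variables.
tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X, co2)
keep = np.argsort(tree.feature_importances_)[::-1][:3]   # top-3 inputs

# Step 2: neural network trained only on the selected subset.
ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                   random_state=0).fit(X[:, keep], co2)
print("selected inputs:", keep, " R^2:", ann.score(X[:, keep], co2))
```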

  18. Development of a Synthetic Adaptive Neuro-Fuzzy Prediction Model for Tumor Motion Tracking in External Radiotherapy by Evaluating Various Data Clustering Algorithms.

    Science.gov (United States)

    Ghorbanzadeh, Leila; Torshabi, Ahmad Esmaili; Nabipour, Jamshid Soltani; Arbatan, Moslem Ahmadi

    2016-04-01

    In image-guided radiotherapy, in order to deliver a prescribed uniform dose to dynamic tumors in the thorax region while minimizing the additional dose received by the surrounding healthy tissues, tumor motion must be tracked in real time. Several correlation models have been proposed in recent years to provide tumor position information as a function of time in radiotherapy with external surrogates. However, developing an accurate correlation model is still a challenge. In this study, we proposed an adaptive neuro-fuzzy based correlation model that employs several data clustering algorithms for antecedent parameter construction, to avoid over-fitting and to achieve appropriate performance in tumor motion tracking compared with conventional models. To begin, a comparative assessment is made between seven neuro-fuzzy correlation models, each constructed using a unique data clustering algorithm. Then, the constructed models are combined within an adaptive sevenfold synthetic model, since our tumor motion database has a high degree of variability and each model has its own intrinsic strengths in motion tracking. In the proposed sevenfold synthetic model, the best model is selected adaptively at pre-treatment, and the model also updates its steps for each patient using an automatic model selectivity subroutine. We tested the efficacy of the proposed synthetic model on twenty patients (divided equally into control and worst groups) treated with the CyberKnife Synchrony system. Compared to the CyberKnife model, the proposed synthetic model resulted in 61.2% and 49.3% reductions in tumor tracking error in the worst and control groups, respectively. These results suggest that the proposed model selection program in our synthetic neuro-fuzzy model can significantly reduce tumor tracking errors. Numerical assessments confirmed that the proposed synthetic model is able to track tumor motion in real time with high accuracy during treatment. © The Author(s) 2015.

  19. Development of a Thermal Equilibrium Prediction Algorithm

    International Nuclear Information System (INIS)

    Aviles-Ramos, Cuauhtemoc

    2002-01-01

    A thermal equilibrium prediction algorithm is developed and tested using a heat conduction model and data sets from calorimetric measurements. The physical model used in this study is the exact solution of a system of two partial differential equations that govern the heat conduction in the calorimeter. A multi-parameter estimation technique is developed and implemented to estimate the effective volumetric heat generation and thermal diffusivity in the calorimeter measurement chamber, and the effective thermal diffusivity of the heat flux sensor. These effective properties and the exact solution are used to predict the heat flux sensor voltage readings at thermal equilibrium. Thermal equilibrium predictions are carried out considering only 20% of the total measurement time required for thermal equilibrium. A comparison of the predicted and experimental thermal equilibrium voltages shows that the average percentage error from 330 data sets is only 0.1%. The data sets used in this study come from calorimeters of different sizes that use different kinds of heat flux sensors. Furthermore, different nuclear material matrices were assayed in the process of generating these data sets. This study shows that the integration of this algorithm into the calorimeter data acquisition software will result in an 80% reduction of measurement time. This reduction results in a significant cutback in operational costs for the calorimetric assay of nuclear materials. (authors)
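    A minimal sketch of the prediction idea described above: fit the early portion (about 20%) of a calorimeter sensor trace to an exponential approach toward equilibrium and read off the asymptote, instead of waiting for the full settling time. The single-exponential form is a simplification of the paper's two-PDE physical model, and all numbers are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)

def approach(t, v_eq, dv, tau):
    # Exponential approach toward the equilibrium voltage v_eq.
    return v_eq - dv * np.exp(-t / tau)

t = np.linspace(0, 10_000, 500)                       # full measurement time [s]
v = approach(t, v_eq=4.20, dv=1.3, tau=1800) \
    + 0.002 * rng.standard_normal(t.size)             # noisy sensor trace [V]

cut = t.size // 5                                     # use only the first 20%
(p_eq, p_dv, p_tau), _ = curve_fit(approach, t[:cut], v[:cut],
                                   p0=(v[cut - 1], 1.0, 1000.0))
print(f"predicted equilibrium: {p_eq:.3f} V (true value 4.200 V)")
```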

  20. Testing algorithms for a passenger train braking performance model.

    Science.gov (United States)

    2011-09-01

    "The Federal Railroad Administrations Office of Research and Development funded a project to establish performance model to develop, analyze, and test positive train control (PTC) braking algorithms for passenger train operations. With a good brak...

  1. Modeling and Engineering Algorithms for Mobile Data

    DEFF Research Database (Denmark)

    Blunck, Henrik; Hinrichs, Klaus; Sondern, Joëlle

    2006-01-01

    In this paper, we present an object-oriented approach to modeling mobile data and algorithms operating on such data. Our model is general enough to capture any kind of continuous motion while at the same time allowing for encompassing algorithms optimized for specific types of motion. Such motion...

  2. Modelling and development of estimation and control algorithms: application to a bio process; Modelisation et elaboration d'algorithmes d'estimation et de commande: application a un bioprocede

    Energy Technology Data Exchange (ETDEWEB)

    Maher, M.

    1995-02-03

    Modelling, estimation and control of an alcoholic fermentation process is the purpose of this thesis. A simple mathematical model of a fermentation process is established by using experimental results obtained on the plant. This nonlinear model is used for numerical simulation, analysis and synthesis of estimation and control algorithms. The problem of nonlinear state and parameter estimation of bio-processes is studied. Two estimation techniques are developed and proposed to bypass the lack of sensors for certain physical variables. Their performances are studied by numerical simulation. One of these estimators is validated on experimental results of batch and continuous fermentations. An adaptive control law is proposed for the regulation and tracking of the substrate concentration of the plant by acting on the dilution rate. It is a nonlinear control strategy coupled with the previously validated estimator. The performance of this control law is evaluated by a real application to a continuous-flow fermentation process. (author) refs.

  3. Algorithms to solve the Sutherland model

    OpenAIRE

    Langmann, Edwin

    2001-01-01

    We give a self-contained presentation and comparison of two different algorithms to explicitly solve quantum many body models of indistinguishable particles moving on a circle and interacting with two-body potentials of $1/\\sin^2$-type. The first algorithm is due to Sutherland and well-known; the second one is a limiting case of a novel algorithm to solve the elliptic generalization of the Sutherland model. These two algorithms are different in several details. We show that they are equivalen...

  4. Developing Scoring Algorithms (Earlier Methods)

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  5. Critical function monitoring system algorithm development

    International Nuclear Information System (INIS)

    Harmon, D.L.

    1984-01-01

    Accurate critical function status information is a key to operator decision-making during events threatening nuclear power plant safety. The Critical Function Monitoring System provides continuous critical function status monitoring by use of algorithms which mathematically represent the processes by which an operating staff would determine critical function status. This paper discusses in detail the systematic design methodology employed to develop adequate Critical Function Monitoring System algorithms

  6. Loop algorithms for quantum simulations of fermion models on lattices

    International Nuclear Information System (INIS)

    Kawashima, N.; Gubernatis, J.E.; Evertz, H.G.

    1994-01-01

    Two cluster algorithms, based on constructing and flipping loops, are presented for world-line quantum Monte Carlo simulations of fermions and are tested on the one-dimensional repulsive Hubbard model. We call these algorithms the loop-flip and loop-exchange algorithms. For these two algorithms and the standard world-line algorithm, we calculated the autocorrelation times for various physical quantities and found that the ordinary world-line algorithm, which uses only local moves, suffers from very long correlation times that make difficult not only the estimation of the error but also the estimation of the average values themselves. These difficulties are especially severe in the low-temperature, large-U regime. In contrast, we find that the new algorithms, when used alone or in combination with themselves and the standard algorithm, can have significantly smaller autocorrelation times, in some cases smaller by three orders of magnitude. The new algorithms, which use nonlocal moves, are discussed from the point of view of a general prescription for developing cluster algorithms. The loop-flip algorithm is also shown to be ergodic and to belong to the grand canonical ensemble. Extensions to other models and higher dimensions are briefly discussed

  7. Methodology and basic algorithms of the Livermore Economic Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    Bell, R.B.

    1981-03-17

    The methodology and the basic pricing algorithms used in the Livermore Economic Modeling System (EMS) are described. The report explains the derivations of the EMS equations in detail; however, it could also serve as a general introduction to the modeling system. A brief but comprehensive explanation of what EMS is and does, and how it does it is presented. The second part examines the basic pricing algorithms currently implemented in EMS. Each algorithm's function is analyzed and a detailed derivation of the actual mathematical expressions used to implement the algorithm is presented. EMS is an evolving modeling system; improvements in existing algorithms are constantly under development and new submodels are being introduced. A snapshot of the standard version of EMS is provided and areas currently under study and development are considered briefly.

  8. LCD motion blur: modeling, analysis, and algorithm.

    Science.gov (United States)

    Chan, Stanley H; Nguyen, Truong Q

    2011-08-01

    Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast moving objects in a scene are often perceived as blurred. This effect is known as the LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitation of human-eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human-eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l(1)-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms.
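    A minimal 1-D sketch of the paper's optimization step: an l1-norm regularized least-squares problem solved with a plain projected subgradient method. The box-blur kernel stands in for the LCD motion-blur operator, and the [0, 1] box is an assumed projection set; the authors' full 2-D model is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(7)

n, k, lam, step = 200, 9, 0.05, 2e-3
h = np.ones(k) / k                                   # toy motion-blur kernel
x_true = np.zeros(n); x_true[60:120] = 1.0           # sharp edge signal
y = np.convolve(x_true, h, mode="same") + 0.01 * rng.standard_normal(n)

D = np.eye(n) - np.eye(n, k=1)                       # finite-difference operator

def blur(x):
    # The kernel is symmetric, so applying it again approximates H^T.
    return np.convolve(x, h, mode="same")

x = y.copy()
for _ in range(3000):
    # Subgradient of ||Hx - y||^2 + lam * ||Dx||_1.
    g = 2 * blur(blur(x) - y) + lam * D.T @ np.sign(D @ x)
    x = np.clip(x - step * g, 0.0, 1.0)              # project onto the [0, 1] box
```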

  9. Models and algorithms for biomolecules and molecular networks

    CERN Document Server

    DasGupta, Bhaskar

    2016-01-01

    By providing expositions to modeling principles, theories, computational solutions, and open problems, this reference presents a full scope on relevant biological phenomena, modeling frameworks, technical challenges, and algorithms. * Up-to-date developments of structures of biomolecules, systems biology, advanced models, and algorithms * Sampling techniques for estimating evolutionary rates and generating molecular structures * Accurate computation of probability landscape of stochastic networks, solving discrete chemical master equations * End-of-chapter exercises

  10. Insertion algorithms for network model database management systems

    Science.gov (United States)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the update operation for network model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.

  11. Efficient Implementation Algorithms for Homogenized Energy Models

    National Research Council Canada - National Science Library

    Braun, Thomas R; Smith, Ralph C

    2005-01-01

    ... for real-time control implementation. In this paper, we develop algorithms employing lookup tables which permit the high speed implementation of formulations which incorporate relaxation mechanisms and electromechanical coupling...

  12. An Automatic Registration Algorithm for 3D Maxillofacial Model

    Science.gov (United States)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm achieves good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
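    A minimal sketch of step (3), the ICP refinement: nearest-neighbour correspondences followed by the SVD (Kabsch) solution for the best rigid transform. Steps (1)-(2), the 3D-SIFT/FPFH feature matching that supplies the coarse initial alignment, are assumed to have run already; `src` and `dst` are Nx3 point arrays.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    # Least-squares rotation and translation mapping src onto dst (Kabsch).
    cs, cd = src.mean(0), dst.mean(0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                 # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    return r, cd - r @ cs

def icp(src, dst, iters=50):
    tree = cKDTree(dst)                      # fast closest-point lookups
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)             # closest-point correspondences
        r, t = best_rigid(cur, dst[idx])
        cur = cur @ r.T + t                  # apply the rigid update
    return cur
```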

  13. Model Checking Algorithms for CTMDPs

    DEFF Research Database (Denmark)

    Buchholz, Peter; Hahn, Ernst Moritz; Hermanns, Holger

    2011-01-01

    Continuous Stochastic Logic (CSL) can be interpreted over continuoustime Markov decision processes (CTMDPs) to specify quantitative properties of stochastic systems that allow some external control. Model checking CSL formulae over CTMDPs requires then the computation of optimal control strategie...

  14. An Analysis of Audio Features to Develop a Human Activity Recognition Model Using Genetic Algorithms, Random Forests, and Neural Networks

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2016-01-01

    Full Text Available This work presents a human activity recognition (HAR) model based on audio features. The use of sound as an information source for HAR models represents a challenge because sound wave analyses generate very large amounts of data. However, feature selection techniques may reduce the amount of data required to represent an audio signal sample. Some of the audio features analyzed include Mel-frequency cepstral coefficients (MFCC). Although MFCC are commonly used in voice and instrument recognition, their utility within HAR models is yet to be confirmed, and this work validates their usefulness. Additionally, statistical features were extracted from the audio samples to generate the proposed HAR model. The amount of information needed to build a HAR model has a direct impact on the accuracy of the model. This problem was also tackled in the present work; our results indicate that the proposed HAR model recognizes human activities with an accuracy of 85%. This means that minimal computational cost is needed, thus allowing portable devices to identify human activities using audio as an information source.
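    A minimal sketch of the feature pipeline the model rests on: MFCC plus simple statistical descriptors per audio clip, fed to a classifier. The file list and labels are placeholders, and a random forest stands in for the paper's genetic-algorithm-selected networks for brevity.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path):
    y, sr = librosa.load(path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # 13 x n_frames
    stats = [y.mean(), y.std(), np.abs(y).max()]           # statistical features
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1), stats])

# Placeholder file names and labels; replace with a real labeled corpus.
paths = ["walk_01.wav", "cook_01.wav"]
labels = ["walking", "cooking"]

X = np.array([clip_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
```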

  15. Rethinking exchange market models as optimization algorithms

    Science.gov (United States)

    Luquini, Evandro; Omar, Nizam

    2018-02-01

    The exchange market model has mainly been used to study the inequality problem. Although the problem of inequality in human society is very important, the dynamics of exchange market models up to the stationary state, and their capability of ranking individuals, are interesting in themselves. This study considers the hypothesis that the exchange market model can be understood as an optimization procedure. We present herein the implications for algorithmic optimization and also the possibility of a new family of exchange market models.

  16. Fuzzy audit risk modeling algorithm

    Directory of Open Access Journals (Sweden)

    Zohreh Hajihaa

    2011-07-01

    Full Text Available Fuzzy logic has created suitable mathematics for making decisions in uncertain environments, including professional judgments. One such situation is the assessment of auditee risks. During recent years, risk-based audit (RBA) has been regarded as one of the main tools to fight fraud. The main issue in RBA is to determine the overall audit risk an auditor accepts, which impacts the efficiency of an audit. The primary objective of this research is to redesign the audit risk model (ARM) proposed by auditing standards. The proposed model of this paper uses fuzzy inference systems (FIS) based on the judgments of audit experts. The implementation of the proposed fuzzy technique uses triangular fuzzy numbers to express the inputs, and the Mamdani method along with the center of gravity is incorporated for defuzzification. The proposed model uses three FISs for audit, inherent and control risks, and there are five levels of linguistic variables for the outputs. The FISs include 25, 25 and 81 if-then rules, respectively, and Iranian audit experts confirmed all the rules.
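    A minimal sketch of one such FIS using the scikit-fuzzy package: triangular membership functions, Mamdani-style rules, and centroid (center-of-gravity) defuzzification. The universes, labels, and the two example rules are illustrative; the paper's rule bases contain 25/25/81 expert-derived rules, and the exact membership shapes are assumptions.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

inherent = ctrl.Antecedent(np.linspace(0, 1, 101), "inherent_risk")
control_ = ctrl.Antecedent(np.linspace(0, 1, 101), "control_risk")
audit = ctrl.Consequent(np.linspace(0, 1, 101), "audit_risk")  # centroid by default

for var in (inherent, control_, audit):
    var["low"] = fuzz.trimf(var.universe, [0.0, 0.0, 0.5])     # triangular MFs
    var["medium"] = fuzz.trimf(var.universe, [0.2, 0.5, 0.8])
    var["high"] = fuzz.trimf(var.universe, [0.5, 1.0, 1.0])

rules = [
    ctrl.Rule(inherent["high"] & control_["high"], audit["high"]),
    ctrl.Rule(inherent["low"] & control_["low"], audit["low"]),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["inherent_risk"] = 0.7
sim.input["control_risk"] = 0.6
sim.compute()                     # Mamdani inference + centroid defuzzification
print(sim.output["audit_risk"])
```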

  17. Optimisation of Transfer Function Models using Genetic Algorithms ...

    African Journals Online (AJOL)

    In order to obtain an optimum transfer function estimate, open-source software based on a genetic algorithm was developed. The software was developed in the Visual Basic programming language. In order to test the software, a transfer function model was developed from data obtained from industry. The forecast obtained ...

  18. Survey of chemically amplified resist models and simulator algorithms

    Science.gov (United States)

    Croffie, Ebo H.; Yuan, Lei; Cheng, Mosong; Neureuther, Andrew R.

    2001-08-01

    Modeling has become an indispensable tool for chemically amplified resist (CAR) evaluations. It has been used extensively to study acid diffusion and its effects on resist image formation. Several commercial and academic simulators have been developed for CAR process simulation. Commercial simulators such as PROLITH (Finle Technologies) and Solid-C (Sigma-C) allow the user to choose between an empirical model and a concentration-dependent diffusion model. The empirical model is faster but not very accurate for 2-dimensional resist simulations; in this case there is a trade-off between the speed of the simulator and the accuracy of the results. An academic simulator such as STORM (U.C. Berkeley) gives the user a choice of different algorithms, including a Fast Imaging 2nd-order finite difference algorithm and a Moving Boundary finite element algorithm. A user interested in simulating volume shrinkage and polymer stress effects during post-exposure bake will need the Moving Boundary algorithm, whereas a user interested in latent image formation without polymer deformation will find the Fast Imaging algorithm more appropriate. The Fast Imaging algorithm is generally faster and requires less computer memory. This choice of algorithm presents a trade-off between speed and level of detail in resist profile prediction. This paper surveys the different models and simulator algorithms available in the literature, including contributions to the characterization of CAR exposure and post-exposure bake (PEB) processes for different resist systems. Several numerical algorithms and their performance are also discussed in this paper.

  19. Model Checking Algorithms for Markov Reward Models

    NARCIS (Netherlands)

    Cloth, Lucia; Cloth, L.

    2006-01-01

    Model checking Markov reward models unites two different approaches of model-based system validation. On the one hand, Markov reward models have a long tradition in model-based performance and dependability evaluation. On the other hand, a formal method like model checking allows for the precise

  20. Worm algorithm for the CPN−1 model

    Directory of Open Access Journals (Sweden)

    Tobias Rindlisbacher

    2017-05-01

    Full Text Available The CPN−1 model in 2D is an interesting toy model for 4D QCD, as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost for simulating 2D CPN−1 on the lattice is much lower than that for simulating 4D QCD. However, to our knowledge, no efficient algorithm for simulating the lattice CPN−1 model for N>2 that also works at finite density has been tested so far. To this end we propose a new type of worm algorithm appropriate for simulating the lattice CPN−1 model in a dual, flux-variable based representation, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves, where a defect is just moved from one lattice site to the next, our algorithm also allows for worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CPN−1 lattice actions and exhibit marked differences in their approach to the continuum limit.

  1. Algorithms and Models for the Web Graph

    NARCIS (Netherlands)

    Gleich, David F.; Komjathy, Julia; Litvak, Nelli

    2015-01-01

    This volume contains the papers presented at WAW2015, the 12th Workshop on Algorithms and Models for the Web-Graph held during December 10–11, 2015, in Eindhoven. There were 24 submissions. Each submission was reviewed by at least one, and on average two, Program Committee members. The committee

  2. Efficient Parallel Algorithms for Landscape Evolution Modelling

    Science.gov (United States)

    Moresi, L. N.; Mather, B.; Beucher, R.

    2017-12-01

    Landscape erosion and the deposition of sediments by river systems are strongly controlled by topography, rainfall patterns, and the susceptibility of the basement to the action of running water. It is well understood that each of these processes depends on the others; for example: topography results from active tectonic processes; deformation, metamorphosis and exhumation alter the competence of the basement; rainfall patterns depend on topography; and uplift and subsidence in response to tectonic stress can be amplified by erosion and sediment deposition. We typically gain understanding of such coupled systems through forward models which capture the essential interactions of the various components and attempt to parameterise those parts of the individual systems that are unresolvable at the scale of the interaction. Here we address the problem of predicting erosion and deposition rates at a continental scale, with a resolution of tens to hundreds of metres, in a dynamic, Lagrangian framework. This is a typical requirement for a code that interfaces with a mantle/lithosphere dynamics model, and it demands an efficient, unstructured, parallel implementation. We address this through a very general algorithm that treats all parts of the landscape evolution equations in sparse-matrix form, including those for stream-flow accumulation, dam-filling and catchment determination. This gives us considerable flexibility in developing unstructured, parallel code, and in creating a modular package that can be configured by users to work at different temporal and spatial scales, and also has potential advantages in treating the non-linear parts of the problem in a general manner.
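    A minimal sketch of the sparse-matrix formulation mentioned above: if cell i drains into cell d(i), stream-flow accumulation can be written as the sparse linear system (I - D^T) q = r, where D is the downstream-adjacency matrix and r the local runoff. The tiny hand-coded drainage pattern here is purely illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 6
downstream = np.array([1, 2, 5, 2, 3, -1])   # d(i); -1 marks the outlet cell
rows = np.flatnonzero(downstream >= 0)
D = sp.csr_matrix((np.ones(rows.size), (rows, downstream[rows])), shape=(n, n))

r = np.ones(n)                                # unit runoff in every cell
q = spsolve((sp.identity(n) - D.T).tocsr(), r)
print(q)   # accumulated flow; the outlet (cell 5) collects all six units
```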

  3. 2D and 3D simulation of cavitating flows: development of an original algorithm in code Saturne and study of the influence of turbulence modeling

    International Nuclear Information System (INIS)

    Chebli, Rezki

    2014-01-01

    Cavitation is one of the most demanding physical phenomena influencing the performance of hydraulic machines. It is therefore important to predict correctly its inception and development, in order to quantify the performance drop it induces, and also to characterize the resulting flow instabilities. The aim of this work is to develop an unsteady 3D algorithm for the numerical simulation of cavitation in an industrial CFD solver 'Code Saturne'. It is based on a fractional step method and preserves the minimum/maximum principle of the void fraction. An implicit solver, based on a transport equation of the void fraction coupled with the Navier-Stokes equations is proposed. A specific numerical treatment of the cavitation source terms provides physical values of the void fraction (between 0 and 1) without including any artificial numerical limitation. The influence of RANS turbulence models on the simulation of cavitation on 2D geometries (Venturi and Hydrofoil) is then studied. It confirms the capability of the two-equation eddy viscosity models, k-epsilon and k-omega-SST, with the modification proposed by Reboud et al. (1998) to reproduce the main features of the unsteady sheet cavity behavior. The second order model RSM-SSG, based on the Reynolds stress transport, appears able to reproduce the highly unsteady flow behavior without including any arbitrary modification. The three-dimensional effects involved in the instability mechanisms are also analyzed. This work allows us to achieve a numerical tool, validated on complex configurations of cavitating flows, to improve the understanding of the physical mechanisms that control the three-dimensional unsteady effects involved in the mechanisms of instability. (author)

  4. Tactical weapons algorithm development for unitary and fused systems

    Science.gov (United States)

    Talele, Sunjay E.; Watson, John S.; Williams, Bradford D.; Amphay, Sengvieng A.

    1996-06-01

    A much-needed capability in today's tactical Air Force is weapons systems capable of precision guidance in all weather conditions against targets in high-clutter backgrounds. To achieve this capability, the Armament Directorate of Wright Laboratory, WL/MN, has been exploring various seeker technologies, including multi-sensor fusion, that may yield cost-effective systems capable of operating under these conditions. A critical component of these seeker systems is their autonomous acquisition and tracking algorithms. It is these algorithms which will enable the autonomous operation of the weapons systems in the battlefield. In the past, a majority of tactical weapon algorithms were developed in a manner which resulted in codes that were not releasable to the community, either because they were considered company proprietary or competition sensitive. As a result, the knowledge gained from these efforts did not transition through the technical community, thereby inhibiting the evolution of their development. In order to overcome this limitation, WL/MN has embarked upon a program to develop non-proprietary multi-sensor acquisition and tracking algorithms. To facilitate this development, a testbed has been constructed consisting of the Irma signature prediction model, data analysis workstations, and the modular algorithm concept evaluation tool (MACET). All three of these components have been enhanced to accommodate both multi-spectral sensor fusion systems and the three-dimensional signal processing techniques characteristic of ladar. MACET is a graphical-interface-driven system for rapid prototyping and evaluation of both unitary and fused sensor algorithms. This paper describes the MACET system and specifically elaborates on the three-dimensional capabilities recently incorporated into it.

  5. Modeling of Nonlinear Systems using Genetic Algorithm

    Science.gov (United States)

    Hayashi, Kayoko; Yamamoto, Toru; Kawada, Kazuo

    In this paper, a new modeling method using a Genetic Algorithm (GA) is proposed. The GA is an evolutionary computational method that simulates the mechanisms of heredity and the evolution of living things, and it is utilized in optimization and in searching for optimal solutions. Most process systems have nonlinearities, so it is necessary to predict the behavior of such systems precisely. However, it is difficult to build a suitable model for nonlinear systems, because most nonlinear systems have a complex structure. The newly proposed modeling method for nonlinear systems therefore uses the GA. According to the proposed scheme, the optimal structure and parameters of the nonlinear model are generated automatically.

  6. DIDACTIC TOOLS FOR THE STUDENTS’ ALGORITHMIC THINKING DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    T. P. Pushkaryeva

    2017-01-01

    Full Text Available Introduction. Modern engineers must possess a high potential of cognitive abilities, in particular, algorithmic thinking (AT). In this regard, the training of future experts (university graduates of technical specialities) has to provide knowledge of the principles and ways of designing various algorithms, and the ability to analyze them and to choose the most optimal variants for implementing engineering activity. For the full formation of AT skills, it is necessary to consider all channels of psychological perception and cogitative processing of educational information: visual, auditory, and kinesthetic. The aim of the present research is the theoretical basis of the design, development and use of resources for the successful development of AT during the educational process of training in programming. Methodology and research methods. The methodology of the research involves basic theses of cognitive psychology and the information approach to organizing the educational process. The research used the following methods: analysis; modeling of cognitive processes; design of training tools that take into account the mentality and peculiarities of information perception; and diagnostics of the efficiency of the didactic tools. Results. A three-level model for training future engineers in programming, aimed at the development of AT skills, was developed. The model includes three components: aesthetic, simulative, and conceptual. Stages of mastering a new discipline are identified. It is proved that for the development of AT skills when training in programming it is necessary to use kinesthetic tools at the stage of mental algorithmic map formation, and algorithmic animation and algorithmic mental maps at the stage of algorithmic model and conceptual image formation. Kinesthetic tools for the development of students’ AT skills when training in algorithmization and programming were designed. The use of kinesthetic training simulators in the educational process provides for the effective development of an algorithmic style of

  7. Algorithm Development Library for Environmental Satellite Missions

    Science.gov (United States)

    Smith, D. C.; Grant, K. D.; Miller, S. W.; Jamilkowski, M. L.

    2012-12-01

    science will need to migrate into the operational system. In addition, as new techniques are found to improve, supplement, or replace existing products, these changes will also require implementation into the operational system. In the past, operationalizing science algorithms and integrating them into active systems often required months of work. In order to significantly shorten the time and effort required for this activity, Raytheon has developed the Algorithm Development Library (ADL). The ADL enables scientists and researchers to develop algorithms on their own platforms and provide them to Raytheon in a form that can be rapidly integrated directly into the operational baseline. As the JPSS CGS is a multi-mission ground system, algorithms are not restricted to Suomi NPP or JPSS missions. The ADL provides a development environment that any environmental remote sensing mission scientist can use to create algorithms that will plug into a JPSS CGS instantiation. This paper describes the ADL and how scientists and researchers can use it in their own environments.

  8. Development of target-tracking algorithms using neural network

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dong Sun; Lee, Joon Whaoan; Yoon, Sook; Baek, Seong Hyun; Lee, Myung Jae [Chonbuk National University, Chonjoo (Korea)

    1998-04-01

    The utilization of remote-control robot systems in atomic power plants or nuclear-related facilities is growing rapidly, to protect workers from high-radiation environments. Such applications require complete stability of the robot system, so precisely tracking the robot is essential for the whole system. This research accomplishes that goal by developing appropriate algorithms for remote-control robot systems. A neural network tracking system is designed and tested to trace a robot endpoint. This model aims to utilize the excellent capabilities of neural networks: nonlinear mapping between inputs and outputs, learning capability, and generalization capability. The neural tracker consists of two networks, for position detection and prediction. Tracking algorithms are developed and tested for the two models. Results of the experiments show that both models are promising as real-time target-tracking systems for remote-control robot systems. (author). 10 refs., 47 figs.

  9. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  10. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    Full Text Available The Blowfish algorithm is a strong, simple block cipher that encrypts data in 64-bit blocks. The key and S-box generation process in this algorithm requires considerable time and memory space, which makes the algorithm inconvenient for smart cards or applications that require changing the secret key frequently. In this paper, a new key and S-box generation process was developed based on the Self-Synchronization Stream Cipher (SSS) algorithm, whose key generation process was modified to be used with the Blowfish algorithm. Test results show that the new generation process requires comparatively little time and a reasonably low amount of memory, which enhances the algorithm and gives it the possibility of different usages.
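    A minimal sketch of the paper's idea: instead of Blowfish's slow iterative key schedule, fill the 18-entry P-array and the four 256-entry S-boxes directly from a keyed stream generator. The SSS cipher's internals are out of scope here, so a SHA-256 counter-mode keystream stands in for it; this stand-in is an assumption, not the paper's construction.

```python
import hashlib
import struct

def keystream_words(key: bytes, n_words: int):
    # Hypothetical stand-in for the SSS keystream: SHA-256(key || counter).
    words, counter = [], 0
    while len(words) < n_words:
        block = hashlib.sha256(key + struct.pack(">Q", counter)).digest()
        words.extend(struct.unpack(">8I", block))   # eight 32-bit words per block
        counter += 1
    return words[:n_words]

def generate_subkeys(key: bytes):
    stream = keystream_words(key, 18 + 4 * 256)
    p_array = stream[:18]                                       # 18 x 32-bit subkeys
    s_boxes = [stream[18 + 256 * i: 18 + 256 * (i + 1)] for i in range(4)]
    return p_array, s_boxes

P, S = generate_subkeys(b"frequently-changed secret key")
```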

  11. Modelling Evolutionary Algorithms with Stochastic Differential Equations.

    Science.gov (United States)

    Heredia, Jorge Pérez

    2017-11-20

    There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that these are especially suitable for the analysis of fixed budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.
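    A minimal sketch of the modelling idea: track an EA's distance to the optimum X_t with a stochastic differential equation dX = -delta X dt + sigma X dW (a multiplicative-drift analogue) and integrate it by Euler-Maruyama. The drift and noise coefficients are illustrative, not derived from a specific algorithm.

```python
import numpy as np

rng = np.random.default_rng(8)

delta, sigma, dt, steps, runs = 0.05, 0.1, 1.0, 400, 1000
X = np.full(runs, 100.0)                       # initial distance to the optimum
for _ in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal(runs)
    X += -delta * X * dt + sigma * X * dW      # Euler-Maruyama update
    X = np.maximum(X, 0.0)                     # distance cannot go negative

# Fixed-budget comparison: empirical mean vs the drift-only prediction X0*exp(-delta*t).
print(X.mean(), 100.0 * np.exp(-delta * dt * steps))
```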

  12. Development of GPT-based optimization algorithm

    International Nuclear Information System (INIS)

    White, J.R.; Chapman, D.M.; Biswas, D.

    1985-01-01

    The University of Lowell and Westinghouse Electric Corporation are involved in a joint effort to evaluate the potential benefits of generalized/depletion perturbation theory (GPT/DPT) methods for a variety of light water reactor (LWR) physics applications. One part of that work has focused on the development of a GPT-based optimization algorithm for the overall design, analysis, and optimization of LWR reload cores. The use of GPT sensitivity data in formulating the fuel management optimization problem is conceptually straightforward; it is the actual execution of the concept that is challenging. Thus, the purpose of this paper is to address some of the major difficulties, to outline our approach to these problems, and to present some illustrative examples of an efficient GPT-based optimization scheme

  13. Sparse modeling theory, algorithms, and applications

    CERN Document Server

    Rish, Irina

    2014-01-01

    ""A comprehensive, clear, and well-articulated book on sparse modeling. This book will stand as a prime reference to the research community for many years to come.""-Ricardo Vilalta, Department of Computer Science, University of Houston""This book provides a modern introduction to sparse methods for machine learning and signal processing, with a comprehensive treatment of both theory and algorithms. Sparse Modeling is an ideal book for a first-year graduate course.""-Francis Bach, INRIA - École Normale Supřieure, Paris

  14. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    Science.gov (United States)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    Algorithm Development Library (ADL) is a framework that mimics the operational system IDPS (Interface Data Processing Segment) that is currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results by qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  15. Developing Information Power Grid Based Algorithms and Software

    Science.gov (United States)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.

  16. Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach

    Science.gov (United States)

    Stocker, Erich Franz

    2009-01-01

    This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).

  17. Computational Fluid Dynamics. [numerical methods and algorithm development

    Science.gov (United States)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are discussed, along with examples of results obtained with the most recent algorithm developments.

  18. Development of hybrid artificial intelligent based handover decision algorithm

    Directory of Open Access Journals (Sweden)

    A.M. Aibinu

    2017-04-01

    Full Text Available The possibility of seamless handover remains a mirage despite the plethora of existing handover algorithms. The underlying factor responsible for this has been traced to the handover decision module in the handover process. Hence, in this paper a novel hybrid artificial-intelligence handover decision algorithm has been developed. The developed model is a hybrid of an Artificial Neural Network (ANN) based prediction model and Fuzzy Logic. On accessing the network, the Received Signal Strength (RSS) was acquired over a period of time to form a time series. The data were then fed to the newly proposed k-step-ahead ANN-based RSS prediction system for estimation of the prediction model coefficients. The synaptic weights and adaptive coefficients of the trained ANN were then used to compute the k-step-ahead ANN-based RSS prediction model coefficients. The predicted RSS value was then codified as fuzzy sets and, in conjunction with other measured network parameters, fed into the fuzzy logic controller in order to finalize the handover decision process. The performance of the newly developed k-step-ahead ANN-based RSS prediction algorithm was evaluated using simulated and real data acquired from available mobile communication networks. Results obtained in both cases show that the proposed algorithm is capable of predicting the RSS value ahead to within about ±0.0002 dB. The cascaded effect of the complete handover decision module was also evaluated; results show that the newly proposed hybrid approach was able to reduce the ping-pong effect associated with other handover techniques.
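
    A compact sketch of the pipeline (an AR least-squares predictor and a crisp threshold rule stand in for the paper's ANN and fuzzy controller; the threshold values are illustrative):

    ```python
    import numpy as np

    def fit_ar_predictor(rss, order=4):
        """Least-squares AR(order) fit of the RSS time series."""
        X = np.array([rss[i:i + order] for i in range(len(rss) - order)])
        y = np.array(rss[order:])
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    def predict_k_steps(rss, coeffs, k):
        """Iterate the one-step predictor k times for a k-step-ahead value."""
        window = list(rss[-len(coeffs):])
        for _ in range(k):
            window.append(float(np.dot(coeffs, window[-len(coeffs):])))
        return window[-1]

    def handover_decision(predicted_rss, threshold=-90.0, hysteresis=3.0):
        """Threshold-plus-hysteresis rule in place of the fuzzy controller."""
        if predicted_rss < threshold - hysteresis:
            return "handover"
        return "prepare" if predicted_rss < threshold else "stay"

    rss = [-80.0 - 0.1 * t for t in range(50)]   # synthetic decaying RSS trace
    coeffs = fit_ar_predictor(rss)
    print(handover_decision(predict_k_steps(rss, coeffs, k=5)))
    ```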

  19. Algorithms for Optimal Model Distributions in Adaptive Switching Control Schemes

    Directory of Open Access Journals (Sweden)

    Debarghya Ghosh

    2016-03-01

    Full Text Available Several multiple model adaptive control architectures have been proposed in the literature. Despite many advances in theory, the crucial question of how to synthesize the model/controller pairs in a structurally optimal way is to a large extent not addressed. In particular, it is not clear how to place the model/controller pairs in such a way that the properties of the switching algorithm (e.g., number of switches, learning transient, final performance) are optimal with respect to some criteria. In this work, we focus on the so-called multi-model unfalsified adaptive supervisory switching control (MUASSC) scheme; we define a suitable structural optimality criterion and develop algorithms for synthesizing the model/controller pairs in such a way that they are optimal with respect to the structural optimality criterion we defined. The peculiarity of the proposed optimality criterion and algorithms is that the optimization is carried out so as to optimize the entire behavior of the adaptive algorithm, i.e., both the learning transient and the steady-state response. A comparison is made with respect to the model distribution of the robust multiple model adaptive control (RMMAC), where the optimization considers only the steady-state ideal response and neglects any learning transient.

  20. Remote System for Development, Implementation and Testing of Control Algorithms

    Directory of Open Access Journals (Sweden)

    Milan Matijevic

    2007-02-01

    Full Text Available Education in the field of automatic control requires adequate practice on real systems for a better and fuller understanding of control theory. Experimenting on real models developed exclusively for the purpose of education is the most adequate way of gaining the necessary experience, and traditionally it requires physical presence in the laboratories where the equipment is installed. Remote access to control-systems laboratories is therefore a necessary precondition and support for the implementation of e-learning in the area of control engineering. The main feature of the developed system is support for the development, implementation and testing of user-defined control algorithms within a remote controller laboratory. Users can define a control algorithm in a conventional programming language and test it using this remote system.

  1. Introduction to genetic algorithms as a modeling tool

    International Nuclear Information System (INIS)

    Wildberger, A.M.; Hickok, K.A.

    1990-01-01

    Genetic algorithms are search and classification techniques modeled on natural adaptive systems. This is an introduction to their use as a modeling tool with emphasis on prospects for their application in the power industry. It is intended to provide enough background information for its audience to begin to follow technical developments in genetic algorithms and to recognize those that might impact electric power engineering. Beginning with a discussion of genetic algorithms and their origin as a model of biological adaptation, their advantages and disadvantages are described in comparison with other modeling tools such as simulation and neural networks in order to provide guidance in selecting appropriate applications. In particular, their use is described for improving expert systems from actual data and they are suggested as an aid in building mathematical models. Using the Thermal Performance Advisor as an example, it is suggested how genetic algorithms might be used to make a conventional expert system and mathematical model of a power plant adapt automatically to changes in the plant's characteristics.
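
    For readers new to the technique, a textbook GA loop looks like the following (a generic illustration on the OneMax toy problem, not code from the paper):

    ```python
    import random

    def genetic_algorithm(fitness, n_bits=20, pop_size=30,
                          generations=100, p_mut=0.02, seed=0):
        """Tournament selection, one-point crossover, bitwise mutation."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=fitness, reverse=True)
            next_pop = ranked[:2]                    # elitism: keep best two
            while len(next_pop) < pop_size:
                a = max(rng.sample(ranked, 3), key=fitness)
                b = max(rng.sample(ranked, 3), key=fitness)
                cut = rng.randrange(1, n_bits)
                child = a[:cut] + b[cut:]            # one-point crossover
                child = [bit ^ (rng.random() < p_mut) for bit in child]
                next_pop.append(child)
            pop = next_pop
        return max(pop, key=fitness)

    best = genetic_algorithm(fitness=sum)            # OneMax: maximize the bit sum
    ```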

  2. Link mining models, algorithms, and applications

    CERN Document Server

    Yu, Philip S; Faloutsos, Christos

    2010-01-01

    This book presents in-depth surveys and systematic discussions on models, algorithms and applications for link mining. Link mining is an important field of data mining. Traditional data mining focuses on 'flat' data in which each data object is represented as a fixed-length attribute vector. However, many real-world data sets are much richer in structure, involving objects of multiple types that are related to each other. Hence, link mining has recently become an emerging field of data mining, with high impact in various important applications such as text mining and social network analysis.

  3. Genetic Algorithms Principles Towards Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2011-10-01

    Full Text Available In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMMs). The problem appears when experts assign probability values for an HMM using only some limited inputs: the assigned probability values might not be accurate enough to serve in other cases related to the same domain. We introduce an approach based on GAs to find suitable probability values for the HMM so that it is correct in more cases than those that were used to assign the original probability values.
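
    The key constraint in such an approach is that every candidate HMM must keep its rows stochastic. A minimal sketch of a GA mutation operator that respects this (an assumed implementation detail, not taken from the paper):

    ```python
    import random

    def mutate_stochastic_matrix(matrix, rate=0.1, rng=random):
        """Perturb each probability, clamp to positive, and re-normalize
        each row so it remains a probability distribution."""
        mutated = []
        for row in matrix:
            row = [max(p + rng.uniform(-rate, rate), 1e-6) for p in row]
            total = sum(row)
            mutated.append([p / total for p in row])
        return mutated

    transitions = [[0.7, 0.3], [0.4, 0.6]]           # toy 2-state HMM
    print(mutate_stochastic_matrix(transitions))
    ```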

  4. Developing and Implementing the Data Mining Algorithms in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Ramazan Sonat [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Daniel Patrick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data, and post-processing and analyzing such data might, in some cases, take longer than the initial software runtime. Data mining algorithms and methods help in recognizing and understanding patterns in the data, and thus in discovering knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics code models the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is analyzing the large number of scenarios generated. Data mining techniques are typically used to better organize and understand the data, i.e. to recognize patterns in it. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and on the application of these algorithms to different databases.

  5. Developing and Implementing the Data Mining Algorithms in RAVEN

    International Nuclear Information System (INIS)

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian

    2015-01-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data, and post-processing and analyzing such data might, in some cases, take longer than the initial software runtime. Data mining algorithms and methods help in recognizing and understanding patterns in the data, and thus in discovering knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics code models the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is analyzing the large number of scenarios generated. Data mining techniques are typically used to better organize and understand the data, i.e. to recognize patterns in it. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and on the application of these algorithms to different databases.

  6. Development of a versatile algorithm for optimization of radiation therapy

    International Nuclear Information System (INIS)

    Gustafsson, Anders.

    1996-12-01

    A flexible iterative gradient algorithm for radiation therapy optimization has been developed. The algorithm is based on dose calculation using the pencil-beam description of external radiation beams in uniform and heterogeneous patients. The properties of the algorithm are described, including its ability to treat variable bounds and linear constraints, its efficiency in gradient calculation, its convergence properties and termination criteria. 116 refs

  7. Models and Algorithms for Tracking Target with Coordinated Turn Motion

    Directory of Open Access Journals (Sweden)

    Xianghui Yuan

    2014-01-01

    Full Text Available Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: the coordinated turn (CT) model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver centered circular motion model. Then, in the single model tracking framework, the tracking algorithms for the last four models are compared and suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple models (MM) framework, the algorithm based on the expectation maximization (EM) algorithm is derived, including both the batch form and the recursive form. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.
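
    For reference, the CT model with known turn rate has a closed-form state transition matrix; a sketch in its standard textbook form for the state [x, vx, y, vy] (assuming a nonzero turn rate, with the omega -> 0 limit omitted):

    ```python
    import numpy as np

    def ct_transition(omega, dt):
        """Coordinated-turn transition matrix for state [x, vx, y, vy]
        with known turn rate omega and sampling interval dt."""
        s, c = np.sin(omega * dt), np.cos(omega * dt)
        return np.array([
            [1.0, s / omega,         0.0, -(1.0 - c) / omega],
            [0.0, c,                 0.0, -s],
            [0.0, (1.0 - c) / omega, 1.0, s / omega],
            [0.0, s,                 0.0, c],
        ])

    x_next = ct_transition(omega=0.1, dt=1.0) @ np.array([0.0, 10.0, 0.0, 5.0])
    ```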

  8. Potts-model grain growth simulations: Parallel algorithms and applications

    Energy Technology Data Exchange (ETDEWEB)

    Wright, S.A.; Plimpton, S.J.; Swiler, T.P. [and others

    1997-08-01

    Microstructural morphology and grain boundary properties often control the service properties of engineered materials. This report uses the Potts model to simulate the development of microstructures in realistic materials. Three areas of microstructural morphology simulations were studied: the development of massively parallel algorithms for Potts-model grain growth simulations, modeling of mass transport via diffusion in these simulated microstructures, and the development of a gradient-dependent Hamiltonian to simulate columnar grain growth. Potts grain growth models for massively parallel supercomputers were developed for the conventional Potts model in both two and three dimensions. Simulations using these parallel codes showed self-similar grain growth and no finite size effects for previously unapproachable large-scale problems. In addition, new enhancements to the conventional Metropolis algorithm used in the Potts model were developed to accelerate the calculations. These techniques enable both the sequential and parallel algorithms to run faster and to use essentially an infinite number of grain orientation values to avoid non-physical grain coalescence events. Mass transport phenomena in polycrystalline materials were studied in two dimensions using numerical diffusion techniques on microstructures generated using the Potts model. The results of the mass transport modeling showed excellent quantitative agreement with one-dimensional diffusion problems; however, the results also suggest that transient multi-dimensional diffusion effects cannot be parameterized as the product of the grain boundary diffusion coefficient and the grain boundary width. Instead, both properties are required. Gradient-dependent grain growth mechanisms were included in the Potts model by adding an extra term to the Hamiltonian. Under normal grain growth, the primary driving term is the curvature of the grain boundary, which is included in the standard Potts-model Hamiltonian.
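
    The core of the conventional (serial) Potts-model simulation is a Metropolis spin-flip loop; a minimal 2D sketch with periodic boundaries (illustrative, not the report's parallel implementation):

    ```python
    import math
    import random

    def metropolis_sweep(lattice, q, T=0.0, rng=random):
        """One Metropolis sweep of the 2D Potts model: at T=0 only
        energy-lowering or neutral reorientations are accepted, which
        drives curvature-driven grain growth."""
        n = len(lattice)
        for _ in range(n * n):
            i, j = rng.randrange(n), rng.randrange(n)
            old, new = lattice[i][j], rng.randrange(q)
            nbrs = [lattice[(i + 1) % n][j], lattice[(i - 1) % n][j],
                    lattice[i][(j + 1) % n], lattice[i][(j - 1) % n]]
            # Potts energy counts unlike neighbours
            d_e = sum(s != new for s in nbrs) - sum(s != old for s in nbrs)
            if d_e <= 0 or (T > 0 and rng.random() < math.exp(-d_e / T)):
                lattice[i][j] = new

    grid = [[random.randrange(50) for _ in range(64)] for _ in range(64)]
    for _ in range(100):
        metropolis_sweep(grid, q=50)
    ```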

  9. Development of an algorithm for controlling a multilevel three-phase converter

    Science.gov (United States)

    Taissariyeva, Kyrmyzy; Ilipbaeva, Lyazzat

    2017-08-01

    This work is devoted to the development of an algorithm for controlling the transistors in a three-phase multilevel conversion system. The developed algorithm organizes correct operation and describes the state of the transistors at each moment of time when constructing a computer model of a three-phase multilevel converter. The developed transistor-control algorithm ensures the in-phase operation of the three-phase converter and yields a sinusoidal voltage curve at the converter output.

  10. An Interactive Personalized Recommendation System Using the Hybrid Algorithm Model

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2017-10-01

    Full Text Available With the rapid development of e-commerce, the contradiction between the disorder of business information and customer demand is increasingly prominent. This study aims to make e-commerce shopping more convenient and to avoid information overload through an interactive personalized recommendation system using a hybrid algorithm model. The proposed model first uses various recommendation algorithms to obtain a list of original recommendation results. Combined with the customer's feedback in an interactive manner, it then establishes the weights of the corresponding recommendation algorithms. Finally, the synthetic formula of evidence theory is used to fuse the original results to obtain the final recommended products. The recommendation performance of the proposed method is compared with that of traditional methods. The results of an experimental study on a Taobao online dress shop clearly show that the proposed method increases the efficiency of data mining in consumer coverage, consumer discovery accuracy and recommendation recall. The hybrid recommendation algorithm complements the advantages of the existing recommendation algorithms in data mining. The interactive assigned-weight method meets consumer demand better and alleviates the problem of information overload. Meanwhile, our study offers important implications for e-commerce platform providers regarding the design of product recommendation systems.
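
    A simplified sketch of the fusion step (a weighted sum over feedback-derived weights; the paper's actual combination uses the synthetic formula of evidence theory, which is not reproduced here):

    ```python
    def fuse_recommendations(results, weights, top_n=10):
        """Combine per-algorithm item scores with interactively learned
        algorithm weights and return the top-ranked items.

        results: {algorithm: {item_id: score}}; weights: {algorithm: w}.
        """
        fused = {}
        for algo, scores in results.items():
            for item, score in scores.items():
                fused[item] = fused.get(item, 0.0) + weights.get(algo, 0.0) * score
        return sorted(fused, key=fused.get, reverse=True)[:top_n]

    results = {"cf": {"dress_a": 0.9, "dress_b": 0.4},
               "content": {"dress_b": 0.8, "dress_c": 0.6}}
    print(fuse_recommendations(results, weights={"cf": 0.6, "content": 0.4}))
    ```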

  11. A tuning algorithm for model predictive controllers based on genetic algorithms and fuzzy decision making.

    Science.gov (United States)

    van der Lee, J H; Svrcek, W Y; Young, B R

    2008-01-01

    Model Predictive Control (MPC) is a valuable tool for the process control engineer in a wide variety of applications. Because of this, the structure of an MPC can vary dramatically from application to application. There have been a number of works dedicated to MPC tuning for specific cases. Since MPCs can differ significantly, these tuning methods often become inapplicable, and a trial-and-error tuning approach must be used, which can be quite time consuming and can result in non-optimal tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. This approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantage of this approach is that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC. In addition, multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, and can use multiple inputs to determine tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study is presented to illustrate the use of the tuning algorithm, including how different definitions of "optimum" control can arise and how they are accounted for in the multi-objective decision-making algorithm. The resulting tuning parameters from each of the definition sets are compared, showing that the tuning parameters vary in order to meet each definition of optimum control; this demonstrates that the generalized automated tuning approach for MPCs is feasible.

  12. Ischemic postconditioning: experimental models and protocol algorithms.

    Science.gov (United States)

    Skyschally, Andreas; van Caster, Patrick; Iliodromitis, Efstathios K; Schulz, Rainer; Kremastinos, Dimitrios T; Heusch, Gerd

    2009-09-01

    Ischemic postconditioning, a simple mechanical maneuver at the onset of reperfusion, reduces infarct size after ischemia/reperfusion. Since its first description in 2003 by Zhao et al., numerous experimental studies have investigated this protective phenomenon. Whereas the underlying mechanisms and signal transduction are not yet understood in detail, infarct size reduction by ischemic postconditioning has been confirmed in all species tested so far, including man. We have reviewed the literature with a focus on experimental models and protocols to better understand the determinants of protection by ischemic postconditioning, or the lack of it. Only studies with infarct size as an unequivocal endpoint were considered. In all species and models, the duration of index ischemia and the protective protocol algorithm impact the outcome of ischemic postconditioning, and gender, age, and myocardial temperature also contribute.

  13. Warehouse Optimization Model Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Guofeng Qin

    2013-01-01

    Full Text Available This paper takes the Bao Steel logistics automated warehouse system as an example. The premise is to keep the focus of the shelf below half of the shelf height. As a result, the time needed to store or retrieve goods on the shelf is reduced, and the distance between goods of the same kind is also reduced. A multiobjective optimization model is constructed and solved with a genetic algorithm, yielding a local optimal solution. Before optimization, the average time to store or retrieve goods is 4.52996 s, and the average distance between goods of the same kind is 2.35318 m. After optimization, the average time is 4.28859 s, and the average distance is 1.97366 m. From this analysis we can conclude that the model improves the efficiency of cargo storage.

  14. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  15. Adaptive numerical algorithms in space weather modeling

    Science.gov (United States)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-02-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit

  16. Adaptive numerical algorithms in space weather modeling

    International Nuclear Information System (INIS)

    Tóth, Gábor; Holst, Bart van der; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-01-01

    Space weather describes the various processes in the Sun–Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit

  17. Adaptive Numerical Algorithms in Space Weather Modeling

    Science.gov (United States)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; hide

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes.

  18. Engineering of Algorithms for Hidden Markov models and Tree Distances

    DEFF Research Database (Denmark)

    Sand, Andreas

    ... speed up all the classical algorithms for analyses and training of hidden Markov models. I show how two particularly important algorithms, the forward algorithm and the Viterbi algorithm, can be accelerated through a reformulation of the algorithms and a somewhat more complicated parallelization. ... Lastly, I show how hidden Markov models can be trained orders of magnitude faster on a given input by rethinking the forward algorithm such that it can automatically adapt itself to the input. Together, these optimizations have enabled us to perform analysis of full genomes in a few minutes and thereby ...
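
    As a reference point for what is being accelerated, the Viterbi algorithm in its standard log-space form (a generic textbook version, not the thesis's reformulated one):

    ```python
    def viterbi(obs, states, log_start, log_trans, log_emit):
        """Most probable state path for an observation sequence."""
        v = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
        backptr = []
        for o in obs[1:]:
            col, ptr = {}, {}
            for s in states:
                best = max(states, key=lambda r: v[-1][r] + log_trans[r][s])
                ptr[s] = best
                col[s] = v[-1][best] + log_trans[best][s] + log_emit[s][o]
            v.append(col)
            backptr.append(ptr)
        last = max(states, key=lambda s: v[-1][s])
        path = [last]
        for ptr in reversed(backptr):
            path.append(ptr[path[-1]])
        return list(reversed(path))
    ```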

  19. A genetic algorithm for solving supply chain network design model

    Science.gov (United States)

    Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.

    2013-09-01

    Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that, in addition to producing feasible solutions, also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.

  20. Genetic Algorithm Approaches to Prebiotic Chemistry Modeling

    Science.gov (United States)

    Lohn, Jason; Colombano, Silvano

    1997-01-01

    We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.

  1. Performance modeling of parallel algorithms for solving neutron diffusion problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1995-01-01

    Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of number of processors reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers
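
    The flavor of such a performance model can be captured in a few lines: per-iteration time is split into computation, which scales down with the processor count p, and communication, which does not (a toy sketch with illustrative coefficients, not the paper's fitted model):

    ```python
    def predicted_speedup(p, n_cells, t_comp=1.0e-6, t_comm=5.0e-4):
        """Speedup T(1)/T(p) when each iteration costs compute time
        proportional to the local mesh size plus a message-passing
        overhead that grows with the number of processors."""
        t_serial = n_cells * t_comp
        t_parallel = (n_cells / p) * t_comp + t_comm * p
        return t_serial / t_parallel

    for p in (2, 8, 32, 128):
        print(p, round(predicted_speedup(p, n_cells=100_000), 1))
    ```

    Even this toy model reproduces the qualitative behavior reported above: speedup grows for small p and then collapses as communication overhead dominates on message passing machines.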

  2. Immune System Model Calibration by Genetic Algorithm

    NARCIS (Netherlands)

    Presbitero, A.; Krzhizhanovskaya, V.; Mancini, E.; Brands, R.; Sloot, P.

    2016-01-01

    We aim to develop a mathematical model of the human immune system for advanced individualized healthcare, where the medication plan is fine-tuned to fit a patient's conditions through monitored biochemical processes. One of the challenges is calibrating the model parameters to satisfy existing experimental data.

  3. Software Piracy Detection Model Using Ant Colony Optimization Algorithm

    Science.gov (United States)

    Astiqah Omar, Nor; Zakuan, Zeti Zuryani Mohd; Saian, Rizauddin

    2017-06-01

    Internet enables information to be accessible anytime and anywhere. This creates an environment in which information can easily be copied. Easy access to the internet is one of the factors contributing to piracy in Malaysia as well as in the rest of the world. A survey conducted by the BSA Global Software Survey ("The Compliance Gap") in 2013 found that 43 percent of the software installed on PCs around the world was not properly licensed, and the commercial value of the unlicensed installations worldwide was reported to be $62.7 billion. Piracy can happen anywhere, including universities. Malaysia, like other countries in the world, faces the issue of piracy committed by university students. Piracy in universities concerns acts of stealing intellectual property. It can take the form of software piracy, music piracy, movie piracy, and piracy of intellectual materials such as books, articles and journals. This situation affects the owners of intellectual property, as their property is put in jeopardy. This study developed a classification model for detecting software piracy. The model was developed using a swarm intelligence algorithm called the Ant Colony Optimization algorithm. The training data were collected in a study conducted at Universiti Teknologi MARA (Perlis). Experimental results show that the model's detection accuracy is better than that of the J48 algorithm.

  4. A model of algorithmic representation of a business process

    Directory of Open Access Journals (Sweden)

    E. I. Koshkarova

    2014-01-01

    Full Text Available This article presents and justifies the possibility of developing a method for the estimation and optimization of enterprise business processes; the proposed method is based on the identity of two notions: an algorithm and a business process. The described method relies on extracting a recursive model from the business process, based on the example of one process automated by a BPM system, and on the further estimation and optimization of that process in accordance with estimation and optimization techniques applied to algorithms. The results of this investigation could be used by experts working in the field of reengineering of enterprise business processes, automation of business processes, and development of enterprise information systems.

  5. Model order reduction using eigen algorithm

    African Journals Online (AJOL)

    DR OKE

    ... to use either for design or analysis. Hence, it is ... directly from the Eigen algorithm, while the zeros are determined through the factor division algorithm to obtain the reduced order system. ...

  6. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    Science.gov (United States)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
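
    The common ancestor of all these schemes is the washout idea: pass acceleration onsets to the platform and wash out sustained accelerations. A first-order high-pass sketch of that baseline (illustrative only; the algorithms above use optimal-control formulations rather than this fixed filter):

    ```python
    def washout_highpass(accel, dt, tau=2.0):
        """Discrete first-order high-pass filter: transient inputs pass
        through, sustained inputs decay with time constant tau."""
        a = tau / (tau + dt)
        out, prev_in, prev_out = [], accel[0], 0.0
        for x in accel:
            prev_out = a * (prev_out + x - prev_in)
            prev_in = x
            out.append(prev_out)
        return out

    step_input = [0.0] * 10 + [1.0] * 90            # sustained 1 m/s^2 surge
    cues = washout_highpass(step_input, dt=0.1)
    print(round(cues[10], 3), round(cues[-1], 3))   # onset passed, cue washed out
    ```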

  7. Algorithms for Bayesian network modeling and reliability assessment of infrastructure systems

    International Nuclear Information System (INIS)

    Tien, Iris; Der Kiureghian, Armen

    2016-01-01

    Novel algorithms are developed to enable the modeling of large, complex infrastructure systems as Bayesian networks (BNs). These include a compression algorithm that significantly reduces the memory storage required to construct the BN model, and an updating algorithm that performs inference on compressed matrices. These algorithms address one of the major obstacles to widespread use of BNs for system reliability assessment, namely the exponentially increasing amount of information that needs to be stored as the number of components in the system increases. The proposed compression and inference algorithms are described and applied to example systems to investigate their performance compared to that of existing algorithms. Orders of magnitude savings in memory storage requirement are demonstrated using the new algorithms, enabling BN modeling and reliability analysis of larger infrastructure systems.
    Highlights:
    • Novel algorithms developed for Bayesian network modeling of infrastructure systems.
    • Algorithm presented to compress information in conditional probability tables.
    • Updating algorithm presented to perform inference on compressed matrices.
    • Algorithms applied to example systems to investigate their performance.
    • Orders of magnitude savings in memory storage requirement demonstrated.

  8. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection; it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems.

  9. High speed railway track dynamics models, algorithms and applications

    CERN Document Server

    Lei, Xiaoyan

    2017-01-01

    This book systematically summarizes the latest research findings on high-speed railway track dynamics, made by the author and his research team over the past decade. It explores cutting-edge issues concerning the basic theory of high-speed railways, covering the dynamic theories, models, algorithms and engineering applications of the high-speed train and track coupling system. Presenting original concepts, systematic theories and advanced algorithms, the book places great emphasis on the precision and completeness of its content. The chapters are interrelated yet largely self-contained, allowing readers to either read through the book as a whole or focus on specific topics. It also combines theories with practice to effectively introduce readers to the latest research findings and developments in high-speed railway track dynamics. It offers a valuable resource for researchers, postgraduates and engineers in the fields of civil engineering, transportation, highway & railway engineering.

  10. Comparison of evolutionary algorithms in gene regulatory network model inference.

    LENUS (Irish Health Repository)

    2010-01-01

    ABSTRACT: BACKGROUND: The evolution of high-throughput technologies that measure gene expression levels has created a database for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods exist for discovering qualitative causal relationships between genes with high accuracy from microarray data, but large-scale quantitative analysis on real biological datasets cannot be performed to date, as existing approaches are not suitable for real microarray data, which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and to offer a comprehensive comparison of approaches under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and their ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. CONCLUSIONS: Presented is a comparison framework for the assessment of evolutionary algorithms used to infer gene regulatory networks. Promising methods are identified and a platform for the development of appropriate model formalisms is established.

  11. Fireworks algorithm for mean-VaR/CVaR models

    Science.gov (United States)

    Zhang, Tingting; Liu, Zhifeng

    2017-10-01

    Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, named fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, fireworks algorithm not only improves the optimization accuracy and the optimization speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. It suggests that fireworks algorithm has more advantages than genetic algorithm in solving the portfolio optimization problem, and it is feasible and promising to apply it into this field.
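
    For concreteness, the risk measures inside the mean-VaR/CVaR objective can be estimated by historical simulation (a helper sketch under assumed conventions, with losses as negative returns; the fireworks search itself is not shown):

    ```python
    import numpy as np

    def var_cvar(returns, alpha=0.95):
        """Historical-simulation VaR and CVaR at confidence level alpha."""
        losses = -np.asarray(returns, dtype=float)
        var = np.quantile(losses, alpha)
        cvar = losses[losses >= var].mean()      # expected loss beyond VaR
        return var, cvar

    rng = np.random.default_rng(0)
    portfolio_returns = rng.normal(0.0005, 0.01, size=2500)
    print(var_cvar(portfolio_returns))
    ```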

  12. Data mining concepts models methods and algorithms

    CERN Document Server

    Kantardzic, Mehmed

    2011-01-01

    This book reviews state-of-the-art methodologies and techniques for analyzing enormous quantities of raw data in high-dimensional data spaces, to extract new information for decision making. The goal of this book is to provide a single introductory source, organized in a systematic way, in which we could direct the readers in analysis of large data sets, through the explanation of basic concepts, models and methodologies developed in recent decades.

  13. Development of an inter-layer solute transport algorithm for SOLTR computer program. Part 1. The algorithm

    International Nuclear Information System (INIS)

    Miller, I.; Roman, K.

    1979-12-01

    In order to perform studies of the influence of regional groundwater flow systems on the long-term performance of potential high-level nuclear waste repositories, it was determined that an adequate computer model would have to consider the full three-dimensional flow system. Golder Associates' SOLTR code, while three-dimensional, has an overly simple algorithm for simulating the passage of radionuclides from one aquifer to another above or below it. Part 1 of this report describes the algorithm developed to provide SOLTR with an improved capability for simulating interaquifer transport.

  14. "Updates to Model Algorithms & Inputs for the Biogenic ...

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 including these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN), and the simulations are evaluated against observations. This has resulted in improvements in model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA mission to protect human health and the environment. The AMAD research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.

  15. Crowd Behavior Algorithm Development for COMBAT XXI

    Science.gov (United States)

    2017-05-30

    ... forth between these two models in real time. This proof-of-concept is a demonstration of a crowd model that can communicate with a combat simulation ... human cognitive agents conform to the various 'social identities' to which they are exposed. An interesting treatment of crowds in the context of networks ... the degree to which different ethnicities within the crowd influence each individual in terms of interpersonal distances and the desire to follow.

  16. Development of morphing algorithms for Histfactory using information geometry

    Energy Technology Data Exchange (ETDEWEB)

    Bandyopadhyay, Anjishnu; Brock, Ian [University of Bonn (Germany); Cranmer, Kyle [New York University (United States)

    2016-07-01

    Many statistical analyses are based on likelihood fits. In any likelihood fit we try to incorporate all uncertainties, both systematic and statistical. We generally have distributions for the nominal and ±1 σ variations of a given uncertainty. Using that information, Histfactory morphs the distributions for any arbitrary value of the given uncertainties. In this talk, a new morphing algorithm will be presented, which is based on information geometry. The algorithm uses the information about the difference between various probability distributions. Subsequently, we map this information onto geometrical structures and develop the algorithm on the basis of different geometrical properties. Apart from varying all nuisance parameters together, this algorithm can also probe both small (< 1 σ) and large (> 2 σ) variations. It will also be shown how this algorithm can be used for interpolating other forms of probability distributions.
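
    As a baseline for what is being morphed, the standard piecewise-linear vertical interpolation between a nominal template and its ±1 sigma variations looks as follows (a generic sketch of the idea, not the information-geometry algorithm proposed in the talk):

    ```python
    import numpy as np

    def vertical_morph(nominal, up, down, alpha):
        """Piecewise-linear morphing: alpha=+1 returns the +1 sigma
        template, alpha=-1 the -1 sigma template, alpha=0 the nominal."""
        nominal, up, down = (np.asarray(t, dtype=float)
                             for t in (nominal, up, down))
        if alpha >= 0:
            return nominal + alpha * (up - nominal)
        return nominal + alpha * (nominal - down)

    print(vertical_morph([10, 20, 30], [12, 21, 33], [9, 18, 28], alpha=0.5))
    ```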

  17. Making the error-controlling algorithm of observable operator models constructive.

    Science.gov (United States)

    Zhao, Ming-Jie; Jaeger, Herbert; Thon, Michael

    2009-12-01

    Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination was the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. The run time is faster by at least one order of magnitude than EM-based HMM learning algorithms and yields significantly more accurate models than the latter. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error but is constructive where EC needs iterations. As a consequence, we obtain further gains in learning speed without loss in modeling accuracy.

  18. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique used for updating existing experimental models for structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating numerical models to closely match the experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results obtained. The variables for the updating can be either the material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close correspondence can be achieved between the experimental and numerical models.
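
    For reference, the firefly algorithm's characteristic move is attraction toward brighter (better) solutions with distance-dependent strength; a sketch of the standard update rule (coefficient values are illustrative):

    ```python
    import math
    import random

    def firefly_step(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2, rng=random):
        """Move firefly i toward a brighter firefly j:
        x_i <- x_i + beta0*exp(-gamma*r^2)*(x_j - x_i) + alpha*noise."""
        r2 = sum((a - b) ** 2 for a, b in zip(x_i, x_j))
        beta = beta0 * math.exp(-gamma * r2)
        return [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                for a, b in zip(x_i, x_j)]

    print(firefly_step([0.0, 0.0], [1.0, 1.0]))
    ```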

  19. Development of a Novel Locomotion Algorithm for Snake Robot

    International Nuclear Information System (INIS)

    Khan, Raisuddin; Billah, Md Masum; Watanabe, Mitsuru; Shafie, A A

    2013-01-01

    A novel algorithm for snake robot locomotion is developed and analyzed in this paper. Serpentine is one of the best-known forms of snake robot locomotion for disaster recovery missions that require navigating narrow spaces. Several locomotion patterns, such as concertina or rectilinear, may be suitable for narrow spaces, but they are highly inefficient if the same pattern is used in open spaces, where the reduced friction makes movement difficult. A novel locomotion algorithm has been proposed based on a modification of the multi-link snake robot; the modifications include alterations to the snake segments as well as elements that mimic the scales on the underside of a snake's body. With this developed locomotion algorithm the snake robot is able to navigate narrow spaces, overcoming the limitations of the other locomotion patterns in narrow-space navigation.
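    The paper's modified gait is not given in closed form; the usual starting point for serpentine locomotion is the serpenoid curve, in which each joint follows a phase-shifted sinusoid. A minimal sketch with illustrative parameters:

      import math

      def serpenoid_joint_angles(t, n_joints=8, amp=0.5, omega=2.0, beta=0.6, gamma=0.0):
          """Joint angles of a serpenoid (serpentine) gait at time t.

          amp: amplitude, omega: temporal frequency, beta: phase lag between
          joints, gamma: turning offset. All values here are illustrative.
          """
          return [amp * math.sin(omega * t + i * beta) + gamma
                  for i in range(n_joints)]

      print(["%.2f" % a for a in serpenoid_joint_angles(t=0.5)])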

  20. Genetic Algorithm Optimization of Artificial Neural Networks for Hydrological Modelling

    Science.gov (United States)

    Abrahart, R. J.

    2004-05-01

    This paper will consider the case for genetic algorithm optimization in the development of an artificial neural network model. It will provide a methodological evaluation of reported investigations with respect to hydrological forecasting and prediction. The intention in such operations is to develop a superior modelling solution that will be: more accurate in terms of output precision and model estimation skill; more tractable in terms of personnel requirements and end-user control; and/or more robust in terms of conceptual and mechanical power with respect to adverse conditions. The genetic algorithm optimization toolbox could be used to perform a number of specific roles or purposes, and it is the harmonious and supportive relationship between neural networks and genetic algorithms that will be highlighted and assessed. There are several neural network mechanisms and procedures that could be enhanced, and potential benefits are possible at different stages in the design and construction of an operational hydrological model, e.g. division of inputs; identification of structure; initialization of connection weights; calibration of connection weights; breeding operations between successful models; and output fusion associated with the development of ensemble solutions. Each set of opportunities will be discussed and evaluated. Two strategic questions will also be considered: [i] should optimization be conducted as a set of small individual procedures or as one large holistic operation; [ii] what specific function or set of weighted vectors should be optimized in a complex software product, e.g. timings, volumes, or quintessential hydrological attributes related to the 'problem situation' that might require the development of flood forecasting, drought estimation, or record infilling applications. The paper will conclude with a consideration of hydrological forecasting solutions developed on the combined methodologies of co-operative co-evolution and
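    As an illustration of one role listed above, calibration of connection weights by a GA can be sketched as follows; the tiny 1-4-1 network, the GA settings, and the synthetic stand-in signal are all made up, not the paper's models:

      import numpy as np

      rng = np.random.default_rng(1)

      x = np.linspace(0, 1, 50)
      y = 0.3 + 0.5 * np.sin(3 * x)                  # stand-in hydrological signal

      def predict(w, x):
          h = np.tanh(np.outer(x, w[:4]) + w[4:8])   # 1-4-1 feed-forward network
          return h @ w[8:12] + w[12]

      def fitness(w):
          return -np.mean((predict(w, x) - y) ** 2)  # negative mean squared error

      pop = rng.normal(0, 1, (40, 13))
      for _ in range(200):
          scores = np.array([fitness(w) for w in pop])
          parents = pop[np.argsort(scores)[-20:]]             # truncation selection
          kids = (parents[rng.integers(0, 20, 40)] +
                  parents[rng.integers(0, 20, 40)]) / 2.0     # arithmetic crossover
          kids += rng.normal(0, 0.05, kids.shape)             # mutation
          pop = kids
      best = max(pop, key=fitness)
      print("final MSE:", -fitness(best))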

  1. From Point Clouds to Architectural Models: Algorithms for Shape Reconstruction

    Science.gov (United States)

    Canciani, M.; Falcolini, C.; Saccone, M.; Spadafora, G.

    2013-02-01

    The use of terrestrial laser scanners in architectural survey applications has become more and more common. The complexity of the raw data returned by the scanner leads to several problems in design and 3D modelling starting from point clouds. In this context we present a study on architectural sections and mathematical algorithms for their shape reconstruction, according to known or definite geometrical rules, focusing on shapes of different complexity. Each step of the semi-automatic algorithm has been developed using Mathematica software and CAD, integrating both programs in order to reconstruct a geometrical CAD model of the object. Our study is motivated by the fact that, for architectural survey, most three-dimensional modelling procedures concerning point clouds produce superabundant, but often unnecessary, information and are also very expensive in terms of CPU time, demanding increasingly sophisticated hardware and software. On the contrary, it is important to simplify/decimate the point cloud in order to recognize a particular form out of some definite geometric/architectonic shapes. The process consists of several steps: first, the definition of plane sections and characterization of their architecture; second, the construction of a continuous plane curve depending on some parameters. In the third step we allow the selection on the curve of some nodal points with given specific characteristics (symmetry, tangency conditions, shadowing exclusion, corners, …). The fourth and last step is the construction of a best shape defined by comparison with an abacus of known geometrical elements, such as moulding profiles, leading to a precise architectural section. The algorithms have been developed and tested in very different situations and are presented in a case study of complex geometries such as some moulding profiles in the Church of San Carlo alle Quattro Fontane.
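    As one concrete example of the decimation step mentioned above, the Ramer-Douglas-Peucker algorithm reduces a section polyline to its geometrically significant nodal points; this is a generic sketch, not necessarily the decimation the authors implemented in Mathematica:

      import numpy as np

      def rdp(points, eps):
          """Ramer-Douglas-Peucker polyline decimation (recursive sketch)."""
          pts = np.asarray(points, dtype=float)
          (x0, y0), (x1, y1) = pts[0], pts[-1]
          norm = np.hypot(x1 - x0, y1 - y0) or 1e-12
          # Perpendicular distance of each point from the chord (x0,y0)-(x1,y1).
          dist = np.abs((x1 - x0) * (y0 - pts[:, 1]) - (x0 - pts[:, 0]) * (y1 - y0)) / norm
          i = int(np.argmax(dist))
          if dist[i] > eps:
              return np.vstack([rdp(pts[:i + 1], eps)[:-1], rdp(pts[i:], eps)])
          return np.array([pts[0], pts[-1]])

      # A noisy circular-arc "section" decimated to a handful of nodal points.
      t = np.linspace(0, np.pi, 200)
      arc = np.c_[np.cos(t), np.sin(t)] + np.random.default_rng(0).normal(0, 0.002, (200, 2))
      print(len(rdp(arc, eps=0.01)))    # far fewer than 200 points survive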

  2. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.

    2017-03-13

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, the RBM is the deep learning algorithm used in this project, whose time performance is increased through an efficient parallel implementation with the OpenACC tool, applying the best possible optimizations to the RBM to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful neural network building block for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of the RBM are available using different models such as OpenMP and CUDA, but this project has been the first attempt to apply the OpenACC model to the RBM.
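    For reference, the Contrastive Divergence (CD-1) training step mentioned above can be sketched serially in NumPy; this shows the arithmetic that the project offloads to the GPU via OpenACC (sizes, learning rate, and the random batch are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      n_visible, n_hidden, lr = 6, 4, 0.1
      W = rng.normal(0, 0.01, (n_visible, n_hidden))
      a = np.zeros(n_visible)           # visible bias
      b = np.zeros(n_hidden)            # hidden bias
      v0 = (rng.random((32, n_visible)) < 0.5).astype(float)   # toy batch

      for _ in range(100):
          # Positive phase.
          ph0 = sigmoid(v0 @ W + b)
          h0 = (rng.random(ph0.shape) < ph0).astype(float)
          # Negative phase (one Gibbs step).
          pv1 = sigmoid(h0 @ W.T + a)
          v1 = (rng.random(pv1.shape) < pv1).astype(float)
          ph1 = sigmoid(v1 @ W + b)
          # Contrastive-divergence parameter updates.
          W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
          a += lr * (v0 - v1).mean(axis=0)
          b += lr * (ph0 - ph1).mean(axis=0)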

  3. Developer Tools for Evaluating Multi-Objective Algorithms

    Science.gov (United States)

    Giuliano, Mark E.; Johnston, Mark D.

    2011-01-01

    Multi-objective algorithms for scheduling offer many advantages over the more conventional single-objective approach. By keeping user objectives separate instead of combined, more information is available to the end user to make trade-offs between competing objectives. Unlike single-objective algorithms, which produce a single solution, multi-objective algorithms produce a set of solutions, called a Pareto surface, where no solution is strictly dominated by another solution for all objectives. From the end-user perspective, a Pareto surface provides a tool for reasoning about trade-offs between competing objectives. From the perspective of a software developer, multi-objective algorithms pose an additional challenge: how can you tell if one multi-objective algorithm is better than another? This paper presents formal and visual tools for evaluating multi-objective algorithms and shows how the developer process of selecting an algorithm parallels the end-user process of selecting a solution for execution out of the Pareto surface.
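    The basic building block of such tools is a dominance check; a minimal sketch of extracting the Pareto front from a set of candidate schedules follows (the objective tuples are hypothetical):

      def pareto_front(solutions):
          """Return the non-dominated subset (minimization on every objective).

          solutions: list of tuples of objective values, e.g. (makespan, cost).
          """
          front = []
          for s in solutions:
              if not any(all(o <= v for o, v in zip(other, s)) and other != s
                         for other in solutions):
                  front.append(s)
          return front

      # Hypothetical scheduler outputs: (total duration, priority violations).
      print(pareto_front([(10, 3), (12, 1), (11, 3), (10, 1)]))   # -> [(10, 1)]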

  4. B&W PWR advanced control system algorithm development

    International Nuclear Information System (INIS)

    Winks, R.W.; Wilson, T.L.; Amick, M.

    1992-01-01

    This paper discusses algorithm development of an Advanced Control System for the B&W Pressurized Water Reactor (PWR) nuclear power plant. The paper summarizes the history of the project, describes the operation of the algorithm, and presents transient results from a simulation of the plant and control system. The history discusses the steps in the development process and the roles played by the utility owners, B&W Nuclear Service Company (BWNS), Oak Ridge National Laboratory (ORNL), and the Foxboro Company. The algorithm description is a brief overview of the features of the control system. The transient results show proper operation of the algorithm in a normal power maneuvering mode and in a moderately large upset following a feedwater pump trip

  5. "Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...

  6. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively, the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss...

  7. Evaluation of models generated via hybrid evolutionary algorithms ...

    African Journals Online (AJOL)

    2016-04-02

    Apr 2, 2016 ... Evaluation of models generated via hybrid evolutionary algorithms for the prediction of Microcystis ... evolutionary algorithms (HEA) proved to be highly applicable to the hypertrophic reservoirs of South Africa. ... discovered and optimised using a large-scale parallel computational device and relevant soft-...

  8. Mathematical model and coordination algorithms for ensuring complex security of an organization

    Science.gov (United States)

    Novoseltsev, V. I.; Orlova, D. E.; Dubrovin, A. S.; Irkhin, V. P.

    2018-03-01

    The mathematical model of coordination for ensuring the complex security of an organization is considered. On the basis of a random-search method, three types of effective coordination algorithms, matched to the level of security mismatch, are developed: a coordination algorithm in which the coordinator's instructions dominate; a coordination algorithm in which the performers' decisions dominate; and a coordination algorithm with parity between the interests of the coordinator and the performers. The convergence of these algorithms was assessed by means of a computational experiment. The described coordination algorithms possess convergence in the sense stated above, and the following regularity is revealed: the structurally simpler the algorithm, the fewer iterations are needed for its convergence.

  9. A Path Planning Algorithm using Generalized Potential Model for Hyper- Redundant Robots with 2-DOF Joints

    Directory of Open Access Journals (Sweden)

    Chien-Chou Lin

    2011-06-01

    Full Text Available This paper proposes a potential‐based path planning algorithm for articulated robots with 2‐DOF joints. The algorithm is an extension of a previous algorithm developed for 3‐DOF joints. While 3‐DOF joints result in a more straightforward potential minimization algorithm, 2‐DOF joints are obviously more practical for active operations. The proposed approach computes the repulsive force and torque between charged objects by using a generalized potential model. A collision‐free path can be obtained by locally adjusting the robot configuration to search for minimum potential configurations using these forces and torques. The optimization of path safeness, through the innovative potential minimization algorithm, makes the proposed approach unique. In order to speed up the computation, a sequential planning strategy is adopted. Simulation results show that the proposed algorithm works well compared with the 3‐DOF‐joint algorithm, in terms of collision avoidance and computation efficiency.
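    For intuition about the potential-based idea, a gradient-descent sketch for a point robot is given below; the paper itself treats articulated chains of 2-DOF joints, and the gains, obstacles, and goal here are illustrative:

      import numpy as np

      goal = np.array([5.0, 5.0])
      obstacles = [np.array([2.5, 2.4]), np.array([4.0, 4.5])]
      k_att, k_rep, influence = 1.0, 0.5, 1.5

      def gradient(q):
          g = k_att * (q - goal)                        # attractive term
          for obs in obstacles:
              d = np.linalg.norm(q - obs)
              if d < influence:                         # repulsive term (active near obstacle)
                  g += k_rep * (1/influence - 1/d) / d**3 * (q - obs)
          return g

      q = np.array([0.0, 0.0])
      for _ in range(500):
          q -= 0.05 * gradient(q)                       # descend the total potential
      print(np.round(q, 2))    # ends near the goal unless trapped in a local minimum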

  10. Algorithms

    Indian Academy of Sciences (India)

    have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.

  11. Algorithm development for Maxwell's equations for computational electromagnetism

    Science.gov (United States)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.
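    The scheme of the paper is implicit (alternating direction implicit) on curvilinear finite volumes; as a simpler reference point, an explicit 1D finite-difference time-domain update for Maxwell's equations in normalized units looks like this:

      import numpy as np

      # Explicit 1D FDTD (Yee-style) update in normalized units; not the
      # implicit ADI algorithm described above, just the time-domain idea.
      n, steps, c = 200, 400, 0.5          # grid size, time steps, Courant number
      Ez = np.zeros(n)
      Hy = np.zeros(n - 1)

      for t in range(steps):
          Hy += c * np.diff(Ez)            # H update from the curl of E
          Ez[1:-1] += c * np.diff(Hy)      # E update from the curl of H
          Ez[n // 2] += np.exp(-((t - 30) / 10.0) ** 2)    # soft Gaussian source

      print("peak |Ez| after propagation:", round(float(np.abs(Ez).max()), 3))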

  12. Focuss algorithm application in kinetic compartment modeling for PET tracer

    International Nuclear Information System (INIS)

    Huang Xinrui; Bao Shanglian

    2004-01-01

    dynamic data, comparing with the pre-existing data-led technique, spectral analysis. The results showed that our kinetic modeling technique for the quantitative analysis of dynamic in vivo radiotracer studies is a transparent, data-driven modeling approach, as it returns not only macro parameter values but also information on the underlying model structure. Furthermore, the FOCUSS algorithm can avoid the overcompleteness problems of spectral analysis and improve the error properties. Since this technique does not require a predefined compartmental structure and can be used to characterize tracer kinetics in various tissue types, or even mixtures of different tissue types, it provides a unique tool for image analysis of complex functional structures where image pixels may contain inhomogeneous tissue types. Moreover, it can support work on imaging probes, tracers and drugs whose in vivo characteristics are not yet known. Therefore, this kinetic modeling technique is of use for PET molecular imaging and drug development. (authors)

  13. Algorithmic detectability threshold of the stochastic block model

    Science.gov (United States)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  14. Two-Stage Electricity Demand Modeling Using Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Krzysztof Gajowniczek

    2017-10-01

    Full Text Available Forecasting of electricity demand has become one of the most important areas of research in the electric power industry, as it is a critical component of cost-efficient power system management and planning. In this context, accurate and robust load forecasting plays a key role in reducing generation costs and in the reliability of the power system. However, due to demand peaks in the power system, forecasts are inaccurate and prone to high numbers of errors. In this paper, our contributions comprise a proposed data-mining scheme for demand modeling through peak detection, as well as the use of this information to feed the forecasting system. For this purpose, we have taken a different approach from that of time series forecasting, representing it as a two-stage pattern recognition problem. We have developed a peak classification model followed by a forecasting model to estimate an aggregated demand volume. We have utilized a set of machine learning algorithms to benefit from both accurate detection of the peaks and precise forecasts, as applied to the Polish power system. The key finding is that the algorithms can detect 96.3% of electricity peaks (load value equal to or above the 99th percentile of the load distribution) and deliver accurate forecasts, with mean absolute percentage error (MAPE) of 3.10% and resistant mean absolute percentage error (r-MAPE) of 2.70% for the 24 h forecasting horizon.

  15. A Stress Update Algorithm for Constitutive Models of Glassy Polymers

    Science.gov (United States)

    Danielsson, Mats

    2013-06-01

    A semi-implicit stress update algorithm is developed for the elastic-viscoplastic behavior of glassy polymers. The case of near rate-insensitivity is addressed, and the stress update algorithm is designed to handle this case robustly. A consistent tangent stiffness matrix is derived based on a full linearization of the internal virtual work. The stress update algorithm and (a slightly modified) tangent stiffness matrix are implemented in a commercial finite element program. The stress update algorithm is tested on a large boundary value problem for illustrative purposes.

  16. Model predictive control algorithms and their application to a continuous fermenter

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-06-01

    Full Text Available In many continuous fermentation processes, the control objective is to maximize productivity per unit time. The optimum operational point in the steady state can be obtained by maximizing the productivity rate, using feed substrate concentration as the independent variable with the equations of the static model as constraints. In the present study, three model-based control schemes have been developed and implemented for a continuous fermenter. The first method modifies the well-known dynamic matrix control (DMC) algorithm by making it adaptive. The other two use nonlinear model predictive control (NMPC) algorithms for the calculation of control actions. The NMPC1 algorithm, which uses orthogonal collocation on finite elements, behaved similarly to NMPC2, which uses equidistant collocation. These algorithms are compared with DMC. The results obtained show the good performance of the nonlinear algorithms.
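    The receding-horizon principle shared by DMC and NMPC can be sketched on a toy first-order plant; the model, setpoint, and weights below are illustrative, not the fermenter equations:

      import numpy as np
      from scipy.optimize import minimize

      a, b, setpoint, horizon = 0.9, 0.1, 1.0, 10

      def cost(u_seq, x0):
          """Quadratic tracking cost over the prediction horizon."""
          x, J = x0, 0.0
          for u in u_seq:
              x = a * x + b * u                     # prediction model
              J += (x - setpoint) ** 2 + 0.01 * u ** 2
          return J

      x = 0.0
      for step in range(30):
          res = minimize(cost, np.zeros(horizon), args=(x,))
          u = res.x[0]                              # apply only the first move
          x = a * x + b * u                         # "true" plant response
      print("state after 30 steps:", round(x, 3))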

  17. Using genetic algorithms to calibrate a water quality model.

    Science.gov (United States)

    Liu, Shuming; Butler, David; Brazier, Richard; Heathwaite, Louise; Khu, Soon-Thiam

    2007-03-15

    With the increasing concern over the impact of diffuse pollution on water bodies, many diffuse pollution models have been developed in the last two decades. A common obstacle in using such models is how to determine the values of the model parameters. This is especially true when a model has a large number of parameters, which makes a full-range calibration expensive in terms of computing time. Compared with conventional optimisation approaches, soft computing techniques often have a faster convergence speed and are more efficient for global optimum searches. This paper presents an attempt to calibrate a diffuse pollution model using a genetic algorithm (GA). Designed to simulate the export of phosphorus from diffuse sources (agricultural land) and point sources (human), the Phosphorus Indicators Tool (PIT) version 1.1, on which this paper is based, consisted of 78 parameters. Previous studies have indicated the difficulty of full-range model calibration due to the number of parameters involved. In this paper, a GA was employed to carry out the model calibration in which all parameters were involved. A sensitivity analysis was also performed to investigate the impact of operators in the GA on its effectiveness in optimum searching. The calibration yielded satisfactory results and required reasonable computing time. The application of the PIT model to the Windrush catchment with optimum parameter values was demonstrated. The annual P loss was predicted as 4.4 kg P/ha/yr, which showed a good fit to the observed value.

  18. Geometric algorithms for electromagnetic modeling of large scale structures

    Science.gov (United States)

    Pingenot, James

    With the rapid increase in the speed and complexity of integrated circuit designs, 3D full wave and time domain simulation of chip, package, and board systems becomes more and more important for the engineering of modern designs. Much effort has been applied to the problem of electromagnetic (EM) simulation of such systems in recent years. Major advances in boundary element EM simulations have led to O(n log n) simulations using iterative methods and advanced Fast Fourier Transform (FFT), Multi-Level Fast Multi-pole Method (MLFMM), and low-rank matrix compression techniques. These advances have been augmented with an explosion of multi-core and distributed computing technologies; however, realization of the full scale of these capabilities has been hindered by cumbersome and inefficient geometric processing. Anecdotal evidence from industry suggests that users may spend around 80% of turn-around time manipulating the geometric model and mesh. This dissertation addresses this problem by developing fast and efficient data structures and algorithms for 3D modeling of chips, packages, and boards. The methods proposed here harness the regular, layered 2D nature of the models (often referred to as "2.5D") to optimize these systems for large geometries. First, an architecture is developed for efficient storage and manipulation of 2.5D models. The architecture gives special attention to native representation of structures across various input models and special issues particular to 3D modeling. The 2.5D structure is then used to optimize the mesh systems. First, circuit/EM co-simulation techniques are extended to provide electrical connectivity between objects. This concept is used to connect independently meshed layers, allowing simple and efficient 2D mesh algorithms to be used in creating a 3D mesh. Here, adaptive meshing is used to ensure that the mesh accurately models the physical unknowns (current and charge). Utilizing the regularized nature of 2.5D objects and

  19. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and can be trapped in local regions of attraction. The global SCE procedure is, in general, more effective...

  20. A hybrid algorithm and its applications to fuzzy logic modeling of nonlinear systems

    Science.gov (United States)

    Wang, Zhongjun

    System models allow us to simulate and analyze system dynamics efficiently. Most importantly, system models allow us to make predictions about system behaviors and to perform system parametric variation analysis without having to build the actual systems. The fuzzy logic modeling technique has recently been applied successfully in complex nonlinear system modeling, such as unsteady aerodynamics modeling. However, the current forward search algorithm used to identify fuzzy logic model structures is very time-consuming; it is not unusual to spend several days or even a few weeks of computer CPU time to obtain better nonlinear system model structures by this forward search. Moreover, speeding up the fuzzy logic model parameter identification process is also challenging when the number of influencing variables of the nonlinear system is large. To solve these problems, a hybrid algorithm for nonlinear system modeling is proposed, formalized, implemented, and evaluated in this dissertation. By combining the fuzzy logic modeling technique with genetic algorithms, the developed hybrid algorithm is applied to both fuzzy logic model structure identification and model parameter identification. In the model structure identification process, the hybrid algorithm can find feasible structures more efficiently and effectively than the forward search. In the model parameter identification process (using the Newton gradient descent algorithm), the proposed hybrid algorithm incorporates a genetic search algorithm to dynamically select convergence factors; it has the advantage of quick search while maintaining the monotonically convergent properties of the Newton gradient descent algorithm. To evaluate the properties of the developed hybrid algorithm, a nonlinear, unsteady aerodynamic normal force model with a complex system involving fourteen influencing variables is established from flight data. The results show that this hybrid algorithm can identify the aerodynamic

  1. Development of mathematical models and optimization of the process parameters of laser surface hardened EN25 steel using elitist non-dominated sorting genetic algorithm

    Science.gov (United States)

    Vignesh, S.; Dinesh Babu, P.; Surya, G.; Dinesh, S.; Marimuthu, P.

    2018-02-01

    The ultimate goal of all production entities is to select process parameters that yield maximum strength and minimum wear and friction. Friction and wear are serious problems in most industries and are influenced by the working set of parameters, the oxidation characteristics, and the mechanism involved in the formation of wear. The experimental input parameters, namely sliding distance, applied load, and temperature, are used to find the optimized solution for achieving the desired output responses: coefficient of friction, wear rate, and volume loss. The optimization is performed with the help of the Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II), an evolutionary algorithm. The regression equations obtained using Response Surface Methodology (RSM) are used in determining the optimum process parameters. Further, the results achieved through the desirability approach in RSM are compared with the optimized solutions obtained through NSGA-II. The results show that the proposed evolutionary technique is more effective and faster than the desirability approach.
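    The core of NSGA-II is fast non-dominated sorting, which ranks candidate parameter sets into Pareto fronts; a minimal sketch follows (the friction/wear objective pairs are hypothetical):

      def fast_nondominated_sort(objs):
          """NSGA-II style non-dominated sorting (minimization).

          objs: list of objective tuples. Returns fronts as lists of indices.
          """
          n = len(objs)
          dominated_by = [[] for _ in range(n)]    # who each point dominates
          dom_count = [0] * n                      # how many points dominate it
          for i in range(n):
              for j in range(n):
                  if i == j:
                      continue
                  if (all(a <= b for a, b in zip(objs[i], objs[j]))
                          and any(a < b for a, b in zip(objs[i], objs[j]))):
                      dominated_by[i].append(j)
                  elif (all(b <= a for a, b in zip(objs[i], objs[j]))
                          and any(b < a for a, b in zip(objs[i], objs[j]))):
                      dom_count[i] += 1
          fronts = [[i for i in range(n) if dom_count[i] == 0]]
          while fronts[-1]:
              nxt = []
              for i in fronts[-1]:
                  for j in dominated_by[i]:
                      dom_count[j] -= 1
                      if dom_count[j] == 0:
                          nxt.append(j)
              fronts.append(nxt)
          return fronts[:-1]

      # Hypothetical (friction coefficient, wear rate) pairs for four runs.
      print(fast_nondominated_sort([(0.3, 2.0), (0.2, 2.5), (0.25, 2.2), (0.4, 2.6)]))
      # -> [[0, 1, 2], [3]]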

  2. An efficient algorithm for corona simulation with complex chemical models

    Science.gov (United States)

    Villa, Andrea; Barbieri, Luca; Gondola, Marco; Leon-Garzon, Andres R.; Malgesini, Roberto

    2017-05-01

    The simulation of cold plasma discharges is a leading field of applied sciences with many applications ranging from pollutant control to surface treatment. Many of these applications call for the development of novel numerical techniques to implement fully three-dimensional corona solvers that can utilize complex and physically detailed chemical databases. This is a challenging task since it multiplies the difficulties inherent to a three-dimensional approach by the complexity of databases comprising tens of chemical species and hundreds of reactions. In this paper a novel approach, capable of reducing significantly the computational burden, is developed. The proposed method is based on a proper time stepping algorithm capable of decomposing the original problem into simpler ones: each of them has then been tackled with either finite element, finite volume or ordinary differential equations solvers. This last solver deals with the chemical model and its efficient implementation is one of the main contributions of this work.
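    The decomposition idea can be illustrated by a generic operator-splitting time step in which transport and chemistry are advanced by separate solvers; the diffusion coefficient and the two-species toy chemistry below are illustrative, not the corona model:

      import numpy as np
      from scipy.integrate import solve_ivp

      n, dx, dt, D = 20, 1.0, 0.1, 1.0
      u = np.zeros((2, n))                    # two species on a 1D periodic grid
      u[:, n // 2] = [1.0, 0.5]

      def chemistry(t, y):
          a, b = y
          return [-2.0 * a * b, 2.0 * a * b - 0.5 * b]    # made-up reactions

      for _ in range(10):
          # Step 1: explicit diffusion (the "transport" sub-problem).
          lap = (np.roll(u, 1, axis=1) - 2 * u + np.roll(u, -1, axis=1)) / dx**2
          u += dt * D * lap
          # Step 2: pointwise stiff chemistry with an implicit ODE solver.
          for k in range(n):
              u[:, k] = solve_ivp(chemistry, (0, dt), u[:, k], method="BDF").y[:, -1]
      print(np.round(u.sum(axis=1), 3))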

  3. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  4. Prediction models and control algorithms for predictive applications of setback temperature in cooling systems

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Yoon, Younju; Jeon, Young-Hoon; Kim, Sooyoung

    2017-01-01

    Highlights: • Initial ANN model was developed for predicting the time to the setback temperature. • Initial model was optimized for producing accurate output. • Optimized model proved its prediction accuracy. • ANN-based algorithms were developed and their performance was tested. • ANN-based algorithms presented superior thermal comfort or energy efficiency. - Abstract: In this study, a temperature control algorithm was developed to apply a setback temperature predictively for the cooling system of a residential building during periods occupied by residents. An artificial neural network (ANN) model was developed to determine the required time for increasing the current indoor temperature to the setback temperature. This study involved three phases: development of the initial ANN-based prediction model, optimization and testing of the initial model, and development and testing of three control algorithms. The development and performance testing of the model and algorithm were conducted using TRNSYS and MATLAB. Through the development and optimization process, the final ANN model employed the indoor temperature and the temperature difference between the current and target setback temperature as two input neurons. The optimal number of hidden layers, number of neurons, learning rate, and momentum were determined to be 4, 9, 0.6, and 0.9, respectively. The tangent-sigmoid and pure-linear transfer functions were used in the hidden and output neurons, respectively. The ANN model used 100 training data sets with a sliding-window method for data management. The Levenberg-Marquardt training method was employed for model training. The optimized model had a prediction accuracy of 0.9097 root mean square error when compared with the simulated results. Employing the ANN model, the ANN-based algorithms maintained indoor temperatures better within the target ranges. Compared to the conventional algorithm, the ANN-based algorithms reduced the duration of time, in which the indoor temperature
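    The optimized architecture described above (two inputs, four hidden layers of nine tansig neurons, one purelin output) corresponds to the following forward pass; the weights here are random placeholders, whereas the study trains them with Levenberg-Marquardt:

      import numpy as np

      rng = np.random.default_rng(0)

      sizes = [2, 9, 9, 9, 9, 1]           # 2 inputs, 4 hidden layers of 9, 1 output
      weights = [rng.normal(0, 0.3, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
      biases = [np.zeros(n) for n in sizes[1:]]

      def predict(x):
          h = np.asarray(x, dtype=float)
          for W, b in zip(weights[:-1], biases[:-1]):
              h = np.tanh(h @ W + b)               # tansig hidden layers
          return h @ weights[-1] + biases[-1]      # purelin output

      # Inputs: current indoor temperature and gap to the setback temperature.
      print(predict([24.0, 2.0]))    # predicted time to reach the setback value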

  5. Algorithms

    Indian Academy of Sciences (India)

    algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language ... [Figure 2: symbols used in the flowchart language to represent Assignment, Read and Print, e.g. x := sin(theta), Read A, B, C, and Print x, y, z.]

  6. Algorithms

    Indian Academy of Sciences (India)

    In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...

  7. DiamondTorre Algorithm for High-Performance Wave Modeling

    Directory of Open Access Journals (Sweden)

    Vadim Levchenko

    2016-08-01

    Full Text Available Effective algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth if they are implemented with traditional algorithms. The numerical solution of the wave equation is considered, using a finite difference scheme with a cross stencil and a high order of approximation. The DiamondTorre algorithm is constructed with regard to the specifics of the memory hierarchy and parallelism of the GPGPU (general purpose graphical processing unit). The advantages of this algorithm are a high level of data localization and the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved. This exceeds the result of the best traditional algorithm by a factor of five.

  8. Drexel University Shell Model (DUSM) algorithm

    Science.gov (United States)

    Valliéres, Michel; Novoselsky, Akiva

    1994-03-01

    This lecture is devoted to the Drexel University Shell Model (DUSM) code; this is a new shell-model code based on a separation of the various subspaces in which the single-particle wavefunctions are defined. This is achieved via extensive use of permutation group concepts and a redefinition of the Coefficients of Fractional Parentage (CFP) to include permutation labels. This leads to a modern and efficient approach to the nuclear shell model.

  9. Drexel University Shell Model (DUSM) algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Vallieres, M. (Drexel Univ., Philadelphia, PA (United States). Dept. of Physics and Atmospheric Science); Novoselsky, A. (Hebrew Univ., Jerusalem (Israel). Dept. of Physics)

    1994-03-28

    This lecture is devoted to the Drexel University Shell Model (DUSM) code; this is a new shell-model code based on a separation of the various subspaces in which the single-particle wavefunctions are defined. This is achieved via extensive use of permutation group concepts and a redefinition of the Coefficients of Fractional Parentage (CFP) to include permutation labels. This leads to a modern and efficient approach to the nuclear shell model. (orig.)

  10. Improved CHAID algorithm for document structure modelling

    Science.gov (United States)

    Belaïd, A.; Moinel, T.; Rangoni, Y.

    2010-01-01

    This paper proposes a technique for the logical labelling of document images. It makes use of a decision-tree based approach to learn and then recognise the logical elements of a page. A state-of-the-art OCR gives the physical features needed by the system. Each block of text is extracted during the layout analysis and raw physical features are collected and stored in the ALTO format. The data-mining method employed here is the "Improved CHi-squared Automatic Interaction Detection" (I-CHAID). The contribution of this work is the insertion of logical rules extracted from the logical layout knowledge to support the decision tree. Two setups have been tested; the first uses one tree per logical element, the second one uses a single tree for all the logical elements we want to recognise. The main system, implemented in Java, coordinates the third-party tools (Omnipage for the OCR part, and SIPINA for the I-CHAID algorithm) using XML and XSL transforms. It was tested on around 1000 documents belonging to the ICPR'04 and ICPR'08 conference proceedings, representing about 16,000 blocks. The final error rate for determining the logical labels (among 9 different ones) is less than 6%.

  11. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation

  12. Stochastic cluster algorithms for discrete Gaussian (SOS) models

    International Nuclear Information System (INIS)

    Evertz, H.G.; Hamburg Univ.; Hasenbusch, M.; Marcu, M.; Tel Aviv Univ.; Pinn, K.; Muenster Univ.; Solomon, S.

    1990-10-01

    We present new Monte Carlo cluster algorithms which eliminate critical slowing down in the simulation of solid-on-solid models. In this letter we focus on the two-dimensional discrete Gaussian model. The algorithms are based on reflecting the integer valued spin variables with respect to appropriately chosen reflection planes. The proper choice of the reflection plane turns out to be crucial in order to obtain a small dynamical exponent z. Actually, the successful versions of our algorithm are a mixture of two different procedures for choosing the reflection plane, one of them ergodic but slow, the other one non-ergodic and also slow when combined with a Metropolis algorithm. (orig.)

  13. Development of radio frequency interference detection algorithms for passive microwave remote sensing

    Science.gov (United States)

    Misra, Sidharth

    Radio Frequency Interference (RFI) signals are man-made sources that are increasingly plaguing passive microwave remote sensing measurements. RFI is insidious in nature, with some signals of low enough power to go undetected yet large enough to impact science measurements and their results. With the launch of the European Space Agency (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite in November 2009 and the upcoming launches of the new NASA sea-surface salinity measuring Aquarius mission in June 2011 and soil-moisture measuring Soil Moisture Active Passive (SMAP) mission around 2015, active steps are being taken to detect and mitigate RFI at L-band. An RFI detection algorithm was designed for the Aquarius mission. The algorithm performance was analyzed using kurtosis-based RFI ground-truth. The algorithm has been developed with several adjustable location-dependent parameters to control the detection statistics (false-alarm rate and probability of detection). The kurtosis statistical detection algorithm has been compared with the Aquarius pulse detection method. The comparative study determines the feasibility of the kurtosis detector for the SMAP radiometer, as a primary RFI detection algorithm, in terms of detectability and data bandwidth. The kurtosis algorithm has superior detection capabilities for low duty-cycle radar-like pulses, which are more prevalent according to analysis of field campaign data. Most RFI algorithms developed have generally been optimized for performance with individual pulsed-sinusoidal RFI sources. A new RFI detection model is developed that takes into account multiple RFI sources within an antenna footprint. The performance of the kurtosis detection algorithm under such central-limit conditions is evaluated. The SMOS mission has a unique hardware system, and conventional RFI detection techniques cannot be applied. Instead, an RFI detection algorithm for SMOS is developed and applied in the angular domain. This algorithm compares
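    The kurtosis test at the heart of the detector is easy to sketch: thermal noise has sample kurtosis near 3, and low duty-cycle pulsed RFI inflates it (the threshold below is illustrative, not a mission setting):

      import numpy as np

      rng = np.random.default_rng(0)

      def kurtosis_rfi_flag(samples, threshold=0.3):
          """Flag RFI when the sample kurtosis deviates from the Gaussian value 3."""
          x = np.asarray(samples) - np.mean(samples)
          k = np.mean(x**4) / np.mean(x**2) ** 2
          return abs(k - 3.0) > threshold, k

      noise = rng.normal(0, 1, 4096)          # clean radiometer samples
      pulsed = noise.copy()
      pulsed[::256] += 15.0                   # low duty-cycle radar-like RFI
      print(kurtosis_rfi_flag(noise))         # (False, ~3)
      print(kurtosis_rfi_flag(pulsed))        # (True, kurtosis >> 3)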

  14. Applications of Flocking Algorithms to Input Modeling for Agent Movement

    Science.gov (United States)

    2011-12-01

    We apply the following flocking algorithm to this leading boid to generate followers, who will then be mapped ... due to the paths crossing. [Figure 2: plot of the path of a boid generated by the Group 4 flocking algorithm.] ... on the possible inputs. This method uses techniques from agent-based modeling to generate a flock of boids that follow the data. In this paper, we
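    A minimal boids-style sketch of the idea, with followers steered toward a scripted leader by cohesion, separation, and leader-following terms (all gains are illustrative; alignment is omitted for brevity):

      import numpy as np

      rng = np.random.default_rng(0)

      n, steps = 10, 100
      pos = rng.normal(0, 1, (n, 2))
      vel = np.zeros((n, 2))

      for t in range(steps):
          leader = np.array([0.1 * t, np.sin(0.1 * t)])    # leader follows the "data"
          center = pos.mean(axis=0)
          for i in range(n):
              sep = sum((pos[i] - p) / (np.linalg.norm(pos[i] - p) ** 2 + 1e-6)
                        for j, p in enumerate(pos) if j != i)
              vel[i] += (0.05 * (center - pos[i])      # cohesion
                         + 0.02 * sep                  # separation
                         + 0.1 * (leader - pos[i]))    # follow the leader
              vel[i] *= 0.9                            # damping
          pos += vel
      print(np.round(pos.mean(axis=0), 2), "vs leader", np.round(leader, 2))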

  15. An Algorithm for Optimally Fitting a Wiener Model

    Directory of Open Access Journals (Sweden)

    Lucas P. Beverlin

    2011-01-01

    Full Text Available The purpose of this work is to present a new methodology for fitting Wiener networks to datasets with a large number of variables. Wiener networks have the ability to model a wide range of data types, and their structures can yield parameters with phenomenological meaning. There are several challenges to fitting such a model: model stiffness, the nonlinear nature of a Wiener network, possible overfitting, and the large number of parameters inherent with large input sets. This work describes a methodology to overcome these challenges by using several iterative algorithms under supervised learning and fitting subsets of the parameters at a time. This methodology is applied to Wiener networks that are used to predict blood glucose concentrations. The predictions of validation sets from models fit to four subjects using this methodology yielded a higher correlation between observed and predicted observations than other algorithms, including the Gauss-Newton and Levenberg-Marquardt algorithms.
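    For readers unfamiliar with the structure, a Wiener model is a linear dynamic block followed by a static output nonlinearity; a minimal simulation sketch (the coefficients and the nonlinearity are illustrative):

      import numpy as np

      def wiener_model(u, a=0.8, b=0.2, nonlin=np.tanh):
          """Simulate a Wiener structure: first-order linear dynamics, then
          a static output nonlinearity."""
          y, x = [], 0.0
          for uk in u:
              x = a * x + b * uk        # linear dynamic block
              y.append(nonlin(x))       # static nonlinearity
          return np.array(y)

      u = np.sin(np.linspace(0, 10, 100))
      print(np.round(wiener_model(u)[:5], 3))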

  16. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, Denis

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors, and show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.

  17. How to incorporate generic refraction models into multistatic tracking algorithms

    Science.gov (United States)

    Crouse, D. F.

    The vast majority of literature published on target tracking ignores the effects of atmospheric refraction. When refraction is considered, the solutions are generally tailored to a simple exponential atmospheric refraction model. This paper discusses how arbitrary refraction models can be incorporated into tracking algorithms. Attention is paid to multistatic tracking problems, where uncorrected refractive effects can worsen track accuracy and consistency in centralized tracking algorithms, and can lead to difficulties in track-to-track association in distributed tracking filters. Monostatic and bistatic track initialization using refraction-corrupted measurements is discussed. The results are demonstrated using an exponential refractive model, though an arbitrary refraction profile can be substituted.

  18. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    Full Text Available In fields which require finding the most appropriate value, optimization has become a vital approach for obtaining effective solutions. With the use of optimization techniques, many different fields of modern life have found solutions to their real-world problems. In this context, classical optimization techniques have long been popular, but more advanced optimization problems eventually required the use of more effective techniques. At this point, Computer Science took an important role in providing software-related techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the employment of classical optimization solutions and Artificial Intelligence solutions, to give readers an idea of the potential of intelligent optimization techniques. Two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus 11th Edition, and the obtained results have been compared with classical optimization solutions.

  19. Optimal parallel algorithms for problems modeled by a family of intervals

    Science.gov (United States)

    Olariu, Stephan; Schwing, James L.; Zhang, Jingyuan

    1992-01-01

    A family of intervals on the real line provides a natural model for a vast number of scheduling and VLSI problems. Recently, a number of parallel algorithms to solve a variety of practical problems on such a family of intervals have been proposed in the literature. Computational tools are developed, and it is shown how they can be used for the purpose of devising cost-optimal parallel algorithms for a number of interval-related problems including finding a largest subset of pairwise nonoverlapping intervals, a minimum dominating subset of intervals, along with algorithms to compute the shortest path between a pair of intervals and, based on the shortest path, a parallel algorithm to find the center of the family of intervals. More precisely, with an arbitrary family of n intervals as input, all algorithms run in O(log n) time using O(n) processors in the EREW-PRAM model of computation.
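    As a sequential reference point for one of the problems listed, a largest subset of pairwise non-overlapping intervals is found by the classic greedy sweep over right endpoints; the paper's contribution is O(log n)-time EREW-PRAM algorithms, not this sketch:

      def max_nonoverlapping(intervals):
          """Largest subset of pairwise non-overlapping intervals (greedy)."""
          chosen, last_end = [], float("-inf")
          for start, end in sorted(intervals, key=lambda iv: iv[1]):
              if start >= last_end:         # compatible with everything chosen so far
                  chosen.append((start, end))
                  last_end = end
          return chosen

      print(max_nonoverlapping([(1, 4), (3, 5), (0, 6), (5, 7), (8, 9)]))
      # -> [(1, 4), (5, 7), (8, 9)]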

  20. TWO-STEP ALGORITHM OF TRAINING INITIALIZATION FOR ACOUSTIC MODELS BASED ON DEEP NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    I. P. Medennikov

    2016-03-01

    Full Text Available This paper presents a two-step initialization algorithm for training of acoustic models based on deep neural networks. The algorithm is focused on reducing the impact of the non-speech segments on the acoustic model training. The idea of the proposed algorithm is to reduce the percentage of non-speech examples in the training set. Effectiveness evaluation of the algorithm has been carried out on the example of English spontaneous telephone speech recognition (Switchboard. The application of the proposed algorithm has led to 3% relative word error rate reduction, compared with the training initialization by restricted Boltzmann machines. The results presented in the paper can be applied in the development of automatic speech recognition systems.

  1. Co-clustering models, algorithms and applications

    CERN Document Server

    Govaert, Gérard

    2013-01-01

    Cluster or co-cluster analyses are important tools in a variety of scientific areas. The introduction of this book presents a state of the art of already well-established, as well as more recent methods of co-clustering. The authors mainly deal with the two-mode partitioning under different approaches, but pay particular attention to a probabilistic approach. Chapter 1 concerns clustering in general and the model-based clustering in particular. The authors briefly review the classical clustering methods and focus on the mixture model. They present and discuss the use of different mixture

  2. Economic Models and Algorithms for Distributed Systems

    CERN Document Server

    Neumann, Dirk; Altmann, Jorn; Rana, Omer F

    2009-01-01

    Distributed computing models for sharing resources such as Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. This book intends to discover fresh avenues of research and amendments to existing technologies, aiming at the successful deployment of commercial distributed systems

  3. Robust Return Algorithm for Anisotropic Plasticity Models

    DEFF Research Database (Denmark)

    Tidemann, L.; Krenk, Steen

    2017-01-01

    Plasticity models can be defined by an energy potential, a plastic flow potential and a yield surface. The energy potential defines the relation between the observable elastic strains εe and the energy-conjugate stresses τe, and between the non-observable internal strains εi and the energy conjugat

  4. Development of an Algorithm to Classify Colonoscopy Indication from Coded Health Care Data.

    Science.gov (United States)

    Adams, Kenneth F; Johnson, Eric A; Chubak, Jessica; Kamineni, Aruna; Doubeni, Chyke A; Buist, Diana S M; Williams, Andrew E; Weinmann, Sheila; Doria-Rose, V Paul; Rutter, Carolyn M

    2015-01-01

    Electronic health data are potentially valuable resources for evaluating colonoscopy screening utilization and effectiveness. The ability to distinguish screening colonoscopies from exams performed for other purposes is critical for research that examines factors related to screening uptake and adherence, and the impact of screening on patient outcomes, but distinguishing between these indications in secondary health data proves challenging. The objective of this study is to develop a new and more accurate algorithm for identification of screening colonoscopies using electronic health data. Data from a case-control study of colorectal cancer with adjudicated colonoscopy indication were used to develop logistic regression-based algorithms. The proposed algorithms predict the probability that a colonoscopy was indicated for screening, with variables selected for inclusion in the models using the Least Absolute Shrinkage and Selection Operator (LASSO). The algorithms had excellent classification accuracy in internal validation. The primary, restricted model had AUC = 0.94, sensitivity = 0.91, and specificity = 0.82. The secondary, extended model had AUC = 0.96, sensitivity = 0.88, and specificity = 0.90. The LASSO approach enabled estimation of parsimonious algorithms that identified screening colonoscopies with high accuracy in our study population. External validation is needed to replicate these results and to explore the performance of these algorithms in other settings.
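    A minimal sketch of the modeling approach, L1-penalized logistic regression with LASSO-style variable selection, on synthetic stand-in features (not the study's coded variables):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      # Synthetic stand-ins for coded predictors (age group, prior-exam
      # flags, symptom codes, ...); only the first three actually matter.
      n, p = 1000, 20
      X = rng.normal(size=(n, p))
      true_beta = np.zeros(p); true_beta[:3] = [1.5, -1.0, 0.8]
      y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_beta)))).astype(int)

      model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
      model.fit(X, y)
      kept = np.flatnonzero(model.coef_[0])
      print("selected features:", kept)           # LASSO zeroes out the rest
      print("P(screening) for one exam:", round(model.predict_proba(X[:1])[0, 1], 3))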

  5. Improved Expectation Maximization Algorithm for Gaussian Mixed Model Using the Kernel Method

    Directory of Open Access Journals (Sweden)

    Mohd Izhan Mohd Yusoff

    2013-01-01

    Full Text Available Fraud activities have contributed to heavy losses suffered by telecommunication companies. In this paper, we attempt to use the Gaussian mixed model, a probabilistic model normally used in speech recognition, to identify fraud calls in the telecommunication industry. We look at several issues encountered when calculating the maximum likelihood estimates of the Gaussian mixed model using an Expectation Maximization algorithm. Firstly, we look at a mechanism for the determination of the initial number of Gaussian components and the choice of the initial values of the algorithm using the kernel method. We show via simulation that the technique improves the performance of the algorithm. Secondly, we develop a procedure for determining the order of the Gaussian mixed model using the log-likelihood function and the Akaike information criteria. Finally, for illustration, we apply the improved algorithm to real telecommunication data. The modified method will pave the way for a comprehensive method of detecting fraud calls in future work.
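    The EM iteration whose initialization the paper improves can be sketched in one dimension as follows; the data and the deliberately crude starting values are illustrative, and the paper replaces such initialization with a kernel-based choice:

      import numpy as np

      rng = np.random.default_rng(0)

      x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
      pi, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

      for _ in range(50):
          # E-step: responsibilities of each component for each point.
          pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
          r = pi * pdf
          r /= r.sum(axis=1, keepdims=True)
          # M-step: re-estimate weights, means, and standard deviations.
          nk = r.sum(axis=0)
          pi = nk / len(x)
          mu = (r * x[:, None]).sum(axis=0) / nk
          sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

      print(np.round(mu, 2), np.round(sd, 2), np.round(pi, 2))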

  6. Development and validation of algorithms to identify acute diverticulitis.

    Science.gov (United States)

    Kawatkar, Aniket; Chu, Li-Hao; Iyer, Rajan; Yen, Linnette; Chen, Wansu; Erder, M Haim; Hodgkins, Paul; Longstreth, George

    2015-01-01

    The objectives of this study were to develop and validate algorithms to accurately identify patients with diverticulitis using electronic medical records (EMRs). Using Kaiser Permanente Southern California's EMRs of adults (≥18 years) with International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes for diverticulitis (562.11, 562.13) between 1 January 2008 and 31 August 2009, we generated random samples for pilot (N = 692) and validation (N = 1502) purposes, respectively. Both samples were stratified by inpatient (IP), emergency department (ED), and outpatient (OP) care settings. We developed and validated several algorithms using EMR data on diverticulitis diagnosis code, antibiotics, computed tomography, diverticulosis history, pain medication and/or pain diagnosis, and excluding patients with infections and/or conditions that could mimic diverticulitis. Evidence of diverticulitis was confirmed through manual chart review. Agreement between the EMR algorithm and manual chart confirmation was evaluated using sensitivity and positive predictive value (PPV). Both samples were similar in socio-demographics and clinical symptoms. An algorithm based on diverticulitis diagnosis code with an antibiotic prescription dispensed within 7 days of the diagnosis date performed well overall. In the validation sample, sensitivity and PPV were (84.6, 98.2%), (95.8, 98.1%), and (91.8, 82.6%) for OP, ED, and IP, respectively. Using antibiotic prescriptions to supplement diagnostic codes improved the accuracy of case identification for diverticulitis, but results varied by care setting. Copyright © 2014 John Wiley & Sons, Ltd.

  7. Development of the trigger algorithm for the MONOLITH experiment

    International Nuclear Information System (INIS)

    Gutsche, O.

    2001-05-01

    The MONOLITH project is proposed to prove atmospheric neutrino oscillations and to improve the corresponding measurements of Super-Kamiokande. The MONOLITH detector consists of a massive (34 kt) magnetized iron tracking calorimeter and is optimized for muon neutrino detection. This diploma thesis presents the development of the trigger algorithm for the MONOLITH experiment and related test measurements. Chapter two gives an introduction to the mechanism of neutrino oscillations. The two flavor approximation and the three flavor mechanism are described and influences of matter on neutrino oscillations are discussed. The principles of neutrino oscillation experiments are discussed and the results of Super-Kamiokande, a neutrino oscillation experiment, are presented. Super-Kamiokande gave the strongest indications for atmospheric neutrino oscillations so far. The third chapter introduces the MONOLITH project in the context of atmospheric neutrino oscillations. The MONOLITH detector is described and the main active component, the glass spark chamber, is presented. Chapter four presents the practical part of this thesis. A test setup of a glass spark chamber is built up including a cosmics trigger and a data acquisition system. Cosmic ray muons are used for the investigation of the chamber. During a long term test, the stability of the efficiency and the noise rate of the chamber are investigated. A status report of the results is given. The results are taken as input for the trigger development. In chapter five, the development of the trigger algorithm is presented. In the beginning, the structural design of the trigger algorithm is described. The efficiency and the rate of the trigger algorithm are investigated using two event sources, a Monte Carlo neutrino event sample and a generated noise sample. For the analysis, the data sources are processed by several processing stages which are visualized by corresponding event displays. In the course of the data processing

  8. Modeling Algorithms in SystemC and ACL2

    Directory of Open Access Journals (Sweden)

    John W. O'Leary

    2014-06-01

    Full Text Available We describe the formal language MASC, based on a subset of SystemC and intended for modeling algorithms to be implemented in hardware. By means of a special-purpose parser, an algorithm coded in SystemC is converted to a MASC model for the purpose of documentation, which in turn is translated to ACL2 for formal verification. The parser also generates a SystemC variant that is suitable as input to a high-level synthesis tool. As an illustration of this methodology, we describe a proof of correctness of a simple 32-bit radix-4 multiplier.

  9. Methodology, models and algorithms in thermographic diagnostics

    CERN Document Server

    Živčák, Jozef; Madarász, Ladislav; Rudas, Imre J

    2013-01-01

    This book presents the methodology and techniques of thermographic applications, focusing primarily on medical thermography implemented for parametrizing the diagnostics of the human body. The first part of the book describes the basics of infrared thermography, the possibilities of thermographic diagnostics and the physical nature of thermography. The second half covers tools of intelligent engineering applied to the solving of selected applications and projects. Thermographic diagnostics was applied to the problems of paraplegia and tetraplegia and carpal tunnel syndrome (CTS). The results of the research activities were produced in cooperation with four projects within the Ministry of Education, Science, Research and Sport of the Slovak Republic entitled Digital control of complex systems with two degrees of freedom, Progressive methods of education in the area of control and modeling of complex object oriented systems on aircraft turbocompressor engines, Center for research of control of te...

  11. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    Science.gov (United States)

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of the local search into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers; this is the second contribution of the paper. The third contribution is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms. PMID:24977204

  12. Comparative Study on a Solving Model and Algorithm for a Flush Air Data Sensing System

    Directory of Open Access Journals (Sweden)

    Yanbin Liu

    2014-05-01

    Full Text Available With the development of high-performance aircraft, precise air data are necessary to complete challenging tasks such as flight maneuvering with large angles of attack and high speed. As a result, the flush air data sensing system (FADS) was developed to satisfy the stricter control demands. In this paper, comparative studies on the solving model and algorithm for FADS are conducted. First, the basic principles of FADS are given to elucidate the nonlinear relations between the inputs and the outputs. Then, several different solving models and algorithms for FADS are provided to compute the air data, including the angle of attack, sideslip angle, dynamic pressure and static pressure. Afterwards, the evaluation criteria for the resulting models and algorithms are discussed to satisfy the real design demands. Furthermore, a simulation using these algorithms is performed to identify the properties of the distinct models and algorithms, such as measuring precision and real-time features. The advantages of these models and algorithms under different flight conditions are also analyzed, and some suggestions on their engineering applications are proposed to help future research.

  13. RSMASS system model development

    International Nuclear Information System (INIS)

    Marshall, A.C.; Gallup, D.R.

    1998-01-01

    RSMASS system mass models have been used for more than a decade to make rapid estimates of space reactor power system masses. This paper reviews the evolution of the RSMASS models and summarizes present capabilities. RSMASS has evolved from a simple model used to make rough estimates of space reactor and shield masses into a versatile space reactor power system model. RSMASS uses unique reactor and shield models that permit rapid mass optimization calculations for a variety of space reactor power and propulsion systems. The RSMASS-D upgrade of the original model includes algorithms for the balance of the power system, a number of reactor and shield modeling improvements, and an automatic mass optimization scheme. The RSMASS-D suite of codes covers a very broad range of reactor and power conversion system options as well as propulsion and bimodal reactor systems. Reactor choices include in-core and ex-core thermionic reactors, liquid metal cooled reactors, particle bed reactors, and prismatic configuration reactors. Power conversion options include thermoelectric, thermionic, Stirling, Brayton, and Rankine approaches. Program output includes all major component masses and dimensions, efficiencies, and a description of the design parameters for a mass-optimized system. In the past, RSMASS has been used as an aid to identify and select promising concepts for space power applications. The RSMASS modeling approach has been demonstrated to be a valuable tool for guiding optimization of the power system design; consequently, the model is useful during system design and development as well as during the selection process. An improved in-core thermionic reactor system model, RSMASS-T, is now under development. The current development of the RSMASS-T code represents the next evolutionary stage of the RSMASS models. RSMASS-T includes many modeling improvements and is planned to be more user-friendly. RSMASS-T will be released as a fully documented, certified code at the end of

  14. Calibration of microscopic traffic simulation models using metaheuristic algorithms

    Directory of Open Access Journals (Sweden)

    Miao Yu

    2017-06-01

    Full Text Available This paper presents several metaheuristic algorithms to calibrate a microscopic traffic simulation model. The genetic algorithm (GA), Tabu Search (TS), and combinations of the two (i.e., warmed GA and warmed TS) are implemented and compared. A set of traffic data collected from the I-5 Freeway, Los Angeles, California, is used. Objective functions, built on flow and speed, are defined to minimize the difference between simulated and field traffic data. Several car-following parameters in VISSIM, which can significantly affect the simulation outputs, are selected for calibration. The GA, TS, and warmed GA and TS all reach a better match to the field measurements than simulations using only VISSIM's default parameters. Overall, TS performs very well and can be used to calibrate parameters. Combining metaheuristic algorithms clearly performs better and is therefore highly recommended for calibrating microscopic traffic simulation models.
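
    The calibration loop can be sketched compactly. Below, SciPy's differential evolution (an evolutionary optimizer standing in for the paper's GA/TS metaheuristics) minimizes a relative-error objective; the two-parameter toy simulator, parameter bounds and field values are illustrative assumptions, since a real study would invoke VISSIM at each evaluation.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Field measurements (illustrative): mean flow (veh/h) and speed (km/h).
        field = np.array([1850.0, 92.0])

        def simulate(params):
            # Stand-in for a VISSIM run: maps two car-following parameters
            # to simulated flow and speed.
            headway, accel = params
            return np.array([2200.0 - 250.0 * headway, 70.0 + 15.0 * accel])

        def objective(params):
            # Relative RMSE between simulated and field flow/speed.
            sim = simulate(params)
            return np.sqrt(np.mean(((sim - field) / field) ** 2))

        bounds = [(0.5, 3.0),   # desired time headway (s), assumed range
                  (0.5, 3.0)]   # maximum acceleration (m/s^2), assumed range
        result = differential_evolution(objective, bounds, seed=1)
        print(result.x, result.fun)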

  15. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    Science.gov (United States)

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a combined K-Means and Kernighan-Lin heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
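
    The geometric half of the K&K approach can be illustrated in a few lines. The sketch below uses scikit-learn's K-Means to group grid cells into spatially compact subdomains, one per computing node; the lattice size and node count are assumptions, and the Kernighan-Lin refinement and the communication-cost model are omitted.

        import numpy as np
        from sklearn.cluster import KMeans

        # Illustrative 100 x 100 study area; each cell is one unit of work.
        nx, ny, n_nodes = 100, 100, 8
        xx, yy = np.meshgrid(np.arange(nx), np.arange(ny))
        cells = np.column_stack([xx.ravel(), yy.ravel()])

        # K-Means groups cells into compact subdomains, one per node.
        labels = KMeans(n_clusters=n_nodes, n_init=10, random_state=0).fit_predict(cells)

        # Load-balance check: cells allocated per node (ideally ~nx*ny/n_nodes).
        print(np.bincount(labels))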

  16. Algorithms

    Indian Academy of Sciences (India)

    In the program shown in Figure 1, we have repeated the algorithm M times, and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...

  17. Algorithms

    Indian Academy of Sciences (India)

    algorithms built into the computer corresponding to the logic-circuit rules that are used to .... For the purpose of carrying out arithmetic or logical operations the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...

  18. Programming Non-Trivial Algorithms in the Measurement Based Quantum Computation Model

    Energy Technology Data Exchange (ETDEWEB)

    Alsing, Paul [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Fanto, Michael [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Lott, Capt. Gordon [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Tison, Christoper C. [United States Air Force Research Laboratory, Wright-Patterson Air Force Base

    2014-01-01

    We provide a set of prescriptions for implementing a quantum circuit model algorithm as a measurement based quantum computing (MBQC) algorithm [1, 2] via a large cluster state. As a means of illustration we draw upon our numerical modeling experience to describe a large graph state capable of searching a logical 8-element list (a non-trivial version of Grover's algorithm [3] with feedforward). We develop several prescriptions based on analytic evaluation of cluster states and graph state equations which can be generalized into any circuit model operations. Such a resulting cluster state will be able to carry out the desired operation with appropriate measurements and feedforward error correction. We also discuss the physical implementation and the analysis of the principal 3-qubit entangling gate (Toffoli) required for a non-trivial feedforward realization of an 8-element Grover search algorithm.

  19. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    Science.gov (United States)

    Fernandez-Lozano, C.; Canto, C.; Gestal, M.; Andrade-Garda, J. M.; Rabuñal, J. R.; Dorado, J.; Pazos, A.

    2013-01-01

    Given the background of the use of Neural Networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machine (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected. PMID:24453933
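
    A minimal sketch of the GA-with-SVM-fitness idea follows, assuming scikit-learn and synthetic data in place of the juice spectra. Each chromosome is a boolean mask over the variables, and its fitness is the cross-validated accuracy of an SVM trained on the selected variables only; the population size, generation count and mutation rate are illustrative choices.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=120, n_features=20,
                                   n_informative=5, random_state=0)

        def fitness(mask):
            # SVM cross-validation accuracy on the selected variables only.
            if not mask.any():
                return 0.0
            return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

        pop = rng.random((30, X.shape[1])) < 0.5   # random boolean masks
        for generation in range(10):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[::-1][:10]]   # truncation selection
            children = []
            while len(children) < len(pop) - len(parents):
                a, b = parents[rng.integers(10, size=2)]
                cut = rng.integers(1, X.shape[1])          # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                child ^= rng.random(X.shape[1]) < 0.05     # bit-flip mutation
                children.append(child)
            pop = np.vstack([parents, children])

        best = max(pop, key=fitness)
        print("selected variables:", np.flatnonzero(best))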

  20. GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2011-01-01

    Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.

  1. The use of genetic algorithms to model protoplanetary discs

    Science.gov (United States)

    Hetem, Annibal; Gregorio-Hetem, Jane

    2007-12-01

    The protoplanetary discs of T Tauri and Herbig Ae/Be stars have previously been studied using geometric disc models to fit their spectral energy distribution (SED). The simulations provide a means to reproduce the signatures of various circumstellar structures, which are related to different levels of infrared excess. With the aim of improving our previous model, which assumed a simple flat-disc configuration, we adopt here a reprocessing flared-disc model that assumes hydrostatic, radiative equilibrium. We have developed a method to optimize the parameter estimation based on genetic algorithms (GAs). This paper describes the implementation of the new code, which has been applied to Herbig stars from the Pico dos Dias Survey catalogue, in order to illustrate the quality of the fitting for a variety of SED shapes. The star AB Aur was used as a test of the GA parameter estimation, and demonstrates that the new code reproduces successfully a canonical example of the flared-disc model. The GA method gives a good quality of fit, but the range of input parameters must be chosen with caution, as unrealistic disc parameters can be derived. It is confirmed that the flared-disc model fits the flattened SEDs typical of Herbig stars; however, embedded objects (increasing SED slope) and debris discs (steeply decreasing SED slope) are not well fitted with this configuration. Even considering the limitation of the derived parameters, the automatic process of SED fitting provides an interesting tool for the statistical analysis of the circumstellar luminosity of large samples of young stars.

  2. Optimisation of Hidden Markov Model using Baum–Welch algorithm ...

    Indian Academy of Sciences (India)

    s12040-016-0780-0. Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya. J C Joshi, Tankeshwar Kumar, Sunita Srivastava and Divya Sachdeva.

  3. Optimisation of Hidden Markov Model using Baum–Welch algorithm

    Indian Academy of Sciences (India)

    Journal of Earth System Science, Volume 126, Issue 1, February 2017. Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya. J C Joshi, Tankeshwar Kumar, Sunita Srivastava, Divya Sachdeva. ...
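
    The Baum–Welch optimisation itself is available off the shelf. The sketch below, assuming the hmmlearn package is installed, fits a two-state Gaussian HMM whose fit routine runs Baum–Welch (EM); the synthetic temperature series merely stands in for the Himalayan station data.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM  # hmmlearn assumed installed

        # Synthetic daily temperatures standing in for the station series.
        rng = np.random.default_rng(0)
        obs = np.concatenate([rng.normal(5, 2, 200),    # cold regime
                              rng.normal(18, 3, 200)])  # warm regime
        X = obs.reshape(-1, 1)

        # fit() runs Baum-Welch (EM) to optimise transition and emission parameters.
        model = GaussianHMM(n_components=2, n_iter=100, random_state=0).fit(X)
        print(model.means_.ravel())   # recovered regime means
        print(model.transmat_)        # learned transition matrix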

  4. Epidemic Processes on Complex Networks : Modelling, Simulation and Algorithms

    NARCIS (Netherlands)

    Van de Bovenkamp, R.

    2015-01-01

    Local interactions on a graph will lead to global dynamic behaviour. In this thesis we focus on two types of dynamic processes on graphs: the Susceptible-Infected-Susceptible (SIS) virus spreading model, and gossip-style epidemic algorithms. The largest part of this thesis is devoted to the SIS

  5. Stochastic disturbance rejection in model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we consider model predictive control with stochastic disturbances and input constraints. We present an algorithm which can solve this problem approximately but with arbitrarily high accuracy. The optimization at each time step is a closed-loop optimization and therefore takes into

  6. Iteration Capping For Discrete Choice Models Using the EM Algorithm

    NARCIS (Netherlands)

    Kabatek, J.

    2013-01-01

    The Expectation-Maximization (EM) algorithm is a well-established estimation procedure which is used in many domains of econometric analysis. Recent application in a discrete choice framework (Train, 2008) facilitated estimation of latent class models allowing for very flexible treatment of unobserved

  7. Evolving the Topology of Hidden Markov Models using Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Thomsen, Réne

    2002-01-01

    Hidden Markov models (HMM) are widely used for speech recognition and have recently gained a lot of attention in the bioinformatics community, because of their ability to capture the information buried in biological sequences. Usually, heuristic algorithms such as Baum-Welch are used to estimate ...

  8. Models and algorithms for Integration of Vehicle and Crew Scheduling

    NARCIS (Netherlands)

    R. Freling (Richard); D. Huisman (Dennis); A.P.M. Wagelmans (Albert)

    2000-01-01

    This paper deals with models, relaxations and algorithms for an integrated approach to vehicle and crew scheduling. We discuss potential benefits of integration and provide an overview of the literature, which considers mainly partial integration. Our approach is new in the sense that we

  9. Heterogenous Agents Model with the Worst Out Algorithm

    Czech Academy of Sciences Publication Activity Database

    Vácha, Lukáš; Vošvrda, Miloslav

    No. 8 (2006), pp. 3-19. ISSN 1801-5999. Institutional research plan: CEZ:AV0Z10750506. Keywords: efficient market hypothesis * fractal market hypothesis * agents' investment horizons * agents' trading strategies * technical trading rules * heterogeneous agent model with stochastic memory * Worst out algorithm. Subject RIV: AH - Economics

  10. Research on Suspension with Novel Dampers Based on Developed FOA-LQG Control Algorithm

    Directory of Open Access Journals (Sweden)

    Xiao Ping

    2017-01-01

    Full Text Available To enhance the robustness of suspension working performance, a vehicle suspension with a permanent-magnet magnetic-valve magnetorheological damper (PMMVMD) was studied. Firstly, the mechanical structure of the traditional magnetorheological damper (MD) used in vehicle suspensions was redesigned by introducing a permanent magnet and a magnetic valve. Based on electromagnetic theory and the Bingham model, a prediction model of the damping force was built. On this basis, a two-degree-of-freedom vehicle suspension model was established. In addition, a fruit fly optimization algorithm (FOA)-linear quadratic Gaussian (LQG) control algorithm suitable for PMMVMD suspensions was designed by extending the standard FOA. Finally, comparison simulation experiments and bench tests were conducted with white noise and a sine wave as the road surface inputs; the results indicated that the working performance of the PMMVMD suspension based on the FOA-LQG control algorithm was good.
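
    The LQG design step at the core of such a controller reduces to solving a Riccati equation for a state-feedback gain. The sketch below, using SciPy, does this for an illustrative two-state body model; the mass, stiffness, damping and weighting matrices are assumptions, and in the paper's scheme the FOA would tune the Q and R weights.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Illustrative sprung-mass model: states [displacement, velocity],
        # control input is the damper force. All values are assumptions.
        m, k, c = 300.0, 16000.0, 1000.0
        A = np.array([[0.0, 1.0], [-k / m, -c / m]])
        B = np.array([[0.0], [1.0 / m]])
        Q = np.diag([1e4, 1e2])   # state weights the FOA would tune
        R = np.array([[1e-3]])    # control-effort weight

        # Solve the continuous algebraic Riccati equation, then form the gain.
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)
        print("state-feedback gain:", K)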

  11. DOOCS environment for FPGA-based cavity control system and control algorithms development

    Energy Technology Data Exchange (ETDEWEB)

    Pucyk, P.; Koprek, W.; Kaleta, P.; Szewinski, J.; Pozniak, K.T.; Czarski, T.; Romaniuk, R.S. [Technical Univ. Warsaw (PL). Inst. of Electronic Systems (ISE)

    2005-07-01

    The paper describes the concept and realization of the DOOCS control software for the FPGA-based TESLA cavity controller and simulator (SIMCON). It is based on universal software components created for laboratory purposes and used in a MATLAB based control environment. These modules have recently been adapted to the DOOCS environment to ensure a unified software-to-hardware communication model. The presented solution can also be used as a general platform for control algorithm development. The proposed interfaces between the MATLAB and DOOCS modules allow the developed algorithm to be checked in the operation environment before implementation in the FPGA. As examples, two systems are presented. (orig.)

  12. Application of genetic algorithm in radio ecological models parameter determination

    International Nuclear Information System (INIS)

    Pantelic, G.

    2006-01-01

    The method of genetic algorithms was used to determine the biological half-life of 137Cs in cow milk after the accident in Chernobyl. Methodologically, genetic algorithms are based on the fact that natural processes tend to optimize themselves, and therefore this method should be more efficient in providing optimal solutions in the modeling of radioecological and environmental events. The calculated biological half-life of 137Cs in milk is (32 ± 3) days and the transfer coefficient from grass to milk is (0.019 ± 0.005). (authors)

  14. Fuzzy model predictive control algorithm applied in nuclear power plant

    International Nuclear Information System (INIS)

    Zuheir, Ahmad

    2006-01-01

    The aim of this paper is to design a predictive controller based on a fuzzy model. The Takagi-Sugeno fuzzy model with an adaptive B-splines neuro-fuzzy implementation is used and incorporated as a predictor in a predictive controller. An optimization approach with a simplified gradient technique is used to calculate predictions of the future control actions. In this approach, adaptation of the fuzzy model using dynamic process information is carried out to build the predictive controller. The easy description of the fuzzy model and the easy computation of the gradient vector during the optimization procedure are the main advantages of the computation algorithm. The algorithm is applied to the control of a U-tube steam generator (UTSG) used for electricity generation. (author)

  15. Developing an Orographic Adjustment for the SCaMPR Algorithm

    Science.gov (United States)

    Yucel, I.; Akcelik, M.; Kuligowski, R. J.

    2016-12-01

    In support of the National Oceanic and Atmospheric Administration (NOAA) National Weather Service's (NWS) flash flood warning and heavy precipitation forecast efforts, the NOAA National Environmental Satellite Data and Information Service (NESDIS) Center for Satellite Applications and Research (STAR) has been providing satellite-based precipitation estimates operationally since 1978. The GOES-R Algorithm Working Group (AWG) is responsible for developing and demonstrating algorithms for retrieving various geophysical parameters from GOES data, including rainfall. The rainfall algorithm selected by the GOES-R AWG is the Self-Calibrating Multivariate Precipitation Retrieval (SCaMPR). However, the SCaMPR does not currently make any adjustments for the effects of complex topography on rainfall. Elevation-dependent bias structures suggest that there is an increased sensitivity to deep convection, which generates heavy precipitation at the expense of missing lighter precipitation events. A regionally dependent empirical elevation-based bias correction technique may help improve the quality of satellite-derived precipitation products. This study investigates the potential for improving the SCaMPR algorithm by incorporating an orographic correction based on calibration of the SCaMPR against rain gauge transects in northwestern Mexico to identify correctable biases related to elevation, slope, and wind direction. The findings suggest that continued improvement to the developed orographic correction scheme is warranted in order to advance quantitative precipitation estimation in complex terrain regions for use in weather forecasting and hydrologic applications. The relationships that are isolated during this analysis will be used to create a more accurate terrain adjustment for SCaMPR.

  16. Recent Progress in Development of SWOT River Discharge Algorithms

    Science.gov (United States)

    Pavelsky, Tamlin M.; Andreadis, Konstantinos; Biancamaria, Sylvain; Durand, Michael; Moller, Delwyn; Rodriguez, Ernesto; Smith, Laurence C.

    2013-09-01

    The Surface Water and Ocean Topography (SWOT) Mission is a satellite mission under joint development by NASA and CNES. The mission will use interferometric synthetic aperture radar technology to continuously map, for the first time, water surface elevations and water surface extents in rivers, lakes, and oceans at high spatial resolutions. Among the primary goals of SWOT is the accurate retrieval of river discharge directly from SWOT measurements. Although it is central to the SWOT mission, discharge retrieval represents a substantial challenge due to uncertainties in SWOT measurements and because traditional discharge algorithms are not optimized for SWOT-like measurements. However, recent work suggests that SWOT may also have unique strengths that can be exploited to yield accurate estimates of discharge. A NASA-sponsored workshop convened June 18-20, 2012 at the University of North Carolina focused on progress and challenges in developing SWOT-specific discharge algorithms. Workshop participants agreed that the only viable approach to discharge estimation will be based on a slope-area scaling method such as Manning's equation, but modified slightly to reflect the fact that SWOT will estimate reach-averaged rather than cross-sectional discharge. While SWOT will provide direct measurements of some key parameters such as width and slope, others such as baseflow depth and channel roughness must be estimated. Fortunately, recent progress has suggested several algorithms that may allow the simultaneous estimation of these quantities from SWOT observations by using multitemporal observations over several adjacent reaches. However, these algorithms will require validation, which will require the collection of new field measurements, airborne imagery from AirSWOT (a SWOT analogue), and compilation of global datasets of channel roughness, river width, and other relevant variables.
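
    The slope-area scaling idea can be made concrete with Manning's equation. The sketch below computes a reach-averaged discharge from width and water-surface slope (both observable by SWOT) plus an assumed depth and roughness; every numeric value is an illustrative assumption.

        import math

        n = 0.035        # channel roughness coefficient (must be estimated)
        width = 120.0    # effective river width (m), observable by SWOT
        depth = 2.4      # mean depth (m): baseflow depth plus elevation anomaly
        slope = 1.5e-4   # water-surface slope, observable by SWOT

        area = width * depth                           # flow cross-sectional area
        hydraulic_radius = area / (width + 2 * depth)  # area / wetted perimeter
        Q = (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)
        print(f"discharge ~ {Q:.0f} m^3/s")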

  17. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems. © 2013 Elsevier Inc.

  18. Model-based Bayesian signal extraction algorithm for peripheral nerves

    Science.gov (United States)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios and thus limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal to noise and signal to interference ratio of extracted test signals two to three fold, as well as increased the correlation coefficient between the original and recovered signals by 10–20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of

  19. An Algorithm for Modified Times Series Analysis Method for Modeling and Prognosis of the River Water Quality

    Directory of Open Access Journals (Sweden)

    Petrov M.

    2007-12-01

    Full Text Available An algorithm and programs for the modeling, analysis, and prognosis of river water quality have been developed as a modified method of time series analysis (TSA). The algorithm and programs are used for modeling and prognosis of the water quality of Bulgarian river ecosystems.

  20. Development of CAD implementing the algorithm of boundary elements’ numerical analytical method

    Directory of Open Access Journals (Sweden)

    Yulia V. Korniyenko

    2015-03-01

    Full Text Available Until recently, algorithms for the numerical-analytical boundary element method were implemented as programs written in the MATLAB environment language. Each program had a local character, i.e., it solved one particular problem: calculation of a beam, frame, arch, etc. Constructing matrices in these programs was carried out "manually" and was therefore time-consuming. The research was aimed at a reasoned choice of programming language for a new CAD system that implements the algorithm of the numerical-analytical boundary element method and provides visualization tools for the initial objects and calculation results. The research conducted shows that, among a wide variety of programming languages, the most efficient one for developing such a CAD system is Java. This language provides tools not only for the development of the computational part of the CAD system, but also for building the graphical interface for constructing geometrical models and interpreting calculation results.

  1. Development of computed tomography system and image reconstruction algorithm

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Ashhar Khalid; Azaman Ahmad; Khairul Anuar Mohd Salleh; Ab Razak Hamzah

    2006-01-01

    Computed tomography is one of the most advanced and powerful nondestructive inspection techniques and is currently used in many different industries. In several CT systems, detection has been performed by a combination of an X-ray image intensifier and a charge-coupled device (CCD) camera, or by using a line-array detector. The recent development of X-ray flat panel detectors has made fast CT imaging feasible and practical. This paper therefore explains the arrangement of a new detection system, which uses the existing high-resolution (127 μm pixel size) flat panel detector at MINT, and the image reconstruction technique developed. The aim of the project is to develop a prototype flat-panel-detector-based CT imaging system for NDE. The prototype consists of an X-ray tube, a flat panel detector system, a rotation table and a computer system to control the sample motion and image acquisition. The project is hence divided into two major tasks: first, to develop the image reconstruction algorithm, and second, to integrate the X-ray imaging components into one CT system. The image reconstruction algorithm, using the filtered back-projection method, is developed and compared to other techniques. MATLAB is the tool used for the simulations and computations in this project. (Author)
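
    Filtered back-projection is compact enough to demonstrate end to end. The sketch below, assuming scikit-image is available (newer releases use the filter_name keyword), simulates projections of the standard Shepp-Logan phantom and reconstructs them by ramp-filtered back-projection.

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import iradon, radon, rescale

        # Simulate projections of a standard phantom, then reconstruct with FBP.
        image = rescale(shepp_logan_phantom(), 0.5)
        theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
        sinogram = radon(image, theta=theta)

        # iradon ramp-filters each projection and back-projects it.
        reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")
        print("RMS error:", np.sqrt(np.mean((reconstruction - image) ** 2)))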

  2. Model and Algorithm for Substantiating Solutions for Organization of High-Rise Construction Project

    Science.gov (United States)

    Anisimov, Vladimir; Anisimov, Evgeniy; Chernysh, Anatoliy

    2018-03-01

    The paper develops models and an algorithm for forming an optimal plan for organizing the material and logistical processes of a high-rise construction project and their financial support. The model is based on representing the optimization procedure as a nonlinear discrete programming problem, which consists in minimizing the execution time of a set of interrelated works by a limited number of partially interchangeable performers while limiting the total cost of performing the work. The proposed model and algorithm are the basis for creating specific organization management methodologies for high-rise construction projects.

  3. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    DEFF Research Database (Denmark)

    Frydendall, Jan; Brandt, J.; Christensen, J. H.

    2009-01-01

    A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark....... In this paper, the algorithm and the results from experiments designed to find the optimal setup of the algorithm are described. The algorithm has been developed and optimized via eight different experiments where the results from different model setups have been tested against measurements from the EMEP...... configuration of the data assimilation algorithm, were found. The data assimilation algorithm will in the future be used in the operational THOR integrated air pollution forecast system, which includes the DEOM....
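
    Statistical interpolation of this kind is typically built on the optimal-interpolation update, a minimal version of which is sketched below; the three-point state, single observation and covariance matrices are illustrative assumptions rather than DEOM's actual configuration.

        import numpy as np

        # Analysis update x_a = x_b + K (y - H x_b), with gain
        # K = B H^T (H B H^T + R)^(-1). All numbers are illustrative.
        x_b = np.array([40.0, 55.0, 60.0])   # background field at 3 grid points
        y = np.array([48.0])                 # one station observation
        H = np.array([[0.0, 1.0, 0.0]])      # observation operator: picks point 2
        dist = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
        B = 25.0 * np.exp(-dist / 2.0)       # background-error covariance
        R = np.array([[4.0]])                # observation-error variance

        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        x_a = x_b + (K @ (y - H @ x_b)).ravel()
        print(x_a)  # neighbours of the observed point are nudged via B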

  4. Development and Testing of the Gust Front Algorithm.

    Science.gov (United States)

    1987-11-01

    Development and Testing of the Gust Front Algorithm. Arthur Witt and Steven D. Smith, NOAA. [Only fragments of the scanned report are recoverable: a list of tables (List of Thresholds; Tower Data for the Norman and Cimarron Radars) and a figure caption referring to a) Norman (NRO) and b) Cimarron (CIM), looking at the same gust front (April 13, 19..).] ... Doppler radars (the NSSL radars located at Norman and Cimarron (CIM), which is about 40 km NW of Norman) looking at the same gust front. The comparison was

  5. Development of Image Reconstruction Algorithms in electrical Capacitance Tomography

    International Nuclear Information System (INIS)

    Fernandez Marron, J. L.; Alberdi Primicia, J.; Barcala Riveira, J. M.

    2007-01-01

    Electrical Capacitance Tomography (ECT) has not yet been developed well enough to be used at the industrial level. This is due, first, to difficulties in measuring very small capacitances (in the range of femtofarads) and, second, to the problem of reconstructing the images on-line. The latter problem is aggravated by the small number of electrodes (at most 16), which causes the usual reconstruction algorithms to make many errors. This work describes a new, purely geometrical method that could be used for this purpose. (Author) 4 refs

  6. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    International Nuclear Information System (INIS)

    Rolland, Joran; Simonnet, Eric

    2015-01-01

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection-mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories, for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central-limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations

  7. Parallelization of the model-based iterative reconstruction algorithm DIRA

    International Nuclear Information System (INIS)

    Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.

    2016-01-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPUs). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelized using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause is explained. (authors)

  8. An API for Integrating Spatial Context Models with Spatial Reasoning Algorithms

    DEFF Research Database (Denmark)

    Kjærgaard, Mikkel Baun

    2006-01-01

    The integration of context-aware applications with spatial context models is often done using a common query language. However, algorithms that estimate and reason about spatial context information can benefit from a tighter integration. An object-oriented API makes such integration possible and can help reduce the complexity of algorithms, making them easier to maintain and develop. This paper proposes an object-oriented API for context models of the physical environment, together with extensions to a location modeling approach called geometric space trees that provide adequate support for location modeling. The utility of the API is evaluated in several real-world cases from an indoor location system, spanning several types of spatial reasoning algorithms.

  9. Parallel Genetic Algorithms for calibrating Cellular Automata models: Application to lava flows

    International Nuclear Information System (INIS)

    D'Ambrosio, D.; Spataro, W.; Di Gregorio, S.; Calabria Univ., Cosenza; Crisci, G.M.; Rongo, R.; Calabria Univ., Cosenza

    2005-01-01

    Cellular Automata are highly nonlinear dynamical systems which are suitable for simulating natural phenomena whose behaviour may be specified in terms of local interactions. The Cellular Automata model SCIARA, developed for the simulation of lava flows, has been shown to be able to reproduce the behaviour of Etnean events. However, in order to apply the model to the prediction of future scenarios, a thorough calibration phase is required. This work presents the application of Genetic Algorithms, general-purpose search algorithms inspired by natural selection and genetics, to the parameter optimisation of the SCIARA model. Difficulties due to the elevated computational time suggested the adoption of a Master-Slave Parallel Genetic Algorithm for the calibration of the model with respect to the 2001 Mt. Etna eruption. Results demonstrated the usefulness of the approach, both in terms of computing time and quality of performed simulations

  10. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves knowledge of the adopted features and their relation with the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is an ML algorithm that performs FS as part of its
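
    The information-gain style of filtering is easy to sketch. Below, scikit-learn's mutual-information estimator ranks synthetic predictors against a binary presence/absence target; the data, the feature count and the choice to discard the five lowest-ranked variables are illustrative assumptions.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import mutual_info_classif

        # Stand-in predictor matrix: 15 terrain/climate variables and a
        # binary permafrost presence/absence target (synthetic data).
        X, y = make_classification(n_samples=500, n_features=15,
                                   n_informative=4, n_redundant=3,
                                   random_state=0)

        # Filter-style FS: mutual information between each predictor and target.
        mi = mutual_info_classif(X, y, random_state=0)
        ranking = np.argsort(mi)[::-1]
        print("variables ranked by relevance:", ranking)
        print("candidates to discard:", ranking[-5:])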

  11. Development of an algorithm for quantifying extremity biological tissue

    International Nuclear Information System (INIS)

    Pavan, Ana L.M.; Miranda, Jose R.A.; Pina, Diana R. de

    2013-01-01

    Computed radiography (CR) has become the most widely used device for image acquisition and production since its introduction in the 1980s. Detection and early diagnosis, obtained via CR, are important for the successful treatment of diseases such as arthritis, metabolic bone diseases, tumors, infections and fractures. However, the standards used for the optimization of these images are based on international protocols. It is therefore necessary to compose radiographic techniques for the CR system that provide a secure medical diagnosis with doses as low as reasonably achievable. To this end, the aim of this work is to develop a tissue-quantifying algorithm, allowing the construction of a homogeneous phantom used to compose such techniques. A database of computed tomography images of the hand and wrist of adult patients was developed. Using the MATLAB software, a computational algorithm was developed that is able to quantify the average thickness of the soft tissue and bones present in the anatomical region under study, as well as the corresponding thickness in simulator materials (aluminium and Lucite). This was possible through the application of masks and a Gaussian-removal technique on histograms. As a result, an average soft tissue thickness of 18.97 mm and a bone tissue thickness of 6.15 mm were obtained, with equivalents of 23.87 mm in acrylic and 1.07 mm in aluminium. These results agreed with the average thickness of the biological tissues of a standard patient's hand, enabling the construction of a homogeneous phantom

  12. Statistical equivalence of prediction models of the soil sorption coefficient obtained using different log P algorithms.

    Science.gov (United States)

    Olguin, Carlos José Maria; Sampaio, Silvio César; Dos Reis, Ralpho Rinaldo

    2017-10-01

    The soil sorption coefficient normalized to the organic carbon content (Koc) is a physicochemical parameter used in environmental risk assessments and in determining the final fate of chemicals released into the environment. Several models for predicting this parameter have been proposed based on the relationship between log Koc and log P. The difficulty and cost of obtaining experimental log P values led to the development of algorithms to calculate these values, some of which are free to use. However, quantitative structure-property relationship (QSPR) studies did not detail how or why a particular algorithm was chosen. In this study, we evaluated several free algorithms for calculating log P in the modeling of log Koc, using a broad and diverse set of compounds (n = 639) that included several chemical classes. In addition, we propose the adoption of a simple test to verify if there is statistical equivalence between models obtained using different data sets. Our results showed that the ALOGPs, KOWWIN and XLOGP3 algorithms generated the best models for modeling Koc, and these models are statistically equivalent. This finding shows that it is possible to use the different algorithms without compromising statistical quality and predictive capacity. Copyright © 2017 Elsevier Ltd. All rights reserved.
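
    The underlying QSPR model is a one-variable linear regression of log Koc on log P. The sketch below fits and scores such a model on a handful of made-up (log P, log Koc) pairs; a real study would use hundreds of measured compounds with log P computed by ALOGPs, KOWWIN or XLOGP3.

        import numpy as np

        # Illustrative (log P, log Koc) pairs; values are invented.
        log_p = np.array([0.5, 1.2, 2.0, 2.8, 3.5, 4.1])
        log_koc = np.array([1.1, 1.5, 2.0, 2.4, 2.9, 3.3])

        # Linear QSPR model: log Koc = a * log P + b.
        a, b = np.polyfit(log_p, log_koc, 1)
        pred = a * log_p + b
        r2 = 1 - np.sum((log_koc - pred) ** 2) / np.sum((log_koc - log_koc.mean()) ** 2)
        print(f"log Koc = {a:.2f} log P + {b:.2f}, R^2 = {r2:.3f}")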

  13. A Genetic Algorithm Approach for Modeling a Grounding Electrode

    Science.gov (United States)

    Mishra, Arbind Kumar; Nagaoka, Naoto; Ametani, Akihiro

    This paper proposes a genetic algorithm based approach to determine a grounding electrode model circuit composed of resistances, inductances and capacitances. The proposed methodology determines the model circuit parameters, based on a general ladder circuit, directly from a measured result. Transient voltages of some electrodes were measured when applying a step-like current. An EMTP simulation of the transient voltage on the grounding electrode has been carried out by adopting the proposed model circuits. The accuracy of the proposed method has been confirmed to be high in comparison with the measured transient voltages.

  14. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    Science.gov (United States)

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented as an Excel Visual Basic for Applications (VBA) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of a time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented in a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.

  15. A comparison of updating algorithms for large $N$ reduced models

    CERN Document Server

    Pérez, Margarita García; Keegan, Liam; Okawa, Masanori; Ramos, Alberto

    2015-01-01

    We investigate Monte Carlo updating algorithms for simulating $SU(N)$ Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole $SU(N)$ matrix at once, or iterating through $SU(2)$ subgroups of the $SU(N)$ matrix, we find the same critical exponent in both cases, and only a slight difference between the two.

  16. Sustainable logistics and transportation optimization models and algorithms

    CERN Document Server

    Gakis, Konstantinos; Pardalos, Panos

    2017-01-01

    Focused on the logistics and transportation operations within a supply chain, this book brings together the latest models, algorithms, and optimization possibilities. Logistics and transportation problems are examined within a sustainability perspective to offer a comprehensive assessment of environmental, social, ethical, and economic performance measures. Featured models, techniques, and algorithms may be used to construct policies on alternative transportation modes and technologies, green logistics, and incentives by the incorporation of environmental, economic, and social measures. Researchers, professionals, and graduate students in urban regional planning, logistics, transport systems, optimization, supply chain management, business administration, information science, mathematics, and industrial and systems engineering will find the real life and interdisciplinary issues presented in this book informative and useful.

  17. Structural assessment of aerospace components using image processing algorithms and Finite Element models

    DEFF Research Database (Denmark)

    Stamatelos, Dimtrios; Kappatos, Vassilios

    2017-01-01

    Purpose – This paper presents the development of an advanced structural assessment approach for aerospace components (metallic and composite). This work focuses on developing an automatic image processing methodology based on Non Destructive Testing (NDT) data and numerical models for predicting the residual strength of these components. Design/methodology/approach – An image processing algorithm, based on the threshold method, has been developed to process and quantify the geometric characteristics of damages. Then, a parametric Finite Element (FE) model of the damaged component is developed based on the inputs acquired from the image processing algorithm. The analysis of the metallic structures employs the Extended FE Method (XFEM), while for the composite structures the Cohesive Zone Model (CZM) technique with Progressive Damage Modelling (PDM) is used. Findings – The numerical analyses...
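
    As a rough sketch of the thresholding step described above (the image, threshold value, and descriptor set are hypothetical; the paper's actual feature extraction is richer), a damaged region in an NDT intensity map can be binarized and reduced to geometric inputs for a parametric FE model:

      import numpy as np

      def damage_geometry(image, threshold):
          """Binarize an NDT intensity map and return simple geometric
          descriptors (area and bounding box) of the damaged region."""
          mask = image > threshold
          rows, cols = np.nonzero(mask)
          if rows.size == 0:
              return {"area": 0, "bbox": None}
          return {"area": int(mask.sum()),
                  "bbox": (int(rows.min()), int(cols.min()),
                           int(rows.max()), int(cols.max()))}

      # Hypothetical C-scan: a bright elliptical flaw on a quiet background.
      y, x = np.mgrid[:64, :64]
      scan = np.exp(-(((x - 30) / 8.0) ** 2 + ((y - 20) / 4.0) ** 2))
      print(damage_geometry(scan, threshold=0.5))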

  18. Dynamic greedy algorithms for the Edwards-Anderson model

    Science.gov (United States)

    Schnabel, Stefan; Janke, Wolfhard

    2017-11-01

    To provide a novel tool for the investigation of the energy landscape of the Edwards-Anderson spin-glass model we introduce an algorithm that allows an efficient execution of a greedy optimization based on data from a previously performed optimization for a similar configuration. As an application we show how the technique can be used to perform higher-order greedy optimizations and simulated annealing searches with improved performance.
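
    The paper's contribution, seeding a greedy optimization with data from a previous run on a similar configuration, goes beyond a short sketch, but the baseline it accelerates is compact. Below is a minimal greedy descent for a 2D Edwards-Anderson model with Gaussian couplings (lattice size, couplings, and function names are our own choices), flipping at each step the spin whose flip lowers the energy most:

      import numpy as np

      rng = np.random.default_rng(0)
      L = 8
      J_right = rng.normal(size=(L, L))   # coupling to the right neighbour
      J_down = rng.normal(size=(L, L))    # coupling to the neighbour below
      spins = rng.choice([-1, 1], size=(L, L))

      def local_field(s, i, j):
          """Sum of J * s over the four neighbours (periodic boundaries)."""
          return (J_right[i, j] * s[i, (j + 1) % L]
                  + J_right[i, (j - 1) % L] * s[i, (j - 1) % L]
                  + J_down[i, j] * s[(i + 1) % L, j]
                  + J_down[(i - 1) % L, j] * s[(i - 1) % L, j])

      def greedy(s):
          """Flip the spin with the largest energy gain until no single
          flip improves the configuration (a local energy minimum)."""
          while True:
              dE = np.array([[2 * s[i, j] * local_field(s, i, j)
                              for j in range(L)] for i in range(L)])
              i, j = np.unravel_index(np.argmin(dE), dE.shape)
              if dE[i, j] >= 0:
                  return s
              s[i, j] *= -1

      minimum = greedy(spins.copy())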

  19. Managing and learning with multiple models: Objectives and optimization algorithms

    Science.gov (United States)

    Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.

    2011-01-01

    The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.

  20. Energy demand forecasting in Iranian metal industry using linear and nonlinear models based on evolutionary algorithms

    International Nuclear Information System (INIS)

    Piltan, Mehdi; Shiri, Hiva; Ghaderi, S.F.

    2012-01-01

    Highlights: ► Investigating different fitness functions for evolutionary algorithms in energy forecasting. ► Energy forecasting of the Iranian metal industry by value added, energy prices, investment and employees. ► Using a real-coded instead of a binary-coded genetic algorithm decreases energy forecasting error. - Abstract: Developing energy-forecasting models is known as one of the most important steps in long-term planning. In order to achieve sustainable energy supply toward economic development and social welfare, a precise forecasting model is required. The application of artificial intelligence models for estimating complex economic and social functions has grown considerably in recent research. In this paper, energy consumption in the industrial sector, one of the critical energy-consuming sectors, is investigated. Two linear and three nonlinear functions have been used in order to forecast and analyze energy in the Iranian metal industry; Particle Swarm Optimization (PSO) and Genetic Algorithms (GAs) are applied to obtain the parameters of the models. The Real-Coded Genetic Algorithm (RCGA) has been developed based on real numbers, which is introduced as a new approach in the field of energy forecasting. In the proposed model, electricity consumption has been considered as a function of different variables such as electricity tariff, manufacturing value added, prevailing fuel prices, the number of employees, the investment in equipment and consumption in the previous years. Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Deviation (MAD) and Mean Absolute Percent Error (MAPE) are the four functions which have been used as the fitness function in the evolutionary algorithms. The results show that the logarithmic nonlinear model using the PSO algorithm, with a 1.91% error, gives the best answer. Furthermore, the prediction of electricity consumption in the industrial sector of Turkey and also the Turkish industrial sector
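
    As a minimal sketch of the real-coded GA idea (the data, model form, and operator choices below are our own; the paper fits richer linear and nonlinear consumption functions), real-valued chromosomes encode the model coefficients directly and MSE serves as the fitness function:

      import numpy as np

      rng = np.random.default_rng(1)
      # Hypothetical data: consumption as a function of three drivers.
      X = rng.random((40, 3))
      y = X @ np.array([2.0, -1.0, 0.5]) + 0.3 + 0.05 * rng.standard_normal(40)

      def mse(theta):
          """Fitness: mean square error of a linear consumption model."""
          return np.mean((y - (X @ theta[:3] + theta[3])) ** 2)

      pop = rng.uniform(-3, 3, size=(60, 4))  # real-coded chromosomes
      for _ in range(200):
          fitness = np.array([mse(ind) for ind in pop])
          parents = pop[np.argsort(fitness)[:30]]       # truncation selection
          a = parents[rng.integers(0, 30, 60)]
          b = parents[rng.integers(0, 30, 60)]
          w = rng.random((60, 1))
          pop = w * a + (1 - w) * b                     # arithmetic crossover
          pop += 0.05 * rng.standard_normal(pop.shape)  # Gaussian mutation
          pop[0] = parents[0]                           # elitism
      best = min(pop, key=mse)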

  1. Computational Modeling of Teaching and Learning through Application of Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Richard Lamb

    2015-09-01

    Full Text Available Within the mind, there are a myriad of ideas that make sense within the bounds of everyday experience but are not reflective of how the world actually exists; this is particularly true in the domain of science. Classroom learning with teacher explanation is a bridge through which these naive understandings can be brought in line with scientific reality. The purpose of this paper is to examine how the application of a Multiobjective Evolutionary Algorithm (MOEA) can work in concert with an existing computational model to effectively model critical thinking in the science classroom. An evolutionary algorithm is an algorithm that iteratively optimizes machine learning based computational models. The research question is: does the application of an evolutionary algorithm provide a means to optimize the Student Task and Cognition Model (STAC-M), and does the optimized model sufficiently represent and predict teaching and learning outcomes in the science classroom? Within this computational study, the authors outline and simulate the effect of teaching on the ability of a “virtual” student to solve a Piagetian task. Using the Student Task and Cognition Model (STAC-M), a computational model of student cognitive processing in science class developed in 2013, the authors complete a computational experiment which examines the role of cognitive retraining on student learning. Comparison of the STAC-M and the STAC-M with inclusion of the Multiobjective Evolutionary Algorithm shows greater success in solving the Piagetian science tasks post cognitive retraining with the Multiobjective Evolutionary Algorithm. This illustrates the potential uses of cognitive and neuropsychological computational modeling in educational research. The authors also outline the limitations and assumptions of computational modeling.

  2. Application of artificial neural networks and genetic algorithms for crude fractional distillation process modeling

    OpenAIRE

    Pater, Lukasz

    2016-01-01

    This work presents the application of artificial neural networks, trained and structurally optimized by genetic algorithms, to the modeling of the crude distillation process at the PKN ORLEN S.A. refinery. Models for the main fractionator distillation column products were developed using historical data. The quality of the fractions was predicted based on several chosen process variables. The performance of the model was validated using test data. Neural networks were used in combination with genetic algorith...

  3. New Flexible Models and Design Construction Algorithms for Mixtures and Binary Dependent Variables

    OpenAIRE

    Ruseckaite, Aiste

    2017-01-01

    This thesis discusses new mixture(-amount) models, choice models and the optimal design of experiments. Two chapters of the thesis relate to the so-called mixture, which is a product or service whose ingredients’ proportions sum to one. The thesis begins by introducing mixture models in the choice context and develops new optimal design construction algorithms for choice experiments involving mixtures. Building further, varying the total amount of a mixture, and not only its i...

  4. LMI-Based Generation of Feedback Laws for a Robust Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, Behcet; Carson, John M., III

    2007-01-01

    This technical note provides a mathematical proof of Corollary 1 from the paper 'A Nonlinear Model Predictive Control Algorithm with Proven Robustness and Resolvability' that appeared in the 2006 Proceedings of the American Control Conference. The proof was omitted for brevity in that publication. The paper was based on algorithms developed for the FY2005 R&TD (Research and Technology Development) project for Small-body Guidance, Navigation, and Control [2]. The framework established by the Corollary is for a robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. Additional details of the framework are available in the publication.

  5. Motion Model Employment using interacting Motion Model Algorithm

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar

    2006-01-01

    model being correct is computed through a likelihood function for each model. The study presented a simple technique to introduce additional models into the system using deterministic acceleration, which basically defines the dynamics of the system. Therefore, based on this value more motion models can
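
    Although the abstract is truncated, the likelihood-based step it describes is standard in interacting multiple model tracking and fits in a few lines; the figures below are invented for illustration:

      import numpy as np

      def update_model_probabilities(mu, likelihoods):
          """Bayesian update of the probability of each motion model being
          correct, given each model filter's measurement likelihood."""
          posterior = mu * likelihoods
          return posterior / posterior.sum()

      mu = np.array([0.7, 0.3])             # prior: cruise vs. manoeuvre model
      likelihoods = np.array([0.02, 0.11])  # current measurement likelihoods
      print(update_model_probabilities(mu, likelihoods))  # manoeuvre now favoured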

  6. Earthquake forecast models for Italy based on the RI algorithm

    Directory of Open Access Journals (Sweden)

    Kazuyoshi Z. Nanjo

    2010-11-01

    Full Text Available This study provides an overview of relative-intensity (RI)-based earthquake forecast models that have been submitted for the 5-year and 10-year testing classes and the 3-month class of the Italian experiment within the Collaboratory for the Study of Earthquake Predictability (CSEP). The RI algorithm starts as a binary forecast system based on the working assumption that future large earthquakes are considered likely to occur at sites of higher seismic activity in the past. The measure of RI is simply the count of the number of past earthquakes, which is known as the RI of seismicity. To improve the RI forecast performance, we first expand the RI algorithm to become part of a general class of smoothed seismicity models. We then convert the RI representation from a binary system into a testable CSEP model that forecasts the numbers of earthquakes for the predefined magnitudes. Our parameter tuning for the CSEP models is based on the past seismicity. The final submission is a set of two numerical data files that were created by tuned 5-year and 10-year models and an executable computer code of a tuned 3-month model, to examine which testing class is more meaningful in terms of the RI hypothesis. The main purpose of our participation is to better understand the importance (or lack of importance) of the RI of seismicity for earthquake forecastability.

  7. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search of the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. A possible utilization of this information amounts to creating and updating an archive with the set of best solutions found at each generation and then analyzing the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which influence the model outputs only little. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output.
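
    The observation above suggests a simple diagnostic, sketched below under our own naming and with synthetic data: track the spread of each decision variable across the archive of best solutions, generation by generation; parameters whose spread collapses early are the ones the objective function is most sensitive to.

      import numpy as np

      def archive_spread(archive_history):
          """Per-generation standard deviation of each decision variable
          across the archive of best solutions found by the GA."""
          return np.array([np.std(generation, axis=0)
                           for generation in archive_history])

      # Synthetic history: parameter 0 stabilizes, parameter 1 does not.
      rng = np.random.default_rng(2)
      history = [rng.normal([1.0, 5.0], [0.5 / (g + 1), 0.5], size=(10, 2))
                 for g in range(5)]
      print(archive_spread(history))  # column 0 shrinks with the generation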

  8. A Multiple Model Prediction Algorithm for CNC Machine Wear PHM

    Directory of Open Access Journals (Sweden)

    Huimin Chen

    2011-01-01

    Full Text Available The 2010 PHM data challenge focuses on the remaining useful life (RUL) estimation for cutters of a high speed CNC milling machine using measurements from dynamometer, accelerometer, and acoustic emission sensors. We present a multiple model approach for wear depth estimation of milling machine cutters using the provided data. The feature selection, initial wear estimation and multiple model fusion components of the proposed algorithm are explained in detail and compared with several alternative methods using the training data. The final submission ranked #2 among professional and student participants and the method is applicable to other data-driven PHM problems.
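
    The abstract does not spell out the fusion rule, so the sketch below assumes a common, simple choice (weighting each model's wear-depth prediction by the inverse of its training error), purely to illustrate the multiple model idea:

      import numpy as np

      def fuse(predictions, errors):
          """Combine wear-depth predictions from several models, weighting
          each model by the inverse of its error on the training cutters."""
          weights = 1.0 / np.asarray(errors, dtype=float)
          weights /= weights.sum()
          return weights @ np.asarray(predictions, dtype=float)

      # Hypothetical: three models predicting wear depth (um) for one cut.
      print(fuse(predictions=[152.0, 160.0, 148.0], errors=[4.0, 9.0, 6.0]))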

  9. Linguistically motivated statistical machine translation models and algorithms

    CERN Document Server

    Xiong, Deyi

    2015-01-01

    This book provides a wide variety of algorithms and models to integrate linguistic knowledge into Statistical Machine Translation (SMT). It helps advance conventional SMT to linguistically motivated SMT by enhancing the following three essential components: translation, reordering and bracketing models. It also serves the purpose of promoting the in-depth study of the impacts of linguistic knowledge on machine translation. Finally it provides a systematic introduction of Bracketing Transduction Grammar (BTG) based SMT, one of the state-of-the-art SMT formalisms, as well as a case study of linguistically motivated SMT on a BTG-based platform.

  10. How effective and efficient are multiobjective evolutionary algorithms at hydrologic model calibration?

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2006-01-01

    Full Text Available This study provides a comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performances: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated surface-subsurface model application in the Shale Hills watershed in Pennsylvania. One challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 attained competitive to superior results for most of the problems tested in this study. The primary strengths of the SPEA2 algorithm lie in its search reliability and its diversity preservation operator. The biggest challenge in maximizing the performance of SPEA2 lies in specifying an effective archive size without a priori knowledge of the Pareto set. In practice, this would require significant trial-and-error analysis, which is problematic for more complex, computationally intensive calibration applications. ε-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration. ε-NSGAII's primary strength lies in its ease-of-use due to its dynamic population sizing and archiving which lead to rapid convergence to very high quality solutions with minimal user input. MOSCEM-UA is best suited for hydrologic model calibration applications that have small

  11. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Aarle, Wim van, E-mail: wim.vanaarle@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, Willem Jan, E-mail: willemjan.palenstijn@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); De Beenhouwer, Jan, E-mail: jan.debeenhouwer@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Altantzis, Thomas, E-mail: thomas.altantzis@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Bals, Sara, E-mail: sara.bals@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Batenburg, K. Joost, E-mail: joost.batenburg@cwi.nl [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); Mathematical Institute, Leiden University, P.O. Box 9512, NL-2300 RA Leiden (Netherlands); Sijbers, Jan, E-mail: jan.sijbers@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-10-15

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series.
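
    The sketch below is not the ASTRA API; it is a plain NumPy rendering of one classic building block such a toolbox exposes, the SIRT iteration x ← x + C Aᵀ R (b - A x), where R and C are the inverse row and column sums of the system matrix (the tiny system standing in for a projection geometry is invented):

      import numpy as np

      def sirt(A, b, n_iter=100):
          """Simultaneous Iterative Reconstruction Technique on a dense
          nonnegative system matrix A with measured projections b."""
          R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)  # inverse row sums
          C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)  # inverse column sums
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x += C * (A.T @ (R * (b - A @ x)))
          return x

      A = np.array([[1.0, 1.0, 0.0],
                    [0.0, 1.0, 1.0],
                    [1.0, 0.0, 1.0]])
      x_true = np.array([1.0, 2.0, 3.0])
      print(sirt(A, A @ x_true))  # converges towards x_true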

  12. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    International Nuclear Information System (INIS)

    Aarle, Wim van; Palenstijn, Willem Jan; De Beenhouwer, Jan; Altantzis, Thomas; Bals, Sara; Batenburg, K. Joost; Sijbers, Jan

    2015-01-01

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series

  13. A formally verified algorithm for interactive consistency under a hybrid fault model

    Science.gov (United States)

    Lincoln, Patrick; Rushby, John

    1993-01-01

    Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguished three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands $a$ asymmetric, $s$ symmetric, and $b$ benign faults simultaneously, using $m+1$ rounds, provided $n > 2a + 2s + b + m$ and $m \geq a$. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.

  14. IIR Filter Modeling Using an Algorithm Inspired on Electromagnetism

    Directory of Open Access Journals (Sweden)

    Cuevas-Jiménez E.

    2013-01-01

    Full Text Available Infinite-impulse-response (IIR) filtering provides a powerful approach for solving a variety of problems. However, its design represents a very complicated task: since the error surface of IIR filters is generally multimodal, global optimization techniques are required in order to avoid local minima. In this paper, a new method based on the Electromagnetism-Like Optimization Algorithm (EMO) is proposed for IIR filter modeling. EMO originates from the electromagnetism theory of physics by assuming potential solutions to be electrically charged particles which spread around the solution space. The charge of each particle depends on its objective function value. This algorithm employs a collective attraction-repulsion mechanism to move the particles towards optimality. The experimental results confirm the high performance of the proposed method in solving various benchmark identification problems.
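
    A minimal sketch of the attraction-repulsion mechanism follows, with a stand-in quadratic objective in place of an IIR error surface and simplified charge and step rules (all of which are our own assumptions):

      import numpy as np

      rng = np.random.default_rng(3)

      def objective(x):  # stand-in for the multimodal IIR error surface
          return np.sum(x ** 2, axis=-1)

      pop = rng.uniform(-5, 5, size=(20, 2))
      for _ in range(100):
          f = objective(pop)
          # Better particles (lower error) carry larger charge.
          q = np.exp(-pop.shape[1] * (f - f.min())
                     / max(f.sum() - f.size * f.min(), 1e-12))
          force = np.zeros_like(pop)
          for i in range(len(pop)):
              for j in range(len(pop)):
                  if i == j:
                      continue
                  d = pop[j] - pop[i]
                  r2 = d @ d + 1e-12
                  sign = 1.0 if f[j] < f[i] else -1.0  # attract or repel
                  force[i] += sign * q[i] * q[j] * d / r2
          step = force / (np.linalg.norm(force, axis=1, keepdims=True) + 1e-12)
          step[np.argmin(f)] = 0.0          # the best particle stays put
          pop += 0.1 * rng.random((len(pop), 1)) * step
      best = pop[np.argmin(objective(pop))]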

  15. mMWeb--an online platform for employing multiple ecological niche modeling algorithms.

    Science.gov (United States)

    Qiao, Huijie; Lin, Congtian; Ji, Liqiang; Jiang, Zhigang

    2012-01-01

    Predicting the ecological niche and potential habitat distribution of a given organism is one of the central domains of ecological and biogeographical research. A wide variety of modeling techniques have been developed for this purpose. In order to implement these models, the users must prepare a specific runtime environment for each model, learn how to use multiple model platforms, and prepare data in a different format each time. Additionally, model results are often difficult to interpret, and a standardized method for comparing model results across platforms does not exist. We developed a free and open source online platform, the multi-models web-based (mMWeb) platform, to address each of these problems, providing a novel environment in which the user can implement and compare multiple ecological niche model (ENM) algorithms. mMWeb combines 18 existing ENMs and their corresponding algorithms and provides a uniform procedure for modeling the potential habitat niche of a species via a common web browser. mMWeb uses the Java Native Interface (JNI) and the Java R Interface to combine the different ENMs, and executes multiple tasks in parallel on a supercomputer. The cross-platform, user-friendly interface of mMWeb simplifies the process of building ENMs, providing an accessible and efficient environment from which to explore and compare different model algorithms.

  16. mMWeb--an online platform for employing multiple ecological niche modeling algorithms.

    Directory of Open Access Journals (Sweden)

    Huijie Qiao

    Full Text Available BACKGROUND: Predicting the ecological niche and potential habitat distribution of a given organism is one of the central domains of ecological and biogeographical research. A wide variety of modeling techniques have been developed for this purpose. In order to implement these models, the users must prepare a specific runtime environment for each model, learn how to use multiple model platforms, and prepare data in a different format each time. Additionally, model results are often difficult to interpret, and a standardized method for comparing model results across platforms does not exist. We developed a free and open source online platform, the multi-models web-based (mMWeb) platform, to address each of these problems, providing a novel environment in which the user can implement and compare multiple ecological niche model (ENM) algorithms. METHODOLOGY: mMWeb combines 18 existing ENMs and their corresponding algorithms and provides a uniform procedure for modeling the potential habitat niche of a species via a common web browser. mMWeb uses the Java Native Interface (JNI) and the Java R Interface to combine the different ENMs, and executes multiple tasks in parallel on a supercomputer. The cross-platform, user-friendly interface of mMWeb simplifies the process of building ENMs, providing an accessible and efficient environment from which to explore and compare different model algorithms.

  17. Dynamic gradient descent learning algorithms for enhanced empirical modeling of power plants

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, Amir; Chong, K.T.

    1991-01-01

    A newly developed dynamic gradient descent-based learning algorithm is used to train a recurrent multilayer perceptron network for use in empirical modeling of power plants. The two main advantages of the proposed learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation, instead of one forward and one backward pass of the backpropagation algorithm. The latter advantage results in computational time saving because both passes can be performed simultaneously. The dynamic learning algorithm is used to train a hybrid feedforward/feedback neural network, a recurrent multilayer perceptron, which was previously found to exhibit good interpolation and extrapolation capabilities in modeling nonlinear dynamic systems. One of the drawbacks, however, of the previously reported work has been the long training times associated with accurate empirical models. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm are demonstrated by a case study of a steam power plant. The number of iterations required for accurate empirical modeling has been reduced from tens of thousands to hundreds, thus significantly expediting the learning process

  18. Model order reduction using eigen algorithm | Singh | International ...

    African Journals Online (AJOL)

    -scale dynamic systems, where the denominator polynomial is determined through the Eigen algorithm and the numerator polynomial via the factor division algorithm. In the Eigen algorithm, the most dominant Eigen value of both original and reduced order ...

  19. The development of gamma energy identify algorithm for compact radiation sensors using stepwise refinement technique

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hyun Jun [Div. of Radiation Regulation, Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Kim, Ye Won; Kim, Hyun Duk; Cho, Gyu Seong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Yi, Yun [Dept. of of Electronics and Information Engineering, Korea University, Seoul (Korea, Republic of)

    2017-06-15

    A gamma energy identifying algorithm using spectral decomposition combined with a smoothing method was suggested to confirm the existence of artificial radioisotopes. The algorithm is composed of an original pattern recognition method and a smoothing method that enhance the ability of radiation sensors with low energy resolution to identify gamma energies. The gamma energy identifying algorithm for the compact radiation sensor is a three-step refinement process. Firstly, the magnitude set is calculated by the original spectral decomposition. Secondly, the magnitude of the modeling error in the magnitude set is reduced by the smoothing method. Thirdly, the expected gamma energy is finally decided based on the enhanced magnitude set resulting from the spectral decomposition with the smoothing method. The algorithm was optimized for the designed radiation sensor composed of a CsI (Tl) scintillator and a silicon PIN diode. The two performance parameters used to evaluate the algorithm are the accuracy of the expected gamma energy and the number of repeated calculations. The original gamma energy was accurately identified for a single gamma energy by adopting this modeling error reduction method. The average error also decreased by half for multiple gamma energies in comparison to the original spectral decomposition. In addition, the number of repeated calculations also decreased by half, even in low fluence conditions under 10⁴ (per 0.09 cm² of the scintillator surface). Through the development of this algorithm, we have confirmed the possibility of developing a product that can identify nearby artificial radionuclides using inexpensive radiation sensors that are easy for the public to use. Therefore, it can contribute to reducing public anxiety about radiation exposure by determining the presence of artificial radionuclides in the vicinity.

  20. Model and code development

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    Progress in model and code development for reactor physics calculations is summarized. The codes included CINDER-10, PHROG, RAFFLE GAPP, DCFMR, RELAP/4, PARET, and KENO. Kinetics models for the PBF were developed.

  1. Modelling Paleoearthquake Slip Distributions using a Genetic Algorithm

    Science.gov (United States)

    Lindsay, Anthony; Simão, Nuno; McCloskey, John; Nalbant, Suleyman; Murphy, Shane; Bhloscaidh, Mairead Nic

    2013-04-01

    Along the Sunda trench, the annual growth rings of coral microatolls store long-term records of tectonic deformation. Spread over large areas of an active megathrust fault, they offer the possibility of high resolution reconstructions of slip for a number of paleo-earthquakes. These data are complex, with spatial and temporal variations in uncertainty. Rather than assuming that any one model will uniquely fit the data, Monte Carlo Slip Estimation (MCSE) modelling produces a catalogue of possible models for each event. From each earthquake's catalogue, a model is selected and a possible history of slip along the fault reconstructed. By generating multiple histories, then finding the average slip during each earthquake, a probabilistic history of slip along the fault can be generated and areas that may have a large slip deficit identified. However, the MCSE technique requires the production of many hundreds of billions of models to yield the few models that fit the observed coral data. In an attempt to accelerate this process, we have designed a Genetic Algorithm (GA). The GA uses evolutionary operators to recombine the information held by a population of possible slip models to produce a set of new models, based on how well they reproduce a set of coral deformation data. Repeated iterations of the algorithm produce populations of improved models, each generation better satisfying the coral data. Preliminary results have shown the GA to be capable of recovering synthetically generated slip distributions, based on their displacements of sets of corals, faster than the MCSE technique. The results of the systematic testing of the GA technique and its performance using both synthetic and observed coral displacement data will be presented.

  2. Parameter Estimation for Traffic Noise Models Using a Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Deok-Soon An

    2013-01-01

    Full Text Available A technique has been developed for predicting road traffic noise for environmental assessment, taking into account traffic volume as well as road surface conditions. The ASJ model (ASJ Prediction Model for Road Traffic Noise, 1999), which is based on the sound power level of the noise emitted by the interaction between the road surface and tires, employs regression models for two road surface types: dense-graded asphalt (DGA) and permeable asphalt (PA). However, these models are not applicable to other types of road surfaces. Accordingly, this paper introduces a parameter estimation procedure for ASJ-based noise prediction models, utilizing a harmony search (HS) algorithm. Traffic noise measurement data for four different vehicle types were used in the algorithm to determine the regression parameters for several road surface types. The parameters of the traffic noise prediction models were evaluated using another measurement set, and good agreement was observed between the predicted and measured sound power levels.
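
    The harmony search step itself is compact; the sketch below uses a stand-in quadratic misfit in place of the ASJ regression fitness, together with standard HS parameter names (HMS, HMCR, PAR), all of which are our own illustrative choices:

      import numpy as np

      rng = np.random.default_rng(4)

      def misfit(theta):
          """Stand-in for the error between predicted and measured sound
          power levels under candidate regression parameters theta."""
          return np.sum((theta - np.array([2.0, -0.5])) ** 2)

      HMS, HMCR, PAR, bw = 10, 0.9, 0.3, 0.1
      memory = rng.uniform(-5, 5, size=(HMS, 2))   # harmony memory
      for _ in range(2000):
          new = np.empty(2)
          for k in range(2):
              if rng.random() < HMCR:              # draw from memory ...
                  new[k] = memory[rng.integers(HMS), k]
                  if rng.random() < PAR:           # ... with pitch adjustment
                      new[k] += bw * rng.uniform(-1, 1)
              else:                                # or improvise randomly
                  new[k] = rng.uniform(-5, 5)
          worst = max(range(HMS), key=lambda m: misfit(memory[m]))
          if misfit(new) < misfit(memory[worst]):
              memory[worst] = new                  # replace the worst harmony
      best = min(memory, key=misfit)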

  3. QAP collaborates in development of the sick child algorithm.

    Science.gov (United States)

    1994-01-01

    Algorithms which specify procedures for the proper diagnosis and treatment of common diseases have been available to primary health care services in less developed countries for the past decade. Whereas each algorithm has usually been limited to a single ailment, children often present with the need for more comprehensive assessment and treatment. Treating just one illness in these children leads to incomplete treatment or missed opportunities for preventive services. To address this problem, the World Health Organization has recently developed a Sick Child Algorithm (SCA) for children aged 2 months-5 years. In addition to specifying case management procedures for acute respiratory illness, diarrhea/dehydration, fever, otitis, and malnutrition, the SCA prompts a check of the child's immunization status. The specificity and sensitivity of this SCA were field-tested in Kenya and the Gambia. In Kenya, the Malaria Branch of the US Centers for Disease Control and Prevention tested the SCA under typical conditions in Siaya District. The Quality Assurance Project of the Center for Human Services carried out a parallel facility-based systems analysis at the request of the Malaria Branch. The assessment, which took place in September-October 1993, took the form of observations of provider/patient interactions, provider interviews, and verification of supplies and equipment in 19 rural health facilities to determine how current practices compare to actions prescribed by the SCA. This will reveal the type and amount of technical support needed to achieve conformity to the SCA's clinical practice recommendations. The data will allow officials to devise the proper training programs and will predict quality improvements likely to be achieved through adoption of the SCA in terms of effective case treatment and fewer missed immunization opportunities. Preliminary analysis indicates that primary health care delivery in Siaya deviates in several significant respects from performance

  4. A novel computer algorithm for modeling and treating mandibular fractures: A pilot study.

    Science.gov (United States)

    Rizzi, Christopher J; Ortlip, Timothy; Greywoode, Jewel D; Vakharia, Kavita T; Vakharia, Kalpesh T

    2017-02-01

    To describe a novel computer algorithm that can model mandibular fracture repair. To evaluate the algorithm as a tool to model mandibular fracture reduction and hardware selection. Retrospective pilot study combined with cross-sectional survey. A computer algorithm utilizing Aquarius Net (TeraRecon, Inc, Foster City, CA) and Adobe Photoshop CS6 (Adobe Systems, Inc, San Jose, CA) was developed to model mandibular fracture repair. Ten different fracture patterns were selected from nine patients who had already undergone mandibular fracture repair. The preoperative computed tomography (CT) images were processed with the computer algorithm to create virtual images that matched the actual postoperative three-dimensional CT images. A survey comparing the true postoperative image with the virtual postoperative images was created and administered to otolaryngology resident and attending physicians. They were asked to rate on a scale from 0 to 10 (0 = completely different; 10 = identical) the similarity between the two images in terms of the fracture reduction and fixation hardware. Ten mandible fracture cases were analyzed and processed. There were 15 survey respondents. The mean score for overall similarity between the images was 8.41 ± 0.91; the mean score for similarity of fracture reduction was 8.61 ± 0.98; and the mean score for hardware appearance was 8.27 ± 0.97. There were no significant differences between attending and resident responses. There were no significant differences based on fracture location. This computer algorithm can accurately model mandibular fracture repair. Images created by the algorithm are highly similar to true postoperative images. The algorithm can potentially assist a surgeon planning mandibular fracture repair. 4. Laryngoscope, 2016 127:331-336, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  5. Extraction of battery parameters of the equivalent circuit model using a multi-objective genetic algorithm

    Science.gov (United States)

    Brand, Jonathan; Zhang, Zheming; Agarwal, Ramesh K.

    2014-02-01

    A simple but reasonably accurate battery model is required for simulating the performance of electrical systems that employ a battery, for example an electric vehicle, as well as for investigating their potential as an energy storage device. In this paper, a relatively simple equivalent circuit based model is employed for modeling the performance of a battery. A computer code utilizing a multi-objective genetic algorithm is developed for the purpose of extracting the battery performance parameters. The code is applied to several existing industrial batteries as well as to two recently proposed high performance batteries which are currently in the early research and development stage. The results demonstrate that with the optimally extracted performance parameters, the equivalent circuit based battery model can accurately predict the performance of various batteries of different sizes, capacities, and materials. Several test cases demonstrate that the multi-objective genetic algorithm can serve as a robust and reliable tool for extracting the battery performance parameters.
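
    A sketch of the underlying idea follows, with a first-order Thevenin circuit and a crude random search standing in for the paper's multi-objective GA (all parameter values and names are invented for illustration):

      import numpy as np

      def simulate(params, current, dt=1.0, ocv=3.7):
          """Terminal voltage of a Thevenin equivalent circuit: series
          resistance R0 plus one RC branch (R1, C1)."""
          r0, r1, c1 = params
          v1, volts = 0.0, []
          for i in current:
              v1 += dt * (i / c1 - v1 / (r1 * c1))  # RC branch state
              volts.append(ocv - i * r0 - v1)
          return np.array(volts)

      rng = np.random.default_rng(5)
      current = np.r_[np.ones(30), np.zeros(30)]   # discharge pulse, then rest
      measured = simulate((0.05, 0.03, 800.0), current)

      best, best_err = None, np.inf                # random-search "extraction"
      for _ in range(5000):
          cand = rng.uniform([0.01, 0.01, 100.0], [0.1, 0.1, 2000.0])
          err = np.mean((simulate(cand, current) - measured) ** 2)
          if err < best_err:
              best, best_err = cand, err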

  6. Developing a corpus to verify the performance of a tone labelling algorithm

    CSIR Research Space (South Africa)

    Raborife, M

    2011-11-01

    Full Text Available The authors report on a study that involved the development of a corpus used to verify the performance of two tone labelling algorithms, with one algorithm being an improvement on the other. These algorithms were developed for speech synthesis...

  7. Calibration of parameters of water supply network model using genetic algorithm

    Science.gov (United States)

    Boczar, Tomasz; Adamikiewicz, Norbert; Stanisławski, Włodzimierz

    2017-10-01

    Computer simulation models of water supply networks are commonly applied in the water industry. As part of the research work whose results are presented in this paper, OFF-LINE and ON-LINE calibration of water supply network model parameters was carried out and compared using two methods. The network skeleton was developed in the Epanet software. Two types of dependent variables were used for optimization: the pressure at the nodes and the volume flow in the network sections. The first calibration method applies a genetic algorithm available as a built-in plugin, "Epanet Calibrator". The second method uses the function ga, which is implemented in the MATLAB toolbox Genetic Algorithm and Direct Search. The applicability of these algorithms to optimizing the parameters of the created water supply network model was examined for both OFF-LINE and ON-LINE calibration. An analysis of the effectiveness of the considered algorithms for different values of configuration parameters was performed. Based on the achieved results, it was concluded that the ga algorithm gives a higher correlation of the calibrated values with the empirical data.

  8. SPHERES as Formation Flight Algorithm Development and Validation Testbed: Current Progress and Beyond

    Science.gov (United States)

    Kong, Edmund M.; Saenz-Otero, Alvar; Nolet, Simon; Berkovitz, Dustin S.; Miller, David W.; Sell, Steve W.

    2004-01-01

    The MIT-SSL SPHERES testbed provides a facility for the development of algorithms necessary for the success of Distributed Satellite Systems (DSS). The initial development contemplated formation flight and docking control algorithms; SPHERES now supports the study of metrology, control, autonomy, artificial intelligence, and communications algorithms and their effects on DSS projects. To support this wide range of topics, the SPHERES design contemplated the need to support multiple researchers, as reflected in both the hardware and software designs. The SPHERES operational plan further facilitates the development of algorithms by multiple researchers, while the operational locations incrementally increase the ability of the tests to operate in a representative environment. In this paper, an overview of the SPHERES testbed is first presented. The SPHERES testbed serves as a model of the design philosophies that allow for the various research efforts being carried out on such a facility. The implementation of these philosophies is further highlighted in the three different programs that are currently scheduled for testing onboard the International Space Station (ISS) and three that are proposed for a re-flight mission: Mass Property Identification, Autonomous Rendezvous and Docking, and TPF Multiple Spacecraft Formation Flight in the first flight, and Precision Optical Pointing, Tethered Formation Flight and Mars Orbit Sample Retrieval for the re-flight mission.

  9. Development and testing of incident detection algorithms. Vol. 2, research methodology and detailed results.

    Science.gov (United States)

    1976-04-01

    The development and testing of incident detection algorithms was based on Los Angeles and Minneapolis freeway surveillance data. Algorithms considered were based on time series and pattern recognition techniques. Attention was given to the effects o...

  10. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    Energy Technology Data Exchange (ETDEWEB)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun [Gwangju (Korea, Republic of)

    2013-04-15

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative position between the camera and the robot is unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and the N-R methods are compared experimentally by making the robot perform a slender bar placement task.

  11. Final Report for DOE Grant DE-FG02-03ER25579; Development of High-Order Accurate Interface Tracking Algorithms and Improved Constitutive Models for Problems in Continuum Mechanics with Applications to Jetting

    Energy Technology Data Exchange (ETDEWEB)

    Puckett, Elbridge Gerry [U.C. Davis, Department of Mathematics; Miller, Gregory Hale [.C. Davis, Department of Chemical Engineering

    2012-10-14

    published by Dr. Phillip Colella, the head of ANAG, and some of his colleagues. Chris Algieri is now employed as a staff member in Dr. Bill Collins' Climate Science Department in the Earth Sciences Division at LBNL working with computational models of climate change. Finally, it should be noted that the work conducted by Professor Puckett and his students Sarah Williams and Chris Algieri and described in this final report for DOE grant # DE-FC02-03ER25579 is closely related to work performed by Professor Puckett and his students under the auspices of Professor Puckett's DOE SciDAC grant DE-FC02-01ER25473 An Algorithmic and Software Framework for Applied Partial Differential Equations: A DOE SciDAC Integrated Software Infrastructure Center (ISIC). Dr. Colella was the lead PI for this SciDAC grant, which comprised several research groups from DOE national laboratories and five university PIs from five different universities. In theory, Professor Puckett tried to use funds from the SciDAC grant to support work directly involved in implementing algorithms developed by members of his research group at UCD as software that might be of use to Puckett's SciDAC CoPIs. (For example, see the work reported in Section 2.2.2 of this final report.) However, since there is considerable lead time spent developing such algorithms before they are ready to become 'software' and research plans and goals change as the research progresses, Professor Puckett supported each member of his research group partially with funds from the SciDAC APDEC ISIC DE-FC02-01ER25473 and partially with funds from this DOE MICS grant DE-FC02-03ER25579. This has necessarily resulted in a significant overlap of project areas that were funded by both grants. In particular, both Sarah Williams and Chris Algieri were supported partially with funds from grant # DE-FG02-03ER25579, for which this is the final report, and in part with funds from Professor Puckett's DOE SciDAC grant # DE

  12. Modeling the Swift Bat Trigger Algorithm with Machine Learning

    Science.gov (United States)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2016-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift Burst Alert Telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of $\geq 97\%$ ($\leq 3\%$ error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift $z$, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of $n_0 \approx 0.48^{+0.41}_{-0.23}\,\mathrm{Gpc^{-3}\,yr^{-1}}$ with power-law indices of $n_1 \approx 1.7^{+0.6}_{-0.5}$ and $n_2 \approx -5.9^{+5.7}_{-0.1}$ for GRBs above and below a break point of $z_1 \approx 6.8^{+2.8}_{-3.2}$. This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.
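
    Reproducing the trigger study is out of scope here, but the model class is standard; a minimal scikit-learn sketch with invented stand-in features (the real inputs come from the simulated GRB sample of Lien et al.) looks like this:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(6)
      # Stand-in features (e.g. flux, duration, photon index) and trigger flag.
      X = rng.random((5000, 3))
      y = (X[:, 0] + 0.2 * X[:, 1]
           + 0.1 * rng.standard_normal(5000) > 0.6).astype(int)

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      clf.fit(X[:4000], y[:4000])
      print(clf.score(X[4000:], y[4000:]))  # held-out accuracy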

  13. Heterogeneous Agents Model with the Worst Out Algorithm

    Czech Academy of Sciences Publication Activity Database

    Vošvrda, Miloslav; Vácha, Lukáš

    I, č. 1 (2007), s. 54-66 ISSN 1802-4696 R&D Projects: GA MŠk(CZ) LC06075; GA ČR(CZ) GA402/06/0990 Grant - others:GA UK(CZ) 454/2004/A-EK/FSV Institutional research plan: CEZ:AV0Z10750506 Keywords : Efficient Markets Hypothesis * Fractal Market Hypothesis * agents' investment horizons * agents' trading strategies * technical trading rules * heterogeneous agent model with stochastic memory * Worst out Algorithm Subject RIV: AH - Economics

  14. An Overview of the Automated Dispatch Controller Algorithms in the System Advisor Model (SAM)

    Energy Technology Data Exchange (ETDEWEB)

    DiOrio, Nicholas A [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-11-22

    Three automatic dispatch modes have been added to the battery model within the System Advisor Model. These controllers have been developed to perform peak shaving in an automated fashion, providing users with a way to see the benefit of reduced demand charges without manually programming a complicated dispatch control. A flexible input option allows more advanced interaction with the automated controller. This document describes the algorithms in detail and presents brief results on their use and limitations.
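
    SAM's controllers also handle charging, efficiencies, and utility rate structures; as a bare-bones sketch of the peak-shaving idea only (all limits and the load profile are invented), a greedy dispatch discharges the battery whenever load exceeds a demand target:

      import numpy as np

      def peak_shave(load, target, energy, power):
          """Greedy peak shaving: discharge whenever load exceeds the
          demand target, within power and remaining-energy limits.
          (Recharging is omitted for brevity.)"""
          grid = []
          for l in load:
              discharge = min(max(l - target, 0.0), power, energy)
              energy -= discharge
              grid.append(l - discharge)
          return np.array(grid)

      load = np.array([40, 55, 90, 120, 95, 60], dtype=float)  # kW, hourly
      print(peak_shave(load, target=80.0, energy=60.0, power=30.0))
      # -> [40. 55. 80. 90. 80. 60.]: the 120 kW peak is only partly
      #    shaved because the 30 kW power limit binds.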

  15. Simulation Modeling of Intelligent Control Algorithms for Constructing Autonomous Power Supply Systems with Improved Energy Efficiency

    Directory of Open Access Journals (Sweden)

    Gimazov Ruslan

    2018-01-01

    Full Text Available The paper considers the issue of powering autonomous robots with solar batteries. The low efficiency of modern solar batteries is a critical issue for the whole renewable energy industry. The urgency of solving the problem of improving the energy efficiency of solar batteries supplying a robotic system is linked to the task of maximizing autonomous operation time. Several methods to improve the energy efficiency of solar batteries exist; the use of an MPPT charge controller is one of these methods. MPPT technology allows increasing the power generated by the solar battery by 15–30%. The most common MPPT algorithm is the perturbation and observation algorithm. This algorithm has several disadvantages, such as power fluctuation and the fixed time of the maximum power point tracking. These problems can be solved by using a sufficiently accurate predictive and adaptive algorithm. In order to improve the efficiency of solar batteries, an autonomous power supply system was developed, which includes an intelligent MPPT charge controller with a fuzzy logic-based perturbation and observation algorithm. To study the implementation of the fuzzy logic apparatus in the MPPT algorithm, we developed a simulation model of the system in the Matlab/Simulink environment, including the solar battery, MPPT controller, accumulator and load. Results of the simulation modeling established that the use of MPPT technology increased energy production by 23%; the introduction of the fuzzy logic algorithm into the MPPT controller greatly increased the speed of the maximum power point tracking and neutralized the voltage fluctuations, which in turn reduced the power underproduction by 2%.
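
    The paper's controller adapts the perturbation via fuzzy rules; the fixed-step baseline it improves on is the classic perturb-and-observe loop, sketched here with an invented PV current curve:

      def perturb_and_observe(current_of, v0=20.0, dv=0.5, steps=50):
          """Fixed-step P&O MPPT: nudge the operating voltage and keep
          moving in the direction that increased the extracted power."""
          v = v0
          p_prev = current_of(v) * v
          direction = 1.0
          for _ in range(steps):
              v += direction * dv
              p = current_of(v) * v
              if p < p_prev:          # power dropped: reverse perturbation
                  direction = -direction
              p_prev = p
          return v

      pv_current = lambda v: max(5.0 - 0.1 * v, 0.0)  # toy I-V curve
      print(perturb_and_observe(pv_current))  # oscillates near the MPP at 25 V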

  16. [Development of an algorithm to predict the incidence of major depression among primary care consultants].

    Science.gov (United States)

    Saldivia, Sandra; Vicente, Benjamin; Marston, Louise; Melipillán, Roberto; Nazareth, Irwin; Bellón-Saameño, Juan; Xavier, Miguel; Maaroos, Heidi Ingrid; Svab, Igor; Geerlings, M-I; King, Michael

    2014-03-01

    The reduction of major depression incidence is a public health challenge. To develop an algorithm to estimate the risk of occurrence of major depression in patients attending primary health centers (PHC). Prospective cohort study of a random sample of 2832 patients attending PHC centers in Concepción, Chile, with evaluations at baseline, six and twelve months. Thirty-nine known risk factors for depression were measured to build a model using logistic regression. The algorithm was developed in 2,133 patients not depressed at baseline and compared with risk algorithms developed in a sample of 5,216 European primary care attenders. The main outcome was the incidence of major depression in the follow-up period. The cumulative incidence of depression during the 12 months of follow-up in Chile was 12%. Eight variables were identified. Four corresponded to the patient (gender, age, depression background and educational level) and four to the patient's current situation (physical and mental health, satisfaction with their situation at home and satisfaction with the relationship with their partner). The C-Index, used to assess the discriminating power of the final model, was 0.746 (95% confidence interval (CI) = 0.707-0.785), slightly lower than the equations obtained in European (0.790; 95% CI = 0.767-0.813) and Spanish attenders (0.82; 95% CI = 0.79-0.84). Four of the factors identified in the risk algorithm are not modifiable. The other two factors are directly associated with the primary support network (family and partner). This risk algorithm for the incidence of major depression provides a tool that can guide efforts towards the design, implementation and evaluation of the effectiveness of interventions to prevent major depression.

  17. Meta Modelling of Submerged-Arc Welding Design based on Fuzzy Algorithm

    Science.gov (United States)

    Song, Chang-Yong; Park, Jonghwan; Goh, Dugab; Park, Woo-Chang; Lee, Chang-Ha; Kim, Mun Yong; Kang, Jinseo

    2017-12-01

    A fuzzy algorithm based meta-model is proposed for approximating submerged-arc weld design factors such as weld speed and weld output. An orthogonal array design based on the submerged-arc weld numerical analysis is applied in the proposed approach. A nonlinear finite element analysis is carried out to simulate the submerged-arc weld using thermo-mechanical and temperature-dependent material properties for general mild steel. The proposed fuzzy meta-model is generated with triangular membership functions and fuzzy if-then rules using training data obtained from the Taguchi orthogonal array design. The aim of the proposed approach is to develop a fuzzy meta-model that effectively approximates the optimized submerged-arc weld factors. To validate the meta-model, the results obtained from the fuzzy meta-model are compared to the best cases from the Taguchi orthogonal array.
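
    A minimal sketch of the triangular-membership machinery follows (breakpoints, rule consequents, and the zero-order output form are all invented; the paper learns its rules from the orthogonal-array data):

      import numpy as np

      def tri(x, a, b, c):
          """Triangular membership function with feet a, c and peak b."""
          return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      x = 0.42  # normalized weld speed
      mu = np.array([tri(x, -0.5, 0.0, 0.5),   # "low"
                     tri(x, 0.0, 0.5, 1.0),    # "medium"
                     tri(x, 0.5, 1.0, 1.5)])   # "high"
      consequents = np.array([180.0, 240.0, 310.0])  # rule outputs (invented)
      weld_output = mu @ consequents / mu.sum()      # rule-weighted average
      print(weld_output)  # ~230.4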

  18. A new parallelization algorithm of ocean model with explicit scheme

    Science.gov (United States)

    Fu, X. D.

    2017-08-01

    This paper focuses on the parallelization of an ocean model with an explicit scheme, one of the most commonly used schemes in the discretization of the governing equations of ocean models. The characteristic of the explicit scheme is that the calculation is simple, and that the value at a given grid point depends only on the grid points at the previous time step, which means that one does not need to solve sparse linear equations when solving the governing equations of the ocean model. Exploiting these characteristics, this paper designs a parallel algorithm, named halo cells update, which requires only tiny modifications of the original ocean model and little change to its space and time steps, and which parallelizes the ocean model through a transmission module between sub-domains. The paper takes the Global Reduced Gravity Ocean model (GRGO) as an example to implement the parallelization with halo update. The results demonstrate that higher speedups can be achieved at different problem sizes.
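    The halo cells update idea can be illustrated without any message-passing machinery: pad each sub-domain with one halo cell per boundary, copy neighbour edge values into the halos before every explicit step, and update only interior cells. The sketch below is a serial stand-in under those assumptions; in an MPI implementation each copy becomes a send/receive pair, and the 1D diffusion stencil merely stands in for the ocean model's explicit update.

```python
import numpy as np

def exchange_halos(subdomains):
    """Update one-cell halos between neighbouring 1-D sub-domains.

    Serial stand-in for the transmission module the abstract describes.
    Each sub-domain is an array with one halo cell at each end.
    """
    for left, right in zip(subdomains[:-1], subdomains[1:]):
        right[0] = left[-2]   # left neighbour's last interior cell
        left[-1] = right[1]   # right neighbour's first interior cell

def explicit_step(u, c=0.2):
    """One explicit diffusion update on the interior cells only."""
    u[1:-1] += c * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# Split a global field into two sub-domains whose end cells overlap,
# so each overlap cell serves as the neighbour's halo.
field = np.linspace(0.0, 1.0, 22)
parts = [field[:12].copy(), field[10:].copy()]
for _ in range(100):
    exchange_halos(parts)       # communication phase
    for part in parts:
        explicit_step(part)     # independent computation phase
```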

  19. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers. This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  20. A Centerline Based Model Morphing Algorithm for Patient-Specific Finite Element Modelling of the Left Ventricle.

    Science.gov (United States)

    Behdadfar, S; Navarro, L; Sundnes, J; Maleckar, M; Ross, S; Odland, H H; Avril, S

    2017-09-20

    Hexahedral automatic model generation is a recurrent problem in computer vision and computational biomechanics. It may even become a challenging problem when one wants to develop a patient-specific finite-element (FE) model of the left ventricle (LV), particularly when only low resolution images are available. In the present study, a fast and efficient algorithm is presented and tested to address such a situation. A template FE hexahedral model was created for an LV geometry using a General Electric (GE) ultrasound (US) system. A system of centerlines was considered for this LV mesh. The nodes located over the endocardial and epicardial surfaces were then projected from these centerlines onto the actual endocardial and epicardial surfaces reconstructed from the patient's US data. Finally, the position of the internal nodes was derived by finding the deformations with minimal elastic energy. This approach was applied to eight patients suffering from congestive heart disease. An FE analysis was performed on each of them to derive the stress induced in the LV tissue by diastolic blood pressure. Our model morphing algorithm was applied successfully, and the obtained meshes showed only marginal mismatches when compared to the corresponding US geometries. The diastolic FE analyses were successfully performed in seven patients to derive the distribution of principal stresses. The original model morphing algorithm is fast and robust with low computational cost. This low cost model morphing algorithm may be highly beneficial for future patient-specific reduced-order modelling of the LV, with potential application to other crucial organs.

  1. A MATLAB GUI based algorithm for modelling Magnetotelluric data

    Science.gov (United States)

    Timur, Emre; Onsen, Funda

    2016-04-01

    The magnetotelluric method is an electromagnetic survey technique that images the electrical resistivity distribution of subsurface layers. It simultaneously measures the total electromagnetic field components, namely the time-varying magnetic field B(t) and the induced electric field E(t). Forward modeling of the magnetotelluric method is beneficial for survey planning, for comprehending the method (especially for students), and as part of the iteration process in inverting measured data. The MTINV program can be used to model and to interpret geophysical electromagnetic (EM) magnetotelluric (MT) measurements using a horizontally layered earth model. This program uses either the apparent resistivity and phase components of the MT data together, or the apparent resistivity data alone. Parameter optimization, based on a linearized inversion method, can be utilized in 1D interpretations. In this study, a new MATLAB GUI based algorithm has been written for the 1D forward modeling of the magnetotelluric response function for multiple layers, for use in educational studies. The code also includes an automatic Gaussian noise option for a desired ratio value. Numerous applications were carried out and presented for 2, 3 and 4 layer models, and the obtained theoretical data were interpreted using MTINV in order to evaluate the initial parameters and the effect of noise. Keywords: Education, Forward Modelling, Inverse Modelling, Magnetotelluric
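    For orientation, the layered-earth response such a program evaluates can be written compactly with the standard impedance recursion. The sketch below is a generic 1D magnetotelluric forward model in Python, offered as a reference implementation of that recursion rather than the MATLAB GUI code described in the record.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space (H/m)

def mt1d_forward(resistivities, thicknesses, frequencies):
    """Apparent resistivity and phase of a horizontally layered earth.

    resistivities -- layer resistivities in ohm-m, top to bottom
                     (the last layer is the half-space)
    thicknesses   -- layer thicknesses in m (one fewer than layers)
    frequencies   -- sounding frequencies in Hz
    """
    rho_a, phase = [], []
    for f in frequencies:
        omega = 2.0 * np.pi * f
        # Intrinsic impedance of the bottom half-space.
        Z = np.sqrt(1j * omega * MU0 * resistivities[-1])
        # Recurse upward through the layers above the half-space.
        for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
            k = np.sqrt(1j * omega * MU0 / rho)    # propagation constant
            z_i = np.sqrt(1j * omega * MU0 * rho)  # intrinsic impedance
            t = np.tanh(k * h)
            Z = z_i * (Z + z_i * t) / (z_i + Z * t)
        rho_a.append(abs(Z) ** 2 / (omega * MU0))
        phase.append(np.degrees(np.angle(Z)))
    return np.array(rho_a), np.array(phase)
```

    A uniform half-space is a quick sanity check: the apparent resistivity equals the true resistivity at all frequencies and the phase is 45 degrees.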

  2. Epidemic Modelling by Ripple-Spreading Network and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Jian-Qin Liao

    2013-01-01

    Full Text Available Mathematical analysis and modelling is central to infectious disease epidemiology. This paper, inspired by the natural ripple-spreading phenomenon, proposes a novel ripple-spreading network model for the study of infectious disease transmission. The new epidemic model naturally has good potential for capturing many spatial and temporal features observed in the outbreak of plagues. In particular, a stochastic ripple-spreading process simulates well the effect of random contacts and movements of individuals on the probability of infection, which is usually a challenging issue in epidemic modeling. Some ripple-spreading related parameters, such as the threshold and amplifying factor of nodes, are ideal for describing the importance of individuals' physical fitness and immunity. The new model is rich in parameters to incorporate many real factors such as public health services and policies, and it is highly flexible to modifications. A genetic algorithm is used to tune the parameters of the model by referring to historic data of an epidemic. The well-tuned model can then be used for analysis and forecasting purposes. The effectiveness of the proposed method is illustrated by simulation results.

  3. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software architectures adopted are also described, with particular attention to their relevance to ITER. (authors)

  4. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Barana, O.; Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software architectures adopted are also described, with particular attention to their relevance to ITER. (authors)

  5. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software architectures adopted are also described, with particular attention to their relevance to ITER. (authors)

  6. Near infrared spectrometric technique for testing fruit quality: optimisation of regression models using genetic algorithms

    Science.gov (United States)

    Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.

    2016-02-01

    Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools for the non-destructive quality testing of foodstuffs, from measurement to data analysis and interpretation. NIR spectral data are interpreted through means often involving multivariate statistical analysis, sometimes associated with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GA) can be used to enhance model development for predicting fruit quality. Apple fruits were used, and NIR spectra in the range from 12000 to 4000 cm⁻¹ were acquired on both bruised and healthy tissues, with different degrees of mechanical damage. GAs were used in combination with partial least squares (PLS) regression methods to develop bruise severity prediction models, which were compared to PLS models developed using the full NIR spectrum. A classification model was developed, which clearly separated bruised from unbruised apple tissue. GAs helped improve prediction models by over 10% in comparison with full spectrum-based models, as evaluated in terms of error of prediction (root mean square error of cross-validation). PLS models to predict internal quality, such as sugar content and acidity, were developed and compared to versions optimized by the genetic algorithm. Overall, the results highlighted the potential of the GA method to improve the speed and accuracy of fruit quality prediction.
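    The GA-plus-PLS combination described above amounts to searching over wavelength subsets with cross-validated PLS error as the fitness. The following is a schematic version wrapped around scikit-learn's PLSRegression; the population size, mutation rate, component count and synthetic data are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated RMSE of a PLS model on the selected wavelengths."""
    if mask.sum() < 4:
        return np.inf  # too few wavelengths for the PLS components
    scores = cross_val_score(PLSRegression(n_components=3),
                             X[:, mask], y, cv=5,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

def ga_select(X, y, pop=30, gens=20, p_mut=0.02):
    """Very small GA over wavelength-inclusion bit masks."""
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.3           # random bit masks
    for _ in range(gens):
        fit = np.array([fitness(m, X, y) for m in population])
        parents = population[np.argsort(fit)[:pop // 2]]  # truncation selection
        pairs = rng.integers(0, len(parents), (pop, 2))
        cross = rng.random((pop, n)) < 0.5                # uniform crossover
        population = np.where(cross, parents[pairs[:, 0]], parents[pairs[:, 1]])
        population ^= rng.random((pop, n)) < p_mut        # bit-flip mutation
    fit = np.array([fitness(m, X, y) for m in population])
    return population[np.argmin(fit)]

# Synthetic demonstration data: 200 spectra, 50 wavelengths.
X = rng.normal(size=(200, 50))
y = X[:, 10] - 0.5 * X[:, 25] + rng.normal(scale=0.1, size=200)
best_mask = ga_select(X, y)
```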

  7. Development of a Grapevine Pruning Algorithm for Using in Pruning

    Directory of Open Access Journals (Sweden)

    S. M Hosseini

    2017-10-01

    Full Text Available Introduction Great areas of the orchards in the world are dedicated to the cultivation of the grapevine. Normally grape vineyards are pruned twice a year. Among the operations of grape production, winter pruning of the bushes is the only operation that still has not been fully mechanized, while it is known as one of the most laborious jobs on the farm. Some of the grape producing countries use various mechanical machines to prune the grapevines, but in most cases these machines do not have a good performance. Therefore, an intelligent pruning machine seems necessary in this regard; such a machine can reduce the labor required to prune the vineyards. In this study, an attempt was made to develop an algorithm that uses image processing techniques to identify which parts of the grapevine should be cut. The stereo vision technique was used to obtain three dimensional images of the bare bushes whose leaves had fallen in autumn. Stereo vision systems determine depth from two images taken at the same time but from slightly different viewpoints using two cameras. Each pair of images of a common scene is related by an epipolar geometry, and corresponding points in the image pairs are constrained to lie on pairs of conjugate epipolar lines. Materials and Methods Photos were taken in gardens of the Research Center for Agriculture and Natural Resources of Fars province, Iran. At first, the distance between the plants and the cameras should be determined; this distance can be obtained using stereo vision techniques. Therefore, this method was used in this paper, with two pictures taken of each plant with the left and right cameras. The algorithm was written in MATLAB. To facilitate the segmentation of the branches from the rows at the back, a blue plate with dimensions of 2×2 m² was used as the background. After invoking the images, branches were segmented from the background to produce the binary
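    For a rectified stereo pair, the depth recovery the abstract relies on reduces to triangulation from disparity. The snippet below illustrates that relation generically; the camera parameters are placeholder values, not the calibration of the study's rig.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point from its disparity in a rectified stereo pair.

    disparity_px -- horizontal shift of the point between the left and
                    right images (pixels); must be positive
    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two camera centres (m)
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Placeholder parameters for illustration only.
z = depth_from_disparity(disparity_px=35.0, focal_px=1400.0, baseline_m=0.12)
print(f"estimated depth: {z:.2f} m")   # 1400 * 0.12 / 35 = 4.80 m
```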

  8. Developing algorithm for the critical care physician scheduling

    Science.gov (United States)

    Lee, Hyojun; Pah, Adam; Amaral, Luis; Northwestern Memorial Hospital Collaboration

    Understanding the social network has enabled us to quantitatively study social phenomena such as adoption behaviors and the propagation of information. However, most work has focused on networks of large heterogeneous communities, and little attention has been paid to how work-relevant information spreads within networks of small, homogeneous groups of highly trained individuals, such as physicians. Among such professionals, behavior patterns and the transmission of job-relevant information depend not only on the social network between the employees but also on the schedules and the teams that work together. In order to systematically investigate the dependence of the spread of ideas and the adoption of innovations on a work-environment network, we sought to construct a model of the interaction network of critical care physicians at Northwestern Memorial Hospital (NMH) based on their work schedules. We inferred patterns and hidden rules, such as turnover rates, from past work schedules. Using the characteristics of the physicians' work schedules and their turnover rates, we were able to create multi-year synthetic work schedules for a generic intensive care unit. The algorithm for creating shift schedules can be applied to other schedule-dependent networks.

  9. Using the fuzzy modeling for the retrieval algorithms

    International Nuclear Information System (INIS)

    Mohamed, A.H

    2010-01-01

    A rapid growth in the number and size of images in databases and on the world wide web (www) has created a strong need for more efficient search and retrieval systems to exploit the benefits of this large amount of information. However, the collection of this information is now based on image technology, and the limitations of current image analysis techniques necessitate that most image retrieval systems use some form of text description provided by the users as the basis to index and retrieve images. To overcome this problem, the proposed system introduces the use of fuzzy modeling to describe the image through linguistic ambiguities. Also, the proposed system can include vague or fuzzy terms in modeling the queries to match the image descriptions in the retrieval process. This can facilitate the indexing and retrieving process, increase its performance and decrease its computational time. Therefore, the proposed system can improve the performance of traditional image retrieval algorithms.

  10. Integer programming model for optimizing bus timetable using genetic algorithm

    Science.gov (United States)

    Wihartiko, F. D.; Buono, A.; Silalahi, B. P.

    2017-01-01

    A bus timetable gives passengers the information needed to ensure the availability of bus services. The timetable is optimal when bus trip frequency can adapt to passenger demand. At peak times, the number of bus trips should be larger than at off-peak times. If bus trips are more frequent than the optimal condition, the bus operator incurs high operating costs. Conversely, if the number of trips is lower than the optimal condition, service quality for passengers suffers. In this paper, the bus timetabling problem is solved by an integer programming model with a modified genetic algorithm. Modifications were placed in the chromosome design, the initial population recovery technique, chromosome reconstruction, and chromosome extermination at specific generations. The result of this model gave the optimal solution with an accuracy of 99.1%.

  11. Computational Analysis of 3D Ising Model Using Metropolis Algorithms

    International Nuclear Information System (INIS)

    Sonsin, A F; Cortes, M R; Nunes, D R; Gomes, J V; Costa, R S

    2015-01-01

    We simulate the Ising model with the Monte Carlo method and use the Metropolis algorithm to update the distribution of spins. We find that, in the specific case of the three-dimensional Ising model, the Metropolis method is efficient. Studying the system near the point of phase transition, we observe that the magnetization goes to zero. In our simulations we analyzed the behavior of the magnetization and the magnetic susceptibility to verify the phase transition from a paramagnetic to a ferromagnetic material. The behavior of the magnetization and of the magnetic susceptibility as a function of the temperature suggests a phase transition around kT/J ≈ 4.5, and the finite size of the lattice was evidenced as a problem, motivating work with larger lattices. (paper)
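    A single-spin-flip Metropolis sweep for the 3D Ising model is compact enough to show in full; the lattice size, sweep count and temperature below are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_sweep(spins, beta, J=1.0):
    """One Metropolis sweep over a 3D Ising lattice of +1/-1 spins.

    Single-spin-flip updates with periodic boundaries; beta = 1/kT.
    """
    L = spins.shape[0]
    for _ in range(spins.size):
        x, y, z = rng.integers(0, L, 3)
        # Sum of the six nearest neighbours (negative indices wrap).
        nb = (spins[(x + 1) % L, y, z] + spins[x - 1, y, z]
              + spins[x, (y + 1) % L, z] + spins[x, y - 1, z]
              + spins[x, y, (z + 1) % L] + spins[x, y, z - 1])
        dE = 2.0 * J * spins[x, y, z] * nb  # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[x, y, z] *= -1

L = 8
spins = rng.choice([-1, 1], size=(L, L, L))
for sweep in range(200):
    metropolis_sweep(spins, beta=1.0 / 4.5)  # near the reported kT/J = 4.5
print("magnetization per spin:", spins.mean())
```

    Averaging the magnetization and its fluctuations over many such sweeps, at a series of temperatures, yields the magnetization and susceptibility curves the abstract discusses.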

  12. Path generation algorithm for UML graphic modeling of aerospace test software

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Aerospace software testing engineers have traditionally relied on their own work experience and on communication with software developers to complete the description of the software under test and to write test cases manually, which is time-consuming, inefficient and prone to loopholes. Using the high-reliability MBT tools developed by our company, one-time modeling can automatically generate test case documents, which is efficient and accurate. Accurately describing a process with a UML model requires expressing the paths that can be reached. Existing path generation algorithms are either too simple, unable to combine branch paths and loops into a single path, or too cumbersome, generating overly complicated arrangements of paths that are meaningless and superfluous for aerospace software testing. Drawing on our years of aerospace experience, we have tailor-developed a path generation algorithm for UML graphic modeling of aerospace software.

  13. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    Science.gov (United States)

    Gordon, Howard R.

    1997-01-01

    Significant accomplishments made during the present reporting period are as follows: (1) We developed a new method for identifying the presence of absorbing aerosols and, simultaneously, performing atmospheric correction. The algorithm consists of optimizing the match between the top-of-atmosphere radiance spectrum and the result of models of both the ocean and aerosol optical properties; (2) We developed an algorithm for providing an accurate computation of the diffuse transmittance of the atmosphere given an aerosol model. A module for inclusion into the MODIS atmospheric-correction algorithm was completed; (3) We acquired reflectance data for oceanic whitecaps during a cruise on the RV Ka'imimoana in the Tropical Pacific (Manzanillo, Mexico to Honolulu, Hawaii). The reflectance spectrum of whitecaps was found to be similar to that for breaking waves in the surf zone measured by Frouin, Schwindling and Deschamps, however, the drop in augmented reflectance from 670 to 860 nm was not as great, and the magnitude of the augmented reflectance was significantly less than expected; and (4) We developed a method for the approximate correction for the effects of the MODIS polarization sensitivity. The correction, however, requires adequate characterization of the polarization sensitivity of MODIS prior to launch.

  14. Forecasting of the development of professional medical equipment engineering based on neuro-fuzzy algorithms

    Science.gov (United States)

    Vaganova, E. V.; Syryamkin, M. V.

    2015-11-01

    The purpose of the research is the development of evolutionary algorithms for the assessment of promising scientific directions. The present study pays particular attention to evaluating the foresight possibilities for identifying technological peaks and emerging technologies in professional medical equipment engineering in Russia and worldwide, on the basis of intellectual property items and neural network modeling. An automated information system has been developed, consisting of modules implementing various classification methods to improve forecast accuracy, together with an algorithm for constructing a neuro-fuzzy decision tree. According to the study results, modern trends in this field will focus on personalized smart devices, telemedicine, bio-monitoring, and «e-Health» and «m-Health» technologies.

  15. A DIFFERENTIAL EVOLUTION ALGORITHM DEVELOPED FOR A NURSE SCHEDULING PROBLEM

    Directory of Open Access Journals (Sweden)

    Shahnazari-Shahrezaei, P.

    2012-11-01

    Full Text Available Nurse scheduling is a type of manpower allocation problem that tries to satisfy hospital managers' objectives and nurses' preferences as much as possible by generating fair shift schedules. This paper presents a nurse scheduling problem based on a real case study, and proposes two meta-heuristics, a differential evolution algorithm (DE) and a greedy randomised adaptive search procedure (GRASP), to solve it. To investigate the efficiency of the proposed algorithms, two problems are solved. Furthermore, some comparison metrics are applied to examine the reliability of the proposed algorithms. The computational results in this paper show that the proposed DE outperforms the GRASP.
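    As context for the comparison above, a canonical DE/rand/1/bin iteration looks as follows in Python. This is the generic continuous-domain scheme, not the paper's discrete nurse-scheduling encoding; the control parameters F and CR are conventional defaults.

```python
import numpy as np

rng = np.random.default_rng(2)

def differential_evolution(f, bounds, pop=20, gens=100, F=0.8, CR=0.9):
    """Minimise f with the classic DE/rand/1/bin scheme.

    bounds -- list of (low, high) pairs, one per dimension
    F, CR  -- differential weight and crossover rate
    """
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    X = lo + rng.random((pop, dim)) * (hi - lo)
    fX = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # Three distinct donors, none of which is individual i.
            a, b, c = X[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with one guaranteed mutant coordinate.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, X[i])
            f_trial = f(trial)
            if f_trial <= fX[i]:          # greedy one-to-one selection
                X[i], fX[i] = trial, f_trial
    return X[np.argmin(fX)], fX.min()

best, val = differential_evolution(lambda x: np.sum(x ** 2),
                                   bounds=[(-5, 5)] * 4)
```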

  16. IR Algorithm Development for Fire and Forget Projectiles.

    Science.gov (United States)

    1982-06-18

    its in-house computer. These algorithms are then run against stored IR images of actual foreign tanks to determine the capabilities and limitations of... 2. ALGORITHMS USED FOR ARMORED TARGET DETECTION: An algorithm is a set of logical rules or mathematical instructions used... (three adjacent pixels of the array sampled three times while the array rotates) an M2 value is computed. An M2, or variance, map of the scene is thus

  17. Integral equation models for image restoration: high accuracy methods and fast algorithms

    International Nuclear Information System (INIS)

    Lu, Yao; Shen, Lixin; Xu, Yuesheng

    2010-01-01

    Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models, and hence, inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms having high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images

  18. New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration

    Science.gov (United States)

    Keshavarz, Kasra; Alizadeh, Hossein

    2017-04-01

    Water resource models are powerful tools to support the water management decision making process and are developed to deal with a broad range of issues, including land use and climate change impact analysis, water allocation, systems design and operation, waste load control and allocation, etc. These models are divided into the two categories of simulation and optimization models, whose calibration has been addressed in the literature, where substantial efforts in recent decades have led to two main categories of auto-calibration methods: uncertainty-based algorithms, such as GLUE, MCMC and PEST, and optimization-based algorithms, including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms which benefit from the capabilities of both types, such as SUFI-2, have been developed, this paper proposes a new auto-calibration algorithm which is capable of both finding optimal parameter values with respect to multiple objectives, like optimization-based algorithms, and providing interval estimations of parameters, like uncertainty-based algorithms. The algorithm is actually developed to improve the quality of SUFI-2 results. Based on a single objective, e.g. NSE or RMSE, SUFI-2 proposes a routine to find the best point and interval estimation of parameters and the corresponding prediction intervals (95PPU) of time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures: the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty; the analyst then has to select the point and interval estimation of parameters which are actually non-dominated with respect to both uncertainty measures. Based on the described properties of SUFI-2, two important questions are raised, answering which is our research motivation: Given that in SUFI-2, final selection is based on the two measures or objectives and on the other
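    The two goodness-of-calibration measures named above are simple to compute once the 95PPU band is available. A minimal sketch, assuming the band is supplied as lower/upper bound arrays aligned with the observations:

```python
import numpy as np

def p_factor(obs, lower, upper):
    """Fraction of observations falling inside the 95PPU band."""
    obs = np.asarray(obs)
    return np.mean((obs >= lower) & (obs <= upper))

def r_factor(obs, lower, upper):
    """Average 95PPU band width relative to the observations' std."""
    return np.mean(np.asarray(upper) - np.asarray(lower)) / np.std(obs)
```

    A well-calibrated model has a p-factor near 1 and a small r-factor; since widening the band raises both, the two measures conflict, which is exactly the non-dominance trade-off the abstract describes.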

  19. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    Science.gov (United States)

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
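    In the spirit of that restricted model, one round of a one-bit protocol in a complete network can be simulated directly: every active node broadcasts with some probability, hears only "silence" or "signal" from the others, and drops out if it stayed silent while someone else broadcast. The sketch below illustrates that communication model under stated assumptions; it is not a reproduction of the paper's exact algorithms.

```python
import random

def elect_leader(n, p=None, seed=0):
    """Simulate one-bit leader election in an anonymous complete network.

    Each round, every active node broadcasts a one-bit signal with
    probability p; a node that stayed silent while hearing at least one
    signal becomes passive. Rounds repeat until one node remains.
    """
    rng = random.Random(seed)
    p = p if p is not None else 1.0 / n
    active = set(range(n))
    rounds = 0
    while len(active) > 1:
        rounds += 1
        senders = {v for v in active if rng.random() < p}
        if senders:
            # Silent nodes heard a signal and drop out; senders survive.
            active = senders
        # An all-silent round changes nothing and is simply repeated.
    return active.pop(), rounds

leader, rounds = elect_leader(64)
print(f"leader elected after {rounds} rounds")
```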

  20. A simple algorithm to estimate genetic variance in an animal threshold model using Bayesian inference

    Directory of Open Access Journals (Sweden)

    Heringstad Bjørg

    2010-07-01

    Full Text Available Abstract Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative" or "non-informative" with respect to genetic (co)variance components. The "non-informative" individuals are characterized by their Mendelian sampling deviations (deviations from the mid-parent mean) being completely confounded with a single residual on the underlying liability scale. For threshold models, residual variance on the underlying scale is not identifiable. Hence, the variance of fully confounded Mendelian sampling deviations cannot be identified either, but can be inferred from the between-family variation. In the new algorithm, breeding values are sampled as in a standard animal model using the full relationship matrix, but genetic (co)variance components are inferred from the sampled breeding values and the relationships between "informative" individuals (usually parents) only. The latter is analogous to a sire-dam model (in cases with no individual records on the parents). Results When applied to simulated data sets, the standard animal threshold model failed to produce useful results, since samples of genetic variance always drifted towards infinity, while the new algorithm produced proper parameter estimates essentially identical to the results from a sire-dam model (given the fact that no individual records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to

  1. Developed adaptive neuro-fuzzy algorithm to control air conditioning ...

    African Journals Online (AJOL)

    The paper developed an adaptive neuro-fuzzy controller, an artificial intelligence technique, for air conditioning systems at different pressures. A first-order Sugeno fuzzy inference system was implemented and utilized for modeling and controller design. In addition, the estimation of the heat transfer rate and water mass flow rate ...

  2. Developing an Algorithm to Consider Mutliple Demand Response Objectives

    Directory of Open Access Journals (Sweden)

    D. Behrens

    2018-02-01

    Full Text Available Due to technological improvement and a changing environment, energy grids face various challenges, which, for example, deal with integrating new appliances such as electric vehicles and photovoltaics. Managing such grids has become increasingly important for research and practice, since, for example, grid reliability and cost benefits are endangered. Demand response (DR) is one possibility to contribute to this crucial task, in particular by shifting and managing energy loads. Realizing DR can thereby address multiple objectives (such as cost savings, peak load reduction and flattening the load profile) to obtain various goals. However, current research lacks algorithms that sufficiently address multiple DR objectives. This paper aims to design a multi-objective DR optimization algorithm and to propose a solution strategy. We therefore first investigate the research field and existing solutions, and then design an algorithm suitable for taking multiple objectives into account. The algorithm has a predictable runtime and guarantees termination.

  3. Development and analysis of a three phase cloudlet allocation algorithm

    Directory of Open Access Journals (Sweden)

    Sudip Roy

    2017-10-01

    Full Text Available Cloud computing is one of the most popular and pragmatic topics of research nowadays. The allocation of cloudlet(s) to suitable VM(s) is one of the most challenging areas of research in the domain of cloud computing. This paper presents a new cloudlet allocation algorithm which improves the performance of a cloud service provider (CSP) in comparison with the other existing cloudlet allocation algorithms. The proposed Range-wise Busy-checking 2-way Balanced (RB2B) cloudlet allocation algorithm optimizes a few basic parameters associated with the performance analysis. An extensive simulation is done to evaluate the proposed algorithm using CloudSim to attest its efficacy in comparison to the other existing allocation policies.

  4. A self-organizing algorithm for modeling protein loops.

    Directory of Open Access Journals (Sweden)

    Pu Liu

    2009-08-01

    Full Text Available Protein loops, the flexible short segments connecting two stable secondary structural units in proteins, play a critical role in protein structure and function. Constructing chemically sensible conformations of protein loops that seamlessly bridge the gap between the anchor points without introducing any steric collisions remains an open challenge. A variety of algorithms have been developed to tackle the loop closure problem, ranging from inverse kinematics to knowledge-based approaches that utilize pre-existing fragments extracted from known protein structures. However, many of these approaches focus on the generation of conformations that mainly satisfy the fixed end point condition, leaving the steric constraints to be resolved in subsequent post-processing steps. In the present work, we describe a simple solution that simultaneously satisfies not only the end point and steric conditions, but also chirality and planarity constraints. Starting from random initial atomic coordinates, each individual conformation is generated independently by using a simple alternating scheme of pairwise distance adjustments of randomly chosen atoms, followed by fast geometric matching of the conformationally rigid components of the constituent amino acids. The method is conceptually simple, numerically stable and computationally efficient. Very importantly, additional constraints, such as those derived from NMR experiments, hydrogen bonds or salt bridges, can be incorporated into the algorithm in a straightforward and inexpensive way, making the method ideal for solving more complex multi-loop problems. The remarkable performance and robustness of the algorithm are demonstrated on a set of protein loops of length 4, 8, and 12 that have been used in previous studies.

  5. Dataflow-Driven Crowdsourcing: Relational Models and Algorithms

    Directory of Open Access Journals (Sweden)

    D. A. Ustalov

    2016-01-01

    Full Text Available Recently, microtask crowdsourcing has become a popular approach for addressing various data mining problems. Crowdsourcing workflows for approaching such problems are composed of several data processing stages which require consistent representation for making the work reproducible. This paper is devoted to the problem of reproducibility and formalization of the microtask crowdsourcing process. A computational model for microtask crowdsourcing based on an extended relational model and a dataflow computational model has been proposed. The proposed collaborative dataflow computational model is designed for processing the input data sources by executing annotation stages and automatic synchronization stages simultaneously. Data processing stages and connections between them are expressed by using collaborative computation workflows represented as loosely connected directed acyclic graphs. A synchronous algorithm for executing such workflows has been described. The computational model has been evaluated by applying it to two tasks from the computational linguistics field: concept lexicalization refining in electronic thesauri and establishing hierarchical relations between such concepts. The “Add–Remove–Confirm” procedure is designed for adding the missing lexemes to the concepts while removing the odd ones. The “Genus–Species–Match” procedure is designed for establishing “is-a” relations between the concepts provided with the corresponding word pairs. The experiments involving both volunteers from popular online social networks and paid workers from crowdsourcing marketplaces confirm applicability of these procedures for enhancing lexical resources. 

  6. Application of stochastic weighted algorithms to a multidimensional silica particle model

    Energy Technology Data Exchange (ETDEWEB)

    Menz, William J. [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom); Patterson, Robert I.A.; Wagner, Wolfgang [Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstrasse 39, Berlin 10117 (Germany); Kraft, Markus, E-mail: mk306@cam.ac.uk [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom)

    2013-09-01

    Highlights: •Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. •An implementation of SWAs with the transition kernel is presented. •The SWAs’ solutions converge to the direct simulation algorithm’s (DSA) solution. •The efficiency of SWAs is evaluated for this multidimensional particle model. •It is shown that SWAs can be used for coagulation problems in industrial systems. -- Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high dimensional particle models to simulate real-world systems.

  7. On the development of protein pKa calculation algorithms

    Science.gov (United States)

    Carstensen, Tommy; Farrell, Damien; Huang, Yong; Baker, Nathan A.; Nielsen, Jens Erik

    2011-01-01

    Protein pKa calculation methods are developed partly to provide fast non-experimental estimates of the ionization constants of protein side chains. However, the most significant reason for developing such methods is that a good pKa calculation method is presumed to provide an accurate physical model of protein electrostatics, which can be applied in methods for drug design, protein design and other structure-based energy calculation methods. We explore the validity of this presumption by simulating the development of a pKa calculation method using artificial experimental data derived from a human-defined physical reality. We examine the ability of an RMSD-guided development protocol to retrieve the correct (artificial) physical reality and find that a rugged optimization landscape and a huge parameter space prevent the identification of the correct physical reality. We examine the importance of the training set in developing pKa calculation methods and investigate the effect of experimental noise on our ability to identify the correct physical reality, and find that both effects have a significant and detrimental impact on the physical reality of the optimal model identified. Our findings are of relevance to all structure-based methods for protein energy calculations and simulation, and have large implications for all types of current pKa calculation methods. Our analysis furthermore suggests that careful and extensive validation on many types of experimental data can go some way in making current models more realistic. PMID:21744393

  8. Development and Implementation of a Hardware In-the-Loop Test Bed for Unmanned Aerial Vehicle Control Algorithms

    Science.gov (United States)

    Nyangweso, Emmanuel; Bole, Brian

    2014-01-01

    Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.

  9. Development of a Novel Probabilistic Algorithm for Localization of Rotors during Atrial Fibrillation

    Science.gov (United States)

    Ganesan, Prasanth; Salmin, Anthony; Cherry, Elizabeth M.; Ghoraani, Behnaz

    2018-01-01

    Atrial fibrillation (AF) is an irregular heart rhythm that can lead to stroke and other heart-related complications. Catheter ablation has been commonly used to destroy the triggering sources of AF in the atria and consequently terminate the arrhythmia. However, efficient and accurate localization of the AF-sustaining sources, known as rotors, is a major challenge in catheter ablation. In this paper, we developed a novel probabilistic algorithm that can adaptively guide a Lasso diagnostic catheter to locate the center of a rotor. Our algorithm uses a Bayesian updating approach to search for and locate rotors based on the characteristics of the electrogram signals collected at every catheter placement. The algorithm was evaluated using a 10 cm × 10 cm 2D atrial tissue simulation of the Nygren human atrial cell model and was able to successfully guide the catheter to the rotor center in 3.37±1.05 (mean±std) steps (including placement at the center) when starting from any location on the tissue. Our novel automated algorithm can potentially play a significant role in the patient-specific ablation of AF sources and increase the success of AF elimination procedures. PMID:28268378
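    The Bayesian updating at the core of such a guided search can be sketched generically: maintain a posterior over candidate rotor locations on a grid, multiply in a likelihood after each catheter measurement, and renormalise. The likelihood below is a placeholder Gaussian; the paper's actual electrogram-based likelihood is more involved.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """One Bayesian update of a gridded posterior over rotor locations.

    prior, likelihood -- 2D arrays over the candidate grid; the
    likelihood encodes how consistent the electrograms recorded at the
    current catheter placement are with each candidate rotor centre.
    """
    posterior = prior * likelihood
    return posterior / posterior.sum()

grid = (50, 50)
posterior = np.full(grid, 1.0 / np.prod(grid))  # uniform prior
# Placeholder likelihood favouring the neighbourhood of cell (30, 20).
yy, xx = np.mgrid[:grid[0], :grid[1]]
likelihood = np.exp(-((yy - 30) ** 2 + (xx - 20) ** 2) / 200.0)
posterior = bayes_update(posterior, likelihood)
# The next catheter placement would target the posterior's peak.
print(np.unravel_index(posterior.argmax(), grid))
```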

  10. Modelling and genetic algorithm based optimisation of inverse supply chain

    Science.gov (United States)

    Bányai, T.

    2009-04-01

    (Recycling of household appliances with emphasis on reuse options). The purpose of this paper is the presentation of a possible method for avoiding unnecessary environmental risk and landscape use caused by an unjustifiably large supply chain of collection systems for recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to wastes of electric and electronic products), and in the second part of the work a genetic algorithm based optimisation method is demonstrated, by the aid of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economical, ecological and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In the case of this static model it is assumed that technical conditions are permanent. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material handling related costs. The most important factors determining the level of environmental risk cost are the number of products not recycled (treated or reused) in time, the number of supply chain objects and the length of transportation routes. The objective function is the minimization of the total cost taking into consideration the constraints. Although a lot of research work has discussed the design of supply chains [8], most of it concentrates on linear cost functions. In the case of this model non-linear cost functions were used. The non-linear cost functions and the possibly high number of objects in the inverse supply chain led to the problem of choosing a

  11. Application of locally developed pavement temperature prediction algorithms in performance grade (PG) binder selection

    CSIR Research Space (South Africa)

    Denneman, E

    2007-07-01

    Full Text Available , in other words, data from outside the datasets against which the model was developed. The Viljoen algorithms form the basis of newly developed pavement temperature prediction software, called CSIR ThermalPADS. The use of this software in HMA... is provided as Equation 3. The ThermalPADS software contains a more accurate approximation of the daily solar declination: Declination = −23.45° ⋅ cos[(360°/365) ⋅ (N + 10)] (3), where N = day of the year (with 1st of January = 1). The equation for maximum asphalt...
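    A direct transcription of the reconstructed Equation 3, for reference. This is a sketch only: ThermalPADS itself is not publicly specified here, and the leading minus sign follows the standard cosine form of this approximation (which makes the declination negative in early January, as observed).

```python
import math

def solar_declination(day_of_year):
    """Daily solar declination (degrees) per the reconstructed Equation 3.

    day_of_year -- N, with 1st of January = 1
    """
    return -23.45 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

# Around the June solstice (N = 172) this approaches +23.45 degrees.
print(round(solar_declination(172), 2))
```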

  12. Modeling Tourism Sustainable Development

    Science.gov (United States)

    Shcherbina, O. A.; Shembeleva, E. A.

    The basic approaches to decision making and modeling tourism sustainable development are reviewed. The dynamics of sustainable development are considered within Forrester's system dynamics. The multidimensionality of tourism sustainable development and multicriteria issues of sustainable development are analyzed. Decision Support Systems (DSS) and Spatial Decision Support Systems (SDSS) are discussed as effective techniques for examining and visualizing the impacts of policies and sustainable tourism development strategies within an integrated and dynamic framework. Main modules that may be utilized for the integrated modeling of sustainable tourism development are proposed.

  13. Parametrisation of a Maxwell model for transient tyre forces by means of an extended firefly algorithm

    Directory of Open Access Journals (Sweden)

    Andreas Hackl

    2016-12-01

    Full Text Available Developing functions for advanced driver assistance systems requires very accurate tyre models, especially for the simulation of transient conditions. In the past, parametrisation of a given tyre model based on measurement data showed shortcomings, and the globally optimal solution obtained did not appear to be plausible. In this article, an optimisation strategy is presented which is able to find plausible and physically feasible solutions by detecting many local optima. The firefly algorithm mimics the natural behaviour of fireflies, which use a kind of flashing light to communicate with other members. An algorithm simulating the intensity of the light of a single firefly, diminishing with increasing distance, is implicitly able to detect local solutions on its way to the best solution in the search space. This implicit clustering feature is reinforced by an additional explicit clustering step, where local solutions are stored and finally processed to obtain a large number of possible solutions. The enhanced firefly algorithm is first applied to the well-known Rastrigin functions and then to the tyre parametrisation problem. It is shown that the firefly algorithm is qualified to find a high number of optimisation solutions, which is required for a plausible parametrisation of the given tyre model.
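    For orientation, a canonical (unextended) firefly iteration on the Rastrigin function looks as follows; the parameter values are conventional illustrative choices, and the article's explicit clustering extension is deliberately omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)

def rastrigin(x):
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def firefly(f, dim=2, n=25, gens=200, beta0=1.0, gamma=1.0, alpha=0.2):
    """Minimise f with a basic firefly algorithm.

    Brightness is inverse fitness; each firefly moves towards every
    brighter one with attractiveness decaying as exp(-gamma * r^2).
    """
    X = rng.uniform(-5.12, 5.12, (n, dim))
    F = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:  # firefly j is brighter, so i moves to it
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
                    F[i] = f(X[i])
        alpha *= 0.98  # cool the random walk over time
    best = np.argmin(F)
    return X[best], F[best]

x_best, f_best = firefly(rastrigin)
print(x_best, f_best)
```

    The distance-decaying attractiveness is what produces the implicit clustering the abstract mentions: distant sub-swarms barely attract each other, so several local optima can be occupied at once.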

  14. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is highly used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.

  15. Development of a New Fractal Algorithm to Predict Quality Traits of MRI Loins

    DEFF Research Database (Denmark)

    Caballero, Daniel; Caro, Andrés; Amigo, José Manuel

    2017-01-01

    Traditionally, the quality traits of meat products have been estimated by means of physico-chemical methods. Computer vision algorithms on MRI have also been presented as an alternative to these destructive methods since MRI is non-destructive, non-ionizing and innocuous. The use of fractals to analyze MRI could be another possibility for this purpose. In this paper, a new fractal algorithm is developed to obtain features from MRI based on fractal characteristics. This algorithm is called OPFTA (One Point Fractal Texture Algorithm). Three fractal algorithms were tested in this study: CFA (Classical fractal algorithm), FTA (Fractal texture algorithm) and OPFTA. The results obtained by means of these three fractal algorithms were correlated to the results obtained by means of physico-chemical methods. OPFTA and FTA achieved correlation coefficients higher than 0.75 and CFA reached low

  16. [A Hyperspectral Imagery Anomaly Detection Algorithm Based on Gauss-Markov Model].

    Science.gov (United States)

    Gao, Kun; Liu, Ying; Wang, Li-jing; Zhu, Zhen-yu; Cheng, Hao-bo

    2015-10-01

    With the development of spectral imaging technology, hyperspectral anomaly detection is becoming more and more widely used in remote sensing imagery processing. The traditional RX anomaly detection algorithm neglects the spatial correlation of images. Besides, it does not validly reduce the data dimension, which costs too much processing time and shows low validity on hyperspectral data. Hyperspectral images follow a Gauss-Markov random field (GMRF) model in the spatial and spectral dimensions. The inverse of the covariance matrix can be directly calculated by building the Gauss-Markov parameters, which avoids the huge calculation over hyperspectral data. This paper proposes an improved RX anomaly detection algorithm based on a three-dimensional GMRF. The hyperspectral imagery data is simulated with the GMRF model, and the GMRF parameters are estimated with the approximated maximum likelihood method. The detection operator is constructed with the GMRF estimation parameters. The pixel being detected is considered as the centre of a local optimization window, called the GMRF detection window. The abnormality degree is calculated with the mean vector and inverse covariance matrix, both computed within the window. The image is detected pixel by pixel as the GMRF window moves. The traditional RX detection algorithm, the regional hypothesis detection algorithm based on GMRF, and the algorithm proposed in this paper are simulated with AVIRIS hyperspectral data. Simulation results show that the proposed anomaly detection method is able to improve detection efficiency and reduce the false alarm rate. We obtained operation time statistics for the three algorithms in the same computing environment; the results show that the proposed algorithm reduces the operation time by 45.2%, which demonstrates good computing efficiency.
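    For reference, the classical (global) RX detector that the paper improves upon scores each pixel spectrum by its Mahalanobis distance from the image background statistics. A minimal sketch; the GMRF variant described above models the inverse covariance directly instead of estimating and inverting it.

```python
import numpy as np

def rx_detector(cube):
    """Classical global RX anomaly detector.

    cube -- hyperspectral image, shape (rows, cols, bands)
    Returns a (rows, cols) map of Mahalanobis distances; large values
    flag spectra that deviate from the global background statistics.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    D = X - mu
    # (x - mu)^T C^-1 (x - mu) for every pixel at once.
    scores = np.einsum("ij,jk,ik->i", D, cov_inv, D)
    return scores.reshape(rows, cols)
```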

  17. The production-distribution problem with order acceptance and package delivery: models and algorithm

    Directory of Open Access Journals (Sweden)

    Khalili Majid

    2016-01-01

    Full Text Available The production planning and distribution problem is among the most important decisions in the supply chain. Classically, in this problem, it is assumed that all orders have to be produced and delivered separately; in practice, however, an order may be rejected if the cost it brings to the supply chain exceeds its revenue. Moreover, orders can be delivered in a batch to reduce the related costs. This paper considers the production planning and distribution problem with order acceptance and package delivery to maximize profit. At first, a new mathematical model based on mixed integer linear programming is developed. Using commercial optimization software, the model can optimally solve small and even medium-sized instances. For large instances, a solution method based on the imperialist competitive algorithm is also proposed. Using numerical experiments, the proposed model and algorithm are evaluated.

  18. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    Directory of Open Access Journals (Sweden)

    Gonglin Yuan

    Full Text Available Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second is proposed for solving nonlinear equations. The first method uses two pieces of information: the function value and the gradient value. Both methods possess the following good properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
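
    As an illustration of the listed properties, the sketch below implements the classical PRP+ rule βk = max(0, βPRP), which enforces βk ≥ 0, combined with a backtracking Armijo line search. It is a textbook variant under these assumptions, not the paper's two modified methods.

        import numpy as np

        def prp_plus_cg(f, grad, x0, iters=500, tol=1e-8):
            # PRP+ nonlinear conjugate gradient with Armijo backtracking.
            x = np.asarray(x0, dtype=float)
            g = grad(x)
            d = -g
            for _ in range(iters):
                if np.linalg.norm(g) < tol:
                    break
                if g @ d >= 0:          # safeguard: restart with steepest descent
                    d = -g
                t, fx = 1.0, f(x)
                for _ in range(60):     # backtracking Armijo line search
                    if f(x + t * d) <= fx + 1e-4 * t * (g @ d):
                        break
                    t *= 0.5
                x_new = x + t * d
                g_new = grad(x_new)
                beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PRP+ formula
                d = -g_new + beta * d
                x, g = x_new, g_new
            return x

        # usage: minimize the Rosenbrock function
        f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
        grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                   200 * (x[1] - x[0]**2)])
        print(prp_plus_cg(f, grad, [-1.2, 1.0]))  # approaches (1, 1)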

  19. Development of a Scaling Algorithm for Remotely Sensed and In-situ Soil Moisture Data across Complex Terrain

    Science.gov (United States)

    Shin, Y.; Mohanty, B. P.

    2012-12-01

    Spatial scaling algorithms have been developed and improved to increase the availability of remotely sensed (RS) and in-situ soil moisture data for hydrologic applications. Existing approaches have their own drawbacks, such as limited applicability in complex terrain and the complexity of coupling downscaling and upscaling approaches. In this study, we developed a joint downscaling and upscaling algorithm for remotely sensed and in-situ soil moisture data. The newly developed algorithm can downscale RS soil moisture footprints and upscale in-situ data simultaneously in complex terrain. The scheme is based on inverse modeling with a genetic algorithm. A normalized digital elevation model (NDEM) and the normalized difference vegetation index (NDVI), which represent the heterogeneity of topography and vegetation cover, were used to characterize the variability of the land surface. Our approach determines soil hydraulic parameters from RS and in-situ soil moisture at airborne/satellite footprint scales. Soil moisture estimates were then predicted from the derived soil hydraulic properties using a hydrological model (Soil-Water-Atmosphere-Plant, SWAP). As model-simulated soil moisture predictions were generated for different elevations and NDVI values across complex terrain at a finer scale (30 m × 30 m), downscaled and upscaled soil moisture estimates were obtained. We selected the Little Washita watershed in Oklahoma to validate the proposed methodology at multiple scales. The newly developed joint downscaling and upscaling algorithm performed well across topographically complex regions and efficiently improved the availability of RS and in-situ soil moisture at scales appropriate for agriculture and water resources management.

  20. The effect of different log P algorithms on the modeling of the soil sorption coefficient of nonionic pesticides.

    Science.gov (United States)

    dos Reis, Ralpho Rinaldo; Sampaio, Silvio César; de Melo, Eduardo Borges

    2013-10-01

    Collecting data on the effects of pesticides on the environment is a slow and costly process. Therefore, significant efforts have been focused on the development of models that predict physical, chemical or biological properties of environmental interest. The soil sorption coefficient normalized to the organic carbon content (Koc) is a key parameter that is used in environmental risk assessments. Thus, several log Koc prediction models that use the hydrophobic parameter log P as a descriptor have been reported in the literature. Often, algorithms are used to calculate the value of log P due to the lack of experimental values for this property. Despite the availability of various algorithms, previous studies fail to describe the procedure used to select the appropriate algorithm. In this study, models that correlate log Koc with log P were developed for a heterogeneous group of nonionic pesticides using different freeware algorithms. The statistical qualities and predictive power of all of the models were evaluated. Thus, this study was conducted to assess the effect of the log P algorithm choice on log Koc modeling. The results clearly demonstrate that the lack of a selection criterion may result in inappropriate prediction models. Seven algorithms were tested, of which only two (ALOGPS and KOWWIN) produced good results. A sensible choice may result in simple models with statistical qualities and predictive power values that are comparable to those of more complex models. Therefore, the selection of the appropriate log P algorithm for modeling log Koc cannot be arbitrary but must be based on the chemical structure of compounds and the characteristics of the available algorithms. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Genetic algorithms used for PWRs refuel management automatic optimization: a new modelling

    International Nuclear Information System (INIS)

    Chapot, Jorge Luiz C.; Schirru, Roberto; Silva, Fernando Carvalho da

    1996-01-01

    A Genetic Algorithms-based system, linking the computer codes GENESIS 5.0 and ANC through the interface ALGER, has been developed for PWR fuel management optimization. An innovative codification, the Lists Model, has been incorporated into the genetic system; it avoids the use of variants of the standard crossover operator and generates only valid loading patterns in the core. The GENESIS/ALGER/ANC system has been successfully tested in an optimization study for the Angra-1 second cycle. (author)
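
    The record does not spell out the Lists Model itself, but the idea of generating only valid loading patterns can be illustrated with a permutation-encoded GA that relies on swap mutation alone, so every individual remains a feasible permutation and no crossover repair is needed. The fitness function below is a hypothetical stand-in for a core-physics evaluation such as a call to ANC.

        import random

        def swap_mutation(perm):
            # Swapping two positions keeps the individual a valid permutation.
            p = perm[:]
            i, j = random.sample(range(len(p)), 2)
            p[i], p[j] = p[j], p[i]
            return p

        def permutation_ga(fitness, n_items, pop_size=50, generations=200):
            # Mutation-only GA over permutations; every candidate is feasible.
            pop = [random.sample(range(n_items), n_items) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness)                  # minimize fitness
                elite = pop[: pop_size // 5]
                pop = elite + [swap_mutation(random.choice(elite))
                               for _ in range(pop_size - len(elite))]
            return min(pop, key=fitness)

        # toy fitness: distance from a hypothetical target loading pattern
        target = list(range(12))
        fitness = lambda p: sum(abs(a - b) for a, b in zip(p, target))
        print(permutation_ga(fitness, 12))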

  2. Models and algorithm of optimization launch and deployment of virtual network functions in the virtual data center

    Science.gov (United States)

    Bolodurina, I. P.; Parfenov, D. I.

    2017-10-01

    The goal of our investigation is the optimization of network operation in a virtual data center. The advantage of modern infrastructure virtualization lies in the possibility of using software-defined networks. However, existing algorithmic solutions do not take into account the specific features of working with multiple classes of virtual network functions. The current paper describes models characterizing the basic structures of the objects of a virtual data center. These include: a level-distribution model of the software-defined infrastructure of the virtual data center, a generalized model of a virtual network function, and a neural network model for the identification of virtual network functions. We also developed an efficient algorithm for optimizing the containerization of virtual network functions in the virtual data center, and we propose an efficient algorithm for placing virtual network functions. In our investigation we also generalize the well-known heuristic and deterministic Karmarkar-Karp algorithms.
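
    The record does not say how the Karmarkar-Karp algorithms are generalized for placement, but the classical two-way version (the largest differencing method, a standard load-balancing heuristic) is compact enough to sketch:

        import heapq

        def karmarkar_karp(nums):
            # Repeatedly replace the two largest values by their difference;
            # the final value is the heuristic difference between the two
            # partition sums (max-heap simulated by negation).
            heap = [-x for x in nums]
            heapq.heapify(heap)
            while len(heap) > 1:
                a = -heapq.heappop(heap)         # largest
                b = -heapq.heappop(heap)         # second largest
                heapq.heappush(heap, -(a - b))
            return -heap[0]

        print(karmarkar_karp([8, 7, 6, 5, 4]))  # -> 2 (heuristic; the optimal
                                                # split {8, 7} vs {6, 5, 4} differs by 0)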

  3. Calibration of Uncertainty Analysis of the SWAT Model Using Genetic Algorithms and Bayesian Model Averaging

    Science.gov (United States)

    In this paper, the Genetic Algorithms (GA) and Bayesian model averaging (BMA) were combined to simultaneously conduct calibration and uncertainty analysis for the Soil and Water Assessment Tool (SWAT). In this hybrid method, several SWAT models with different structures are first selected; next GA i...

  4. An Evolutionary Search Algorithm for Covariate Models in Population Pharmacokinetic Analysis.

    Science.gov (United States)

    Yamashita, Fumiyoshi; Fujita, Atsuto; Sasa, Yukako; Higuchi, Yuriko; Tsuda, Masahiro; Hashida, Mitsuru

    2017-09-01

    Building a covariate model is a crucial task in population pharmacokinetics. This study develops a novel method for automated covariate modeling based on gene expression programming (GEP), which not only enables covariate selection, but also the construction of nonpolynomial relationships between pharmacokinetic parameters and covariates. To apply GEP to the extended nonlinear least squares analysis, the parameter consolidation and initial parameter value estimation algorithms were further developed and implemented. The entire program was coded in Java. The performance of the developed covariate model was evaluated for the population pharmacokinetic data of tobramycin. In comparison with the established covariate model, goodness-of-fit of the measured data was greatly improved by using only 2 additional adjustable parameters. Ten test runs yielded the same solution. In conclusion, the systematic exploration method is a potentially powerful tool for prescreening covariate models in population pharmacokinetic analysis. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  5. Developing mathematical modelling competence

    DEFF Research Database (Denmark)

    Blomhøj, Morten; Jensen, Tomas Højgaard

    2003-01-01

    In this paper we introduce the concept of mathematical modelling competence, by which we mean being able to carry through a whole mathematical modelling process in a certain context. Analysing the structure of this process, six sub-competences are identified. Mathematical modelling competence cannot be reduced to these six sub-competences, but they are necessary elements in the development of mathematical modelling competence. Experience from the development of a modelling course is used to illustrate how the different nature of the sub-competences can be used as a tool for finding the balance between different kinds of activities in a particular educational setting. Obstacles of social, cognitive and affective nature for the students' development of mathematical modelling competence are reported and discussed in relation to the sub-competences.

  6. Advances in diagnosing vaginitis: development of a new algorithm.

    Science.gov (United States)

    Nyirjesy, Paul; Sobel, Jack D

    2005-11-01

    The current approach to diagnosing vulvovaginal symptoms is both flawed and inadequate. Mistakes occur at the level of the patient herself, her provider, and the sensitivity of office-based tests. Often, the differential diagnosis is so broad that providers may overlook some of the possibilities. A diagnostic algorithm that separates women according to normal or elevated vaginal pH can successfully classify most women with vaginitis. Based on the amine test, vaginal leukocytes, and vaginal parabasal cells, those with an elevated pH can be placed into further diagnostic categories. Such an algorithm helps to prioritize the different diagnoses and suggests appropriate ancillary tests.

  7. Developing a Learning Algorithm-Generated Empirical Relaxer

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Wayne [Univ. of Colorado, Boulder, CO (United States). Dept. of Applied Math; Kallman, Josh [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Toreja, Allen [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gallagher, Brian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jiang, Ming [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Laney, Dan [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-03-30

    One of the main difficulties when running Arbitrary Lagrangian-Eulerian (ALE) simulations is determining how much to relax the mesh during the Eulerian step. This determination is currently made by the user on a simulation-by-simulation basis. We present a Learning Algorithm-Generated Empirical Relaxer (LAGER) which uses a random forest regression algorithm to automate this decision process. We also demonstrate that LAGER successfully relaxes a variety of test problems, maintains simulation accuracy, and has the potential to significantly decrease both the person-hours and computational hours needed to run a successful ALE simulation.
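
    A minimal sketch of the regression step at the heart of such a tool, using scikit-learn's random forest regressor; the per-zone mesh-quality features and training data below are hypothetical, since the actual LAGER feature set is not described in the record.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        # hypothetical features per zone: [aspect ratio, skewness, distortion rate]
        X = rng.random((500, 3))
        # hypothetical target: relaxation amount a user chose in past runs
        y = 0.6 * X[:, 2] + 0.3 * X[:, 1] + 0.05 * rng.standard_normal(500)

        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

        # at run time, predict how much to relax each zone in the Eulerian step
        new_zones = rng.random((4, 3))
        print(model.predict(new_zones))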

  8. Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

    NARCIS (Netherlands)

    Jansen, R.C.

    A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical
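
    For intuition, the sketch below shows the E- and M-steps of EM for a plain two-component 1-D Gaussian mixture; the paper embeds the mixture within the wider GLM framework, which is not reproduced here. Normalization constants are omitted because they cancel in the responsibilities.

        import numpy as np

        def em_two_gaussians(x, iters=100):
            # crude initialization from the data
            w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
            for _ in range(iters):
                # E-step: responsibility of component 1 for each point
                p1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
                p2 = (1 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
                r = p1 / (p1 + p2)
                # M-step: weighted maximum-likelihood updates
                w = r.mean()
                mu1 = (r * x).sum() / r.sum()
                mu2 = ((1 - r) * x).sum() / (1 - r).sum()
                s1 = np.sqrt((r * (x - mu1) ** 2).sum() / r.sum())
                s2 = np.sqrt(((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum())
            return w, mu1, mu2, s1, s2

        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 700)])
        print(em_two_gaussians(x))  # weight ~0.3, means ~0 and ~5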

  9. Physics Based Model for Cryogenic Chilldown and Loading. Part I: Algorithm

    Science.gov (United States)

    Luchinsky, Dmitry G.; Smelyanskiy, Vadim N.; Brown, Barbara

    2014-01-01

    We report progress in the development of a physics-based model for cryogenic chilldown and loading. Chilldown and loading are modelled as a fully separated, non-equilibrium two-phase flow of cryogenic fluid thermally coupled to the pipe walls. The solution closely follows the nearly-implicit and semi-implicit algorithms for autonomous control of thermal-hydraulic systems developed by Idaho National Laboratory. Special attention is paid to the treatment of instabilities. The model is applied to the analysis of chilldown in the rapid loading system developed at NASA Kennedy Space Center. A nontrivial characteristic feature of the analyzed chilldown regime is its active control by dump valves. The numerical predictions are in reasonable agreement with the experimental time traces. The obtained results pave the way to the development of autonomous loading operations on the ground and in space.

  10. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    Science.gov (United States)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, this paper presents a false-alarm-aware methodology to reduce the false alarm rate while the detection rate remains undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in such a way that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, the multi-scale average absolute gray difference (AAGD) and the Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm is well suited to real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, the proposed methodology is expandable to any pair of detection algorithms that have different false alarm sources.

  11. Optimisation of Hidden Markov Model using Baum–Welch algorithm ...

    Indian Academy of Sciences (India)

    A new model λ̄ is obtained, which is more likely than model λ to produce the observation sequence O. This process of re-estimation is continued until no further improvement in the probability of the observation sequence is reached. HMMs have been developed for prediction of maximum and minimum temperatures in ...

  12. A parallel domain decomposition algorithm for coastal ocean circulation models based on integer linear programming

    Science.gov (United States)

    Jordi, Antoni; Georgas, Nickitas; Blumberg, Alan

    2017-05-01

    This paper presents a new parallel domain decomposition algorithm based on integer linear programming (ILP), a mathematical optimization method. To minimize the computation time of coastal ocean circulation models, the ILP decomposition algorithm divides the global domain into local domains with balanced workloads according to the number of processors, and avoids computations over as many land grid cells as possible. In addition, it maintains the use of logically rectangular local domains and achieves exactly the same results as traditional domain decomposition algorithms (such as Cartesian decomposition). However, the ILP decomposition algorithm may not converge to an exact solution for relatively large domains. To overcome this problem, we developed two ILP decomposition formulations. The first one (the complete formulation) has no additional restrictions, although it is impractical for large global domains. The second one (the feasible formulation) imposes local domains with the same dimensions and looks for the feasibility of such a decomposition, which allows much larger global domains. The parallel performance of both ILP formulations is compared to a base Cartesian decomposition by simulating two cases with the newly created parallel version of the Stevens Institute of Technology's Estuarine and Coastal Ocean Model (sECOM). Simulations with the ILP formulations always run faster than those with the base decomposition, and the complete formulation is better than the feasible one when it is applicable. In addition, parallel efficiency with the ILP decomposition may be greater than one.

  13. The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models

    OpenAIRE

    GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.

    2008-01-01

    In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.

  14. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case
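
    For reference, a serial radix-2 Cooley-Tukey FFT is sketched below; the paper's contribution is the BSP-parallel, group-cyclic distribution of exactly this computation, which the sketch does not attempt.

        import cmath

        def fft(a):
            # Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
            n = len(a)
            if n == 1:
                return a
            even, odd = fft(a[0::2]), fft(a[1::2])
            out = [0] * n
            for k in range(n // 2):
                t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
                out[k] = even[k] + t
                out[k + n // 2] = even[k] - t
            return out

        print(fft([1, 2, 3, 4, 0, 0, 0, 0]))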

  15. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    Directory of Open Access Journals (Sweden)

    J. Frydendall

    2009-08-01

    Full Text Available A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark. In this paper, the algorithm and the results from experiments designed to find its optimal setup are described. The algorithm has been developed and optimized via eight experiments in which the results from different model setups were tested against measurements from the EMEP (European Monitoring and Evaluation Programme) network covering a half-year period, April–September 1999. The best-performing setup of the data assimilation algorithm for surface ozone concentrations was found; it combines determining the covariances using the Hollingsworth method, varying the correlation length according to the number of adjacent observation stations, and applying the assimilation routine at three successive hours during the morning. Improvements in the correlation coefficient in the range of 0.1 to 0.21 were found between the reference and the optimal configuration of the data assimilation algorithm. The data assimilation algorithm will in the future be used in the operational THOR integrated air pollution forecast system, which includes DEOM.
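
    The core of a statistical (optimal) interpolation scheme is a single gain-weighted update of the model background towards the observations; a minimal sketch with hypothetical covariances and ozone values:

        import numpy as np

        def statistical_interpolation(xb, B, y, H, R):
            # analysis = background + K (obs - H background), with the usual
            # optimal-interpolation gain K = B H^T (H B H^T + R)^-1
            K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
            return xb + K @ (y - H @ xb)

        # toy setup: 3 model grid points, 2 observation sites
        xb = np.array([40.0, 55.0, 60.0])                       # background ozone
        B = 4.0 * np.exp(-np.abs(np.subtract.outer(range(3), range(3))) / 2.0)
        H = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])        # obs operator
        R = np.eye(2)                                           # obs error cov
        y = np.array([44.0, 58.0])                              # observations
        print(statistical_interpolation(xb, B, y, H, R))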

  16. Development of the Algorithm for Energy Efficiency Improvement of Bulk Material Transport System

    Directory of Open Access Journals (Sweden)

    Milan Bebic

    2013-06-01

    Full Text Available The paper presents a control strategy for a system of belt conveyors with adjustable speed drives, based on the principle of optimum energy consumption. Different algorithms are developed for generating the reference speed of the system of belt conveyors in order to achieve the maximum material cross section on the belts and thus reduce the required electrical drive power. The control structures presented in the paper are developed and tested on a detailed mathematical model of the drive system with the rubber belt. The performed analyses indicate that the application of an algorithm based on fuzzy logic control (FLC) which incorporates drive torque as an input variable is the proper solution. Therefore, this solution is implemented on a new variable-speed belt conveyor system with remote control on an open pit mine. Measurements on the system prove that the applied fuzzy logic control algorithm provides minimum electrical energy consumption of the drive under the given constraints. The paper also presents an additional analytical verification of the achieved results through a method based on sequential quadratic programming for finding the minimum of a nonlinear function of multiple variables under given constraints.

  17. Genetic algorithm based optimization of advanced solar cell designs modeled in Silvaco Atlas™

    OpenAIRE

    Utsler, James

    2006-01-01

    A genetic algorithm was used to optimize the power output of multi-junction solar cells. Solar cell operation was modeled using the Silvaco ATLAS™ software. The output of the ATLAS™ simulation runs served as the input to the genetic algorithm. The genetic algorithm was run as a diffusing computation on a network of eighteen dual-processor nodes. Results showed that the genetic algorithm produced better power output optimizations when compared with the results obtained using the hill cli...

  18. A robust model predictive control algorithm for uncertain nonlinear systems that guarantees resolvability

    Science.gov (United States)

    Acikmese, Ahmet Behcet; Carson, John M., III

    2006-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees resolvability. With resolvability, initial feasibility of the finite-horizon optimal control problem implies future feasibility in a receding-horizon framework. The control consists of two components: (i) a feed-forward part, and (ii) a feedback part. The feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.

  19. Small Body GN&C Research Report: A Robust Model Predictive Control Algorithm with Guaranteed Resolvability

    Science.gov (United States)

    Acikmese, Behcet A.; Carson, John M., III

    2005-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. The control consists of two components: (i) a feed-forward part, and (ii) a feedback part. The feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.

  20. A Cost-Effective Tracking Algorithm for Hypersonic Glide Vehicle Maneuver Based on Modified Aerodynamic Model

    Directory of Open Access Journals (Sweden)

    Yu Fan

    2016-10-01

    Full Text Available In order to defend against the hypersonic glide vehicle (HGV), a cost-effective single-model tracking algorithm using the Cubature Kalman filter (CKF) is proposed in this paper, based on a modified aerodynamic model (MAM) as the process equation and a radar measurement model as the measurement equation. In the existing aerodynamic model, the two control variables, attack angle and bank angle, cannot be measured by existing radar equipment, and their control laws are not known to defenders. To establish the process equation, the MAM for HGV tracking is proposed by using additive white noise to model the rates of change of the two control variables. For ease of comparison, several multiple-model algorithms based on the CKF are also presented, including the interacting multiple model (IMM) algorithm, the adaptive grid interacting multiple model (AGIMM) algorithm, and the hybrid grid multiple model (HGMM) algorithm. The performances of these algorithms are compared and analyzed according to the simulation results. The simulation results indicate that the proposed tracking algorithm based on the modified aerodynamic model has the best tracking performance, with the best accuracy and the least computational cost among all the tracking algorithms in this paper. The proposed algorithm is cost-effective for HGV tracking.

  1. Introducing Elitist Black-Box Models: When Does Elitist Selection Weaken the Performance of Evolutionary Algorithms?

    OpenAIRE

    Doerr, Carola; Lengler, Johannes

    2015-01-01

    Black-box complexity theory provides lower bounds for the runtime of black-box optimizers like evolutionary algorithms and serves as an inspiration for the design of new genetic algorithms. Several black-box models covering different classes of algorithms exist, each highlighting a different aspect of the algorithms under consideration. In this work we add to the existing black-box notions a new elitist black-box model, in which algorithms are required to base all decisions solely on ...

  2. Developing the algorithm for assessing the competitive abilities of functional foods in marketing

    Directory of Open Access Journals (Sweden)

    Nilova Liudmila

    2017-01-01

    Full Text Available A thorough analysis of the competitive factors of functional foods has made it possible to develop an algorithm for assessing the competitive abilities of functional food products with respect to their essential consumer features: quality, safety and functionality. Questionnaires filled in by experts and the published results of surveys of consumers from different countries were used to help select the essential consumer features of functional foods. A "desirability of consumer features" model triangle, based on functional bread and bakery products, was constructed with the use of the Harrington function.

  3. Caco-2 cell permeability modelling: a neural network coupled genetic algorithm approach

    Science.gov (United States)

    Di Fenza, Armida; Alagona, Giuliano; Ghio, Caterina; Leonardi, Riccardo; Giolitti, Alessandro; Madami, Andrea

    2007-04-01

    The ability to cross the intestinal cell membrane is a fundamental prerequisite for a drug compound. However, the experimental measurement of this important property is a costly and highly time-consuming step of the drug development process, because it is necessary to synthesize the compound first. Therefore, in silico modelling of intestinal absorption, which can be carried out at very early stages of drug design, is an appealing alternative; it is based mainly on multivariate statistical analysis such as partial least squares (PLS) and neural networks (NN). Our implementation of neural network models for the prediction of intestinal absorption is based on the correlation of Caco-2 cell apparent permeability (Papp) values, as a measure of intestinal absorption, with the structures of two different data sets of drug candidates. Several molecular descriptors of the compounds were calculated and the optimal subsets were selected using a genetic algorithm; the method is therefore referred to as Genetic Algorithm-Neural Network (GA-NN). A methodology combining a genetic algorithm search with neural network analysis applied to the modelling of Caco-2 Papp has never been presented before, although the two procedures have already been employed separately. Moreover, we provide new Caco-2 cell permeability measurements for more than two hundred compounds. Interestingly, the selected descriptors possess physico-chemical connotations in excellent accordance with the well-known molecular properties involved in cellular membrane permeation: hydrophilicity, hydrogen bonding propensity, hydrophobicity and molecular size. The predictive ability of the models, although rather good for a preliminary study, is somewhat affected by the poor precision of the experimental Caco-2 measurements. Finally, the generalization ability of one model was checked on an external test set not derived from the data sets used to build the models.

  4. Heuristic Algorithms for Solving Bounded Diameter Minimum Spanning Tree Problem and Its Application to Genetic Algorithm Development

    OpenAIRE

    Nghia, Nguyen Duc; Binh, Huynh Thi Thanh

    2008-01-01

    We introduce a heuristic algorithm for solving the BDMST problem, called CBRC. Experiments show that CBRC obtains better results than the other known heuristic algorithms for solving the BDMST problem on Euclidean instances. The best solution found by a genetic algorithm that uses the best heuristic algorithm, or only one heuristic algorithm, for initializing the population is not better than the best solution found by a genetic algorithm that uses mixed heuristic algorithms (randomized heurist...

  5. Sampling algorithms for validation of supervised learning models for Ising-like systems

    Science.gov (United States)

    Portman, Nataliya; Tamblyn, Isaac

    2017-12-01

    In this paper, we build and explore supervised learning models of ferromagnetic system behavior, using Monte Carlo sampling of the spin configuration space generated by the 2D Ising model. Given the enormous size of the space of all possible Ising model realizations, the question arises as to how to choose a reasonable number of samples that will form physically meaningful and non-intersecting training and testing datasets. Here, we propose a sampling technique called "ID-MH" that uses the Metropolis-Hastings algorithm to create a Markov process across energy levels within a predefined configuration subspace. We show that application of this method retains phase transitions in both training and testing datasets and serves the purpose of validating a machine learning algorithm. For larger lattice dimensions, ID-MH is not feasible, as it requires knowledge of the complete configuration space. We therefore develop a new "block-ID" sampling strategy: it decomposes the given structure into square blocks with lattice dimension N ≤ 5 and uses ID-MH sampling of candidate blocks. Further comparison of the performance of commonly used machine learning methods such as random forests, decision trees, k-nearest neighbors and artificial neural networks shows that the PCA-based decision tree regressor is the most accurate predictor of magnetizations of the Ising model. For energies, however, the accuracy of prediction is not satisfactory, highlighting the need to consider more algorithmically complex methods (e.g., deep learning).
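
    As a point of reference for the sampling discussion, here is plain single-spin-flip Metropolis-Hastings for the 2D Ising model; the ID-MH and block-ID strategies of the paper build on this kernel but constrain it to predefined configuration subspaces, which is not reproduced here.

        import numpy as np

        def metropolis_ising(L=16, beta=0.44, sweeps=200, seed=0):
            # Single-spin-flip Metropolis-Hastings on an L x L periodic lattice.
            rng = np.random.default_rng(seed)
            s = rng.choice([-1, 1], size=(L, L))
            for _ in range(sweeps * L * L):
                i, j = rng.integers(L, size=2)
                nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] \
                   + s[i, (j + 1) % L] + s[i, (j - 1) % L]
                dE = 2 * s[i, j] * nb              # energy change if spin flips
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    s[i, j] *= -1                  # accept the flip
            return s

        spins = metropolis_ising()
        print("magnetization per spin:", spins.mean())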

  6. Numerical Algorithms for Deterministic Impulse Control Models with Applications

    NARCIS (Netherlands)

    Grass, D.; Chahim, M.

    2012-01-01

    Abstract: In this paper we describe three different algorithms, of which two (to the best of our knowledge) are new in the literature. We take both the size of the jumps and the jump times as decision variables. The first (new) algorithm considers an Impulse Control problem as a (multipoint) Boundary Value Problem

  7. Development of traffic light control algorithm in smart municipal network

    OpenAIRE

    Kuzminykh, Ievgeniia

    2016-01-01

    This paper presents a smart system that bypasses the normal functioning algorithm of traffic lights, triggering a green light when the lights are red or resetting the timer of the traffic lights when they are about to turn red. Different pieces of hardware, such as microcontroller units, transceivers, resistors, diodes, LEDs, a digital compass and an accelerometer, are coupled together and programmed to create a unified, complex intelligent system.

  8. Development of fuzzy logic algorithm for water purification plant

    OpenAIRE

    SUDESH SINGH RANA; SUDESH SINGH RANA

    2015-01-01

    This paper proposes the design of an FLC (fuzzy logic controller) algorithm for an industrial application, namely a water purification plant. In a water purification plant, raw water or ground water is purified by injecting chemicals at rates related to the water quality. The chemical feed rates are judged and determined by a skilled operator. Yagishita et al. [1] structured a system based on fuzzy logic so that the feed rate of the coagulant can be determined automatically without any skilled operator. We perfor...

  9. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process.

    Science.gov (United States)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H; Miller, Cass T

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  10. The emission factor of volatile isoprenoids: caveats, model algorithms, response shapes and scaling

    Science.gov (United States)

    Niinemets, Ü.; Monson, R. K.; Arneth, A.; Ciccioli, P.; Kesselmeier, J.; Kuhn, U.; Noe, S. M.; Peñuelas, J.; Staudt, M.

    2010-02-01

    In models of plant volatile isoprenoid emissions, the instantaneous compound emission rate typically scales with the plant's emission capacity under specified environmental conditions, also defined as the emission factor, ES. In the most widely employed plant isoprenoid emission models, the algorithms developed by Guenther and colleagues (1991, 1993), instantaneous variation of the steady-state emission rate is described as the product of ES and light and temperature response functions. When these models are employed in the atmospheric chemistry modeling community, species-specific ES values and parameter values defining the instantaneous response curves are typically considered constant. In the current review, we argue that ES is largely a modeling concept, importantly depending on our understanding of which environmental factors affect isoprenoid emissions, and consequently needs standardization during ES determination. In particular, there is now increasing consensus that variations in atmospheric CO2 concentration, in addition to variations in light and temperature, need to be included in the emission models. Furthermore, we demonstrate that for less volatile isoprenoids, mono- and sesquiterpenes, the emissions are often jointly controlled by compound synthesis and volatility, and because of these combined biochemical and physico-chemical properties, specification of ES as a constant value is incapable of describing instantaneous emissions under the sole assumptions of fluctuating light and temperature, as used in the standard algorithms. The definition of ES also varies depending on the degree of aggregation of ES values in different parameterization schemes (leaf- vs. canopy- or region-level, species vs. plant functional type level), and the various aggregated ES schemes are not compatible across different integration models. The summarized information collectively emphasizes the need to update model algorithms by including missing environmental and

  11. The leaf-level emission factor of volatile isoprenoids: caveats, model algorithms, response shapes and scaling

    Science.gov (United States)

    Niinemets, Ü.; Monson, R. K.; Arneth, A.; Ciccioli, P.; Kesselmeier, J.; Kuhn, U.; Noe, S. M.; Peñuelas, J.; Staudt, M.

    2010-06-01

    In models of plant volatile isoprenoid emissions, the instantaneous compound emission rate typically scales with the plant's emission potential under specified environmental conditions, also called the emission factor, ES. In the most widely employed plant isoprenoid emission models, the algorithms developed by Guenther and colleagues (1991, 1993), instantaneous variation of the steady-state emission rate is described as the product of ES and light and temperature response functions. When these models are employed in the atmospheric chemistry modeling community, species-specific ES values and parameter values defining the instantaneous response curves are often taken as initially defined. In the current review, we argue that ES, as a characteristic used in the models, importantly depends on our understanding of which environmental factors affect isoprenoid emissions, and consequently needs standardization during experimental ES determinations. In particular, there is now increasing consensus that in addition to variations in light and temperature, alterations in atmospheric and/or within-leaf CO2 concentrations may need to be included in the emission models. Furthermore, we demonstrate that for less volatile isoprenoids, mono- and sesquiterpenes, the emissions are often jointly controlled by compound synthesis and volatility. Because of these combined biochemical and physico-chemical drivers, specification of ES as a constant value is incapable of describing instantaneous emissions under the sole assumptions of fluctuating light and temperature as used in the standard algorithms. The definition of ES also varies depending on the degree of aggregation of ES values in different parameterization schemes (leaf- vs. canopy- or region-scale, species vs. plant functional type levels), and the various aggregated ES schemes are not compatible across different integration models. The summarized information collectively emphasizes the need to update model algorithms by including

  12. Parallel algorithms for interactive manipulation of digital terrain models

    Science.gov (United States)

    Davis, E. W.; Mcallister, D. F.; Nagaraj, V.

    1988-01-01

    Interactive three-dimensional graphics applications, such as terrain data representation and manipulation, require extensive arithmetic processing. Massively parallel machines are attractive for this application since they offer high computational rates, and grid-connected architectures provide a natural mapping for grid-based terrain models. Presented here are algorithms for data movement on the Massively Parallel Processor (MPP) in support of pan and zoom functions over large data grids. This is an extension of earlier work that demonstrated real-time performance of graphics functions on grids equal in size to the physical dimensions of the MPP. When the dimensions of a data grid exceed the processing array size, data are packed in the array memory. Windows of the total data grid are interactively selected for processing. Movement of packed data is needed to distribute items across the array for efficient parallel processing. Execution time for data movement was found to exceed that for the arithmetic aspects of the graphics functions. Performance figures are given for routines written in MPP Pascal.

  13. On developing B-spline registration algorithms for multi-core processors.

    Science.gov (United States)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-11-07

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  14. On developing B-spline registration algorithms for multi-core processors

    International Nuclear Information System (INIS)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-01-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  15. Wear rate optimization of Al/SiCnp/e-glass fibre hybrid metal matrix composites using Taguchi method and genetic algorithm and development of wear model using artificial neural networks

    Science.gov (United States)

    Bongale, Arunkumar M.; Kumar, Satish; Sachit, T. S.; Jadhav, Priya

    2018-03-01

    The wear properties of aluminium-based hybrid nano-composite materials, processed through a powder metallurgy technique, are reported in the present study. Silicon carbide nano-particles and E-glass fibre are reinforced in a pure aluminium matrix to fabricate the hybrid nano-composite samples. Pin-on-disc wear testing equipment is used to evaluate the dry sliding wear properties of the composite samples. The tests were conducted following Taguchi's Design of Experiments method. Signal-to-noise ratio analysis and analysis of variance are carried out on the test data to find the influence of the test parameters on the wear rate. Scanning electron microscopy and energy-dispersive X-ray analysis are conducted on the worn surfaces to identify the wear mechanisms responsible for wear of the composites. Multiple linear regression analysis and genetic algorithm techniques are employed to optimize the wear test parameters for minimum wear of the composite samples. Finally, a wear model is built by the application of artificial neural networks to predict the wear rate of the composite material under different testing conditions. The predicted values of wear rate are found to be very close to the experimental values, with deviations in the range of 0.15% to 8.09%.

  16. An Algorithm for Modelling the Impact of the Judicial Conflict-Resolution Process on Construction Investment

    Directory of Open Access Journals (Sweden)

    Andrej Bugajev

    2018-01-01

    Full Text Available In this article, the modelling of the judicial conflict-resolution process is considered from a construction investor's point of view. Such modelling is important for improving risk management for construction investors and supports sustainable city development by informing the rules that regulate the construction process. This raises the problem of evaluating different decisions and selecting the optimal one, followed by extraction of the profit distribution. First, an example of such a process is analysed and schematically represented. Then, it is formalised as a graph, described in the form of a decision graph with cycles. We use some natural properties of the problem and provide an algorithm to convert this graph into a tree. We then propose an algorithm to evaluate profits for different scenarios, with estimation of time done by integrating an average daily cost function. Afterwards, the optimisation problem is solved and the optimal investor strategy is obtained; this allows one to extract the construction project profit distribution, which can be used for further analysis by standard risk and other information-evaluation techniques. The overall algorithm complexity is analysed, a computational experiment is performed and conclusions are formulated.

  17. Algorithms for a parallel implementation of Hidden Markov Models with a small state space

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Sand, Andreas

    2011-01-01

    Two of the most important algorithms for Hidden Markov Models are the forward and the Viterbi algorithms. We show how formulating these using linear algebra naturally lends itself to parallelization. Although the obtained algorithms are slow for Hidden Markov Models with large state spaces, they require very little communication between processors, and are fast in practice on models with a small state space. We have tested our implementation against two other implementations on artificial data and observe a speed-up of roughly a factor of 5 for the forward algorithm and more than 6 for the Viterbi algorithm. We also tested our algorithm in the Coalescent Hidden Markov Model framework, where it gave a significant speed-up.
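
    The linear-algebra formulation the authors exploit can be seen in a scalar sketch of the forward algorithm, where each step is a matrix-vector product followed by an element-wise emission scaling (the parallel version is not attempted here):

        import numpy as np

        def forward_log_likelihood(pi, A, B, obs):
            # alpha_t = (alpha_{t-1} A) * B[:, o_t], rescaled to avoid underflow;
            # the accumulated scale factors give log P(obs | model).
            alpha = pi * B[:, obs[0]]
            log_p = np.log(alpha.sum())
            alpha /= alpha.sum()
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                s = alpha.sum()
                log_p += np.log(s)
                alpha /= s
            return log_p

        # tiny 2-state, 2-symbol example with hypothetical parameters
        pi = np.array([0.6, 0.4])
        A = np.array([[0.7, 0.3], [0.4, 0.6]])
        B = np.array([[0.9, 0.1], [0.2, 0.8]])
        print(forward_log_likelihood(pi, A, B, [0, 1, 0]))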

  18. New droplet model developments

    International Nuclear Information System (INIS)

    Dorso, C.O.; Myers, W.D.; Swiatecki, W.J.; Moeller, P.; Treiner, J.; Weiss, M.S.

    1985-09-01

    A brief summary is given of three recent contributions to the development of the Droplet Model. The first concerns the electric dipole moment induced in octupole deformed nuclei by the Coulomb redistribution. The second concerns a study of squeezing in nuclei and the third is a study of the improved predictive power of the model when an empirical ''exponential'' term is included. 25 refs., 3 figs

  19. The Development of Video Learning to Deliver a Basic Algorithm Learning

    Directory of Open Access Journals (Sweden)

    Slamet Kurniawan Fahrurozi

    2017-12-01

    Full Text Available The world of education is currently entering the era of media, where learning activities demand a reduction in lecture methods, which should be replaced by the use of many media. The function of instructional media can be emphasized as follows: as a tool to make learning more effective, to accelerate the teaching and learning process, and to improve its quality. This research aimed to develop a learning video for basic programming, covering algorithm material, suitable for use as a learning resource in class X of vocational high school (SMK). The study also aimed to assess the feasibility of the learning video developed. The research method used was research and development (R&D), following the development model of Alessi and Trollip (2001), which is divided into three stages: Planning, Design, and Development. Data collection techniques included interviews, a literature review and instruments. The learning video was then validated by material experts and media experts, and tested on 30 learners. The results show that a learning video was successfully produced for basic programming subjects, consisting of 8 video scenes. Based on the validation results, the feasibility percentage of the learning video is 90.5% from material experts, 95.9% from media experts, and 84% from users (learners). The testing results show that the developed learning videos can be used as learning resources or instructional media for basic programming subjects covering algorithm material.

  20. Optimisation of Hidden Markov Model using Baum–Welch algorithm ...

    Indian Academy of Sciences (India)

    The present work is part of the development of a Hidden Markov Model (HMM) based avalanche forecasting system for the Pir-Panjal and Great Himalayan mountain ranges of the Himalaya. In this work, HMMs have been ...

  1. Leakage detection algorithm integrating water distribution networks hydraulic model

    CSIR Research Space (South Africa)

    Adedeji, K

    2017-06-01

    Full Text Available Leakage detection and estimation are vital for effective water service. For effective detection of background leakages, a hydraulic analysis of flow characteristics in water piping networks is indispensable for appraising this type of leakage. A leakage detection algorithm...

  2. Developing a modified SEBAL algorithm that is responsive to advection by using limited weather data

    Science.gov (United States)

    Mkhwanazi, Mcebisi

    The use of remote sensing (RS) ET algorithms in water management, especially for agricultural purposes, is increasing, and more models are being introduced. The Surface Energy Balance Algorithm for Land (SEBAL) and its variant, Mapping Evapotranspiration with Internalized Calibration (METRIC), are some of the models that are widely used. While SEBAL has several advantages over other RS models, including that it does not require prior knowledge of soil, crop and other ground details, it has the downside of underestimating evapotranspiration (ET) on days when there is advection, which may be the case on most days in arid and semi-arid areas. METRIC has been modified to account for advection, but in doing so it requires hourly weather data. In most developing countries, while accurate estimates of ET are required, the weather data necessary to use METRIC may not be available. This research was therefore meant to develop a modified version of SEBAL that requires only the minimal weather data likely to be available in these areas, while still estimating ET accurately. The data used to develop this model were minimum and maximum temperatures, wind data (preferably the run of wind in the afternoon), and wet bulb temperature. These were used to quantify the advected energy that would increase ET in the field. This was a two-step process; the first was developing the model for standard conditions, defined as a healthy cover of alfalfa, 40-60 cm tall and not short of water. Under standard conditions, when ET estimated using modified SEBAL was compared with lysimeter-measured ET, the modified SEBAL model had a mean bias error (MBE) of 2.2% compared to -17.1% for the original SEBAL. The root mean square error (RMSE) was lower for the modified SEBAL model at 10.9% compared to 25.1% for the original SEBAL. The modified SEBAL model, developed on an alfalfa field in Rocky Ford, was then tested on other crops: beans and wheat. It was also tested on
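
    The energy-balance residual at the core of SEBAL-type models is simple to state: the latent heat flux is what remains of net radiation after the soil and sensible heat fluxes. A minimal sketch with hypothetical flux values:

        # ET (mm/h) as the residual of the surface energy balance LE = Rn - G - H
        LAMBDA = 2.45e6  # latent heat of vaporization, J/kg

        def residual_et(rn, g, h):
            le = rn - g - h                 # latent heat flux, W/m^2
            return le / LAMBDA * 3600.0     # kg/m^2/h, i.e. mm/h

        print(residual_et(rn=500.0, g=50.0, h=150.0))  # ~0.44 mm/h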

  3. Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem

    Science.gov (United States)

    Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.

    2018-03-01

    The Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. The two algorithms have different advantages and disadvantages when applied to the optimization of the Model Integer Programming for the Bus Timetabling Problem (MIPBTP), in which the optimal number of trips must be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iterations and program simplicity in finding the optimal solution.

  4. Probabilistic Model Development

    Science.gov (United States)

    Adam, James H., Jr.

    2010-01-01

    Objective: Develop a probabilistic model for the solar energetic particle environment, i.e. a tool to provide a reference solar particle radiation environment that (1) will not be exceeded at a user-specified confidence level, and (2) will provide reference environments for (a) peak flux, (b) event-integrated fluence, and (c) mission-integrated fluence. The reference environments will consist of elemental energy spectra for protons, helium and heavier ions.

  5. Design requirements and development of an airborne descent path definition algorithm for time navigation

    Science.gov (United States)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, the functional and data architectures, and the system input requirements, and recommend the addition of a broad path revision (reinitialization) capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering functions are described.

  6. Parameter identification of ZnO surge arrester models based on genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bayadi, Abdelhafid [Laboratoire d' Automatique de Setif, Departement d' Electrotechnique, Faculte des Sciences de l' Ingenieur, Universite Ferhat ABBAS de Setif, Route de Bejaia Setif 19000 (Algeria)

    2008-07-15

    The correct and adequate modelling of ZnO surge arrester characteristics is very important for insulation coordination studies and system reliability. In this context many researchers have devoted considerable effort to the development of surge arrester models that reproduce the dynamic characteristics observed in their behaviour when subjected to fast front impulse currents. The difficulties with these models reside essentially in the calculation and adjustment of their parameters. This paper proposes a new technique based on a genetic algorithm to obtain the best possible set of parameter values for ZnO surge arrester models. The validity of the predicted parameters is then checked by comparing the predicted results with the experimental results available in the literature. Using the ATP-EMTP package, an application of the arrester model to network system studies is presented and discussed. (author)
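
    The general pattern described here, a GA searching for the parameter set that best reproduces an experimental record, can be sketched as below. The two-parameter exponential model and the synthetic "measured" waveform are placeholders for the actual arrester model and impulse data.

```python
# Hedged sketch of GA-based parameter identification by residual minimization.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
measured = 2.0 * np.exp(-3.0 * t)            # synthetic "experimental" data

def model(params):
    a, b = params
    return a * np.exp(-b * t)

def fitness(params):
    return -np.sum((model(params) - measured) ** 2)   # higher is better

pop = rng.uniform(0, 5, size=(40, 2))
for gen in range(100):
    fit = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(fit)[::-1][:20]]  # truncation selection
    children = []
    for _ in range(20):
        pa, pb = parents[rng.integers(20, size=2)]
        child = 0.5 * (pa + pb)                # arithmetic crossover
        child += rng.normal(0, 0.1, size=2)    # Gaussian mutation
        children.append(np.clip(child, 0, 5))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("identified parameters:", best)          # expect roughly (2.0, 3.0)
```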

  7. Design patterns for the development of electronic health record-driven phenotype extraction algorithms.

    Science.gov (United States)

    Rasmussen, Luke V; Thompson, Will K; Pacheco, Jennifer A; Kho, Abel N; Carrell, David S; Pathak, Jyotishman; Peissig, Peggy L; Tromp, Gerard; Denny, Joshua C; Starren, Justin B

    2014-10-01

    Design patterns, in the context of software development and ontologies, provide generalized approaches and guidance to solving commonly occurring problems, or addressing common situations typically informed by intuition, heuristics and experience. While the biomedical literature contains broad coverage of specific phenotype algorithm implementations, no work to date has attempted to generalize common approaches into design patterns, which may then be distributed to the informatics community to efficiently develop more accurate phenotype algorithms. Using phenotyping algorithms stored in the Phenotype KnowledgeBase (PheKB), we conducted an independent iterative review to identify recurrent elements within the algorithm definitions. We extracted and generalized recurrent elements in these algorithms into candidate patterns. The authors then assessed the candidate patterns for validity by group consensus, and annotated them with attributes. A total of 24 electronic Medical Records and Genomics (eMERGE) phenotypes available in PheKB as of 1/25/2013 were downloaded and reviewed. From these, a total of 21 phenotyping patterns were identified, which are available as an online data supplement. Repeatable patterns within phenotyping algorithms exist, and when codified and cataloged may help to educate both experienced and novice algorithm developers. The dissemination and application of these patterns has the potential to decrease the time to develop algorithms, while improving portability and accuracy.

  8. Clustering algorithm evaluation and the development of a replacement for procedure 1. [for crop inventories

    Science.gov (United States)

    Lennington, R. K.; Johnson, J. K.

    1979-01-01

    An efficient procedure which clusters data using a completely unsupervised clustering algorithm and then uses labeled pixels to label the resulting clusters or perform a stratified estimate using the clusters as strata is developed. Three clustering algorithms, CLASSY, AMOEBA, and ISOCLS, are compared for efficiency. Three stratified estimation schemes and three labeling schemes are also considered and compared.

  9. Development Modules for Specification of Requirements for a System of Verification of Parallel Algorithms

    Directory of Open Access Journals (Sweden)

    Vasiliy Yu. Meltsov

    2012-05-01

    Full Text Available This paper presents the results of the development of one module of a system for the verification of parallel algorithms, which is used to verify the inference engine. This module is designed to build the specification of requirements whose feasibility on the algorithm must be proved (tested).

  10. Genetic Algorithms for Optimization of Machine-learning Models and their Applications in Bioinformatics

    KAUST Repository

    Magana-Mora, Arturo

    2017-04-29

    Machine-learning (ML) techniques have been widely applied to solve different problems in biology. However, biological data are large and complex, which often result in extremely intricate ML models. Frequently, these models may have a poor performance or may be computationally unfeasible. This study presents a set of novel computational methods and focuses on the application of genetic algorithms (GAs) for the simplification and optimization of ML models and their applications to biological problems. The dissertation addresses the following three challenges. The first is to develop a generalizable classification methodology able to systematically derive competitive models despite the complexity and nature of the data. Although several algorithms for the induction of classification models have been proposed, the algorithms are data dependent. Consequently, we developed OmniGA, a novel and generalizable framework that uses different classification models in a tree-like decision structure, along with a parallel GA for the optimization of the OmniGA structure. Results show that OmniGA consistently outperformed existing commonly used classification models. The second challenge is the prediction of translation initiation sites (TIS) in plants genomic DNA. We performed a statistical analysis of the genomic DNA and proposed a new set of discriminant features for this problem. We developed a wrapper method based on GAs for selecting an optimal feature subset, which, in conjunction with a classification model, produced the most accurate framework for the recognition of TIS in plants. Finally, results demonstrate that despite the evolutionary distance between different plants, our approach successfully identified conserved genomic elements that may serve as the starting point for the development of a generic model for prediction of TIS in eukaryotic organisms. Finally, the third challenge is the accurate prediction of polyadenylation signals in human genomic DNA. To achieve
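
    A wrapper method of the kind described evolves a population of feature masks and scores each mask by the cross-validated accuracy of a classifier trained on the selected columns. The sketch below uses generic scikit-learn stand-ins, not the dissertation's data or models.

```python
# Illustrative GA wrapper for feature-subset selection on a synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((20, 20)) < 0.5               # population of boolean masks
for gen in range(15):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[::-1][:10]]
    children = []
    for _ in range(10):
        pa, pb = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 19)
        child = np.concatenate([pa[:cut], pb[cut:]])   # one-point crossover
        flip = rng.random(20) < 0.05                   # bit-flip mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```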

  11. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the limit set by the Nyquist sampling theorem, provides a strong theoretical foundation for carrying out compressive sampling of image signals. In imaging procedures based on compressed sensing theory, not only can the storage space be reduced, but the demand for detector resolution can also be greatly reduced. Using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. Reconstruction algorithms based on the total variation (TV) model are well suited to the compressive reconstruction of two-dimensional images, and better edge information can be obtained. In order to verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes to verify the stability of the algorithm. This paper also compares and analyzes typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages and can quickly and accurately recover the target image at low measurement rates.
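
    The augmented Lagrangian, alternating-direction scheme mentioned at the end of this record can be sketched compactly for a 1-D signal; the measurement matrix, test signal and parameter values below are illustrative assumptions, not the paper's 2-D setup.

```python
# Sketch of TV-regularized reconstruction by ADMM for a 1-D piecewise-constant
# signal observed through a random Gaussian measurement matrix.
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 40                                   # low measurement rate m/n
x_true = np.zeros(n)
x_true[20:40], x_true[60:80] = 1.0, -0.5
A = rng.normal(size=(m, n)) / np.sqrt(m)
b = A @ x_true

D = np.diff(np.eye(n), axis=0)                   # finite-difference operator
lam, rho = 0.01, 1.0
x, z, u = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
lhs = A.T @ A + rho * D.T @ D                    # fixed x-update system

for _ in range(300):
    x = np.linalg.solve(lhs, A.T @ b + rho * D.T @ (z - u))
    Dx = D @ x
    z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0)  # soft-threshold
    u = u + Dx - z                               # dual update

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```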

  12. Parallel Algorithm for Solving TOV Equations for Sequence of Cold and Dense Nuclear Matter Models

    Science.gov (United States)

    Ayriyan, Alexander; Buša, Ján; Grigorian, Hovik; Poghosyan, Gevorg

    2018-04-01

    We have introduced a parallel algorithm for the simulation of neutron star configurations for a set of equation of state (EoS) models. The performance of the parallel algorithm has been investigated for a testing set of EoS models on two computational systems. It scales well when run with MPI on modern CPUs, and this investigation also allowed us to compare two different types of computational nodes.
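
    A hedged sketch of the work-splitting pattern is given below using mpi4py: each rank integrates the TOV equations for its share of a hypothetical set of polytropic EoS models, and the results are gathered on rank 0. The crude Euler integrator and the geometric units are assumptions for illustration only, not the authors' code.

```python
# Distribute a sequence of EoS models over MPI ranks with mpi4py.
import numpy as np
from mpi4py import MPI

def tov_mass(K, gamma, p_c, dr=1e-3):
    # Crude Euler integration of dP/dr and dm/dr for eps = (p/K)**(1/gamma),
    # in G = c = 1 units; stops when the pressure has essentially vanished.
    r, p, m = dr, p_c, 0.0
    while p > 1e-12 * p_c:
        eps = (p / K) ** (1.0 / gamma)
        dp = -(eps + p) * (m + 4 * np.pi * r**3 * p) / (r * (r - 2 * m))
        m += 4 * np.pi * r**2 * eps * dr
        p += dp * dr
        r += dr
    return r, m

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Hypothetical "set of EoS models": polytropes with different stiffness.
eos_models = [(100.0, g) for g in np.linspace(1.8, 2.8, 16)]
my_models = eos_models[rank::size]               # round-robin work split

results = [(K, g, *tov_mass(K, g, 1e-3)) for K, g in my_models]
gathered = comm.gather(results, root=0)
if rank == 0:
    for chunk in gathered:
        for K, g, R, M in chunk:
            print(f"gamma={g:.2f}  R={R:.3f}  M={M:.4f}")
```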

  13. Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models

    KAUST Repository

    Vignal, Philippe

    2016-02-11

    Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly as done in sharp interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allow phase-field models to overcome problems found by predecessors. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can often lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation. This model was put forward to model microstructure evolution. The algorithm developed conserves mass, guarantees energy stability and is second order accurate in time. The second part of the thesis presents two numerical schemes that generalize the literature regarding energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy-stable, and second order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme. This time-adaptive strategy relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis. The codes are

  14. TIGER: Development of Thermal Gradient Compensation Algorithms and Techniques

    Science.gov (United States)

    Hereford, James; Parker, Peter A.; Rhew, Ray D.

    2004-01-01

    In a wind tunnel facility, the direct measurement of forces and moments induced on the model are performed by a force measurement balance. The measurement balance is a precision-machined device that has strain gages at strategic locations to measure the strain (i.e., deformations) due to applied forces and moments. The strain gages convert the strain (and hence the applied force) to an electrical voltage that is measured by external instruments. To address the problem of thermal gradients on the force measurement balance NASA-LaRC has initiated a research program called TIGER - Thermally-Induced Gradients Effects Research. The ultimate goals of the TIGER program are to: (a) understand the physics of the thermally-induced strain and its subsequent impact on load measurements and (b) develop a robust thermal gradient compensation technique. This paper will discuss the impact of thermal gradients on force measurement balances, specific aspects of the TIGER program (the design of a special-purpose balance, data acquisition and data analysis challenges), and give an overall summary.

  15. Continuous time boolean modeling for biological signaling: application of Gillespie algorithm

    Directory of Open Access Journals (Sweden)

    Stoll Gautier

    2012-08-01

    translated in a set of ordinary differential equations on probability distributions. We developed a C++ software, MaBoSS, that is able to simulate such a system by applying Kinetic Monte-Carlo (or the Gillespie algorithm) on the Boolean state space. This software, parallelized and optimized, computes the temporal evolution of probability distributions and estimates stationary distributions. Conclusions Applications of the Boolean Kinetic Monte-Carlo are demonstrated for three qualitative models: a toy model, a published model of p53/Mdm2 interaction and a published model of the mammalian cell cycle. Our approach allows us to describe kinetic phenomena which were difficult to handle in the original models. In particular, transient effects are represented by time-dependent probability distributions, interpretable in terms of cell populations.

  16. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta.

    Science.gov (United States)

    Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J

    2010-03-01

    PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site.
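
    A minimal interactive-style session, assuming the canonical calls shown in published PyRosetta tutorials (the exact module layout can differ between releases), might look like this:

```python
# Hedged usage sketch based on PyRosetta tutorial conventions.
from pyrosetta import init, pose_from_sequence
from pyrosetta.teaching import get_fa_scorefxn

init()                                    # start the Rosetta runtime
pose = pose_from_sequence("AAAAAAAA")     # build an all-alanine peptide
scorefxn = get_fa_scorefxn()              # standard full-atom score function
print("initial score:", scorefxn(pose))

# Simple torsion perturbation: set a backbone phi angle and re-score,
# the kind of move a custom Monte Carlo protocol would propose.
pose.set_phi(4, -60.0)
print("after phi change:", scorefxn(pose))
```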

  17. Efficient Fourier based Algorithm Development for Airborne Moving Target Indication

    NARCIS (Netherlands)

    Lidicky, L.; Hoogeboom, P.

    2009-01-01

    This paper shows how the signal model that is commonly used as a starting point in multi-channel Space Time Adaptive Processing (STAP) for airborne Moving Target Indication (MTI) formally corresponds to a model that can be derived from a bi-static Synthetic Aperture Radar (SAR) model extended for

  18. A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials

    Energy Technology Data Exchange (ETDEWEB)

    Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A

    2008-12-04

    We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.

  19. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    Science.gov (United States)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key parameter (Mooney viscosity), which is used to evaluate the property of the product, can only be obtained offline with a 4-6 h delay. It would be quite helpful for industry if this parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing. However, they often do not function well due to the multi-phase and nonlinear properties of the process. The purpose of this paper is to develop an efficient soft-sensing algorithm to solve this problem. Based on the proposed GMMD local sample selection criterion, phase information is extracted in the local modeling. Using the Gaussian process local modeling method within a just-in-time (JIT) learning framework, the nonlinearity of the process is well handled. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors, using samples from a real industrial rubber mixing process.

  20. A Convex Optimization Model and Algorithm for Retinex

    Directory of Open Access Journals (Sweden)

    Qing-Nan Zhao

    2017-01-01

    Full Text Available Retinex is a theory on simulating and explaining how the human visual system perceives colors under different illumination conditions. The main contribution of this paper is to put forward a new convex optimization model for Retinex. Different from existing methods, the main idea is to rewrite a multiplicative form such that the illumination variable and the reflection variable are decoupled in the spatial domain. The resulting objective function involves three terms: the Tikhonov regularization of the illumination component, the total variation regularization of the reciprocal of the reflection component, and the data-fitting term among the input image, the illumination component, and the reciprocal of the reflection component. We develop an alternating direction method of multipliers (ADMM) to solve the convex optimization model. Numerical experiments demonstrate the advantages of the proposed model, which can decompose an image into the illumination and the reflection components.

  1. A Decomposition Algorithm for Mean-Variance Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    This paper presents a decomposition algorithm for solving the optimal control problem (OCP) that arises in Mean-Variance Economic Model Predictive Control of stochastic linear systems. The algorithm applies the alternating direction method of multipliers to a reformulation of the OCP that decompo...

  2. A genetic-algorithm-aided stochastic optimization model for regional air quality management under uncertainty.

    Science.gov (United States)

    Qin, Xiaosheng; Huang, Guohe; Liu, Lei

    2010-01-01

    A genetic-algorithm-aided stochastic optimization (GASO) model was developed in this study for supporting regional air quality management under uncertainty. The model incorporated genetic algorithm (GA) and Monte Carlo simulation techniques into a general stochastic chance-constrained programming (CCP) framework and allowed uncertainties in simulation and optimization model parameters to be considered explicitly in the design of least-cost strategies. GA was used to seek the optimal solution of the management model by progressively evaluating the performances of individual solutions. Monte Carlo simulation was used to check the feasibility of each solution. A management problem in terms of regional air pollution control was studied to demonstrate the applicability of the proposed method. Results of the case study indicated the proposed model could effectively communicate uncertainties into the optimization process and generate solutions that contained a spectrum of potential air pollutant treatment options with risk and cost information. Decision alternatives could be obtained by analyzing tradeoffs between the overall pollutant treatment cost and the system-failure risk due to inherent uncertainties.
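
    The GA-plus-Monte-Carlo pattern at the core of this approach can be sketched in a few lines: each candidate design is penalized unless Monte Carlo sampling confirms that the chance constraint holds at the required confidence level. The one-variable "emission" model below is invented purely for illustration.

```python
# Conceptual sketch of GA + Monte Carlo for chance-constrained optimization.
import numpy as np

rng = np.random.default_rng(4)
alpha, n_mc = 0.9, 500

def cost(x):
    return 10.0 * x                        # treatment cost grows with effort x

def feasible_prob(x):
    # Monte Carlo estimate of P(emission(x, xi) <= limit) under random xi.
    xi = rng.normal(1.0, 0.2, size=n_mc)   # uncertain emission factor
    emission = 100.0 * xi * (1.0 - x)
    return np.mean(emission <= 30.0)

def penalized(x):
    p = feasible_prob(x)
    return cost(x) + (1e4 * (alpha - p) if p < alpha else 0.0)

pop = rng.uniform(0, 1, size=40)
for gen in range(60):
    fit = np.array([penalized(x) for x in pop])
    parents = pop[np.argsort(fit)[:20]]    # keep the cheapest feasible designs
    children = np.clip(parents + rng.normal(0, 0.05, 20), 0, 1)
    pop = np.concatenate([parents, children])

best = pop[np.argmin([penalized(x) for x in pop])]
print(f"treatment level {best:.3f}, P(feasible) ~ {feasible_prob(best):.2f}")
```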

  3. Choosing processor array configuration by performance modeling for a highly parallel linear algebra algorithm

    International Nuclear Information System (INIS)

    Littlefield, R.J.; Maschhoff, K.J.

    1991-04-01

    Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error ∼ 2%) over a wide range of matrix sizes (10 x 10 through 200 x 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab
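
    A toy version of such a model-based decision procedure is easy to write down: predict the execution time of every feasible processor grid from a computation term plus a communication term, and pick the minimizer. The cost coefficients below are hypothetical, not the paper's calibrated model.

```python
# Enumerate processor grids and pick the one with the lowest predicted time.
from math import ceil

def predict_time(n, pr, pc, t_flop=1e-9, t_start=1e-4, t_word=1e-8):
    # Computation: local block work; communication: per-iteration messages.
    comp = (ceil(n / pr) * ceil(n / pc) * n) * t_flop
    comm = n * (t_start + ceil(n / pc) * t_word)
    return comp + comm

def best_grid(n, max_procs):
    configs = [(pr, pc) for pr in range(1, max_procs + 1)
               for pc in range(1, max_procs // pr + 1)]
    return min(configs, key=lambda g: predict_time(n, *g))

print("best grid for n=200, 512 procs:", best_grid(200, 512))
```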

  4. Solving inverse problem for Markov chain model of customer lifetime value using flower pollination algorithm

    Science.gov (United States)

    Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji

    2015-12-01

    Customer Lifetime Value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditure for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV considering the customer retention/migration classification scheme. A fairly new class of these models, described in this paper, uses Markov Chain Models (MCM). This class of models has the major advantage of its flexibility to be modified to several different cases/classification schemes. In these models, the probabilities of customer retention and acquisition play an important role. From Pfeifer and Carraway (2000), the final formula of CLV obtained from an MCM usually contains a nonlinear form of the transition probability matrix. This nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding the approximate transition probabilities for the customers, by applying the metaheuristic optimization algorithm developed by Yang (2013), the Flower Pollination Algorithm. The transition probabilities obtained can be used to set goals for marketing teams in keeping the relative frequencies of customer acquisition and customer retention.

  5. Modeling of Energy Demand in the Greenhouse Using PSO-GA Hybrid Algorithms

    Directory of Open Access Journals (Sweden)

    Jiaoliao Chen

    2015-01-01

    Full Text Available Modeling of energy demand in an agricultural greenhouse is very important for maintaining an optimum inside environment for plant growth and for decreasing energy consumption. This paper deals with the identification of parameters for a physical model of energy demand in the greenhouse using a hybrid particle swarm optimization and genetic algorithm technique (HPSO-GA). HPSO-GA is developed to estimate the indistinct internal parameters of the greenhouse energy model, which is built based on thermal balance. Experiments were conducted to measure environment and energy parameters in a cooling greenhouse with a surface water source heat pump system, located in mid-east China. System identification experiments identify model parameters, such as inertias and heat transfer constants, using HPSO-GA. The performance of HPSO-GA on the parameter estimation is better than that of GA and PSO. This algorithm can improve the classification accuracy while speeding up the convergence process and can avoid premature convergence. System identification results prove that HPSO-GA is reliable in solving parameter estimation problems for modeling the energy demand in the greenhouse.

  6. Algorithms and Methods for High-Performance Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca

    routines employed in the numerical tests. The main focus of this thesis is on linear MPC problems. In this thesis, both the algorithms and their implementation are equally important. About the implementation, a novel implementation strategy for the dense linear algebra routines in embedded optimization is proposed, aiming at improving the computational performance in case of small matrices. About the algorithms, they are built on top of the proposed linear algebra, and they are tailored to exploit the high-level structure of the MPC problems, with special care on reducing the computational complexity.

  7. Supplier selection based on a neural network model using genetic algorithm.

    Science.gov (United States)

    Golmohammadi, Davood; Creese, Robert C; Valian, Haleh; Kolassa, John

    2009-09-01

    In this paper, a decision-making model was developed to select suppliers using neural networks (NNs). This model used historical supplier performance data for the selection of vendor suppliers. Input and output were designed in a unique manner for training purposes. The managers' judgments about suppliers were simulated by using a pairwise comparison matrix for output estimation in the NN. To obtain the benefit of a search technique for model structure and training, a genetic algorithm (GA) was applied for the initial weights and architecture of the network. The suppliers' database information (input) can be updated over time to change the suppliers' score estimation based on their performance. The case study illustrates how the model can be applied for supplier selection.

  8. Decoding neural events from fMRI BOLD signal: a comparison of existing approaches and development of a new algorithm.

    Science.gov (United States)

    Bush, Keith; Cisler, Josh

    2013-07-01

    Neuroimaging methodology predominantly relies on the blood oxygenation level dependent (BOLD) signal. While the BOLD signal is a valid measure of neuronal activity, variances in fluctuations of the BOLD signal are not only due to fluctuations in neural activity. Thus, a remaining problem in neuroimaging analyses is developing methods that ensure specific inferences about neural activity that are not confounded by unrelated sources of noise in the BOLD signal. Here, we develop and test a new algorithm for performing semiblind (i.e., no knowledge of stimulus timings) deconvolution of the BOLD signal that treats the neural event as an observable, but intermediate, probabilistic representation of the system's state. We test and compare this new algorithm against three other recent deconvolution algorithms under varied levels of autocorrelated and Gaussian noise, hemodynamic response function (HRF) misspecification and observation sampling rate. Further, we compare the algorithms' performance using two models to simulate BOLD data: a convolution of neural events with a known (or misspecified) HRF versus a biophysically accurate balloon model of hemodynamics. We also examine the algorithms' performance on real task data. The results demonstrated good performance of all algorithms, though the new algorithm generally outperformed the others (3.0% improvement) under simulated resting-state experimental conditions exhibiting multiple, realistic confounding factors (as well as 10.3% improvement on a real Stroop task). The simulations also demonstrate that the greatest negative influence on deconvolution accuracy is observation sampling rate. Practical and theoretical implications of these results for improving inferences about neural activity from fMRI BOLD signal are discussed.
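
    As a didactic baseline for the underlying inverse problem, the sketch below performs ridge-regularized deconvolution of a BOLD-like signal with a known (assumed) HRF; the semiblind algorithm of the paper is considerably more sophisticated than this.

```python
# Recover a sparse event train from a noisy convolved signal by ridge solve.
import numpy as np

rng = np.random.default_rng(5)
T = 200
tau = np.arange(0, 30)
hrf = (tau / 6.0) ** 2 * np.exp(-tau / 6.0)   # crude gamma-like HRF (assumed)
hrf /= hrf.sum()

events = (rng.random(T) < 0.05).astype(float)          # sparse neural events
bold = np.convolve(events, hrf)[:T] + rng.normal(0, 0.01, T)

# Build the convolution matrix H so that bold ~= H @ events.
H = np.zeros((T, T))
for i in range(T):
    for j in range(max(0, i - len(hrf) + 1), i + 1):
        H[i, j] = hrf[i - j]

lam = 0.1
est = np.linalg.solve(H.T @ H + lam * np.eye(T), H.T @ bold)
print("correlation with true events:", np.corrcoef(est, events)[0, 1])
```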

  9. Optimizing bi-objective, multi-echelon supply chain model using particle swarm intelligence algorithm

    Science.gov (United States)

    Sathish Kumar, V. R.; Anbuudayasankar, S. P.; Rameshkumar, K.

    2018-02-01

    In the current globalized scenario, business organizations are more dependent on cost-effective supply chains to enhance profitability and better handle competition. Demand uncertainty is an important factor in the success or failure of a supply chain. An efficient supply chain limits the stock held at all echelons to the extent of avoiding a stock-out situation. In this paper, a three-echelon supply chain model consisting of a supplier, a manufacturing plant and a market is developed, and the same is optimized using a particle swarm intelligence algorithm.

  10. A Taxonomy for Modeling Flexibility and a Computationally Efficient Algorithm for Dispatch in Smart Grids

    DEFF Research Database (Denmark)

    Petersen, Mette Højgaard; Edlund, Kristian; Hansen, Lars Henrik

    2013-01-01

    The word flexibility is central to the Smart Grid literature, but a formal definition of flexibility is still pending. This paper presents a taxonomy for flexibility modeling denoted Buckets, Batteries and Bakeries. We consider a direct control Virtual Power Plant (VPP), which is given the task of servicing a portfolio of flexible consumers by use of a fluctuating power supply. Based on the developed taxonomy we first prove that no causal optimal dispatch strategies exist for the considered problem. We then present two heuristic algorithms for solving the balancing task: Predictive Balancing

  11. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing both efficient means to stage and process an input data set, to override static calibration coefficient look-up-tables (LUT) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm are automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  12. RNA secondary structure prediction with pseudoknots: Contribution of algorithm versus energy model.

    Science.gov (United States)

    Jabbari, Hosna; Wark, Ian; Montemagno, Carlo

    2018-01-01

    RNA is a biopolymer with various applications inside the cell and in biotechnology. The structure of an RNA molecule mainly determines its function and is essential to guide nanostructure design. Since experimental structure determination is time-consuming and expensive, accurate computational prediction of RNA structure is of great importance. Prediction of RNA secondary structure is relatively simpler than its tertiary structure and provides information about the tertiary structure; therefore, RNA secondary structure prediction has received attention in the past decades. Numerous methods with different folding approaches have been developed for RNA secondary structure prediction. While methods for prediction of RNA pseudoknot-free structure (structures with no crossing base pairs) have greatly improved in terms of their accuracy, methods for prediction of RNA pseudoknotted secondary structure (structures with crossing base pairs) still have room for improvement. A long-standing question for improving the prediction accuracy of RNA pseudoknotted secondary structure is whether to focus on the prediction algorithm or the underlying energy model, as there is a trade-off between the computational cost of the prediction algorithm and the generality of the method. The aim of this work is to argue that, when comparing different methods for RNA pseudoknotted structure prediction, the combination of algorithm and energy model should be considered, and a method should not be considered superior or inferior to others if they do not use the same scoring model. We demonstrate that while the folding approach is important in structure prediction, it is not the only important factor in the prediction accuracy of a given method, as the underlying energy model is also of great value. Therefore we encourage researchers to pay particular attention when comparing methods with different energy models.
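
    To make the algorithm-versus-energy-model distinction concrete, the sketch below implements the classic Nussinov dynamic program: the simplest pseudoknot-free folding algorithm, paired with the crudest possible scoring model (every complementary pair scores 1).

```python
# Nussinov base-pair maximization; pseudoknot-free by construction.
def nussinov(seq, min_loop=3):
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                       # i unpaired
            for k in range(i + min_loop + 1, j + 1):  # i paired with k
                if (seq[i], seq[k]) in pairs:
                    left = dp[i + 1][k - 1]
                    right = dp[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))   # maximum number of base pairs
```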

  13. Development of Automatic Cluster Algorithm for Microcalcification in Digital Mammography

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Seok Yoon [Dept. of Medical Engineering, Korea University, Seoul (Korea, Republic of); Kim, Chang Soo [Dept. of Radiological Science, College of Health Sciences, Catholic University of Pusan, Pusan (Korea, Republic of)

    2009-03-15

    Digital mammography is an efficient imaging technique for the detection and diagnosis of breast pathological disorders. Six mammographic criteria, such as the number of clusters, the number, size, extent and morphologic shape of microcalcifications, and the presence of a mass, were reviewed, and their correlation with the pathologic diagnosis was evaluated. It is very important to find breast cancer early, when treatment can reduce deaths from breast cancer and breast incision. In screening for breast cancer, mammography is typically used to view the internal organization. Clustering microcalcifications on mammography represent an important feature of breast masses, especially that of intraductal carcinoma. Because microcalcification has a high correlation with breast cancer, a cluster of microcalcifications can be very helpful for the clinical doctor to predict breast cancer. For this study, three steps of quantitative evaluation are proposed: DoG filter, adaptive thresholding, and expectation maximization. Through the proposed algorithm, for each cluster in the distribution of microcalcifications, the number of calcifications and the length of the cluster could be measured, which can be used to automatically diagnose breast cancer as indicators of the primary diagnosis.

  14. Development and evaluation of a micro-macro algorithm for the simulation of polymer flow

    International Nuclear Information System (INIS)

    Feigl, Kathleen; Tanner, Franz X.

    2006-01-01

    A micro-macro algorithm for the calculation of polymer flow is developed and numerically evaluated. The system being solved consists of the momentum and mass conservation equations from continuum mechanics coupled with a microscopic-based rheological model for polymer stress. Standard finite element techniques are used to solve the conservation equations for velocity and pressure, while stochastic simulation techniques are used to compute polymer stress from the simulated polymer dynamics in the rheological model. The rheological model considered combines aspects of reptation, network and continuum models. Two types of spatial approximation are considered for the configuration fields defining the dynamics in the model: piecewise constant and piecewise linear. The micro-macro algorithm is evaluated by simulating the abrupt planar die entry flow of a polyisobutylene solution described in the literature. The computed velocity and stress fields are found to be essentially independent of mesh size and ensemble size, while there is some dependence of the results on the order of spatial approximation to the configuration fields close to the die entry. Comparison with experimental data shows that the piecewise linear approximation leads to better predictions of the centerline first normal stress difference. Finally, the computational time associated with the piecewise constant spatial approximation is found to be about 2.5 times lower than that associated with the piecewise linear approximation. This is the result of the more efficient time integration scheme that is possible with the former type of approximation due to the pointwise incompressibility guaranteed by the choice of velocity-pressure finite element

  15. Developing a Model Component

    Science.gov (United States)

    Fields, Christina M.

    2013-01-01

    The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) was a Space Shuttle Orbiter support piece of the Ground Servicing Equipment (GSE). The initial purpose of the UCTS was to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The UCTS is designed with the capability of servicing future space vehicles; including all Space Station Requirements necessary for the MPLM Modules. The Simulation uses GSE Models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at Kennedy Space Center (KSC), my assignment was to develop a model component for the UCTS. I was given a fluid component (dryer) to model in Simulink. I completed training for UNIX and Simulink. The dryer is a Catch All replaceable core type filter-dryer. The filter-dryer provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system. The filter-dryer also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. The filter-dryer was modeled by determining affects it has on the pressure and velocity of the system. I used Bernoulli's Equation to calculate the pressure and velocity differential through the dryer. I created my filter-dryer model in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements. I participated in Simulation meetings and was involved in the subsystem design process and team collaborations. I gained valuable work experience and insight into a career path as an engineer.

  16. Nonlinear inversion of resistivity sounding data for 1-D earth models using the Neighbourhood Algorithm

    Science.gov (United States)

    Ojo, A. O.; Xie, Jun; Olorunfemi, M. O.

    2018-01-01

    To reduce ambiguity related to nonlinearities in the resistivity model-data relationships, an efficient direct-search scheme employing the Neighbourhood Algorithm (NA) was implemented to solve the 1-D resistivity problem. In addition to finding a range of best-fit models which are more likely to be global minima, this method investigates the entire multi-dimensional model space and provides additional information about the posterior model covariance matrix, marginal probability density functions and an ensemble of acceptable models. This provides new insights into how well the model parameters are constrained and makes assessing trade-offs between them possible, thus avoiding some common interpretation pitfalls. The efficacy of the newly developed program is tested by inverting both synthetic (noisy and noise-free) data and field data from other authors employing different inversion methods so as to provide a good base for comparative performance. In all cases, the inverted model parameters were in good agreement with the true and recovered model parameters from other methods and correlate remarkably with the available borehole litho-log and known geology for the field dataset. The NA method has proven to be useful when a good starting model is not available, and the reduced number of unknowns in the 1-D resistivity inverse problem makes it an attractive alternative to the linearized methods. Hence, it is concluded that the newly developed program offers an excellent complementary tool for the global inversion of layered resistivity structure.

  17. Analyzing Traffic Problem Model With Graph Theory Algorithms

    OpenAIRE

    Tan, Yong

    2014-01-01

    This paper contributes to a practical problem, urban traffic. We investigate its features, try to simplify the complexity and formalize this dynamic system. The contents mainly cover how to analyze a decision problem with combinatorial methods and graph theory algorithms, and how to optimize our strategy to obtain a feasible solution by employing other principles of computer science.

  18. A face recognition algorithm based on multiple individual discriminative models

    DEFF Research Database (Denmark)

    Fagertun, Jens; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2005-01-01

    Abstract—In this paper, a novel algorithm for facial recognition is proposed. The technique combines the color texture and geometrical configuration provided by face images. Landmarks and pixel intensities are used by Principal Component Analysis and Fisher Linear Discriminant Analysis to associa...... as an accurate and robust tool for facial identification and unknown detection....

  19. Model-based remote sensing algorithms for particulate organic ...

    Indian Academy of Sciences (India)

    PCA algorithms based on the first three, four, and five modes accounted for 90, 95, and 98% of the total variance and yielded significant correlations with POC, with R² = 0.89, 0.92, and 0.93. These full waveband approaches provided robust estimates of POC in various water types. Three different analyses (root mean square ...

  20. Algorithmic Research and Software Development for an Industrial Strength Sparse Matrix Library for Parallel Computers

    National Research Council Canada - National Science Library

    Grimes, Roger

    1999-01-01

    This final report describes the status of work performed during the months of Sept 1995 through Jan 1999 on the Algorithmic Research And Software Development For An Industrial Strength Sparse Matrix...

  1. An Iterative Algorithm to Determine the Dynamic User Equilibrium in a Traffic Simulation Model

    Science.gov (United States)

    Gawron, C.

    An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.
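
    The iterative scheme can be skeletonized as follows: simulate one "day" of route choices drawn from each driver's probability distribution, observe the resulting costs, and re-weight the distributions toward cheaper routes. The two-route network and the logit-style update are illustrative stand-ins for the queuing simulation used in the paper.

```python
# Skeleton of an iterative day-to-day route-choice equilibration loop.
import numpy as np

rng = np.random.default_rng(6)
n_drivers, beta = 1000, 0.1
prob = np.full((n_drivers, 2), 0.5)          # per-driver route distributions

def travel_cost(loads):
    # Simple congestion functions for two parallel routes (assumed).
    return np.array([10 + 0.02 * loads[0], 15 + 0.005 * loads[1]])

for day in range(50):
    choice = (rng.random(n_drivers) > prob[:, 0]).astype(int)
    loads = np.bincount(choice, minlength=2)
    costs = travel_cost(loads)
    # Logit-style adaptation toward the cheaper route; rows stay normalized.
    w = np.exp(-beta * costs)
    prob = 0.9 * prob + 0.1 * (w / w.sum())

print("route loads at (approximate) equilibrium:", loads, "costs:", costs)
```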

  2. A multiobjective optimization model and an orthogonal design-based hybrid heuristic algorithm for regional urban mining management problems.

    Science.gov (United States)

    Wu, Hao; Wan, Zhong

    2018-02-01

    In this paper, a multiobjective mixed-integer piecewise nonlinear programming model (MOMIPNLP) is built to formulate the management problem of urban mining system, where the decision variables are associated with buy-back pricing, choices of sites, transportation planning, and adjustment of production capacity. Different from the existing approaches, the social negative effect, generated from structural optimization of the recycling system, is minimized in our model, as well as the total recycling profit and utility from environmental improvement are jointly maximized. For solving the problem, the MOMIPNLP model is first transformed into an ordinary mixed-integer nonlinear programming model by variable substitution such that the piecewise feature of the model is removed. Then, based on technique of orthogonal design, a hybrid heuristic algorithm is developed to find an approximate Pareto-optimal solution, where genetic algorithm is used to optimize the structure of search neighborhood, and both local branching algorithm and relaxation-induced neighborhood search algorithm are employed to cut the searching branches and reduce the number of variables in each branch. Numerical experiments indicate that this algorithm spends less CPU (central processing unit) time in solving large-scale regional urban mining management problems, especially in comparison with the similar ones available in literature. By case study and sensitivity analysis, a number of practical managerial implications are revealed from the model. Since the metal stocks in society are reliable overground mineral sources, urban mining has been paid great attention as emerging strategic resources in an era of resource shortage. By mathematical modeling and development of efficient algorithms, this paper provides decision makers with useful suggestions on the optimal design of recycling system in urban mining. For example, this paper can answer how to encourage enterprises to join the recycling activities

  3. Multi-sources model and control algorithm of an energy management system for light electric vehicles

    International Nuclear Information System (INIS)

    Hannan, M.A.; Azidin, F.A.; Mohamed, A.

    2012-01-01

    Highlights: ► An energy management system (EMS) is developed for a scooter under normal and heavy power load conditions. ► The battery, FC, SC, EMS, DC machine and vehicle dynamics are modeled and designed for the system. ► State-based logic control algorithms provide an efficient and feasible multi-source EMS for light electric vehicles. ► Vehicle’s speed and power are closely matched with the ECE-47 driving cycle under normal and heavy load conditions. ► Sources of energy changeover occurred at 50% of the battery state of charge level in heavy load conditions. - Abstract: This paper presents the multi-sources energy models and ruled based feedback control algorithm of an energy management system (EMS) for light electric vehicle (LEV), i.e., scooters. The multiple sources of energy, such as a battery, fuel cell (FC) and super-capacitor (SC), EMS and power controller, DC machine and vehicle dynamics are designed and modeled using MATLAB/SIMULINK. The developed control strategies continuously support the EMS of the multiple sources of energy for a scooter under normal and heavy power load conditions. The performance of the proposed system is analyzed and compared with that of the ECE-47 test drive cycle in terms of vehicle speed and load power. The results show that the designed vehicle’s speed and load power closely match those of the ECE-47 test driving cycle under normal and heavy load conditions. This study’s results suggest that the proposed control algorithm provides an efficient and feasible EMS for LEV.

  4. Using genetic algorithm and TOPSIS for Xinanjiang model calibration with a single procedure

    Science.gov (United States)

    Cheng, Chun-Tian; Zhao, Ming-Yan; Chau, K. W.; Wu, Xin-Yu

    2006-01-01

    Genetic Algorithm (GA) is globally oriented in searching and thus useful in optimizing multiobjective problems, especially where the objective functions are ill-defined. Conceptual rainfall-runoff models, which aim at predicting streamflow from knowledge of the precipitation over a catchment, have become a basic tool for flood forecasting. The parameter calibration of a conceptual model usually involves multiple criteria for judging performance against observed data. However, it is often difficult to derive all objective functions for the parameter calibration problem of a conceptual model. Thus, a new approach to the multiple-criteria parameter calibration problem, which combines GA with TOPSIS (technique for order preference by similarity to ideal solution) for the Xinanjiang model, is presented. This study is an immediate further development of the authors' previous research (Cheng, C.T., Ou, C.P., Chau, K.W., 2002. Combining a fuzzy optimal model with a genetic algorithm to solve multi-objective rainfall-runoff model calibration. Journal of Hydrology, 268, 72-86), whose obvious disadvantages are that the whole procedure is split into two parts and that it is difficult to integrally grasp the best behaviors of the model during the calibration procedure. The current method integrates the two parts of Xinanjiang rainfall-runoff model calibration, simplifying the procedures of model calibration and validation and readily demonstrating the intrinsic behavior of the observed data in its integrity. Comparison of results with the two-step procedure shows that the current methodology gives similar results to the previous method and is also feasible and robust, but simpler and easier to apply in practice.
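
    The TOPSIS step used to rank candidate parameter sets against multiple calibration criteria is compact enough to sketch directly; the decision matrix, weights and criterion directions below are made up for illustration.

```python
# Rank alternatives by closeness to the ideal solution (TOPSIS).
import numpy as np

def topsis(matrix, weights, benefit):
    # matrix: candidates x criteria; benefit[j] True if larger-is-better.
    norm = matrix / np.linalg.norm(matrix, axis=0)
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)            # relative closeness scores

# Three candidate parameter sets scored on (Nash-Sutcliffe, volume error).
scores = topsis(np.array([[0.85, 0.10],
                          [0.80, 0.05],
                          [0.90, 0.20]]),
                weights=np.array([0.6, 0.4]),
                benefit=np.array([True, False]))
print("TOPSIS closeness:", scores, "best candidate:", scores.argmax())
```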

  5. A Linked Simulation-Optimization (LSO) Model for Conjunctive Irrigation Management using Clonal Selection Algorithm

    Science.gov (United States)

    Islam, Sirajul; Talukdar, Bipul

    2016-09-01

    A Linked Simulation-Optimization (LSO) model based on a Clonal Selection Algorithm (CSA) was formulated for application in conjunctive irrigation management. A series of measures were considered for reducing the computational burden associated with the LSO approach. Certain modifications were made to the formulated CSA so as to decrease the number of function evaluations. In addition, a simple problem-specific code for a two-dimensional groundwater flow simulation model was developed. The flow model was further simplified by a novel approach of area reduction, in order to save computational time in simulation. The LSO model was applied in the irrigation command of the Pagladiya Dam Project in Assam, India. With a view to evaluating the performance of the CSA, a Genetic Algorithm (GA) was used as a comparison base. The results from the CSA compared well with those from the GA. In fact, the CSA was found to consume less computational time than the GA while converging to the optimal solution, due to the modifications made to it.

  6. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    Energy Technology Data Exchange (ETDEWEB)

    Lester, Brian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scherzinger, William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-19

    Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.

  7. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    Energy Technology Data Exchange (ETDEWEB)

    Lester, Brian T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scherzinger, William M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-19

    A new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. As a result through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.

  8. Modeling skin collimation using the electron pencil beam redefinition algorithm.

    Science.gov (United States)

    Chi, Pai-Chun M; Hogstrom, Kenneth R; Starkschall, George; Antolak, John A; Boyd, Robert A

    2005-11-01

    Skin collimation is an important tool for electron beam therapy that is used to minimize the penumbra when treating near critical structures, at extended treatment distances, with bolus, or using arc therapy. It is usually made of lead or lead alloy material that conforms to and is placed on patient surface. Presently, commercially available treatment-planning systems lack the ability to model skin collimation and to accurately calculate dose in its presence. The purpose of the present work was to evaluate the use of the pencil beam redefinition algorithm (PBRA) in calculating dose in the presence of skin collimation. Skin collimation was incorporated into the PBRA by terminating the transport of electrons once they enter the skin collimator. Both fixed- and arced-beam dose calculations for arced-beam geometries were evaluated by comparing them with measured dose distributions for 10- and 15-MeV beams. Fixed-beam dose distributions were measured in water at 88-cm source-to-surface distance with an air gap of 32 cm. The 6 x 20-cm2 field (dimensions projected to isocenter) had a 10-mm thick lead collimator placed on the surface of the water with its edge 5 cm inside the field's edge located at +10 cm. Arced-beam dose distributions were measured in a 13.5-cm radius polystyrene circular phantom. The beam was arced 90 degrees (-45 degrees to +45 degrees), and 10-mm thick lead collimation was placed at +/- 30 degrees. For the fixed beam at 10 MeV, the PBRA- calculated dose agreed with measured dose to within 2.0-mm distance to agreement (DTA) in the regions of high-dose gradient and 2.0% in regions of low dose gradient. At 15 MeV, the PBRA agreed to within a 2.0-mm DTA in the regions of high-dose gradient; however, the PBRA underestimated the dose by as much as 5.3% over small regions at depths less than 2 cm because it did not model electrons scattered from the edge of the skin collimation. For arced beams at 10 MeV, the agreement was 1-mm DTA in the high-dose gradient

  9. Development and validation of an algorithm for laser application in wound treatment

    Directory of Open Access Journals (Sweden)

    Diequison Rite da Cunha

    2017-12-01

    Full Text Available ABSTRACT Objective: To develop and validate an algorithm for laser wound therapy. Method: Methodological study and literature review. For the development of the algorithm, a review of the past ten years was performed in the Health Sciences databases. The algorithm evaluation was performed by 24 participants: nurses, physiotherapists, and physicians. For data analysis, the Cronbach's alpha coefficient and the chi-square test for independence were used. The level of significance of the statistical test was established at 5% (p<0.05). Results: The professionals' responses regarding the readability of the algorithm indicated: 41.70%, great; 41.70%, good; 16.70%, regular. With regard to the algorithm being sufficient for supporting decisions related to wound evaluation and wound cleaning, 87.5% said yes to both questions. Regarding the participants' opinion that the algorithm contained enough information to support their decision regarding the choice of laser parameters, 91.7% said yes. The questionnaire showed reliability by the Cronbach's alpha coefficient test (α = 0.962). Conclusion: The developed and validated algorithm showed reliability for evaluation, wound cleaning, and use of laser therapy in wounds.
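    For readers unfamiliar with the reliability statistic used here, Cronbach's alpha is straightforward to compute from a respondents-by-items score matrix. The sketch below uses synthetic Likert-style ratings — the 24 raters match the study's panel size, but the five items and the scores themselves are invented for illustration.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of questionnaire items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 24-rater, 5-item response matrix (Likert-style 1-5 scores).
rng = np.random.default_rng(0)
base = rng.integers(3, 6, size=(24, 1))                      # shared rater tendency
ratings = np.clip(base + rng.integers(-1, 2, size=(24, 5)), 1, 5)
print(f"alpha = {cronbach_alpha(ratings):.3f}")
```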

  10. Study on solitary word based on HMM model and Baum-Welch algorithm

    Directory of Open Access Journals (Sweden)

    Junxia CHEN

    Full Text Available This paper introduces the principle of the Hidden Markov Model (HMM), a probability model that describes the statistical properties of a random process governed by a Markov chain with unknown parameters. On this basis, a solitary-word detection experiment based on the HMM is designed. The Baum-Welch algorithm is used to solve the training problem of the HMM, estimating the model parameters λ, which are mathematically analogous to linear prediction coefficients. The experiment reduces unnecessary HMM training and thereby lowers the algorithm's complexity. To test the effectiveness of the Baum-Welch algorithm, experimental data were simulated; the results show that the algorithm is effective.
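    As a concrete illustration of the training step described above, the following is a minimal Baum-Welch (EM) re-estimation for a discrete-emission HMM, written from the standard scaled forward-backward recursions. It is a generic sketch, not the paper's isolated-word recognizer, and the observation sequence is invented.

```python
import numpy as np

def forward_backward(obs, A, B, pi):
    """Scaled forward-backward pass for a discrete-emission HMM."""
    T, N = len(obs), A.shape[0]
    alpha, beta, c = np.zeros((T, N)), np.zeros((T, N)), np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    return alpha, beta, c

def baum_welch(obs, N, M, n_iter=50, seed=0):
    """Estimate HMM parameters lambda = (A, B, pi) from one observation sequence."""
    obs = np.asarray(obs)
    rng = np.random.default_rng(seed)
    A = rng.random((N, N)); A /= A.sum(1, keepdims=True)   # transition matrix
    B = rng.random((N, M)); B /= B.sum(1, keepdims=True)   # emission matrix
    pi = np.full(N, 1.0 / N)
    for _ in range(n_iter):
        alpha, beta, c = forward_backward(obs, A, B, pi)
        gamma = alpha * beta
        gamma /= gamma.sum(1, keepdims=True)               # state posteriors
        xi = np.zeros((N, N))
        for t in range(len(obs) - 1):                      # expected transitions
            x = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1] / c[t + 1]
            xi += x / x.sum()
        pi = gamma[0]
        A = xi / gamma[:-1].sum(0)[:, None]
        for k in range(M):
            B[:, k] = gamma[obs == k].sum(0)
        B /= gamma.sum(0)[:, None]
    return A, B, pi, np.log(c).sum()                        # last log-likelihood

# Toy usage: two hidden states, three observation symbols.
obs = np.array([0, 1, 2, 2, 1, 0, 0, 2, 1, 2] * 5)
A, B, pi, ll = baum_welch(obs, N=2, M=3)
print("log-likelihood:", round(ll, 2))
```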

  11. Novel particle tracking algorithm based on the Random Sample Consensus Model for the Active Target Time Projection Chamber (AT-TPC)

    Science.gov (United States)

    Ayyad, Yassid; Mittig, Wolfgang; Bazin, Daniel; Beceiro-Novo, Saul; Cortesi, Marco

    2018-02-01

    The three-dimensional reconstruction of particle tracks in a time projection chamber is a challenging task that requires advanced classification and fitting algorithms. In this work, we have developed and implemented a novel algorithm based on the Random Sample Consensus Model (RANSAC). RANSAC is used to classify tracks, including pile-up events, to remove uncorrelated noise hits, and to reconstruct the vertex of the reaction. The algorithm, developed within the Active Target Time Projection Chamber (AT-TPC) framework, was tested and validated by analyzing the 4He+4He reaction. Results, performance and quality of the proposed algorithm are presented and discussed in detail.
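    The AT-TPC code itself is not reproduced in the abstract, but the underlying RANSAC idea — fit a track model to randomly drawn minimal samples and keep the largest consensus set — is easy to sketch. The example below fits a straight 3D line to synthetic track points contaminated with uncorrelated noise hits; real AT-TPC tracks are generally curved, so take this only as an illustration of the classification step.

```python
import numpy as np

def ransac_line_3d(points, n_iter=500, tol=2.0, seed=1):
    """Fit a 3D line with RANSAC; returns a point, direction, and inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)  # minimal sample
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        d /= norm
        # Distance of every point to the candidate line through points[i] along d.
        v = points - points[i]
        dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine with a least-squares (PCA) fit on the consensus set.
    p0 = points[best_inliers].mean(axis=0)
    _, _, vt = np.linalg.svd(points[best_inliers] - p0)
    return p0, vt[0], best_inliers

# Synthetic track: points along a line plus uncorrelated noise hits.
rng = np.random.default_rng(0)
t = rng.uniform(0, 100, 200)
track = np.c_[t, 0.5 * t, -0.2 * t] + rng.normal(0, 0.5, (200, 3))
noise = rng.uniform(-20, 120, (60, 3))
p0, direction, mask = ransac_line_3d(np.vstack([track, noise]))
print("direction:", np.round(direction, 3), "| inliers:", mask.sum())
```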

  12. Portfolio optimization by using linear programming models based on genetic algorithm

    Science.gov (United States)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that the portfolio risk is measured by the absolute standard deviation, and that each investor has a risk tolerance for the investment portfolio. To solve the investment portfolio optimization problem, the problem is formulated as a linear programming model. Furthermore, the optimum solution of the linear program is determined by using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that the portfolio optimization performed by the genetic algorithm approach produces a more efficient portfolio than the portfolio optimization performed by a linear programming approach. Therefore, genetic algorithms can be considered as an alternative for determining the optimal investment portfolio, particularly when using linear programming models.
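    As a hedged sketch of the approach, the snippet below runs a simple genetic algorithm over long-only portfolio weights, maximizing mean return with a penalty when the mean-absolute-deviation risk exceeds the investor's tolerance. The operators (truncation selection, arithmetic crossover, Gaussian mutation), the penalty weight, and the synthetic return data are all illustrative choices, not the paper's.

```python
import numpy as np

def ga_portfolio(returns, risk_tol, pop=100, gens=200, seed=0):
    """GA search for max-mean-return weights with mean absolute deviation <= risk_tol."""
    rng = np.random.default_rng(seed)
    n = returns.shape[1]

    def normalize(w):
        w = np.abs(w)
        return w / w.sum()                          # long-only weights summing to 1

    def fitness(w):
        port = returns @ w
        mad = np.mean(np.abs(port - port.mean()))   # absolute-deviation risk measure
        penalty = 100.0 * max(0.0, mad - risk_tol)  # penalize violated risk tolerance
        return port.mean() - penalty

    P = np.array([normalize(rng.random(n)) for _ in range(pop)])
    for _ in range(gens):
        fit = np.array([fitness(w) for w in P])
        elite = P[np.argsort(fit)[::-1][: pop // 2]]   # truncation selection
        kids = []
        for _ in range(pop - len(elite)):
            a, b = elite[rng.integers(len(elite), size=2)]
            child = 0.5 * (a + b)                      # arithmetic crossover
            child += rng.normal(0, 0.05, n)            # Gaussian mutation
            kids.append(normalize(child))
        P = np.vstack([elite, kids])
    fit = np.array([fitness(w) for w in P])
    return P[fit.argmax()]

# Hypothetical daily returns for four stocks.
rng = np.random.default_rng(42)
R = rng.normal([0.0005, 0.0008, 0.0003, 0.001], [0.01, 0.02, 0.005, 0.03], (500, 4))
print("weights:", np.round(ga_portfolio(R, risk_tol=0.008), 3))
```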

  13. Development of MODIS data-based algorithm for retrieving sea surface temperature in coastal waters.

    Science.gov (United States)

    Wang, Jiao; Deng, Zhiqiang

    2017-06-01

    A new algorithm was developed for retrieving sea surface temperature (SST) in coastal waters using satellite remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua platform. The new SST algorithm was trained using the Artificial Neural Network (ANN) method and tested using 8 years of remote sensing data from the MODIS Aqua sensor and in situ sensing data from US coastal waters in Louisiana, Texas, Florida, California, and New Jersey. The ANN algorithm can be utilized to map SST in both deep offshore and, in particular, shallow nearshore waters at a high spatial resolution of 1 km, greatly expanding the coverage of remote sensing-based SST data from offshore waters to nearshore waters. Applications of the ANN algorithm require only the remotely sensed values from the two MODIS Aqua thermal bands 31 and 32 as input data. Application results indicated that the ANN algorithm was able to explain 82-90% of the variation in observed SST in US coastal waters. While the algorithm is generally applicable to the retrieval of SST, it works best for nearshore waters where important coastal resources are located and existing algorithms are either not applicable or do not work well, making the new ANN-based SST algorithm unique and particularly useful for coastal resource management.
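    A minimal version of such an ANN retrieval can be put together with scikit-learn. In the sketch below the "matchup" table is synthetic — the band 31/32 brightness values and the in situ SST are generated from a made-up relation — so it only illustrates the two-band-input regression setup, not the paper's trained model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical matchup table: brightness values (K) for MODIS bands 31 and 32
# paired with in situ SST (deg C); real work would use buoy/station matchups.
rng = np.random.default_rng(0)
bt31 = rng.uniform(280, 300, 2000)
bt32 = bt31 - rng.uniform(0.2, 1.5, 2000)      # band 32 reads slightly colder
sst = 1.02 * (bt31 - 273.15) + 2.1 * (bt31 - bt32) + rng.normal(0, 0.3, 2000)

X = np.column_stack([bt31, bt32])
X_tr, X_te, y_tr, y_te = train_test_split(X, sst, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out synthetic matchups: {model.score(X_te, y_te):.2f}")
```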

  14. Development and validation of a prediction algorithm for the onset of common mental disorders in a working population.

    Science.gov (United States)

    Fernandez, Ana; Salvador-Carulla, Luis; Choi, Isabella; Calvo, Rafael; Harvey, Samuel B; Glozier, Nicholas

    2018-01-01

    Common mental disorders are the most common reason for long-term sickness absence in most developed countries. Prediction algorithms for the onset of common mental disorders may help target indicated work-based prevention interventions. We aimed to develop and validate a risk algorithm to predict the onset of common mental disorders at 12 months in a working population. We conducted a secondary analysis of the Household, Income and Labour Dynamics in Australia Survey, a longitudinal, nationally representative household panel in Australia. Data from the 6189 working participants who did not meet the criteria for a common mental disorder at baseline were non-randomly split into training and validation databases, based on state of residence. Common mental disorders were assessed with the mental component score of the 36-Item Short Form Health Survey questionnaire (score ⩽45). Risk algorithms were constructed following recommendations made by the Transparent Reporting of a multivariable prediction model for Prevention Or Diagnosis statement. Different risk factors were identified among women and men for the final risk algorithms. In the training data, the model for women had a C-index of 0.73 and an effect size (Hedges' g) of 0.91. In men, the C-index was 0.76 and the effect size was 1.06. In the validation data, the C-index was 0.66 for women and 0.73 for men, with positive predictive values of 0.28 and 0.26, respectively. Conclusion: It is possible to develop an algorithm with good discrimination for the onset of common mental disorders among working men, identifying overall and modifiable risks. Such models have the potential to change the way that prevention of common mental disorders at the workplace is conducted, but different models may be required for women.

  15. Implicit level set algorithms for modelling hydraulic fracture propagation.

    Science.gov (United States)

    Peirce, A

    2016-10-13

    Hydraulic fractures are tensile cracks that propagate in pre-stressed solid media due to the injection of a viscous fluid. Developing numerical schemes to model the propagation of these fractures is particularly challenging due to the degenerate, hypersingular nature of the coupled integro-partial differential equations. These equations typically involve a singular free boundary whose velocity can only be determined by evaluating a distinguished limit. This review paper describes a class of numerical schemes that have been developed to use the multiscale asymptotic behaviour typically encountered near the fracture boundary as multiple physical processes compete to determine the evolution of the fracture. The fundamental concepts of locating the free boundary using the tip asymptotics and imposing the tip asymptotic behaviour in a weak form are illustrated in two quite different formulations of the governing equations. These formulations are the displacement discontinuity boundary integral method and the extended finite-element method. Practical issues are also discussed, including new models for proppant transport able to capture 'tip screen-out'; efficient numerical schemes to solve the coupled nonlinear equations; and fast methods to solve resulting linear systems. Numerical examples are provided to illustrate the performance of the numerical schemes. We conclude the paper with open questions for further research. This article is part of the themed issue 'Energy and the subsurface'. © 2016 The Author(s).

  16. SPICE Modeling and Simulation of a MPPT Algorithm

    Directory of Open Access Journals (Sweden)

    Miona Andrejević Stošović

    2014-06-01

    Full Text Available One among several equally important subsystems of a standalone photovoltaic (PV) system is the circuit for maximum power point tracking (MPPT). There are several algorithms that may be used for it. In this paper we choose such an algorithm based on a maximum-simplicity criterion. Then we make some small modifications to it in order to make it more robust. We synthesize a circuit built out of elements from the list of elements recognized by SPICE. The inputs are the voltage and the current at the PV panel to DC-DC converter interface. Its task is to generate a pulse-width-modulated pulse train whose duty ratio is defined to keep the input impedance of the DC-DC converter at the optimal value.
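    The abstract does not name the chosen algorithm, but the usual maximal-simplicity choice is perturb-and-observe, so the sketch below assumes it: each step perturbs the converter duty ratio and keeps the direction that increased panel power. The duty-to-voltage mapping and the toy P-V curve are invented for the demo.

```python
def mppt_perturb_observe(v, i, state, step=0.005):
    """One P&O update of the converter duty ratio from panel voltage/current samples.

    `state` carries (prev_power, prev_voltage, duty); returns the updated state.
    """
    p_prev, v_prev, duty = state
    p = v * i
    # If power rose with the last voltage move, keep going; otherwise reverse.
    direction = 1.0 if (p - p_prev) * (v - v_prev) > 0 else -1.0
    duty = min(max(duty - direction * step, 0.05), 0.95)  # clamp duty ratio
    return (p, v, duty)

# Toy P-V curve with its maximum power at 17.5 V; duty sets voltage indirectly.
state = (0.0, 0.0, 0.5)
v_op = 12.0
for _ in range(200):
    i_op = max(0.0, (100.0 - 0.5 * (v_op - 17.5) ** 2) / max(v_op, 1e-6))
    state = mppt_perturb_observe(v_op, i_op, state)
    v_op = 34.0 * (1.0 - state[2])   # hypothetical duty-to-voltage mapping (boost)
print(f"settled near V = {v_op:.1f} V, duty = {state[2]:.2f}")
```

    The operating point oscillates around the maximum power point within one perturbation step, which is the expected steady-state behavior of this family of trackers.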

  17. Parameter Optimization of Single-Diode Model of Photovoltaic Cell Using Memetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yourim Yoon

    2015-01-01

    Full Text Available This study proposes a memetic approach for optimally determining the parameter values of the single-diode-equivalent solar cell model. The memetic algorithm, which combines metaheuristic and gradient-based techniques, has the merit of good performance in both global and local searches. First, 10 single algorithms were considered, including the genetic algorithm, simulated annealing, particle swarm optimization, harmony search, differential evolution, cuckoo search, the least squares method, and pattern search; their final solutions were then used as initial vectors for the generalized reduced gradient technique. With this memetic approach, we could further improve the accuracy of the estimated solar cell parameters when compared with single-algorithm approaches.
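    The two-stage idea — a global metaheuristic whose result seeds a gradient-based refinement — can be sketched with SciPy, using differential evolution (one of the algorithms listed) for the global stage and L-BFGS-B as a stand-in for the paper's generalized reduced gradient step. The single-diode residual, the "true" parameters, and the synthetic I-V curve below are all illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize, brentq

VT = 0.0259  # thermal voltage at ~300 K (V)

def diode_residual(params, v, i):
    """Sum-of-squares residual of the implicit single-diode equation at (V, I) data."""
    iph, i0, n, rs, rsh = params
    r = iph - i0 * (np.exp((v + i * rs) / (n * VT)) - 1.0) - (v + i * rs) / rsh - i
    return np.sum(r ** 2)

def solve_current(vk, p):
    """Solve the implicit equation for current at one voltage (for the demo data)."""
    iph, i0, n, rs, rsh = p
    g = lambda ik: iph - i0 * (np.exp((vk + ik * rs) / (n * VT)) - 1.0) \
        - (vk + ik * rs) / rsh - ik
    return brentq(g, -1.0, iph + 1.0)   # g is monotone decreasing in ik

# Hypothetical I-V measurements generated from known "true" parameters.
true = (3.0, 1e-7, 1.3, 0.05, 150.0)   # Iph, I0, n, Rs, Rsh
v = np.linspace(0.0, 0.55, 40)
i = np.array([solve_current(vk, true) for vk in v])

bounds = [(0.1, 5.0), (1e-9, 1e-5), (1.0, 2.0), (0.0, 0.5), (10.0, 500.0)]
# Global metaheuristic stage ...
coarse = differential_evolution(diode_residual, bounds, args=(v, i), seed=0)
# ... whose result seeds the local gradient-based stage (stand-in for GRG).
fine = minimize(diode_residual, coarse.x, args=(v, i),
                method="L-BFGS-B", bounds=bounds)
print("true:     ", true)
print("estimated:", np.round(fine.x, 4))
```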

  18. Event-chain algorithm for the Heisenberg model: Evidence for z≃1 dynamic scaling.

    Science.gov (United States)

    Nishikawa, Yoshihiko; Michel, Manon; Krauth, Werner; Hukushima, Koji

    2015-12-01

    We apply the event-chain Monte Carlo algorithm to the three-dimensional ferromagnetic Heisenberg model. The algorithm is rejection-free and realizes an irreversible Markov chain that satisfies global balance. The autocorrelation functions of the magnetic susceptibility and the energy indicate a dynamical critical exponent z≈1 at the critical temperature, while that of the magnetization is not a suitable measure of the algorithm's performance. We show that the event-chain Monte Carlo algorithm substantially reduces the dynamical critical exponent from the conventional value of z≃2.

  19. Atmosphere Clouds Model Algorithm for Solving Optimal Reactive Power Dispatch Problem

    Directory of Open Access Journals (Sweden)

    Lenin Kanagasabai

    2014-04-01

    Full Text Available In this paper, a new method, called the Atmosphere Clouds Model (ACM) algorithm, is used for solving the optimal reactive power dispatch problem. ACM is a stochastic optimization algorithm inspired by the behavior of clouds in nature: it replicates the generation, movement, and spreading behaviors of clouds. The proposed ACM algorithm has been tested on the standard IEEE 30-bus test system, and the simulation results clearly show the superior performance of the proposed algorithm in reducing the real power loss.

  20. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

    Science.gov (United States)

    Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

    2015-09-01

    The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is bigger than the viscous relaxation time. The small time step required for stability has motivated the development of strategies that mitigate the stability problem by making larger (at least ∼10 Kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with either Eulerian or Lagrangian grids. It includes α and β parameters to control the vertical and the horizontal slope-dependent penalization terms, respectively, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress-free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate and stable for steep slopes, and also conclude that, for longer time steps, the optimal

  1. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    Science.gov (United States)

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…

  2. Inferring the structure of latent class models using a genetic algorithm

    NARCIS (Netherlands)

    van der Maas, H.L.J.; Raijmakers, M.E.J.; Visser, I.

    2005-01-01

    Present optimization techniques in latent class analysis apply the expectation maximization algorithm or the Newton-Raphson algorithm for optimizing the parameter values of a prespecified model. These techniques can be used to find maximum likelihood estimates of the parameters, given the specified

  3. Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators

    Science.gov (United States)

    Weissman, Alexander

    2013-01-01

    Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…

  4. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    Science.gov (United States)

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…

  5. Efficient cache oblivious algorithms for randomized divide-and-conquer on the multicore model

    OpenAIRE

    Sharma, Neeraj; Sen, Sandeep

    2012-01-01

    In this paper we present randomized algorithms for sorting and convex hull that achieve optimal performance (for speed-up and cache misses) on the multicore model with private caches. Our algorithms are cache oblivious and generalize the randomized divide-and-conquer strategy given by Reischuk and by Reif and Sen. Although the approach yielded optimal speed-up in the PRAM model, we require additional techniques to optimize cache misses in an oblivious setting. Under a mild assumption on in...

  6. Bayesian Algorithm Implementation in a Real Time Exposure Assessment Model on Benzene with Calculation of Associated Cancer Risks

    Directory of Open Access Journals (Sweden)

    Pavlos A. Kassomenos

    2009-02-01

    Full Text Available The objective of the current study was the development of a reliable modeling platform to calculate in real time the personal exposure and the associated health risk for filling station employees, evaluating current environmental parameters (traffic, meteorological, and amount of fuel traded) determined by an appropriate sensor network. A set of Artificial Neural Networks (ANNs) was developed to predict the benzene exposure pattern for the filling station employees. Furthermore, a Physiology Based Pharmaco-Kinetic (PBPK) risk assessment model was developed in order to calculate the lifetime probability distribution of leukemia for the employees, fed by data obtained from the ANN model. A Bayesian algorithm was involved at crucial points of both model subcompartments. The application was evaluated at two filling stations (one urban and one rural). Among several algorithms available for the development of the ANN exposure model, Bayesian regularization provided the best results and seems to be a promising technique for predicting the exposure pattern of this occupational population group. In assessing the estimated leukemia risk, with the aim of providing a distribution curve based on the exposure levels and the different susceptibility of the population, the Bayesian algorithm was a prerequisite of the Monte Carlo approach, which is integrated in the PBPK-based risk model. In conclusion, the modeling system described herein is capable of exploiting the information collected by the environmental sensors in order to estimate in real time the personal exposure and the resulting health risk for employees of gasoline filling stations.

  7. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  8. Developed adaptive neuro-fuzzy algorithm to control air conditioning ...

    African Journals Online (AJOL)

    user

    Our expectations of such systems have been raised to demand more than just temperature control, and it is increasingly desirable to apply these ... 2012) introduced a hybrid steady-state modeling approach for air-conditioning systems to keep the conservation of mass, energy ..... that shows the complexity and flexibility.

  9. Genetic Algorithms and Local Search

    Science.gov (United States)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.

  10. Algorithm Development for Multi-Energy SXR based Electron Temperature Profile Reconstruction

    Science.gov (United States)

    Clayton, D. J.; Tritz, K.; Finkenthal, M.; Kumar, D.; Stutman, D.

    2012-10-01

    New techniques utilizing computational tools such as neural networks and genetic algorithms are being developed to infer plasma electron temperature profiles on fast time scales (> 10 kHz) from multi-energy soft-x-ray (ME-SXR) diagnostics. Traditionally, a two-foil SXR technique, using the ratio of filtered continuum emission measured by two SXR detectors, has been employed on fusion devices as an indirect method of measuring electron temperature. However, these measurements can be susceptible to large errors due to uncertainties in time-evolving impurity density profiles, leading to unreliable temperature measurements. To correct this problem, measurements using ME-SXR diagnostics, which use three or more filtered SXR arrays to distinguish line and continuum emission from various impurities, in conjunction with constraints from spectroscopic diagnostics, can be used to account for unknown or time evolving impurity profiles [K. Tritz et al, Bull. Am. Phys. Soc. Vol. 56, No. 12 (2011), PP9.00067]. On NSTX, ME-SXR diagnostics can be used for fast (10-100 kHz) temperature profile measurements, using a Thomson scattering diagnostic (60 Hz) for periodic normalization. The use of more advanced algorithms, such as neural network processing, can decouple the reconstruction of the temperature profile from spectral modeling.

  11. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    Science.gov (United States)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  12. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    Energy Technology Data Exchange (ETDEWEB)

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    1985-05-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  13. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Directory of Open Access Journals (Sweden)

    A H Sabry

    Full Text Available The power system always has several variations in its profile due to random load changes or environmental effects, such as device switching effects that generate further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm, enabling modeling not only in the frequency domain but for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.

  14. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    Science.gov (United States)

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects, such as device switching effects that generate further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm, enabling modeling not only in the frequency domain but for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model. PMID:29351554

  15. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Science.gov (United States)

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects, such as device switching effects that generate further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm, enabling modeling not only in the frequency domain but for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.

  16. Development of Quantum Devices and Algorithms for Radiation Detection and Radiation Signal Processing

    International Nuclear Information System (INIS)

    El Tokhy, M.E.S.M.E.S.

    2012-01-01

    The main functions of a spectroscopy system are signal detection, filtering and amplification, pileup detection and recovery, dead time correction, amplitude analysis and energy spectrum analysis. Safeguards isotopic measurements require the best spectrometer systems, with excellent resolution, stability, efficiency and throughput. However, the resolution and throughput, which depend mainly on the detector, amplifier and the analog-to-digital converter (ADC), can still be improved. These modules have been under continuous development and improvement. For this reason, we are interested in both the development of quantum detectors and efficient algorithms for digital processing of the measurements. The main objective of this thesis is therefore twofold: (1) to study the behavior of quantum dot (QD) devices under gamma radiation, and (2) to develop efficient algorithms for handling problems of gamma-ray spectroscopy. For gamma radiation detection, a detailed study of nanotechnology QD sources and infrared photodetectors (QDIP) for gamma radiation detection is introduced. There are two different types of quantum scintillator detectors, which dominate the area of ionizing radiation measurements: QD scintillator detectors and QDIP scintillator detectors. Compared with traditional systems, quantum systems have less mass, require less volume, and consume less power. These factors increase the need for efficient detectors for gamma-ray applications such as gamma-ray spectroscopy. The potential of nanocomposite materials based on semiconductor quantum dots for radiation detection via scintillation has been demonstrated in the literature. Therefore, this thesis presents a theoretical analysis of the characteristics of QD sources and infrared photodetectors (QDIPs). A model of QD sources under incident gamma radiation detection is developed. A novel methodology is introduced to characterize the effect of gamma radiation on QD devices. The rate

  17. Development of an Aerosol Opacity Retrieval Algorithm for Use with Multi-Angle Land Surface Images

    Science.gov (United States)

    Diner, D.; Paradise, S.; Martonchik, J.

    1994-01-01

    In 1998, the Multi-angle Imaging SpectroRadiometer (MISR) will fly aboard the EOS-AM1 spacecraft. MISR will enable unique methods for retrieving the properties of atmospheric aerosols, by providing global imagery of the Earth at nine viewing angles in four visible and near-IR spectral bands. As part of the MISR algorithm development, theoretical methods of analyzing multi-angle, multi-spectral data are being tested using images acquired by the airborne Advanced Solid-State Array Spectroradiometer (ASAS). In this paper we derive a method to be used over land surfaces for retrieving the change in opacity between spectral bands, which can then be used in conjunction with an aerosol model to derive a bound on absolute opacity.

  18. DEVELOPMENT OF GENETIC ALGORITHM-BASED METHODOLOGY FOR SCHEDULING OF MOBILE ROBOTS

    DEFF Research Database (Denmark)

    Dang, Vinh Quang

    This thesis addresses the issues of scheduling of mobile robot(s) at operational levels of manufacturing systems. More specifically, two problems are taken into account: scheduling of a single mobile robot with part-feeding tasks, and scheduling of multiple mobile robots with preemptive tasks... problem and finding optimal solutions for each one. However, the formulated mathematical models could only be applied to small-scale problems in practice, due to the significant increase in computation time as the problem size grows. Note that making schedules for mobile robots is part of the real-time operations of production managers. Hence, to deal with large-scale applications, a heuristic based on genetic algorithms is developed for each problem to find near-optimal solutions within a reasonable computation time. The quality of these solutions is then compared and evaluated by using...

  19. A new hybrid model optimized by an intelligent optimization algorithm for wind speed forecasting

    International Nuclear Information System (INIS)

    Su, Zhongyue; Wang, Jianzhou; Lu, Haiyan; Zhao, Ge

    2014-01-01

    Highlights: • A new hybrid model is developed for wind speed forecasting. • The model is based on the Kalman filter and the ARIMA. • An intelligent optimization method is employed in the hybrid model. • The new hybrid model has good performance in western China. - Abstract: Forecasting the wind speed is indispensable in wind-related engineering studies and is important in the management of wind farms. As a technique essential for the future of clean energy systems, reducing the forecasting errors related to wind speed has always been an important research subject. In this paper, an optimized hybrid method based on the Autoregressive Integrated Moving Average (ARIMA) and Kalman filter is proposed to forecast the daily mean wind speed in western China. This approach employs Particle Swarm Optimization (PSO) as an intelligent optimization algorithm to optimize the parameters of the ARIMA model, which develops a hybrid model that is best adapted to the data set, increasing the fitting accuracy and avoiding over-fitting. The proposed method is subsequently examined on the wind farms of western China, where the proposed hybrid model is shown to perform effectively and steadily

  20. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    International Nuclear Information System (INIS)

    Nishizuka, N.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.; Sugiura, K.

    2017-01-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite . We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  1. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    Science.gov (United States)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010-2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ˜60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  2. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    Energy Technology Data Exchange (ETDEWEB)

    Nishizuka, N.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M. [Applied Electromagnetic Research Institute, National Institute of Information and Communications Technology, 4-2-1, Nukui-Kitamachi, Koganei, Tokyo 184-8795 (Japan); Sugiura, K., E-mail: nishizuka.naoto@nict.go.jp [Advanced Speech Translation Research and Development Promotion Center, National Institute of Information and Communications Technology (Japan)

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite . We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  3. An integrated environment for fast development and performance assessment of sonar image processing algorithms - SSIE

    DEFF Research Database (Denmark)

    Henriksen, Lars

    1996-01-01

    The sonar simulator integrated environment (SSIE) is a tool for developing high performance processing algorithms for single or sequences of sonar images. The tool is based on MATLAB, providing a very short lead time from concept to executable code and thereby assessment of the algorithms tested... of the algorithms is the availability of sonar images. To accommodate this problem the SSIE has been equipped with a simulator capable of generating high fidelity sonar images for a given scene of objects, sea-bed, AUV path, etc. In the paper the main components of the SSIE are described and examples of different... processing steps are given...

  4. The development of an algebraic multigrid algorithm for symmetric positive definite linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Vanek, P.; Mandel, J.; Brezina, M. [Univ. of Colorado, Denver, CO (United States)

    1996-12-31

    An algebraic multigrid algorithm for symmetric, positive definite linear systems is developed based on the concept of prolongation by smoothed aggregation. Coarse levels are generated automatically. We present a set of requirements motivated heuristically by a convergence theory. The algorithm then attempts to satisfy the requirements. Input to the method are the coefficient matrix and zero energy modes, which are determined from nodal coordinates and knowledge of the differential equation. Efficiency of the resulting algorithm is demonstrated by computational results on real world problems from solid elasticity, plate blending, and shells.
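    For readers who want to experiment with smoothed aggregation, the PyAMG package (assumed installed here) provides an implementation whose interface mirrors the ingredients described above: the coefficient matrix plus near-null-space ("zero energy") modes, with coarse levels generated automatically. The sketch below solves a 2-D Poisson system as a stand-in for the elasticity, plate-bending, and shell problems mentioned.

```python
import numpy as np
import pyamg  # assumption: the PyAMG package is available

# 2-D Poisson problem as a stand-in for an SPD system from solid mechanics.
A = pyamg.gallery.poisson((200, 200), format="csr")   # SPD sparse matrix
b = np.random.default_rng(0).standard_normal(A.shape[0])

# Build the multigrid hierarchy by smoothed aggregation; the near-null-space
# mode defaults to the constant vector, appropriate for Poisson.
ml = pyamg.smoothed_aggregation_solver(A)
residuals = []
x = ml.solve(b, tol=1e-10, residuals=residuals)

print(ml)  # prints the hierarchy: levels, operator complexity, ...
print("mean residual reduction per cycle:",
      (residuals[-1] / residuals[0]) ** (1.0 / max(len(residuals) - 1, 1)))
```

    For elasticity, the rigid-body modes built from nodal coordinates would be passed as the `B` argument of the solver constructor, matching the "zero energy modes" input described in the abstract.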

  5. An Efficient Hybrid DSMC/MD Algorithm for Accurate Modeling of Micro Gas Flows

    KAUST Repository

    Liang, Tengfei

    2013-01-01

    Aiming at simulating micro gas flows with accurate boundary conditions, an efficient hybrid algorithm is developed by combining the molecular dynamics (MD) method with the direct simulation Monte Carlo (DSMC) method. The efficiency comes from the fact that the MD method is applied only within the gas-wall interaction layer, characterized by the cut-off distance of the gas-solid interaction potential, to resolve accurately the gas-wall interaction process, while the DSMC method is employed in the remaining portion of the flow field to efficiently simulate rarefied gas transport outside the gas-wall interaction layer. A unique feature of the present scheme is that the coupling between the two methods is realized by matching the molecular velocity distribution function at the DSMC/MD interface; hence there is no need for one-to-one mapping between an MD gas molecule and a DSMC simulation particle. Further improvement in efficiency is achieved by taking advantage of gas rarefaction inside the gas-wall interaction layer and by employing the "smart-wall model" proposed by Barisik et al. The developed hybrid algorithm is validated on two classical benchmarks, namely the 1-D Fourier thermal problem and the Couette shear flow problem. Both the accuracy and efficiency of the hybrid algorithm are discussed. As an application, the hybrid algorithm is employed to simulate the thermal transpiration coefficient in the free-molecule regime for a system with an atomically smooth surface. The result is utilized to validate the coefficients calculated from the pure DSMC simulation with Maxwell and Cercignani-Lampis gas-wall interaction models. © 2014 Global-Science Press.

  6. Modeling of Photovoltaic System with Modified Incremental Conductance Algorithm for Fast Changes of Irradiance

    Directory of Open Access Journals (Sweden)

    Saad Motahhir

    2018-01-01

    Full Text Available The first objective of this work is to determine some of the performance parameters characterizing the behavior of a particular photovoltaic (PV) panel that are not normally provided in the manufacturer's specifications. These provide the basis for developing a simple model of the electrical behavior of the PV panel. Next, using this model, the effects of varying solar irradiation, temperature, series and shunt resistances, and partial shading on the output of the PV panel are presented. In addition, the PV panel model is used to configure a large photovoltaic array. Next, a boost converter for the PV panel is designed. This converter is placed between the panel and the load in order to control it by means of a maximum power point tracking (MPPT) controller. The MPPT used is based on incremental conductance (INC), and it is demonstrated here that this technique does not respond accurately when solar irradiation is increased. To investigate this, a modified incremental conductance technique is presented in this paper. It is shown that this system does respond accurately and reduces the steady-state oscillations when solar irradiation is increased. Finally, simulations of the conventional and modified algorithms are compared, and the results show that the modified algorithm provides an accurate response to a sudden increase in solar irradiation.
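    The conventional INC update that the paper modifies can be written compactly; the sketch below is the textbook version (step size, clamping, and the duty-ratio sign convention for a boost converter are illustrative choices). The paper's modification — preventing an irradiance-driven current change from being misread as the effect of the last perturbation — would add an extra branch here, but its exact form is not given in the abstract, so only the conventional update is shown.

```python
def inc_mppt_step(v, i, v_prev, i_prev, duty, step=0.004, eps=1e-6):
    """One incremental-conductance update of a boost converter duty ratio.

    At the MPP dP/dV = 0, i.e. dI/dV = -I/V; the sign of (dI/dV + I/V)
    tells which side of the power peak the operating point is on.
    """
    dv, di = v - v_prev, i - i_prev
    if abs(dv) < eps:                 # voltage unchanged: react to current only
        if abs(di) > eps:
            duty += -step if di > 0 else step
    else:
        slope = di / dv + i / max(v, eps)
        if abs(slope) > eps:          # not at the MPP yet
            duty += -step if slope > 0 else step   # slope>0: left of MPP, raise V
    return min(max(duty, 0.05), 0.95)
```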

  7. RStorm: Developing and Testing Streaming Algorithms in R

    NARCIS (Netherlands)

    Kaptein, M.C.

    2014-01-01

    Streaming data, consisting of indefinitely evolving sequences, are becoming ubiquitous in many branches of science and in various applications. Computer scientists have developed streaming applications such as Storm and the S4 distributed stream computing platform1 to deal with data streams.

  8. RStorm : Developing and testing streaming algorithms in R

    NARCIS (Netherlands)

    Kaptein, M.C.

    2014-01-01

    Streaming data, consisting of indefinitely evolving sequences, are becoming ubiquitous in many branches of science and in various applications. Computer scientists have developed streaming applications such as Storm and the S4 distributed stream computing platform1 to deal with data streams.

  9. Developed adaptive neuro-fuzzy algorithm to control air conditioning ...

    African Journals Online (AJOL)

    user

    The paper developed an artificial intelligence technique, an adaptive neuro-fuzzy controller, for air conditioning systems at different pressures. The first order Sugeno fuzzy .... condenser heat rejection rate, refrigerant mass flow rate, compressor power, electric power input to the compressor motor and the coefficient of performance.

  10. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    Science.gov (United States)

    Characterization of the uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routine...

  11. Development and validation of a diagnostic model for early differentiation of sepsis and non-infectious SIRS in critically ill children - a data-driven approach using machine-learning algorithms.

    Science.gov (United States)

    Lamping, Florian; Jack, Thomas; Rübsamen, Nicole; Sasse, Michael; Beerbaum, Philipp; Mikolajczyk, Rafael T; Boehne, Martin; Karch, André

    2018-03-15

    Since early antimicrobial therapy is mandatory in septic patients, immediate diagnosis and distinction from non-infectious SIRS is essential but hampered by the similarity of symptoms between both entities. We aimed to develop a diagnostic model for the differentiation of sepsis and non-infectious SIRS in critically ill children based on routinely available parameters (baseline characteristics, clinical/laboratory parameters, technical/medical support). This is a secondary analysis of a randomized controlled trial conducted at a German tertiary-care pediatric intensive care unit (PICU). Two hundred thirty-eight cases of non-infectious SIRS and 58 cases of sepsis (as defined by IPSCC criteria) were included. We applied a Random Forest approach to identify the best set of predictors out of 44 variables measured at the day of onset of the disease. The developed diagnostic model was validated in a temporal split-sample approach. A model including four clinical (length of PICU stay until onset of non-infectious SIRS/sepsis, central line, core temperature, number of non-infectious SIRS/sepsis episodes prior to diagnosis) and four laboratory parameters (interleukin-6, platelet count, procalcitonin, CRP) was identified in the training dataset. Validation in the test dataset revealed an AUC of 0.78 (95% CI: 0.70-0.87). Our model was superior to previously proposed biomarkers such as CRP, interleukin-6, procalcitonin or a combination of CRP and procalcitonin (maximum AUC = 0.63; 95% CI: 0.52-0.74). When aiming at a complete identification of sepsis cases (100%; 95% CI: 87-100%), 28% (95% CI: 20-38%) of non-infectious SIRS cases were classified correctly. Our approach allows early recognition of sepsis with an accuracy superior to previously described biomarkers, and could potentially reduce antibiotic use by 30% in non-infectious SIRS cases. External validation studies are necessary to confirm the generalizability of our approach across populations and treatment practices.
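    As a rough, self-contained illustration of the modeling pipeline (not the study's data or code), the sketch below trains a Random Forest on a synthetic stand-in for the eight selected predictors and reports a validation AUC on a held-out split; the paper's temporal split-sample validation is approximated by an ordinary stratified holdout.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 296  # 238 non-infectious SIRS + 58 sepsis, as in the study

# Hypothetical stand-ins for the eight selected predictors (IL-6, platelets,
# PCT, CRP, stay length, central line, temperature, prior episodes).
X = rng.standard_normal((n, 8))
y = (X[:, 0] + 0.8 * X[:, 2] + 0.5 * X[:, 3] + rng.standard_normal(n) > 1.5).astype(int)

# "Temporal" split-sample validation mimicked by a stratified holdout.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"validation AUC: {auc:.2f}")
# Predictor selection as in the paper would inspect clf.feature_importances_.
```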

  12. SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation

    International Nuclear Information System (INIS)

    Yao, W; Farr, J

    2015-01-01

    Purpose: To develop a random walk model algorithm for calculating proton dose with balanced computation burden and accuracy. Methods: Random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of Gaussian angular distribution of protons due to multiple Coulomb scatter (MCS) is convenient, but in RW the use of Gaussian angular distribution requires an extremely large computation and memory. Thus, our RW model adopts spatial distribution from the angular one to accelerate the computation and to decrease the memory usage. From the physics and comparison with the MC simulations, we have determined and analytically expressed those critical variables affecting the dose accuracy in our RW model. Results: Besides those variables such as MCS, stopping power, energy spectrum after energy absorption etc., which have been extensively discussed in literature, the following variables were found to be critical in our RW model: (1) inverse squared law that can significantly reduce the computation burden and memory, (2) non-Gaussian spatial distribution after MCS, and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded by a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves the dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations

  13. Development of an algorithm for energy efficient automated train driving

    OpenAIRE

    Ozhigin, Artem; Prunev, Pavel; Sverdlin, Victor; Vikulina, Yulia

    2016-01-01

    International audience; Automated train driving function is greatly demanded in high-speed and commuter trains operated by Russian railways. Siemens Corporate Technology is involved in the development of such real-time function within a "robotised" train control system. The main intention of the system is not only to relieve the human driver from routine control over traction and brakes (allowing him to pay more attention to assurance of safety) but also to increase train efficiency by reduci...

  14. Behavioral Modeling for Mental Health using Machine Learning Algorithms.

    Science.gov (United States)

    Srividya, M; Mohanavalli, S; Bhalaji, N

    2018-04-03

    Mental health is an indicator of the emotional, psychological and social well-being of an individual. It determines how an individual thinks, feels and handles situations. Positive mental health helps one to work productively and realize one's full potential. Mental health is important at every stage of life, from childhood and adolescence through adulthood. Many factors contribute to mental health problems which lead to mental illness, like stress, social anxiety, depression, obsessive compulsive disorder, drug addiction, and personality disorders. It is becoming increasingly important to determine the onset of mental illness in order to maintain a proper life balance. The nature of machine learning algorithms and Artificial Intelligence (AI) can be fully harnessed for predicting the onset of mental illness. Such applications, when implemented in real time, will benefit society by serving as a monitoring tool for individuals with deviant behavior. This research work proposes to apply various machine learning algorithms such as support vector machines, decision trees, naïve Bayes classifier, K-nearest neighbor classifier and logistic regression to identify the state of mental health in a target group. The responses obtained from the target group for the designed questionnaire were first subjected to unsupervised learning techniques. The labels obtained as a result of clustering were validated by computing the Mean Opinion Score. These cluster labels were then used to build classifiers to predict the mental health of an individual. Populations from various groups like high school students, college students and working professionals were considered as target groups. The research presents an analysis of applying the aforementioned machine learning algorithms on the target groups and also suggests directions for future work.
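    The two-stage design — unsupervised clustering to obtain labels, then supervised classifiers trained on those labels — can be sketched briefly. Everything below is illustrative rather than the authors' pipeline: the synthetic questionnaire matrix, the choice of two clusters, and an RBF-kernel SVM standing in for the five classifiers compared.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical questionnaire matrix: 300 respondents x 20 Likert-style items.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2.0, 0.7, (150, 20)),    # one latent response profile
               rng.normal(3.5, 0.7, (150, 20))])   # another latent profile

# Unsupervised stage: cluster responses to obtain candidate mental-health labels.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Supervised stage: train a classifier on the cluster labels (the paper also
# compares decision trees, naive Bayes, k-NN and logistic regression).
scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)
print(f"5-fold accuracy of SVM on cluster labels: {scores.mean():.2f}")
```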

  15. Algorithm and simulation development in support of response strategies for contamination events in air and water systems.

    Energy Technology Data Exchange (ETDEWEB)

    Waanders, Bart Van Bloemen

    2006-01-01

    Chemical/Biological/Radiological (CBR) contamination events pose a considerable threat to our nation's infrastructure, especially in large internal facilities, external flows, and water distribution systems. Because physical security can only be enforced to a limited degree, deployment of early warning systems is being considered. However, to achieve reliable and efficient functionality, several complex questions must be answered: (1) where should sensors be placed, (2) how can sparse sensor information be efficiently used to determine the location of the original intrusion, (3) what are the model and data uncertainties, (4) how should these uncertainties be handled, and (5) how can our algorithms and forward simulations be sufficiently improved to achieve real-time performance? This report presents the results of a three-year algorithmic and application development effort to support the identification, mitigation, and risk assessment of CBR contamination events. The main thrust of this investigation was to develop (1) computationally efficient algorithms for strategically placing sensors, (2) a process for identifying contamination events from sparse observations, (3) a characterization of uncertainty through accurate demand forecasts and investigation of uncertain simulation model parameters, (4) risk assessment capabilities, and (5) reduced-order modeling methods. The development effort was focused on water distribution systems, large internal facilities, and outdoor areas.

  16. Multiobjective Sampling Design for Calibration of Water Distribution Network Model Using Genetic Algorithm and Neural Network

    Directory of Open Access Journals (Sweden)

    Kourosh Behzadian

    2008-03-01

    Full Text Available In this paper, a novel multiobjective optimization model is presented for selecting optimal locations in a water distribution network (WDN) with the aim of installing pressure loggers. The pressure data collected at the optimal locations will later be used in the calibration of the proposed WDN model. The objective functions consist of maximizing calibrated model prediction accuracy and minimizing the total cost of the sampling design. In order to decrease the model run time, an optimization model has been developed using a multiobjective genetic algorithm and an adaptive neural network (MOGA-ANN). Neural networks (NNs) are initially trained after a number of initial GA generations and are periodically retrained and updated after the generation of a specified number of full model-analyzed solutions. Trained NNs replace the full fitness evaluation for some chromosomes as the GA progresses. A cache prevents repeated objective-function evaluation of identical chromosomes within the GA. Optimal solutions are obtained through the Pareto-optimal front with respect to the two objective functions. Results show that embedding NNs in the MOGA to approximate portions of the chromosomes' fitness in each generation leads to considerable savings in model run time and is promising for reducing run time in optimization models with significant computational effort.
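
    The surrogate-assisted evaluation idea can be sketched as follows (a generic illustration under stated assumptions: 'full_model' stands in for the expensive WDN simulation, and the retraining schedule is simplified relative to the paper's MOGA-ANN setup):

        # Cached, surrogate-assisted fitness evaluation for a GA.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def full_model(x):                    # placeholder for the costly run
            return float(np.sum(np.asarray(x) ** 2))

        cache, archive_X, archive_y = {}, [], []

        def fitness(x, surrogate=None):
            key = tuple(np.round(x, 6))
            if key in cache:                  # skip repeated chromosomes
                return cache[key]
            if surrogate is not None:
                y = float(surrogate.predict([x])[0])
            else:
                y = full_model(x)             # full model-analyzed solution
                archive_X.append(list(x)); archive_y.append(y)
            cache[key] = y
            return y

        def retrain_surrogate():              # periodic NN retraining
            net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
            return net.fit(np.array(archive_X), np.array(archive_y))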

  17. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
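
    For reference, the criterion being modified has the standard textbook form (a known definition, not a result of this report):

        \mathrm{AIC} = -2\ln\hat{L} + 2k

    where \hat{L} is the maximized likelihood (here obtained via the EM algorithm) and k is the number of estimated parameters; the candidate model with the smaller AIC is preferred.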

  18. Golden Ratio Genetic Algorithm Based Approach for Modelling and Analysis of the Capacity Expansion of Urban Road Traffic Network

    Science.gov (United States)

    Zhang, Lun; Zhang, Meng; Yang, Wenchen; Dong, Decun

    2015-01-01

    This paper presents the modelling and analysis of the capacity expansion of an urban road traffic network (ICURTN). A bilevel programming model is first employed to model the ICURTN, in which the utility of the entire network is maximized subject to the optimal utility of travelers' route choice. Then, an improved hybrid genetic algorithm integrated with the golden ratio (HGAGR) is developed to enhance the local search of simple genetic algorithms, and the proposed capacity expansion model is solved by the combination of the HGAGR and the Frank-Wolfe algorithm. Taking the traditional one-way network and a bidirectional network as study cases, three numerical calculations are conducted to validate the presented model and algorithm, and the primary factors influencing the extended capacity model are analyzed. The calculation results indicate that capacity expansion of the road network is an effective measure to enlarge the capacity of an urban road network, especially under a limited construction budget; the average computation time of the HGAGR is 122 seconds, which meets the real-time demand in the evaluation of road network capacity. PMID:25802512

  19. An improved algorithm to convert CAD model to MCNP geometry model based on STEP file

    International Nuclear Information System (INIS)

    Zhou, Qingguo; Yang, Jiaming; Wu, Jiong; Tian, Yanshan; Wang, Junqiong; Jiang, Hai; Li, Kuan-Ching

    2015-01-01

    Highlights: • Fully exploits common features of cells, making the processing efficient. • Accurately provides the cell position. • Flexible to add new parameters in the structure. • Application of a novel structure in INP file processing to conveniently evaluate cell location. - Abstract: MCNP (Monte Carlo N-Particle Transport Code) is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport. Its input (INP) file has a complicated format and is error-prone when describing geometric models. An algorithm that converts a general geometric model to an MCNP model during MCNP-aided modeling is therefore highly needed. In this paper, we revised and incorporated a number of improvements over our previous work (Yang et al., 2013), which was proposed after the STEP file and INP file were analyzed. Results of experiments show that the revised algorithm is more applicable and efficient than the previous work, optimizing both the extraction of geometry and topology information from the STEP file and the production of the output INP file. This research is promising and serves as a valuable reference for researchers involved with MCNP-related work.

  20. Development of a neonate lung reconstruction algorithm using a wavelet AMG and estimated boundary form

    International Nuclear Information System (INIS)

    Bayford, R; Tizzard, A; Yerworth, R; Kantartzis, P; Liatsis, P; Demosthenous, A

    2008-01-01

    Objective, non-invasive measures of lung maturity and development, oxygen requirements and lung function, suitable for use in small, unsedated infants, are urgently required to define the nature and severity of persisting lung disease, and to identify risk factors for developing chronic lung problems. Disorders of lung growth, maturation and control of breathing are among the most important problems faced by neonatologists. At present, no system exists for continuous monitoring of neonate lung function in intensive care units to reduce the risk of chronic lung disease in infancy. We are in the process of developing a new integrated electrical impedance tomography (EIT) system based on wearable technology that integrates measures of the boundary diameter, derived from the boundary form of neonates, into the reconstruction algorithm. In principle, this approach could reduce image artefacts in the reconstructed image associated with incorrect boundary form assumptions. In this paper, we investigate the accuracy of the boundary form required to minimize artefacts in the reconstruction of neonate lung function. The number of data points needed to create the required boundary form is automatically determined using genetic algorithms. The approach presented in this paper is to assess the quality of the reconstruction under different approximations to the ideal boundary form. We also investigate the use of a wavelet algebraic multi-grid (WAMG) preconditioner to reduce the reconstruction computation requirements. Results are presented that demonstrate that a full 3D model is required to minimize artefacts in the reconstructed image, along with the implementation of a WAMG for EIT.

  1. Field tests and machine learning approaches for refining algorithms and correlations of driver's model parameters.

    Science.gov (United States)

    Tango, Fabio; Minin, Luca; Tesauri, Francesco; Montanari, Roberto

    2010-03-01

    This paper describes field tests on a driving simulator carried out to validate the algorithms and the correlations of dynamic parameters, specifically driving task demand and driver distraction, that are able to predict drivers' intentions. These parameters belong to the driver's model developed by the AIDE (Adaptive Integrated Driver-vehicle InterfacE) European Integrated Project. Drivers' behavioural data have been collected from the simulator tests to model and validate these parameters using machine learning techniques, specifically adaptive neuro-fuzzy inference systems (ANFIS) and artificial neural networks (ANN). Two models of task demand and distraction have been developed, one for each adopted technique. The paper provides an overview of the driver's model, a description of the task demand and distraction modelling, and the tests conducted for the validation of these parameters. A test comparing predicted and expected outcomes of the modelled parameters for each machine learning technique has been carried out; for distraction in particular, promising results (low prediction errors) have been obtained by adopting an artificial neural network.

  2. Development and validation of a novel algorithm based on the ECG magnet response for rapid identification of any unknown pacemaker.

    Science.gov (United States)

    Squara, Fabien; Chik, William W; Benhayon, Daniel; Maeda, Shingo; Latcu, Decebal Gabriel; Lacaze-Gadonneix, Jonathan; Tibi, Thierry; Thomas, Olivier; Cooper, Joshua M; Duthoit, Guillaume

    2014-08-01

    Pacemaker (PM) interrogation requires correct manufacturer identification. However, an unidentified PM is a frequent occurrence, requiring time-consuming steps to identify the device. The purpose of this study was to develop and validate a novel algorithm for PM manufacturer identification, using the ECG response to magnet application. Data on the magnet responses of all recent PM models (≤15 years) from the 5 major manufacturers were collected. An algorithm based on the ECG response to magnet application to identify the PM manufacturer was subsequently developed. Patients undergoing ECG during magnet application in various clinical situations were prospectively recruited in 7 centers. The algorithm was applied in the analysis of every ECG by a cardiologist blinded to PM information. A second blinded cardiologist analyzed a sample of randomly selected ECGs in order to assess the reproducibility of the results. A total of 250 ECGs were analyzed during magnet application. The algorithm led to the correct single manufacturer choice in 242 ECGs (96.8%), whereas 7 (2.8%) could only be narrowed to either 1 of 2 manufacturer possibilities. Only 2 (0.4%) incorrect manufacturer identifications occurred. The algorithm identified Medtronic and Sorin Group PMs with 100% sensitivity and specificity, Biotronik PMs with 100% sensitivity and 99.5% specificity, and St. Jude and Boston Scientific PMs with 92% sensitivity and 100% specificity. The results were reproducible between the 2 blinded cardiologists with 92% concordant findings. Unknown PM manufacturers can be accurately identified by analyzing the ECG magnet response using this newly developed algorithm. Copyright © 2014 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  3. An Improved Algorithm to Delineate Urban Targets with Model-Based Decomposition of PolSAR Data

    Directory of Open Access Journals (Sweden)

    Dingfeng Duan

    2017-10-01

    Full Text Available In model-based decomposition algorithms using polarimetric synthetic aperture radar (PolSAR) data, urban targets are typically identified based on the existence of strong double-bounced scattering. However, urban targets with large azimuth orientation angles (AOAs) produce strong volumetric scattering that appears similar to the scattering characteristics of tree canopies. Due to this scattering ambiguity, urban targets can be misclassified into the vegetation category if the standard classification scheme of model-based PolSAR decomposition algorithms is followed. To resolve the ambiguity and ultimately reduce the misclassification, we introduced a correlation coefficient that characterizes the scattering mechanisms of urban targets with variable AOAs. Then, an existing volumetric scattering model was modified, and a PolSAR decomposition algorithm was developed. The validity and effectiveness of the algorithm were examined using four PolSAR datasets. The algorithm proved effective in delineating urban targets with a wide range of AOAs, and applicable to a broad range of ground targets from urban areas as well as upland and flooded forest stands.

  4. Robust integration schemes for generalized viscoplasticity with internal-state variables. Part 2: Algorithmic developments and implementation

    Science.gov (United States)

    Li, Wei; Saleeb, Atef F.

    1995-01-01

    This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, and to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state of the art in metal viscoplasticity, are considered in the applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both of these models are directly applicable to both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both the GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence. In the present second part of
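
    The Newton-Raphson-with-line-search strategy mentioned above can be sketched generically (a damped Newton solver for a local residual system; the GVIPS/NAV residuals and consistent tangents themselves are not reproduced here):

        # Damped Newton iteration with backtracking line search for R(x) = 0.
        import numpy as np

        def newton_line_search(residual, jacobian, x0, tol=1e-10, max_iter=50):
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                r = residual(x)
                if np.linalg.norm(r) < tol:
                    break
                dx = np.linalg.solve(jacobian(x), -r)
                t = 1.0                        # try the full Newton step first
                while (np.linalg.norm(residual(x + t * dx)) >=
                       np.linalg.norm(r) and t > 1e-4):
                    t *= 0.5                   # backtrack until residual drops
                x = x + t * dx
            return x

        # Usage: solve x**3 - 2 = 0
        print(newton_line_search(lambda x: x ** 3 - 2,
                                 lambda x: np.array([[3 * x[0] ** 2]]),
                                 [1.0]))       # ≈ [1.2599]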

  5. Developing a Random Forest Algorithm for MODIS Global Burned Area Classification

    Directory of Open Access Journals (Sweden)

    Rubén Ramo

    2017-11-01

    Full Text Available This paper aims to develop a global burned area (BA) algorithm for MODIS BRDF-corrected images based on the Random Forest (RF) classifier. Two RF models were generated, including: (1) all MODIS reflective bands; and (2) only the red (R) and near-infrared (NIR) bands. Active fire information, vegetation indices and auxiliary variables were taken into account as well. Both RF models were trained using a statistically designed sample of 130 reference sites, which took into account the global diversity of fire conditions. For each site, fire perimeters were obtained from multitemporal pairs of Landsat TM/ETM+ images acquired in 2008. Those fire perimeters were used to extract burned and unburned areas to train the RF models. Using the standard MCD43A4 resolution (500 × 500 m), the training dataset included 48,365 burned pixels and 6,293,205 unburned pixels. Different combinations of number of trees and number of parameters were tested. The final RF models included 600 trees and 5 attributes. The RF full model (considering all bands) provided a balanced accuracy of 0.94, while the RF RNIR model reached 0.93. As a first assessment of these RF models, they were used to classify daily MCD43A4 images in three test sites for three consecutive years (2006–2008). The selected sites included different ecosystems: Australia (Tropical), Boreal (Canada) and Temperate (California), and extended coverage (totaling more than 2,500,000 km2). Results from both RF models for those sites were compared with national fire perimeters, as well as with two existing BA MODIS products: the MCD45 and MCD64. Considering all three years and three sites, the commission error for the RF full model was 0.16, with an omission error of 0.23. For the RF RNIR model, these errors were 0.19 and 0.21, respectively. The existing MODIS BA products had lower commission errors, but higher omission errors (0.09 and 0.33 for the MCD45 and 0.10 and 0.29 for the MCD64) than those obtained with the RF models, and
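
    A minimal sketch of training an RF with settings like those reported (600 trees, 5 candidate attributes per split) is shown below; the pixel features and labels are placeholders, not the paper's training dataset:

        # Train a burned/unburned RF classifier (illustrative data).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        X = np.random.rand(1000, 7)            # reflectances, indices, etc.
        y = np.random.randint(0, 2, 1000)      # 1 = burned, 0 = unburned
        rf = RandomForestClassifier(n_estimators=600, max_features=5,
                                    n_jobs=-1, random_state=0)
        rf.fit(X, y)
        print(rf.predict(X[:5]))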

  6. State-space models - from the EM algorithm to a gradient approach

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Petersen, Kaare Brandt; Lehn-Schiøler, Tue

    2007-01-01

    Slow convergence is observed in the EM algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the gradient of the log-likelihood function. Such an algorithm is a practical alternative due to the fact that the exact gradient of the log-likelihood function can be computed by recycling components of the expectation-maximization (EM) algorithm. We demonstrate the efficiency of the proposed method in three relevant instances of the linear state-space model. In high signal-to-noise ratios, where EM is particularly...

  7. Filtering Based Recursive Least Squares Algorithm for Multi-Input Multioutput Hammerstein Models

    Directory of Open Access Journals (Sweden)

    Ziyun Wang

    2014-01-01

    Full Text Available This paper considers the parameter estimation problem for Hammerstein multi-input multioutput finite impulse response (FIR-MA) systems. Filtered by the noise transfer function, the FIR-MA model is transformed into a controlled autoregressive model. The key-term variable separation principle is used to derive a data-filtering-based recursive least squares algorithm. The numerical examples confirm that the proposed algorithm can estimate parameters more accurately and has a higher computational efficiency compared with the recursive least squares algorithm.
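
    The core recursive least squares update referred to above has the standard textbook form sketched here (the paper's data filtering and key-term separation steps are not shown):

        # One-step RLS update and a small identification example.
        import numpy as np

        def rls_update(theta, P, phi, y, lam=1.0):
            phi = phi.reshape(-1, 1)
            K = P @ phi / (lam + phi.T @ P @ phi)     # gain vector
            theta = theta + (K * (y - phi.T @ theta)).ravel()
            P = (P - K @ phi.T @ P) / lam             # covariance update
            return theta, P

        rng = np.random.default_rng(0)
        theta, P = np.zeros(2), np.eye(2) * 1e3
        for _ in range(200):                          # y = 2*u1 - 3*u2 + noise
            phi = rng.normal(size=2)
            y = 2 * phi[0] - 3 * phi[1] + 0.01 * rng.normal()
            theta, P = rls_update(theta, P, phi, y)
        print(theta)                                  # ≈ [ 2., -3.]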

  8. Filtering Based Recursive Least Squares Algorithm for Multi-Input Multioutput Hammerstein Models

    OpenAIRE

    Wang, Ziyun; Wang, Yan; Ji, Zhicheng

    2014-01-01

    This paper considers the parameter estimation problem for Hammerstein multi-input multioutput finite impulse response (FIR-MA) systems. Filtered by the noise transfer function, the FIR-MA model is transformed into a controlled autoregressive model. The key-term variable separation principle is used to derive a data filtering based recursive least squares algorithm. The numerical examples confirm that the proposed algorithm can estimate parameters more accurately and has a higher computational...

  9. Development and Evaluation of an Automated Machine Learning Algorithm for In-Hospital Mortality Risk Adjustment Among Critical Care Patients.

    Science.gov (United States)

    Delahanty, Ryan J; Kaufman, David; Jones, Spencer S

    2018-02-06

    Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted. Key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and relies upon data elements that are generated in the course of usual hospital processes. One hundred thirty-one ICUs in 53 hospitals operated by Tenet Healthcare were included, with a cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals) and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set, while the discrimination, calibration, and accuracy of the model were assessed on the validation data set. Model discrimination was evaluated based on the area under the receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under the receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation data set; these results compare favorably to the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs. The Risk of Inpatient Death
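
    The reported discrimination and calibration checks correspond to standard metrics that can be computed as sketched below (generic scikit-learn calls on placeholder arrays; note the paper uses an adjusted variant of the raw Brier score shown here):

        # Discrimination (AUC) and calibration (Brier score) on held-out data.
        import numpy as np
        from sklearn.metrics import roc_auc_score, brier_score_loss

        y_true = np.random.randint(0, 2, 500)       # observed mortality
        p_pred = np.random.rand(500)                # predicted probabilities
        print("AUC:  ", roc_auc_score(y_true, p_pred))
        print("Brier:", brier_score_loss(y_true, p_pred))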

  10. Mathematical model and algorithm of operation scheduling for monitoring situation in local waters

    Directory of Open Access Journals (Sweden)

    Sokolov Boris

    2017-01-01

    Full Text Available A multiple-model approach to the description and investigation of control processes in a regional maritime security system is presented. The processes considered in this paper are control processes for computing operations that provide monitoring of the situation arising in the local water area and are connected with the relocation of ships of different classes (hereafter, active mobile objects (AMO)). The previously developed concept of the active moving object is used. The models describe the operation of AMO automated monitoring and control system (AMCS) elements as well as their interaction with objects-in-service that are sources or recipients of the information being processed. The unified description of various control processes allows synthesizing simultaneously both the technical and the functional structures of the AMO AMCS. The algorithm for solving the scheduling problem is described in terms of the classical theory of optimal automatic control.

  11. JACoW Model learning algorithms for anomaly detection in CERN control systems

    CERN Document Server

    Tilaro, Filippo; Gonzalez-Berges, Manuel; Roshchin, Mikhail; Varela, Fernando

    2018-01-01

    The CERN automation infrastructure consists of over 600 heterogeneous industrial control systems with around 45 million deployed sensors, actuators and control objects. It is therefore evident that monitoring such a huge system represents a challenging and complex task. This paper describes three different mathematical approaches that have been designed and developed to detect anomalies in any of the CERN control systems. Specifically, one of these algorithms is purely based on expert knowledge; the other two mine the historically generated data to create a simple model of the system, which is then used to detect faulty sensor measurements. The presented methods can be categorized as dynamic unsupervised anomaly detection; “dynamic” since the behaviour of the system and the evolution of its attributes are observed and change in time. They are “unsupervised” because we are trying to predict faulty events without examples in the data history. So, the described strategies involve monitoring t...

  12. Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison

    Directory of Open Access Journals (Sweden)

    Olympia Roeva

    2005-12-01

    Full Text Available In this paper the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal. Thus, traditional (gradient-based) local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms have proved to be very suitable for the optimization of highly nonlinear problems with many variables. Genetic algorithms offer robustness and a strong capability for global search, which makes them advantageous for the parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. All of the considered algorithms converged to very similar cost values, but the modified algorithm is several times faster than the other two.
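
    A minimal real-coded GA for least-squares parameter fitting is sketched below (a generic illustration; the E. coli fermentation model and its six parameters are not reproduced, so a toy two-parameter model stands in):

        # Fit model parameters to data by minimizing the sum of squared errors.
        import numpy as np
        rng = np.random.default_rng(0)

        def model(t, p):                         # toy dynamic response
            return p[0] * np.exp(-p[1] * t)

        t = np.linspace(0, 5, 50)
        data = model(t, [2.0, 0.7]) + 0.01 * rng.normal(size=t.size)

        def cost(p):
            return np.sum((model(t, p) - data) ** 2)

        pop = rng.uniform(0, 3, size=(40, 2))
        for _ in range(100):
            f = np.array([cost(p) for p in pop])
            parents = pop[np.argsort(f)[:20]]              # truncation selection
            kids = (parents[rng.integers(0, 20, 20)] +
                    parents[rng.integers(0, 20, 20)]) / 2  # arithmetic crossover
            kids += 0.05 * rng.normal(size=kids.shape)     # Gaussian mutation
            pop = np.vstack([parents, kids])
        print(pop[np.argmin([cost(p) for p in pop])])      # ≈ [2.0, 0.7]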

  13. Development and Evaluation of the National Cancer Institute's Dietary Screener Questionnaire Scoring Algorithms.

    Science.gov (United States)

    Thompson, Frances E; Midthune, Douglas; Kahle, Lisa; Dodd, Kevin W

    2017-06-01

    Background: Methods for improving the utility of short dietary assessment instruments are needed. Objective: We sought to describe the development of the NHANES Dietary Screener Questionnaire (DSQ) and its scoring algorithms and performance. Methods: The 19-item DSQ assesses intakes of fruits and vegetables, whole grains, added sugars, dairy, fiber, and calcium. Two nonconsecutive 24-h dietary recalls and the DSQ were administered in NHANES 2009-2010 to respondents aged 2-69 y (n = 7588). The DSQ frequency responses, coupled with sex- and age-specific portion size information, were regressed on intake from 24-h recalls by using the National Cancer Institute usual intake method to obtain scoring algorithms to estimate mean intakes and the prevalences of reaching 2 a priori threshold levels. The resulting scoring algorithms were applied to the DSQ and compared with intakes estimated with the 24-h recall data only. The stability of the derived scoring algorithms was evaluated in repeated sampling. Finally, the scoring algorithms were applied to screener data, and these estimates were compared with those from multiple 24-h recalls in 3 external studies. Results: The DSQ and its scoring algorithms produced estimates of mean intake and prevalence that agreed closely with those from multiple 24-h recalls. The scoring algorithms were stable in repeated sampling, and differences in the means were small. The development of these scoring algorithms is an advance in the use of screeners. However, because these algorithms may not be generalizable to all studies, a pilot study in the proposed study population is advisable. Although more precise instruments such as 24-h dietary recalls are recommended in most research, the NHANES DSQ provides a less burdensome alternative when time and resources are constrained and interest is in a limited set of dietary factors. © 2017 American Society for Nutrition.

  14. Modeling of pedestrian evacuation based on the particle swarm optimization algorithm

    Science.gov (United States)

    Zheng, Yaochen; Chen, Jianqiao; Wei, Junhong; Guo, Xiwei

    2012-09-01

    By applying the evolutionary algorithm of Particle Swarm Optimization (PSO), we have developed a new pedestrian evacuation model. In the new model, we first introduce the concept of local pedestrian density, defined as the number of pedestrians distributed in a certain area divided by that area. Both the maximum velocity and the size of a particle (pedestrian) are taken to be functions of the local density. An attempt is also made to account for the consequences of impacts between pedestrians by introducing an injury threshold into the model. The updating rule of the model possesses heterogeneous spatial and temporal characteristics. Numerical examples demonstrate that the model is capable of simulating the typical features of evacuation captured by CA (Cellular Automata) based models. In contrast to CA-based simulations, in which the velocity (via step size) of a pedestrian in each time step is a constant value limited to several directions, the new model is more flexible in describing pedestrians' velocities, since they are not restricted to discrete values and directions under the new updating rule.
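
    The canonical PSO update the model builds on is sketched below (standard velocity and position equations on a toy objective; the evacuation-specific density, size and injury rules are not shown):

        # Standard global-best PSO on a toy objective.
        import numpy as np
        rng = np.random.default_rng(1)

        def objective(x):
            return np.sum(x ** 2, axis=1)           # minimum at the origin

        n, d, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
        x = rng.uniform(-5, 5, (n, d)); v = np.zeros((n, d))
        pbest, pval = x.copy(), objective(x)
        g = pbest[np.argmin(pval)]
        for _ in range(100):
            r1, r2 = rng.random((n, d)), rng.random((n, d))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            f = objective(x)
            better = f < pval
            pbest[better], pval[better] = x[better], f[better]
            g = pbest[np.argmin(pval)]
        print(g)                                    # ≈ [0, 0]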

  15. Parameter estimation of internal thermal mass of building dynamic models using genetic algorithm

    International Nuclear Information System (INIS)

    Wang Shengwei; Xu Xinhua

    2006-01-01

    Building thermal transfer models are essential to predict transient cooling or heating requirements for performance monitoring, diagnosis and control strategy analysis. Detailed physical models are time consuming and often not cost effective. Black-box models require a significant amount of training data and may not always reflect the physical behavior. In this study, a building is described using a simplified thermal network model. For the building envelope, the model parameters can be determined from easily available physical details. For building internal mass having thermal capacitance, including components such as furniture, partitions etc., it is very difficult to obtain detailed physical properties. To overcome this problem, this paper proposes to represent the building internal mass with a thermal network structure of lumped thermal mass and to estimate the lumped parameters using operation data. A genetic algorithm estimator is developed to estimate the lumped internal thermal parameters of the building thermal network model using operation data collected from site monitoring. The simplified dynamic model of building internal mass is validated under different weather conditions.

  16. Developing a NIR multispectral imaging for prediction and visualization of peanut protein content using variable selection algorithms

    Science.gov (United States)

    Cheng, Jun-Hu; Jin, Huali; Liu, Zhiwei

    2018-01-01

    The feasibility of developing a multispectral imaging method, using important wavelengths selected from hyperspectral images by genetic algorithm (GA), successive projection algorithm (SPA) and regression coefficient (RC) methods, for modeling and predicting protein content in peanut kernels was investigated for the first time. A partial least squares regression (PLSR) calibration model was established between the spectral data from the selected optimal wavelengths and the reference measured protein content, which ranged from 23.46% to 28.43%. The RC-PLSR model established using eight key wavelengths (1153, 1567, 1972, 2143, 2288, 2339, 2389 and 2446 nm) showed the best predictive results, with a coefficient of determination of prediction (R2P) of 0.901, a root mean square error of prediction (RMSEP) of 0.108 and a residual predictive deviation (RPD) of 2.32. Based on the best model obtained and image processing algorithms, distribution maps of protein content were generated. The overall results of this study indicated that developing a rapid and online multispectral imaging system using the feature wavelengths and PLSR analysis is feasible and promising for determination of the protein content in peanut kernels.
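
    The PLSR calibration step can be sketched with generic scikit-learn calls (the eight key wavelengths are those reported in the abstract, while the spectra and protein values below are placeholders):

        # PLSR calibration at selected NIR wavelengths (illustrative data).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        wavelengths = [1153, 1567, 1972, 2143, 2288, 2339, 2389, 2446]  # nm
        X = np.random.rand(120, len(wavelengths))    # spectra at key bands
        y = 23.46 + 4.97 * np.random.rand(120)       # protein content, %
        pls = PLSRegression(n_components=5).fit(X, y)
        print("R^2 on calibration data:", pls.score(X, y))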

  17. Research on magnetorheological damper suspension with permanent magnet and magnetic valve based on developed FOA-optimal control algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Ping; Gao, Hong [Anhui Polytechnic University, Wuhu (China); Niu, Limin [Anhui University of Technology, Maanshan (China)

    2017-07-15

    Due to the fail-safe problem, it has been difficult for the existing magnetorheological damper (MD) to be widely applied in automotive suspensions. Therefore, permanent magnets and magnetic valves were introduced into existing MDs so that the fail-safe problem could be solved by the magnets and the damping force could be adjusted easily by the magnetic valve. Thus, a new magnetorheological damper with permanent magnet and magnetic valve (MDPMMV) was developed, and the MDPMMV suspension was studied. First of all, the mechanical structure of the existing magnetorheological damper applied in automobile suspensions was redesigned to comprise a permanent magnet and a magnetic valve. In addition, a prediction model of the damping force was built based on electromagnetics theory and the Bingham model. Experimental research was conducted on the newly designed damper, and the goodness of fit between experimental results and model simulations was high. On this basis, a quarter suspension model was built. Then, a Fruit fly Optimization Algorithm (FOA)-optimal control algorithm suitable for automobile suspensions was designed by developing the normal FOA. Finally, simulation experiments and bench tests with pulse road and B-road surface inputs were carried out, and the results indicated that the working performance of the MDPMMV suspension based on the FOA-optimal control algorithm was good.

  18. An Improved Artificial Colony Algorithm Model for Forecasting Chinese Electricity Consumption and Analyzing Effect Mechanism

    Directory of Open Access Journals (Sweden)

    Jingmin Wang

    2016-01-01

    Full Text Available Electricity consumption forecasting has become a hot topic as China's economy has entered a 'new normal' period and the growth of electric power demand has slowed. Therefore, exploring the influence mechanism of Chinese electricity consumption and forecasting electricity consumption are crucial for formulating electrical energy plans scientifically and guaranteeing sustainable economic and social development. Medium- and long-term electricity consumption forecasting is recognized as difficult, being influenced by many factors. This paper proposes an improved Artificial Bee Colony (ABC) algorithm combined with multivariate linear regression (MLR) for exploring the influence mechanism of various factors on Chinese electricity consumption and for forecasting electricity consumption in the future. The results indicate that the improved ABC algorithm, by taking the various factors into account, is superior in accuracy and persuasiveness to traditional models that consider only a single aspect. The overall findings cast light on this model, which provides a new, scientific and effective way to forecast medium- and long-term electricity consumption.

  19. Bobcat 2013: a hyperspectral data collection supporting the development and evaluation of spatial-spectral algorithms

    Science.gov (United States)

    Kaufman, Jason; Celenk, Mehmet; White, A. K.; Stocker, Alan D.

    2014-06-01

    The amount of hyperspectral imagery (HSI) data currently available is relatively small compared to other imaging modalities, and what is suitable for developing, testing, and evaluating spatial-spectral algorithms is virtually nonexistent. In this work, a significant amount of coincident airborne hyperspectral and high spatial resolution panchromatic imagery that supports the advancement of spatial-spectral feature extraction algorithms was collected to address this need. The imagery was collected in April 2013 for Ohio University by the Civil Air Patrol, with their Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor. The target materials, shapes, and movements throughout the collection area were chosen such that evaluation of change detection algorithms, atmospheric compensation techniques, image fusion methods, and material detection and identification algorithms is possible. This paper describes the collection plan, data acquisition, and initial analysis of the collected imagery.

  20. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    Science.gov (United States)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form, to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm, which includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels; these figures are very similar to those from ACCA (1.2%, 7.1%, 93.7%).

  1. Pattern recognition in lithology classification: modeling using neural networks, self-organizing maps and genetic algorithms

    Science.gov (United States)

    Sahoo, Sasmita; Jha, Madan K.

    2017-03-01

    Effective characterization of lithology is vital for the conceptualization of complex aquifer systems, which is a prerequisite for the development of reliable groundwater-flow and contaminant-transport models. However, such information is often limited for most groundwater basins. This study explores the usefulness and potential of a hybrid soft-computing framework: a traditional artificial neural network with gradient descent-momentum training (ANN-GDM) and a traditional genetic algorithm (GA) based ANN (ANN-GA) approach were developed and compared with a novel hybrid self-organizing map (SOM) based ANN (SOM-ANN-GA) method for the prediction of lithology at a basin scale. The framework is demonstrated through a case study involving a complex multi-layered aquifer system in India, where well-log sites were clustered on the basis of sand-layer frequencies; within each cluster, subsurface layers were reclassified into four depth classes based on the maximum drilling depth. ANN models for each depth class were developed using each of the three approaches. Of the three, the hybrid SOM-ANN-GA models were able to recognize incomplete geologic patterns more reasonably, followed by the ANN-GA and ANN-GDM models. It is concluded that the hybrid soft-computing framework can serve as a promising tool for characterizing lithology in groundwater basins with missing lithologic patterns.

  2. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    Science.gov (United States)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  3. Optimal redistribution of an urban air quality monitoring network using atmospheric dispersion model and genetic algorithm

    Science.gov (United States)

    Hao, Yufang; Xie, Shaodong

    2018-03-01

    Air quality monitoring networks play a significant role in identifying the spatiotemporal patterns of air pollution, and they need to be deployed efficiently, with a minimum number of sites. The revision and optimal adjustment of existing monitoring networks is crucial for cities that have undergone rapid urban expansion and experience temporal variations in pollution patterns. An approach based on the Weather Research and Forecasting-California PUFF (WRF-CALPUFF) model and a genetic algorithm (GA) was developed to design an optimal monitoring network. Maximization of coverage with minimum overlap and the ability to detect violations of standards were adopted as the design objectives for the redistributed networks. The non-dominated sorting genetic algorithm was applied to optimize the network size and site locations simultaneously for Shijiazhuang city, one of the most polluted cities in China. The assessment of the current network identified insufficient spatial coverage of SO2 and NO2 monitoring for the expanding city. The optimization results showed that significant improvements in multiple objectives were achieved by redistributing the original network. Efficient coverage of the resulting designs improved to 60.99% and 76.06% of the urban area for SO2 and NO2, respectively. A redistribution design for multiple pollutants including 8 sites was also proposed, with its spatial representation covering 52.30% of the urban area and the overlapped areas decreasing by 85.87% compared with the original network. The abilities to detect violations of standards were not improved as much as the other two objectives due to the conflicting nature of the multiple objectives. Additionally, the results demonstrated that the algorithm was slightly sensitive to the parameter settings, with the number of generations presenting the most significant effect. Overall, our study presents an effective and feasible procedure for air quality network optimization at a city scale.

  4. Model-based optimization strategy of chiller driven liquid desiccant dehumidifier with genetic algorithm

    International Nuclear Information System (INIS)

    Wang, Xinli; Cai, Wenjian; Lu, Jiangang; Sun, Youxian; Zhao, Lei

    2015-01-01

    This study presents a model-based optimization strategy for an actual chiller-driven dehumidifier of a liquid desiccant dehumidification system operating with lithium chloride solution. By analyzing the characteristics of the components, energy predictive models for the components in the dehumidifier are developed. To minimize energy usage while maintaining the outlet air conditions at pre-specified set-points, an optimization problem is formulated with an objective function and constraints reflecting mechanical limitations and component interactions. A model-based optimization strategy using a genetic algorithm is proposed to obtain the optimal set-points for desiccant solution temperature and flow rate, to minimize the energy usage of the dehumidifier. Experimental studies on an actual system are carried out to compare energy consumption between the proposed optimization strategy and the conventional strategy. The results demonstrate that energy consumption using the proposed optimization strategy can be reduced by 12.2% in dehumidifier operation. - Highlights: • Presents a model-based optimization strategy for energy saving in LDDS. • Energy predictive models for components in the dehumidifier are developed. • The optimization strategy is applied and tested in an actual LDDS. • The optimization strategy can achieve energy savings of 12% during operation.

  5. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    Science.gov (United States)

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Algorithms for Hidden Markov Models Restricted to Occurrences of Regular Expressions

    DEFF Research Database (Denmark)

    Tataru, Paula; Sand, Andreas; Hobolth, Asger

    2013-01-01

    Hidden Markov Models (HMMs) are widely used probabilistic models, particularly for annotating sequential data with an underlying hidden structure. Patterns in the annotation are often more relevant to study than the hidden structure itself. A typical HMM analysis consists of annotating the observed data using a decoding algorithm and analyzing the annotation to study patterns of interest. For example, given an HMM modeling genes in DNA sequences, the focus is on occurrences of genes in the annotation. In this paper, we define a pattern through a regular expression and present a restriction of three classical algorithms to take the number of occurrences of the pattern in the hidden sequence into account. We present a new algorithm to compute the distribution of the number of pattern occurrences, and we extend the two most widely used existing decoding algorithms to employ information from...

  7. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    Science.gov (United States)

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automata to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two dimensional cellular automata model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks - the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool in calibrating cellular automata for this application. Experience gained during the calibration of this cellular automata suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.

  8. Interpolation Algorithm and Mathematical Model in Automated Welding of Saddle-Shaped Weld

    Directory of Open Access Journals (Sweden)

    Lianghao Xue

    2018-01-01

    Full Text Available This paper presents a welding-torch pose model and an interpolation algorithm for trajectory control of the saddle-shaped weld formed by the intersection of two pipes; the working principle, interpolation algorithm, welding experiment, and simulation results of the automatic welding system for the saddle-shaped weld are described. A variable-angle interpolation method is used to control the trajectory and pose of the welding torch, which guarantees a constant terminal linear velocity. The mathematical models of the trajectory and pose of the welding torch are established. Simulation and experiment have been carried out to verify the effectiveness of the proposed algorithm and mathematical model. The results demonstrate that the interpolation algorithm meets the interpolation requirements of the saddle-shaped weld and provides ideal feed-rate stability.

  9. Genotype copy number variations using Gaussian mixture models: theory and algorithms.

    Science.gov (United States)

    Lin, Chang-Yun; Lo, Yungtai; Ye, Kenny Q

    2012-10-12

    Copy number variations (CNVs) are important in disease association studies and are usually targeted by most recent microarray platforms developed for GWAS studies. However, the probes targeting the same CNV regions can vary greatly in performance, with some of the probes carrying little more information than pure noise. In this paper, we investigate how to best combine measurements of multiple probes to estimate the copy numbers of individuals under the framework of the Gaussian mixture model (GMM). First, we show that under two regularity conditions, and assuming all parameters except the mixing proportions are known, optimal weights can be obtained so that the univariate GMM based on the weighted average gives exactly the same classification as the multivariate GMM does. We then develop an algorithm that iteratively estimates the parameters, obtains the optimal weights, and uses them for classification. The algorithm performs well on simulated data and two sets of real data, showing a clear advantage over classification based on the equally weighted average.
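
    A minimal sketch of the weighted-average idea follows (generic scikit-learn calls; the weights below are illustrative, not the optimal weights derived in the paper):

        # Fit a univariate two-component GMM to a weighted average of probes
        # and classify samples by mixture component (illustrative data).
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(2)
        probes = rng.normal(0.0, [1.0, 1.0, 3.0], size=(200, 3))
        probes[:100] += 2.0                       # samples with an extra copy
        w = np.array([0.45, 0.45, 0.10])          # down-weight the noisy probe
        z = (probes @ w).reshape(-1, 1)           # weighted-average summary
        gmm = GaussianMixture(n_components=2, random_state=0).fit(z)
        print(gmm.predict(z[:5]))                 # inferred copy-number class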

  10. Using an Improved Artificial Bee Colony Algorithm for Parameter Estimation of a Dynamic Grain Flow Model

    Directory of Open Access Journals (Sweden)

    He Wang

    2018-01-01

    Full Text Available An effective method is proposed to estimate the parameters of a dynamic grain flow model (DGFM). To this end, an improved artificial bee colony (IABC) algorithm is used to estimate the unknown parameters of the DGFM by minimizing a given objective function. A comparative study of the performance of the IABC algorithm and other ABC variants on several benchmark functions is carried out, and the results present a significant improvement in performance over the other ABC variants. The practical application performance of the IABC is compared to that of nonlinear least squares (NLS), particle swarm optimization (PSO), and a genetic algorithm (GA). The comparison demonstrates that the IABC algorithm is more accurate and effective for the parameter estimation of the DGFM than the other algorithms.

  11. PM Synchronous Motor Dynamic Modeling with Genetic Algorithm ...

    African Journals Online (AJOL)

    Adel

    This paper proposes a dynamic modeling simulation for an AC Surface Permanent Magnet Synchronous Motor (SPMSM) with the aid of the MATLAB/Simulink environment. The proposed model would be used in many applications such as automotive, mechatronics, green energy applications, and machine drives. The modeling ...

  12. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    Science.gov (United States)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop algorithms with a unique dataset, which could influence the development through its particularities. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, ranging from an individual house of almost one hundred square metres to larger buildings of several thousand square metres. The datasets provide various space configurations and present numerous different occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  13. Optimizing the Forward Algorithm for Hidden Markov Model on IBM Roadrunner clusters

    Directory of Open Access Journals (Sweden)

    SOIMAN, S.-I.

    2015-05-01

    Full Text Available In this paper we present a parallel solution of the Forward Algorithm for Hidden Markov Models. The Forward algorithm computes the probability of a hidden state of a Markov model at a certain time, and this process is recursive. The whole process requires large computational resources for models with a large number of states and long observation sequences. Our solution for reducing the computational time is a multilevel parallelization of the Forward algorithm. Two types of cores were used in our implementation, one for each level of parallelization, cores that are engraved on the same chip of the PowerXCell 8i processor. This hybrid processor architecture allowed us to obtain a speedup factor of over 40 relative to the sequential algorithm for a model with 24 states and 25 million observable symbols. Experimental results showed that the parallel Forward algorithm can evaluate the probability of an observation sequence on a hidden Markov model 40 times faster than the classic one does. Based on the performance obtained, we demonstrate the applicability of this parallel implementation of the Forward algorithm to complex problems such as large-vocabulary speech recognition.
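
    For reference, the sequential recursion being parallelized has the standard form below (a textbook NumPy implementation; the multilevel PowerXCell decomposition itself is not shown):

        # Forward algorithm: P(observation sequence) for an HMM.
        import numpy as np

        def forward(A, B, pi, obs):
            alpha = pi * B[:, obs[0]]             # initialization
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]     # recursion step
            return alpha.sum()                    # termination

        A = np.array([[0.7, 0.3], [0.4, 0.6]])    # state transitions
        B = np.array([[0.9, 0.1], [0.2, 0.8]])    # emission probabilities
        pi = np.array([0.5, 0.5])                 # initial distribution
        print(forward(A, B, pi, [0, 1, 0]))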

  14. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    International Nuclear Information System (INIS)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-01-01

    This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of the available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of aerosol optical properties, in which the major principal components (PCs) of surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of the four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and to the weighting coefficients for each PC of surface reflectance are added to the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of the Jacobians are validated against finite-difference calculations with relative errors of less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield the reconstruction of spectral surface reflectance with errors of less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and a root-mean-square error of less than 0.003), which suggests self-consistency in the
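
    For illustration, one common power-law parameterization consistent with the four-parameter description above is (the exact functional form adopted by the authors may differ):

        m(\lambda) = m_{r}\left(\frac{\lambda}{\lambda_{0}}\right)^{-\alpha_{r}}
                     - i\, m_{i}\left(\frac{\lambda}{\lambda_{0}}\right)^{-\alpha_{i}}

    where m_r and \alpha_r control the real part, m_i and \alpha_i the imaginary part, and \lambda_0 is a reference wavelength.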

  15. Development and evaluation of spectral transformation algorithms for analysis and characterization of forest vegetation

    Science.gov (United States)

    Zhao, Guang

    1998-11-01

    This research reviewed and evaluated some of the most important statistically based spectral transformation algorithms. Two spectral transformation algorithms, canonical discriminant analysis (CDA) and multiple logistic regression (MLR) transformations, were developed and evaluated in two independent studies. The objectives were to investigate whether the methods are capable of addressing the two fundamental questions raised at the beginning: separating spectral overlap and quantifying spatial variability under forest conditions. It was generalized from previous research that spectral transformations are usually performed to complete one or more tasks, with the ultimate goal of optimizing data structure to improve visual interpretation, analysis, and classification performance. PCA is the most widely used spectral transformation technique. The Kauth-Thomas Tasseled Cap transformed components are important vegetation indices, and they were developed using sensor and scene physical characteristics and the Gram-Schmidt orthogonalization process. A theoretical comparison was conducted to identify major differences among the Tasseled Cap, PCA, and CDA transformations in their objectives, prior knowledge requirements, limitations, processes, and variance-covariance usage. CDA was a better "separation" algorithm than PCA in improving overall classification accuracy. CDA was used as a transformation technique not only to increase class separation but also to reduce data dimension and noise. The last two canonical components usually contain largely noise variance, holding less than 1 percent of the variance found in the source variables. A sub-dimension (the first four components) is preferable for final classifications to the whole derived canonical component data set, as the noise variances associated with the last two components are removed. Comparison of the CDA and PCA eigenstructure matrices revealed that there is no distinct pattern in terms of source variable contributions and loading signs.
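
    In practice, CDA as described here corresponds to linear discriminant analysis used as a transformation rather than as a classifier. A minimal sketch follows; band counts, class counts, and data are synthetic placeholders, not the study's.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 7))       # 300 pixels x 7 spectral bands (synthetic)
        y = rng.integers(0, 5, size=300)    # 5 vegetation classes (synthetic labels)

        # CDA maximizes between-class variance relative to within-class
        # variance, unlike PCA, which ignores class labels and maximizes
        # total variance.
        cda = LinearDiscriminantAnalysis(n_components=4)
        Z = cda.fit_transform(X, y)         # canonical scores; the noisy trailing
                                            # components are dropped, as in the study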

  16. A valence force field-Monte Carlo algorithm for quantum dot growth modeling

    DEFF Research Database (Denmark)

    Barettin, Daniele; Kadkhodazadeh, Shima; Pecchia, Alessandro

    2017-01-01

    We present a novel kinetic Monte Carlo version of the atomistic valence force field algorithm in order to model a self-assembled quantum dot growth process. We show that our atomistic model is both computationally favorable and captures more details compared to traditional kinetic Monte Carlo models...
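
    The kinetic Monte Carlo core of such a scheme is the standard rate-based event selection; in the paper, the event rates would derive from valence-force-field energy barriers. Below is a generic, illustrative step (our sketch, not the authors' code):

        import numpy as np

        def kmc_step(rates, rng):
            """One kinetic Monte Carlo step (BKL / Gillespie selection).
            rates -- array of rates for all candidate events (e.g., adatom
                     hops whose barriers a valence force field would supply)
            Returns (chosen event index, time increment)."""
            total = rates.sum()
            # choose an event with probability proportional to its rate
            event = np.searchsorted(np.cumsum(rates), rng.random() * total)
            # advance the clock by an exponentially distributed waiting time
            dt = -np.log(rng.random()) / total
            return event, dt

        rng = np.random.default_rng(0)
        rates = np.array([0.5, 2.0, 0.1])   # hypothetical hop/attachment rates
        event, dt = kmc_step(rates, rng)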

  17. The Rasch Poisson Counts Model for Incomplete Data: An Application of the EM Algorithm.

    Science.gov (United States)

    Jansen, Margo G. H.

    1995-01-01

    The Rasch Poisson counts model is a latent trait model for the situation in which "K" tests are administered to "N" examinees and the test score is a count (the number of occurrences of some event). A mixed model is presented that applies the EM algorithm and can allow for missing data. (SLD)
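
    In the Rasch Poisson counts model, the expected score of examinee n on test i is the product of a person parameter and a test parameter, lambda_ni = theta_n * eps_i, and the EM idea for missing scores is to replace unobserved counts by their current expectations. The sketch below is our own illustrative EM-style scheme under that model, not Jansen's exact formulation:

        import numpy as np

        def em_rasch_poisson(X, mask, n_iter=200):
            """X[n, i] ~ Poisson(theta[n] * eps[i]); mask[n, i] is True
            where a score was observed. Illustrative sketch only."""
            theta = np.ones(X.shape[0])
            eps = np.ones(X.shape[1])
            for _ in range(n_iter):
                lam = np.outer(theta, eps)
                Xc = np.where(mask, X, lam)          # E-step: fill missing cells
                theta = Xc.sum(axis=1) / eps.sum()   # M-step: person parameters
                eps = Xc.sum(axis=0) / theta.sum()   # M-step: test parameters
                s = eps.mean()                       # fix the scale indeterminacy:
                eps /= s                             # theta and eps are identified
                theta *= s                           # only up to a constant factor
            return theta, eps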

  18. An Analysis of OpenACC Programming Model: Image Processing Algorithms as a Case Study

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2014-06-01

    Full Text Available Graphics processing units and similar accelerators have been used intensively in general-purpose computations for several years. In the last decade, GPU architecture and organization changed dramatically to support an ever-increasing demand for computing power. Along with these changes in hardware, novel programming models have been proposed, such as NVIDIA's Compute Unified Device Architecture (CUDA) and the Khronos Group's Open Computing Language (OpenCL). Although numerous commercial and scientific applications have been developed using these two models, they still pose a significant challenge for less experienced users. Users from various scientific and engineering communities would like to speed up their applications without having to understand a low-level programming model and the underlying hardware in depth. In 2011, the OpenACC programming model was launched. Much like OpenMP for multicore processors, OpenACC is a high-level, directive-based programming model for manycore processors such as GPUs. This paper presents an analysis of the OpenACC programming model and its applicability in typical domains such as image processing. Three simple image processing algorithms were implemented for execution on the GPU with OpenACC. The results were compared with those of their sequential counterparts and are briefly discussed.

  19. Development and application of an algorithm to compute weighted multiple glycan alignments.

    Science.gov (United States)

    Hosoda, Masae; Akune, Yukie; Aoki-Kinoshita, Kiyoko F

    2017-05-01

    A glycan consists of monosaccharides linked by glycosidic bonds; it has branches and forms complex molecular structures. Databases have been developed to store large amounts of data from glycan-binding experiments, including glycan arrays probed with glycan-binding proteins. However, there are few bioinformatics techniques for analyzing large amounts of glycan data because there are few tools that can handle the complexity of glycan structures. We have therefore developed the MCAW (Multiple Carbohydrate Alignment with Weights) tool, which can align multiple glycan structures, to aid in understanding their function as binding recognition molecules. We describe in detail the first algorithm to perform multiple glycan alignments by modeling glycans as trees. To test our tool, we prepared several data sets, and we found that glycan motifs could be successfully aligned without any prior knowledge supplied to the tool, and that the known recognition and binding sites of glycans could be aligned at a high rate across all the datasets tested. We thus claim that our tool is able to find meaningful glycan recognition and binding patterns using data obtained from glycan-binding experiments. The development and availability of an effective multiple glycan alignment tool opens possibilities for many other glycoinformatics analyses, making this work a significant step toward furthering glycomics analysis. Availability and implementation: http://www.rings.t.soka.ac.jp. Contact: kkiyoko@soka.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
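
    The key modeling step is treating a glycan as a labeled tree of monosaccharides and scoring alignments recursively over branches. As a toy pairwise analogue of the weighted multiple alignment performed by MCAW (all structures, scores, and names below are illustrative):

        from itertools import permutations

        def tree_score(t1, t2):
            """Similarity of two glycans modeled as (label, children) trees:
            match the roots, then pick the child pairing with the best total."""
            label1, kids1 = t1
            label2, kids2 = t2
            score = 1.0 if label1 == label2 else 0.0
            if kids1 and kids2:
                small, big = sorted((kids1, kids2), key=len)
                best = 0.0
                for perm in permutations(big, len(small)):  # all child pairings
                    best = max(best, sum(tree_score(a, b) for a, b in zip(small, perm)))
                score += best
            return score

        # Example: two small glycan fragments (labels are monosaccharide codes)
        g1 = ("GlcNAc", [("Man", [("Man", []), ("Man", [])])])
        g2 = ("GlcNAc", [("Man", [("Man", []), ("Glc", [])])])
        print(tree_score(g1, g2))   # 3.0: three residues match along aligned branches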

  20. Development of Turbulent Diffusion Transfer Algorithms to Estimate Lake Tahoe Water Budget

    Science.gov (United States)

    Sahoo, G. B.; Schladow, S. G.; Reuter, J. E.

    2012-12-01

    Evaporative loss is a dominant component of the Lake Tahoe hydrologic budget because the watershed area (813 km2) is small compared to the lake surface area (501 km2). The 5.5 m high dam built at the lake's only outlet, the Truckee River at Tahoe City, can increase the lake's capacity by approximately 0.9185 km3. The lake provides flood protection for downstream areas and serves as a source of water supply for downstream cities, irrigation, hydropower, and instream environmental requirements. When the lake water level falls below the natural rim, cessation of flows from the lake causes problems for water supply, irrigation, and fishing. Therefore, it is important to develop algorithms that correctly estimate the lake's hydrologic budget. We developed a turbulent diffusion transfer model and coupled it to the dynamic lake model (DLM-WQ). We generated the stream flows and pollutant loadings of the streams using the watershed model supported by the US Environmental Protection Agency (USEPA), the Loading Simulation Program in C++ (LSPC). The bulk transfer coefficients were calibrated using the correlation coefficient (R2) as the objective function. Sensitivity analysis was conducted for the meteorological inputs and model parameters. The DLM-WQ estimates of lake water level and water temperature agreed with the measured records, with R2 equal to 0.96 and 0.99, respectively, for the period 1994 to 2008. The estimated average evaporation from the lake, stream inflow, precipitation over the lake, groundwater fluxes, and outflow from the lake during 1994 to 2008 were found to be 32.0%, 25.0%, 19.0%, 0.3%, and 11.7%, respectively.
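
    Because evaporation dominates the budget, the calibrated bulk transfer coefficients matter most for that term. A bulk-aerodynamic evaporation estimate of the kind such coefficients feed into looks like the following; the formula is standard, while the coefficient and meteorological values are illustrative, not the study's.

        def bulk_evaporation(rho_air, c_e, wind, q_surface, q_air):
            """Bulk aerodynamic evaporative mass flux [kg m^-2 s^-1]:
            E = rho_a * C_E * U * (q_s - q_a), where C_E is the bulk
            transfer coefficient calibrated against observations."""
            return rho_air * c_e * wind * (q_surface - q_air)

        # e.g. rho_a = 1.0 kg/m3 (high-altitude air), C_E ~ 1.4e-3, U = 5 m/s,
        # q_s - q_a = 0.004 kg/kg  ->  E ~ 2.8e-5 kg m^-2 s^-1 (~2.4 mm/day)
        E = bulk_evaporation(1.0, 1.4e-3, 5.0, 0.0092, 0.0052)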