WorldWideScience

Sample records for plausible exchange algorithm

  1. Systematic reviews need to consider applicability to disadvantaged populations: inter-rater agreement for a health equity plausibility algorithm.

    Science.gov (United States)

    Welch, Vivian; Brand, Kevin; Kristjansson, Elizabeth; Smylie, Janet; Wells, George; Tugwell, Peter

    2012-12-19

    Systematic reviews have been challenged to consider effects on disadvantaged groups. A priori specification of subgroup analyses is recommended to increase the credibility of these analyses. This study aimed to develop and assess inter-rater agreement for an algorithm that systematic review authors can use to predict whether differences in effect measures are likely for disadvantaged populations relative to advantaged populations (only relative effect measures were addressed). A health equity plausibility algorithm was developed using clinimetric methods, with three items based on a literature review, key informant interviews and methodology studies. The three items dealt with the plausibility of differences in relative effects across sex or socioeconomic status (SES) due to: 1) patient characteristics; 2) intervention delivery (i.e., implementation); and 3) comparators. Thirty-five respondents (clinicians, methodologists and research users) used these questions to assess the likelihood of differences across sex and SES for ten systematic reviews. We assessed inter-rater reliability using the Fleiss multi-rater kappa. The proportion agreement was 66% for patient characteristics (95% confidence interval: 61% to 71%), 67% for intervention delivery (95% confidence interval: 62% to 72%) and 55% for the comparator (95% confidence interval: 50% to 60%). The Fleiss kappa ranged from 0 to 0.199, representing very low agreement beyond chance. Users of systematic reviews rated important differences in relative effects across sex and socioeconomic status as plausible for a range of individual- and population-level interventions; however, inter-rater agreement for these assessments was very low. There is an unmet need for discussion of the plausibility of differential effects in systematic reviews. Increased consideration of external validity and applicability to different populations and settings is warranted in systematic reviews to meet this need.
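
    The inter-rater statistic used here, the Fleiss multi-rater kappa, is simple to compute directly. A minimal NumPy sketch with a made-up ratings table (rows are reviews, columns are rating categories, entries are counts of the 35 raters; the data below are invented, not the study's):

    ```python
    import numpy as np

    def fleiss_kappa(table):
        """Fleiss' kappa for a subjects x categories table of rating counts."""
        table = np.asarray(table, dtype=float)
        n = table.sum(axis=1)[0]  # raters per subject (assumed constant)
        # Mean per-subject observed agreement
        p_obs = ((table * (table - 1)).sum(axis=1) / (n * (n - 1))).mean()
        # Chance agreement from the marginal category proportions
        p_cat = table.sum(axis=0) / table.sum()
        p_exp = (p_cat ** 2).sum()
        return (p_obs - p_exp) / (1 - p_exp)

    # Hypothetical data: 10 reviews, each rated "differences plausible" or
    # "not plausible" by 35 raters
    rng = np.random.default_rng(0)
    yes = rng.integers(10, 26, size=10)
    print(round(fleiss_kappa(np.column_stack([yes, 35 - yes])), 3))
    ```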

  2. Rethinking exchange market models as optimization algorithms

    Science.gov (United States)

    Luquini, Evandro; Omar, Nizam

    2018-02-01

    The exchange market model has mainly been used to study the inequality problem. Although the inequality problem in human society is very important, the dynamics of exchange market models up to their stationary state, and their capability of ranking individuals, are interesting in their own right. This study considers the hypothesis that the exchange market model can be understood as an optimization procedure. We present the implications for algorithmic optimization, as well as the possibility of a new family of exchange market models.
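
    A minimal sketch of one standard exchange market model (the "yard-sale" kinetic exchange model; the authors' exact variant is an assumption here). The stationary state is highly unequal, and the resulting wealth ordering can be read as the ranking of individuals the abstract mentions:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_agents, n_steps = 1000, 200_000
    wealth = np.ones(n_agents)            # everyone starts equal

    for _ in range(n_steps):
        i, j = rng.integers(n_agents, size=2)
        if i == j:
            continue
        # Stake is a random fraction of the poorer agent's wealth
        stake = rng.random() * min(wealth[i], wealth[j])
        if rng.random() < 0.5:            # fair coin decides the winner
            wealth[i] += stake; wealth[j] -= stake
        else:
            wealth[i] -= stake; wealth[j] += stake

    # Inequality of the stationary state, measured by the Gini coefficient
    gini = np.abs(wealth[:, None] - wealth[None, :]).mean() / (2 * wealth.mean())
    print("Gini:", round(gini, 3))
    ```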

  3. Anatomically Plausible Surface Alignment and Reconstruction

    DEFF Research Database (Denmark)

    Paulsen, Rasmus R.; Larsen, Rasmus

    2010-01-01

    With the increasing clinical use of 3D surface scanners, there is a need for accurate and reliable algorithms that can produce anatomically plausible surfaces. In this paper, a combined method for surface alignment and reconstruction is proposed. It is based on an implicit surface representation...

  4. Algorithms for Lightweight Key Exchange.

    Science.gov (United States)

    Alvarez, Rafael; Caballero-Gil, Cándido; Santonja, Juan; Zamora, Antonio

    2017-06-27

    Public-key cryptography is too slow for general-purpose encryption, so most applications limit its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented on low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios, where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determine those best suited to the requirements of critical infrastructure and emergency applications, propose a security framework based on these algorithms, and study its application to decentralized node or sensor networks.
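
    A micro-benchmark sketch in the spirit of the paper, assuming the third-party Python `cryptography` package; it times X25519, a key-exchange primitive commonly recommended for constrained devices (the paper's exact algorithm set is not reproduced here):

    ```python
    import timeit
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    def x25519_exchange():
        # Each side generates an ephemeral key pair and derives the shared secret
        a, b = X25519PrivateKey.generate(), X25519PrivateKey.generate()
        shared_a = a.exchange(b.public_key())
        shared_b = b.exchange(a.public_key())
        assert shared_a == shared_b

    n = 500
    t = timeit.timeit(x25519_exchange, number=n)
    print(f"X25519 full exchange: {1000 * t / n:.3f} ms per handshake")
    ```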

  5. Searching for Plausible N-k Contingencies Endangering Voltage Stability

    DEFF Research Database (Denmark)

    Weckesser, Johannes Tilman Gabriel; Van Cutsem, Thierry

    2017-01-01

    This paper presents a novel search algorithm that uses time-domain simulations to identify plausible N − k contingencies endangering voltage stability. Starting from an initial list of disturbances, progressively more severe contingencies are investigated. After simulation of an N − k contingency, the simulation results are assessed. If the system response is unstable, a plausible harmful contingency sequence has been found. Otherwise, components affected by the contingencies are considered as candidates for the next event, leading to N − (k + 1) contingencies. This implicitly takes into account hidden failures...
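
    A skeleton of the search loop described above; `simulate` and `affected_components` are hypothetical stand-ins for the time-domain simulator interface, and the toy demo at the bottom is invented:

    ```python
    from collections import deque

    def search_contingencies(initial_events, simulate, affected_components, k_max=3):
        """Breadth-first expansion of contingency sequences, as sketched in the abstract."""
        harmful = []
        queue = deque((e,) for e in initial_events)
        while queue:
            seq = queue.popleft()
            stable, results = simulate(seq)
            if not stable:
                harmful.append(seq)       # plausible harmful N-k sequence found
            elif len(seq) < k_max:
                # Components disturbed by this sequence become candidate next events
                for comp in affected_components(results):
                    queue.append(seq + (comp,))
        return harmful

    # Toy demo: the system goes unstable whenever 'A' and 'B' are both out
    def simulate(seq):
        return not {'A', 'B'} <= set(seq), seq
    def affected_components(results):
        return {'A', 'B', 'C'} - set(results)

    print(search_contingencies(['A', 'B', 'C'], simulate, affected_components, k_max=2))
    ```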

  6. Exergetic optimization of shell and tube heat exchangers using a genetic based algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Oezcelik, Yavuz [Ege University, Bornova, Izmir (Turkey). Engineering Faculty, Chemical Engineering Department

    2007-08-15

    In computer-based optimization, many thousands of alternative shell and tube heat exchangers may be examined by varying a large number of exchanger parameters, such as tube length, tube outer diameter, pitch size, layout angle, baffle space ratio and number of tube-side passes. In the present study, a genetic-based algorithm was developed, programmed and applied to estimate the optimum values of the discrete and continuous variables of MINLP (mixed integer nonlinear programming) test problems. The results of the test problems show that the genetic-based algorithm can estimate acceptable values of the continuous variables and optimum values of the integer variables. Finally, the genetic-based algorithm was extended to perform parametric studies and to find the optimum configuration of heat exchangers by minimizing the sum of the annual capital cost and the exergetic cost of the shell and tube heat exchangers. The results of the example problems show that the proposed algorithm is applicable for finding optimum and near-optimum alternatives of shell and tube heat exchanger configurations. (author)

  7. Enhanced Diffie-Hellman algorithm for reliable key exchange

    Science.gov (United States)

    Aryan; Kumar, Chaithanya; Vincent, P. M. Durai Raj

    2017-11-01

    Diffie-Hellman is one of the earliest public-key procedures and is a reliable way of exchanging cryptographic keys securely. The concept was introduced by Ralph Merkle, and the algorithm is named after Whitfield Diffie and Martin Hellman. In the Diffie-Hellman algorithm, the sender and receiver establish a common secret key and then communicate with each other over a public channel that is known to everyone. A number of internet services are secured by Diffie-Hellman. The challenge of a public-key cryptosystem is that the sender has to trust the public key received from the receiver, and vice versa. A man-in-the-middle attack is quite possible against the existing Diffie-Hellman algorithm: the attacker sits in the public channel, intercepts the public keys of both sender and receiver, and sends each party a public key generated by himself. A denial-of-service attack is another attack commonly mounted against Diffie-Hellman, in which the attacker tries to stop the communication between sender and receiver by deleting messages or by confusing the parties with miscommunication. Further attacks, such as insider and outsider attacks, are also possible. To reduce the possibility of these attacks, we enhance the Diffie-Hellman algorithm. In this paper, we extend Diffie-Hellman by applying its own construction a second time to obtain a stronger secret key, which is then exchanged between sender and receiver so that a new shared secret key is generated for each message. The second secret key is generated by taking a primitive root of the first secret key.
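
    For reference, a minimal sketch of the textbook Diffie-Hellman exchange that the paper builds on, with toy parameters far below secure sizes; the paper's second-stage key derived from a primitive root of the first secret is not reproduced here:

    ```python
    import secrets

    # Toy public parameters: a small prime p and a base g
    # (illustrative only; real deployments use >=2048-bit groups)
    p, g = 0xFFFFFFFB, 5

    # Each party picks a private exponent and publishes g^x mod p
    a = secrets.randbelow(p - 2) + 1
    b = secrets.randbelow(p - 2) + 1
    A, B = pow(g, a, p), pow(g, b, p)   # exchanged over the public channel

    # Both sides derive the same shared secret
    assert pow(B, a, p) == pow(A, b, p)
    print(hex(pow(B, a, p)))
    ```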

  8. An improved shuffled frog leaping algorithm based evolutionary framework for currency exchange rate prediction

    Science.gov (United States)

    Dash, Rajashree

    2017-11-01

    Forecasting the purchasing power of one currency with respect to another is always an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for simpler and more efficient models with better prediction capability. In this paper, an evolutionary framework is proposed that uses an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets (USD/CAD, USD/CHF and USD/JPY) accumulated over the same period of time. The model's performance is also compared with that of two other evolutionary learning techniques, the shuffled frog leaping algorithm and particle swarm optimization. Analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rates compared to the other models included in the study.

  9. Turning Simulation into Estimation: Generalized Exchange Algorithms for Exponential Family Models.

    Directory of Open Access Journals (Sweden)

    Maarten Marsman

    The Single Variable Exchange algorithm is based on a simple idea: any model that can be simulated can be estimated by producing draws from the posterior distribution. We build on this idea by framing the Exchange algorithm as a mixture of Metropolis transition kernels and propose strategies that automatically select the more efficient transition kernels. In this manner, we achieve significant improvements in the convergence rate and autocorrelation of the Markov chain without requiring anything more than the ability to simulate from the model. Our focus is on statistical models in the exponential family, and we use two simple models from educational measurement to illustrate the contribution.
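
    A minimal sketch of the basic exchange move for a toy model, assuming only the ability to simulate data and to evaluate an unnormalized likelihood. Here the model is a Poisson with unknown rate whose normalizing constant e^(-lambda) is deliberately never evaluated; a flat prior and a symmetric proposal are assumed:

    ```python
    import numpy as np
    from math import lgamma

    rng = np.random.default_rng(1)

    def log_h(y, lam):
        # Unnormalized Poisson log-likelihood: log(lam^y / y!), Z(lam) = e^lam dropped
        return np.sum(y * np.log(lam) - np.array([lgamma(v + 1) for v in y]))

    y_obs = rng.poisson(4.0, size=50)     # observed data, true rate 4
    lam, chain = 1.0, []
    for _ in range(5000):
        lam_prop = abs(lam + rng.normal(0, 0.5))        # symmetric reflected proposal
        y_aux = rng.poisson(lam_prop, size=y_obs.size)  # auxiliary data from the model
        # Exchange ratio: normalizing constants Z(lam), Z(lam') cancel exactly
        log_a = (log_h(y_obs, lam_prop) + log_h(y_aux, lam)
                 - log_h(y_obs, lam) - log_h(y_aux, lam_prop))
        if np.log(rng.random()) < log_a:
            lam = lam_prop
        chain.append(lam)
    print("posterior mean ~", round(float(np.mean(chain[1000:])), 3))
    ```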

  10. A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments

    KAUST Repository

    Harman, Radoslav; Filová , Lenka; Richtarik, Peter

    2018-01-01

    We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing and new, more efficient algorithms. Within this class of methods, we construct a simple randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to that of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion, D-optimality, which also has applications beyond experimental design, such as the construction of the minimum-volume ellipsoid containing a given set of data points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights under the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.
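
    REX itself is not reproduced here, but a minimal sketch of the classical multiplicative algorithm for D-optimal approximate designs, a much simpler member of the same weight-update family, illustrates the setting (a finite candidate set is assumed):

    ```python
    import numpy as np

    def d_optimal_weights(X, iters=500):
        """Multiplicative algorithm: X is an n_candidates x m model matrix."""
        n, m = X.shape
        w = np.full(n, 1.0 / n)                    # uniform initial design
        for _ in range(iters):
            M = X.T @ (w[:, None] * X)             # information matrix M(w)
            d = np.einsum('ij,jk,ik->i', X, np.linalg.inv(M), X)  # variances d(x_i, w)
            w *= d / m                             # w_i <- w_i * d_i / m (monotone for D-opt)
        return w

    # Quadratic regression on [-1, 1]: the D-optimal design puts weight ~1/3 on -1, 0, 1
    t = np.linspace(-1, 1, 21)
    X = np.column_stack([np.ones_like(t), t, t**2])
    w = d_optimal_weights(X)
    print(t[w > 1e-2], np.round(w[w > 1e-2], 3))
    ```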

  12. A novel Random Walk algorithm with Compulsive Evolution for heat exchanger network synthesis

    International Nuclear Information System (INIS)

    Xiao, Yuan; Cui, Guomin

    2017-01-01

    Highlights: • A novel Random Walk algorithm with Compulsive Evolution is proposed for HENS. • A simple and feasible evolution strategy is presented in the RWCE algorithm. • The integer and continuous variables of the HEN are optimized simultaneously in RWCE. • RWCE demonstrates relatively strong global search ability in HEN optimization. - Abstract: Heat exchanger network (HEN) synthesis is highly combinatorial, nonlinear and nonconvex, contributing to unmanageable computational time and to the challenge of identifying the globally optimal network design. Stochastic methods are robust and show powerful global optimizing ability. Based on the common characteristic of different stochastic methods, namely randomness, a novel Random Walk algorithm with Compulsive Evolution (RWCE) is proposed to achieve the best possible total annual cost of a heat exchanger network with a relatively simple and feasible evolution strategy. A population of heat exchanger networks is first randomly initialized. Next, the heat load of each heat exchanger in each individual is randomly expanded or contracted in order to optimize the integer and continuous variables simultaneously and to obtain the lowest total annual cost. Moreover, when individuals approach local optima, there is a certain probability that they compulsively accept imperfect networks, in order to maintain population diversity and the ability of global optimization. The presented method is then applied to heat exchanger network synthesis cases from the literature for comparison with the best published results. RWCE consistently obtains a lower total annual cost than previously published results.
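
    A minimal sketch of the core RWCE acceptance logic (random walk plus occasional compulsive acceptance of worse solutions), shown on a toy continuous function rather than a real HEN superstructure:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def cost(x):                          # toy stand-in for total annual cost of a HEN
        return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

    x = rng.uniform(-5, 5, size=4)
    best_x, best_f = x.copy(), cost(x)
    for _ in range(50_000):
        cand = x + rng.uniform(-0.5, 0.5, size=x.size)   # random walk step
        if cost(cand) < cost(x):
            x = cand                      # accept an improvement
        elif rng.random() < 0.01:
            x = cand                      # compulsively accept a worse point
        if cost(x) < best_f:
            best_x, best_f = x.copy(), cost(x)
    print(round(best_f, 4), np.round(best_x, 3))
    ```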

  13. Starting design for use in variance exchange algorithms | Iwundu ...

    African Journals Online (AJOL)

    A new method of constructing the initial design for use in variance exchange algorithms is presented. The method chooses support points to go into the design as measures of distances of the support points from the centre of the geometric region and of permutation-invariant sets. The initial design is as close as possible to ...

  14. Noniterative accurate algorithm for the exact exchange potential of density-functional theory

    International Nuclear Information System (INIS)

    Cinal, M.; Holas, A.

    2007-01-01

    An algorithm for determination of the exchange potential is constructed and tested. It represents a one-step procedure based on the equations derived by Krieger, Li, and Iafrate (KLI) [Phys. Rev. A 46, 5453 (1992)], implemented previously as an iterative procedure by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)]. Due to a suitable transformation of the KLI equations, we can solve them without iteration. Our algorithm is applied to the closed-shell atoms, from Be up to Kr, within the DFT exchange-only approximation. Using pseudospectral techniques to represent the orbitals, we obtain extremely accurate values of total and orbital energies, with errors at least four orders of magnitude smaller than those known in the literature.

  15. Design and economic investigation of shell and tube heat exchangers using Improved Intelligent Tuned Harmony Search algorithm

    Directory of Open Access Journals (Sweden)

    Oguz Emrah Turgut

    2014-12-01

    This study explores the thermal design of shell and tube heat exchangers using the Improved Intelligent Tuned Harmony Search (I-ITHS) algorithm. Intelligent Tuned Harmony Search (ITHS) is an upgraded version of the harmony search algorithm, which has the advantage of deciding between intensification and diversification processes by applying a proper pitch-adjusting strategy. In this study, we aim to improve the search capacity of the ITHS algorithm by utilizing chaotic sequences instead of uniformly distributed random numbers and by applying alternative search strategies, inspired by the Artificial Bee Colony algorithm and Opposition Based Learning, to promising areas (best solutions). Design variables including baffle spacing, shell diameter, tube outer diameter and number of tube passes are used to minimize the total cost of the heat exchanger, which incorporates capital investment and the sum of discounted annual energy expenditures related to pumping and heat exchanger area. Results show that I-ITHS can be utilized in optimizing shell and tube heat exchangers.

  16. Optimization of heat exchanger networks using genetic algorithms

    International Nuclear Information System (INIS)

    Teyssedou, A.; Dipama, J.; Sorin, M.

    2004-01-01

    Most thermal processes encountered in the power industry (chemical, metallurgical, nuclear and thermal power stations) necessitate the transfer of large amounts of heat between fluids having different thermal potentials. A common practice applied to achieve such a requirement consists of using heat exchangers. In general, each fluid current is cooled or heated independently of the others in the power plant. When the number of heat exchangers is large enough, however, a convenient arrangement of the different flow currents may allow a considerable reduction in energy consumption (Linnhoff and Hindmarsh, 1983). In such a case the heat exchangers form a 'Heat Exchanger Network' (HEN) that can be optimized to reduce the overall energy consumption. This type of optimization problem involves two separate calculation procedures. First, it is necessary to optimize the topology of the HEN that will permit a reduction in energy consumption. In a second step, the power distribution across the HEN should be optimized without violating the second law of thermodynamics. The numerical treatment of this kind of problem requires the use of both discrete variables (to account for each heat exchanger unit) and continuous variables to handle the thermal load of each unit. It is obvious that for a large number of heat exchangers the use of conventional calculation methods, e.g., the Simplex method, becomes almost impossible. Therefore, in this paper we present a 'Genetic Algorithm' (GA) that has been implemented and successfully used to treat complex HENs containing a large number of heat exchangers. As opposed to conventional optimization techniques that require knowledge of the derivatives of a function, GAs start the calculation process from a large population of possible solutions to a given problem (Goldberg, 1999). Each possible solution is in turn evaluated according to a 'fitness' criterion obtained from an objective
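
    A generic GA skeleton of the kind described (population, fitness-based selection, crossover, mutation), on a toy objective standing in for a HEN cost model; none of the problem-specific encoding from the paper is reproduced:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def fitness(x):                       # toy objective; maximum at x = 0.7 everywhere
        return -np.sum((x - 0.7)**2)

    pop = rng.random((40, 8))             # population of candidate solutions
    for gen in range(200):
        f = np.array([fitness(ind) for ind in pop])
        # Tournament selection: the better of two random individuals becomes a parent
        idx = rng.integers(len(pop), size=(len(pop), 2))
        parents = pop[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # One-point crossover between consecutive parents
        children = parents.copy()
        for i in range(0, len(pop) - 1, 2):
            c = rng.integers(1, pop.shape[1])
            children[i, c:] = parents[i + 1, c:]
            children[i + 1, c:] = parents[i, c:]
        # Gaussian mutation
        children += rng.normal(0, 0.02, children.shape)
        pop = children
    print(max(fitness(ind) for ind in pop))
    ```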

  17. Asset management using genetic algorithm: Evidence from Tehran Stock Exchange

    Directory of Open Access Journals (Sweden)

    Abbas Sarijaloo

    2014-02-01

    This paper presents an empirical investigation of asset management using the Markowitz theorem. The study uses the information of the 50 best performers on the Tehran Stock Exchange over the period 2006-2009; using the Markowitz theorem, the efficient asset allocation is determined and the results are analyzed. The proposed model is solved using a genetic algorithm. The results indicate that the Tehran Stock Exchange performed much better than the average world market in most years of the study, especially in 2009. The results of our investigation also indicate that one can reach outstanding results by using a GA to form an efficient portfolio.
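
    Under Markowitz mean-variance theory the global minimum-variance portfolio has a closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1); a minimal NumPy sketch with made-up return data (the paper's GA searches for efficient portfolios under richer constraints):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    returns = rng.normal(0.001, 0.02, size=(250, 6))   # hypothetical daily returns, 6 assets

    cov = np.cov(returns, rowvar=False)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    w /= w.sum()                                       # global minimum-variance weights

    print(np.round(w, 3), "portfolio variance:", w @ cov @ w)
    ```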

  18. Use of genetic algorithm to identify thermophysical properties of deposited fouling in heat exchanger tubes

    International Nuclear Information System (INIS)

    Adili, Ali; Ben Salah, Mohieddine; Kerkeni, Chekib; Ben Nasrallah, Sassi

    2009-01-01

    At high temperature, the circulation of fluid in heat exchangers creates a tendency for fouling to accumulate on the internal surface of the tubes. This paper presents an experimental procedure for estimating the thermophysical properties of fouling deposited on the internal surface of a heat exchanger tube using genetic algorithms (GAs). The genetic algorithm is used to minimize an objective function containing calculated and measured temperatures. An experimental bench using a photothermal method with a finite-width pulse heat excitation is employed, and the estimated parameters are obtained with high accuracy.

  19. A novel algorithm for demand-control of a single-room ventilation unit with a rotary heat exchanger

    DEFF Research Database (Denmark)

    Smith, Kevin Michael; Jansen, Anders Lund; Svendsen, Svend

    ... in the indoor environment. Based on these values, a demand-control algorithm varies fan speeds to change airflow rates and varies the rotational speed of the heat exchanger to modulate heat and moisture recovery. The algorithm varies airflow rates to provide free cooling and limit CO2 concentrations, varies moisture recovery through the rotational speed, and safely unbalances airflows in a worst-case scenario. In the algorithm, frost protection and minimum supply temperature take the highest priority and override other controls. This paper documents the proposed demand-control algorithm and analyses its impact on compliance with building regulations in Denmark. The paper presents an algorithm that manufacturers can program into their controls. The commercially available single-room ventilation unit with a rotary heat exchanger uses this algorithm coded in the C language. Future work will document...
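
    An illustrative Python rendering of the priority logic described; the production controller is in C, and all thresholds and signal names below are invented:

    ```python
    def control_step(t_supply, t_exhaust, co2_ppm, t_room):
        """One loop iteration of a demand-control sketch; all thresholds are invented."""
        fan, rotor = 0.4, 0.5          # normalized fan speed and rotor speed defaults

        if co2_ppm > 800:              # demand control: more fresh air at high CO2
            fan = min(1.0, 0.4 + (co2_ppm - 800) / 1000)
        if t_room > 24.0:              # free cooling: stop recovery, raise airflow
            rotor, fan = 0.0, max(fan, 0.8)

        if t_supply < 16.0:            # minimum supply temperature: boost recovery
            rotor = 1.0
        if t_exhaust < 1.0:            # frost protection overrides everything else
            rotor, fan = 1.0, 0.2

        return fan, rotor

    print(control_step(t_supply=18.0, t_exhaust=5.0, co2_ppm=1100, t_room=22.0))
    ```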

  20. A Numerical Algorithm and a Graphical Method to Size a Heat Exchanger

    DEFF Research Database (Denmark)

    Berning, Torsten

    2011-01-01

    This paper describes the development of a numerical algorithm and a graphical method that can be employed in order to determine the overall heat transfer coefficient inside heat exchangers. The method is based on an energy balance and utilizes the spreadsheet application software Microsoft Excel...
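
    The energy-balance approach described typically reduces to Q = U·A·ΔT_lm; a minimal counter-flow sketch with invented temperatures, flows and area:

    ```python
    from math import log

    # Hypothetical counter-flow water/water exchanger
    m_hot, m_cold, cp = 2.0, 1.5, 4186.0          # kg/s, kg/s, J/(kg K)
    t_hot_in, t_hot_out, t_cold_in = 80.0, 60.0, 20.0

    q = m_hot * cp * (t_hot_in - t_hot_out)        # duty from the hot-side balance, W
    t_cold_out = t_cold_in + q / (m_cold * cp)     # cold-side balance

    dt1 = t_hot_in - t_cold_out                    # terminal temperature differences
    dt2 = t_hot_out - t_cold_in
    lmtd = (dt1 - dt2) / log(dt1 / dt2)            # log-mean temperature difference

    area = 25.0                                    # assumed exchanger area, m^2
    u = q / (area * lmtd)                          # overall heat transfer coefficient
    print(f"Q = {q/1e3:.1f} kW, LMTD = {lmtd:.1f} K, U = {u:.0f} W/m^2K")
    ```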

  2. Techno-economic optimization of a shell and tube heat exchanger by genetic and particle swarm algorithms

    International Nuclear Information System (INIS)

    Sadeghzadeh, H.; Ehyaei, M.A.; Rosen, M.A.

    2015-01-01

    Highlights: • Pressure drop and heat transfer coefficient are calculated by the Delaware method. • The accuracy of the Delaware method is greater than that of the Kern method. • The results of PSO are better than the results of the GA. • The optimization yields the best and most economical design. - Abstract: The use of genetic and particle swarm algorithms in the design of techno-economically optimum shell-and-tube heat exchangers is demonstrated. A cost function (including costs of the heat exchanger based on surface area and power consumption to overcome pressure drops) is the objective function, which is to be minimized. Selected decision variables include tube diameter, central baffle spacing and shell diameter. The Delaware method is used to calculate the heat transfer coefficient and the shell-side pressure drop. The accuracy and efficiency of the suggested algorithm and the Delaware method are investigated. A comparison of the results obtained by the two algorithms shows that results obtained with the particle swarm optimization method are superior to those obtained with the genetic algorithm method. By comparing these results with those from various references employing the Kern method and other algorithms, it is shown that the Delaware method accompanied by genetic and particle swarm algorithms achieves more nearly optimum results, based on assessments for two case studies.
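
    A minimal particle swarm sketch on a toy cost function; the paper's actual cost model and Delaware correlations are not reproduced:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def cost(x):                                  # toy stand-in for exchanger cost
        return np.sum((x - 1.0)**2, axis=-1)

    n, dim, w, c1, c2 = 30, 3, 0.7, 1.5, 1.5      # swarm size and standard PSO constants
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest = pos.copy()
    gbest = pos[cost(pos).argmin()].copy()

    for _ in range(200):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        better = cost(pos) < cost(pbest)          # update personal bests
        pbest[better] = pos[better]
        gbest = pbest[cost(pbest).argmin()].copy()

    print(round(float(cost(gbest)), 6), np.round(gbest, 3))
    ```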

  3. A novel hybrid chaotic ant swarm algorithm for heat exchanger networks synthesis

    International Nuclear Information System (INIS)

    Zhang, Chunwei; Cui, Guomin; Peng, Fuyu

    2016-01-01

    Highlights: • The chaotic ant swarm algorithm is proposed to avoid trapping in local optima. • The organization variables update strategy makes full use of the advantages of chaotic search. • The structure evolution strategy is developed to handle integer variable optimization. • Three cases taken from the literature are investigated, with better optima obtained. - Abstract: Heat exchanger network synthesis (HENS) remains an open problem due to its combinatorial nature, which can easily result in suboptimal designs and unacceptable computational effort. In this paper, a novel hybrid chaotic ant swarm algorithm is proposed. The presented algorithm, which combines the chaotic ant swarm (CAS) algorithm, a structure evolution strategy, a local optimization strategy and an organization variables update strategy, can simultaneously optimize continuous and integer variables. The CAS algorithm chaotically searches and generates new solutions in the given space, and subsequently the structure evolution strategy evolves the structures represented by the solutions and limits the search space. Furthermore, the local optimization strategy and the organization variables update strategy are introduced to enhance the performance of the algorithm. The study of three different cases found in the literature revealed special search abilities in both structure space and continuous variable space.

  4. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    Science.gov (United States)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  5. Neural networks, nativism, and the plausibility of constructivism.

    Science.gov (United States)

    Quartz, S R

    1993-09-01

    Recent interest in PDP (parallel distributed processing) models is due in part to the widely held belief that they challenge many of the assumptions of classical cognitive science. In the domain of language acquisition, for example, there has been much interest in the claim that PDP models might undermine nativism. Related arguments based on PDP learning have also been given against Fodor's anti-constructivist position--a position that has contributed to the widespread dismissal of constructivism. A limitation of many of the claims regarding PDP learning, however, is that the principles underlying this learning have not been rigorously characterized. In this paper, I examine PDP models from within the framework of Valiant's PAC (probably approximately correct) model of learning, now the dominant model in machine learning, and which applies naturally to neural network learning. From this perspective, I evaluate the implications of PDP models for nativism and Fodor's influential anti-constructivist position. In particular, I demonstrate that, contrary to a number of claims, PDP models are nativist in a robust sense. I also demonstrate that PDP models actually serve as a good illustration of Fodor's anti-constructivist position. While these results may at first suggest that neural network models in general are incapable of the sort of concept acquisition that is required to refute Fodor's anti-constructivist position, I suggest that there is an alternative form of neural network learning that demonstrates the plausibility of constructivism. This alternative form of learning is a natural interpretation of the constructivist position in terms of neural network learning, as it employs learning algorithms that incorporate the addition of structure in addition to weight modification schemes. By demonstrating that there is a natural and plausible interpretation of constructivism in terms of neural network learning, the position that nativism is the only plausible model of

  6. Bisimulation for Single-Agent Plausibility Models

    DEFF Research Database (Denmark)

    Andersen, Mikkel Birkegaard; Bolander, Thomas; van Ditmarsch, H.

    2013-01-01

    ... define a proper notion of bisimulation, and prove that bisimulation corresponds to logical equivalence on image-finite models. We relate our results to other epistemic notions, such as safe belief and degrees of belief. Our results imply that there are only finitely many non-bisimilar single-agent epistemic plausibility models on a finite set of propositions. This gives decidability for single-agent epistemic plausibility planning.

  7. Design and economic optimization of shell and tube heat exchangers using Artificial Bee Colony (ABC) algorithm

    International Nuclear Information System (INIS)

    Sencan Sahin, Arzu; Kilic, Bayram; Kilic, Ulas

    2011-01-01

    Highlights: → Artificial Bee Colony is used for shell and tube heat exchanger optimization. → The total cost is minimized by varying design variables. → This new approach can be applied to the optimization of heat exchangers. - Abstract: In this study, a new shell and tube heat exchanger optimization design approach is developed. The Artificial Bee Colony (ABC) algorithm has been applied to minimize the total cost of the equipment, including capital investment and the sum of discounted annual energy expenditures related to pumping, by varying design variables such as tube length, tube outer diameter, pitch size and baffle spacing. Finally, the results are compared to those obtained by approaches in the literature. The results indicate that the Artificial Bee Colony (ABC) algorithm can be successfully applied for the optimal design of shell and tube heat exchangers.

  9. Minimizing shell-and-tube heat exchanger cost with genetic algorithms and considering maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Wildi-Tremblay, P.; Gosselin, L. [Universite Laval, Quebec (Canada). Dept. de genie mecanique

    2007-07-15

    This paper presents a procedure for minimizing the cost of a shell-and-tube heat exchanger based on genetic algorithms (GA). The global cost includes the operating cost (pumping power) and the initial cost expressed in terms of annuities. Eleven design variables associated with shell-and-tube heat exchanger geometry are considered: tube pitch, tube layout pattern, number of tube passes, baffle spacing at the centre, baffle spacing at the inlet and outlet, baffle cut, tube-to-baffle diametrical clearance, shell-to-baffle diametrical clearance, tube bundle outer diameter, shell diameter, and tube outer diameter. Evaluation of heat exchanger performance is based on an adapted version of the Bell-Delaware method. Pressure drop constraints are included in the procedure. Reliability and maintenance due to fouling are taken into account by constraining the coefficient of surface increase to a given interval. Two case studies are presented. Results show that the procedure can properly and rapidly identify the optimal design for a specified heat transfer process. (author)

  10. Exchange inlet optimization by genetic algorithm for improved RBCC performance

    Science.gov (United States)

    Chorkawy, G.; Etele, J.

    2017-09-01

    A genetic algorithm based on real-parameter representation, with a variable selection pressure and variable probability of mutation, is used to optimize an annular air-breathing rocket inlet called the exchange inlet. A rapid and accurate design method that provides estimates of air-breathing, mixing, and isentropic flow performance is used as the engine of the optimization routine. Comparison to detailed numerical simulations shows that the design method yields desired exit Mach numbers to within approximately 1% over 75% of the annular exit area and predicts entrained air mass flows to within 1% to 9% of numerically simulated values, depending on the flight condition. Optimum designs are obtained within approximately 8000 fitness function evaluations in a search space on the order of 10^6. The method is also able to identify beneficial values for particular alleles when they exist, while handling cases where physical and aphysical designs co-exist at particular values of a subset of alleles within a gene. For an air-breathing engine based on a hydrogen-fuelled rocket, an exchange inlet is designed that yields a predicted air entrainment ratio within 95% of the theoretical maximum.

  11. Pilgrims sailing the Titanic: plausibility effects on memory for misinformation.

    Science.gov (United States)

    Hinze, Scott R; Slaten, Daniel G; Horton, William S; Jenkins, Ryan; Rapp, David N

    2014-02-01

    People rely on information they read even when it is inaccurate (Marsh, Meade, & Roediger, Journal of Memory and Language 49:519-536, 2003), but how ubiquitous is this phenomenon? In two experiments, we investigated whether this tendency to encode and rely on inaccuracies from text might be influenced by the plausibility of misinformation. In Experiment 1, we presented stories containing inaccurate plausible statements (e.g., "The Pilgrims' ship was the Godspeed"), inaccurate implausible statements (e.g., . . . the Titanic), or accurate statements (e.g., . . . the Mayflower). On a subsequent test of general knowledge, participants relied significantly less on implausible than on plausible inaccuracies from the texts but continued to rely on accurate information. In Experiment 2, we replicated these results with the addition of a think-aloud procedure to elicit information about readers' noticing and evaluative processes for plausible and implausible misinformation. Participants indicated more skepticism and less acceptance of implausible than of plausible inaccuracies. In contrast, they often failed to notice, completely ignored, and at times even explicitly accepted the misinformation provided by plausible lures. These results offer insight into the conditions under which reliance on inaccurate information occurs and suggest potential mechanisms that may underlie reported misinformation effects.

  12. Multi-objective optimization of a plate and frame heat exchanger via genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Najafi, Hamidreza; Najafi, Behzad [K. N. Toosi University of Technology, Department of Mechanical Engineering, Tehran (Iran)

    2010-06-15

    In the present paper, a plate and frame heat exchanger is considered. A multi-objective optimization using a genetic algorithm is developed in order to obtain the set of geometric design parameters that leads to minimum pressure drop and maximum overall heat transfer coefficient. Clearly, the considered objective functions conflict, and no single solution can satisfy both objectives simultaneously. The multi-objective optimization procedure yields a set of optimal solutions, called the Pareto front, each of which is a trade-off between the objectives and can be selected by the user according to the application and the project's limits. The presented work handles numerous geometric parameters in the presence of logical constraints. A sensitivity analysis is also carried out to study the effects of different geometric parameters on the considered objective functions. Modeling the system and implementing the multi-objective optimization via the genetic algorithm were performed in MATLAB. (orig.)
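
    The Pareto front mentioned here is the set of non-dominated trade-offs. A minimal dominance filter for two minimization objectives (e.g., pressure drop and the negated overall heat transfer coefficient), with made-up design points:

    ```python
    import numpy as np

    def pareto_front(points):
        """Return the non-dominated subset of an (n, 2) array of minimization objectives."""
        pts = np.asarray(points, dtype=float)
        keep = np.ones(len(pts), dtype=bool)
        for i, p in enumerate(pts):
            if keep[i]:
                # A point is dominated if another is <= in all objectives and < in one
                dominated = np.all(pts <= p, axis=1) & np.any(pts < p, axis=1)
                keep &= ~dominated
        return pts[keep]

    # Columns: pressure drop [kPa], negated U [W/m^2K]; values are invented
    designs = np.array([[2.0, -900.], [3.5, -1200.], [2.5, -800.],
                        [4.0, -1250.], [3.6, -1100.]])
    print(pareto_front(designs))
    ```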

  13. Fast filtering algorithm based on vibration systems and neural information exchange and its application to micro motion robot

    International Nuclear Information System (INIS)

    Gao Wa; Zha Fu-Sheng; Li Man-Tian; Song Bao-Yu

    2014-01-01

    This paper develops a fast filtering algorithm based on vibration systems theory and a neural information exchange approach. Its characteristics, including the derivation process and parameter analysis, are discussed, and its feasibility and effectiveness are verified by comparing its filtering performance with that of various methods, such as the fast wavelet transform algorithm, the particle filtering method and our previously developed single-degree-of-freedom vibration system filtering algorithm, in both simulation and practical settings. The comparisons indicate that a significant advantage of the proposed fast filtering algorithm is its extremely fast filtering speed with good filtering performance. Further, the developed fast filtering algorithm is applied to the navigation and positioning system of a micro motion robot, which places high real-time requirements on signal preprocessing. The preprocessed data are then used to estimate the heading angle error and the attitude angle error of the micro motion robot. The estimation experiments illustrate the high practicality of the proposed fast filtering algorithm. (general)

  14. Heuristic Elements of Plausible Reasoning.

    Science.gov (United States)

    Dudczak, Craig A.

    At least some of the reasoning processes involved in argumentation rely on inferences which do not fit within the traditional categories of inductive or deductive reasoning. The reasoning processes involved in plausibility judgments have neither the formal certainty of deduction nor the imputed statistical probability of induction. When utilizing…

  15. Plausible values in statistical inference

    NARCIS (Netherlands)

    Marsman, M.

    2014-01-01

    In Chapter 2 it is shown that the marginal distribution of plausible values is a consistent estimator of the true latent variable distribution, and, furthermore, that convergence is monotone in an embedding in which the number of items tends to infinity. This result is used to clarify some of the

  16. Efficient, approximate and parallel Hartree-Fock and hybrid DFT calculations. A 'chain-of-spheres' algorithm for the Hartree-Fock exchange

    International Nuclear Information System (INIS)

    Neese, Frank; Wennmohs, Frank; Hansen, Andreas; Becker, Ute

    2009-01-01

    In this paper, we explore the possibility of speeding up Hartree-Fock and hybrid density functional calculations by forming the Coulomb and exchange parts of the Fock matrix with different approximations. For the Coulomb part, the previously introduced Split-RI-J variant (F. Neese, J. Comput. Chem. 24 (2003) 1740) of the well-known 'density fitting' approximation is used. The exchange part is formed by semi-numerical integration techniques that are closely related to Friesner's pioneering pseudo-spectral approach. Our potentially linear-scaling realization of this algorithm is called the 'chain-of-spheres exchange' (COSX). A combination of semi-numerical integration and density fitting is also proposed. Both Split-RI-J and COSX scale very well with the highest angular momentum in the basis set. It is shown that for extended basis sets, speed-ups of up to two orders of magnitude compared to traditional implementations can be obtained in this way. Total energies are reproduced with an average error of <0.3 kcal/mol, as determined from extended test calculations with various basis sets on a set of 26 molecules with 20-200 atoms and up to 2000 basis functions. Reaction energies agree to within 0.2 kcal/mol (Hartree-Fock) or 0.05 kcal/mol (hybrid DFT) with the canonical values. The COSX algorithm parallelizes with a speedup of 8.6 observed for 10 processes. Minimum-energy geometries differ by less than 0.3 pm in bond distances and 0.5 deg. in bond angles from their canonical values. These developments enable highly efficient and accurate self-consistent field calculations, including nonlocal Hartree-Fock exchange, for large molecules. In combination with the RI-MP2 method and large basis sets, second-order many-body perturbation energies can be obtained for medium-sized molecules with unprecedented efficiency. The algorithms are implemented in the ORCA electronic structure system.

  17. Tuning of PID Controllers for Quadcopter System using Cultural Exchange Imperialist Competitive Algorithm

    Directory of Open Access Journals (Sweden)

    Nizar Hadi Abbas

    2018-02-01

    Quadrotors are emerging as an attractive platform for unmanned aerial vehicle (UAV) research, due to the simplicity of their structure and maintenance, their ability to hover, and their vertical take-off and landing (VTOL) capability. With the vast advancements in small-size sensors, actuators and processors, researchers are now focusing on developing mini UAVs to be used in both research and commercial applications. This work presents a detailed nonlinear dynamic model of the quadrotor, formulated using the Newton-Euler method. Although the quadrotor is a 6-DOF under-actuated system, the derived rotational subsystem is fully actuated, while the translational subsystem is under-actuated. The derivation of the mathematical model is followed by the development of a controller to control the altitude, attitude, heading and position of the quadrotor in space, based on the linear Proportional-Integral-Derivative (PID) controller; thus, a simplified version of the model is obtained. The controller gains are tuned using optimization techniques to improve the system's dynamic response. The standard Imperialist Competitive Algorithm (ICA) was applied to tune the PID parameters and was then compared to Cultural Exchange Imperialist Competitive Algorithm (CEICA) tuning; the results show improvement with the proposed algorithm. The objective function results were enhanced by 23.91% with CEICA compared to ICA.
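
    A minimal discrete PID sketch on a toy altitude loop; the gains and dynamics are placeholders, not the ICA/CEICA-tuned values from the paper:

    ```python
    class PID:
        """Discrete PID controller; gains here are illustrative placeholders."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_err = 0.0, 0.0

        def step(self, setpoint, measurement):
            err = setpoint - measurement
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    # Toy altitude loop: crude first-order dynamics with damping
    alt, vel, ctrl = 0.0, 0.0, PID(kp=2.0, ki=0.5, kd=1.2, dt=0.01)
    for _ in range(1000):
        u = ctrl.step(setpoint=1.0, measurement=alt)
        vel += (u - 0.5 * vel) * 0.01
        alt += vel * 0.01
    print(f"altitude after 10 s: {alt:.3f} m")
    ```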

  18. Optimality and Plausibility in Language Design

    Directory of Open Access Journals (Sweden)

    Michael R. Levot

    2016-12-01

    The Minimalist Program in generative syntax has been the subject of much rancour, a good proportion of it stoked by Noam Chomsky’s suggestion that language may represent “a ‘perfect solution’ to minimal design specifications.” A particular flash point has been the application of Minimalist principles to speculations about how language evolved in the human species. This paper argues that Minimalism is well supported as a plausible approach to language evolution. It is claimed that an assumption of minimal design specifications like that employed in MP syntax satisfies three key desiderata of evolutionary and general scientific plausibility: Physical Optimism, Rational Optimism, and Darwin’s Problem. In support of this claim, the methodologies employed in MP to maximise parsimony are characterised through an analysis of recent theories in Minimalist syntax, and those methodologies are defended with reference to practices and arguments from evolutionary biology and other natural sciences.

  19. Stereotyping to infer group membership creates plausible deniability for prejudice-based aggression.

    Science.gov (United States)

    Cox, William T L; Devine, Patricia G

    2014-02-01

    In the present study, participants administered painful electric shocks to an unseen male opponent who was either explicitly labeled as gay or stereotypically implied to be gay. Identifying the opponent with a gay-stereotypic attribute produced a situation in which the target's group status was privately inferred but plausibly deniable to others. To test the plausible deniability hypothesis, we examined aggression levels as a function of internal (personal) and external (social) motivation to respond without prejudice. Whether plausible deniability was present or absent, participants high in internal motivation aggressed at low levels, and participants low in both internal and external motivation aggressed at high levels. The behavior of participants low in internal and high in external motivation, however, depended on experimental condition. They aggressed at low levels when observers could plausibly attribute their behavior to prejudice and aggressed at high levels when the situation granted plausible deniability. This work has implications for both obstacles to and potential avenues for prejudice-reduction efforts.

  20. Application of plausible reasoning to AI-based control systems

    Science.gov (United States)

    Berenji, Hamid; Lum, Henry, Jr.

    1987-01-01

    Some current approaches to plausible reasoning in artificial intelligence are reviewed and discussed. Some of the most significant recent advances in plausible and approximate reasoning are examined. A synergism among the techniques of uncertainty management is advocated, and brief discussions on the certainty factor approach, probabilistic approach, Dempster-Shafer theory of evidence, possibility theory, linguistic variables, and fuzzy control are presented. Some extensions to these methods are described, and the applications of the methods are considered.

  1. Multidimensional generalized-ensemble algorithms for complex systems.

    Science.gov (United States)

    Mitsutake, Ayori; Okamoto, Yuko

    2009-06-07

    We give general formulations of the multidimensional multicanonical algorithm, simulated tempering, and the replica-exchange method. We generalize the original potential energy function E0 by adding any physical quantity V of interest as a new energy term. These multidimensional generalized-ensemble algorithms then perform a random walk not only in E0 space but also in V space. Among the three algorithms, the replica-exchange method is the easiest to perform because the weight factor is just a product of regular Boltzmann-like factors, while the weight factors for the multicanonical algorithm and simulated tempering are not known a priori. We give a simple procedure for obtaining the weight factors for these two latter algorithms, which uses a short replica-exchange simulation and multiple-histogram reweighting techniques. As an example application of these algorithms, we have performed a two-dimensional replica-exchange simulation and a two-dimensional simulated-tempering simulation using an alpha-helical peptide system. From these simulations, we study the helix-coil transitions of the peptide in the gas phase and in aqueous solution.
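
    A minimal replica-exchange sketch on a toy double-well potential; each replica takes Metropolis steps at its own inverse temperature, and neighboring replicas swap with the standard criterion min(1, exp[(β_i − β_j)(E_i − E_j)]):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def energy(x):                       # toy 1-D double-well potential
        return (x**2 - 1.0)**2

    betas = np.array([0.2, 0.5, 1.0, 2.0, 5.0])      # inverse temperatures
    x = rng.normal(size=betas.size)                  # one walker per replica

    for sweep in range(20_000):
        # Metropolis move within each replica
        prop = x + rng.normal(0, 0.3, x.size)
        accept = rng.random(x.size) < np.exp(-betas * (energy(prop) - energy(x)))
        x = np.where(accept, prop, x)
        # Attempt a swap between a random neighboring pair of replicas
        i = rng.integers(betas.size - 1)
        delta = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
        if rng.random() < np.exp(delta):
            x[i], x[i + 1] = x[i + 1], x[i]

    print("coldest replica near a well:", round(float(x[-1]), 3))
    ```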

  2. Credibility judgments of narratives: language, plausibility, and absorption.

    Science.gov (United States)

    Nahari, Galit; Glicksohn, Joseph; Nachson, Israel

    2010-01-01

    Two experiments were conducted in order to find out whether textual features of narratives differentially affect credibility judgments made by judges having different levels of absorption (a disposition associated with rich visual imagination). Participants in both experiments were exposed to a textual narrative and requested to judge whether the narrator actually experienced the event he described in his story. In Experiment 1, the narrative varied in terms of language (literal, figurative) and plausibility (ordinary, anomalous). In Experiment 2, the narrative varied in terms of language only. The participants' perceptions of the plausibility of the story described and the extent to which they were absorbed in reading were measured. The data from both experiments together suggest that the groups applied entirely different criteria in credibility judgments. For high-absorption individuals, their credibility judgment depends on the degree to which the text can be assimilated into their own vivid imagination, whereas for low-absorption individuals it depends mainly on plausibility. That is, high-absorption individuals applied an experiential mental set while judging the credibility of the narrator, whereas low-absorption individuals applied an instrumental mental set. Possible cognitive mechanisms and implications for credibility judgments are discussed.

  3. Algorithms as fetish: Faith and possibility in algorithmic work

    Directory of Open Access Journals (Sweden)

    Suzanne L Thomas

    2018-01-01

    Algorithms are powerful because we invest in them the power to do things. With such promise, they can transform the ordinary, say snapshots along a robotic vacuum cleaner’s route, into something much more, such as a clean home. Echoing David Graeber’s revision of fetishism, we argue that this easy slip from technical capabilities to broader claims betrays not the “magic” of algorithms but rather the dynamics of their exchange. Fetishes are not indicators of false thinking, but social contracts in material form. They mediate emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge. Drawing primarily on 2016 ethnographic research with computer vision professionals, we show how faith in what algorithms can do shapes the social encounters and exchanges of their production. By analyzing algorithms through the lens of fetishism, we can see the social and economic investment in some people’s labor over others. We also see everyday opportunities for social creativity and change. We conclude that what is problematic about algorithms is not their fetishization but instead their stabilization into full-fledged gods and demons – the more deserving objects of critique.

  4. Parametrisation of a Maxwell model for transient tyre forces by means of an extended firefly algorithm

    Directory of Open Access Journals (Sweden)

    Andreas Hackl

    2016-12-01

    Developing functions for advanced driver assistance systems requires very accurate tyre models, especially for the simulation of transient conditions. In the past, parametrisation of a given tyre model based on measurement data showed shortcomings: the globally optimal solution obtained did not always appear plausible. In this article, an optimisation strategy is presented that is able to find plausible and physically feasible solutions by detecting many local optima. The firefly algorithm mimics the natural behaviour of fireflies, which use a kind of flashing light to communicate with other members. An algorithm simulating the intensity of the light of a single firefly, diminishing with increasing distance, is implicitly able to detect local solutions on its way to the best solution in the search space. This implicit clustering feature is reinforced by an additional explicit clustering step, in which local solutions are stored and finally processed to obtain a large number of possible solutions. The enhanced firefly algorithm is first applied to the well-known Rastrigin functions and then to the tyre parametrisation problem. It is shown that the firefly algorithm is qualified to find a high number of optimisation solutions, as required for a plausible parametrisation of the given tyre model.
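
    A minimal firefly sketch on the Rastrigin function mentioned in the abstract; the paper's explicit clustering extension is not reproduced:

    ```python
    import numpy as np

    rng = np.random.default_rng(13)

    def rastrigin(x):
        return np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1) + 10 * x.shape[-1]

    n, dim = 25, 2
    beta0, gamma, alpha = 1.0, 1.0, 0.1     # attractiveness, absorption, noise scale
    x = rng.uniform(-5.12, 5.12, (n, dim))

    for _ in range(300):
        f = rastrigin(x)
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:              # move firefly i toward brighter firefly j
                    r2 = np.sum((x[i] - x[j])**2)
                    beta = beta0 * np.exp(-gamma * r2)   # light dims with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
            f[i] = rastrigin(x[i])           # refresh brightness after the move

    best = x[rastrigin(x).argmin()]
    print(np.round(best, 3), round(float(rastrigin(best)), 4))
    ```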

  5. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence From Word Segmentation.

    Science.gov (United States)

    Phillips, Lawrence; Pearl, Lisa

    2015-11-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.

  6. Optimization of a Finned Shell and Tube Heat Exchanger Using a Multi-Objective Optimization Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Heidar Sadeghzadeh

    2015-08-01

    Full Text Available Heat transfer rate and cost significantly affect designs of shell and tube heat exchangers. From the viewpoint of engineering, an optimum design is obtained via maximum heat transfer rate and minimum cost. Here, an analysis of a radial, finned, shell and tube heat exchanger is carried out, considering nine design parameters: tube arrangement, tube diameter, tube pitch, tube length, number of tubes, fin height, fin thickness, baffle spacing ratio and number of fins per unit length of tube. The “Delaware modified” technique is used to determine heat transfer coefficients and the shell-side pressure drop. In this technique, the baffle cut is 20 percent and the baffle ratio limits range from 0.2 to 0.4. The optimization of the objective functions (maximum heat transfer rate and minimum total cost) is performed using a non-dominated sorting genetic algorithm (NSGA-II), and compared against a one-objective algorithm, to find the best solutions. The results are depicted as a set of solutions on a Pareto front, and show that the heat transfer rate ranges from 3517 to 7075 kW. Also, the minimum and maximum objective functions are specified, allowing the designer to select the best points among these solutions based on requirements. Additionally, variations of shell-side pressure drop with total cost are depicted, and indicate that the pressure drop ranges from 3.8 to 46.7 kPa.
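
    The Pareto front reported above rests on a simple dominance relation: a design belongs to the front if no other design is at least as good in both objectives and strictly better in one. A small, self-contained Python sketch of that test follows; the numeric design tuples are hypothetical placeholders, not values from the study.

        def dominates(a, b):
            # a dominates b if it is no worse in every objective and strictly
            # better in at least one; objective 0 (heat transfer rate, kW)
            # is maximized and objective 1 (total cost) is minimized.
            return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

        def pareto_front(designs):
            # Keep every design not dominated by any other design.
            return [d for d in designs
                    if not any(dominates(e, d) for e in designs if e is not d)]

        # Hypothetical (heat transfer rate kW, total cost $) evaluations
        designs = [(3517, 45000), (5200, 61000), (7075, 98000), (5100, 70000)]
        print(pareto_front(designs))   # (5100, 70000) is dominated and dropped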

  7. Irreversibility analysis for optimization design of plate fin heat exchangers using a multi-objective cuckoo search algorithm

    International Nuclear Information System (INIS)

    Wang, Zhe; Li, Yanzhong

    2015-01-01

    Highlights: • The first application of IMOCS for plate-fin heat exchanger design. • Irreversibility degrees of heat transfer and fluid friction are minimized. • Trade-off of efficiency, total cost and pumping power is achieved. • Both EGM and EDM methods have been compared in the optimization of PFHE. • This study has superiority over other single-objective optimization designs. - Abstract: This paper introduces and applies an improved multi-objective cuckoo search (IMOCS) algorithm, a novel meta-heuristic optimization algorithm based on cuckoo breeding behavior, for the multi-objective optimization design of plate-fin heat exchangers (PFHEs). A modified irreversibility degree of the PFHE is separated into heat transfer and fluid friction irreversibility degrees, which are adopted as two initial objective functions to be minimized simultaneously for narrowing the search scope of the design. The maximization of efficiency and the minimization of pumping power and total annual cost are considered as final objective functions. Results obtained from a two-dimensional normalized Pareto-optimal frontier clearly demonstrate the trade-off between heat transfer and fluid friction irreversibility. Moreover, a three-dimensional Pareto-optimal frontier reveals the relationship between efficiency, total annual cost, and pumping power in the PFHE design. Three examples presented here further demonstrate that the presented method is able to obtain optimum solutions with higher accuracy, lower irreversibility, and fewer iterations than previous methods and single-objective design approaches.

  8. Graph-drawing algorithms geometries versus molecular mechanics in fullerenes

    Science.gov (United States)

    Kaufman, M.; Pisanski, T.; Lukman, D.; Borštnik, B.; Graovac, A.

    1996-09-01

    The algorithms of Kamada-Kawai (KK) and Fruchterman-Reingold (FR) have been recently generalized (Pisanski et al., Croat. Chem. Acta 68 (1995) 283) in order to draw molecular graphs in three-dimensional space. The quality of KK and FR geometries is studied here by comparing them with the molecular mechanics (MM) and the adjacency matrix eigenvectors (AME) algorithm geometries. In order to compare different layouts of the same molecule, an appropriate method has been developed. Its application to a series of experimentally detected fullerenes indicates that the KK, FR and AME algorithms are able to reproduce plausible molecular geometries.

  9. Endocrine disrupting chemicals and human health: The plausibility ...

    African Journals Online (AJOL)

    The plausibility of research results on DDT and reproductive health ... chemicals in the environment and that human health is inextricably linked to the health of .... periods of folliculogenesis or embryogenesis that increases risk for adverse effects.

  10. Bayesian analysis for exponential random graph models using the adaptive exchange sampler

    KAUST Repository

    Jin, Ick Hoon

    2013-01-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint because of the existence of intractable normalizing constants. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the issue of intractable normalizing constants encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
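
    The exchange algorithm that the adaptive sampler extends can be summarized in one Metropolis-Hastings step in which the intractable normalizing constants cancel. The sketch below is a generic illustration, not the paper's adaptive version: sample_exact stands in for a perfect draw from the model (the paper replaces it with importance sampling from a parallel auxiliary chain), the proposal is a symmetric random walk, and a flat prior is assumed.

        import numpy as np

        def exchange_step(theta, x_obs, log_unnorm, sample_exact,
                          proposal_sd, rng):
            # One MH step of the exchange algorithm for a doubly
            # intractable posterior. log_unnorm(x, theta) is the log of
            # the unnormalized likelihood q(x | theta); sample_exact(theta)
            # is a placeholder for an exact draw from the model at theta.
            theta_new = theta + rng.normal(0.0, proposal_sd)
            x_aux = sample_exact(theta_new)
            # The normalizing constants Z(theta) and Z(theta_new) cancel
            # in this ratio; that cancellation is the point of the method.
            # With a non-flat prior, the log-prior ratio would be added.
            log_alpha = (log_unnorm(x_obs, theta_new) + log_unnorm(x_aux, theta)
                         - log_unnorm(x_obs, theta)
                         - log_unnorm(x_aux, theta_new))
            if np.log(rng.uniform()) < log_alpha:
                return theta_new
            return theta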

  11. Methods of Thermal Calculations for a Condensing Waste-Heat Exchanger

    Directory of Open Access Journals (Sweden)

    Rączka Paweł

    2014-12-01

    Full Text Available The paper presents the algorithms for a flue gas/water waste-heat exchanger with and without condensation of the water vapour contained in the flue gas, with experimental validation of the theoretical results. The algorithms were used to calculate the area of a heat exchanger using waste heat from a pulverised brown coal fired steam boiler operating in a power unit with a capacity of 900 MWe. In the calculation of the condensing part, the results obtained with two algorithms were compared (Colburn-Hobler and VDI algorithms). The VDI algorithm made it possible to take into account the condensation of water vapour for flue gas temperatures above the water dew point. Thanks to this, the required heat transfer area could be calculated more accurately, which reduced it by 19%. In addition, the influence of mass transfer on the heat transfer area was taken into account, which contributed to a further reduction in the calculated size of the heat exchanger - in total by 28% as compared with the Colburn-Hobler algorithm. The presented VDI algorithm was used to design a 312 kW pilot-scale condensing heat exchanger installed in the PGE Belchatow power plant. The obtained experimental results are in good agreement with the calculated values.
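
    For the single-phase part of such an exchanger, the required area follows from the duty, the overall heat transfer coefficient and the log-mean temperature difference, A = Q / (U · ΔT_lm). The Python sketch below illustrates only this generic sizing step; the U value and terminal temperatures are hypothetical, and the condensation and mass-transfer effects that distinguish the VDI method are not modeled.

        import math

        def lmtd_counterflow(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
            # Log-mean temperature difference, counter-flow arrangement.
            dt1 = t_hot_in - t_cold_out
            dt2 = t_hot_out - t_cold_in
            if abs(dt1 - dt2) < 1e-9:
                return dt1
            return (dt1 - dt2) / math.log(dt1 / dt2)

        def required_area(duty_kw, u_w_per_m2k, dt_lm):
            # A = Q / (U * LMTD); duty converted from kW to W.
            return duty_kw * 1000.0 / (u_w_per_m2k * dt_lm)

        # Hypothetical temperatures, loosely sized after the 312 kW pilot unit
        dt_lm = lmtd_counterflow(120.0, 65.0, 40.0, 60.0)
        print(required_area(312.0, 50.0, dt_lm))   # m^2, single-phase only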

  12. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, the Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters held fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.

  13. A novel proton exchange membrane fuel cell based power conversion system for telecom supply with genetic algorithm assisted intelligent interfacing converter

    International Nuclear Information System (INIS)

    Kaur, Rajvir; Krishnasamy, Vijayakumar; Muthusamy, Kaleeswari; Chinnamuthan, Periasamy

    2017-01-01

    Highlights: • Proton exchange membrane fuel cell based telecom tower supply is proposed. • The use of a diesel generator is eliminated and battery size is reduced. • A boost converter based intelligent interfacing unit is implemented. • A genetic algorithm assisted controller is proposed for effective interfacing. • The controller is robust against input and output disturbances. - Abstract: This paper presents a simple fuel cell based electric energy conversion system for supplying telecommunication towers, to reduce the operation and maintenance costs of telecom companies. The telecom industry is booming and penetrating deep into remote rural areas with unreliable or no grid supply. It has become heavily dependent on a diesel generator set and battery bank as backup for continuously supplying the base transceiver stations of telecom towers. This excessive use of backup supply has increased operational expenditure, made the power supply unreliable and become a threat to the environment. Given the significant development of, and interest in, clean energy sources, a proton exchange membrane fuel cell based supply for the base transceiver station is proposed, together with an intelligent interfacing unit. The required battery bank capacity is significantly reduced as compared with earlier solutions. Further, a simple closed-loop, genetic algorithm assisted controller is proposed for the intelligent interfacing unit, which consists of a power-electronic boost converter for power conditioning. The proposed genetic algorithm assisted controller ensures tight voltage regulation at the DC distribution bus of the base transceiver station, and provides robust performance under telecom load variation and proton exchange membrane fuel cell output voltage fluctuations. The complete electric energy conversion system, along with the telecom loads, is simulated in the MATLAB/Simulink platform and

  14. Protein folding simulations by generalized-ensemble algorithms.

    Science.gov (United States)

    Yoda, Takao; Sugita, Yuji; Okamoto, Yuko

    2014-01-01

    In the protein folding problem, conventional simulations in physical statistical mechanical ensembles, such as the canonical ensemble with fixed temperature, face a great difficulty. This is because there exists a huge number of local-minimum-energy states in the system, and conventional simulations tend to get trapped in these states, giving wrong results. Generalized-ensemble algorithms are based on artificial unphysical ensembles and overcome the above difficulty by performing random walks in potential energy, volume, and other physical quantities or their corresponding conjugate parameters such as temperature, pressure, etc. The advantage of generalized-ensemble simulations lies in the fact that they not only avoid getting trapped in local-minimum-energy states but also allow the calculation of physical quantities as functions of temperature or other parameters from a single simulation run. In this article we review the generalized-ensemble algorithms. Four examples, the multicanonical algorithm, the replica-exchange method, the replica-exchange multicanonical algorithm, and the multicanonical replica-exchange method, are described in detail. Examples of their applications to the protein folding problem are presented.
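
    The replica-exchange step at the heart of two of these methods reduces to a Metropolis criterion on swapping configurations between two temperatures. A minimal sketch follows (with k_B = 1 and energies and temperatures as plain numbers); it is the textbook acceptance rule, not the authors' specific implementation.

        import math
        import random

        def attempt_swap(e_i, e_j, t_i, t_j, rng=random):
            # Metropolis acceptance for exchanging the configurations of
            # replicas i and j, held at temperatures t_i and t_j (k_B = 1):
            # accept with probability min(1, exp(delta)), where
            # delta = (1/t_i - 1/t_j) * (e_i - e_j).
            delta = (1.0 / t_i - 1.0 / t_j) * (e_i - e_j)
            return delta >= 0 or rng.random() < math.exp(delta)

        # Example: neighbouring replicas in a temperature ladder
        print(attempt_swap(e_i=-120.0, e_j=-95.0, t_i=300.0, t_j=330.0))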

  15. Molecular simulations of hydrated proton exchange membranes. The structure

    Energy Technology Data Exchange (ETDEWEB)

    Marcharnd, Gabriel [Duisburg-Essen Univ., Essen (Germany). Lehrstuhl fuer Theoretische Chemie; Bordeaux Univ., Talence (France). Dept. of Chemistry; Bopp, Philippe A. [Bordeaux Univ., Talence (France). Dept. of Chemistry; Spohr, Eckhard [Duisburg-Essen Univ., Essen (Germany). Lehrstuhl fuer Theoretische Chemie

    2013-01-15

    The structure of two hydrated proton exchange membranes for fuel cells (PEMFC), Nafion® (DuPont) and Hyflon® (Solvay), is studied by all-atom molecular dynamics (MD) computer simulations. Since the characteristic times of these systems are long compared to the times for which they can be simulated, several different, but equivalent, initial configurations with a large degree of randomness are generated for different water contents and then equilibrated and simulated in parallel. A more constrained structure, analogous to the newest model proposed in the literature based on scattering experiments, is investigated in the same way. One might speculate that a limited degree of entanglement of the polymer chains is a key feature of the structures showing the best agreement with experiment. Nevertheless, the overall conclusion remains that the scattering experiments cannot distinguish between the several, in our view equally plausible, structural models. We thus find that the characteristic features of experimental scattering curves are, after equilibration, fairly well reproduced by all systems prepared with our method. We therefore study some structural details in more depth. We attempt to characterize the spatial and size distribution of the water-rich domains, which is where proton diffusion mostly takes place, using several clustering algorithms. (orig.)

  16. Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra

    2018-03-01

    The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is a science and an art for maintaining data secrecy. Encryption is a cryptographic operation in which data is transformed into cipher text, something unreadable and meaningless that cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make the result more secure. In this work, the Monoalphabetic algorithm and the XOR algorithm are combined to form a super-encryption. The Monoalphabetic algorithm works by changing a particular letter into a new letter based on existing keywords, while the XOR algorithm works by using the logic operation XOR. Since the Monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it back to its original form (plaintext), so data integrity is still ensured.
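
    The two stages compose directly: substitute letters first, then XOR the result with a repeating byte key; reversing the XOR and then the substitution restores the plaintext. A minimal Python sketch follows; the keyed alphabet and XOR key are arbitrary examples, not values from the paper.

        import string

        def mono_encrypt(plaintext, key_alphabet):
            # Monoalphabetic substitution: each letter maps to the letter at
            # the same position in the keyed alphabet; others pass through.
            table = str.maketrans(string.ascii_uppercase, key_alphabet)
            return plaintext.upper().translate(table)

        def mono_decrypt(ciphertext, key_alphabet):
            table = str.maketrans(key_alphabet, string.ascii_uppercase)
            return ciphertext.translate(table)

        def xor_bytes(data, key):
            # XOR with a repeating key; applying it twice restores the input.
            return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

        # Hypothetical keys for illustration only
        KEY_ALPHABET = "QWERTYUIOPASDFGHJKLZXCVBNM"   # a permutation of A-Z
        XOR_KEY = b"k3y"

        stage1 = mono_encrypt("SUPER ENCRYPTION", KEY_ALPHABET)
        cipher = xor_bytes(stage1.encode(), XOR_KEY)        # super-encryption
        restored = mono_decrypt(xor_bytes(cipher, XOR_KEY).decode(),
                                KEY_ALPHABET)
        print(restored)   # SUPER ENCRYPTION; integrity preserved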

  17. Randomized Assignments for Barter Exchanges: Fairness vs Efficiency

    DEFF Research Database (Denmark)

    Fang, Wenyi; Filos-Ratsikas, Aris; Frederiksen, Søren Kristoffer Stiil

    2015-01-01

    We study fairness and efficiency properties of randomized algorithms for barter exchanges with direct applications to kidney exchange problems. It is well documented that randomization can serve as a tool to ensure fairness among participants. However, in many applications, practical constraints...

  18. Instrument-induced spatial crosstalk deconvolution algorithm

    Science.gov (United States)

    Wright, Valerie G.; Evans, Nathan L., Jr.

    1986-01-01

    An algorithm has been developed which reduces the effects of (deconvolves) instrument-induced spatial crosstalk in satellite image data by several orders of magnitude where highly precise radiometry is required. The algorithm is based upon radiance transfer ratios, which are defined as the fractional bilateral exchange of energy between pixels A and B.
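
    One plausible reading of this scheme: if the transfer ratios are known, the observed image is a linear mixing of the true image, and deconvolution amounts to solving that linear system. The Python sketch below illustrates this reading on a toy one-dimensional sensor; the ratio function and leakage value are hypothetical stand-ins, not instrument data.

        import numpy as np

        def build_crosstalk_matrix(n, ratio):
            # ratio(i, j): fractional bilateral exchange of energy between
            # pixels i and j (hypothetical instrument-characterization data).
            C = np.eye(n)
            for i in range(n):
                for j in range(n):
                    if i != j:
                        r = ratio(i, j)
                        C[i, j] += r     # energy received from pixel j
                        C[i, i] -= r     # energy leaked away from pixel i
            return C

        def deconvolve(observed, C):
            # Invert the instrument-induced mixing: solve C @ true = observed.
            return np.linalg.solve(C, observed)

        # Toy example: nearest-neighbour leakage of 1% along a 5-pixel line
        ratio = lambda i, j: 0.01 if abs(i - j) == 1 else 0.0
        C = build_crosstalk_matrix(5, ratio)
        print(deconvolve(np.array([10.0, 10.2, 11.0, 10.1, 10.0]), C))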

  19. Generation of Plausible Hurricane Tracks for Preparedness Exercises

    Science.gov (United States)

    2017-04-25

    product kernel. KDE with a beta kernel generates maximum sustained winds, and linear regression simulates minimum central pressure. Maximum significant... the Storm level models the number of waypoints M, birth and death locations w1 and wM, and the total number of steps L. The Stage level models the... MATLAB and leverages HURDAT2 to construct data-driven statistical models that can generate plausible yet never-before-seen storm behaviors. For a

  20. Optimization of Heat Exchangers

    International Nuclear Information System (INIS)

    Catton, Ivan

    2010-01-01

    The objective of this research is to develop tools to design and optimize heat exchangers (HE) and compact heat exchangers (CHE) for the intermediate-loop heat transport systems found in the very high temperature reactor (VHTR) and other Generation IV designs, by addressing heat transfer surface augmentation and conjugate modeling. To optimize a heat exchanger, a fast-running model must be created that allows multiple designs to be compared quickly. To model a heat exchanger, volume averaging theory (VAT) is used. VAT allows the conservation of mass, momentum and energy to be solved point by point in a 3-dimensional computer model of a heat exchanger. The end product of this project is a computer code that can predict an optimal configuration for a heat exchanger given only a few constraints (input fluids, size, cost, etc.). Because the VAT computer code can model characteristics (pumping power, temperatures, and cost) of heat exchangers more quickly than traditional CFD or experiment, every geometric parameter can be optimized simultaneously. Using design of experiments (DOE) and genetic algorithms (GA) to optimize the results of the computer code will improve heat exchanger design.

  1. Crystal structure and cation exchanging properties of a novel open framework phosphate of Ce (IV)

    Energy Technology Data Exchange (ETDEWEB)

    Bevara, Samatha; Achary, S. N., E-mail: sachary@barc.gov.in; Tyagi, A. K. [Chemistry Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Homi Bhabha National Insitute, Anushakti Nagar, Mumbai 400094 (India); Patwe, S. J. [Chemistry Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Sinha, A. K. [Indus Synchrotrons Utilization Division, Raja Ramanna Centre for Advanced Technology, Indore 452013 (India); Mishra, R. K.; Kumar, Amar; Kaushik, C. P. [Waste Management Division, Bhabha Atomic Research Centre, Mumbai 400085 (India)

    2016-05-23

    Herein we report the preparation, crystal structure and ion-exchange properties of a new phosphate of tetravalent cerium, K₂Ce(PO₄)₂. A monoclinic structure having a framework-type arrangement of Ce(PO₄)₆ units, formed by CeO₈ square antiprisms and PO₄ tetrahedra, is assigned to K₂Ce(PO₄)₂. The K⁺ ions occupy the channels formed by the Ce(PO₄)₆ units and provide overall charge neutrality. The unique channel-type arrangement of the K⁺ ions makes them exchangeable with other cations. The ion-exchange properties of K₂Ce(PO₄)₂ have been investigated by equilibrating with a solution of ⁹⁰Sr followed by radiometric analysis. Under optimum conditions, significant exchange of K⁺ with Sr²⁺, with Kd ≈ 8000 mL/g, is observed. The details of the crystal structure and ion-exchange properties are explained and a plausible mechanism for ion exchange is presented.

  2. A Genetic Algorithm That Exchanges Neighboring Centers for Fuzzy c-Means Clustering

    Science.gov (United States)

    Chahine, Firas Safwan

    2012-01-01

    Clustering algorithms are widely used in pattern recognition and data mining applications. Due to their computational efficiency, partitional clustering algorithms are better suited for applications with large datasets than hierarchical clustering algorithms. K-means is among the most popular partitional clustering algorithms, but has a major…

  3. Trilateral market coupling. Algorithm appendix

    International Nuclear Information System (INIS)

    2006-03-01

    Market Coupling is both a mechanism for matching orders on the exchange and an implicit cross-border capacity allocation mechanism. Market Coupling improves the economic surplus of the coupled markets: the highest purchase orders and the lowest sale orders of the coupled power exchanges are matched, regardless of the area where they have been submitted; matching results depend however on the Available Transfer Capacity (ATC) between the coupled hubs. Market prices and schedules of the day-ahead power exchanges of the several connected markets are simultaneously determined with the use of the Available Transfer Capacity defined by the relevant Transmission System Operators. The transmission capacity is thereby implicitly auctioned and the implicit cost of the transmission capacity from one market to the other is the price difference between the two markets. In particular, if the transmission capacity between two markets is not fully used, there is no price difference between the markets and the implicit cost of the transmission capacity is null. Market coupling relies on the principle that the market with the lowest price exports electricity to the market with the highest price. Two situations may appear: either the Available Transfer Capacity (ATC) is large enough and the prices of both markets are equalized (price convergence), or the ATC is too small and the prices cannot be equalized. The Market Coupling algorithm takes as an input: 1 - The Available Transfer Capacity (ATC) between each area for each flow direction and each Settlement Period of the following day (i.e. for each hour of following day); 2 - The (Block Free) Net Export Curves (NEC) of each market for each hour of the following day, i.e., the difference between the total quantity of Divisible Hourly Bids and the total quantity of Divisible Hourly Offers for each price level. The NEC reflects a market's import or export volume sensitivity to price. 3 - The Block Orders submitted by the participants in
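
    For two markets the coupling principle can be illustrated directly: find the flow that equalizes prices if the ATC permits, and otherwise let the ATC bind, leaving a price spread equal to the implicit cost of the transmission capacity. The Python sketch below encodes this two-market case with assumed monotone inverse net export curves; it is an illustration of the principle, not the trilateral algorithm itself.

        def couple_two_markets(p_a, p_b, atc_ab, atc_ba, tol=1e-6):
            # p_a(q), p_b(q): inverse net export curves, i.e. the market
            # price when the market's net export volume is q (assumed
            # monotonically increasing: exporting lifts the local price).
            # These callables are illustrative assumptions, not the
            # exchanges' actual order books.
            # Flow f > 0 means A exports f MW to B, bounded by the ATCs.
            lo, hi = -atc_ba, atc_ab
            if p_a(hi) <= p_b(-hi):    # A stays cheaper even at full export:
                return hi, p_a(hi), p_b(-hi)        # ATC binds, prices differ
            if p_a(lo) >= p_b(-lo):    # B cheaper even at full reverse flow
                return lo, p_a(lo), p_b(-lo)
            while hi - lo > tol:       # bisect for the price-equalizing flow
                mid = 0.5 * (lo + hi)
                if p_a(mid) < p_b(-mid):
                    lo = mid
                else:
                    hi = mid
            f = 0.5 * (lo + hi)
            return f, p_a(f), p_b(-f)

        # Linear toy curves: A is the low-price area, B the high-price area
        flow, price_a, price_b = couple_two_markets(
            p_a=lambda q: 30 + 0.02 * q, p_b=lambda q: 50 + 0.03 * q,
            atc_ab=300, atc_ba=300)
        print(flow, price_a, price_b)   # ATC binds: 300 MW, 36 vs 41 EUR/MWh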

  4. Trilateral market coupling. Algorithm appendix

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-03-15

    Market Coupling is both a mechanism for matching orders on the exchange and an implicit cross-border capacity allocation mechanism. Market Coupling improves the economic surplus of the coupled markets: the highest purchase orders and the lowest sale orders of the coupled power exchanges are matched, regardless of the area where they have been submitted; matching results depend however on the Available Transfer Capacity (ATC) between the coupled hubs. Market prices and schedules of the day-ahead power exchanges of the several connected markets are simultaneously determined with the use of the Available Transfer Capacity defined by the relevant Transmission System Operators. The transmission capacity is thereby implicitly auctioned and the implicit cost of the transmission capacity from one market to the other is the price difference between the two markets. In particular, if the transmission capacity between two markets is not fully used, there is no price difference between the markets and the implicit cost of the transmission capacity is null. Market coupling relies on the principle that the market with the lowest price exports electricity to the market with the highest price. Two situations may appear: either the Available Transfer Capacity (ATC) is large enough and the prices of both markets are equalized (price convergence), or the ATC is too small and the prices cannot be equalized. The Market Coupling algorithm takes as an input: 1 - The Available Transfer Capacity (ATC) between each area for each flow direction and each Settlement Period of the following day (i.e. for each hour of following day); 2 - The (Block Free) Net Export Curves (NEC) of each market for each hour of the following day, i.e., the difference between the total quantity of Divisible Hourly Bids and the total quantity of Divisible Hourly Offers for each price level. The NEC reflects a market's import or export volume sensitivity to price. 3 - The Block Orders submitted by the

  5. Of paradox and plausibility: the dynamic of change in medical law.

    Science.gov (United States)

    Harrington, John

    2014-01-01

    This article develops a model of change in medical law. Drawing on systems theory, it argues that medical law participates in a dynamic of 'deparadoxification' and 'reparadoxification' whereby the underlying contingency of the law is variously concealed through plausible argumentation, or revealed by critical challenge. Medical law is, thus, thoroughly rhetorical. An examination of the development of the law on abortion and on the sterilization of incompetent adults shows that plausibility is achieved through the deployment of substantive common sense and formal stylistic devices. It is undermined where these elements are shown to be arbitrary and constructed. In conclusion, it is argued that the politics of medical law are constituted by this antagonistic process of establishing and challenging provisionally stable normative regimes. © The Author [2014]. Published by Oxford University Press; all rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. An improved approach to exchange non-rectangular departments in CRAFT algorithm

    OpenAIRE

    Esmaeili Aliabadi, Danial; Pourghannad, Behrooz

    2012-01-01

    In this paper, an algorithm which improves the efficacy of the CRAFT algorithm is developed. CRAFT is an algorithm widely used to solve facility layout problems. Our proposed method, named Plasma, can be used to improve CRAFT results. The Plasma algorithm is tested on several sample problems. The comparison between Plasma and classic CRAFT, and also Micro-CRAFT, indicates that Plasma is successful in reducing cost in comparison with both CRAFT and Micro-CRAFT.

  7. A Stochastic Model of Plausibility in Live Virtual Constructive Environments

    Science.gov (United States)

    2017-09-14

    Model parameters are inputs to the computer model (the mathematical model), but their exact values are unknown to experimentalists. The dissertation, available at https://scholar.afit.edu/etd, includes a method for computing plausibility exceedance probabilities.

  8. Fe atom exchange between aqueous Fe2+ and magnetite.

    Science.gov (United States)

    Gorski, Christopher A; Handler, Robert M; Beard, Brian L; Pasakarnis, Timothy; Johnson, Clark M; Scherer, Michelle M

    2012-11-20

    The reaction between magnetite and aqueous Fe(2+) has been extensively studied due to its role in contaminant reduction, trace-metal sequestration, and microbial respiration. Previous work has demonstrated that the reaction of Fe(2+) with magnetite (Fe(3)O(4)) results in the structural incorporation of Fe(2+) and an increase in the bulk Fe(2+) content of magnetite. It is unclear, however, whether significant Fe atom exchange occurs between magnetite and aqueous Fe(2+), as has been observed for other Fe oxides. Here, we measured the extent of Fe atom exchange between aqueous Fe(2+) and magnetite by reacting isotopically "normal" magnetite with (57)Fe-enriched aqueous Fe(2+). The extent of Fe atom exchange between magnetite and aqueous Fe(2+) was significant (54-71%), and went well beyond the amount of Fe atoms found at the near surface. Mössbauer spectroscopy of magnetite reacted with (56)Fe(2+) indicates that no preferential exchange of octahedral or tetrahedral sites occurred. Exchange experiments conducted with Co-ferrite (Co(2+)Fe(2)(3+)O(4)) showed little impact of Co substitution on the rate or extent of atom exchange. Bulk electron conduction, as previously invoked to explain Fe atom exchange in goethite, is a possible mechanism, but if it is occurring, conduction does not appear to be the rate-limiting step. The lack of significant impact of Co substitution on the kinetics of Fe atom exchange, and the relatively high diffusion coefficients reported for magnetite, suggest that for magnetite, unlike goethite, Fe atom diffusion is a plausible mechanism to explain the rapid rates of Fe atom exchange in magnetite.

  9. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Full Text Available Although the concept of algorithms was established long ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation can no longer be clearly divided out. An observation of the defining aspects of such a medium is attempted by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.

  10. A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm

    Science.gov (United States)

    Thirer, Nonel

    2013-05-01

    With the evolution of digital data storage and exchange, it is essential to protect confidential information from unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware, and many methods to attack cipher texts have been developed as well. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of cipher texts and also in encryption ciphers. This paper analyses the possibility of using the genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: Input data, AES Core, Key generator, Output data) to provide fast encryption and storage/transmission of a large amount of data.

  11. Modeling and optimization for proton exchange membrane fuel cell stack using aging and challenging P systems based optimization algorithm

    International Nuclear Information System (INIS)

    Yang, Shipin; Chellali, Ryad; Lu, Xiaohua; Li, Lijuan; Bo, Cuimei

    2016-01-01

    Accurate models of PEM (proton exchange membrane) fuel cells are of great significance for the analysis and control of power generation. We present a new semi-empirical model to predict the voltage output of PEM fuel cell stacks. We also introduce a new estimation method, called AC-POA (aging and challenging P systems based optimization algorithm), for deriving the parameters of the semi-empirical model. In our model, the cathode inlet pressure is selected as an additional factor to modify the expression for the concentration over-voltage V_con in the traditional Amphlett PEM fuel cell model. In AC-POA, an aging-mechanism-inspired object updating rule is merged into the existing P system. We validate the effectiveness of AC-POA and the fitting accuracy of our model through experiments. Modeling comparison results show that the predictions of our model fit the actual sample data best. - Highlights: • Presented a cathode-inlet-pressure-based modified semi-empirical model for a PEMFC stack. • Introduced a new aging-inspired improved parameter estimation algorithm, AC-POA. • Validated the effectiveness of AC-POA and the new model. • Remodeled the practical PEM fuel cell system.
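
    The Amphlett-style voltage expression referred to above combines a Nernst potential with activation, ohmic and concentration terms. The Python sketch below uses commonly cited illustrative coefficients, not the article's fitted parameters, and omits the cathode-inlet-pressure modification of the concentration term that is the paper's contribution.

        import math

        def cell_voltage(i, T=333.0, c_o2=1.2e-6, r_m=2.5e-4,
                         b=0.016, j_max=1.5, area=50.0,
                         xi=(-0.9514, 3.12e-3, 7.4e-5, -1.87e-4)):
            # Amphlett-style semi-empirical cell voltage; the xi coefficients
            # are illustrative literature-style values, not fitted ones.
            e_nernst = 1.229                                 # simplified, V
            eta_act = (xi[0] + xi[1] * T + xi[2] * T * math.log(c_o2)
                       + xi[3] * T * math.log(i))            # negative, V
            v_ohm = i * r_m                                  # ohmic loss, V
            j = i / area                                     # A/cm^2
            eta_con = b * math.log(1.0 - j / j_max)          # negative, V;
            # the article additionally modifies this term with the
            # cathode inlet pressure p_c
            return e_nernst + eta_act - v_ohm + eta_con

        def stack_voltage(i, n_cells=24, **kw):
            return n_cells * cell_voltage(i, **kw)

        print(stack_voltage(30.0))   # ~18 V for these illustrative parameters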

  12. An intense Nigerian stock exchange market prediction using logistic ...

    African Journals Online (AJOL)

    This paper is a continuation of our research work on Nigerian Stock Exchange Market (NSEM) uncertainties. In our previous work (Magaji et al., 2013) we presented the Naive Bayes and SVM-SMO algorithms as tools for predicting the Nigerian Stock Exchange Market; subsequently we used the same transformed data ...

  13. Probabilistic reasoning in intelligent systems networks of plausible inference

    CERN Document Server

    Pearl, Judea

    1988-01-01

    Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty--and offers techniques, based on belief networks, that provide

  14. A Note on Unified Statistics Including Fermi-Dirac, Bose-Einstein, and Tsallis Statistics, and Plausible Extension to Anisotropic Effect

    Directory of Open Access Journals (Sweden)

    Christianto V.

    2007-04-01

    Full Text Available In the light of some recent hypotheses suggesting a plausible unification of thermostatistics in which Fermi-Dirac, Bose-Einstein and Tsallis statistics become special subsets, we consider a further plausible extension to include non-integer Hausdorff dimension, which becomes a realization of the fractal entropy concept. In the subsequent section, we also discuss a plausible extension of this unified statistics to include anisotropic effects by using a quaternion oscillator, which may be observed in the context of the Cosmic Microwave Background Radiation. Further observation is of course recommended in order to refute or verify this proposition.

  15. Semantics-based plausible reasoning to extend the knowledge coverage of medical knowledge bases for improved clinical decision support.

    Science.gov (United States)

    Mohammadhassanzadeh, Hossein; Van Woensel, William; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2017-01-01

    Capturing complete medical knowledge is challenging, often due to incomplete patient Electronic Health Records (EHR), but also because of valuable, tacit medical knowledge hidden away in physicians' experiences. To extend the coverage of incomplete medical knowledge-based systems beyond their deductive closure, and thus enhance their decision-support capabilities, we argue that innovative, multi-strategy reasoning approaches should be applied. In particular, plausible reasoning mechanisms apply patterns from human thought processes, such as generalization, similarity and interpolation, based on attributional, hierarchical, and relational knowledge. Plausible reasoning mechanisms include inductive reasoning, which generalizes the commonalities among the data to induce new rules, and analogical reasoning, which is guided by data similarities to infer new facts. By further leveraging rich, biomedical Semantic Web ontologies to represent medical knowledge, both known and tentative, we increase the accuracy and expressivity of plausible reasoning, and cope with issues such as data heterogeneity, inconsistency and interoperability. In this paper, we present a Semantic Web-based, multi-strategy reasoning approach, which integrates deductive and plausible reasoning and exploits Semantic Web technology to solve complex clinical decision support queries. We evaluated our system using a real-world medical dataset of patients with hepatitis, from which we randomly removed different percentages of data (5%, 10%, 15%, and 20%) to reflect scenarios with increasing amounts of incomplete medical knowledge. To increase the reliability of the results, we generated 5 independent datasets for each percentage of missing values, which resulted in 20 experimental datasets (in addition to the original dataset). The results show that plausibly inferred knowledge extends the coverage of the knowledge base by, on average, 2%, 7%, 12%, and 16% for datasets with, respectively, 5%, 10%, 15

  16. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom, such as proteins and nucleic acids, there exists an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in these local-minimum-energy states. Enhanced conformational sampling techniques are thus in great demand. A simulation in a generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, the multicanonical algorithm, simulated tempering, and the replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.

  17. Kohn–Sham exchange-correlation potentials from second-order reduced density matrices

    Energy Technology Data Exchange (ETDEWEB)

    Cuevas-Saavedra, Rogelio; Staroverov, Viktor N., E-mail: vstarove@uwo.ca [Department of Chemistry, The University of Western Ontario, London, Ontario N6A 5B7 (Canada); Ayers, Paul W. [Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1 (Canada)

    2015-12-28

    We describe a practical algorithm for constructing the Kohn–Sham exchange-correlation potential corresponding to a given second-order reduced density matrix. Unlike conventional Kohn–Sham inversion methods in which such potentials are extracted from ground-state electron densities, the proposed technique delivers unambiguous results in finite basis sets. The approach can also be used to separate approximately the exchange and correlation potentials for a many-electron system for which the reduced density matrix is known. The algorithm is implemented for configuration-interaction wave functions and its performance is illustrated with numerical examples.

  18. Epidemiologic studies of occupational pesticide exposure and cancer: regulatory risk assessments and biologic plausibility.

    Science.gov (United States)

    Acquavella, John; Doe, John; Tomenson, John; Chester, Graham; Cowell, John; Bloemen, Louis

    2003-01-01

    Epidemiologic studies frequently show associations between self-reported use of specific pesticides and human cancers. These findings have engendered debate largely on methodologic grounds. However, biologic plausibility is a more fundamental issue that has received only superficial attention. The purpose of this commentary is to review briefly the toxicology and exposure data that are developed as part of the pesticide regulatory process and to discuss the applicability of this data to epidemiologic research. The authors also provide a generic example of how worker pesticide exposures might be estimated and compared to relevant toxicologic dose levels. This example provides guidance for better characterization of exposure and for consideration of biologic plausibility in epidemiologic studies of pesticides.

  19. From information processing to decisions: Formalizing and comparing psychologically plausible choice models.

    Science.gov (United States)

    Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten

    2017-08-01

    Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can be compared directly using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
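
    The deterministic skeleton of take-the-best is short: order the cues by validity and let the first discriminating cue decide. The Python sketch below shows that skeleton with hypothetical cues and validities; the probabilistic version studied in the paper replaces the deterministic stopping rule with cue-specific error probabilities.

        def take_the_best(option_a, option_b, cue_validities):
            # Cues are examined in descending order of validity; the first
            # cue that discriminates decides. Returns 'A', 'B', or 'guess'.
            # option_a / option_b: dicts mapping cue name -> binary value.
            for cue, _ in sorted(cue_validities.items(),
                                 key=lambda kv: -kv[1]):
                a, b = option_a[cue], option_b[cue]
                if a != b:
                    return 'A' if a > b else 'B'
            return 'guess'   # no cue discriminates

        # Hypothetical city-size cues with assumed validities
        validities = {'capital': 0.9, 'airport': 0.8, 'university': 0.7}
        city1 = {'capital': 0, 'airport': 1, 'university': 1}
        city2 = {'capital': 0, 'airport': 0, 'university': 1}
        print(take_the_best(city1, city2, validities))   # 'A' via airport cue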

  20. L’Analyse du Risque Géopolitique: du Plausible au Probable

    OpenAIRE

    Adib Bencherif

    2015-01-01

    This paper explores the logical process behind risk analysis, particularly in geopolitics. The main goal is to demonstrate the ambiguities behind risk calculation and to highlight the continuum between plausibility and probability in risk analysis. To demonstrate this, the author introduces two notions: the inference of abduction, often neglected in the social sciences literature, and Bayesian calculation. Inspired by the works of Louise Amoore, this paper tries to go further by ...

  1. Distributed k-Means Algorithm and Fuzzy c-Means Algorithm for Sensor Networks Based on Multiagent Consensus Theory.

    Science.gov (United States)

    Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing

    2016-03-03

    This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) in which each node is equipped with sensors. The underlying topology of the WSN is assumed to be strongly connected. The consensus algorithm from multiagent consensus theory is utilized to exchange the measurement information of the sensors in the WSN. To obtain a faster convergence speed as well as a higher possibility of reaching the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as those given by the centralized clustering algorithms.
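
    The consensus primitive such algorithms build on can be sketched in a few lines: each node repeatedly averages with its neighbours, and on a connected graph all values converge to the network-wide mean, so quantities like cluster centroids can be agreed upon without a fusion center. The Python sketch below shows only this generic primitive, not the paper's full distributed k-means; the step size eps must stay below the reciprocal of the maximum node degree for convergence.

        import numpy as np

        def consensus_average(values, neighbors, steps=200, eps=0.1):
            # Discrete-time consensus: each node replaces its value with a
            # weighted average of its own and its neighbours' values.
            x = np.array(values, dtype=float)
            for _ in range(steps):
                x = x + eps * np.array(
                    [sum(x[j] - x[i] for j in neighbors[i])
                     for i in range(len(x))])
            return x

        # Ring of 4 sensor nodes, each starting from its local measurement
        print(consensus_average([1.0, 5.0, 3.0, 7.0],
                                {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}))
        # all entries approach the global mean 4.0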

  2. A replica exchange Monte Carlo algorithm for protein folding in the HP model

    Directory of Open Access Journals (Sweden)

    Shmygelska Alena

    2007-09-01

    Full Text Available Abstract Background The ab initio protein folding problem consists of predicting protein tertiary structure from a given amino acid sequence by minimizing an energy function; it is one of the most important and challenging problems in biochemistry, molecular biology and biophysics. The ab initio protein folding problem is computationally challenging and has been shown to be NP-hard even when conformations are restricted to a lattice. In this work, we implement and evaluate the replica exchange Monte Carlo (REMC) method, which has already been applied very successfully to more complex protein models and other optimization problems with complex energy landscapes, in combination with the highly effective pull move neighbourhood in two widely studied Hydrophobic Polar (HP) lattice models. Results We demonstrate that REMC is highly effective for solving instances of the square (2D) and cubic (3D) HP protein folding problem. When using the pull move neighbourhood, REMC outperforms current state-of-the-art algorithms for most benchmark instances. Additionally, we show that this new algorithm provides a larger ensemble of ground-state structures than the existing state-of-the-art methods. Furthermore, it scales well with sequence length, and it finds significantly better conformations on long biological sequences and sequences with a provably unique ground-state structure, which is believed to be a characteristic of real proteins. We also present evidence that our REMC algorithm can fold sequences which exhibit significant interaction between termini in the hydrophobic core relatively easily. Conclusion We demonstrate that REMC utilizing the pull move

  3. The Radical Promise of Reformist Zeal: What Makes "Inquiry for Equity" Plausible?

    Science.gov (United States)

    Lashaw, Amanda

    2010-01-01

    Education reform movements often promise more than they deliver. Why are such promises plausible in light of seemingly perpetual education reform? Drawing on ethnographic fieldwork based in a nonprofit education reform organization, this article explores the appeal of popular notions about "using data to close the racial achievement…

  4. Construction of a 21-Component Layered Mixture Experiment Design Using a New Mixture Coordinate-Exchange Algorithm

    International Nuclear Information System (INIS)

    Piepel, Gregory F.; Cooley, Scott K.; Jones, Bradley

    2005-01-01

    This paper describes the solution to a unique and challenging mixture experiment design problem involving: (1) 19 and 21 components for two different parts of the design, (2) many single-component and multi-component constraints, (3) augmentation of existing data, (4) a layered design developed in stages, and (5) a no-candidate-point optimal design approach. The problem involved studying the liquidus temperature of spinel crystals as a function of nuclear waste glass composition. The statistical objective was to develop an experimental design by augmenting existing glasses with new nonradioactive and radioactive glasses chosen to cover the designated nonradioactive and radioactive experimental regions. The existing 144 glasses were expressed as 19-component nonradioactive compositions and then augmented with 40 new nonradioactive glasses. These included 8 glasses on the outer layer of the region, 27 glasses on an inner layer, 2 replicate glasses at the centroid, and one replicate each of three existing glasses. Then, the 144 + 40 = 184 glasses were expressed as 21-component radioactive compositions, and augmented with 5 radioactive glasses. A D-optimal design algorithm was used to select the new outer layer, inner layer, and radioactive glasses. Several statistical software packages can generate D-optimal experimental designs, but nearly all of them require a set of candidate points (e.g., vertices) from which to select design points. The large number of components (19 or 21) and many constraints made it impossible to generate the huge number of vertices and other typical candidate points. JMP was used to select design points without candidate points. JMP uses a coordinate-exchange algorithm modified for mixture experiments, which is discussed and illustrated in the paper
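
    A coordinate-exchange pass needs no candidate set: it sweeps one coordinate of one run at a time over a grid of levels and keeps any change that improves det(X'X). The Python sketch below shows the generic, non-mixture form for a first-order model; JMP's mixture variant additionally keeps each row on the constrained simplex, which is what made it usable for this problem.

        import numpy as np

        def d_criterion(X):
            # D-optimality criterion det(M'M) for a first-order model
            # with intercept: model columns are [1, x1, ..., xk].
            M = np.hstack([np.ones((X.shape[0], 1)), X])
            return np.linalg.det(M.T @ M)

        def coordinate_exchange(n_runs, k, levels, n_starts=5, seed=0):
            # Candidate-free search: sweep each coordinate of each run over
            # the grid of levels, keeping any strict improvement; restart
            # from several random designs to dodge poor local optima.
            rng = np.random.default_rng(seed)
            best_X, best_d = None, -np.inf
            for _ in range(n_starts):
                X = rng.choice(levels, size=(n_runs, k))
                d, improved = d_criterion(X), True
                while improved:
                    improved = False
                    for i in range(n_runs):
                        for j in range(k):
                            for lv in levels:
                                old = X[i, j]
                                if lv == old:
                                    continue
                                X[i, j] = lv
                                d_new = d_criterion(X)
                                if d_new > d + 1e-12:
                                    d, improved = d_new, True
                                else:
                                    X[i, j] = old   # revert non-improving move
                if d > best_d:
                    best_X, best_d = X.copy(), d
            return best_X, best_d

        X, d = coordinate_exchange(n_runs=8, k=3, levels=np.linspace(0, 1, 5))
        print(d)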

  5. An ATR architecture for algorithm development and testing

    Science.gov (United States)

    Breivik, Gøril M.; Løkken, Kristin H.; Brattli, Alvin; Palm, Hans C.; Haavardsholm, Trym

    2013-05-01

    A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and test of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real-time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient. The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.

  6. Design optimization of shell-and-tube heat exchangers using single objective and multiobjective particle swarm optimization

    International Nuclear Information System (INIS)

    Elsays, Mostafa A.; Naguib Aly, M; Badawi, Alya A.

    2010-01-01

    The Particle Swarm Optimization (PSO) algorithm is used to optimize the design of shell-and-tube heat exchangers and determine the optimal feasible solutions so as to eliminate trial-and-error during the design process. The design formulation takes into account the area and the total annual cost of heat exchangers as two objective functions, together with operating as well as geometrical constraints. The Nonlinear Constrained Single Objective Particle Swarm Optimization (NCSOPSO) algorithm is used to minimize and find the optimal feasible solution for each of the nonlinear constrained objective functions alone, respectively. Then, a novel Nonlinear Constrained Multi-objective Particle Swarm Optimization (NCMOPSO) algorithm is used to minimize and find the Pareto optimal solutions for both of the nonlinear constrained objective functions together. The experimental results show that the two algorithms are very efficient and fast, and can find accurate optimal feasible solutions of the shell-and-tube heat exchanger design optimization problem. (orig.)
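
    The global-best PSO underlying both variants updates each particle's velocity from inertia, memory of its own best point, and attraction to the swarm-wide best. The Python sketch below is the unconstrained textbook form with a toy objective; in the study, the nonlinear constraints would enter through penalties or feasibility rules in the objective function.

        import numpy as np

        def pso_minimize(f, bounds, n_particles=30, iters=200,
                         w=0.7, c1=1.5, c2=1.5, seed=0):
            # Standard global-best particle swarm: velocities blend inertia,
            # attraction to each particle's own best, and attraction to the
            # swarm-wide best; positions are clipped to the search box.
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
            g = pbest[np.argmin(pbest_f)].copy()
            for _ in range(iters):
                r1, r2 = rng.uniform(size=(2, n_particles, len(lo)))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.array([f(p) for p in x])
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[np.argmin(pbest_f)].copy()
            return g, pbest_f.min()

        # Toy stand-in for a penalized (cost, area)-style objective
        best, val = pso_minimize(lambda p: (p[0] - 1)**2 + (p[1] + 2)**2,
                                 bounds=[(-5, 5), (-5, 5)])
        print(best, val)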

  7. An Agent-Based Co-Evolutionary Multi-Objective Algorithm for Portfolio Optimization

    Directory of Open Access Journals (Sweden)

    Rafał Dreżewski

    2017-08-01

    Full Text Available Algorithms based on the process of natural evolution are widely used to solve multi-objective optimization problems. In this paper we propose the agent-based co-evolutionary algorithm for multi-objective portfolio optimization. The proposed technique is compared experimentally to the genetic algorithm, co-evolutionary algorithm and a more classical approach—the trend-following algorithm. During the experiments historical data from the Warsaw Stock Exchange is used in order to assess the performance of the compared algorithms. Finally, we draw some conclusions from these experiments, showing the strong and weak points of all the techniques.

  8. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence from Word Segmentation

    Science.gov (United States)

    Phillips, Lawrence; Pearl, Lisa

    2015-01-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's "cognitive plausibility." We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition…

  9. Channel Access Algorithm Design for Automatic Identification System

    Institute of Scientific and Technical Information of China (English)

    Oh Sang-heon; Kim Seung-pum; Hwang Dong-hwan; Park Chan-sik; Lee Sang-jeong

    2003-01-01

    The Automatic Identification System (AIS) is maritime equipment that allows an efficient exchange of navigational data between ships and between ships and shore stations. It utilizes a channel access algorithm which can quickly resolve conflicts without any intervention from control stations. In this paper, a design of a channel access algorithm for the AIS is presented. The input/output relationship of each access algorithm module is defined by drawing the state transition diagram, dataflow diagram and flowchart based on the technical standard, ITU-R M.1371. In order to verify the designed channel access algorithm, a simulator was developed using the C/C++ programming language. The results show that the proposed channel access algorithm can properly allocate transmission slots and meet the operational performance requirements specified by the technical standard.

  10. Optimization of shell-and-tube heat exchangers conforming to TEMA standards with designs motivated by constructal theory

    International Nuclear Information System (INIS)

    Yang, Jie; Fan, Aiwu; Liu, Wei; Jacobi, Anthony M.

    2014-01-01

    Highlights: • A design method of heat exchangers motivated by constructal theory is proposed. • A genetic algorithm is applied and the TEMA standards are rigorously followed. • Three cases are studied to illustrate the advantage of the proposed design method. • The design method will reduce the total cost compared to two other methods. - Abstract: A modified optimization design approach motivated by constructal theory is proposed for shell-and-tube heat exchangers in the present paper. In this method, a shell-and-tube heat exchanger is divided into several in-series heat exchangers. The Tubular Exchanger Manufacturers Association (TEMA) standards are rigorously followed for all design parameters. The total cost of the whole shell-and-tube heat exchanger is set as the objective function, including the investment cost for initial manufacture and the operational cost involving the power consumption to overcome the frictional pressure loss. A genetic algorithm is applied to minimize the cost function by adjusting parameters such as the tube and shell diameters, tube length and tube arrangement. Three cases are studied which indicate that the modified design approach can significantly reduce the total cost compared to the original design method and traditional genetic algorithm design method

  11. Queue and stack sorting algorithm optimization and performance analysis

    Science.gov (United States)

    Qian, Mingzhu; Wang, Xiaobao

    2018-04-01

    Sorting is one of the basic operations in software development, and data structures courses cover many kinds of sorting algorithms. The performance of the sorting algorithm is directly related to the efficiency of the software. Much research effort is continually spent on optimizing sorting algorithms for efficiency. Here the authors further study sorting that combines a queue with stacks, alternating operations so as to exploit the storage properties of both structures and thus avoid the large number of exchange or move operations required in traditional sorts. Building on and optimizing the existing work, the focus is on improving time complexity. The experimental results show that the improvement is effective; the time complexity, space complexity and stability of the algorithm are studied correspondingly. The improved and optimized algorithm increases practicability.

  12. On the biological plausibility of Wind Turbine Syndrome.

    Science.gov (United States)

    Harrison, Robert V

    2015-01-01

    An emerging environmental health issue relates to potential ill-effects of wind turbine noise. There have been numerous suggestions that the low-frequency acoustic components in wind turbine signals can cause symptoms associated with vestibular system disorders, namely vertigo, nausea, and nystagmus. This constellation of symptoms has been labeled as Wind Turbine Syndrome, and has been identified in case studies of individuals living close to wind farms. This review discusses whether it is biologically plausible for the turbine noise to stimulate the vestibular parts of the inner ear and, by extension, cause Wind Turbine Syndrome. We consider the sound levels that can activate the semicircular canals or otolith end organs in normal subjects, as well as in those with preexisting conditions known to lower vestibular threshold to sound stimulation.

  13. An Improved Fruit Fly Optimization Algorithm and Its Application in Heat Exchange Fouling Ultrasonic Detection

    Directory of Open Access Journals (Sweden)

    Xia Li

    2018-01-01

    Inspired by the basic theory of the Fruit Fly Optimization Algorithm, in this paper cat mapping was added to the original algorithm, and the individual distribution and evolution mechanism of the fruit fly population were improved in order to increase the search speed and accuracy. The flowchart of the improved algorithm was drawn to show its procedure. Simulation optimization results on classical test functions show that the improved algorithm has faster and more reliable optimization ability. The algorithm was then combined with sparse decomposition theory and used in processing fouling detection ultrasonic signals to verify the validity and practicability of the improved algorithm.

  14. Accurate orbital-dependent correlation and exchange-correlation potentials from non-iterative ab initio dft calculations

    Science.gov (United States)

    Grabowski, Ireneusz; Lotrich, Victor

    2005-08-01

    A new approximate non-iterative procedure to obtain accurate correlation and exchange-correlation potentials of Kohn-Sham (KS) density functional theory (DFT) is presented. By carrying out only one step of the correlated optimized effective potential (OEP) iterations following the standard iterative exchange-only OEP, one can recover accurate correlation potentials corresponding to the orbital-dependent second-order many-body perturbation theory [MBPT(2)] energy functional that are hardly discernible from those obtained by the more expensive, fully iterative procedure. This new 'one-step' OEP-MBPT(2) algorithm reflects the non-iterative, perturbative algorithm of standard, canonical MBPT(2) of ab initio wave function theory, while it allows the correlation potentials to readjust and include the majority of the MBPT(2) correlation effect. It is also flexible in the treatment of exchange and the Hartree-Fock orbitals may be used in lieu of the exchange-only OEP orbitals, when the correlation or exchange-correlation potential is of interest.

  15. Multi-AGV path planning with double-path constraints by using an improved genetic algorithm.

    Directory of Open Access Journals (Sweden)

    Zengliang Han

    This paper investigates an improved genetic algorithm for multiple automated guided vehicle (multi-AGV) path planning. The innovations lie in two aspects. First, three-exchange crossover heuristic operators are used to produce more optimal offspring, extracting more information than the traditional two-exchange crossover heuristic operators in the improved genetic algorithm. Second, double-path constraints of both minimizing the total path distance of all AGVs and minimizing the single path distance of each AGV are imposed, yielding the optimal shortest total path distance. The simulation results show that the total path distance of all AGVs and the longest single AGV path distance are shortened by using the improved genetic algorithm.
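
    The exact form of the three-exchange crossover operator is not given in the abstract; one plausible reading is a three-cut-point segment exchange on a route permutation, sketched below with an illustrative route.

      import random

      def three_exchange(route, rng=random):
          """Cut a route at three points and swap the two middle segments.

          A plausible reading of a 'three-exchange' heuristic move: the extra
          cut point allows more recombinations per move than a two-exchange.
          """
          i, j, k = sorted(rng.sample(range(1, len(route)), 3))
          return route[:i] + route[j:k] + route[i:j] + route[k:]

      random.seed(0)
      print(three_exchange(list(range(10))))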

  16. Utilization of genetic algorithm in on-line tuning of fluid power servos

    Energy Technology Data Exchange (ETDEWEB)

    Halme, J.

    1997-12-31

    This study describes a robust and plausible method based on genetic algorithms suitable for tuning a regulator. The main advantages of the presented method are its robustness and ease of use. In this thesis the method is demonstrated by searching for appropriate control parameters of a state-feedback controller in a fluid power environment. To corroborate the robustness of the tuning method, two earlier studies are also presented in the appendix, where the presented tuning method is used in different kinds of regulator tuning situations. (orig.) 33 refs.

  18. A distributed algorithm for machine learning

    Science.gov (United States)

    Chen, Shihong

    2018-04-01

    This paper considers a distributed learning problem in which a group of machines in a connected network, each learning its own local dataset, aim to reach a consensus at an optimal model, by exchanging information only with their neighbors but without transmitting data. A distributed algorithm is proposed to solve this problem under appropriate assumptions.
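
    The neighbor-only information exchange described above is commonly realized as a local gradient step followed by averaging with neighbors. A minimal sketch for distributed least squares on an assumed ring of six machines (topology, step size and model are illustrative, not taken from the paper):

      import numpy as np

      rng = np.random.default_rng(0)
      n_nodes, dim = 6, 3
      w_true = rng.normal(size=dim)

      # Each machine holds a private linear-regression dataset; raw data is
      # never transmitted, only model parameters are exchanged.
      data = []
      for _ in range(n_nodes):
          X = rng.normal(size=(20, dim))
          y = X @ w_true + 0.01 * rng.normal(size=20)
          data.append((X, y))

      W = np.zeros((n_nodes, dim))   # row i: current model of machine i
      step = 0.05
      for _ in range(500):
          # Local gradient step on each machine's own data.
          grads = [X.T @ (X @ W[i] - y) / len(y) for i, (X, y) in enumerate(data)]
          W -= step * np.array(grads)
          # Consensus step: average with the two ring neighbours.
          W = (W + np.roll(W, 1, axis=0) + np.roll(W, -1, axis=0)) / 3.0

      print("max deviation from w_true:", float(np.max(np.abs(W - w_true))))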

  19. Is knowing believing? The role of event plausibility and background knowledge in planting false beliefs about the personal past.

    Science.gov (United States)

    Pezdek, Kathy; Blandon-Gitlin, Iris; Lam, Shirley; Hart, Rhiannon Ellis; Schooler, Jonathan W

    2006-12-01

    False memories are more likely to be planted for plausible than for implausible events, but does just knowing about an implausible event make individuals more likely to think that the event happened to them? Two experiments assessed the independent contributions of plausibility and background knowledge to planting false beliefs. In Experiment 1, subjects rated 20 childhood events as to the likelihood of each event having happened to them. The list included the implausible target event "received an enema," a critical target event of Pezdek, Finger, and Hodge (1997). Two weeks later, subjects were presented with (1) information regarding the high prevalence rate of enemas; (2) background information on how to administer an enema; (3) neither type of information; or (4) both. Immediately or 2 weeks later, they rated the 20 childhood events again. Only plausibility significantly increased occurrence ratings. In Experiment 2, the target event was changed from "barium enema administered in a hospital" to "home enema for constipation"; significant effects of both plausibility and background knowledge resulted. The results suggest that providing background knowledge can increase beliefs about personal events, but that its impact is limited by the extent of the individual's familiarity with the context of the suggested target event.

  20. Dynamic control for a quadruped locomotion robot in consideration of the leg-support-exchange phenomenon

    International Nuclear Information System (INIS)

    Sano, Akihito; Furusho, Junji; Okajima, Yosuke

    1988-01-01

    This paper proposes a new control method for quadruped walking robots in which the leg-support-exchange is smoothly implemented. First, the authors formulate the leg-support-exchange phenomenon in 'Trot' using Lagrange's collision equation, so that continuous walking can be analyzed numerically. Secondly, we propose a new control algorithm for leg-support-exchange. Conventional high-gain local feedback causes many problems, such as slip and excessively high torque, in the leg-support-exchange phase of dynamic walking, since it is impossible in this phase to prepare proper reference values beforehand. In this algorithm, the control law is switched to 'free mode' or 'constant current mode' in order to adjust to the environment. The effectiveness of the proposed control strategy is confirmed by computer simulation and experiments using the walking robot 'COLT-1.' (author)

  1. Three-pass protocol scheme for bitmap image security by using vernam cipher algorithm

    Science.gov (United States)

    Rachmawati, D.; Budiman, M. A.; Aulya, L.

    2018-02-01

    Confidentiality, integrity, and efficiency are crucial aspects of data security. Among digital data, image data is especially prone to abuse such as duplication and modification. There are several data security techniques; one of them is cryptography. The security of the Vernam Cipher cryptography algorithm is very dependent on the key exchange process: if the key is leaked, the security of this algorithm collapses. Therefore, a method that minimizes key leakage during the exchange of messages is required. The method used here is known as the Three-Pass Protocol. This protocol enables the message delivery process without any key exchange, so messages can reach the receiver safely without fear of key leakage. The system is built using the Java programming language. The materials used for system testing are images of size 200×200, 300×300, 500×500, 800×800 and 1000×1000 pixels. The experiments showed that the Vernam Cipher algorithm in the Three-Pass Protocol scheme could restore the original image.
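
    The scheme works because the Vernam (XOR) cipher commutes: each party can strip off its own key regardless of the order in which the keys were applied, so no key ever travels. A minimal sketch of the three passes with illustrative keys and payload (the paper's Java implementation is not reproduced):

      import os

      def xor(data: bytes, key: bytes) -> bytes:
          return bytes(d ^ k for d, k in zip(data, key))

      message = b"bitmap pixel data"        # stands in for the image payload
      ka = os.urandom(len(message))         # sender's key, never transmitted
      kb = os.urandom(len(message))         # receiver's key, never transmitted

      c1 = xor(message, ka)                 # pass 1: sender -> receiver
      c2 = xor(c1, kb)                      # pass 2: receiver adds kb, returns
      c3 = xor(c2, ka)                      # pass 3: sender removes ka
      assert xor(c3, kb) == message         # receiver removes kb

    Note that with plain XOR an eavesdropper who records all three passes can recover the message as c1 ⊕ c2 ⊕ c3, so practical three-pass deployments need a commutative cipher without this weakness.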

  2. SPEEDUP™ ion exchange column model

    International Nuclear Information System (INIS)

    Hang, T.

    2000-01-01

    A transient model to describe the process of loading a solute onto the granular fixed bed in an ion exchange (IX) column has been developed using the SpeedUp™ software package. SpeedUp offers the advantage of smooth integration into other existing SpeedUp flowsheet models. The mathematical algorithm of a porous particle diffusion model was adopted to account for convection, axial dispersion, film mass transfer, and pore diffusion. The method of orthogonal collocation on finite elements was employed to solve the governing transport equations. The model allows the use of a non-linear Langmuir isotherm based on an effective binary ionic exchange process. The SpeedUp column model was tested by comparing to the analytical solutions of three transport problems from the ion exchange literature. In addition, a sample calculation of a train of three crystalline silicotitanate (CST) IX columns in series was made using both the SpeedUp model and Purdue University's VERSE-LC code. All test cases showed excellent agreement between the SpeedUp model results and the test data. The model can be readily used for SuperLig™ ion exchange resins, once the experimental data are complete.

  3. A new HBMO algorithm for multiobjective daily Volt/Var control in distribution systems considering Distributed Generators

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher [Electrical and Electronics Engineering Department, Shiraz University of Technology, Modars Blvd. P.O. 71555-313, Shiraz (Iran, Islamic Republic of)

    2011-03-15

    In recent years, Distributed Generators (DGs) connected to the distribution network have received increasing attention. The connection of enormous DGs into the existing distribution network changes the operation of distribution systems. Because of the small X/R ratio and radial structure of distribution systems, DGs affect the daily Volt/Var control. This paper presents a new algorithm for multiobjective daily Volt/Var control in distribution systems including Distributed Generators (DGs). The objectives are the costs of energy generation by DGs and distribution companies, electrical energy losses and the voltage deviations for the next day. A new optimization algorithm based on a Chaotic Improved Honey Bee Mating Optimization (CIHBMO) is proposed to determine the active power values of DGs, reactive power values of capacitors and tap positions of transformers for the next day. Since the objectives are not the same, a fuzzy system is used to calculate the best solution. The plausibility of the proposed algorithm is demonstrated and its performance is compared with other methods on a 69-bus distribution feeder. Simulation results illustrate that the proposed algorithm outperforms the other algorithms. (author)

  5. Optimization of liquid LBE-helium heat exchanger in ADS

    International Nuclear Information System (INIS)

    Meng Ruixue; Cai Jun; Huai Xiulan; Chen Fei

    2015-01-01

    The multi-parameter optimization of the liquid LBE-helium heat exchanger in ADS was conducted by a genetic algorithm with entransy dissipation number and total cost as objective functions. The results show that the effectiveness of the heat exchanger increases by 10.5% and 3.8%, and the total cost reduces by 5.9% and 27.0%, respectively, with the two optimization methods. Nevertheless, the optimization processes trade off increasing heat transfer area and decreasing heat transfer effectiveness, respectively, against achieving the optimization targets. By comprehensively considering heat exchanger performance and cost-benefit, the optimization method with entransy dissipation number as the objective function is found to be more advantageous. (authors)

  6. Observability and Estimation of Distributed Space Systems via Local Information-Exchange Networks

    Science.gov (United States)

    Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Rahmani, Amirreza

    2011-01-01

    Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation-flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are proposed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms rely on a local information exchange network, relaxing the assumptions of existing algorithms. Distributed space systems rely on a signal transmission network among multiple spacecraft for their operation. Control and coordination among multiple spacecraft in a formation is facilitated via a network of relative sensing and interspacecraft communications. Guidance, navigation, and control rely on the sensing network. This network becomes more complex the more spacecraft are added, or as mission requirements become more complex. The observability of a formation state was analyzed via a set of local observations from a particular node in the formation. Formation observability can be parameterized in terms of the matrices appearing in the formation dynamics and observation matrices. An agreement protocol was used as a mechanism for observing formation states from local measurements. An agreement protocol is essentially an unforced dynamic system whose trajectory is governed by the interconnection geometry and initial condition of each node, with a goal of reaching a common value of interest. The observability of the interconnected system depends on the geometry of the network, as well as the position of the observer relative to the topology. For the first time, critical GN&C (guidance, navigation, and control estimation) subsystems are synthesized by bringing the contribution of the spacecraft information-exchange network to the forefront of algorithmic analysis and design. The result is a
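
    For a linear formation model, the observability question raised above reduces to a rank test on the stacked observability matrix built from the dynamics matrix A and one node's observation matrix C. A small sketch with toy matrices (the coupling pattern below is an assumption, not the paper's model):

      import numpy as np

      def observability_matrix(A, C):
          """Stack C, CA, CA^2, ..., CA^(n-1); full rank means observable."""
          blocks = [C]
          for _ in range(A.shape[0] - 1):
              blocks.append(blocks[-1] @ A)
          return np.vstack(blocks)

      # Toy dynamics: three agents coupled along a path graph.
      A = np.array([[0.9, 0.1, 0.0],
                    [0.1, 0.8, 0.1],
                    [0.0, 0.1, 0.9]])
      C = np.array([[1.0, 0.0, 0.0]])   # the observer sees only agent 1

      O = observability_matrix(A, C)
      print("observable:", np.linalg.matrix_rank(O) == A.shape[0])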

  7. Computation of many-particle quantum trajectories with exchange interaction: application to the simulation of nanoelectronic devices

    International Nuclear Information System (INIS)

    Alarcón, A; Yaro, S; Cartoixà, X; Oriols, X

    2013-01-01

    Following Oriols (2007 Phys. Rev. Lett. 98 066803), an algorithm to deal with the exchange interaction in non-separable quantum systems is presented. The algorithm can be applied to fermions or bosons and, by construction, it exactly ensures that any observable is totally independent of the interchange of particles. It is based on the use of conditional Bohmian wave functions which are solutions of single-particle pseudo-Schrödinger equations. The exchange symmetry is directly defined by demanding symmetry properties of the quantum trajectories in the configuration space with a universal algorithm, rather than through a particular exchange–correlation functional introduced into the single-particle pseudo-Schrödinger equation. It requires the computation of N² conditional wave functions to deal with N identical particles. For separable Hamiltonians, the algorithm reduces to the standard Slater determinant for fermions (or permanent for bosons). A numerical test for a two-particle system, where exact solutions for non-separable Hamiltonians are computationally accessible, is presented. The numerical viability of the algorithm for quantum electron transport (in a far-from-equilibrium time-dependent open system) is demonstrated by computing the current and fluctuations in a nano-resistor, with exchange and Coulomb interactions among electrons. (paper)
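
    For the separable case mentioned above, a two-fermion wave function reduces to a 2×2 Slater determinant, which is antisymmetric under particle interchange. A toy sketch with two illustrative Gaussian orbitals:

      import numpy as np

      # Two assumed single-particle orbitals (1-D Gaussians, different centres).
      def phi1(x):
          return np.exp(-(x - 1.0) ** 2)

      def phi2(x):
          return np.exp(-(x + 1.0) ** 2)

      def psi(x1, x2):
          """Two-fermion wave function as a 2x2 Slater determinant."""
          return phi1(x1) * phi2(x2) - phi1(x2) * phi2(x1)

      x1, x2 = 0.3, -0.7
      # Exchange symmetry for fermions: psi(x2, x1) = -psi(x1, x2).
      assert np.isclose(psi(x2, x1), -psi(x1, x2))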

  8. Resolution of cosmological singularity and a plausible mechanism of the big bang

    International Nuclear Information System (INIS)

    Choudhury, D.C.

    2002-01-01

    The initial cosmological singularity in the framework of the general theory of relativity is resolved by introducing the effect of the uncertainty principle of quantum theory without violating conventional laws of physics. A plausible account of the mechanism of the big bang, analogous to that of a nuclear explosion, is given and the currently accepted Planck temperature of ≅10³² K at the beginning of the big bang is predicted.

  9. Plausibility and evidence: the case of homeopathy.

    Science.gov (United States)

    Rutten, Lex; Mathie, Robert T; Fisher, Peter; Goossens, Maria; van Wassenhoven, Michel

    2013-08-01

    Homeopathy is controversial and hotly debated. The conclusions of systematic reviews of randomised controlled trials of homeopathy vary from 'comparable to conventional medicine' to 'no evidence of effects beyond placebo'. It is claimed that homeopathy conflicts with scientific laws and that homoeopaths reject the naturalistic outlook, but no evidence has been cited. We are homeopathic physicians and researchers who do not reject the scientific outlook; we believe that examination of the prior beliefs underlying this enduring stand-off can advance the debate. We show that interpretations of the same set of evidence--for homeopathy and for conventional medicine--can diverge. Prior disbelief in homeopathy is rooted in the perceived implausibility of any conceivable mechanism of action. Using the 'crossword analogy', we demonstrate that plausibility bias impedes assessment of the clinical evidence. Sweeping statements about the scientific impossibility of homeopathy are themselves unscientific: scientific statements must be precise and testable. There is growing evidence that homeopathic preparations can exert biological effects; due consideration of such research would reduce the influence of prior beliefs on the assessment of systematic review evidence.

  10. Multi-agent Pareto appointment exchanging in hospital patient scheduling

    NARCIS (Netherlands)

    Vermeulen, I.B.; Bohté, S.M.; Somefun, D.J.A.; Poutré, La J.A.

    2007-01-01

    We present a dynamic and distributed approach to the hospital patient scheduling problem, in which patients can have multiple appointments that have to be scheduled to different resources. To efficiently solve this problem we develop a multi-agent Pareto-improvement appointment exchanging algorithm:

  11. Particulate air pollution and increased mortality: Biological plausibility for causal relationship

    International Nuclear Information System (INIS)

    Henderson, R.F.

    1995-01-01

    Recently, a number of epidemiological studies have concluded that ambient particulate exposure is associated with increased mortality and morbidity at PM concentrations well below those previously thought to affect human health. These studies have been conducted in several different geographical locations and have involved a range of populations. While the consistency of the findings and the presence of an apparent concentration-response relationship provide a strong argument for causality, epidemiological studies can only conclude this based upon inference from statistical associations. The biological plausibility of a causal relationship between low concentrations of PM and daily mortality and morbidity rates is neither intuitively obvious nor expected based on past experimental studies on the toxicity of inhaled particles. Chronic toxicity from inhaled, poorly soluble particles has been observed based on the slow accumulation of large lung burdens of particles, not on small daily fluctuations in PM levels. Acute toxicity from inhaled particles is associated mainly with acidic particles and is observed at much higher concentrations than those observed in the epidemiology studies reporting an association between PM concentrations and morbidity/mortality. To approach the difficult problem of determining whether the association between PM concentrations and daily morbidity and mortality is biologically plausible and causal, one must consider (1) the chemical and physical characteristics of the particles in the inhaled atmospheres, (2) the characteristics of the morbidity/mortality observed and the people who are affected, and (3) potential mechanisms that might link the two.

  12. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    Science.gov (United States)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    Multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight which represents the plausibility of that model. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed the model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space gradually from the low-likelihood area to the high-likelihood area, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, in order to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
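
    The mechanics of NSE, gradually shrinking the prior volume while accumulating evidence from discarded live points, fit in a short sketch. Below, a toy Gaussian likelihood stands in for the groundwater model and plain rejection sampling stands in for the M-H/DREAMzs local sampler; all settings are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)

      def loglike(theta):
          # Sharply peaked toy likelihood (sigma = 0.1 in both dimensions).
          return -0.5 * np.sum((theta / 0.1) ** 2)

      def sample_prior(size):
          return rng.uniform(-1.0, 1.0, size=size)   # uniform prior on [-1, 1]^2

      n_live, n_iter = 100, 600
      live = sample_prior((n_live, 2))
      live_logL = np.array([loglike(t) for t in live])

      logZ = -np.inf
      for i in range(1, n_iter + 1):
          worst = np.argmin(live_logL)
          logL_min = live_logL[worst]
          # Prior volume shrinks as X_i ~ exp(-i/n_live); the discarded point
          # contributes weight L_min * (X_{i-1} - X_i) to the evidence.
          log_w = logL_min - (i - 1) / n_live + np.log(1.0 - np.exp(-1.0 / n_live))
          logZ = np.logaddexp(logZ, log_w)
          # Replace the worst point by a prior draw above the threshold.
          # Plain rejection sampling here; this is the local-sampling step
          # where an efficient sampler (M-H, DREAMzs) matters in practice.
          while True:
              cand = sample_prior(2)
              if loglike(cand) > logL_min:
                  live[worst], live_logL[worst] = cand, loglike(cand)
                  break

      # Contribution of the remaining live points (remaining volume X_n).
      logZ = np.logaddexp(logZ, np.log(np.mean(np.exp(live_logL))) - n_iter / n_live)
      print("log-evidence estimate:", logZ)   # analytic value is about -4.15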

  13. DATA SECURITY IN LOCAL AREA NETWORK BASED ON FAST ENCRYPTION ALGORITHM

    Directory of Open Access Journals (Sweden)

    G. Ramesh

    2010-06-01

    Hacking is one of the greatest problems in wireless local area networks. Many algorithms have been used to prevent outside attackers from eavesdropping, and to ensure that data is transferred to the end user safely and correctly. In this paper, a new symmetrical encryption algorithm is proposed that prevents outside attacks. The new algorithm avoids key exchange between users and reduces the time taken for encryption and decryption. It operates at a high data rate in comparison with the Data Encryption Standard (DES), Triple DES (TDES), Advanced Encryption Standard (AES-256), and RC6 algorithms. The new algorithm is applied successfully to both text files and voice messages.

  14. Application of epidemic algorithms for smart grids control

    International Nuclear Information System (INIS)

    Krkoleva, Aleksandra

    2012-01-01

    Smart Grids are a new concept for electricity network development, aiming to provide an economically efficient and sustainable power system by effectively integrating the actions and needs of the network users. The thesis addresses the Smart Grids concept, with emphasis on control strategies developed on the basis of epidemic algorithms, more specifically gossip algorithms. The thesis is developed around three Smart Grid aspects: the changed role of consumers in terms of taking part in providing services within Smart Grids; the possibilities to implement decentralized control strategies based on distributed algorithms; and information exchange and the benefits emerging from the implementation of information and communication technologies. More specifically, the thesis presents a novel approach for providing ancillary services by implementing gossip algorithms. In a decentralized manner, by exchange of information between the consumers and by making decisions on the local level, based on the received information and local parameters, the group achieves its global objective, i.e. providing ancillary services. The thesis presents an overview of Smart Grid control strategies with emphasis on new strategies developed for the most promising Smart Grid concepts, such as microgrids and virtual power plants. The thesis also presents the characteristics of epidemic algorithms and the possibilities for their implementation in Smart Grids. Based on the research on epidemic algorithms, two applications have been developed. The first application enables consumers, represented by their commercial aggregators, to participate in load reduction and, consequently, to participate in the balancing market or reduce the balancing costs of the group. In this context, the gossip algorithms are used for the aggregator's message dissemination for load reduction, and for households and small commercial and industrial consumers to participate in maintaining
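
    The primitive underlying such gossip-based control is randomized pairwise averaging: each exchange replaces two nodes' values by their mean, and every node converges to the group average without a central collector. A minimal sketch, where each local value is assumed to be the load (in kW) a consumer can shed on request:

      import random

      random.seed(0)
      values = [3.0, 0.5, 2.0, 4.5, 1.0, 2.5]   # private per-consumer values
      target = sum(values) / len(values)

      for _ in range(2000):
          # One gossip exchange between two randomly chosen consumers.
          i, j = random.sample(range(len(values)), 2)
          values[i] = values[j] = (values[i] + values[j]) / 2.0

      # Every node now holds (approximately) the group average; multiplying
      # by the group size recovers the total sheddable load.
      assert all(abs(v - target) < 1e-6 for v in values)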

  15. Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange.

    Science.gov (United States)

    Hula, Andreas; Montague, P Read; Dayan, Peter

    2015-06-01

    Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent's preference for equity with their partner, beliefs about the partner's appetite for equity, beliefs about the partner's model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference.

  16. Vulnerabilities to agricultural production shocks: An extreme, plausible scenario for assessment of risk for the insurance sector

    Directory of Open Access Journals (Sweden)

    Tobias Lunt

    2016-01-01

    Climate risks pose a threat to the function of the global food system and therefore also a hazard to the global financial sector, the stability of governments, and the food security and health of the world’s population. This paper presents a method to assess plausible impacts of an agricultural production shock and potential materiality for global insurers. A hypothetical, near-term, plausible, extreme scenario was developed based upon modules of historical agricultural production shocks, linked under a warm phase El Niño-Southern Oscillation (ENSO) meteorological framework. The scenario included teleconnected floods and droughts in disparate agricultural production regions around the world, as well as plausible, extreme biotic shocks. In this scenario, global crop yield declines of 10% for maize, 11% for soy, 7% for wheat and 7% for rice result in quadrupled commodity prices and commodity stock fluctuations, civil unrest, significant negative humanitarian consequences and major financial losses worldwide. This work illustrates a need for the scientific community to partner across sectors and industries towards better-integrated global data, modeling and analytical capacities, to better respond to and prepare for concurrent agricultural failure. Governments, humanitarian organizations and the private sector collectively may recognize significant benefits from more systematic assessment of exposure to agricultural climate risk.

  17. De Facto Exchange Rate Regime Classifications Are Better Than You Think

    OpenAIRE

    Michael Bleaney; Mo Tian; Lin Yin

    2015-01-01

    Several de facto exchange rate regime classifications have been widely used in empirical research, but they are known to disagree with one another to a disturbing extent. We dissect the algorithms employed and argue that they can be significantly improved. We implement the improvements, and show that there is a far higher agreement rate between the modified classifications. We conclude that the current pessimism about de facto exchange rate regime classification schemes is unwarranted.

  18. Morality Principles for Risk Modelling: Needs and Links with the Origins of Plausible Inference

    Science.gov (United States)

    Solana-Ortega, Alberto; Solana, Vicente

    2009-12-01

    In comparison with the foundations of probability calculus, the inescapable and controversial issue of how to assign probabilities has only recently become a matter of formal study. The introduction of information as a technical concept was a milestone, but the most promising entropic assignment methods still face unsolved difficulties, manifesting the incompleteness of plausible inference theory. In this paper we examine the situation faced by risk analysts in the critical field of extreme events modelling, where the former difficulties are especially visible, due to scarcity of observational data, the large impact of these phenomena and the obligation to assume professional responsibilities. To respond to the claim for a sound framework to deal with extremes, we propose a metafoundational approach to inference, based on a canon of extramathematical requirements. We highlight their strong moral content, and show how this emphasis in morality, far from being new, is connected with the historic origins of plausible inference. Special attention is paid to the contributions of Caramuel, a contemporary of Pascal, unfortunately ignored in the usual mathematical accounts of probability.

  20. Plausible scenarios for the radiography profession in Sweden in 2025

    International Nuclear Information System (INIS)

    Björkman, B.; Fridell, K.; Tavakol Olofsson, P.

    2017-01-01

    Introduction: Radiography is a healthcare speciality with many technical challenges. Advances in engineering and information technology applications may continue to drive and be driven by radiographers. The world of diagnostic imaging is changing rapidly and radiographers must be proactive in order to survive. To ensure sustainable development, organisations have to identify future opportunities and threats in a timely manner and incorporate them into their strategic planning. Hence, the aim of this study was to analyse and describe plausible scenarios for the radiography profession in 2025. Method: The study has a qualitative design with an inductive approach based on focus group interviews. The interviews were inspired by the Scenario-Planning method. Results: Of the seven trends identified in a previous study, the radiographers considered two as the most uncertain scenarios that would have the greatest impact on the profession should they occur. These trends, labelled “Access to career advancement” and “A sufficient number of radiographers”, were inserted into the scenario cross. The resulting four plausible future scenarios were: The happy radiographer, the specialist radiographer, the dying profession and the assembly line. Conclusion: It is suggested that “The dying profession” scenario could probably be turned in the opposite direction by facilitating career development opportunities for radiographers within the profession. Changing the direction would probably lead to a profession composed of “happy radiographers” who are specialists, proud of their profession and competent to carry out advanced tasks, in contrast to being solely occupied by “the assembly line”. - Highlights: • The world of radiography is changing rapidly and radiographers must be proactive in order to survive. • Future opportunities and threats should be identified and incorporated into the strategic planning. • Appropriate actions can probably change the

  1. DNA Microarray Data Analysis: A Novel Biclustering Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Tewfik Ahmed H

    2006-01-01

    Biclustering algorithms refer to a distinct class of clustering algorithms that perform simultaneous row-column clustering. Biclustering problems arise in DNA microarray data analysis, collaborative filtering, market research, information retrieval, text mining, electoral trends, exchange analysis, and so forth. When dealing with DNA microarray experimental data for example, the goal of biclustering algorithms is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this study, we develop novel biclustering algorithms using basic linear algebra and arithmetic tools. The proposed biclustering algorithms can be used to search for all biclusters with constant values, biclusters with constant values on rows, biclusters with constant values on columns, and biclusters with coherent values from a set of data in a timely manner and without solving any optimization problem. We also show how one of the proposed biclustering algorithms can be adapted to identify biclusters with coherent evolution. The algorithms developed in this study discover all valid biclusters of each type, while almost all previous biclustering approaches will miss some.

  2. Biologically plausible learning in neural networks: a lesson from bacterial chemotaxis.

    Science.gov (United States)

    Shimansky, Yury P

    2009-12-01

    Learning processes in the brain are usually associated with plastic changes made to optimize the strength of connections between neurons. Although many details related to biophysical mechanisms of synaptic plasticity have been discovered, it is unclear how the concurrent performance of adaptive modifications in a huge number of spatial locations is organized to minimize a given objective function. Since direct experimental observation of even a relatively small subset of such changes is not feasible, computational modeling is an indispensable investigation tool for solving this problem. However, the conventional method of error back-propagation (EBP) employed for optimizing synaptic weights in artificial neural networks is not biologically plausible. This study based on computational experiments demonstrated that such optimization can be performed rather efficiently using the same general method that bacteria employ for moving closer to an attractant or away from a repellent. With regard to neural network optimization, this method consists of regulating the probability of an abrupt change in the direction of synaptic weight modification according to the temporal gradient of the objective function. Neural networks utilizing this method (regulation of modification probability, RMP) can be viewed as analogous to swimming in the multidimensional space of their parameters in the flow of biochemical agents carrying information about the optimality criterion. The efficiency of RMP is comparable to that of EBP, while RMP has several important advantages. Since the biological plausibility of RMP is beyond a reasonable doubt, the RMP concept provides a constructive framework for the experimental analysis of learning in natural neural networks.

  4. Development of the novel control algorithm for the small proton exchange membrane fuel cell stack without external humidification

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae-Hoon; Kim, Sang-Hyun; Kim, Wook; Lee, Jong-Hak; Cho, Kwan-Seok; Choi, Woojin [Department of Electrical Engineering, Soongsil University, 1-1 Sangdo-dong, Dongjak-gu, Seoul 156-743 (Korea); Park, Kyung-Won [Department of Chemical/Environmental Engineering, Soongsil University, 1-1 Sangdo-dong, Dongjak-gu, Seoul 156-743 (Korea)

    2010-09-15

    Small PEM (proton exchange membrane) fuel cell systems do not require humidification and have great commercialization possibilities. However, methods for controlling small PEM fuel cell stacks have not been clearly established. In this paper, a control method for small PEM fuel cell systems using a dual closed loop with a static feed-forward structure is defined and realized using a microcontroller. The fundamental elements that need to be controlled in fuel cell systems include the supply of air and hydrogen, water management inside the stack, and heat management of the stack. For small PEM fuel cell stacks operated without a separate humidifier, fans are essential for air supply, heat management, and water management of the stack. A purge valve discharges surplus water from the stack. The proposed method controls the fan using a dual closed loop with a static feed-forward structure, thereby improving system efficiency and operation stability. The validity of the proposed method is confirmed by experiments using a 150-W PEM fuel cell stack. We expect the proposed algorithm to be widely used for controlling small PEM fuel cell stacks. (author)

  5. Loop algorithms for quantum simulations of fermion models on lattices

    International Nuclear Information System (INIS)

    Kawashima, N.; Gubernatis, J.E.; Evertz, H.G.

    1994-01-01

    Two cluster algorithms, based on constructing and flipping loops, are presented for world-line quantum Monte Carlo simulations of fermions and are tested on the one-dimensional repulsive Hubbard model. We call these algorithms the loop-flip and loop-exchange algorithms. For these two algorithms and the standard world-line algorithm, we calculated the autocorrelation times for various physical quantities and found that the ordinary world-line algorithm, which uses only local moves, suffers from very long correlation times that make it difficult to estimate not only the errors but also the average values themselves. These difficulties are especially severe in the low-temperature, large-U regime. In contrast, we find that the new algorithms, when used alone or in combination with themselves and the standard algorithm, can have significantly smaller autocorrelation times, in some cases smaller by three orders of magnitude. The new algorithms, which use nonlocal moves, are discussed from the point of view of a general prescription for developing cluster algorithms. The loop-flip algorithm is also shown to be ergodic and to belong to the grand canonical ensemble. Extensions to other models and higher dimensions are briefly discussed.

  6. A robust stochastic approach for design optimization of air cooled heat exchangers

    Energy Technology Data Exchange (ETDEWEB)

    Doodman, A.R.; Fesanghary, M.; Hosseini, R. [Department of Mechanical Engineering, Amirkabir University of Technology, 424-Hafez Avenue, 15875-4413 Tehran (Iran)

    2009-07-15

    This study investigates the use of global sensitivity analysis (GSA) and the harmony search (HS) algorithm for design optimization of air cooled heat exchangers (ACHEs) from the economic viewpoint. In order to reduce the size of the optimization problem, GSA is performed to examine the effect of the design parameters and to identify the non-influential ones. Then HS is applied to optimize the influential parameters. To demonstrate the ability of the HS algorithm, a case study is considered, and for validation purposes a genetic algorithm (GA) is also applied to this case study. Results reveal that the HS algorithm converges to the optimum solution with higher accuracy in comparison with the GA. (author)

  8. Replica Exchange Gaussian Accelerated Molecular Dynamics: Improved Enhanced Sampling and Free Energy Calculation.

    Science.gov (United States)

    Huang, Yu-Ming M; McCammon, J Andrew; Miao, Yinglong

    2018-04-10

    Through adding a harmonic boost potential to smooth the system potential energy surface, Gaussian accelerated molecular dynamics (GaMD) provides enhanced sampling and free energy calculation of biomolecules without the need of predefined reaction coordinates. This work continues to improve the acceleration power and energy reweighting of the GaMD by combining the GaMD with replica exchange algorithms. Two versions of replica exchange GaMD (rex-GaMD) are presented: force constant rex-GaMD and threshold energy rex-GaMD. During simulations of force constant rex-GaMD, the boost potential can be exchanged between replicas of different harmonic force constants with fixed threshold energy. However, the algorithm of threshold energy rex-GaMD tends to switch the threshold energy between lower and upper bounds for generating different levels of boost potential. Testing simulations on three model systems, including the alanine dipeptide, chignolin, and HIV protease, demonstrate that through continuous exchanges of the boost potential, the rex-GaMD simulations not only enhance the conformational transitions of the systems but also narrow down the distribution width of the applied boost potential for accurate energetic reweighting to recover biomolecular free energy profiles.
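
    The exchange step in any replica-exchange scheme is a Metropolis test on the proposed swap. The sketch below shows the generic parallel-tempering criterion for replicas at different inverse temperatures; in rex-GaMD the exchanged quantity is the boost potential (via the force constant or threshold energy) rather than the temperature, and the GaMD-specific criterion is not reproduced here.

      import math
      import random

      def try_swap(E_i, E_j, beta_i, beta_j, rng=random):
          """Generic replica-exchange Metropolis test.

          Swap the configurations of replicas at inverse temperatures beta_i
          and beta_j, with potential energies E_i and E_j, with probability
          min(1, exp[(beta_i - beta_j) * (E_i - E_j)]).
          """
          delta = (beta_i - beta_j) * (E_i - E_j)
          return delta >= 0 or rng.random() < math.exp(delta)

      random.seed(0)
      # The cold replica (beta = 1.0) currently holds the higher energy, so
      # the swap is always accepted: the lower-energy configuration moves to
      # the colder replica.
      print(try_swap(-100.0, -105.0, 1.0, 0.8))   # True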

  9. Artificial root foraging optimizer algorithm with hybrid strategies

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2017-02-01

    In this work, a new plant-inspired optimization algorithm, namely the hybrid artificial root foraging optimization (HARFO), is proposed, which mimics iterative root foraging behaviors for complex optimization. In the HARFO model, two innovative strategies were developed: one is the root-to-root communication strategy, which enables individuals to exchange information with each other in different efficient topologies, essentially improving the exploration ability; the other is the co-evolution strategy, which structures a hierarchical spatial population driven by the evolutionary pressure of multiple sub-populations, ensuring that the diversity of the root population is well maintained. The proposed algorithm is benchmarked against four classical evolutionary algorithms on well-designed test function suites including both classical and composition test functions. Rigorous performance analysis of all these tests highlights the significant performance improvement, and the comparative results show the superiority of the proposed algorithm.

  10. High School Students' Evaluations, Plausibility (Re) Appraisals, and Knowledge about Topics in Earth Science

    Science.gov (United States)

    Lombardi, Doug; Bickel, Elliot S.; Bailey, Janelle M.; Burrell, Shondricka

    2018-01-01

    Evaluation is an important aspect of science and is receiving increasing attention in science education. The present study investigated (1) changes to plausibility judgments and knowledge as a result of a series of instructional scaffolds, called model-evidence link activities, that facilitated evaluation of scientific and alternative models in…

  11. Social signals and algorithmic trading of Bitcoin.

    Science.gov (United States)

    Garcia, David; Schweitzer, Frank

    2015-09-01

    The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from being trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various datasources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology and transaction volume of Bitcoin. We add social signals related to information search, word of mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading-based social media sentiment has the potential to yield positive returns on investment.

  12. On-line fouling monitor for heat exchangers

    International Nuclear Information System (INIS)

    Tsou, J.L.

    1995-01-01

    Biological and/or chemical fouling in utility service water system heat exchangers adversely affects operation and maintenance costs, and reduced heat transfer capability can force a power derating or even a plant shutdown. In addition, service water heat exchanger performance is a safety issue for nuclear power plants, an issue highlighted by the NRC in Generic Letter 89-13. Heat transfer losses due to fouling are difficult to measure and, usually, quantitative assessment of the impact of fouling is impossible. Plant operators typically measure inlet and outlet water temperatures and flow rates and then perform complex calculations for heat exchanger fouling resistance or ''cleanliness''. These direct estimates are often imprecise due to inadequate instrumentation. The Electric Power Research Institute developed and patented an on-line condenser fouling monitor. This monitor may be installed in any location within the condenser; does not interfere with routine plant operations, including on-line mechanical and chemical treatment methods; and provides continuous, real-time readings of the heat transfer efficiency of the instrumented tube. The instrument can be modified to perform on-line monitoring of service water heat exchangers. This paper discusses the design and construction of the new monitor, and the algorithm used to calculate service water heat exchanger fouling.
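
    The fouling-resistance arithmetic that such monitors automate is itself simple: compare the overall heat transfer coefficient of the fouled state against a clean baseline. A sketch with illustrative temperatures, duty and area (not data from the actual monitor):

      import math

      def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
          """Log-mean temperature difference, counter-flow arrangement."""
          dt1 = t_hot_in - t_cold_out
          dt2 = t_hot_out - t_cold_in
          return (dt1 - dt2) / math.log(dt1 / dt2)

      def overall_u(duty_w, area_m2, dtlm):
          return duty_w / (area_m2 * dtlm)

      # Illustrative numbers: clean baseline vs. current (fouled) operation.
      u_clean = overall_u(50e3, 12.0, lmtd(60, 45, 20, 30))
      u_fouled = overall_u(42e3, 12.0, lmtd(60, 48, 20, 28))

      r_f = 1.0 / u_fouled - 1.0 / u_clean   # fouling resistance, m2*K/W
      cleanliness = u_fouled / u_clean       # simple 'cleanliness' factor
      print(f"Rf = {r_f:.2e} m2K/W, cleanliness = {cleanliness:.2f}")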

  13. Performance measurement of plate fin heat exchanger by exploration: ANN, ANFIS, GA, and SA

    Directory of Open Access Journals (Sweden)

    A.K. Gupta

    2017-01-01

    An experimental work was conducted on a counter-flow plate fin compact heat exchanger using an offset strip fin under different mass flow rates. The training, testing, and validation sets of data were collected by conducting experiments. Next, an artificial neural network merged with a Genetic Algorithm (GA) was utilized to measure the performance of the plate-fin compact heat exchanger. The main aim of the present research is to measure the performance of the plate-fin compact heat exchanger and to provide full explanations. The artificial neural network predicted simulated data, which were verified against experimental data to within 10-20% error. Then, the authors examined two well-known global search techniques, simulated annealing and the genetic algorithm, and the results of the proposed Genetic Algorithm (GA) and Simulated Annealing (SA) have been summarized. The parameters are equally important for good results. With the emergence of new data-driven modeling techniques, neuro-fuzzy based systems have become established in academic and practical applications. The adaptive neuro-fuzzy inference system (ANFIS) has also been examined to undertake the problem of plate-fin heat exchanger performance measurement under various parameters. Moreover, in parallel with the ANFIS model, an Artificial Neural Network (ANN) model has been created, emphasizing the accuracy of the different techniques. A wide range of statistical indicators was used to assess the performance of the models. Based on the comparison, it was revealed that ANFIS improves the accuracy of the estimates relative to the ANN.

  14. Neural correlates of early-closure garden-path processing: Effects of prosody and plausibility.

    Science.gov (United States)

    den Ouden, Dirk-Bart; Dickey, Michael Walsh; Anderson, Catherine; Christianson, Kiel

    2016-01-01

    Functional magnetic resonance imaging (fMRI) was used to investigate neural correlates of early-closure garden-path sentence processing and use of extrasyntactic information to resolve temporary syntactic ambiguities. Sixteen participants performed an auditory picture verification task on sentences presented with natural versus flat intonation. Stimuli included sentences in which the garden-path interpretation was plausible, implausible because of a late pragmatic cue, or implausible because of a semantic mismatch between an optionally transitive verb and the following noun. Natural sentence intonation was correlated with left-hemisphere temporal activation, but also with activation that suggests the allocation of more resources to interpretation when natural prosody is provided. Garden-path processing was associated with upregulation in bilateral inferior parietal and right-hemisphere dorsolateral prefrontal and inferior frontal cortex, while differences between the strength and type of plausibility cues were also reflected in activation patterns. Region of interest (ROI) analyses in regions associated with complex syntactic processing are consistent with a role for posterior temporal cortex supporting access to verb argument structure. Furthermore, ROI analyses within left-hemisphere inferior frontal gyrus suggest a division of labour, with the anterior-ventral part primarily involved in syntactic-semantic mismatch detection, the central part supporting structural reanalysis, and the posterior-dorsal part showing a general structural complexity effect.

  15. Freeze-thaw cycles induce content exchange between cell-sized lipid vesicles

    Science.gov (United States)

    Litschel, Thomas; Ganzinger, Kristina A.; Movinkel, Torgeir; Heymann, Michael; Robinson, Tom; Mutschler, Hannes; Schwille, Petra

    2018-05-01

    Early protocells are commonly assumed to consist of an amphiphilic membrane enclosing an RNA-based self-replicating genetic system and a primitive metabolism without protein enzymes. Thus, protocell evolution must have relied on simple physicochemical self-organization processes within and across such vesicular structures. We investigate freeze-thaw (FT) cycling as a potential environmental driver for the necessary content exchange between vesicles. To this end, we developed a conceptually simple yet statistically powerful high-throughput procedure based on nucleic acid-containing giant unilamellar vesicles (GUVs) as model protocells. GUVs are formed by emulsion transfer in glass bottom microtiter plates and hence can be manipulated and monitored by fluorescence microscopy without additional pipetting and sample handling steps. This new protocol greatly minimizes artefacts, such as unintended GUV rupture or fusion by shear forces. Using DNA-encapsulating phospholipid GUVs fabricated by this method, we quantified the extent of content mixing between GUVs under different FT conditions. We found evidence of nucleic acid exchange in all detected vesicles if fast freezing of GUVs at ‑80 °C is followed by slow thawing at room temperature. In contrast, slow freezing and fast thawing both adversely affected content mixing. Surprisingly, and in contrast to previous reports for FT-induced content mixing, we found that the content is not exchanged through vesicle fusion and fission, but that vesicles largely maintain their membrane identity and even large molecules are exchanged via diffusion across the membranes. Our approach supports efficient screening of prebiotically plausible molecules and environmental conditions, to yield universal mechanistic insights into how cellular life may have emerged.

  16. Multi-stage thermal-economical optimization of compact heat exchangers: A new evolutionary-based design approach for real-world problems

    International Nuclear Information System (INIS)

    Yousefi, Moslem; Darus, Amer Nordin; Yousefi, Milad; Hooshyar, Danial

    2015-01-01

    The complicated task of design optimization of compact heat exchangers (CHEs) has been performed effectively using evolutionary algorithms (EAs) in recent years. However, mainly because of the difficulty of handling extra variables, the design approaches in the available literature have been based on constant rates of heat duty. In this paper, a new design strategy is presented in which variable operating conditions, which better represent real-world problems, are considered. The proposed strategy is illustrated using a case study on the design of a plate-fin heat exchanger, though it can be employed for all types of heat exchangers without much change. Learning automata based particle swarm optimization (LAPSO) is employed for handling nine design variables while satisfying various equality and inequality constraints. For handling the constraints, a novel feasibility based ranking strategy (FBRS) is introduced. The numerical results indicate that a design based on variable heat duties yields greater cost savings and superior thermodynamic efficiency compared to a conventional design approach. Furthermore, the proposed algorithm shows superior performance in finding a near-optimum solution for this task when compared to the most popular evolutionary algorithms in engineering applications, i.e., the genetic algorithm (GA) and particle swarm optimization (PSO). - Highlights: • Multi-stage design of heat exchangers is presented. • Feasibility based ranking strategy is employed for constraint handling. • Learning abilities added to particle swarm optimization
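
    The abstract does not spell out the FBRS comparison rules. A common form of feasibility-based ranking (Deb's rules) can serve as a minimal sketch of this style of constraint handling; the function below is an illustrative stand-in, not the paper's exact strategy.

    ```python
    import numpy as np

    def total_violation(g):
        """Sum of constraint violations; g holds inequality values g_i(x) <= 0."""
        return np.sum(np.maximum(g, 0.0))

    def fbrs_better(f_a, g_a, f_b, g_b):
        """Feasibility-based ranking (Deb-style rules, a stand-in for the
        paper's FBRS): a feasible candidate beats an infeasible one; among
        feasible candidates the lower cost wins; among infeasible ones the
        lower total violation wins."""
        va, vb = total_violation(g_a), total_violation(g_b)
        if va == 0 and vb == 0:
            return f_a < f_b
        if va == 0 or vb == 0:
            return va == 0
        return va < vb
    ```

    Inside a PSO loop, a particle's personal best would be replaced whenever fbrs_better reports that the new position ranks higher; no penalty weights need to be tuned.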

  17. Dynamic Vehicle Routing Using an Improved Variable Neighborhood Search Algorithm

    Directory of Open Access Journals (Sweden)

    Yingcheng Xu

    2013-01-01

    Full Text Available In order to effectively solve the dynamic vehicle routing problem with time windows, a mathematical model is established and an improved variable neighborhood search algorithm is proposed. In the algorithm, customer allocation and route planning for the initial solution are completed by a clustering method. Hybrid insert and exchange operators are used to perform the shaking process, a later optimization process is applied to improve the solution space, and a best-improvement strategy is adopted, which enables the algorithm to achieve a better balance between solution quality and running time. The idea of simulated annealing is introduced to control the acceptance of new solutions, and the influences of arrival time, the distribution of geographical locations, and the time window range on route selection are analyzed. In the experiments, the proposed algorithm is applied to DVRP instances of different sizes. Comparison with other algorithms shows that the algorithm is effective and feasible.
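
    As a rough illustration of the shaking-and-acceptance loop described above, the sketch below combines insert and exchange (swap) neighborhoods with a simulated-annealing acceptance test. The cost function, cooling schedule, and neighborhood ordering are assumptions, not the paper's exact design.

    ```python
    import math, random

    def swap_move(route):
        i, j = sorted(random.sample(range(len(route)), 2))
        r = route[:]; r[i], r[j] = r[j], r[i]
        return r

    def insert_move(route):
        i, j = random.sample(range(len(route)), 2)
        r = route[:]; r.insert(j, r.pop(i))
        return r

    def vns_sa(route, cost, neighborhoods=(insert_move, swap_move),
               iters=2000, t0=100.0, alpha=0.995):
        """Variable neighborhood search with simulated-annealing acceptance,
        a simplified sketch of the paper's scheme; cost() is problem-specific
        (e.g. travel distance plus time-window penalties)."""
        best, t = route, t0
        for _ in range(iters):
            k = 0
            while k < len(neighborhoods):
                cand = neighborhoods[k](best)            # shaking
                delta = cost(cand) - cost(best)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    best, k = cand, 0                    # accept, restart neighborhoods
                else:
                    k += 1                               # try the next neighborhood
            t *= alpha                                   # cool down
        return best
    ```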

  18. A Multipopulation Coevolutionary Strategy for Multiobjective Immune Algorithm

    Directory of Open Access Journals (Sweden)

    Jiao Shi

    2014-01-01

    Full Text Available How to maintain the population diversity is an important issue in designing a multiobjective evolutionary algorithm. This paper presents an enhanced nondominated neighbor-based immune algorithm in which a multipopulation coevolutionary strategy is introduced for improving the population diversity. In the proposed algorithm, subpopulations evolve independently; thus the unique characteristics of each subpopulation can be effectively maintained, and the diversity of the entire population is effectively increased. Besides, the dynamic information of multiple subpopulations is obtained with the help of the designed cooperation operator which reflects a mutually beneficial relationship among subpopulations. Subpopulations gain the opportunity to exchange information, thereby expanding the search range of the entire population. Subpopulations make use of the reference experience from each other, thereby improving the efficiency of evolutionary search. Compared with several state-of-the-art multiobjective evolutionary algorithms on well-known and frequently used multiobjective and many-objective problems, the proposed algorithm achieves comparable results in terms of convergence, diversity metrics, and running time on most test problems.
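
    The cooperation operator itself is not specified in the abstract. A minimal island-model migration step conveys the underlying idea of periodic information exchange between independently evolving subpopulations; the donate-best/replace-worst policy below is an illustrative assumption.

    ```python
    def migrate(subpops, fitness, k=1):
        """Cooperation step (sketch): each subpopulation donates copies of its
        k best individuals (minimization) to the next subpopulation in a ring,
        replacing that neighbor's k worst. Individuals are assumed to be lists.
        This shows the information exchange that expands each subpopulation's
        search range; the paper's operator is richer."""
        n = len(subpops)
        donations = [sorted(p, key=fitness)[:k] for p in subpops]
        for i, pop in enumerate(subpops):
            pop.sort(key=fitness, reverse=True)          # worst individuals first
            pop[:k] = [ind[:] for ind in donations[(i - 1) % n]]
        return subpops
    ```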

  19. Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange

    Science.gov (United States)

    Hula, Andreas; Montague, P. Read; Dayan, Peter

    2015-01-01

    Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent’s preference for equity with their partner, beliefs about the partner’s appetite for equity, beliefs about the partner’s model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference. PMID:26053429
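
    The paper's planner is an IPOMDP-specific variant of Monte-Carlo tree search; the fragment below shows only the generic UCB1 selection rule at the heart of such tree searches, as a hedged illustration of how simulated returns and visit counts steer the search.

    ```python
    import math

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children, self.visits, self.value = [], 0, 0.0

    def uct_select(node, c=1.4):
        """UCB1 selection used in Monte-Carlo tree search: trade off a child's
        mean simulated return against how rarely it has been tried."""
        return max(node.children,
                   key=lambda ch: ch.value / (ch.visits + 1e-9)
                   + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))
    ```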

  20. Relabeling exchange method (REM) for learning in neural networks

    Science.gov (United States)

    Wu, Wen; Mammone, Richard J.

    1994-02-01

    The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when `optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation of the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.
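
    The abstract does not give REM's update rule. In the same spirit, a pairwise-exchange heuristic over label assignments might look like the sketch below, which accepts any swap of label codes that lowers a squared-error cost between class prototypes and their assigned target vectors; this is a guess at the flavor of the method, not the published algorithm.

    ```python
    import numpy as np

    def relabel_by_exchange(prototypes, labels):
        """Pairwise-exchange relabeling (illustrative): permute which output
        label vector is assigned to which class, accepting swaps that lower
        the summed squared distance. prototypes and labels are sequences of
        equal-length numpy vectors."""
        perm = list(range(len(labels)))
        cost = lambda p: sum(np.sum((prototypes[i] - labels[p[i]]) ** 2)
                             for i in range(len(p)))
        improved = True
        while improved:
            improved = False
            for i in range(len(perm)):
                for j in range(i + 1, len(perm)):
                    trial = perm[:]
                    trial[i], trial[j] = trial[j], trial[i]
                    if cost(trial) < cost(perm):
                        perm, improved = trial, True
        return perm
    ```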

  1. An improved genetic algorithm with dynamic topology

    International Nuclear Information System (INIS)

    Cai Kai-Quan; Tang Yan-Wu; Zhang Xue-Jun; Guan Xiang-Min

    2016-01-01

    The genetic algorithm (GA) is a nature-inspired evolutionary algorithm that finds optima in a search space via the interaction of individuals. Recently, researchers demonstrated that the interaction topology plays an important role in information exchange among individuals of an evolutionary algorithm. In this paper, we investigate the effect of different network topologies adopted to represent the interaction structures. It is found that a GA with a high-density topology is more likely to end up with an unsatisfactory solution, whereas a low-density topology can impede convergence. Consequently, we propose an improved GA with dynamic topology, named DT-GA, in which the topology structure varies dynamically along with the fitness evolution. Several experiments executed with 15 well-known test functions have illustrated that DT-GA outperforms the other GAs tested by striking a balance between convergence speed and optimum quality. Our work may have implications for the combination of complex networks and computational intelligence. (paper)

  2. Simulation and Optimization of the Heat Exchanger for Automotive Exhaust-Based Thermoelectric Generators

    Science.gov (United States)

    Su, C. Q.; Huang, C.; Deng, Y. D.; Wang, Y. P.; Chu, P. Q.; Zheng, S. J.

    2016-03-01

    In order to enhance the exhaust waste heat recovery efficiency of the automotive exhaust-based thermoelectric generator (TEG) system, a three-segment heat exchanger with a folded-shaped internal structure for the TEG system is investigated in this study. Surface temperature and thermal uniformity of the heat exchanger, the major factors affecting the performance of the TEG system, are analyzed in this research; the pressure drop along the heat exchanger is also considered. The pressure drop along the heat exchanger is obtained from computational fluid dynamics simulations and the temperature distribution. Treating the length and thickness of the folded plates in each segment of the heat exchanger as variables, response surface methodology and optimization by a multi-objective genetic algorithm are applied to the surface temperature, thermal uniformity, and pressure drop of the folded-shaped heat exchanger. An optimum design based on the optimization is proposed to improve the overall performance of the TEG system. The performance of the optimized heat exchanger in different engine conditions is discussed.

  3. A methodology for the geometric design of heat recovery steam generators applying genetic algorithms

    International Nuclear Information System (INIS)

    Durán, M. Dolores; Valdés, Manuel; Rovira, Antonio; Rincón, E.

    2013-01-01

    This paper shows how the geometric design of heat recovery steam generators (HRSG) can be achieved. The method calculates the product of the overall heat transfer coefficient (U) by the area of the heat exchange surface (A) as a function of certain thermodynamic design parameters of the HRSG. A genetic algorithm is then applied to determine the best set of geometric parameters which comply with the desired UA product and, at the same time, result in a small heat exchange area and low pressure losses in the HRSG. In order to test this method, the design was applied to the HRSG of an existing plant and the results obtained were compared with the real exchange area of the steam generator. The findings show that the methodology is sound and offers reliable results even for complex HRSG designs. -- Highlights: ► The paper shows a methodology for the geometric design of heat recovery steam generators. ► Calculates product of the overall heat transfer coefficient by heat exchange area as a function of certain HRSG thermodynamic design parameters. ► It is a complement for the thermoeconomic optimization method. ► Genetic algorithms are used for solving the optimization problem

  4. Multiple image encryption scheme based on pixel exchange operation and vector decomposition

    Science.gov (United States)

    Xiong, Y.; Quan, C.; Tay, C. J.

    2018-02-01

    We propose a new multiple image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are processed by a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. Scrambled images are encrypted into phase information using the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as an input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.
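
    A toy version of the pixel exchange idea is easy to state: swap a random subset of pixel positions between two images and keep the swap mask as a private key. The function below is an illustrative sketch, not the paper's exact operator.

    ```python
    import numpy as np

    def pixel_exchange(img_a, img_b, fraction=0.5, seed=0):
        """Swap a random subset of pixel positions between two equal-sized
        images. The boolean mask plays the role of a pixel position matrix:
        without it the scrambling cannot be undone. Applying the function
        again with the same seed restores the originals (the swap is its
        own inverse)."""
        rng = np.random.default_rng(seed)
        mask = rng.random(img_a.shape) < fraction     # positions to swap
        sa, sb = img_a.copy(), img_b.copy()
        sa[mask], sb[mask] = img_b[mask], img_a[mask]
        return sa, sb, mask
    ```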

  5. Theory of mind broad and narrow: reasoning about social exchange engages ToM areas, precautionary reasoning does not.

    Science.gov (United States)

    Ermer, Elsa; Guerin, Scott A; Cosmides, Leda; Tooby, John; Miller, Michael B

    2006-01-01

    Baron-Cohen (1995) proposed that the theory of mind (ToM) inference system evolved to promote strategic social interaction. Social exchange--a form of co-operation for mutual benefit--involves strategic social interaction and requires ToM inferences about the contents of other individuals' mental states, especially their desires, goals, and intentions. There are behavioral and neuropsychological dissociations between reasoning about social exchange and reasoning about equivalent problems tapping other, more general content domains. It has therefore been proposed that social exchange behavior is regulated by social contract algorithms: a domain-specific inference system that is functionally specialized for reasoning about social exchange. We report an fMRI study using the Wason selection task that provides further support for this hypothesis. Precautionary rules share so many properties with social exchange rules--they are conditional, deontic, and involve subjective utilities--that most reasoning theories claim they are processed by the same neurocomputational machinery. Nevertheless, neuroimaging shows that reasoning about social exchange activates brain areas not activated by reasoning about precautionary rules, and vice versa. As predicted, neural correlates of ToM (anterior and posterior temporal cortex) were activated when subjects interpreted social exchange rules, but not precautionary rules (where ToM inferences are unnecessary). We argue that the interaction between ToM and social contract algorithms can be reciprocal: social contract algorithms require ToM inferences, but their functional logic also allows ToM inferences to be made. By considering interactions between ToM in the narrower sense (belief-desire reasoning) and all the social inference systems that create the logic of human social interaction--ones that enable as well as use inferences about the content of mental states--a broader conception of ToM may emerge: a computational model embodying

  6. A novel self-organizing E-Learner community model with award and exchange mechanisms.

    Science.gov (United States)

    Yang, Fan; Shen, Rui-min; Han, Peng

    2004-11-01

    How to share experience and resources among learners is becoming one of the hottest topics in the field of E-Learning collaborative techniques. An intuitive way to achieve this objective is to group learners who can help each other into the same community and help them learn collaboratively. In this paper, we propose a novel community self-organization model based on a multi-agent mechanism, which can automatically group learners with similar preferences and capabilities. In particular, we propose award and exchange schemas with evaluation and preference track records to raise the performance of this algorithm. The description of learner capability, the matchmaking process, the definition of evaluation and preference track records, the rules of the award and exchange schemas, and the self-organization algorithm are all discussed in this paper. Meanwhile, a prototype has been built to verify the validity and efficiency of the algorithm. Experiments based on real learner data showed that this mechanism can organize learner communities properly and efficiently, and that it delivers sustained improvements in efficiency and scalability.

  7. Optimal design of the first stage of the plate-fin heat exchanger for the EAST cryogenic system

    Science.gov (United States)

    Qingfeng, JIANG; Zhigang, ZHU; Qiyong, ZHANG; Ming, ZHUANG; Xiaofei, LU

    2018-03-01

    The size of the heat exchanger is an important factor determining the dimensions of the cold box in helium cryogenic systems. In this paper, a counter-flow multi-stream plate-fin heat exchanger is optimized by means of a spatial interpolation method coupled with a hybrid genetic algorithm. Compared with empirical correlations, this spatial interpolation algorithm based on a kriging model can be adopted to more precisely predict the Colburn heat transfer factors and Fanning friction factors of offset-strip fins. Moreover, strict computational fluid dynamics simulations can be carried out to predict the heat transfer and friction performance in the absence of reliable experimental data. Within the constraints of heat exchange requirements, maximum allowable pressure drop, existing manufacturing techniques and structural strength, a mathematical model of an optimized design with discrete and continuous variables based on a hybrid genetic algorithm is established in order to minimize the volume. The results show that for the first-stage heat exchanger in the EAST refrigerator, the structural size could be decreased from the original 2.200 × 0.600 × 0.627 (m3) to the optimized 1.854 × 0.420 × 0.340 (m3), with a large reduction in volume. The current work demonstrates that the proposed method could be a useful tool to achieve optimization in an actual engineering project during the practical design process.

  8. The ethical plausibility of the 'Right To Try' laws.

    Science.gov (United States)

    Carrieri, D; Peccatori, F A; Boniolo, G

    2018-02-01

    'Right To Try' (RTT) laws originated in the USA to allow terminally ill patients to request access to early stage experimental medical products directly from the producer, removing the oversight and approval of the Food and Drug Administration. These laws have received significant media attention and almost equally unanimous criticism by the bioethics, clinical and scientific communities. They touch indeed on complex issues such as the conflict between individual and public interest, and the public understanding of medical research and its regulation. The increased awareness around RTT laws means that healthcare providers directly involved in the management of patients with life-threatening conditions such as cancer, infective, or neurologic conditions will deal more frequently with patients' requests of access to experimental medical products. This paper aims to assess the ethical plausibility of the RTT laws, and to suggest some possible ethical tools and considerations to address the main issues they touch. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Plausible inference: A multi-valued logic for problem solving

    Science.gov (United States)

    Friedman, L.

    1979-01-01

    A new logic is developed which permits continuously variable strength of belief in the truth of assertions. Four inference rules result, with formal logic as a limiting case. Quantification of belief is defined. Propagation of belief to linked assertions results from dependency-based techniques of truth maintenance, so that local consistency is achieved or contradiction discovered in problem solving. Rules for combining, confirming, or disconfirming beliefs are given, and several heuristics are suggested that apply to revising already formed beliefs in the light of new evidence. The strength of belief that results from such revisions based on conflicting evidence is a highly subjective phenomenon. Certain quantification rules appear to reflect an orderliness in the subjectivity. Several examples of reasoning by plausible inference are given, including a legal example and one from robot learning. Propagation of belief takes place in directions forbidden in formal logic, and this results in conclusions becoming possible for a given set of assertions that are not reachable by formal logic.

  10. Processes of Ammonia Air-Surface Exchange in a Fertilized Zea Mays Canopy

    Science.gov (United States)

    Recent incorporation of coupled soil biogeochemical and bi-directional NH3 air-surface exchange algorithms into regional air quality models holds promise for further reducing uncertainty in estimates of NH3 emissions from fertilized soils. While this advancement represents a sig...

  11. Looking for a Location: Dissociated Effects of Event-Related Plausibility and Verb-Argument Information on Predictive Processing in Aphasia.

    Science.gov (United States)

    Hayes, Rebecca A; Dickey, Michael Walsh; Warren, Tessa

    2016-12-01

    This study examined the influence of verb-argument information and event-related plausibility on prediction of upcoming event locations in people with aphasia, as well as older and younger, neurotypical adults. It investigated how these types of information interact during anticipatory processing and how the ability to take advantage of the different types of information is affected by aphasia. This study used a modified visual-world task to examine eye movements and offline photo selection. Twelve adults with aphasia (aged 54-82 years) as well as 44 young adults (aged 18-31 years) and 18 older adults (aged 50-71 years) participated. Neurotypical adults used verb argument status and plausibility information to guide both eye gaze (a measure of anticipatory processing) and image selection (a measure of ultimate interpretation). Argument status did not affect the behavior of people with aphasia in either measure. There was only limited evidence of interaction between these 2 factors in eye gaze data. Both event-related plausibility and verb-based argument status contributed to anticipatory processing of upcoming event locations among younger and older neurotypical adults. However, event-related likelihood had a much larger role in the performance of people with aphasia than did verb-based knowledge regarding argument structure.

  12. Looking for a Location: Dissociated Effects of Event-Related Plausibility and Verb–Argument Information on Predictive Processing in Aphasia

    Science.gov (United States)

    Dickey, Michael Walsh; Warren, Tessa

    2016-01-01

    Purpose This study examined the influence of verb–argument information and event-related plausibility on prediction of upcoming event locations in people with aphasia, as well as older and younger, neurotypical adults. It investigated how these types of information interact during anticipatory processing and how the ability to take advantage of the different types of information is affected by aphasia. Method This study used a modified visual-world task to examine eye movements and offline photo selection. Twelve adults with aphasia (aged 54–82 years) as well as 44 young adults (aged 18–31 years) and 18 older adults (aged 50–71 years) participated. Results Neurotypical adults used verb argument status and plausibility information to guide both eye gaze (a measure of anticipatory processing) and image selection (a measure of ultimate interpretation). Argument status did not affect the behavior of people with aphasia in either measure. There was only limited evidence of interaction between these 2 factors in eye gaze data. Conclusions Both event-related plausibility and verb-based argument status contributed to anticipatory processing of upcoming event locations among younger and older neurotypical adults. However, event-related likelihood had a much larger role in the performance of people with aphasia than did verb-based knowledge regarding argument structure. PMID:27997951

  13. Speech recognition employing biologically plausible receptive fields

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Bothe, Hans-Heinrich

    2011-01-01

    spectro-temporal receptive fields to auditory spectrogram input, motivated by the auditory pathway of humans, and ii) the adaptation or learning algorithms involved are biologically inspired. This is in contrast to state-of-the-art combinations of Mel-frequency cepstral coefficients and Hidden Markov...

  14. Liderazgo preventivo para la universidad. Una experiencia plausible

    Directory of Open Access Journals (Sweden)

    Alejandro Rodríguez Rodríguez

    2015-06-01

    Full Text Available Leadership development in higher education seeks solutions of immediate application to the contexts in which every leader operates, but the theoretical-practical grounding of leader formation, which would make it possible to understand the intellective processes at work during decision making, gets diluted. The paradigm of convergence between the Lonerganian anthropological method, the Vygotskian learning community, and a rereading of the Salesian preventive system is presented as a plausible proposal for forming preventive leadership among the various actors of a university community. A case study of the Salesian University in Mexico employing a mixed research method facilitates a rereading of leadership from a preventive perspective as a possibility of convergence in an interdisciplinary dialogue. The theoretical-practical results proposed and examined prove a useful tool for evaluating, enriching, and renewing theory about the leader and leadership development in universities facing a globalized society.

  15. A swarm intelligence framework for reconstructing gene networks: searching for biologically plausible architectures.

    Science.gov (United States)

    Kentzoglanakis, Kyriakos; Poole, Matthew

    2012-01-01

    In this paper, we investigate the problem of reverse engineering the topology of gene regulatory networks from temporal gene expression data. We adopt a computational intelligence approach comprising swarm intelligence techniques, namely particle swarm optimization (PSO) and ant colony optimization (ACO). In addition, the recurrent neural network (RNN) formalism is employed for modeling the dynamical behavior of gene regulatory systems. More specifically, ACO is used for searching the discrete space of network architectures and PSO for searching the corresponding continuous space of RNN model parameters. We propose a novel solution construction process in the context of ACO for generating biologically plausible candidate architectures. The objective is to concentrate the search effort into areas of the structure space that contain architectures which are feasible in terms of their topological resemblance to real-world networks. The proposed framework is initially applied to the reconstruction of a small artificial network that has previously been studied in the context of gene network reverse engineering. Subsequently, we consider an artificial data set with added noise for reconstructing a subnetwork of the genetic interaction network of S. cerevisiae (yeast). Finally, the framework is applied to a real-world data set for reverse engineering the SOS response system of the bacterium Escherichia coli. Results demonstrate the relative advantage of utilizing problem-specific knowledge regarding biologically plausible structural properties of gene networks over conducting a problem-agnostic search in the vast space of network architectures.

  16. Efficiency Analysis of the Parallel Implementation of the SIMPLE Algorithm on Multiprocessor Computers

    Science.gov (United States)

    Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.

    2017-12-01

    This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.
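
    The decomposition-with-fictitious-cells idea can be illustrated with a one-dimensional halo exchange. The sketch below (mpi4py, run under mpiexec) is a minimal stand-in for the paper's exchange layer, not its actual implementation.

    ```python
    from mpi4py import MPI
    import numpy as np

    def exchange_ghost_cells(u, comm=MPI.COMM_WORLD):
        """1-D halo exchange over fictitious (ghost) cells. Each rank stores
        its interior cells in u[1:-1]; u[0] and u[-1] are ghost copies of the
        neighbors' boundary cells. Paired Sendrecv calls avoid deadlock, and
        MPI.PROC_NULL turns the exchange into a no-op at domain boundaries."""
        rank, size = comm.Get_rank(), comm.Get_size()
        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL
        # ship first interior cell left, fill the right ghost cell
        comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
        # ship last interior cell right, fill the left ghost cell
        comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

    u = np.zeros(10)  # 8 interior cells plus 2 ghost cells per rank
    exchange_ghost_cells(u)
    ```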

  17. Parameter identification of PEMFC model based on hybrid adaptive differential evolution algorithm

    International Nuclear Information System (INIS)

    Sun, Zhe; Wang, Ning; Bi, Yunrui; Srinivasan, Dipti

    2015-01-01

    In this paper, a HADE (hybrid adaptive differential evolution) algorithm is proposed for the identification problem of the PEMFC (proton exchange membrane fuel cell). Inspired by biological genetic strategy, a novel adaptive scaling factor and a dynamic crossover probability are presented to improve the adaptive and dynamic performance of the differential evolution algorithm. Moreover, two kinds of neighborhood search operations based on the bee colony foraging mechanism are introduced to enhance local search efficiency. In tests on benchmark functions, the proposed algorithm exhibits better performance in convergence accuracy and speed. Finally, the HADE algorithm is applied to identify the nonlinear parameters of a PEMFC stack model. In experimental comparisons with other identification methods, the PEMFC model based on the HADE algorithm shows better performance. - Highlights: • We propose a hybrid adaptive differential evolution algorithm (HADE). • The search efficiency is enhanced in low and high dimension search space. • The effectiveness is confirmed by testing benchmark functions. • The identification of the PEMFC model is conducted by adopting HADE.
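
    To make the adaptive-parameter idea concrete, the sketch below runs DE/rand/1/bin with a scaling factor that decays and a crossover probability that grows over the generations. The specific schedules are illustrative assumptions; HADE's actual update rules (and its bee-colony neighborhood searches) are not reproduced here.

    ```python
    import numpy as np

    def adaptive_de(fobj, bounds, pop_size=30, gens=200, seed=1):
        """DE/rand/1/bin with generation-dependent F and CR, a simplified
        stand-in for HADE's adaptive scheme. bounds is a list of (lo, hi)
        pairs, one per parameter of the model being identified."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        dim = len(lo)
        pop = lo + rng.random((pop_size, dim)) * (hi - lo)
        fit = np.array([fobj(x) for x in pop])
        for g in range(gens):
            F = 0.9 - 0.5 * g / gens            # scaling factor decays over time
            CR = 0.5 + 0.4 * g / gens           # crossover probability grows
            for i in range(pop_size):
                a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True  # guarantee one mutated gene
                trial = np.where(cross, mutant, pop[i])
                f = fobj(trial)
                if f < fit[i]:                   # greedy one-to-one selection
                    pop[i], fit[i] = trial, f
        return pop[fit.argmin()], fit.min()
    ```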

  18. Preview Effects of Plausibility and Character Order in Reading Chinese Transposed Words: Evidence from Eye Movements

    Science.gov (United States)

    Yang, Jinmian

    2013-01-01

    The current paper examined the role of plausibility information in the parafovea for Chinese readers by using two-character transposed words (in which the order of the component characters is reversed but are still words). In two eye-tracking experiments, readers received a preview of a target word that was (1) identical to the target word, (2) a…

  19. An efficient method based on the uniformity principle for synthesis of large-scale heat exchanger networks

    International Nuclear Information System (INIS)

    Zhang, Chunwei; Cui, Guomin; Chen, Shang

    2016-01-01

    Highlights: • Two dimensionless uniformity factors are presented for the heat exchanger network. • The grouping of process streams reduces the computational complexity of large-scale HENS problems. • The optimal sub-network can be obtained by the Powell particle swarm optimization algorithm. • The method is illustrated by a case study involving 39 process streams, with a better solution. - Abstract: The optimal design of large-scale heat exchanger networks is a difficult task due to the inherent non-linear characteristics and the combinatorial nature of heat exchangers. To solve large-scale heat exchanger network synthesis (HENS) problems, two dimensionless uniformity factors that describe heat exchanger network (HEN) uniformity in terms of the temperature difference and the accuracy of process stream grouping are deduced. Additionally, a novel algorithm that combines deterministic and stochastic optimization to obtain an optimal sub-network with a suitable heat load for a given group of streams is proposed, named Powell particle swarm optimization (PPSO). As a result, the synthesis of large-scale heat exchanger networks is divided into two corresponding sub-parts, namely the grouping of process streams and the optimization of sub-networks. This approach reduces the computational complexity and increases the efficiency of the proposed method. The robustness and effectiveness of the proposed method are demonstrated by solving a large-scale HENS problem involving 39 process streams, and the results obtained are better than those previously published in the literature.

  20. The Matchmaker Exchange API: automating patient matching through the exchange of structured phenotypic and genotypic profiles.

    Science.gov (United States)

    Buske, Orion J; Schiettecatte, François; Hutton, Benjamin; Dumitriu, Sergiu; Misyura, Andriy; Huang, Lijia; Hartley, Taila; Girdea, Marta; Sobreira, Nara; Mungall, Chris; Brudno, Michael

    2015-10-01

    Despite the increasing prevalence of clinical sequencing, the difficulty of identifying additional affected families is a key obstacle to solving many rare diseases. There may only be a handful of similar patients worldwide, and their data may be stored in diverse clinical and research databases. Computational methods are necessary to enable finding similar patients across the growing number of patient repositories and registries. We present the Matchmaker Exchange Application Programming Interface (MME API), a protocol and data format for exchanging phenotype and genotype profiles to enable matchmaking among patient databases, facilitate the identification of additional cohorts, and increase the rate with which rare diseases can be researched and diagnosed. We designed the API to be straightforward and flexible in order to simplify its adoption on a large number of data types and workflows. We also provide a public test data set, curated from the literature, to facilitate implementation of the API and development of new matching algorithms. The initial version of the API has been successfully implemented by three members of the Matchmaker Exchange and was immediately able to reproduce previously identified matches and generate several new leads currently being validated. The API is available at https://github.com/ga4gh/mme-apis. © 2015 WILEY PERIODICALS, INC.
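
    The authoritative request and response schemas live in the linked repository. Purely as a hypothetical sketch of the workflow, a match request might be issued as below; the endpoint URL, authentication header, and field names shown here are illustrative assumptions, so consult https://github.com/ga4gh/mme-apis for the actual specification before relying on any of them.

    ```python
    import requests

    # Hypothetical MME-style match request: a phenotype/genotype profile is
    # POSTed to a partner node, which returns scored candidate matches.
    query = {
        "patient": {
            "id": "example-patient-1",
            "contact": {"name": "Jane Doe", "href": "mailto:jdoe@example.org"},
            "features": [{"id": "HP:0001250"}],             # phenotype terms
            "genomicFeatures": [{"gene": {"id": "SCN1A"}}]  # candidate gene
        }
    }
    resp = requests.post("https://matchmaker.example.org/match",
                         json=query,
                         headers={"X-Auth-Token": "<token>"})
    for m in resp.json().get("results", []):
        print(m.get("score"), m.get("patient", {}).get("id"))
    ```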

  1. A Penalized Semialgebraic Deflation ICA Algorithm for the Efficient Extraction of Interictal Epileptic Signals.

    Science.gov (United States)

    Becker, Hanna; Albera, Laurent; Comon, Pierre; Kachenoura, Amar; Merlet, Isabelle

    2017-01-01

    As a noninvasive technique, electroencephalography (EEG) is commonly used to monitor the brain signals of patients with epilepsy such as the interictal epileptic spikes. However, the recorded data are often corrupted by artifacts originating, for example, from muscle activities, which may have much higher amplitudes than the interictal epileptic signals of interest. To remove these artifacts, a number of independent component analysis (ICA) techniques were successfully applied. In this paper, we propose a new deflation ICA algorithm, called penalized semialgebraic unitary deflation (P-SAUD) algorithm, that improves upon classical ICA methods by leading to a considerably reduced computational complexity at equivalent performance. This is achieved by employing a penalized semialgebraic extraction scheme, which permits us to identify the epileptic components of interest (interictal spikes) first and obviates the need of extracting subsequent components. The proposed method is evaluated on physiologically plausible simulated EEG data and actual measurements of three patients. The results are compared to those of several popular ICA algorithms as well as second-order blind source separation methods, demonstrating that P-SAUD extracts the epileptic spikes with the same accuracy as the best ICA methods, but reduces the computational complexity by a factor of 10 for 32-channel recordings. This superior computational efficiency is of particular interest considering the increasing use of high-resolution EEG recordings, whose analysis requires algorithms with low computational cost.
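
    P-SAUD itself is not public in this abstract, but the deflation idea it builds on is classical: extract one independent component at a time and decorrelate each new direction from those already found. The sketch below is a FastICA-style deflation, shown only to illustrate that scheme, not the penalized semialgebraic extraction of P-SAUD.

    ```python
    import numpy as np

    def deflation_ica(X, n_components, iters=200, tol=1e-6, seed=0):
        """Deflationary ICA with a tanh nonlinearity. X has one row per EEG
        channel. Components are extracted sequentially; Gram-Schmidt deflation
        keeps each new unmixing vector orthogonal to the previous ones, which
        is what lets a targeted method stop after the components of interest."""
        rng = np.random.default_rng(seed)
        X = X - X.mean(axis=1, keepdims=True)
        d, E = np.linalg.eigh(np.cov(X))                 # whitening
        Xw = (E / np.sqrt(d)).T @ X
        W = np.zeros((n_components, Xw.shape[0]))
        for p in range(n_components):
            w = rng.standard_normal(Xw.shape[0])
            w /= np.linalg.norm(w)
            for _ in range(iters):
                wx = w @ Xw
                w_new = (Xw * np.tanh(wx)).mean(axis=1) \
                        - (1 - np.tanh(wx) ** 2).mean() * w
                w_new -= W[:p].T @ (W[:p] @ w_new)       # deflate found directions
                w_new /= np.linalg.norm(w_new)
                if abs(abs(w_new @ w) - 1) < tol:
                    w = w_new
                    break
                w = w_new
            W[p] = w
        return W @ Xw                                    # estimated sources
    ```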

  2. Measurement of the Exchange Rate of Waters of Hydration in Elastin by 2D T(2)-T(2) Correlation Nuclear Magnetic Resonance Spectroscopy.

    Science.gov (United States)

    Sun, Cheng; Boutis, Gregory S

    2011-02-28

    We report on the direct measurement of the exchange rate of waters of hydration in elastin by T(2)-T(2) exchange spectroscopy. The exchange rates in bovine nuchal ligament elastin and aortic elastin at temperatures near, below and at the physiological temperature are reported. Using an Inverse Laplace Transform (ILT) algorithm, we are able to identify four components in the relaxation times. While three of the components are in good agreement with previous measurements that used multi-exponential fitting, the ILT algorithm distinguishes a fourth component having relaxation times close to that of free water and is identified as water between fibers. With the aid of scanning electron microscopy, a model is proposed allowing for the application of a two-site exchange analysis between any two components for the determination of exchange rates between reservoirs. The results of the measurements support a model (described elsewhere [1]) wherein the net entropy of bulk waters of hydration should increase upon increasing temperature in the inverse temperature transition.

  3. Comparison of machine learning algorithms for detecting coral reef

    Directory of Open Access Journals (Sweden)

    Eduardo Tusa

    2014-09-01

    Full Text Available (Received: 2014/07/31 - Accepted: 2014/09/23) This work focuses on developing a fast coral reef detector for an autonomous underwater vehicle (AUV). Fast detection secures the AUV's stabilization with respect to an area of reef as quickly as possible and prevents devastating collisions. We use the algorithm of Purser et al. (2009) because of its precision. This detector has two parts: feature extraction, which uses Gabor wavelet filters, and feature classification, which uses machine learning based on neural networks. Because of the neural networks' extensive running time, we exchange them for a classification algorithm based on decision trees. We use a database of 621 images of coral reef in Belize (110 images for training and 511 images for testing). We implement the bank of Gabor wavelet filters using C++ and the OpenCV library. We compare the accuracy and running time of 9 machine learning algorithms, which resulted in the selection of the decision tree algorithm. Our coral detector runs in 70 ms, compared with the 22 s required by the algorithm of Purser et al. (2009).
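
    The same Gabor-filter-then-decision-tree pipeline can be sketched in Python with OpenCV and scikit-learn. The filter-bank parameters and the use of per-response mean/std statistics as features are assumptions for illustration; the published bank and feature set differ.

    ```python
    import cv2
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def gabor_features(gray, ksize=21):
        """Extract features from a grayscale patch with a small Gabor filter
        bank: 4 orientations x 2 wavelengths, summarizing each filter response
        by its mean and standard deviation."""
        feats = []
        for theta in np.arange(0, np.pi, np.pi / 4):
            for lambd in (8.0, 16.0):
                kern = cv2.getGaborKernel((ksize, ksize), 4.0, theta,
                                          lambd, 0.5, 0, ktype=cv2.CV_32F)
                resp = cv2.filter2D(gray, cv2.CV_32F, kern)
                feats += [resp.mean(), resp.std()]
        return np.array(feats)

    # With train_patches / test_patches as lists of grayscale arrays and
    # y_train as 0/1 (non-reef / reef) labels:
    # X_train = np.stack([gabor_features(p) for p in train_patches])
    clf = DecisionTreeClassifier(max_depth=8)
    # clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
    ```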

  4. Cultural-Based Genetic Tabu Algorithm for Multiobjective Job Shop Scheduling

    Directory of Open Access Journals (Sweden)

    Yuzhen Yang

    2014-01-01

    Full Text Available The job shop scheduling problem, which has been dealt with by various traditional optimization methods over the decades, has proved to be an NP-hard problem and difficult to solve, especially in the multiobjective field. In this paper, we propose a novel quad-space cultural genetic tabu algorithm (QSCGTA) to solve such problems. This algorithm provides a different structure from the original cultural algorithm in containing dual belief spaces and population spaces. These spaces deal with different levels of populations globally and locally by applying genetic and tabu searches separately, and they exchange information regularly to direct the process more effectively towards promising areas, along with modified multiobjective domination and transform functions. Moreover, we present a bidirectional shifting for the decoding process of job shop scheduling. The computational results we present clearly demonstrate the effectiveness and efficiency of the cultural-based genetic tabu algorithm for the multiobjective job shop scheduling problem.

  5. Uncertain socioeconomic projections used in travel demand and emissions models: could plausible errors result in air quality nonconformity?

    International Nuclear Information System (INIS)

    Rodier, C.J.; Johnston, R.A.

    2002-01-01

    A sensitivity analysis of plausible errors in population, employment, fuel price, and income projections is conducted using the travel demand and emissions models of the Sacramento, CA, USA, region for their transportation plan. The results of the analyses indicate that plausible error ranges for household income and fuel prices are not a significant source of uncertainty with respect to the region's travel demand and emissions projections. However, plausible errors in population and employment projections (within approximately one standard deviation) may result in the region's transportation plan not meeting the conformity test for oxides of nitrogen (NOx) in the year 2005 (i.e., an approximately 16% probability). This outcome is also possible in the year 2015 but less likely (within approximately two standard deviations, or a 2.5% probability). Errors in socioeconomic projections are only one of many sources of error in travel demand and emissions models. These results have several policy implications. First, regions like Sacramento that meet their conformity tests by a very small margin should rethink new highway investment and consider contingency transportation plans that incorporate more aggressive emissions reduction policies. Second, regional transportation planning agencies should conduct sensitivity analyses as part of their conformity analysis to make explicit significant uncertainties in the methods and to identify the probability of their transportation plan not conforming. Third, the US Environmental Protection Agency (EPA) should clarify the interpretation of 'demonstrate' conformity of transportation plans; that is, specify the level of certainty that it considers a sufficient demonstration of conformity. (author)

  6. Thermodynamic performance analysis and algorithm model of multi-pressure heat recovery steam generators (HRSG) based on heat exchangers layout

    International Nuclear Information System (INIS)

    Feng, Hongcui; Zhong, Wei; Wu, Yanling; Tong, Shuiguang

    2014-01-01

    Highlights: • A general model of multi-pressure HRSG based on heat exchangers layout is built. • The minimum temperature difference is introduced to replace pinch point analysis. • Effects of layout on dual pressure HRSG thermodynamic performances are analyzed. - Abstract: Changes in the heat exchangers layout of a heat recovery steam generator (HRSG) modify the amount of waste heat recovered from the flue gas; this motivates optimizing the HRSG design. In this paper a model of the multi-pressure HRSG is built, and an instance of a dual pressure HRSG under three different layouts from Taihu Boiler Co., Ltd. is discussed. With specified values of the inlet temperature, mass flow rate and composition of the flue gas, and of water/steam parameters such as temperature and pressure, the steam mass flow rates and heat efficiencies of the different heat exchanger layouts of the HRSG are analyzed. This analysis is based on the laws of thermodynamics, incorporated into the energy balance equations for the heat exchangers; a worked example of such a balance is sketched below. In conclusion, the steam mass flow rates and heat efficiencies obtained for the three heat exchanger layouts of the HRSG are compared. The results show that the optimization of the heat exchangers layout of HRSGs is of great significance for waste heat recovery and energy conservation
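
    A single heat-exchanger section illustrates the kind of energy balance the model chains together: the heat released by the flue gas equals the heat absorbed by the water/steam. The numbers in the usage comment are made up for illustration.

    ```python
    def steam_mass_flow(m_gas, cp_gas, t_gas_in, t_gas_out, h_steam, h_feed):
        """Energy balance over one HRSG heat-exchanger section: gas-side duty
        equals water/steam-side duty. Units: m_gas in kg/s, cp_gas in
        kJ/(kg K), temperatures in K, enthalpies in kJ/kg."""
        q = m_gas * cp_gas * (t_gas_in - t_gas_out)   # gas-side duty, kW
        return q / (h_steam - h_feed)                 # steam mass flow, kg/s

    # e.g. 100 kg/s of flue gas cooled from 900 K to 450 K raising steam:
    # steam_mass_flow(100, 1.1, 900, 450, 3450, 500) -> about 16.8 kg/s
    ```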

  7. Exact-exchange time-dependent density-functional theory for static and dynamic polarizabilities

    International Nuclear Information System (INIS)

    Hirata, So; Ivanov, Stanislav; Bartlett, Rodney J.; Grabowski, Ireneusz

    2005-01-01

    Time-dependent density-functional theory (TDDFT) employing the exact-exchange functional has been formulated on the basis of the optimized-effective-potential (OEP) method of Talman and Shadwick for second-order molecular properties and implemented into a Gaussian-basis-set, trial-vector algorithm. The only approximation involved, apart from the lack of correlation effects and the use of Gaussian-type basis functions, was the consistent use of the adiabatic approximation in the exchange kernel and in the linear response function. The static and dynamic polarizabilities and their anisotropy predicted by the TDDFT with exact exchange (TDOEP) agree accurately with the corresponding values from time-dependent Hartree-Fock theory, the exact-exchange counterpart in the wave function theory. The TDOEP is free from the nonphysical asymptotic decay of the exchange potential of most conventional density functionals or from any other manifestations of the incomplete cancellation of the self-interaction energy. The systematic overestimation of the absolute values and dispersion of polarizabilities that plagues most conventional TDDFT cannot be seen in the TDOEP

  8. Analytic models of plausible gravitational lens potentials

    International Nuclear Information System (INIS)

    Baltz, Edward A.; Marshall, Phil; Oguri, Masamune

    2009-01-01

    Gravitational lenses on galaxy scales are plausibly modelled as having ellipsoidal symmetry and a universal dark matter density profile, with a Sérsic profile to describe the distribution of baryonic matter. Predicting all lensing effects requires knowledge of the total lens potential: in this work we give analytic forms for that of the above hybrid model. Emphasising that complex lens potentials can be constructed from simpler components in linear combination, we provide a recipe for attaining elliptical symmetry in either projected mass or lens potential. We also provide analytic formulae for the lens potentials of Sérsic profiles for integer and half-integer index. We then present formulae describing the gravitational lensing effects due to smoothly-truncated universal density profiles in cold dark matter model. For our isolated haloes the density profile falls off as radius to the minus fifth or seventh power beyond the tidal radius, functional forms that allow all orders of lens potential derivatives to be calculated analytically, while ensuring a non-divergent total mass. We show how the observables predicted by this profile differ from that of the original infinite-mass NFW profile. Expressions for the gravitational flexion are highlighted. We show how decreasing the tidal radius allows stripped haloes to be modelled, providing a framework for a fuller investigation of dark matter substructure in galaxies and clusters. Finally we remark on the need for finite mass halo profiles when doing cosmological ray-tracing simulations, and the need for readily-calculable higher order derivatives of the lens potential when studying catastrophes in strong lenses

  9. A Parallel Adaptive Particle Swarm Optimization Algorithm for Economic/Environmental Power Dispatch

    Directory of Open Access Journals (Sweden)

    Jinchao Li

    2012-01-01

    Full Text Available A parallel adaptive particle swarm optimization algorithm (PAPSO) is proposed for economic/environmental power dispatch, which can overcome premature convergence, slow convergence in the late evolutionary phase, and the lack of good search direction in the particles' evolutionary process. A search population is randomly divided into several subpopulations. Then, for each subpopulation, the optimal solution is searched for synchronously using the proposed method, and thus parallel computing is realized. To avoid converging to a local optimum, a crossover operator is introduced to exchange information among the subpopulations while sustaining the diversity of the population. Simulation results show that the proposed algorithm can effectively solve the economic/environmental operation problem of hydropower generating units. Performance comparisons show that the solution from the proposed method is better than those from the conventional particle swarm algorithm and other optimization algorithms.

  10. Considerably Unfolded Transthyretin Monomers Preceed and Exchange with Dynamically Structured Amyloid Protofibrils

    DEFF Research Database (Denmark)

    Groenning, Minna; Campos, Raul I; Hirschberg, Daniel

    2015-01-01

    describe an unexpectedly dynamic TTR protofibril structure which exchanges protomers with highly unfolded monomers in solution. The protofibrils only grow to an approximate final size of 2,900 kDa and a length of 70 nm, and a comparative HXMS analysis of native and aggregated samples revealed a much higher average solvent exposure of TTR upon fibrillation. With SAXS, we reveal the continuous presence of a considerably unfolded TTR monomer throughout the fibrillation process, and show that a considerable fraction of the fibrillating protein remains in solution even at a late maturation state. Together, these data reveal that the fibrillar state interchanges with the solution state. Accordingly, we suggest that TTR fibrillation proceeds via addition of considerably unfolded monomers, and the continuous presence of amyloidogenic structures near the protofibril surface offers a plausible explanation

  11. Atom exchange between aqueous Fe(II) and structural Fe in clay minerals.

    Science.gov (United States)

    Neumann, Anke; Wu, Lingling; Li, Weiqiang; Beard, Brian L; Johnson, Clark M; Rosso, Kevin M; Frierdich, Andrew J; Scherer, Michelle M

    2015-03-03

    Due to their stability toward reductive dissolution, Fe-bearing clay minerals are viewed as a renewable source of Fe redox activity in diverse environments. Recent findings of interfacial electron transfer between aqueous Fe(II) and structural Fe in clay minerals and electron conduction in octahedral sheets of nontronite, however, raise the question whether Fe interaction with clay minerals is more dynamic than previously thought. Here, we use an enriched isotope tracer approach to simultaneously trace Fe atom movement from the aqueous phase to the solid ((57)Fe) and from the solid into the aqueous phase ((56)Fe). Over 6 months, we observed a significant decrease in aqueous (57)Fe isotope fraction, with a fast initial decrease which slowed after 3 days and stabilized after about 50 days. For the aqueous (56)Fe isotope fraction, we observed a similar but opposite trend, indicating that Fe atom movement had occurred in both directions: from the aqueous phase into the solid and from the solid into the aqueous phase. We calculated that 5-20% of structural Fe in clay minerals NAu-1, NAu-2, and SWa-1 exchanged with aqueous Fe(II), which significantly exceeds the Fe atom layer exposed directly to solution. Calculations based on electron-hopping rates in nontronite suggest that the bulk conduction mechanism previously demonstrated for hematite [1] and suggested as an explanation for the significant Fe atom exchange observed in goethite [2] may be a plausible mechanism for Fe atom exchange in Fe-bearing clay minerals. Our finding of 5-20% Fe atom exchange in clay minerals indicates that we need to rethink how Fe mobility affects the macroscopic properties of Fe-bearing phyllosilicates and its role in Fe biogeochemical cycling, as well as its use in a variety of engineered applications, such as landfill liners and nuclear repositories.

  12. SVC control enhancement applying self-learning fuzzy algorithm for islanded microgrid

    Directory of Open Access Journals (Sweden)

    Hossam Gabbar

    2016-03-01

    Full Text Available Maintaining voltage stability within acceptable levels for islanded microgrids (MGs) is a challenge due to the limited exchange power between generation and loads. This paper proposes an algorithm to enhance the dynamic performance of islanded MGs in the presence of load disturbance using a Static VAR Compensator (SVC) with a Fuzzy Model Reference Learning Controller (FMRLC). The proposed algorithm compensates for MG nonlinearity via fuzzy membership functions and an inference mechanism embedded in both the controller and the inverse model. Hence, the MG keeps the desired performance as required at any operating condition. Furthermore, the self-learning capability of the proposed control algorithm compensates for grid parameter variation even with inadequate information about load dynamics. A reference model was designed to reject bus voltage disturbance with performance achievable by the proposed fuzzy controller. Three simulation scenarios are presented to investigate the effectiveness of the proposed control algorithm in improving the steady-state and transient performance of islanded MGs. The first scenario was conducted without SVC, the second with SVC using a PID controller, and the third using the FMRLC algorithm. A comparison of the results shows the ability of the proposed control algorithm to enhance disturbance rejection due to the learning process.

  13. Mass Conservation and Positivity Preservation with Ensemble-type Kalman Filter Algorithms

    Science.gov (United States)

    Janjic, Tijana; McLaughlin, Dennis B.; Cohn, Stephen E.; Verlaan, Martin

    2013-01-01

    Maintaining conservative physical laws numerically has long been recognized as being important in the development of numerical weather prediction (NWP) models. In the broader context of data assimilation, concerted efforts to maintain conservation laws numerically and to understand the significance of doing so have begun only recently. In order to enforce physically based conservation laws of total mass and positivity in the ensemble Kalman filter, we incorporate constraints to ensure that the filter ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. We show that the analysis steps of the ensemble transform Kalman filter (ETKF) algorithm and the ensemble Kalman filter (EnKF) algorithm can conserve the mass integral, but do not preserve positivity. Further, if localization is applied or if negative values are simply set to zero, then the total mass is not conserved either. In order to ensure mass conservation, a projection matrix that corrects for localization effects is constructed. In order to maintain both mass conservation and positivity preservation through the analysis step, we construct a data assimilation algorithm based on quadratic programming and ensemble Kalman filtering. Mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate constraints. Some simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. The results show clear improvements in both analyses and forecasts, particularly in the presence of localized features. Behavior of the algorithm is also tested in the presence of model error.
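
    One concrete way to realize such a constrained update, shown as a sketch rather than the paper's formulation, is to project each analysis state onto the feasible set {x >= 0, sum(x) = total mass} by solving a small quadratic program.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def constrained_update(x_analysis, total_mass):
        """Project an ensemble member's analysis state onto the set of
        nonnegative states with a prescribed total mass by solving
        min ||x - x_analysis||^2 subject to x >= 0, sum(x) = total_mass."""
        n = len(x_analysis)
        res = minimize(lambda x: np.sum((x - x_analysis) ** 2),
                       x0=np.maximum(x_analysis, 0.0),
                       jac=lambda x: 2 * (x - x_analysis),
                       bounds=[(0.0, None)] * n,
                       constraints=[{"type": "eq",
                                     "fun": lambda x: x.sum() - total_mass}],
                       method="SLSQP")
        return res.x

    # e.g. constrained_update(np.array([1.2, -0.3, 0.6]), total_mass=1.5)
    ```

    Simply clipping negatives to zero would violate mass conservation; the equality constraint is what restores the mass budget after the clip.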

  14. An Overview of a Class of Clock Synchronization Algorithms for Wireless Sensor Networks: A Statistical Signal Processing Perspective

    Directory of Open Access Journals (Sweden)

    Xu Wang

    2015-08-01

    Full Text Available Recently, wireless sensor networks (WSNs) have drawn great interest due to their outstanding monitoring and management potential in medical, environmental and industrial applications. Most of the applications that employ WSNs demand all of the sensor nodes to run on a common time scale, a requirement that highlights the importance of clock synchronization. The clock synchronization problem in WSNs is inherently related to parameter estimation. The accuracy of clock synchronization algorithms depends essentially on the statistical properties of the parameter estimation algorithms. Recently, studies dedicated to the estimation of synchronization parameters, such as clock offset and skew, have begun to emerge in the literature. The aim of this article is to provide an overview of the state-of-the-art clock synchronization algorithms for WSNs from a statistical signal processing point of view. This article focuses on describing the key features of the class of clock synchronization algorithms that exploit the traditional two-way message (signal) exchange mechanism. Upon introducing the two-way message exchange mechanism, the main clock offset estimation algorithms for pairwise synchronization of sensor nodes are first reviewed, and their performance is compared. The class of fully-distributed clock offset estimation algorithms for network-wide synchronization is then surveyed. The paper concludes with a list of open research problems pertaining to clock synchronization of WSNs.
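
    The classical estimator behind the two-way exchange is compact enough to state directly: with four timestamps per round and a symmetric link, the offset and delay fall out of two averages.

    ```python
    def offset_and_delay(t1, t2, t3, t4):
        """Classical two-way message exchange estimator: node A timestamps a
        request at t1, node B receives it at t2 and replies at t3, and A
        receives the reply at t4. Assuming a symmetric link, B's clock offset
        relative to A and the one-way propagation delay are returned."""
        offset = ((t2 - t1) + (t3 - t4)) / 2.0
        delay = ((t4 - t1) - (t3 - t2)) / 2.0
        return offset, delay

    # e.g. offset_and_delay(0.0, 1.5, 1.6, 0.2) -> (1.45, 0.05)
    ```

    In practice the timestamps are noisy, which is why the surveyed algorithms treat offset and skew estimation as a statistical problem over many exchange rounds rather than a single round.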

  15. A Monte Carlo Metropolis-Hastings Algorithm for Sampling from Distributions with Intractable Normalizing Constants

    KAUST Repository

    Liang, Faming; Jin, Ick-Hoon

    2013-01-01

    Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling, and thus can be applied to many statistical models for which perfect sampling is not available or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effect models and missing data problems that involve simulations from a distribution with intractable integrals. © 2013 Massachusetts Institute of Technology.
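
    The key move of MCMH can be shown on a toy one-dimensional exponential-family model: the intractable ratio of normalizing constants in the Metropolis-Hastings acceptance probability is replaced by an importance-sampling estimate from auxiliary draws. This is a schematic illustration only; the flat prior, symmetric proposal and exact auxiliary sampling are simplifying assumptions of ours:

        import numpy as np
        rng = np.random.default_rng(0)

        def g(y, theta):
            # unnormalized density g(y|theta) = exp(theta*y - y^2);
            # the normalized f(.|theta) here happens to be N(theta/2, 1/2)
            return np.exp(theta * y - y * y)

        def z_ratio_estimate(theta, theta_new, m=500):
            # Z(theta_new)/Z(theta) = E_theta[g(y|theta_new)/g(y|theta)],
            # estimated from auxiliary draws y_i ~ f(.|theta)
            y = rng.normal(theta / 2.0, np.sqrt(0.5), size=m)
            return np.mean(g(y, theta_new) / g(y, theta))

        def mcmh_step(theta, x_obs, step=0.5):
            # flat prior and symmetric proposal assumed, so the acceptance
            # ratio is g(x|theta')/g(x|theta) divided by Z(theta')/Z(theta)
            theta_new = theta + rng.normal(0.0, step)
            ratio = (g(x_obs, theta_new) / g(x_obs, theta)
                     / z_ratio_estimate(theta, theta_new))
            return theta_new if rng.random() < min(1.0, ratio) else theta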

  17. Measurement of the exchange rate of waters of hydration in elastin by 2D T2-T2 correlation nuclear magnetic resonance spectroscopy

    International Nuclear Information System (INIS)

    Sun Cheng; Boutis, Gregory S

    2011-01-01

    We report on a direct measurement of the exchange rate of waters of hydration in elastin by T2-T2 exchange spectroscopy. The exchange rates in bovine nuchal ligament elastin and aortic elastin at temperatures near, below and at the physiological temperature are reported here. Using an inverse Laplace transform (ILT) algorithm, we are able to identify four components in the relaxation times. While three of the components are in good agreement with previous measurements that used multi-exponential fitting, the ILT algorithm distinguishes a fourth component having relaxation times close to that of free water and is identified as water between fibers. With the aid of scanning electron microscopy, a model is proposed that allows for the application of a two-site exchange analysis between any two components for the determination of exchange rates between reservoirs. The results of the measurements support a model (described by Urry and Parker 2002 J. Muscle Res. Cell Motil. 23 543-59) wherein the net entropy of waters of hydration should increase with increasing temperature in the inverse temperature transition.

  18. A Review Of Encryption Algorithms-RSA And Diffie-Hellman

    Directory of Open Access Journals (Sweden)

    Nilesh A. Lal

    2017-07-01

    Full Text Available Network security is the protection of data and messages from cybercrime. A cryptography system is designed to allow free communication over a computer network. It is a process in which a sender sends an encrypted message to the recipient. Symmetric encryption is known as single-key encryption. The RSA algorithm, by contrast, is an asymmetric (public-key) encryption scheme: it uses a public key and a private key. Diffie-Hellman is a key-exchange protocol in which both parties derive a shared secret key with which to encrypt messages.
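
    For concreteness, a toy Diffie-Hellman exchange; the tiny demonstration modulus below is our assumption, and real deployments use large safe primes or elliptic curves:

        import secrets

        p, g = 0xFFFFFFFB, 5                 # demo prime modulus and generator
        a = secrets.randbelow(p - 2) + 1     # Alice's private exponent
        b = secrets.randbelow(p - 2) + 1     # Bob's private exponent
        A = pow(g, a, p)                     # Alice sends A = g^a mod p
        B = pow(g, b, p)                     # Bob sends   B = g^b mod p
        shared_alice = pow(B, a, p)          # (g^b)^a mod p
        shared_bob = pow(A, b, p)            # (g^a)^b mod p
        assert shared_alice == shared_bob    # both now hold g^(ab) mod p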

  19. Investigating the effect of non-similar fins in thermoeconomic optimization of plate fin heat exchanger

    International Nuclear Information System (INIS)

    Hajabdollahi, Hassan

    2015-01-01

    Thermoeconomic optimization of a plate fin heat exchanger with similar (SF) and different (DF), i.e. non-similar, fins on each side is presented in this work. For this purpose, both heat exchanger effectiveness and total annual cost (TAC) are optimized simultaneously using a multi-objective particle swarm optimization algorithm. The above procedure is performed for various mass flow rates on each side. The optimum results reveal that no thermoeconomic improvement is observed in the case of the same mass flow rate on each side, while both effectiveness and TAC are improved in the case of different mass flow rates. For example, effectiveness and TAC are improved by 0.95% and 10.17%, respectively, for the DF compared with the SF. In fact, from the thermoeconomic viewpoint, a more compact fin configuration should be selected on the side with the lower mass flow rate. Furthermore, from the thermodynamic optimization viewpoint both SF and DF have the same optimum result, while from the economic (or thermoeconomic) optimization viewpoint, a significant decrease in TAC is achievable in the case of DF compared with SF. - Highlights: • Thermoeconomic modeling of compact heat exchanger. • Selection of fin and heat exchanger geometries as nine decision variables. • Applying MOPSO algorithm for multi objective optimization. • Considering the similar and different fin specification in each side. • Investigation of optimum design parameters for various mass flow rates

  20. Spin and orbital exchange interactions from Dynamical Mean Field Theory

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, A., E-mail: a.secchi@science.ru.nl [Radboud University, Institute for Molecules and Materials, 6525 AJ Nijmegen (Netherlands); Lichtenstein, A.I., E-mail: alichten@physnet.uni-hamburg.de [Universitat Hamburg, Institut für Theoretische Physik, Jungiusstraße 9, D-20355 Hamburg (Germany); Katsnelson, M.I., E-mail: m.katsnelson@science.ru.nl [Radboud University, Institute for Molecules and Materials, 6525 AJ Nijmegen (Netherlands)

    2016-02-15

    We derive a set of equations expressing the parameters of the magnetic interactions characterizing a strongly correlated electronic system in terms of single-electron Green's functions and self-energies. This allows us to establish a mapping between the initial electronic system and a spin model including up to quadratic interactions between the effective spins, with a general interaction (exchange) tensor that accounts for anisotropic exchange, Dzyaloshinskii–Moriya interaction and other symmetric terms such as dipole–dipole interaction. We present the formulas in a format that can be used for computations via Dynamical Mean Field Theory algorithms. - Highlights: • We give formulas for the exchange interaction tensor in strongly correlated systems. • Interactions are written in terms of electronic Green's functions and self-energies. • The method is suitable for a Dynamical Mean Field Theory implementation. • No quenching of the orbital magnetic moments is assumed. • Spin and orbital contributions to magnetism can be computed separately.

  1. Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package

    Science.gov (United States)

    Barnes, Taylor; Kurth, Thorsten; Carrier, Pierre; Wichmann, Nathan; Prendergast, David; Kent, Paul; Deslippe, Jack

    Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon-Phi ``Knights Landing'' architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.

  2. A non-oscillatory energy-splitting method for the computation of compressible multi-fluid flows

    Science.gov (United States)

    Lei, Xin; Li, Jiequan

    2018-04-01

    This paper proposes a new non-oscillatory energy-splitting conservative algorithm for computing multi-fluid flows in the Eulerian framework. In comparison with existing multi-fluid algorithms in the literature, it is shown that the mass fraction model with isobaric hypothesis is a plausible choice for designing numerical methods for multi-fluid flows. Then we construct a conservative Godunov-based scheme with the high order accurate extension by using the generalized Riemann problem solver, through the detailed analysis of kinetic energy exchange when fluids are mixed under the hypothesis of isobaric equilibrium. Numerical experiments are carried out for the shock-interface interaction and shock-bubble interaction problems, which display the excellent performance of this type of schemes and demonstrate that nonphysical oscillations are suppressed around material interfaces substantially.

  3. The 10/66 Dementia Research Group's fully operationalised DSM-IV dementia computerized diagnostic algorithm, compared with the 10/66 dementia algorithm and a clinician diagnosis: a population validation study

    Directory of Open Access Journals (Sweden)

    Krishnamoorthy ES

    2008-06-01

    Full Text Available Abstract Background The criterion for dementia implicit in DSM-IV is widely used in research but not fully operationalised. The 10/66 Dementia Research Group sought to do this using assessments from their one-phase dementia diagnostic research interview, and to validate the resulting algorithm in a population-based study in Cuba. Methods The criterion was operationalised as a computerised algorithm, applying clinical principles, based upon the 10/66 cognitive tests, clinical interview and informant reports; the Community Screening Instrument for Dementia, the CERAD 10 word list learning and animal naming tests, the Geriatric Mental State, and the History and Aetiology Schedule – Dementia Diagnosis and Subtype. This was validated in Cuba against a local clinician DSM-IV diagnosis and the 10/66 dementia diagnosis (originally calibrated probabilistically against clinician DSM-IV diagnoses in the 10/66 pilot study). Results The DSM-IV sub-criteria were plausibly distributed among clinically diagnosed dementia cases and controls. The clinician diagnoses agreed better with the 10/66 dementia diagnosis than with the more conservative computerized DSM-IV algorithm. The DSM-IV algorithm was particularly likely to miss less severe dementia cases. Those with a 10/66 dementia diagnosis who did not meet the DSM-IV criterion were less cognitively and functionally impaired compared with the DSM-IV confirmed cases, but still grossly impaired compared with those free of dementia. Conclusion The DSM-IV criterion, strictly applied, defines a narrow category of unambiguous dementia characterized by marked impairment. It may be specific but incompletely sensitive to clinically relevant cases. The 10/66 dementia diagnosis defines a broader category that may be more sensitive, identifying genuine cases beyond those defined by our DSM-IV algorithm, with relevance to the estimation of the population burden of this disorder.

  4. Direct estimates of national neonatal and child cause–specific mortality proportions in Niger by expert algorithm and physician–coded analysis of verbal autopsy interviews

    Directory of Open Access Journals (Sweden)

    Henry D. Kalter

    2015-06-01

    Full Text Available Background This study was one of a set of verbal autopsy investigations undertaken by the WHO/UNICEF–supported Child Health Epidemiology Reference Group (CHERG) to derive direct estimates of the causes of neonatal and child deaths in high priority countries of sub–Saharan Africa. The objective of the study was to determine the cause distributions of neonatal (0–27 days) and child (1–59 months) mortality in Niger. Methods Verbal autopsy interviews were conducted on random samples of 453 neonatal deaths and 620 child deaths from 2007 to 2010 identified by the 2011 Niger National Mortality Survey. The cause of each death was assigned using two methods: computerized expert algorithms arranged in a hierarchy and physician completion of a death certificate for each child. The findings of the two methods were compared to each other, and plausibility checks were conducted to assess which is the preferred method. Comparison of some direct measures from this study with CHERG modeled cause of death estimates is discussed. Findings The cause distributions of neonatal deaths as determined by expert algorithms and the physician were similar, with the same top three causes by both methods and all but two other causes within one rank of each other. Although child causes of death differed more, the reasons often could be discerned by analyzing algorithmic criteria alongside the physician's application of required minimal diagnostic criteria. Including all algorithmic (primary and co–morbid) and physician (direct, underlying and contributing) diagnoses in the comparison minimized the differences, with kappa coefficients greater than 0.40 for five of 11 neonatal diagnoses and nine of 13 child diagnoses. By algorithmic diagnosis, early onset neonatal infection was significantly associated (χ² = 13.2, P < 0.001) with maternal infection, and the geographic distribution of child meningitis deaths closely corresponded with that for meningitis surveillance

  5. Seamless Vertical Handoff using Invasive Weed Optimization (IWO) algorithm for heterogeneous wireless networks

    Directory of Open Access Journals (Sweden)

    T. Velmurugan

    2016-03-01

    Full Text Available Heterogeneous wireless networks are an integration of two different networks. For better performance, connections are to be exchanged among the different networks using seamless Vertical Handoff. The evolutionary invasive weed optimization algorithm, popularly known as IWO, has been used in this paper to solve the Vertical Handoff (VHO) and Horizontal Handoff (HHO) problems. This integer-coded algorithm is based on the colonizing behavior of weed plants and has been developed to optimize the system load and reduce the battery power consumption of the Mobile Node (MN). Constraints such as Received Signal Strength (RSS), battery lifetime, mobility, load and so on are taken into account. Individual factors as well as combinations of a number of factors are considered during the decision process to make it more effective. This paper brings out the novel method of the IWO algorithm for decision making during Vertical Handoff. Therefore the proposed VHO decision-making algorithm is compared with the existing SSF and OPTG methods.
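
    A compact sketch of the IWO loop itself; the cost function and parameter values below are illustrative placeholders of ours, not the paper's handoff criteria:

        import numpy as np
        rng = np.random.default_rng(1)

        def cost(x):
            return np.sum(x ** 2)            # toy stand-in for the handoff cost

        def iwo(dim=4, pop_max=20, iters=100, seeds_max=5, s0=1.0, s_end=0.01):
            pop = rng.uniform(-5, 5, size=(5, dim))
            for it in range(iters):
                sigma = s0 * ((iters - it) / iters) ** 2 + s_end  # shrinking dispersal
                costs = np.array([cost(w) for w in pop])
                worst, best = costs.max(), costs.min()
                offspring = []
                for w, c in zip(pop, costs):
                    # fitter weeds spawn more seeds, scattered around the parent
                    n = int(round(seeds_max * (worst - c) / (worst - best + 1e-12)))
                    for _ in range(max(n, 1)):
                        offspring.append(w + rng.normal(0.0, sigma, size=dim))
                pop = np.vstack([pop, np.array(offspring)])
                pop = pop[np.argsort([cost(w) for w in pop])[:pop_max]]  # exclusion
            return pop[0]

        print(iwo())  # converges toward the zero-cost optimum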

  6. Energetic integration of discontinuous processes by means of genetic algorithms, GABSOBHIN; Integration energetique de procedes discontinus a l'aide d'algorithmes genetiques, GABSOBHIN

    Energy Technology Data Exchange (ETDEWEB)

    Krummenacher, P.; Renaud, B.; Marechal, F.; Favrat, D.

    2001-07-01

    This report presents a new methodological approach for the optimal design of energy-integrated batch processes. The main emphasis is put on indirect and, to some extent, on direct heat exchange networks with the possibility of introducing closed or open storage systems. The study demonstrates the feasibility of optimising with genetic algorithms while highlighting the pros and cons of this type of approach. The study shows that the resolution of such problems should preferably be done in several steps to better target the expected solutions. It is demonstrated that, in spite of relatively large computer times (on PCs), the use of genetic algorithms allows the consideration of both continuous decision variables (size, operational rating of equipment, etc.) and integer variables (related to the structure at design and during operation). A comparison of two optimisation strategies is shown, with a preference for a two-step optimisation scheme. One of the strengths of genetic algorithms is the capacity to accommodate heuristic rules, which can be introduced in the model. However, a rigorous modelling strategy is advocated to improve robustness and adequate coding of the decision variables. The practical aspects of the research work are converted into a software package developed with MATLAB to solve the energy integration of batch processes with a reasonable number of closed or open stores. This software includes the model of superstructures, including the heat exchangers and the storage alternatives, as well as the link to the Struggle algorithm developed at MIT via a dedicated new interface. The package also includes user-friendly pre-processing using EXCEL, which facilitates application to other similar industrial problems. These software developments have been validated on both academic and industrial types of problems. (author)

  7. EDDA: An Efficient Distributed Data Replication Algorithm in VANETs.

    Science.gov (United States)

    Zhu, Junyu; Huang, Chuanhe; Fan, Xiying; Guo, Sipei; Fu, Bin

    2018-02-10

    Efficient data dissemination in vehicular ad hoc networks (VANETs) is a challenging issue due to the dynamic nature of the network. To improve the performance of data dissemination, we study distributed data replication algorithms in VANETs for exchanging information and computing in an arbitrarily-connected network of vehicle nodes. To achieve low dissemination delay and improve the network performance, we control the number of message copies that can be disseminated in the network and then propose an efficient distributed data replication algorithm (EDDA). The key idea is to let the data carrier distribute the data dissemination tasks to multiple nodes to speed up the dissemination process. We calculate the number of communication stages for the network to enter into a balanced status and show that the proposed distributed algorithm can converge to a consensus in a small number of communication stages. Most of the theoretical results described in this paper are to study the complexity of network convergence. The lower bound and upper bound are also provided in the analysis of the algorithm. Simulation results show that the proposed EDDA can efficiently disseminate messages to vehicles in a specific area with low dissemination delay and system overhead.

  9. Value at risk estimation with entropy-based wavelet analysis in exchange markets

    Science.gov (United States)

    He, Kaijian; Wang, Lijun; Zou, Yingchao; Lai, Kin Keung

    2014-08-01

    In recent years, exchange markets have become increasingly integrated. Fluctuations and risks across different exchange markets exhibit co-moving and complex dynamics. In this paper we propose entropy-based multivariate wavelet approaches to analyze the multiscale characteristics in the multidimensional domain and to further improve the reliability of Value at Risk estimation. Wavelet analysis is introduced to construct the entropy-based Multiscale Portfolio Value at Risk estimation algorithm, which accounts for the multiscale dynamic correlation. The entropy measure, with the error minimization principle, is proposed as the more effective criterion to select the best basis when determining the wavelet family and the decomposition level to use. The empirical studies conducted in this paper provide positive evidence as to the superior performance of the proposed approach, using the closely related Chinese Renminbi and European Euro exchange markets.

  10. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    Science.gov (United States)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
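
    The move set is easy to show in serial form; the paper's contribution is distributing these moves and the cost evaluation across hypercube processors, which this toy sketch does not attempt. The half-perimeter wirelength cost and all names below are our assumptions:

        import math, random
        random.seed(0)

        def wirelength(placement, nets):
            # half-perimeter wirelength; placement maps cell id -> (x, y)
            total = 0
            for cells in nets:
                xs = [placement[c][0] for c in cells]
                ys = [placement[c][1] for c in cells]
                total += (max(xs) - min(xs)) + (max(ys) - min(ys))
            return total

        def anneal(placement, nets, t=10.0, cooling=0.95, steps=2000):
            cost = wirelength(placement, nets)
            for step in range(steps):
                a, b = random.sample(list(placement), 2)
                placement[a], placement[b] = placement[b], placement[a]  # exchange
                new_cost = wirelength(placement, nets)
                if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
                    cost = new_cost                      # accept the move
                else:
                    placement[a], placement[b] = placement[b], placement[a]  # undo
                if step % 100 == 99:
                    t *= cooling                         # cooling schedule
            return placement, cost

        cells = {i: (random.randint(0, 7), random.randint(0, 7)) for i in range(16)}
        nets = [random.sample(range(16), 3) for _ in range(10)]
        print(anneal(cells, nets)[1])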

  11. Novel Quantum Encryption Algorithm Based on Multiqubit Quantum Shift Register and Hill Cipher

    International Nuclear Information System (INIS)

    Khalaf, Rifaat Zaidan; Abdullah, Alharith Abdulkareem

    2014-01-01

    Based on a quantum shift register, a novel quantum block cryptographic algorithm that can be used to encrypt classical messages is proposed. The message is encoded and decoded by using a code generated by the quantum shift register. The security of this algorithm is analysed in detail. It is shown that, in the quantum block cryptographic algorithm, two keys can be used. One of them is the classical key used in the Hill cipher algorithm; here Alice and Bob use the authenticated Diffie-Hellman key exchange algorithm, with the concept of digital signature for the authentication of the two communicating parties, and so eliminate the man-in-the-middle attack. The other key is generated by the quantum shift register and used for the coding of the encryption message, where Alice and Bob share the key by using the BB84 protocol. The novel algorithm can prevent a quantum attack strategy as well as a classical attack strategy. The problem of key management is discussed and circuits for the encryption and the decryption are suggested
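
    The classical half of the scheme is an ordinary Hill cipher. A minimal sketch with a textbook 2x2 key; any key matrix must be invertible mod 26, and this particular key is our example, not the paper's:

        import numpy as np

        KEY = np.array([[3, 3], [2, 5]])          # det = 9, gcd(9, 26) = 1

        def hill_encrypt(plaintext, key=KEY):
            nums = [ord(c) - ord('A') for c in plaintext.upper()]
            if len(nums) % 2:
                nums.append(ord('X') - ord('A'))  # pad odd-length input
            out = []
            for i in range(0, len(nums), 2):
                block = np.array(nums[i:i + 2])
                out.extend((key @ block) % 26)    # ciphertext block C = K.P mod 26
            return ''.join(chr(int(n) + ord('A')) for n in out)

        print(hill_encrypt("HELP"))  # -> HIAT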

  12. CONAN—The cruncher of local exchange coefficients for strongly interacting confined systems in one dimension

    DEFF Research Database (Denmark)

    Loft, Niels Jakob Søe; Kristensen, Lasse Bjørn; Thomsen, Anders

    2016-01-01

    We consider a one-dimensional system of particles with strong zero-range interactions. This system can be mapped onto a spin chain of the Heisenberg type with exchange coefficients that depend on the external trap. In this paper, we present an algorithm that can be used to compute these exchange coefficients. We introduce an open source code CONAN (Coefficients of One-dimensional N-Atom Networks) which is based on this algorithm. CONAN works with arbitrary external potentials and we have tested its reliability for system sizes up to around 35 particles. As illustrative examples, we consider a harmonic trap and a box trap with a superimposed asymmetric tilted potential. For these examples, the computation time typically scales with the number of particles as O(N^(3.5±0.4)). Computation times are around 10 s for N=10 particles and less than 10 min for N=20 particles.

  13. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for the use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  14. Measurement of the exchange rate of waters of hydration in elastin by 2D T2-T2 correlation nuclear magnetic resonance spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Sun Cheng; Boutis, Gregory S, E-mail: gboutis@brooklyn.cuny.edu [Brooklyn College, Department of Physics, 2900 Bedford Avenue, Brooklyn, NY 11210 (United States)

    2011-02-15

    We report on a direct measurement of the exchange rate of waters of hydration in elastin by T2-T2 exchange spectroscopy. The exchange rates in bovine nuchal ligament elastin and aortic elastin at temperatures near, below and at the physiological temperature are reported here. Using an inverse Laplace transform (ILT) algorithm, we are able to identify four components in the relaxation times. While three of the components are in good agreement with previous measurements that used multi-exponential fitting, the ILT algorithm distinguishes a fourth component having relaxation times close to that of free water and is identified as water between fibers. With the aid of scanning electron microscopy, a model is proposed that allows for the application of a two-site exchange analysis between any two components for the determination of exchange rates between reservoirs. The results of the measurements support a model (described by Urry and Parker 2002 J. Muscle Res. Cell Motil. 23 543-59) wherein the net entropy of waters of hydration should increase with increasing temperature in the inverse temperature transition.

  15. A biologically plausible transform for visual recognition that is invariant to translation, scale and rotation

    Directory of Open Access Journals (Sweden)

    Pavel Sountsov

    2011-11-01

    Full Text Available Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled or rotated.

  16. A Biologically Plausible Transform for Visual Recognition that is Invariant to Translation, Scale, and Rotation.

    Science.gov (United States)

    Sountsov, Pavel; Santucci, David M; Lisman, John E

    2011-01-01

    Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated.
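
    The resampling at the heart of such a transform can be sketched directly: in (log r, theta) coordinates, scaling the input becomes a vertical shift and rotation a horizontal shift. A minimal nearest-neighbour version, with array shapes and sampling choices that are our assumptions rather than the authors' implementation:

        import numpy as np

        def log_polar(image, n_r=64, n_theta=64):
            h, w = image.shape
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            log_r_max = np.log(min(cx, cy))
            out = np.zeros((n_r, n_theta))
            for i in range(n_r):
                for j in range(n_theta):
                    r = np.exp(log_r_max * (i + 1) / n_r)   # log-spaced radii
                    th = 2.0 * np.pi * j / n_theta
                    y = int(round(cy + r * np.sin(th)))
                    x = int(round(cx + r * np.cos(th)))
                    if 0 <= y < h and 0 <= x < w:
                        out[i, j] = image[y, x]             # nearest-neighbour sample
            return out

        # Scaling the input by 2 shifts rows by ~n_r*log(2)/log_r_max;
        # rotating by phi shifts columns by n_theta*phi/(2*pi).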

  17. Combined cation-exchange and extraction chromatographic method of pre-concentration and concomitant separation of Cu(II) with high molecular mass liquid cation exchanger after its online detection.

    Science.gov (United States)

    Mandal, B; Roy, U S; Datta, D; Ghosh, N

    2011-08-19

    A selective method has been developed for the extraction chromatographic trace level separation of Cu(II) with Versatic 10 (liquid cation exchanger) coated on silanised silica gel (SSG-V10). Cu(II) has been extracted from 0.1M acetate buffer in the pH range 4.0-5.5. The effects of foreign ions, pH, flow rate and stripping agents on extraction and elution have been investigated. The exchange capacity of the prepared exchanger at different temperatures with respect to Cu(II) has been determined. The extraction equilibrium constant (K(ex)) and different standard thermodynamic parameters have also been calculated by the temperature variation method. The positive values of ΔH (7.98 kJ mol⁻¹) and ΔS (0.1916 kJ mol⁻¹ K⁻¹) and the negative value of ΔG (-49.16 kJ mol⁻¹) indicated that the process was endothermic, entropy-gaining and spontaneous. The preconcentration factor was optimized at 74.7 ± 0.2 and the desorption constants K(desorption,1) = 1.4 × 10⁻² and K(desorption,2) = 9.8 × 10⁻² were determined. The effect of pH on R(f) values in ion exchange paper chromatography has been investigated. In order to investigate the sorption isotherm, two equilibrium models, the Freundlich and Langmuir isotherms, were analyzed. Cu(II) has been separated from synthetic binary and multi-component mixtures containing various metal ions associated with it in ores and alloy samples. The method effectively permits sequential separation of Cu(II) from a synthetic quaternary mixture containing its congeners Bi(III), Sn(II), Hg(II) and Cu(II), Cd(II), Pb(II) of the same analytical group. The method was found effective for the selective detection, removal and recovery of Cu(II) from industrial waste and standard alloy samples following its preconcentration on the column. A plausible mechanism for the extraction of Cu(II) has been suggested. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Advanced Reactors-Intermediate Heat Exchanger (IHX) Coupling: Theoretical Modeling and Experimental Validation

    Energy Technology Data Exchange (ETDEWEB)

    Utgikar, Vivek [Univ. of Idaho, Moscow, ID (United States); Sun, Xiaodong [The Ohio State Univ., Columbus, OH (United States); Christensen, Richard [The Ohio State Univ., Columbus, OH (United States); Sabharwall, Piyush [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-12-29

    The overall goal of the research project was to model the behavior of the advanced reactor-intermediate heat exchanger system and to develop advanced control techniques for off-normal conditions. The specific objectives defined for the project were: 1. To develop the steady-state thermal hydraulic design of the intermediate heat exchanger (IHX); 2. To develop mathematical models to describe the advanced nuclear reactor-IHX-chemical process/power generation coupling during normal and off-normal operations, and to simulate models using multiphysics software; 3. To develop control strategies using genetic algorithm or neural network techniques and couple these techniques with the multiphysics software; 4. To validate the models experimentally. The project objectives were accomplished by defining and executing four different tasks corresponding to these specific objectives. The first task involved selection of IHX candidates and developing steady state designs for those. The second task involved modeling of the transient and off-normal operation of the reactor-IHX system. The subsequent task dealt with the development of control strategies and involved algorithm development and simulation. The last task involved experimental validation of the thermal hydraulic performances of the two prototype heat exchangers designed and fabricated for the project at steady state and transient conditions to simulate the coupling of the reactor-IHX-process plant system. The experimental work utilized the two test facilities at The Ohio State University (OSU) including one existing High-Temperature Helium Test Facility (HTHF) and the newly developed high-temperature molten salt facility.

  19. MULTIFRACTAL STRUCTURE OF CENTRAL AND EASTERN EUROPEAN FOREIGN EXCHANGE MARKETS

    Directory of Open Access Journals (Sweden)

    Căpuşan Răzvan

    2012-07-01

    Full Text Available It is well known that empirical data coming from financial markets, like stock market indices, commodities, interest rates, traded volumes and foreign exchange rates have a multifractal structure. Multifractals were introduced in the field of economics to surpass the shortcomings of classical models like the fractional Brownian motion or GARCH processes. In this paper we investigate the multifractal behavior of Central and Eastern European foreign exchange rates, namely the Czech koruna, Croatian kuna, Hungarian forint, Polish zloty, Romanian leu and Russian rouble with respect to the euro from January 13, 2000 to February 29, 2012. The dynamics of exchange rates is of interest for investors and traders, monetary and fiscal authorities, economic agents or policy makers. The exchange rate movements affect the international balance of payments, trade flows, and allocation of the resources in national and international economy. The empirical results from the multifractal detrended fluctuation analysis algorithm show that the six exchange rate series analysed display significant multifractality. Moreover, generating shuffled and surrogate time series, we analyze the sources of multifractality, long-range correlations and heavy-tailed distributions, and we find that this multifractal behavior can be mainly attributed to the latter. Finally, we propose a foreign exchange market inefficiency ranking by considering the multifractality degree as a measure of inefficiency. The regulators, through policy instruments, aim to improve the informational inefficiency of the markets, to reduce the associated risks and to ensure economic stabilization. Evaluation of the degree of information efficiency of foreign exchange markets, for Central and Eastern Europe countries, is important to assess to what extent these countries are prepared for the transition towards fully monetary integration. The weak form efficiency implies that the past exchange rates cannot help to

  20. Quantum theory as plausible reasoning applied to data obtained by robust experiments.

    Science.gov (United States)

    De Raedt, H; Katsnelson, M I; Michielsen, K

    2016-05-28

    We review recent work that employs the framework of logical inference to establish a bridge between data gathered through experiments and their objective description in terms of human-made concepts. It is shown that logical inference applied to experiments for which the observed events are independent, and for which the frequency distribution of these events is robust with respect to small changes of the conditions under which the experiments are carried out, yields, without introducing any concept of quantum theory, the quantum theoretical description, in terms of the Schrödinger or the Pauli equation, of the Stern-Gerlach or Einstein-Podolsky-Rosen-Bohm experiments. The extraordinary descriptive power of quantum theory then follows from the fact that it is plausible reasoning, that is, common sense, applied to reproducible and robust experimental data. © 2016 The Author(s).

  1. Air Circulation and Heat Exchange under Reduced Pressures

    Science.gov (United States)

    Rygalov, Vadim; Wheeler, Raymond; Dixon, Mike; Hillhouse, Len; Fowler, Philip

    Low pressure atmospheres have been suggested for Space Greenhouse (SG) designs to minimize system construction and re-supply materials, as well as system manufacturing and deployment costs. But rarified atmospheres modify heat exchange mechanisms, which ultimately alters thermal control in low pressure closed environments. Under low atmospheric pressures (e.g., below 25 kPa, compared to 101.3 kPa for the normal Earth atmosphere), convection is progressively replaced by diffusion and the rate of heat exchange drops significantly. From 2001 to 2009, a series of hypobaric experiments was conducted at the Space Life Sciences Lab (SLSLab) at NASA's Kennedy Space Center and the Department of Space Studies, University of North Dakota. Findings from these experiments showed: the air circulation rate decreases non-linearly as total atmospheric pressure is lowered; heat exchange slows down with decreasing pressure, creating a risk of thermal stress (elevated leaf temperatures) for plants in closed environments; low pressure-induced thermal stress can be reduced, within certain limits, by either lowering the system temperature set point or increasing forced convection rates (circulation fan power). Air circulation is an important constituent of controlled environments and plays a crucial role in material and heat exchange. Theoretical schematics and mathematical models are developed from a series of observations. These models can be used to establish optimal control algorithms for low pressure environments, such as a space greenhouse, as well as assist in fundamental design concept development for these or similar habitable structures.

  2. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    Science.gov (United States)

    Li, Yuzhong

    Using a GA to solve the winner determination problem (WDP) with large numbers of bids and items, run under different distributions, is difficult: because the search space is large and the constraints are complex, infeasible solutions are easily produced, which affects the efficiency and quality of the algorithm. This paper presents an improved MKGA, including three operators: preprocessing, bid insertion and exchange recombination, and uses a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA is better than the SGA in population size and computation. Problems that the traditional branch-and-bound algorithm can hardly solve, the improved MKGA can solve with better results.

  3. High Temperature Gas-to-Gas Heat Exchanger Based on a Solid Intermediate Medium

    Directory of Open Access Journals (Sweden)

    R. Amirante

    2014-04-01

    Full Text Available This paper proposes the design of an innovative high temperature gas-to-gas heat exchanger based on solid particles as intermediate medium, with application in medium and large scale externally fired combined power plants fed by alternative and dirty fuels, such as biomass and coal. An optimization procedure, performed by means of a genetic algorithm combined with computational fluid dynamics (CFD) analysis, is employed for the design of the heat exchanger: the goal is the minimization of its size for an assigned heat exchanger efficiency. Two cases, corresponding to efficiencies equal to 80% and 90%, are considered. The scientific and technical difficulties in realizing the heat exchanger are also addressed; in particular, this work focuses on the development both of a pressurization device, which is needed to move the solid particles within the heat exchanger, and of a pneumatic conveyor, which is required to deliver the particles back from the bottom to the top of the plant in order to realize a continuous operation mode. An analytical approach and a thorough experimental campaign are proposed to analyze the proposed systems and to evaluate the associated energy losses.

  4. Structure before meaning: sentence processing, plausibility, and subcategorization.

    Science.gov (United States)

    Kizach, Johannes; Nyvad, Anne Mette; Christensen, Ken Ramshøj

    2013-01-01

    Natural language processing is a fast and automatized process. A crucial part of this process is parsing, the online incremental construction of a syntactic structure. The aim of this study was to test whether a wh-filler extracted from an embedded clause is initially attached as the object of the matrix verb with subsequent reanalysis, and if so, whether the plausibility of such an attachment has an effect on reaction time. Finally, we wanted to examine whether subcategorization plays a role. We used a method called G-Maze to measure response time in a self-paced reading design. The experiments confirmed that there is early attachment of fillers to the matrix verb. When this attachment is implausible, the off-line acceptability of the whole sentence is significantly reduced. The on-line results showed that G-Maze was highly suited for this type of experiment. In accordance with our predictions, the results suggest that the parser ignores (or has no access to information about) implausibility and attaches fillers as soon as possible to the matrix verb. However, the results also show that the parser uses the subcategorization frame of the matrix verb. In short, the parser ignores semantic information and allows implausible attachments but adheres to information about which type of object a verb can take, ensuring that the parser does not make impossible attachments. We argue that the evidence supports a syntactic parser informed by syntactic cues, rather than one guided by semantic cues or one that is blind, or completely autonomous.

  5. Structure before meaning: sentence processing, plausibility, and subcategorization.

    Directory of Open Access Journals (Sweden)

    Johannes Kizach

    Full Text Available Natural language processing is a fast and automatized process. A crucial part of this process is parsing, the online incremental construction of a syntactic structure. The aim of this study was to test whether a wh-filler extracted from an embedded clause is initially attached as the object of the matrix verb with subsequent reanalysis, and if so, whether the plausibility of such an attachment has an effect on reaction time. Finally, we wanted to examine whether subcategorization plays a role. We used a method called G-Maze to measure response time in a self-paced reading design. The experiments confirmed that there is early attachment of fillers to the matrix verb. When this attachment is implausible, the off-line acceptability of the whole sentence is significantly reduced. The on-line results showed that G-Maze was highly suited for this type of experiment. In accordance with our predictions, the results suggest that the parser ignores (or has no access to information about) implausibility and attaches fillers as soon as possible to the matrix verb. However, the results also show that the parser uses the subcategorization frame of the matrix verb. In short, the parser ignores semantic information and allows implausible attachments but adheres to information about which type of object a verb can take, ensuring that the parser does not make impossible attachments. We argue that the evidence supports a syntactic parser informed by syntactic cues, rather than one guided by semantic cues or one that is blind, or completely autonomous.

  6. Computer-aided thermohydraulic design of TEMA type E shell and tube heat exchangers for use in low pressure, liquid-to-liquid, single phase applications

    Science.gov (United States)

    Kolar, N. J.

    1985-04-01

    Classification, nomenclature, utilization and cost estimating of shell and tube heat exchangers are presented along with an historical overview of various methods currently employed in their design. A procedure for providing preliminary estimates of shell and tube heat exchanger design is developed in detail. The author formulates a computer program which employs this sizing algorithm for low pressure liquid-to-liquid heat exchanger applications. Additionally, problems encountered in the design and manufacture of shell and tube heat exchangers are described along with present methods of solution for each.
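
    The sizing core of such a program is the log-mean temperature difference (LMTD) rate equation, Q = U * A * F * dTlm. A minimal sketch with illustrative values that are our assumptions, not figures from the thesis:

        import math

        def lmtd(dt1, dt2):
            """Log-mean of the terminal temperature differences dt1, dt2."""
            return (dt1 - dt2) / math.log(dt1 / dt2) if dt1 != dt2 else dt1

        def required_area(q, u, dt1, dt2, f=0.9):
            """Tube surface area for duty q [W], overall coefficient u
            [W/m2.K], and correction factor f for the shell arrangement."""
            return q / (u * f * lmtd(dt1, dt2))

        # 250 kW duty, U = 800 W/m2.K, terminal differences 40 K and 20 K:
        print(required_area(250e3, 800.0, 40.0, 20.0))  # about 12 m^2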

  7. EDDA: An Efficient Distributed Data Replication Algorithm in VANETs

    Directory of Open Access Journals (Sweden)

    Junyu Zhu

    2018-02-01

    Full Text Available Efficient data dissemination in vehicular ad hoc networks (VANETs) is a challenging issue due to the dynamic nature of the network. To improve the performance of data dissemination, we study distributed data replication algorithms in VANETs for exchanging information and computing in an arbitrarily-connected network of vehicle nodes. To achieve low dissemination delay and improve the network performance, we control the number of message copies that can be disseminated in the network and then propose an efficient distributed data replication algorithm (EDDA). The key idea is to let the data carrier distribute the data dissemination tasks to multiple nodes to speed up the dissemination process. We calculate the number of communication stages for the network to enter into a balanced status and show that the proposed distributed algorithm can converge to a consensus in a small number of communication stages. Most of the theoretical results described in this paper are to study the complexity of network convergence. The lower bound and upper bound are also provided in the analysis of the algorithm. Simulation results show that the proposed EDDA can efficiently disseminate messages to vehicles in a specific area with low dissemination delay and system overhead.

  8. Event-based plausibility immediately influences on-line language comprehension.

    Science.gov (United States)

    Matsuki, Kazunaga; Chow, Tracy; Hare, Mary; Elman, Jeffrey L; Scheepers, Christoph; McRae, Ken

    2011-07-01

    In some theories of sentence comprehension, linguistically relevant lexical knowledge, such as selectional restrictions, is privileged in terms of the time-course of its access and influence. We examined whether event knowledge computed by combining multiple concepts can rapidly influence language understanding even in the absence of selectional restriction violations. Specifically, we investigated whether instruments can combine with actions to influence comprehension of ensuing patients (as in Rayner, Warren, Juhasz, & Liversedge, 2004; Warren & McConnell, 2007). Instrument-verb-patient triplets were created in a norming study designed to tap directly into event knowledge. In self-paced reading (Experiment 1), participants were faster to read patient nouns, such as hair, when they were typical of the instrument-action pair (Donna used the shampoo to wash vs. the hose to wash). Experiment 2 showed that these results were not due to direct instrument-patient relations. Experiment 3 replicated Experiment 1 using eyetracking, with effects of event typicality observed in first fixation and gaze durations on the patient noun. This research demonstrates that conceptual event-based expectations are computed and used rapidly and dynamically during on-line language comprehension. We discuss relationships between plausibility and predictability, as well as their implications. We conclude that selectional restrictions may be best considered as event-based conceptual knowledge rather than lexical-grammatical knowledge.

  9. Difficult Sudoku Puzzles Created by Replica Exchange Monte Carlo Method

    OpenAIRE

    Watanabe, Hiroshi

    2013-01-01

    An algorithm to create difficult Sudoku puzzles is proposed. An Ising spin-glass like Hamiltonian describing difficulty of puzzles is defined, and difficult puzzles are created by minimizing the energy of the Hamiltonian. We adopt the replica exchange Monte Carlo method with simultaneous temperature adjustments to search lower energy states efficiently, and we succeed in creating a puzzle which is the world hardest ever created in our definition, to our best knowledge. (Added on Mar. 11, the ...
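
    The swap test at the heart of replica exchange is compact enough to quote: configurations at neighbouring temperatures trade places under the standard detailed-balance rule. This is a generic sketch with our own names, not the paper's spin-glass Hamiltonian:

        import math, random

        def try_swap(energy_i, energy_j, temp_i, temp_j):
            """Metropolis acceptance for exchanging replicas i and j."""
            delta = (1.0 / temp_i - 1.0 / temp_j) * (energy_i - energy_j)
            return delta >= 0.0 or random.random() < math.exp(delta)

        # Cold replica already holds the low-energy state: swap rarely accepted.
        print(try_swap(energy_i=-10.0, energy_j=-3.0, temp_i=0.5, temp_j=2.0))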

  10. Semantics-based plausible reasoning to extend the knowledge coverage of medical knowledge bases for improved clinical decision support

    OpenAIRE

    Mohammadhassanzadeh, Hossein; Van Woensel, William; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2017-01-01

    Background Capturing complete medical knowledge is challenging, often due to incomplete patient Electronic Health Records (EHR), but also because of valuable, tacit medical knowledge hidden away in physicians' experiences. To extend the coverage of incomplete medical knowledge-based systems beyond their deductive closure, and thus enhance their decision-support capabilities, we argue that innovative, multi-strategy reasoning approaches should be applied. In particular, plausible reasoning mech...

  11. Numerical simulation of two phase flows in heat exchangers

    International Nuclear Information System (INIS)

    Grandotto Biettoli, M.

    2006-04-01

    The author gives an overview of his research activity since 1981. He first gives a detailed presentation of properties and equations of two-phase flows in heat exchangers, and of their mathematical and numerical investigation: semi-local equations (mass conservation, momentum conservation and energy conservation), homogenized conservation equations (mass, momentum and enthalpy conservation, boundary conditions), equation closures, discretization, resolution algorithm, computational aspects and applications. Then, he reports the works performed in the field of turbulent flows, hyperbolic methods, low Mach methods, the Neptune project, and parallel computing

  12. An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Donev, A; Garcia, A L; Alder, B J

    2007-07-30

    A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles with hard core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms, rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level, however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. Results do not confirm the existence of periodic (cycling) motion of the polymer chain.
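
    The elementary event computation in such an event-driven scheme is the hard-sphere collision time: the earliest positive root of |r + v t| = d for relative position r, relative velocity v and sphere diameter d. A minimal sketch under that standard kinematics (function and variable names are ours):

        import numpy as np

        def collision_time(r, v, d):
            """Earliest t > 0 with |r + v*t| = d, or None if the pair misses."""
            b = np.dot(r, v)
            if b >= 0.0:                  # moving apart: no collision event
                return None
            v2 = np.dot(v, v)
            disc = b * b - v2 * (np.dot(r, r) - d * d)
            if disc < 0.0:                # closest approach exceeds d: miss
                return None
            return (-b - np.sqrt(disc)) / v2

        # Head-on approach from distance 3 at unit speed, diameter 1 -> t = 2.
        print(collision_time(np.array([3.0, 0.0, 0.0]),
                             np.array([-1.0, 0.0, 0.0]), 1.0))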

  13. On Matrix Sampling and Imputation of Context Questionnaires with Implications for the Generation of Plausible Values in Large-Scale Assessments

    Science.gov (United States)

    Kaplan, David; Su, Dan

    2016-01-01

    This article presents findings on the consequences of matrix sampling of context questionnaires for the generation of plausible values in large-scale assessments. Three studies are conducted. Study 1 uses data from PISA 2012 to examine several different forms of missing data imputation within the chained equations framework: predictive mean…

  14. The Sarrazin effect: the presence of absurd statements in conspiracy theories makes canonical information less plausible.

    Science.gov (United States)

    Raab, Marius Hans; Auer, Nikolas; Ortlieb, Stefan A; Carbon, Claus-Christian

    2013-01-01

    Reptile prime ministers and flying Nazi saucers: extreme and sometimes off-the-wall conclusions are typical ingredients of conspiracy theories. While individual differences are a common research topic concerning conspiracy theories, the role of extreme statements in the process of acquiring and passing on conspiratorial stories has not been examined in an experimental design so far. We identified six morphological components of conspiracy theories empirically. On the basis of these content categories a set of narrative elements for a 9/11 story was compiled. These elements varied systematically in terms of conspiratorial allegation, i.e., they contained official statements concerning the events of 9/11, statements alleging a conspiracy limited in time and space, as well as extreme statements indicating an all-encompassing cover-up. Using the method of narrative construction, 30 people were given a set of cards with these statements and asked to construct the course of events of 9/11 they deemed most plausible. When extreme statements were present in the set, the resulting stories were more conspiratorial; the number of official statements included in the narrative dropped significantly, whereas the self-assessment of the story's plausibility did not differ between conditions. This indicates that blatant statements in a pool of information foster the synthesis of conspiracy theories on an individual level. By relating these findings to one of Germany's most successful (and controversial) non-fiction books, we refer to the real-world dangers of this effect.

  15. Segmented heat exchanger

    Science.gov (United States)

    Baldwin, Darryl Dean; Willi, Martin Leo; Fiveland, Scott Byron; Timmons, Kristine Ann

    2010-12-14

    A segmented heat exchanger system for transferring heat energy from an exhaust fluid to a working fluid. The heat exchanger system may include a first heat exchanger for receiving incoming working fluid and the exhaust fluid. The working fluid and exhaust fluid may travel through at least a portion of the first heat exchanger in a parallel flow configuration. In addition, the heat exchanger system may include a second heat exchanger for receiving working fluid from the first heat exchanger and exhaust fluid from a third heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the second heat exchanger in a counter flow configuration. Furthermore, the heat exchanger system may include a third heat exchanger for receiving working fluid from the second heat exchanger and exhaust fluid from the first heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the third heat exchanger in a parallel flow configuration.

  16. Exchange rate rebounds after foreign exchange market interventions

    Science.gov (United States)

    Hoshikawa, Takeshi

    2017-03-01

    This study examined the rebounds in the exchange rate after foreign exchange intervention. When an intervention is strongly effective, the exchange rate rebounds the next day, and this rebound slightly reduces the intervention's effect. The exchange rate might have been 67.12-77.47 yen to the US dollar without the yen-selling/dollar-purchasing interventions totaling 74,691,100 million yen implemented by the Japanese government since 1991, compared with the actual rate of 103.19 yen to the US dollar at the end of March 2014.

  17. Experimental and numerical analysis of the optimized finned-tube heat exchanger for OM314 diesel exhaust exergy recovery

    International Nuclear Information System (INIS)

    Hatami, M.; Ganji, D.D.; Gorji-Bandpy, M.

    2015-01-01

    Highlights: • An optimized finned-tube heat exchanger is modeled. • Artificial Neural Networks and a Genetic Algorithm are applied. • Exergy recovery from the exhaust of a diesel engine is studied. - Abstract: In this research, a multi-objective optimization based on an Artificial Neural Network (ANN) and a Genetic Algorithm (GA) is applied to numerical results for a finned-tube heat exchanger (HEX) in diesel exhaust heat recovery. Thirty heat exchangers with different fin lengths, thicknesses and fin numbers are modeled, and the results at three engine loads are optimized with weight functions for pressure drop, recovered heat and HEX weight. Finally, two HEXs (an optimized and a non-optimized one) are produced and mounted on the exhaust of an OM314 diesel engine to compare their heat and exergy recovery. All experiments are done for five engine loads (0%, 20%, 40%, 60% and 80% of full load) and four water mass flow rates (50, 40, 30 and 20 g/s). Results show that maximum exergy recovery occurs at high engine loads, and the optimized HEX with 10 fins achieves on average 8% second-law efficiency in exergy recovery.

  18. MORPH-PRO: a novel algorithm and web server for protein morphing.

    Science.gov (United States)

    Castellana, Natalie E; Lushnikov, Andrey; Rotkiewicz, Piotr; Sefcovic, Natasha; Pevzner, Pavel A; Godzik, Adam; Vyatkina, Kira

    2013-07-11

    Proteins are known to be dynamic in nature, changing from one conformation to another while performing vital cellular tasks. It is important to understand these movements in order to better understand protein function. At the same time, experimental techniques provide us with only single snapshots of the whole ensemble of available conformations. Computational protein morphing provides a visualization of a protein structure transitioning from one conformation to another by producing a series of intermediate conformations. We present a novel, efficient morphing algorithm, Morph-Pro, based on linear interpolation. We also show that, apart from visualization, morphing can be used to provide plausible intermediate structures. We test this by using the intermediate structures of a c-Jun N-terminal kinase (JNK1) conformational change in a virtual docking experiment. The structures are shown to dock with higher scores to known JNK1-binding ligands than structures solved using X-ray crystallography. This experiment demonstrates the potential applications of the intermediate structures in modeling or virtual screening efforts. Visualization of protein conformational changes is important for characterization of protein function. Furthermore, the intermediate structures produced by our algorithm are good approximations to true structures. We believe there is great potential for these computationally predicted structures in protein-ligand docking experiments and virtual screening. The Morph-Pro web server can be accessed at http://morph-pro.bioinf.spbau.ru.
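
    Since the abstract identifies linear interpolation as the core of Morph-Pro, a toy sketch of generating intermediate conformations follows; it assumes pre-aligned structures with identical atom ordering, and a real morphing tool would additionally repair distorted geometry at each step.

        import numpy as np

        def linear_morph(start, end, n_intermediates):
            """Intermediate conformations between two aligned structures (N x 3
            coordinate arrays) by linear interpolation of atom positions."""
            ts = np.linspace(0.0, 1.0, n_intermediates + 2)[1:-1]
            return [(1.0 - t) * start + t * end for t in ts]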

  19. A Localization Algorithm Based on AOA for Ad-Hoc Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yang Sun Lee

    2012-01-01

    Knowledge of the positions of sensor nodes in Wireless Sensor Networks (WSNs) makes possible many applications such as asset monitoring, object tracking and routing. In WSNs, errors may occur in the measurement of distances and angles between pairs of nodes, and because these errors propagate to other nodes, the estimation of sensor node positions can be difficult and highly inaccurate. In this paper, we propose a localization algorithm based on both the distance and the angle to a landmark. We introduce a method for measuring the incident angle to a landmark, together with an algorithm that exchanges physical data such as distances and incident angles and updates the position of a node by utilizing multiple landmarks and multiple paths to landmarks.
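
    As a hedged illustration of the idea (not the paper's exact update rule), a node holding several (landmark position, distance, incident angle) measurements can average the position fixes they imply; here the angle is read as the bearing from the landmark toward the node.

        import math

        def estimate_position(measurements):
            """Average the fixes implied by (lx, ly, distance, bearing) tuples,
            where bearing is the angle from landmark to node in radians."""
            xs = [lx + d * math.cos(a) for lx, ly, d, a in measurements]
            ys = [ly + d * math.sin(a) for lx, ly, d, a in measurements]
            return sum(xs) / len(xs), sum(ys) / len(ys)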

  20. Tensor exchange amplitudes in K±N charge exchange reactions

    International Nuclear Information System (INIS)

    Svec, M.

    1979-01-01

    Tensor (A2) exchange amplitudes in K±N charge exchange (CEX) are constructed from the K±N CEX data supplemented by information on the vector (ρ) exchange amplitudes from πN scattering. We observe new features in the t-structure of the A2 exchange amplitudes which contradict the t-dependence anticipated by most Regge models. The results also provide evidence for violation of weak exchange degeneracy.

  1. Efficient exact-exchange time-dependent density-functional theory methods and their relation to time-dependent Hartree-Fock.

    Science.gov (United States)

    Hesselmann, Andreas; Görling, Andreas

    2011-01-21

    A recently introduced time-dependent exact-exchange (TDEXX) method, i.e., a response method based on time-dependent density-functional theory that treats the frequency-dependent exchange kernel exactly, is reformulated. In the reformulated version of the TDEXX method electronic excitation energies can be calculated by solving a linear generalized eigenvalue problem while in the original version of the TDEXX method a laborious frequency iteration is required in the calculation of each excitation energy. The lowest eigenvalues of the new TDEXX eigenvalue equation corresponding to the lowest excitation energies can be efficiently obtained by, e.g., a version of the Davidson algorithm appropriate for generalized eigenvalue problems. Alternatively, with the help of a series expansion of the new TDEXX eigenvalue equation, standard eigensolvers for large regular eigenvalue problems, e.g., the standard Davidson algorithm, can be used to efficiently calculate the lowest excitation energies. With the help of the series expansion as well, the relation between the TDEXX method and time-dependent Hartree-Fock is analyzed. Several ways to take into account correlation in addition to the exact treatment of exchange in the TDEXX method are discussed, e.g., a scaling of the Kohn-Sham eigenvalues, the inclusion of (semi)local approximate correlation potentials, or hybrids of the exact-exchange kernel with kernels within the adiabatic local density approximation. The lowest lying excitations of the molecules ethylene, acetaldehyde, and pyridine are considered as examples.

  2. Probabilistic estimation of residential air exchange rates for ...

    Science.gov (United States)

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory infiltration model, utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While the modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure. Published in the Journal of
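
    The deterministic core of the LBL infiltration model combines stack and wind pressure terms under an effective leakage area; a sketch follows, with the stack and wind coefficients (which in practice depend on house height and wind shielding) and the leakage area treated as inputs that the probabilistic algorithm would sample.

        import math

        def lbl_air_exchange_rate(leak_area_m2, volume_m3, t_in_c, t_out_c,
                                  wind_m_s, stack_coef, wind_coef):
            """Air exchange rate (1/h) from the LBL infiltration model:
            flow = leakage area * sqrt(stack term + wind term)."""
            dt = abs(t_in_c - t_out_c)
            flow_m3_s = leak_area_m2 * math.sqrt(stack_coef * dt
                                                 + wind_coef * wind_m_s ** 2)
            return 3600.0 * flow_m3_s / volume_m3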

  3. Design and Analysis of Optimization Algorithms to Minimize Cryptographic Processing in BGP Security Protocols.

    Science.gov (United States)

    Sriram, Vinay K; Montgomery, Doug

    2017-07-01

    The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
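
    The caching idea behind the CCS optimization can be sketched as memoizing per-segment verification results so that path segments shared across many updates are verified only once; the verification routine below is a stand-in, not the BGPSEC wire format or a real signature check.

        import hashlib
        from functools import lru_cache

        def expensive_verify(segment: bytes, signature: bytes) -> bool:
            # Stand-in for an ECDSA signature verification; a hash comparison
            # merely simulates the cost and the boolean outcome.
            return hashlib.sha256(segment).hexdigest().encode() == signature

        @lru_cache(maxsize=100_000)
        def verify_segment_cached(segment: bytes, signature: bytes) -> bool:
            """Memoized wrapper: a repeated (segment, signature) pair skips
            the costly verification entirely."""
            return expensive_verify(segment, signature)

        def validate_update(signed_segments):
            """An update validates only if every signed segment verifies."""
            return all(verify_segment_cached(s, sig) for s, sig in signed_segments)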

  4. A blind matching algorithm for cognitive radio networks

    KAUST Repository

    Hamza, Doha R.

    2016-08-15

    We consider a cognitive radio network where secondary users (SUs) are allowed access time to the spectrum belonging to the primary users (PUs) provided that they relay primary messages. PUs and SUs negotiate over allocations of the secondary power that will be used to relay PU data. We formulate the problem as a generalized assignment market to find an epsilon pairwise stable matching. We propose a distributed blind matching algorithm (BLMA) to produce the pairwise-stable matching plus the associated power allocations. We stipulate a limited information exchange in the network so that agents only calculate their own utilities but no information is available about the utilities of any other users in the network. We establish convergence to epsilon pairwise stable matchings in finite time. Finally we show that our algorithm exhibits a limited degradation in PU utility when compared with the Pareto optimal results attained using perfect information assumptions. © 2016 IEEE.

  5. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
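
    For a linear system the truncation is easy to exhibit: with dy/dt = A y the exact propagator is exp(hA), and an Nth-order algebraic-dynamics-style step keeps its Taylor polynomial. A sketch (the nonlinear case replaces powers of A with recursively generated higher time derivatives):

        import numpy as np

        def taylor_step(A, y, h, order):
            """One Nth-order Taylor step for dy/dt = A y: accumulate the terms
            (hA)^k y / k! for k = 0..N, built incrementally to avoid factorials."""
            term = y.copy()
            result = y.copy()
            for k in range(1, order + 1):
                term = (h / k) * (A @ term)
                result = result + term
            return result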

  6. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  7. BASIMO - Borehole Heat Exchanger Array Simulation and Optimization Tool

    Science.gov (United States)

    Schulte, Daniel O.; Bastian, Welsch; Wolfram, Rühaak; Kristian, Bär; Ingo, Sass

    2017-04-01

    Arrays of borehole heat exchangers are an increasingly popular source for renewable energy. Furthermore, they can serve as borehole thermal energy storage (BTES) systems for seasonally fluctuating heat sources like solar thermal energy or district heating grids. The high temperature level of these heat sources prohibits the use of the shallow subsurface for environmental reasons. Therefore, deeper reservoirs have to be accessed instead. The increased depth of the systems results in high investment costs and has hindered the implementation of this technology until now. Therefore, research of medium deep BTES systems relies on numerical simulation models. Current simulation tools cannot - or only to some extent - describe key features like partly insulated boreholes unless they run fully discretized models of the borehole heat exchangers. However, fully discretized models often come at a high computational cost, especially for large arrays of borehole heat exchangers. We give an update on the development of BASIMO: a tool, which uses one dimensional thermal resistance and capacity models for the borehole heat exchangers coupled with a numerical finite element model for the subsurface heat transport in a dual-continuum approach. An unstructured tetrahedral mesh bypasses the limitations of structured grids for borehole path geometries, while the thermal resistance and capacity model is improved to account for borehole heat exchanger properties changing with depth. Thereby, partly insulated boreholes can be considered in the model. Furthermore, BASIMO can be used to improve the design of BTES systems: the tool allows for automated parameter variations and is readily coupled to other code like mathematical optimization algorithms. Optimization can be used to determine the required minimum system size or to increase the system performance.

  8. Thermal-economic optimization of an air-cooled heat exchanger unit

    International Nuclear Information System (INIS)

    Alinia Kashani, Amir Hesam; Maddahi, Alireza; Hajabdollahi, Hassan

    2013-01-01

    Thermodynamic modeling and optimal design of an air-cooled heat exchanger (ACHE) unit are developed in this study. For this purpose, the ε–NTU method and mathematical relations are applied to estimate the fluid outlet temperatures and the pressure drops on the tube and air sides. The main goal of this study is the simultaneous minimization of two conflicting objective functions, namely the temperature approach and the total annual cost. For this purpose, the fast and elitist non-dominated sorting genetic algorithm (NSGA-II) is applied to minimize the objective functions by considering ten design parameters. In addition, a set of typical constraints governing ACHE unit design is imposed to obtain more practical optimum design points. Furthermore, a sensitivity analysis of the change in the objective functions as the optimum design parameters vary is conducted, and the influence of each parameter on the conflicting objectives is investigated. Finally, a procedure for selecting the best optimum point is introduced and the final optimum design point is determined. -- Highlights: ► Multi-objective optimization of an air-cooled heat exchanger. ► Considering ten new design parameters in this type of heat exchanger. ► A detailed cost function is used to estimate the heat exchanger investment cost. ► Presenting a mathematical relation for optimum total cost vs. temperature approach. ► The sensitivity analysis of parameters in the optimum situation

  9. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.

  10. Exchange functional by a range-separated exchange hole

    International Nuclear Information System (INIS)

    Toyoda, Masayuki; Ozaki, Taisuke

    2011-01-01

    An approximation to the exchange-hole density is proposed for the evaluation of the exact exchange energy in electronic structure calculations within density-functional theory and the Kohn-Sham scheme. Based on the localized nature of the density matrix, the exchange hole is divided into short-range (SR) and long-range (LR) parts by using an adequate filter function, where the LR part is deduced by matching of moments with the exactly calculated SR counterpart, ensuring the correct asymptotic -1/r behavior of the exchange potential. With this division, the time-consuming integration is truncated at a certain interaction range, largely reducing the computational cost. The total energies, exchange energies, exchange potentials, and eigenvalues of the highest occupied orbitals are calculated for the noble-gas atoms. The close agreement of the results with the exact values suggests the validity of the approximation.
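
    A common way to realize such a division uses the error function as the filter; a hedged illustration (the paper's actual filter function may differ):

        \frac{1}{r} \;=\; \underbrace{\frac{\operatorname{erfc}(\omega r)}{r}}_{\text{short range}}
        \;+\; \underbrace{\frac{\operatorname{erf}(\omega r)}{r}}_{\text{long range}},

    where the SR part is integrated exactly up to the truncation range and the LR tail is reconstructed by moment matching so that the potential keeps its -1/r asymptote.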

  11. Building test data from real outbreaks for evaluating detection algorithms.

    Science.gov (United States)

    Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
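
    Of the resampling schemes listed, inverse transform sampling (ITSM) is the simplest to exhibit; the sketch below redraws a chosen number of cases onto days in proportion to the historical outbreak's (rescaled) daily shares, with illustrative names throughout.

        import random
        from bisect import bisect_left

        def itsm_resample(day_weights, n_cases, seed=0):
            """Draw each of n_cases onto a day with probability proportional
            to the historical daily share, via the inverse CDF."""
            rng = random.Random(seed)
            total = float(sum(day_weights))
            cdf, acc = [], 0.0
            for w in day_weights:
                acc += w / total
                cdf.append(acc)
            counts = [0] * len(day_weights)
            for _ in range(n_cases):
                day = min(bisect_left(cdf, rng.random()), len(cdf) - 1)
                counts[day] += 1
            return counts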

  12. Building test data from real outbreaks for evaluating detection algorithms.

    Directory of Open Access Journals (Sweden)

    Gaetan Texier

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.

  13. Developing cross entropy genetic algorithm for solving Two-Dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP)

    Science.gov (United States)

    Paramestha, D. L.; Santosa, B.

    2018-04-01

    The Two-Dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP) is a combination of the Heterogeneous Fleet VRP and a packing problem well known as the Two-Dimensional Bin Packing Problem (BPP). 2L-HFVRP is a Heterogeneous Fleet VRP in which customer demands are formed by sets of two-dimensional rectangular weighted items. These demands must be served from the depot by a heterogeneous fleet of vehicles with fixed and variable costs. The objective of 2L-HFVRP is to minimize the total transportation cost. All formed routes must be consistent with the capacity and loading process of the vehicle. Sequential and unrestricted scenarios are considered in this paper. We propose a metaheuristic combining the Genetic Algorithm (GA) and Cross Entropy (CE), named the Cross Entropy Genetic Algorithm (CEGA), to solve the 2L-HFVRP. The mutation concept from GA is used to speed up the CE algorithm in finding the optimal solution. The mutation mechanism is based on local improvement (2-opt, 1-1 Exchange, and 1-0 Exchange). The probability transition matrix mechanism of CE is used to avoid getting stuck in local optima. The effectiveness of CEGA was tested on benchmark 2L-HFVRP instances. The experimental results are competitive with those of other algorithms.
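
    The three local-improvement moves named above are standard; minimal sketches follow (routes are lists of customer indices; feasibility and loading checks are omitted).

        def two_opt(route, i, j):
            """2-opt: reverse route[i..j], replacing two edges of the tour."""
            return route[:i] + route[i:j + 1][::-1] + route[j + 1:]

        def one_zero_exchange(routes, src, pos, dst, ins):
            """1-0 exchange: relocate one customer from route src to route dst."""
            routes[dst].insert(ins, routes[src].pop(pos))
            return routes

        def one_one_exchange(routes, r1, p1, r2, p2):
            """1-1 exchange: swap one customer between two routes."""
            routes[r1][p1], routes[r2][p2] = routes[r2][p2], routes[r1][p1]
            return routes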

  14. Using MODFLOW with CFP to understand conduit-matrix exchange in a karst aquifer during flooding

    Science.gov (United States)

    Spellman, P.; Screaton, E.; Martin, J. B.; Gulley, J.; Brown, A.

    2011-12-01

    Karst springs may reverse flow when allogenic runoff increases river stage faster than groundwater heads, and they may exchange surface water with groundwater in the surrounding aquifer matrix. Recharged flood water is rich in nutrients, metals, and organic matter and is undersaturated with respect to calcite. Understanding the physical processes controlling this exchange of water is critical to understanding metal cycling, redox chemistry and dissolution in the subsurface. Ultimately, the magnitude of conduit-matrix exchange should be governed by head gradients between the conduit and the aquifer, which are affected by the hydraulic conductivity of the matrix, conduit properties and antecedent groundwater heads. These parameters are interrelated, and it is unknown which ones exert the greatest control over the magnitude of exchange. This study uses MODFLOW-2005 coupled with the Conduit Flow Process (CFP) package to determine how physical properties of conduits and aquifers influence the magnitude of surface water-groundwater exchange. We use hydraulic data collected during spring reversals in a mapped underwater cave that sources Madison Blue Spring in north-central Florida to explore which factors are most important in governing exchange. The simulation focused on a major flood in 2009, when river stage increased by about 10 meters over 9 days. In a series of simulations, we varied hydraulic conductivity, conduit diameter, roughness height and tortuosity in addition to antecedent groundwater heads to estimate the relative effects of each parameter on the magnitude of conduit-matrix exchange. Each parameter was varied across plausible ranges for karst aquifers. Antecedent groundwater heads were varied using well data recorded through wet and dry seasons throughout the spring shed. We found hydraulic conductivity was the most important factor governing exchange. The volume of exchange increased by about 61% from the lowest value (1.8x10-6 m/d) to the highest value (6 m/d).

  15. Incorporating a modified uniform crossover and 2-exchange neighborhood mechanism in a discrete bat algorithm to solve the quadratic assignment problem

    Directory of Open Access Journals (Sweden)

    Mohammed Essaid Riffi

    2017-11-01

    The bat algorithm is one of the recent nature-inspired algorithms, which has emerged as a powerful search method for solving continuous as well as discrete problems. The quadratic assignment problem is a well-known NP-hard problem in combinatorial optimization. The goal of this problem is to assign n facilities to n locations in such a way as to minimize the assignment cost. For that purpose, this paper introduces a novel discrete variant of the bat algorithm to deal with this combinatorial optimization problem. The proposed algorithm was evaluated on a set of benchmark instances from the QAPLIB library and its performance was compared to that of other algorithms. The empirical results of exhaustive experiments were promising and illustrated the efficacy of the suggested approach.
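
    The objective the discrete variant minimizes can be stated compactly; a sketch of the QAP cost for a candidate assignment (perm[i] is the location of facility i, and flow/dist are the problem's matrices):

        def qap_cost(perm, flow, dist):
            """Assignment cost: sum over facility pairs of flow * distance
            between their assigned locations."""
            n = len(perm)
            return sum(flow[i][j] * dist[perm[i]][perm[j]]
                       for i in range(n) for j in range(n))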

  16. Road network selection for small-scale maps using an improved centrality-based algorithm

    Directory of Open Access Journals (Sweden)

    Roy Weiss

    2014-12-01

    The road network is one of the key feature classes in topographic maps and databases. In the task of deriving road networks for products at smaller scales, road network selection forms a prerequisite for all other generalization operators, and is thus a fundamental operation in the overall process of topographic map and database production. The objective of this work was to develop an algorithm for automated road network selection from a large-scale (1:10,000) to a small-scale database (1:200,000). The project was pursued in collaboration with swisstopo, the national mapping agency of Switzerland, with generic mapping requirements in mind. Preliminary experiments suggested that a selection algorithm based on betweenness centrality performed best for this purpose, yet also exposed problems. The main contribution of this paper thus consists of four extensions that address deficiencies of the basic centrality-based algorithm and lead to a significant improvement of the results. The first two extensions improve the formation of strokes concatenating the road segments, which is crucial since strokes provide the foundation upon which the network centrality measure is computed. Thus, the first extension ensures that roundabouts are detected and collapsed, thus avoiding interruptions of strokes by roundabouts, while the second introduces additional semantics into the process of stroke formation, allowing longer and more plausible strokes to be built. The third extension detects areas of high road density (i.e., urban areas) using density-based clustering and then locally increases the threshold of the centrality measure used to select road segments, such that more thinning takes place in those areas. Finally, since the basic algorithm tends to create dead-ends—which however are not tolerated in small-scale maps—the fourth extension reconnects these dead-ends to the main network, searching for the best path in the main heading of the dead-end.
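
    As a hedged sketch of the baseline the four extensions build on: rank strokes (here, plain edges) by betweenness centrality and keep those above a threshold, which the third extension would raise inside dense urban clusters. The graph and cutoff are illustrative.

        import networkx as nx

        # Toy stroke graph: nodes are junctions, weighted edges are road strokes.
        g = nx.Graph()
        g.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 1.5), (0, 2, 2.2), (2, 3, 1.0)])

        centrality = nx.edge_betweenness_centrality(g, weight="weight")
        selected = [edge for edge, c in centrality.items() if c >= 0.3]  # keep central roads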

  17. Modified cuckoo search: A new gradient free optimisation algorithm

    International Nuclear Information System (INIS)

    Walton, S.; Hassan, O.; Morgan, K.; Brown, M.R.

    2011-01-01

    Highlights: → Modified cuckoo search (MCS) is a new gradient-free optimisation algorithm. → MCS shows a high convergence rate and is able to outperform other optimisers. → MCS is particularly strong on high-dimensional objective functions. → MCS performs well when applied to engineering problems. - Abstract: A new robust optimisation algorithm, which can be regarded as a modification of the recently developed cuckoo search, is presented. The modification involves the addition of information exchange between the top eggs, or the best solutions. Standard optimisation benchmarking functions are used to test the effects of these modifications and it is demonstrated that, in most cases, the modified cuckoo search performs as well as, or better than, the standard cuckoo search, a particle swarm optimiser, and a differential evolution strategy. In particular the modified cuckoo search shows a high convergence rate to the true global minimum even at high numbers of dimensions.

  18. Stochastic resource allocation in emergency departments with a multi-objective simulation optimization algorithm.

    Science.gov (United States)

    Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li

    2017-03-01

    The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.

  19. Automated exchange transfusion and exchange rate.

    Science.gov (United States)

    Funato, M; Shimada, S; Tamai, H; Taki, H; Yoshioka, Y

    1989-10-01

    An automated blood exchange transfusion (BET) with a two-site technique has been devised by Goldmann et al. and by us, using an infusion pump. With this method, we successfully performed exchange transfusions 189 times in the past four years on 110 infants with birth weights ranging from 530 g to 4,000 g. The exchange rate by the automated method was compared with the rate by Diamond's method. Serum bilirubin (SB) levels before and after BET and the maximal SB rebound within 24 hours after BET were: 21.6 +/- 2.4, 11.5 +/- 2.2, and 15.0 +/- 1.5 mg/dl in the automated method, and 22.0 +/- 2.9, 11.2 +/- 2.5, and 17.7 +/- 3.2 mg/dl in Diamond's method, respectively. The results showed that the maximal rebound of the SB level within 24 hours after BET was significantly lower in the automated method than in Diamond's method (p < 0.01), though SB levels before and after BET were not significantly different between the two methods. The exchange rate was also measured by staining the fetal red cells (F cells) in both the automated method and Diamond's method, and comparing them. The exchange rate of F cells in Diamond's method followed the theoretical exchange curve proposed by Diamond, while the rate in the automated method was significantly better, especially in the early stage of BET (p < 0.01). We believe that the use of this automated method may give better results than Diamond's method in the rate of exchange, because this method is performed with a two-site technique using a peripheral artery and vein.

  20. Multilinear Model of Heat Exchanger with Hammerstein Structure

    Directory of Open Access Journals (Sweden)

    Dragan Pršić

    2016-01-01

    The multilinear model control design approach is based on the approximation of the nonlinear model of the system by a set of linear models. The paper presents a method for creating a bank of linear models of a two-pass shell and tube heat exchanger. The nonlinear model is assumed to have a Hammerstein structure. The set of linear models is formed by decomposition of the nonlinear steady-state characteristic using the modified Included Angle Dividing method. Two modifications of this method are proposed. The first is an addition to the decomposition algorithm that reduces the number of linear segments. The second concerns the determination of the threshold value. The dependence between the decomposition of the nonlinear characteristic and the linear dynamics of the closed-loop system is established. The decoupling process is more formal and can be easily implemented using software tools. Due to its simplicity, the method is particularly suitable for complex systems, such as heat exchanger networks.

  1. A New Improved Quantum Evolution Algorithm with Local Search Procedure for Capacitated Vehicle Routing Problem

    Directory of Open Access Journals (Sweden)

    Ligang Cui

    2013-01-01

    The capacitated vehicle routing problem (CVRP) is the most classical vehicle routing problem (VRP), and many solution techniques have been proposed to solve it better. In this paper, a new improved quantum evolution algorithm (IQEA) with a mixed local search procedure is proposed for solving CVRPs. First, an IQEA with a double-chain quantum chromosome, new quantum rotation schemes, and a self-adaptive quantum NOT gate is constructed to initialize and generate feasible solutions. Then, to further strengthen IQEA's searching ability, three local search procedures, 1-1 exchange, 1-0 exchange, and 2-opt, are adopted. Experiments on a small case have been conducted to analyze the sensitivity of the main parameters and to compare the performance of the IQEA with different local search strategies. Together with results from the testing of CVRP benchmarks, the superiority of the proposed algorithm over the PSO, SR-1, and SR-2 has been demonstrated. Finally, a deeper analysis of the experimental results is presented and some suggestions for future research are given.

  2. A Comparative Evaluation of Algorithms in the Implementation of an Ultra-Secure Router-to-Router Key Exchange System

    Directory of Open Access Journals (Sweden)

    Nishaal J. Parmar

    2017-01-01

    This paper presents a comparative evaluation of possible encryption algorithms for use in a self-contained, ultra-secure router-to-router communication system, first proposed by El Rifai and Verma. The original proposal utilizes a discrete logarithm-based encryption solution, which will be compared in this paper to RSA, AES, and ECC encryption algorithms. RSA certificates are widely used within the industry but require a trusted key generation and distribution architecture. AES and ECC provide advantages in key length, processing requirements, and storage space, also maintaining an arbitrarily high level of security. This paper modifies each of the four algorithms for use within the self-contained router-to-router environment system and then compares them in terms of features offered, storage space and data transmission needed, encryption/decryption efficiency, and key generation requirements.
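
    For orientation, the discrete-logarithm primitive underlying the original proposal can be sketched as a classic exponential key agreement; the prime and generator below are toy demonstration values, not vetted parameters.

        import secrets

        P = 2**64 - 59   # a prime, but far too small for real use
        G = 5

        def keypair():
            priv = secrets.randbelow(P - 2) + 1
            return priv, pow(G, priv, P)

        a_priv, a_pub = keypair()
        b_priv, b_pub = keypair()
        # Both sides derive the same shared secret from the other's public value.
        assert pow(b_pub, a_priv, P) == pow(a_pub, b_priv, P)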

  3. Heat exchanger

    International Nuclear Information System (INIS)

    Dostatni, A.W.; Dostatni, Michel.

    1976-01-01

    In the main patent, a description was given of a heat exchanger with an exchange surface in preformed sheet metal, designed for the high pressure and temperature service particularly encountered in nuclear pressurized water reactors. It is characterised by the fact that it is composed of at least one exchanger bundle sealed in a containment, the said bundle or bundles being composed of numerous juxtaposed individual compartments whose exchange faces are built of preformed sheet metal. The present addendum certificate concerns shapes of bundles and the methods of positioning them in the exchanger containment that enable its compactness to be increased [fr]

  4. Algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations with the use of parallel computations

    Energy Technology Data Exchange (ETDEWEB)

    Moryakov, A. V., E-mail: sailor@orc.ru [National Research Centre Kurchatov Institute (Russian Federation)

    2016-12-15

    An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by simplicity and by the possibility of solving nonlinear problems with a correction of the operator in accordance with the solution obtained in the previous iteration.

  5. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    Science.gov (United States)

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
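
    All of the temperature-space methods compared here rest on the same Metropolis criterion for moving configurations between temperatures; a sketch of the swap test for neighboring temperatures (beta = 1/kT) follows, which applies to RE-style and ST-style bookkeeping alike, apart from the weight factors noted above.

        import math
        import random

        def accept_swap(beta_i, beta_j, e_i, e_j, rng=random):
            """Metropolis acceptance for exchanging configurations between two
            temperatures: accept with probability min(1, exp[(b_i - b_j)(E_i - E_j)])."""
            delta = (beta_i - beta_j) * (e_i - e_j)
            return delta >= 0.0 or rng.random() < math.exp(delta)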

  6. On the plausibility of socioeconomic mortality estimates derived from linked data: a demographic approach.

    Science.gov (United States)

    Lerch, Mathias; Spoerri, Adrian; Jasilionis, Domantas; Viciana Fernandèz, Francisco

    2017-07-14

    Reliable estimates of mortality according to socioeconomic status play a crucial role in informing the policy debate about social inequality, social cohesion, and exclusion as well as about the reform of pension systems. Linked mortality data have become a gold standard for monitoring socioeconomic differentials in survival. Several approaches have been proposed to assess the quality of the linkage, in order to avoid the misclassification of deaths according to socioeconomic status. However, the plausibility of mortality estimates has never been scrutinized from a demographic perspective, and the potential problems with the quality of the data on the at-risk populations have been overlooked. Using indirect demographic estimation (i.e., the synthetic extinct generation method), we analyze the plausibility of old-age mortality estimates according to educational attainment in four European data contexts with different quality issues: deterministic and probabilistic linkage of deaths, as well as differences in the methodology of the collection of educational data. We evaluate whether the at-risk population according to educational attainment is misclassified and/or misestimated, correct these biases, and estimate the education-specific linkage rates of deaths. The results confirm a good linkage of death records within different educational strata, even when probabilistic matching is used. The main biases in mortality estimates concern the classification and estimation of the person-years of exposure according to educational attainment. Changes in the census questions about educational attainment led to inconsistent information over time, which misclassified the at-risk population. Sample censuses also misestimated the at-risk populations according to educational attainment. The synthetic extinct generation method can be recommended for quality assessments of linked data because it is capable not only of quantifying linkage precision, but also of tracking problems in

  7. Nitrogenous Derivatives of Phosphorus and the Origins of Life: Plausible Prebiotic Phosphorylating Agents in Water

    Directory of Open Access Journals (Sweden)

    Megha Karki

    2017-07-01

    Phosphorylation under plausible prebiotic conditions continues to be one of the defining issues for the role of phosphorus in origins-of-life processes. In this review, we cover the reactions of alternative forms of phosphate, specifically the nitrogenous versions of phosphate (and other forms of reduced phosphorus species), from a prebiotic, synthetic organic and biochemical perspective. The ease with which such amidophosphates or phosphoramidate derivatives phosphorylate a wide variety of substrates suggests that alternative forms of phosphate could have played a role in overcoming the “phosphorylation in water problem”. We submit that serious consideration should be given to the search for primordial sources of nitrogenous versions of phosphate and other versions of phosphorus.

  8. An improved molecular dynamics algorithm to study thermodiffusion in binary hydrocarbon mixtures

    Science.gov (United States)

    Antoun, Sylvie; Saghir, M. Ziad; Srinivasan, Seshasai

    2018-03-01

    In multicomponent liquid mixtures, the diffusion flow of chemical species can be induced by temperature gradients, which leads to a separation of the constituent components. This cross effect between temperature and concentration is known as thermodiffusion or the Ludwig-Soret effect. The performance of boundary-driven non-equilibrium molecular dynamics together with the enhanced heat exchange (eHEX) algorithm was studied by assessing the thermodiffusion process in n-pentane/n-decane (nC5-nC10) binary mixtures. The eHEX algorithm is an extended version of the HEX algorithm with an improved energy conservation property. In addition, the transferable potentials for phase equilibria-united atom force field were employed in all molecular dynamics (MD) simulations to precisely model the molecular interactions in the fluid. The Soret coefficients of the n-pentane/n-decane (nC5-nC10) mixture for three different compositions (at 300.15 K and 0.1 MPa) were calculated and compared with the experimental data and other MD results available in the literature. Results of our newly employed MD algorithm showed close agreement with experimental data and better accuracy than other MD procedures.
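
    At the stationary state the Soret coefficient follows from the composition gradient sustained by the imposed temperature gradient; in one common convention for a binary mixture with mole fraction x (sign conventions vary across the literature):

        S_T \;=\; \frac{D_T}{D} \;=\; -\,\frac{1}{x\,(1-x)}\,\frac{\partial x}{\partial T}\bigg|_{\text{steady state}}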

  9. Component Cooling Heat Exchanger Heat Transfer Capability Operability Monitoring

    International Nuclear Information System (INIS)

    Mihalina, M.; Djetelic, N.

    2010-01-01

    … (e.g. using CC Heat Exchanger bypass valves for CC temperature control, variation of plant heat loads, pump performance, and day-night temperature difference, with lagging effects on heat transfer dynamics). Krsko NPP continuously monitors the Component Cooling (CC) Heat Exchanger performance using the on-line process information system (PIS). By defining a mathematical algorithm, it is possible to continuously evaluate CC Heat Exchanger operability by verifying that the calculated heat transfer rate is in accordance with the heat exchanger design specification sheet requirements. These calculations are limited to summer periods only, when the bypass valves are neither throttled nor open. (author)
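
    A minimal sketch of this kind of operability check: infer the duty from measured flow and terminal temperatures on one side, then test it against the design-sheet requirement. Variable names and the acceptance margin are illustrative.

        def heat_duty_kw(m_dot_kg_s, cp_kj_kg_k, t_in_c, t_out_c):
            """Duty on one side of the exchanger: Q = m_dot * cp * |dT|."""
            return m_dot_kg_s * cp_kj_kg_k * abs(t_out_c - t_in_c)

        def is_operable(q_measured_kw, q_design_kw, margin=0.95):
            """Operable if the measured duty meets the design specification
            within an (illustrative) acceptance margin."""
            return q_measured_kw >= margin * q_design_kw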

  10. Flow vibrations and dynamic instability of heat exchanger tube bundles

    International Nuclear Information System (INIS)

    Granger, S.; Langre, E. de

    1995-01-01

    This paper presents a review of external-flow-induced vibration of heat exchanger tube bundles. Attention is focused on a dynamic instability, known as "fluidelastic instability", which can develop when the flow is transverse to the tube axis. The main physical models proposed in the literature are successively reviewed in a critical way. As a consequence, some concepts are clarified, some a priori plausible misinterpretations are rejected and, finally, certain basic mechanisms, induced by the flow-structure interaction and responsible for the ultimate onset of fluidelastic instability, are elucidated. Design tools and methods for predictive analysis of industrial cases are then presented. The usual design tool is the "stability map", i.e. an empirical correlation which must be interpreted in a conservative way. Of course, when using this approach, the designer must also consider reasonable safety margins. In the area of predictive analysis, the "unsteady semi-analytical models" seem to be a promising and efficient methodology. A modern implementation of these ideas mixes an original experimental approach for taking fluid dynamic forces into account together with non-classical numerical methods of mechanical vibration. (authors). 20 refs., 9 figs

  11. Preserving the Boltzmann ensemble in replica-exchange molecular dynamics.

    Science.gov (United States)

    Cooke, Ben; Schmidler, Scott C

    2008-10-28

    We consider the convergence behavior of replica-exchange molecular dynamics (REMD) [Sugita and Okamoto, Chem. Phys. Lett. 314, 141 (1999)] based on properties of the numerical integrators in the underlying isothermal molecular dynamics (MD) simulations. We show that a variety of deterministic algorithms favored by molecular dynamics practitioners for constant-temperature simulation of biomolecules fail either to be measure invariant or irreducible, and are therefore not ergodic. We then show that REMD using these algorithms also fails to be ergodic. As a result, the entire configuration space may not be explored even in an infinitely long simulation, and the simulation may not converge to the desired equilibrium Boltzmann ensemble. Moreover, our analysis shows that for initial configurations with unfavorable energy, it may be impossible for the system to reach a region surrounding the minimum energy configuration. We demonstrate these failures of REMD algorithms for three small systems: a Gaussian distribution (simple harmonic oscillator dynamics), a bimodal mixture of Gaussians distribution, and the alanine dipeptide. Examination of the resulting phase plots and equilibrium configuration densities indicates significant errors in the ensemble generated by REMD simulation. We describe a simple modification to address these failures based on a stochastic hybrid Monte Carlo correction, and prove that this is ergodic.

  12. Segment-based dose optimization using a genetic algorithm

    International Nuclear Information System (INIS)

    Cotrutz, Cristian; Xing Lei

    2003-01-01

    Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. Firstly, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each of them is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and optimize directly the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than beamlet-based optimization because of the complex dependence of the dose on the field shapes and their weights. In this work we report a genetic algorithm for segment-based optimization. Unlike a gradient-based iterative approach or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the positions of the left-bank leaves of each segment, the second for the positions of the right bank, and the third for the weights of the segments defined by the first two chromosomes. The convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case, and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that genetic optimization of segment shapes and weights can produce highly conformal dose distributions. In addition, our study also confirms previous findings that fewer segments are generally needed to generate plans that are comparable with the plans obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning

  13. Ion exchange equilibrium constants

    CERN Document Server

    Marcus, Y

    2013-01-01

    Ion Exchange Equilibrium Constants focuses on the test-compilation of equilibrium constants for ion exchange reactions. The book first underscores the scope of the compilation, the equilibrium constants, the symbols used, and the arrangement of the table. The manuscript then presents the table of equilibrium constants, including polystyrene sulfonate cation exchanger, polyacrylate cation exchanger, polymethacrylate cation exchanger, polystyrene phosphate cation exchanger, and zirconium phosphate cation exchanger. The text highlights zirconium oxide anion exchanger, zeolite type 13Y cation exchanger, and

  14. Parallel simulated annealing algorithms for cell placement on hypercube multiprocessors

    Science.gov (United States)

    Banerjee, Prithviraj; Jones, Mark Howard; Sargent, Jeff S.

    1990-01-01

    Two parallel algorithms for standard cell placement using simulated annealing are developed to run on distributed-memory message-passing hypercube multiprocessors. The cells can be mapped in a two-dimensional area of a chip onto processors in an n-dimensional hypercube in two ways, such that both small and large cell exchange and displacement moves can be applied. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support the parallel cost evaluation. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. A dynamic parallel annealing schedule estimates the errors due to interacting parallel moves and adapts the rate of synchronization automatically. Two novel approaches in controlling error in parallel algorithms are described: heuristic cell coloring and adaptive sequence control.

  15. Nuclear fuel management optimization using adaptive evolutionary algorithms with heuristics

    International Nuclear Information System (INIS)

    Axmann, J.K.; Van de Velde, A.

    1996-01-01

    Adaptive evolutionary algorithms in combination with expert knowledge encoded in heuristics have proved to be a robust and powerful optimization method for the design of optimized PWR fuel loading patterns. Simple parallel algorithmic structures coupled with a low amount of communication between the computer processor units in use make it possible for workstation clusters to be employed efficiently. The extension of classic evolution strategies not only by new and alternative methods but also by the inclusion of heuristics affecting the exchange probabilities of the fuel assemblies at specific core positions led to the RELOPAT optimization code of the Technical University of Braunschweig. In combination with the new neutron-physical 3D nodal core simulator PRISM developed by SIEMENS, the PRIMO loading pattern optimization system has been designed. Highly promising results in the recalculation of known reload plans for German PWRs now lead to a commercially usable program. (author)

  16. A new algorithm for DNS of turbulent polymer solutions using the FENE-P model

    Science.gov (United States)

    Vaithianathan, T.; Collins, Lance; Robert, Ashish; Brasseur, James

    2004-11-01

    Direct numerical simulations (DNS) of polymer solutions based on the finite extensible nonlinear elastic model with the Peterlin closure (FENE-P) solve for a conformation tensor with properties that must be maintained by the numerical algorithm. In particular, the eigenvalues of the tensor are all positive (to maintain positive definiteness) and their sum is bounded by the maximum extension length. Loss of either of these properties will give rise to unphysical instabilities. In earlier work, Vaithianathan & Collins (2003) devised an algorithm based on an eigendecomposition that allows the eigenvalues of the conformation tensor to be updated directly, making it easier to maintain the conditions necessary for a stable calculation. However, simple fixes (such as ceilings and floors) yield results that violate overall conservation. The present finite-difference algorithm is inherently designed to satisfy all of the bounds on the eigenvalues, and thus restores overall conservation. New results suggest that the earlier algorithm may have exaggerated the energy exchange at high wavenumbers. In particular, feedback of the polymer elastic energy to the isotropic turbulence is now greatly reduced.
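
    A numpy sketch of the eigenvalue constraints themselves, as a naive projection fix-up of the kind the paper improves on with its conservative finite-difference scheme (the function name and tolerance are assumptions):

    ```python
    import numpy as np

    def repair_conformation(C, b, eps=1e-12):
        """Project a FENE-P conformation tensor back onto the physical set:
        all eigenvalues positive, and their sum bounded by the maximum
        extension b.  Note: this simple fix-up is NOT conservative."""
        C = 0.5 * (C + C.T)             # enforce symmetry before decomposing
        lam, Q = np.linalg.eigh(C)      # eigendecomposition of the tensor
        lam = np.maximum(lam, eps)      # positive definiteness
        if lam.sum() > b:               # bounded extension: rescale the trace
            lam *= b / lam.sum()
        return (Q * lam) @ Q.T          # reassemble Q diag(lam) Q^T

    C_fixed = repair_conformation(np.array([[2.0, 1.5], [1.5, -0.1]]), b=10.0)
    ```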

  17. Validation of the method for determination of the thermal resistance of fouling in shell and tube heat exchangers

    International Nuclear Information System (INIS)

    Markowski, Mariusz; Trafczynski, Marian; Urbaniec, Krzysztof

    2013-01-01

    Highlights: • Heat recovery in a heat exchanger network (HEN). • A novel method for on-line determination of the thermal resistance of fouling is presented. • Details are developed for shell and tube heat exchangers. • The method was validated and a sensitivity analysis was carried out. • The developed approach allows long-term monitoring of changes in the HEN efficiency. - Abstract: A novel method for on-line determination of the thermal resistance of fouling in shell and tube heat exchangers is presented. It can be applied under the condition that data on pressure, temperature, mass flowrate and thermophysical properties of both heat-exchanging media are continuously available. The calculation algorithm for use in the novel method is robust and ensures reliable determination of the thermal resistance of fouling even if the operating parameters fluctuate. The method was validated using measurement data retrieved from the operation records of a heat exchanger network connected with a crude distillation unit rated at 800 t/h. A sensitivity analysis of the method was carried out, and the calculated values of the thermal resistance of fouling were critically reviewed considering the results of qualitative evaluation of fouling layers in the exchangers inspected during plant overhaul.
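
    The core balance can be sketched as follows, assuming counter-current flow, a measured duty Q and a known clean-state coefficient U_clean (a simplified illustration, not the authors' validated on-line algorithm):

    ```python
    from math import log

    def fouling_resistance(Q, A, T_hot_in, T_hot_out, T_cold_in, T_cold_out,
                           U_clean):
        """R_f = 1/U_service - 1/U_clean, with U_service backed out of
        Q = U * A * LMTD for a counter-current exchanger."""
        dT1 = T_hot_in - T_cold_out
        dT2 = T_hot_out - T_cold_in
        lmtd = (dT1 - dT2) / log(dT1 / dT2) if dT1 != dT2 else dT1
        U_service = Q / (A * lmtd)          # overall coefficient in service
        return 1.0 / U_service - 1.0 / U_clean

    # Illustrative numbers only: 2.5 MW duty, 250 m^2 area, U_clean = 400 W/m^2K.
    print(fouling_resistance(2.5e6, 250.0, 160.0, 120.0, 40.0, 80.0, 400.0))
    ```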

  18. 2nd International Workshop on Eigenvalue Problems : Algorithms, Software and Applications in Petascale Computing

    CERN Document Server

    Zhang, Shao-Liang; Imamura, Toshiyuki; Yamamoto, Yusaku; Kuramashi, Yoshinobu; Hoshi, Takeo

    2017-01-01

    This book provides state-of-the-art and interdisciplinary topics on solving matrix eigenvalue problems, particularly by using recent petascale and upcoming post-petascale supercomputers. It gathers selected topics presented at the International Workshops on Eigenvalue Problems: Algorithms, Software and Applications in Petascale Computing (EPASA2014 and EPASA2015), which brought together leading researchers working on the numerical solution of matrix eigenvalue problems to discuss and exchange ideas – and in so doing helped to create a community for researchers in eigenvalue problems. The topics presented in the book, including novel numerical algorithms, high-performance implementation techniques, software developments and sample applications, will contribute to various fields that involve solving large-scale eigenvalue problems.

  19. Algorithms

    Indian Academy of Sciences (India)

    Methods for (polynomial) division have been found in Vedic Mathematics that are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming ...
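
    Since the record cites Euclid's algorithm, a minimal Python rendering for reference:

    ```python
    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
        while b:
            a, b = b, a % b
        return a

    assert gcd(252, 105) == 21
    ```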

  20. NLP model and stochastic multi-start optimization approach for heat exchanger networks

    International Nuclear Information System (INIS)

    Núñez-Serna, Rosa I.; Zamora, Juan M.

    2016-01-01

    Highlights: • An NLP model for the optimal design of heat exchanger networks is proposed. • The NLP model is developed from a stage-wise grid diagram representation. • A two-phase stochastic multi-start optimization methodology is utilized. • Improved network designs are obtained with different heat load distributions. • Structural changes and reductions in the number of heat exchangers are produced. - Abstract: Heat exchanger network synthesis methodologies frequently identify good network structures which, nevertheless, might be accompanied by suboptimal values of the design variables. The objective of this work is to develop a nonlinear programming (NLP) model and an optimization approach that aim at identifying the best values for intermediate temperatures, sub-stream flow rate fractions, heat loads and areas for a given heat exchanger network topology. The NLP model that minimizes the total annual cost of the network is constructed based on a stage-wise grid diagram representation. To improve the chances of obtaining globally optimal designs, a two-phase stochastic multi-start optimization algorithm is utilized for the solution of the developed model. The effectiveness of the proposed optimization approach is illustrated with the optimization of two network designs proposed in the literature for two well-known benchmark problems. Results show that from the addressed base network topologies it is possible to achieve improved network designs, with redistributions in exchanger heat loads that lead to reductions in total annual costs. The results also show that the optimization of a given network design sometimes leads to structural simplifications and reductions in the total number of heat exchangers of the network, thereby exposing alternative viable network topologies not initially anticipated.
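
    A minimal sketch of the two-phase stochastic multi-start idea (random starting points, then local NLP solves), using scipy on a toy multimodal stand-in for a network cost surface; all names and the objective are assumptions:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def multistart(objective, bounds, n_starts=50, seed=0):
        """Phase 1: sample random starting points inside the bounds.
        Phase 2: run a local NLP solver from each; keep the best optimum."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        best = None
        for _ in range(n_starts):
            x0 = rng.uniform(lo, hi)
            res = minimize(objective, x0, bounds=bounds)
            if best is None or res.fun < best.fun:
                best = res
        return best

    # Toy multimodal stand-in for a total-annual-cost surface.
    tac = lambda x: x[0]**2 - 10*np.cos(2*np.pi*x[0]) + (x[1] - 1.0)**2
    print(multistart(tac, [(-5.0, 5.0), (-5.0, 5.0)]).x)
    ```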

  1. Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.

    Science.gov (United States)

    Walter, Florian; Röhrbein, Florian; Knoll, Alois

    2015-12-01

    The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike Von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview of selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks. Copyright © 2015 Elsevier Ltd. All rights reserved.
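
    As one concrete instance of the plasticity rules surveyed, a generic pair-based exponential STDP weight update (a textbook formulation, not tied to any particular chip discussed in the paper; all constants are illustrative):

    ```python
    from math import exp

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                    tau=20.0, w_max=1.0):
        """Potentiate when the presynaptic spike precedes the postsynaptic
        one (dt > 0), depress otherwise; times in ms, weight clipped."""
        dt = t_post - t_pre
        if dt > 0:
            w += a_plus * exp(-dt / tau)
        else:
            w -= a_minus * exp(dt / tau)
        return min(max(w, 0.0), w_max)

    print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # causal pair -> potentiation
    ```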

  2. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    Science.gov (United States)

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA and the other decomposition algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy. The superior classification performance of the sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture the underlying source processes better than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.
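
    The encode-then-decode comparison can be sketched with scikit-learn on random stand-in data (the study used 304 real fMRI scans; the component count, classifier and data shapes here are assumptions):

    ```python
    import numpy as np
    from sklearn.decomposition import NMF, FastICA, DictionaryLearning
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X = np.abs(np.random.randn(200, 300))   # stand-in (time points x voxels)
    y = np.random.randint(0, 3, 200)        # stand-in labels: video/audio/rest

    for name, model in [("NMF", NMF(n_components=10, max_iter=500)),
                        ("ICA", FastICA(n_components=10, max_iter=500)),
                        ("sparse", DictionaryLearning(n_components=10, alpha=1.0))]:
        W = model.fit_transform(X)          # time-series weights of the networks
        acc = cross_val_score(LogisticRegression(max_iter=1000), W, y, cv=5).mean()
        print(name, round(acc, 3))          # chance level ~0.33 on random data
    ```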

  3. Exchange rate policy

    Directory of Open Access Journals (Sweden)

    Plačkov Slađana

    2013-01-01

    Full Text Available Small oscillations of the exchange rate certainly contribute to a loss of confidence in the currency (the Serbian dinar, CSD), and because of the shallow market even the smallest change in supply and demand leads to a shift in the exchange rate and brings uncertainty. Some economists suggest that the exchange rate should be linked to inflation, thus ensuring predictable and stable exchange rates. A real or slightly depressed exchange rate will encourage the competitiveness of exporters and perhaps ensure the development of new production lines which, in terms of an overvalued exchange rate, had no economic justification. A fixed exchange rate will bring lower interest rates, lower risk and lower business uncertainty (uncertainty avoidance), but by following this trend Serbia will also reduce its foreign exchange reserves. On the other hand, a completely free exchange rate would lead to a (real) fall of the Serbian currency, which in a certain period would lead to a significant increase in exports, but the consequences for businessmen and citizens with loans pegged to the euro exchange rate would be disastrous. We pay special attention to the depreciation of the exchange rate, as it is generally favorable to the export competitiveness of Serbia while, on the other hand, it increases the debt servicing costs of the government as well as of the private sector. Oscillations of the dinar exchange rate, appreciation and depreciation, sometimes have disastrous consequences for the economy, investors, imports and exports. In the subsequent sections of the paper, we observe the movement of the dinar exchange rate in Serbia in the time interval 2009-2012, in order to strike a balance and maintain economic equilibrium. The movement of foreign currencies against the local currency is controlled in the foreign exchange market, so when economic interests require it, the National Bank of Serbia (NBS), on the basis of arbitrary criteria, can intervene in the market.

  4. Toward Petascale Biologically Plausible Neural Networks

    Science.gov (United States)

    Long, Lyle

    This talk will describe an approach to achieving petascale neural networks. Artificial intelligence has been oversold for many decades. Computers in the beginning could only do about 16,000 operations per second. Computer processing power, however, has been doubling every two years thanks to Moore's law, and growing even faster due to massively parallel architectures. Finally, 60 years after the first AI conference we have computers on the order of the performance of the human brain (10^16 operations per second). The main issues now are algorithms, software, and learning. We have excellent models of neurons, such as the Hodgkin-Huxley model, but we do not know how the human neurons are wired together. With careful attention to efficient parallel computing, event-driven programming, table lookups, and memory minimization, massive scale simulations can be performed. The code that will be described was written in C++ and uses the Message Passing Interface (MPI). It uses the full Hodgkin-Huxley neuron model, not a simplified model. It also allows arbitrary network structures (deep, recurrent, convolutional, all-to-all, etc.). The code is scalable, and has, so far, been tested on up to 2,048 processor cores using 10^7 neurons and 10^9 synapses.
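
    A single-neuron, forward-Euler Python sketch of the full Hodgkin-Huxley model that the talk scales up (the talk's actual code is C++/MPI and event-driven; the constants are the standard squid-axon values, while the step size and drive current are arbitrary):

    ```python
    import numpy as np

    # Standard Hodgkin-Huxley constants (mV, ms, uF/cm^2, mS/cm^2).
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.4

    def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)
    def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
    def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
    def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))

    def step(V, m, h, n, I_ext, dt=0.01):
        """One forward-Euler step of the membrane and gating equations."""
        I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
        V += dt * (I_ext - I_ion) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        return V, m, h, n

    V, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting state
    for _ in range(5000):                    # 50 ms at 10 uA/cm^2 -> spike train
        V, m, h, n = step(V, m, h, n, I_ext=10.0)
    ```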

  5. An Idle-State Detection Algorithm for SSVEP-Based Brain-Computer Interfaces Using a Maximum Evoked Response Spatial Filter.

    Science.gov (United States)

    Zhang, Dan; Huang, Bisheng; Wu, Wei; Li, Siliang

    2015-11-01

    Although accurate recognition of the idle state is essential for the application of brain-computer interfaces (BCIs) in real-world situations, it remains a challenging task due to the variability of the idle state. In this study, a novel algorithm was proposed for idle state detection in a steady-state visual evoked potential (SSVEP)-based BCI. The proposed algorithm aims to solve the idle state detection problem by constructing a better model of the control states. For feature extraction, a maximum evoked response (MER) spatial filter was developed to extract neurophysiologically plausible SSVEP responses, by finding the combination of multi-channel electroencephalogram (EEG) signals that maximized the evoked responses while suppressing the unrelated background EEG. The extracted SSVEP responses at the frequencies of both the attended and the unattended stimuli were then used to form feature vectors, and a series of binary classifiers for recognition of each control state and the idle state were constructed. EEG data from nine subjects in a three-target SSVEP BCI experiment with a variety of idle state conditions were used to evaluate the proposed algorithm. The proposed algorithm outperformed the most popular canonical correlation analysis-based algorithm and the conventional power spectrum-based algorithm, achieving an offline control state classification accuracy of 88.0 ± 11.1% and idle state false positive rates (FPRs) ranging from 7.4 ± 5.6% to 14.2 ± 10.1%, depending on the specific idle state conditions. Moreover, the online simulation reported BCI performance close to practical use: 22.0 ± 2.9 of the 24 control commands were correctly recognized, and FPRs as low as approximately 0.5 events/min were achieved in the idle state conditions with eyes open and 0.05 events/min with eyes closed. These results demonstrate the potential of the proposed algorithm for implementing practical SSVEP BCI systems.
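
    One plausible reading of the MER filter idea (an assumption, not the paper's exact formulation): maximize the power of the stimulus-locked average against that of the residual background EEG through a generalized eigenvalue problem:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def evoked_spatial_filter(epochs):
        """epochs: (n_trials, n_channels, n_samples) stimulus-locked EEG.
        Returns channel weights maximizing evoked vs. background power."""
        evoked = epochs.mean(axis=0)              # phase-locked SSVEP response
        resid = epochs - evoked                   # unrelated background EEG
        S = evoked @ evoked.T                     # evoked covariance (ch x ch)
        N = sum(r @ r.T for r in resid) / len(resid)
        vals, vecs = eigh(S, N)                   # generalized eigenproblem
        return vecs[:, -1]                        # filter with the largest ratio

    w = evoked_spatial_filter(np.random.randn(40, 8, 250))  # toy 8-channel data
    ```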

  6. Exchange market pressure

    NARCIS (Netherlands)

    Jager, H.; Klaassen, F.; Durlauf, S.N.; Blume, L.E.

    2010-01-01

    Currencies can be under severe pressure in the foreign exchange market, but in a fixed (or managed) exchange rate regime that pressure is not fully visible via the change in the exchange rate. Exchange market pressure (EMP) is a concept developed to measure the pressure in such cases nevertheless.

  7. Signature of Plausible Accreting Supermassive Black Holes in Mrk 261/262 and Mrk 266

    Directory of Open Access Journals (Sweden)

    Gagik Ter-Kazarian

    2013-01-01

    Full Text Available We address the neutrino radiation of plausible accreting supermassive black holes closely linked to the 5 nuclear components of the galaxy samples Mrk 261/262 and Mrk 266. We predict a time delay before neutrino emission on the same scale as the age of the Universe. The ultrahigh-energy neutrinos are produced in a superdense protomatter medium via simple (quark or pionic) reactions or modified URCA processes (G. Gamow was inspired to name the process URCA after the name of a casino in Rio de Janeiro). The resulting neutrino fluxes for the quark, pionic and modified URCA reactions, given in the paper as functions of the opening parameter, are highly beamed along the plane of the accretion disk, peaked at ultrahigh energies, and collimated in a small opening angle.

  8. The use of the multi-cumulant tensor analysis for the algorithmic optimisation of investment portfolios

    Science.gov (United States)

    Domino, Krzysztof

    2017-02-01

    Cumulant analysis plays an important role in the analysis of non-Gaussian distributed data, and share price returns are a good example of such data. The purpose of this research is to develop a cumulant-based algorithm and use it to determine eigenvectors that represent investment portfolios with low variability. The algorithm is based on the Alternating Least Squares method and involves the simultaneous minimisation of the 2nd-6th cumulants of the multidimensional random variable (the percentage returns of many companies' shares). The algorithm was then tested during the recent crash on the Warsaw Stock Exchange. To detect the incoming crash and provide entry and exit signals for the investment strategy, the Hurst exponent was calculated using local DFA. It was shown that the introduced algorithm is on average better than the benchmark and other portfolio determination methods, but only within the examination window determined by low values of the Hurst exponent. Remark that the algorithm is based on cumulant tensors up to the 6th order calculated for a multidimensional random variable, which is a novel idea. It can be expected that the algorithm will be useful in financial data analysis on a worldwide scale as well as in the analysis of other types of non-Gaussian distributed data.
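
    The entry/exit signal rests on a local DFA estimate of the Hurst exponent; a minimal first-order DFA sketch (the window sizes and white-noise test input are arbitrary):

    ```python
    import numpy as np

    def hurst_dfa(x, scales=(8, 16, 32, 64, 128)):
        """Hurst exponent by detrended fluctuation analysis: the slope of
        log F(s) against log s, with linear detrending in each window."""
        y = np.cumsum(x - np.mean(x))             # profile of the series
        F = []
        for s in scales:
            n = len(y) // s
            segs = y[:n * s].reshape(n, s)
            t = np.arange(s)
            ms = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
                  for seg in segs]                # variance around local trend
            F.append(np.sqrt(np.mean(ms)))
        slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
        return slope

    print(hurst_dfa(np.random.randn(4096)))       # ~0.5 for white noise
    ```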

  9. EXTERNALITIES IN EXCHANGE NETWORKS AN ADAPTATION OF EXISTING THEORIES OF EXCHANGE NETWORKS

    NARCIS (Netherlands)

    Dijkstra, Jacob

    2009-01-01

    The present paper extends the focus of network exchange research to externalities in exchange networks. Externalities of exchange are defined as direct effects on an actor's utility of an exchange in which this actor is not involved. Existing theories in the field of network exchange do not inform us about the effects of such externalities.

  10. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support for the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  11. A Probabilistic Framework for Constructing Temporal Relations in Replica Exchange Molecular Trajectories.

    Science.gov (United States)

    Chattopadhyay, Aditya; Zheng, Min; Waller, Mark Paul; Priyakumar, U Deva

    2018-05-23

    Knowledge of the structure and dynamics of biomolecules is essential for elucidating the underlying mechanisms of biological processes. Given the stochastic nature of many biological processes, like protein unfolding, it is almost impossible that two independent simulations will generate the exact same sequence of events, which makes direct analysis of simulations difficult. Statistical models like Markov chains and transition networks help shed some light on the mechanistic nature of such processes by predicting the long-time dynamics of these systems from short simulations. However, such methods fall short in analyzing trajectories with partial or no temporal information, for example, replica exchange molecular dynamics or Monte Carlo simulations. In this work we propose a probabilistic algorithm, borrowing concepts from graph theory and machine learning, to extract reactive pathways from molecular trajectories in the absence of temporal data. A suitable vector representation was chosen to represent each frame in the macromolecular trajectory (as a series of interaction and conformational energies), and dimensionality reduction was performed using principal component analysis (PCA). The trajectory was then clustered using a density-based clustering algorithm, where each cluster represents a metastable state on the potential energy surface (PES) of the biomolecule under study. A graph was created with these clusters as nodes, with the edges learnt using an iterative expectation maximization algorithm. The most reactive path is conceived as the widest path along this graph. We have tested our method on an RNA hairpin unfolding trajectory in aqueous urea solution. Our method makes the understanding of the unfolding mechanism of the RNA hairpin molecule more tractable. As this method does not rely on temporal data, it can be used to analyze trajectories from Monte Carlo sampling techniques and replica exchange molecular dynamics (REMD).
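
    The final step, the most reactive path as the widest (maximum-bottleneck) path over the learned edge weights, can be sketched with a Dijkstra-style search (the toy graph is an assumption):

    ```python
    import heapq

    def widest_path(graph, src, dst):
        """graph[u] = {v: edge_width}; returns (bottleneck, path) maximizing
        the minimum edge weight along the route from src to dst."""
        prev, heap = {}, [(-float("inf"), src)]
        width = {src: float("inf")}
        while heap:
            w, u = heapq.heappop(heap)
            if u == dst:                          # reconstruct the path
                path = [u]
                while u in prev:
                    u = prev[u]
                    path.append(u)
                return -w, path[::-1]
            for v, ew in graph[u].items():
                cand = min(-w, ew)                # bottleneck via this edge
                if cand > width.get(v, 0):
                    width[v], prev[v] = cand, u
                    heapq.heappush(heap, (-cand, v))
        return 0, []

    g = {"A": {"B": 5, "C": 2}, "B": {"D": 3}, "C": {"D": 9}, "D": {}}
    print(widest_path(g, "A", "D"))               # (3, ['A', 'B', 'D'])
    ```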

  12. Heat exchanger

    International Nuclear Information System (INIS)

    Leigh, D.G.

    1976-01-01

    The arrangement described relates particularly to heat exchangers for use in fast reactor power plants, in which heat is extracted from the reactor core by primary liquid metal coolant and is then transferred to secondary liquid metal coolant by means of intermediate heat exchangers. One of the main requirements of such a system, if used in a pool type fast reactor, is that the pressure drop on the primary coolant side must be kept to a minimum consistent with the maintenance of a limited dynamic head in the pool vessel. The intermediate heat exchanger must also be compact enough to be accommodated in the reactor vessel, and the heat exchanger tubes must be available for inspection and the detection and plugging of leaks. If, however, the heat exchanger is located outside the reactor vessel, as in the case of a loop system reactor, a higher pressure drop on the primary coolant side is acceptable, and the space restriction is less severe. An object of the arrangement described is to provide a method of heat exchange and a heat exchanger that address these problems. A further object is to provide a method that ensures that excessive temperature variations are not imposed on welded tube joints by sudden changes in the primary coolant flow path. Full constructional details are given. (U.K.)

  13. Parallel genetic algorithms with migration for the hybrid flow shop scheduling problem

    Directory of Open Access Journals (Sweden)

    K. Belkadi

    2006-01-01

    Full Text Available This paper addresses scheduling problems in hybrid flow shop-like systems with a migration parallel genetic algorithm (PGA_MIG). This parallel genetic algorithm model preserves genetic diversity by applying selection and reproduction mechanisms closer to those found in nature. The spatial structure of the population is modified by dividing it into disjoint subpopulations. From time to time, individuals are exchanged between the different subpopulations (migration). The influence of parameters and dedicated strategies is studied. These parameters are the number of independent subpopulations, the interconnection topology between subpopulations, the choice/replacement strategy for the migrant individuals, and the migration frequency. A comparison between the sequential and parallel versions of the genetic algorithm (GA) is provided. This comparison relates to the quality of the solution and the execution time of the two versions. The efficiency of the parallel model depends strongly on the parameters, and especially on the migration frequency. At the same time, this parallel model gives a significant improvement in computational time if it is implemented on a parallel architecture offering an acceptable number of processors (as many processors as subpopulations).
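
    A minimal sketch of one migration step on a ring topology, one common choice among the topologies and choice/replacement strategies the paper compares (the population sizes and one-max fitness are illustrative):

    ```python
    import random

    def migrate(islands, fitness, n_migrants=2):
        """Each island sends copies of its best individuals to its ring
        neighbour, which replaces its own worst individuals."""
        best = [sorted(pop, key=fitness, reverse=True)[:n_migrants]
                for pop in islands]               # select migrants first
        for i, pop in enumerate(islands):
            neighbour = islands[(i + 1) % len(islands)]
            neighbour.sort(key=fitness)           # worst individuals first
            neighbour[:n_migrants] = [ind[:] for ind in best[i]]

    islands = [[[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
               for _ in range(4)]
    for epoch in range(5):
        # ... evolve each subpopulation independently here ...
        migrate(islands, fitness=sum)             # then exchange individuals
    ```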

  14. Coupled eco-hydrology and biogeochemistry algorithms enable the simulation of water table depth effects on boreal peatland net CO2 exchange

    Science.gov (United States)

    Mezbahuddin, Mohammad; Grant, Robert F.; Flanagan, Lawrence B.

    2017-12-01

    Water table depth (WTD) effects on net ecosystem CO2 exchange of boreal peatlands are largely mediated by hydrological effects on peat biogeochemistry and the ecophysiology of peatland vegetation. The lack of representation of these effects in carbon models currently limits our predictive capacity for changes in boreal peatland carbon deposits under potentially drier and warmer future climates. We examined whether a process-level coupling of a prognostic WTD with (1) oxygen transport, which controls energy yields from microbial and root oxidation-reduction reactions, and (2) vascular and nonvascular plant water relations could explain the mechanisms that control variations in net CO2 exchange of a boreal fen under contrasting WTD conditions, i.e., shallow vs. deep WTD. Such coupling of eco-hydrology and biogeochemistry algorithms in a process-based ecosystem model, ecosys, was tested against net ecosystem CO2 exchange measurements in a western Canadian boreal fen peatland over a period of drier-weather-driven gradual WTD drawdown. A May-October WTD drawdown of ~0.25 m from 2004 to 2009 hastened oxygen transport to microbial and root surfaces, enabling greater microbial and root energy yields and peat and litter decomposition, which raised modeled ecosystem respiration (Re) by 0.26 µmol CO2 m^-2 s^-1 per 0.1 m of WTD drawdown. It also augmented nutrient mineralization, and hence root nutrient availability and uptake, which resulted in improved leaf nutrient (nitrogen) status that facilitated carboxylation and raised modeled vascular gross primary productivity (GPP) and plant growth. The increase in modeled vascular GPP exceeded the declines in modeled nonvascular (moss) GPP due to greater shading from increased vascular plant growth and moss drying from near-surface peat desiccation, thereby causing a net increase in modeled growing season GPP of 0.39 µmol CO2 m^-2 s^-1 per 0.1 m of WTD drawdown. Similar increases in GPP and Re caused no significant WTD effects on modeled net ecosystem CO2 exchange.

  15. An Evolutionary Algorithm for Multiobjective Fuzzy Portfolio Selection Models with Transaction Cost and Liquidity

    Directory of Open Access Journals (Sweden)

    Wei Yue

    2015-01-01

    Full Text Available The major issues for mean-variance-skewness models are the errors in estimation that cause corner solutions and low diversity in the portfolio. In this paper, a multiobjective fuzzy portfolio selection model with transaction cost and liquidity is proposed to maintain the diversity of the portfolio. In addition, we have designed a multiobjective evolutionary algorithm based on decomposition of the objective space to maintain the diversity of the obtained solutions. The algorithm is used to obtain a set of Pareto-optimal portfolios with good diversity and convergence. To demonstrate the effectiveness of the proposed model and algorithm, the performance of the proposed algorithm is compared with the classic MOEA/D and NSGA-II through some numerical examples based on data from the Shanghai Stock Exchange Market. Simulation results show that our proposed algorithm is able to obtain better diversity and a more evenly distributed Pareto front than the other two algorithms, and that the proposed model can maintain the diversity of the portfolio quite well. The purpose of this paper is to deal with portfolio problems in the weighted possibilistic mean-variance-skewness (MVS) and possibilistic mean-variance-skewness-entropy (MVS-E) frameworks with transaction cost and liquidity, and to provide investment strategies that are as diversified as possible for investors at a given time, rather than a single strategy.

  16. Estimating Exchange Market Pressure and the Degree of Exchange Market Intervention for Finland during the Floating Exchange Rate Regime

    OpenAIRE

    Pösö, Mika; Spolander, Mikko

    1997-01-01

    In this paper, we use a fairly simple monetary macro model to calculate quarterly measures of exchange market pressure and the degree of the Bank of Finland's intervention during the time the markka was floated. Exchange market pressure measures the size of the exchange rate change that would have occurred if the central bank had unexpectedly refrained from intervening in the foreign exchange market. Intervention activity of the central bank is measured as the proportion of exchange market pressure relieved by intervention in the foreign exchange market.

  17. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    Science.gov (United States)

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
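
    A toy simulation of the one-bit restriction in a complete network (a simplified sketch in the spirit of the fly-inspired scheme, not the paper's exact protocol; in particular, termination detection is idealized here):

    ```python
    import random

    def elect(n, p=0.5, rounds=100):
        """Each round, every remaining candidate 'fires' (broadcasts one bit)
        with probability p; a silent candidate that hears >= 1 message
        withdraws.  Nodes only distinguish silence from non-silence."""
        candidates = set(range(n))
        for _ in range(rounds):
            fired = {c for c in candidates if random.random() < p}
            if fired:                     # an all-silent round changes nothing
                candidates = fired        # silent candidates heard a bit: quit
            if len(candidates) == 1:
                return candidates.pop()   # unique leader remains
        return None                       # undecided within the round budget

    print(elect(16))
    ```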

  18. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper reviews the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms, inspired by the Selfish Gene Theory, the biologist Richard Dawkins's interpretation (1989) of Darwinian ideas. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, its history and the steps involved in the algorithm are discussed, and its different applications are evaluated together with an analysis of these applications.

  19. Dynamic analysis of complex tube systems in heat exchangers

    International Nuclear Information System (INIS)

    Kouba, J.; Dvorak, P.

    1985-01-01

    Using a computational model, a dynamic analysis was made of the tube assemblies of heat exchanger bundles by the finite element method. An algorithm is presented for determining the frequency-mode properties, based on Sturm sequences combined with inverse vector iteration. The results obtained using the method are compared with those obtained by analytical solution and by the transfer matrix method, for the cases of both eigenvibrations and resonance vibrations. The results are in very good agreement. For the first four eigenfrequencies, the calculation error is less than 1.5% relative to the analytical solution. (J.B.). 4 tabs., 8 figs., 5 refs

  20. A continuous exchange factor method for radiative exchange in enclosures with participating media

    International Nuclear Information System (INIS)

    Naraghi, M.H.N.; Chung, B.T.F.; Litkouhi, B.

    1987-01-01

    A continuous exchange factor method for the analysis of radiative exchange in enclosures is developed. In this method two types of exchange functions are defined: the direct exchange function and the total exchange function. Certain integral equations relating total exchange functions to direct exchange functions are developed. These integral equations are solved using a Gaussian quadrature integration method. The results obtained with the present approach are found to be more accurate than those of the zonal method.
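
    The quadrature step can be illustrated on a generic Fredholm equation of the second kind solved by the Nystrom method (the kernel and interval are toy assumptions, not the paper's exchange-function equations):

    ```python
    import numpy as np

    def solve_fredholm(kernel, f, a, b, n=20):
        """Solve u(x) = f(x) + int_a^b K(x,y) u(y) dy by collocating at
        Gauss-Legendre nodes: (I - K W) u = f at the nodes."""
        t, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
        x = 0.5 * (b - a) * t + 0.5 * (b + a)      # map nodes to [a, b]
        w = 0.5 * (b - a) * w
        K = kernel(x[:, None], x[None, :])         # kernel matrix at the nodes
        u = np.linalg.solve(np.eye(n) - K * w, f(x))
        return x, u

    x, u = solve_fredholm(lambda x, y: 0.5 * np.exp(-np.abs(x - y)),
                          lambda x: np.ones_like(x), 0.0, 1.0)
    ```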

  1. Algorithms

    Indian Academy of Sciences (India)

    One widely used paradigm is referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, the greedy strategy, the dynamic programming strategy, and backtracking or traversal of ...

  2. Electron exchange reaction in anion exchangers as observed in uranium isotope separation

    International Nuclear Information System (INIS)

    Obanawa, Heiichiro; Takeda, Kunihiko; Seko, Maomi

    1991-01-01

    The mechanism of electron exchange in an ion exchanger, as occurring between U^4+ and UO_2^2+ in uranium isotope separation, was investigated. The height of the separation unit (H_q) in the presence of metal ion catalysts, as obtained from the separation experiments, was found to be almost coincident with the theoretical value of H_q calculated on the basis of the intra-solution acceleration mechanism of the metal ion, suggesting that the electron exchange mechanism in the ion exchanger is essentially the same as that in the solution when metal ion catalysts are present. Separation experiments with no metal ion catalyst, on the other hand, showed the electron exchange reaction in the ion exchanger to be substantially faster than that in the solution, suggesting an acceleration of the electron exchange reaction by the ion exchanger due to the close presence of higher-order Cl^- complexes of UO_2^2+ and U^4+ in the vicinity of the ion-exchange group. (author)

  3. Investigation of Chemical Exchange at Intermediate Exchange Rates using a Combination of Chemical Exchange Saturation Transfer (CEST) and Spin-Locking methods (CESTrho)

    Science.gov (United States)

    Kogan, Feliks; Singh, Anup; Cai, Keija; Haris, Mohammad; Hariharan, Hari; Reddy, Ravinder

    2011-01-01

    Proton exchange imaging is important as it allows for visualization and quantification of the distribution of specific metabolites with conventional MRI. Current exchange mediated MRI methods suffer from poor contrast as well as confounding factors that influence exchange rates. In this study we developed a new method to measure proton exchange which combines chemical exchange saturation transfer (CEST) and T1ρ magnetization preparation methods (CESTrho). We demonstrated that this new CESTrho sequence can detect proton exchange in the slow to intermediate exchange regimes. It has a linear dependence on proton concentration which allows it to be used to quantitatively measure changes in metabolite concentration. Additionally, the magnetization scheme of this new method can be customized to make it insensitive to changes in exchange rate while retaining its dependency on solute concentration. Finally, we showed the feasibility of using CESTrho in vivo. This sequence is able to detect proton exchange at intermediate exchange rates and is unaffected by the confounding factors that influence proton exchange rates thus making it ideal for the measurement of metabolites with exchangeable protons in this exchange regime. PMID:22009759

  4. Investigation of chemical exchange at intermediate exchange rates using a combination of chemical exchange saturation transfer (CEST) and spin-locking methods (CESTrho).

    Science.gov (United States)

    Kogan, Feliks; Singh, Anup; Cai, Keija; Haris, Mohammad; Hariharan, Hari; Reddy, Ravinder

    2012-07-01

    Proton exchange imaging is important as it allows for visualization and quantification of the distribution of specific metabolites with conventional MRI. Current exchange mediated MRI methods suffer from poor contrast as well as confounding factors that influence exchange rates. In this study we developed a new method to measure proton exchange which combines chemical exchange saturation transfer and T1ρ magnetization preparation methods (CESTrho). We demonstrated that this new CESTrho sequence can detect proton exchange in the slow to intermediate exchange regimes. It has a linear dependence on proton concentration which allows it to be used to quantitatively measure changes in metabolite concentration. Additionally, the magnetization scheme of this new method can be customized to make it insensitive to changes in exchange rate while retaining its dependency on solute concentration. Finally, we showed the feasibility of using CESTrho in vivo. This sequence is able to detect proton exchange at intermediate exchange rates and is unaffected by the confounding factors that influence proton exchange rates thus making it ideal for the measurement of metabolites with exchangeable protons in this exchange regime. Copyright © 2011 Wiley Periodicals, Inc.

  5. MARKET-MAKING STRATEGY IN THE SYSTEM OF ALGORITHMIC HIGH-FREQUENCY TRADING

    Directory of Open Access Journals (Sweden)

    A. V. Toropov

    2014-01-01

    Full Text Available The market maker is one of the most important participants in modern exchange trading: it increases market liquidity and reduces the difference between bid and ask prices (the spread). The paper presents an automatic market-making strategy for quoting options and other kinds of financial instruments on electronic markets. Quotes are based on theoretical pricing, which is a resource-intensive task. The algorithmic optimizations presented, in particular quote caching and the smoothing of underlying-asset price oscillations, yield up to a fourfold speedup for the quote-modification scenario on real market data. The quote-caching mechanism precalculates quotes in a band around the current underlying price. If the underlying price changes within the band, the algorithm sends an already-prepared quote-modification message instead of performing a new, complex computation. Smoothing of underlying-asset price oscillations prevents the band from moving constantly and reacts only to significant market moves. The size of the caching band that provides the best trade-off between quote-modification speed and resource consumption was determined experimentally (40 elements). In the case of quoting 36 options on Eurex Exchange, the average delay between an underlying price change and the quote modification is 277 µs. The measurements were carried out on a Sun X4170 M3 server (2x Xeon 2.9 GHz CPUs, 128 GB RAM) under the Solaris 10 operating system. The results obtained meet modern market-making requirements. The developed strategy is used by large European banks and trading firms.
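
    A toy sketch of the caching idea (the band size, tick and stub pricer are illustrative; the production system quotes options from a full theoretical-pricing model):

    ```python
    class QuoteCache:
        """Precompute quotes on a band of underlying prices; ticks inside the
        band are served from the cache, and only a significant move of the
        underlying recentres it (the smoothing)."""
        def __init__(self, pricer, tick, band=40, threshold=0.5):
            self.pricer, self.tick = pricer, tick   # pricer: price -> (bid, ask)
            self.band, self.threshold = band, threshold
            self.centre, self.cache = None, {}

        def _rebuild(self, s):
            self.centre = s
            grid = [s + (i - self.band // 2) * self.tick for i in range(self.band)]
            self.cache = {round(u / self.tick): self.pricer(u) for u in grid}

        def quote(self, s):
            half_width = self.band * self.tick / 2
            if self.centre is None or abs(s - self.centre) > self.threshold * half_width:
                self._rebuild(s)                    # significant move: recompute
            return self.cache.get(round(s / self.tick)) or self.pricer(s)

    qc = QuoteCache(pricer=lambda u: (u - 0.05, u + 0.05), tick=0.01)
    print(qc.quote(100.00))
    print(qc.quote(100.02))                         # served from the cache
    ```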

  6. On one pion exchange potential with quark exchange in the resonating group method

    International Nuclear Information System (INIS)

    Braeuer, K.; Faessler, A.; Fernandez, F.; Shimizu, K.

    1985-01-01

    The effect of quark exchange between different nucleons on the one pion exchange potential is studied in the framework of the resonating group method. The calculated phase shifts, which include the one pion exchange potential with quark exchange in addition to the one gluon plus sigma meson exchange, are shown to be consistent with experiments. The p-wave phase shifts in particular are improved by taking into account the effect of quark exchange on the one pion exchange potential. (orig.)

  7. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  8. New Secure E-mail System Based on Bio-Chaos Key Generation and Modified AES Algorithm

    Science.gov (United States)

    Hoomod, Haider K.; Radi, A. M.

    2018-05-01

    E-mail messages are exchanged between the sender's mailbox and the recipient's mailbox over open systems and insecure networks. These messages may be vulnerable to eavesdropping, which poses a real threat to privacy and data integrity from unauthorized persons. E-mail security includes the following properties: confidentiality, authentication, and message integrity. A strong encryption algorithm, such as the Advanced Encryption Standard (AES) or the Data Encryption Standard (DES), is needed to encrypt e-mail messages, here combined with biometric recognition and a chaotic system. The proposed secure e-mail system uses a modified AES algorithm with a secret bio-chaos key that consists of a biometric (fingerprint) and chaotic systems (Lu and Lorenz). This modification makes the proposed system more sensitive and random. The execution time for both encryption and decryption in the proposed system is much lower than that of the original AES, and the system is compatible with all mail servers.
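
    A hedged sketch of the key-generation idea only: a chaotic map seeded from a biometric digest. The paper combines a fingerprint with Lu/Lorenz systems and a modified AES, none of which is reproduced here; this is not a substitute for a vetted key-derivation function:

    ```python
    import hashlib

    def bio_chaos_key(fingerprint_bytes, n=32, r=3.99):
        """Derive n key bytes from a logistic map x -> r*x*(1-x) seeded by a
        SHA-256 digest of a (hypothetical) fingerprint template."""
        seed = hashlib.sha256(fingerprint_bytes).digest()
        x = (int.from_bytes(seed[:8], "big") % 10**8) / 10**8
        x = min(max(x, 0.01), 0.99)          # keep the map in its chaotic band
        out = bytearray()
        for _ in range(n):
            for _ in range(17):              # discard iterates between bytes
                x = r * x * (1 - x)
            out.append(int(x * 256) % 256)
        return bytes(out)

    key = bio_chaos_key(b"minutiae-template-bytes")   # 256 bits of key material
    ```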

  9. Performance of an electronic health record-based phenotype algorithm to identify community associated methicillin-resistant Staphylococcus aureus cases and controls for genetic association studies

    Directory of Open Access Journals (Sweden)

    Kathryn L. Jackson

    2016-11-01

    Full Text Available Abstract Background Community associated methicillin-resistant Staphylococcus aureus (CA-MRSA) is one of the most common causes of skin and soft tissue infections in the United States, and a variety of genetic host factors are suspected to be risk factors for recurrent infection. Based on the CDC definition, we have developed and validated an electronic health record (EHR) based CA-MRSA phenotype algorithm utilizing both structured and unstructured data. Methods The algorithm was validated at three eMERGE consortium sites, and positive predictive value, negative predictive value and sensitivity were calculated. The algorithm was then run and data collected across seven total sites. The resulting data were used in GWAS analysis. Results Across seven sites, the CA-MRSA phenotype algorithm identified a total of 349 cases and 7761 controls among the genotyped European and African American biobank populations. PPV ranged from 68 to 100% for cases and 96 to 100% for controls; sensitivity ranged from 94 to 100% for cases and 75 to 100% for controls. Frequency of cases in the populations varied widely by site. There were no plausible GWAS-significant (p < 5 × 10^-8) findings. Conclusions Differences in EHR data representation and screening patterns across sites may have affected identification of cases and controls and accounted for varying frequencies across sites. Future work identifying these patterns is necessary.

  10. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures.

  11. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.

  12. Denni Algorithm: An Enhancement of the SMS (Scan, Move and Sort) Algorithm

    Science.gov (United States)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithmic researchers, and many resources are invested in devising more effective sorting algorithms. For this purpose, many existing sorting algorithms have been examined in terms of the efficiency of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential algorithm-design techniques are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the well-known algorithms that makes the sorting process more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm is compared with the SMS algorithm, and the results were promising.

  13. The Inter-Annual Variability Analysis of Carbon Exchange in a Low Arctic Fen Uncovers The Climate Sensitivity And The Uncertainties Around Net Ecosystem Exchange Partitioning

    Science.gov (United States)

    Blanco, E. L.; Lund, M.; Williams, M. D.; Christensen, T. R.; Tamstorf, M. P.

    2015-12-01

    An improvement in our process-based understanding of CO2 exchanges in the Arctic, and their climate sensitivity, is critical for examining the role of tundra ecosystems in a changing climate. Arctic organic carbon storage has seen increased attention in recent years due to the large potential for carbon releases following thaw. Our knowledge about the exact scale and sensitivity of a phase-change of these C stocks is, however, limited. Minor variations in Gross Primary Production (GPP) and Ecosystem Respiration (Reco) driven by changes in the climate can lead to either C sink or C source states, which will likely impact the overall C cycle of the ecosystem. Eddy covariance data are usually used to partition Net Ecosystem Exchange (NEE) into GPP and Reco by means of flux separation algorithms. However, different partitioning approaches lead to different estimates, as well as undefined uncertainties. The main objectives of this study are to use model-data fusion approaches to (1) determine the inter-annual variability in C source/sink strength for an Arctic fen, and attribute such variations to GPP vs. Reco, (2) investigate the climate sensitivity of these processes, and (3) explore the uncertainties in NEE partitioning. The intention is to elaborate on the information gathered in an existing catchment area under an extensive cross-disciplinary ecological monitoring program in low Arctic West Greenland, established under the auspices of the Greenland Ecosystem Monitoring (GEM) program. The use of such a thorough long-term (7-year) dataset for exploring the inter-annual variability of carbon exchange, the related driving factors and the NEE partitioning uncertainties provides novel input into our understanding of land-atmosphere CO2 exchange.

  14. Action dependent heuristic dynamic programming based residential energy scheduling with home energy inter-exchange

    International Nuclear Information System (INIS)

    Xu, Yancai; Liu, Derong; Wei, Qinglai

    2015-01-01

    Highlights: • The algorithm is developed in a two-household energy management environment. • We develop the absent-energy penalty cost for the first time. • The algorithm has the ability to keep adapting in real-time operation. • Its application can lower total costs and achieve better load balancing. - Abstract: Residential energy scheduling is a hot topic nowadays against the background of energy saving and environmental protection worldwide. To achieve this objective, a new residential energy scheduling algorithm based on action-dependent heuristic dynamic programming is developed for energy management. The algorithm works under residential real-time pricing with two adjacent housing units that can exchange energy, which can reduce the overall cost and enhance renewable energy efficiency over long-term operation. It is designed to obtain the optimal control policy for managing the directions and amounts of electricity energy flows. The algorithm's architecture is constructed mainly from neural networks, with the learned characteristics encoded in the links between layers. To stay close to real situations, many constraints, such as the maximum charging/discharging power of the batteries, are taken into account. The absent-energy penalty cost is developed for the first time as part of the performance index function. When the environment changes, the residential energy scheduling algorithm acquires new features and keeps adapting in real-time operation. Simulation results show that the developed algorithm is beneficial to energy conservation.

  15. Fault diagnosis for the heat exchanger of the aircraft environmental control system based on the strong tracking filter.

    Science.gov (United States)

    Ma, Jian; Lu, Chen; Liu, Hongmei

    2015-01-01

    The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system's efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger.

  16. Fault diagnosis for the heat exchanger of the aircraft environmental control system based on the strong tracking filter.

    Directory of Open Access Journals (Sweden)

    Jian Ma

    Full Text Available The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system's efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger.

  17. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data structures are also covered.

  18. Phthalates impact human health: Epidemiological evidences and plausible mechanism of action.

    Science.gov (United States)

    Benjamin, Sailas; Masai, Eiji; Kamimura, Naofumi; Takahashi, Kenji; Anderson, Robin C; Faisal, Panichikkal Abdul

    2017-10-15

    Despite rising alarm over the hazardous nature of various phthalates and their metabolites, heavy use of phthalates as plasticizers in plastics and as additives in innumerable consumer products continues due to their low cost, attractive properties, and the lack of suitable alternatives. Globally, in silico computational, in vitro mechanistic, in vivo preclinical, and limited clinical or epidemiological human studies have shown that over a dozen phthalates and their metabolites, passively ingested by humans from the general environment, foods, drinks, breathing air, and routine household products, cause various dysfunctions. This review therefore addresses the health hazards posed by phthalates to children and adolescents; epigenetic modulation; reproductive toxicity in women and men; insulin resistance and type II diabetes; overweight and obesity; skeletal anomalies; allergy and asthma; cancer; etc., coupled with a description of the major phthalates and their general uses, phthalate exposure routes, biomonitoring and risk assessment, a special account of endocrine disruption, and finally a plausible molecular cross-talk with a unique mechanism of action. This clinically focused, comprehensive review of the hazards of phthalates should help the general population, academia, scientists, clinicians, environmentalists, and law or policy makers decide whether the use of phthalates should continue unabated, be regulated by law, or be phased out entirely. Copyright © 2017. Published by Elsevier B.V.

  19. Preliminary ILAW Formulation Algorithm Description, 24590 LAW RPT-RT-04-0003, Rev. 1

    Energy Technology Data Exchange (ETDEWEB)

    Kruger, Albert A.; Kim, Dong-Sang; Vienna, John D.

    2013-12-03

    The U.S. Department of Energy (DOE), Office of River Protection (ORP), has contracted with Bechtel National, Inc. (BNI) to design, construct, and commission the Hanford Tank Waste Treatment and Immobilization Plant (WTP) at the Hanford Site (DOE 2000). This plant is designed to operate for 40 years and treat roughly 50 million gallons of mixed hazardous high-level waste (HLW) stored in 177 underground tanks at the Hanford Site. The process involves separating the high-level and low-activity waste (LAW) fractions through filtration, leaching, Cs ion exchange, and precipitation. Each fraction will be separately vitrified into borosilicate waste glass. This report documents the initial algorithm for use by Hanford WTP in batching LAW and glass-forming chemicals (GFCs) in the LAW melter feed preparation vessel (MFPV). Algorithm inputs include the chemical analyses of the pretreated LAW in the concentrate receipt vessel (CRV), the volume of the MFPV heel, and the compositions of individual GFCs. In addition to these inputs, uncertainties in the LAW composition and processing parameters are included in the algorithm.
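
    The batching step such an algorithm performs can be illustrated with a toy oxide mass balance: given the waste oxides already in the vessel and the compositions of the glass-forming chemicals, solve for GFC masses that move the batch toward a target glass composition. All oxides, compositions, and targets below are invented for illustration; the real algorithm also propagates composition and process uncertainties.

    ```python
    # Toy GFC batching by least squares over oxide balances (illustrative).
    import numpy as np
    from scipy.optimize import least_squares

    waste_kg = 100.0
    waste = np.array([0.05, 0.01, 0.60])   # kg SiO2, B2O3, Na2O per kg waste
    gfc = np.array([[0.99, 0.00, 0.00],    # silica additive composition
                    [0.00, 0.95, 0.00]])   # boron additive (as B2O3)
    target = np.array([0.45, 0.10, 0.15])  # target glass mass fractions

    def residual(m):                       # oxide balance vs. target
        total = waste_kg + m.sum()
        return waste_kg * waste + gfc.T @ m - target * total

    sol = least_squares(residual, x0=np.ones(2), bounds=(0.0, np.inf))
    print(dict(zip(["silica_kg", "boron_kg"], sol.x.round(1))))
    ```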

  20. A java based simulator with user interface to simulate ventilated patients

    Directory of Open Access Journals (Sweden)

    Stehle P.

    2015-09-01

    Full Text Available Mechanical ventilation is a life-saving intervention which, despite its routine use, poses the risk of inflicting further damage to the lung tissue if ventilator settings are chosen inappropriately. Medical decision support systems may help to prevent such injuries while providing the optimal settings to reach a defined clinical goal. In order to develop and verify decision support algorithms, a test bench simulating a patient’s behaviour is needed. We propose a Java based system that allows simulation of respiratory mechanics, gas exchange and cardiovascular dynamics of a mechanically ventilated patient. The implemented models are allowed to interact and are interchangeable, enabling the simulation of various clinical scenarios. Model simulations run in real-time and show physiologically plausible results.
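
    The architecture described, interchangeable models sharing one interface and stepped together in a real-time loop, can be sketched in a few lines. The simulator itself is written in Java; the Python sketch below and all class names, equations, and constants in it are illustrative assumptions, not the authors' code.

    ```python
    # Interchangeable patient models stepped in one loop (illustrative).
    class Model:
        def step(self, dt, state):                 # shared interface
            raise NotImplementedError

    class RespiratoryMechanics(Model):
        def step(self, dt, state):
            R, C = 10.0, 0.05                      # resistance, compliance
            state["V"] += dt * (state["Paw"] - state["V"] / C) / R

    class GasExchange(Model):
        def step(self, dt, state):
            target = 75.0 + 40.0 * state["V"]      # toy volume-PaO2 coupling
            state["PaO2"] += dt * (target - state["PaO2"]) / 5.0

    state = {"Paw": 12.0, "V": 0.0, "PaO2": 80.0}
    models = [RespiratoryMechanics(), GasExchange()]  # swappable model list
    for _ in range(500):                              # pace with a timer for
        for m in models:                              # real-time execution
            m.step(0.02, state)
    print(round(state["V"], 3), round(state["PaO2"], 1))
    ```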

  1. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods for making files more secure. One of these methods is cryptography. Cryptography secures a file by encoding it so that the original content is hidden; anyone without the corresponding key cannot decrypt the hidden code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem, which uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The results show that when the TEA algorithm encrypts the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table rendered as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext.
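
    Since TEA itself is public, the symmetric half of the scheme is easy to sketch. The routine below is a standard TEA block encryption (64-bit block, 128-bit key, 32 rounds); the key and block values are illustrative, and the LUC key-encryption step of the paper is not reproduced here.

    ```python
    # Standard TEA block encryption: 64-bit block, 128-bit key, 32 rounds.
    MASK = 0xFFFFFFFF
    DELTA = 0x9E3779B9                      # TEA key-schedule constant

    def tea_encrypt_block(v0, v1, key):
        k0, k1, k2, k3 = key
        s = 0
        for _ in range(32):
            s = (s + DELTA) & MASK
            v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & MASK
            v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & MASK
        return v0, v1

    # One 8-character (64-bit) plaintext block yields one 64-bit cipher
    # block; written out as hexadecimal text it occupies sixteen bytes,
    # matching the ciphertext growth the paper observes.
    block = (0x01234567, 0x89ABCDEF)
    key = (0xA56BABCD, 0x0000FFFF, 0xFFFF0000, 0xABCDEF01)
    print([hex(w) for w in tea_encrypt_block(*block, key)])
    ```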

  2. A flocking algorithm for multi-agent systems with connectivity preservation under hybrid metric-topological interactions.

    Science.gov (United States)

    He, Chenlong; Feng, Zuren; Ren, Zhigang

    2018-01-01

    In this paper, we propose a connectivity-preserving flocking algorithm for multi-agent systems in which the neighbor set of each agent is determined by the hybrid metric-topological distance so that the interaction topology can be represented as the range-limited Delaunay graph, which combines the properties of the commonly used disk graph and Delaunay graph. As a result, the proposed flocking algorithm has the following advantages over the existing ones. First, the range-limited Delaunay graph is sparser than the disk graph, so the information exchange among agents is reduced significantly. Second, some links irrelevant to the connectivity can be dynamically deleted during the evolution of the system. Thus, the proposed flocking algorithm is more flexible than existing algorithms, where links are not allowed to be disconnected once they are created. Finally, the multi-agent system spontaneously generates a regular quasi-lattice formation without imposing the constraint on the ratio of the sensing range of the agent to the desired distance between two adjacent agents. With the interaction topology induced by the hybrid distance, the proposed flocking algorithm can still be implemented in a distributed manner. We prove that the proposed flocking algorithm can steer the multi-agent system to a stable flocking motion, provided the initial interaction topology of multi-agent systems is connected and the hysteresis in link addition is smaller than a derived upper bound. The correctness and effectiveness of the proposed algorithm are verified by extensive numerical simulations, where the flocking algorithms based on the disk and Delaunay graph are compared.
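
    The neighbor rule at the heart of the algorithm is concrete enough to sketch: build a Delaunay triangulation of the agent positions and keep only the edges shorter than the sensing range. The snippet below is a plausible reading of the range-limited Delaunay graph construction, not the authors' code; the positions and range value are illustrative.

    ```python
    # Range-limited Delaunay neighbor sets (illustrative sketch).
    import numpy as np
    from scipy.spatial import Delaunay

    def range_limited_delaunay(positions, sensing_range):
        tri = Delaunay(positions)
        neighbors = {i: set() for i in range(len(positions))}
        for simplex in tri.simplices:          # each triangle's vertex trio
            for i in simplex:
                for j in simplex:
                    if i < j and np.linalg.norm(
                            positions[i] - positions[j]) <= sensing_range:
                        neighbors[int(i)].add(int(j))
                        neighbors[int(j)].add(int(i))
        return neighbors

    pts = np.random.rand(20, 2) * 10.0         # 20 agents in a 10 x 10 field
    print(range_limited_delaunay(pts, sensing_range=3.0))
    ```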

  3. A flocking algorithm for multi-agent systems with connectivity preservation under hybrid metric-topological interactions.

    Directory of Open Access Journals (Sweden)

    Chenlong He

    Full Text Available In this paper, we propose a connectivity-preserving flocking algorithm for multi-agent systems in which the neighbor set of each agent is determined by the hybrid metric-topological distance so that the interaction topology can be represented as the range-limited Delaunay graph, which combines the properties of the commonly used disk graph and Delaunay graph. As a result, the proposed flocking algorithm has the following advantages over the existing ones. First, the range-limited Delaunay graph is sparser than the disk graph, so the information exchange among agents is reduced significantly. Second, some links irrelevant to the connectivity can be dynamically deleted during the evolution of the system. Thus, the proposed flocking algorithm is more flexible than existing algorithms, where links are not allowed to be disconnected once they are created. Finally, the multi-agent system spontaneously generates a regular quasi-lattice formation without imposing the constraint on the ratio of the sensing range of the agent to the desired distance between two adjacent agents. With the interaction topology induced by the hybrid distance, the proposed flocking algorithm can still be implemented in a distributed manner. We prove that the proposed flocking algorithm can steer the multi-agent system to a stable flocking motion, provided the initial interaction topology of multi-agent systems is connected and the hysteresis in link addition is smaller than a derived upper bound. The correctness and effectiveness of the proposed algorithm are verified by extensive numerical simulations, where the flocking algorithms based on the disk and Delaunay graph are compared.

  4. Minimizing Back Exchange in the Hydrogen Exchange-Mass Spectrometry Experiment

    Science.gov (United States)

    Walters, Benjamin T.; Ricciuti, Alec; Mayne, Leland; Englander, S. Walter

    2012-12-01

    The addition of mass spectrometry (MS) analysis to the hydrogen exchange (HX) proteolytic fragmentation experiment extends powerful HX methodology to the study of large biologically important proteins. A persistent problem is the degradation of HX information due to back exchange of deuterium label during the fragmentation-separation process needed to prepare samples for MS measurement. This paper reports a systematic analysis of the factors that influence back exchange (solution pH, ionic strength, desolvation temperature, LC column interaction, flow rates, system volume). The many peptides exhibit a range of back exchange due to intrinsic amino acid HX rate differences. Accordingly, large back exchange leads to large variability in D-recovery from one residue to another as well as one peptide to another that cannot be corrected for by reference to any single peptide-level measurement. The usual effort to limit back exchange by limiting LC time provides little gain. Shortening the LC elution gradient by 3-fold only reduced back exchange by ~2 %, while sacrificing S/N and peptide count. An unexpected dependence of back exchange on ionic strength as well as pH suggests a strategy in which solution conditions are changed during sample preparation. Higher salt should be used in the first stage of sample preparation (proteolysis and trapping) and lower salt (<20 mM) and pH in the second stage before electrospray injection. Adjustment of these and other factors together with recent advances in peptide fragment detection yields hundreds of peptide fragments with D-label recovery of 90 % ± 5 %.

  5. Minimizing back exchange in the hydrogen exchange-mass spectrometry experiment.

    Science.gov (United States)

    Walters, Benjamin T; Ricciuti, Alec; Mayne, Leland; Englander, S Walter

    2012-12-01

    The addition of mass spectrometry (MS) analysis to the hydrogen exchange (HX) proteolytic fragmentation experiment extends powerful HX methodology to the study of large biologically important proteins. A persistent problem is the degradation of HX information due to back exchange of deuterium label during the fragmentation-separation process needed to prepare samples for MS measurement. This paper reports a systematic analysis of the factors that influence back exchange (solution pH, ionic strength, desolvation temperature, LC column interaction, flow rates, system volume). The many peptides exhibit a range of back exchange due to intrinsic amino acid HX rate differences. Accordingly, large back exchange leads to large variability in D-recovery from one residue to another as well as one peptide to another that cannot be corrected for by reference to any single peptide-level measurement. The usual effort to limit back exchange by limiting LC time provides little gain. Shortening the LC elution gradient by 3-fold only reduced back exchange by ~2%, while sacrificing S/N and peptide count. An unexpected dependence of back exchange on ionic strength as well as pH suggests a strategy in which solution conditions are changed during sample preparation. Higher salt should be used in the first stage of sample preparation (proteolysis and trapping) and lower salt (<20 mM) and pH in the second stage before electrospray injection. Adjustment of these and other factors together with recent advances in peptide fragment detection yields hundreds of peptide fragments with D-label recovery of 90% ± 5%.
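
    The correction that back exchange makes necessary is worth stating concretely. A common approach (after Zhang and Smith) rescales the measured centroid mass by the recovery observed for undeuterated and fully deuterated controls; the function and numbers below are an illustrative sketch of that standard correction, not a procedure from this paper.

    ```python
    # Standard per-peptide back-exchange correction (illustrative values).
    def corrected_deuterium(m, m_0pct, m_100pct, n_exchangeable):
        """Rescale centroid mass m using undeuterated (m_0pct) and
        fully deuterated (m_100pct) control measurements."""
        return n_exchangeable * (m - m_0pct) / (m_100pct - m_0pct)

    # Peptide with 10 exchangeable amides and ~90% D-recovery in the control:
    print(corrected_deuterium(m=1005.4, m_0pct=1000.0,
                              m_100pct=1009.0, n_exchangeable=10))
    ```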

  6. Enhanced Sampling in Molecular Dynamics Using Metadynamics, Replica-Exchange, and Temperature-Acceleration

    Directory of Open Access Journals (Sweden)

    Cameron Abrams

    2013-12-01

    Full Text Available We review a selection of methods for performing enhanced sampling in molecular dynamics simulations. We consider methods based on collective variable biasing and on tempering, and offer both historical and contemporary perspectives. In collective-variable biasing, we first discuss methods stemming from thermodynamic integration that use mean force biasing, including the adaptive biasing force algorithm and temperature acceleration. We then turn to methods that use bias potentials, including umbrella sampling and metadynamics. We next consider parallel tempering and replica-exchange methods. We conclude with a brief presentation of some combination methods.
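
    The replica-exchange step the review covers reduces to a single acceptance test: neighboring temperature replicas swap configurations with Metropolis probability min(1, exp[(β_i − β_j)(E_i − E_j)]). A sketch of that test, with illustrative energies and temperatures:

    ```python
    # Metropolis acceptance test for a replica-exchange swap.
    import math
    import random

    def attempt_swap(E_i, E_j, T_i, T_j, k_B=1.0):
        """True if the replicas at (E_i, T_i) and (E_j, T_j) swap."""
        delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_i - E_j)
        if delta >= 0.0:
            return True                      # always accept favorable swaps
        return random.random() < math.exp(delta)

    print(attempt_swap(E_i=-120.0, E_j=-100.0, T_i=300.0, T_j=330.0))
    ```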

  7. An Autonomous Connectivity Restoration Algorithm Based on Finite State Machine for Wireless Sensor-Actor Networks

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2018-01-01

    Full Text Available With the development of autonomous unmanned intelligent systems, such as unmanned boats, unmanned planes and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and network connectivity is prone to be lost. Network self-healing mechanisms have become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on a finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node will be relocated to replace the abnormal node. In the case of multiple node abnormality, a regional network restoration algorithm is introduced. It is designed to reduce the overhead of node movements while restoration happens. Simulation results indicate the proposed algorithm has better performance in total moving distance and the number of relocated nodes compared with other representative restoration algorithms.
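
    The critical-node test at the core of this scheme corresponds to finding articulation (cut) vertices of the communication graph: nodes whose removal disconnects the network. A sketch using networkx follows; the FSM bookkeeping and relocation logic of the paper are not reproduced, and the topology is illustrative.

    ```python
    # Critical nodes = articulation points of the communication graph.
    import networkx as nx

    def critical_nodes(edges):
        """Nodes whose failure would disconnect the network."""
        return set(nx.articulation_points(nx.Graph(edges)))

    topology = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]
    print(critical_nodes(topology))  # {1, 2, 4} for this topology
    ```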

  8. An Autonomous Connectivity Restoration Algorithm Based on Finite State Machine for Wireless Sensor-Actor Networks.

    Science.gov (United States)

    Zhang, Ying; Wang, Jun; Hao, Guan

    2018-01-08

    With the development of autonomous unmanned intelligent systems, such as unmanned boats, unmanned planes and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and network connectivity is prone to be lost. Network self-healing mechanisms have become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on a finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node will be relocated to replace the abnormal node. In the case of multiple node abnormality, a regional network restoration algorithm is introduced. It is designed to reduce the overhead of node movements while restoration happens. Simulation results indicate the proposed algorithm has better performance in total moving distance and the number of relocated nodes compared with other representative restoration algorithms.

  9. An Autonomous Connectivity Restoration Algorithm Based on Finite State Machine for Wireless Sensor-Actor Networks

    Science.gov (United States)

    Zhang, Ying; Wang, Jun; Hao, Guan

    2018-01-01

    With the development of autonomous unmanned intelligent systems, such as unmanned boats, unmanned planes and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and network connectivity is prone to be lost. Network self-healing mechanisms have become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on a finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node will be relocated to replace the abnormal node. In the case of multiple node abnormality, a regional network restoration algorithm is introduced. It is designed to reduce the overhead of node movements while restoration happens. Simulation results indicate the proposed algorithm has better performance in total moving distance and the number of relocated nodes compared with other representative restoration algorithms. PMID:29316702

  10. Mindfulness and Cardiovascular Disease Risk: State of the Evidence, Plausible Mechanisms, and Theoretical Framework

    Science.gov (United States)

    Schuman-Olivier, Zev; Britton, Willoughby B.; Fresco, David M.; Desbordes, Gaelle; Brewer, Judson A.; Fulwiler, Carl

    2016-01-01

    The purpose of this review is to provide (1) a synopsis on relations of mindfulness with cardiovascular disease (CVD) and major CVD risk factors, and (2) an initial consensus-based overview of mechanisms and theoretical framework by which mindfulness might influence CVD. Initial evidence, often of limited methodological quality, suggests possible impacts of mindfulness on CVD risk factors including physical activity, smoking, diet, obesity, blood pressure, and diabetes regulation. Plausible mechanisms include (1) improved attention control (e.g., ability to hold attention on experiences related to CVD risk, such as smoking, diet, physical activity, and medication adherence), (2) emotion regulation (e.g., improved stress response, self-efficacy, and skills to manage craving for cigarettes, palatable foods, and sedentary activities), and (3) self-awareness (e.g., self-referential processing and awareness of physical sensations due to CVD risk factors). Understanding mechanisms and theoretical framework should improve etiologic knowledge, providing customized mindfulness intervention targets that could enable greater mindfulness intervention efficacy. PMID:26482755

  11. Reciprocity-based reasons for benefiting research participants: most fail, the most plausible is problematic.

    Science.gov (United States)

    Sofaer, Neema

    2014-11-01

    A common reason for giving research participants post-trial access (PTA) to the trial intervention appeals to reciprocity, the principle, stated most generally, that if one person benefits a second, the second should reciprocate: benefit the first in return. Many authors consider it obvious that reciprocity supports PTA. Yet their reciprocity principles differ, with many authors apparently unaware of alternative versions. This article is the first to gather the range of reciprocity principles. It finds that: (1) most are false. (2) The most plausible principle, which is also problematic, applies only when participants experience significant net risks or burdens. (3) Seldom does reciprocity support PTA for participants or give researchers stronger reason to benefit participants than equally needy non-participants. (4) Reciprocity fails to explain the common view that it is bad when participants in a successful trial have benefited from the trial intervention but lack PTA to it. © 2013 John Wiley & Sons Ltd.

  12. How did China's foreign exchange reform affect the efficiency of foreign exchange market?

    Science.gov (United States)

    Ning, Ye; Wang, Yiming; Su, Chi-wei

    2017-10-01

    This study compares the market efficiency of China's onshore and offshore foreign exchange markets before and after the foreign exchange reform on August 11, 2015. We use the multifractal detrended fluctuation analysis of the onshore and offshore RMB/USD spot exchange rate series as the basis. We then find that the onshore foreign exchange market before the reform has the lowest market efficiency, which increased after the reform. The offshore foreign exchange market before the reform has the highest market efficiency, which dropped after the reform. This finding implies increased efficiency of the onshore foreign exchange market and a loss of efficiency in the offshore foreign exchange market. We also find that the offshore foreign exchange market is more efficient than the onshore market and that the gap shrank after the reform. Changes in the intervention of the People's Bank of China since the reform are a possible explanation for the changes in the efficiency of the foreign exchange market.

  13. A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs.

    Science.gov (United States)

    Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan

    2015-01-01

    In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, a channel load forecasting algorithm based on a recursive Kalman filter and a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load was kept within a predefined range; therefore, channel congestion was prevented. Based on this method, the CLF-BTPC, a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey method that involved the collection of floating car data along a major traffic road in Changchun City is employed. By comparing this forecast with the measured channel loads, the proposed KF-BCLF algorithm was proven to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network.
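
    The forecasting core of such a scheme can be sketched as a scalar Kalman recursion: predict the next channel load, then correct the prediction with the newly measured load. The random-walk process model and noise values below are illustrative assumptions; the paper couples the filter with a multiple regression model over several load factors.

    ```python
    # Scalar Kalman predict/update cycle for channel-load forecasting.
    def kalman_step(x_est, P, z, Q=0.01, R=0.1):
        x_pred, P_pred = x_est, P + Q          # random-walk prediction
        K = P_pred / (P_pred + R)              # Kalman gain
        x_new = x_pred + K * (z - x_pred)      # correct with measurement z
        return x_new, (1 - K) * P_pred

    x, P = 0.5, 1.0
    for z in [0.52, 0.58, 0.61, 0.57]:         # measured channel loads
        x, P = kalman_step(x, P, z)
    print(round(x, 3))  # basis for the next beacon-power adjustment
    ```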

  14. A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs.

    Directory of Open Access Journals (Sweden)

    Yuanfu Mo

    Full Text Available In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, a channel load forecasting algorithm based on a recursive Kalman filter and a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load was kept within a predefined range; therefore, channel congestion was prevented. Based on this method, the CLF-BTPC, a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey method that involved the collection of floating car data along a major traffic road in Changchun City is employed. By comparing this forecast with the measured channel loads, the proposed KF-BCLF algorithm was proven to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network.

  15. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    Science.gov (United States)

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-01

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs and the Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
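
    The comparison the authors run, low-discrepancy versus pseudorandom sampling, is easy to reproduce in miniature: price a European call by Monte Carlo, drawing the terminal-price normals from a Halton sequence. The sketch below uses SciPy's qmc module (SciPy >= 1.7) and the standard Black-Scholes terminal-price formula; all market parameters are illustrative.

    ```python
    # European call priced by quasi-Monte Carlo with a Halton sequence.
    import numpy as np
    from scipy.stats import norm, qmc

    S0, K, r, sigma, T, n = 100.0, 105.0, 0.05, 0.2, 1.0, 1 << 14

    u = qmc.Halton(d=1, scramble=True).random(n)    # low-discrepancy uniforms
    z = norm.ppf(u.clip(1e-12, 1 - 1e-12)).ravel()  # map to standard normals
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
    print(round(price, 4))  # ~8.0 analytically for these parameters
    ```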

  16. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Atanassov, E.; Dimitrov, D., E-mail: d.slavov@bas.bg, E-mail: emanouil@parallel.bas.bg, E-mail: gurov@bas.bg; Gurov, T. [Institute of Information and Communication Technologies, BAS, Acad. G. Bonchev str., bl. 25A, 1113 Sofia (Bulgaria)

    2015-10-28

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs and the Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.

  17. Coupled eco-hydrology and biogeochemistry algorithms enable the simulation of water table depth effects on boreal peatland net CO2 exchange

    Directory of Open Access Journals (Sweden)

    M. Mezbahuddin

    2017-12-01

    Full Text Available Water table depth (WTD) effects on net ecosystem CO2 exchange of boreal peatlands are largely mediated by hydrological effects on peat biogeochemistry and the ecophysiology of peatland vegetation. The lack of representation of these effects in carbon models currently limits our predictive capacity for changes in boreal peatland carbon deposits under potential future drier and warmer climates. We examined whether a process-level coupling of a prognostic WTD with (1) oxygen transport, which controls energy yields from microbial and root oxidation–reduction reactions, and (2) vascular and nonvascular plant water relations could explain mechanisms that control variations in net CO2 exchange of a boreal fen under contrasting WTD conditions, i.e., shallow vs. deep WTD. Such coupling of eco-hydrology and biogeochemistry algorithms in a process-based ecosystem model, ecosys, was tested against net ecosystem CO2 exchange measurements in a western Canadian boreal fen peatland over a period of drier-weather-driven gradual WTD drawdown. A May–October WTD drawdown of ~0.25 m from 2004 to 2009 hastened oxygen transport to microbial and root surfaces, enabling greater microbial and root energy yields and peat and litter decomposition, which raised modeled ecosystem respiration (Re) by 0.26 µmol CO2 m−2 s−1 per 0.1 m of WTD drawdown. It also augmented nutrient mineralization, and hence root nutrient availability and uptake, which resulted in improved leaf nutrient (nitrogen) status that facilitated carboxylation and raised modeled vascular gross primary productivity (GPP) and plant growth. The increase in modeled vascular GPP exceeded declines in modeled nonvascular (moss) GPP due to greater shading from increased vascular plant growth and moss drying from near-surface peat desiccation, thereby causing a net increase in modeled growing season GPP by 0.39 µmol CO2 m−2 s−1 per 0.1 m of WTD drawdown. Similar increases in

  18. Distributed Classification of Localization Attacks in Sensor Networks Using Exchange-Based Feature Extraction and Classifier

    Directory of Open Access Journals (Sweden)

    Su-Zhe Wang

    2016-01-01

    Full Text Available Secure localization under different forms of attack has become an essential task in wireless sensor networks. Despite the significant research efforts in detecting the malicious nodes, the problem of localization attack type recognition has not yet been well addressed. Motivated by this concern, we propose a novel exchange-based attack classification algorithm. This is achieved by a distributed expectation maximization extractor integrated with the PECPR-MKSVM classifier. First, the mixed distribution features based on the probabilistic modeling are extracted using a distributed expectation maximization algorithm. After feature extraction, by introducing theory from support vector machines, an extensive contractive Peaceman-Rachford splitting method is derived to build the distributed classifier that diffuses the iteration calculation among neighbor sensors. To verify the efficiency of the distributed recognition scheme, four groups of experiments were carried out under various conditions. The average success rate of the proposed classification algorithm obtained in the presented experiments for external attacks is excellent, reaching about 93.9% in some cases. These testing results demonstrate that the proposed algorithm can produce a much greater recognition rate, and it can also be more robust and efficient even in the presence of an excessive malicious scenario.

  19. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  20. Investigating the effects of liquidity and exchange rate on Tehran Stock Exchange

    Directory of Open Access Journals (Sweden)

    Younos Vakil Alroaia

    2014-08-01

    Full Text Available This paper presents an empirical investigation of the effects of two macroeconomic factors, namely the exchange rate and liquidity, on the stock index. The study was applied in Iran to the major index of the Tehran Stock Exchange over the period 2001-2011. The results show that the exchange rate had a negative impact on the stock index over the period of investigation. This is because when the currency is devalued, working capital decreases and firms do not have enough money to purchase raw materials, pay wages, etc. In addition, liquidity maintained a direct and positive relationship with the stock index. However, the impact of liquidity appears to be bigger than that of the exchange rate.

  1. Theory and design of heat exchanger : air cooled plate, spiral heat exchanger

    International Nuclear Information System (INIS)

    Min, Ui Dong

    1960-02-01

    This book deals with the air cooled heat exchanger, introducing the heat rejection system, the wet surface cooler in new form, an explanation of structure and design, materials, basic design elements such as plenum chambers and the fan ring, the finned tube fouling factor, airflow in forced draft, and fan design. It also covers the plate heat exchanger and the spiral heat exchanger, describing their summary, selection, basic design, device and safety function, maintenance, the structure of the plate heat exchanger, frames and connector plates, and the basics of the spiral tube heat exchanger.

  2. Algorithms and programs for solution of static and dynamic characteristics of counterflow heat exchangers with dissociating coolant

    International Nuclear Information System (INIS)

    Nitej, N.V.; Sharovarov, G.A.

    1982-01-01

    A method for estimating the characteristics of counterflow heat exchangers is presented. The mathematical description of the processes consists of the mass, energy, and momentum conservation equations for both coolants and the energy conservation equation for the wall which divides them. In the presence of chemical reactions, the system is supplemented by equations characterizing the kinetics of their progress. Methods for the numerical solution of the static and dynamic problems have been chosen, and computer programs in the Fortran language have been developed. The schemes for solving both problems are constructed so that the conservation equations are placed in the main program, while coolant characteristics such as properties, heat transfer and friction coefficients, and the mechanism of chemical reaction are concentrated in the subprogram unit. This allows a single method of solution for the flow of single-phase and two-phase coolants at above-critical and supercritical parameters. Evaluation results are given for three heat exchangers: one heating the N2O4 gas phase with the heat of flue gas; one cooling N2O4 at supercritical parameters with water; and a regenerator operating on N2O4.

  3. The centroidal algorithm in molecular similarity and diversity calculations on confidential datasets

    Science.gov (United States)

    Trepalin, Sergey; Osadchiy, Nikolay

    2005-09-01

    Chemical structure provides an exhaustive description of a compound, but it is often proprietary and thus an impediment to the exchange of information. For example, structure disclosure is often needed for the selection of the most similar or dissimilar compounds. The authors propose a centroidal algorithm based on structural fragments (screens) that can be used efficiently for similarity and diversity selections without disclosing structures from the reference set. For increased security, the authors recommend that such a set contain at least some tens of structures. Analysis of reverse-engineering feasibility showed that the problem difficulty grows as the screen radius decreases. The algorithm is illustrated with concrete calculations on known steroidal, quinoline, and quinazoline drugs. We also investigate the problem of scaffold identification in a combinatorial library dataset. The results show that relatively small screens with a radius equal to 2 bond lengths perform well in similarity sorting, while radius-4 screens yield better results in diversity sorting. The software implementation of the algorithm takes an SDF file with a reference set and generates screens of various radii, which are subsequently used for the similarity and diversity sorting of external SDFs. Since reverse engineering the reference set molecules from their screens is as difficult as breaking the RSA asymmetric encryption algorithm, the generated screens can be stored openly without further encryption. This approach ensures that an end user transfers only a set of structural fragments and no other data. Like other encryption algorithms, the centroid algorithm cannot give a 100% guarantee of protecting a chemical structure from the dataset, but the probability of identifying the initial structure is very small: on the order of 10^-40 in typical cases.
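
    The central trick, reducing a confidential reference set to an average over fragment-screen bits so that external compounds can be ranked without the structures themselves, can be sketched with toy fingerprints. The bit vectors and the continuous Tanimoto measure below are illustrative assumptions, not the paper's exact screens.

    ```python
    # Centroid of fragment screens and similarity ranking against it.
    import numpy as np

    def centroid(screens):
        """Average screen vector of the (confidential) reference set."""
        return np.mean(np.asarray(screens, dtype=float), axis=0)

    def tanimoto_to_centroid(fp, c):
        """Continuous Tanimoto similarity of a query to the centroid."""
        den = np.maximum(fp, c).sum()
        return np.minimum(fp, c).sum() / den if den else 0.0

    ref = [[1, 0, 1, 1], [1, 1, 0, 1], [0, 0, 1, 1]]  # screens stay private
    c = centroid(ref)                                  # only this is shared
    print(tanimoto_to_centroid(np.array([1.0, 0.0, 1.0, 0.0]), c))
    ```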

  4. Reactor fuel exchanging facility

    International Nuclear Information System (INIS)

    Kubota, Shin-ichi.

    1981-01-01

    Purpose: To enable operation of an emergency manual operating mechanism for a fuel exchanger by means of operatorless trucks and a remotely operated manipulator even if the exchanger fails during the fuel exchanging operation. Constitution: When a fuel exchanging system fails while connected to a pressure tube of a nuclear reactor during a fuel exchanging operation, a stand-by self-travelling truck automatically runs along a guide line to the position corresponding to the stopping position of the fuel exchanger at that time, based on a command from the central control chamber. At this point the truck is switched to manual operation, approaches the exchanger while being monitored through a television camera, and then stops. A manipulator is then connected to the emergency manual operating mechanism of the exchanger and operated through the necessary emergency steps, driving the snout, the magazine, the grab or the like in the exchanger in response to the problem; the necessary operations for the emergency treatment are thus performed. (Sekiya, K.)

  5. Theory and design of heat exchanger : Double pipe and heat exchanger in abnormal condition

    International Nuclear Information System (INIS)

    Min, Ui Dong

    1996-02-01

    This book introduces the theory and design of heat exchangers, covering the HTRI program, multiple-tube heat exchangers with external heating, the theory of heat transfer, the basis of heat exchanger design, two-phase flow, condensation, boiling, heat exchanger materials, double pipe heat exchangers with hand calculations, heat exchangers in abnormal conditions such as jacketed vessels and coiled vessels, and the design and summary of steam tracing.

  6. The Metaphysics of Economic Exchanges

    Directory of Open Access Journals (Sweden)

    Massin Olivier

    2017-05-01

    Full Text Available What are economic exchanges? The received view has it that exchanges are mutual transfers of goods motivated by inverse valuations thereof. As a corollary, the standard approach treats exchanges of services as a subspecies of exchanges of goods. We raise two objections against this standard approach. First, it is incomplete, as it fails to take into account, among other things, the offers and acceptances that lie at the core of even the simplest cases of exchanges. Second, it ultimately fails to generalize to exchanges of services, in which neither inverse preferences nor mutual transfers hold true. We propose an alternative definition of exchanges, which treats exchanges of goods as a special case of exchanges of services and which builds in offers and acceptances. According to this theory: (i) The valuations motivating exchanges are propositional and convergent rather than objectual and inverse; (ii) All exchanges of goods involve exchanges of services/actions, but not the reverse; (iii) Offers and acceptances, together with the contractual obligations and claims they bring about, lie at the heart of all cases of exchange.

  7. Marriage exchanges, seed exchanges, and the dynamics of manioc diversity.

    Science.gov (United States)

    Delêtre, Marc; McKey, Doyle B; Hodkinson, Trevor R

    2011-11-08

    The conservation of crop genetic resources requires understanding the different variables (cultural, social, and economic) that impinge on crop diversity. In small-scale farming systems, seed exchanges represent a key mechanism in the dynamics of crop genetic diversity, and analyzing the rules that structure social networks of seed exchange between farmer communities can help decipher patterns of crop genetic diversity. Using a combination of ethnobotanical and molecular genetic approaches, we investigated the relationships between regional patterns of manioc genetic diversity in Gabon and local networks of seed exchange. Spatially explicit Bayesian clustering methods showed that geographical discontinuities of manioc genetic diversity mirror major ethnolinguistic boundaries, with a southern matrilineal domain characterized by high levels of varietal diversity and a northern patrilineal domain characterized by low varietal diversity. Borrowing concepts from anthropology (kinship, bridewealth, and filiation), we analyzed the relationships between marriage exchanges and seed exchange networks in patrilineal and matrilineal societies. We demonstrate that, by defining marriage prohibitions, kinship systems structure social networks of exchange between farmer communities and influence the movement of seeds in metapopulations, shaping crop diversity at local and regional levels.

  8. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
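
    A bare-bones genetic algorithm of the kind introduced here fits in a few lines: fitness-based selection, one-point crossover, and bit-flip mutation over bit strings, shown on the classic one-max toy problem. Everything below is a generic illustration, not the project's tool.

    ```python
    # Minimal genetic algorithm on the one-max problem (maximize ones).
    import random

    def one_max_ga(length=20, pop_size=30, generations=50, p_mut=0.02):
        pop = [[random.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=sum, reverse=True)        # fitness = number of ones
            parents = pop[:pop_size // 2]          # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)  # one-point crossover
                child = a[:cut] + b[cut:]
                children.append([1 - g if random.random() < p_mut else g
                                 for g in child])  # bit-flip mutation
            pop = parents + children
        return max(pop, key=sum)

    print(one_max_ga())  # typically all (or nearly all) ones
    ```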

  9. Simplified modeling of liquid-liquid heat exchangers for use in control systems

    International Nuclear Information System (INIS)

    Laszczyk, Piotr

    2017-01-01

    Over the last decades, various models of heat exchange processes have been developed to capture their specific dynamic nature. These models have different degrees of complexity depending on the modeling assumptions and simplifications. The complexity of a mathematical model is critical when the model is to serve as the basis for deriving a control law, because it directly affects the complexity of the mathematical transformations and of the final control algorithm. In this paper, a simplified cross-convection model for a wide class of heat exchangers is suggested. Apart from very few reports so far, the properties of this modeling approach have never been investigated in detail. The concept for this model is derived from the fundamental principle of energy conservation, combined with a simple dynamical approximation in the form of ordinary differential equations. Within this framework, a simplified tuning procedure for the proposed model is suggested and verified for plate and spiral tube heat exchangers based on experimental data. The dynamical properties and stability of the suggested model are addressed, and a sensitivity analysis is also presented. It is shown that such a modeling approach preserves high modeling accuracy at very low numerical complexity. The validation results show that the suggested modeling and tuning method is useful for practical applications.
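
    In the spirit of the approach described, a heat exchanger reduces to two lumped energy balances, one per stream, coupled by a cross-convection heat flow and integrated as first-order ODEs. The coefficients below (flow rates, UA, thermal capacities) are illustrative, not the paper's identified values.

    ```python
    # Two lumped energy balances integrated with explicit Euler steps.
    def simulate(T_h_in=90.0, T_c_in=20.0, UA=500.0, dt=0.1, steps=600,
                 m_h=0.5, m_c=0.4, cp=4180.0, C_h=2e4, C_c=2e4):
        T_h, T_c = T_h_in, T_c_in            # lumped outlet temperatures
        for _ in range(steps):
            q = UA * (T_h - T_c)             # cross-convection heat flow [W]
            T_h += dt * (m_h * cp * (T_h_in - T_h) - q) / C_h
            T_c += dt * (m_c * cp * (T_c_in - T_c) + q) / C_c
        return T_h, T_c

    print(simulate())  # steady hot/cold outlet temperatures
    ```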

  10. Parallel Implementation of Gamma-Point Pseudopotential Plane-Wave DFT with Exact Exchange

    International Nuclear Information System (INIS)

    Bylaska, Eric J.; Tsemekhman, Kiril L.; Baden, Scott B.; Weare, John H.; Jonsson, Hannes

    2011-01-01

    One of the more persistent failures of conventional density functional theory (DFT) methods has been their inability to yield localized charge states such as polarons, excitons and solitons in solid-state and extended systems. It has been suggested that conventional DFT functionals, which are not self-interaction free, tend to favor delocalized electronic states since self-interaction creates a Coulomb barrier to charge localization. Pragmatic approaches in which the exchange correlation functionals are augmented with a small amount of exact exchange (hybrid-DFT, e.g. B3LYP and PBE0) have shown promise in localizing charge states and predicting accurate band gaps and reaction barriers. We have developed a parallel algorithm for implementing exact exchange into pseudopotential plane-wave density functional theory and we have implemented it in the NWChem program package. The technique developed can readily be employed in plane-wave DFT programs. Furthermore, atomic forces and stresses are straightforward to implement, making it applicable to both confined and extended systems, as well as to Car-Parrinello ab initio molecular dynamics simulations. This method has been applied to several systems for which conventional DFT methods do not work well, including calculations for band gaps in oxides and the electronic structure of a charge-trapped state in the Fe(II)-containing mica annite.

  11. Study of kinetics, equilibrium and isotope exchange in ion exchange systems Pt. 6

    International Nuclear Information System (INIS)

    Plicka, J.; Stamberg, K.; Cabicar, J.; Gosman, A.

    1986-01-01

    The description of the kinetics of ion exchange in a ternary system was based upon three Nernst-Planck equations, each of them describing the particle diffusion flux of a given counterion as an independent process. For experimental verification, the strongly acidic cation exchanger OSTION KS 08, the shallow-bed technique, and 0.2 mol dm^-3 aqueous nitrate solutions were chosen. The kinetics of ion exchange in the system of cations Na+ - Mg2+ - UO2^2+ was studied. The values of the diffusion coefficients obtained by evaluating the kinetics of isotope exchange and binary ion exchange were used for the calculation. The calculated exchange rate curves were compared with the experimental ones. It was found that the exchanging counterions affected each other. (author)
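
    For reference, the flux law behind each of the three equations has the textbook Nernst-Planck form (standard notation, not necessarily the paper's):

    ```latex
    J_i = -D_i \left( \nabla c_i + z_i c_i \frac{F}{RT} \nabla \varphi \right)
    ```

    where J_i is the diffusion flux of counterion i, D_i its diffusion coefficient, c_i its concentration, z_i its valence, F the Faraday constant, and φ the local electric potential through which the counterion fluxes are coupled.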

  12. Quantification of exchangeable and non-exchangeable organically bound tritium (OBT) in vegetation

    International Nuclear Information System (INIS)

    Kim, S.B.; Korolevych, V.

    2013-01-01

    The objective of this study is to quantify the relative amounts of exchangeable organically bound tritium (OBT) and non-exchangeable OBT in various vegetables. A garden plot at Perch Lake, where tritium levels are slightly elevated due to releases of tritium from a nearby nuclear waste management area and Chalk River Laboratories (CRL) operations, was used to cultivate a variety of vegetables. Five different kinds of vegetables (lettuce, cabbage, tomato, radish and beet) were studied. Exchangeable OBT behaves like tritium in tissue free water in living organisms and, based on past measurements, accounts for about 20% of the total tritium in dehydrated organic materials. In this study, the percentage of the exchangeable OBT was determined to range from 20% to 57% and was found to depend on the type of vegetable as well as the sequence of the plants' exposure to HTO. Highlights: the percentage of exchangeable OBT varied between vegetable types and HTO exposure conditions, ranging from 20 to 36% in untreated vegetables and from 30 to 57% in treated vegetables.

  13. Non-specific effects of vaccines: plausible and potentially important, but implications uncertain.

    Science.gov (United States)

    Pollard, Andrew J; Finn, Adam; Curtis, Nigel

    2017-11-01

    Non-specific effects (NSE) or heterologous effects of vaccines are proposed to explain observations in some studies that certain vaccines have an impact beyond the direct protection against infection with the specific pathogen for which the vaccines were designed. The importance and implications of such effects remain controversial. There are several known immunological mechanisms which could lead to NSE, since it is widely recognised that the generation of specific immunity is initiated by non-specific innate immune mechanisms that may also have wider effects on adaptive immune function. However, there are no published studies that demonstrate a mechanistic link between such immunological phenomena and clinically relevant NSE in humans. While it is highly plausible that some vaccines do have NSE, their magnitude and duration, and thus importance, remain uncertain. Although the WHO recently concluded that current evidence does not justify changes to immunisation policy, further studies of sufficient size and quality are needed to assess the importance of NSE for all-cause mortality. This could provide insights into vaccine immunobiology with important implications for infant health and survival. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  14. Horizontal Curve Virtual Peer Exchange : an RSPCB Peer Exchange

    Science.gov (United States)

    2014-06-01

    This report summarizes the Horizontal Curve Virtual Peer Exchange sponsored by the Federal Highway Administration (FHWA) Office of Safety's Roadway Safety Professional Capacity Building Program on June 17, 2014. This virtual peer exchange was the f...

  15. Interest Rate Rules, Exchange Market Pressure, and Successful Exchange Rate Management

    NARCIS (Netherlands)

    Klaassen, F.; Mavromatis, K.

    2016-01-01

    Central banks with an exchange rate objective set the interest rate in response to what they call "pressure." Instead, existing interest rate rules rely on the exchange rate minus its target. To stay closer to actual policy, we introduce a rule that uses exchange market pressure (EMP), the

  16. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    Science.gov (United States)

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  17. Standardizing exchange formats

    International Nuclear Information System (INIS)

    Lemmel, H.D.; Schmidt, J.J.

    1992-01-01

    An international network of co-operating data centres is described, whose members maintain identical databases that are simultaneously updated by an agreed data exchange procedure. The agreement covers "data exchange formats" that are compatible with the centres' internal data storage and retrieval systems, which remain different, optimized at each centre to the available computer facilities and to the needs of the data users. An essential condition for the data exchange is an agreement on common procedures for data compilation, including critical data analysis and validation. The systems described ("EXFOR", "ENDF", "CINDA") are used for "nuclear reaction data", but the principles used for data compilation and exchange should also be valid for other data types. (author). 24 refs, 4 figs

  18. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  19. Arbitrary temporal shape pulsed fiber laser based on SPGD algorithm

    Science.gov (United States)

    Jiang, Min; Su, Rongtao; Zhang, Pengfei; Zhou, Pu

    2018-06-01

    A novel adaptive pulse shaping method for a pulsed master oscillator power amplifier fiber laser to deliver an arbitrary pulse shape is demonstrated. Numerical simulation has been performed to validate the feasibility of the scheme and provide meaningful guidance for the design of the algorithm control parameters. In the proof-of-concept experiment, information on the temporal property of the laser is exchanged and evaluated through a local area network, and the laser adjusts the parameters of the seed laser automatically according to the monitored output of the system. Various pulse shapes, including a rectangular shape, ‘M’ shape, and elliptical shape are achieved through experimental iterations.
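
    SPGD (stochastic parallel gradient descent) is simple enough to sketch: perturb all control parameters simultaneously, measure how the objective responds, and step along the measured correlation. In the experiment the objective would come from the monitored pulse shape over the LAN; here a toy quadratic objective and illustrative gain settings stand in.

    ```python
    # SPGD loop: simultaneous perturbation, two-sided probe, update.
    import numpy as np

    def spgd(measure_J, u, gain=3.0, delta=0.05, iters=500):
        """Maximize measure_J(u) over the control vector u."""
        for _ in range(iters):
            du = delta * np.random.choice([-1.0, 1.0], size=u.shape)
            dJ = measure_J(u + du) - measure_J(u - du)   # probe response
            u = u + gain * dJ * du                       # correlation step
        return u

    target = np.hanning(32)                     # desired temporal shape
    J = lambda u: -np.sum((u - target) ** 2)    # toy shape-match objective
    u = spgd(J, np.zeros(32))
    print(round(J(np.zeros(32)), 3), round(J(u), 3))  # objective improves
    ```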

  20. Communication Reducing Algorithms for Distributed Hierarchical N-Body Problems with Boundary Distributions

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed

    2017-05-11

    Reduction of communication and efficient partitioning are key issues for achieving scalability in hierarchical N-Body algorithms like Fast Multipole Method (FMM). In the present work, we propose three independent strategies to improve partitioning and reduce communication. First, we show that the conventional wisdom of using space-filling curve partitioning may not work well for boundary integral problems, which constitute a significant portion of FMM’s application user base. We propose an alternative method that modifies orthogonal recursive bisection to relieve the cell-partition misalignment that has kept it from scaling previously. Secondly, we optimize the granularity of communication to find the optimal balance between a bulk-synchronous collective communication of the local essential tree and an RDMA per task per cell. Finally, we take the dynamic sparse data exchange proposed by Hoefler et al. [1] and extend it to a hierarchical sparse data exchange, which is demonstrated at scale to be faster than the MPI library’s MPI_Alltoallv that is commonly used.

  1. Magnetic stability in exchange-spring and exchange bias systems after multiple switching cycles.

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, J. S.; Inomata, A.; You, C.-Y.; Pearson, J. E.; Bader, S. D.

    2001-06-01

    We have studied the magnetic stability in exchange bias and exchange spring systems prepared via epitaxial sputter deposition. The two interfacial exchange coupled systems, Fe/Cr(211) double superlattices consisting of a ferromagnetic and an antiferromagnetic Fe/Cr superlattice that are exchange coupled through a Cr spacer, and Sm-Co/Fe exchange-spring bilayer structures with a ferromagnetically coupled hard Sm-Co layer and soft Fe layer, were epitaxially grown on suitably prepared Cr buffer layers to give rise to different microstructures and magnetic anisotropies. The magnetic stability was investigated using the magneto-optic Kerr effect during repeated reversal of the soft layer magnetization by field cycling up to 10{sup 7} times. For uniaxial Fe/Cr exchange biased double superlattices and exchange spring bilayers with uniaxial Sm-Co, a small but rapid initial decay in the exchange bias field H{sub E} and in the remanent magnetization is observed. However, the exchange spring bilayers with biaxial and random in-plane anisotropy in the Sm-Co layer show gradual decay in H{sub E} without a large reduction of the magnetization. The different decay behaviors are attributed to the different microstructure and spin configuration of the pinning layers.

  2. The effects of real exchange rate misalignment and real exchange volatility on exports

    OpenAIRE

    Diallo, Ibrahima Amadou

    2011-01-01

    This paper uses panel data cointegration techniques to study the impacts of real exchange rate misalignment and real exchange rate volatility on total exports for a panel of 42 developing countries from 1975 to 2004. The results show that both real exchange rate misalignment and real exchange rate volatility negatively affect exports. The results also illustrate that real exchange rate volatility is more harmful to exports than misalignment. These outcomes are corroborated by estimations on s...

  3. Impulse Response of the Exchange Rate Volatility to a Foreign Exchange Intervention Shock

    OpenAIRE

    Hoshikawa, Takeshi

    2009-01-01

    This paper uses Lin's technique (1997) to report on the impulse response function analysis that traces the dynamics of exchange rate volatility from innovations in Japanese foreign exchange intervention. Using a multivariate GARCH model, we employed a volatility impulse response function based on Lin (1997) to detect the impulse response of exchange rate volatility to a one-unit foreign exchange intervention shock. The main findings of this paper are as follows: (1) a foreign exchange inter...

  4. Data quality system using reference dictionaries and edit distance algorithms

    Science.gov (United States)

    Karbarz, Radosław; Mulawka, Jan

    2015-09-01

    In the real art of management it is important to make smart decisions, which in most cases is not a trivial task. Those decisions may lead to determination of production levels, funds allocation for investments etc. Most of the parameters in the decision-making process, such as interest rates, goods values or exchange rates, may change. It is well known that these parameters in decision-making are based on the data contained in data marts or data warehouses. However, if the information derived from the processed data sets is the basis for the most important management decisions, it is required that the data is accurate, complete and current. In order to achieve high quality data and to gain measurable business benefits from them, a data quality system should be used. The article describes the approach to the problem, shows the algorithms in detail and their usage. Finally, the test results are provided. The test results show the best algorithms (in terms of quality and quantity) for different parameters and data distributions.
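
    A minimal sketch of the core dictionary-correction step such a data quality system performs: match a field against a reference dictionary using edit distance. The dictionary and distance threshold are illustrative assumptions.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein dynamic program, row by row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(value: str, dictionary: list[str], max_dist: int = 2) -> str:
    """Replace `value` by its nearest dictionary entry when close enough."""
    best = min(dictionary, key=lambda ref: edit_distance(value, ref))
    return best if edit_distance(value, best) <= max_dist else value

print(correct("Warszwa", ["Warszawa", "Krakow", "Gdansk"]))  # -> Warszawa
```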

  5. Anticipating and Communicating Plausible Environmental and Health Concerns Associated with Future Disasters: The ShakeOut and ARkStorm Scenarios as Examples

    Science.gov (United States)

    Plumlee, G. S.; Morman, S. A.; Alpers, C. N.; Hoefen, T. M.; Meeker, G. P.

    2010-12-01

    Disasters commonly pose immediate threats to human safety, but can also produce hazardous materials (HM) that pose short- and long-term environmental-health threats. The U.S. Geological Survey (USGS) has helped assess potential environmental health characteristics of HM produced by various natural and anthropogenic disasters, such as the 2001 World Trade Center collapse, 2005 hurricanes Katrina and Rita, 2007-2009 southern California wildfires, various volcanic eruptions, and others. Building upon experience gained from these responses, we are now developing methods to anticipate plausible environmental and health implications of the 2008 Great Southern California ShakeOut scenario (which modeled the impacts of a 7.8 magnitude earthquake on the southern San Andreas fault, http://urbanearth.gps.caltech.edu/scenario08/), and the recent ARkStorm scenario (modeling the impacts of a major, weeks-long winter storm hitting nearly all of California, http://urbanearth.gps.caltech.edu/winter-storm/). Environmental-health impacts of various past earthquakes and extreme storms are first used to identify plausible impacts that could be associated with the disaster scenarios. Substantial insights can then be gleaned using a Geographic Information Systems (GIS) approach to link ShakeOut and ARkStorm effects maps with data extracted from diverse database sources containing geologic, hazards, and environmental information. This type of analysis helps constrain where potential geogenic (natural) and anthropogenic sources of HM (and their likely types of contaminants or pathogens) fall within areas of predicted ShakeOut-related shaking, firestorms, and landslides, and predicted ARkStorm-related precipitation, flooding, and winds. Because of uncertainties in the event models and many uncertainties in the databases used (e.g., incorrect location information, lack of detailed information on specific facilities, etc.) this approach should only be considered as the first of multiple steps

  6. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.

  7. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  8. Two-dimensional exchange and nutation exchange nuclear quadrupole resonance spectroscopy

    International Nuclear Information System (INIS)

    Mackowiak, M.; Sinyavsky, N.; Velikite, N.; Nikolaev, D.

    2002-01-01

    A theoretical treatment of the 2D exchange NQR pulse sequence is presented and applied to a quantitative study of exchange processes in molecular crystals. It takes into account the off-resonance irradiation, which critically influences the spin dynamics. The response to the three-pulse sequence of a system of spins I=3/2 in zero applied field, experiencing electric quadrupole couplings, is analysed. The mixing dynamics by exchange and the expected cross-peak intensities as a function of the frequency offset have been derived. The theory is illustrated by a study of the optimization procedure, which is of crucial importance for the detection of the cross- and diagonal-peaks in a 2D-exchange spectrum. The systems investigated are hexachloroethane and tetrachloroethylene. They show threefold and twofold reorientational jumps about the carbon-carbon axis, respectively. A new method of direct determination of rotational angles based on two-dimensional nutation exchange NQR spectroscopy is proposed. The method involves the detection of exchange processes through NQR nutation spectra recorded after the mixing interval. The response of a system of spins I=3/2 to the three-pulse sequence with increasing pulse widths is analyzed. It is shown that the 2D-nutation exchange NQR spectrum exhibits characteristic ridges, which manifest the motional mechanism in a model-independent fashion. The angles through which the molecule rotates can be read directly from elliptical ridges in the 2D spectrum, which are also sensitive to the asymmetry parameter of the electric field gradient tensor. (orig.)

  9. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  10. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...

  11. Segmentation algorithm for non-stationary compound Poisson processes. With an application to inventory time series of market members in a financial market

    Science.gov (United States)

    Tóth, B.; Lillo, F.; Farmer, J. D.

    2010-11-01

    We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a non-parametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and we observe that our method finds almost three times more patches than the original one.
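
    A simplified sketch of the recursive splitting in the Bernaola-Galván-style segmentation that the record generalizes: cut where a t-statistic between the left and right means is maximal, and recurse while the cut is significant. The threshold and minimum length are illustrative assumptions.

```python
import numpy as np

def max_t_split(x):
    """Return (best split index, its t-statistic) for series x."""
    best_i, best_t = None, 0.0
    for i in range(2, len(x) - 2):
        l, r = x[:i], x[i:]
        sp = np.sqrt(l.var(ddof=1) / len(l) + r.var(ddof=1) / len(r))
        t = abs(l.mean() - r.mean()) / sp if sp > 0 else 0.0
        if t > best_t:
            best_i, best_t = i, t
    return best_i, best_t

def segment(x, t_min=4.0, min_len=8):
    """Recursively split x into patches while the best split is significant."""
    if len(x) < 2 * min_len:
        return [x]
    i, t = max_t_split(x)
    if i is None or t < t_min:
        return [x]
    return segment(x[:i], t_min, min_len) + segment(x[i:], t_min, min_len)
```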

  12. Systematic analysis of the heat exchanger arrangement problem using multi-objective genetic optimization

    International Nuclear Information System (INIS)

    Daróczy, László; Janiga, Gábor; Thévenin, Dominique

    2014-01-01

    A two-dimensional cross-flow tube bank heat exchanger arrangement problem with internal laminar flow is considered in this work. The objective is to optimize the arrangement of tubes and find the most favorable geometries, in order to simultaneously maximize the rate of heat exchange while obtaining a minimum pressure loss. A systematic study was performed involving a large number of simulations. The global optimization method NSGA-II was retained. A fully automatized in-house optimization environment was used to solve the problem, including mesh generation and CFD (computational fluid dynamics) simulations. The optimization was performed in parallel on a Linux cluster with a very good speed-up. The main purpose of this article is to illustrate and analyze a heat exchanger arrangement problem in its most general form and to provide a fundamental understanding of the structure of the Pareto front and optimal geometries. The considered conditions are particularly suited for low-power applications, as found in a growing number of practical systems in an effort toward increasing energy efficiency. For such a detailed analysis with more than 140 000 CFD-based evaluations, a design-of-experiment study involving a response surface would not be sufficient. Instead, all evaluations rely on a direct solution using a CFD solver. - Highlights: • Cross-flow tube bank heat exchanger arrangement problem. • A fully automatized multi-objective optimization based on genetic algorithm. • A systematic study involving a large number of CFD (computational fluid dynamics) simulations
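
    A minimal sketch of the non-dominated (Pareto) filtering that underlies the NSGA-II ranking used in such a study; treating both objectives as minimized (pressure loss and negated heat exchange rate) is an assumption made here for illustration.

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated rows of `points` (minimization)."""
    front = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

# Each row: (pressure loss, -heat exchange rate) for one candidate geometry.
designs = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(pareto_front(designs))  # -> [0, 1, 3]; design [3, 4] is dominated
```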

  13. Can abnormal returns be earned on bandwidth-bounded currencies? Evidence from a genetic algorithm

    OpenAIRE

    Pedro Godinho

    2012-01-01

    Most of the studies about the Foreign Exchange market (Forex) analyse the behaviour of currencies that are allowed to float freely (or almost freely), but some currencies are still bounded by bandwidths (either disclosed or undisclosed). In this paper, I try to find out whether two bandwidth-bounded currencies, the Hong Kong dollar (HKD) and the Singapore dollar (SGD), present opportunities for abnormal returns. I consider a set of trading rules, and I use a genetic algorithm to optimise both...

  14. Noninvasive mapping of water diffusional exchange in the human brain using filter-exchange imaging.

    Science.gov (United States)

    Nilsson, Markus; Lätt, Jimmy; van Westen, Danielle; Brockstedt, Sara; Lasič, Samo; Ståhlberg, Freddy; Topgaard, Daniel

    2013-06-01

    We present the first in vivo application of the filter-exchange imaging protocol for diffusion MRI. The protocol allows noninvasive mapping of the rate of water exchange between microenvironments with different self-diffusivities, such as the intracellular and extracellular spaces in tissue. Since diffusional water exchange across the cell membrane is a fundamental process in human physiology and pathophysiology, clinically feasible and noninvasive imaging of the water exchange rate would offer new means to diagnose disease and monitor treatment response in conditions such as cancer and edema. The in vivo use of filter-exchange imaging was demonstrated by studying the brain of five healthy volunteers and one intracranial tumor (meningioma). Apparent exchange rates in white matter range from 0.8±0.08 s(-1) in the internal capsule, to 1.6±0.11 s(-1) for frontal white matter, indicating that low values are associated with high myelination. Solid tumor displayed values of up to 2.9±0.8 s(-1). In white matter, the apparent exchange rate values suggest intra-axonal exchange times in the order of seconds, confirming the slow exchange assumption in the analysis of diffusion MRI data. We propose that filter-exchange imaging could be used clinically to map the water exchange rate in pathologies. Filter-exchange imaging may also be valuable for evaluating novel therapies targeting the function of aquaporins. Copyright © 2012 Wiley Periodicals, Inc.

  15. A Semiautomatic Segmentation Algorithm for Extracting the Complete Structure of Acini from Synchrotron Micro-CT Images

    Directory of Open Access Journals (Sweden)

    Luosha Xiao

    2013-01-01

    Full Text Available The pulmonary acinus is the largest airway unit provided with alveoli, where blood/gas exchange takes place. Understanding the complete structure of the acinus is necessary to measure the pathway of gas exchange and to simulate various mechanical phenomena in the lungs. The usual manual segmentation of a complete acinus structure from experimentally obtained images is difficult and extremely time-consuming, which hampers statistical analysis. In this study, we develop a semiautomatic segmentation algorithm for extracting the complete structure of the acinus from synchrotron micro-CT images of the closed chest of mouse lungs. The algorithm uses a combination of conventional binary image processing techniques based on the multiscale and hierarchical nature of lung structures. Specifically, larger structures are removed, while smaller structures are isolated from the image, by repeatedly applying erosion and dilation operators in order, adjusting the parameters with reference to previously obtained morphometric data. A cluster of isolated acini belonging to the same terminal bronchiole is obtained without floating voxels. The extracted acinar models agree well (above 98%) with those extracted manually. The run time is drastically shortened compared with manual methods. These findings suggest that our method may be useful for taking samples used in the statistical analysis of the acinus.
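
    A minimal sketch of the scale-based isolation step described, using SciPy morphology: an opening with a large structuring element captures the large structures, which are then subtracted so only the small ones remain. The structuring-element size is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage

def isolate_small_structures(mask: np.ndarray, large_size: int) -> np.ndarray:
    """Keep only binary components thinner than `large_size` voxels."""
    struct = np.ones((large_size,) * mask.ndim, dtype=bool)
    # Opening (erosion then dilation) retains only the large structures.
    large = ndimage.binary_dilation(ndimage.binary_erosion(mask, struct), struct)
    return mask & ~large    # subtract them, leaving the small structures
```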

  16. Compressed sensing along physically plausible sampling trajectories in MRI

    International Nuclear Information System (INIS)

    Chauffert, Nicolas

    2015-01-01

    First, we propose continuous sampling schemes based on random walks and on the travelling salesman problem (TSP). Then, we propose a projection algorithm onto the space of constraints that returns the closest feasible curve to an input curve (e.g., a TSP solution). Finally, we provide an algorithm to project a measure onto a set of measures carried by parameterizations. In particular, if this set is the one carried by admissible curves, the algorithm returns a curve whose sampling density is close to the measure to project. This yields an admissible variable density sampler. The reconstruction results obtained in simulations using this strategy outperform existing acquisition trajectories (spiral, radial) by about 3 dB. They make it possible to envision implementation on a real 7 T scanner soon, notably in the context of high resolution anatomical imaging. (author) [fr

  17. Rocket Based Combined Cycle Exchange Inlet Performance Estimation at Supersonic Speeds

    Science.gov (United States)

    Murzionak, Aliaksandr

    A method to estimate the performance of an exchange inlet for a Rocket Based Combined Cycle engine is developed. This method is to be used for exchange inlet geometry optimization and as such should be able to predict properties that can be used in the design process within a reasonable amount of time, to allow multiple configurations to be evaluated. The method is based on a curve fit of the shocks developed around the major components of the inlet, using solutions for shocks around sharp cones and 2D estimations of the shocks around wedges with blunt leading edges. The total pressure drop across the estimated shocks as well as the mass flow rate through the exchange inlet are calculated. The estimations for a selected range of free-stream Mach numbers between 1.1 and 7 are compared against numerical finite volume method simulations which were performed using available commercial software (Ansys-CFX). The total pressure difference between the two methods is within 10% for the tested Mach numbers of 5 and below, while for the Mach 7 test case the difference is 30%. The mass flow rate on average differs by less than 5% for all tested cases, with the maximum difference not exceeding 10%. The estimation method takes less than 3 seconds on a 3.0 GHz single-core processor to complete the calculations for a single flight condition, as opposed to over 5 days on an 8-core 2.4 GHz system using a 3D finite volume method simulation with a 1.5 million element mesh. This makes the estimation method suitable for use with an exchange inlet geometry optimization algorithm.

  18. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    Full Text Available A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a back propagation neural network (BPN). It has better global optimization characteristics than traditional optimization algorithms. In this paper, we used GA-BPN for image noise filtering research. Firstly, this paper uses training samples to train the GA-BPN as the noise detector. Then, we utilize the well-trained GA-BPN to recognize noise pixels in the target image. At last, an adaptive weighted average algorithm is used to recover the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm has better performance than other filters.

  19. Application of machine learning algorithms to the study of noise artifacts in gravitational-wave data

    Science.gov (United States)

    Biswas, Rahul; Blackburn, Lindy; Cao, Junwei; Essick, Reed; Hodge, Kari Alison; Katsavounidis, Erotokritos; Kim, Kyungmin; Kim, Young-Min; Le Bigot, Eric-Olivier; Lee, Chang-Hwan; Oh, John J.; Oh, Sang Hoon; Son, Edwin J.; Tao, Ye; Vaulin, Ruslan; Wang, Xiaoge

    2013-09-01

    by this model. Future performance gains are thus likely to involve additional sources of information, rather than improvements in the classification algorithms themselves. We discuss several plausible sources of such new information as well as the ways of propagating it through the classifiers into gravitational-wave searches.

  20. Usage of the hybrid encryption in a cloud instant messages exchange system

    Science.gov (United States)

    Kvyetnyy, Roman N.; Romanyuk, Olexander N.; Titarchuk, Evgenii O.; Gromaszek, Konrad; Mussabekov, Nazarbek

    2016-09-01

    A new approach for constructing cloud instant messaging presented in this article allows users to encrypt data locally by using the Diffie-Hellman key exchange protocol. The described approach allows the construction of a cloud service which operates only on users' encrypted messages; encryption and decryption take place locally at the user side using symmetric AES encryption. A feature of the service is support for conferences without the need to re-encrypt messages for each participant. In the article an example of the protocol implementation on the basis of the ECC and RSA encryption algorithms is given, as well as a comparison of these implementations.
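
    A minimal sketch of such a hybrid scheme on the ECC basis, assuming the Python cryptography package: an X25519 Diffie-Hellman exchange, HKDF key derivation, and local AES-GCM encryption. The HKDF info label and the message are illustrative, and the paper's conference handling is not reproduced.

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
# Each party derives the same shared secret from its own key and the peer's public key.
shared = alice.exchange(bob.public_key())
assert shared == bob.exchange(alice.public_key())

# Derive a 256-bit AES key from the raw shared secret.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"im-session").derive(shared)

# Messages are encrypted locally; the cloud relay sees only ciphertext.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"hello over the cloud", None)
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
```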

  1. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  2. Reconfiguration of distribution networks to minimize loss and disruption costs using genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Cebrian, Juan Carlos; Kagan, Nelson [Department of Electrical Engineering, University of Sao Paulo, Escola Politecnica, Av. Prof. Luciano Gualberto, travessa 3 n 380 - CEP - 05508-970 - Sao Paulo (Brazil)

    2010-01-15

    In this paper a computational implementation of an evolutionary algorithm (EA) is shown in order to tackle the problem of reconfiguring radial distribution systems. The developed module considers power quality indices such as long duration interruptions and customer process disruptions due to voltage sags, by using the Monte Carlo simulation method. Power quality costs are modeled into the mathematical problem formulation and added to the cost of network losses. As for the EA codification proposed, a decimal representation is used. The EA operators, namely selection, recombination and mutation, which are considered for the reconfiguration algorithm, are herein analyzed. A number of selection procedures are analyzed, namely tournament, elitism and a mixed technique using both elitism and tournament. The recombination operator was developed by considering a chromosome structure representation that maps the network branches and system radiality, and another structure that takes into account the network topology and feasibility of network operation to exchange genetic material. The topologies regarding the initial population are randomly produced, so that radial configurations are produced through the Prim and Kruskal algorithms that rapidly build minimum spanning trees. (author)
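
    A minimal sketch of how random radial topologies can be produced with a Kruskal-style procedure, as mentioned for building the initial population; shuffling the candidate branches stands in for whatever sampling the authors actually use.

```python
import random

def find(parent, u):
    """Union-find root lookup with path compression."""
    while parent[u] != u:
        parent[u] = parent[parent[u]]
        u = parent[u]
    return u

def random_radial_topology(n_buses, branches):
    """Pick a random spanning tree (radial network) from candidate branches."""
    parent = list(range(n_buses))
    tree = []
    for u, v in random.sample(branches, len(branches)):  # random edge order
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                     # adding this branch keeps radiality
            parent[ru] = rv
            tree.append((u, v))
    return tree

print(random_radial_topology(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```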

  3. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  4. MODIS-Based Estimation of Terrestrial Latent Heat Flux over North America Using Three Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Xuanyu Wang

    2017-12-01

    Full Text Available Terrestrial latent heat flux (LE) is a key component of the global terrestrial water, energy, and carbon exchanges. Accurate estimation of LE from moderate resolution imaging spectroradiometer (MODIS) data remains a major challenge. In this study, we estimated the daily LE for different plant functional types (PFTs) across North America using three machine learning algorithms: artificial neural network (ANN), support vector machines (SVM), and multivariate adaptive regression splines (MARS), driven by MODIS and Modern Era Retrospective Analysis for Research and Applications (MERRA) meteorology data. These three predictive algorithms, which were trained and validated using observed LE over the period 2000–2007, all proved to be accurate. However, ANN outperformed the other two algorithms for the majority of the tested configurations for most PFTs and was the only method that arrived at 80% precision for LE estimation. We also applied the three machine learning algorithms to MODIS data and MERRA meteorology to map the average annual terrestrial LE of North America during 2002–2004 at a spatial resolution of 0.05°, which proved to be useful for estimating the long-term LE over North America.
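
    A minimal sketch of how such a three-way comparison can be set up with scikit-learn. Since MARS is not part of scikit-learn, a gradient-boosting regressor stands in for it here, and the predictors and targets are synthetic stand-ins for the MODIS/MERRA inputs and observed LE.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

X = np.random.rand(500, 5)             # stand-in MODIS/MERRA predictors
y = X @ np.array([2.0, 1.0, 0.5, 0.2, 0.1]) + 0.1 * np.random.randn(500)

models = {"ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000),
          "SVM": SVR(kernel="rbf", C=10.0),
          "MARS-like": GradientBoostingRegressor()}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```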

  5. An Extended Genetic Algorithm for Distributed Integration of Fuzzy Process Planning and Scheduling

    Directory of Open Access Journals (Sweden)

    Shuai Zhang

    2016-01-01

    Full Text Available The distributed integration of process planning and scheduling (DIPPS) aims to simultaneously arrange the two most important manufacturing stages, process planning and scheduling, in a distributed manufacturing environment. Meanwhile, considering its advantage in reflecting actual conditions, the triangular fuzzy number (TFN) is adopted in DIPPS to represent machine processing and transportation times. In order to solve this problem and obtain the optimal or near-optimal solution, an extended genetic algorithm (EGA) with an innovative three-class encoding method and improved crossover and mutation strategies is proposed. Furthermore, a local enhancement strategy featuring machine replacement and order exchange is added to strengthen the local search capability of the basic genetic algorithm. Verification experiments show that EGA achieves satisfactory results in a very short period of time and demonstrates powerful performance in dealing with the distributed integration of fuzzy process planning and scheduling (DIFPPS).

  6. A plausible mechanism of biosorption in dual symbioses by vesicular-arbuscular mycorrhizal in plants.

    Science.gov (United States)

    Azmat, Rafia; Hamid, Neelofer

    2015-03-01

    The dual symbiosis of vesicular-arbuscular mycorrhizal (VAM) fungi with the growth of Momordica charantia is elucidated in terms of a plausible mechanism of biosorption in this article. The experiment was conducted in a greenhouse, and a mixed inoculum of the VAM fungi was used in the three replicates. Results demonstrated that the starch contents were the main source of C for the VAM fungi to build their hyphae. The increased plant height and leaf surface area were explained in relation to an increase in the photosynthetic rates to rapidly produce sugar contents for the survival of plants. A decrease in protein and amino acid contents and increased proline and protease activity in VAM plants suggested that these contents were the main bio-indicators of plants under biotic stress. The decline in protein may be due to the degradation of these contents, which were later converted into dextrose, which can easily be absorbed during the period of symbiosis. A mechanism of C chemisorption in relation to the physiology and morphology of the plant is discussed.

  7. MIDAS: a database-searching algorithm for metabolite identification in metabolomics.

    Science.gov (United States)

    Wang, Yingfeng; Kora, Guruprasad; Bowen, Benjamin P; Pan, Chongle

    2014-10-07

    A database searching approach can be used for metabolite identification in metabolomics by matching measured tandem mass spectra (MS/MS) against the predicted fragments of metabolites in a database. Here, we present the open-source MIDAS algorithm (Metabolite Identification via Database Searching). To evaluate a metabolite-spectrum match (MSM), MIDAS first enumerates possible fragments from a metabolite by systematic bond dissociation, then calculates the plausibility of the fragments based on their fragmentation pathways, and finally scores the MSM to assess how well the experimental MS/MS spectrum from collision-induced dissociation (CID) is explained by the metabolite's predicted CID MS/MS spectrum. MIDAS was designed to search high-resolution tandem mass spectra acquired on time-of-flight or Orbitrap mass spectrometer against a metabolite database in an automated and high-throughput manner. The accuracy of metabolite identification by MIDAS was benchmarked using four sets of standard tandem mass spectra from MassBank. On average, for 77% of original spectra and 84% of composite spectra, MIDAS correctly ranked the true compounds as the first MSMs out of all MetaCyc metabolites as decoys. MIDAS correctly identified 46% more original spectra and 59% more composite spectra at the first MSMs than an existing database-searching algorithm, MetFrag. MIDAS was showcased by searching a published real-world measurement of a metabolome from Synechococcus sp. PCC 7002 against the MetaCyc metabolite database. MIDAS identified many metabolites missed in the previous study. MIDAS identifications should be considered only as candidate metabolites, which need to be confirmed using standard compounds. To facilitate manual validation, MIDAS provides annotated spectra for MSMs and labels observed mass spectral peaks with predicted fragments. The database searching and manual validation can be performed online at http://midas.omicsbio.org.
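
    A minimal sketch of the peak-matching idea at the core of such database searching: credit the experimental intensity that is explained by predicted fragments. The tolerance and scoring are illustrative assumptions, not the published MIDAS scoring function.

```python
def match_score(observed, predicted, tol_mz=0.01):
    """observed: list of (m/z, intensity); predicted: list of fragment m/z."""
    matched = 0.0
    total = sum(inten for _, inten in observed)
    for mz, inten in observed:
        if any(abs(mz - f) <= tol_mz for f in predicted):
            matched += inten       # credit intensity explained by a fragment
    return matched / total if total else 0.0

spectrum = [(120.081, 50.0), (77.039, 20.0), (65.040, 5.0)]
fragments = [120.0808, 77.0386]    # enumerated by systematic bond dissociation
print(match_score(spectrum, fragments))   # fraction of intensity explained
```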

  8. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  9. An Algorithm of an X-ray Hit Allocation to a Single Pixel in a Cluster and Its Test-Circuit Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Deptuch, G. W. [AGH-UST, Cracow; Fahim, F. [Fermilab; Grybos, P. [AGH-UST, Cracow; Hoff, J. [Fermilab; Maj, P. [AGH-UST, Cracow; Siddons, D. P. [Brookhaven; Kmon, P. [AGH-UST, Cracow; Trimpl, M. [Fermilab; Zimmerman, T. [Fermilab

    2017-05-06

    An on-chip implementable algorithm for allocation of an X-ray photon imprint, called a hit, to a single pixel in the presence of charge sharing in a highly segmented pixel detector is described. Its proof-of-principle implementation is also given, supported by the results of tests using a highly collimated X-ray photon beam from a synchrotron source. The algorithm handles asynchronous arrivals of X-ray photons. Activation of groups of pixels, comparisons of peak amplitudes of pulses within an active neighborhood, and finally latching of the results of these comparisons constitute the three procedural steps of the algorithm. A grouping of pixels into one virtual pixel that recovers composite signals, together with event-driven strobes that control comparisons of fractional signals between neighboring pixels, are the actuators of the algorithm. The circuitry necessary to implement the algorithm requires an extensive inter-pixel connection grid of analog and digital signals that are exchanged between pixels. A test-circuit implementation of the algorithm was achieved with a small array of 32×32 pixels, and the device was exposed to an 8 keV X-ray beam highly collimated to a diameter of 3 μm. The results of these tests are given in the paper, assessing the physical implementation of the algorithm.
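
    A minimal sketch of the decision rule in array form: a hit is latched by the pixel whose peak amplitude wins the comparison within its active 3×3 neighbourhood. The threshold and frame are illustrative, and the ASIC realizes this with inter-pixel analog/digital signals rather than software.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def allocate_hits(amplitudes: np.ndarray, threshold: float) -> np.ndarray:
    """Boolean map of pixels that win the local amplitude comparison."""
    active = amplitudes > threshold                      # pixel activation
    local_max = amplitudes == maximum_filter(amplitudes, size=3)
    return active & local_max                            # latch winners only

frame = np.array([[0.1, 0.2, 0.0],
                  [0.3, 2.5, 1.1],                       # one charge-shared hit
                  [0.0, 0.9, 0.2]])
print(allocate_hits(frame, threshold=0.5))               # only the centre wins
```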

  10. NEON's Eddy-Covariance Storage Exchange: from Tower to Data Portal

    Science.gov (United States)

    Durden, N. P.; Luo, H.; Xu, K.; Metzger, S.; Durden, D.

    2017-12-01

    NEON's eddy-covariance storage exchange system (ECSE) consists of a suite of sensors including temperature sensors, a CO2 and H2O gas analyzer, and isotopic CO2 and H2O analyzers. NEON's ECSE was developed to provide the vertical profile measurements of temperature, CO2 and H2O concentrations, and the stable isotope ratios in CO2 (δ13C) and H2O (δ18O and δ2H) in the atmosphere. The profiles of temperature and concentrations of CO2 and H2O are key to calculating storage fluxes for eddy-covariance tower sites. Storage fluxes have a strong diurnal cycle and can be large in magnitude, especially at temporal scales of less than one day. However, the storage term is often neglected in flux computations. To obtain accurate eddy-covariance fluxes, the storage fluxes are calculated and incorporated into the calculations of net surface-atmosphere ecosystem exchange of heat, CO2, and H2O for each NEON tower site. Once the ECSE raw data (Level 0, or L0) is retrieved at NEON's headquarters, it is preconditioned through a sequence of unit conversion, time regularization, and plausibility tests. By utilizing NEON's eddy4R framework (Metzger et al., 2017), higher-level data products are generated, including: Level 1 (L1): Measurement-level specific averages of temperature and concentrations of CO2 and H2O. Level 2 (L2): Time rate of change of temperature and concentrations of CO2 and H2O over 30 min at each measurement level along the vertical tower profile. Level 3 (L3): Time rate of change of temperature and concentrations of CO2 and H2O over 30 min (L2), spatially interpolated along the vertical tower profile. Level 4 (L4): Storage fluxes of heat, CO2, and H2O calculated from the integrated time rate of change of the spatially interpolated profile (L3). The L4 storage fluxes are combined with turbulent fluxes to calculate the net surface-atmosphere ecosystem exchange of heat, CO2, and H2O. Moreover, a final quality flag and uncertainty budget are produced individually for each data stream
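
    A minimal sketch of the L2-to-L4 step, assuming the profile of concentration time derivatives has already been interpolated: the storage flux is the height integral of dc/dt up to the measurement height. The profile values and unit handling are illustrative assumptions.

```python
import numpy as np

def storage_flux(dcdt, z):
    """Trapezoidal integral of dc/dt (e.g. umol m-3 s-1) over height z (m)."""
    mid = 0.5 * (dcdt[1:] + dcdt[:-1])       # layer-mean rate of change
    return float(np.sum(mid * np.diff(z)))   # -> umol m-2 s-1

heights = np.array([0.5, 2.0, 8.0, 16.0, 32.0])            # profile levels (m)
dcdt = np.array([4.1e-3, 3.2e-3, 2.0e-3, 1.1e-3, 0.6e-3])  # 30-min rate of change
print(storage_flux(dcdt, heights))   # added to the turbulent flux to get NEE
```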

  11. Chemical exchange rotation transfer imaging of intermediate-exchanging amines at 2 ppm.

    Science.gov (United States)

    Zu, Zhongliang; Louie, Elizabeth A; Lin, Eugene C; Jiang, Xiaoyu; Does, Mark D; Gore, John C; Gochberg, Daniel F

    2017-10-01

    Chemical exchange saturation transfer (CEST) imaging of amine protons exchanging at intermediate rates and whose chemical shift is around 2 ppm may provide a means of mapping creatine. However, the quantification of this effect may be compromised by the influence of overlapping CEST signals from fast-exchanging amines and hydroxyls. We aimed to investigate the exchange rate filtering effect of a variation of CEST, named chemical exchange rotation transfer (CERT), as a means of isolating creatine contributions at around 2 ppm from other overlapping signals. Simulations were performed to study the filtering effects of CERT for the selection of transfer effects from protons of specific exchange rates. Control samples containing the main metabolites in brain, bovine serum albumin (BSA) and egg white albumen (EWA) at their physiological concentrations and pH were used to study the ability of CERT to isolate molecules with amines at 2 ppm that exchange at intermediate rates, and corresponding methods were used for in vivo rat brain imaging. Simulations showed that exchange rate filtering can be combined with conventional filtering based on chemical shift. Studies on samples showed that signal contributions from creatine can be separated from those of other metabolites using this combined filter, but contributions from protein amines may still be significant. This exchange filtering can also be used for in vivo imaging. CERT provides more specific quantification of amines at 2 ppm that exchange at intermediate rates compared with conventional CEST imaging. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Ion exchange kinetics of alkaline earths on Zr(IV) arsenosilicate cation exchanger

    International Nuclear Information System (INIS)

    Varshney, K.G.; Agrawal, S.; Varshney, K.

    1984-01-01

    A new approach based on the Nernst-Planck equations was applied to study the ion exchange kinetics for the exchange reactions of Mg(II), Ca(II), Sr(II) and Ba(II) with H{sup +} ions at various temperatures on the zirconium(IV) arsenosilicate phase. Under the conditions of particle diffusion, the rate of exchange was found to be independent of the metal ion concentration at and above 0.1 M in aqueous medium. Energy and entropy of activation were determined and found to vary linearly with the ionic radii and mobilities of alkaline earths, a unique feature observed for an inorganic ion exchanger. The results are useful for predicting the ion exchange processes occurring on the surface of an inorganic material of the type studied. (author)
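
    The record does not reproduce the equations; for reference, the classical Nernst-Planck interdiffusion coefficient and the particle-diffusion rate law (Helfferich; Boyd et al.) on which such analyses typically build can be written as:

```latex
% Interdiffusion coefficient for counter-ions A, B in the exchanger, and the
% fractional attainment of equilibrium F(t) under particle-diffusion control
% for a bead of radius r_0 (classical forms, not taken from the record):
\[
  \bar{D}_{AB} \;=\;
  \frac{\bar{D}_A \bar{D}_B \left( z_A^2 \bar{C}_A + z_B^2 \bar{C}_B \right)}
       {z_A^2 \bar{C}_A \bar{D}_A + z_B^2 \bar{C}_B \bar{D}_B},
  \qquad
  F(t) \;=\; 1 - \frac{6}{\pi^2} \sum_{n=1}^{\infty} \frac{1}{n^2}
  \exp\!\left( -\frac{n^2 \pi^2 \bar{D}\, t}{r_0^2} \right).
\]
```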

  13. Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities.

    Science.gov (United States)

    Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J; Gómez-Rodríguez, Alma

    2014-12-08

    In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee the automatic information exchange between the agents and the smart city. A smart city is composed of different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context is obtained by sensor and device agents, while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city that is fully accessible to any agent, and therefore to provide enhanced services to the users, there is the need to ensure seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques; specifically, we have defined a new ontology matching algorithm called OntoPhil to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well-known evaluation initiative, the Ontology Alignment Evaluation Initiative, and also compared to other matching algorithms, although these algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment.

  14. Comparison of Genetic Algorithm and Hill Climbing for Shortest Path Optimization Mapping

    Directory of Open Access Journals (Sweden)

    Fronita Mona

    2018-01-01

    Full Text Available The Traveling Salesman Problem (TSP) is an optimization problem of finding the shortest path that reaches several destinations in one trip, without passing through the same city twice, and returns to the departure city; the process is applied to delivery systems. The comparison is done using two methods, namely genetic algorithm optimization and hill climbing. Hill climbing works by directly selecting a new path in which two cities are exchanged, keeping it when the tour distance is smaller than the previous one. Genetic algorithms depend on the input parameters: population size, crossover probability, mutation probability and the number of generations. The process of determining the shortest path is supported by software that uses the Google Maps API. Tests were carried out 20 times each with 8, 16, 24 and 32 cities to see which method is optimal in terms of distance and computation time. Experiments with 3, 4, 5 and 6 cities produce the same optimal distance for the genetic algorithm and hill climbing; the distances begin to differ at 7 cities. The overall results show that hill climbing is more effective for small numbers of cities, while instances with more than 30 cities are better optimized using genetic algorithms.
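
    A minimal sketch of the hill-climbing variant described: exchange two cities and keep the swap only when the tour gets shorter. The distance matrix is an illustrative stand-in for distances taken from the Google Maps API.

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def hill_climb(dist, iterations=10000):
    tour = list(range(len(dist)))
    random.shuffle(tour)
    best = tour_length(tour, dist)
    for _ in range(iterations):
        i, j = sorted(random.sample(range(len(tour)), 2))
        tour[i], tour[j] = tour[j], tour[i]          # exchange two cities
        length = tour_length(tour, dist)
        if length < best:
            best = length                            # keep the improvement
        else:
            tour[i], tour[j] = tour[j], tour[i]      # undo the swap
    return tour, best

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
print(hill_climb(dist))
```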

  15. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  17. Differential multiple quantum relaxation caused by chemical exchange outside the fast exchange limit

    International Nuclear Information System (INIS)

    Wang Chunyu; Palmer, Arthur G.

    2002-01-01

    Differential relaxation of multiple quantum coherences is a signature of chemical exchange processes in proteins. Previous analyses of experimental data have used theoretical descriptions applicable only in the limit of fast exchange. Theoretical expressions for differential relaxation rate constants that are accurate outside fast exchange are presented for a two-spin system subject to two-site chemical exchange. The theoretical expressions are validated using experimental results for {sup 15}N-{sup 1}H relaxation in basic pancreatic trypsin inhibitor. The new theoretical expressions are valuable for the identification and characterization of exchange processes in proteins using differential relaxation of multiple quantum coherences

  18. Outlook for ion exchange

    International Nuclear Information System (INIS)

    Kunin, R.

    1977-01-01

    This paper presents the history and theory of ion exchange technology and discusses the usefulness of ion exchange resins, which have found broad application in chemical operations. It is demonstrated that the theory of ion exchange technology seems to be moving away from the physical chemist back to the polymer chemist, where it started originally. This, however, confronts the polymer chemists with some knotty problems. It is pointed out that one still has to learn how to use ion exchange materials as efficiently as possible in terms of the waste load that is being pumped into the environment. It is interesting to note that, whereas ion exchange is used for abating pollution, it is also a polluter. One must learn how to use ion exchange as an antipollution device, and at the same time minimize its polluting properties

  19. Longitudinal exchange: an alternative strategy towards quantification of dynamics parameters in ZZ exchange spectroscopy

    International Nuclear Information System (INIS)

    Kloiber, Karin; Spitzer, Romana; Grutsch, Sarina; Kreutz, Christoph; Tollinger, Martin

    2011-01-01

    Longitudinal exchange experiments facilitate the quantification of the rates of interconversion between the exchanging species, along with their longitudinal relaxation rates, by analyzing the time-dependence of direct correlation and exchange cross peaks. Here we present a simple and robust alternative to this strategy, which is based on the combination of two complementary experiments, one with and one without resolving exchange cross peaks. We show that by combining the two data sets systematic errors that are caused by differential line-broadening of the exchanging species are avoided and reliable quantification of kinetic and relaxation parameters in the presence of additional conformational exchange on the ms–μs time scale is possible. The strategy is applied to a bistable DNA oligomer that displays different line-broadening in the two exchanging species.

  20. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use the social information and lacks the knowledge of the problem structure, which leads to insufficiency in both convergence speed and searching precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees to search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  1. Analytical applications of ion exchangers

    CERN Document Server

    Inczédy, J

    1966-01-01

    Analytical Applications of Ion Exchangers presents the laboratory use of ion-exchange resins. This book discusses the development in the analytical application of ion exchangers. Organized into 10 chapters, this book begins with an overview of the history and significance of ion exchangers for technical purposes. This text then describes the properties of ion exchangers, which are large molecular water-insoluble polyelectrolytes having a cross-linked structure that contains ionic groups. Other chapters consider the theories concerning the operation of ion-exchange resins and investigate th

  2. Foundations and latest advances in replica exchange transition interface sampling

    Science.gov (United States)

    Cabriolu, Raffaela; Skjelbred Refsnes, Kristin M.; Bolhuis, Peter G.; van Erp, Titus S.

    2017-10-01

    Nearly 20 years ago, transition path sampling (TPS) emerged as an alternative method to free energy based approaches for the study of rare events such as nucleation, protein folding, chemical reactions, and phase transitions. TPS effectively performs Monte Carlo simulations with relatively short molecular dynamics trajectories, with the advantage of not having to alter the actual potential energy surface nor the underlying physical dynamics. Although the TPS approach also introduced a methodology to compute reaction rates, this approach was for a long time considered theoretically attractive, providing the exact same results as extensively long molecular dynamics simulations, but still expensive for most relevant applications. With the increase of computer power and improvements in the algorithmic methodology, quantitative path sampling is finding applications in more and more areas of research. In particular, the transition interface sampling (TIS) and the replica exchange TIS (RETIS) algorithms have, in turn, improved the efficiency of quantitative path sampling significantly, while maintaining the exact nature of the approach. Also, open-source software packages are making these methods, for which implementation is not straightforward, now available for a wider group of users. In addition, a blooming development takes place regarding both applications and algorithmic refinements. Therefore, it is timely to explore the wide panorama of the new developments in this field. This is the aim of this article, which focuses on the most efficient exact path sampling approach, RETIS, as well as its recent applications, extensions, and variations.

  3. A filtered backprojection algorithm with characteristics of the iterative landweber algorithm

    OpenAIRE

    L. Zeng, Gengsheng

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.
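
    For reference, a minimal numpy sketch of the iterative Landweber algorithm whose behaviour the note's window function emulates; the system matrix, data, and step size here are illustrative assumptions.

```python
import numpy as np

def landweber(A, b, steps=50, s=None):
    """Iterate x_{k+1} = x_k + s * A^T (b - A x_k) from x_0 = 0."""
    if s is None:
        s = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 2/sigma_max^2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + s * A.T @ (b - A @ x)         # gradient step on ||Ax - b||^2
    return x
```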

  4. Information security in data exchange between mobile devices with Android system using RSA encryption

    Directory of Open Access Journals (Sweden)

    Fernando Solís

    2017-02-01

    Full Text Available The new styles and ways of life lead to greater use of wireless networks, with the mobile device as a tool for data transmission, which is susceptible to threats in the transmission channels of the network. IT security plays a very important role in guaranteeing the availability, privacy and integrity of information. One of the techniques that helps in this task is cryptography, whose foundation is to transform a message so that it is unintelligible except to those who have the key to decipher it. The research focuses on the use of the RSA algorithm between mobile devices; the encrypted data is sent through communication channels called threads which, through formulas and processes executed on the server, help to execute the encryption and decryption of the data. To carry this out, a prototype for the wireless exchange of data between mobile devices was designed and implemented, conducting performance tests with three nodes to improve security. The results show the efficiency and functionality of the algorithm; encryption and decryption times are fast compared with sending information without any method or algorithm.
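
    A minimal sketch of the RSA encryption/decryption pair underlying such an exchange, assuming the Python cryptography package with OAEP padding; the key size and message are illustrative.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
# The sender encrypts with the receiver's public key ...
ciphertext = receiver_key.public_key().encrypt(b"secret message", oaep)
# ... and only the receiver's private key can decrypt.
print(receiver_key.decrypt(ciphertext, oaep))
```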

  5. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    Science.gov (United States)

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making bad decisions in the decision-making process.

  6. Social dilemmas as exchange dilemmas

    NARCIS (Netherlands)

    Dijkstra, Jacob; van Assen, Marcel A.L.M.

    2016-01-01

    We develop a new paradigm to study social dilemmas, called exchange dilemmas. Exchange dilemmas arise from externalities of exchanges with third parties, and many real-life social dilemmas are more accurately modeled as exchange dilemmas rather than prisoner's dilemmas. Building on focusing and

  8. Update heat exchanger designing principles

    International Nuclear Information System (INIS)

    Lipets, A.U.; Yampol'skij, A.E.

    1985-01-01

    Updated heat exchanger design principles are analysed. Different coolant flow patterns in a heat exchanger are considered. It is suggested that flow rate irregularities in it be organized rationally. Applying, in heat exchanger design, measures that make use of really existing temperature and flow rate irregularities will permit heat exchanger efficiency to be improved. In some cases it is expedient to produce irregularities artificially. In this connection some heat exchanger design principles must now be reviewed

  9. IoT security with one-time pad secure algorithm based on the double memory technique

    Science.gov (United States)

    Wiśniewski, Remigiusz; Grobelny, Michał; Grobelna, Iwona; Bazydło, Grzegorz

    2017-11-01

    Secure encryption of data in the Internet of Things is especially important, as a great deal of information is exchanged every day and the number of attack vectors on IoT elements keeps increasing. In the paper a novel symmetric encryption method is proposed. The idea is based on the one-time pad technique. The proposed solution applies a double-memory concept to secure the transmitted data. The presented algorithm is intended as part of a communication protocol and has been initially validated against known security issues.
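
    The record gives the scheme only at a high level; the classical one-time pad primitive it builds on is a simple XOR of the message with a truly random, single-use key of equal length, sketched below (the paper's double-memory key management is not reproduced here):

```python
# One-time pad core: XOR the message with a random single-use key of equal
# length. This shows only the classical OTP primitive, not the paper's
# double-memory key-management scheme.
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(plaintext))        # key must never be reused
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = otp_encrypt(b"IoT sensor payload")
assert otp_decrypt(ct, key) == b"IoT sensor payload"
```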

  10. Compact heat exchanger for power plants

    International Nuclear Information System (INIS)

    Kinnunen, L.

    2001-01-01

    Vahterus Oy, located at Kalanti, has manufactured heat exchangers since the beginning of the 1990s. About 90% of the equipment produced is exported. In the PSHE (Plate and Shell) solution of the Vahterus heat exchanger, heat is transferred by round plates welded into a compact package, which is assembled into a cylindrical steel casing. The heat exchanger contains no gaskets or soldered joints, which eliminates the risk of leaks. Traditional tube heat exchangers can be operated at higher temperatures and pressures, but their heat transfer capacities are lower. Plate heat exchangers, on the other hand, are efficient, but their application range is narrow; additionally, the rubber gaskets sealing the joints of the heat-exchanging plates do not withstand high pressures, high temperatures, or corroding fluids. The new welded plate heat exchanger combines the pressure and temperature resistance of tube heat exchangers with the high heat exchange capacity of plate heat exchangers, and the corrosion-resistant construction suits especially hard conditions. The operating temperature range of the PSHE heat exchanger is -200 to 900 deg C, and its pressure resistance is as high as 100 bar. The space requirement of the PSHE is only one tenth of that of traditional tube heat exchangers. The capacity of the heat exchanger can be changed by adjusting the number of heat-exchanging plates, and the power range can be as high as 80 MW. Due to the corrosion-preventive construction and small dimensions, the PSHE heat exchanger can be used in refrigerators using ammonia as refrigerant. These new Vahterus heat exchangers are in use in 60 countries in more than 2000 refrigerators.

  11. Exchanging Description Logic Knowledge Bases

    NARCIS (Netherlands)

    Arenas, M.; Botoeva, E.; Calvanese, D.; Ryzhikov, V.; Sherkhonov, E.

    2012-01-01

    In this paper, we study the problem of exchanging knowledge between a source and a target knowledge base (KB), connected through mappings. Differently from the traditional database exchange setting, which considers only the exchange of data, we are interested in exchanging implicit knowledge. As

  12. Methodological approach for evaluating the geo-exchange potential: VIGOR Project

    Directory of Open Access Journals (Sweden)

    Antonio Galgaro

    2012-12-01

    Full Text Available In the framework of the VIGOR Project — a national project coordinated by the Institute of Geosciences and Earth Resources (CNR-IGG), sponsored by the Ministry of Economic Development (MiSE), and dedicated to the evaluation of geothermal potential in the regions of the Convergence Objective in Italy (Puglia, Calabria, Campania and Sicily) — the capacity of the territory for heat exchange with the ground for the air conditioning of buildings is to be evaluated. To identify the conditions for the development of low-enthalpy geothermal systems, geological and stratigraphic data useful for the preparation of specific thematic maps were collected and organized on a regional scale, so as to represent in a synergistic and simplified way the physical parameters (geological, lithostratigraphic, hydrogeological, thermodynamic) that most influence the behaviour of the subsoil in thermal exchange. The litho-stratigraphic and hydrogeological database created for each region led to the production of different cartographic thematic maps, such as thermal conductivity (lithological and stratigraphical), surface geothermal flux, average annual air temperature, climate zoning, and areas under hydrogeological restrictions. To obtain a single representation of the geo-exchange potential of a region, the different thematic maps must be combined by means of an algorithm, defined on the basis of the SINTACS methodology, whose purpose is to weigh the contributions of the parameters involved and to produce a preliminary synthesis map able to identify the territorial suitability of geothermal heat pump systems, based on geological characteristics and in agreement with existing regulatory constraints.
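
    The record names the SINTACS-derived combination algorithm only in outline. A hedged sketch of such a weighted map overlay follows; the layer names, ratings and weights are illustrative placeholders, not the project's values:

```python
# Weighted overlay of rasterized thematic maps into a geo-exchange potential
# index, in the spirit of SINTACS-style point-count methods. Layer names,
# ratings and weights are illustrative assumptions, not VIGOR's values.
import numpy as np

# Each layer: a grid of ratings (1-10) already derived from its thematic map.
layers = {
    "thermal_conductivity": np.random.default_rng(1).integers(1, 11, (100, 100)),
    "geothermal_flux":      np.random.default_rng(2).integers(1, 11, (100, 100)),
    "air_temperature":      np.random.default_rng(3).integers(1, 11, (100, 100)),
}
weights = {"thermal_conductivity": 0.5, "geothermal_flux": 0.3,
           "air_temperature": 0.2}

potential = sum(weights[name] * grid for name, grid in layers.items())

# Mask out cells under hydrogeological restriction (illustrative empty mask).
restricted = np.zeros((100, 100), dtype=bool)
potential = np.where(restricted, np.nan, potential)
```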

  13. GENERALIZED MATRIXES OF GALOIS PROTOCOLS EXCHANGE ENCRYPTION KEYS

    Directory of Open Access Journals (Sweden)

    Anatoly Beletsky

    2016-03-01

    Full Text Available Methods are considered for constructing the matrices that form the secret encryption keys shared by legitimate subscribers of public communications networks. The key exchange protocols are based on asymmetric cryptography algorithms: the solution involves the computation of one-way functions and relies on generalized Galois matrices, whose isomorphism relationship with the forming elements depends on the irreducible polynomial selected for the generating matrix. A simple method is given for constructing a generalized Galois matrix by filling along the diagonal. In order to eliminate the isomorphism between the Galois matrices and their constituent elements, which limits the possibility of building one-way functions, the Galois matrices are subjected to a similarity transformation carried out by means of permutation matrices. A variant of an algebraic attack on the key exchange protocols is presented, and options for mitigating the consequences of such an attack are discussed.
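
    The record does not spell the protocol out. The general pattern of a key exchange built on commuting matrix powers — a Diffie–Hellman-style construction, sketched here over a prime field with toy parameters chosen purely for illustration, not the paper's generalized-Galois-matrix scheme — is:

```python
# Diffie-Hellman-style key agreement from commuting matrix powers over GF(p).
# Illustrative construction only -- not the generalized-Galois-matrix protocol
# of the paper; parameters are toy-sized and insecure.
import numpy as np

P = 2_147_483_647  # prime modulus (assumption)

def mat_pow(M, e, p):
    """Square-and-multiply exponentiation of a matrix modulo p."""
    R = np.eye(M.shape[0], dtype=object)
    M = M % p
    while e:
        if e & 1:
            R = np.dot(R, M) % p
        M = np.dot(M, M) % p
        e >>= 1
    return R

G = np.array([[1, 2], [3, 5]], dtype=object)  # public base matrix (toy)
a, b = 123_457, 654_321                        # private exponents

A = mat_pow(G, a, P)        # Alice publishes G^a
B = mat_pow(G, b, P)        # Bob publishes G^b
K_alice = mat_pow(B, a, P)  # (G^b)^a
K_bob   = mat_pow(A, b, P)  # (G^a)^b
assert (K_alice == K_bob).all()  # powers of the same matrix commute
```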

  14. Estimation of Exchange Rate Volatility using APARCH-type Models: A Case Study of Indonesia (2010–2015

    Directory of Open Access Journals (Sweden)

    Didit B Nugroho

    2017-02-01

    Full Text Available Volatility measurement and modeling is an important aspect in many areas of finance. The main purpose of this study is to apply seven APARCH-type models with (1,1) lags to investigate the behavior of exchange rate volatility for the EUR, JPY, and USD selling exchange rates against the IDR over the period from January 2010 to December 2015. The competing models include ARCH, GARCH, TARCH, TS-ARCH, GJR-GARCH, NARCH, and APARCH, used with the Gaussian normal distribution. In order to estimate the model parameters, this study applies Bayesian inference using the adaptive random walk Metropolis method in the MCMC algorithm. Empirical results based on the deviance information criterion indicate that the GARCH(1,1), APARCH(1,1), and TARCH(1,1) models provide the best fit for the EUR, JPY, and USD data, respectively. In those models, both the JPY and USD data have a significant negative leverage effect at the 99% credible level. Moreover, the JPY returns also have a significant Taylor effect in return volatility at the 99% credible level. Keywords: APARCH, ARWM, IDR exchange rate, MCMC, volatility
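
    For readers unfamiliar with the (1,1) notation: all of these models extend the GARCH(1,1) conditional-variance recursion sigma^2_t = omega + alpha * eps^2_{t-1} + beta * sigma^2_{t-1}. A minimal sketch of that recursion follows; the parameter values are illustrative, and the study's Bayesian ARWM estimation is not reproduced:

```python
# GARCH(1,1) conditional-variance recursion on a return series.
# Parameters here are illustrative; the study estimates them by Bayesian
# adaptive random walk Metropolis, which this sketch does not reproduce.
import numpy as np

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90):
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()               # initialize at sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(42)
r = rng.normal(0, 0.01, size=1500)          # stand-in for IDR return data
vol = np.sqrt(garch11_variance(r))          # conditional volatility path
```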

  15. A Probabilistic and Highly Efficient Topology Control Algorithm for Underwater Cooperating AUV Networks.

    Science.gov (United States)

    Li, Ning; Cürüklü, Baran; Bastos, Joaquim; Sucasas, Victor; Fernandez, Jose Antonio Sanchez; Rodriguez, Jonathan

    2017-05-04

    The aim of the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) project is to make autonomous underwater vehicles (AUVs), remote operated vehicles (ROVs) and unmanned surface vehicles (USVs) more accessible and useful. To achieve cooperation and communication between different AUVs, these must be able to exchange messages, so an efficient and reliable communication network is necessary for SWARMs. In order to provide an efficient and reliable communication network for mission execution, one of the important and necessary issues is the topology control of the network of AUVs that are cooperating underwater. However, due to the specific properties of an underwater AUV cooperation network, such as the high mobility of AUVs, large transmission delays, low bandwidth, etc., the traditional topology control algorithms primarily designed for terrestrial wireless sensor networks cannot be used directly in the underwater environment. Moreover, these algorithms, in which the nodes adjust their transmission power once the current transmission power does not equal an optimal one, are costly in an underwater cooperating AUV network. Considering these facts, in this paper, we propose a Probabilistic Topology Control (PTC) algorithm for an underwater cooperating AUV network. In PTC, when the transmission power of an AUV is not equal to the optimal transmission power, whether the transmission power needs to be adjusted or not is determined based on the AUV's parameters. Each AUV determines its own transmission power adjustment probability based on the parameter deviations. The larger the deviation, the higher the transmission power adjustment probability is, and vice versa. For evaluating the performance of PTC, we combine the PTC algorithm with the Fuzzy logic Topology Control (FTC) algorithm and compare the performance of these two algorithms. The simulation results have demonstrated that the PTC is efficient at reducing the transmission power

  16. A Probabilistic and Highly Efficient Topology Control Algorithm for Underwater Cooperating AUV Networks

    Directory of Open Access Journals (Sweden)

    Ning Li

    2017-05-01

    Full Text Available The aim of the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) project is to make autonomous underwater vehicles (AUVs), remote operated vehicles (ROVs) and unmanned surface vehicles (USVs) more accessible and useful. To achieve cooperation and communication between different AUVs, these must be able to exchange messages, so an efficient and reliable communication network is necessary for SWARMs. In order to provide an efficient and reliable communication network for mission execution, one of the important and necessary issues is the topology control of the network of AUVs that are cooperating underwater. However, due to the specific properties of an underwater AUV cooperation network, such as the high mobility of AUVs, large transmission delays, low bandwidth, etc., the traditional topology control algorithms primarily designed for terrestrial wireless sensor networks cannot be used directly in the underwater environment. Moreover, these algorithms, in which the nodes adjust their transmission power once the current transmission power does not equal an optimal one, are costly in an underwater cooperating AUV network. Considering these facts, in this paper, we propose a Probabilistic Topology Control (PTC) algorithm for an underwater cooperating AUV network. In PTC, when the transmission power of an AUV is not equal to the optimal transmission power, whether the transmission power needs to be adjusted or not is determined based on the AUV's parameters. Each AUV determines its own transmission power adjustment probability based on the parameter deviations. The larger the deviation, the higher the transmission power adjustment probability is, and vice versa. For evaluating the performance of PTC, we combine the PTC algorithm with the Fuzzy logic Topology Control (FTC) algorithm and compare the performance of these two algorithms. The simulation results have demonstrated that the PTC is efficient at reducing the transmission power
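
    Neither record gives the probability rule in closed form; the mechanism described — an adjustment probability that grows with the deviation of the current transmission power from the optimum — can be sketched as follows, where the mapping from deviation to probability is an assumed form rather than the paper's exact rule:

```python
# Probabilistic transmission-power adjustment in the spirit of PTC: the larger
# the deviation from the optimal power, the more likely a (costly) adjustment.
# The deviation-to-probability mapping is an assumed illustrative form.
import random

def adjustment_probability(current_power, optimal_power, sensitivity=2.0):
    deviation = abs(current_power - optimal_power) / optimal_power
    return min(1.0, sensitivity * deviation)   # clamp to a valid probability

def maybe_adjust(current_power, optimal_power):
    if random.random() < adjustment_probability(current_power, optimal_power):
        return optimal_power                   # pay the cost, adjust
    return current_power                       # keep power, save energy

power = 10.0                                   # watts, illustrative
power = maybe_adjust(power, optimal_power=8.0)
```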

  17. Social dilemmas as exchange dilemmas

    OpenAIRE

    Dijkstra, J.; van Assen, M.A.L.M.

    2016-01-01

    We develop a new paradigm to study social dilemmas, called exchange dilemmas. Exchange dilemmas arise from externalities of exchanges with third parties, and many real-life social dilemmas are more accurately modeled as exchange dilemmas rather than prisoner's dilemmas. Building on focusing and framing research we predict that defection is omnipresent in exchange dilemmas, which is corroborated in two very different experiments. Our results suggest that the fundamental problem of cooperation in...

  18. Classical Methods and Calculation Algorithms for Determining Lime Requirements

    Directory of Open Access Journals (Sweden)

    André Guarçoni

    Full Text Available ABSTRACT The methods developed for determination of lime requirements (LR) are based on widely accepted principles. However, the formulas used for calculation have evolved little over recent decades, and in some cases there are indications of their inadequacy. The aim of this study was to compare the lime requirements calculated by three classic formulas and three algorithms, identifying those most appropriate for supplying Ca and Mg to coffee plants with the smallest chance of causing overliming. The database used contained 600 soil samples collected in coffee plantings. The LR was estimated by the methods of base saturation, neutralization of Al3+, and elevation of Ca2+ and Mg2+ contents (two formulas), and by the three calculation algorithms. Averages of the lime requirements were compared, determining the frequency distribution of the 600 lime requirements (LR) estimated through each calculation method. In soils with low cation exchange capacity at pH 7, the base saturation method may fail to adequately supply the plants with Ca and Mg in many situations, while the method of Al3+ neutralization and elevation of Ca2+ and Mg2+ contents can result in application rates that will increase the pH above the suitable range. Among the methods studied for calculating lime requirements, the algorithm that predicts reaching a defined base saturation, with adequate Ca and Mg supply and the maximum application rate limited to the H+Al value, proved to be the most efficient calculation method, and it can be recommended for use under numerous crop conditions.
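
    For concreteness, the base-saturation method referred to above is commonly computed from the target and current base saturations and the CEC at pH 7. A short sketch follows; this is the widely used textbook form with a PRNT correction, offered as an illustration rather than as the exact formulas compared in the study:

```python
# Classical base-saturation lime requirement (t/ha for a 0-20 cm layer),
# a widely used form of the method named in the record; values illustrative.
def lime_requirement(v_target, v_current, cec_ph7, prnt=100.0):
    """
    v_target, v_current : base saturation, % (target and measured)
    cec_ph7             : cation exchange capacity at pH 7, cmolc/dm3
    prnt                : relative neutralizing power of the liming material, %
    """
    lr = (v_target - v_current) * cec_ph7 / 100.0  # t/ha at 100% PRNT
    return max(0.0, lr * 100.0 / prnt)             # correct for actual PRNT

print(lime_requirement(v_target=60, v_current=35, cec_ph7=8.0, prnt=90))
```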

  19. Apparatus and process for deuterium exchange

    International Nuclear Information System (INIS)

    Ergenc, M.S.

    1976-01-01

    The deuterium exchange plant is combined with an absorption refrigeration plant in order to improve the exchange process and to produce refrigeration. The refrigeration plant has a throttling means for expanding and cooling a portion of the liquid exchange medium separated in the exchange plant, an evaporator in which the said liquid exchange medium is brought into heat exchange with a cold consumer device, absorption means for forming a solution of the used exchange medium and fresh water, and a pump for pumping the solution into the exchange plant.

  20. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
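
    The record outlines the pipeline (object removal, line enhancement, then the Hough transform) without code. The final Hough step on an already-preprocessed frame might be sketched with OpenCV as follows; the file name and threshold values are illustrative assumptions:

```python
# Hough line detection on a preprocessed astronomical frame, the final step
# of the pipeline the record describes. Thresholds are illustrative.
import cv2
import numpy as np

# "preprocessed_frame.png" is a hypothetical file with stars/galaxies removed.
img = cv2.imread("preprocessed_frame.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)            # edge map feeds the transform
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=60, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print(f"trail candidate: ({x1},{y1}) -> ({x2},{y2})")
```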

  1. Exchange, x-ray and IR spectral behaviour of lanthanum and praseodymium exchanged zeolite X

    International Nuclear Information System (INIS)

    Das, D.; Upreti, M.C.

    1995-01-01

    Exchange behaviour of lanthanum and praseodymium ions in zeolite X involves three steps: preferential exchange, intrazeolitic exchange and irreversible exchange. At room temperature, higher exchange has been observed with La(III) than with Pr(III), which is attributed to the smaller hydrosphere of lanthanum compared with praseodymium. IR spectra of these zeolites in KBr pellets show a shift of the major Si-O stretching vibration at 972 cm-1 to higher frequencies. Their x-ray diffraction patterns remain unchanged except for a large decrease in line intensities caused by the absorption of x-rays by the heavy La(III) and Pr(III) ions. The present study reports the preparation and physicochemical properties of lanthanum and praseodymium exchanged zeolite X. (author). 12 refs., 3 figs., 3 tabs

  2. Simulation of Automated Vehicles' Drive Cycles

    Science.gov (United States)

    2018-02-28

    This research has two objectives: 1) To develop algorithms for plausible and legally-justifiable freeway car-following and arterial-street gap acceptance driving behavior for AVs 2) To implement these algorithms on a representative road network, in o...

  3. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine, a trigonometric function. In the algorithm, as many random individuals as there are search agents are created, with a uniform distribution over each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current solution closer to the target value. The solution space is narrowed by the golden section so that only the areas expected to give good results are scanned instead of the whole solution space. In the tests performed, Gold-SA obtains better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods, and its faster convergence increases the importance of the method.
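
    The record describes the update rule only verbally. A hedged reconstruction of a sine-driven, golden-section-scaled position update in the spirit of Gold-SA follows; the exact coefficient scheme is an assumption and should be verified against the published paper:

```python
# Sketch of a Golden Sine style position update: sine-driven moves toward the
# best solution, with golden-section coefficients shrinking the search region.
# Reconstructed from the verbal description; coefficients are assumptions.
import math
import random

TAU = (math.sqrt(5) - 1) / 2            # golden ratio conjugate

def gold_sa_minimize(f, dim, bounds, agents=30, iters=200):
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(agents)]
    best = min(X, key=f)
    a, b = -math.pi, math.pi            # golden-section interval (assumed)
    x1 = a * (1 - TAU) + b * TAU
    x2 = a * TAU + b * (1 - TAU)
    for _ in range(iters):
        for i, v in enumerate(X):
            r1 = random.uniform(0, 2 * math.pi)
            r2 = random.uniform(0, math.pi)
            new = [vj * abs(math.sin(r1))
                   - r2 * math.sin(r1) * abs(x1 * bj - x2 * vj)
                   for vj, bj in zip(v, best)]
            X[i] = [min(hi, max(lo, c)) for c in new]  # keep in bounds
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand
    return best

sphere = lambda x: sum(c * c for c in x)
print(gold_sa_minimize(sphere, dim=5, bounds=(-10, 10)))
```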

  4. Isotopically exchangeable phosphorus

    International Nuclear Information System (INIS)

    Barbaro, N.O.

    1984-01-01

    A critical review of isotope dilution is presented. The concepts and use of exchangeable phosphorus, phosphate adsorption, the kinetics of isotopic exchange and the equilibrium time in soils are discussed. (M.A.C.) [pt

  5. Optimization of heat pump system in indoor swimming pool using particle swarm algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Wen-Shing; Kung, Chung-Kuan [Department of Energy and Refrigerating Air-Conditioning Engineering, National Taipei University of Technology, 1, Section 3, Chung-Hsiao East Road, Taipei (China)

    2008-09-15

    When it comes to indoor swimming pool facilities, a large amount of energy is required to heat low-temperature outdoor air before it is introduced indoors to maintain indoor humidity. Since water evaporates from the pool surface, the exhaust air carries more moisture and a higher specific enthalpy. To recover energy from this indoor air, heat pumps are generally used for heat recovery in indoor swimming pools. To reduce the cost of energy consumption, this paper utilizes a particle swarm algorithm to optimize the design of the heat pump system. The optimized parameters include continuous parameters and discrete parameters: the former consist of the outdoor air mass flow and the heat conductance of the heat exchangers, while the latter comprise the compressor type and boiler type. In a case study, life cycle energy cost is considered as the objective function. On this basis, the optimized outdoor air flow and the optimized design for the heating system can be deduced using the particle swarm algorithm. (author)
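
    A minimal particle swarm sketch for such a mixed design problem — two continuous variables (air flow, heat conductance) plus a discrete equipment index rounded from a continuous coordinate — follows. The cost function and variable ranges are placeholders, not the paper's life-cycle model:

```python
# Minimal PSO over mixed design variables: two continuous (air flow, UA) and
# one discrete (equipment type, rounded from a continuous coordinate).
# Cost model and variable ranges are placeholders, not the paper's model.
import random

def cost(x):
    flow, ua, equip = x[0], x[1], round(x[2])
    return (flow - 3.0) ** 2 + (ua - 50.0) ** 2 / 100 + [5, 2, 7][equip]

LO, HI = [0.5, 10.0, 0.0], [6.0, 100.0, 2.0]

def pso(n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(l, h) for l, h in zip(LO, HI)] for _ in range(n)]
    vel = [[0.0] * 3 for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n):
            for d in range(3):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(HI[d], max(LO[d], pos[i][d] + vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=cost)
    return gbest

print(pso())
```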

  6. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.

  7. Is a more stable exchange rate associated with reduced exchange rate pass-through?

    OpenAIRE

    Mark J. Holmes

    2007-01-01

    Pass-through from the nominal effective exchange rate to import prices is modelled within a regime-switching environment. Evidence suggests that exchange rate pass through can be characterised as regime-specific where the probability of switching between regimes is influenced by the extent of exchange rate volatility.

  8. Exchange of Information in Tax Matters

    Directory of Open Access Journals (Sweden)

    Paweł Szwajdler

    2017-01-01

    Full Text Available The main aim of this paper is to present issues related to the exchange of tax information. The author focuses on models of exchange of information and the boundaries of obligations with reference to the above-mentioned problems. Automatic exchange of information, spontaneous exchange of information and exchange of information on request are analysed in this work on the basis of the OECD Convention on Mutual Administrative Assistance in Tax Matters, Council Directive 2011/16 and the OECD Model Agreement on Exchange of Information in Tax Matters. In the summary, it is shown that the most efficient method of exchange of tax information is automatic exchange of information. Furthermore, it is stated that exchange on request can be related to negative phenomena such as fishing expeditions, while spontaneous exchange of information is thought to play only a supportive role. The author also considers that the boundaries of exchange of information in tax matters were regulated so as to protect the jeopardised raison d'État.

  9. Bretton Woods Fixed Exchange Rate System versus Floating Exchange Rate System

    OpenAIRE

    Geza, Paula; Giurca Vasilescu, Laura

    2011-01-01

    One of the most important issues of monetary policy is to find out whether the state should intervene in exchange rates, taking into account the fact that changes in exchange rates represent a significant transmission channel for the effects generated by monetary policy. Taking into consideration the failure of fixed exchange rate regimes and the recent improvement of financial markets, the return in the near future to such a regime – as for example the Bretton Woods system –...

  10. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  11. Hydrogen exchange

    DEFF Research Database (Denmark)

    Jensen, Pernille Foged; Rand, Kasper Dyrberg

    2016-01-01

    Hydrogen exchange (HX) monitored by mass spectrometry (MS) is a powerful analytical method for investigation of protein conformation and dynamics. HX-MS monitors isotopic exchange of hydrogen in protein backbone amides and thus serves as a sensitive method for probing protein conformation...... and dynamics along the entire protein backbone. This chapter describes the exchange of backbone amide hydrogen which is highly quenchable as it is strongly dependent on the pH and temperature. The HX rates of backbone amide hydrogen are sensitive and very useful probes of protein conformation......, as they are distributed along the polypeptide backbone and form the fundamental hydrogen-bonding networks of basic secondary structure. The effect of pressure on HX in unstructured polypeptides (poly-dl-lysine and oxidatively unfolded ribonuclease A) and native folded proteins (lysozyme and ribonuclease A) was evaluated...

  12. Anion exchange membrane

    Science.gov (United States)

    Verkade, John G; Wadhwa, Kuldeep; Kong, Xueqian; Schmidt-Rohr, Klaus

    2013-05-07

    An anion exchange membrane and fuel cell incorporating the anion exchange membrane are detailed in which proazaphosphatrane and azaphosphatrane cations are covalently bonded to a sulfonated fluoropolymer support along with anionic counterions. A positive charge is dispersed in the aforementioned cations which are buried in the support to reduce the cation-anion interactions and increase the mobility of hydroxide ions, for example, across the membrane. The anion exchange membrane has the ability to operate at high temperatures and in highly alkaline environments with high conductivity and low resistance.

  13. Radial flow heat exchanger

    Science.gov (United States)

    Valenzuela, Javier

    2001-01-01

    A radial flow heat exchanger (20) having a plurality of first passages (24) for transporting a first fluid (25) and a plurality of second passages (26) for transporting a second fluid (27). The first and second passages are arranged in stacked, alternating relationship, are separated from one another by relatively thin plates (30) and (32), and surround a central axis (22). The thicknesses of the first and second passages are selected so that the first and second fluids, respectively, are transported with laminar flow through the passages. To enhance thermal energy transfer between the first and second passages, the latter are arranged so each first passage is in thermal communication with an associated second passage along substantially its entire length, and vice versa with respect to the second passages. The heat exchangers may be stacked to achieve a modular heat exchange assembly (300). Certain heat exchangers in the assembly may be designed slightly differently than other heat exchangers to address changes in fluid properties during transport through the heat exchanger, so as to enhance the overall thermal effectiveness of the assembly.

  14. Two-component mixture model: Application to palm oil and exchange rate

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-12-01

    Palm oil is a seed crop which is widely adopted for food and non-food products such as cookies, vegetable oil, cosmetics, household products and others. Palm oil is grown mainly in Malaysia and Indonesia. However, the demand for palm oil has grown rapidly over the years, and this phenomenon has caused illegal logging of trees and destruction of the natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. In addition, this paper proposes a mixture of normal distributions to accommodate the asymmetry and platykurtosis characteristics of the time series data.
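
    A two-component normal mixture of the kind proposed can be fitted with a short EM routine; EM updates are shown here for compactness, whereas the record's study maximizes the likelihood with Newton-Raphson:

```python
# Fitting a two-component normal mixture to a series. EM updates are used
# for brevity; the record's study uses Newton-Raphson for the MLE instead.
import numpy as np

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def fit_mixture2(x, iters=200):
    # Crude initialization from sample moments.
    w = 0.5
    mu1, mu2 = x.mean() - x.std(), x.mean() + x.std()
    sd1 = sd2 = x.std()
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation.
        p1 = w * normal_pdf(x, mu1, sd1)
        p2 = (1 - w) * normal_pdf(x, mu2, sd2)
        r = p1 / (p1 + p2)
        # M-step: weighted moment updates.
        w = r.mean()
        mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
        sd1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
        sd2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))
    return w, (mu1, sd1), (mu2, sd2)

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 0.5, 300)])
print(fit_mixture2(data))
```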

  15. Exchange Studies as Actor-Networks: Following Korean Exchange Students in Swedish Higher Education

    Science.gov (United States)

    Ahn, Song-ee

    2011-01-01

    This article explores how Korean exchange students organized their studies during exchange programs in Swedish higher education. For most students, the programs became a disordered period in relation to their education. The value of exchange studies seems mainly to be extra-curricular. Drawing upon actor network theory, the article argues that the…

  16. Imaging of endogenous exchangeable proton signals in the human brain using frequency labeled exchange transfer imaging.

    Science.gov (United States)

    Yadav, Nirbhay N; Jones, Craig K; Hua, Jun; Xu, Jiadi; van Zijl, Peter C M

    2013-04-01

    To image endogenous exchangeable proton signals in the human brain using a recently reported method called frequency labeled exchange transfer (FLEX) MRI. As opposed to labeling exchangeable protons using saturation (i.e., chemical exchange saturation transfer, or CEST), FLEX labels exchangeable protons with their chemical shift evolution. The use of short high-power frequency pulses allows more efficient labeling of rapidly exchanging protons, while time domain acquisition allows removal of contamination from semi-solid magnetization transfer effects. FLEX-based exchangeable proton signals were detected in human brain over the 1-5 ppm frequency range from water. Conventional magnetization transfer contrast and the bulk water signal did not interfere in the FLEX spectrum. The information content of these signals differed from in vivo CEST data in that the average exchange rate of these signals was 350-400 s(-1) , much faster than the amide signal usually detected using direct saturation (∼30 s(-1) ). Similarly, fast exchanging protons could be detected in egg white in the same frequency range where amide and amine protons of mobile proteins and peptides are known to resonate. FLEX MRI in the human brain preferentially detects more rapidly exchanging amide/amine protons compared to traditional CEST experiments, thereby changing the information content of the exchangeable proton spectrum. This has the potential to open up different types of endogenous applications as well as more easy detection of rapidly exchanging protons in diaCEST agents or fast exchanging units such as water molecules in paracest agents without interference of conventional magnetization transfer contrast. Copyright © 2013 Wiley Periodicals, Inc.

  17. Hibernation and gas exchange.

    Science.gov (United States)

    Milsom, William K; Jackson, Donald C

    2011-01-01

    Hibernation in endotherms and ectotherms is characterized by an energy-conserving metabolic depression due to low body temperatures and poorly understood temperature-independent mechanisms. Rates of gas exchange are correspondingly reduced. In hibernating mammals, ventilation falls even more than metabolic rate, leading to a relative respiratory acidosis that may contribute to metabolic depression. Breathing in some mammals becomes episodic, and in some small mammals significant apneic gas exchange may occur by passive diffusion via airways or skin. In ectothermic vertebrates, extrapulmonary gas exchange predominates, and in reptiles and amphibians hibernating underwater it accounts for all gas exchange. In aerated water, diffusive exchange permits amphibians and many species of turtles to remain fully aerobic, but hypoxic conditions can challenge many of these animals. Oxygen uptake into blood in both endotherms and ectotherms is enhanced by the increased affinity of hemoglobin for O₂ at low temperature. Regulation of gas exchange in hibernating mammals is predominantly linked to CO₂/pH, and in episodic breathers, control is principally directed at the duration of the apneic period. Control in submerged hibernating ectotherms is poorly understood, although skin-diffusing capacity may increase under hypoxic conditions. In aerated water, the blood pH of frogs and turtles either adheres to alphastat regulation (pH ∼8.0) or may even exhibit respiratory alkalosis. Arousal in hibernating mammals leads to restoration of euthermic temperature, metabolic rate, and gas exchange and occurs periodically even as ambient temperatures remain low, whereas the body temperature, metabolic rate, and gas exchange of hibernating ectotherms are tightly linked to ambient temperature. © 2011 American Physiological Society.

  18. Mastering Microsoft Exchange Server 2010

    CERN Document Server

    McBee, Jim

    2010-01-01

    A top-selling guide to Exchange Server-now fully updated for Exchange Server 2010. Keep your Microsoft messaging system up to date and protected with the very newest version, Exchange Server 2010, and this comprehensive guide. Whether you're upgrading from Exchange Server 2007 SP1 or earlier, installing for the first time, or migrating from another system, this step-by-step guide provides the hands-on instruction, practical application, and real-world advice you need.: Explains Microsoft Exchange Server 2010, the latest release of Microsoft's messaging system that protects against spam and vir

  19. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  20. Prediction of the heat transfer rate of a single layer wire-on-tube type heat exchanger using ANFIS

    Energy Technology Data Exchange (ETDEWEB)

    Hayati, Mohsen [Electrical Engineering Department, Faculty of Engineering, Razi University, Tagh-E-Bostan, Kermanshah 67149 (Iran); Computational Intelligence Research Center, Razi University, Tagh-E-Bostan, Kermanshah 67149 (Iran); Rezaei, Abbas; Seifi, Majid [Electrical Engineering Department, Faculty of Engineering, Razi University, Tagh-E-Bostan, Kermanshah 67149 (Iran)

    2009-12-15

    In this paper, we applied an Adaptive Neuro-Fuzzy Inference System (ANFIS) model for prediction of the heat transfer rate of the wire-on-tube type heat exchanger. Limited experimental data were used for training and testing the ANFIS configuration with the help of a hybrid learning algorithm consisting of backpropagation and least-squares estimation. The predicted values are found to be in good agreement with the actual values from the experiments, with a mean relative error of less than 2.55%. Also, we compared the proposed ANFIS model to an ANN approach. Results show that the ANFIS model is more accurate than the ANN approach. Therefore, ANFIS models can be used to predict the performance of thermal systems in engineering applications, such as modeling heat exchangers for heat transfer analysis. (author)

  1. Study of ion exchange equilibrium and determination of heat of ion exchange by ion chromatography

    International Nuclear Information System (INIS)

    Liu Kailu; Yang Wenying

    1996-01-01

    Ion chromatography using pellicular ion exchange resins and dilute solutions can be applied to the study of ion exchange thermodynamics and kinetics. An ion exchange equilibrium equation was obtained and examined by experiment. Based on the ion exchange equilibrium, the influence of eluent concentration and resin capacity on adjusted retention volumes was examined. The effect of temperature on adjusted retention volumes was investigated, and the heats of ion exchange of seven anions were determined by ion chromatography. The interaction between the anions and the skeleton structure of the resins was also observed

  2. Progress in liquid ion exchangers

    International Nuclear Information System (INIS)

    Nakagawa, Genkichi

    1974-01-01

    Review is made on the extraction with anion exchangers and the extraction with liquid cation exchangers. On the former, explanation is made on the extraction of acids, the relation between anion exchange and the extraction of metals, the composition of the metallic complexes that are extracted, and the application of the extraction with anion exchangers to analytical chemistry. On the latter, explanation is made on the extraction of metals and its application to analytical chemistry. The extraction with liquid ion exchangers is suitable for the operation in chromatography, because the distribution of extracting agents into aqueous phase is small, and extraction equilibrium is quickly reached, usually within 1 to several minutes. The separation by means of anion exchangers is usually made from hydrochloric acid solution. For example, Brinkman et al. determined Rf values for more than 50 elements by thin layer chromatography. Tables are given for showing the structure of the liquid ion exchangers and the polymerized state of various amines. (Mori, K.)

  3. The missing link between sleep disorders and age-related dementia: recent evidence and plausible mechanisms.

    Science.gov (United States)

    Zhang, Feng; Zhong, Rujia; Li, Song; Chang, Raymond Chuen-Chung; Le, Weidong

    2017-05-01

    Sleep disorders are among the most common clinical problems and possess a significant concern for the geriatric population. More importantly, while around 40% of elderly adults have sleep-related complaints, sleep disorders are more frequently associated with co-morbidities including age-related neurodegenerative diseases and mild cognitive impairment. Recently, increasing evidence has indicated that disturbed sleep may not only serve as the consequence of brain atrophy, but also contribute to the pathogenesis of dementia and, therefore, significantly increase dementia risk. Since the current therapeutic interventions lack efficacies to prevent, delay or reverse the pathological progress of dementia, a better understanding of underlying mechanisms by which sleep disorders interact with the pathogenesis of dementia will provide possible targets for the prevention and treatment of dementia. In this review, we briefly describe the physiological roles of sleep in learning/memory, and specifically update the recent research evidence demonstrating the association between sleep disorders and dementia. Plausible mechanisms are further discussed. Moreover, we also evaluate the possibility of sleep therapy as a potential intervention for dementia.

  4. Road Safety Peer Exchange for Tribal Governments : an RSPCB Peer Exchange

    Science.gov (United States)

    2014-12-01

    This report provides a summary of the proceedings of the Road Safety Peer Exchange for Tribal : Governments held in Albuquerque, New Mexico on December 9th and 10th, 2014. The peer exchange : brought together safety practitioners from across the Unit...

  5. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
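
    The report's actual scoring equations are not reproduced in the record. A generic weighted-scoring sketch of the kind it describes follows; the weights and the equivalency table are placeholder assumptions, not the AIP's values:

```python
# Generic weighted nuclide-identification scoring of the kind the record
# describes: per-trial scores combine nuclide weights, nuclide equivalencies
# and a configuration weighting factor. All values here are placeholders.
NUCLIDE_WEIGHT = {"Cs-137": 1.0, "Co-60": 1.0, "HEU": 2.0}
EQUIVALENT = {"U-235": "HEU"}          # identified label -> scored label

def score_trial(truth, identified, config_weight=1.0):
    identified = {EQUIVALENT.get(n, n) for n in identified}
    hits = truth & identified
    misses = truth - identified
    false_ids = identified - truth
    s = sum(NUCLIDE_WEIGHT.get(n, 1.0) for n in hits)
    s -= sum(NUCLIDE_WEIGHT.get(n, 1.0) for n in misses | false_ids)
    return config_weight * s

# One trial: truth is Cs-137 + HEU; algorithm reports Cs-137, U-235, Co-60.
print(score_trial({"Cs-137", "HEU"}, {"Cs-137", "U-235", "Co-60"}))
```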

  6. APPLYING ARTIFICIAL NEURAL NETWORK OPTIMIZED BY FIREWORKS ALGORITHM FOR STOCK PRICE ESTIMATION

    Directory of Open Access Journals (Sweden)

    Khuat Thanh Tung

    2016-04-01

    Full Text Available Stock prediction is to determine the future value of a company's stock traded on an exchange. It plays a crucial role in raising the profit gained by firms and investors. Over the past few years, many methods have been developed, with plenty of effort focused on machine learning frameworks, achieving promising results. In this paper, an approach based on an Artificial Neural Network (ANN) optimized by the Fireworks algorithm, with data preprocessing by the Haar Wavelet, is applied to estimate stock prices. The system was trained and tested with real data of various companies collected from Yahoo Finance. The obtained results are encouraging.

  7. Mapping Global Ocean Surface Albedo from Satellite Observations: Models, Algorithms, and Datasets

    Science.gov (United States)

    Li, X.; Fan, X.; Yan, H.; Li, A.; Wang, M.; Qu, Y.

    2018-04-01

    Ocean surface albedo (OSA) is one of the important parameters in the surface radiation budget (SRB). It is usually considered a controlling factor of the heat exchange between the atmosphere and ocean. The temporal and spatial dynamics of OSA determine the energy absorption of upper-level ocean water, and influence oceanic currents, atmospheric circulation, and the transport of material and energy in the hydrosphere. Therefore, various parameterizations and models have been developed for describing the dynamics of OSA. However, it has been demonstrated that the currently available OSA datasets cannot fulfill the requirements of global climate change studies. In this study, we present a literature review on mapping global OSA from satellite observations. The models (parameterizations, the coupled ocean-atmosphere radiative transfer (COART), and the three-component ocean water albedo (TCOWA)), algorithms (the estimation method based on reanalysis data, and the direct-estimation algorithm), and datasets (the cloud, albedo and radiation (CLARA) surface albedo product, the dataset derived by the TCOWA model, and the global land surface satellite (GLASS) phase-2 surface broadband albedo product) of OSA are discussed separately.

  8. Security analysis of RSA cryptosystem algorithm and it’s properties

    International Nuclear Information System (INIS)

    Liu, Chenglian; Guo, Yongning; Lin, Juan

    2014-01-01

    The rapid development of information technology has dramatically changed people's lifestyles: in addition to shortening the distance of communication, it also promotes the smooth exchange of information flows. However, this convenience brings with it questions of relative safety; since the start of the digital information age, information security has increasingly become an important issue for the majority of engineering practitioners and technical personnel. The RSA algorithm was published in 1978. It is a very popular and widely applied modern cryptosystem in the world. Even though there are many articles discussing how to break RSA, it is still secure today. In this paper, the authors introduce a variant attack on RSA

  9. Security analysis of RSA cryptosystem algorithm and it’s properties

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chenglian [School of Mathematics and Computer Science, Long Yan university, Lonyan 364012 (China); Guo, Yongning, E-mail: guoyn@163.com, E-mail: linjuanliucaihong@qq.com; Lin, Juan, E-mail: guoyn@163.com, E-mail: linjuanliucaihong@qq.com [Department of Mathematics and Computer Science, Fuqing Branch of Fujian Normal University, Fuqing 350300 (China)

    2014-10-06

    The rapid development of information technology has dramatically changed people's lifestyles: in addition to shortening the distance of communication, it also promotes the smooth exchange of information flows. However, this convenience brings with it questions of relative safety; since the start of the digital information age, information security has increasingly become an important issue for the majority of engineering practitioners and technical personnel. The RSA algorithm was published in 1978. It is a very popular and widely applied modern cryptosystem in the world. Even though there are many articles discussing how to break RSA, it is still secure today. In this paper, the authors introduce a variant attack on RSA.

  10. Exchange of Information in Tax Matters

    OpenAIRE

    Paweł Szwajdler

    2017-01-01

    The main aim of this paper is to present issues related to the exchange of tax information. The author focuses on models of exchange of information and the boundaries of obligations with reference to the above-mentioned problems. Automatic exchange of information, spontaneous exchange of information and exchange of information on request are analysed in this work on the basis of the OECD Convention on Mutual Administrative Assistance in Tax Matters, Council Directive 2011/16 and the OECD Model Agreement on Exchange...

  11. Cluster fusion-fission dynamics in the Singapore stock exchange

    Science.gov (United States)

    Teh, Boon Kin; Cheong, Siew Ann

    2015-10-01

    In this paper, we investigate how the cross-correlations between stocks in the Singapore stock exchange (SGX) evolve over 2008 and 2009 within overlapping one-month time windows. In particular, we examine how these cross-correlations change before, during, and after the Sep-Oct 2008 Lehman Brothers Crisis. To do this, we extend the complete-linkage hierarchical clustering algorithm, to obtain robust clusters of stocks with stronger intracluster correlations, and weaker intercluster correlations. After we identify the robust clusters in all time windows, we visualize how these change in the form of a fusion-fission diagram. Such a diagram depicts graphically how the cluster sizes evolve, the exchange of stocks between clusters, as well as how strongly the clusters mix. From the fusion-fission diagram, we see a giant cluster growing and disintegrating in the SGX, up till the Lehman Brothers Crisis in September 2008 and the market crashes of October 2008. After the Lehman Brothers Crisis, clusters in the SGX remain small for few months before giant clusters emerge once again. In the aftermath of the crisis, we also find strong mixing of component stocks between clusters. As a result, the correlation between initially strongly-correlated pairs of stocks decay exponentially with average life time of about a month. These observations impact strongly how portfolios and trading strategies should be formulated.
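
    The starting point the record describes — complete-linkage clustering of stocks on a distance derived from cross-correlations, d = sqrt(2(1 − ρ)) — can be sketched with standard tools; the authors' robustness extension is not reproduced here, and the data below are synthetic:

```python
# Complete-linkage clustering of stocks from a cross-correlation matrix.
# Distance d = sqrt(2 * (1 - rho)) is a standard choice; the paper's
# robustness extension is not reproduced, and the returns are synthetic.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(7)
returns = rng.normal(size=(250, 20))        # stand-in daily returns, 20 stocks
rho = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2 * (1 - rho))
np.fill_diagonal(dist, 0.0)

Z = linkage(squareform(dist, checks=False), method="complete")
labels = fcluster(Z, t=1.0, criterion="distance")   # cut height illustrative
print(labels)                                       # cluster id per stock
```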

  12. Investigation of ammonia air-surface exchange processes in a ...

    Science.gov (United States)

    Recent assessments of atmospheric deposition in North America note the increasing importance of reduced (NHx = NH3 + NH4+) forms of nitrogen (N) relative to oxidized forms. This shift in the composition of inorganic nitrogen deposition has both ecological and policy implications. Deposition budgets developed from inferential models applied at the landscape scale, as well as regional and global chemical transport models, indicate that NH3 dry deposition contributes a significant portion of inorganic N deposition in many areas. However, the bidirectional NH3 flux algorithms employed in these models have not been extensively evaluated for North American conditions (e.g., atmospheric chemistry, meteorology, biogeochemistry). Further understanding of the processes controlling NH3 air-surface exchange in natural systems is critically needed. Based on preliminary results from the Southern Appalachian Nitrogen Deposition Study (SANDS), this presentation examines processes of NH3 air-surface exchange in a deciduous montane forest at the Coweeta Hydrologic Laboratory in western North Carolina. A combination of measurements and modeling is used to investigate net fluxes of NH3 above the forest and the sources and sinks of NH3 within the canopy and forest floor. Measurements of biogeochemical NH4+ pools are used to characterize the emission potential and NH3 compensation points of canopy foliage (i.e., green vegetation), leaf litter, and soil, and their relation to NH3 fluxes.

  13. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    Full Text Available The article considers the problem of data association in the simultaneous localization and mapping (SLAM) algorithm used to determine the route of unmanned aerial vehicles (UAVs). Currently, such vehicles are already widely used, but they are mainly controlled by a remote operator; an urgent task is to develop a control system that allows for autonomous flight. The SLAM (simultaneous localization and mapping) algorithm, which makes it possible to estimate the location, speed and flight parameters of the vehicle together with the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem using an improved ant algorithm. Data association for SLAM is meant to establish a matching between the set of observed landmarks and the landmarks in the state vector. The ant algorithm is one of the widely used optimization algorithms, with positive feedback and the ability to search in parallel, so it is suitable for solving the data association problem for SLAM. However, the traditional ant algorithm easily falls into local optima while searching for routes; adding random perturbations when updating the global pheromone helps avoid local optima, and setting limits on the pheromone along a route can increase the search space at a reasonable computational cost. The paper proposes an algorithm for local data association in SLAM based on an improved ant algorithm. To increase the speed of calculation, local data association is used instead of global data association. The first stage of the algorithm determines the targets in the matching space and the observed landmarks that can be associated according to the individual compatibility (IC) criterion. The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and

  14. 3.5 Radiation stability of ion exchangers

    International Nuclear Information System (INIS)

    Marhol, M.

    1976-01-01

    The main knowledge of the radiation stability of ion exchangers is summed up. No basic changes occur in inorganic ion exchangers, with the exception of the exchange capacity, at doses of up to 10^9 rad; this also applies to coal-based ion exchangers. Tables are given showing the changes in specific volume, exchange capacity and weight of different types of organic ion exchangers in dependence on the radiation dose. The effects of the structure of organic cation and anion exchangers, polymeric strongly basic anion exchangers, polycondensate anion exchangers and ion exchange membranes on their radiation stability are discussed. General experimental procedures are given for laboratory tests of the radiation stability of exchangers. (L.K.)

  15. Local Road Safety Peer Exchange - Region 4 : An RSPCB Peer Exchange

    Science.gov (United States)

    2013-03-01

    This report provides a summary of the proceedings of the Local Road Safety Peer Exchange held in Atlanta, Georgia on March 6th and 7th, 2013. The Federal Highway Administration (FHWA) sponsored the Peer Exchange in coordination with Region 4 Local Te...

  16. Local Road Safety Peer Exchange - Region 7 : An RSPCB Peer Exchange

    Science.gov (United States)

    2012-05-01

    This report provides a summary of the proceedings of the Local Road Safety Peer Exchange held in Denver, Colorado from May 31 to June 1, 2012. The Federal Highway Administration (FHWA) sponsored the Peer Exchange in coordination with Region 7 Local a...

  17. Local Road Safety Peer Exchange - Region 1 : An RSPCB Peer Exchange

    Science.gov (United States)

    2012-10-01

    This report provides a summary of the proceedings of the Local Road Safety Peer Exchange held in Piscataway, New Jersey October 10th and 11th, 2012. The Federal Highway Administration (FHWA) sponsored the Peer Exchange in coordination with Region 1 L...

  18. Synthetic inorganic ion-exchange materials

    International Nuclear Information System (INIS)

    Abe, M.

    1979-01-01

    Exchange isotherms for hydrogen ion/alkali metal ions have been measured at 20 and 40 °C, with a solution ionic strength of 0.1, in crystalline antimonic(V) acid as a cation-exchanger. The isotherms showed S-shaped curves for the systems of H+/Na+, H+/K+, H+/Rb+ and H+/Cs+, but not for H+/Li+ exchange. The selectivity coefficients (logarithm scale) vs equivalent fraction of alkali metal ions in the exchanger give linear functions for all systems studied. The selectivity sequences are shown. Overall and hypothetical (zero loading) thermodynamic equilibrium constants were evaluated for these ion-exchange reactions. (author)

  19. Microsoft Exchange 2013 cookbook

    CERN Document Server

    Van Horenbeeck, Michael

    2013-01-01

    This book is a practical, hands-on guide that provides the reader with a number of clear, step-by-step exercises. "Microsoft Exchange 2013 Cookbook" is targeted at network administrators who deal with the Exchange server in their day-to-day jobs. It assumes you have some practical experience with previous versions of Exchange (although this is not a requirement), without being a subject matter expert.

  20. Choice of exchange rate regimes for African countries: Fixed or Flexible Exchange rate regimes?

    OpenAIRE

    Simwaka, Kisu

    2010-01-01

    The choice of an appropriate exchange rate regime has been a subject of ongoing debate in international economics. The majority of African countries are small open economies, and thus the choice of the exchange rate regime is an important policy issue for them. Aside from factors such as interest rates and inflation, the exchange rate is one of the most important determinants of a country's relative level of economic health. For this reason, exchange rates are among the most watched, analyzed and ...

  1. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  2. Parametric investigation of a non-constant cross sectional area air to air heat exchanger

    International Nuclear Information System (INIS)

    Cárdenas, Bruno; Garvey, Seamus; Kantharaj, Bharath; Simpson, Michael

    2017-01-01

    Highlights: • Evaluation of a complex geometry aimed at minimizing volume per unit of exergy transfer. • The use of a non-constant cross-section for the heat exchanger is proposed. • The performance gains attainable via modern manufacturing techniques are discussed. • The trade-off between overall exergy efficiency and cost is thoroughly analysed. • A quadratic relation between volume and characteristic dimension has been found. - Abstract: The present article addresses the design, mathematical modelling and analysis of a novel, highly exergy-efficient air to air heat exchanger. An intricate design based on a hexagonal mesh is proposed for the cross-sectional area of the heat exchanger, with the aim of exploring the performance gains that can be obtained by exploiting the capabilities and benefits offered by modern fabrication techniques such as additive manufacturing. Special attention is paid to understanding the trade-off that exists between the overall exergy efficiency of the heat exchanger and its cost. The iterative algorithm used to find the geometrical parameters that yield the best performance in terms of volume of material required per unit of exergy transfer at a certain level of efficiency, as well as the assumptions and simplifications made, is comprehensively explained. It has been found through the analyses carried out, which are thoroughly discussed throughout the paper, that if the characteristic dimension of the heat exchanger is scaled up by a factor of n, the volume of material per kW of exergy transfer at a given exergy efficiency will increase by a factor of n squared. This is a very important observation, possibly applicable to other types of heat exchangers, which indicates that performance improves dramatically at smaller scales. The overall performance of the case study presented is satisfactory: a volume of material as low as 84.8 cm³ for one kW of exergy transfer can be achieved with a 99% exergy

  3. Artificial Bee Colony Algorithm Based on K-Means Clustering for Multiobjective Optimal Power Flow Problem

    Directory of Open Access Journals (Sweden)

    Liling Sun

    2015-01-01

    Full Text Available An improved multiobjective ABC algorithm based on K-means clustering, called CMOABC, is proposed. To speed up the convergence of the canonical MOABC, the way information is communicated in the employed bees' phase is modified. To maintain population diversity, a multiswarm technique based on K-means clustering is employed to decompose the population into many clusters. Because each subcomponent evolves separately, the population is re-clustered after every fixed number of iterations to facilitate information exchange among the clusters. Application of the new CMOABC to several multiobjective benchmark functions shows a marked improvement in performance over the fast nondominated sorting genetic algorithm (NSGA-II), the multiobjective particle swarm optimizer (MOPSO), and the multiobjective ABC (MOABC). Finally, the CMOABC is applied to solve the real-world optimal power flow (OPF) problem, which considers the cost, loss, and emission impacts as the objective functions. The 30-bus IEEE test system is presented to illustrate the application of the proposed algorithm. The simulation results demonstrate that, compared to NSGA-II, MOPSO, and MOABC, the proposed CMOABC is superior for solving the OPF problem in terms of optimization accuracy.
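
    The paper's full ABC update is not given in the record. The sketch below shows only the periodic re-clustering step described above, using SciPy's K-means; the helper name recluster and all sizes are illustrative assumptions.

      import numpy as np
      from scipy.cluster.vq import kmeans2

      def recluster(population, n_clusters):
          # One re-clustering step of the multiswarm scheme: partition
          # the current candidate solutions with K-means so that each
          # cluster can evolve as a separate sub-swarm until the next
          # re-clustering allows information exchange between them.
          _, labels = kmeans2(population, n_clusters, minit='points')
          return [population[labels == c] for c in range(n_clusters)]

      # Usage: 30 candidate solutions of a 2-variable problem, 3 swarms
      pop = np.random.default_rng(1).random((30, 2))
      for swarm in recluster(pop, 3):
          print(swarm.shape)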

  4. Innovative heat exchangers

    CERN Document Server

    Scholl, Stephan

    2018-01-01

    This accessible book presents unconventional technologies in heat exchanger design that have the capacity to provide solutions to major concerns within the process and power-generating industries. Demonstrating the advantages and limits of these innovative heat exchangers, it also discusses micro- and nanostructure surfaces and micro-scale equipment, and introduces pillow-plate, helical and expanded metal baffle concepts. It offers step-by-step worked examples, which provide instructions for developing an initial configuration and are supported by clear, detailed drawings and pictures. Various types of heat exchangers are available, and they are widely used in all fields of industry for cooling or heating purposes, including in combustion engines. The market in 2012 was estimated to be US$ 42.7 billion and the global demand for heat exchangers is experiencing an annual growth of about 7.8%. The market value is expected to reach US$ 57.9 billion in 2016, and approach US$ 78.16 billion in 2020. Providing a valua...

  5. Upright heat exchanger

    International Nuclear Information System (INIS)

    Martoch, J.; Kugler, V.; Krizek, V.; Strmiska, F.

    1988-01-01

    The claimed heat exchanger is characterized by the condensate level being maintained directly in the exchanger while preserving the so-called "dry" tube plate. This makes it unnecessary to build another pressure vessel into the circuit. The design of the heat exchanger allows access to both tube plates, which facilitates any repair. Another advantage is faster indication of leakage from the space of the second operating medium, achieved by the drainage pipes of the lower bundle opening into the collar space and from there into the indication pipe. The exchanger is especially suitable for deployment in the circuits of nuclear power plants, where the second operating medium will be hot water of considerably lower purity than that of the condensate. Rapid indication of leakage can prevent long-term penetration of this water into the condensate, which would worsen water quality in the entire secondary circuit of the nuclear power plant. (J.B.). 1 fig

  6. An Improved Global Harmony Search Algorithm for the Identification of Nonlinear Discrete-Time Systems Based on Volterra Filter Modeling

    Directory of Open Access Journals (Sweden)

    Zongyan Li

    2016-01-01

    Full Text Available This paper describes an improved global harmony search (IGHS) algorithm for identifying nonlinear discrete-time systems based on a second-order Volterra model. The IGHS is an improved version of the novel global harmony search (NGHS) algorithm, and it makes two significant improvements on the NGHS. First, the genetic mutation operation is modified by combining the normal distribution and the Cauchy distribution, which enables the IGHS to fully explore and exploit the solution space. Second, opposition-based learning (OBL) is introduced and modified to improve the quality of harmony vectors. The IGHS algorithm is applied to two numerical examples: a nonlinear discrete-time rational system and a real heat exchanger. The results of the IGHS are compared with those of three other methods, and it is verified to be more effective than them on the above two problems with different input signals and system memory sizes.
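
    As an illustration of the first improvement, here is a minimal sketch of a mutation step that mixes normal and Cauchy perturbations. The mixing probability, step scales and function name ighs_mutation are illustrative assumptions, not the paper's tuned operator.

      import numpy as np

      rng = np.random.default_rng(0)

      def ighs_mutation(x, lower, upper, p_cauchy=0.5, scale=0.1):
          # With probability p_cauchy perturb with a heavy-tailed
          # Cauchy deviate (exploration), otherwise with a normal
          # deviate (exploitation); clip back into the search box.
          span = upper - lower
          if rng.random() < p_cauchy:
              step = scale * span * rng.standard_cauchy(x.shape)
          else:
              step = scale * span * rng.standard_normal(x.shape)
          return np.clip(x + step, lower, upper)

      # Usage: mutate a harmony vector inside [-5, 5]^4
      x = rng.uniform(-5, 5, 4)
      print(ighs_mutation(x, -5.0, 5.0))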

  7. Australian Universities' Strategic Goals of Student Exchange and Participation Rates in Outbound Exchange Programmes

    Science.gov (United States)

    Daly, Amanda; Barker, Michelle

    2010-01-01

    International student exchange programmes are acknowledged as one aspect of a broader suite of internationalisation strategies aimed at enhancing students' intercultural understanding and competence. The decision to participate in an exchange programme is dependent on both individual and contextual factors such as student exchange policies and…

  8. 78 FR 46622 - Application of Topaz Exchange, LLC for Registration as a National Securities Exchange; Findings...

    Science.gov (United States)

    2013-08-01

    ... Exchange, LLC for Registration as a National Securities Exchange; Findings, Opinion, and Order of the... Registration as a National Securities Exchange (``Form 1 Application'') \\1\\ under Section 6 of the Securities... substantive, are consistent with the existing rules of other registered national securities exchanges, or are...

  9. Determination of edge plasma parameters by a genetic algorithm analysis of spectral line shapes

    Energy Technology Data Exchange (ETDEWEB)

    Marandet, Y.; Genesio, P.; Godbert-Mouret, L.; Koubiti, M.; Stamm, R. [Universite de Provence (PIIM), Centre de Saint-Jerome, 13 - Marseille (France); Capes, H.; Guirlet, R. [Association Euratom-CEA Cadarache, 13 - Saint-Paul-lez-Durance (France). Dept. de Recherches sur la Fusion Controlee

    2003-07-01

    Comparing an experimental and a theoretical line shape can be achieved by a genetic algorithm (GA) based on an analogy to the mechanisms of natural selection. Such an algorithm is able to deal with complex non-linear models, and can avoid local minima. We have used this optimization tool in the context of edge plasma spectroscopy, for a determination of the temperatures and fractions of the various populations of neutral deuterium emitting the Dα line in 2 configurations of Tore-Supra: ergodic divertor and toroidal pumped limiter. Using the GA fit, the neutral emitters are separated into up to 4 populations which can be identified as resulting from molecular dissociation reactions, charge exchange, or reflection. In all the edge plasmas studied, a significant fraction of neutrals emit in the line wings, leading to neutrals with a temperature up to a few hundred eV if a Gaussian line shape is assumed. This conclusion could be modified if the line wing exhibits a non-Gaussian behavior.

  10. Determination of edge plasma parameters by a genetic algorithm analysis of spectral line shapes

    International Nuclear Information System (INIS)

    Marandet, Y.; Genesio, P.; Godbert-Mouret, L.; Koubiti, M.; Stamm, R.; Capes, H.; Guirlet, R.

    2003-01-01

    Comparing an experimental and a theoretical line shape can be achieved by a genetic algorithm (GA) based on an analogy to the mechanisms of natural selection. Such an algorithm is able to deal with complex non-linear models, and can avoid local minima. We have used this optimization tool in the context of edge plasma spectroscopy, for a determination of the temperatures and fractions of the various populations of neutral deuterium emitting the Dα line in 2 configurations of Tore-Supra: ergodic divertor and toroidal pumped limiter. Using the GA fit, the neutral emitters are separated into up to 4 populations which can be identified as resulting from molecular dissociation reactions, charge exchange, or reflection. In all the edge plasmas studied, a significant fraction of neutrals emit in the line wings, leading to neutrals with a temperature up to a few hundred eV if a Gaussian line shape is assumed. This conclusion could be modified if the line wing exhibits a non-Gaussian behavior
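
    Neither record reproduces the genetic algorithm itself. To make the fitting task concrete, the sketch below fits the underlying model, a sum of Gaussian components whose widths encode the temperatures of two neutral populations, by ordinary least squares; the GA of the papers replaces such a local fit in order to avoid local minima. The two-population restriction, names and parameter values are illustrative assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def two_population_line(x, a1, w1, a2, w2):
          # Sum of two Gaussian components centred on the line; each
          # width encodes the temperature of one neutral population.
          return (a1 * np.exp(-x**2 / (2 * w1**2)) +
                  a2 * np.exp(-x**2 / (2 * w2**2)))

      # Synthetic "measured" profile: a cold core plus hot line wings
      x = np.linspace(-3, 3, 200)
      y = two_population_line(x, 1.0, 0.3, 0.2, 1.2)
      y += 0.01 * np.random.default_rng(1).standard_normal(x.size)

      popt, _ = curve_fit(two_population_line, x, y, p0=[1, 0.5, 0.1, 1.0])
      print(popt)  # recovered amplitudes and widths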

  11. Distributed Constrained Stochastic Subgradient Algorithms Based on Random Projection and Asynchronous Broadcast over Networks

    Directory of Open Access Journals (Sweden)

    Junlong Zhu

    2017-01-01

    Full Text Available We consider a distributed constrained optimization problem over a time-varying network, where each agent only knows its own cost function and its constraint set. However, the local constraint set may not be known in advance, or may consist of a huge number of components in some applications. To deal with such cases, we propose a distributed stochastic subgradient algorithm over time-varying networks, where the estimate of each agent is projected onto its constraint set by a random projection technique, and information exchange between agents is implemented by an asynchronous broadcast communication protocol. We show that our proposed algorithm converges with probability 1 for a suitable choice of learning rate. For a constant learning rate, we obtain an error bound, defined as the expected distance between each agent's estimate and the optimal solution. We also establish an asymptotic upper bound on the gap between the global objective function value at the average of the estimates and the optimal value.
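
    A minimal sketch of the basic building block, a subgradient step followed by a projection, is given below. It substitutes a deterministic projection onto a norm ball for the paper's random projection onto constraint components, and omits the broadcast averaging entirely; all names and the toy objective are assumptions.

      import numpy as np

      def project_ball(x, radius=1.0):
          # Euclidean projection onto a norm ball; stands in for the
          # random projection onto one constraint component.
          n = np.linalg.norm(x)
          return x if n <= radius else x * (radius / n)

      def local_step(x, subgrad, lr):
          # One agent update: subgradient step, then projection (the
          # asynchronous broadcast averaging with neighbours' estimates
          # is omitted for brevity).
          return project_ball(x - lr * subgrad(x))

      # Usage: minimise ||x - c||_1 over the unit ball
      c = np.array([2.0, -0.5])
      subgrad = lambda x: np.sign(x - c)  # a valid subgradient
      x = np.zeros(2)
      for k in range(1, 500):
          x = local_step(x, subgrad, lr=0.5 / k)
      print(x)  # approaches the constrained minimiser, about (0.87, -0.5)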

  12. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  13. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    Directory of Open Access Journals (Sweden)

    Lukas Falat

    2016-01-01

    Full Text Available This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with a K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making bad decisions in the decision-making process.

  14. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    Science.gov (United States)

    Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with a K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making bad decisions in the decision-making process. PMID:26977450
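
    The records give no listing of the hybrid model. The sketch below shows its two stages in miniature, an RBF layer fitted by least squares and a moving average of the residuals added back as an error-correction term, on a synthetic stand-in series; the GA tuning of the network parameters is omitted, the evaluation is in-sample only, and every value is an illustrative assumption.

      import numpy as np

      def rbf_features(x, centers, width):
          # Gaussian radial-basis design matrix for 1-D inputs.
          return np.exp(-(x[:, None] - centers[None, :]) ** 2
                        / (2.0 * width ** 2))

      # Synthetic stand-in series (the papers use USD/CAD ticks)
      rng = np.random.default_rng(0)
      t = np.arange(300.0)
      rate = 1.3 + 0.02 * np.sin(t / 25) + 0.002 * rng.standard_normal(t.size)

      # One-step-ahead setup: predict rate[i+1] from rate[i]
      X, y = rate[:-1], rate[1:]
      centers = np.linspace(X.min(), X.max(), 10)
      Phi = rbf_features(X, centers, width=0.01)
      w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # fit the output layer

      pred = Phi @ w
      resid = y - pred
      ma = np.convolve(resid, np.ones(5) / 5, mode='same')  # error term
      hybrid = pred + ma  # second stage: moving average of the errors
      print(np.mean((y - pred) ** 2), np.mean((y - hybrid) ** 2))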

  15. Increased CEST specificity for amide and fast-exchanging amine protons using exchange-dependent relaxation rate.

    Science.gov (United States)

    Zhang, Xiao-Yong; Wang, Feng; Xu, Junzhong; Gochberg, Daniel F; Gore, John C; Zu, Zhongliang

    2018-02-01

    Chemical exchange saturation transfer (CEST) imaging of amides at 3.5 ppm and fast-exchanging amines at 3 ppm provides a unique means to enhance the sensitivity of detection of, for example, proteins/peptides and neurotransmitters, respectively, and hence can provide important information on molecular composition. However, despite the high sensitivity relative to conventional magnetic resonance spectroscopy (MRS), in practice CEST often has relatively poor specificity. For example, CEST signals are typically influenced by several confounding effects, including direct water saturation (DS), semi-solid non-specific magnetization transfer (MT), the influence of water relaxation times (T_1w) and nearby overlapping CEST signals. Although several editing techniques have been developed to increase the specificity by removing DS, semi-solid MT and T_1w influences, it is still challenging to remove overlapping CEST signals from different exchanging sites. For instance, the amide proton transfer (APT) signal could be contaminated by CEST effects from fast-exchanging amines at 3 ppm and intermediate-exchanging amines at 2 ppm. The current work applies an exchange-dependent relaxation rate (R_ex) to address this problem. Simulations demonstrate that: (1) slowly exchanging amides and fast-exchanging amines have distinct dependences on irradiation powers; and (2) R_ex serves as a resonance-frequency high-pass filter that selectively reduces CEST signals with resonance frequencies closer to water. These characteristics of R_ex provide a means to isolate the APT signal from amines. In addition, previous studies have shown that CEST signals from fast-exchanging amines have no distinct features around their resonance frequencies. However, R_ex gives Lorentzian lineshapes centered at their resonance frequencies for fast-exchanging amines and thus can significantly increase the specificity of CEST imaging for amides and fast-exchanging amines. Copyright © 2017 John Wiley & Sons

  16. Study of kinetics, equilibrium and isotope exchange in ion exchange systems Pt. 4

    International Nuclear Information System (INIS)

    Stamberg, K.; Plicka, J.; Calibar, J.; Gosman, A.

    1985-01-01

    The kinetics of ion exchange in the Na⁺-Mg²⁺ strongly acidic cation exchanger system in a batch stirred reactor was studied. Samples of the exchangers OSTION KS (containing DVB in the range 1.5-12%) and AMBERLITE IR 120 were used for the experimental work; the concentration of the aqueous nitrate solution was always 0.2 M. The Nernst-Planck equation was used to describe the diffusion of ions in a particle. The values of the diffusion coefficients of magnesium ions in the exchangers, and their dependence on the DVB content, were obtained by evaluating the experimental data and using the self-diffusion coefficients of sodium. (author)

  17. Exchange rate regulation, the behavior of exchange rates, and macroeconomic stability in Brazil

    Directory of Open Access Journals (Sweden)

    Francisco Eduardo Pires de Souza

    2011-12-01

    Full Text Available In the last two decades an entirely new set of rules governing foreign exchange transactions was established in Brazil, substituting for the framework inherited from the 1930s. Foreign exchange controls were dismantled and a floating exchange rate regime replaced different forms of peg. In this paper we argue that, although successful by comparison with previous experiences, the current arrangement has important flaws that should be addressed. We discuss how it first led to high volatility and extremely high interest rates which, when overcome, gave way to a long-lasting appreciation of the real exchange rate with adverse consequences for industry.

  18. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].

  19. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    Science.gov (United States)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The question in a multimodal optimization problem, i.e., an optimization problem with many local optima, is how to find the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), the Artificial Bee Colony (ABC) algorithm, etc. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it has the disadvantage of premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum, and compares favorably with other local optimization methods. Based on the advantages of both, this paper proposes a hybrid of the artificial bee colony algorithm and the BFGS algorithm to solve multimodal optimization problems. In the first step, the ABC algorithm is run to find a promising point; in the second step, that point is used as the initial point for the BFGS algorithm. The results show that the hybrid method overcomes the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
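
    A minimal sketch of this two-step idea follows: a crude random-search stand-in for the ABC phase locates a promising basin on the Rastrigin function, and SciPy's BFGS then polishes the point. The global phase is deliberately simplified (no employed/onlooker/scout bee structure), and all names and parameters are assumptions.

      import numpy as np
      from scipy.optimize import minimize

      def global_search(f, lo, hi, n_food=20, iters=200, seed=0):
          # Crude stand-in for the ABC phase: keep a pool of candidate
          # points and accept random neighbour moves that improve them.
          rng = np.random.default_rng(seed)
          food = rng.uniform(lo, hi, (n_food, len(lo)))
          vals = np.apply_along_axis(f, 1, food)
          for _ in range(iters):
              i = rng.integers(n_food)
              trial = np.clip(food[i] + rng.uniform(-0.5, 0.5, len(lo)),
                              lo, hi)
              if f(trial) < vals[i]:
                  food[i], vals[i] = trial, f(trial)
          return food[np.argmin(vals)]

      # Multimodal test function (Rastrigin): the global phase finds a
      # good basin, BFGS then polishes the point to high accuracy.
      rastrigin = lambda x: 10 * len(x) + np.sum(x**2
                                                 - 10 * np.cos(2 * np.pi * x))
      lo, hi = np.full(2, -5.12), np.full(2, 5.12)
      x0 = global_search(rastrigin, lo, hi)
      res = minimize(rastrigin, x0, method='BFGS')
      print(x0, res.x, res.fun)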

  20. Numerical investigation of transient behaviour of the recuperative heat exchanger in a MR J-T cryocooler using different heat transfer correlations

    Science.gov (United States)

    Damle, R. M.; Ardhapurkar, P. M.; Atrey, M. D.

    2016-12-01

    In J-T cryocoolers operating with mixed refrigerants (nitrogen-hydrocarbons), the recuperative heat exchange takes place under two-phase conditions. Simultaneous boiling of the low-pressure stream and condensation of the high-pressure stream results in higher heat transfer coefficients. The mixture composition, operating conditions and the heat exchanger design are crucial for reaching the required cryogenic temperature. In this work, a one-dimensional transient algorithm is developed for the simulation of two-phase heat transfer in the recuperative heat exchanger of a mixed-refrigerant J-T cryocooler. A modified correlation is used for flow boiling of the high-pressure fluid, while different condensation correlations, with and without correction, are employed for the low-pressure fluid. Simulations are carried out for different mixture compositions and the numerical predictions are compared with the experimental data. The overall heat transfer is predicted reasonably well and the qualitative trends of the temperature profiles are also captured by the developed numerical model.

  1. The influence of a vertical ground heat exchanger length on the electricity consumption of the heat pumps

    Energy Technology Data Exchange (ETDEWEB)

    Michopoulos, A.; Kyriakis, N. [Process Equipment Design Laboratory, Mechanical Engineer Department, Aristotle University of Thessaloniki (AUTh), P.O. Box 487, 541 24 Thessaloniki (Greece)

    2010-07-15

    The use of heat pumps combined with vertical ground heat exchangers for heating and cooling of buildings has significantly gained popularity in recent years. The design method for these systems, as proposed by ASHRAE, takes into account the maximum heating and cooling loads of the building, the thermophysical properties of the soil at the installation site and a minimum Coefficient of Performance (COP) of the heat pumps. This approach usually results in a larger than needed length of the ground heat exchanger, thus increasing the installation cost. A new analytical simulation tool, capable of determining the required ground heat exchanger length, has been developed at the Process Equipment Design Laboratory (PEDL) of the AUTh. It models the operation of the system as a whole over long time periods, e.g. 20 years, using as input parameters the heating and cooling loads of the building, the thermophysical properties of the borehole and the characteristic curves of the heat pumps. The results include the electricity consumption of the heat pumps and the heat absorbed from or rejected to the ground. The aim of this paper is to describe the developed simulation algorithm and to present the results of such a simulation in a case study. It is shown that the total required length of the ground heat exchanger is less than that calculated using the common numerical method. (author)

  2. Comparison between conventional heat exchanger performance and an heat pipes exchanger

    International Nuclear Information System (INIS)

    Souza, J.R.G. de; Rocha, N.R.

    1989-01-01

    The thermal performance of a conventional compact heat exchanger and of an exchanger with heat pipes is simulated using a digital computer, for equal volumes and the same process conditions. The comparative analysis is depicted in graphs that indicate in which situations each piece of equipment is more efficient. (author)

  3. Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-01-01

    Full Text Available The firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly is attracted to a brighter firefly, and if there is no brighter firefly, it moves randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the best direction in which the brightness increases. If no such direction is generated, it remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. The simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time.
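
    The modified move of the brightest firefly can be sketched in a few lines: sample several random directions, take the best improving one, otherwise stay put. The direction count, step size and function name move_brightest are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def move_brightest(x_best, f, n_dirs=10, step=0.1):
          # Sample several random unit directions and move along the
          # one that most improves brightness (here: lowers f); if no
          # sampled direction improves, stay in the current position.
          best_x, best_v = x_best, f(x_best)
          for _ in range(n_dirs):
              d = rng.standard_normal(x_best.shape)
              cand = x_best + step * d / np.linalg.norm(d)
              v = f(cand)
              if v < best_v:
                  best_x, best_v = cand, v
          return best_x

      # Usage on a simple sphere objective
      f = lambda x: float(np.sum(x ** 2))
      x = np.array([0.8, -0.4])
      for _ in range(50):
          x = move_brightest(x, f)
      print(x, f(x))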

  4. A spin exchange model for singlet fission

    Science.gov (United States)

    Yago, Tomoaki; Wakasa, Masanobu

    2018-03-01

    Singlet fission has been analyzed with the Dexter model in which electron exchange occurs between chromophores, conserving the spin for each electron. In the present study, we propose a spin exchange model for singlet fission. In the spin exchange model, spins are exchanged by the exchange interaction between two electrons. Our analysis with simple spin functions demonstrates that singlet fission is possible by spin exchange. A necessary condition for spin exchange is a variation in exchange interactions. We also adapt the spin exchange model to triplet fusion and triplet energy transfer, which often occur after singlet fission in organic solids.

  5. Optimization of a novel carbon dioxide cogeneration system using artificial neural network and multi-objective genetic algorithm

    International Nuclear Information System (INIS)

    Jamali, Arash; Ahmadi, Pouria; Mohd Jaafar, Mohammad Nazri

    2014-01-01

    In this research study, a combined cycle based on the Brayton power cycle and the ejector expansion refrigeration cycle is proposed. The proposed cycle can provide heating, cooling and power simultaneously. Among the benefits of such a system are that it can be driven by low-temperature heat sources and that it uses CO₂ as the working fluid. In order to enhance the understanding of the current work, a comprehensive parametric study and exergy analysis are conducted to determine the effects of the thermodynamic parameters on the system performance and on the exergy destruction rate in the components. The suggested cycle can save around 46% energy in comparison with a system producing cooling, power and hot water separately. On the other hand, to optimize the system to meet the load requirement, the surface area of the heat exchangers is determined and optimized. The results of this section can be used when a compact system is also an objective. Along with the comprehensive parametric study and exergy analysis, a complete optimization study is carried out using a multi-objective evolutionary genetic algorithm considering two different objective functions, heat exchanger size (to be minimized) and exergy efficiency (to be maximized). The Pareto front of the optimization problem and a correlation between exergy efficiency and total heat exchanger length are presented in order to predict the trend of optimized points. The suggested system can be a promising combined system for buildings and remote regions. - Highlights: •Energy and exergy analyses of a novel CHP system are reported. •A comprehensive parametric study is conducted to enhance the understanding of the system performance. •A multi-objective optimization technique is applied, based on an evolutionary algorithm implemented in the Matlab software program

  6. Ion-exchange chromatographic protein refolding

    NARCIS (Netherlands)

    Freydell, E.; Wielen, van der L.; Eppink, M.H.M.; Ottens, M.

    2010-01-01

    The application of ion-exchange (IEX) chromatography to protein refolding (IExR) has been successfully proven, as supported by various studies using different model proteins, ion-exchange media and flow configurations. Ion-exchange refolding offers a relatively high degree of process

  7. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem's input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that, for a wide class of network-oblivious algorithms, optimality ... of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed.

  8. A novel hybrid algorithm of GSA with Kepler algorithm for numerical optimization

    Directory of Open Access Journals (Sweden)

    Soroor Sarafrazi

    2015-07-01

    Full Text Available It is now well recognized that pure algorithms can be promisingly improved by hybridization with other techniques. One of the relatively new metaheuristic algorithms is the Gravitational Search Algorithm (GSA), which is based on Newton's laws. In this paper, to enhance the performance of GSA, a novel algorithm called "Kepler", inspired by astrophysics, is introduced. The Kepler algorithm is based on the principle of Kepler's first law. The hybridization of GSA and the Kepler algorithm is an efficient approach that provides much stronger specialization in intensification and/or diversification. The performance of GSA-Kepler is evaluated by applying it to 14 benchmark functions with 20-1000 dimensions and to the optimal approximation of a linear system as a practical optimization problem. The results obtained reveal that the proposed hybrid algorithm is robust enough to optimize the benchmark functions and practical optimization problems.
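
    For orientation, a single GSA move can be sketched as below: normalised fitness values act as masses and every agent is pulled toward the others. The Kepler refinement of the paper (elliptical moves around the best agent) is not reproduced, the velocity memory of full GSA is dropped, and all constants are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def gsa_step(X, fit, G=1.0, eps=1e-9):
          # Normalised fitness values act as masses (minimisation);
          # each agent is accelerated toward the others by a force
          # proportional to the product of masses over the distance.
          worst, best = fit.max(), fit.min()
          m = (worst - fit + eps) / (worst - best + eps)
          M = m / m.sum()
          new_X = X.copy()
          for i in range(len(X)):
              diff = X - X[i]
              dist = np.linalg.norm(diff, axis=1) + eps
              acc = np.sum(rng.random((len(X), 1)) * G
                           * (M[:, None] * diff) / dist[:, None], axis=0)
              new_X[i] = X[i] + acc  # velocity memory omitted
          return new_X

      # Usage: a few agents on the sphere function
      X = rng.uniform(-1, 1, (5, 2))
      X = gsa_step(X, np.sum(X**2, axis=1))
      print(X)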

  9. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  10. 31 CFR 500.325 - National securities exchange.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false National securities exchange. 500.325... Definitions § 500.325 National securities exchange. The term national securities exchange shall mean an exchange registered as a national securities exchange under section 6 of the Securities Exchange Act of...

  11. 31 CFR 515.325 - National securities exchange.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false National securities exchange. 515.325... Definitions § 515.325 National securities exchange. The term national securities exchange shall mean an exchange registered as a national securities exchange under section 6 of the Securities Exchange Act of...

  12. An Empirical Investigation into Exchange Rate Regime Choice and Exchange Rate Volatility

    OpenAIRE

    Helge Berger; Jan-Egbert Sturm; Jakob de Haan

    2000-01-01

    We test a simple model of exchange rate regime choice with data for 65 non-OECD countries covering the period 1980-94. We find that the variance of output at home and in potential target countries as well as the correlation between home and foreign real activity are powerful and robust predictors of exchange rate regime choice. Surprisingly, a more volatile foreign economy can be an argument in favor of a fixed exchange rate regime once similarities in the business cycle are taken into accou...

  13. Experimental test of exchange degeneracy in hypercharge exchange reactions

    International Nuclear Information System (INIS)

    Moffeit, K.C.

    1978-10-01

    Two pairs of line-reversed reactions, π⁺p → K⁺Σ⁺, K⁻p → π⁻Σ⁺ and π⁺p → K⁺Y*⁺(1385), K⁻p → π⁻Y*⁺(1385), provide an experimental test of exchange degeneracy in hypercharge exchange reactions. From their study it is concluded that, in contrast to the lower-energy data, the 11.5 GeV/c results for the two pairs of reactions are consistent with exchange degeneracy predictions for both helicity-flip and nonflip amplitudes. The Y(1385) decay angular distributions indicate that the quark model and Stodolsky-Sakurai predictions are in agreement with the main features of the data. However, small violations are observed at small momentum transfer. While the Y(1385) vertex is helicity-flip dominated, the nonvanishing of T_{3/2,-1/2} and T_{-3/2,1/2} suggests some finite helicity-nonflip contribution in the forward direction. 23 references

  14. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    Full Text Available The Blowfish algorithm is a block cipher; it is a strong, simple algorithm used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires time and memory space, which makes the algorithm inconvenient for smart cards or applications that require changing the secret key frequently. In this paper a new key and S-box generation process was developed based on the Self-Synchronizing Stream Cipher (SSS) algorithm, whose key generation process was modified for use with the Blowfish algorithm. Test results show that the new generation process requires relatively little time and reasonably low memory, which enhances the algorithm and makes it usable in a wider range of applications.
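
    The record does not include the modified key schedule itself. As a loud assumption, the sketch below uses a hash-based keystream as a stand-in for the SSS generator, only to illustrate the idea of filling subkey arrays from a cheap stream rather than by Blowfish's slow repeated encryptions; it is neither the real Blowfish key schedule nor the paper's algorithm.

      import hashlib

      def simple_key_schedule(key: bytes, n_subkeys: int = 18):
          # Fill the subkey array from a cheap keystream instead of
          # Blowfish's repeated-encryption schedule. A SHA-256 counter
          # stream stands in for the SSS keystream here.
          subkeys, counter, state = [], 0, key
          while len(subkeys) < n_subkeys:
              state = hashlib.sha256(state
                                     + counter.to_bytes(4, 'big')).digest()
              for i in range(0, len(state), 4):  # carve 32-bit words
                  if len(subkeys) == n_subkeys:
                      break
                  subkeys.append(int.from_bytes(state[i:i + 4], 'big'))
              counter += 1
          return subkeys

      print([hex(k) for k in simple_key_schedule(b'secret key')[:4]])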

  15. Ion exchange technology assessment report

    International Nuclear Information System (INIS)

    Duhn, E.F.

    1992-01-01

    In the execution of its charter, the SRS Ion Exchange Technology Assessment Team has determined that ion exchange (IX) technology has evolved to the point where it should now be considered a viable alternative to the SRS reference ITP/LW/PH process. The ion exchange media available today offer the ability to design ion exchange processing systems tailored to the unique physical and chemical properties of SRS soluble HLWs. The technical assessment of IX technology and its applicability to the processing of SRS soluble HLW has demonstrated that IX is unquestionably a viable technology. A task team was chartered to evaluate the technology of ion exchange and its potential for replacing the present In-Tank Precipitation and proposed Late Wash processes to remove Cs, Sr, and Pu from soluble salt solutions at the Savannah River Site. This report documents the ion exchange technology assessment and the conclusions of the task team

  16. Monte Carlo algorithms with absorbing Markov chains: Fast local algorithms for slow dynamics

    International Nuclear Information System (INIS)

    Novotny, M.A.

    1995-01-01

    A class of Monte Carlo algorithms which incorporate absorbing Markov chains is presented. In a particular limit, the lowest order of these algorithms reduces to the n-fold way algorithm. These algorithms are applied to study the escape from the metastable state in the two-dimensional square-lattice nearest-neighbor Ising ferromagnet in an unfavorable applied field, and the agreement with theoretical predictions is very good. It is demonstrated that the higher-order algorithms can be many orders of magnitude faster than either the traditional Monte Carlo or n-fold way algorithms
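
    The record describes the idea in words only; the sketch below shows the rejection-free event selection that the n-fold way, the lowest-order member of this algorithm class, is built on. The rates are made-up numbers and the helper name nfold_way_step is an assumption.

      import numpy as np

      rng = np.random.default_rng(0)

      def nfold_way_step(rates):
          # Rejection-free selection: pick an event with probability
          # proportional to its rate, then advance the clock by an
          # exponential waiting time drawn from the total rate.
          total = rates.sum()
          event = rng.choice(len(rates), p=rates / total)
          dt = rng.exponential(1.0 / total)
          return event, dt

      # Usage: three competing event classes with very unequal rates,
      # where naive Metropolis sampling would reject almost every move
      rates = np.array([1e-6, 3e-4, 2e-1])
      t = 0.0
      for _ in range(5):
          e, dt = nfold_way_step(rates)
          t += dt
          print(f"event {e} at t = {t:.3g}")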

  17. PRTR ion exchange vault column sampling

    International Nuclear Information System (INIS)

    Cornwell, B.C.

    1995-01-01

    This report documents ion exchange column sampling and Non-Destructive Assay (NDA) results from activities in 1994 for the Plutonium Recycle Test Reactor (PRTR) ion exchange vault. The objective was to obtain sufficient information to prepare disposal documentation for the ion exchange columns found in the PRTR ion exchange vault. This activity also allowed for monitoring of the liquid level in the lower vault. The sampling activity comprised five separate tasks: (1) sampling an ion exchange column and analyzing the ion exchange media for purposes of waste disposal; (2) gamma and neutron NDA testing of ion exchange columns located in the upper vault; (3) lower vault liquid level measurement; (4) a radiological survey of the upper vault; and (5) securing the vault pending waste disposal

  18. Custom, contract, and kidney exchange.

    Science.gov (United States)

    Healy, Kieran; Krawiec, Kimberly D

    2012-01-01

    In this Essay, we examine a case in which the organizational and logistical demands of a novel form of organ exchange (the nonsimultaneous, extended, altruistic donor (NEAD) chain) do not map cleanly onto standard cultural schemas for either market or gift exchange, resulting in sociological ambiguity and legal uncertainty. In some ways, a NEAD chain resembles a form of generalized exchange, an ancient and widespread instance of the norm of reciprocity that can be thought of simply as the obligation to “pay it forward” rather than the obligation to reciprocate directly with the original giver. At the same time, a NEAD chain resembles a string of promises and commitments to deliver something in exchange for some valuable consideration--that is, a series of contracts. Neither of these salient "social imaginaries" of exchange--gift giving or formal contract--perfectly meets the practical demands of the NEAD system. As a result, neither contract nor generalized exchange drives the practice of NEAD chains. Rather, the majority of actual exchanges still resemble a simpler form of exchange: direct, simultaneous exchange between parties with no time delay or opportunity to back out. If NEAD chains are to reach their full promise for large-scale, nonsimultaneous organ transfer, legal uncertainties and sociological ambiguities must be finessed, both in the practices of the coordinating agencies and in the minds of NEAD-chain participants. This might happen either through the further elaboration of gift-like language and practices, or through a creative use of the cultural form and motivational vocabulary, but not necessarily the legal and institutional machinery, of contract.

  19. Dynamic route guidance algorithm based on artificial immune system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To improve the performance of K-shortest-paths search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to the urban traffic network model established by the node-expanding method, can conveniently realize K-shortest-paths search in urban traffic guidance systems. Because of the immune memory and global parallel search ability of artificial immune systems, the K shortest paths can be found without any repetition, which clearly indicates the superiority of the algorithm over conventional ones. Not only does it offer better parallelism, the algorithm also prevents the premature convergence that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the aforementioned algorithm.
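
    The immune-system search itself is not reproduced in the record. To make the task concrete, the sketch below enumerates the K loopless shortest paths on a toy network with networkx's simple-path generator (a Yen-style enumeration). The graph, weights and helper name are illustrative assumptions.

      import itertools
      import networkx as nx

      # A toy road network; edge weights are assumed travel times
      G = nx.DiGraph()
      G.add_weighted_edges_from([
          ("A", "B", 2), ("B", "D", 2), ("A", "C", 1),
          ("C", "D", 4), ("B", "C", 1), ("C", "B", 1),
      ])

      def k_shortest_paths(graph, source, target, k):
          # networkx yields loopless paths in increasing weight order
          # (a Yen-style enumeration); take the first k of them.
          gen = nx.shortest_simple_paths(graph, source, target,
                                         weight="weight")
          return list(itertools.islice(gen, k))

      for path in k_shortest_paths(G, "A", "D", 3):
          print(path)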

  20. New type fuel exchange system

    International Nuclear Information System (INIS)

    Meshii, Toshio; Maita, Yasushi; Hirota, Koichi; Kamishima, Yoshio.

    1988-01-01

    When the reduction of the construction cost of FBRs is considered from the standpoint of machinery and equipment, making the equipment smaller and more efficient is the assigned mission. In order to make a reactor vessel small, it is indispensable to decrease the size of the equipment for fuel exchange installed above the core. Mitsubishi Heavy Industries Ltd. carried out research on the development of a new type of fuel exchange system. For FBRs, the mode of fuel exchange must differ from that of LWRs: handling takes place in the presence of chemically active sodium, under an inert argon cover atmosphere, and under heavy shielding against high radiation. The fuel exchange system for FBRs is composed of a fuel exchanger, which inserts, pulls out and transfers fuel, and rotary plugs. The mechanism adopted for the new type of fuel exchange system that Mitsubishi is developing is explained. The feasibility of the mechanism on the upper part of a core was investigated by water flow tests, vibration tests and buckling tests. The design of the mechanism on the upper part of the core of a demonstration FBR was examined, and the new fuel exchange system was found to be fully applicable. (Kako, I.)

  1. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions of optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  2. Transition from reversible to irreversible magnetic exchange-spring processes in antiferromagnetically exchange-coupled hard/soft/hard trilayer structures

    International Nuclear Information System (INIS)

    Wang Xiguang; Guo Guanghua; Zhang Guangfu

    2011-01-01

    The demagnetization processes of antiferromagnetically exchange-coupled hard/soft/hard trilayer structures have been studied based on a discrete one-dimensional atomic chain model and a linear partial domain-wall model. It is found that, when the magnetic anisotropy of the soft layer is taken into account, changes of the soft layer thickness and the interfacial exchange coupling strength may lead to a transition of the demagnetization process in the soft layer from a reversible to an irreversible magnetic exchange-spring process. For trilayer structures with a very thin soft layer, the demagnetization process exhibits typical reversible exchange-spring behavior. However, as the thickness of the soft layer is increased, there is a crossover point t_c, after which the process becomes irreversible. Similarly, there is also a critical interfacial exchange coupling constant A_sh^c, above which the exchange-spring process is reversible. When A_sh < A_sh^c, the irreversible exchange-spring process is obtained. The phase diagram of reversible and irreversible exchange-spring processes is mapped in the plane of the interfacial exchange coupling A_sh and the soft layer thickness N_s. - Research highlights: → A distinct magnetic exchange-spring process is found in antiferromagnetically exchange-coupled hard/soft/hard trilayers when the magnetic anisotropy of the soft layers is taken into account. → A change of the soft layer thickness may lead to a transition of the demagnetization process in the soft layer from the reversible to the irreversible exchange-spring process. → A change of the soft-hard interfacial exchange coupling strength may lead to the same transition. → The phase diagram of reversible and irreversible exchange-spring processes is mapped in the plane of the interfacial exchange coupling and soft layer thickness.

  3. Indiana Health Information Exchange

    Science.gov (United States)

    The Indiana Health Information Exchange comprises various Indiana health care institutions; it was established to help improve patient safety and is recognized as a best practice for health information exchange.

  4. Evaluation of multilayer perceptron algorithms for an analysis of network flow data

    Science.gov (United States)

    Bieniasz, Jedrzej; Rawski, Mariusz; Skowron, Krzysztof; Trzepiński, Mateusz

    2016-09-01

    The volume of information exchanged through IP networks is larger than ever and still growing. It creates a space for both benign and malicious activities, and the latter raises the need for security in network devices, the network infrastructure and the system as a whole. One of the basic tools to prevent cyber attacks is the Network Intrusion Detection System (NIDS). A NIDS can be realized as a signature-based detector or an anomaly-based one. In the last few years the emphasis has been placed on the latter type, because of the possibility of applying smart and intelligent solutions. An ideal next-generation NIDS should be composed of self-learning algorithms that can react to known and unknown malicious network activities. In this paper we evaluated a machine learning approach for the detection of anomalies in IP network data represented as NetFlow records. We considered the Multilayer Perceptron (MLP) as the classifier and used two types of learning algorithms - Backpropagation (BP) and Particle Swarm Optimization (PSO). This paper includes a comprehensive survey on determining the most optimal MLP learning algorithm for the classification problem applied to network flow data. The performance, training time and convergence of the BP and PSO methods were compared. The results show that the PSO algorithm implemented by the authors outperformed the other solutions when accuracy of classification is considered. The major disadvantage of PSO is training time, which may not be acceptable for larger data sets or in real network applications. At the end we compared some key findings with the results from other papers to show that in all cases the results from this study outperformed them.
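
    As a baseline illustration of the classification task, the sketch below trains scikit-learn's gradient-based (backpropagation-style) MLP on synthetic NetFlow-like features; the paper's PSO-trained variant would replace the fitting step. The feature set, class distributions and all parameters are invented for illustration.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      # Synthetic NetFlow-like features: duration, packets, bytes, flags
      rng = np.random.default_rng(0)
      benign = rng.normal([1.0, 10, 500, 1], [0.5, 3, 100, 0.5], (500, 4))
      attack = rng.normal([0.1, 80, 100, 4], [0.05, 20, 50, 1.0], (500, 4))
      X = np.vstack([benign, attack])
      y = np.array([0] * 500 + [1] * 500)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # Backpropagation-trained MLP baseline for flow classification
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                          random_state=0)
      clf.fit(X_tr, y_tr)
      print(f"accuracy: {clf.score(X_te, y_te):.3f}")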

  5. Exchange effects in Relativistic Schroedinger Theory

    International Nuclear Information System (INIS)

    Sigg, T.; Sorg, M.

    1998-01-01

    The Relativistic Schroedinger Theory predicts the occurrence of exchange and overlap effects in many-particle systems. For a 2-particle system, the interaction energy of the two particles consists of two contributions: Coulomb energy and exchange energy, where the first one is revealed to be the same as in standard quantum theory. However, the exchange energy is mediated by an exchange potential, contrary to the kinematical origin of the exchange term in the standard theory

  6. Effective Exchange Rate Classifications and Growth

    OpenAIRE

    Justin M. Dubas; Byung-Joo Lee; Nelson C. Mark

    2005-01-01

    We propose an econometric procedure for obtaining de facto exchange rate regime classifications which we apply to study the relationship between exchange rate regimes and economic growth. Our classification method models the de jure regimes as outcomes of a multinomial logit choice problem conditional on the volatility of a country's effective exchange rate, a bilateral exchange rate and international reserves. An `effective' de facto exchange rate regime classification is then obtained by as...

  7. Exchange currents in nuclear physics

    International Nuclear Information System (INIS)

    Truglik, Eh.

    1980-01-01

    Starting from Adler's low-energy theorem for the soft-pion production amplitudes, the predictions of the meson exchange currents theory for nuclear physics are discussed. The results are reformulated in terms of phenomenological Lagrangians. This method allows one to pass naturally to the more realistic case of hard mesons. The predictions are critically compared with the existing experimental data. The main processes in which vector isovector exchange currents, vector isoscalar exchange currents and axial exchange currents take place are pointed out

  8. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
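
    A minimal NumPy sketch of the update described above follows: the input vector in the weight update is quantised to three levels (-1, 0, +1) using a dead-zone threshold. The tap count, step size and threshold are illustrative assumptions.

      import numpy as np

      def mclms(x, d, n_taps=8, mu=0.01, threshold=0.1):
          # Modified clipped LMS: the weight update uses a three-level
          # quantisation q(u) of the input, with a dead zone of width
          # 2*threshold around zero.
          q = lambda u: np.where(np.abs(u) < threshold, 0.0, np.sign(u))
          w = np.zeros(n_taps)
          for n in range(n_taps - 1, len(x)):
              u = x[n - n_taps + 1:n + 1][::-1]  # tap-delay line
              e = d[n] - w @ u                   # a-priori error
              w += mu * e * q(u)                 # quantised-input update
          return w

      # Usage: identify an unknown FIR system from noisy observations
      rng = np.random.default_rng(0)
      x = rng.standard_normal(5000)
      h = np.array([0.5, -0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
      d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
      print(np.round(mclms(x, d), 2))  # approaches h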

  9. Can positive social exchanges buffer the detrimental effects of negative social exchanges? Age and gender differences.

    Science.gov (United States)

    Fiori, Katherine L; Windsor, Tim D; Pearson, Elissa L; Crisp, Dimity A

    2013-01-01

    Findings from existing research exploring whether positive social exchanges can help to offset (or 'buffer' against) the harmful effects of negative social exchanges on mental health have been inconsistent. This could be because the existing research is characterized by different approaches to studying various contexts of 'cross-domain' and 'within-domain' buffering, and/or because the nature of buffering effects varies according to sociodemographic characteristics that underlie different aspects of social network structure and function. The purpose of this study was to examine whether the buffering effects of global perceptions of positive exchanges on the link between global negative exchanges and mental health varied as a function of age and gender. We used a series of regressions in a sample of 556 Australian older adults (ages 55-94) to test for three-way interactions among gender, positive social exchanges, and negative social exchanges, as well as age and positive and negative social exchanges, in predicting mental health, controlling for years of education, partner status, and physical functioning. We found that positive exchanges buffered against negative exchanges for younger old adults, but not for older old adults, and for women, but not for men. Our findings are interpreted in light of research on individual differences in coping responses and interpersonal goals among late middle-aged and older adults. Our findings are in line with gerontological theories (e.g., socioemotional selectivity theory), and imply that an intervention aimed at using positive social exchanges as a means of coping with negative social exchanges might be more successful among particular populations (i.e., women, 'younger' old adults). Copyright © 2012 S. Karger AG, Basel.

  10. Isolating the exchanges in multiple production

    International Nuclear Information System (INIS)

    Pirila, P.; Thomas, G.H.; Quigg, C.

    1975-01-01

    We employ rapidity-gap distributions to identify and study the exchanges between hadron clusters produced in collisions at Fermilab energies. The observation of charge exchange disproves the neutral cluster model. At these energies, the data are consistent with the independent emission of clusters of limited charge or with a true limited-charge-exchange picture. For exchange models, two important parameters are identified: the ratio between |ΔQ| = 1 and ΔQ = 0 exchange, and the ratio between the density in rapidity of charged clusters and that of neutral clusters. In principle, both of these quantities are measurable, but with existing data only the first can be determined. We use a unitarity calculation to estimate the second. Given an exchange model which fits rapidity-gap distributions, we obtain a solution to the counting problem in the overlap-function calculation of the energy dependence of two-body charge exchange, and estimate the suppression of charge exchange with respect to elastic scattering

  11. Integrated Foreign Exchange Risk Management

    DEFF Research Database (Denmark)

    Aabo, Tom; Høg, Esben; Kuhn, Jochen

    Empirical research has focused on export as a proxy for the exchange rate exposure and the use of foreign exchange derivatives as the instrument to deal with this exposure. This empirical study applies an integrated foreign exchange risk management approach with a particular focus on the role...

  12. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, where the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of both the QuickFBP and the FBP algorithms are defined and proved for: (1) single-output neural networks in case of training patterns with different targets; and (2) multiple-output neural networks in case of training patterns with an equivalued target vector. They support the automation of the weights training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas such as adaptive and adaptable interactive systems and data mining.
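
    The abstract does not reproduce the fuzzy net function used by FBP and QuickFBP, so the following is only a plain single-hidden-layer backpropagation loop for orientation; the fuzzy variants replace the weighted-sum net functions computed below with fuzzy operations whose cost is the point of the comparison.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(32, 4))   # training patterns
    t = rng.normal(size=(32, 1))   # targets
    W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))
    lr = 0.01

    for _ in range(100):
        h = np.tanh(X @ W1)        # net function + activation, hidden layer
        y = h @ W2                 # net function, output layer
        err = y - t
        grad_W2 = h.T @ err / len(X)
        grad_W1 = X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1

    print(float((err**2).mean()))  # mean squared error after training
    ```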

  13. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Nature-inspired metaheuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such recent swarm-based metaheuristic, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly Algorithm (MoFA) and compare its performance with the standard firefly algorithm and various other metaheuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
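
    For orientation, here is a minimal sketch of the standard firefly algorithm that MoFA modifies (the abstract does not describe the specific modifications): attractiveness decays with squared distance, and each firefly moves toward brighter neighbors with a small random perturbation. The sphere objective and parameter values are illustrative.

    ```python
    import numpy as np

    def firefly(f, dim=5, n=20, iters=200, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, size=(n, dim))
        for _ in range(iters):
            fit = np.array([f(xi) for xi in x])
            for i in range(n):
                for j in range(n):
                    if fit[j] < fit[i]:  # firefly i moves toward brighter firefly j
                        r2 = np.sum((x[i] - x[j])**2)
                        beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                        x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                        fit[i] = f(x[i])
            alpha *= 0.98  # gradually damp the random walk
        return x[np.argmin([f(xi) for xi in x])]

    best = firefly(lambda v: float(np.sum(v**2)))  # minimize the sphere function
    print(best)  # should approach the zero vector
    ```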

  14. Back-exchange: a novel approach to quantifying oxygen diffusion and surface exchange in ambient atmospheres.

    Science.gov (United States)

    Cooper, Samuel J; Niania, Mathew; Hoffmann, Franca; Kilner, John A

    2017-05-17

    A novel two-step Isotopic Exchange (IE) technique has been developed to investigate the influence of oxygen-containing components of ambient air (such as H2O and CO2) on the effective surface exchange coefficient (k*) of a common mixed ionic electronic conductor material. The two-step 'back-exchange' technique was used to introduce a tracer diffusion profile, which was subsequently measured using Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS). The isotopic fraction of oxygen in a dense sample as a function of distance from the surface, before and after the second exchange step, could then be used to determine the surface exchange coefficient in each atmosphere. A new analytical solution was found to the diffusion equation in a semi-infinite domain with a variable surface exchange boundary, for the special case where D* and k* are constant for all exchange steps. This solution validated the results of a numerical, Crank-Nicolson type finite-difference simulation, which was used to extract the parameters from the experimental data. When modelling electrodes, D* and k* are important input parameters, which significantly impact performance. In this study La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF6428) was investigated and it was found that the rate of exchange was increased by around 250% in ambient air compared to high purity oxygen at the same pO2. The three experiments performed in this study were used to validate the back-exchange approach and show its utility.
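
    A nondimensional sketch of the numerical scheme named above: Crank-Nicolson time stepping for 1D tracer diffusion into a semi-infinite solid with a surface-exchange (Robin) boundary, -D dc/dx|_0 = k (c_gas - c_0), checked against the classical constant-D/constant-k analytical profile. Setting D = k = 1 is an assumption for illustration; realistic LSCF values only rescale x and t.

    ```python
    import numpy as np
    from scipy.linalg import solve_banded
    from scipy.special import erfc

    D, k, c_gas = 1.0, 1.0, 1.0          # nondimensional D*, k*, gas isotopic fraction
    L, nx, t_end, nt = 2.0, 400, 0.05, 500
    dx, dt = L / nx, t_end / nt
    r = D * dt / (2 * dx * dx)
    beta = 2 * dx * k / D                # ghost-node factor from the exchange boundary

    m = nx + 1
    diag_A = np.full(m, 1 + 2 * r)
    diag_B = np.full(m, 1 - 2 * r)
    diag_A[0], diag_B[0] = 1 + r * (2 + beta), 1 - r * (2 + beta)
    up = np.full(m, -r); up[1] = -2 * r  # surface row couples twice to node 1
    lo = np.full(m, -r)
    diag_A[-1], lo[-2] = 1.0, 0.0        # far end held at c = 0 (semi-infinite proxy)

    ab = np.zeros((3, m))                # banded storage for solve_banded
    ab[0, 1:], ab[1, :], ab[2, :-1] = up[1:], diag_A, lo[:-1]

    c = np.zeros(m)                      # no tracer in the solid initially
    for _ in range(nt):
        rhs = diag_B * c
        rhs[1:-1] += r * (c[:-2] + c[2:])
        rhs[0] += 2 * r * c[1] + 2 * r * beta * c_gas  # surface-exchange source
        rhs[-1] = 0.0
        c = solve_banded((1, 1), ab, rhs)

    # Classical analytical profile for constant D and k (Crank's solution).
    x = np.linspace(0.0, L, m)
    h = k / D
    s = x / (2 * np.sqrt(D * t_end))
    exact = c_gas * (erfc(s) - np.exp(h * x + h * h * D * t_end)
                     * erfc(s + h * np.sqrt(D * t_end)))
    print("max abs error:", np.abs(c - exact).max())
    ```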

  15. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference that makes these algorithms efficient will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution.
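
    The amplitude recursion is easy to reproduce for the uniform initial state: with k marked items out of N, the oracle flips the sign of the marked amplitude and the diffusion operator reflects every amplitude about the mean. The sketch below checks the recursion against the closed form sin((2t+1)θ)/√k with sin θ = √(k/N).

    ```python
    import numpy as np

    N, k = 1024, 4
    a = b = 1 / np.sqrt(N)          # marked / unmarked amplitudes, uniform start
    theta = np.arcsin(np.sqrt(k / N))
    t_opt = int(np.pi / (4 * theta))  # ~ (pi/4) * sqrt(N/k) iterations

    for t in range(1, t_opt + 1):
        mean = (k * (-a) + (N - k) * b) / N  # mean after the oracle's sign flip
        a, b = 2 * mean + a, 2 * mean - b    # reflection about the mean
        assert np.isclose(a, np.sin((2 * t + 1) * theta) / np.sqrt(k))

    print("success probability:", k * a * a)  # close to 1 at the optimal step
    ```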

  16. Key Exchange Trust Evaluation in Peer-to-Peer Sensor Networks With Unconditionally Secure Key Exchange

    Science.gov (United States)

    Gonzalez, Elias; Kish, Laszlo B.

    2016-03-01

    As the utilization of sensor networks continues to increase, the importance of security becomes more profound. Many industries depend on sensor networks for critical tasks, and a malicious entity can potentially cause catastrophic damage. We propose a new key exchange trust evaluation for peer-to-peer sensor networks, where part of the network has unconditionally secure key exchange. For a given sensor, the higher the portion of channels with unconditionally secure key exchange, the higher the trust value. We give a brief introduction to unconditionally secure key exchange concepts and mention current trust measures in sensor networks. We demonstrate the new key exchange trust measure on a hypothetical sensor network using both wired and wireless communication channels.
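
    Read literally, the proposed trust value is monotone in the fraction of a sensor's channels that use unconditionally secure key exchange. The minimal sketch below computes that fraction for a hypothetical topology; the paper's exact weighting may differ.

    ```python
    from collections import defaultdict

    # (node_a, node_b, unconditionally_secure?) -- hypothetical topology
    channels = [("s1", "s2", True), ("s1", "s3", False),
                ("s2", "s3", True), ("s2", "s4", True), ("s3", "s4", False)]

    secure = defaultdict(int)
    total = defaultdict(int)
    for a, b, uncond in channels:
        for node in (a, b):
            total[node] += 1
            secure[node] += uncond

    # Trust of each sensor: portion of its channels that are unconditionally secure.
    trust = {node: secure[node] / total[node] for node in total}
    print(trust)  # e.g. s2 has trust 1.0, s3 has trust 1/3
    ```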

  17. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
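
    A toy model of the recursive polarization boosting these algorithms build on: an ideal 3-spin compression step raises one spin's polarization from ε to (3ε - ε³)/2, and with ideal resets of the two spent spins the step can be repeated level by level. This reproduces only the flavor of the quoted numbers; spin counts, gate sequences, and relaxation are not modeled.

    ```python
    def compress(eps: float) -> float:
        """Ideal 3-spin (majority-vote) compression of equal polarizations."""
        return (3 * eps - eps**3) / 2

    eps, levels = 0.01, 0
    while eps < 0.60:          # target: a mildly pure, 60%-polarized spin
        eps = compress(eps)    # assumes fresh eps-polarized spins at each level
        levels += 1

    print(levels, eps)  # roughly a dozen ideal levels from 1% to >60% polarization
    ```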

  18. O3 and NOx Exchange

    NARCIS (Netherlands)

    Loubet, B.; Castell, J.F.; Laville, P.; Personne, E.; Tuzet, A.; Ammann, C.; Emberson, L.; Ganzeveld, L.; Kowalski, A.S.; Merbold, L.; Stella, P.; Tuovinen, J.P.

    2015-01-01

    This discussion was based on the background document “Review on modelling atmosphere-biosphere exchange of Ozone and Nitrogen oxides”, which reviews the processes contributing to biosphere-atmosphere exchange of O3 and NOx, including stomatal and non-stomatal exchange of O3, NO, and NO2.

  19. Empirical Correction for Differences in Chemical Exchange Rates in Hydrogen Exchange-Mass Spectrometry Measurements.

    Science.gov (United States)

    Toth, Ronald T; Mills, Brittney J; Joshi, Sangeeta B; Esfandiary, Reza; Bishop, Steven M; Middaugh, C Russell; Volkin, David B; Weis, David D

    2017-09-05

    A barrier to the use of hydrogen exchange-mass spectrometry (HX-MS) in many contexts, especially analytical characterization of various protein therapeutic candidates, is that differences in temperature, pH, ionic strength, buffering agent, or other additives can alter chemical exchange rates, making HX data gathered under differing solution conditions difficult to compare. Here, we present data demonstrating that HX chemical exchange rates can be substantially altered not only by the well-established variables of temperature and pH but also by additives including arginine, guanidine, methionine, and thiocyanate. To compensate for these additive effects, we have developed an empirical method to correct the hydrogen-exchange data for these differences. First, differences in chemical exchange rates are measured by use of an unstructured reporter peptide, YPI. An empirical chemical exchange correction factor, determined by use of the HX data from the reporter peptide, is then applied to the HX measurements obtained from a protein of interest under different solution conditions. We demonstrate that the correction is experimentally sound through simulation and in a proof-of-concept experiment using unstructured peptides under slow-exchange conditions (pD 4.5 at ambient temperature). To illustrate its utility, we applied the correction to HX-MS excipient screening data collected for a pharmaceutically relevant IgG4 mAb being characterized to determine the effects of different formulations on backbone dynamics.
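
    A heavily simplified sketch of the correction idea: the YPI reporter peptide measures how much a formulation speeds or slows intrinsic chemical exchange relative to a reference buffer, and that factor can be used to rescale the effective labeling times of the protein data before comparison. The rate constants and the single-scalar rescaling below are illustrative assumptions; the paper's actual correction procedure is not spelled out in the abstract.

    ```python
    k_ref = 0.12  # reporter-peptide chemical exchange rate, reference buffer (1/min, hypothetical)
    k_alt = 0.19  # same reporter in the formulation being screened (hypothetical)

    factor = k_alt / k_ref  # empirical chemical-exchange correction factor

    # Labeling timepoints collected in the alternative formulation are mapped
    # onto the reference time axis before comparing deuterium-uptake curves.
    timepoints_alt = [0.5, 5.0, 50.0, 500.0]  # minutes
    timepoints_corrected = [t * factor for t in timepoints_alt]
    print(timepoints_corrected)
    ```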

  20. Mastering Microsoft Exchange Server 2013

    CERN Document Server

    Elfassy, David

    2013-01-01

    The bestselling guide to Exchange Server, fully updated for the newest version. Microsoft Exchange Server 2013 is touted as a solution for lowering the total cost of ownership, whether deployed on-premises or in the cloud. Like the earlier editions, this comprehensive guide covers every aspect of installing, configuring, and managing this multifaceted collaboration system. It offers Windows systems administrators and consultants a complete tutorial and reference, ideal for anyone installing Exchange Server for the first time or those migrating from an earlier Exchange Server version.