Algorithms for Lightweight Key Exchange.
Alvarez, Rafael; Caballero-Gil, Cándido; Santonja, Juan; Zamora, Antonio
2017-06-27
Public-key cryptography is too slow for general-purpose encryption, and most applications limit its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented in low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios, where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determine those best suited to the requirements of critical infrastructure and emergency applications, propose a security framework based on these algorithms, and study its application to decentralized node or sensor networks.
Rethinking exchange market models as optimization algorithms
Luquini, Evandro; Omar, Nizam
2018-02-01
The exchange market model has mainly been used to study the inequality problem. Although the problem of inequality in human society is very important, the dynamics of exchange market models up to the stationary state, and their capability of ranking individuals, are interesting in themselves. This study considers the hypothesis that the exchange market model can be understood as an optimization procedure. We present herein the implications for algorithmic optimization and also the possibility of a new family of exchange market models.
Welch, Vivian; Brand, Kevin; Kristjansson, Elizabeth; Smylie, Janet; Wells, George; Tugwell, Peter
2012-12-19
Systematic reviews have been challenged to consider effects on disadvantaged groups. A priori specification of subgroup analyses is recommended to increase the credibility of these analyses. This study aimed to develop and assess inter-rater agreement for an algorithm for systematic review authors to predict whether differences in effect measures are likely for disadvantaged populations relative to advantaged populations (only relative effect measures were addressed). A health equity plausibility algorithm was developed using clinimetric methods with three items based on literature review, key informant interviews and methodology studies. The three items dealt with the plausibility of differences in relative effects across sex or socioeconomic status (SES) due to: 1) patient characteristics; 2) intervention delivery (i.e., implementation); and 3) comparators. Thirty-five respondents (clinicians, methodologists and research users) assessed the likelihood of differences across sex and SES for ten systematic reviews with these questions. We assessed inter-rater reliability using the Fleiss multi-rater kappa. The proportion agreement was 66% for patient characteristics (95% confidence interval: 61% to 71%), 67% for intervention delivery (95% confidence interval: 62% to 72%) and 55% for the comparator (95% confidence interval: 50% to 60%). The Fleiss kappa ranged from 0 to 0.199, representing very low agreement beyond chance. Users of systematic reviews rated that important differences in relative effects across sex and socioeconomic status were plausible for a range of individual- and population-level interventions. However, there was very low inter-rater agreement for these assessments. There is an unmet need for discussion of the plausibility of differential effects in systematic reviews. Increased consideration of external validity and applicability to different populations and settings is warranted in systematic reviews to meet this need.
Starting design for use in variance exchange algorithms | Iwundu ...
African Journals Online (AJOL)
A new method of constructing the initial design for use in variance exchange algorithms is presented. The method chooses support points to go into the design as measures of distances of the support points from the centre of the geometric region and of permutation-invariant sets. The initial design is as close as possible to ...
Asset management using genetic algorithm: Evidence from Tehran Stock Exchange
Directory of Open Access Journals (Sweden)
Abbas Sarijaloo
2014-02-01
This paper presents an empirical investigation studying the effect of market management using the Markowitz theorem. The study uses the information of the 50 best performers on the Tehran Stock Exchange over the period 2006-2009; using the Markowitz theorem, the efficient asset allocations are determined and the results are analyzed. The proposed model of this paper has been solved using a genetic algorithm. The results indicate that the Tehran Stock Exchange managed to perform much better than the average world market in most years of the study, especially in 2009. The results of our investigation also indicate that one could reach outstanding results using a GA and forming an efficient portfolio.
Enhanced diffie-hellman algorithm for reliable key exchange
Aryan; Kumar, Chaithanya; Vincent, P. M. Durai Raj
2017-11-01
Diffie-Hellman is one of the first public-key procedures and is a well-defined way of exchanging cryptographic keys securely. The underlying concept was introduced by Ralph Merkle, and the protocol is named after Whitfield Diffie and Martin Hellman. In the Diffie-Hellman algorithm, the sender and receiver establish a common secret key and then communicate with each other over a public channel that is known to everyone. A number of internet services are secured by Diffie-Hellman. In a public-key cryptosystem, each party has to trust the public key received from the other, and this is the main challenge of public-key cryptosystems: a man-in-the-middle attack is quite possible on the existing Diffie-Hellman algorithm. In a man-in-the-middle attack, the attacker sits in the public channel, receives the public keys of both sender and receiver, and sends each of them public keys generated on his own. Denial of service is another attack commonly found against Diffie-Hellman: the attacker tries to stop the communication between sender and receiver, either by deleting messages or by confusing the parties with miscommunication. Further attacks, such as insider and outsider attacks, are also possible. To reduce the possibility of attacks on the Diffie-Hellman algorithm, we enhance it to the next level. In this paper, we extend Diffie-Hellman by applying its own construction a second time to obtain a stronger secret key, which is then exchanged between the sender and the receiver so that a new shared secret key is generated for each message. The second secret key is generated by taking a primitive root of the first secret key.
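As background for the enhancement described above, the baseline Diffie-Hellman exchange can be sketched in a few lines of Python. The parameters `P` and `G` below are illustrative toy values (real deployments use standardized primes of 2048 bits or more), and the sketch shows only the classic protocol the paper builds on, not the authors' two-stage key derivation:

```python
import random

# Toy parameters for illustration only -- never use values this small.
P = 23          # public prime modulus
G = 5           # public generator (a primitive root mod 23)

def dh_keypair(p=P, g=G):
    """Generate a private exponent and the matching public value."""
    private = random.randrange(2, p - 1)
    public = pow(g, private, p)     # modular exponentiation g^private mod p
    return private, public

# Each side publishes its public value and keeps its exponent secret.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# Both sides derive the same shared secret from the other's public value.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
```

In the paper's scheme, this shared value would then feed a second derivation step so that a fresh key protects each message.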
Optimization of heat exchanger networks using genetic algorithms
International Nuclear Information System (INIS)
Teyssedou, A.; Dipama, J.; Sorin, M.
2004-01-01
Most thermal processes encountered in the power industry (chemical, metallurgical, nuclear and thermal power stations) necessitate the transfer of large amounts of heat between fluids having different thermal potentials. A common practice to achieve such a requirement consists of using heat exchangers. In general, each fluid current is conveniently cooled or heated independently from the others in the power plant. When the number of heat exchangers is large enough, however, a convenient arrangement of the different flow currents may allow a considerable reduction in energy consumption (Linnhoff and Hindmarsh, 1983). In such a case the heat exchangers form a 'Heat Exchanger Network' (HEN) that can be optimized to reduce the overall energy consumption. This type of optimization problem involves two separate calculation procedures. First, it is necessary to optimize the topology of the HEN so that a reduction in energy consumption can be obtained. In a second step, the power distribution across the HEN should be optimized without violating the second law of thermodynamics. The numerical treatment of this kind of problem requires the use of both discrete variables (for taking into account each heat exchanger unit) and continuous variables for handling the thermal load of each unit. It is obvious that for a large number of heat exchangers the use of conventional calculation methods, i.e., Simplex, becomes almost impossible. Therefore, in this paper we present a 'Genetic Algorithm' (GA) that has been implemented and successfully used to treat complex HENs containing a large number of heat exchangers. As opposed to conventional optimization techniques that require knowledge of the derivatives of a function, GAs start the calculation process from a large population of possible solutions of a given problem (Goldberg, 1999). Each possible solution is in turn evaluated according to a 'fitness' criterion obtained from an objective function.
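The population-based search applied to HENs above can be illustrated with a minimal real-coded genetic algorithm. The sketch below minimizes a one-dimensional test function rather than a network topology; the operators (tournament selection, blend crossover, Gaussian mutation) and all parameter values are generic illustrative choices, not those of the cited work:

```python
import random

def genetic_minimize(f, lo, hi, pop_size=40, gens=60, mut_rate=0.2, seed=7):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, with the best individual carried over (elitism)."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        elite = min(pop, key=f)                  # keep the best solution
        children = [elite]
        while len(children) < pop_size:
            p1 = min(rng.sample(pop, 3), key=f)  # tournament selection
            p2 = min(rng.sample(pop, 3), key=f)
            a = rng.random()
            child = a * p1 + (1 - a) * p2        # blend crossover
            if rng.random() < mut_rate:          # Gaussian mutation
                child += rng.gauss(0, 0.05 * (hi - lo))
            children.append(min(max(child, lo), hi))
        pop = children
    return min(pop, key=f)

# Minimize a simple quadratic with optimum at x = 3.
best = genetic_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

A HEN application would replace the scalar gene with a vector mixing integers (topology) and reals (thermal loads), which is precisely what makes the problem hard.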
Exchange inlet optimization by genetic algorithm for improved RBCC performance
Chorkawy, G.; Etele, J.
2017-09-01
A genetic algorithm based on real-parameter representation, with a variable selection pressure and variable probability of mutation, is used to optimize an annular air-breathing rocket inlet called the Exchange Inlet. A rapid and accurate design method, which provides estimates of air-breathing, mixing, and isentropic flow performance, serves as the engine of the optimization routine. Comparison to detailed numerical simulations shows that the design method yields desired exit Mach numbers to within approximately 1% over 75% of the annular exit area and predicts entrained air mass flows to within 1% to 9% of numerically simulated values, depending on the flight condition. Optimum designs are obtained within approximately 8000 fitness function evaluations in a search space on the order of 10⁶. The method is also shown to identify beneficial values for particular alleles when they exist, while handling cases where physical and aphysical designs co-exist at particular values of a subset of alleles within a gene. For an air-breathing engine based on a hydrogen-fuelled rocket, an exchange inlet is designed which yields a predicted air entrainment ratio within 95% of the theoretical maximum.
Anatomically Plausible Surface Alignment and Reconstruction
DEFF Research Database (Denmark)
Paulsen, Rasmus R.; Larsen, Rasmus
2010-01-01
With the increasing clinical use of 3D surface scanners, there is a need for accurate and reliable algorithms that can produce anatomically plausible surfaces. In this paper, a combined method for surface alignment and reconstruction is proposed. It is based on an implicit surface representation...
Exergetic optimization of shell and tube heat exchangers using a genetic based algorithm
Energy Technology Data Exchange (ETDEWEB)
Oezcelik, Yavuz [Ege University, Bornova, Izmir (Turkey). Engineering Faculty, Chemical Engineering Department
2007-08-15
In computer-based optimization, many thousands of alternative shell and tube heat exchangers may be examined by varying a large number of exchanger parameters such as tube length, tube outer diameter, pitch size, layout angle, baffle space ratio, and number of tube-side passes. In the present study, a genetic-based algorithm was developed, programmed, and applied to estimate the optimum values of the discrete and continuous variables of MINLP (mixed integer nonlinear programming) test problems. The results of the test problems show that the genetic-based algorithm can estimate acceptable values of the continuous variables and optimum values of the integer variables. Finally, the genetic-based algorithm was extended to perform parametric studies and to find the optimum configuration of heat exchangers by minimizing the sum of the annual capital cost and the exergetic cost of the shell and tube heat exchangers. The results of the example problems show that the proposed algorithm is applicable for finding optimum and near-optimum alternatives of shell and tube heat exchanger configurations. (author)
International Nuclear Information System (INIS)
Adili, Ali; Ben Salah, Mohieddine; Kerkeni, Chekib; Ben Nasrallah, Sassi
2009-01-01
At high temperature, the circulation of fluid in heat exchangers gives fouling a tendency to accumulate on the internal surface of tubes. This paper presents an experimental procedure for estimating the thermophysical properties of fouling deposited on the internal surface of a heat exchanger tube using genetic algorithms (GAs). The genetic algorithm is used to minimize an objective function containing calculated and measured temperatures. An experimental bench based on a photothermal method with finite-width pulse heat excitation is used, and the estimated parameters are obtained with high accuracy.
Turning Simulation into Estimation: Generalized Exchange Algorithms for Exponential Family Models.
Directory of Open Access Journals (Sweden)
Maarten Marsman
The Single Variable Exchange algorithm is based on a simple idea: any model that can be simulated can be estimated by producing draws from the posterior distribution. We build on this simple idea by framing the Exchange algorithm as a mixture of Metropolis transition kernels and propose strategies that automatically select the more efficient transition kernels. In this manner we achieve significant improvements in convergence rate and autocorrelation of the Markov chain without relying on more than being able to simulate from the model. Our focus is on statistical models in the Exponential Family, and we use two simple models from educational measurement to illustrate the contribution.
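The core exchange move, simulating auxiliary data at the proposed parameter so that intractable normalizing constants cancel, can be sketched for a deliberately simple model. The exponential-rate example below is a toy choice of mine (its posterior is actually tractable, which makes the sampler easy to check); it illustrates the single-variable exchange update, not the kernel-mixture strategies the paper proposes:

```python
import math
import random

def exchange_sampler(data, iters=5000, step=0.3, theta0=1.0, seed=1):
    """Single Variable Exchange sampler for the rate of an exponential
    model, using only the unnormalized density f(y|theta) = exp(-theta*y).
    Auxiliary data simulated at the proposed rate make the normalizing
    constants cancel in the acceptance ratio (flat prior, symmetric walk)."""
    rng = random.Random(seed)
    n, s = len(data), sum(data)
    theta, draws = theta0, []
    for _ in range(iters):
        prop = theta + rng.gauss(0, step)
        if prop > 0:
            # auxiliary data set simulated at the proposed parameter
            s_aux = sum(rng.expovariate(prop) for _ in range(n))
            log_accept = (theta - prop) * (s - s_aux)
            if math.log(rng.random()) < log_accept:
                theta = prop
        draws.append(theta)
    return draws

data_rng = random.Random(0)
data = [data_rng.expovariate(2.0) for _ in range(200)]
draws = exchange_sampler(data)
posterior_mean = sum(draws[1000:]) / len(draws[1000:])
```

Because the toy posterior under a flat prior is Gamma(n+1, Σy), the chain's mean can be checked directly against (n+1)/Σy, which is the appeal of validating the exchange move on a tractable model first.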
International Nuclear Information System (INIS)
Sencan Sahin, Arzu; Kilic, Bayram; Kilic, Ulas
2011-01-01
Highlights: → Artificial Bee Colony for shell and tube heat exchanger optimization is used. → The total cost is minimized by varying design variables. → This new approach can be applied for optimization of heat exchangers. - Abstract: In this study, a new shell and tube heat exchanger optimization design approach is developed. Artificial Bee Colony (ABC) has been applied to minimize the total cost of the equipment including capital investment and the sum of discounted annual energy expenditures related to pumping of shell and tube heat exchanger by varying various design variables such as tube length, tube outer diameter, pitch size, baffle spacing, etc. Finally, the results are compared to those obtained by literature approaches. The obtained results indicate that Artificial Bee Colony (ABC) algorithm can be successfully applied for optimal design of shell and tube heat exchangers.
Energy Technology Data Exchange (ETDEWEB)
Sencan Sahin, Arzu, E-mail: sencan@tef.sdu.edu.tr [Department of Mechanical Education, Technical Education Faculty, Sueleyman Demirel University, 32260 Isparta (Turkey); Kilic, Bayram, E-mail: bayramkilic@hotmail.com [Bucak Emin Guelmez Vocational School, Mehmet Akif Ersoy University, Bucak (Turkey); Kilic, Ulas, E-mail: ulaskilic@mehmetakif.edu.tr [Bucak Emin Guelmez Vocational School, Mehmet Akif Ersoy University, Bucak (Turkey)
2011-10-15
Highlights: → Artificial Bee Colony for shell and tube heat exchanger optimization is used. → The total cost is minimized by varying design variables. → This new approach can be applied for optimization of heat exchangers. - Abstract: In this study, a new shell and tube heat exchanger optimization design approach is developed. Artificial Bee Colony (ABC) has been applied to minimize the total cost of the equipment including capital investment and the sum of discounted annual energy expenditures related to pumping of shell and tube heat exchanger by varying various design variables such as tube length, tube outer diameter, pitch size, baffle spacing, etc. Finally, the results are compared to those obtained by literature approaches. The obtained results indicate that Artificial Bee Colony (ABC) algorithm can be successfully applied for optimal design of shell and tube heat exchangers.
Dash, Rajashree
2017-11-01
Forecasting the purchasing power of one currency with respect to another is always an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for a simpler and more efficient model with better prediction capability. In this paper, an evolutionary framework is proposed that uses an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets, USD/CAD, USD/CHF, and USD/JPY, accumulated over the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and the particle swarm optimization algorithm. Practical analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rates compared to the other models included in the study.
A Numerical Algorithm and a Graphical Method to Size a Heat Exchanger
DEFF Research Database (Denmark)
Berning, Torsten
2011-01-01
This paper describes the development of a numerical algorithm and a graphical method that can be employed in order to determine the overall heat transfer coefficient inside heat exchangers. The method is based on an energy balance and utilizes the spreadsheet application software Microsoft Excel™...
A novel Random Walk algorithm with Compulsive Evolution for heat exchanger network synthesis
International Nuclear Information System (INIS)
Xiao, Yuan; Cui, Guomin
2017-01-01
Highlights: • A novel Random Walk Algorithm with Compulsive Evolution is proposed for HENS. • A simple and feasible evolution strategy is presented in the RWCE algorithm. • The integer and continuous variables of the HEN are optimized simultaneously in RWCE. • RWCE demonstrates a relatively strong global search ability in HEN optimization. - Abstract: The heat exchanger network (HEN) synthesis problem can be characterized as highly combinatorial, nonlinear and nonconvex, contributing to unmanageable computational time and a challenge in identifying the global optimal network design. Stochastic methods are robust and show a powerful global optimizing ability. Based on the common characteristic of different stochastic methods, namely randomness, a novel Random Walk algorithm with Compulsive Evolution (RWCE) is proposed to achieve the best possible total annual cost of a heat exchanger network with a relatively simple and feasible evolution strategy. A population of heat exchanger networks is first randomly initialized. Next, the heat load of each heat exchanger in each individual is randomly expanded or contracted in order to optimize both the integer and continuous variables simultaneously and to obtain the lowest total annual cost. Besides, when individuals approach local optima, there is a certain probability that they compulsively accept imperfect networks in order to keep the population diversity and the ability of global optimization. The presented method is then applied to heat exchanger network synthesis cases from the literature for comparison with the best published results. RWCE consistently attains a lower total annual cost than previously published results.
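Stripped of the HEN-specific encoding, the random-walk-with-compulsive-evolution idea reduces to: perturb one variable of each individual at random, always keep improvements, and occasionally accept a worse solution to preserve diversity. The sketch below applies that skeleton to a plain continuous test function; the population size, step size and acceptance probability are illustrative guesses, not the paper's settings:

```python
import random

def rwce_minimize(f, dim, lo, hi, pop=10, iters=3000, step=0.5,
                  p_worse=0.02, seed=3):
    """Random walk with compulsive evolution, reduced to its core idea:
    each individual perturbs one randomly chosen variable per iteration;
    improving moves are always kept, and worse moves are 'compulsively'
    accepted with small probability p_worse to maintain diversity."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        for x in xs:
            fx = f(x)
            j = rng.randrange(dim)
            old = x[j]
            x[j] = min(max(old + rng.uniform(-step, step), lo), hi)
            if f(x) > fx and rng.random() > p_worse:
                x[j] = old              # reject the worse move (usually)
    return min(xs, key=f)

# Minimize the sphere function over [-5, 5]^3; optimum is the origin.
best = rwce_minimize(lambda v: sum(x * x for x in v), dim=3, lo=-5.0, hi=5.0)
```

In the paper, the perturbed quantities are heat loads of individual exchangers, so a single move can simultaneously change the continuous duty and, when a load shrinks to zero, the integer network structure.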
A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments
Harman, Radoslav; Filová, Lenka; Richtarik, Peter
2018-01-17
We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing as well as new and more efficient algorithms. Within this class of methods, we construct a simple, randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to the performance of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion of D-optimality that also has applications beyond experimental design, such as the construction of the minimum volume ellipsoid containing a given set of data-points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights in the case of the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.
Heuristic Elements of Plausible Reasoning.
Dudczak, Craig A.
At least some of the reasoning processes involved in argumentation rely on inferences which do not fit within the traditional categories of inductive or deductive reasoning. The reasoning processes involved in plausibility judgments have neither the formal certainty of deduction nor the imputed statistical probability of induction. When utilizing…
Plausible values in statistical inference
Marsman, M.
2014-01-01
In Chapter 2 it is shown that the marginal distribution of plausible values is a consistent estimator of the true latent variable distribution, and, furthermore, that convergence is monotone in an embedding in which the number of items tends to infinity. This result is used to clarify some of the
Noniterative accurate algorithm for the exact exchange potential of density-functional theory
International Nuclear Information System (INIS)
Cinal, M.; Holas, A.
2007-01-01
An algorithm for determination of the exchange potential is constructed and tested. It represents a one-step procedure based on the equations derived by Krieger, Li, and Iafrate (KLI) [Phys. Rev. A 46, 5453 (1992)], implemented previously as an iterative procedure by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)]. Due to a suitable transformation of the KLI equations, we can solve them while avoiding iterations. Our algorithm is applied to the closed-shell atoms, from Be up to Kr, within the DFT exchange-only approximation. Using pseudospectral techniques for representing orbitals, we obtain extremely accurate values of total and orbital energies, with errors at least four orders of magnitude smaller than those known in the literature.
A novel hybrid chaotic ant swarm algorithm for heat exchanger networks synthesis
International Nuclear Information System (INIS)
Zhang, Chunwei; Cui, Guomin; Peng, Fuyu
2016-01-01
Highlights: • The chaotic ant swarm algorithm is proposed to avoid trapping into a local optimum. • The organization variables update strategy makes full use of the advantages of chaotic search. • The structure evolution strategy is developed to handle integer variable optimization. • Overall, three cases taken from the literature are investigated, with better optima found. - Abstract: The heat exchanger networks synthesis (HENS) problem still remains open due to its combinatorial nature, which can easily result in suboptimal designs and unacceptable calculation effort. In this paper, a novel hybrid chaotic ant swarm algorithm is proposed. The presented algorithm, which consists of a combination of the chaotic ant swarm (CAS) algorithm, a structure evolution strategy, a local optimization strategy and an organization variables update strategy, can simultaneously optimize continuous variables and integer variables. The CAS algorithm chaotically searches and generates new solutions in the given space, and subsequently the structure evolution strategy evolves the structures represented by the solutions and limits the search space. Furthermore, the local optimization strategy and the organization variables update strategy are introduced to enhance the performance of the algorithm. The study of three different cases, found in the literature, revealed special search abilities in both structure space and continuous variable space.
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
Directory of Open Access Journals (Sweden)
Oguz Emrah Turgut
2014-12-01
This study explores the thermal design of shell and tube heat exchangers using the Improved Intelligent Tuned Harmony Search (I-ITHS) algorithm. Intelligent Tuned Harmony Search (ITHS) is an upgraded version of the harmony search algorithm which has the advantage of deciding between intensification and diversification processes by applying a proper pitch adjusting strategy. In this study, we aim to improve the search capacity of the ITHS algorithm by utilizing chaotic sequences instead of uniformly distributed random numbers and by applying alternative search strategies, inspired by the Artificial Bee Colony algorithm and Opposition Based Learning, on promising areas (best solutions). Design variables including baffle spacing, shell diameter, tube outer diameter and number of tube passes are used to minimize the total cost of the heat exchanger, which incorporates capital investment and the sum of discounted annual energy expenditures related to pumping and heat exchanger area. Results show that I-ITHS can be utilized in optimizing shell and tube heat exchangers.
Multi-objective optimization of a plate and frame heat exchanger via genetic algorithm
Energy Technology Data Exchange (ETDEWEB)
Najafi, Hamidreza; Najafi, Behzad [K. N. Toosi University of Technology, Department of Mechanical Engineering, Tehran (Iran)
2010-06-15
In the present paper, a plate and frame heat exchanger is considered. Multi-objective optimization using a genetic algorithm is developed in order to obtain a set of geometric design parameters which lead to minimum pressure drop and maximum overall heat transfer coefficient. Clearly, the considered objective functions are conflicting, and no single solution can satisfy both objectives simultaneously. The multi-objective optimization procedure yields a set of optimal solutions, called the Pareto front, each of which is a trade-off between objectives and can be selected by the user with regard to the application and the project's limits. The presented work handles numerous geometric parameters in the presence of logical constraints. A sensitivity analysis is also carried out to study the effects of different geometric parameters on the considered objective functions. Modeling the system and implementing the multi-objective optimization via the genetic algorithm have been performed in MATLAB. (orig.)
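The Pareto-front concept used above is easy to make concrete. The helper below filters a set of candidate designs, represented here as hypothetical (pressure drop, 1/U) pairs with both objectives minimized; the numbers are invented purely for illustration:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical designs as (pressure drop, 1/U) pairs, both to be minimized.
designs = [(4.0, 1.0), (2.0, 3.0), (3.0, 2.0), (5.0, 5.0)]
front = pareto_front(designs)
```

Here (5.0, 5.0) is dominated (it is worse in both objectives than (4.0, 1.0)), while the remaining three designs are mutual trade-offs and together form the front the user chooses from.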
A novel algorithm for demand-control of a single-room ventilation unit with a rotary heat exchanger
DEFF Research Database (Denmark)
Smith, Kevin Michael; Jansen, Anders Lund; Svendsen, Svend
in the indoor environment. Based on these values, a demand-control algorithm varies fan speeds to change airflow rates and varies the rotational speed of the heat exchanger to modulate heat and moisture recovery. The algorithm varies airflow rates to provide free cooling and limit CO2 concentrations, varies moisture recovery by varying the rotational speed, and then safely unbalances airflows in a worst-case scenario. In the algorithm, frost protection and minimum supply temperature take the highest priority and override other controls. This paper documents the proposed demand-control algorithm and analyses its impacts on compliance with building regulations in Denmark. The paper presents an algorithm that manufacturers can program into their controls. The commercially available single-room ventilation unit with a rotary heat exchanger uses this algorithm coded in the C language. Future work will document...
Minimizing shell-and-tube heat exchanger cost with genetic algorithms and considering maintenance
Energy Technology Data Exchange (ETDEWEB)
Wildi-Tremblay, P.; Gosselin, L. [Universite Laval, Quebec (Canada). Dept. de genie mecanique
2007-07-15
This paper presents a procedure for minimizing the cost of a shell-and-tube heat exchanger based on genetic algorithms (GA). The global cost includes the operating cost (pumping power) and the initial cost expressed in terms of annuities. Eleven design variables associated with shell-and-tube heat exchanger geometries are considered: tube pitch, tube layout pattern, number of tube passes, baffle spacing at the centre, baffle spacing at the inlet and outlet, baffle cut, tube-to-baffle diametrical clearance, shell-to-baffle diametrical clearance, tube bundle outer diameter, shell diameter, and tube outer diameter. Evaluations of the heat exchanger's performance are based on an adapted version of the Bell-Delaware method. Pressure drop constraints are included in the procedure. Reliability and maintenance due to fouling are taken into account by restraining the coefficient of increase of surface to a given interval. Two case studies are presented. Results show that the procedure can properly and rapidly identify the optimal design for a specified heat transfer process. (author)
Directory of Open Access Journals (Sweden)
Nizar Hadi Abbas
2018-02-01
Full Text Available Quadrotors are emerging as an attractive platform for unmanned aerial vehicle (UAV) research, due to the simplicity of their structure and maintenance, their ability to hover, and their vertical take-off and landing (VTOL) capability. With the vast advancements in small-size sensors, actuators, and processors, researchers are now focusing on developing mini UAVs for both research and commercial applications. This work presents a detailed mathematical nonlinear dynamic model of the quadrotor, formulated using the Newton-Euler method. Although the quadrotor is a 6-DOF under-actuated system, the derived rotational subsystem is fully actuated, while the translational subsystem is under-actuated. The derivation of the mathematical model was followed by the development of a controller, based on the linear Proportional-Integral-Derivative (PID) controller, to control the altitude, attitude, heading and position of the quadrotor in space; for this purpose, a simplified version of the model is obtained. The controller gains are tuned using optimization techniques to improve the system's dynamic response. The standard Imperialist Competitive Algorithm (ICA) was applied to tune the PID parameters and then compared to Cultural Exchange Imperialist Competitive Algorithm (CEICA) tuning; the results show that the proposed algorithm performs better, with the objective function improved by 23.91% for CEICA compared with ICA.
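The controller structure whose gains are tuned above is the standard discrete PID loop. A minimal sketch, closed around an invented first-order plant: the plant model and the gains below are placeholders for illustration, not the paper's quadrotor dynamics or the ICA/CEICA-tuned values.

```python
# Minimal discrete PID loop on a toy first-order plant y' = -y + u.
# kp, ki, kd are illustrative gains; a tuner such as ICA/CEICA would
# search over these three numbers to shape the closed-loop response.
def simulate(kp, ki, kd, setpoint=1.0, dt=0.01, steps=1000):
    y, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt                  # integral term accumulates error
        deriv = (err - prev_err) / dt      # derivative term damps changes
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt * (-y + u)                 # forward-Euler step of the plant
    return y

final = simulate(kp=4.0, ki=2.0, kd=0.1)
```

With these gains the closed loop is stable and the integral action removes steady-state error, so after 10 simulated seconds the output sits close to the setpoint.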
Searching for Plausible N-k Contingencies Endangering Voltage Stability
DEFF Research Database (Denmark)
Weckesser, Johannes Tilman Gabriel; Van Cutsem, Thierry
2017-01-01
This paper presents a novel search algorithm using time-domain simulations to identify plausible N − k contingencies endangering voltage stability. Starting from an initial list of disturbances, progressively more severe contingencies are investigated. After simulation of an N − k contingency, the simulation results are assessed. If the system response is unstable, a plausible harmful contingency sequence has been found. Otherwise, components affected by the contingencies are considered as candidate next events leading to N − (k + 1) contingencies. This implicitly takes into account hidden failures…
International Nuclear Information System (INIS)
Sadeghzadeh, H.; Ehyaei, M.A.; Rosen, M.A.
2015-01-01
Highlights: • Pressure drop and heat transfer coefficient are calculated by the Delaware method. • The Delaware method is more accurate than the Kern method. • The results of the PSO are better than the results of the GA. • The optimization yields the best and most economical design. - Abstract: The use of genetic and particle swarm algorithms in the design of techno-economically optimum shell-and-tube heat exchangers is demonstrated. A cost function (including costs of the heat exchanger based on surface area and power consumption to overcome pressure drops) is the objective function, which is to be minimized. Selected decision variables include tube diameter, central baffle spacing and shell diameter. The Delaware method is used to calculate the heat transfer coefficient and the shell-side pressure drop. The accuracy and efficiency of the suggested algorithm and the Delaware method are investigated. A comparison of the results obtained by the two algorithms shows that results obtained with the particle swarm optimization method are superior to those obtained with the genetic algorithm method. By comparing these results with those from various references employing the Kern method and other algorithms, it is shown that the Delaware method accompanied by genetic and particle swarm algorithms achieves better designs, based on assessments for two case studies
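The particle swarm update used in studies like this one has a compact generic form. The objective below is a stand-in quadratic bowl, not the Delaware-method cost function, and the inertia/acceleration constants are typical textbook values, not the paper's settings.

```python
import random

random.seed(1)

# Stand-in objective: a simple bowl with minimum at (1.5, 0.5). The paper's
# actual objective (area cost + pumping power) is far more involved.
def objective(x, y):
    return (x - 1.5) ** 2 + (y - 0.5) ** 2

N, ITERS = 20, 100
pos = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]
pbest = [p[:] for p in pos]                          # personal bests
gbest = min(pbest, key=lambda p: objective(*p))[:]   # global best

w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive and social weights
for _ in range(ITERS):
    for i in range(N):
        for d in range(2):
            # Velocity update: inertia + pull toward personal and global bests.
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(*pos[i]) < objective(*pbest[i]):
            pbest[i] = pos[i][:]
            if objective(*pbest[i]) < objective(*gbest):
                gbest = pbest[i][:]
```

After 100 iterations the swarm's global best sits essentially at the bowl's minimum; swapping in a real heat-exchanger cost model changes only `objective` and the bounds.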
A Genetic Algorithm That Exchanges Neighboring Centers for Fuzzy c-Means Clustering
Chahine, Firas Safwan
2012-01-01
Clustering algorithms are widely used in pattern recognition and data mining applications. Due to their computational efficiency, partitional clustering algorithms are better suited for applications with large datasets than hierarchical clustering algorithms. K-means is among the most popular partitional clustering algorithms, but has a major…
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming…
Bisimulation for Single-Agent Plausibility Models
DEFF Research Database (Denmark)
Andersen, Mikkel Birkegaard; Bolander, Thomas; van Ditmarsch, H.
2013-01-01
define a proper notion of bisimulation, and prove that bisimulation corresponds to logical equivalence on image-finite models. We relate our results to other epistemic notions, such as safe belief and degrees of belief. Our results imply that there are only finitely many non-bisimilar single-agent epistemic plausibility models on a finite set of propositions. This gives decidability for single-agent epistemic plausibility planning.
Directory of Open Access Journals (Sweden)
Xia Li
2018-01-01
Full Text Available Inspired by the basic theory of the Fruit Fly Optimization Algorithm, in this paper cat mapping was added to the original algorithm, and the individual distribution and evolution mechanism of the fruit fly population were improved in order to increase search speed and accuracy. A flowchart of the improved algorithm is given to show its procedure. Simulation results on classical test functions show that the improved algorithm has faster and more reliable optimization ability. The algorithm was then combined with sparse decomposition theory and applied to the processing of fouling-detection ultrasonic signals, verifying the validity and practicability of the improved algorithm.
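The chaotic map added to the fruit fly optimizer can be shown directly. Arnold's cat map is the standard choice behind "cat mapping" (the abstract does not spell out the exact variant used, so that is an assumption here); iterating it yields well-spread points in the unit square that can seed or perturb the swarm.

```python
# Arnold's cat map on the unit square: (x, y) -> (x + y, x + 2y) mod 1.
# Its chaotic, area-preserving dynamics spread points evenly, which is
# what the improved fruit-fly algorithm exploits for population diversity.
def cat_map(x, y):
    return (x + y) % 1.0, (x + 2.0 * y) % 1.0

# Generate a chaotic sequence usable to initialise a fruit-fly swarm.
def chaotic_sequence(x0, y0, n):
    pts, x, y = [], x0, y0
    for _ in range(n):
        x, y = cat_map(x, y)
        pts.append((x, y))
    return pts

swarm_seed = chaotic_sequence(0.1, 0.2, 50)
```

Each point stays inside the unit square, so rescaling a point to the search bounds gives a candidate fly position; successive points are deterministic yet decorrelate quickly.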
A replica exchange Monte Carlo algorithm for protein folding in the HP model
Directory of Open Access Journals (Sweden)
Shmygelska Alena
2007-09-01
Full Text Available Abstract Background The ab initio protein folding problem consists of predicting protein tertiary structure from a given amino acid sequence by minimizing an energy function; it is one of the most important and challenging problems in biochemistry, molecular biology and biophysics. The ab initio protein folding problem is computationally challenging and has been shown to be NP-hard even when conformations are restricted to a lattice. In this work, we implement and evaluate the replica exchange Monte Carlo (REMC) method, which has already been applied very successfully to more complex protein models and other optimization problems with complex energy landscapes, in combination with the highly effective pull move neighbourhood, in two widely studied Hydrophobic Polar (HP) lattice models. Results We demonstrate that REMC is highly effective for solving instances of the square (2D) and cubic (3D) HP protein folding problem. When using the pull move neighbourhood, REMC outperforms current state-of-the-art algorithms for most benchmark instances. Additionally, we show that this new algorithm provides a larger ensemble of ground-state structures than the existing state-of-the-art methods. Furthermore, it scales well with sequence length, and it finds significantly better conformations on long biological sequences and sequences with a provably unique ground-state structure, which is believed to be a characteristic of real proteins. We also present evidence that our REMC algorithm can relatively easily fold sequences which exhibit significant interaction between termini in the hydrophobic core. Conclusion We demonstrate that REMC utilizing the pull move…
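The replica-exchange core of REMC can be sketched independently of the HP model. Below, a toy one-dimensional double-well energy stands in for the lattice energy function and pull moves; the swap rule is the standard Metropolis criterion on the product of inverse-temperature and energy differences.

```python
import math
import random

random.seed(2)

# Toy energy landscape with minima at x = +1 and x = -1, standing in for the
# HP-model energy function (which is not reproduced here).
def energy(x):
    return (x ** 2 - 1.0) ** 2

def metropolis_step(x, T, step=0.3):
    xn = x + random.uniform(-step, step)
    dE = energy(xn) - energy(x)
    return xn if dE <= 0 or random.random() < math.exp(-dE / T) else x

temps = [0.05, 0.2, 0.8, 2.0]       # temperature ladder, cold to hot
replicas = [2.5] * len(temps)       # all replicas start far from the minima

for sweep in range(2000):
    # One Metropolis move per replica at its own temperature...
    replicas = [metropolis_step(x, T) for x, T in zip(replicas, temps)]
    # ...then one swap attempt between a random adjacent temperature pair.
    i = random.randrange(len(temps) - 1)
    delta = ((1 / temps[i] - 1 / temps[i + 1])
             * (energy(replicas[i]) - energy(replicas[i + 1])))
    if delta >= 0 or random.random() < math.exp(delta):
        replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]

coldest = replicas[0]   # configuration currently held at the lowest temperature
```

The hot replicas cross energy barriers freely and feed low-energy configurations down the ladder, so the coldest replica ends up near one of the two minima.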
International Nuclear Information System (INIS)
Gao Wa; Zha Fu-Sheng; Li Man-Tian; Song Bao-Yu
2014-01-01
This paper develops a fast filtering algorithm based on vibration systems theory and a neural information exchange approach. Its characteristics, including the derivation process and parameter analysis, are discussed, and its feasibility and effectiveness are verified by comparing its filtering performance with various filtering methods, such as the fast wavelet transform algorithm, the particle filtering method and our previously developed single-degree-of-freedom vibration system filtering algorithm, in both simulation and practical settings. The comparisons indicate that a significant advantage of the proposed fast filtering algorithm is its extremely fast filtering speed with good filtering performance. Further, the developed fast filtering algorithm is applied to the navigation and positioning system of a micro motion robot, where signal preprocessing has strict real-time requirements. The preprocessed data is then used to estimate the heading angle error and the attitude angle error of the micro motion robot. The estimation experiments illustrate the high practicality of the proposed fast filtering algorithm. (general)
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
An improved approach to exchange non-rectangular departments in CRAFT algorithm
Esmaeili Aliabadi, Danial; Pourghannad, Behrooz
2012-01-01
In this paper, an algorithm which improves the efficacy of the CRAFT algorithm is developed. CRAFT is an algorithm widely used to solve facility layout problems. Our proposed method, named Plasma, can be used to improve CRAFT results. Plasma is tested on several sample problems; the comparison between Plasma, classic CRAFT and Micro-CRAFT indicates that Plasma achieves cost reductions relative to both CRAFT and Micro-CRAFT.
Optimality and Plausibility in Language Design
Directory of Open Access Journals (Sweden)
Michael R. Levot
2016-12-01
Full Text Available The Minimalist Program in generative syntax has been the subject of much rancour, a good proportion of it stoked by Noam Chomsky’s suggestion that language may represent “a ‘perfect solution’ to minimal design specifications.” A particular flash point has been the application of Minimalist principles to speculations about how language evolved in the human species. This paper argues that Minimalism is well supported as a plausible approach to language evolution. It is claimed that an assumption of minimal design specifications like that employed in MP syntax satisfies three key desiderata of evolutionary and general scientific plausibility: Physical Optimism, Rational Optimism, and Darwin’s Problem. In support of this claim, the methodologies employed in MP to maximise parsimony are characterised through an analysis of recent theories in Minimalist syntax, and those methodologies are defended with reference to practices and arguments from evolutionary biology and other natural sciences.
Directory of Open Access Journals (Sweden)
Heidar Sadeghzadeh
2015-08-01
Full Text Available Heat transfer rate and cost significantly affect designs of shell-and-tube heat exchangers. From the viewpoint of engineering, an optimum design is obtained via maximum heat transfer rate and minimum cost. Here, an analysis of a radial, finned, shell-and-tube heat exchanger is carried out, considering nine design parameters: tube arrangement, tube diameter, tube pitch, tube length, number of tubes, fin height, fin thickness, baffle spacing ratio and number of fins per unit length of tube. The “Delaware modified” technique is used to determine heat transfer coefficients and the shell-side pressure drop. In this technique, the baffle cut is 20 percent and the baffle ratio limits range from 0.2 to 0.4. The optimization of the objective functions (maximum heat transfer rate and minimum total cost) is performed using a non-dominated sorting genetic algorithm (NSGA-II), and compared against a one-objective algorithm, to find the best solutions. The results are depicted as a set of solutions on a Pareto front, and show that the heat transfer rate ranges from 3517 to 7075 kW. Also, the minimum and maximum objective functions are specified, allowing the designer to select the best points among these solutions based on requirements. Additionally, variations of shell-side pressure drop with total cost are depicted, and indicate that the pressure drop ranges from 3.8 to 46.7 kPa.
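Because the two objectives conflict, NSGA-II returns a Pareto front rather than a single optimum. The non-dominated filtering at its heart can be sketched as follows; the (heat rate, cost) pairs are invented for illustration and are not the paper's results.

```python
# Non-dominated filtering for two objectives: maximise heat transfer rate,
# minimise total cost. Design a dominates b if it is at least as good on
# both objectives and differs from b.
def dominates(a, b):
    return a[0] >= b[0] and a[1] <= b[1] and a != b

def pareto_front(designs):
    return [d for d in designs
            if not any(dominates(other, d) for other in designs)]

# Illustrative (heat rate [kW], cost) pairs, not taken from the paper.
designs = [(3517, 10.0), (5000, 18.0), (5000, 25.0), (7075, 40.0), (4200, 30.0)]
front = pareto_front(designs)
# front -> [(3517, 10.0), (5000, 18.0), (7075, 40.0)]
```

The dominated points (5000, 25.0) and (4200, 30.0) drop out: each is matched or beaten on both objectives by (5000, 18.0). NSGA-II layers this filtering into ranked fronts and adds crowding-distance selection on top.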
International Nuclear Information System (INIS)
Wang, Zhe; Li, Yanzhong
2015-01-01
Highlights: • The first application of IMOCS for plate-fin heat exchanger design. • Irreversibility degrees of heat transfer and fluid friction are minimized. • Trade-off of efficiency, total cost and pumping power is achieved. • Both EGM and EDM methods have been compared in the optimization of PFHE. • This study has superiority over other single-objective optimization designs. - Abstract: This paper introduces and applies an improved multi-objective cuckoo search (IMOCS) algorithm, a novel meta-heuristic optimization algorithm based on cuckoo breeding behavior, for the multi-objective optimization design of plate-fin heat exchangers (PFHEs). A modified irreversibility degree of the PFHE is separated into heat transfer and fluid friction irreversibility degrees, which are adopted as two initial objective functions to be minimized simultaneously for narrowing the search scope of the design. The maximization of efficiency, minimization of pumping power, and total annual cost are considered as final objective functions. Results obtained from a two-dimensional normalized Pareto-optimal frontier clearly demonstrate the trade-off between heat transfer and fluid friction irreversibility. Moreover, a three-dimensional Pareto-optimal frontier reveals a relationship between efficiency, total annual cost, and pumping power in the PFHE design. Three examples presented here further demonstrate that the presented method is able to obtain optimum solutions with higher accuracy, lower irreversibility, and fewer iterations as compared to the previous methods and single-objective design approaches
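Cuckoo-search variants such as IMOCS generate new candidates by Lévy flights around existing nests. A single-objective toy sketch of that ingredient, using Mantegna's recipe for the Lévy step; the objective, constants and abandonment rule below are illustrative, not the PFHE model or the IMOCS multi-objective machinery.

```python
import math
import random

random.seed(4)

# Mantegna's algorithm for a Levy-stable step length with index beta.
def levy_step(beta=1.5):
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)   # heavy-tailed: mostly small, rarely huge

# Stand-in single objective with minimum at x = 2.
def objective(x):
    return (x - 2.0) ** 2

nests = [random.uniform(-10, 10) for _ in range(15)]
for _ in range(300):
    for k in range(len(nests)):
        trial = nests[k] + 0.1 * levy_step()   # Levy flight around nest k
        if objective(trial) < objective(nests[k]):
            nests[k] = trial                   # greedy replacement
    nests.sort(key=objective)
    # Abandon a fraction of the worst nests, as in standard cuckoo search.
    nests[-3:] = [random.uniform(-10, 10) for _ in range(3)]

best = min(nests, key=objective)
```

The heavy-tailed steps mix fine local refinement with occasional long jumps, which is what lets cuckoo search escape poor basins without a temperature schedule.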
Indian Academy of Sciences (India)
ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...
Indian Academy of Sciences (India)
algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).
Indian Academy of Sciences (India)
algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum are ... It can be shown that the function T(n) = 3n/2 - 2 is the solution to the above ...
Indian Academy of Sciences (India)
will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to B using C as auxiliary rod. • move_disk(A, C); (the (N0 + 1)th disk is moved from A to C directly ...
International Nuclear Information System (INIS)
Yang, Shipin; Chellali, Ryad; Lu, Xiaohua; Li, Lijuan; Bo, Cuimei
2016-01-01
Accurate models of PEM (proton exchange membrane) fuel cells are of great significance for the analysis and control of power generation. We present a new semi-empirical model to predict the voltage outputs of PEM fuel cell stacks. We also introduce a new estimation method, called AC-POA (aging and challenging P-systems based optimization algorithm), for deriving the parameters of the semi-empirical model. In our model, the cathode inlet pressure is selected as an additional factor to modify the expression of the concentration over-voltage V con in the traditional Amphlett PEM fuel cell model. In AC-POA, an aging-mechanism-inspired object updating rule is merged into the existing P system. We validate through experiments the effectiveness of AC-POA and the fitting accuracy of our model. Modeling comparison results show that the predictions of our model fit the actual sample data best. - Highlights: • Presented a p c -based modified semi-empirical model for the PEMFC stack. • Introduced a new aging-inspired improved parameter estimation algorithm, AC-POA. • Validated the effectiveness of AC-POA and the new model. • Remodeled the practical PEM fuel cell system.
International Nuclear Information System (INIS)
Ghazi, M.; Ahmadi, P.; Sotoodeh, A.F.; Taherkhani, A.
2012-01-01
Highlights: ► Comprehensive thermodynamic modeling of a dual pressure HRSG with duct burners. ► Thermoeconomic performance assessment of the system. ► Finding the best design parameters of the HRSG using a genetic algorithm. - Abstract: In the present study a comprehensive thermodynamic modeling of a dual pressure combined cycle power plant is performed, and an optimization study to find the best design parameters is carried out. Total cost per unit of produced steam exergy is defined as the objective function. The objective function includes capital or investment cost, operational and maintenance cost, and the corresponding cost of the exergy destruction. This objective function is minimized while satisfying a group of constraints. For this study, the design variables are the high and low drum pressures, steam mass flow rates, pinch point temperature differences and the duct burner fuel consumption flow rate. The variations of design parameters with the inlet hot gas enthalpy and exergy unit price are also shown. Finally, a sensitivity analysis of the change in design parameters with change in fuel and investment cost is performed. The results show that with increasing exergy unit cost, the optimum values of the design parameters are selected such that the objective function decreases. Furthermore, it is found that at higher inlet gas enthalpy, the required heat transfer surface area (and its corresponding capital cost) increases
Neural networks, nativism, and the plausibility of constructivism.
Quartz, S R
1993-09-01
Recent interest in PDP (parallel distributed processing) models is due in part to the widely held belief that they challenge many of the assumptions of classical cognitive science. In the domain of language acquisition, for example, there has been much interest in the claim that PDP models might undermine nativism. Related arguments based on PDP learning have also been given against Fodor's anti-constructivist position--a position that has contributed to the widespread dismissal of constructivism. A limitation of many of the claims regarding PDP learning, however, is that the principles underlying this learning have not been rigorously characterized. In this paper, I examine PDP models from within the framework of Valiant's PAC (probably approximately correct) model of learning, now the dominant model in machine learning, and which applies naturally to neural network learning. From this perspective, I evaluate the implications of PDP models for nativism and Fodor's influential anti-constructivist position. In particular, I demonstrate that, contrary to a number of claims, PDP models are nativist in a robust sense. I also demonstrate that PDP models actually serve as a good illustration of Fodor's anti-constructivist position. While these results may at first suggest that neural network models in general are incapable of the sort of concept acquisition that is required to refute Fodor's anti-constructivist position, I suggest that there is an alternative form of neural network learning that demonstrates the plausibility of constructivism. This alternative form of learning is a natural interpretation of the constructivist position in terms of neural network learning, as it employs learning algorithms that incorporate the addition of structure in addition to weight modification schemes. By demonstrating that there is a natural and plausible interpretation of constructivism in terms of neural network learning, the position that nativism is the only plausible model of
International Nuclear Information System (INIS)
Piepel, Gregory F.; Cooley, Scott K.; Jones, Bradley
2005-01-01
This paper describes the solution to a unique and challenging mixture experiment design problem involving: (1) 19 and 21 components for two different parts of the design, (2) many single-component and multi-component constraints, (3) augmentation of existing data, (4) a layered design developed in stages, and (5) a no-candidate-point optimal design approach. The problem involved studying the liquidus temperature of spinel crystals as a function of nuclear waste glass composition. The statistical objective was to develop an experimental design by augmenting existing glasses with new nonradioactive and radioactive glasses chosen to cover the designated nonradioactive and radioactive experimental regions. The existing 144 glasses were expressed as 19-component nonradioactive compositions and then augmented with 40 new nonradioactive glasses. These included 8 glasses on the outer layer of the region, 27 glasses on an inner layer, 2 replicate glasses at the centroid, and one replicate each of three existing glasses. Then, the 144 + 40 = 184 glasses were expressed as 21-component radioactive compositions, and augmented with 5 radioactive glasses. A D-optimal design algorithm was used to select the new outer layer, inner layer, and radioactive glasses. Several statistical software packages can generate D-optimal experimental designs, but nearly all of them require a set of candidate points (e.g., vertices) from which to select design points. The large number of components (19 or 21) and many constraints made it impossible to generate the huge number of vertices and other typical candidate points. JMP was used to select design points without candidate points. JMP uses a coordinate-exchange algorithm modified for mixture experiments, which is discussed and illustrated in the paper
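JMP's mixture-capable coordinate-exchange algorithm itself is not reproduced here, but the generic idea, improving one coordinate of one design point at a time so that the D-criterion det(X'X) grows, with no candidate-point list at all, can be sketched for a small first-order model. The two-factor model, grid of levels, and run count below are invented for the sketch; the glass study's 19- and 21-component mixture constraints are omitted.

```python
import random

random.seed(3)

# Determinant of a 3x3 matrix (the information matrix of a first-order
# model with intercept and two factors).
def det3(m):
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

# X'X for model y ~ 1 + x + y over the current design points.
def xtx(design):
    rows = [[1.0, x, y] for x, y in design]
    return [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]

levels = [i / 10 for i in range(11)]                 # allowed coordinate levels
design = [(random.choice(levels), random.choice(levels)) for _ in range(6)]
start_det = det3(xtx(design))

for _ in range(20):                                  # sweeps over the design
    for p in range(len(design)):
        for coord in range(2):
            best_v = design[p][coord]
            best_d = det3(xtx(design))
            for v in levels:                         # try every level for this
                pt = list(design[p])                 # single coordinate only
                pt[coord] = v
                trial = design[:p] + [tuple(pt)] + design[p + 1:]
                d = det3(xtx(trial))
                if d > best_d:
                    best_v, best_d = v, d
            pt = list(design[p])
            pt[coord] = best_v
            design[p] = tuple(pt)                    # keep the best level found

final_det = det3(xtx(design))
```

Each coordinate update keeps the change only if the determinant strictly improves, so the criterion is monotone non-decreasing; because no candidate set is ever enumerated, the approach scales to the high-dimensional constrained regions described in the abstract.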
Plausibility and evidence: the case of homeopathy.
Rutten, Lex; Mathie, Robert T; Fisher, Peter; Goossens, Maria; van Wassenhoven, Michel
2013-08-01
Homeopathy is controversial and hotly debated. The conclusions of systematic reviews of randomised controlled trials of homeopathy vary from 'comparable to conventional medicine' to 'no evidence of effects beyond placebo'. It is claimed that homeopathy conflicts with scientific laws and that homoeopaths reject the naturalistic outlook, but no evidence has been cited. We are homeopathic physicians and researchers who do not reject the scientific outlook; we believe that examination of the prior beliefs underlying this enduring stand-off can advance the debate. We show that interpretations of the same set of evidence--for homeopathy and for conventional medicine--can diverge. Prior disbelief in homeopathy is rooted in the perceived implausibility of any conceivable mechanism of action. Using the 'crossword analogy', we demonstrate that plausibility bias impedes assessment of the clinical evidence. Sweeping statements about the scientific impossibility of homeopathy are themselves unscientific: scientific statements must be precise and testable. There is growing evidence that homeopathic preparations can exert biological effects; due consideration of such research would reduce the influence of prior beliefs on the assessment of systematic review evidence.
Speech recognition employing biologically plausible receptive fields
DEFF Research Database (Denmark)
Fereczkowski, Michal; Bothe, Hans-Heinrich
2011-01-01
spectro-temporal receptive fields to auditory spectrogram input, motivated by the auditory pathway of humans, and ii) the adaptation or learning algorithms involved are biologically inspired. This is in contrast to state-of-the-art combinations of Mel-frequency cepstral coefficients and Hidden Markov...
Energy Technology Data Exchange (ETDEWEB)
Kim, Tae-Hoon; Kim, Sang-Hyun; Kim, Wook; Lee, Jong-Hak; Cho, Kwan-Seok; Choi, Woojin [Department of Electrical Engineering, Soongsil University, 1-1 Sangdo-dong, Dongjak-gu, Seoul 156-743 (Korea); Park, Kyung-Won [Department of Chemical/Environmental Engineering, Soongsil University, 1-1 Sangdo-dong, Dongjak-gu, Seoul 156-743 (Korea)
2010-09-15
Small PEM (proton exchange membrane) fuel cell systems do not require humidification and have great commercialization possibilities. However, methods for controlling small PEM fuel cell stacks have not been clearly established. In this paper, a control method for small PEM fuel cell systems using a dual closed loop with a static feed-forward structure is defined and realized using a microcontroller. The fundamental elements that need to be controlled in fuel cell systems include the supply of air and hydrogen, water management inside the stack, and heat management of the stack. For small PEM fuel cell stacks operated without a separate humidifier, fans are essential for air supply, heat management, and water management of the stack. A purge valve discharges surplus water from the stack. The proposed method controls the fan using a dual closed loop with a static feed-forward structure, thereby improving system efficiency and operation stability. The validity of the proposed method is confirmed by experiments using a 150-W PEM fuel cell stack. We expect the proposed algorithm to be widely used for controlling small PEM fuel cell stacks. (author)
Analytic models of plausible gravitational lens potentials
International Nuclear Information System (INIS)
Baltz, Edward A.; Marshall, Phil; Oguri, Masamune
2009-01-01
Gravitational lenses on galaxy scales are plausibly modelled as having ellipsoidal symmetry and a universal dark matter density profile, with a Sérsic profile to describe the distribution of baryonic matter. Predicting all lensing effects requires knowledge of the total lens potential: in this work we give analytic forms for that of the above hybrid model. Emphasising that complex lens potentials can be constructed from simpler components in linear combination, we provide a recipe for attaining elliptical symmetry in either projected mass or lens potential. We also provide analytic formulae for the lens potentials of Sérsic profiles for integer and half-integer index. We then present formulae describing the gravitational lensing effects due to smoothly-truncated universal density profiles in the cold dark matter model. For our isolated haloes the density profile falls off as radius to the minus fifth or seventh power beyond the tidal radius, functional forms that allow all orders of lens potential derivatives to be calculated analytically, while ensuring a non-divergent total mass. We show how the observables predicted by this profile differ from those of the original infinite-mass NFW profile. Expressions for the gravitational flexion are highlighted. We show how decreasing the tidal radius allows stripped haloes to be modelled, providing a framework for a fuller investigation of dark matter substructure in galaxies and clusters. Finally we remark on the need for finite mass halo profiles when doing cosmological ray-tracing simulations, and the need for readily-calculable higher order derivatives of the lens potential when studying catastrophes in strong lenses
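For orientation, a smoothly truncated NFW-type profile of the kind described can be written schematically as below; the symbols (scale density and radius, tidal radius, truncation index) and the normalisation are assumptions of this sketch, and the paper's exact form may differ.

```latex
% Generic smoothly truncated NFW-type density; rho_s, r_s are the NFW scale
% density and radius, r_t the tidal radius, n the truncation index.
% n = 1 gives the r^{-5} falloff mentioned in the abstract, n = 2 the r^{-7}.
\rho(r) = \frac{\rho_s}{\frac{r}{r_s}\left(1+\frac{r}{r_s}\right)^{2}}
          \left(\frac{r_t^{2}}{r^{2}+r_t^{2}}\right)^{n},
\qquad
\rho(r) \propto r^{-3-2n} \quad \text{for } r \gg r_t .
```

Since the untruncated NFW density already falls as r^{-3} at large radius, multiplying by the truncation factor steepens it by 2n powers, which is exactly why the total mass converges while the inner lensing behaviour is preserved.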
Toward Petascale Biologically Plausible Neural Networks
Long, Lyle
This talk will describe an approach to achieving petascale neural networks. Artificial intelligence has been oversold for many decades. Computers in the beginning could only do about 16,000 operations per second. Computer processing power, however, has been doubling every two years thanks to Moore's law, and growing even faster due to massively parallel architectures. Finally, 60 years after the first AI conference we have computers on the order of the performance of the human brain (10^16 operations per second). The main issues now are algorithms, software, and learning. We have excellent models of neurons, such as the Hodgkin-Huxley model, but we do not know how the human neurons are wired together. With careful attention to efficient parallel computing, event-driven programming, table lookups, and memory minimization, massive scale simulations can be performed. The code that will be described was written in C++ and uses the Message Passing Interface (MPI). It uses the full Hodgkin-Huxley neuron model, not a simplified model. It also allows arbitrary network structures (deep, recurrent, convolutional, all-to-all, etc.). The code is scalable, and has, so far, been tested on up to 2,048 processor cores using 10^7 neurons and 10^9 synapses.
International Nuclear Information System (INIS)
Neese, Frank; Wennmohs, Frank; Hansen, Andreas; Becker, Ute
2009-01-01
In this paper, the possibility is explored to speed up Hartree-Fock and hybrid density functional calculations by forming the Coulomb and exchange parts of the Fock matrix by different approximations. For the Coulomb part the previously introduced Split-RI-J variant (F. Neese, J. Comput. Chem. 24 (2003) 1740) of the well-known 'density fitting' approximation is used. The exchange part is formed by semi-numerical integration techniques that are closely related to Friesner's pioneering pseudo-spectral approach. Our potentially linear scaling realization of this algorithm is called the 'chain-of-spheres exchange' (COSX). A combination of semi-numerical integration and density fitting is also proposed. Both Split-RI-J and COSX scale very well with the highest angular momentum in the basis sets. It is shown that for extended basis sets speed-ups of up to two orders of magnitude compared to traditional implementations can be obtained in this way. Total energies are reproduced with an average error of <0.3 kcal/mol as determined from extended test calculations with various basis sets on a set of 26 molecules with 20-200 atoms and up to 2000 basis functions. Reaction energies agree to within 0.2 kcal/mol (Hartree-Fock) or 0.05 kcal/mol (hybrid DFT) with the canonical values. The COSX algorithm parallelizes with a speedup of 8.6 observed for 10 processes. Minimum energy geometries differ by less than 0.3 pm in the bond distances and 0.5 deg. in the bond angles from their canonical values. These developments enable highly efficient and accurate self-consistent field calculations including nonlocal Hartree-Fock exchange for large molecules. In combination with the RI-MP2 method and large basis sets, second-order many body perturbation energies can be obtained for medium sized molecules with unprecedented efficiency. The algorithms are implemented into the ORCA electronic structure system
International Nuclear Information System (INIS)
Feng, Hongcui; Zhong, Wei; Wu, Yanling; Tong, Shuiguang
2014-01-01
Highlights: • A general model of multi-pressure HRSG based on heat-exchanger layout is built. • The minimum temperature difference is introduced to replace pinch-point analysis. • Effects of layout on dual-pressure HRSG thermodynamic performance are analyzed. - Abstract: Changes in the heat-exchanger layout of a heat recovery steam generator (HRSG) modify the amount of waste heat recovered from the flue gas; this motivates optimization of the HRSG design. In this paper a model of a multi-pressure HRSG is built, and an instance of a dual-pressure HRSG under three different layouts from Taihu Boiler Co., Ltd. is discussed. With specified values of the flue gas inlet temperature, mass flow rate and composition, and water/steam parameters such as temperature and pressure, the steam mass flow rate and heat efficiency of the different heat-exchanger layouts of the HRSG are analyzed. The analysis is based on the laws of thermodynamics and incorporated into the energy balance equations for the heat exchangers. In conclusion, the steam mass flow rates and heat efficiencies obtained for the three heat-exchanger layouts of the HRSGs are compared. The results show that optimization of the heat-exchanger layout of HRSGs is of great significance for waste heat recovery and energy conservation
Application of plausible reasoning to AI-based control systems
Berenji, Hamid; Lum, Henry, Jr.
1987-01-01
Some current approaches to plausible reasoning in artificial intelligence are reviewed and discussed. Some of the most significant recent advances in plausible and approximate reasoning are examined. A synergism among the techniques of uncertainty management is advocated, and brief discussions on the certainty factor approach, probabilistic approach, Dempster-Shafer theory of evidence, possibility theory, linguistic variables, and fuzzy control are presented. Some extensions to these methods are described, and the applications of the methods are considered.
Pilgrims sailing the Titanic: plausibility effects on memory for misinformation.
Hinze, Scott R; Slaten, Daniel G; Horton, William S; Jenkins, Ryan; Rapp, David N
2014-02-01
People rely on information they read even when it is inaccurate (Marsh, Meade, & Roediger, Journal of Memory and Language 49:519-536, 2003), but how ubiquitous is this phenomenon? In two experiments, we investigated whether this tendency to encode and rely on inaccuracies from text might be influenced by the plausibility of misinformation. In Experiment 1, we presented stories containing inaccurate plausible statements (e.g., "The Pilgrims' ship was the Godspeed"), inaccurate implausible statements (e.g., . . . the Titanic), or accurate statements (e.g., . . . the Mayflower). On a subsequent test of general knowledge, participants relied significantly less on implausible than on plausible inaccuracies from the texts but continued to rely on accurate information. In Experiment 2, we replicated these results with the addition of a think-aloud procedure to elicit information about readers' noticing and evaluative processes for plausible and implausible misinformation. Participants indicated more skepticism and less acceptance of implausible than of plausible inaccuracies. In contrast, they often failed to notice, completely ignored, and at times even explicitly accepted the misinformation provided by plausible lures. These results offer insight into the conditions under which reliance on inaccurate information occurs and suggest potential mechanisms that may underlie reported misinformation effects.
International Nuclear Information System (INIS)
Kaur, Rajvir; Krishnasamy, Vijayakumar; Muthusamy, Kaleeswari; Chinnamuthan, Periasamy
2017-01-01
Highlights: • A proton exchange membrane fuel cell based telecom tower supply is proposed. • The use of a diesel generator is eliminated and the battery size is reduced. • A boost converter based intelligent interfacing unit is implemented. • A genetic algorithm assisted controller is proposed for effective interfacing. • The controller is robust against input and output disturbances. - Abstract: This paper presents a simple fuel cell based electric energy conversion system for supplying telecommunication towers, intended to reduce the operation and maintenance costs of telecom companies. The telecom industry is booming and is penetrating deep into remote rural areas with unreliable or no grid supply. It has become heavily dependent on diesel generator sets and battery banks as backup for continuously supplying the base transceiver stations of telecom towers. This excessive use of backup supply has increased operational expenditure, made the power supply unreliable, and become a threat to the environment. Given the significant development of, and interest in, clean energy sources, a proton exchange membrane fuel cell based supply for the base transceiver station is proposed, together with an intelligent interfacing unit. The required battery bank capacity is significantly reduced compared with earlier solutions. Further, a simple closed-loop, genetic algorithm assisted controller is proposed for the intelligent interfacing unit, which consists of a power electronic boost converter for power conditioning. The proposed genetic algorithm assisted controller ensures tight voltage regulation at the DC distribution bus of the base transceiver station. It also provides robust performance of the base transceiver station under telecom load variation and proton exchange membrane fuel cell output voltage fluctuations. The complete electric energy conversion system along with telecom loads is simulated in the MATLAB/Simulink platform and
Directory of Open Access Journals (Sweden)
Nishaal J. Parmar
2017-01-01
Full Text Available This paper presents a comparative evaluation of possible encryption algorithms for use in a self-contained, ultra-secure router-to-router communication system, first proposed by El Rifai and Verma. The original proposal utilizes a discrete logarithm-based encryption solution, which is compared in this paper to the RSA, AES, and ECC encryption algorithms. RSA certificates are widely used within the industry but require a trusted key generation and distribution architecture. AES and ECC provide advantages in key length, processing requirements, and storage space, while maintaining an arbitrarily high level of security. This paper modifies each of the four algorithms for use within the self-contained router-to-router environment and then compares them in terms of features offered, storage space and data transmission needed, encryption/decryption efficiency, and key generation requirements.
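The discrete-logarithm approach underlying the original proposal can be illustrated with a toy Diffie-Hellman style exchange. This is not El Rifai and Verma's actual protocol, and the 64-bit prime and generator below are illustrative stand-ins, far too small for real security (a real system would use a vetted 2048-bit-plus group or an elliptic-curve library):

```python
import secrets

# Toy discrete-log key agreement between two routers. Security rests on the
# hardness of recovering a from g^a mod p (the discrete logarithm problem).
p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a prime; illustrative only, NOT secure
g = 5                    # generator (illustrative)

a = secrets.randbelow(p - 2) + 1     # router A's secret exponent
b = secrets.randbelow(p - 2) + 1     # router B's secret exponent

A = pow(g, a, p)                     # sent A -> B over the open channel
B = pow(g, b, p)                     # sent B -> A over the open channel

shared_a = pow(B, a, p)              # computed privately by A
shared_b = pow(A, b, p)              # computed privately by B
assert shared_a == shared_b          # both ends derive the same secret
```

The shared value would then feed a key-derivation function to produce the session key, which is where symmetric algorithms such as AES take over.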
A Stochastic Model of Plausibility in Live Virtual Constructive Environments
2017-09-14
Dissertation by Jeremy R. Millar (Air Force Institute of Technology; available at https://scholar.afit.edu/etd, Computer Sciences Commons). The work concerns model parameters that are inputs to a computer (mathematical) model but whose exact values are unknown to experimentalists, and includes methods for computing plausibility exceedance probabilities.
Endocrine disrupting chemicals and human health: The plausibility ...
African Journals Online (AJOL)
The plausibility of research results on DDT and reproductive health ... chemicals in the environment and that human health is inextricably linked to the health of .... periods of folliculogenesis or embryogenesis that increases risk for adverse effects.
International Nuclear Information System (INIS)
Nitej, N.V.; Sharovarov, G.A.
1982-01-01
A method for estimating the characteristics of counterflow heat exchangers is presented. The mathematical description of the processes consists of the mass, energy and momentum conservation equations for both coolants, and the energy conservation equation for the wall that divides them. In the presence of chemical reactions the system is supplemented by equations characterizing their kinetics. Numerical solution methods for the static and dynamic problems have been chosen, and computer programs in the Fortran language have been developed. The solution schemes for both problems are constructed so that the conservation equations are placed in the main program, while coolant characteristics such as properties, heat transfer and friction coefficients, and the chemical reaction mechanism are concentrated in a subprogram unit. This makes it possible to create a single solution method for flows of single-phase and two-phase coolants at above-critical and supercritical parameters. Evaluation results for three heat exchangers are given: one heating the N2O4 gas phase with flue gas heat; one cooling N2O4 at supercritical parameters with water; and a regenerator on N2O4
Probabilistic reasoning in intelligent systems networks of plausible inference
Pearl, Judea
1988-01-01
Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty--and offers techniques, based on belief networks, that provid
Generation of Plausible Hurricane Tracks for Preparedness Exercises
2017-04-25
product kernel. KDE with a beta kernel generates maximum sustained winds, and linear regression simulates minimum central pressure. Maximum significant...the Storm level models the number of waypoints M, birth and death locations w1 and wM, and total number of steps L. The Stage level models the...MATLAB and leverages HURDAT2 to construct data-driven statistical models that can generate plausible yet never-before-seen storm behaviors. For a
Credibility judgments of narratives: language, plausibility, and absorption.
Nahari, Galit; Glicksohn, Joseph; Nachson, Israel
2010-01-01
Two experiments were conducted in order to find out whether textual features of narratives differentially affect credibility judgments made by judges having different levels of absorption (a disposition associated with rich visual imagination). Participants in both experiments were exposed to a textual narrative and requested to judge whether the narrator actually experienced the event he described in his story. In Experiment 1, the narrative varied in terms of language (literal, figurative) and plausibility (ordinary, anomalous). In Experiment 2, the narrative varied in terms of language only. The participants' perceptions of the plausibility of the story described and the extent to which they were absorbed in reading were measured. The data from both experiments together suggest that the groups applied entirely different criteria in credibility judgments. For high-absorption individuals, their credibility judgment depends on the degree to which the text can be assimilated into their own vivid imagination, whereas for low-absorption individuals it depends mainly on plausibility. That is, high-absorption individuals applied an experiential mental set while judging the credibility of the narrator, whereas low-absorption individuals applied an instrumental mental set. Possible cognitive mechanisms and implications for credibility judgments are discussed.
The ethical plausibility of the 'Right To Try' laws.
Carrieri, D; Peccatori, F A; Boniolo, G
2018-02-01
'Right To Try' (RTT) laws originated in the USA to allow terminally ill patients to request access to early stage experimental medical products directly from the producer, removing the oversight and approval of the Food and Drug Administration. These laws have received significant media attention and almost equally unanimous criticism by the bioethics, clinical and scientific communities. They touch indeed on complex issues such as the conflict between individual and public interest, and the public understanding of medical research and its regulation. The increased awareness around RTT laws means that healthcare providers directly involved in the management of patients with life-threatening conditions such as cancer, infective, or neurologic conditions will deal more frequently with patients' requests of access to experimental medical products. This paper aims to assess the ethical plausibility of the RTT laws, and to suggest some possible ethical tools and considerations to address the main issues they touch. Copyright © 2017 Elsevier B.V. All rights reserved.
On the biological plausibility of Wind Turbine Syndrome.
Harrison, Robert V
2015-01-01
An emerging environmental health issue relates to potential ill-effects of wind turbine noise. There have been numerous suggestions that the low-frequency acoustic components in wind turbine signals can cause symptoms associated with vestibular system disorders, namely vertigo, nausea, and nystagmus. This constellation of symptoms has been labeled as Wind Turbine Syndrome, and has been identified in case studies of individuals living close to wind farms. This review discusses whether it is biologically plausible for the turbine noise to stimulate the vestibular parts of the inner ear and, by extension, cause Wind Turbine Syndrome. We consider the sound levels that can activate the semicircular canals or otolith end organs in normal subjects, as well as in those with preexisting conditions known to lower vestibular threshold to sound stimulation.
Plausible scenarios for the radiography profession in Sweden in 2025
International Nuclear Information System (INIS)
Björkman, B.; Fridell, K.; Tavakol Olofsson, P.
2017-01-01
Introduction: Radiography is a healthcare speciality with many technical challenges. Advances in engineering and information technology applications may continue to drive and be driven by radiographers. The world of diagnostic imaging is changing rapidly and radiographers must be proactive in order to survive. To ensure sustainable development, organisations have to identify future opportunities and threats in a timely manner and incorporate them into their strategic planning. Hence, the aim of this study was to analyse and describe plausible scenarios for the radiography profession in 2025. Method: The study has a qualitative design with an inductive approach based on focus group interviews. The interviews were inspired by the Scenario-Planning method. Results: Of the seven trends identified in a previous study, the radiographers considered two as the most uncertain scenarios that would have the greatest impact on the profession should they occur. These trends, labelled “Access to career advancement” and “A sufficient number of radiographers”, were inserted into the scenario cross. The resulting four plausible future scenarios were: The happy radiographer, the specialist radiographer, the dying profession and the assembly line. Conclusion: It is suggested that “The dying profession” scenario could probably be turned in the opposite direction by facilitating career development opportunities for radiographers within the profession. Changing the direction would probably lead to a profession composed of “happy radiographers” who are specialists, proud of their profession and competent to carry out advanced tasks, in contrast to being solely occupied by “the assembly line”. - Highlights: • The world of radiography is changing rapidly and radiographers must be proactive in order to survive. • Future opportunities and threats should be identified and incorporated into the strategic planning. • Appropriate actions can probably change the
Mezbahuddin, Mohammad; Grant, Robert F.; Flanagan, Lawrence B.
2017-12-01
Water table depth (WTD) effects on net ecosystem CO2 exchange of boreal peatlands are largely mediated by hydrological effects on peat biogeochemistry and the ecophysiology of peatland vegetation. The lack of representation of these effects in carbon models currently limits our predictive capacity for changes in boreal peatland carbon deposits under potential future drier and warmer climates. We examined whether a process-level coupling of a prognostic WTD with (1) oxygen transport, which controls energy yields from microbial and root oxidation-reduction reactions, and (2) vascular and nonvascular plant water relations could explain mechanisms that control variations in net CO2 exchange of a boreal fen under contrasting WTD conditions, i.e., shallow vs. deep WTD. Such coupling of eco-hydrology and biogeochemistry algorithms in a process-based ecosystem model, ecosys, was tested against net ecosystem CO2 exchange measurements in a western Canadian boreal fen peatland over a period of drier-weather-driven gradual WTD drawdown. A May-October WTD drawdown of ˜ 0.25 m from 2004 to 2009 hastened oxygen transport to microbial and root surfaces, enabling greater microbial and root energy yields and peat and litter decomposition, which raised modeled ecosystem respiration (Re) by 0.26 µmol CO2 m-2 s-1 per 0.1 m of WTD drawdown. It also augmented nutrient mineralization, and hence root nutrient availability and uptake, which resulted in improved leaf nutrient (nitrogen) status that facilitated carboxylation and raised modeled vascular gross primary productivity (GPP) and plant growth. The increase in modeled vascular GPP exceeded declines in modeled nonvascular (moss) GPP due to greater shading from increased vascular plant growth and moss drying from near-surface peat desiccation, thereby causing a net increase in modeled growing season GPP by 0.39 µmol CO2 m-2 s-1 per 0.1 m of WTD drawdown. Similar increases in GPP and Re caused no significant WTD effects on modeled
Plausible inference: A multi-valued logic for problem solving
Friedman, L.
1979-01-01
A new logic is developed which permits continuously variable strength of belief in the truth of assertions. Four inference rules result, with formal logic as a limiting case. Quantification of belief is defined. Propagation of belief to linked assertions results from dependency-based techniques of truth maintenance, so that local consistency is achieved or contradiction discovered in problem solving. Rules for combining, confirming, or disconfirming beliefs are given, and several heuristics are suggested that apply to revising already formed beliefs in the light of new evidence. The strength of belief that results from such revisions based on conflicting evidence is a highly subjective phenomenon. Certain quantification rules appear to reflect an orderliness in the subjectivity. Several examples of reasoning by plausible inference are given, including a legal example and one from robot learning. Propagation of belief takes place in directions forbidden in formal logic, so that conclusions become possible for a given set of assertions that are not reachable by formal logic.
Preventive leadership for the university: a plausible experience
Directory of Open Access Journals (Sweden)
Alejandro Rodríguez Rodríguez
2015-06-01
Full Text Available Leadership development in higher education seeks immediately applicable solutions for the contexts in which every leader operates, but the theoretical-practical grounding of leader formation, which would make it possible to understand the intellective processes at work during decision making, is diluted. The paradigm of convergence between the Lonerganian anthropological method, the Vygotskian learning community, and a rereading of the Salesian preventive system is presented as a plausible proposal for formation in preventive leadership among the various actors of a university community. A case study of the Salesian University in Mexico, using a mixed research method, facilitates a rereading of leadership from a preventive perspective as a possibility of convergence in interdisciplinary dialogue. The theoretical-practical results proposed and examined prove a useful tool for evaluating, enriching and renewing theory about the leader and leadership development in universities facing a globalized society.
Structure before meaning: sentence processing, plausibility, and subcategorization.
Kizach, Johannes; Nyvad, Anne Mette; Christensen, Ken Ramshøj
2013-01-01
Natural language processing is a fast and automatized process. A crucial part of this process is parsing, the online incremental construction of a syntactic structure. The aim of this study was to test whether a wh-filler extracted from an embedded clause is initially attached as the object of the matrix verb with subsequent reanalysis, and if so, whether the plausibility of such an attachment has an effect on reaction time. Finally, we wanted to examine whether subcategorization plays a role. We used a method called G-Maze to measure response time in a self-paced reading design. The experiments confirmed that there is early attachment of fillers to the matrix verb. When this attachment is implausible, the off-line acceptability of the whole sentence is significantly reduced. The on-line results showed that G-Maze was highly suited for this type of experiment. In accordance with our predictions, the results suggest that the parser ignores (or has no access to information about) implausibility and attaches fillers as soon as possible to the matrix verb. However, the results also show that the parser uses the subcategorization frame of the matrix verb. In short, the parser ignores semantic information and allows implausible attachments but adheres to information about which type of object a verb can take, ensuring that the parser does not make impossible attachments. We argue that the evidence supports a syntactic parser informed by syntactic cues, rather than one guided by semantic cues or one that is blind, or completely autonomous.
Directory of Open Access Journals (Sweden)
Mohammed Essaid Riffi
2017-11-01
Full Text Available The bat algorithm is one of the recent nature-inspired algorithms, which has emerged as a powerful search method for solving continuous as well as discrete problems. The quadratic assignment problem is a well-known NP-hard problem in combinatorial optimization. The goal of this problem is to assign n facilities to n locations in such a way as to minimize the assignment cost. For that purpose, this paper introduces a novel discrete variant of the bat algorithm to deal with this combinatorial optimization problem. The proposed algorithm was evaluated on a set of benchmark instances from the QAPLIB library and its performance was compared to other algorithms. The empirical results of exhaustive experiments were promising and illustrated the efficacy of the suggested approach.
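The QAP objective being minimized is easy to state in code. The sketch below evaluates that cost and runs a plain pairwise-swap descent on a made-up 3x3 instance; the paper's bat algorithm layers an echolocation-inspired metaheuristic on top of moves like these, so this is only the underlying problem, not the bat algorithm itself:

```python
import itertools
import random

def qap_cost(perm, flow, dist):
    # Quadratic assignment cost: facility i sits at location perm[i], and the
    # objective sums flow[i][j] * dist[perm[i]][perm[j]] over all pairs.
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def swap_descent(flow, dist, iters=2000, seed=0):
    # Simple random pairwise-swap descent: keep a swap if it does not worsen
    # the cost, otherwise undo it.
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    rng.shuffle(perm)
    best = qap_cost(perm, flow, dist)
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]
        cost = qap_cost(perm, flow, dist)
        if cost <= best:
            best = cost
        else:
            perm[i], perm[j] = perm[j], perm[i]   # undo worsening swap
    return perm, best

# Made-up symmetric instance (not from QAPLIB).
flow = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]
dist = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]
perm, best = swap_descent(flow, dist)

# Exhaustive check is cheap at n = 3.
optimum = min(qap_cost(list(p), flow, dist)
              for p in itertools.permutations(range(3)))
assert best == optimum
```

Real QAPLIB instances (n up to the hundreds) make exhaustive search impossible, which is why metaheuristics such as the discrete bat algorithm are used.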
Compressed sensing along physically plausible sampling trajectories in MRI
International Nuclear Information System (INIS)
Chauffert, Nicolas
2015-01-01
First, we propose continuous sampling schemes based on random walks and on the travelling salesman problem (TSP). Then, we propose a projection algorithm onto the space of constraints that returns the closest feasible curve to an input curve (e.g., a TSP solution). Finally, we provide an algorithm to project a measure onto a set of measures carried by parameterizations. In particular, if this set is the one carried by admissible curves, the algorithm returns a curve whose sampling density is close to the measure to project. This yields an admissible variable density sampler. The reconstruction results obtained in simulations using this strategy outperform existing acquisition trajectories (spiral, radial) by about 3 dB. They make it possible to envision implementation on a real 7 T scanner soon, notably in the context of high resolution anatomical imaging. (author)
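The TSP-based idea, ordering sampled locations into one continuous trajectory, can be sketched with a nearest-neighbour tour standing in for a real TSP solver. The Gaussian point cloud below is a hypothetical variable-density sample, dense near the centre as k-space sampling densities typically are:

```python
import math
import random

random.seed(3)
# Hypothetical variable-density sample locations: dense near the origin
# (the "k-space centre"), sparse further out.
points = [(random.gauss(0.0, 0.3), random.gauss(0.0, 0.3)) for _ in range(200)]

def nearest_neighbour_tour(pts):
    # Greedy stand-in for a TSP solver: repeatedly jump to the closest
    # unvisited point, producing one continuous traversal of all samples.
    tour = [pts[0]]
    remaining = list(pts[1:])
    while remaining:
        last = tour[-1]
        nxt = min(remaining, key=lambda p: math.dist(last, p))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

tour = nearest_neighbour_tour(points)
assert len(tour) == len(points)
```

A real implementation would then smooth this curve against gradient amplitude and slew-rate constraints, which is what the projection algorithms in the thesis address.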
Phillips, Lawrence; Pearl, Lisa
2015-11-01
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.
Stereotyping to infer group membership creates plausible deniability for prejudice-based aggression.
Cox, William T L; Devine, Patricia G
2014-02-01
In the present study, participants administered painful electric shocks to an unseen male opponent who was either explicitly labeled as gay or stereotypically implied to be gay. Identifying the opponent with a gay-stereotypic attribute produced a situation in which the target's group status was privately inferred but plausibly deniable to others. To test the plausible deniability hypothesis, we examined aggression levels as a function of internal (personal) and external (social) motivation to respond without prejudice. Whether plausible deniability was present or absent, participants high in internal motivation aggressed at low levels, and participants low in both internal and external motivation aggressed at high levels. The behavior of participants low in internal and high in external motivation, however, depended on experimental condition. They aggressed at low levels when observers could plausibly attribute their behavior to prejudice and aggressed at high levels when the situation granted plausible deniability. This work has implications for both obstacles to and potential avenues for prejudice-reduction efforts.
Optimization of Heat Exchangers
International Nuclear Information System (INIS)
Catton, Ivan
2010-01-01
The objective of this research is to develop tools to design and optimize heat exchangers (HE) and compact heat exchangers (CHE) for the intermediate-loop heat transport systems found in the very high temperature reactor (VHTR) and other Generation IV designs, by addressing heat transfer surface augmentation and conjugate modeling. To optimize a heat exchanger, a fast-running model must be created that allows multiple designs to be compared quickly. To model a heat exchanger, volume averaging theory (VAT) is used. VAT allows the conservation of mass, momentum and energy to be solved point by point in a 3-dimensional computer model of a heat exchanger. The end product of this project is a computer code that can predict an optimal configuration for a heat exchanger given only a few constraints (input fluids, size, cost, etc.). Because the VAT computer code can model heat exchanger characteristics (pumping power, temperatures, and cost) more quickly than traditional CFD or experiment, every geometric parameter can be optimized simultaneously. Using design of experiments (DOE) and genetic algorithms (GA) to optimize the results of the computer code will improve heat exchanger design.
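The outer optimization loop described here might look like the following sketch, where a made-up two-parameter surrogate stands in for the fast VAT model. The objective, bounds, and GA settings are all illustrative assumptions, not the project's actual code:

```python
import random

def surrogate_cost(pitch, height):
    # Toy stand-in for a fast heat-exchanger model: given a fin pitch and a
    # channel height (normalized to (0, 1]), return a made-up score trading
    # heat transfer against pumping power. Lower is better. NOT the VAT model.
    heat_transfer = pitch ** 0.5 * height ** 0.3
    pumping_power = 0.2 / (pitch * height)
    return pumping_power - heat_transfer

def genetic_optimize(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.05, 1.0), rng.uniform(0.05, 1.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: surrogate_cost(*ind))
        parents = pop[:pop_size // 2]              # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]            # crossover
            child = [min(1.0, max(0.05, g + rng.gauss(0.0, 0.05)))  # mutation
                     for g in child]
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=lambda ind: surrogate_cost(*ind))

best = genetic_optimize()
print(best, surrogate_cost(*best))
```

Because each surrogate evaluation is cheap, the GA can afford thousands of design evaluations per run, which is exactly the property a fast VAT model provides over full CFD.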
Lombardi, Doug; Bickel, Elliot S.; Bailey, Janelle M.; Burrell, Shondricka
2018-01-01
Evaluation is an important aspect of science and is receiving increasing attention in science education. The present study investigated (1) changes to plausibility judgments and knowledge as a result of a series of instructional scaffolds, called model-evidence link activities, that facilitated evaluation of scientific and alternative models in…
Yang, Jinmian
2013-01-01
The current paper examined the role of plausibility information in the parafovea for Chinese readers by using two-character transposed words (in which the order of the component characters is reversed but are still words). In two eye-tracking experiments, readers received a preview of a target word that was (1) identical to the target word, (2) a…
The Radical Promise of Reformist Zeal: What Makes "Inquiry for Equity" Plausible?
Lashaw, Amanda
2010-01-01
Education reform movements often promise more than they deliver. Why are such promises plausible in light of seemingly perpetual education reform? Drawing on ethnographic fieldwork based in a nonprofit education reform organization, this article explores the appeal of popular notions about "using data to close the racial achievement…
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...
Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten
2017-08-01
Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can be compared directly using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
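The model-selection step can be illustrated with a minimal sketch. The paper uses minimum description length and Bayes factors; the BIC below is a simpler stand-in, and the trial counts are hypothetical:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: lower is better; the penalty term
    charges each free parameter log(n) to guard against overfitting."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# Hypothetical data: 100 trials, the participant follows TTB's prediction 86 times.
n, k = 100, 86

# Model 1: deterministic TTB with one free error rate e (MLE: e = 1 - k/n).
e = 1 - k / n
ll_ttb = k * math.log(1 - e) + (n - k) * math.log(e)

# Model 2: random guessing, no free parameters (p = 0.5 on every trial).
ll_guess = n * math.log(0.5)

print(bic(ll_ttb, 1, n) < bic(ll_guess, 0, n))  # TTB wins despite its extra parameter
```

The same comparison extends to any number of candidate strategies; the participant is classified by whichever model scores lowest.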
Acquavella, John; Doe, John; Tomenson, John; Chester, Graham; Cowell, John; Bloemen, Louis
2003-01-01
Epidemiologic studies frequently show associations between self-reported use of specific pesticides and human cancers. These findings have engendered debate, largely on methodologic grounds. However, biologic plausibility is a more fundamental issue that has received only superficial attention. The purpose of this commentary is to review briefly the toxicology and exposure data that are developed as part of the pesticide regulatory process and to discuss the applicability of these data to epidemiologic research. The authors also provide a generic example of how worker pesticide exposures might be estimated and compared to relevant toxicologic dose levels. This example provides guidance for better characterization of exposure and for consideration of biologic plausibility in epidemiologic studies of pesticides.
Of paradox and plausibility: the dynamic of change in medical law.
Harrington, John
2014-01-01
This article develops a model of change in medical law. Drawing on systems theory, it argues that medical law participates in a dynamic of 'deparadoxification' and 'reparadoxification' whereby the underlying contingency of the law is variously concealed through plausible argumentation, or revealed by critical challenge. Medical law is, thus, thoroughly rhetorical. An examination of the development of the law on abortion and on the sterilization of incompetent adults shows that plausibility is achieved through the deployment of substantive common sense and formal stylistic devices. It is undermined where these elements are shown to be arbitrary and constructed. In conclusion, it is argued that the politics of medical law are constituted by this antagonistic process of establishing and challenging provisionally stable normative regimes. © The Author [2014]. Published by Oxford University Press; all rights reserved. For Permissions, please email: journals.permissions@oup.com.
L’Analyse du Risque Géopolitique: du Plausible au Probable
Adib Bencherif
2015-01-01
This paper explores the logical process behind risk analysis, particularly in geopolitics. The main goal is to demonstrate the ambiguities behind risk calculation and to highlight the continuum between plausibility and probability in risk analysis. To demonstrate this, the author introduces two notions: the inference of abduction, often neglected in the social sciences literature, and Bayesian calculation. Inspired by the works of Louise Amoore, this paper tries to go further by ...
Resolution of cosmological singularity and a plausible mechanism of the big bang
International Nuclear Information System (INIS)
Choudhury, D.C.
2002-01-01
The initial cosmological singularity in the framework of the general theory of relativity is resolved by introducing the effect of the uncertainty principle of quantum theory without violating conventional laws of physics. A plausible account of the mechanism of the big bang, analogous to that of a nuclear explosion, is given and the currently accepted Planck temperature of ≅10^32 K at the beginning of the big bang is predicted.
International Nuclear Information System (INIS)
Dostatni, A.W.; Dostatni, Michel.
1976-01-01
In the main patent, a description was given of a heat exchanger with an exchange surface in preformed sheet metal designed for the high pressure and temperature service particularly encountered in nuclear pressurized water reactors, characterised by the fact that it is composed of at least one exchanger bundle sealed in a containment, the said bundle or bundles being composed of numerous juxtaposed individual compartments whose exchange faces are built of preformed sheet metal. The present addendum certificate concerns shapes of bundles and their positioning methods in the exchanger containment, enabling its compactness to be increased. [fr]
De Götzen , Amalia; Mion , Luca; Tache , Olivier
2007-01-01
We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
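The basic concepts named above (fitness-based selection, crossover, mutation) can be sketched with a minimal genetic algorithm on the classic OneMax toy problem. This is a generic illustration, not the software tool the abstract describes:

```python
import random

random.seed(42)

def fitness(bits):
    """Toy objective: count of 1-bits (the 'OneMax' problem)."""
    return sum(bits)

def evolve(pop_size=20, length=16, generations=60, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # "Survival of the fittest": the top half of the population breeds.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)        # one-point crossover
            child = a[:cut] + b[cut:]
            # Each bit flips with small probability (mutation).
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically at or near the optimum of 16
```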
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Directory of Open Access Journals (Sweden)
Pavel eSountsov
2011-11-01
Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled or rotated.
Sountsov, Pavel; Santucci, David M; Lisman, John E
2011-01-01
Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated.
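A minimal sketch of the log-polar idea behind such transforms: under a logarithmic radial mapping, scaling an object becomes a shift along one output axis and rotation a shift along the other. This toy nearest-neighbor implementation illustrates the mapping only, not the authors' two-stage iterated transform:

```python
import numpy as np

def log_polar(image, n_r=32, n_theta=32):
    """Resample a square image onto a (log r, theta) grid centered on the
    image midpoint, via nearest-neighbor lookup. Scaling the input shifts
    the output along the log-r axis; rotating it shifts along theta."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    out = np.zeros((n_r, n_theta))
    for i in range(n_r):
        r = r_max ** (i / (n_r - 1.0))   # radii spaced logarithmically in [1, r_max]
        for j in range(n_theta):
            t = 2 * np.pi * j / n_theta
            y = int(round(cy + r * np.sin(t)))
            x = int(round(cx + r * np.cos(t)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = image[y, x]
    return out

# A centered bright disk: enlarging it moves its edge along the log-r axis only.
yy, xx = np.mgrid[0:65, 0:65]
small = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 8 ** 2).astype(float)
large = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 16 ** 2).astype(float)
print(log_polar(small).sum() < log_polar(large).sum())  # larger disk fills more rows
```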
Resolution of Cosmological Singularity and a Plausible Mechanism of the Big Bang
Choudhury, D. C.
2001-01-01
The initial cosmological singularity in the framework of the general theory of relativity is resolved by introducing the effect of the uncertainty principle of quantum theory without violating conventional laws of physics. A plausible account of the mechanism of the big bang, analogous to that of a nuclear explosion, is given and the currently accepted Planck temperature of about 10^(32)K at the beginning of the big bang is predicted. Subj-class: cosmology: theory-pre-big bang; mechanism of t...
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Molecular simulations of hydrated proton exchange membranes. The structure
Energy Technology Data Exchange (ETDEWEB)
Marcharnd, Gabriel [Duisburg-Essen Univ., Essen (Germany). Lehrstuhl fuer Theoretische Chemie; Bordeaux Univ., Talence (France). Dept. of Chemistry; Bopp, Philippe A. [Bordeaux Univ., Talence (France). Dept. of Chemistry; Spohr, Eckhard [Duisburg-Essen Univ., Essen (Germany). Lehrstuhl fuer Theoretische Chemie
2013-01-15
The structure of two hydrated proton exchange membranes for fuel cells (PEMFC), Nafion® (DuPont) and Hyflon® (Solvay), is studied by all-atom molecular dynamics (MD) computer simulations. Since the characteristic times of these systems are long compared to the times for which they can be simulated, several different, but equivalent, initial configurations with a large degree of randomness are generated for different water contents and then equilibrated and simulated in parallel. A more constrained structure, analogous to the newest model proposed in the literature based on scattering experiments, is investigated in the same way. One might speculate that a limited degree of entanglement of the polymer chains is a key feature of the structures showing the best agreement with experiment. Nevertheless, the overall conclusion remains that the scattering experiments cannot distinguish between the several, in our view equally plausible, structural models. We thus find that the characteristic features of experimental scattering curves are, after equilibration, fairly well reproduced by all systems prepared with our method. We therefore examine some structural details more closely, attempting to characterize the spatial and size distribution of the water-rich domains, which is where proton diffusion mostly takes place, using several clustering algorithms. (orig.)
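The domain-size analysis mentioned at the end can be illustrated with one of the simplest clustering algorithms used for such tasks: distance-cutoff clustering via union-find, where particles closer than a cutoff are merged into one domain. The coordinates and the cutoff value below are invented for illustration:

```python
# Distance-cutoff clustering of particle coordinates with union-find, the
# kind of analysis used to size water-rich domains in a membrane snapshot.

def cluster_sizes(points, cutoff):
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving keeps trees shallow
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            if d2 <= cutoff ** 2:
                parent[find(i)] = find(j)   # merge the two clusters

    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return sorted(sizes.values(), reverse=True)

points = [(0, 0), (1, 0), (0, 1), (5, 5), (5.5, 5), (9, 9)]
print(cluster_sizes(points, 1.2))  # → [3, 2, 1]
```

In a real MD analysis the same idea runs on water oxygen coordinates with periodic boundary conditions and a chemically motivated cutoff.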
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
Raab, Marius Hans; Auer, Nikolas; Ortlieb, Stefan A; Carbon, Claus-Christian
2013-01-01
Reptile prime ministers and flying Nazi saucers: extreme and sometimes off-the-wall conclusions are typical ingredients of conspiracy theories. While individual differences are a common research topic concerning conspiracy theories, the role of extreme statements in the process of acquiring and passing on conspiratorial stories has not yet been examined in an experimental design. We identified six morphological components of conspiracy theories empirically. On the basis of these content categories, a set of narrative elements for a 9/11 story was compiled. These elements varied systematically in terms of conspiratorial allegation, i.e., they contained official statements concerning the events of 9/11, statements alleging a conspiracy limited in time and space, as well as extreme statements indicating an all-encompassing cover-up. Using the method of narrative construction, 30 people were given a set of cards with these statements and asked to construct the course of events of 9/11 they deemed most plausible. When extreme statements were present in the set, the resulting stories were more conspiratorial; the number of official statements included in the narrative dropped significantly, whereas the self-assessment of the story's plausibility did not differ between conditions. This indicates that blatant statements in a pool of information foster the synthesis of conspiracy theories on an individual level. By relating these findings to one of Germany's most successful (and controversial) non-fiction books, we refer to the real-world dangers of this effect.
Neural correlates of early-closure garden-path processing: Effects of prosody and plausibility.
den Ouden, Dirk-Bart; Dickey, Michael Walsh; Anderson, Catherine; Christianson, Kiel
2016-01-01
Functional magnetic resonance imaging (fMRI) was used to investigate neural correlates of early-closure garden-path sentence processing and use of extrasyntactic information to resolve temporary syntactic ambiguities. Sixteen participants performed an auditory picture verification task on sentences presented with natural versus flat intonation. Stimuli included sentences in which the garden-path interpretation was plausible, implausible because of a late pragmatic cue, or implausible because of a semantic mismatch between an optionally transitive verb and the following noun. Natural sentence intonation was correlated with left-hemisphere temporal activation, but also with activation that suggests the allocation of more resources to interpretation when natural prosody is provided. Garden-path processing was associated with upregulation in bilateral inferior parietal and right-hemisphere dorsolateral prefrontal and inferior frontal cortex, while differences between the strength and type of plausibility cues were also reflected in activation patterns. Region of interest (ROI) analyses in regions associated with complex syntactic processing are consistent with a role for posterior temporal cortex supporting access to verb argument structure. Furthermore, ROI analyses within left-hemisphere inferior frontal gyrus suggest a division of labour, with the anterior-ventral part primarily involved in syntactic-semantic mismatch detection, the central part supporting structural reanalysis, and the posterior-dorsal part showing a general structural complexity effect.
Kentzoglanakis, Kyriakos; Poole, Matthew
2012-01-01
In this paper, we investigate the problem of reverse engineering the topology of gene regulatory networks from temporal gene expression data. We adopt a computational intelligence approach comprising swarm intelligence techniques, namely particle swarm optimization (PSO) and ant colony optimization (ACO). In addition, the recurrent neural network (RNN) formalism is employed for modeling the dynamical behavior of gene regulatory systems. More specifically, ACO is used for searching the discrete space of network architectures and PSO for searching the corresponding continuous space of RNN model parameters. We propose a novel solution construction process in the context of ACO for generating biologically plausible candidate architectures. The objective is to concentrate the search effort into areas of the structure space that contain architectures which are feasible in terms of their topological resemblance to real-world networks. The proposed framework is initially applied to the reconstruction of a small artificial network that has previously been studied in the context of gene network reverse engineering. Subsequently, we consider an artificial data set with added noise for reconstructing a subnetwork of the genetic interaction network of S. cerevisiae (yeast). Finally, the framework is applied to a real-world data set for reverse engineering the SOS response system of the bacterium Escherichia coli. Results demonstrate the relative advantage of utilizing problem-specific knowledge regarding biologically plausible structural properties of gene networks over conducting a problem-agnostic search in the vast space of network architectures.
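The RNN formalism used to model gene regulatory dynamics can be sketched as follows. The two-gene network, weights, and time constant are hypothetical, and the ACO/PSO search layers that would propose and tune such networks are omitted:

```python
import math

def simulate_rnn(weights, biases, x0, tau=1.0, dt=0.1, steps=100):
    """Discrete-time integration of the RNN formalism for gene regulation:
    dx_i/dt = (1/tau) * (sigmoid(sum_j w_ij * x_j + b_i) - x_i).
    The weight matrix encodes the (hypothetical) regulatory topology."""
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        for i in range(n):
            drive = sum(weights[i][j] * x[j] for j in range(n)) + biases[i]
            act = 1.0 / (1.0 + math.exp(-drive))   # saturating activation
            x[i] += dt / tau * (act - x[i])        # relax toward the activation
    return x

# Two genes: gene 1 is activated by gene 0, gene 0 is repressed by gene 1.
w = [[0.0, -4.0],
     [4.0, 0.0]]
levels = simulate_rnn(w, biases=[1.0, -2.0], x0=[0.5, 0.5])
print(all(0.0 <= v <= 1.0 for v in levels))  # expression stays bounded in [0, 1]
```

Reverse engineering then amounts to searching over such weight matrices (ACO for which entries are nonzero, PSO for their values) so that the simulated trajectories match measured expression data.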
Particulate air pollution and increased mortality: Biological plausibility for causal relationship
International Nuclear Information System (INIS)
Henderson, R.F.
1995-01-01
Recently, a number of epidemiological studies have concluded that ambient particulate exposure is associated with increased mortality and morbidity at PM concentrations well below those previously thought to affect human health. These studies have been conducted in several different geographical locations and have involved a range of populations. While the consistency of the findings and the presence of an apparent concentration-response relationship provide a strong argument for causality, epidemiological studies can only conclude this based upon inference from statistical associations. The biological plausibility of a causal relationship between low concentrations of PM and daily mortality and morbidity rates is neither intuitively obvious nor expected based on past experimental studies on the toxicity of inhaled particles. Chronic toxicity from inhaled, poorly soluble particles has been observed based on the slow accumulation of large lung burdens of particles, not on small daily fluctuations in PM levels. Acute toxicity from inhaled particles is associated mainly with acidic particles and is observed at much higher concentrations than those observed in the epidemiology studies reporting an association between PM concentrations and morbidity/mortality. To approach the difficult problem of determining if the association between PM concentrations and daily morbidity and mortality is biologically plausible and causal, one must consider (1) the chemical and physical characteristics of the particles in the inhaled atmospheres, (2) the characteristics of the morbidity/mortality observed and the people who are affected, and (3) potential mechanisms that might link the two.
Morality Principles for Risk Modelling: Needs and Links with the Origins of Plausible Inference
Solana-Ortega, Alberto; Solana, Vicente
2009-12-01
In comparison with the foundations of probability calculus, the inescapable and controversial issue of how to assign probabilities has only recently become a matter of formal study. The introduction of information as a technical concept was a milestone, but the most promising entropic assignment methods still face unsolved difficulties, manifesting the incompleteness of plausible inference theory. In this paper we examine the situation faced by risk analysts in the critical field of extreme events modelling, where the former difficulties are especially visible, due to scarcity of observational data, the large impact of these phenomena and the obligation to assume professional responsibilities. To respond to the claim for a sound framework to deal with extremes, we propose a metafoundational approach to inference, based on a canon of extramathematical requirements. We highlight their strong moral content, and show how this emphasis in morality, far from being new, is connected with the historic origins of plausible inference. Special attention is paid to the contributions of Caramuel, a contemporary of Pascal, unfortunately ignored in the usual mathematical accounts of probability.
Biologically plausible learning in neural networks: a lesson from bacterial chemotaxis.
Shimansky, Yury P
2009-12-01
Learning processes in the brain are usually associated with plastic changes made to optimize the strength of connections between neurons. Although many details related to biophysical mechanisms of synaptic plasticity have been discovered, it is unclear how the concurrent performance of adaptive modifications in a huge number of spatial locations is organized to minimize a given objective function. Since direct experimental observation of even a relatively small subset of such changes is not feasible, computational modeling is an indispensable investigation tool for solving this problem. However, the conventional method of error back-propagation (EBP) employed for optimizing synaptic weights in artificial neural networks is not biologically plausible. This study, based on computational experiments, demonstrated that such optimization can be performed rather efficiently using the same general method that bacteria employ for moving closer to an attractant or away from a repellent. With regard to neural network optimization, this method consists of regulating the probability of an abrupt change in the direction of synaptic weight modification according to the temporal gradient of the objective function. Neural networks utilizing this method (regulation of modification probability, RMP) can be viewed as analogous to swimming in the multidimensional space of their parameters in the flow of biochemical agents carrying information about the optimality criterion. The efficiency of RMP is comparable to that of EBP, while RMP has several important advantages. Since the biological plausibility of RMP is beyond a reasonable doubt, the RMP concept provides a constructive framework for the experimental analysis of learning in natural neural networks.
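The RMP rule described above can be sketched as a chemotaxis-style search: parameters drift in a random direction, and the probability of "tumbling" to a new direction depends on whether the objective just improved or worsened. The step size, flip probabilities, and toy objective below are assumptions for illustration, not values from the study:

```python
import random

random.seed(7)

def rmp_minimize(objective, dim, steps=4000, lr=0.01):
    """Chemotaxis-style optimization: keep moving in the current random
    direction while the objective improves; when it worsens, raise the
    probability of switching to a new random direction."""
    w = [0.0] * dim
    direction = [random.choice((-1.0, 1.0)) for _ in range(dim)]
    prev = objective(w)
    for _ in range(steps):
        w = [wi + lr * di for wi, di in zip(w, direction)]
        cur = objective(w)
        p_flip = 0.9 if cur > prev else 0.05   # temporal gradient sets flip probability
        for i in range(dim):
            if random.random() < p_flip:
                direction[i] = random.choice((-1.0, 1.0))
        prev = cur
    return w

# Toy objective: distance to a target weight vector (stand-in for network error).
target = [0.7, -0.3, 1.2]
f = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))
w = rmp_minimize(f, dim=3)
print("final error:", f(w))
```

Note that, unlike back-propagation, the rule uses only the scalar objective value over time, with no per-weight gradient information.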
International Nuclear Information System (INIS)
Leigh, D.G.
1976-01-01
The arrangement described relates particularly to heat exchangers for use in fast reactor power plants, in which heat is extracted from the reactor core by primary liquid metal coolant and is then transferred to secondary liquid metal coolant by means of intermediate heat exchangers. One of the main requirements of such a system, if used in a pool type fast reactor, is that the pressure drop on the primary coolant side must be kept to a minimum consistent with the maintenance of a limited dynamic head in the pool vessel. The intermediate heat exchanger must also be compact enough to be accommodated in the reactor vessel, and the heat exchanger tubes must be available for inspection and the detection and plugging of leaks. If, however, the heat exchanger is located outside the reactor vessel, as in the case of a loop system reactor, a higher pressure drop on the primary coolant side is acceptable, and space restriction is less severe. An object of the arrangement described is to provide a method of heat exchange and a heat exchanger to meet these problems. A further object is to provide a method that ensures that excessive temperature variations are not imposed on welded tube joints by sudden changes in the primary coolant flow path. Full constructional details are given. (U.K.)
The Environmental Information Exchange Network (EN) is an Internet-based system used by state, tribal and territorial partners to securely share environmental and health information with one another and EPA.
Signature of Plausible Accreting Supermassive Black Holes in Mrk 261/262 and Mrk 266
Directory of Open Access Journals (Sweden)
Gagik Ter-Kazarian
2013-01-01
We address the neutrino radiation of plausible accreting supermassive black holes closely linked to the 5 nuclear components of the galaxy samples Mrk 261/262 and Mrk 266. We predict a time delay before neutrino emission of the same scale as the age of the Universe. The ultrahigh energy neutrinos are produced in a superdense protomatter medium via simple (quark or pionic) reactions or modified URCA processes (G. Gamow was inspired to name the process URCA after the name of a casino in Rio de Janeiro). The resulting neutrino fluxes for quark reactions are ranging from to , where is the opening parameter. For pionic and modified URCA reactions, the fluxes are and , respectively. These fluxes are highly beamed along the plane of the accretion disk, peaked at ultrahigh energies, and collimated in a smaller opening angle .
Directory of Open Access Journals (Sweden)
Megha Karki
2017-07-01
Phosphorylation under plausible prebiotic conditions continues to be one of the defining issues for the role of phosphorus in the origins of life processes. In this review, we cover the reactions of alternative forms of phosphate, specifically the nitrogenous versions of phosphate (and other forms of reduced phosphorus species), from a prebiotic, synthetic organic and biochemistry perspective. The ease with which such amidophosphates or phosphoramidate derivatives phosphorylate a wide variety of substrates suggests that alternative forms of phosphate could have played a role in overcoming the “phosphorylation in water problem”. We submit that serious consideration should be given to the search for primordial sources of nitrogenous versions of phosphate and other versions of phosphorus.
Quantum theory as plausible reasoning applied to data obtained by robust experiments.
De Raedt, H; Katsnelson, M I; Michielsen, K
2016-05-28
We review recent work that employs the framework of logical inference to establish a bridge between data gathered through experiments and their objective description in terms of human-made concepts. It is shown that logical inference applied to experiments for which the observed events are independent, and for which the frequency distribution of these events is robust with respect to small changes of the conditions under which the experiments are carried out, yields, without introducing any concept of quantum theory, the quantum theoretical description, in terms of the Schrödinger or the Pauli equation, of the Stern-Gerlach or Einstein-Podolsky-Rosen-Bohm experiments. The extraordinary descriptive power of quantum theory then follows from the fact that it is plausible reasoning, that is, common sense, applied to reproducible and robust experimental data. © 2016 The Author(s).
Sofaer, Neema
2014-11-01
A common reason for giving research participants post-trial access (PTA) to the trial intervention appeals to reciprocity, the principle, stated most generally, that if one person benefits a second, the second should reciprocate: benefit the first in return. Many authors consider it obvious that reciprocity supports PTA. Yet their reciprocity principles differ, with many authors apparently unaware of alternative versions. This article is the first to gather the range of reciprocity principles. It finds that: (1) most are false. (2) The most plausible principle, which is also problematic, applies only when participants experience significant net risks or burdens. (3) Seldom does reciprocity support PTA for participants or give researchers stronger reason to benefit participants than equally needy non-participants. (4) Reciprocity fails to explain the common view that it is bad when participants in a successful trial have benefited from the trial intervention but lack PTA to it. © 2013 John Wiley & Sons Ltd.
Directory of Open Access Journals (Sweden)
Hanns Holger Rutz
2016-11-01
Although the concept of algorithms was established long ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation can no longer be clearly separated. An observation is attempted of defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.
DEFF Research Database (Denmark)
Jensen, Pernille Foged; Rand, Kasper Dyrberg
2016-01-01
Hydrogen exchange (HX) monitored by mass spectrometry (MS) is a powerful analytical method for investigation of protein conformation and dynamics. HX-MS monitors isotopic exchange of hydrogen in protein backbone amides and thus serves as a sensitive method for probing protein conformation and dynamics along the entire protein backbone. This chapter describes the exchange of backbone amide hydrogen, which is highly quenchable as it is strongly dependent on pH and temperature. The HX rates of backbone amide hydrogen are sensitive and very useful probes of protein conformation, as they are distributed along the polypeptide backbone and form the fundamental hydrogen-bonding networks of basic secondary structure. The effect of pressure on HX in unstructured polypeptides (poly-dl-lysine and oxidatively unfolded ribonuclease A) and native folded proteins (lysozyme and ribonuclease A) was evaluated...
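The strong pH dependence of amide exchange noted above can be illustrated with the textbook acid-plus-base-catalyzed rate expression, which produces the characteristic V-shaped profile with a rate minimum near pH 2.5-3 that makes low-pH quenching work. The rate constants below are rough order-of-magnitude placeholders, not calibrated reference values:

```python
# Toy model of the pH dependence of backbone amide hydrogen exchange:
# acid- and base-catalyzed terms sum to a V-shaped rate profile.

def intrinsic_rate(pH, k_acid=1e1, k_base=1e10, pKw=14.0):
    """Illustrative exchange rate (events per second); k_acid and k_base
    are assumed placeholder catalysis constants."""
    h = 10.0 ** (-pH)          # hydronium concentration
    oh = 10.0 ** (pH - pKw)    # hydroxide concentration
    return k_acid * h + k_base * oh

rates = {pH: intrinsic_rate(pH) for pH in (1, 3, 5, 7, 9)}
# Two pH units above the minimum give roughly a 100-fold rate increase.
print(rates[9] / rates[7])
```

This tenfold-per-pH-unit behavior in the base-catalyzed regime is why quenching to low pH (and low temperature) slows exchange enough for MS analysis.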
International Nuclear Information System (INIS)
Drury, C.R.
1988-01-01
A heat exchanger having primary and secondary conduits in heat-exchanging relationship is described comprising: at least one serpentine tube having parallel sections connected by reverse bends, the serpentine tube constituting one of the conduits; a group of open-ended tubes disposed adjacent to the parallel sections, the open-ended tubes constituting the other of the conduits, and forming a continuous mass of contacting tubes extending between and surrounding the serpentine tube sections; and means securing the mass of tubes together to form a predetermined cross-section of the entirety of the mass of open-ended tubes and tube sections
Lerch, Mathias; Spoerri, Adrian; Jasilionis, Domantas; Viciana Fernandèz, Francisco
2017-07-14
Reliable estimates of mortality according to socioeconomic status play a crucial role in informing the policy debate about social inequality, social cohesion, and exclusion as well as about the reform of pension systems. Linked mortality data have become a gold standard for monitoring socioeconomic differentials in survival. Several approaches have been proposed to assess the quality of the linkage, in order to avoid the misclassification of deaths according to socioeconomic status. However, the plausibility of mortality estimates has never been scrutinized from a demographic perspective, and the potential problems with the quality of the data on the at-risk populations have been overlooked. Using indirect demographic estimation (i.e., the synthetic extinct generation method), we analyze the plausibility of old-age mortality estimates according to educational attainment in four European data contexts with different quality issues: deterministic and probabilistic linkage of deaths, as well as differences in the methodology of the collection of educational data. We evaluate whether the at-risk population according to educational attainment is misclassified and/or misestimated, correct these biases, and estimate the education-specific linkage rates of deaths. The results confirm a good linkage of death records within different educational strata, even when probabilistic matching is used. The main biases in mortality estimates concern the classification and estimation of the person-years of exposure according to educational attainment. Changes in the census questions about educational attainment led to inconsistent information over time, which misclassified the at-risk population. Sample censuses also misestimated the at-risk populations according to educational attainment. The synthetic extinct generation method can be recommended for quality assessments of linked data because it is capable not only of quantifying linkage precision, but also of tracking problems in
Wolowodiuk, Walter
1976-01-06
A heat exchanger of the straight tube type in which different rates of thermal expansion between the straight tubes and the supply pipes furnishing fluid to those tubes do not result in tube failures. The supply pipes each contain a section which is of helical configuration.
International Nuclear Information System (INIS)
1975-01-01
The tubes of a heat exchanger tube bank have a portion thereof formed in the shape of a helix, of effective radius equal to the tube radius plus the space between two adjacent tubes, to tangentially contact the straight sections of the tubes immediately adjacent thereto and thereby provide support, maintain the spacing, and account for differential thermal expansion.
Jamshidian, F.
2007-01-01
The contract is described and market examples given. Essential theoretical developments are introduced and cited chronologically. The principles and techniques of hedging and unique pricing are illustrated for the two simplest nontrivial examples: the classical Black-Scholes/Merton/Margrabe exchange option.
Mills, Bev
2003-09-01
IN MAY this year, I was lucky enough to go to Larissa in northern Greece as part of Hope Exchange 2003, an annual study tour organised by the European Union's hospital committee and administered by the Institute of Healthcare Management (IHM).
Daman, Ernest L.; McCallister, Robert A.
1979-01-01
A heat exchanger is provided having first and second fluid chambers for passing primary and secondary fluids. The chambers are spaced apart and have heat pipes extending from inside one chamber to inside the other chamber. A third chamber is provided for passing a purge fluid, and the heat pipe portion between the first and second chambers lies within the third chamber.
International Nuclear Information System (INIS)
Wolowodiuk, W.
1976-01-01
A heat exchanger of the straight tube type is described in which different rates of thermal expansion between the straight tubes and the supply pipes furnishing fluid to those tubes do not result in tube failures. The supply pipes each contain a section which is of helical configuration.
Event-based plausibility immediately influences on-line language comprehension.
Matsuki, Kazunaga; Chow, Tracy; Hare, Mary; Elman, Jeffrey L; Scheepers, Christoph; McRae, Ken
2011-07-01
In some theories of sentence comprehension, linguistically relevant lexical knowledge, such as selectional restrictions, is privileged in terms of the time-course of its access and influence. We examined whether event knowledge computed by combining multiple concepts can rapidly influence language understanding even in the absence of selectional restriction violations. Specifically, we investigated whether instruments can combine with actions to influence comprehension of ensuing patients (as in Rayner, Warren, Juhasz, & Liversedge, 2004; Warren & McConnell, 2007). Instrument-verb-patient triplets were created in a norming study designed to tap directly into event knowledge. In self-paced reading (Experiment 1), participants were faster to read patient nouns, such as hair, when they were typical of the instrument-action pair (Donna used the shampoo to wash vs. the hose to wash). Experiment 2 showed that these results were not due to direct instrument-patient relations. Experiment 3 replicated Experiment 1 using eyetracking, with effects of event typicality observed in first fixation and gaze durations on the patient noun. This research demonstrates that conceptual event-based expectations are computed and used rapidly and dynamically during on-line language comprehension. We discuss relationships among plausibility and predictability, as well as their implications. We conclude that selectional restrictions may be best considered as event-based conceptual knowledge rather than lexical-grammatical knowledge.
Zhang, Feng; Zhong, Rujia; Li, Song; Chang, Raymond Chuen-Chung; Le, Weidong
2017-05-01
Sleep disorders are among the most common clinical problems and pose a significant concern for the geriatric population. More importantly, while around 40% of elderly adults have sleep-related complaints, sleep disorders are more frequently associated with co-morbidities including age-related neurodegenerative diseases and mild cognitive impairment. Recently, increasing evidence has indicated that disturbed sleep may not only serve as the consequence of brain atrophy, but also contribute to the pathogenesis of dementia and, therefore, significantly increase dementia risk. Since the current therapeutic interventions lack efficacy to prevent, delay or reverse the pathological progress of dementia, a better understanding of the underlying mechanisms by which sleep disorders interact with the pathogenesis of dementia will provide possible targets for the prevention and treatment of dementia. In this review, we briefly describe the physiological roles of sleep in learning/memory, and specifically update the recent research evidence demonstrating the association between sleep disorders and dementia. Plausible mechanisms are further discussed. Moreover, we also evaluate the possibility of sleep therapy as a potential intervention for dementia.
Schuman-Olivier, Zev; Britton, Willoughby B.; Fresco, David M.; Desbordes, Gaelle; Brewer, Judson A.; Fulwiler, Carl
2016-01-01
The purpose of this review is to provide (1) a synopsis on relations of mindfulness with cardiovascular disease (CVD) and major CVD risk factors, and (2) an initial consensus-based overview of mechanisms and theoretical framework by which mindfulness might influence CVD. Initial evidence, often of limited methodological quality, suggests possible impacts of mindfulness on CVD risk factors including physical activity, smoking, diet, obesity, blood pressure, and diabetes regulation. Plausible mechanisms include (1) improved attention control (e.g., ability to hold attention on experiences related to CVD risk, such as smoking, diet, physical activity, and medication adherence), (2) emotion regulation (e.g., improved stress response, self-efficacy, and skills to manage craving for cigarettes, palatable foods, and sedentary activities), and (3) self-awareness (e.g., self-referential processing and awareness of physical sensations due to CVD risk factors). Understanding mechanisms and theoretical framework should improve etiologic knowledge, providing customized mindfulness intervention targets that could enable greater mindfulness intervention efficacy. PMID:26482755
Phthalates impact human health: Epidemiological evidences and plausible mechanism of action.
Benjamin, Sailas; Masai, Eiji; Kamimura, Naofumi; Takahashi, Kenji; Anderson, Robin C; Faisal, Panichikkal Abdul
2017-10-15
Disregarding the rising alarm over the hazardous nature of various phthalates and their metabolites, ruthless usage of phthalates as plasticizers in plastics and as additives in innumerable consumer products continues due to their low cost, attractive properties, and lack of suitable alternatives. Globally, in silico computational, in vitro mechanistic, in vivo preclinical and limited clinical or epidemiological human studies showed that over a dozen phthalates and their metabolites ingested passively by man from the general environment, foods, drinks, breathing air, and routine household products cause various dysfunctions. Thus, this review addresses the health hazards posed by phthalates on children and adolescents, epigenetic modulation, reproductive toxicity in women and men; insulin resistance and type II diabetes; overweight and obesity, skeletal anomalies, allergy and asthma, cancer, etc., coupled with a description of major phthalates and their general uses, phthalate exposure routes, biomonitoring and risk assessment, a special account of endocrine disruption; and finally, a plausible molecular cross-talk with a unique mechanism of action. This clinically focused comprehensive review on the hazards of phthalates would benefit the general population, academia, scientists, clinicians, environmentalists, and law or policy makers in deciding whether the use of phthalates should continue without restriction, be regulated by law, or be phased out entirely. Copyright © 2017. Published by Elsevier B.V.
Azmat, Rafia; Hamid, Neelofer
2015-03-01
Dual symbioses of vesicular-arbuscular mycorrhizal (VAM) fungi with the growth of Momordica charantia were elucidated in terms of a plausible mechanism of biosorption in this article. The experiment was conducted in a greenhouse, and a mixed inoculum of VAM fungi was used in three replicates. Results demonstrated that starch contents were the main source of C for the VAM fungi to build their hyphae. The increased plant height and leaf surface area were explained in relation to an increase in photosynthetic rates to rapidly produce sugar contents for the survival of the plants. A decrease in protein and amino acid contents and an increase in proline and protease activity in VAM plants suggested that these contents were the main bio-indicators of plants under biotic stress. The decline in protein may be due to the degradation of these contents, which were later converted into dextrose, where they could easily be absorbed during the period of symbiosis. A mechanism of C chemisorption in relation to the physiology and morphology of the plant was discussed.
Non-specific effects of vaccines: plausible and potentially important, but implications uncertain.
Pollard, Andrew J; Finn, Adam; Curtis, Nigel
2017-11-01
Non-specific effects (NSE) or heterologous effects of vaccines are proposed to explain observations in some studies that certain vaccines have an impact beyond the direct protection against infection with the specific pathogen for which the vaccines were designed. The importance and implications of such effects remain controversial. There are several known immunological mechanisms which could lead to NSE, since it is widely recognised that the generation of specific immunity is initiated by non-specific innate immune mechanisms that may also have wider effects on adaptive immune function. However, there are no published studies that demonstrate a mechanistic link between such immunological phenomena and clinically relevant NSE in humans. While it is highly plausible that some vaccines do have NSE, their magnitude and duration, and thus importance, remain uncertain. Although the WHO recently concluded that current evidence does not justify changes to immunisation policy, further studies of sufficient size and quality are needed to assess the importance of NSE for all-cause mortality. This could provide insights into vaccine immunobiology with important implications for infant health and survival. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Delidovich, I. V.; Taran, O. P.; Simonov, A. N.; Matvienko, L. G.; Parmon, V. N.
2011-08-01
The article analyzes new and previously reported data on several catalytic and photochemical processes yielding biologically important molecules. UV-irradiation of formaldehyde aqueous solution yields acetaldehyde, glyoxal, glycolaldehyde and glyceraldehyde, which can serve as precursors of more complex biochemically relevant compounds. Photolysis of an aqueous solution of acetaldehyde and ammonium nitrate results in the formation of alanine and pyruvic acid. Dehydration of glyceraldehyde catalyzed by zeolite HZSM-5-17 yields pyruvaldehyde. Monosaccharides are formed in the course of the phosphate-catalyzed aldol condensation reactions of glycolaldehyde, glyceraldehyde and formaldehyde. The possibility of the direct synthesis of tetroses, keto- and aldo-pentoses from pure formaldehyde due to the combination of the photochemical production of glycolaldehyde and phosphate-catalyzed carbohydrate chain growth is demonstrated. Erythrulose and 3-pentulose are the main products of such combined synthesis with selectivity up to 10%. Biologically relevant aldotetroses, aldo- and ketopentoses are more resistant to photochemical destruction owing to stabilization in hemiacetal cyclic forms. They are formed as products of isomerization of erythrulose and 3-pentulose. The conjugation of the concerned reactions results in a plausible route to the formation of sugars, amino and organic acids from formaldehyde and ammonia under presumed 'prebiotic' conditions.
Instrument-induced spatial crosstalk deconvolution algorithm
Wright, Valerie G.; Evans, Nathan L., Jr.
1986-01-01
An algorithm has been developed which reduces the effects of (deconvolves) instrument-induced spatial crosstalk in satellite image data by several orders of magnitude where highly precise radiometry is required. The algorithm is based upon radiance transfer ratios, which are defined as the fractional bilateral exchange of energy between pixels A and B.
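The transfer-ratio idea can be illustrated with a minimal sketch. This is a hypothetical one-dimensional pixel row with a single leakage fraction `alpha`, not the algorithm from the paper, which operates on calibrated satellite imagery with measured per-pixel-pair ratios:

```python
def apply_crosstalk(true_radiance, alpha):
    """Simulate instrument crosstalk: each pixel exchanges a fraction
    alpha of its energy bilaterally with each adjacent pixel."""
    n = len(true_radiance)
    measured = []
    for i in range(n):
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < n]
        leak_out = alpha * len(neighbors) * true_radiance[i]
        leak_in = alpha * sum(true_radiance[j] for j in neighbors)
        measured.append(true_radiance[i] - leak_out + leak_in)
    return measured


def deconvolve(measured, alpha, iters=300):
    """Recover an estimate of the true radiance by fixed-point iteration
    on the crosstalk model (converges for small alpha)."""
    n = len(measured)
    est = list(measured)
    for _ in range(iters):
        new = []
        for i in range(n):
            neighbors = [j for j in (i - 1, i + 1) if 0 <= j < n]
            out = alpha * len(neighbors) * est[i]
            inn = alpha * sum(est[j] for j in neighbors)
            new.append(measured[i] + out - inn)
        est = new
    return est
```

Applying `deconvolve` to the output of `apply_crosstalk` recovers the original radiances to within numerical tolerance, which is the round-trip property any such deconvolution must satisfy.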
Mohammadhassanzadeh, Hossein; Van Woensel, William; Abidi, Samina Raza; Abidi, Syed Sibte Raza
2017-01-01
Capturing complete medical knowledge is challenging-often due to incomplete patient Electronic Health Records (EHR), but also because of valuable, tacit medical knowledge hidden away in physicians' experiences. To extend the coverage of incomplete medical knowledge-based systems beyond their deductive closure, and thus enhance their decision-support capabilities, we argue that innovative, multi-strategy reasoning approaches should be applied. In particular, plausible reasoning mechanisms apply patterns from human thought processes, such as generalization, similarity and interpolation, based on attributional, hierarchical, and relational knowledge. Plausible reasoning mechanisms include inductive reasoning, which generalizes the commonalities among the data to induce new rules, and analogical reasoning, which is guided by data similarities to infer new facts. By further leveraging rich, biomedical Semantic Web ontologies to represent medical knowledge, both known and tentative, we increase the accuracy and expressivity of plausible reasoning, and cope with issues such as data heterogeneity, inconsistency and interoperability. In this paper, we present a Semantic Web-based, multi-strategy reasoning approach, which integrates deductive and plausible reasoning and exploits Semantic Web technology to solve complex clinical decision support queries. We evaluated our system using a real-world medical dataset of patients with hepatitis, from which we randomly removed different percentages of data (5%, 10%, 15%, and 20%) to reflect scenarios with increasing amounts of incomplete medical knowledge. To increase the reliability of the results, we generated 5 independent datasets for each percentage of missing values, which resulted in 20 experimental datasets (in addition to the original dataset). The results show that plausibly inferred knowledge extends the coverage of the knowledge base by, on average, 2%, 7%, 12%, and 16% for datasets with, respectively, 5%, 10%, 15%, and 20% of missing values.
Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko
2013-01-01
In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
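The replica-exchange method mentioned above can be sketched with a toy model. This is a one-dimensional double-well potential rather than a biomolecular force field, and the temperature ladder and step sizes are arbitrary illustrative choices; the acceptance rules (Metropolis within each replica, the standard exp[(β_i − β_j)(E_i − E_j)] criterion for neighbor swaps) are the general ones:

```python
import math
import random


def energy(x):
    """Toy double-well potential with minima at x = -1 and x = +1."""
    return (x * x - 1.0) ** 2


def replica_exchange(temps, steps=500, seed=1):
    """Run parallel Metropolis walkers at several temperatures,
    periodically attempting swaps between neighboring replicas."""
    rng = random.Random(seed)
    xs = [0.0] * len(temps)
    for _ in range(steps):
        # Metropolis update within each replica at its own temperature.
        for i, T in enumerate(temps):
            prop = xs[i] + rng.uniform(-0.5, 0.5)
            dE = energy(prop) - energy(xs[i])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                xs[i] = prop
        # Attempt swaps between neighboring temperatures.
        for i in range(len(temps) - 1):
            delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (
                energy(xs[i + 1]) - energy(xs[i]))
            if delta <= 0 or rng.random() < math.exp(-delta):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs
```

The swaps let a low-temperature replica that is stuck in one well inherit configurations explored at high temperature, which is the mechanism by which the method escapes local energy minima.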
Ion exchange equilibrium constants
Marcus, Y
2013-01-01
Ion Exchange Equilibrium Constants focuses on the test-compilation of equilibrium constants for ion exchange reactions. The book first underscores the scope of the compilation, equilibrium constants, symbols used, and arrangement of the table. The manuscript then presents the table of equilibrium constants, including polystyrene sulfonate cation exchanger, polyacrylate cation exchanger, polymethacrylate cation exchanger, polystyrene phosphate cation exchanger, and zirconium phosphate cation exchanger. The text highlights zirconium oxide anion exchanger, zeolite type 13Y cation exchanger, and
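For a uni-univalent exchange A⁺(aq) + BR ⇌ AR + B⁺(aq), the kind of equilibrium constant such compilations tabulate follows from simple mass action. A minimal sketch, with illustrative values rather than data from the compilation:

```python
def ion_exchange_constant(x_AR, c_B, x_BR, c_A):
    """Mass-action equilibrium quotient for A+(aq) + BR <=> AR + B+(aq),
    where x_* are resin-phase equivalent fractions and c_* are
    solution-phase concentrations (activity corrections ignored)."""
    return (x_AR * c_B) / (x_BR * c_A)
```

With equal products in numerator and denominator, e.g. `ion_exchange_constant(0.6, 0.04, 0.4, 0.06)`, the quotient is 1.0, meaning the exchanger shows no preference between the two counter-ions at that composition.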
International Nuclear Information System (INIS)
Bennett, J.C.
1975-01-01
A heat exchanger, such as one forming part of a power steam boiler, is made up of a number of tubes that may be arranged in many different ways, and it is necessary that the tubes be properly supported. The means by which the tubes are secured must be as simple as possible so as to facilitate construction and must be able to continue to function effectively under the varying operating conditions to which the heat exchanger is subject. The arrangement described is designed to meet these requirements in an improved way. The tubes are secured to a member extending past several tubes and abutment means are provided. At least some of the abutment means comprise two abutment pieces and a wedge secured to the supporting member, which acts on these pieces to maintain the engagement. (U.K.)
Energy Technology Data Exchange (ETDEWEB)
Schmidt, E L; Eisenmann, G; Hahne, E [Stuttgart Univ. (TH) (F.R. Germany). Inst. fuer Thermodynamik und Waermetechnik
1976-04-01
A survey is presented on publications on design, heat transfer, form factors, free convection, evaporation processes, cooling towers, condensation, annular gap, cross-flowed cylinders, axial flow through a bundle of tubes, roughnesses, convective heat transfer, loss of pressure, radiative heat transfer, finned surfaces, spiral heat exchangers, curved pipes, regenerators, heat pipes, heat carriers, scaling, heat recovery systems, materials selection, strength calculation, control, instabilities, automation of circuits, operational problems and optimization.
International Nuclear Information System (INIS)
Wiebe, Keith; Islam, Shahnila; Mason-D’Croz, Daniel; Robertson, Richard; Robinson, Sherman; Lotze-Campen, Hermann; Biewald, Anne; Bodirsky, Benjamin; Müller, Christoph; Popp, Alexander; Sands, Ronald; Tabeau, Andrzej; Van Meijl, Hans; Van der Mensbrugghe, Dominique; Kavallari, Aikaterini; Willenbockel, Dirk
2015-01-01
Previous studies have combined climate, crop and economic models to examine the impact of climate change on agricultural production and food security, but results have varied widely due to differences in models, scenarios and input data. Recent work has examined (and narrowed) these differences through systematic model intercomparison using a high-emissions pathway to highlight the differences. This paper extends that analysis to explore a range of plausible socioeconomic scenarios and emission pathways. Results from multiple climate and economic models are combined to examine the global and regional impacts of climate change on agricultural yields, area, production, consumption, prices and trade for coarse grains, rice, wheat, oilseeds and sugar crops to 2050. We find that climate impacts on global average yields, area, production and consumption are similar across shared socioeconomic pathways (SSP 1, 2 and 3, as we implement them based on population, income and productivity drivers), except when changes in trade policies are included. Impacts on trade and prices are higher for SSP 3 than SSP 2, and higher for SSP 2 than for SSP 1. Climate impacts for all variables are similar across low to moderate emissions pathways (RCP 4.5 and RCP 6.0), but increase for a higher emissions pathway (RCP 8.5). It is important to note that these global averages may hide regional variations. Projected reductions in agricultural yields due to climate change by 2050 are larger for some crops than those estimated for the past half century, but smaller than projected increases to 2050 due to rising demand and intrinsic productivity growth. Results illustrate the sensitivity of climate change impacts to differences in socioeconomic and emissions pathways. Yield impacts increase at high emissions levels and vary with changes in population, income and technology, but are reduced in all cases by endogenous changes in prices and other variables. (paper)
Evaporative water loss is a plausible explanation for mortality of bats from white-nose syndrome.
Willis, Craig K R; Menzies, Allyson K; Boyles, Justin G; Wojciechowski, Michal S
2011-09-01
White-nose syndrome (WNS) has caused alarming declines of North American bat populations in the 5 years since its discovery. Affected bats appear to starve during hibernation, possibly because of disruption of normal cycles of torpor and arousal. The importance of hydration state and evaporative water loss (EWL) for influencing the duration of torpor bouts in hibernating mammals recently led to "the dehydration hypothesis," that cutaneous infection of the wing membranes of bats with the fungus Geomyces destructans causes dehydration which in turn, increases arousal frequency during hibernation. This hypothesis predicts that uninfected individuals of species most susceptible to WNS, like little brown bats (Myotis lucifugus), exhibit high rates of EWL compared to less susceptible species. We tested the feasibility of this prediction using data from the literature and new data quantifying EWL in Natterer's bats (Myotis nattereri), a species that is, like other European bats, sympatric with G. destructans but does not appear to suffer significant mortality from WNS. We found that little brown bats exhibited significantly higher rates of normothermic EWL than did other bat species for which comparable EWL data are available. We also found that Natterer's bats exhibited significantly lower rates of EWL, in both wet and dry air, compared with values predicted for little brown bats exposed to identical relative humidity (RH). We used a population model to show that the increase in EWL required to cause the pattern of mortality observed for WNS-affected little brown bats was small, equivalent to a solitary bat hibernating exposed to RH of ∼95%, or clusters hibernating in ∼87% RH, as opposed to typical near-saturation conditions. Both of these results suggest the dehydration hypothesis is plausible and worth pursuing as a possible explanation for mortality of bats from WNS.
Directory of Open Access Journals (Sweden)
Szymon Stoma
2008-10-01
Plants continuously generate new organs through the activity of populations of stem cells called meristems. The shoot apical meristem initiates leaves, flowers, and lateral meristems in highly ordered, spiralled, or whorled patterns via a process called phyllotaxis. It is commonly accepted that the active transport of the plant hormone auxin plays a major role in this process. Current hypotheses propose that cellular hormone transporters of the PIN family would create local auxin maxima at precise positions, which in turn would lead to organ initiation. To explain how auxin transporters could create hormone fluxes to distinct regions within the plant, different concepts have been proposed. A major hypothesis, canalization, proposes that the auxin transporters act by amplifying and stabilizing existing fluxes, which could be initiated, for example, by local diffusion. This convincingly explains the organised auxin fluxes during vein formation, but for the shoot apical meristem a second hypothesis was proposed, where the hormone would be systematically transported towards the areas with the highest concentrations. This implies the coexistence of two radically different mechanisms for PIN allocation in the membrane, one based on flux sensing and the other on local concentration sensing. Because these patterning processes require the interaction of hundreds of cells, it is impossible to estimate on a purely intuitive basis if a particular scenario is plausible or not. Therefore, computational modelling provides a powerful means to test this type of complex hypothesis. Here, using a dedicated computer simulation tool, we show that a flux-based polarization hypothesis is able to explain auxin transport at the shoot meristem as well, thus providing a unifying concept for the control of auxin distribution in the plant. Further experiments are now required to distinguish between flux-based polarization and other hypotheses.
Abdellah, Marwan
2017-02-15
Background: We present a visualization pipeline capable of accurate rendering of highly scattering fluorescent neocortical neuronal models. The pipeline is mainly developed to serve the computational neurobiology community. It allows the scientists to visualize the results of their virtual experiments that are performed in computer simulations, or in silico. The impact of the presented pipeline opens novel avenues for assisting the neuroscientists to build biologically accurate models of the brain. These models result from computer simulations of physical experiments that use fluorescence imaging to understand the structural and functional aspects of the brain. Due to the limited capabilities of the current visualization workflows to handle fluorescent volumetric datasets, we propose a physically-based optical model that can accurately simulate light interaction with fluorescent-tagged scattering media based on the basic principles of geometric optics and Monte Carlo path tracing. We also develop an automated and efficient framework for generating dense fluorescent tissue blocks from a neocortical column model that is composed of approximately 31000 neurons. Results: Our pipeline is used to visualize a virtual fluorescent tissue block of 50 μm³ that is reconstructed from the somatosensory cortex of juvenile rat. The fluorescence optical model is qualitatively analyzed and validated against experimental emission spectra of different fluorescent dyes from the Alexa Fluor family. Conclusion: We discussed a scientific visualization pipeline for creating images of synthetic neocortical neuronal models that are tagged virtually with fluorescent labels on a physically-plausible basis. The pipeline is applied to analyze and validate simulation data generated from neuroscientific in silico experiments.
A plausible (overlooked) super-luminous supernova in the Sloan digital sky survey stripe 82 data
International Nuclear Information System (INIS)
Kostrzewa-Rutkowska, Zuzanna; Kozłowski, Szymon; Wyrzykowski, Łukasz; Djorgovski, S. George; Mahabal, Ashish A.; Glikman, Eilat; Koposov, Sergey
2013-01-01
We present the discovery of a plausible super-luminous supernova (SLSN), found in the archival data of Sloan Digital Sky Survey (SDSS) Stripe 82, called PSN 000123+000504. The supernova (SN) peaked at m_g < 19.4 mag in the second half of 2005 September, but was missed by the real-time SN hunt. The observed part of the light curve (17 epochs) showed that the rise to the maximum took over 30 days, while the decline time lasted at least 70 days (observed frame), closely resembling other SLSNe of SN 2007bi type. The spectrum of the host galaxy reveals a redshift of z = 0.281 and a distance modulus of μ = 40.77 mag. Combining this information with the SDSS photometry, we found the host galaxy to be an LMC-like irregular dwarf galaxy with an absolute magnitude of M_B = -18.2 ± 0.2 mag and an oxygen abundance of 12 + log[O/H] = 8.3 ± 0.2; hence, the SN peaked at M_g < -21.3 mag. Our SLSN follows the relation for the most energetic/super-luminous SNe exploding in low-metallicity environments, but we found no clear evidence for SLSNe to explode in low-luminosity (dwarf) galaxies only. The available information on the PSN 000123+000504 light curve suggests the magnetar-powered model as a likely scenario of this event. This SLSN is a new addition to a quickly growing family of super-luminous SNe.
A plausible neural circuit for decision making and its formation based on reinforcement learning.
Wei, Hui; Dai, Dawei; Bu, Yijie
2017-06-01
A human's, or lower insect's, behavior is dominated by its nervous system. Each stable behavior has its own inner steps and control rules, and is regulated by a neural circuit. Understanding how the brain influences perception, thought, and behavior is a central mandate of neuroscience. The phototactic flight of insects is a widely observed deterministic behavior. Since its movement is not stochastic, the behavior should be dominated by a neural circuit. Based on the basic firing characteristics of biological neurons and the neural circuit's constitution, we designed a plausible neural circuit for this phototactic behavior from a logic perspective. The circuit's output layer, which generates a stable spike firing rate to encode flight commands, controls the insect's angular velocity when flying. The firing pattern and connection type of excitatory and inhibitory neurons are considered in this computational model. We simulated the circuit's information processing using a distributed PC array, and used the real-time average firing rate of output neuron clusters to drive a flying behavior simulation. In this paper, we also explored how a correct neural decision circuit is generated from a network flow view through a bee's behavior experiment based on the reward and punishment feedback mechanism. The significance of this study: firstly, we designed a neural circuit to achieve the behavioral logic rules by strictly following the electrophysiological characteristics of biological neurons and anatomical facts. Secondly, our circuit's generality permits the design and implementation of behavioral logic rules based on the most general information processing and activity mode of biological neurons. Thirdly, through computer simulation, we achieved new understanding about the cooperative condition upon which multiple neurons achieve some behavioral control. Fourthly, this study aims at understanding the information encoding mechanism and how neural circuits achieve behavior control.
Pezdek, Kathy; Blandon-Gitlin, Iris; Lam, Shirley; Hart, Rhiannon Ellis; Schooler, Jonathan W
2006-12-01
False memories are more likely to be planted for plausible than for implausible events, but does just knowing about an implausible event make individuals more likely to think that the event happened to them? Two experiments assessed the independent contributions of plausibility and background knowledge to planting false beliefs. In Experiment 1, subjects rated 20 childhood events as to the likelihood of each event having happened to them. The list included the implausible target event "received an enema," a critical target event of Pezdek, Finger, and Hodge (1997). Two weeks later, subjects were presented with (1) information regarding the high prevalence rate of enemas; (2) background information on how to administer an enema; (3) neither type of information; or (4) both. Immediately or 2 weeks later, they rated the 20 childhood events again. Only plausibility significantly increased occurrence ratings. In Experiment 2, the target event was changed from "barium enema administered in a hospital" to "home enema for constipation"; significant effects of both plausibility and background knowledge resulted. The results suggest that providing background knowledge can increase beliefs about personal events, but that its impact is limited by the extent of the individual's familiarity with the context of the suggested target event.
Graph-drawing algorithms geometries versus molecular mechanics in fullerenes
Kaufman, M.; Pisanski, T.; Lukman, D.; Borštnik, B.; Graovac, A.
1996-09-01
The algorithms of Kamada-Kawai (KK) and Fruchterman-Reingold (FR) have been recently generalized (Pisanski et al., Croat. Chem. Acta 68 (1995) 283) in order to draw molecular graphs in three-dimensional space. The quality of KK and FR geometries is studied here by comparing them with the molecular mechanics (MM) and the adjacency matrix eigenvectors (AME) algorithm geometries. In order to compare different layouts of the same molecule, an appropriate method has been developed. Its application to a series of experimentally detected fullerenes indicates that the KK, FR and AME algorithms are able to reproduce plausible molecular geometries.
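The KK and FR algorithms discussed above are force-directed layout methods. As a rough illustration of the underlying idea (not the three-dimensional generalization of Pisanski et al.), a minimal Fruchterman-Reingold sketch in 3-D might look like the following; the function name and parameter values are illustrative choices:

```python
import numpy as np

def fruchterman_reingold_3d(adj, iters=200, seed=0):
    """Minimal 3-D Fruchterman-Reingold layout sketch.

    adj: symmetric 0/1 adjacency matrix (n x n, zero diagonal).
    Returns an (n, 3) array of vertex coordinates.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    pos = rng.standard_normal((n, 3))
    k = (1.0 / n) ** (1.0 / 3.0)   # ideal edge length for unit volume
    t = 0.1                        # "temperature": maximum step size
    for _ in range(iters):
        disp = np.zeros_like(pos)
        for i in range(n):
            delta = pos[i] - pos               # vectors toward vertex i
            dist = np.linalg.norm(delta, axis=1)
            dist[i] = 1.0                      # avoid division by zero
            # repulsion between all pairs: magnitude k^2 / d
            disp[i] += (delta / dist[:, None] ** 2 * k ** 2).sum(axis=0)
            # attraction along edges: magnitude d^2 / k
            disp[i] -= ((dist[:, None] / k) * (adj[i][:, None] * delta)).sum(axis=0)
        step = np.linalg.norm(disp, axis=1, keepdims=True)
        step[step == 0] = 1.0
        pos += disp / step * np.minimum(step, t)
        t *= 0.98                              # cooling schedule
    return pos
```

Comparing layouts produced this way against MM or AME geometries would then reduce to superimposing the resulting coordinate sets, as the developed comparison method in the abstract suggests.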
International Nuclear Information System (INIS)
Creutz, M.
1987-11-01
A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3).
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
International Nuclear Information System (INIS)
1971-01-01
The Agency has a statutory mandate to foster 'the exchange of scientific and technical information on the peaceful uses of atomic energy'. The prime responsibility for this work within the Agency lies with the Division of Scientific and Technical Information, a part of the Department of Technical Operations. The Division accomplishes its task by holding conferences and symposia (Scientific Conferences Section), through the Agency Library, by publishing scientific journals, and through the International Nuclear Information System (INIS). The Computer Section of the Division, which offers services to the Agency as a whole, provides resources for the automation of data storage and retrieval. (author)
Algorithms as fetish: Faith and possibility in algorithmic work
Directory of Open Access Journals (Sweden)
Suzanne L Thomas
2018-01-01
Algorithms are powerful because we invest in them the power to do things. With such promise, they can transform the ordinary, say snapshots along a robotic vacuum cleaner’s route, into something much more, such as a clean home. Echoing David Graeber’s revision of fetishism, we argue that this easy slip from technical capabilities to broader claims betrays not the “magic” of algorithms but rather the dynamics of their exchange. Fetishes are not indicators of false thinking, but social contracts in material form. They mediate emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge. Drawing primarily on 2016 ethnographic research with computer vision professionals, we show how faith in what algorithms can do shapes the social encounters and exchanges of their production. By analyzing algorithms through the lens of fetishism, we can see the social and economic investment in some people’s labor over others. We also see everyday opportunities for social creativity and change. We conclude that what is problematic about algorithms is not their fetishization but instead their stabilization into full-fledged gods and demons – the more deserving objects of critique.
Sobreira, Nara L M; Arachchi, Harindra; Buske, Orion J; Chong, Jessica X; Hutton, Ben; Foreman, Julia; Schiettecatte, François; Groza, Tudor; Jacobsen, Julius O B; Haendel, Melissa A; Boycott, Kym M; Hamosh, Ada; Rehm, Heidi L
2017-10-18
In well over half of the individuals with rare disease who undergo clinical or research next-generation sequencing, the responsible gene cannot be determined. Some reasons for this relatively low yield include unappreciated phenotypic heterogeneity; locus heterogeneity; somatic and germline mosaicism; variants of uncertain functional significance; technically inaccessible areas of the genome; incorrect mode of inheritance investigated; and inadequate communication between clinicians and basic scientists with knowledge of particular genes, proteins, or biological systems. To facilitate such communication and improve the search for patients or model organisms with similar phenotypes and variants in specific candidate genes, we have developed the Matchmaker Exchange (MME). MME was created to establish a federated network connecting databases of genomic and phenotypic data using a common application programming interface (API). To date, seven databases can exchange data using the API (GeneMatcher, PhenomeCentral, DECIPHER, MyGene2, matchbox, Australian Genomics Health Alliance Patient Archive, and Monarch Initiative; the latter included for model organism matching). This article guides usage of the MME for rare disease gene discovery. © 2017 by John Wiley & Sons, Inc.
Energy Technology Data Exchange (ETDEWEB)
Harada, F; Yanagida, T; Fujie, K; Futawatari, H
1975-04-30
The purpose of this construction is the improvement of heat transfer in finned tube heat exchangers, and therefore the improvement of their efficiency or output per unit volume. This is achieved by preventing the formation of flow boundary layers in the gaseous fluid. This effect always occurs on flow over smooth adjacent laminae, especially if these have pipes carrying liquid passing through them; it worsens the heat transfer of such a boundary layer considerably compared to that in the turbulent range. The fins, which have several rows of heat exchange tubes passing through them, are fixed at a small spacing on these tubes. The fins have slots cut in them by pressing or punching, where the pressed-out material remains as a web, which runs parallel to the plane of the fin and at a small distance from it. These webs and slots are arranged radially around every tube hole, e.g. 6 in number. For a suitably small tube spacing, two adjacent tubes opposite each other have one common slot. Many variants of such slot arrangements are illustrated.
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
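The kinematic condition mentioned above, choosing steering angles so that the center of rotation sits at a desired point, can be sketched for a 4WS bicycle approximation. This is an illustrative reconstruction under simplifying assumptions (planar kinematics, single-track model), not the paper's controller; the function and its parameter names are hypothetical:

```python
import math

def four_ws_angles(a, b, x0, R):
    """Kinematic 4WS steering sketch (bicycle approximation).

    a, b  : distances from the CG to the front/rear axle (m)
    x0, R : desired rotation center, x0 m ahead of the CG and
            R m to its left (R > 0 means a left turn)
    Returns (delta_front, delta_rear) in radians such that the
    perpendiculars to both wheel headings intersect at (x0, R).
    """
    # Perpendicular from the front wheel at (a, 0) passes through
    # (x0, R) when tan(delta_f) = (a - x0) / R, and similarly for
    # the rear wheel at (-b, 0).
    delta_f = math.atan2(a - x0, R)
    delta_r = math.atan2(-b - x0, R)
    return delta_f, delta_r
```

With the center abreast of the CG (x0 = 0) the two axles steer in opposite phase, the classic low-speed 4WS behavior; moving x0 forward shifts the turning geometry toward TWS-like steering.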
Effective Teacher Practice on the Plausibility of Human-Induced Climate Change
Niepold, F.; Sinatra, G. M.; Lombardi, D.
2013-12-01
Climate change education programs in the United States seek to promote a deeper understanding of the science of climate change, behavior change and stewardship, and support informed decision making by individuals, organizations, and institutions--all of which are summarized under the term 'climate literacy.' The ultimate goal of climate literacy is to enable actors to address climate change, both in terms of stabilizing and reducing emissions of greenhouse gases, but also an increased capacity to prepare for the consequences and opportunities of climate change. However, the long-term nature of climate change and the required societal response involve changing students' ideas about controversial scientific issues, which presents unique challenges for educators (Lombardi & Sinatra, 2010; Sinatra & Mason, 2008). This session will explore how the United States educational efforts focus on three distinct, but related, areas: the science of climate change, the human-climate interaction, and using climate education to promote informed decision making. Each of these approaches is represented in the Atlas of Science Literacy (American Association for the Advancement of Science, 2007) and in the conceptual framework for science education developed at the National Research Council (NRC) in 2012. Instruction to develop these fundamental thinking skills (e.g., critical evaluation and plausibility reappraisal) has been called for by the Next Generation Science Standards (NGSS) (Achieve, 2013), an innovative and research-based way to address climate change education within the decentralized U.S. education system. The promise of the NGSS is that students will have more time to build mastery of the subjects, but the form of that instructional practice has been shown to be critical. Research has shown that effective instructional activities that promote evaluation of evidence improve students' understanding and acceptance toward the scientifically accepted model of human
Baldwin, Darryl Dean; Willi, Martin Leo; Fiveland, Scott Byron; Timmons, Kristine Ann
2010-12-14
A segmented heat exchanger system for transferring heat energy from an exhaust fluid to a working fluid. The heat exchanger system may include a first heat exchanger for receiving incoming working fluid and the exhaust fluid. The working fluid and exhaust fluid may travel through at least a portion of the first heat exchanger in a parallel flow configuration. In addition, the heat exchanger system may include a second heat exchanger for receiving working fluid from the first heat exchanger and exhaust fluid from a third heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the second heat exchanger in a counter flow configuration. Furthermore, the heat exchanger system may include a third heat exchanger for receiving working fluid from the second heat exchanger and exhaust fluid from the first heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the third heat exchanger in a parallel flow configuration.
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
Mohammadhassanzadeh, Hossein; Van Woensel, William; Abidi, Samina Raza; Abidi, Syed Sibte Raza
2017-01-01
Background: Capturing complete medical knowledge is challenging, often due to incomplete patient Electronic Health Records (EHR), but also because of valuable, tacit medical knowledge hidden away in physicians' experiences. To extend the coverage of incomplete medical knowledge-based systems beyond their deductive closure, and thus enhance their decision-support capabilities, we argue that innovative, multi-strategy reasoning approaches should be applied. In particular, plausible reasoning mech...
International Nuclear Information System (INIS)
Rodier, C.J.; Johnston, R.A.
2002-01-01
A sensitivity analysis of plausible errors in population, employment, fuel price, and income projections is conducted using the travel demand and emissions models of the Sacramento, CA, USA, region for their transportation plan. The results of the analyses indicate that plausible error ranges for household income and fuel prices are not a significant source of uncertainty with respect to the region's travel demand and emissions projections. However, plausible errors in population and employment projections (within approximately one standard deviation) may result in the region's transportation plan not meeting the conformity test for oxides of nitrogen (NOx) in the year 2005 (i.e., an approximately 16% probability). This outcome is also possible in the year 2015 but less likely (within approximately two standard deviations, or a 2.5% probability). Errors in socioeconomic projections are only one of many sources of error in travel demand and emissions models. These results have several policy implications. First, regions like Sacramento that meet their conformity tests by a very small margin should rethink new highway investment and consider contingency transportation plans that incorporate more aggressive emissions reduction policies. Second, regional transportation planning agencies should conduct sensitivity analyses as part of their conformity analysis to make explicit significant uncertainties in the methods and to identify the probability of their transportation plan not conforming. Third, the US Environmental Protection Agency (EPA) should clarify the interpretation of 'demonstrate' conformity of transportation plans; that is, specify the level of certainty that it considers a sufficient demonstration of conformity. (author)
Directory of Open Access Journals (Sweden)
Tobias Lunt
2016-01-01
Climate risks pose a threat to the function of the global food system and therefore also a hazard to the global financial sector, the stability of governments, and the food security and health of the world’s population. This paper presents a method to assess plausible impacts of an agricultural production shock and potential materiality for global insurers. A hypothetical, near-term, plausible, extreme scenario was developed based upon modules of historical agricultural production shocks, linked under a warm-phase El Niño-Southern Oscillation (ENSO) meteorological framework. The scenario included teleconnected floods and droughts in disparate agricultural production regions around the world, as well as plausible, extreme biotic shocks. In this scenario, global crop yield declines of 10% for maize, 11% for soy, 7% for wheat and 7% for rice result in quadrupled commodity prices and commodity stock fluctuations, civil unrest, significant negative humanitarian consequences and major financial losses worldwide. This work illustrates a need for the scientific community to partner across sectors and industries towards better-integrated global data, modeling and analytical capacities, to better respond to and prepare for concurrent agricultural failure. Governments, humanitarian organizations and the private sector collectively may recognize significant benefits from more systematic assessment of exposure to agricultural climate risk.
A plausible congestion management scheme for the internal electricity market of the European Union
International Nuclear Information System (INIS)
Perez-Arriaga, I.J.; Olmos, L.
2005-01-01
This paper proposes a scheme for the management of network congestion in the Internal Electricity Market (IEM) of the European Union. This scheme tries to combine the rigor in the treatment of the energy and transmission capacity transactions with the flexibility and pragmatism that are necessary to make the scheme compatible with the current diversity of regulatory approaches and market structures in the Member States participating in the IEM. First, a reference scheme is presented with a complete formulation that jointly deals with the energy and capacity markets. Because of the implementation difficulties of this conceptually ideal approach, a more pragmatic scheme is proposed instead. The core of this scheme is an explicit auction mechanism that must be run prior to any short-term (daily, typically) energy markets. In this auction, where only transmission capacity is traded, both bilateral contracts and energy bids to Power Exchanges can participate in order to acquire the capacity that is necessary to carry out their transactions. Some technical issues related to the practical implementation of the proposed approach are also examined; these include market liquidity, the financial or physical nature of the long-term contracts, the potential problems of 'slicing' transmission capacity and the allocation of congestion rents. Market power issues are ignored. (author)
A plausible congestion management scheme for the internal electricity market of the European Union
Energy Technology Data Exchange (ETDEWEB)
Perez-Arriaga, I.J.; Olmos, L. [Universidad Pontificia Comillas, Madrid (Spain). Instituto de Investigacion Tecnologica
2005-06-01
This paper proposes a scheme for the management of network congestion in the Internal Electricity Market (IEM) of the European Union. This scheme tries to combine the rigor in the treatment of the energy and transmission capacity transactions with the flexibility and pragmatism that are necessary to make the scheme compatible with the current diversity of regulatory approaches and market structures in the Member States participating in the IEM. First, a reference scheme is presented with a complete formulation that jointly deals with the energy and capacity markets. Because of the implementation difficulties of this conceptually ideal approach, a more pragmatic scheme is proposed instead. The core of this scheme is an explicit auction mechanism that must be run prior to any short-term (daily, typically) energy markets. In this auction, where only transmission capacity is traded, both bilateral contracts and energy bids to Power Exchanges can participate in order to acquire the capacity that is necessary to carry out their transactions. Some technical issues related to the practical implementation of the proposed approach are also examined; these include market liquidity, the financial or physical nature of the long-term contracts, the potential problems of 'slicing' transmission capacity and the allocation of congestion rents. Market power issues are ignored. (author)
Casanova, Henri; Robert, Yves
2008-01-01
…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n^2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
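The n(n + 1)/2 packing mentioned above can be illustrated with a small indexing helper. This is the plain column-packed lower triangle (a simplification; the authors' block hybrid format rearranges this layout for cache efficiency):

```python
def packed_index(i, j, n):
    """Index into a lower-triangular matrix packed by columns.

    Column j (0-based) stores rows j..n-1 contiguously, so the whole
    triangle fits in n*(n+1)//2 entries instead of n*n.  Column j
    starts at offset j*n - j*(j-1)//2 (the entries of columns 0..j-1).
    """
    assert i >= j, "only the lower triangle is stored"
    col_start = j * n - j * (j - 1) // 2
    return col_start + (i - j)
```

Enumerating the triangle column by column visits packed positions 0, 1, ..., n(n+1)/2 - 1 exactly once, which is what makes the in-place rearrangement subroutines in the abstract possible.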
Hayn, Dieter; Walch, Harald; Stieg, Jörg; Kreiner, Karl; Ebner, Hubert; Schreier, Günter
2017-01-01
Machine learning algorithms are a promising approach to help physicians to deal with the ever increasing amount of data collected in healthcare each day. However, interpretation of suggestions derived from predictive models can be difficult. The aim of this work was to quantify the influence of a specific feature on an individual decision proposed by a random forest (RF). For each decision tree within the RF, the influence of each feature on a specific decision (FID) was quantified. For each feature, changes in outcome value due to the feature were summarized along the path. Results from all the trees in the RF were statistically merged. The ratio of FID to the respective feature's global importance was calculated (FIDrel). Global feature importance, FID and FIDrel significantly differed, depending on the individual input data. Therefore, we suggest to present the most important features as determined for FID and for FIDrel, whenever results of a RF are visualized. Feature influence on a specific decision can be quantified in RFs. Further studies will be necessary to evaluate our approach in a real world scenario.
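The FID idea, crediting the change in predicted value along the decision path to the feature that caused each split, can be sketched on a single regression tree. The dict-based tree structure and field names below are hypothetical, chosen only to keep the sketch self-contained (a real RF implementation would iterate this over all trees and merge the results statistically, as the abstract describes):

```python
def feature_influence(node, x):
    """Decompose one tree prediction into per-feature contributions.

    Walking the decision path, the change in node mean caused by each
    split is credited to that split's feature, so that
    prediction = root mean + sum of per-feature contributions.
    Internal nodes: {"feature", "threshold", "value", "left", "right"};
    leaves: {"value"} (mean of training targets reaching them).
    """
    contrib = {}
    while "feature" in node:
        child = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
        delta = child["value"] - node["value"]          # effect of this split
        contrib[node["feature"]] = contrib.get(node["feature"], 0.0) + delta
        node = child
    return node["value"], contrib
```

Dividing each contribution by the feature's global importance would then give the relative measure (FIDrel) proposed in the abstract.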
Genetic algorithm for neural networks optimization
Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta
2004-11-01
This paper examines the forecasting performance of multi-layer feed forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.
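Training a neural network's weights with a genetic algorithm, as investigated above, can be sketched in a few lines: a population of weight vectors is evolved by selection, crossover and mutation against a fitness of low prediction error. This is an illustrative toy (tiny 1-hidden-layer net, NumPy instead of MATLAB, invented hyperparameters), not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def net(w, x, hidden=4):
    """Tiny 1-hidden-layer regression net; w is a flat weight vector."""
    w1 = w[:hidden].reshape(1, hidden)           # input -> hidden
    b1 = w[hidden:2 * hidden]
    w2 = w[2 * hidden:3 * hidden].reshape(hidden, 1)
    b2 = w[3 * hidden]                            # hidden -> output
    return np.tanh(x @ w1 + b1) @ w2 + b2

def evolve(x, y, pop_size=40, gens=100, hidden=4, sigma=0.1):
    """Evolve net weights by selection, uniform crossover, mutation."""
    dim = 3 * hidden + 1
    pop = rng.standard_normal((pop_size, dim))
    for _ in range(gens):
        mse = np.array([np.mean((net(w, x, hidden) - y) ** 2) for w in pop])
        parents = pop[np.argsort(mse)[: pop_size // 2]]   # truncation selection
        pa = parents[rng.integers(len(parents), size=pop_size)]
        pb = parents[rng.integers(len(parents), size=pop_size)]
        mask = rng.random((pop_size, dim)) < 0.5          # uniform crossover
        pop = np.where(mask, pa, pb) + sigma * rng.standard_normal((pop_size, dim))
        pop[0] = parents[0]                               # elitism
    mse = np.array([np.mean((net(w, x, hidden) - y) ** 2) for w in pop])
    return pop[np.argmin(mse)]
```

For exchange-rate forecasting the fitness would instead be out-of-sample forecast error on lagged rate series, but the evolutionary loop is the same.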
Directory of Open Access Journals (Sweden)
Christianto V.
2007-04-01
In the light of some recent hypotheses suggesting plausible unification of thermostatistics where Fermi-Dirac, Bose-Einstein and Tsallis statistics become its special subsets, we consider further plausible extension to include non-integer Hausdorff dimension, which becomes realization of fractal entropy concept. In the subsequent section, we also discuss plausible extension of this unified statistics to include anisotropic effect by using quaternion oscillator, which may be observed in the context of Cosmic Microwave Background Radiation. Further observation is of course recommended in order to refute or verify this proposition.
SPEEDUP™ ion exchange column model
International Nuclear Information System (INIS)
Hang, T.
2000-01-01
A transient model to describe the process of loading a solute onto the granular fixed bed in an ion exchange (IX) column has been developed using the SpeedUp™ software package. SpeedUp offers the advantage of smooth integration into other existing SpeedUp flowsheet models. The mathematical algorithm of a porous particle diffusion model was adopted to account for convection, axial dispersion, film mass transfer, and pore diffusion. The method of orthogonal collocation on finite elements was employed to solve the governing transport equations. The model allows the use of a non-linear Langmuir isotherm based on an effective binary ionic exchange process. The SpeedUp column model was tested by comparing to the analytical solutions of three transport problems from the ion exchange literature. In addition, a sample calculation of a train of three crystalline silicotitanate (CST) IX columns in series was made using both the SpeedUp model and Purdue University's VERSE-LC code. All test cases showed excellent agreement between the SpeedUp model results and the test data. The model can be readily used for SuperLig™ ion exchange resins, once the experimental data are complete.
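The non-linear Langmuir isotherm mentioned above relates the equilibrium solid-phase loading to the liquid-phase concentration. A minimal sketch of the isotherm itself (parameter names are generic; the effective binary-exchange calibration of the actual model is not reproduced here):

```python
def langmuir_loading(c, q_max, K):
    """Langmuir isotherm sketch: equilibrium resin loading q(c).

    c     : liquid-phase solute concentration
    q_max : saturation capacity of the resin
    K     : equilibrium (affinity) constant
    q(c) = q_max * K * c / (1 + K * c): linear at low c,
    saturating toward q_max at high c.
    """
    return q_max * K * c / (1.0 + K * c)
```

In a column model such as the one described, this relation supplies the boundary condition coupling the pore-diffusion equations to the bed-scale convection-dispersion equation.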
Randomized Assignments for Barter Exchanges: Fairness vs Efficiency
DEFF Research Database (Denmark)
Fang, Wenyi; Filos-Ratsikas, Aris; Frederiksen, Søren Kristoffer Stiil
2015-01-01
We study fairness and efficiency properties of randomized algorithms for barter exchanges with direct applications to kidney exchange problems. It is well documented that randomization can serve as a tool to ensure fairness among participants. However, in many applications, practical constraints...
An intense Nigerian stock exchange market prediction using logistic ...
African Journals Online (AJOL)
This paper is a continuation of our research work on the Nigerian Stock Exchange Market (NSEM) uncertainties. In our previous work (Magaji et al, 2013) we presented the Naive Bayes and SVM-SMO algorithms as tools for predicting the Nigerian Stock Exchange Market; subsequently we used the same transformed data ...
Trilateral market coupling. Algorithm appendix
International Nuclear Information System (INIS)
2006-03-01
Market Coupling is both a mechanism for matching orders on the exchange and an implicit cross-border capacity allocation mechanism. Market Coupling improves the economic surplus of the coupled markets: the highest purchase orders and the lowest sale orders of the coupled power exchanges are matched, regardless of the area where they have been submitted; matching results depend however on the Available Transfer Capacity (ATC) between the coupled hubs. Market prices and schedules of the day-ahead power exchanges of the several connected markets are simultaneously determined with the use of the Available Transfer Capacity defined by the relevant Transmission System Operators. The transmission capacity is thereby implicitly auctioned and the implicit cost of the transmission capacity from one market to the other is the price difference between the two markets. In particular, if the transmission capacity between two markets is not fully used, there is no price difference between the markets and the implicit cost of the transmission capacity is null. Market coupling relies on the principle that the market with the lowest price exports electricity to the market with the highest price. Two situations may appear: either the Available Transfer Capacity (ATC) is large enough and the prices of both markets are equalized (price convergence), or the ATC is too small and the prices cannot be equalized. The Market Coupling algorithm takes as an input: 1 - The Available Transfer Capacity (ATC) between each area for each flow direction and each Settlement Period of the following day (i.e. for each hour of following day); 2 - The (Block Free) Net Export Curves (NEC) of each market for each hour of the following day, i.e., the difference between the total quantity of Divisible Hourly Bids and the total quantity of Divisible Hourly Offers for each price level. The NEC reflects a market's import or export volume sensitivity to price. 3 - The Block Orders submitted by the participants in
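The price-convergence logic described above can be sketched for two markets with linear net export curves. The linearity and the function below are illustrative simplifications (a real coupling algorithm matches full order books and block orders across several hubs):

```python
def couple_two_markets(s_a, p0_a, s_b, p0_b, atc):
    """Sketch of two-market coupling with linear net-export curves.

    Market m exports NEC_m(p) = s_m * (p - p0_m) at price p, where
    s_m > 0 is the NEC slope and p0_m the isolated clearing price.
    Unconstrained coupling equalises prices; if the implied flow
    exceeds the ATC, the flow is clamped to the ATC and a price
    spread (the implicit per-MWh cost of capacity) appears.
    """
    # unconstrained common price solves NEC_a(p) + NEC_b(p) = 0
    p_star = (s_a * p0_a + s_b * p0_b) / (s_a + s_b)
    flow = s_a * (p_star - p0_a)          # exports of A = imports of B
    if abs(flow) <= atc:
        return p_star, p_star, flow       # price convergence, zero spread
    flow = atc if flow > 0 else -atc      # congested: clamp to ATC
    p_a = p0_a + flow / s_a               # each market clears separately
    p_b = p0_b - flow / s_b
    return p_a, p_b, flow
```

With p0_a = 30, p0_b = 50 and ample capacity, both markets clear at 40 with a flow of 10 from the cheap to the expensive hub; capping the ATC at 4 leaves a 12-unit price spread, which is exactly the congestion rent per unit of capacity.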
Trilateral market coupling. Algorithm appendix
Energy Technology Data Exchange (ETDEWEB)
NONE
2006-03-15
Market Coupling is both a mechanism for matching orders on the exchange and an implicit cross-border capacity allocation mechanism. Market Coupling improves the economic surplus of the coupled markets: the highest purchase orders and the lowest sale orders of the coupled power exchanges are matched, regardless of the area where they have been submitted; matching results depend however on the Available Transfer Capacity (ATC) between the coupled hubs. Market prices and schedules of the day-ahead power exchanges of the several connected markets are simultaneously determined with the use of the Available Transfer Capacity defined by the relevant Transmission System Operators. The transmission capacity is thereby implicitly auctioned and the implicit cost of the transmission capacity from one market to the other is the price difference between the two markets. In particular, if the transmission capacity between two markets is not fully used, there is no price difference between the markets and the implicit cost of the transmission capacity is null. Market coupling relies on the principle that the market with the lowest price exports electricity to the market with the highest price. Two situations may appear: either the Available Transfer Capacity (ATC) is large enough and the prices of both markets are equalized (price convergence), or the ATC is too small and the prices cannot be equalized. The Market Coupling algorithm takes as an input: 1 - The Available Transfer Capacity (ATC) between each area for each flow direction and each Settlement Period of the following day (i.e. for each hour of following day); 2 - The (Block Free) Net Export Curves (NEC) of each market for each hour of the following day, i.e., the difference between the total quantity of Divisible Hourly Bids and the total quantity of Divisible Hourly Offers for each price level. The NEC reflects a market's import or export volume sensitivity to price. 3 - The Block Orders submitted by the
Jager, H.; Klaassen, F.; Durlauf, S.N.; Blume, L.E.
2010-01-01
Currencies can be under severe pressure in the foreign exchange market, but in a fixed (or managed) exchange rate regime that pressure is not fully visible via the change in the exchange rate. Exchange market pressure (EMP) is a concept developed to nevertheless measure the pressure in such cases.
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
A distributed algorithm for machine learning
Chen, Shihong
2018-04-01
This paper considers a distributed learning problem in which a group of machines in a connected network, each learning its own local dataset, aim to reach a consensus at an optimal model, by exchanging information only with their neighbors but without transmitting data. A distributed algorithm is proposed to solve this problem under appropriate assumptions.
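The scheme described above, exchanging models with neighbors rather than data, can be sketched as consensus averaging interleaved with local gradient steps. The least-squares losses, uniform neighbor weights and function below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def decentralized_least_squares(datasets, neighbors, rounds=500, lr=0.1):
    """Sketch of consensus-based distributed learning.

    Each machine i holds (X_i, y_i) and a local model w_i.  Per round
    it averages models with its graph neighbours (no raw data is
    transmitted), then takes a gradient step on its own local
    least-squares loss (1/m)*||X_i w - y_i||^2.
    """
    n = len(datasets)
    d = datasets[0][0].shape[1]
    w = np.zeros((n, d))
    for _ in range(rounds):
        # mixing step: uniform average over self + neighbours
        mixed = np.array([(w[i] + sum(w[j] for j in neighbors[i]))
                          / (1 + len(neighbors[i])) for i in range(n)])
        # local gradient step on each machine's own data
        for i in range(n):
            X, y = datasets[i]
            grad = 2.0 * X.T @ (X @ mixed[i] - y) / len(y)
            w[i] = mixed[i] - lr * grad
    return w
```

On compatible (noiseless) data every machine's local optimum coincides, so all local models contract to the common optimal model, the consensus the abstract refers to.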
Queue and stack sorting algorithm optimization and performance analysis
Qian, Mingzhu; Wang, Xiaobao
2018-04-01
Sorting is one of the basic operations in software development, and data-structures courses cover many sorting algorithms; the performance of the sorting algorithm is directly related to the efficiency of the software. Building on existing research on queue-based sorting, the authors study sorting algorithms that combine queues with stacks. The algorithm mainly exploits the alternating use of queue and stack storage properties, thus avoiding the large number of exchange or move operations required by traditional sorts. Starting from the existing work, improvements and optimizations are proposed, focusing on time complexity; the time complexity, space complexity and stability of the algorithm are also analyzed. The experimental results show that the improvement is effective and that the improved and optimized algorithm is more practical.
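The abstract does not spell out the authors' exact algorithm, but the idea of replacing exchange/move operations with queue storage is well illustrated by LSD radix sort, where elements are only ever enqueued and dequeued, never swapped:

```python
from collections import deque

def radix_sort(nums, base=10):
    """LSD radix sort over non-negative integers: each pass distributes the
    elements into per-digit queues and concatenates them back; no element is
    ever swapped or shifted, only enqueued and dequeued."""
    if not nums:
        return []
    nums = deque(nums)
    digits = len(str(max(nums)))
    for d in range(digits):
        bins = [deque() for _ in range(base)]      # one queue per digit value
        while nums:
            n = nums.popleft()
            bins[(n // base ** d) % base].append(n)
        nums = deque(n for b in bins for n in b)   # stable concatenation
    return list(nums)
```

Because each pass is stable and touches every element exactly twice (one dequeue, one enqueue), the whole sort runs in O(d·n) queue operations for d-digit keys.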
LENUS (Irish Health Repository)
Anney, Richard J L
2012-02-01
Recent genome-wide association studies (GWAS) have implicated a range of genes from discrete biological pathways in the aetiology of autism. However, despite the strong influence of genetic factors, association studies have yet to identify statistically robust, replicated major effect genes or SNPs. We apply the principle of the SNP ratio test methodology described by O'Dushlaine et al to over 2100 families from the Autism Genome Project (AGP). Using a two-stage design we examine association enrichment in 5955 unique gene-ontology classifications across four groupings based on two phenotypic and two ancestral classifications. Based on estimates from simulation we identify an excess of association enrichment across all analyses. We observe enrichment in association for sets of genes involved in diverse biological processes, including pyruvate metabolism, transcription factor activation, cell-signalling and cell-cycle regulation. Both genes and processes that show enrichment have previously been examined in autistic disorders and offer biological plausibility to these findings.
Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security
Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra
2018-03-01
The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is a science and an art of maintaining data secrecy. Encryption is a cryptographic operation in which data is transformed into ciphertext, which is unreadable and meaningless so that it cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make the result more secure. In this work, the Monoalphabetic algorithm and the XOR algorithm are combined to form a super-encryption. The Monoalphabetic algorithm works by changing a particular letter into a new letter based on existing keywords, while the XOR algorithm works by using the logic operation XOR. Since the Monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it back to its original form (plaintext), so data integrity is still ensured.
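A minimal sketch of the two-layer scheme follows. The key alphabet and XOR key below are hypothetical (the abstract specifies no key formats), and only uppercase A-Z is substituted; everything else passes through the first layer unchanged.

```python
import string

# Hypothetical keys for illustration; the paper does not specify key formats.
KEY_ALPHABET = "QWERTYUIOPASDFGHJKLZXCVBNM"   # a permutation of A-Z
XOR_KEY = b"K3y"

def mono_encrypt(text, key_alphabet=KEY_ALPHABET):
    """Layer 1 (classical): monoalphabetic substitution over A-Z."""
    return text.translate(str.maketrans(string.ascii_uppercase, key_alphabet))

def mono_decrypt(text, key_alphabet=KEY_ALPHABET):
    return text.translate(str.maketrans(key_alphabet, string.ascii_uppercase))

def xor_bytes(data, key=XOR_KEY):
    """Layer 2 (modern): XOR with a repeating key; XOR is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def super_encrypt(plaintext):
    """Substitute first, then XOR the resulting bytes (the super-encryption)."""
    return xor_bytes(mono_encrypt(plaintext).encode("ascii"))

def super_decrypt(ciphertext):
    """Undo the layers in reverse order to restore the plaintext."""
    return mono_decrypt(xor_bytes(ciphertext).decode("ascii"))
```

Because each layer is individually invertible, the composition restores the original plaintext exactly, which is the data-integrity property the abstract mentions.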
Bayesian analysis for exponential random graph models using the adaptive exchange sampler
Jin, Ick Hoon
2013-01-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the existence of intractable normalizing constants. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the issue of intractable normalizing constants encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, a molecule synthetic network, and the dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial-time observer with black-box access to the algorithm. We show a necessary an...
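A classic example of the pseudo-deterministic idea is computing a canonical square root modulo a prime with Tonelli-Shanks: the algorithm uses randomness internally (to find a quadratic non-residue), yet its output is canonicalized to the smaller root, so every run returns the same unique answer. The sketch below is illustrative; the talk's own examples may differ.

```python
import random

def legendre(a, p):
    """Euler's criterion: 1 if a is a quadratic residue mod p, p-1 if not."""
    return pow(a, (p - 1) // 2, p)

def pseudo_det_sqrt(a, p):
    """Pseudo-deterministic square root mod an odd prime p (Tonelli-Shanks).
    Randomness only picks a non-residue z; returning the smaller of the two
    roots makes the output unique regardless of the random choices."""
    assert legendre(a, p) == 1, "a must be a quadratic residue"
    q, s = p - 1, 0
    while q % 2 == 0:
        q //= 2
        s += 1
    if s == 1:                         # p ≡ 3 (mod 4): deterministic case
        r = pow(a, (p + 1) // 4, p)
        return min(r, p - r)
    z = random.randrange(2, p)         # randomized search for a non-residue
    while legendre(z, p) != p - 1:
        z = random.randrange(2, p)
    c = pow(z, q, p)
    r = pow(a, (q + 1) // 2, p)
    t = pow(a, q, p)
    m = s
    while t != 1:
        i, t2 = 0, t                   # least i with t^(2^i) = 1
        while t2 != 1:
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        r = r * b % p
        c = b * b % p
        t = t * c % p
        m = i
    return min(r, p - r)               # canonical choice → unique output
```

Different random seeds take different internal paths but always emit the same root, which is exactly the indistinguishability-from-deterministic property described above.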
Isotopically exchangeable phosphorus
International Nuclear Information System (INIS)
Barbaro, N.O.
1984-01-01
A critical review of isotope dilution is presented. The concepts and use of exchangeable phosphorus, phosphate adsorption, the kinetics of isotopic exchange and the equilibrium time in soils are discussed. (M.A.C.)
2008-09-01
Peer exchanges for state department of transportation (DOT) research programs originated with : the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA). That federal legislation : required the states to conduct periodic peer exchanges to...
Indiana Health Information Exchange
The Indiana Health Information Exchange comprises various Indiana health care institutions; it was established to help improve patient safety and is recognized as a best practice for health information exchange.
International Nuclear Information System (INIS)
Townsend, R.P.
1993-01-01
In this paper the fundamentals of ion exchange mechanisms and their thermodynamics are described. A range of ion exchange materials is considered and problems of communication and technology transfer between scientists working in the field are discussed. (UK)
DEFF Research Database (Denmark)
Wahlberg, Ayo
2008-01-01
Herbal medicine has long been contrasted to modern medicine in terms of a holistic approach to healing, vitalistic theories of health and illness and an emphasis on the body's innate self-healing capacities. At the same time, since the early 20th century, the cultivation, preparation and mass … production of herbal medicines have become increasingly industrialised, scientificised and commercialised. What is more, phytochemical efforts to identify and isolate particular 'active ingredients' from whole-plant extracts have intensified, often in response to increasing regulatory scrutiny of the safety … and quality of herbal medicinal products. In this paper, I examine whether describing these developments in terms of a biomedical 'colonisation' of herbal medicine, as has been common, allows us to sufficiently account for the mundane collaborative efforts of herbalists, botanists, phytochemists …
Fe atom exchange between aqueous Fe2+ and magnetite.
Gorski, Christopher A; Handler, Robert M; Beard, Brian L; Pasakarnis, Timothy; Johnson, Clark M; Scherer, Michelle M
2012-11-20
The reaction between magnetite and aqueous Fe(2+) has been extensively studied due to its role in contaminant reduction, trace-metal sequestration, and microbial respiration. Previous work has demonstrated that the reaction of Fe(2+) with magnetite (Fe(3)O(4)) results in the structural incorporation of Fe(2+) and an increase in the bulk Fe(2+) content of magnetite. It is unclear, however, whether significant Fe atom exchange occurs between magnetite and aqueous Fe(2+), as has been observed for other Fe oxides. Here, we measured the extent of Fe atom exchange between aqueous Fe(2+) and magnetite by reacting isotopically "normal" magnetite with (57)Fe-enriched aqueous Fe(2+). The extent of Fe atom exchange between magnetite and aqueous Fe(2+) was significant (54-71%), and went well beyond the amount of Fe atoms found at the near surface. Mössbauer spectroscopy of magnetite reacted with (56)Fe(2+) indicate that no preferential exchange of octahedral or tetrahedral sites occurred. Exchange experiments conducted with Co-ferrite (Co(2+)Fe(2)(3+)O(4)) showed little impact of Co substitution on the rate or extent of atom exchange. Bulk electron conduction, as previously invoked to explain Fe atom exchange in goethite, is a possible mechanism, but if it is occurring, conduction does not appear to be the rate-limiting step. The lack of significant impact of Co substitution on the kinetics of Fe atom exchange, and the relatively high diffusion coefficients reported for magnetite suggest that for magnetite, unlike goethite, Fe atom diffusion is a plausible mechanism to explain the rapid rates of Fe atom exchange in magnetite.
Hong, Cheng William; Mamidipalli, Adrija; Hooker, Jonathan C.; Hamilton, Gavin; Wolfson, Tanya; Chen, Dennis H.; Dehkordy, Soudabeh Fazeli; Middleton, Michael S.; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.
2017-01-01
Background: Proton density fat fraction (PDFF) estimation requires spectral modeling of the hepatic triglyceride (TG) signal. Deviations in the TG spectrum may occur, leading to bias in PDFF quantification. Purpose: To investigate the effects of varying six-peak TG spectral models on PDFF estimation bias. Study Type: Retrospective secondary analysis of prospectively acquired clinical research data. Population: Forty-four adults with biopsy-confirmed nonalcoholic steatohepatitis. Field Strength/Sequence: Confounder-corrected chemical-shift-encoded 3T MRI (using a 2D multiecho gradient-recalled echo technique with magnitude reconstruction) and MR spectroscopy. Assessment: In each patient, 61 pairs of colocalized MRI-PDFF and MRS-PDFF values were estimated: one pair used the standard six-peak spectral model; the other 60 were six-peak variants calculated by adjusting spectral model parameters over their biologically plausible ranges. MRI-PDFF values calculated using each variant model and the standard model were compared, and the agreement between MRI-PDFF and MRS-PDFF was assessed. Statistical Tests: MRS-PDFF and MRI-PDFF were summarized descriptively. Bland–Altman (BA) analyses were performed between PDFF values calculated using each variant model and the standard model. Linear regressions were performed between BA biases and mean PDFF values for each variant model, and between MRI-PDFF and MRS-PDFF. Results: Using the standard model, mean MRS-PDFF of the study population was 17.9±8.0% (range: 4.1–34.3%). The difference between the highest and lowest mean variant MRI-PDFF values was 1.5%. Relative to the standard model, the model with the greatest absolute BA bias overestimated PDFF by 1.2%. Bias increased with increasing PDFF (P …). Across the range of hepatic fat content, PDFF estimation is robust across the biologically plausible range of TG spectra. Although absolute estimation bias increased with higher PDFF, its magnitude was small and unlikely to be clinically meaningful. Level of
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.
2015-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.
2014-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Directory of Open Access Journals (Sweden)
Plačkov Slađana
2013-01-01
Small oscillations of the exchange rate certainly affect confidence in the currency (the Serbian dinar, CSD), and because of the shallow market even the smallest change in supply and demand shifts the exchange rate and brings uncertainty. Some economists suggest that the exchange rate should be linked to inflation, thus ensuring a predictable and stable exchange rate. A real or slightly depressed exchange rate will encourage the competitiveness of exporters and perhaps ensure the development of new production lines which, under an overvalued exchange rate, had no economic justification. A fixed exchange rate will bring lower interest rates, lower risk and lower business uncertainty (uncertainty avoidance), but by following this trend Serbia will also reduce its foreign exchange reserves. On the other hand, a completely free exchange rate would lead to a (real) fall of the Serbian currency, which over a certain period would lead to a significant increase in exports, but the consequences for businessmen and citizens with loans pegged to the euro exchange rate would be disastrous. We pay special attention to the depreciation of the exchange rate, as it is generally favorable to the export competitiveness of Serbia while, on the other hand, it increases the debt servicing costs of the government as well as of the private sector. Oscillations of the dinar exchange rate, appreciation and depreciation, sometimes have disastrous consequences on the economy, investors, imports and exports. In this work, we observe the movement of the dinar exchange rate in Serbia over the period 2009-2012, in order to strike a balance and maintain economic equilibrium. The movement of foreign currencies against the local currency is controlled in the foreign exchange market, so if economic interests require it, the National Bank of Serbia (NBS) can intervene in the market on the basis of discretionary criteria.
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
… of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself …
Energy Technology Data Exchange (ETDEWEB)
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
Directory of Open Access Journals (Sweden)
Cappuccio Antonio
2009-03-01
Background: There is experimental evidence from animal models favoring the notion that the disruption of interactions between stroma and epithelium plays an important role in the initiation of carcinogenesis. These disrupted interactions are hypothesized to be mediated by molecules, termed morphostats, which diffuse through the tissue to determine cell phenotype and maintain tissue architecture. Methods: We developed a computer simulation based on simple properties of cell renewal and morphostats. Results: Under the computer simulation, the disruption of the morphostat gradient in the stroma generated epithelial precursors of cancer without any mutation in the epithelium. Conclusion: The model is consistent with the possibility that the accumulation of genetic and epigenetic changes found in tumors could arise after the formation of a founder population of aberrant cells, defined as cells that are created by low or insufficient morphostat levels and that no longer respond to morphostat concentrations. Because the model is biologically plausible, we hope that these results will stimulate further experiments.
Coleman, Sulamunn R M; Zawadzki, Matthew J; Heron, Kristin E; Vartanian, Lenny R; Smyth, Joshua M
2016-01-01
This study examined whether self-focused and other-focused resiliency help explain how early family adversity relates to perceived stress, subjective health, and health behaviors in college women. Female students (N = 795) participated between October 2009 and May 2010. Participants completed self-report measures of early family adversity, self-focused (self-esteem, personal growth initiative) and other-focused (perceived social support, gratitude) resiliency, stress, subjective health, and health behaviors. Using structural equation modeling, self-focused resiliency was associated with less stress, better subjective health, more sleep, less smoking, and less weekend alcohol consumption. Other-focused resiliency was associated with more exercise, greater stress, and more weekend alcohol consumption. Early family adversity was indirectly related to all health outcomes, except smoking, via self-focused and other-focused resiliency. Self-focused and other-focused resiliency represent plausible mechanisms through which early family adversity relates to stress and health in college women. This highlights areas for future research in disease prevention and management.
McConnell, Joseph R; Burke, Andrea; Dunbar, Nelia W; Köhler, Peter; Thomas, Jennie L; Arienzo, Monica M; Chellman, Nathan J; Maselli, Olivia J; Sigl, Michael; Adkins, Jess F; Baggenstos, Daniel; Burkhart, John F; Brook, Edward J; Buizert, Christo; Cole-Dai, Jihong; Fudge, T J; Knorr, Gregor; Graf, Hans-F; Grieman, Mackenzie M; Iverson, Nels; McGwire, Kenneth C; Mulvaney, Robert; Paris, Guillaume; Rhodes, Rachael H; Saltzman, Eric S; Severinghaus, Jeffrey P; Steffensen, Jørgen Peder; Taylor, Kendrick C; Winckler, Gisela
2017-09-19
Glacial-state greenhouse gas concentrations and Southern Hemisphere climate conditions persisted until ∼17.7 ka, when a nearly synchronous acceleration in deglaciation was recorded in paleoclimate proxies in large parts of the Southern Hemisphere, with many changes ascribed to a sudden poleward shift in the Southern Hemisphere westerlies and subsequent climate impacts. We used high-resolution chemical measurements in the West Antarctic Ice Sheet Divide, Byrd, and other ice cores to document a unique, ∼192-y series of halogen-rich volcanic eruptions exactly at the start of accelerated deglaciation, with tephra identifying the nearby Mount Takahe volcano as the source. Extensive fallout from these massive eruptions has been found >2,800 km from Mount Takahe. Sulfur isotope anomalies and marked decreases in ice core bromine consistent with increased surface UV radiation indicate that the eruptions led to stratospheric ozone depletion. Rather than a highly improbable coincidence, circulation and climate changes extending from the Antarctic Peninsula to the subtropics-similar to those associated with modern stratospheric ozone depletion over Antarctica-plausibly link the Mount Takahe eruptions to the onset of accelerated Southern Hemisphere deglaciation ∼17.7 ka.
Directory of Open Access Journals (Sweden)
Phanintra Teeranon
2007-12-01
The F0 values of vowels following voiceless consonants are higher than those of vowels following voiced consonants; high vowels have a higher F0 than low vowels. It has also been found that when high vowels follow voiced consonants, the F0 values decrease. In contrast, low vowels following voiceless consonants show increasing F0 values. In other words, the voicing of initial consonants has been found to counterbalance the intrinsic F0 values of high and low vowels (House and Fairbanks 1953, Lehiste and Peterson 1961, Lehiste 1970, Laver 1994, Teeranon 2006). To test whether these three findings are applicable to a disyllabic language, the F0 values of high and low vowels following voiceless and voiced consonants were studied in a Malay dialect of the Austronesian language family spoken in Pathumthani Province, Thailand. The data was collected from three male informants, aged 30-35. The Praat program was used for acoustic analysis. The findings revealed the influence of the voicing of initial consonants on the F0 of vowels to be greater than the influence of vowel height. Evidence from this acoustic study shows the plausibility of the Malay dialect spoken in Pathumthani becoming a tonal language through the influence of initial consonants rather than through the high-low vowel dimension.
Willmore, Ben D.B.; Bulstrode, Harry; Tolhurst, David J.
2012-01-01
Neuronal populations in the primary visual cortex (V1) of mammals exhibit contrast normalization. Neurons that respond strongly to simple visual stimuli – such as sinusoidal gratings – respond less well to the same stimuli when they are presented as part of a more complex stimulus which also excites other, neighboring neurons. This phenomenon is generally attributed to generalized patterns of inhibitory connections between nearby V1 neurons. The Bienenstock, Cooper and Munro (BCM) rule is a neural network learning rule that, when trained on natural images, produces model neurons which, individually, have many tuning properties in common with real V1 neurons. However, when viewed as a population, a BCM network is very different from V1 – each member of the BCM population tends to respond to the same dominant features of visual input, producing an incomplete, highly redundant code for visual information. Here, we demonstrate that, by adding contrast normalization into the BCM rule, we arrive at a neurally-plausible Hebbian learning rule that can learn an efficient sparse, overcomplete representation that is a better model for stimulus selectivity in V1. This suggests that one role of contrast normalization in V1 is to guide the neonatal development of receptive fields, so that neurons respond to different features of visual input. PMID:22230381
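The BCM rule itself fits in a few lines; the toy neuron below (a plain rate-based unit with illustrative parameters, without the contrast-normalization extension the authors add) shows the selectivity property the article builds on: the sliding threshold tracks the average squared response, so the neuron ends up responding to one input pattern and not the other.

```python
import random

def train_bcm(patterns, steps=20000, eta=0.01, tau=100.0, seed=0):
    """Minimal rate-based BCM neuron: output y = w.x; the sliding threshold
    theta tracks a running average of y**2, so synapses strengthen when
    y > theta and weaken when y < theta, driving selectivity for one pattern.
    Parameters are illustrative, not taken from the article."""
    rng = random.Random(seed)
    w = [rng.uniform(0.1, 0.5) for _ in patterns[0]]
    theta = 1.0
    for _ in range(steps):
        x = rng.choice(patterns)
        y = sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + eta * y * (y - theta) * xi for wi, xi in zip(w, x)]
        theta += (y * y - theta) / tau      # sliding modification threshold
    return w

patterns = [[1.0, 0.0], [0.0, 1.0]]         # two orthogonal stimuli
w = train_bcm(patterns)
responses = sorted(sum(wi * xi for wi, xi in zip(w, p)) for p in patterns)
# after training the neuron is selective: one response large, the other small
```

In a population where every neuron follows this rule independently, all units tend to pick the same dominant feature, which is exactly the redundancy problem the article addresses by adding contrast normalization.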
Directory of Open Access Journals (Sweden)
Belem G. López
2017-09-01
Previous work has shown that prior experience in language brokering (informal translation) may facilitate the processing of meaning within and across language boundaries. The present investigation examined the influence of brokering on bilinguals' processing of two-word collocations with either a literal or a figurative meaning in each language. Proficient Spanish-English bilinguals classified as brokers or non-brokers were asked to judge whether adjective+noun phrases presented in each language made sense or not. Phrases with a literal meaning (e.g., stinging insect) were interspersed with phrases with a figurative meaning (e.g., stinging insult) and non-sensical phrases (e.g., stinging picnic). It was hypothesized that plausibility judgments would be facilitated for literal relative to figurative meanings in each language but that experience in language brokering would be associated with a more equivalent pattern of responding across languages. These predictions were confirmed. The findings add to the body of empirical work on individual differences in language processing in bilinguals associated with prior language brokering experience.
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
Automated exchange transfusion and exchange rate.
Funato, M; Shimada, S; Tamai, H; Taki, H; Yoshioka, Y
1989-10-01
An automated blood exchange transfusion (BET) with a two-site technique has been devised by Goldmann et al and by us, using an infusion pump. With this method, we successfully performed exchange transfusions 189 times in the past four years on 110 infants with birth weights ranging from 530 g to 4,000 g. The exchange rate by the automated method was compared with the rate by Diamond's method. Serum bilirubin (SB) levels before and after BET and the maximal SB rebound within 24 hours after BET were: 21.6 +/- 2.4, 11.5 +/- 2.2, and 15.0 +/- 1.5 mg/dl in the automated method, and 22.0 +/- 2.9, 11.2 +/- 2.5, and 17.7 +/- 3.2 mg/dl in Diamond's method, respectively. The result showed that the maximal rebound of the SB level within 24 hours after BET was significantly lower in the automated method than in Diamond's method (p less than 0.01), though SB levels before and after BET were not significantly different between the two methods. The exchange rate was also measured by means of staining the fetal red cells (F cells) both in the automated method and in Diamond's method, and comparing them. The exchange rate of F cells in Diamond's method went down along the theoretical exchange curve proposed by Diamond, while the rate in the automated method was significantly better than in Diamond's, especially in the early stage of BET (p less than 0.01). We believe that the use of this automated method may give better results than Diamond's method in the rate of exchange, because this method is performed with a two-site technique using a peripheral artery and vein.
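The "theoretical exchange curve" for a continuous exchange is commonly modeled as single-compartment washout, in which donor blood mixes instantly with the circulation and the fraction of original cells remaining decays exponentially with the volume exchanged. The sketch below uses that standard model; the paper's exact curve may differ in detail.

```python
import math

def residual_fraction(exchanged_volume, blood_volume):
    """Continuous single-compartment washout model of exchange transfusion:
    the fraction of original (e.g., fetal) red cells remaining after
    exchanging a given volume decays exponentially."""
    return math.exp(-exchanged_volume / blood_volume)

# A conventional double-volume exchange (exchanged volume = 2 x blood volume)
# leaves about exp(-2) ~ 13.5% of the original cells in this model:
fraction_left = residual_fraction(2.0, 1.0)
```

Plotting measured F-cell fractions against this curve is how the abstract's comparison works: Diamond's single-site push-pull method tracks the theoretical curve, while the two-site technique exchanged cells faster than the model predicts in the early stage.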
Directory of Open Access Journals (Sweden)
Andreas Hackl
2016-12-01
Developing functions for advanced driver assistance systems requires very accurate tyre models, especially for the simulation of transient conditions. In the past, parametrisation of a given tyre model based on measurement data showed shortcomings, and the globally optimal solution obtained did not appear to be plausible. In this article, an optimisation strategy is presented which is able to find plausible and physically feasible solutions by detecting many local optima. The firefly algorithm mimics the natural behaviour of fireflies, which use a kind of flashing light to communicate with other members. An algorithm simulating the intensity of the light of a single firefly, diminishing with increasing distance, is implicitly able to detect local solutions on its way to the best solution in the search space. This implicit clustering feature is reinforced by an additional explicit clustering step, in which local solutions are stored and finally processed to obtain a large number of possible solutions. The enhanced firefly algorithm is first applied to the well-known Rastrigin functions and then to the tyre parametrisation problem. It is shown that the firefly algorithm is qualified to find a high number of optimisation solutions, as required for plausible parametrisation of the given tyre model.
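As an illustration of the baseline method the authors enhance, here is a minimal textbook firefly algorithm (Yang's update rule) minimizing the 2-D Rastrigin function. All parameters are illustrative assumptions, and the article's explicit clustering extension is omitted.

```python
import math, random

def rastrigin(x):
    """Rastrigin function: many regular local minima, global minimum 0 at 0."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def firefly(f, dim=2, n=25, iters=200, alpha=0.2, beta0=1.0, gamma=1.0,
            lo=-5.12, hi=5.12, seed=0):
    """Basic firefly algorithm: each firefly moves toward every brighter
    (lower-f) one with attractiveness beta0*exp(-gamma*r^2) that decays with
    distance, plus a slowly damped random step for exploration."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for it in range(iters):
        a = alpha * 0.97 ** it                       # damp the random walk
        vals = [f(x) for x in xs]
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:                # firefly j is brighter
                    r2 = sum((u - v) ** 2 for u, v in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [min(hi, max(lo, u + beta * (v - u)
                                         + a * (rng.random() - 0.5)))
                             for u, v in zip(xs[i], xs[j])]
                    vals[i] = f(xs[i])
    best = min(xs, key=f)
    return best, f(best)

best, val = firefly(rastrigin)
```

Because attractiveness falls off with distance, distant sub-swarms stop pulling on each other and settle into different local minima; that is the implicit clustering behaviour the article strengthens with an explicit clustering step.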
International Nuclear Information System (INIS)
Kunin, R.
1977-01-01
This paper presents the history and theory of ion exchange technology and discusses the usefulness of ion exchange resins, which have found broad application in chemical operations. It is demonstrated that the theory of ion exchange technology seems to be moving away from the physical chemist back to the polymer chemist, where it started originally; this, however, confronted the polymer chemists with some knotty problems. It is pointed out that one still has to learn how to use ion exchange materials as efficiently as possible in terms of the waste load that is being pumped into the environment. It is interesting to note that, whereas ion exchange is used for abating pollution, it is also a polluter. One must learn how to use ion exchange as an antipollution device, and at the same time minimize its polluting properties.
Brijesh Thapa
2000-01-01
There is a positive correlation between the debt crisis of countries. To combat the crisis, Lovejoy (1984) introduced the debt-for-nature swap process that involves a mechanism of exchange in which a certain amount of the debtor's foreign debt is cancelled or forgiven, in return for local currency from the debtor government to be invested in domestic environmental...
Microsoft Exchange 2013 cookbook
Van Horenbeeck, Michael
2013-01-01
This book is a practical, hands-on guide that provides the reader with a number of clear, step-by-step exercises. "Microsoft Exchange 2013 Cookbook" is targeted at network administrators who deal with the Exchange server in their day-to-day jobs. It assumes you have some practical experience with previous versions of Exchange (although this is not a requirement), without being a subject matter expert.
Multidimensional generalized-ensemble algorithms for complex systems.
Mitsutake, Ayori; Okamoto, Yuko
2009-06-07
We give general formulations of the multidimensional multicanonical algorithm, simulated tempering, and replica-exchange method. We generalize the original potential energy function E(0) by adding any physical quantity V of interest as a new energy term. These multidimensional generalized-ensemble algorithms then perform a random walk not only in E(0) space but also in V space. Among the three algorithms, the replica-exchange method is the easiest to perform because the weight factor is just a product of regular Boltzmann-like factors, while the weight factors for the multicanonical algorithm and simulated tempering are not a priori known. We give a simple procedure for obtaining the weight factors for these two latter algorithms, which uses a short replica-exchange simulation and the multiple-histogram reweighting techniques. As an example of applications of these algorithms, we have performed a two-dimensional replica-exchange simulation and a two-dimensional simulated-tempering simulation using an alpha-helical peptide system. From these simulations, we study the helix-coil transitions of the peptide in gas phase and in aqueous solution.
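A minimal one-dimensional sketch of the replica-exchange idea follows (the multidimensional generalization above adds extra energy terms such as V): one Metropolis walker per temperature, with periodic swap attempts accepted by the product-of-Boltzmann-factors rule. The potential and parameters are illustrative, not the peptide system of the article.

```python
import math, random

def parallel_tempering(u, temps, steps=5000, step_size=0.5, seed=1):
    """Replica-exchange (parallel tempering) sketch: one Metropolis walker
    per temperature; every 10 steps a neighbouring pair may swap
    configurations, so the cold chain inherits barrier crossings made at
    high temperature."""
    rng = random.Random(seed)
    xs = [0.0] * len(temps)
    cold_samples = []
    for step in range(steps):
        for k, t in enumerate(temps):                 # local Metropolis moves
            prop = xs[k] + rng.uniform(-step_size, step_size)
            if rng.random() < math.exp(min(0.0, -(u(prop) - u(xs[k])) / t)):
                xs[k] = prop
        if step % 10 == 0:                            # replica-exchange move
            k = rng.randrange(len(temps) - 1)
            d = (1 / temps[k] - 1 / temps[k + 1]) * (u(xs[k + 1]) - u(xs[k]))
            if rng.random() < math.exp(min(0.0, -d)):
                xs[k], xs[k + 1] = xs[k + 1], xs[k]
        cold_samples.append(xs[0])                    # track the coldest replica
    return cold_samples

double_well = lambda x: 8.0 * (x * x - 1.0) ** 2     # minima at x = +1 and -1
samples = parallel_tempering(double_well, temps=[0.3, 1.0, 3.0])
```

The swap acceptance is just a product of ordinary Boltzmann factors, which is why the abstract calls replica exchange the easiest of the three methods: no weight factors need to be estimated beforehand, unlike multicanonical sampling or simulated tempering.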
Reactor fuel exchanging facility
International Nuclear Information System (INIS)
Kubota, Shin-ichi.
1981-01-01
Purpose: To enable operation of the emergency manual operating mechanism of a fuel exchanger by an operatorless truck and remote operation of a manipulator, even if the exchanger fails during a fuel exchanging operation. Constitution: When a fuel exchanging system fails while connected to a pressure tube of a nuclear reactor during a fuel exchanging operation, a stand-by self-travelling truck automatically runs along a guide line to the position corresponding to the stopping position of the fuel exchanger at that time, based on a command from the central control chamber. The truck is then switched to manual operation, approaches the exchanger while being monitored through a television camera, and stops. The manipulator is then connected to the emergency manual operating mechanism of the exchanger and operated through the necessary emergency steps, driving the snout, the magazine, the grab or the like in the exchanger in response to the problem; the operations needed for the emergency treatment are thus performed. (Sekiya, K.)
Quantum Computation and Algorithms
International Nuclear Information System (INIS)
Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.
1999-01-01
It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
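The recursion equations for the amplitudes in Grover's algorithm can be iterated classically. The sketch below assumes N basis states with exactly one marked element and a uniform initial distribution; k and l denote the marked and unmarked amplitudes, and each Grover iteration (oracle followed by inversion about the mean) is written as a closed two-term recursion:

```python
import math

def grover_marked_probability(N, steps):
    # uniform initial distribution: every amplitude is 1/sqrt(N)
    k = 1.0 / math.sqrt(N)   # amplitude of the single marked element
    l = 1.0 / math.sqrt(N)   # common amplitude of the N-1 unmarked elements
    for _ in range(steps):
        # one Grover iteration = oracle (k -> -k) followed by
        # inversion about the mean, combined into one recursion
        k, l = ((N - 2) / N) * k + (2.0 * (N - 1) / N) * l, \
               ((N - 2) / N) * l - (2.0 / N) * k
    return k * k             # probability of measuring the marked element

N = 64
t_opt = int((math.pi / 4) * math.sqrt(N))   # about 6 iterations for N = 64
p_success = grover_marked_probability(N, t_opt)
```

After roughly (π/4)√N iterations the probability of observing the marked element is close to 1, reproducing the well-known quadratic speedup over classical search.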
Zelber-Sagi, Shira; Salomone, Federico; Mlynarsky, Liat
2017-07-01
Non-alcoholic fatty liver disease (NAFLD) has become a major global health burden, leading to increased risk for cirrhosis, hepatocellular carcinoma, type-2 diabetes and cardiovascular disease. Lifestyle intervention aiming at weight reduction is the most established treatment. However, changing the dietary composition even without weight loss can also reduce steatosis and improve metabolic alterations such as insulin resistance and lipid profile. The Mediterranean diet (MD) pattern has been proposed as appropriate for this goal, and was recommended as the diet of choice for the treatment of NAFLD by the EASL-EASD-EASO Clinical Practice Guidelines. The MD has an established superiority in long-term weight reduction over a low-fat diet, but it improves metabolic status and steatosis even without weight loss. However, the effect on liver inflammation and fibrosis has been tested only in a few observational studies, with positive results. Furthermore, considering the strong association between NAFLD and diabetes and CVD, the MD has a well-established advantage in the prevention of these diseases, demonstrated in randomized clinical trials. The individual components of the MD, such as olive oil, fish, nuts, whole grains, fruits, and vegetables, have been shown to beneficially affect or negatively correlate with NAFLD, while consumption of components that characterize a Western dietary pattern, such as soft drinks, fructose, meat, and saturated fatty acids, has been shown to have a detrimental association with NAFLD. In this review we cover the epidemiological evidence and the plausible molecular mechanisms by which the MD as a whole, and each of its components, can be of benefit in NAFLD. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Norman, Laura M.; Feller, Mark; Villarreal, Miguel L.
2012-01-01
The SLEUTH urban growth model is applied to a binational dryland watershed to envision and evaluate plausible future scenarios of land use change into the year 2050. Our objective was to create a suite of geospatial footprints portraying potential land use change that can be used to aid binational decision-makers in assessing the impacts relative to sustainability of natural resources and potential socio-ecological consequences of proposed land-use management. Three alternatives are designed to simulate different conditions: (i) a Current Trends Scenario of unmanaged exponential growth, (ii) a Conservation Scenario with managed growth to protect the environment, and (iii) a Megalopolis Scenario in which growth is accentuated around a defined international trade corridor. The model was calibrated with historical data extracted from a time series of satellite images. Model materials, methodology, and results are presented. Our Current Trends Scenario predicts the footprint of urban growth to approximately triple from 2009 to 2050, which is corroborated by local population estimates. The Conservation Scenario results in protecting 46% more of the Evergreen class (more than 150,000 acres) than the Current Trends Scenario and approximately 95,000 acres of Barren Land, Crops, Deciduous Forest (Mesquite Bosque), Grassland/Herbaceous, Urban/Recreational Grasses, and Wetlands classes combined. The Megalopolis Scenario results also depict the preservation of some of these land-use classes compared to the Current Trends Scenario, most notably in the environmentally important headwaters region. Connectivity and areal extent of land cover types that provide wildlife habitat were preserved under the alternative scenarios when compared to Current Trends.
Kaplan, David; Su, Dan
2016-01-01
This article presents findings on the consequences of matrix sampling of context questionnaires for the generation of plausible values in large-scale assessments. Three studies are conducted. Study 1 uses data from PISA 2012 to examine several different forms of missing data imputation within the chained equations framework: predictive mean…
International Nuclear Information System (INIS)
Chandrasekharan, Shailesh
2000-01-01
Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.
GRUNDTVIG in transnational exchange
DEFF Research Database (Denmark)
Grundtvig in Transnational Exchange is the report from the seminar held in December 2015 in cooperation with the University of Cape Town and the University of Hamburg.
Building Relationships through Exchange
Primavera, Angi; Hall, Ellen
2011-01-01
From the moment of birth, children form and develop relationships with others in their world based on exchange. Children recognize that engaging in such encounters offers them the opportunity to enter into a relationship with another individual and to nurture that relationship through the exchange of messages and gifts, items and ideas. At Boulder…
International Nuclear Information System (INIS)
Nicolescu, B.
1978-05-01
The prominent effects supposed to be associated with the exchange of exotic baryonium Regge trajectories are reviewed. The experimental presence of all expected effects leads to suggest that the baryonium exchange mechanism is a correct phenomenological picture and that mesons with isospin 2 or 3/2 or with strangeness 2, strongly coupled to the baryon-antibaryon channels, must be observed
Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa
2018-01-01
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,
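A minimal rational-arithmetic sketch of LLL reduction (with the common parameter δ = 3/4) might look as follows. For clarity it recomputes the Gram-Schmidt data after every basis change rather than updating it incrementally, and it is of course not the formally verified implementation this work concerns:

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    # basis: list of integer vectors; returns a delta-LLL-reduced basis
    b = [list(map(Fraction, v)) for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        gs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            w = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], gs[j]) / dot(gs[j], gs[j])
                w = [wi - mu[i][j] * gj for wi, gj in zip(w, gs[j])]
            gs.append(w)
        return gs, mu

    gs, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size reduction
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                gs, mu = gram_schmidt()
        # Lovász condition on consecutive Gram-Schmidt vectors
        if dot(gs[k], gs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(gs[k - 1], gs[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            gs, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

# classic textbook example lattice
reduced = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
```

The output basis spans the same lattice (the determinant is preserved up to sign) but consists of much shorter, nearly orthogonal vectors; the first of them is a 2^((n−1)/2)-approximation to the shortest lattice vector.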
Standardizing exchange formats
International Nuclear Information System (INIS)
Lemmel, H.D.; Schmidt, J.J.
1992-01-01
An international network of co-operating data centres is described, which maintain identical data bases that are simultaneously updated by an agreed data-exchange procedure. The agreement covers ''data exchange formats'' which are compatible with the centres' internal data storage and retrieval systems; these remain different, optimized at each centre to the available computer facilities and to the needs of the data users. An essential condition for the data exchange is an agreement on common procedures for data compilation, including critical data analysis and validation. The systems described (''EXFOR'', ''ENDF'', ''CINDA'') are used for ''nuclear reaction data'', but the principles used for data compilation and exchange should be valid also for other data types. (author). 24 refs, 4 figs
Nature-inspired optimization algorithms
Yang, Xin-She
2014-01-01
Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
VISUALIZATION OF PAGERANK ALGORITHM
Perhaj, Ervin
2013-01-01
The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them; the user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
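The iteration described (recompute each page's value from its in-links and repeat until the difference between successive iterations falls below a threshold) is the standard power-iteration formulation of PageRank. A sketch, with the link structure given as a hypothetical dictionary mapping each page to its out-links:

```python
def pagerank(links, damping=0.85, tol=1e-10):
    """links: dict mapping each page to the list of pages it links to."""
    pages = sorted(set(links) | {q for outs in links.values() for q in outs})
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    while True:
        new = {p: (1.0 - damping) / n for p in pages}
        for p in pages:
            outs = links.get(p, [])
            if outs:                       # distribute rank over out-links
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:                          # dangling page: spread uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        if max(abs(new[p] - rank[p]) for p in pages) < tol:
            return new
        rank = new

# hypothetical three-page web: A -> B, B -> A and C, C -> A
ranks = pagerank({'A': ['B'], 'B': ['A', 'C'], 'C': ['A']})
```

The ranks form a probability distribution (they sum to 1), and pages with more incoming rank, here A, end up with higher values.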
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
An improved genetic algorithm with dynamic topology
International Nuclear Information System (INIS)
Cai Kai-Quan; Tang Yan-Wu; Zhang Xue-Jun; Guan Xiang-Min
2016-01-01
The genetic algorithm (GA) is a nature-inspired evolutionary algorithm that finds optima in a search space via the interaction of individuals. Recently, researchers demonstrated that the interaction topology plays an important role in information exchange among the individuals of an evolutionary algorithm. In this paper, we investigate the effect of different network topologies adopted to represent the interaction structures. It is found that a GA with a high-density topology is more likely to end up with an unsatisfactory solution; conversely, a low-density topology can impede convergence. Consequently, we propose an improved GA with dynamic topology, named DT-GA, in which the topology structure varies dynamically along with the fitness evolution. Several experiments executed with 15 well-known test functions have illustrated that DT-GA outperforms the other test GAs by striking a balance between convergence speed and optimum quality. Our work may have implications for the combination of complex networks and computational intelligence. (paper)
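The role of the interaction topology can be illustrated with a toy GA in which each individual mates only with its topological neighbors. Everything below (the fitness function, the ring and complete topologies, the parameter values) is an invented illustration of the general idea, not the DT-GA of the paper:

```python
import math
import random

random.seed(7)

def fitness(x):
    # hypothetical multimodal objective on [0, 1], maximum near x ~ 0.79
    return math.sin(10 * x) * x + 1.0

def evolve(neighbors, pop_size=20, gens=60):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for i in range(pop_size):
            # mate with the fittest individual among i's topological neighbors
            j = max(neighbors(i, pop_size), key=lambda k: fitness(pop[k]))
            child = (pop[i] + pop[j]) / 2 + random.gauss(0, 0.02)  # blend + mutation
            child = min(1.0, max(0.0, child))
            # elitist acceptance: keep the child only if it improves on the parent
            new.append(child if fitness(child) > fitness(pop[i]) else pop[i])
        pop = new
    return max(fitness(x) for x in pop)

ring = lambda i, n: [(i - 1) % n, (i + 1) % n]            # low-density topology
complete = lambda i, n: [k for k in range(n) if k != i]   # high-density topology

best_ring = evolve(ring)
best_complete = evolve(complete)
```

In this toy setting the complete topology spreads the current best solution quickly (faster convergence, higher risk of premature convergence), while the ring topology propagates information slowly; the paper's DT-GA varies the density between such extremes as the run progresses.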
Modified Clipped LMS Algorithm
Directory of Open Access Journals (Sweden)
Lotfizad Mojtaba
2005-01-01
A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
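A sketch of a weight update with three-level input clipping, on a hypothetical channel-identification task (the filter length, step size, and threshold are arbitrary choices for the example, not values from the paper):

```python
import random

random.seed(0)

def quantize(x, threshold):
    # three-level clipping of the input: -1, 0, or +1
    if x > threshold:
        return 1.0
    if x < -threshold:
        return -1.0
    return 0.0

# identify a hypothetical 4-tap FIR system with the clipped update
true_w = [0.5, -0.3, 0.2, 0.1]
w = [0.0] * 4
mu, threshold = 0.02, 0.3
x_buf = [0.0] * 4            # delay line of recent input samples

for _ in range(20000):
    x_buf = [random.gauss(0.0, 1.0)] + x_buf[:-1]
    d = sum(wi * xi for wi, xi in zip(true_w, x_buf))  # desired signal
    y = sum(wi * xi for wi, xi in zip(w, x_buf))       # filter output
    e = d - y
    # the input vector (not the error) is quantized in the update,
    # so each tap update needs no multiplication by the input sample
    w = [wi + mu * e * quantize(xi, threshold) for wi, xi in zip(w, x_buf)]
```

Because the quantized input takes only the values −1, 0, and +1, the per-tap update reduces to an add, a subtract, or nothing, which is the source of the reduced computational complexity.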
Dynamic Vehicle Routing Using an Improved Variable Neighborhood Search Algorithm
Directory of Open Access Journals (Sweden)
Yingcheng Xu
2013-01-01
In order to effectively solve the dynamic vehicle routing problem with time windows, a mathematical model is established and an improved variable neighborhood search algorithm is proposed. In the algorithm, allocating customers and planning routes for the initial solution are completed by a clustering method. Hybrid insert and exchange operators are used to carry out the shaking process, a later optimization process is introduced to improve the solution space, and the best-improvement strategy is adopted, which enables the algorithm to achieve a better balance between solution quality and running time. The idea of simulated annealing is introduced to control the acceptance of new solutions, and the influences of arrival time, geographical distribution, and time-window range on route selection are analyzed. In the experiments, the proposed algorithm is applied to DVRP instances of different sizes. Comparison with other algorithms shows that the proposed algorithm is effective and feasible.
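The insert (relocate) and exchange (swap) neighborhoods with a best-improvement strategy and a random shaking step can be sketched for a single static route as follows; the coordinates and parameters are invented, and time windows, dynamism, and the simulated-annealing acceptance rule are omitted for brevity:

```python
import random

random.seed(42)

# hypothetical customer coordinates; index 0 is the depot
pts = [(0, 0), (2, 9), (7, 8), (9, 3), (4, 1), (1, 5), (8, 6)]

def dist(a, b):
    return ((pts[a][0] - pts[b][0]) ** 2 + (pts[a][1] - pts[b][1]) ** 2) ** 0.5

def route_cost(route):
    tour = [0] + route + [0]
    return sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

def best_insert(route):
    best = route                             # relocate one customer
    for i in range(len(route)):
        for j in range(len(route)):
            if i == j:
                continue
            r = route[:i] + route[i + 1:]
            r = r[:j] + [route[i]] + r[j:]
            if route_cost(r) < route_cost(best):
                best = r
    return best

def best_exchange(route):
    best = route                             # swap two customers
    for i in range(len(route)):
        for j in range(i + 1, len(route)):
            r = route[:]
            r[i], r[j] = r[j], r[i]
            if route_cost(r) < route_cost(best):
                best = r
    return best

def vns(route, iters=50):
    best = route
    for _ in range(iters):
        shaken = best[:]                     # shaking: one random swap
        i, j = random.sample(range(len(shaken)), 2)
        shaken[i], shaken[j] = shaken[j], shaken[i]
        for nb in (best_insert, best_exchange):   # best-improvement search
            shaken = nb(shaken)
        if route_cost(shaken) < route_cost(best):
            best = shaken
    return best

initial = [1, 2, 3, 4, 5, 6]
improved = vns(initial)
```

Each neighborhood scans all its moves and keeps only the cheapest (best improvement); shaking perturbs the incumbent so the search can escape local optima, and a new solution replaces the incumbent only when it is strictly cheaper.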
Verkade, John G; Wadhwa, Kuldeep; Kong, Xueqian; Schmidt-Rohr, Klaus
2013-05-07
An anion exchange membrane and fuel cell incorporating the anion exchange membrane are detailed in which proazaphosphatrane and azaphosphatrane cations are covalently bonded to a sulfonated fluoropolymer support along with anionic counterions. A positive charge is dispersed in the aforementioned cations which are buried in the support to reduce the cation-anion interactions and increase the mobility of hydroxide ions, for example, across the membrane. The anion exchange membrane has the ability to operate at high temperatures and in highly alkaline environments with high conductivity and low resistance.
Loop algorithms for quantum simulations of fermion models on lattices
International Nuclear Information System (INIS)
Kawashima, N.; Gubernatis, J.E.; Evertz, H.G.
1994-01-01
Two cluster algorithms, based on constructing and flipping loops, are presented for world-line quantum Monte Carlo simulations of fermions and are tested on the one-dimensional repulsive Hubbard model. We call these algorithms the loop-flip and loop-exchange algorithms. For these two algorithms and the standard world-line algorithm, we calculated the autocorrelation times for various physical quantities and found that the ordinary world-line algorithm, which uses only local moves, suffers from very long correlation times that make difficult not only the estimation of the error but also the estimation of the average values themselves. These difficulties are especially severe in the low-temperature, large-U regime. In contrast, we find that the new algorithms, when used alone or in combination with themselves and the standard algorithm, can have significantly smaller autocorrelation times, in some cases smaller by three orders of magnitude. The new algorithms, which use nonlocal moves, are discussed from the point of view of a general prescription for developing cluster algorithms. The loop-flip algorithm is also shown to be ergodic and to belong to the grand canonical ensemble. Extensions to other models and higher dimensions are briefly discussed
Semioptimal practicable algorithmic cooling
International Nuclear Information System (INIS)
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-01-01
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
An Agent-Based Co-Evolutionary Multi-Objective Algorithm for Portfolio Optimization
Directory of Open Access Journals (Sweden)
Rafał Dreżewski
2017-08-01
Algorithms based on the process of natural evolution are widely used to solve multi-objective optimization problems. In this paper we propose an agent-based co-evolutionary algorithm for multi-objective portfolio optimization. The proposed technique is compared experimentally to the genetic algorithm, the co-evolutionary algorithm and a more classical approach, the trend-following algorithm. During the experiments, historical data from the Warsaw Stock Exchange is used in order to assess the performance of the compared algorithms. Finally, we draw some conclusions from these experiments, showing the strong and weak points of all the techniques.
Ross, S.; Jones, L.; Wilson, R. I.; Bahng, B.; Barberopoulou, A.; Borrero, J. C.; Brosnan, D.; Bwarie, J.; Geist, E. L.; Johnson, L.; Kirby, S. H.; Knight, W.; Long, K.; Lynett, P. J.; Miller, K.; Mortensen, C. E.; Nicolsky, D.; Oglesby, D. D.; Perry, S. C.; Plumlee, G. S.; Porter, K. A.; Real, C. R.; Ryan, K. J.; Suleimani, E.; Thio, H. K.; Titov, V.; Wein, A. M.; Whitmore, P.; Wood, N. J.
2013-12-01
The SAFRR Tsunami Scenario models a hypothetical but plausible tsunami, created by an Mw9.1 earthquake occurring offshore from the Alaskan peninsula, and its impacts on the California coast. We present the likely inundation areas, current velocities in key ports and harbors, physical damage and repair costs, economic consequences, environmental impacts, social vulnerability, emergency management, and policy implications for California associated with the tsunami scenario. The intended users are those who must make mitigation decisions before and rapid decisions during future tsunamis. Around a half million people would be present in the scenario's inundation area in residences, businesses, public venues, parks and beaches. Evacuation would likely be ordered for the State of California's maximum mapped tsunami inundation zone, evacuating an additional quarter million people from residences and businesses. Some island and peninsula communities would face particular evacuation challenges because of limited access options and short warning time, caused by the distance between Alaska and California. Evacuations may also be a challenge for certain dependent-care populations. One third of the boats in California's marinas could be damaged or sunk, costing at least $700 million in repairs to boats and docks, and potentially much more to address serious issues due to sediment transport and environmental contamination. Fires would likely start at many sites where fuel and petrochemicals are stored in ports and marinas. Tsunami surges and bores may travel several miles inland up coastal rivers. Debris clean-up and recovery of inundated and damaged areas will take days, months, or years depending on the severity of impacts and the available resources for recovery. The Ports of Los Angeles and Long Beach (POLA/LB) would be shut down for a minimum of two days due to strong currents. Inundation of dry land in the ports would result in $100 million in damages to cargo and additional
Data Exchange Inventory System (DEXI)
Social Security Administration — Enterprise tool used to identify data exchanges occurring between SSA and our trading partners. DEXI contains information on both incoming and outgoing exchanges and...
International Nuclear Information System (INIS)
Cooper, M.D.
1978-01-01
The pion double charge exchange data on the oxygen isotopes are reviewed and new data on ⁹Be, ¹²C, ²⁴Mg, and ²⁸Si are presented. Where theoretical calculations exist, they are compared to the data. 9 references
2017-08-01
The WSDOT Research Peer Exchange was held in Olympia, Washington on May 13 and 14, 2014 and addressed Research Program and Project Management as described in the following paragraphs: Program Management There are numerous funding programs, standing c...
Cation Exchange Water Softeners
WaterSense released a notice of intent to develop a specification for cation exchange water softeners. The program has made the decision not to move forward with a spec at this time, but is making this information available.
Exchange Risk Management Policy
2005-01-01
At the Finance Committee of March 2005, following a comment by the CERN Audit Committee, the Chairman invited the Management to prepare a document on exchange risk management policy. The Finance Committee is invited to take note of this document.
Department of Housing and Urban Development — The About Grantees section of the HUD Exchange brings up contact information, reports, award, jurisdiction, and location data for organizations that receive HUD...
National Aeronautics and Space Administration — The NASA Earth Exchange (NEX) represents a new platform for the Earth science community that provides a mechanism for scientific collaboration and knowledge sharing....
Quader, Syed Manzur
2004-01-01
In recent years, many developing countries having a history of high inflation, an unfavorable balance-of-payments situation, and a high level of foreign-currency-denominated debt have switched, or are in the process of switching, to a more flexible exchange rate regime. Therefore, the stability of the exchange rate and the dynamics of its volatility are more crucial than before to prevent financial crises and macroeconomic disturbances. This paper is designed to find out the reasons behind Bangla...
International Nuclear Information System (INIS)
Thurston, G.C.; McDaniels, J.D.; Gertsch, P.R.
1979-01-01
The present invention relates to heat exchangers used for transferring heat from the gas cooled core of a nuclear reactor to a secondary medium during standby and emergency conditions. The construction of the heat exchanger described is such that there is a minimum of welds exposed to the reactor coolant, the parasitic heat loss during normal operation of the reactor is minimized and the welds and heat transfer tubes are easily inspectable. (UK)
Milsom, William K; Jackson, Donald C
2011-01-01
Hibernation in endotherms and ectotherms is characterized by an energy-conserving metabolic depression due to low body temperatures and poorly understood temperature-independent mechanisms. Rates of gas exchange are correspondingly reduced. In hibernating mammals, ventilation falls even more than metabolic rate, leading to a relative respiratory acidosis that may contribute to metabolic depression. Breathing in some mammals becomes episodic, and in some small mammals significant apneic gas exchange may occur by passive diffusion via airways or skin. In ectothermic vertebrates, extrapulmonary gas exchange predominates, and in reptiles and amphibians hibernating underwater it accounts for all gas exchange. In aerated water diffusive exchange permits amphibians and many species of turtles to remain fully aerobic, but hypoxic conditions can challenge many of these animals. Oxygen uptake into blood in both endotherms and ectotherms is enhanced by the increased affinity of hemoglobin for O₂ at low temperature. Regulation of gas exchange in hibernating mammals is predominantly linked to CO₂/pH, and in episodic breathers, control is principally directed at the duration of the apneic period. Control in submerged hibernating ectotherms is poorly understood, although skin-diffusing capacity may increase under hypoxic conditions. In aerated water the blood pH of frogs and turtles either adheres to alphastat regulation (pH ∼8.0) or may even exhibit respiratory alkalosis. Arousal in hibernating mammals leads to restoration of euthermic temperature, metabolic rate, and gas exchange and occurs periodically even as ambient temperatures remain low, whereas the body temperature, metabolic rate, and gas exchange of hibernating ectotherms are tightly linked to ambient temperature. © 2011 American Physiological Society.
Real exchange rate misalignments
Terra, Maria Cristina T.; Valladares, Frederico Estrella Carneiro
2003-01-01
This paper characterizes episodes of real appreciations and depreciations for a sample of 85 countries, approximately from 1960 to 1998. First, the equilibrium real exchange rate series are constructed for each country using Goldfajn and Valdes (1999) methodology (cointegration with fundamentals). Then, departures from equilibrium real exchange rate (misalignments) are obtained, and a Markov Switching Model is used to characterize the misalignments series as stochastic autor...
International Nuclear Information System (INIS)
Gatewood, J.R.
1980-01-01
A survey covers the various types of heat-exchange equipment that is cleaned routinely in fossil-fired generating plants, the hydrocarbon-processing industry, pulp and paper mills, and other industries; the various types, sources, and adverse effects of deposits in heat-exchange equipment; some details of the actual procedures for high-pressure water jetting and chemical cleaning of some specific pieces of equipment, including nuclear steam generators. (DN)
Dickey, Michael Walsh; Warren, Tessa
2016-01-01
Purpose: This study examined the influence of verb-argument information and event-related plausibility on prediction of upcoming event locations in people with aphasia, as well as older and younger neurotypical adults. It investigated how these types of information interact during anticipatory processing and how the ability to take advantage of the different types of information is affected by aphasia. Method: This study used a modified visual-world task to examine eye movements and offline photo selection. Twelve adults with aphasia (aged 54-82 years) as well as 44 young adults (aged 18-31 years) and 18 older adults (aged 50-71 years) participated. Results: Neurotypical adults used verb argument status and plausibility information to guide both eye gaze (a measure of anticipatory processing) and image selection (a measure of ultimate interpretation). Argument status did not affect the behavior of people with aphasia in either measure. There was only limited evidence of interaction between these 2 factors in eye gaze data. Conclusions: Both event-related plausibility and verb-based argument status contributed to anticipatory processing of upcoming event locations among younger and older neurotypical adults. However, event-related likelihood had a much larger role in the performance of people with aphasia than did verb-based knowledge regarding argument structure. PMID:27997951
Hayes, Rebecca A; Dickey, Michael Walsh; Warren, Tessa
2016-12-01
This study examined the influence of verb-argument information and event-related plausibility on prediction of upcoming event locations in people with aphasia, as well as older and younger, neurotypical adults. It investigated how these types of information interact during anticipatory processing and how the ability to take advantage of the different types of information is affected by aphasia. This study used a modified visual-world task to examine eye movements and offline photo selection. Twelve adults with aphasia (aged 54-82 years) as well as 44 young adults (aged 18-31 years) and 18 older adults (aged 50-71 years) participated. Neurotypical adults used verb argument status and plausibility information to guide both eye gaze (a measure of anticipatory processing) and image selection (a measure of ultimate interpretation). Argument status did not affect the behavior of people with aphasia in either measure. There was only limited evidence of interaction between these 2 factors in eye gaze data. Both event-related plausibility and verb-based argument status contributed to anticipatory processing of upcoming event locations among younger and older neurotypical adults. However, event-related likelihood had a much larger role in the performance of people with aphasia than did verb-based knowledge regarding argument structure.
Cryptographic Combinatorial Securities Exchanges
Thorpe, Christopher; Parkes, David C.
We present a useful new mechanism that facilitates the atomic exchange of many large baskets of securities in a combinatorial exchange. Cryptography prevents information about the securities in the baskets from being exploited, enhancing trust. Our exchange offers institutions who wish to trade large positions a new alternative to existing methods of block trading: they can reduce transaction costs by taking advantage of other institutions’ available liquidity, while third party liquidity providers guarantee execution—preserving their desired portfolio composition at all times. In our exchange, institutions submit encrypted orders which are crossed, leaving a “remainder”. The exchange proves facts about the portfolio risk of this remainder to third party liquidity providers without revealing the securities in the remainder, the knowledge of which could also be exploited. The third parties learn either (depending on the setting) the portfolio risk parameters of the remainder itself, or how their own portfolio risk would change if they were to incorporate the remainder into a portfolio they submit. In one setting, these third parties submit bids on the commission, and the winner supplies necessary liquidity for the entire exchange to clear. This guaranteed clearing, coupled with external price discovery from the primary markets for the securities, sidesteps difficult combinatorial optimization problems. This latter method of proving how taking on the remainder would change risk parameters of one’s own portfolio, without revealing the remainder’s contents or its own risk parameters, is a useful protocol of independent interest.
Valenzuela, Javier
2001-01-01
A radial flow heat exchanger (20) having a plurality of first passages (24) for transporting a first fluid (25) and a plurality of second passages (26) for transporting a second fluid (27). The first and second passages are arranged in stacked, alternating relationship, are separated from one another by relatively thin plates (30) and (32), and surround a central axis (22). The thickness of the first and second passages are selected so that the first and second fluids, respectively, are transported with laminar flow through the passages. To enhance thermal energy transfer between first and second passages, the latter are arranged so each first passage is in thermal communication with an associated second passage along substantially its entire length, and vice versa with respect to the second passages. The heat exchangers may be stacked to achieve a modular heat exchange assembly (300). Certain heat exchangers in the assembly may be designed slightly differently than other heat exchangers to address changes in fluid properties during transport through the heat exchanger, so as to enhance overall thermal effectiveness of the assembly.
Introduction to Evolutionary Algorithms
Yu, Xinjie
2010-01-01
Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm optimization
Recursive forgetting algorithms
DEFF Research Database (Denmark)
Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan
1992-01-01
In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm…
Multi-agent Pareto appointment exchanging in hospital patient scheduling
Vermeulen, I.B.; Bohté, S.M.; Somefun, D.J.A.; Poutré, La J.A.
2007-01-01
We present a dynamic and distributed approach to the hospital patient scheduling problem, in which patients can have multiple appointments that have to be scheduled to different resources. To efficiently solve this problem we develop a multi-agent Pareto-improvement appointment exchanging algorithm:
Explaining algorithms using metaphors
Forišek, Michal
2013-01-01
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo
Algorithms in Algebraic Geometry
Dickenstein, Alicia; Sommese, Andrew J
2008-01-01
In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications, and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its Applications.
Woo, Andrew
2012-01-01
Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Quick fuzzy backpropagation algorithm.
Nikov, A; Stoeva, S
2001-03-01
A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions for the QuickFBP and FBP algorithms are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight-training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas such as adaptive and adaptable interactive systems and data mining.
Portfolios of quantum algorithms.
Maurer, S M; Hogg, T; Huberman, B A
2001-12-17
Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.
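The trade-off described above can be illustrated with a toy classical model: if a stochastic search succeeds independently with probability p at each step, running k copies in parallel yields a geometric completion time with a larger per-step success probability, shrinking both the mean and the variance. This is a simplified sketch for intuition only, not the paper's quantum setting; the function name and the geometric model are illustrative assumptions.

```python
def portfolio_stats(p, k):
    """Mean and variance of the completion time for a portfolio of k
    independent copies of a stochastic search that succeeds with
    probability p at each step (toy geometric model, not the paper's
    quantum algorithms)."""
    q = 1.0 - (1.0 - p) ** k   # per-step success probability of the portfolio
    mean = 1.0 / q             # mean of a geometric distribution
    var = (1.0 - q) / q ** 2   # variance of a geometric distribution
    return mean, var
```

For p = 0.1, a four-copy portfolio cuts the expected time from 10 steps to about 2.9, and reduces the variance even more sharply, which mirrors the abstract's point about controlling both running time and its uncertainty.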
Algorithm 426 : Merge sort algorithm [M1
Bron, C.
1972-01-01
Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
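The recursive elegance the abstract highlights carries over directly to modern languages; a Python transliteration of the two-way merge idea (an illustrative sketch, not the ALGOL 60 original) might look like:

```python
def merge_sort(xs):
    """Recursive two-way merge sort: split, sort each half, merge."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Merge the two sorted halves; <= keeps the sort stable.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

The correctness argument is the one the abstract alludes to: each half is sorted by induction, and the merge preserves order, giving O(n log n) comparisons.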
Social signals and algorithmic trading of Bitcoin.
Garcia, David; Schweitzer, Frank
2015-09-01
The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various data sources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology, and transaction volume of Bitcoin. We add social signals related to information search, word-of-mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading based on social-media sentiment has the potential to yield positive returns on investment.
Tensor exchange amplitudes in K±N charge exchange reactions
International Nuclear Information System (INIS)
Svec, M.
1979-01-01
Tensor (A2) exchange amplitudes in K±N charge exchange (CEX) are constructed from the K±N CEX data supplemented by information on the vector (ρ) exchange amplitudes from πN scattering. We observe new features in the t-structure of the A2 exchange amplitudes which contradict the t-dependence anticipated by most Regge models. The results also provide evidence for violation of weak exchange degeneracy.
Performance measurement of plate fin heat exchanger by exploration: ANN, ANFIS, GA, and SA
A.K. Gupta; P. Kumar; R.K. Sahoo; A.K. Sahu; S.K. Sarangi
2017-01-01
An experimental work is conducted on a counter-flow plate-fin compact heat exchanger using offset strip fins under different mass flow rates. The training, testing, and validation sets of data were collected by conducting experiments. Next, an artificial neural network merged with a Genetic Algorithm (GA) was utilized to measure the performance of the plate-fin compact heat exchanger. The main aim of the present research is to measure the performance of the plate-fin compact heat exchanger and to provide full exp...
Exchanging Description Logic Knowledge Bases
Arenas, M.; Botoeva, E.; Calvanese, D.; Ryzhikov, V.; Sherkhonov, E.
2012-01-01
In this paper, we study the problem of exchanging knowledge between a source and a target knowledge base (KB), connected through mappings. Differently from the traditional database exchange setting, which considers only the exchange of data, we are interested in exchanging implicit knowledge. As
Social dilemmas as exchange dilemmas
Dijkstra, Jacob; van Assen, Marcel A.L.M.
2016-01-01
We develop a new paradigm to study social dilemmas, called exchange dilemmas. Exchange dilemmas arise from externalities of exchanges with third parties, and many real-life social dilemmas are more accurately modeled as exchange dilemmas rather than prisoner's dilemmas. Building on focusing and
Heat exchanger restart evaluation
International Nuclear Information System (INIS)
Morrison, J.M.; Hirst, C.W.; Lentz, T.F.
1992-01-01
On December 24, 1991, the K-Reactor was in the shutdown mode with full AC process water flow and full cooling water flow. Safety rod testing was being performed as part of the power ascension testing program. The results of cooling water samples indicated tritium concentrations higher than allowable. Further sampling and testing confirmed a Process Water System to Cooling Water System leak in heat exchanger 4A (HX 4A). The heat exchanger was isolated and the plant shut down. Heat exchanger 4A was removed from the plant and moved to C-Area prior to performing examinations and diagnostic testing. This included locating and identifying the leaking tube or tubes, eddy current examination of the leaking tube and a number of adjacent tubes, and visually inspecting the leaking tube from both the inside as well as the area surrounding the identified tube. The leaking tube was removed and examined metallurgically to determine the failure mechanism. In addition, ten other tubes that either exhibited eddy current indications or would represent a baseline condition were removed from heat exchanger 4A for metallurgical examination. Additional analysis and review of heat exchanger leakage history was performed to determine if there are any patterns which can be used for predictive purposes. Compensatory actions have been taken to improve the sensitivity and response time to any future events of this type. The results of these actions are summarized herein
Composite Differential Search Algorithm
Directory of Open Access Journals (Sweden)
Bo Liu
2014-01-01
Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement that organisms use to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2,” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of the different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithms perform better than, or at least comparably to, the original algorithm in terms of the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the proposed search schemes, “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1,” with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability on the 23 benchmark functions.
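The scheme names above follow the differential-evolution naming convention. As a rough, hypothetical illustration of what a "rand/1"-style donor looks like (the actual DS update uses a Brownian-like move and differs in detail; the function name and scale parameter are assumptions for illustration):

```python
import random

def rand1_donor(population, scale=0.8):
    """Generate a 'rand/1'-style donor vector: pick three distinct random
    members r1, r2, r3 and combine them as r1 + scale * (r2 - r3).
    Illustrates the naming convention only; the DS algorithm's own
    update rule is different."""
    r1, r2, r3 = random.sample(population, 3)
    return [a + scale * (b - c) for a, b, c in zip(r1, r2, r3)]
```

With scale = 0 the donor degenerates to a copy of a random population member, which shows how the scale factor controls how far the difference vector perturbs the base vector.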
Algorithms and Their Explanations
Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.
2014-01-01
By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of
Finite lattice extrapolation algorithms
International Nuclear Information System (INIS)
Henkel, M.; Schuetz, G.
1987-08-01
Two algorithms for sequence extrapolation, due to von den Broeck and Schwartz, and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite-lattice data are available. (orig.)
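Sequence extrapolation of this kind can be sketched generically: treat the finite-lattice estimates f(L) as samples of a function of x = 1/L and extrapolate to x = 0 (L → ∞). The polynomial (Neville) version below is an illustrative stand-in; the Bulirsch-Stoer scheme compared in the paper uses rational rather than polynomial interpolation.

```python
def extrapolate_to_infinity(L_values, f_values):
    """Neville polynomial extrapolation of finite-lattice data f(L) to
    L -> infinity: interpolate f as a polynomial in x = 1/L and evaluate
    at x = 0. A generic sketch of sequence extrapolation, not the
    Bulirsch-Stoer rational scheme itself."""
    x = [1.0 / L for L in L_values]
    t = list(f_values)          # t[i] holds the level-(k-1) estimates
    n = len(t)
    for k in range(1, n):
        for i in range(n - k):
            # Neville recursion evaluated at x = 0.
            t[i] = (x[i + k] * t[i] - x[i] * t[i + 1]) / (x[i + k] - x[i])
    return t[0]
```

For data of the form f(L) = a + b/L the scheme recovers the limit a exactly, which is why such extrapolations converge quickly when corrections are smooth in 1/L.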
Recursive automatic classification algorithms
Energy Technology Data Exchange (ETDEWEB)
Bauman, E V; Dorofeyuk, A A
1982-03-01
A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.
DEFF Research Database (Denmark)
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available…
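A canonical example of a sequential vertex-colouring algorithm with a worst-case guarantee is the greedy algorithm, which never uses more than Δ+1 colours, where Δ is the maximum degree. The sketch below is illustrative, not code from the chapter:

```python
def greedy_colouring(adj):
    """Greedy vertex colouring in a fixed vertex order: give each vertex
    the smallest colour not used by an already-coloured neighbour.
    Uses at most Delta+1 colours (Delta = maximum degree)."""
    colour = {}
    for v in sorted(adj):
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:    # smallest non-negative colour not in use
            c += 1
        colour[v] = c
    return colour
```

The Δ+1 bound holds because when a vertex is coloured, at most Δ neighbour colours are forbidden, so some colour in {0, …, Δ} is always free.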
8. Algorithm Design Techniques
Indian Academy of Sciences (India)
Algorithms – Algorithm Design Techniques. R K Shyamasundar. Series article, Resonance – Journal of Science Education, Volume 2, Issue 8. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.
Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing
2016-03-03
This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is supposed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in the WSN. To obtain a faster convergence speed as well as a higher possibility of having the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as those given by the centralized clustering algorithms.
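The consensus step that such distributed algorithms rely on can be sketched in isolation: each node repeatedly nudges its local value toward its neighbours' values, so all nodes converge to the network-wide average without a central coordinator. A minimal sketch of discrete-time average consensus, under the assumption of an undirected connected graph and a small step size (the paper's exact update and the clustering layers on top are not reproduced):

```python
def average_consensus(values, neighbors, steps=200, eps=0.2):
    """Discrete-time average consensus on an undirected graph:
    x_i <- x_i + eps * sum_j (x_j - x_i) over neighbours j of i.
    For a connected graph and small enough eps, every node converges
    to the global average of the initial values."""
    v = list(map(float, values))
    for _ in range(steps):
        # Synchronous update: all nodes read the old values v.
        v = [v[i] + eps * sum(v[j] - v[i] for j in neighbors[i])
             for i in range(len(v))]
    return v
```

On a 4-node ring with initial values 1, 2, 3, 4, every node ends up near the average 2.5; a clustering algorithm can run such a consensus on centroid sums and counts to emulate the centralized update.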
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
Group leaders optimization algorithm
Daskin, Anmer; Kais, Sabre
2011-03-01
We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for the Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.
International Nuclear Information System (INIS)
Noga, M.T.
1984-01-01
This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry.
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Directory of Open Access Journals (Sweden)
Francesca Musiani
2013-08-01
Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet a lot of what constitutes 'algorithms', beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013), is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and the ways in which they are artefacts of governance, providing two examples drawn from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms and algorithms’ regulation of our society.
Where genetic algorithms excel.
Baum, E B; Boneh, D; Garrett, C
2001-01-01
We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.
DEFF Research Database (Denmark)
Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino
2016-01-01
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality … of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed…
Update heat exchanger designing principles
International Nuclear Information System (INIS)
Lipets, A.U.; Yampol'skij, A.E.
1985-01-01
Updated heat exchanger design principles are analysed. Different coolant flow patterns in a heat exchanger are considered. It is suggested to rationally organize flow-rate irregularity in the exchanger. Exploiting the temperature and flow-rate irregularities that actually exist when designing heat exchangers will improve heat exchanger efficiency. In some cases it is expedient to produce irregularities artificially. In this connection, some heat exchanger design principles must now be reviewed.
Social dilemmas as exchange dilemmas
Dijkstra, J.; van Assen, M.A.L.M.
2016-01-01
We develop a new paradigm to study social dilemmas, called exchange dilemmas. Exchange dilemmas arise from externalities of exchanges with third parties, and many real-life social dilemmas are more accurately modeled as exchange dilemmas rather than prisoner's dilemmas. Building on focusing and framing research, we predict that defection is omnipresent in exchange dilemmas, which is corroborated in two very different experiments. Our results suggest that the fundamental problem of cooperation in...
Energy Technology Data Exchange (ETDEWEB)
Bourg, I.C.; Sposito, G.
2011-05-01
Ion exchange phenomena involve the population of readily exchangeable ions, the subset of adsorbed solutes that balance the intrinsic surface charge and can be readily replaced by major background electrolyte ions (Sposito, 2008). These phenomena have occupied a central place in soil chemistry research since Way (1850) first showed that potassium uptake by soils resulted in the release of an equal quantity of moles of charge of calcium and magnesium. Ion exchange phenomena are now routinely modeled in studies of soil formation (White et al., 2005), soil reclamation (Kopittke et al., 2006), soil fertilization (Agbenin and Yakubu, 2006), colloidal dispersion/flocculation (Charlet and Tournassat, 2005), the mechanics of argillaceous media (Gajo and Loret, 2007), aquitard pore water chemistry (Tournassat et al., 2008), and groundwater (Timms and Hendry, 2007; McNab et al., 2009) and contaminant hydrology (Chatterjee et al., 2008; van Oploo et al., 2008; Serrano et al., 2009).
International Nuclear Information System (INIS)
Bradbury, M.H.; Baeyens, B.
1994-04-01
A procedure for introducing exchange into geochemical/surface complexation codes is described. Beginning with selectivity coefficients, Kc, defined in terms of equivalent fractional ion occupancies, a general expression for the molar-based exchange code input parameters, Kex, is derived. In natural systems the uptake of nuclides onto complex sorbents often occurs by more than one mechanism. The incorporation of cation exchange and surface complexation into a geochemical code therefore enables sorption by both mechanisms to be calculated simultaneously. The code and model concepts are tested against sets of experimental data from widely different sorption studies. A proposal is made to set up a data base of selectivity coefficients. Such a data base would form part of a more general one consisting of sorption-mechanism-specific parameters to be used in conjunction with geochemical/sorption codes to model and predict sorption. (author) 6 figs., 6 tabs., 26 refs
International Nuclear Information System (INIS)
Hayden, Owen; Willby, C.R.
1976-01-01
The invention concerns a heat exchanger whose tubes, placed in a long casing, cross the casing cover in a sealed manner. These tubes are fixed to the tube plate forming this cover, or to the branch tubes it comprises, by means of compression joints. These joints make it possible to do away with welds, which are sources of defects, and to improve the operational safety of the apparatus. An advantageous form of the heat exchanger under the invention includes a manifold for each thermal-exchange fluid, and one end of each tube is connected to this manifold by a pipe that is itself connected to the tube by a threaded connection. The latter provides for easy disconnection of the pipe in order to introduce a probe for inspecting the state of the tubes.
Channel Access Algorithm Design for Automatic Identification System
Institute of Scientific and Technical Information of China (English)
Oh Sang-heon; Kim Seung-pum; Hwang Dong-hwan; Park Chan-sik; Lee Sang-jeong
2003-01-01
The Automatic Identification System (AIS) is maritime equipment that allows an efficient exchange of navigational data between ships, and between ships and shore stations. It utilizes a channel access algorithm which can quickly resolve conflicts without any intervention from control stations. In this paper, a design of a channel access algorithm for the AIS is presented. The input/output relationship of each access algorithm module is defined by drawing the state transition diagram, dataflow diagram and flowchart based on the technical standard, ITU-R M.1371. In order to verify the designed channel access algorithm, a simulator was developed using the C/C++ programming language. The results show that the proposed channel access algorithm can properly allocate transmission slots and meet the operational performance requirements specified by the technical standard.
Classification of exchange currents
International Nuclear Information System (INIS)
Friar, J.L.
1983-01-01
After expansion of the vector and axial-vector currents in powers of (v/c), a heretofore unremarked regularity results. Meson exchange currents can be classified into types I and II, according to the way they satisfy the constraints of special relativity. The archetypes of these two categories are the impulse approximations to the vector and axial-vector currents. After a brief discussion of these constraints, the (ρπγ) and (ωσγ) exchange currents are constructed and classified, and used to illustrate a number of important points which are often overlooked.
Alert Exchange Process Protocol
Groen, Frank
2015-01-01
The National Aeronautics and Space Administration of the United States of America (NASA), the European Space Agency (ESA) and the Japanese Aerospace Exploration Agency (JAXA) acknowledge that they have a mutual interest in exchanging Alerts and Alert Status Lists to enhance the information base for each system participant while fortifying the general level of cooperation between the policy agreement subscribers. Each Party will exchange Alert listings on a regular basis and detailed Alert information on a need-to-know basis, to the extent permitted by law.
Lipid exchange by ultracentrifugation
DEFF Research Database (Denmark)
Drachmann, Nikolaj Düring; Olesen, Claus
2014-01-01
The complex interplay between the lipids and the P-type ATPases is still not well understood. We here describe a robust method to exchange the majority of the lipids surrounding the ATPase after solubilisation and/or purification with a target lipid of interest. The method is based on an ultracentrifugation step, where the protein sample is spun through a dense buffer containing a large excess of the target lipid, which results in an approximately 80-85 % lipid exchange. The method is a very gentle technique that maintains protein folding during the process, hence allowing further characterization.
Microscale Regenerative Heat Exchanger
Moran, Matthew E.; Stelter, Stephan; Stelter, Manfred
2006-01-01
The device described herein is designed primarily for use as a regenerative heat exchanger in a miniature Stirling engine or Stirling-cycle heat pump. A regenerative heat exchanger (sometimes called, simply, a "regenerator" in the Stirling-engine art) is basically a thermal capacitor: Its role in the Stirling cycle is to alternately accept heat from, then deliver heat to, an oscillating flow of a working fluid between compression and expansion volumes, without introducing an excessive pressure drop. These volumes are at different temperatures, and conduction of heat between these volumes is undesirable because it reduces the energy-conversion efficiency of the Stirling cycle.
International Nuclear Information System (INIS)
Huff, Thomas
2010-01-01
Small Column Ion Exchange (SCIX) leverages a suite of technologies developed by DOE across the complex to achieve lifecycle savings. Technologies are applicable to multiple sites, and early testing supported multiple sites; the balance of SRS SCIX testing supports SRS deployment. A formal Systems Engineering Evaluation (SEE) was performed and selected Small Column Ion Exchange columns containing Crystalline Silicotitanate (CST) in a 2-column lead/lag configuration. The SEE also considered use of Spherical Resorcinol-Formaldehyde (sRF). Advantages of the approach at SRS include: (1) no new buildings; (2) a low volume of Cs waste in solid form compared to aqueous strip effluent; and (3) availability of downstream processing facilities for immediate processing of spent resin.
A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm
Thirer, Nonel
2013-05-01
With the evolution of digital data storage and exchange, it is essential to protect confidential information from any unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware, and many methods to attack cipher texts have also been developed. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of cipher texts and also in encryption ciphers. This paper analyses the possibility of using the genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: input data, AES core, key generator, output data) to provide fast encryption and storage/transmission of a large amount of data.
Directory of Open Access Journals (Sweden)
Hans Schonemann
1996-12-01
Full Text Available Some algorithms for singularity theory and algebraic geometry The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring in 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].
A New Modified Firefly Algorithm
Directory of Open Access Journals (Sweden)
Medha Gupta
2016-07-01
Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
An Improved Nested Sampling Algorithm for Model Selection and Assessment
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
A multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight which represents the plausibility of that model. In a Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space gradually from the low-likelihood area to the high-likelihood area, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, in order to overcome the computational burden of the many repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
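The nested-sampling evidence estimate described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the naive rejection loop stands in for the M-H or DREAMzs local samplers the abstract discusses, and the prior mass left after the final iteration is neglected.

```python
import math
import random

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def nested_sampling(loglike, prior_sample, n_live=50, n_iter=300, rng=None):
    """Minimal nested-sampling marginal-likelihood estimator.

    Each iteration discards the worst live point, credits it with prior
    mass w_i = X_{i-1} - X_i (with X_i = exp(-i / n_live)) and replaces it
    by a new prior draw satisfying the likelihood constraint."""
    rng = rng or random.Random(1)
    live = [prior_sample(rng) for _ in range(n_live)]
    logL = [loglike(x) for x in live]
    logZ = -math.inf
    for i in range(n_iter):
        worst = min(range(n_live), key=lambda k: logL[k])
        lmin = logL[worst]
        logw = math.log(math.exp(-i / n_live) - math.exp(-(i + 1) / n_live))
        logZ = logaddexp(logZ, lmin + logw)
        while True:  # constrained prior sampling (the 'local sampling' step)
            x = prior_sample(rng)
            lx = loglike(x)
            if lx > lmin:
                live[worst], logL[worst] = x, lx
                break
    return logZ
```

For a uniform prior on [0, 1] and a narrow Gaussian likelihood centred inside the interval, the evidence is close to 1 (log Z near 0), which the sketch reproduces.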
International Nuclear Information System (INIS)
Dinev, D.
1996-01-01
Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
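A random-search sorting loop of the kind described can be sketched as follows. The goal function is supplied by the caller; in the paper it is the phase-space distortion (smear), while any stand-in (such as a running-sum proxy for orbit distortion) illustrates the mechanics.

```python
import random

def sort_magnets(errors, goal, n_steps=2000, rng=None):
    """Combinatorial random search over magnet installation orders:
    repeatedly swap two magnets and keep the swap only if the goal
    function (the smear, in the paper) improves."""
    rng = rng or random.Random(3)
    order = list(range(len(errors)))
    best = goal([errors[k] for k in order])
    for _ in range(n_steps):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        g = goal([errors[k] for k in order])
        if g < best:
            best = g
        else:
            order[i], order[j] = order[j], order[i]  # revert the swap
    return order, best
```

With measured field errors per magnet and a goal that penalises large accumulated error, the loop returns a permutation of installation positions with a reduced goal value.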
International Nuclear Information System (INIS)
Haase, G.
2003-01-01
Neural nets for the plausibility check of measured values in the ''Integrated Measurement and Information System for the Surveillance of Environmental Radioactivity (IMIS)'' is a research project supported by the Federal Minister for the Environment, Nature Conservation and Nuclear Safety. A goal of this project was the automatic recognition of implausible measured values in the ORACLE database, which contains measured values from the surveillance of environmental radioactivity in the most diverse environmental media. The project [1] was carried out by the Institute of Logic, Complexity and Deduction Systems of the University of Karlsruhe under the direction of Professor Dr. Menzel, Dr. Martin Riedmueller and Martin Lauer. (orig.)
Scholl, Stephan
2018-01-01
This accessible book presents unconventional technologies in heat exchanger design that have the capacity to provide solutions to major concerns within the process and power-generating industries. Demonstrating the advantages and limits of these innovative heat exchangers, it also discusses micro- and nanostructure surfaces and micro-scale equipment, and introduces pillow-plate, helical and expanded metal baffle concepts. It offers step-by-step worked examples, which provide instructions for developing an initial configuration and are supported by clear, detailed drawings and pictures. Various types of heat exchangers are available, and they are widely used in all fields of industry for cooling or heating purposes, including in combustion engines. The market in 2012 was estimated to be US$ 42.7 billion and the global demand for heat exchangers is experiencing an annual growth of about 7.8 %. The market value is expected to reach US$ 57.9 billion in 2016, and approach US$ 78.16 billion in 2020. Providing a valua...
Schaper, Torsten
In recent years equity exchanges have diversified their operations into business areas such as derivatives trading, post-trading services, and software sales. Securities trading and post-trading are subject to economies of scale and scope. The integration of these functions into one institution ensures efficiency by economizing on transaction costs.
Resonance charge exchange processes
International Nuclear Information System (INIS)
Duman, E.L.; Evseev, A.V.; Eletskij, A.V.; Radtsig, A.A.; Smirnov, B.M.
1979-01-01
The calculation results for the resonance charge exchange cross sections for positive and negative atomic and molecular ions are given. The calculations are performed on the basis of the asymptotic theory. The factors affecting the calculation accuracy are analysed. The calculation data for 28 systems are compared with experiment
Energy Technology Data Exchange (ETDEWEB)
Rafferty, Kevin D.; Culver, Gene
1998-01-01
Most geothermal fluids, because of their elevated temperature, contain a variety of dissolved chemicals. These chemicals are frequently corrosive toward standard materials of construction. As a result, it is advisable in most cases to isolate the geothermal fluid from the process to which heat is being transferred. The task of heat transfer from the geothermal fluid to a closed process loop is most often handled by a plate heat exchanger. The two most common types used in geothermal applications are: bolted and brazed. For smaller systems, in geothermal resource areas of a specific character, downhole heat exchangers (DHEs) provide a unique means of heat extraction. These devices eliminate the requirement for physical removal of fluid from the well. For this reason, DHE-based systems avoid entirely the environmental and practical problems associated with fluid disposal. Shell and tube heat exchangers play only a minor role in low-temperature, direct-use systems. These units have been in common use in industrial applications for many years and, as a result, are well understood. For these reasons, shell and tube heat exchangers will not be covered in this chapter.
J.G.M. van Marrewijk (Charles)
2005-01-01
This four-chapter overview of basic exchange rate theories discusses (i) the elasticity and absorption approach, (ii) the (long-run) implications of the monetary approach, (iii) the short-run effects of monetary and fiscal policy under various economic conditions, and (iv) the transition
2014-01-01
The present invention relates to a method for exchanging data between at least two servers with use of a gateway. Preferably the method is applied to healthcare systems. Each server holds a unique federated identifier, which identifier identifies a single patient (P). Thus, it is possible for the
Baltic Exchange toodi Tallinna
2007-01-01
The last part of the details of the former headquarters of the London shipping and maritime exchange Baltic Exchange, packed into sea containers, arrived in Tallinn in June 2007. Businessmen Heiti Hääl and Eerik-Niiles Kross, who bought the building's details, plan to find a site for the building in the centre of Tallinn. Commentary by E.-N. Kross
Barkla, Bronwyn J; Hirschi, Kendal D
2008-01-01
Tonoplast-localised proton-coupled Ca2+ transporters encoded by cation/H+ exchanger (CAX) genes play a critical role in sequestering Ca2+ into the vacuole. These transporters may function in coordination with Ca2+ release channels, to shape stimulus-induced cytosolic Ca2+ elevations. Recent analysis of Arabidopsis CAX knockout mutants, particularly cax1 and cax3, identified a variety of phenotypes including sensitivity to abiotic stresses, which indicated that these transporters might play a role in mediating the plant's stress response. A common feature of these mutants was the perturbation of H+-ATPase activity at both the tonoplast and the plasma membrane, suggesting a tight interplay between the Ca2+/H+ exchangers and H+ pumps. We speculate that indirect regulation of proton flux by the exchangers may be as important as the direct regulation of Ca2+ flux. These results suggest cautious interpretation of mutant Ca2+/H+ exchanger phenotypes that may be due to either perturbed Ca2+ or H+ transport. PMID:19841670
Telephone Exchange Maintenance
2005-01-01
Urgent maintenance work on CERN telephone exchanges will be performed on 24 March from 6 a.m. to 8 a.m. Telephone services may be disrupted or even interrupted during this time. For more details, please contact us by email at Standard.Telephone@cern.ch.
International Nuclear Information System (INIS)
Richards, D.J.W.
1977-01-01
The heat exchangers of various types are common items of plant in the generation and transmission of electricity. The amount of attention given to the flow-induced vibrations of heat exchangers by designers is usually related to the operational history of similar items of plant. Consequently, if a particular design procedure yields items of plant which behave in a satisfactory manner during their operational life, there is little incentive to improve or refine the design procedure. On the other hand, failures of heat exchangers clearly indicate deficiencies in the design procedures or in the data available to the designer. When such failures are attributable to flow-induced vibrations, the identification of the mechanisms involved is of prime importance. Ideally, basic research work provides the background understanding and the techniques necessary to be able to identify the important mechanisms. In practice, the investigation of a flow-induced vibration problem may identify the presence of mechanisms but may not be able to quantify their effects adequately. In these circumstances the need for additional work is established and the objectives of the research programme emerge. The purpose of this paper is to outline the background to the current research programme at C.E.R.L. on heat exchanger vibration.
International Nuclear Information System (INIS)
Markhol, M.
1985-01-01
Existing methods of multi-element separation for radiochemical analysis are considered. The majority of existing methods is noted to be based on application of organic and inorganic ion exchangers. Distillation, coprecipitation, extraction as well as combination of the above methods are also used. Concrete flowsheets of multi-element separation are presented
International Nuclear Information System (INIS)
Kurabayashi, Masaharu.
1985-01-01
Purpose: To improve the stability and operability of fuel exchange work by checking the validity of the data before the initiation of the work. Constitution: A floppy disk stores initial charging-state data showing the arrangement of fuel assemblies in the reactor core pool, data showing the working procedures for the fuel exchange, and final charged-state data upon completion of the work. The initial data and the procedure data are read from the disk and stored in a memory. Then, the working procedures are applied sequentially to the initial data in memory, and the result is compared with the final data read from the disk. After confirming that there are no errors in the working data, the procedure data are instructed in order to the fuel exchanger for performing fuel replacement. Accordingly, since the data are checked before the initiation of the work, the fuel exchange can be performed automatically, thereby improving operability. (Yoshino, Y.)
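The pre-work check amounts to simulating the procedure data on the initial core map and comparing the result with the stored final map. A sketch, with hypothetical position and assembly identifiers:

```python
def validate_exchange_plan(initial, moves, final):
    """Pre-work data check: simulate the working procedure on the initial
    core map and compare the result with the stored final map.

    initial / final: dicts mapping position -> fuel assembly id.
    moves: list of (source_position, destination_position) steps."""
    core = dict(initial)
    for src, dst in moves:
        if src not in core or dst in core:
            return False  # step not executable in the simulated state
        core[dst] = core.pop(src)
    return core == final
```

Only if the simulation reproduces the final state exactly would the procedure data be handed to the fuel exchanger.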
International Nuclear Information System (INIS)
Martoch, J.; Kugler, V.; Krizek, V.; Strmiska, F.
1988-01-01
The claimed heat exchanger is characterized by the condensate level being maintained directly in the exchanger while preserving the so-called ''dry'' tube plate. This makes it unnecessary to build another pressure vessel into the circuit. The design of the heat exchanger allows access to both tube plates, which facilitates any repair. Another advantage is the possibility of accelerating the indication of leakage from the space of the second operating medium, which is given by opening the drainage pipes of the lower bundle into the collar space and from there through to the indication pipe. The exchanger is especially suitable for deployment in the circuits of nuclear power plants where the second operating medium is hot water of considerably lower purity than that of the condensate. A rapid indication of leakage can prevent any long-term penetration of this water into the condensate, which would result in worsening water quality in the entire secondary circuit of the nuclear power plant. (J.B.). 1 fig
Energy Technology Data Exchange (ETDEWEB)
Richards, D J.W. [CERL, CEGB, Leatherhead, Surrey (United Kingdom)
1977-12-01
The heat exchangers of various types are common items of plant in the generation and transmission of electricity. The amount of attention given to the flow-induced vibrations of heat exchangers by designers is usually related to the operational history of similar items of plant. Consequently, if a particular design procedure yields items of plant which behave in a satisfactory manner during their operational life, there is little incentive to improve or refine the design procedure. On the other hand, failures of heat exchangers clearly indicate deficiencies in the design procedures or in the data available to the designer. When such failures are attributable to flow-induced vibrations, the identification of the mechanisms involved is of prime importance. Ideally, basic research work provides the background understanding and the techniques necessary to be able to identify the important mechanisms. In practice, the investigation of a flow-induced vibration problem may identify the presence of mechanisms but may not be able to quantify their effects adequately. In these circumstances the need for additional work is established and the objectives of the research programme emerge. The purpose of this paper is to outline the background to the current research programme at C.E.R.L. on heat exchanger vibration.
Algorithms for parallel computers
International Nuclear Information System (INIS)
Churchhouse, R.F.
1985-01-01
Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)
Fluid structure coupling algorithm
International Nuclear Information System (INIS)
McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.
1980-01-01
A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid structure and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D
Hockney, Roger
1987-01-01
Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, the recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.
Diagnostic Algorithm Benchmarking
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Inclusive Flavour Tagging Algorithm
International Nuclear Information System (INIS)
Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex
2016-01-01
Identifying the flavour of neutral B mesons at production is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment. (paper)
Unsupervised learning algorithms
Aydin, Kemal
2016-01-01
This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation have resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...
Analytical applications of ion exchangers
Inczédy, J
1966-01-01
Analytical Applications of Ion Exchangers presents the laboratory use of ion-exchange resins. This book discusses the development in the analytical application of ion exchangers. Organized into 10 chapters, this book begins with an overview of the history and significance of ion exchangers for technical purposes. This text then describes the properties of ion exchangers, which are large molecular water-insoluble polyelectrolytes having a cross-linked structure that contains ionic groups. Other chapters consider the theories concerning the operation of ion-exchange resins and investigate th
Horizontal Curve Virtual Peer Exchange : an RSPCB Peer Exchange
2014-06-01
This report summarizes the Horizontal Curve Virtual Peer Exchange sponsored by the Federal Highway Administration (FHWA) Office of Safety's Roadway Safety Professional Capacity Building Program on June 17, 2014. This virtual peer exchange was the f...
Vector Network Coding Algorithms
Ebrahimi, Javad; Fragouli, Christina
2010-01-01
We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
Optimization algorithms and applications
Arora, Rajesh Kumar
2015-01-01
Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc
From Genetics to Genetic Algorithms
Indian Academy of Sciences (India)
Genetic algorithms (GAs) are computational optimisation schemes with an … The algorithms solve optimisation problems … Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley Publishing Company, Inc., 1989.
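A minimal bit-string GA of the textbook kind can be sketched as follows: binary tournament selection, one-point crossover and per-bit mutation, shown on the illustrative "onemax" problem (maximise the number of 1-bits). Parameter values are illustrative defaults, not taken from the reference.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      p_mut=0.02, rng=None):
    """Textbook bit-string GA: tournament selection, one-point
    crossover and per-bit mutation."""
    rng = rng or random.Random(42)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.sample(range(pop_size), 2)
            return pop[a] if fitness(pop[a]) >= fitness(pop[b]) else pop[b]
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)                # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(sum)  # 'onemax': fitness = number of 1-bits
```

On onemax the population converges towards the all-ones string within a few dozen generations.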
Algorithmic Principles of Mathematical Programming
Faigle, Ulrich; Kern, Walter; Still, Georg
2002-01-01
Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear
Directory of Open Access Journals (Sweden)
Wang Zi Min
2016-01-01
Full Text Available With the development of social services and rising living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has seen a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms and security. Using RFID technology for localization is a new direction in the eyes of various research institutions and scholars. RFID positioning offers system stability, small error and low cost, and its location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, a higher-accuracy network-based location method is presented; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, the algorithms of RFID location technology are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, with a vision of better future RFID positioning technology.
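The LANDMARC algorithm mentioned above locates a tag by comparing its received signal strength (RSSI) vector against reference tags at known positions and taking a weighted centroid of the k nearest neighbours in signal space. A sketch, with synthetic RSSI values standing in for real reader measurements:

```python
import math

def landmarc_locate(target_rssi, ref_tags, k=4):
    """LANDMARC nearest-neighbour localisation: rank reference tags by
    Euclidean distance in signal (RSSI) space and return the weighted
    centroid of the k nearest, with weights proportional to 1/E^2."""
    scored = []
    for (x, y), rssi in ref_tags:
        e = math.sqrt(sum((a - b) ** 2 for a, b in zip(target_rssi, rssi)))
        scored.append((e, x, y))
    scored.sort()
    nearest = scored[:k]
    weights = [1.0 / (e * e + 1e-9) for e, _, _ in nearest]
    total = sum(weights)
    px = sum(w * x for w, (_, x, _) in zip(weights, nearest)) / total
    py = sum(w * y for w, (_, _, y) in zip(weights, nearest)) / total
    return px, py
```

With a grid of reference tags and an ideal distance-based RSSI model, the estimate lands close to the tracked tag's true position.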
Directory of Open Access Journals (Sweden)
Surafel Luleseged Tilahun
2012-01-01
Full Text Available Firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the best direction in which the brightness increases. If such a direction is not generated, it remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time.
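The modification described above can be sketched as follows: the brightest firefly probes several random directions and moves only when one improves its brightness, while the others follow the standard attraction rule. Parameter values are illustrative, and minimisation of a sphere function stands in for the paper's benchmarks.

```python
import math
import random

def modified_firefly(f, dim=2, n=15, iters=100, beta0=1.0, gamma=0.01,
                     alpha=0.2, m_dirs=10, rng=None):
    """Firefly algorithm with directional probing for the brightest
    firefly (minimisation, so lower f means brighter)."""
    rng = rng or random.Random(7)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vals = [f(x) for x in X]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:  # firefly j is brighter: move i towards j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
                    vals[i] = f(X[i])
        # modification: the brightest firefly tries m_dirs random directions
        b = min(range(n), key=lambda k: vals[k])
        for _ in range(m_dirs):
            cand = [a + alpha * rng.gauss(0.0, 1.0) for a in X[b]]
            if f(cand) < vals[b]:  # move only if brightness improves
                X[b], vals[b] = cand, f(cand)
    b = min(range(n), key=lambda k: vals[k])
    return X[b]

best = modified_firefly(lambda x: sum(v * v for v in x))  # sphere benchmark
```

Because the brightest firefly only accepts improving moves, the best objective value in the swarm decreases monotonically, unlike the purely random step of the standard algorithm.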
The Metaphysics of Economic Exchanges
Directory of Open Access Journals (Sweden)
Massin Olivier
2017-05-01
Full Text Available What are economic exchanges? The received view has it that exchanges are mutual transfers of goods motivated by inverse valuations thereof. As a corollary, the standard approach treats exchanges of services as a subspecies of exchanges of goods. We raise two objections against this standard approach. First, it is incomplete, as it fails to take into account, among other things, the offers and acceptances that lie at the core of even the simplest cases of exchanges. Second, it ultimately fails to generalize to exchanges of services, in which neither inverse preferences nor mutual transfers hold true. We propose an alternative definition of exchanges, which treats exchanges of goods as a special case of exchanges of services and which builds in offers and acceptances. According to this theory: (i) the valuations motivating exchanges are propositional and convergent rather than objectual and inverse; (ii) all exchanges of goods involve exchanges of services/actions, but not the reverse; (iii) offers and acceptances, together with the contractual obligations and claims they bring about, lie at the heart of all cases of exchange.
Methods of Thermal Calculations for a Condensing Waste-Heat Exchanger
Directory of Open Access Journals (Sweden)
Rączka Paweł
2014-12-01
Full Text Available The paper presents the algorithms for a flue gas/water waste-heat exchanger with and without condensation of water vapour contained in flue gas, with experimental validation of theoretical results. The algorithms were used for calculations of the area of a heat exchanger using waste heat from a pulverised brown coal fired steam boiler operating in a power unit with a capacity of 900 MWe. In calculation of the condensing part, the results obtained with two algorithms were compared (the Colburn-Hobler and VDI algorithms). The VDI algorithm made it possible to take into account condensation of water vapour at flue gas temperatures above the water dew point. Thanks to this, the required heat transfer area could be calculated more accurately, which resulted in its reduction by 19%. In addition, the influence of the mass transfer on the heat transfer area was taken into account, which contributed to a further reduction in the calculated size of the heat exchanger - in total by 28% as compared with the Colburn-Hobler algorithm. The presented VDI algorithm was used to design a 312 kW pilot-scale condensing heat exchanger installed in PGE Belchatow power plant. The experimental results obtained are in good agreement with the calculated values.
Improved multivariate polynomial factoring algorithm
International Nuclear Information System (INIS)
Wang, P.S.
1978-01-01
A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included
Relabeling exchange method (REM) for learning in neural networks
Wu, Wen; Mammone, Richard J.
1994-02-01
The supervised training of neural networks requires the use of output labels which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when `optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and a genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM) which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation of the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.
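The pairwise-exchange idea behind REM can be illustrated with a small greedy sketch. This is a hypothetical simplification, not the authors' implementation: given a cost matrix where `cost[c][l]` is the training error incurred by assigning output label `l` to class `c`, the labels of two classes are swapped whenever the swap lowers total cost, until no pairwise swap helps.

```python
def relabel_exchange(cost, assignment=None):
    """Greedy pairwise-exchange heuristic (REM-style sketch).

    cost[c][l]: error of assigning label l to class c.
    Returns an assignment (list: class -> label index) that no single
    pairwise label swap can improve."""
    n = len(cost)
    a = list(range(n)) if assignment is None else list(assignment)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                # Would swapping the labels of classes i and j lower cost?
                delta = (cost[i][a[j]] + cost[j][a[i]]
                         - cost[i][a[i]] - cost[j][a[j]])
                if delta < 0:
                    a[i], a[j] = a[j], a[i]
                    improved = True
    return a
```

Like 2-opt in combinatorial optimization, this finds a local optimum of the label assignment without the cost of simulated annealing or a genetic search.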
Data Exchange Inventory (DEXI) System
Social Security Administration — DEXI is an intranet application used by SSA users to track all incoming and outgoing data exchanges between SSA and our data exchange partners. Information such as...
Crystal structure and cation exchanging properties of a novel open framework phosphate of Ce (IV)
Energy Technology Data Exchange (ETDEWEB)
Bevara, Samatha; Achary, S. N., E-mail: sachary@barc.gov.in; Tyagi, A. K. [Chemistry Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Homi Bhabha National Insitute, Anushakti Nagar, Mumbai 400094 (India); Patwe, S. J. [Chemistry Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Sinha, A. K. [Indus Synchrotrons Utilization Division, Raja Ramanna Centre for Advanced Technology, Indore 452013 (India); Mishra, R. K.; Kumar, Amar; Kaushik, C. P. [Waste Management Division, Bhabha Atomic Research Centre, Mumbai 400085 (India)
2016-05-23
Herein we report the preparation, crystal structure and ion-exchange properties of a new phosphate of tetravalent cerium, K2Ce(PO4)2. A monoclinic structure with a framework-type arrangement of Ce(PO4)6 units, formed by CeO8 square antiprisms and PO4 tetrahedra, is assigned to K2Ce(PO4)2. The K+ ions occupy the channels formed by the Ce(PO4)6 units and provide overall charge neutrality. The unique channel-type arrangement of the K+ ions makes them exchangeable with other cations. The ion-exchange properties of K2Ce(PO4)2 have been investigated by equilibrating with a solution of 90Sr followed by radiometric analysis. Under optimum conditions, significant exchange of K+ with Sr2+ with Kd ~ 8000 mL/g is observed. The details of the crystal structure and ion-exchange properties are explained and a plausible mechanism for ion exchange is presented.
Automatic fuel exchanging device
International Nuclear Information System (INIS)
Takahashi, Fuminobu.
1984-01-01
Purpose: To enable designation of the identification number of a fuel assembly in a nuclear reactor pressure vessel, so that the designated assembly can be exchanged reliably within a short time. Constitution: The identification number (or letter) pressed on the grip of a fuel assembly is detected by a two-dimensional ultrasonic probe of a pull-up mechanism. When the detected number corresponds with the designated number, a control signal is output, whereby the pull-up drive control mechanism or pull-up mechanism responds to pull up and exchange the fuel assembly of the identified number. With such a constitution, the fuel assembly can be recognized rapidly and reliably even if the pressed letters deviate to the left or right of the probe, and further, the hinge portion and the signal processing portion can be simplified. (Horiuchi, T.)
Manufacture of heat exchangers
International Nuclear Information System (INIS)
Burton, J.E.; Tombs, R.W.T.
1980-01-01
A tube bundle for use in a heat exchanger has a series of spaced parallel tubes supported by tube plates and is manufactured by depositing welding material around the end of each tube, machining the deposited material to form an annular flange around the end of the tube and welding the flange into apertures in the tube plate. Preferably the tubes have a length which is slightly less than the distance between the outer surfaces of the tube plates and the deposited material is deposited so that it overlaps and protects the end surfaces of the tubes. A plug may be inserted in the bore of the tubes during the welding material deposition which, as described, is effected by manual metal arc welding. One use of heat exchangers incorporating a tube bundle manufactured as above is in apparatus for reducing the volume of, and recovering nitric acid from, radioactive effluents from a nuclear reprocessing plant. (author)
Compact cryocooler heat exchangers
International Nuclear Information System (INIS)
Luna, J.; Frederking, T.H.K.
1991-01-01
Compact heat exchangers are subject to different constraints as a room temperature gas is cooled down by a cold stream returning from a JT valve (or a similar cryoprocess component). In particular, the optimization of exchangers for liquid helium systems has to cover a wide range in temperature and density of the fluid. In the present work we address the following thermodynamic questions: 1. the choice of intermediate temperatures that optimizes stage operation (a stage is assumed to have a constant cross section); 2. the optimum temperature difference available for best overall economic performance. The results are viewed in the context of porous media concepts applied to rather low speeds of fluid flow in narrow passages. In this paper examples of fluid/solid constraints imposed in this non-classical low temperature area are presented
Exchange currents in nuclear physics
International Nuclear Information System (INIS)
Truglik, Eh.
1980-01-01
Starting from Adler's low-energy theorem for the soft pion production amplitudes the predictions of the meson exchange currents theory for the nuclear physics are discussed. The results are reformulated in terms of phenomenological lagrangians. This method allows one to pass naturally to the more realistic case of hard mesons. The predictions are critically compared with the existing experimental data. The main processes in which vector isovector exchange currents, vector isoscalar exchange currents and axial exchange currents take place are pointed out
Hydrogen Exchange Mass Spectrometry
Mayne, Leland
2018-01-01
Hydrogen exchange (HX) methods can reveal much about the structure, energetics, and dynamics of proteins. The addition of mass spectrometry (MS) to an earlier fragmentation-separation HX analysis now extends HX studies to larger proteins at high structural resolution and can provide information not available before. This chapter discusses experimental aspects of HX labeling, especially with respect to the use of MS and the analysis of MS data. PMID:26791986
Exchange rate rebounds after foreign exchange market interventions
Hoshikawa, Takeshi
2017-03-01
This study examined rebounds in the exchange rate after foreign exchange intervention. When intervention is strongly effective, the exchange rate rebounds the next day, so the effect of intervention is slightly reduced by the rebound that follows it. The exchange rate might have been 67.12-77.47 yen to the US dollar without the yen-selling/dollar-purchasing interventions of 74,691,100 million yen implemented by the Japanese government since 1991, compared with the actual exchange rate of 103.19 yen to the US dollar at the end of March 2014.
International Nuclear Information System (INIS)
Gugel, G.
1976-01-01
Certain types of heat-exchangers have tubes opening through a tube sheet to a manifold having an access opening offset from alignment with the tube ends. A tool for inserting a device, such as for inspection or repair, is provided for use in such instances. The tool is formed by a flexible guide tube insertable through the access opening and having an inner end provided with a connector for connection with the opening of the tube in which the device is to be inserted, and an outer end which remains outside of the chamber, the guide tube having adequate length for this arrangement. A flexible transport hose for internally transporting the device slides inside the guide tube. This hose is long enough to slide through the guide tube, into the heat-exchanger tube, and through the latter to the extent required for the use of the device. The guide tube must be bent to reach the end of the heat-exchanger tube and the latter may be constructed with a bend, the hose carrying anti-friction elements at interspaced locations along its length to make it possible for the hose to negotiate such bends while sliding to the location where the use of the device is required
Timing Foreign Exchange Markets
Directory of Open Access Journals (Sweden)
Samuel W. Malone
2016-03-01
Full Text Available To improve short-horizon exchange rate forecasts, we employ foreign exchange market risk factors as fundamentals, and Bayesian treed Gaussian process (BTGP models to handle non-linear, time-varying relationships between these fundamentals and exchange rates. Forecasts from the BTGP model conditional on the carry and dollar factors dominate random walk forecasts on accuracy and economic criteria in the Meese-Rogoff setting. Superior market timing ability for large moves, more than directional accuracy, drives the BTGP’s success. We explain how, through a model averaging Monte Carlo scheme, the BTGP is able to simultaneously exploit smoothness and rough breaks in between-variable dynamics. Either feature in isolation is unable to consistently outperform benchmarks throughout the full span of time in our forecasting exercises. Trading strategies based on ex ante BTGP forecasts deliver the highest out-of-sample risk-adjusted returns for the median currency, as well as for both predictable, traded risk factors.
Directory of Open Access Journals (Sweden)
Kaba Dramane
2010-10-01
Full Text Available Abstract Background Landmark based geometric morphometrics (GM) allows the quantitative comparison of organismal shapes. When applied to systematics, it is able to score shape changes which often are undetectable by traditional morphological studies and even by classical morphometric approaches. It has thus become a fast and low-cost candidate to identify cryptic species. Due to inherent mathematical properties, shape variables derived from one set of coordinates cannot be compared with shape variables derived from another set. Raw coordinates which produce these shape variables could be used for data exchange, however they contain measurement error. The latter may represent a significant obstacle when the objective is to distinguish very similar species. Results We show here that a dataset derived by a single user produces much less classification error than one derived by multiple users. The question then becomes how to circumvent the lack of exchangeability of shape variables while preserving a single-user dataset. A solution to this question could lead to the creation of a relatively fast and inexpensive systematic tool adapted for the recognition of cryptic species. Conclusions To preserve both exchangeability of shape and a single-user-derived dataset, our suggestion is to create a free access bank of reference images from which one can produce raw coordinates and use them for comparison with external specimens. Thus, we propose an alternative geometric descriptive system that separates 2-D data gathering and analysis.
Social exchange: Relations and networks
Dijkstra, Jacob
2015-01-01
In this short paper, I review the literature on social exchange networks, with specific attention to theoretical and experimental research. I indicate how social exchange theory is rooted in general social theory and mention a few of its main links to social network analysis and empirical network research. The paper provides an accessible entry into the literature on social exchange.
Loubet, B.; Castell, J.F.; Laville, P.; Personne, E.; Tuzet, A.; Ammann, C.; Emberson, L.; Ganzeveld, L.; Kowalski, A.S.; Merbold, L.; Stella, P.; Tuovinen, J.P.
2015-01-01
This discussion was based on the background document “Review on modelling atmosphere-biosphere exchange of Ozone and Nitrogen oxides”, which reviews the processes contributing to biosphere-atmosphere exchange of O3 and NOx, including stomatal and non-stomatal exchange of O3 and NO, NO2.
Integrated Foreign Exchange Risk Management
DEFF Research Database (Denmark)
Aabo, Tom; Høg, Esben; Kuhn, Jochen
Empirical research has focused on export as a proxy for the exchange rate exposure and the use of foreign exchange derivatives as the instrument to deal with this exposure. This empirical study applies an integrated foreign exchange risk management approach with a particular focus on the role...
A Parallel Butterfly Algorithm
Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing
2014-01-01
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
Institute of Scientific and Technical Information of China (English)
WANG ShunJin; ZHANG Hua
2007-01-01
Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
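For a linear system x' = Ax, truncating the Taylor series of the exact solution exp(hA)x to Nth order gives a concrete instance of such a scheme. The sketch below is illustrative (not the authors' code) and applies it to the harmonic oscillator, where preserving energy is the "geometrical fidelity" at stake:

```python
def taylor_step(A, x, h, N):
    """One step of an Nth-order truncated Taylor scheme for x' = A x:
    x(t+h) ~ sum_{k=0}^{N} (hA)^k x / k!, built term by term."""
    term = list(x)      # k-th term of the series, starting at (hA)^0 x = x
    out = list(x)
    for k in range(1, N + 1):
        term = [sum(A[i][j] * term[j] for j in range(len(x))) * h / k
                for i in range(len(x))]
        out = [o + t for o, t in zip(out, term)]
    return out
```

With N = 8 and a hundred steps around one full period of the oscillator x'' = -x, the state returns to its initial value to high precision, because the per-step error is O(h^(N+1)).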
Protein folding simulations by generalized-ensemble algorithms.
Yoda, Takao; Sugita, Yuji; Okamoto, Yuko
2014-01-01
In the protein folding problem, conventional simulations in physical statistical mechanical ensembles, such as the canonical ensemble with fixed temperature, face a great difficulty. This is because there exist a huge number of local-minimum-energy states in the system and the conventional simulations tend to get trapped in these states, giving wrong results. Generalized-ensemble algorithms are based on artificial unphysical ensembles and overcome the above difficulty by performing random walks in potential energy, volume, and other physical quantities or their corresponding conjugate parameters such as temperature, pressure, etc. The advantage of generalized-ensemble simulations lies in the fact that they not only avoid getting trapped in states of energy local minima but also allow the calculation of physical quantities as functions of temperature or other parameters from a single simulation run. In this article we review the generalized-ensemble algorithms. Four examples, the multicanonical algorithm, the replica-exchange method, the replica-exchange multicanonical algorithm, and the multicanonical replica-exchange method, are described in detail. Examples of their applications to the protein folding problem are presented.
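The replica-exchange method can be sketched in a few lines on a toy one-dimensional double-well potential. This is illustrative only; the potential, temperature ladder, and swap schedule below are assumptions, not the protein systems studied in the article:

```python
import math
import random

def replica_exchange(energy, temps, steps=2000, step_size=0.5, seed=1):
    """Sketch of replica-exchange Monte Carlo: one Metropolis walker per
    temperature in `temps` (ascending), with periodic swap attempts
    between neighbouring replicas."""
    rng = random.Random(seed)
    xs = [0.0] * len(temps)
    for t in range(steps):
        # ordinary Metropolis move at each temperature
        for i, T in enumerate(temps):
            cand = xs[i] + rng.uniform(-step_size, step_size)
            dE = energy(cand) - energy(xs[i])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                xs[i] = cand
        # every 10 sweeps, attempt to swap neighbouring replicas with
        # acceptance min(1, exp[(beta_i - beta_j)(E_i - E_j)])
        if t % 10 == 0:
            for i in range(len(temps) - 1):
                delta = ((1.0 / temps[i] - 1.0 / temps[i + 1])
                         * (energy(xs[i]) - energy(xs[i + 1])))
                if delta >= 0 or rng.random() < math.exp(delta):
                    xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs
```

The hot replicas cross the barrier between the two wells easily, and swaps let the cold replica inherit those barrier crossings, which is exactly how the method avoids trapping in local minima.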
Detection of algorithmic trading
Bogoev, Dimitar; Karam, Arzé
2017-10-01
We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
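The quote-volatility idea can be illustrated with a toy proxy. The formula below is a hypothetical simplification for exposition, not the paper's exact ratio: the fraction of successive best-quote moves that reverse direction, which is high for rapidly oscillating algorithmic quoting and low for trending quotes:

```python
def quote_volatility_ratio(quotes):
    """Fraction of successive best-quote moves that reverse the previous
    move's direction (toy proxy for algorithmic quote oscillation)."""
    # keep only actual quote changes
    moves = [b - a for a, b in zip(quotes, quotes[1:]) if b != a]
    if len(moves) < 2:
        return 0.0
    reversals = sum(1 for m1, m2 in zip(moves, moves[1:]) if m1 * m2 < 0)
    return reversals / (len(moves) - 1)
```

A sequence of quotes flickering between two price levels scores 1.0, while a monotone drift scores 0.0, separating oscillation-style quoting from directional price momentum.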
Handbook of Memetic Algorithms
Cotta, Carlos; Moscato, Pablo
2012-01-01
Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. "Handbook of Memetic Algorithms" organizes, in a structured way, all the most important results in the field of MAs from their earliest definition to the present. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...
Algorithms in invariant theory
Sturmfels, Bernd
2008-01-01
J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.
CERN. Geneva; PUNZI, Giovanni
2015-01-01
Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving the long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of how the brain processes visual images in nature (the 'RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies when this algorithm is implemented in specialized processors based on current state-of-the-art, high-speed/high-bandwidth digital devices.
Plumlee, G. S.; Morman, S. A.; Alpers, C. N.; Hoefen, T. M.; Meeker, G. P.
2010-12-01
Disasters commonly pose immediate threats to human safety, but can also produce hazardous materials (HM) that pose short- and long-term environmental-health threats. The U.S. Geological Survey (USGS) has helped assess potential environmental health characteristics of HM produced by various natural and anthropogenic disasters, such as the 2001 World Trade Center collapse, 2005 hurricanes Katrina and Rita, 2007-2009 southern California wildfires, various volcanic eruptions, and others. Building upon experience gained from these responses, we are now developing methods to anticipate plausible environmental and health implications of the 2008 Great Southern California ShakeOut scenario (which modeled the impacts of a 7.8 magnitude earthquake on the southern San Andreas fault, http://urbanearth.gps.caltech.edu/scenario08/), and the recent ARkStorm scenario (modeling the impacts of a major, weeks-long winter storm hitting nearly all of California, http://urbanearth.gps.caltech.edu/winter-storm/). Environmental-health impacts of various past earthquakes and extreme storms are first used to identify plausible impacts that could be associated with the disaster scenarios. Substantial insights can then be gleaned using a Geographic Information Systems (GIS) approach to link ShakeOut and ARkStorm effects maps with data extracted from diverse database sources containing geologic, hazards, and environmental information. This type of analysis helps constrain where potential geogenic (natural) and anthropogenic sources of HM (and their likely types of contaminants or pathogens) fall within areas of predicted ShakeOut-related shaking, firestorms, and landslides, and predicted ARkStorm-related precipitation, flooding, and winds. Because of uncertainties in the event models and many uncertainties in the databases used (e.g., incorrect location information, lack of detailed information on specific facilities, etc.) this approach should only be considered as the first of multiple steps
Mastering Microsoft Exchange Server 2010
McBee, Jim
2010-01-01
A top-selling guide to Exchange Server-now fully updated for Exchange Server 2010. Keep your Microsoft messaging system up to date and protected with the very newest version, Exchange Server 2010, and this comprehensive guide. Whether you're upgrading from Exchange Server 2007 SP1 or earlier, installing for the first time, or migrating from another system, this step-by-step guide provides the hands-on instruction, practical application, and real-world advice you need.: Explains Microsoft Exchange Server 2010, the latest release of Microsoft's messaging system that protects against spam and vir
Named Entity Linking Algorithm
Directory of Open Access Journals (Sweden)
M. F. Panteleev
2017-01-01
Full Text Available In natural language text processing, Named Entity Linking (NEL) is the task of identifying an entity mention found in the text and linking it with an entity in a knowledge base (for example, DBpedia). Currently, there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine learning-based ones. An algorithm combining graph and machine learning approaches is proposed, based on the stated assumptions about the interrelations of named entities in a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on a knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. A standalone solution based on machine learning algorithms cannot be built because of the small volume of training datasets relevant to the NEL task; however, machine learning can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally. A test dataset was independently generated, and on its basis the performance of the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mockup based on the proposed algorithm was slower than DBpedia Spotlight but showed higher accuracy, which makes further work in this direction promising. The main directions of development are proposed in order to increase the accuracy and performance of the system.
Artificial root foraging optimizer algorithm with hybrid strategies
Directory of Open Access Journals (Sweden)
Yang Liu
2017-02-01
Full Text Available In this work, a new plant-inspired optimization algorithm, namely the hybrid artificial root foraging optimizer (HARFO), is proposed, which mimics iterative root foraging behaviors for complex optimization. In the HARFO model, two innovative strategies were developed: one is the root-to-root communication strategy, which enables individuals to exchange information with each other in different efficient topologies, essentially improving the exploration ability; the other is the co-evolution strategy, which structures a hierarchical spatial population driven by the evolutionary pressure of multiple sub-populations, ensuring that the diversity of the root population is well maintained. The proposed algorithm is benchmarked against four classical evolutionary algorithms on well-designed test function suites including both classical and composition test functions. The rigorous performance analysis of all these tests highlights a significant performance improvement, and the comparative results show the superiority of the proposed algorithm.
Fokkinga, M.M.
1992-01-01
An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as
A cluster algorithm for graphs
S. van Dongen
2000-01-01
A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
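The MCL process the abstract refers to can be sketched in a few lines. This is a minimal illustrative implementation on small dense matrices (not van Dongen's optimized code): alternate expansion (a power of the column-stochastic matrix) with inflation (entrywise powers followed by column renormalization) until the flow separates into clusters.

```python
def mcl(adj, expansion=2, inflation=2.0, iters=50):
    """Markov Cluster sketch on a small dense adjacency matrix (lists of
    lists). Adds self-loops, column-normalizes, then alternates expansion
    (matrix power) and inflation (entrywise power plus renormalization)."""
    n = len(adj)

    def normalize(M):
        for j in range(n):
            s = sum(M[i][j] for i in range(n))
            for i in range(n):
                M[i][j] /= s
        return M

    M = normalize([[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
                   for i in range(n)])
    for _ in range(iters):
        P = M
        for _ in range(expansion - 1):          # expansion: P = M^expansion
            P = [[sum(P[i][k] * M[k][j] for k in range(n)) for j in range(n)]
                 for i in range(n)]
        M = normalize([[P[i][j] ** inflation for j in range(n)]
                       for i in range(n)])      # inflation
    # read off clusters: each column is assigned to its attractor row
    clusters = {}
    for j in range(n):
        attractor = max(range(n), key=lambda i: M[i][j])
        clusters.setdefault(attractor, set()).add(j)
    return list(clusters.values())
```

On the classic example of two triangles joined by a single bridge edge, inflation 2.0 splits the graph into the two triangles.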
Algorithms for Reinforcement Learning
Szepesvari, Csaba
2010-01-01
Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
Animation of planning algorithms
Sun, Fan
2014-01-01
Planning is the process of creating a sequence of steps/actions that will satisfy a goal of a problem. The partial order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. While taking the G52PAS module, I found that it is difficult for students to understand this planning algorithm just by reading its pseudo code and doing some written exercises. Students cannot see clearly how each step works and might miss some steps because of their confusion. ...
Secondary Vertex Finder Algorithm
Heer, Sebastian; The ATLAS collaboration
2017-01-01
If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough, to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with tt̄ events, requiring at least one lepton, simulated at 13 TeV.
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
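The prefix scan pattern named above can be sketched as a log-step algorithm in which, at every step, all elements update independently; the sequential Python below mirrors the Hillis-Steele formulation of an inclusive scan (an illustrative sketch, not taken from the presentation — each outer step could run elementwise in parallel).

```python
def inclusive_scan(xs):
    """Hillis-Steele inclusive prefix sum: log2(n) steps, each of which
    combines every element with the element `step` positions behind it."""
    out = list(xs)
    step = 1
    while step < len(out):
        out = [out[i] + out[i - step] if i >= step else out[i]
               for i in range(len(out))]
        step *= 2
    return out

print(inclusive_scan([3, 1, 4, 1, 5]))  # → [3, 4, 8, 9, 14]
```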
Randomized Filtering Algorithms
DEFF Research Database (Denmark)
Katriel, Irit; Van Hentenryck, Pascal
2008-01-01
of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed...... in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: Its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...
Progress in liquid ion exchangers
International Nuclear Information System (INIS)
Nakagawa, Genkichi
1974-01-01
Review is made on the extraction with anion exchangers and the extraction with liquid cation exchangers. On the former, explanation is made on the extraction of acids, the relation between anion exchange and the extraction of metals, the composition of the metallic complexes that are extracted, and the application of the extraction with anion exchangers to analytical chemistry. On the latter, explanation is made on the extraction of metals and its application to analytical chemistry. The extraction with liquid ion exchangers is suitable for the operation in chromatography, because the distribution of extracting agents into aqueous phase is small, and extraction equilibrium is quickly reached, usually within 1 to several minutes. The separation by means of anion exchangers is usually made from hydrochloric acid solution. For example, Brinkman et al. determined Rf values for more than 50 elements by thin layer chromatography. Tables are given for showing the structure of the liquid ion exchangers and the polymerized state of various amines. (Mori, K.)
An Ordering Linear Unification Algorithm
Institute of Scientific and Technical Information of China (English)
胡运发
1989-01-01
In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order, so the OLU algorithm can also be applied to the infinite tree data structure, and a higher efficiency can be expected. The paper focuses upon the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. This algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.
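For contrast with OLU, a textbook Robinson-style unification sketch is given below. This is generic reference code, not the OLU algorithm of the paper; the term representation, with variables as capitalized strings and compound terms as (functor, args) tuples, is an assumption made for illustration.

```python
def is_var(t):
    """Variables are strings starting with an uppercase letter."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings to their current value."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1])
    return False

def unify(a, b, subst=None):
    """Return a most general unifier as a dict, or None on failure."""
    subst = dict(subst or {})
    stack = [(a, b)]
    while stack:
        x, y = stack.pop()
        x, y = walk(x, subst), walk(y, subst)
        if x == y:
            continue
        if is_var(x):
            if occurs(x, y, subst):
                return None                  # occurs check fails
            subst[x] = y
        elif is_var(y):
            stack.append((y, x))
        elif (isinstance(x, tuple) and isinstance(y, tuple)
              and x[0] == y[0] and len(x[1]) == len(y[1])):
            stack.extend(zip(x[1], y[1]))
        else:
            return None                      # functor/arity clash
    return subst

# f(X, g(a)) unified with f(b, g(Y)) gives {X: b, Y: a}.
print(unify(('f', ('X', ('g', (('a', ()),)))),
            ('f', (('b', ()), ('g', ('Y',))))))
```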
New Optimization Algorithms in Physics
Hartmann, Alexander K
2004-01-01
Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.
A propositional CONEstrip algorithm
E. Quaeghebeur (Erik); A. Laurent; O. Strauss; B. Bouchon-Meunier; R.R. Yager (Ronald)
2014-01-01
We present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations
Modular Regularization Algorithms
DEFF Research Database (Denmark)
Jacobsen, Michael
2004-01-01
The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...
Indian Academy of Sciences (India)
Shortest path problems: a road network on cities, where we want to navigate between cities. ... The rest of the talk: computing connectivities between all pairs of vertices; a good algorithm with respect to both space and time to compute the exact solution. ...
The Copenhagen Triage Algorithm
DEFF Research Database (Denmark)
Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia
2016-01-01
is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...
de Casteljau's Algorithm Revisited
DEFF Research Database (Denmark)
Gravesen, Jens
1998-01-01
It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
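The operator view above rests on the basic de Casteljau step: repeated linear interpolation of adjacent control points until one point remains. A minimal sketch (generic textbook code, not the paper's operator formalism):

```python
def lerp(p, q, t):
    """Linear interpolation between two points given as tuples."""
    return tuple((1 - t) * a + t * b for a, b in zip(p, q))

def de_casteljau(points, t):
    """Evaluate the Bezier curve with the given control points at t by
    applying the de Casteljau step until one point remains."""
    pts = list(points)
    while len(pts) > 1:
        pts = [lerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

# A quadratic Bezier: at t = 0.5 the midpoint construction gives (1.0, 1.0).
print(de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5))  # → (1.0, 1.0)
```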
Algorithms in ambient intelligence
Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.
2005-01-01
We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of
General Algorithm (High level)
Indian Academy of Sciences (India)
Iteratively: use the Tightness Property to remove points of P1,..,Pi; use random sampling to get a random sample (of enough points) from the next largest cluster, Pi+1; use the Random Sampling Procedure to approximate ci+1 using the ...
Comprehensive eye evaluation algorithm
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
DEFF Research Database (Denmark)
This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...
Optimal Quadratic Programming Algorithms
Dostal, Zdenek
2009-01-01
Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers
A robust stochastic approach for design optimization of air cooled heat exchangers
Energy Technology Data Exchange (ETDEWEB)
Doodman, A.R.; Fesanghary, M.; Hosseini, R. [Department of Mechanical Engineering, Amirkabir University of Technology, 424-Hafez Avenue, 15875-4413 Tehran (Iran)
2009-07-15
This study investigates the use of global sensitivity analysis (GSA) and harmony search (HS) algorithm for design optimization of air cooled heat exchangers (ACHEs) from the economic viewpoint. In order to reduce the size of the optimization problem, GSA is performed to examine the effect of the design parameters and to identify the non-influential parameters. Then HS is applied to optimize influential parameters. To demonstrate the ability of the HS algorithm a case study is considered and for validation purpose, genetic algorithm (GA) is also applied to this case study. Results reveal that the HS algorithm converges to optimum solution with higher accuracy in comparison with GA. (author)
A robust stochastic approach for design optimization of air cooled heat exchangers
International Nuclear Information System (INIS)
Doodman, A.R.; Fesanghary, M.; Hosseini, R.
2009-01-01
This study investigates the use of global sensitivity analysis (GSA) and harmony search (HS) algorithm for design optimization of air cooled heat exchangers (ACHEs) from the economic viewpoint. In order to reduce the size of the optimization problem, GSA is performed to examine the effect of the design parameters and to identify the non-influential parameters. Then HS is applied to optimize influential parameters. To demonstrate the ability of the HS algorithm a case study is considered and for validation purpose, genetic algorithm (GA) is also applied to this case study. Results reveal that the HS algorithm converges to optimum solution with higher accuracy in comparison with GA
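As background to the two records above: harmony search improvises one new candidate per iteration by drawing each component from a memory of good solutions (with probability HMCR), optionally pitch-adjusting it (probability PAR), or sampling it at random; the candidate replaces the worst memory entry if it is better. The sketch below is a generic HS for continuous minimization with illustrative parameters and a toy objective, not the heat-exchanger cost model of the study.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=1):
    """Minimize f over box `bounds` with a basic harmony search."""
    random.seed(seed)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(hms)]
    memory.sort(key=f)
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:           # draw from harmony memory
                x = random.choice(memory)[d]
                if random.random() < par:        # pitch adjustment
                    x += random.uniform(-bw, bw)
            else:                                # random consideration
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        if f(new) < f(memory[-1]):               # replace the worst harmony
            memory[-1] = new
            memory.sort(key=f)
    return memory[0]

sphere = lambda v: sum(x * x for x in v)         # toy objective
best = harmony_search(sphere, [(-5, 5)] * 3)
print(sphere(best))
```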
Difficult Sudoku Puzzles Created by Replica Exchange Monte Carlo Method
Watanabe, Hiroshi
2013-01-01
An algorithm to create difficult Sudoku puzzles is proposed. An Ising spin-glass-like Hamiltonian describing the difficulty of puzzles is defined, and difficult puzzles are created by minimizing the energy of the Hamiltonian. We adopt the replica exchange Monte Carlo method with simultaneous temperature adjustments to search lower energy states efficiently, and we succeed in creating a puzzle which is the hardest ever created by our definition, to the best of our knowledge. (Added on Mar. 11, the ...
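The replica exchange (parallel tempering) machinery the abstract relies on can be sketched as several Metropolis chains at different inverse temperatures that periodically attempt to swap configurations, accepting a swap with probability min(1, exp((β_i − β_j)(E_i − E_j))). The toy one-dimensional energy function and all parameter values below are illustrative, not the Sudoku Hamiltonian of the paper.

```python
import math
import random

def metropolis_step(x, energy, beta, step=0.5):
    """One Metropolis update of a scalar state x at inverse temperature beta."""
    y = x + random.uniform(-step, step)
    d_e = energy(y) - energy(x)
    if d_e <= 0 or random.random() < math.exp(-beta * d_e):
        return y
    return x

def replica_exchange(energy, betas, sweeps=2000, seed=0):
    """One Metropolis step per replica per sweep, then one neighbour-swap
    attempt with the parallel-tempering acceptance rule."""
    random.seed(seed)
    xs = [random.uniform(-3.0, 3.0) for _ in betas]
    for _ in range(sweeps):
        xs = [metropolis_step(x, energy, b) for x, b in zip(xs, betas)]
        i = random.randrange(len(betas) - 1)
        d_b = betas[i] - betas[i + 1]
        d_e = energy(xs[i]) - energy(xs[i + 1])
        if d_b * d_e >= 0 or random.random() < math.exp(d_b * d_e):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]   # exchange configurations
    return xs

# Tilted double well: minima near x = -2 and x = +2; the hot replicas cross
# the barrier easily and feed good configurations down to the cold one.
energy = lambda x: (x * x - 4.0) ** 2 - 0.5 * x
xs = replica_exchange(energy, betas=[0.1, 0.5, 2.0, 8.0])
print(round(xs[-1], 2))   # coldest replica, expected to sit in one of the wells
```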
Exchange functional by a range-separated exchange hole
International Nuclear Information System (INIS)
Toyoda, Masayuki; Ozaki, Taisuke
2011-01-01
An approximation to the exchange-hole density is proposed for the evaluation of the exact exchange energy in electronic structure calculations within the density-functional theory and the Kohn-Sham scheme. Based on the localized nature of density matrix, the exchange hole is divided into the short-range (SR) and long-range (LR) parts by using an adequate filter function, where the LR part is deduced by matching of moments with the exactly calculated SR counterpart, ensuring the correct asymptotic -1/r behavior of the exchange potential. With this division, the time-consuming integration is truncated at a certain interaction range, largely reducing the computation cost. The total energies, exchange energies, exchange potentials, and eigenvalues of the highest-occupied orbitals are calculated for the noble-gas atoms. The close agreement of the results with the exact values suggests the validity of the approximation.
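A common way to realize such a short-range/long-range division of the Coulomb kernel uses the error function as the filter; this is shown below as a generic illustration only, since the paper's specific filter function may differ.

```latex
% Error-function range separation with screening parameter \mu:
\frac{1}{r}
  = \underbrace{\frac{\operatorname{erfc}(\mu r)}{r}}_{\text{short range}}
  + \underbrace{\frac{\operatorname{erf}(\mu r)}{r}}_{\text{long range}},
\qquad
E_x = E_x^{\mathrm{SR}} + E_x^{\mathrm{LR}} .
```

The short-range term decays rapidly, so its integral can be truncated at a finite interaction range, which is the source of the cost reduction, while the long-range part carries the correct -1/r asymptotics of the exchange potential.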
Benchmarking monthly homogenization algorithms
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
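The first performance metric mentioned, the centered root mean square error, measures agreement with the true homogeneous series after removing any constant offset between the two series; a small sketch follows (the definition below is the usual Taylor-diagram convention, which is assumed here):

```python
import math

def centered_rmse(estimate, truth):
    """RMSE of the estimate against the truth after subtracting the mean
    difference (bias), so a pure constant offset scores zero."""
    n = len(estimate)
    bias = sum(e - t for e, t in zip(estimate, truth)) / n
    return math.sqrt(sum((e - t - bias) ** 2
                         for e, t in zip(estimate, truth)) / n)

truth = [10.0, 11.0, 12.0, 13.0]
shifted = [v + 0.5 for v in truth]      # constant offset only
print(centered_rmse(shifted, truth))    # → 0.0
```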
Python algorithms mastering basic algorithms in the Python language
Hetland, Magnus Lie
2014-01-01
Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc
Exchange and fellowship programme
Energy Technology Data Exchange (ETDEWEB)
NONE
1959-04-15
By February 1959, the IAEA had received and considered nearly 300 nominations from 31 countries for nuclear science fellowships. More than 200 of the candidates - from 29 countries - had been selected for placement in centres of training in 21 countries. The programme covers three types of training: 1. General techniques training: to develop skills in the use of some fundamental techniques in the field of nuclear energy; 2. Specialist training: to prepare specialists in the theoretical and experimental aspects of the science and technology of nuclear energy; 3. Research training: to provide advanced training, including active participation in research work; this is for persons potentially qualified to develop and carry out research programmes in the basic sciences and engineering. The duration of training varies from some weeks to five or six years. The long-duration training is given at universities or educational establishments of university level, and is of special interest to Member States lacking personnel with the requisite university education. Under its 1959 exchange and fellowship programme, the Agency will be in a position to award over 400 fellowships. Some of these will be paid out of the Agency's operating fund, while 130 fellowships have been offered directly to IAEA by Member States for training at their universities or institutes. There are two new features in the Agency's 1959 programme. One provides for fellowships for scientific research work, the other is the exchange of specialists
International Nuclear Information System (INIS)
Imada, Takahiko; Sato, Hideo.
1975-01-01
Object: To provide a centripetal device, which has an initial spring force greater than the frictional force in the oscillating direction of a telescope mast, on a mast fixing device mounted on the body of a fuel exchanging apparatus, so that the telescope mast may be quickly returned to a predetermined initial position and secured. Structure: When the body of the fuel exchanging apparatus is stopped at a predetermined position, a tension spring, which has an initial spring force greater than the frictional force in the oscillating direction of the telescope mast, causes a lug to be pushed by means of a push rod to return a sliding base plate to its original position. At the same time, a device of similar structure causes an operating arm to be returned to its original position, and a lock pin urged by a cylinder is inserted into a through hole in the sliding base plate and operating arm so that the telescope mast may be fixed and retained. (Hanada, M.)
Optimization of liquid LBE-helium heat exchanger in ADS
International Nuclear Information System (INIS)
Meng Ruixue; Cai Jun; Huai Xiulan; Chen Fei
2015-01-01
The multi-parameter optimization of the liquid LBE-helium heat exchanger in ADS was conducted by a genetic algorithm with the entransy dissipation number and the total cost as objective functions. The results show that the effectiveness of the heat exchanger increases by 10.5% and 3.8%, and the total cost reduces by 5.9% and 27.0%, respectively, with the two optimization methods. Nevertheless, the optimization processes trade off increasing heat transfer area and decreasing heat transfer effectiveness, respectively, against achieving the optimization targets. By comprehensively considering heat exchanger performance and cost-benefit, the optimization method with the entransy dissipation number as the objective function is found to be more advantageous. (authors)
DNA Microarray Data Analysis: A Novel Biclustering Algorithm Approach
Directory of Open Access Journals (Sweden)
Tewfik Ahmed H
2006-01-01
Full Text Available Biclustering algorithms refer to a distinct class of clustering algorithms that perform simultaneous row-column clustering. Biclustering problems arise in DNA microarray data analysis, collaborative filtering, market research, information retrieval, text mining, electoral trends, exchange analysis, and so forth. When dealing with DNA microarray experimental data for example, the goal of biclustering algorithms is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this study, we develop novel biclustering algorithms using basic linear algebra and arithmetic tools. The proposed biclustering algorithms can be used to search for all biclusters with constant values, biclusters with constant values on rows, biclusters with constant values on columns, and biclusters with coherent values from a set of data in a timely manner and without solving any optimization problem. We also show how one of the proposed biclustering algorithms can be adapted to identify biclusters with coherent evolution. The algorithms developed in this study discover all valid biclusters of each type, while almost all previous biclustering approaches will miss some.
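For the simplest of the bicluster types mentioned, biclusters with constant values, a brute-force sketch is easy to state. This is an illustration of the problem only, not the authors' linear-algebra-based algorithms, which avoid such enumeration; names and thresholds are illustrative.

```python
from itertools import combinations

def constant_biclusters(matrix, min_rows=2, min_cols=2):
    """Enumerate row subsets; for each, keep the columns on which all chosen
    rows agree, grouped by the shared value. Brute force, so only suitable
    for small matrices."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    found = set()
    for r in range(min_rows, n_rows + 1):
        for rows in combinations(range(n_rows), r):
            by_value = {}
            for j in range(n_cols):
                vals = {matrix[i][j] for i in rows}
                if len(vals) == 1:
                    by_value.setdefault(vals.pop(), []).append(j)
            for v, cols in by_value.items():
                if len(cols) >= min_cols:
                    found.add((rows, tuple(cols), v))
    return found

# Rows 0 and 1 share the constant value 1 on columns 0 and 1.
M = [[1, 1, 0, 2],
     [1, 1, 3, 2],
     [4, 5, 6, 2]]
print(constant_biclusters(M))
```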
Directory of Open Access Journals (Sweden)
Dazhi Jiang
2015-01-01
Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is “are there any algorithms that can design evolutionary algorithms automatically?” A more complete formulation of the question is “can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?” In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.
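The baseline named above, standard differential evolution, can be sketched as DE/rand/1/bin: each individual is challenged by a trial vector built from a scaled difference of two other individuals, crossed over gene-by-gene. The parameter values and toy objective below are common illustrative choices, not the paper's experimental setup.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.5, cr=0.9,
                           gens=200, seed=0):
    """Minimize f over box `bounds` with DE/rand/1/bin."""
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample(
                [p for j, p in enumerate(pop) if j != i], 3)
            jr = random.randrange(dim)       # force at least one mutated gene
            trial = [a[d] + F * (b[d] - c[d])
                     if (random.random() < cr or d == jr) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(x, lo), hi)
                     for x, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):        # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

sphere = lambda v: sum(x * x for x in v)     # toy objective
best = differential_evolution(sphere, [(-5, 5)] * 3)
print(sphere(best))
```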
Utilization of genetic algorithm in on-line tuning of fluid power servos
Energy Technology Data Exchange (ETDEWEB)
Halme, J.
1997-12-31
This study describes a robust and plausible method based on genetic algorithms suitable for tuning a regulator. The main advantages of the method presented is its robustness and easy-to-use feature. In this thesis the method is demonstrated by searching for appropriate control parameters of a state-feedback controller in a fluid power environment. To corroborate the robustness of the tuning method, two earlier studies are also presented in the appendix, where the presented tuning method is used in different kinds of regulator tuning situations. (orig.) 33 refs.
Utilization of genetic algorithm in on-line tuning of fluid power servos
Energy Technology Data Exchange (ETDEWEB)
Halme, J
1998-12-31
This study describes a robust and plausible method based on genetic algorithms suitable for tuning a regulator. The main advantages of the method presented is its robustness and easy-to-use feature. In this thesis the method is demonstrated by searching for appropriate control parameters of a state-feedback controller in a fluid power environment. To corroborate the robustness of the tuning method, two earlier studies are also presented in the appendix, where the presented tuning method is used in different kinds of regulator tuning situations. (orig.) 33 refs.
Reactive Collision Avoidance Algorithm
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
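The bang-off-bang parameterization described above can be illustrated on a single axis: for a rest-to-rest translation of distance d under an acceleration limit, the two full-thrust durations and the coast duration follow in closed form. This is a hedged sketch of the trajectory class only, not JPL's RCA implementation; the speed limit is an added assumption that makes the coast phase explicit.

```python
import math

def bang_off_bang(d, a_max, v_max):
    """Return (t_burn, t_coast): the duration of each full-acceleration
    phase and of the coast phase for a minimum-time rest-to-rest move of
    distance d, with acceleration limit a_max and speed limit v_max."""
    t_to_vmax = v_max / a_max
    if a_max * t_to_vmax ** 2 <= d:          # long move: coast at v_max
        t_burn = t_to_vmax
        t_coast = (d - a_max * t_burn ** 2) / v_max
    else:                                    # short move: triangular profile
        t_burn = math.sqrt(d / a_max)
        t_coast = 0.0
    return t_burn, t_coast

# 100 m move at a_max = 2 m/s^2, v_max = 10 m/s: burn 5 s, coast 5 s, burn 5 s.
print(bang_off_bang(100.0, 2.0, 10.0))  # → (5.0, 5.0)
```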
MULTIFRACTAL STRUCTURE OF CENTRAL AND EASTERN EUROPEAN FOREIGN EXCHANGE MARKETS
Directory of Open Access Journals (Sweden)
Căpuşan Răzvan
2012-07-01
Full Text Available It is well known that empirical data coming from financial markets, like stock market indices, commodities, interest rates, traded volumes and foreign exchange rates, have a multifractal structure. Multifractals were introduced in the field of economics to surpass the shortcomings of classical models like fractional Brownian motion or GARCH processes. In this paper we investigate the multifractal behavior of Central and Eastern European foreign exchange rates, namely the Czech koruna, Croatian kuna, Hungarian forint, Polish zloty, Romanian leu and Russian rouble with respect to the euro from January 13, 2000 to February 29, 2012. The dynamics of exchange rates is of interest for investors and traders, monetary and fiscal authorities, economic agents and policy makers. Exchange rate movements affect the international balance of payments, trade flows, and the allocation of resources in the national and international economy. The empirical results from the multifractal detrended fluctuation analysis algorithm show that the six exchange rate series analysed display significant multifractality. Moreover, generating shuffled and surrogate time series, we analyze the sources of multifractality, long-range correlations and heavy-tailed distributions, and we find that this multifractal behavior can be mainly attributed to the latter. Finally, we propose a foreign exchange market inefficiency ranking by considering the multifractality degree as a measure of inefficiency. The regulators, through policy instruments, aim to improve the informational efficiency of the markets, to reduce the associated risks and to ensure economic stabilization. Evaluation of the degree of informational efficiency of foreign exchange markets, for Central and Eastern European countries, is important to assess to what extent these countries are prepared for the transition towards full monetary integration. The weak form efficiency implies that past exchange rates cannot help to
Ion exchange technology assessment report
International Nuclear Information System (INIS)
Duhn, E.F.
1992-01-01
In the execution of its charter, the SRS Ion Exchange Technology Assessment Team has determined that ion exchange (IX) technology has evolved to the point where it should now be considered as a viable alternative to the SRS reference ITP/LW/PH process. The ion exchange media available today offer the ability to design ion exchange processing systems tailored to the unique physical and chemical properties of SRS soluble HLW's. The technical assessment of IX technology and its applicability to the processing of SRS soluble HLW has demonstrated that IX is unquestionably a viable technology. A task team was chartered to evaluate the technology of ion exchange and its potential for replacing the present In-Tank Precipitation and proposed Late Wash processes to remove Cs, Sr, and Pu from soluble salt solutions at the Savannah River Site. This report documents the ion exchange technology assessment and conclusions of the task team
Beard reconstruction: A surgical algorithm.
Ninkovic, M; Heidekrueger, P I; Ehrl, D; von Spiegel, F; Broer, P N
2016-06-01
Facial defects with loss of hair-bearing regions can be caused by trauma, infection, tumor excision, or burn injury. The presented analysis evaluates a series of different surgical approaches with a focus on male beard reconstruction, emphasizing the role of tissue expansion of regional and free flaps. Locoregional and free flap reconstructions were performed in 11 male patients with 14 facial defects affecting the hair-bearing bucco-mandibular or perioral region. In order to minimize donor-site morbidity and obtain large amounts of thin, pliable, hair-bearing tissue, pre-expansion was performed in five of 14 patients. Eight of 14 patients were treated with locoregional flap reconstructions and six with free flap reconstructions. Algorithms regarding pre- and intraoperative decision making are discussed and long-term (mean follow-up 1.5 years) results analyzed. Major complications, including tissue expander infection with the need for removal or exchange, partial or full flap loss, occurred in 0% (0/8) of patients with locoregional flaps and in 17% (1/6) of patients undergoing free flap reconstructions. Secondary refinement surgery was performed in 25% (2/8) of locoregional flaps and in 67% (4/6) of free flaps. Both locoregional and distant tissue transfers play a role in beard reconstruction, while pre-expansion remains an invaluable tool. Paying attention to the presented principles and considering the significance of aesthetic facial subunits, range of motion, aesthetics, and patient satisfaction were improved long term in all our patients while minimizing donor-site morbidity. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Plausibility Arguments and Universal Gravitation
Cunha, Ricardo F. F.; Tort, A. C.
2017-01-01
Newton's law of universal gravitation underpins our understanding of the dynamics of the Solar System and of a good portion of the observable universe. Generally, in the classroom or in textbooks, the law is presented initially in a qualitative way and at some point during the exposition its mathematical formulation is written on the blackboard…
Towards a more plausible dragon
Efthimiou, Costas
2014-08-01
Wizards, mermaids, dragons and aliens. Walking, running, flying and space travel. A hi-tech elevator, a computer, a propulsion engine and a black hole. What do all of these things have in common? This might seem like a really hard brainteaser but the answer is simple: they all obey the fundamental laws of our universe.
Plausibility orderings in dynamic games
Perea ý Monsuwé, A.
2014-01-01
In this paper we explore game-theoretic reasoning in dynamic games within the framework of belief revision theory. More precisely, we focus on the forward induction concept of ‘common strong belief in rationality’ (Battigalli and Siniscalchi, 2002) and the backward induction concept of ‘common
Data Exchange Protocol in Repsail
Directory of Open Access Journals (Sweden)
Gucma Maciej
2017-12-01
Full Text Available The article presents the implementation and theoretical considerations of the data exchange protocol developed for the RepSail project, whose main objective was the design and building of an innovative hybrid yacht. One of the problems during the design process was the improper functioning of the data exchange protocols available in commercial devices, including the navigation-oriented NMEA 0183 and NMEA 2000 as well as automation-dedicated ones (CAN and similar). The author presents the basis of the dedicated exchange format for on-board devices.
Developing bulk exchange spring magnets
Mccall, Scott K.; Kuntz, Joshua D.
2017-06-27
A method of making a bulk exchange spring magnet by providing a magnetically soft material, providing a hard magnetic material, and producing a composite of said magnetically soft material and said hard magnetic material to make the bulk exchange spring magnet. The step of producing a composite of magnetically soft material and hard magnetic material is accomplished by electrophoretic deposition of the magnetically soft material and the hard magnetic material to make the bulk exchange spring magnet.
Can Exchange Rates Be Predicted?
Siriwutiset, Trin
2007-01-01
Foreign exchange rates produce significant impacts on both the macroeconomic and microeconomic scale. Countries' governments and multinational companies have been seeking ways to stabilize exchange rates for a few decades. However, there is no perfect consensus on methods to control and stabilize exchange rates. In fact, there have been several occasions in history where turbulent movements caused crises in economies. There are several factors that are identified by economis...
Towards automatic exchange of information
Oberson, Xavier
2015-01-01
This article describes the various steps that led towards automatic exchange of information as the global standard and the issues that remain to be solved. First, the various competing models of information exchange, such as Double Tax Treaties (DTTs), TIEAs, FATCA or EU Directives, are described with a view to showing how they interact with each other. Second, the so-called Rubik Strategy is summarized and compared with automatic exchange of information (AEOI). The third part then describes ...
Exchange rate smoothing in Hungary
Karádi, Péter
2005-01-01
The paper proposes a structural empirical model capable of examining exchange rate smoothing in the small, open economy of Hungary. The framework assumes the existence of an unobserved and changing implicit exchange rate target. The central bank is assumed to use interest rate policy to obtain this preferred rate in the medium term, while market participants are assumed to form rational expectations about this target and influence exchange rates accordingly. The paper applies unobserved varia...
Partitional clustering algorithms
2015-01-01
This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
Treatment Algorithm for Ameloblastoma
Directory of Open Access Journals (Sweden)
Madhumati Singh
2014-01-01
Full Text Available Ameloblastoma is the second most common benign odontogenic tumour (Shafer et al., 2006), which constitutes 1–3% of all cysts and tumours of the jaw, with locally aggressive behaviour, a high recurrence rate, and a malignant potential (Chaine et al., 2009). Various treatment algorithms for ameloblastoma have been reported; however, a universally accepted approach remains unsettled and controversial (Chaine et al., 2009). The treatment algorithm to be chosen depends on size (Escande et al., 2009; Sampson and Pogrel, 1999), anatomical location (Feinberg and Steinberg, 1996), histologic variant (Philipsen and Reichart, 1998), and anatomical involvement (Jackson et al., 1996). In this paper, various such treatment modalities, which include enucleation and peripheral osteotomy, partial maxillectomy, segmental resection with reconstruction using a fibula graft, and radical resection with reconstruction using a rib graft, and their recurrence rates are reviewed through a study of five cases.
An Algorithmic Diversity Diet?
DEFF Research Database (Denmark)
Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik
2016-01-01
With the growing influence of personalized algorithmic recommender systems on the exposure of media content to users, the relevance of discussing the diversity of recommendations increases, particularly as far as public service media (PSM) are concerned. An imagined implementation of a diversity diet system, however, triggers not only the classic discussion of the reach-distinctiveness balance for PSM, but also shows that ‘diversity’ is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content.
Aydemir, Bahar
2017-01-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control ATLAS data taking. In particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS, with a DAL (Data Access Library) allowing C++, Java and Python clients to access its information in a distributed environment. Some information has a quite complicated structure, so its extraction requires writing special algorithms. These algorithms are available in the C++ programming language and have been partially reimplemented in the Java programming language. The goal of the projec...
Kramer, Oliver
2017-01-01
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
Mastering Microsoft Exchange Server 2013
Elfassy, David
2013-01-01
The bestselling guide to Exchange Server, fully updated for the newest version. Microsoft Exchange Server 2013 is touted as a solution for lowering the total cost of ownership, whether deployed on-premises or in the cloud. Like the earlier editions, this comprehensive guide covers every aspect of installing, configuring, and managing this multifaceted collaboration system. It offers Windows systems administrators and consultants a complete tutorial and reference, ideal for anyone installing Exchange Server for the first time or those migrating from an earlier Exchange Server version.
Boosting foundations and algorithms
Schapire, Robert E
2012-01-01
Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
Stochastic split determinant algorithms
International Nuclear Information System (INIS)
Horvatha, Ivan
2000-01-01
I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of approximate spectral split of the determinant through local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed
Quantum gate decomposition algorithms.
Energy Technology Data Exchange (ETDEWEB)
Slepoy, Alexander
2006-07-01
Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general "quantum gates" operating on n qubits, composed of a sequence of generic elementary "gates".
KAM Tori Construction Algorithms
Wiesel, W.
In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.
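The windowed finite Fourier transform favored above can be illustrated with a small numerical sketch (not the paper's implementation; the trajectory and frequencies below are made up). A Hanning window applied before the transform suppresses spectral leakage, so the dominant basic frequency of a quasi-periodic signal can be read off the peak bin.

```python
import numpy as np

# Sample a quasi-periodic "trajectory" built from two incommensurate frequencies.
dt = 0.01
t = np.arange(4096) * dt
trajectory = np.cos(2 * np.pi * 1.3 * t) + 0.5 * np.cos(2 * np.pi * 2.7 * t)

# A Hanning window tapers the ends of the finite sample, reducing leakage.
window = np.hanning(len(trajectory))
spectrum = np.fft.rfft(trajectory * window)
freqs = np.fft.rfftfreq(len(trajectory), d=dt)

# The dominant spectral peak recovers the strongest basic frequency (1.3 Hz here).
peak_freq = freqs[np.argmax(np.abs(spectrum))]
```

Without the window, leakage from the finite sample length smears each line across many bins, which is the accuracy problem the windowed transform addresses.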
Irregular Applications: Architectures & Algorithms
Energy Technology Data Exchange (ETDEWEB)
Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone
2012-02-06
Irregular applications are characterized by irregular data structures, control and communication patterns. Novel irregular high-performance applications which deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, area specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.
International Nuclear Information System (INIS)
Kiwi, Miguel
2001-01-01
Research on the exchange bias (EB) phenomenon has witnessed a flurry of activity during recent years, which stems from its use in magnetic sensors and as stabilizers in magnetic reading heads. EB was discovered in 1956 but it attracted only limited attention until these applications, closely related to giant magnetoresistance, were developed during the last decade. In this review, I initially give a short introduction, listing the most salient experimental results and what is required from an EB theory. Next, I indicate some of the obstacles in the road towards a satisfactory understanding of the phenomenon. The main body of the text reviews and critically discusses the activity that has flourished, mainly during the last 5 years, in the theoretical front. Finally, an evaluation of the progress made, and a critical assessment as to where we stand nowadays along the road to a satisfactory theory, is presented
Energy Technology Data Exchange (ETDEWEB)
Barnett, Catherine L.; Beresford, Nicholas A.; Patel, Sabera; Wells, Claire; Howard, Brenda J. [NERC Centre for Ecology and Hydrology, CEH Lancaster, Lancaster Environment Centre, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); Mora, Juan Carlos; Real, Almudena [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT), Avenida complutense 22, Madrid, 28040 (Spain); Beaugelin-Seiller, Karine; Gilbin, Rodolphe; Hinton, Thomas [IRSN-Institut de Radioprotection et de Surete Nucleaire, 31, Avenue de la Division Leclerc, 92260 Fontenay-Aux-Roses (France); Vesterbacka, Pia; Muikku, Maarit; Outola, Iisa [Radiation and Nuclear Safety Authority, P.O. Box 14, FI-00881 Helsinki (Finland); Skuterud, Lavrans; AlbumYtre-Eide, Martin [Norwegian Radiation Protection Authority, Grini Naeringspark 13, Oesteraas, 1332 (Norway); Bradshaw, Clare; Stark, Karolina; Jaeschke, Ben [Stockholms Universitet, Universitetsvaegen 10, Stockholm, 10691 (Sweden); Oughton, Deborah; Skipperud, Lindis [NMBU Norwegian University of Life Science P.O. Box 5003N-1432 Aas, Oslo (Norway); Vandenhove, Hildegarde; Vanhoudt, Nathalie [SCK.CEN, Studiecentrum voor Kernenergie/Centre d' Etude de l' Energie Nucleaire, Avenue Herrmann-Debroux 40, BE-1160 Brussels (Belgium); Willrodt, Christine; Steiner, Martin [Bundesamt fuer Strahlenschutz, Willy-Brandt-Strasse 5, 38226 Salzgitter (Germany)
2014-07-01
The Radioecology Exchange (www.radioecology-exchange.org) was created in 2011 under the EU FP7 STAR (Strategy for Allied Radioecology) network of excellence. The project aims to integrate the research efforts on radioecology of nine European organisations into a sustainable network. The web site (together with associated Twitter feeds and Facebook page) currently provides the gateway to project outputs and other on-line radiation protection and radioecological resources. In 2013, the EU FP7 COMET (Coordination and implementation of a pan-European instrument for radioecology) project commenced; it aims to strengthen research on the impact of radiation on man and the environment. COMET includes the STAR partners with the addition of one Japanese and two Ukrainian research institutes. As STAR and COMET interact closely together and with the European Radioecology Alliance (www.er-alliance.org/), the Radioecology Exchange will be modified to become an international 'hub' for information related to radioecology. Project specific information will be hosted on separate web sites www.star-radioecology.org and www.comet-radioecology.org. This paper will present an overview of the resources hosted on the Radioecology Exchange inviting other scientists to contribute. Highlighted aspects of the site include: Social media (News blog, Twitter, Facebook) - Items announcing project outputs, training courses, jobs, studentships etc. Virtual laboratory - Information which encourages integration through joint research and integrated use of data and sample materials. These pages will focus on three categories: (1) Methodological: descriptions and video clips of commonly used analytical methods and protocols and the procedures used in STAR and COMET; (2) Informative: databases made available by STAR/COMET partners together with details of sample archives held. Fact-sheets on radio-ecologically important radionuclides and 'topical descriptions' which show absorbed
Catalysed hydrogen isotope exchange
International Nuclear Information System (INIS)
1973-01-01
A method is described for enhancing the rate of exchange of hydrogen atoms in organic compounds or moieties with deuterium or tritium atoms. It comprises reacting the organic compound or moiety and a compound which is the source of deuterium or tritium in the presence of a catalyst consisting of a non-metallic, metallic or organometallic halide of Lewis acid character and which is reactive towards water, hydrogen halides or similar protonic acids. The catalyst is a halide or organometallic halide of: (i) zinc or another element of Group IIb; (ii) boron, aluminium or another element of Group III; (iii) tin, lead, antimony or another element of Groups IV to VI; or (iv) a transition metal, lanthanide or stable actinide; or a halohalide. (author)
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination but, unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
NEUTRON ALGORITHM VERIFICATION TESTING
International Nuclear Information System (INIS)
COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST
2000-01-01
Active well coincidence counter assays have been performed on uranium metal highly enriched in 235U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches: the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the 235U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the 235U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility.
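The linear calibration described above, a straight-line relationship between totals-corrected reals and 235U mass, can be illustrated with a generic least-squares fit. The data points below are invented for illustration; the BNL totals correction itself is not shown.

```python
import numpy as np

# Hypothetical calibration points: 235U mass (g) vs. totals-corrected reals rate.
mass = np.array([100.0, 250.0, 500.0, 750.0, 1000.0])
corrected_reals = np.array([21.0, 52.4, 104.1, 156.9, 208.3])

# A linear calibration needs only two fitted constants (slope and intercept),
# so fewer measured points suffice than for a second-order polynomial.
slope, intercept = np.polyfit(mass, corrected_reals, 1)

def assay_mass(rate):
    """Invert the linear calibration: estimate 235U mass from a measured rate."""
    return (rate - intercept) / slope
```

With a second-order polynomial, three constants would have to be pinned down instead of two, which is the practical advantage the report attributes to the BNL approach.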
An ATR architecture for algorithm development and testing
Breivik, Gøril M.; Løkken, Kristin H.; Brattli, Alvin; Palm, Hans C.; Haavardsholm, Trym
2013-05-01
A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and test of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real-time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient. The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.
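The exchangeable-module idea behind such an architecture can be sketched in a few lines of Python. This is an illustration of the design pattern only, not FFI's C++ framework; the module names and toy "segmenters" are invented.

```python
from typing import Callable, List

# A module is any callable that transforms the data flowing through the pipeline.
Module = Callable[[list], list]

def threshold_segmenter(pixels: list) -> list:
    """Toy segmentation stage: keep intensity values above a fixed threshold."""
    return [p for p in pixels if p > 128]

def edge_segmenter(pixels: list) -> list:
    """Alternative segmentation stage exposing the same interface."""
    return [p for p in pixels if abs(p - 128) > 64]

def run_pipeline(modules: List[Module], data: list) -> list:
    # Because every module shares one interface, any stage can be swapped
    # (e.g. threshold_segmenter for edge_segmenter) without touching the rest.
    for module in modules:
        data = module(data)
    return data
```

A real framework adds typed data packets, buffering and parallel execution between stages, but the exchangeability comes entirely from the shared module interface shown here.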
International Nuclear Information System (INIS)
Sano, Akihito; Furusho, Junji; Okajima, Yosuke
1988-01-01
This paper proposes a new control method for quadruped walking robots in which the leg-support exchange is smoothly implemented. First, the authors formulate the leg-support-exchange phenomenon in 'Trot' using Lagrange's collision equation. The continuous walking motion can then be numerically analyzed. Secondly, we propose a new control algorithm for leg-support exchange. Conventional high-gain local feedback causes many problems, such as slip and excessively high torque, in the leg-support-exchange phase of dynamic walking, since it is impossible in this phase to prepare proper reference values beforehand. In this algorithm, the control law is changed to 'free mode' or 'constant current mode' in order to adjust to the environment. The effectiveness of the proposed control strategy is confirmed by computer simulation and by experiments using the walking robot 'COLT-1.' (author)
Convex hull ranking algorithm for multi-objective evolutionary algorithms
Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.
2012-01-01
Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity
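The non-dominated principle these algorithms build on can be made concrete with a minimal Pareto-dominance check (illustrative only; the convex-hull ranking itself is more involved).

```python
def dominates(a, b):
    """For minimization: a dominates b if a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """Return the points not dominated by any other point (the first front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Ranking a whole population means peeling off successive fronts; the quadratic cost of this pairwise check is part of the complexity that motivates alternative ranking schemes.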
Buske, Orion J; Schiettecatte, François; Hutton, Benjamin; Dumitriu, Sergiu; Misyura, Andriy; Huang, Lijia; Hartley, Taila; Girdea, Marta; Sobreira, Nara; Mungall, Chris; Brudno, Michael
2015-10-01
Despite the increasing prevalence of clinical sequencing, the difficulty of identifying additional affected families is a key obstacle to solving many rare diseases. There may only be a handful of similar patients worldwide, and their data may be stored in diverse clinical and research databases. Computational methods are necessary to enable finding similar patients across the growing number of patient repositories and registries. We present the Matchmaker Exchange Application Programming Interface (MME API), a protocol and data format for exchanging phenotype and genotype profiles to enable matchmaking among patient databases, facilitate the identification of additional cohorts, and increase the rate with which rare diseases can be researched and diagnosed. We designed the API to be straightforward and flexible in order to simplify its adoption on a large number of data types and workflows. We also provide a public test data set, curated from the literature, to facilitate implementation of the API and development of new matching algorithms. The initial version of the API has been successfully implemented by three members of the Matchmaker Exchange and was immediately able to reproduce previously identified matches and generate several new leads currently being validated. The API is available at https://github.com/ga4gh/mme-apis. © 2015 WILEY PERIODICALS, INC.
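A match request in the MME API is a JSON document carrying a phenotype/genotype profile. The sketch below constructs a minimal hypothetical payload; the field names only approximate the published schema, and the patient identifier, contact and gene are invented, so consult the repository linked above for the authoritative version.

```python
import json

# Hypothetical MME-style match request (field names are approximate;
# see https://github.com/ga4gh/mme-apis for the authoritative schema).
request = {
    "patient": {
        "id": "example-patient-1",  # made-up identifier
        "contact": {"name": "Example Clinic", "href": "mailto:clinic@example.org"},
        "features": [  # phenotype profile as HPO term codes
            {"id": "HP:0000252", "observed": "yes"},
        ],
        "genomicFeatures": [  # genotype profile
            {"gene": {"id": "EFTUD2"}},
        ],
    }
}

payload = json.dumps(request)
```

Keeping the profile to standardized vocabularies (HPO terms, gene symbols) is what lets independently developed databases compute similarity over exchanged records.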
De Facto Exchange Rate Regime Classifications Are Better Than You Think
Michael Bleaney; Mo Tian; Lin Yin
2015-01-01
Several de facto exchange rate regime classifications have been widely used in empirical research, but they are known to disagree with one another to a disturbing extent. We dissect the algorithms employed and argue that they can be significantly improved. We implement the improvements, and show that there is a far higher agreement rate between the modified classifications. We conclude that the current pessimism about de facto exchange rate regime classification schemes is unwarranted.
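De facto classification algorithms of this kind score observed exchange-rate behaviour against thresholds. The sketch below is deliberately simplified, with invented thresholds; the schemes the paper dissects use richer rules, including reserves movements and parallel market rates.

```python
import statistics

def classify_regime(monthly_pct_changes, peg_tol=0.5, float_tol=2.0):
    """Crude volatility-based regime classification (illustrative thresholds).

    Low volatility of the exchange rate suggests a peg, high volatility a
    float, with an intermediate band in between.
    """
    vol = statistics.pstdev(monthly_pct_changes)
    if vol < peg_tol:
        return "peg"
    if vol < float_tol:
        return "intermediate"
    return "float"
```

Disagreement between published classifications often comes down to exactly such choices of statistic, window and threshold, which is why dissecting the algorithms can reconcile them.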
Foundations of genetic algorithms 1991
1991-01-01
Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition
THE APPROACHING TRAIN DETECTION ALGORITHM
S. V. Bibikov
2015-01-01
The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train detection algorithm in view of increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on the method of detecting weak signals in a noisy environment. The ultimate expression of the information statistic is adjusted. We present the results of algorithm research and t...
Combinatorial optimization algorithms and complexity
Papadimitriou, Christos H
1998-01-01
This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.
Essential algorithms: a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s
Kohn–Sham exchange-correlation potentials from second-order reduced density matrices
Energy Technology Data Exchange (ETDEWEB)
Cuevas-Saavedra, Rogelio; Staroverov, Viktor N., E-mail: vstarove@uwo.ca [Department of Chemistry, The University of Western Ontario, London, Ontario N6A 5B7 (Canada); Ayers, Paul W. [Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1 (Canada)
2015-12-28
We describe a practical algorithm for constructing the Kohn–Sham exchange-correlation potential corresponding to a given second-order reduced density matrix. Unlike conventional Kohn–Sham inversion methods in which such potentials are extracted from ground-state electron densities, the proposed technique delivers unambiguous results in finite basis sets. The approach can also be used to separate approximately the exchange and correlation potentials for a many-electron system for which the reduced density matrix is known. The algorithm is implemented for configuration-interaction wave functions and its performance is illustrated with numerical examples.
Efficient GPS Position Determination Algorithms
National Research Council Canada - National Science Library
Nguyen, Thao Q
2007-01-01
... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...
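The closed-form trick alluded to above can be illustrated with a toy example: subtracting one range equation from the others cancels the quadratic terms, so position follows from a small linear solve. This is a hypothetical 2-D sketch with exact ranges, not the paper's GPS algorithm (which works with the full trilateration equations and networks of users).

```python
def trilaterate(beacons, ranges):
    """Solve for (x, y) from three beacon positions and measured ranges.

    Subtracting the first range equation
        (x - x0)^2 + (y - y0)^2 = r0^2
    from the other two cancels the quadratic terms, leaving a 2x2 linear
    system solved here by Cramer's rule.
    """
    (x0, y0), (x1, y1), (x2, y2) = beacons
    r0, r1, r2 = ranges
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = r0**2 - r1**2 + x1**2 - x0**2 + y1**2 - y0**2
    b2 = r0**2 - r2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det

# Beacons at known points; ranges generated from a known true position.
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (3.0, 4.0)
ranges = [((truth[0] - bx) ** 2 + (truth[1] - by) ** 2) ** 0.5
          for bx, by in beacons]
print(trilaterate(beacons, ranges))  # recovers (3.0, 4.0)
```

With noisy ranges the same linearization is typically solved by least squares over more than three beacons.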
Algorithmic approach to diagram techniques
International Nuclear Information System (INIS)
Ponticopoulos, L.
1980-10-01
An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are among the algorithms inspired by nature, and in little more than a decade hundreds of papers have reported their successful application. This paper reviews the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins (1989). Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to give an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, its history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.
International Nuclear Information System (INIS)
Meshii, Toshio; Maita, Yasushi; Hirota, Koichi; Kamishima, Yoshio.
1988-01-01
When the reduction of the construction cost of FBRs is considered from the standpoint of machinery and equipment, the assigned missions are to reduce size and to increase efficiency. In order to make a reactor vessel small, it is indispensable to decrease the size of the fuel exchange equipment installed above the core. Mitsubishi Heavy Industries Ltd. carried out research on the development of a new type of fuel exchange system. A fuel exchange system for FBRs must depart from the fuel exchange mode of LWRs: fuel is handled in the presence of chemically active sodium under an inert argon cover gas, and under heavy shielding against high radiation. The fuel exchange system for FBRs is composed of a fuel exchanger, which inserts, withdraws and transfers fuel, and rotary plugs. The mechanism adopted for the new type of fuel exchange system that Mitsubishi is developing is explained. The feasibility of the mechanism above the core was investigated by water flow, vibration and buckling tests. The design of the mechanism above the core of a demonstration FBR was examined, and the new type of fuel exchange system was found to be sufficiently applicable. (Kako, I.)
International Nuclear Information System (INIS)
Hayden, O.; Willby, C.R.; Sheward, G.E.; Ormrod, D.T.; Firth, G.F.
1980-01-01
An improved tube-in-shell heat exchanger to be used between liquid metal and water is described for use in the liquid metal coolant system of fast breeder reactors. It is stated that this design is less prone to failures which could result in sodium water reactions than previous exchangers. (UK)
Social exchange : Relations and networks
Dijkstra, Jacob
2015-01-01
In this short paper, I review the literature on social exchange networks, with specific attention to theoretical and experimental research. I indicate how social exchange theory is rooted in general social theory and mention a few of its main links to social network analysis and empirical network
Ion exchange : principles and applications
International Nuclear Information System (INIS)
Bank, Nader; Majumdar, A.S.
1975-01-01
An attempt is made to provide a brief state-of-the-art review of the basic principles underlying the unit operation of ion exchange and its numerous and diverse commercial applications. A selective bibliography is provided for the benefit of the reader interested in pursuing any specific aspect of ion exchange. (author)
Educators Exchange: A Program Evaluation.
Armstrong, William B.
The Educators Exchange Program (EEP) was established under a training and educational exchange agreement reached by California's San Diego Community College District (SDCCD) and the Republic of Mexico. In the program, the District provided a 4-week technological training program to faculty at Centros de Capacitacion Tecnologica Industrial…
On-line fouling monitor for heat exchangers
International Nuclear Information System (INIS)
Tsou, J.L.
1995-01-01
Biological and/or chemical fouling in utility service water system heat exchangers adversely affects operation and maintenance costs, and reduced heat transfer capability can force a power derating or even a plant shutdown. In addition, service water heat exchanger performance is a safety issue for nuclear power plants, and the issue was highlighted by the NRC in Generic Letter 89-13. Heat transfer losses due to fouling are difficult to measure and, usually, quantitative assessment of the impact of fouling is impossible. Plant operators typically measure inlet and outlet water temperatures and flow rates and then perform complex calculations for heat exchanger fouling resistance or 'cleanliness'. These direct estimates are often imprecise due to inadequate instrumentation. The Electric Power Research Institute developed and patented an on-line condenser fouling monitor. This monitor may be installed in any location within the condenser; does not interfere with routine plant operations, including on-line mechanical and chemical treatment methods; and provides continuous, real-time readings of the heat transfer efficiency of the instrumented tube. This instrument can be modified to perform on-line monitoring of service water heat exchangers. This paper discusses the design and construction of the new monitor, and the algorithm used to calculate service water heat exchanger fouling
Transendothelial lipoprotein exchange and microalbuminuria
DEFF Research Database (Denmark)
Jensen, Jan Skov; Feldt-Rasmussen, Bo; Jensen, Kurt Svarre
2004-01-01
OBJECTIVE: Microalbuminuria is associated with increased risk of atherosclerosis in individuals without diabetes. We hypothesized that transendothelial lipoprotein exchange is elevated among such individuals, possibly explaining increased intimal lipoprotein accumulation and thus atherosclerosis. METHODS: Using an in vivo isotope technique, transendothelial exchange of low density lipoprotein (LDL) was measured in 77 non-diabetic individuals. Autologous 131-iodinated LDL was reinjected intravenously, and the 1-h fractional escape rate was calculated as an index of transendothelial exchange. RESULTS: There was no difference in transendothelial LDL exchange between subjects with microalbuminuria versus normoalbuminuria (mean (95% confidence interval) 3.8%/h (3.3-4.3%/h) versus 4.2%/h (3.7-4.7%/h); P=0.33). In contrast, there was a positive correlation between transendothelial LDL exchange and (logarithmically...
Heat exchanger leakage problem location
Directory of Open Access Journals (Sweden)
Jícha Miroslav
2012-04-01
Full Text Available Recent compact heat exchangers are very often assembled from numerous parts joined together to separate the heat transfer fluids and to form the required heat exchanger arrangement. Leak tightness is therefore a very important property of compact heat exchangers. Although compact heat exchangers have been produced for many years, there are still technological problems associated with manufacturing an ideal connection between the individual parts, mainly encountered with special-purpose heat exchangers, e.g. gas turbine recuperators. This paper describes a procedure used to identify the leakage location inside a prime surface gas turbine recuperator. For this purpose, an analytical model of the leaky gas turbine recuperator was created to assess its performance. The results obtained are compared with experimental data acquired during the recuperator thermal performance analysis. The differences between these two data sets are used to indicate possible leakage areas.
Next Generation Microchannel Heat Exchangers
Ohadi, Michael; Dessiatoun, Serguei; Cetegen, Edvin
2013-01-01
In Next Generation Microchannel Heat Exchangers, the authors focus on the new generation of highly efficient heat exchangers and present novel data and technical expertise not available in the open literature. Next generation microchannels offer record high heat transfer coefficients with pressure drops much lower than conventional microchannel heat exchangers. These inherent features promise fast penetration into many new markets, including high heat flux cooling of electronics, waste heat recovery and energy efficiency enhancement applications, alternative energy systems, as well as applications in mass exchangers and chemical reactor systems. The combination of up-to-the-minute research findings and technical know-how makes this book very timely as the search for high performance heat and mass exchangers that can cut costs in materials consumption intensifies.
Heat exchanger performance monitoring guidelines
International Nuclear Information System (INIS)
Stambaugh, N.; Closser, W. Jr.; Mollerus, F.J.
1991-12-01
Fouling can occur in many heat exchanger applications in a way that impedes heat transfer and fluid flow and reduces the heat transfer or performance capability of the heat exchanger. Fouling may be significant for heat exchanger surfaces and flow paths in contact with plant service water. This report presents guidelines for performance monitoring of heat exchangers subject to fouling. Guidelines include selection of heat exchangers to monitor based on system function, safety function and system configuration. Five monitoring methods are discussed: the heat transfer, temperature monitoring, temperature effectiveness, delta P and periodic maintenance methods. Guidelines are included for selecting the appropriate monitoring methods and for implementing the selected methods. The report also includes a bibliography, example calculations, and technical notes applicable to the heat transfer method
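The heat transfer monitoring method named above can be given a rough numerical shape: compute the duty from one stream's flow rate and temperature rise, back out an apparent overall coefficient through the log-mean temperature difference, and read the deviation from the clean design value as an apparent fouling resistance. This is an illustrative sketch only; the function names and every number below are assumptions, not values from the report.

```python
import math

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for a counterflow exchanger."""
    d1 = t_hot_in - t_cold_out
    d2 = t_hot_out - t_cold_in
    return (d1 - d2) / math.log(d1 / d2) if d1 != d2 else d1

def fouling_resistance(m_dot, cp, t_cold_in, t_cold_out,
                       t_hot_in, t_hot_out, area, u_clean):
    """Apparent fouling resistance (m^2*K/W) from measured terminal data."""
    q = m_dot * cp * (t_cold_out - t_cold_in)  # duty from the cold side, W
    u_service = q / (area * lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out))
    return 1.0 / u_service - 1.0 / u_clean

# Illustrative numbers: 50 kg/s of water heated 20 to 30 C against a hot
# stream cooled 60 to 40 C across 100 m^2, clean design U of 1000 W/m^2K.
rf = fouling_resistance(50.0, 4180.0, 20.0, 30.0, 60.0, 40.0, 100.0, 1000.0)
print(rf)  # positive: the exchanger appears fouled relative to design
```

Trending this resistance over time, rather than reading any single value, is what makes the method usable with imprecise plant instrumentation.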
Directory of Open Access Journals (Sweden)
Katie S Lennard
Full Text Available The relevance of specific microbial colonisation to colorectal cancer (CRC disease pathogenesis is increasingly recognised, but our understanding of possible underlying molecular mechanisms that may link colonisation to disease in vivo remains limited. Here, we investigate the relationships between the most commonly studied CRC-associated bacteria (Enterotoxigenic Bacteroides fragilis, pks+ Escherichia coli, Fusobacterium spp., afaC+ E. coli, Enterococcus faecalis & Enteropathogenic E. coli and altered transcriptomic and methylation profiles of CRC patients, in order to gain insight into the potential contribution of these bacteria in the aetiopathogenesis of CRC. We show that colonisation by E. faecalis and high levels of Fusobacterium is associated with a specific transcriptomic subtype of CRC that is characterised by CpG island methylation, microsatellite instability and a significant increase in inflammatory and DNA damage pathways. Analysis of the significant, bacterially-associated changes in host gene expression, both at the level of individual genes as well as pathways, revealed a transcriptional remodeling that provides a plausible mechanistic link between specific bacterial colonisation and colorectal cancer disease development and progression in this subtype; these included upregulation of REG3A, REG1A and REG1P in the case of high-level colonization by Fusobacterium, and CXCL10 and BMI1 in the case of colonisation by E. faecalis. The enrichment of both E. faecalis and Fusobacterium in this CRC subtype suggests that polymicrobial colonisation of the colonic epithelium may well be an important aspect of colonic tumourigenesis.
Honing process optimization algorithms
Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.
2018-03-01
This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed and such important concepts as the task for optimization of honing operations, the optimal structure of the honing working cycles, stepped and stepless honing cycles, simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece in a sufficiently wide area and can be used to operate the CNC machine CC743.
Opposite Degree Algorithm and Its Applications
Directory of Open Access Journals (Sweden)
Xiao-Guang Yue
2015-12-01
Full Text Available The Opposite Degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. It is based mainly on the concept of opposite degree, combined with ideas from neural network design, genetic algorithms and clustering analysis. The OD algorithm is divided into two sub-algorithms, namely the opposite degree numerical computation (OD-NC) algorithm and the opposite degree classification computation (OD-CC) algorithm.
International Nuclear Information System (INIS)
Elsays, Mostafa A.; Naguib Aly, M; Badawi, Alya A.
2010-01-01
The Particle Swarm Optimization (PSO) algorithm is used to optimize the design of shell-and-tube heat exchangers and determine the optimal feasible solutions so as to eliminate trial-and-error during the design process. The design formulation takes into account the area and the total annual cost of heat exchangers as two objective functions, together with operating as well as geometrical constraints. The Nonlinear Constrained Single Objective Particle Swarm Optimization (NCSOPSO) algorithm is used to minimize and find the optimal feasible solution for each of the nonlinear constrained objective functions alone. Then, a novel Nonlinear Constrained Multi-objective Particle Swarm Optimization (NCMOPSO) algorithm is used to minimize and find the Pareto optimal solutions for both of the nonlinear constrained objective functions together. The experimental results show that the two algorithms are very efficient and fast and can find accurate optimal feasible solutions to the shell-and-tube heat exchanger design optimization problem. (orig.)
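For readers unfamiliar with PSO itself, a bare-bones, unconstrained, single-objective version is sketched below; the NCSOPSO/NCMOPSO algorithms of the paper add nonlinear constraint handling and Pareto bookkeeping on top of this basic particle update. All parameter values are conventional defaults, not the paper's settings.

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box bounds with a basic global-best particle swarm."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for a heat-exchanger cost model: minimize the sphere function.
best, val = pso(lambda p: sum(x * x for x in p), [(-5.0, 5.0)] * 3)
print(val)  # near 0
```

A constrained variant would typically replace the plain comparison `val < pbest_val[i]` with a feasibility-aware rule or a penalty term.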
A spin exchange model for singlet fission
Yago, Tomoaki; Wakasa, Masanobu
2018-03-01
Singlet fission has been analyzed with the Dexter model in which electron exchange occurs between chromophores, conserving the spin for each electron. In the present study, we propose a spin exchange model for singlet fission. In the spin exchange model, spins are exchanged by the exchange interaction between two electrons. Our analysis with simple spin functions demonstrates that singlet fission is possible by spin exchange. A necessary condition for spin exchange is a variation in exchange interactions. We also adapt the spin exchange model to triplet fusion and triplet energy transfer, which often occur after singlet fission in organic solids.
Electrically switched ion exchange
Energy Technology Data Exchange (ETDEWEB)
Lilga, M.A. [Pacific Northwest National Lab., Richland, WA (United States); Schwartz, D.T.; Genders, D.
1997-10-01
A variety of waste types containing radioactive 137Cs are found throughout the DOE complex. These waste types include water in reactor cooling basins, radioactive high-level waste (HLW) in underground storage tanks, and groundwater. Safety and regulatory requirements and economics require the removal of radiocesium before these wastes can be permanently disposed of. Electrically Switched Ion Exchange (ESIX) is an approach for radioactive cesium separation that combines IX and electrochemistry to provide a selective, reversible, and economic separation method that also produces little or no secondary waste. In the ESIX process, an electroactive IX film is deposited electrochemically onto a high-surface-area electrode, and ion uptake and elution are controlled directly by modulating the potential of the film. For cesium, the electroactive films under investigation are ferrocyanides, which are well known to have high selectivities for cesium in concentrated sodium solutions. When a cathodic potential is applied to the film, Fe3+ is reduced to the Fe2+ state, and a cation must be intercalated into the film to maintain charge neutrality (i.e., Cs+ is loaded). Conversely, if an anodic potential is applied, a cation must be released from the film (i.e., Cs+ is unloaded). Therefore, to load the film with cesium, the film is simply reduced; to unload cesium, the film is oxidized.
Humanitarianism and Unequal Exchange
Directory of Open Access Journals (Sweden)
Raja Swamy
2017-08-01
Full Text Available This article examines the relationship between humanitarian aid and ecologically unequal exchange in the context of post-disaster reconstruction. I assess the manner in which humanitarian aid became a central part of the reconstruction process in India's Tamil Nadu state following the devastating 2004 Indian Ocean tsunami. This article focuses on how the humanitarian “gift” of housing became a central plank of the state's efforts to push fishers inland while opening up coastal lands for various economic development projects such as ports, infrastructure, industries, and tourism. As part of the state and multilateral agency financed reconstruction process, the humanitarian aid regime provided “free” houses as gifts to recipients while expecting in return the formal abandonment of all claims to the coast. The humanitarian “gift” therefore helped depoliticize critical issues of land and resources, location and livelihood, which prior to the tsunami were subjects of long-standing political conflicts between local fisher populations and the state. The gift economy in effect played into an ongoing conflict over land and resources and effectively sought to ease the alienation of fishers from their coastal commons and near shore marine resource base. I argue that humanitarian aid, despite its associations with benevolence and generosity, presents a troubling and disempowering set of options for political struggles over land, resources, and social entitlements such as housing, thereby intensifying existing ecological and economic inequalities.
Vítek, Tomáš
2017-01-01
This bachelor thesis presents the design of a heat exchanger for a hot-water boiler with a gasification chamber, which preheats the combustion air using waste heat from the flue gases. The values used in the calculation were measured experimentally. The thesis contains a brief description of the tubular heat exchanger, a stoichiometric combustion calculation, the design of the geometric dimensions of the exchanger, the calculation of pressure losses and the calculation of power output. It also includes the drawing documentation of the designed exchanger.
Effects of Externalities on Patterns of Exchange
Dijkstra, J.; van Assen, M.A.L.M.
Many real-life examples of exchanges with externalities exist. Externalities of exchange are defined as direct consequences of exchanges for the payoff of actors who are not involved in the exchange. This paper focuses on how externalities influence the partner choice in exchange networks. In an
Fast algorithm for Morphological Filters
International Nuclear Information System (INIS)
Lou Shan; Jiang Xiangqian; Scott, Paul J
2011-01-01
In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time-consuming, especially for areal data, and are not generally adopted in practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms, it generates the opening and closing envelopes without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and areal filters. Examples are presented to demonstrate the validity and the superiority in efficiency of this algorithm over the naive algorithm.
Recognition algorithms in knot theory
International Nuclear Information System (INIS)
Dynnikov, I A
2003-01-01
In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory
Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm
Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad
2018-01-01
Security is a very important issue in data transmission, and there are many methods for making files more secure. One such method is cryptography, which secures a file by encoding it so that the original content is hidden; anyone not holding the key cannot decrypt the hidden code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem, which uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that when the TEA algorithm encrypts the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table written as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext length.
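The symmetric half of the scheme, TEA, is compact enough to sketch in full; this is a straightforward rendering of the standard 32-round TEA block cipher (the LUC half, which protects the TEA key, is omitted here). The key and plaintext values in the example are arbitrary.

```python
# TEA operates on a 64-bit block as two 32-bit words (v0, v1) with a
# 128-bit key as four 32-bit words, over 32 rounds.
MASK = 0xFFFFFFFF
DELTA = 0x9E3779B9  # round constant derived from the golden ratio

def tea_encrypt(v0, v1, key):
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + (((v1 << 4) + key[0]) ^ (v1 + s)
                    ^ ((v1 >> 5) + key[1]))) & MASK
        v1 = (v1 + (((v0 << 4) + key[2]) ^ (v0 + s)
                    ^ ((v0 >> 5) + key[3]))) & MASK
    return v0, v1

def tea_decrypt(v0, v1, key):
    s = (DELTA * 32) & MASK  # start from the final round's sum and unwind
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + key[2]) ^ (v0 + s)
                    ^ ((v0 >> 5) + key[3]))) & MASK
        v0 = (v0 - (((v1 << 4) + key[0]) ^ (v1 + s)
                    ^ ((v1 >> 5) + key[1]))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

key = (0x00112233, 0x44556677, 0x8899AABB, 0xCCDDEEFF)
c0, c1 = tea_encrypt(0x01234567, 0x89ABCDEF, key)
print(tea_decrypt(c0, c1, key) == (0x01234567, 0x89ABCDEF))  # True
```

Encrypting a file then consists of splitting it into 8-byte blocks (with padding) and applying `tea_encrypt` to each, which is consistent with the ciphertext growth the abstract reports.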
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by an embedded system with constrained computational resources. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.
Algorithmic Relative Complexity
Directory of Open Access Journals (Sweden)
Daniele Cerra
2011-04-01
Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
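The compression-based approximation step can be illustrated with the closely related normalized compression distance (NCD); this sketch uses zlib as the real-world compressor standing in for Kolmogorov complexity. The NCD formula shown is the established one and may differ in detail from the divergence derived in the paper.

```python
import zlib

def csize(data: bytes) -> int:
    """Compressed length as a computable proxy for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small when x and y share structure."""
    cx, cy, cxy = csize(x), csize(y), csize(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 40
b = b"pack my box with five dozen liquor jugs " * 40
print(ncd(a, a) < ncd(a, b))  # similar strings compress better together: True
```

The same parameter-free measure works on any byte strings, which is what makes applications like authorship attribution possible without domain-specific features.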
Fatigue evaluation algorithms: Review
Energy Technology Data Exchange (ETDEWEB)
Passipoularidis, V.A.; Broendsted, P.
2009-11-15
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events at ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)
Custom, contract, and kidney exchange.
Healy, Kieran; Krawiec, Kimberly D
2012-01-01
In this Essay, we examine a case in which the organizational and logistical demands of a novel form of organ exchange (the nonsimultaneous, extended, altruistic donor (NEAD) chain) do not map cleanly onto standard cultural schemas for either market or gift exchange, resulting in sociological ambiguity and legal uncertainty. In some ways, a NEAD chain resembles a form of generalized exchange, an ancient and widespread instance of the norm of reciprocity that can be thought of simply as the obligation to “pay it forward” rather than the obligation to reciprocate directly with the original giver. At the same time, a NEAD chain resembles a string of promises and commitments to deliver something in exchange for some valuable consideration--that is, a series of contracts. Neither of these salient "social imaginaries" of exchange--gift giving or formal contract--perfectly meets the practical demands of the NEAD system. As a result, neither contract nor generalized exchange drives the practice of NEAD chains. Rather, the majority of actual exchanges still resemble a simpler form of exchange: direct, simultaneous exchange between parties with no time delay or opportunity to back out. If NEAD chains are to reach their full promise for large-scale, nonsimultaneous organ transfer, legal uncertainties and sociological ambiguities must be finessed, both in the practices of the coordinating agencies and in the minds of NEAD-chain participants. This might happen either through the further elaboration of gift-like language and practices, or through a creative use of the cultural form and motivational vocabulary, but not necessarily the legal and institutional machinery, of contract.
Component Cooling Heat Exchanger Heat Transfer Capability Operability Monitoring
International Nuclear Information System (INIS)
Mihalina, M.; Djetelic, N.
2010-01-01
(e.g. using CC Heat Exchanger bypass valves for CC temperature control, variation of plant heat loads, pump performance, and day-night temperature difference, with lagging effects on heat transfer dynamics). Krsko NPP continuously monitors Component Cooling (CC) Heat Exchanger performance using the on-line process information system (PIS). By defining a mathematical algorithm, it is possible to continuously evaluate CC Heat Exchanger operability by verifying that the calculated heat transfer rate is in accordance with the heat exchanger design specification sheet requirements. These calculations are limited to summer periods only, when the bypass valves are neither throttled nor open. (author)
Relational and XML Data Exchange
Arenas, Marcelo
2010-01-01
Data exchange is the problem of finding an instance of a target schema, given an instance of a source schema and a specification of the relationship between the source and the target. Such a target instance should correctly represent information from the source instance under the constraints imposed by the target schema, and it should allow one to evaluate queries on the target instance in a way that is semantically consistent with the source data. Data exchange is an old problem that re-emerged as an active research topic recently, due to the increased need for exchange of data in various for
Corrosion protected reversing heat exchanger
International Nuclear Information System (INIS)
Zawierucha, R.
1984-01-01
A reversing heat exchanger of the plate and fin type having multiple aluminum parting sheets in a stacked arrangement with corrugated fins separating the sheets to form multiple flow paths, means for closing the ends of the sheets, an input manifold arrangement of headers for the warm end of the exchanger and an output manifold arrangement for the cold end of the exchanger with the input air feed stream header and the waste gas exhaust header having an alloy of zinc and aluminum coated on the inside surface for providing corrosion protection to the stack
Heat exchanger using graphite foam
Campagna, Michael Joseph; Callas, James John
2012-09-25
A heat exchanger is disclosed. The heat exchanger may have an inlet configured to receive a first fluid and an outlet configured to discharge the first fluid. The heat exchanger may further have at least one passageway configured to conduct the first fluid from the inlet to the outlet. The at least one passageway may be composed of a graphite foam and a layer of graphite material on the exterior of the graphite foam. The layer of graphite material may form at least a partial barrier between the first fluid and a second fluid external to the at least one passageway.
Organic decontamination by ion exchange
International Nuclear Information System (INIS)
Wilson, T.R.
1994-01-01
This study has successfully identified ion exchanger media suitable for decontaminating the 5500-gallon organic layer in Tank 241-C-103. Decontamination of radionuclides is necessary to meet shipping, incinerator site storage, and incineration feed requirements. The exchanger media were identified through a literature search and experiments at the Russian Institute for Physical Chemistry. The principal radionuclides addressed are Cs-137 and Sr-90. Recommendations for an experimental program plan conclude the discussion. The experimental program would provide the data necessary for plant design specifications for a column and for ion exchange media to be used in decontaminating the organic layer
Optimal Fungal Space Searching Algorithms.
Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V
2016-10-01
Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase significantly with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
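The uninformed-versus-informed comparison can be reproduced on a toy maze. The sketch below is an illustration, not the paper's fungal algorithm or its benchmark mazes: it counts the nodes expanded by DFS against A* with a Manhattan-distance heuristic on a hypothetical 5x5 grid.

```python
import heapq

# Hypothetical 5x5 grid maze: 0 = open, 1 = wall; start top-left, goal bottom-right.
MAZE = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
START, GOAL = (0, 0), (4, 4)

def neighbors(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(MAZE) and 0 <= nc < len(MAZE[0]) and MAZE[nr][nc] == 0:
            yield (nr, nc)

def dfs(start, goal):
    """Uninformed depth-first search; returns the number of nodes expanded."""
    stack, seen = [start], {start}
    expanded = 0
    while stack:
        cell = stack.pop()
        expanded += 1
        if cell == goal:
            return expanded
        for nxt in neighbors(cell):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return expanded

def a_star(start, goal):
    """A* with a Manhattan-distance heuristic; returns the number of nodes expanded."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    expanded = 0
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        expanded += 1
        if cell == goal:
            return expanded
        for nxt in neighbors(cell):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return expanded

print("DFS expanded:", dfs(START, GOAL))
print("A* expanded:", a_star(START, GOAL))
```

On larger mazes the gap between the two counts grows for DFS but stays modest for informed searches, which is the trend the paper measures the fungal algorithm against.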
Marriage exchanges, seed exchanges, and the dynamics of manioc diversity.
Delêtre, Marc; McKey, Doyle B; Hodkinson, Trevor R
2011-11-08
The conservation of crop genetic resources requires understanding the different variables-cultural, social, and economic-that impinge on crop diversity. In small-scale farming systems, seed exchanges represent a key mechanism in the dynamics of crop genetic diversity, and analyzing the rules that structure social networks of seed exchange between farmer communities can help decipher patterns of crop genetic diversity. Using a combination of ethnobotanical and molecular genetic approaches, we investigated the relationships between regional patterns of manioc genetic diversity in Gabon and local networks of seed exchange. Spatially explicit Bayesian clustering methods showed that geographical discontinuities of manioc genetic diversity mirror major ethnolinguistic boundaries, with a southern matrilineal domain characterized by high levels of varietal diversity and a northern patrilineal domain characterized by low varietal diversity. Borrowing concepts from anthropology--kinship, bridewealth, and filiation--we analyzed the relationships between marriage exchanges and seed exchange networks in patrilineal and matrilineal societies. We demonstrate that, by defining marriage prohibitions, kinship systems structure social networks of exchange between farmer communities and influence the movement of seeds in metapopulations, shaping crop diversity at local and regional levels.
Theory and design of heat exchanger : Double pipe and heat exchanger in abnormal condition
International Nuclear Information System (INIS)
Min, Ui Dong
1996-02-01
This book introduces the theory and design of heat exchangers, covering the HTRI program, multiple-tube heat exchangers, external heating, heat transfer theory, the basics of heat exchanger design, two-phase flow, condensation, boiling, heat exchanger materials, double-pipe heat exchangers with hand calculations, heat exchangers in abnormal conditions such as jacketed vessels and coiled vessels, and the design and summary of steam tracing.
Aberg, Kristoffer C; Müller, Julia; Schwartz, Sophie
2017-01-01
Anticipation and delivery of rewards improves memory formation, but little effort has been made to disentangle their respective contributions to memory enhancement. Moreover, it has been suggested that the effects of reward on memory are mediated by dopaminergic influences on hippocampal plasticity. Yet, evidence linking memory improvements to actual reward computations reflected in the activity of the dopaminergic system, i.e., prediction errors and expected values, is scarce and inconclusive. For example, different previous studies reported that the magnitude of prediction errors during a reinforcement learning task was a positive, negative, or non-significant predictor of successfully encoding simultaneously presented images. Individual sensitivities to reward and punishment have been found to influence the activation of the dopaminergic reward system and could therefore help explain these seemingly discrepant results. Here, we used a novel associative memory task combined with computational modeling and showed independent effects of reward-delivery and reward-anticipation on memory. Strikingly, the computational approach revealed positive influences from both reward delivery, as mediated by prediction error magnitude, and reward anticipation, as mediated by magnitude of expected value, even in the absence of behavioral effects when analyzed using standard methods, i.e., by collapsing memory performance across trials within conditions. We additionally measured trait estimates of reward and punishment sensitivity and found that individuals with increased reward (vs. punishment) sensitivity had better memory for associations encoded during positive (vs. negative) prediction errors when tested after 20 min, but a negative trend when tested after 24 h. In conclusion, modeling trial-by-trial fluctuations in the magnitude of reward, as we did here for prediction errors and expected value computations, provides a comprehensive and biologically plausible description of
STAR Algorithm Integration Team - Facilitating operational algorithm development
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Algorithm aversion: people erroneously avoid algorithms after seeing them err.
Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade
2015-02-01
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.
Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P
1999-10-01
In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.
Exchange of Information in Tax Matters
Paweł Szwajdler
2017-01-01
The main aim of this paper is to present issues related to the exchange of tax information. The author focuses on models of exchange of information and the boundaries of obligations in reference to the above-mentioned problems. Automatic exchange of information, spontaneous exchange of information, and exchange of information on request are analysed in this work on the basis of the OECD Convention on Mutual Administrative Assistance in Tax Matters, Council Directive 2011/16, and the OECD Model Agreement on Exchange...
Algorithmic Reflections on Choreography
Directory of Open Access Journals (Sweden)
Pablo Ventura
2016-11-01
In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.
Application of epidemic algorithms for smart grids control
International Nuclear Information System (INIS)
Krkoleva, Aleksandra
2012-01-01
Smart Grids are a new concept for electricity networks development, aiming to provide an economically efficient and sustainable power system by effectively integrating the actions and needs of the network users. The thesis addresses the Smart Grids concept, with emphasis on control strategies developed on the basis of epidemic algorithms, more specifically, gossip algorithms. The thesis is developed around three Smart Grid aspects: the changed role of consumers in terms of taking part in providing services within Smart Grids; the possibilities to implement decentralized control strategies based on distributed algorithms; and information exchange and the benefits emerging from the implementation of information and communication technologies. More specifically, the thesis presents a novel approach for providing ancillary services by implementing gossip algorithms. In a decentralized manner, by exchanging information between the consumers and by making decisions at the local level, based on the received information and local parameters, the group achieves its global objective, i.e., providing ancillary services. The thesis presents an overview of Smart Grid control strategies, with emphasis on new strategies developed for the most promising Smart Grid concepts, such as microgrids and virtual power plants. The thesis also presents the characteristics of epidemic algorithms and the possibilities for their implementation in Smart Grids. Based on the research on epidemic algorithms, two applications have been developed. These applications are the main outcome of the research. The first application enables consumers, represented by their commercial aggregators, to participate in load reduction and, consequently, to participate in the balancing market or reduce the balancing costs of the group. In this context, the gossip algorithms are used for the aggregator's message dissemination for load reduction and households and small commercial and industrial consumers to participate in maintaining
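The decentralized decision-making described above can be illustrated with the classic randomized gossip-averaging primitive. This is a minimal sketch under assumed parameters (five hypothetical households, a fixed round budget), not the thesis's actual protocol:

```python
import random

def gossip_average(values, rounds=200, seed=42):
    """Randomized pairwise gossip: at each step two random nodes average
    their local values. Every node converges to the global mean, so each
    node can estimate the group's total flexible load (mean * n) without
    any central coordinator."""
    vals = list(values)
    rng = random.Random(seed)
    n = len(vals)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)   # pick two distinct nodes
        avg = (vals[i] + vals[j]) / 2.0
        vals[i] = vals[j] = avg          # both adopt the average
    return vals

# Hypothetical flexible-load capacities (kW) of five households.
loads = [1.0, 3.0, 5.0, 2.0, 4.0]
estimates = gossip_average(loads)
print([round(v, 3) for v in estimates])
```

Pairwise averaging conserves the sum of the values, which is why each node's converged estimate times the group size recovers the total reducible load the aggregator needs.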
Apparatus and process for deuterium exchange
International Nuclear Information System (INIS)
Ergenc, M.S.
1976-01-01
The deuterium exchange plant is combined with an absorption refrigeration plant in order to improve the exchange process and to produce refrigeration. The refrigeration plant has a throttling means for expanding and cooling a portion of the liquid exchange medium separated in the exchange plant as well as an evaporator, in which the said liquid exchange medium is brought into heat exchange with a cold consumer device, absorption means for forming a solution of the used exchange medium and fresh water and a pump for pumping the solution into the exchange plant
Department of Veterans Affairs — “Connect Your Docs” through the Virtual Lifetime Electronic Record (VLER) Health Exchange program. This program gives VA and community health care providers secure...
Estimating Foreign Exchange Reserve Adequacy
Directory of Open Access Journals (Sweden)
Abdul Hakim
2013-04-01
Accumulating foreign exchange reserves, despite their cost and their impacts on other macroeconomic variables, provides some benefits. This paper models such foreign exchange reserves. To measure the adequacy of foreign exchange reserves for imports, it uses the total reserves-to-import ratio (TRM). The chosen independent variables are gross domestic product growth, exchange rates, opportunity cost, and a dummy variable separating the pre- and post-1997 Asian financial crisis periods. To estimate the risky TRM value, this paper uses conditional Value-at-Risk (VaR), with the help of the Glosten-Jagannathan-Runkle (GJR) model to estimate the conditional volatility. The results suggest that all independent variables significantly influence TRM. They also suggest that short- and long-run volatilities are evident, with additional evidence of asymmetric effects of negative and positive past shocks. The VaR values, which are calculated assuming both normal and t distributions, provide similar results, namely violations in 2005 and 2008.
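The asymmetric volatility model at the core of the analysis can be sketched directly from its recursion. The GJR-GARCH parameter values and return series below are illustrative assumptions, not the paper's fitted estimates:

```python
import math

def gjr_volatility(returns, omega=1e-6, alpha=0.05, gamma=0.10, beta=0.85):
    """GJR-GARCH(1,1) conditional-variance recursion (illustrative, unfitted
    parameters):
        sigma2[t] = omega + (alpha + gamma * 1[r[t-1] < 0]) * r[t-1]**2
                    + beta * sigma2[t-1]
    The gamma term makes negative shocks raise volatility more than positive
    shocks of the same size, i.e. the asymmetry the paper reports."""
    n = len(returns)
    sigma2 = [sum(r * r for r in returns) / n]  # seed with the sample variance
    for r in returns[:-1]:
        asym = gamma if r < 0 else 0.0
        sigma2.append(omega + (alpha + asym) * r * r + beta * sigma2[-1])
    return sigma2

def var_95(sigma2_t):
    """One-step-ahead 95% Value-at-Risk under a zero-mean normal assumption."""
    return 1.645 * math.sqrt(sigma2_t)

rets = [0.010, -0.020, 0.005, -0.015, 0.012, -0.030]
s2 = gjr_volatility(rets)
print("next-step VaR(95%):", round(var_95(s2[-1]), 4))
```

Swapping the 1.645 normal quantile for a Student-t quantile gives the paper's alternative t-distribution VaR.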
Mechanical calculation of heat exchangers
International Nuclear Information System (INIS)
Osweiller, Francis.
1977-01-01
Many heat exchangers are still being dimensioned at the present time by means of the American TEMA code (Tubular Exchanger Manufacturers Association). The basic formula of this code often gives rise to significant tubular plate thicknesses which, apart from the cost of materials, involve significant machining. Some constructors have brought into use calculation methods that are more analytic, so as to take into better consideration the mechanical phenomena which come into play in a heat exchanger. After a brief analysis of these methods, it is shown how the original TEMA formulations have changed to reach the present version and how this code has incorporated Gardner's results for treating exchangers with two fixed heads. A formal and numerical comparison is then made between the analytical methods and TEMA, in an attempt to assess a design code or computer calculation programme based on these methods in relation to the TEMA code [fr]
Microplate Heat Exchanger, Phase I
National Aeronautics and Space Administration — We propose a microplate heat exchanger for cryogenic cooling systems used for continuous flow distributed cooling systems, large focal plane arrays, multiple cooling...
Multisensor data fusion algorithm development
Energy Technology Data Exchange (ETDEWEB)
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
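The wavelet fusion rule described above can be illustrated in one dimension. This is a simplified stand-in (a one-level Haar transform on hypothetical signals), not the report's 2-D discrete wavelet transform implementation:

```python
def haar_1d(signal):
    """One-level 1-D Haar transform: pairwise averages (approximation)
    and pairwise half-differences (detail). Length must be even."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def inverse_haar_1d(approx, detail):
    """Exact inverse: (a + d, a - d) recovers the original pair."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def fuse(sig_a, sig_b):
    """Fusion rule used as a stand-in for the report's 2-D DWT scheme:
    average the approximation coefficients (spectral content) and keep the
    larger-magnitude detail coefficient (spatial detail/edges)."""
    a_lo, a_hi = haar_1d(sig_a)
    b_lo, b_hi = haar_1d(sig_b)
    lo = [(x + y) / 2 for x, y in zip(a_lo, b_lo)]
    hi = [x if abs(x) >= abs(y) else y for x, y in zip(a_hi, b_hi)]
    return inverse_haar_1d(lo, hi)

pan = [10.0, 12.0, 50.0, 48.0, 11.0, 9.0, 30.0, 32.0]  # hypothetical high-detail band
ms = [20.0, 20.0, 40.0, 40.0, 20.0, 20.0, 30.0, 30.0]  # hypothetical low-detail band
print(fuse(pan, ms))
```

Keeping the larger detail coefficient is what lets wavelet fusion preserve spatial structure better than intensity-modulation or intensity-hue-saturation substitution, which discard one input's detail entirely.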
Shared Year Exchange in Nursing
DEFF Research Database (Denmark)
Vedsegaard, Helle Wendner; Wederkinck, Elisabeth
2010-01-01
Description of Shared Year Exchange in Nursing, a development project concerning the development, description, and implementation of a shared year of study for nursing students at Metropol and La Trobe University, Australia.
Exchange Rate and Inflation Dynamics
Eatzaz Ahmad; Saima Ahmed Ali
1999-01-01
This paper studies simultaneous determination of nominal exchange rate and domestic price level in Pakistan. The estimated model contains sufficient built-in dynamics to trace the pattern and speed of adjustment in the two variables in response to temporary or permanent shocks. The two domestic shocks considered in the paper are monetary and real shocks, while the three external shocks considered are import price, export price and foreign exchange reserves shocks. The study finds that the imp...
Sodium vapor charge exchange cell
International Nuclear Information System (INIS)
Hiddleston, H.R.; Fasolo, J.A.; Minette, D.C.; Chrien, R.E.; Frederick, J.A.
1976-01-01
An operational sequential charge-exchange ion source yielding a 50 MeV H⁻ current of approximately 8 mA is planned for use with the Argonne 500 MeV booster synchrotron. We report on progress in the development of a sodium vapor charge-exchange cell as part of that planned effort. Design, fabrication, and operating results to date are presented and discussed. (author)
Mao-Gilles Stabilization Algorithm
Directory of Open Access Journals (Sweden)
Jérôme Gilles
2013-07-01
Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.
One improved LSB steganography algorithm
Song, Bing; Zhang, Zhi-hong
2013-03-01
Information hidden in digital images using the LSB algorithm is easily detected, with high accuracy, by X2 (chi-square) and RS steganalysis. We started by changing the selection of embedding locations and the embedding method; combining sub-affine transformation with matrix coding, we improved the LSB algorithm and propose a new LSB algorithm. Experimental results show that the improved algorithm resists X2 and RS steganalysis effectively.
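For reference, the plain LSB baseline that X2 and RS steganalysis defeat looks like the sketch below; the paper's improvements (location selection, sub-affine transformation, matrix coding) are not reproduced here, and the pixel and message values are hypothetical.

```python
def embed_lsb(pixels, bits):
    """Plain LSB embedding (the baseline the paper improves on): overwrite
    the least significant bit of each pixel with one message bit, changing
    each carrier pixel by at most 1."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(pixels, n_bits):
    """Read the message back as the low bit of the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [130, 127, 200, 55, 64, 90, 13, 244]  # hypothetical 8-bit pixels
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, message)
print(stego)
```

The statistical footprint of this naive scheme (even/odd pixel pairing) is exactly what chi-square and RS tests exploit, which motivates the paper's changes to where and how bits are embedded.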
Unsupervised Classification Using Immune Algorithm
Al-Muallim, M. T.; El-Kouatly, R.
2012-01-01
Unsupervised classification algorithm based on clonal selection principle named Unsupervised Clonal Selection Classification (UCSC) is proposed in this paper. The new proposed algorithm is data driven and self-adaptive, it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...
Graph Algorithm Animation with Grrr
Rodgers, Peter; Vidal, Natalia
2000-01-01
We discuss geometric positioning, highlighting of visited nodes and user defined highlighting that form the algorithm animation facilities in the Grrr graph rewriting programming language. The main purpose of animation was initially for the debugging and profiling of Grrr code, but recently it has been extended for the purpose of teaching algorithms to undergraduate students. The animation is restricted to graph based algorithms such as graph drawing, list manipulation or more traditional gra...
Algorithms over partially ordered sets
DEFF Research Database (Denmark)
Baer, Robert M.; Østerby, Ole
1969-01-01
in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms.
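The chain-enumeration step can be sketched as a depth-first extension over minimal upper candidates. This is an illustrative reconstruction, not the paper's Algol procedure:

```python
def maximal_chains(elements, leq):
    """Enumerate all maximal chains of a finite partially ordered set.
    `leq(a, b)` is the order relation; a maximal chain is a totally ordered
    subset that cannot be extended by any further element."""
    chains = []

    def extend(chain, candidates):
        # Elements strictly above the chain's current top (or all, if empty).
        above = [c for c in candidates
                 if not chain or (leq(chain[-1], c) and chain[-1] != c)]
        # Keep only minimal extensions, so each step is a covering step.
        minimal = [c for c in above
                   if not any(leq(d, c) and d != c for d in above)]
        if not minimal:
            chains.append(list(chain))  # cannot be extended: chain is maximal
            return
        for c in minimal:
            extend(chain + [c], above)

    extend([], list(elements))
    return chains

# Divisibility order on {1, 2, 3, 6}: the maximal chains are 1|2|6 and 1|3|6.
divides = lambda a, b: b % a == 0
print(maximal_chains([1, 2, 3, 6], divides))
```

Restricting each step to minimal extensions guarantees every emitted chain is maximal, since any skipped element would fit strictly between two chain members already present.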
Electrically switched cesium ion exchange
International Nuclear Information System (INIS)
Lilga, M.A.; Orth, R.J.; Sukamto, J.P.H.; Schwartz, D.T.; Haight, S.M.; Genders, J.D.
1997-04-01
Electrically Switched Ion Exchange (ESIX) is a separation technology being developed as an alternative to conventional ion exchange for removing radionuclides from high-level waste. The ESIX technology, which combines ion exchange and electrochemistry, is geared toward producing electroactive films that are highly selective, regenerable, and long lasting. During the process, ion uptake and elution are controlled directly by modulating the potential of an ion exchange film that has been electrochemically deposited onto a high surface area electrode. This method adds little sodium to the waste stream and minimizes the secondary wastes associated with traditional ion exchange techniques. Development of the ESIX process is well underway for cesium removal using ferrocyanides as the electroactive films. Films having selectivity for perrhenate (a pertechnetate surrogate) over nitrate also have been deposited and tested. A case study for the KE Basin on the Hanford Site was conducted based on the results of the development testing. Engineering design baseline parameters for film deposition, film regeneration, cesium loading, and cesium elution were used for developing a conceptual system. Order of magnitude cost estimates were developed to compare with conventional ion exchange. This case study demonstrated that KE Basin wastewater could be processed continuously with minimal secondary waste and reduced associated disposal costs, as well as lower capital and labor expenditures
An overview of smart grid routing algorithms
Wang, Junsheng; OU, Qinghai; Shen, Haijuan
2017-08-01
This paper surveys typical routing algorithms for the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two typical classes of routing algorithm are analyzed, namely clustering routing algorithms and conventional (non-clustering) routing algorithms, together with the advantages, disadvantages, and applicability of each.
Algorithmic complexity of quantum capacity
Oskouei, Samad Khabbazi; Mancini, Stefano
2018-04-01
We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.
Machine Learning an algorithmic perspective
Marsland, Stephen
2009-01-01
Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text. Theory backed up by practical examples: the book covers neural networks, graphical models, reinforcement le
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
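For context on the 1.58 bits/base figure, the naive fixed-width baseline packs uniformly at 2 bits/base; DNABIT Compress goes below that by assigning shorter bit codes to exact and reverse repeats, which the sketch below does not reproduce.

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASES = "ACGT"

def pack(seq):
    """Pack a DNA string at a fixed 2 bits/base, four bases per byte.
    This is the simple baseline; repeat-aware coding is needed to do better."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        byte = 0
        for ch in chunk:
            byte = (byte << 2) | CODE[ch]
        byte <<= 2 * (4 - len(chunk))  # zero-pad a short final chunk
        out.append(byte)
    return bytes(out)

def unpack(data, n_bases):
    """Inverse of pack(); n_bases drops the padding of the final byte."""
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASES[(byte >> shift) & 0b11])
    return "".join(seq[:n_bases])

s = "ACGTGGTAC"
packed = pack(s)
print(len(packed), "bytes for", len(s), "bases")
```

Any scheme beating 2 bits/base, as the paper claims with 1.58, must exploit sequence structure such as repeats, since 2 bits is the floor for incompressible four-letter data.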
Diversity-Guided Evolutionary Algorithms
DEFF Research Database (Denmark)
Ursem, Rasmus Kjær
2002-01-01
Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
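The distance-to-average-point measure that drives the DGEA's phase switching can be written directly from its definition; the population, search-space bounds, and threshold values mentioned below are illustrative assumptions.

```python
import math

def diversity(population, diag_len):
    """Distance-to-average-point diversity, normalized by the length of the
    search-space diagonal |L| and the population size |P|:
        div(P) = (1 / (|L| * |P|)) * sum_i ||x_i - mean(P)||"""
    n, dim = len(population), len(population[0])
    mean = [sum(ind[d] for ind in population) / n for d in range(dim)]
    total = sum(
        math.sqrt(sum((ind[d] - mean[d]) ** 2 for d in range(dim)))
        for ind in population
    )
    return total / (diag_len * n)

# Hypothetical 2-D population in the box [0, 10]^2 (diagonal length 10*sqrt(2)).
pop = [(1.0, 1.0), (9.0, 9.0), (1.0, 9.0), (9.0, 1.0)]
d = diversity(pop, 10 * math.sqrt(2))
print(round(d, 4))
# A DGEA-style controller would switch to exploration (mutation) when d drops
# below a low threshold (e.g. 5e-6 in some setups) and back to exploitation
# above a high one; both thresholds are tuning assumptions, not fixed values.
```

Because the measure is normalized by the diagonal, it is comparable across problems with different box bounds, which is what makes a single switching threshold workable.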
FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS
Directory of Open Access Journals (Sweden)
G. Sithole
2015-05-01
The notion of a ‘Best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen its performance is still uncertain because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.
Building test data from real outbreaks for evaluating detection algorithms.
Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve
2017-01-01
Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, sizes and durations, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and of the simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
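The homothetic-transform-plus-resampling idea can be sketched as follows for the inverse transform sampling (ITSM) variant: stretch or shrink a historical daily-case curve to the target duration, treat it as a discrete distribution over days, and draw the target number of cases from it. This is a simplified illustration under assumed conventions, not the authors' implementation; the scaling step and function names are invented for the example.

```python
import random

def resample_outbreak(historical_curve, n_cases, n_days, seed=None):
    """Rescale a historical daily-case curve to n_days (homothetic step),
    then draw n_cases via inverse transform sampling over the days."""
    rng = random.Random(seed)
    src_len = len(historical_curve)
    # Homothetic (scaling) step: map target days back onto historical days.
    scaled = [historical_curve[min(int(d * src_len / n_days), src_len - 1)]
              for d in range(n_days)]
    total = float(sum(scaled))
    # Cumulative distribution over the n_days.
    cdf, acc = [], 0.0
    for v in scaled:
        acc += v / total
        cdf.append(acc)
    # Inverse transform sampling: one uniform draw per simulated case.
    sim = [0] * n_days
    for _ in range(n_cases):
        u = rng.random()
        day = next(i for i, c in enumerate(cdf) if c >= u)
        sim[day] += 1
    return sim
```

With `n_cases` much smaller than `n_days` (overall scale factor below 1), many days receive zero cases, which illustrates the information loss the study reports.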
Processes of Ammonia Air-Surface Exchange in a Fertilized Zea Mays Canopy
Recent incorporation of coupled soil biogeochemical and bi-directional NH3 air-surface exchange algorithms into regional air quality models holds promise for further reducing uncertainty in estimates of NH3 emissions from fertilized soils. While this advancement represents a sig...
A Multipopulation Coevolutionary Strategy for Multiobjective Immune Algorithm
Directory of Open Access Journals (Sweden)
Jiao Shi
2014-01-01
Full Text Available How to maintain the population diversity is an important issue in designing a multiobjective evolutionary algorithm. This paper presents an enhanced nondominated neighbor-based immune algorithm in which a multipopulation coevolutionary strategy is introduced for improving the population diversity. In the proposed algorithm, subpopulations evolve independently; thus the unique characteristics of each subpopulation can be effectively maintained, and the diversity of the entire population is effectively increased. Besides, the dynamic information of multiple subpopulations is obtained with the help of the designed cooperation operator which reflects a mutually beneficial relationship among subpopulations. Subpopulations gain the opportunity to exchange information, thereby expanding the search range of the entire population. Subpopulations make use of the reference experience from each other, thereby improving the efficiency of evolutionary search. Compared with several state-of-the-art multiobjective evolutionary algorithms on well-known and frequently used multiobjective and many-objective problems, the proposed algorithm achieves comparable results in terms of convergence, diversity metrics, and running time on most test problems.
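The information exchange among subpopulations described above can be sketched as a simple migration step. This is a hypothetical illustration of the general idea only: the paper's cooperation operator is more elaborate, and the ring topology, `k`, and fitness function here are assumptions made for the example.

```python
def migrate(subpops, k=1, fitness=sum):
    """Each subpopulation sends a copy of its k best individuals to the
    next subpopulation in a ring, replacing that neighbour's k worst.
    Subpopulations otherwise evolve independently."""
    # Snapshot the k best of every subpopulation before any replacement.
    bests = [sorted(sp, key=fitness, reverse=True)[:k] for sp in subpops]
    for i, sp in enumerate(subpops):
        incoming = bests[(i - 1) % len(subpops)]
        sp.sort(key=fitness)                      # worst individuals first
        sp[:k] = [list(ind) for ind in incoming]  # copy in the migrants
    return subpops
```

Because the migrants are snapshotted first, the exchange is simultaneous: each subpopulation receives its neighbour's pre-migration elite, which is how shared information can widen the search range without collapsing the subpopulations into one.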
Comparison of machine learning algorithms for detecting coral reef
Directory of Open Access Journals (Sweden)
Eduardo Tusa
2014-09-01
Full Text Available (Received: 2014/07/31 - Accepted: 2014/09/23) This work focuses on developing a fast coral reef detector for an autonomous underwater vehicle (AUV). Fast detection secures the AUV's stabilization with respect to an area of reef as quickly as possible and prevents devastating collisions. We use the algorithm of Purser et al. (2009) because of its precision. This detector has two parts: feature extraction using Gabor Wavelet filters, and feature classification using machine learning based on Neural Networks. Due to the extensive running time of the Neural Networks, we exchange them for a classification algorithm based on Decision Trees. We use a database of 621 images of coral reef in Belize (110 images for training and 511 images for testing). We implement the bank of Gabor Wavelet filters using C++ and the OpenCV library. We compare the accuracy and running time of 9 machine learning algorithms, which resulted in the selection of the Decision Trees algorithm. Our coral detector runs in 70 ms, compared to the 22 s of the algorithm of Purser et al. (2009).
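A comparison like the one above reduces to timing each classifier's predictions over a shared test set and recording accuracy alongside running time. The harness below is a generic sketch of that methodology (the classifier names, feature format, and interface are assumptions, not the authors' code).

```python
import time

def benchmark(classifiers, features, labels):
    """Time each classifier's predictions over the test set and report
    (accuracy, elapsed seconds) per classifier.

    classifiers: dict mapping name -> predict(feature_vector) callable.
    """
    results = {}
    for name, predict in classifiers.items():
        t0 = time.perf_counter()
        preds = [predict(f) for f in features]
        elapsed = time.perf_counter() - t0
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        results[name] = (acc, elapsed)
    return results
```

Ranking the resulting (accuracy, elapsed) pairs is what justifies trading a slower, slightly more accurate model for a much faster one, as the authors did when swapping Neural Networks for Decision Trees.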
Flow vibrations and dynamic instability of heat exchanger tube bundles
International Nuclear Information System (INIS)
Granger, S.; Langre, E. de
1995-01-01
This paper presents a review of external-flow-induced vibration of heat exchanger tube bundles. Attention is focused on a dynamic instability, known as "fluidelastic instability", which can develop when flow is transverse to the tube axis. The main physical models proposed in the literature are successively reviewed in a critical way. As a consequence, some concepts are clarified, some a priori plausible misinterpretations are rejected and finally, certain basic mechanisms, induced by the flow-structure interaction and responsible for the ultimate onset of fluidelastic instability, are elucidated. Design tools and methods for predictive analysis of industrial cases are then presented. The usual design tool is the "stability map", i.e. an empirical correlation which must be interpreted in a conservative way. Of course, when using this approach, the designer must also consider reasonable safety margins. In the area of predictive analysis, the "unsteady semi-analytical models" seem to be a promising and efficient methodology. A modern implementation of these ideas mixes an original experimental approach for taking fluid dynamic forces into account together with non-classical numerical methods of mechanical vibration. (authors). 20 refs., 9 figs
Air Circulation and Heat Exchange under Reduced Pressures
Rygalov, Vadim; Wheeler, Raymond; Dixon, Mike; Hillhouse, Len; Fowler, Philip
Low pressure atmospheres were suggested for Space Greenhouse (SG) designs to minimize system construction and re-supply materials, as well as system manufacturing and deployment costs. But rarefied atmospheres modify heat exchange mechanisms, which ultimately leads to alterations in thermal control for low pressure closed environments. Under low atmospheric pressures (e.g., below 25 kPa, compared with 101.3 kPa for the normal Earth atmosphere), convection is progressively replaced by diffusion and the rate of heat exchange drops significantly. During the period from 2001 to 2009, a series of hypobaric experiments were conducted at the Space Life Sciences Lab (SLSLab) at NASA's Kennedy Space Center and the Department of Space Studies, University of North Dakota. Findings from these experiments showed: -air circulation rate decreases non-linearly with lowering of total atmospheric pressure; -heat exchange slows down with pressure decrease, creating a risk of thermal stress (elevated leaf temperatures) for plants in closed environments; -low pressure-induced thermal stress could be reduced by either lowering the system temperature set point or increasing forced convection rates (circulation fan power) within certain limits. Air circulation is an important constituent of controlled environments and plays a crucial role in material and heat exchange. Theoretical schematics and mathematical models are developed from a series of observations. These models can be used to establish optimal control algorithms for low pressure environments, such as a space greenhouse, as well as assist in fundamental design concept developments for these or similar habitable structures.
On one pion exchange potential with quark exchange in the resonating group method
International Nuclear Information System (INIS)
Braeuer, K.; Faessler, A.; Fernandez, F.; Shimizu, K.
1985-01-01
The effect of quark exchange between different nucleons on the one pion exchange potential is studied in the framework of the resonating group method. The calculated phase shifts including the one pion exchange potential with quark exchange in addition to the one gluon plus sigma meson exchange are shown to be consistent with experiments. Especially the p-wave phase shifts are improved by taking into account the quark exchange on the one pion exchange potential. (orig.)
A continuous exchange factor method for radiative exchange in enclosures with participating media
International Nuclear Information System (INIS)
Naraghi, M.H.N.; Chung, B.T.F.; Litkouhi, B.
1987-01-01
A continuous exchange factor method for the analysis of radiative exchange in enclosures is developed. In this method two types of exchange functions are defined: the direct exchange function and the total exchange function. Certain integral equations relating total exchange functions to direct exchange functions are developed. These integral equations are solved using the Gaussian quadrature integration method. The results obtained with the present approach are found to be more accurate than those of the zonal method
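Solving an integral equation by Gaussian quadrature, as the abstract describes, is commonly done with the Nyström method: replace the integral by a quadrature sum and solve the resulting linear system at the quadrature nodes. The sketch below uses a generic Fredholm equation of the second kind with an illustrative kernel, not the radiative exchange functions of the paper.

```python
import numpy as np

def solve_fredholm(kernel, f, a, b, n=8, lam=1.0):
    """Nystrom method for  phi(x) = f(x) + lam * int_a^b K(x,t) phi(t) dt:
    approximate the integral with n-point Gauss-Legendre quadrature and
    solve the linear system for phi at the quadrature nodes."""
    t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (b + a)       # map nodes to [a, b]
    w = 0.5 * (b - a) * w                       # rescale weights
    # (I - lam * K(x_i, x_j) * w_j) phi = f(x)
    A = np.eye(n) - lam * kernel(x[:, None], x[None, :]) * w[None, :]
    phi = np.linalg.solve(A, f(x))
    return x, phi

# Example: phi(x) = x + int_0^1 (x t) phi(t) dt has exact solution 1.5 x
x, phi = solve_fredholm(lambda x, t: x * t, lambda x: x, 0.0, 1.0)
```

For this polynomial kernel the quadrature is exact, so the Nyström solution matches the analytic solution at the nodes to machine precision.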
Experimental test of exchange degeneracy in hypercharge exchange reactions
International Nuclear Information System (INIS)
Moffeit, K.C.
1978-10-01
Two pairs of line-reversed reactions, π⁺p → K⁺Σ⁺, K⁻p → π⁻Σ⁺ and π⁺p → K⁺Y*⁺(1385), K⁻p → π⁻Y*⁺(1385), provide an experimental test of exchange degeneracy in hypercharge exchange reactions. From their study it is concluded that, in contrast to the lower energy data, the 11.5 results for the two pairs of reactions are consistent with exchange degeneracy predictions for both helicity-flip and nonflip amplitudes. The Y*(1385) decay angular distributions indicate that the quark model and Stodolsky-Sakurai predictions are in agreement with the main features of the data. However, small violations are observed at small momentum transfer. While the Y*(1385) vertex is helicity-flip dominated, the nonvanishing of T_{3/2,-1/2} and T_{-3/2,1/2} suggests some finite helicity nonflip contribution in the forward direction. 23 references