What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.
Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A
The K-means algorithm is one of the most popular clustering algorithms in current use, as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm, which we call MAP-DP (maximum a posteriori Dirichlet process mixtures), is statistically rigorous, as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means, with MAP-DP convergence typically achieved on the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross-validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
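As a minimal sketch of the fixed-K assumption this abstract criticizes, the following one-dimensional Lloyd's algorithm (the standard K-means iteration) illustrates that the number of centers must be supplied up front; the function name and toy data are illustrative, not from the paper:

```python
def kmeans_1d(points, centers, iters=50):
    # Lloyd's algorithm in one dimension. Note that K (the number of
    # centers) is fixed up front -- the assumption MAP-DP removes.
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            groups[i].append(p)
        # Update step: each center moves to the mean of its group.
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

data = [0.9, 1.0, 1.1, 9.9, 10.0, 10.1]
centers = kmeans_1d(data, centers=[0.0, 5.0])  # converges near [1.0, 10.0]
```

Choosing the wrong K here (say, 3) would still return 3 centers; nothing in the iteration can merge or remove clusters, which is exactly the restriction MAP-DP lifts.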
Algorithmic Principles of Mathematical Programming
Faigle, Ulrich; Kern, Walter; Still, Georg
2002-01-01
Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear
Graphics and visualization principles & algorithms
Theoharis, T; Platis, Nikolaos; Patrikalakis, Nicholas M
2008-01-01
Computer and engineering collections strong in applied graphics and analysis of visual data via computer will find Graphics & Visualization: Principles and Algorithms makes an excellent classroom text as well as supplemental reading. It integrates coverage of computer graphics and other visualization topics, from shadow generation and particle tracing to spatial subdivision and vector data visualization, and it provides a thorough review of literature from multiple experts, making for a comprehensive review essential to any advanced computer study.-California Bookw
Simple sorting algorithm test based on CUDA
Meng, Hongyu; Guo, Fangjin
2015-01-01
With the development of computing technology, CUDA has become a very important tool. In computer programming, sorting algorithms are widely used. There are many simple sorting algorithms, such as enumeration sort, bubble sort and merge sort. In this paper, we test some simple sorting algorithms based on CUDA and draw some useful conclusions.
Simple Activity Demonstrates Wind Energy Principles
Roman, Harry T.
2012-01-01
Wind energy is an exciting and clean energy option often described as the fastest-growing energy system on the planet. With some simple materials, teachers can easily demonstrate its key principles in their classroom. (Contains 1 figure and 2 tables.)
Simple Obstacle Avoidance Algorithm for Rehabilitation Robots
Stuyt, Floran H.A.; Römer, GertWillem R.B.E.; Stuyt, Harry J.A.
2007-01-01
The efficiency of a rehabilitation robot is improved by offering record-and-replay to operate the robot. While automatically moving to a stored target (replay), collisions of the robot with obstacles in its work space must be avoided. A simple, though effective, generic and deterministic algorithm
Training nuclei detection algorithms with simple annotations
Directory of Open Access Journals (Sweden)
Henning Kost
2017-01-01
Background: Generating good training datasets is essential for machine learning-based nuclei detection methods. However, creating exhaustive nuclei contour annotations, to derive optimal training data from, is often infeasible. Methods: We compared different approaches for training nuclei detection methods solely based on nucleus center markers. Such markers contain less accurate information, especially with regard to nuclear boundaries, but can be produced much more easily and in greater quantities. The approaches use different automated sample extraction methods to derive image positions and class labels from nucleus center markers. In addition, the approaches use different automated sample selection methods to improve the detection quality of the classification algorithm and reduce the run time of the training process. We evaluated the approaches based on a previously published generic nuclei detection algorithm and a set of Ki-67-stained breast cancer images. Results: A Voronoi tessellation-based sample extraction method produced the best performing training sets. However, subsampling of the extracted training samples was crucial. Even simple class balancing improved the detection quality considerably. The incorporation of active learning led to a further increase in detection quality. Conclusions: With appropriate sample extraction and selection methods, nuclei detection algorithms trained on the basis of simple center marker annotations can produce comparable quality to algorithms trained on conventionally created training sets.
A new simple iterative reconstruction algorithm for SPECT transmission measurement
International Nuclear Information System (INIS)
Hwang, D.S.; Zeng, G.L.
2005-01-01
This paper proposes a new iterative reconstruction algorithm for transmission tomography and compares this algorithm with several other methods. The new algorithm is simple and resembles the emission ML-EM algorithm in form. Due to its simplicity, it is easy to implement and fast to compute a new update at each iteration. The algorithm also always guarantees non-negative solutions. Evaluations are performed using simulation studies and real phantom data. Comparisons with other algorithms such as convex, gradient, and logMLEM show that the proposed algorithm is as good as the others and performs better in some cases.
A simple algorithm for computing the smallest enclosing circle
DEFF Research Database (Denmark)
Skyum, Sven
1991-01-01
Presented is a simple O(n log n) algorithm for computing the smallest enclosing circle of a convex polygon. It can be easily extended to algorithms that compute the farthest- and the closest-point Voronoi diagram of a convex polygon within the same time bound.
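For orientation, the following is a naive brute-force baseline for the same problem, not Skyum's O(n log n) convex-polygon algorithm: it relies on the classical fact that the smallest enclosing circle is determined by two or three of the input points, so checking all pairs and triples suffices for small inputs (all names are illustrative):

```python
import itertools
import math

def circle_two(a, b):
    # Circle with segment ab as diameter.
    cx, cy = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    return cx, cy, math.dist(a, b) / 2

def circle_three(a, b, c):
    # Circumcircle of three non-collinear points (perpendicular-bisector formula).
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    if abs(d) < 1e-12:
        return None  # collinear points have no circumcircle
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) +
          (b[0]**2 + b[1]**2) * (c[1] - a[1]) +
          (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) +
          (b[0]**2 + b[1]**2) * (a[0] - c[0]) +
          (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    return ux, uy, math.dist((ux, uy), a)

def covers(circle, pts, eps=1e-9):
    cx, cy, r = circle
    return all(math.dist((cx, cy), p) <= r + eps for p in pts)

def smallest_enclosing_circle(pts):
    # Enumerate every pair and triple, keep the smallest covering circle.
    cands = [circle_two(a, b) for a, b in itertools.combinations(pts, 2)]
    cands += [c for t in itertools.combinations(pts, 3)
              if (c := circle_three(*t)) is not None]
    return min((c for c in cands if covers(c, pts)), key=lambda c: c[2])
```

This runs in roughly O(n⁴) time, which is exactly the kind of cost the paper's dedicated algorithm avoids by exploiting convex-polygon structure.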
A Simple Two Aircraft Conflict Resolution Algorithm
Chatterji, Gano B.
2006-01-01
Conflict detection and resolution methods are crucial for distributed air-ground traffic management, in which the crew in the cockpit, dispatchers in operations control centers, and traffic controllers in the ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection method and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm, which is often used for missile guidance during the terminal phase. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range-rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and conflict resolution methods.
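The constant-velocity closest-point-of-approach prediction described in the abstract can be sketched in two dimensions as follows (the function name and planar formulation are illustrative assumptions, not the paper's implementation):

```python
import math

def closest_point_of_approach(p1, v1, p2, v2):
    # Relative geometry: position r and velocity w of aircraft 2 seen from 1.
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    wx, wy = v2[0] - v1[0], v2[1] - v1[1]
    w2 = wx * wx + wy * wy
    # Time-to-go that minimises |r + t*w|, clamped so it is never in the past.
    t = 0.0 if w2 == 0 else max(0.0, -(rx * wx + ry * wy) / w2)
    # Minimum separation is the relative distance at that time.
    return t, math.hypot(rx + t * wx, ry + t * wy)

# Head-on encounter with a 1-unit lateral offset: CPA in 5 time units.
t, sep = closest_point_of_approach((0, 0), (1, 0), (10, 1), (-1, 0))
```

A conflict is then declared when the predicted minimum separation falls below the required threshold within the look-ahead horizon.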
Connectivity algorithm with depth first search (DFS) on simple graphs
Riansanti, O.; Ihsan, M.; Suhaimi, D.
2018-01-01
This paper discusses an algorithm to detect connectivity of a simple graph using Depth First Search (DFS). The DFS implementation in this paper differs from other research in that it counts the number of visited vertices. The algorithm obtains s from the number of vertices, then visits the source vertex, followed by its adjacent vertices, until the last vertex adjacent to the previous source vertex. Any simple graph is connected if s equals 0 and disconnected if s is greater than 0. The complexity of the algorithm is O(n²).
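The connectivity criterion in the abstract can be sketched as follows, with s computed as the number of vertices a DFS from one source never reaches (an iterative DFS is used here for simplicity; names are illustrative):

```python
def unreached_count(n, edges):
    # Build an adjacency list for an undirected simple graph on vertices 0..n-1.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Iterative DFS from vertex 0, recording every vertex it reaches.
    seen, stack = set(), [0]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u])
    # s = number of vertices never visited; the graph is connected iff s == 0.
    return n - len(seen)
```

A path on four vertices gives s = 0 (connected), while two disjoint edges give s = 2 (disconnected).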
Automatic modulation classification principles, algorithms and applications
Zhu, Zhechen
2014-01-01
Automatic Modulation Classification (AMC) has been a key technology in many military, security, and civilian telecommunication applications for decades. In military and security applications, modulation often serves as another level of encryption; in modern civilian applications, multiple modulation types can be employed by a signal transmitter to control the data rate and link reliability. This book offers comprehensive documentation of AMC models, algorithms and implementations for successful modulation recognition. It provides an invaluable theoretical and numerical comparison of AMC algo
Inverse synthetic aperture radar imaging principles, algorithms and applications
Chen, Victor C.
2014-01-01
Inverse Synthetic Aperture Radar Imaging: Principles, Algorithms and Applications is based on the latest research on ISAR imaging of moving targets and non-cooperative target recognition (NCTR). With a focus on the advances and applications, this book will provide readers with a working knowledge on various algorithms of ISAR imaging of targets and implementation with MATLAB. These MATLAB algorithms will prove useful in order to visualize and manipulate some simulated ISAR images.
Architectures of soft robotic locomotion enabled by simple mechanical principles.
Zhu, Liangliang; Cao, Yunteng; Liu, Yilun; Yang, Zhe; Chen, Xi
2017-06-28
In nature, a variety of limbless locomotion patterns flourish, from the small or basic life forms (Escherichia coli, amoebae, etc.) to the large or intelligent creatures (e.g., slugs, starfishes, earthworms, octopuses, jellyfishes, and snakes). Many bioinspired soft robots based on locomotion have been developed in the past few decades. In this work, based on the kinematics and dynamics of two representative locomotion modes (i.e., worm-like crawling and snake-like slithering), we propose a broad set of innovative designs for soft mobile robots through simple mechanical principles. Inspired by and going beyond the existing biological systems, these designs include 1-D (dimensional), 2-D, and 3-D robotic locomotion patterns enabled by the simple actuation of continuous beams. We report herein over 20 locomotion modes achieving various locomotion functions, including crawling, rising, running, creeping, squirming, slithering, swimming, jumping, turning, turning over, helix rolling, wheeling, etc. Some are able to reach high speed, high efficiency, and overcome obstacles. All these locomotion strategies and functions can be integrated into a simple beam model. The proposed simple and robust models are adaptive for severe and complex environments. These elegant designs for diverse robotic locomotion patterns are expected to underpin future deployments of soft robots and to inspire a series of advanced designs.
Genetic Algorithms Principles Towards Hidden Markov Model
Directory of Open Access Journals (Sweden)
Nabil M. Hewahi
2011-10-01
In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMMs). The problem arises because, when experts assign probability values for an HMM, they use only some limited inputs; the assigned probability values might not be accurate for other cases in the same domain. We introduce an approach based on GAs to find suitable probability values for the HMM so that it is correct in more cases than those used to assign the probability values.
Linear Programming, the Simplex Algorithm and Simple Polytopes
Directory of Open Access Journals (Sweden)
Das Bhusan
2010-09-01
In the first part of the paper we survey some far-reaching applications of the basic facts of linear programming to the combinatorial theory of simple polytopes. In the second part we discuss some recent developments concerning the simplex algorithm. We describe sub-exponential randomized pivot rules and upper bounds on the diameter of graphs of polytopes.
Nodal algorithm derived from a new variational principle
International Nuclear Information System (INIS)
Watson, Fernando V.
1995-01-01
As a by-product of the research being carried out by the author on methods of recovering the pin power distribution of PWR cores, a nodal algorithm based on a modified variational principle for the two-group diffusion equations has been obtained. The main feature of the new algorithm is the low dimensionality achieved by the reduction of the original diffusion equations to a system of algebraic eigenvalue equations involving the average sources only, instead of the sources and interface group currents used in conventional nodal methods. The advantages of this procedure are discussed, and results generated by the new algorithm and by a finite difference code are compared. (author). 2 refs, 7 tabs
Time-advance algorithms based on Hamilton's principle
International Nuclear Information System (INIS)
Lewis, H.R.; Kostelec, P.J.
1993-01-01
Time-advance algorithms based on Hamilton's variational principle are being developed for application to problems in plasma physics and other areas. Hamilton's principle was applied previously to derive a system of ordinary differential equations in time whose solution provides an approximation to the evolution of a plasma described by the Vlasov-Maxwell equations. However, the variational principle was not used to obtain an algorithm for solving the ordinary differential equations numerically. The present research addresses the numerical solution of systems of ordinary differential equations via Hamilton's principle. The basic idea is first to choose a class of functions for approximating the solution of the ordinary differential equations over a specific time interval. Then the parameters in the approximating function are determined by applying Hamilton's principle exactly within the class of approximating functions. For example, if an approximate solution is desired between time t and time t + Δt, the class of approximating functions could be polynomials in time up to some degree. The issue of how to choose time-advance algorithms is very important for achieving efficient, physically meaningful computer simulations. The objective is to reliably simulate those characteristics of an evolving system that are scientifically most relevant. Preliminary numerical results are presented, including comparisons with other computational methods.
A simple algorithm for the identification of clinical COPD phenotypes
DEFF Research Database (Denmark)
Burgel, Pierre-Régis; Paillasseur, Jean-Louis; Janssens, Wim
2017-01-01
This study aimed to identify simple rules for allocating chronic obstructive pulmonary disease (COPD) patients to clinical phenotypes identified by cluster analyses. Data from 2409 COPD patients of French/Belgian COPD cohorts were analysed using cluster analysis, resulting in the identification of subgroups, for which clinical relevance was determined by comparing 3-year all-cause mortality. Classification and regression trees (CARTs) were used to develop an algorithm for allocating patients to these subgroups. This algorithm was tested in 3651 patients from the COPD Cohorts Collaborative International Assessment (3CIA) initiative. Cluster analysis identified five subgroups of COPD patients with different clinical characteristics (especially regarding severity of respiratory disease and the presence of cardiovascular comorbidities and diabetes). The CART-based algorithm indicated...
The algorithms and principles of non-photorealistic graphics
Geng, Weidong
2011-01-01
"The Algorithms and Principles of Non-photorealistic Graphics: Artistic Rendering and Cartoon Animation" provides a conceptual framework for, and comprehensive and up-to-date coverage of, research on non-photorealistic computer graphics, including methodologies, algorithms and software tools dedicated to generating artistic and meaningful images and animations. This book mainly discusses how to create art from a blank canvas, how to convert the source images into pictures with the desired visual effects, how to generate artistic renditions from 3D models, how to synthesize expressive pictures f
Statistical behaviour of adaptive multilevel splitting algorithms in simple models
International Nuclear Information System (INIS)
Rolland, Joran; Simonnet, Eric
2015-01-01
Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories, for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
A simple greedy algorithm for dynamic graph orientation
DEFF Research Database (Denmark)
Berglin, Edvin; Brodal, Gerth Stølting
2017-01-01
Graph orientations with low out-degree are one of several ways to efficiently store sparse graphs. If the graphs allow for insertion and deletion of edges, one may have to flip the orientation of some edges to prevent blowing up the maximum out-degree. We use arboricity as our sparsity measure. With an immensely simple greedy algorithm, we get parametrized trade-off bounds between out-degree and worst-case number of flips, which previously only existed for amortized number of flips. We match the previous best worst-case algorithm (in O(log n) flips) for general arboricity and beat it for either constant or super-logarithmic arboricity. We also match a previous best amortized result for at least logarithmic arboricity, and give the first results with worst-case O(1) and O(sqrt(log n)) flips nearly matching degree bounds to their respective amortized solutions.
Simple algorithm for improved security in the FDDI protocol
Lundy, G. M.; Jones, Benjamin
1993-02-01
We propose a modification to the Fiber Distributed Data Interface (FDDI) protocol based on a simple algorithm which will improve confidential communication capability. This proposed modification provides a simple and reliable system which exploits some of the inherent security properties of a fiber optic ring network. This method differs from conventional methods in that end-to-end encryption can be facilitated at the media access control sublayer of the data link layer in the OSI network model. Our method is based on a variation of the bit stream cipher method. The transmitting station takes the intended confidential message and uses a simple modulo-two addition operation against an initialization vector. The encrypted message is virtually unbreakable without the initialization vector. None of the stations on the ring will have access to both the encrypted message and the initialization vector except the transmitting and receiving stations. The generation of the initialization vector is unique for each confidential transmission and thus provides a unique approach to the key distribution problem. The FDDI protocol is of particular interest to the military in terms of LAN/MAN implementations. Both the Army and the Navy are considering the standard as the basis for future network systems. A simple and reliable security mechanism with the potential to support real-time communications is a necessary consideration in the implementation of these systems. The proposed method offers several advantages over traditional methods in terms of speed, reliability, and standardization.
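The modulo-two addition at the heart of this scheme is plain XOR, which is its own inverse; the sketch below shows only that principle, not the paper's FDDI framing or initialization-vector distribution (names are illustrative, and a keystream must never be reused across messages):

```python
import secrets

def xor_with_keystream(message: bytes, keystream: bytes) -> bytes:
    # Modulo-two addition (XOR) of keystream bytes against the message.
    # XOR is its own inverse, so applying the same keystream again decrypts.
    if len(keystream) < len(message):
        raise ValueError("keystream must cover the whole message")
    return bytes(m ^ k for m, k in zip(message, keystream))

iv = secrets.token_bytes(16)                # fresh vector per transmission
ciphertext = xor_with_keystream(b"confidential", iv)
assert xor_with_keystream(ciphertext, iv) == b"confidential"
```

Security rests entirely on the receiver obtaining the vector while eavesdroppers see only the ciphertext, which is why the paper emphasizes that no other ring station holds both.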
A Simple Model of Entrepreneurship for Principles of Economics Courses
Gunter, Frank R.
2012-01-01
The critical roles of entrepreneurs in creating, operating, and destroying markets, as well as their importance in driving long-term economic growth are still generally either absent from principles of economics texts or relegated to later chapters. The primary difficulties in explaining entrepreneurship at the principles level are the lack of a…
Insights: Simple Models for Teaching Equilibrium and Le Chatelier's Principle.
Russell, Joan M.
1988-01-01
Presents three models that have been effective for teaching chemical equilibrium and Le Chatelier's principle: (1) the liquid transfer model, (2) the fish model, and (3) the teeter-totter model. Explains each model and its relation to Le Chatelier's principle. (MVL)
Design principles and algorithms for automated air traffic management
Erzberger, Heinz
1995-01-01
This paper presents design principles and algorithms for building a real-time scheduler. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time.
Energy Aware Simple Ant Routing Algorithm for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Sohail Jabbar
2015-01-01
Network lifetime is one of the most prominent barriers in deploying wireless sensor networks for large-scale applications, because these networks employ sensors with nonrenewable, scarce energy resources. Sensor nodes dissipate most of their energy in complex routing mechanisms. To cope with the limited energy problem, we present EASARA, an energy aware simple ant routing algorithm based on ant colony optimization. Unlike most algorithms, EASARA strives to avoid low-energy routes and optimizes the routing process through selection of the least-hop-count path with more energy. It consists of three phases: route discovery, forwarding node selection, and route selection. We have improved the route discovery procedure and mainly concentrate on energy-efficient forwarding node and route selection, so that the network lifetime can be prolonged. The four possible cases of forwarding node and route selection are presented. The performance of EASARA is validated through simulation. Simulation results demonstrate the performance supremacy of EASARA over contemporary schemes in terms of various metrics.
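The route-selection criterion stated in the abstract, least hop count with ties broken in favor of more energy, can be sketched as a single comparison key (the route dictionary layout and the use of minimum residual node energy as the "more energy" measure are assumptions for illustration):

```python
def select_route(routes):
    # EASARA-style criterion from the abstract: prefer the fewest hops,
    # and among equal-length paths prefer the one with more energy
    # (here: the larger minimum residual node energy along the path).
    return min(routes, key=lambda r: (len(r["hops"]), -min(r["energy"])))

candidates = [
    {"hops": ["a", "b", "c"], "energy": [5, 2, 4]},
    {"hops": ["a", "d"], "energy": [3, 3]},
    {"hops": ["e", "f"], "energy": [9, 1]},
]
chosen = select_route(candidates)  # two-hop route with the stronger bottleneck
```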
Design Principles and Algorithms for Air Traffic Arrival Scheduling
Erzberger, Heinz; Itoh, Eri
2014-01-01
This report presents design principles and algorithms for building a real-time scheduler of arrival aircraft based on a first-come-first-served (FCFS) scheduling protocol. The algorithms provide the conceptual and computational foundation for the Traffic Management Advisor (TMA) of the Center/terminal radar approach control facilities (TRACON) automation system, which comprises a set of decision support tools for managing arrival traffic at major airports in the United States. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far away from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time. This report is a revision of an earlier paper first presented as part of an Advisory Group for Aerospace Research and Development (AGARD) lecture series in September 1995. The authors, during vigorous discussions over the details of this paper, felt it was important to the air-traffic-management (ATM) community to revise and extend the original 1995 paper, providing more detail and clarity and thereby allowing future researchers to understand this foundational work as the basis for the TMA's scheduling algorithms.
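The FCFS core of such a scheduler can be sketched in a few lines: aircraft are processed in order of estimated time of arrival, and each is delayed just enough to keep the required separation behind the previously scheduled landing. This is only the basic protocol, not the TMA's runway-allocation or delay-allocation logic:

```python
def fcfs_landing_times(etas, separation):
    # First-come-first-served: process aircraft in ETA order; each lands at
    # its ETA unless that violates the required separation behind the
    # previously scheduled aircraft, in which case it absorbs a delay.
    times, last = [], None
    for eta in sorted(etas):
        t = eta if last is None else max(eta, last + separation)
        times.append(t)
        last = t
    return times

# Three arrivals, 5-unit separation: the second aircraft is delayed 4 units.
print(fcfs_landing_times([100, 101, 110], separation=5))  # [100, 105, 110]
```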
Parallelization of MCNP4 code by using simple FORTRAN algorithms
International Nuclear Information System (INIS)
Yazid, P.I.; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka.
1993-12-01
Simple FORTRAN algorithms that rely only on open, close, read and write statements, together with disk files and some UNIX commands, have been applied to the parallelization of MCNP4. The code, named MCNPNFS, maintains almost all capabilities of MCNP4 in solving shielding problems. It is able to perform parallel computing on a set of any UNIX workstations connected by a network, regardless of the heterogeneity of the hardware, provided that all processors produce binary files in the same format. Further, it is confirmed that MCNPNFS can also be executed on the Monte-4 vector-parallel computer. MCNPNFS has been tested intensively by executing 5 photon-neutron benchmark problems, a spent fuel cask problem and 17 sample problems included in the original code package of MCNP4. Three different workstations, connected by a network, have been used to execute MCNPNFS in parallel. By measuring CPU time, the parallel efficiency is determined to be 58% to 99%, and 86% on average. On Monte-4, MCNPNFS has been executed using 4 processors concurrently and has achieved a parallel efficiency of 79% on average. (author)
Gog, Simon; Bader, Martin
2008-10-01
The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial-time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial-time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e., given a sorting sequence for the simple permutation, transform it into a sorting sequence for the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear-time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.
A simple fall detection algorithm for Powered Two Wheelers
BOUBEZOUL, Abderrahmane; ESPIE, Stéphane; LARNAUDIE, Bruno; BOUAZIZ, Samir
2013-01-01
The aim of this study is to evaluate a low-complexity fall detection algorithm that uses both acceleration and angular velocity signals to trigger an alert system or to inflate an airbag jacket. The proposed fall detection algorithm is a threshold-based algorithm, using data from three accelerometer and three gyroscope sensors mounted on the motorcycle. During the first step, the most common fall accident configurations were selected and analyzed in order to identify the main causation factors. On the...
Simple and Effective Algorithms: Computer-Adaptive Testing.
Linacre, John Michael
Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…
A simple and efficient parallel FFT algorithm using the BSP model
Bisseling, R.H.; Inda, M.A.
2000-01-01
In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case
A Simple and Efficient Artificial Bee Colony Algorithm
Directory of Open Access Journals (Sweden)
Yunfeng Xu
2013-01-01
Artificial bee colony (ABC) is a new population-based stochastic algorithm which has shown good search abilities on many optimization problems. However, the original ABC shows slow convergence speed during the search process. In order to enhance the performance of ABC, this paper proposes a new artificial bee colony (NABC) algorithm, which modifies the search pattern of both employed and onlooker bees. A solution pool is constructed by storing some of the best solutions of the current swarm. New candidate solutions are generated by searching the neighborhood of solutions randomly chosen from the solution pool. Experiments are conducted on a set of twelve benchmark functions. Simulation results show that our approach is significantly better than or at least comparable to the original ABC and seven other stochastic algorithms.
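A toy sketch of the pool-guided search move described in the abstract follows: candidates are generated in the neighborhood of solutions drawn from a pool of the best individuals and accepted greedily. This is an illustration of the idea only, not the authors' exact NABC update rule or the full employed/onlooker/scout structure of ABC:

```python
import random

def nabc_style_search(f, dim, iters=200, swarm=10, pool_size=3, seed=1):
    # Pool-guided neighbourhood search: perturb a solution taken from the
    # pool of best individuals, accept the candidate only if it improves.
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    for _ in range(iters):
        pool = sorted(xs, key=f)[:pool_size]      # best solutions so far
        for i, x in enumerate(xs):
            base = rng.choice(pool)               # pool-guided base point
            peer = rng.choice(xs)
            cand = [b + rng.uniform(-1, 1) * (b - p)
                    for b, p in zip(base, peer)]  # neighbourhood move
            if f(cand) < f(x):                    # greedy selection
                xs[i] = cand
    return min(xs, key=f)

sphere = lambda x: sum(v * v for v in x)
best = nabc_style_search(sphere, dim=3)  # approaches the minimum at the origin
```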
A simple algorithm for the identification of clinical COPD phenotypes
Burgel, Pierre-Régis; Paillasseur, Jean-Louis; Janssens, Wim; Piquet, Jacques; ter Riet, Gerben; Garcia-Aymerich, Judith; Cosio, Borja; Bakke, Per; Puhan, Milo A.; Langhammer, Arnulf; Alfageme, Inmaculada; Almagro, Pere; Ancochea, Julio; Celli, Bartolome R.; Casanova, Ciro; de-Torres, Juan P.; Decramer, Marc; Echazarreta, Andrés; Esteban, Cristobal; Gomez Punter, Rosa Mar; Han, MeiLan K.; Johannessen, Ane; Kaiser, Bernhard; Lamprecht, Bernd; Lange, Peter; Leivseth, Linda; Marin, Jose M.; Martin, Francis; Martinez-Camblor, Pablo; Miravitlles, Marc; Oga, Toru; Sofia Ramírez, Ana; Sin, Don D.; Sobradillo, Patricia; Soler-Cataluña, Juan J.; Turner, Alice M.; Verdu Rivera, Francisco Javier; Soriano, Joan B.; Roche, Nicolas
2017-01-01
This study aimed to identify simple rules for allocating chronic obstructive pulmonary disease (COPD) patients to clinical phenotypes identified by cluster analyses. Data from 2409 COPD patients of French/Belgian COPD cohorts were analysed using cluster analysis resulting in the identification of
Branch and peg algorithms for the simple plant location problem
Goldengorin, Boris; Ghosh, Diptesh; Sierksma, Gerard
2001-01-01
The simple plant location problem is a well-studied problem in combinatorial optimization. It is one of deciding where to locate a set of plants so that a set of clients can be supplied by them at the minimum cost. This problem often appears as a subproblem in other combinatorial problems. Several
Al-Jabr, Ahmad Ali; Alsunaidi, Mohammad A.; Ng, Tien Khee; Ooi, Boon S.
2013-01-01
In this paper, a finite-difference time-domain (FDTD) algorithm for simulating propagation of EM waves in anisotropic material is presented. The algorithm is based on the auxiliary differential equation and the general polarization formulation. In anisotropic materials, electric fields are coupled and elements in the permittivity tensor are, in general, multiterm dispersive. The presented algorithm resolves the field coupling using a formulation based on electric polarizations. It also offers a simple procedure for the treatment of multiterm dispersion in the FDTD scheme. The algorithm is tested by simulating wave propagation in 1-D magnetized plasma showing excellent agreement with analytical solutions. Extension of the algorithm to multidimensional structures is straightforward. The presented algorithm is efficient and simple compared to other algorithms found in the literature. © 2012 IEEE.
SIMPLE HEURISTIC ALGORITHM FOR DYNAMIC VM REALLOCATION IN IAAS CLOUDS
Directory of Open Access Journals (Sweden)
Nikita A. Balashov
2018-03-01
The rapid development of cloud technologies and their high prevalence in both commercial and academic areas have stimulated active research in the domain of optimal cloud resource management. One of the most active research directions is dynamic virtual machine (VM) placement optimization in clouds built on the Infrastructure-as-a-Service model. This kind of research may pursue different goals, with energy-aware optimization being the most common, as it aims at an urgent problem of green cloud computing: reducing energy consumption by data centers. In this paper we present a new heuristic algorithm for dynamic reallocation of VMs based on an approach presented in one of our previous works. In the algorithm we apply a 2-rank strategy to classify VMs and servers as highly or lowly active and solve four tasks: VM classification, host classification, forming a VM migration map, and VM migration. By dividing all of the VMs and servers into two classes, we attempt to reduce risk in case of hardware overloads under overcommitment conditions and to reduce the influence of the occurring overloads on the performance of the cloud VMs. The presented algorithm was developed based on the workload profile of the JINR cloud (a scientific private cloud) with the goal of maximizing its usage, but it can also be applied in both public and private commercial clouds to organize the simultaneous use of different SLA and QoS levels in the same cloud environment by giving each VM rank its own level of overcommitment.
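A minimal sketch of the 2-rank classification and migration-map steps; the thresholds, the dictionary structures and the one-to-one pairing rule are illustrative assumptions, not the JINR implementation:

```python
def build_migration_map(vms, hosts, cpu_threshold=0.5):
    """Classify VMs as highly/lowly active by CPU usage, classify hosts
    as overloaded/underloaded, then pair lowly active VMs from overloaded
    hosts with underloaded target hosts to form a migration plan."""
    high = [v for v in vms if v["cpu"] >= cpu_threshold]   # highly active VMs
    low = [v for v in vms if v["cpu"] < cpu_threshold]     # lowly active VMs
    overloaded = [h for h in hosts if h["load"] > h["capacity"]]
    underloaded = [h for h in hosts if h["load"] <= 0.5 * h["capacity"]]
    plan = []
    for host in overloaded:
        movable = [v for v in low if v["host"] == host["name"]]
        # naive pairing: one movable VM per underloaded target host
        for vm, target in zip(movable, underloaded):
            plan.append((vm["name"], host["name"], target["name"]))
    return high, low, plan
```

A production version would also check the target's remaining capacity before each pairing.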
A Simple Encryption Algorithm for Quantum Color Image
Li, Panchi; Zhao, Ya
2017-06-01
In this paper, a simple encryption scheme for quantum color image is proposed. Firstly, a color image is transformed into a quantum superposition state by employing NEQR (novel enhanced quantum representation), where the R, G, B values of every pixel in a 24-bit RGB true color image are represented by 24 single-qubit basis states, and each value has 8 qubits. Then, these 24 qubits are respectively transformed from a basis state into a balanced superposition state by employing controlled rotation gates. At this time, the gray-scale values of R, G, B of every pixel are in a balanced superposition of 2^24 multi-qubit basis states. After measuring, the whole image is a uniform white noise, which does not provide any information. Decryption is the reverse process of encryption. The experimental results on the classical computer show that the proposed encryption scheme has better security.
Modified SIMPLE algorithm for the numerical analysis of incompressible flows with free surface
International Nuclear Information System (INIS)
Mok, Jin Ho; Hong, Chun Pyo; Lee, Jin Ho
2005-01-01
While the SIMPLE algorithm is widely used for simulations of flow phenomena in industrial equipment and manufacturing processes, it is less often adopted for simulations of free surface flow. Though the SIMPLE algorithm is free from the time-step limitation, the free surface behavior imposes a restriction on the time step. As a result, explicit schemes are faster than the implicit scheme in terms of computation time when the same time step is applied, since the implicit scheme must solve simultaneous equations in its procedure. If the computation time of the SIMPLE algorithm can be reduced for unsteady free surface flow problems, the calculation can be carried out in a more stable way and, in the design process, the process variables can be controlled based on a more accurate database. In this study, a modified SIMPLE algorithm is presented for free surface flow. The broken water column problem is adopted for validation of the modified algorithm (MoSIMPLE) and for comparison with the conventional SIMPLE algorithm.
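The guess-and-correct structure of a SIMPLE-type iteration can be sketched generically; the two solver callbacks, the update form and the relaxation factors below are placeholders, not the paper's MoSIMPLE scheme:

```python
import numpy as np

def simple_iteration(u, p, solve_momentum, solve_pressure_correction,
                     alpha_u=0.7, alpha_p=0.3, tol=1e-6, max_iter=200):
    """Structural sketch of a SIMPLE loop: (1) momentum solve with the
    guessed pressure, (2) pressure correction from the continuity
    residual, (3)-(4) under-relaxed updates, (5) stop when the pressure
    correction vanishes."""
    for it in range(max_iter):
        u_star = solve_momentum(u, p)                 # 1) provisional velocity
        p_prime = solve_pressure_correction(u_star)   # 2) pressure correction p'
        p = p + alpha_p * p_prime                     # 3) relaxed pressure update
        u = u + alpha_u * (u_star - u)                # 4) relaxed velocity update
        if np.linalg.norm(p_prime) < tol:             # 5) converged
            break
    return u, p, it
```

With linear toy callbacks (e.g. momentum pulled toward a target and p' proportional to the continuity defect) the loop contracts geometrically, which is the behavior the under-relaxation factors control.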
A Simple But Effective Canonical Dual Theory Unified Algorithm for Global Optimization
Zhang, Jiapu
2011-01-01
Numerical global optimization methods are often very time consuming and cannot be applied to high-dimensional nonconvex/nonsmooth optimization problems. Due to the nonconvexity/nonsmoothness, directly solving the primal problems is sometimes very difficult. This paper presents a very simple but very effective canonical duality theory (CDT) unified global optimization algorithm. The convergence of this algorithm is proved in this paper. More importantly, for this CDT-unified algorithm, numerous...
Pareto Principle in Datamining: an Above-Average Fencing Algorithm
Directory of Open Access Journals (Sweden)
K. Macek
2008-01-01
This paper formulates a new datamining problem: which subset of the input space has the relatively highest output, where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem, and compares it with clustering of above-average individuals.
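For contrast with the paper's fencing algorithm, here is the naive baseline for the stated problem when the subset need not form a coherent region of input space (the fencing algorithm's point is precisely to impose such structure; this greedy picker ignores it):

```python
def top_region_mean(xs, ys, m):
    """Among all subsets of exactly m points, the one maximizing the mean
    output is simply the m points with the largest y values."""
    ranked = sorted(zip(xs, ys), key=lambda t: t[1], reverse=True)
    chosen = ranked[:m]                       # m best outputs
    mean_y = sum(y for _, y in chosen) / m    # their average output
    return [x for x, _ in chosen], mean_y
```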
Flux-corrected transport principles, algorithms, and applications
Kuzmin, Dmitri; Turek, Stefan
2005-01-01
Addressing students and researchers as well as CFD practitioners, this book describes the state of the art in the development of high-resolution schemes based on the Flux-Corrected Transport (FCT) paradigm. Intended for readers who have a solid background in Computational Fluid Dynamics, the book begins with historical notes by J.P. Boris and D.L. Book. Review articles that follow describe recent advances in the design of FCT algorithms as well as various algorithmic aspects. The topics addressed in the book and its main highlights include: the derivation and analysis of classical FCT schemes with special emphasis on the underlying physical and mathematical constraints; flux limiting for hyperbolic systems; generalization of FCT to implicit time-stepping and finite element discretizations on unstructured meshes and its role as a subgrid scale model for Monotonically Integrated Large Eddy Simulation (MILES) of turbulent flows. The proposed enhancements of the FCT methodology also comprise the prelimiting and '...
Principles of a new treatment algorithm in multiple sclerosis
DEFF Research Database (Denmark)
Hartung, Hans-Peter; Montalban, Xavier; Sorensen, Per Soelberg
2011-01-01
We are entering a new era in the management of patients with multiple sclerosis (MS). The first oral treatment (fingolimod) has now gained US FDA approval, addressing an unmet need for patients with MS who wish to avoid parenteral administration. A second agent (cladribine) is currently being considered for approval. With the arrival of these oral agents, a key question is where they may fit into the existing MS treatment algorithm. This article aims to help answer this question by analyzing the trial data for the new oral therapies, as well as for existing MS treatments, by applying practical clinical experience, and through consideration of our increased understanding of how to define treatment success in MS. This article also provides a speculative look at what the treatment algorithm may look like in 5 years, with the availability of new data, greater experience and, potentially, other novel...
ANNIT - An Efficient Inversion Algorithm based on Prediction Principles
Růžek, B.; Kolář, P.
2009-04-01
Solution of inverse problems represents a meaningful task in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computer facilities are making great technical progress. Therefore, the development of new and efficient algorithms and computer codes for both forward and inverse modeling remains relevant. ANNIT contributes to this stream, since it is a tool for the efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p, the response of the system by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and generally it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, respectively, are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G does exist; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, (c) linear prediction (also known as "kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archive models are re-used in a suitable way and thus the number of forward evaluations is minimized. ANNIT is now implemented in both MATLAB and SCILAB. Numerical tests show good
A simple algorithm for measuring particle size distributions on an uneven background from TEM images
DEFF Research Database (Denmark)
Gontard, Lionel Cervera; Ozkaya, Dogan; Dunin-Borkowski, Rafal E.
2011-01-01
Nanoparticles have a wide range of applications in science and technology. Their sizes are often measured using transmission electron microscopy (TEM) or X-ray diffraction. Here, we describe a simple computer algorithm for measuring particle size distributions from TEM images in the presence of an uneven background. An application to images of heterogeneous catalysts is presented.
Flux-corrected transport principles, algorithms, and applications
Löhner, Rainald; Turek, Stefan
2012-01-01
Many modern high-resolution schemes for Computational Fluid Dynamics trace their origins to the Flux-Corrected Transport (FCT) paradigm. FCT maintains monotonicity using a nonoscillatory low-order scheme to determine the bounds for a constrained high-order approximation. This book begins with historical notes by J.P. Boris and D.L. Book who invented FCT in the early 1970s. The chapters that follow describe the design of fully multidimensional FCT algorithms for structured and unstructured grids, limiting for systems of conservation laws, and the use of FCT as an implicit subgrid scale model. The second edition presents 200 pages of additional material. The main highlights of the three new chapters include: FCT-constrained interpolation for Arbitrary Lagrangian-Eulerian methods, an optimization-based approach to flux correction, and FCT simulations of high-speed flows on overset grids. Addressing students and researchers, as well as CFD practitioners, the book is focused on computational aspects and contains m...
Principle and Reconstruction Algorithm for Atomic-Resolution Holography
Matsushita, Tomohiro; Muro, Takayuki; Matsui, Fumihiko; Happo, Naohisa; Hosokawa, Shinya; Ohoyama, Kenji; Sato-Tomita, Ayana; Sasaki, Yuji C.; Hayashi, Kouichi
2018-06-01
Atomic-resolution holography makes it possible to obtain the three-dimensional (3D) structure around a target atomic site. Translational symmetry of the atomic arrangement of the sample is not necessary, and the 3D atomic image can be measured when the local structure of the target atomic site is oriented. Therefore, 3D local atomic structures such as dopants and adsorbates are observable. Here, atomic-resolution holography comprising photoelectron holography, X-ray fluorescence holography, neutron holography, and their inverse modes is treated. Although the measurement methods are different, they can be handled with a unified theory. The algorithm for reconstructing 3D atomic images from holograms plays an important role. Although Fourier transform-based methods have been proposed, they require multiple-energy holograms. In addition, they cannot be directly applied to photoelectron holography because of the phase shift problem. We have developed fitting-based methods for reconstruction from single-energy and photoelectron holograms. The developed methods are applicable to all types of atomic-resolution holography.
A Simple Sizing Algorithm for Stand-Alone PV/Wind/Battery Hybrid Microgrids
Directory of Open Access Journals (Sweden)
Jing Li
2012-12-01
In this paper, we develop a simple algorithm to determine the required number of generating units of the wind-turbine generator and photovoltaic array, and the associated storage capacity, for a stand-alone hybrid microgrid. The algorithm is based on the observation that the state of charge of the battery should be periodically invariant. The optimal sizing of the hybrid microgrid is given in the sense that the life cycle cost of the system is minimized while the given load power demand can be satisfied without load rejection. We also report a case study to show the efficacy of the developed algorithm.
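The periodic-invariance criterion can be checked directly on one period of generation and load data; the charge/discharge efficiency model and the clamping below are simplifying assumptions, not the paper's battery model:

```python
def soc_is_periodic(p_wind, p_pv, p_load, capacity, soc0=0.5, eta=0.95):
    """Simulate the battery state of charge (SOC) over one period of
    hourly power data and test whether it returns to its initial value,
    the sizing criterion described above."""
    soc = soc0 * capacity
    for w, s, l in zip(p_wind, p_pv, p_load):
        net = w + s - l                      # surplus charges, deficit discharges
        soc += eta * net if net > 0 else net / eta
        soc = min(max(soc, 0.0), capacity)   # physical limits of the battery
    return abs(soc - soc0 * capacity) < 1e-9
```

A sizing loop would then grow the unit counts and capacity until this check passes at minimum life cycle cost.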
DEFF Research Database (Denmark)
Neumann, Frank; Witt, Carsten
2015-01-01
combinatorial optimization problem, namely makespan scheduling. We study the model of a strong adversary which is allowed to change one job at regular intervals. Furthermore, we investigate the setting of random changes. Our results show that randomized local search and a simple evolutionary algorithm are very...
A simple two stage optimization algorithm for constrained power economic dispatch
International Nuclear Information System (INIS)
Huang, G.; Song, K.
1994-01-01
A simple two stage optimization algorithm is proposed and investigated for fast computation of constrained power economic dispatch control problems. The method is a simple demonstration of the hierarchical aggregation-disaggregation (HAD) concept. The algorithm first solves an aggregated problem to obtain an initial solution. This aggregated problem turns out to be the classical economic dispatch formulation, and it can be solved in 1% of the overall computation time. In the second stage, a linear programming method finds the optimal solution which satisfies power balance constraints, generation and transmission inequality constraints, and security constraints. Implementation of the algorithm for IEEE systems and EPRI Scenario systems shows that the two stage method achieves an average speedup ratio of 10.64 compared to the classical LP-based method.
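The first (aggregated) stage amounts to classical economic dispatch, which for quadratic cost functions reduces to the equal-incremental-cost condition; the bisection on lambda below is an illustrative solver choice, and the inequality and security constraints are left to the second (LP) stage:

```python
def stage_one_dispatch(a, b, demand, tol=1e-9):
    """Classical economic dispatch for costs C_i(P) = a_i P^2 + b_i P
    (a_i > 0): all committed units run at the same incremental cost
    lambda, found here by bisection so that total output meets demand."""
    lo, hi = min(b), max(b) + 2 * max(a) * demand   # bracket for lambda
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        # P_i(lambda) = (lambda - b_i) / (2 a_i), floored at zero output
        total = sum(max((lam - bi) / (2 * ai), 0.0) for ai, bi in zip(a, b))
        if total < demand:
            lo = lam
        else:
            hi = lam
    return [max((lam - bi) / (2 * ai), 0.0) for ai, bi in zip(a, b)]
```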
Simple Exact Algorithm for Transistor Sizing of Low-Power High-Speed Arithmetic Circuits
Directory of Open Access Journals (Sweden)
Tooraj Nikoubin
2010-01-01
A new transistor sizing algorithm, SEA (Simple Exact Algorithm), for optimizing low-power and high-speed arithmetic integrated circuits is proposed. In comparison with other transistor sizing algorithms, simplicity, accuracy, independence of the order and initial sizing factors of transistors, and flexibility in choosing the optimization parameters, such as power consumption, delay, Power-Delay Product (PDP), chip area, or a combination of them, are the advantages of this new algorithm. More exhaustive rules for grouping transistors are the main trait of our algorithm. Hence, the SEA algorithm performs strongly on major transistor sizing metrics such as optimization rate, simulation speed, and reliability. In an approximate comparison of the SEA algorithm with MDE and ADC for a number of conventional full adder circuits, delay and PDP have been improved by 55.01% and 57.92% on average, respectively. By comparing SEA and Chang's algorithm, 25.64% improvement in PDP and 33.16% improvement in delay have been achieved. All the simulations have been performed with 0.13 μm technology based on the BSIM3v3 model using the HSpice simulator software.
Bruijn, de N.G.
1972-01-01
Recently A. W. Joseph described an algorithm providing combinatorial insight into E. Sparre Andersen's so-called Principle of Equivalence in mathematical statistics. In the present paper such algorithms are discussed systematically.
Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.
2017-12-01
This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.
Hammoudi, Nadjib; Duprey, Matthieu; Régnier, Philippe; Achkar, Marc; Boubrit, Lila; Preud'homme, Gisèle; Healy-Brucker, Aude; Vignalou, Jean-Baptiste; Pousset, Françoise; Komajda, Michel; Isnard, Richard
2014-02-01
Management of increased referrals for transthoracic echocardiography (TTE) examinations is a challenge. Patients with normal TTE examinations take less time to explore than those with heart abnormalities. A reliable method for assessing the pretest probability of a normal TTE may optimize management of requests. We aimed to establish and validate, based on requests for examinations, a simple algorithm for defining the pretest probability of a normal TTE. In a retrospective phase, factors associated with normality were investigated and an algorithm was designed. In a prospective phase, patients were classified in accordance with the algorithm as being at high or low probability of having a normal TTE. In the retrospective phase, 42% of 618 examinations were normal. In multivariable analysis, age and absence of cardiac history were associated with normality. Low pretest probability of a normal TTE was defined by known cardiac history or, in case of doubt about cardiac history, by age > 70 years. In the prospective phase, the prevalences of normality were 72% and 25% in the high (n=167) and low (n=241) pretest probability of normality groups, respectively. The mean duration of normal examinations was significantly shorter than that of abnormal examinations (13.8 ± 9.2 min vs 17.6 ± 11.1 min; P=0.0003). A simple algorithm can classify patients referred for TTE as being at high or low pretest probability of having a normal examination. This algorithm might help to optimize management of requests in routine practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
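The resulting decision rule is simple enough to transcribe directly (the function name and the boolean inputs are illustrative; the rule itself is the one stated in the abstract):

```python
def pretest_probability_of_normal_tte(cardiac_history, age, history_uncertain=False):
    """Low pretest probability of a normal examination if the patient has
    a known cardiac history or, when the history is in doubt, if
    age > 70 years; high probability otherwise."""
    if cardiac_history or (history_uncertain and age > 70):
        return "low"
    return "high"
```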
A Simple Density with Distance Based Initial Seed Selection Technique for K Means Algorithm
Directory of Open Access Journals (Sweden)
Sajidha Syed Azimuddin
2017-01-01
Open issues with respect to the K means algorithm include identifying the number of clusters, initial seed concept selection, clustering tendency, handling empty clusters, and identifying outliers. In this paper we propose a novel and simple technique considering both density and distance of the concepts in a dataset to identify initial seed concepts for clustering. Many authors have proposed different techniques to identify initial seed concepts, but our method ensures that the initial seed concepts are chosen from the different clusters that are to be generated by the clustering solution. The hallmark of our algorithm is that it is a single pass algorithm that does not require any extra parameters to be estimated. Further, our seed concepts are among the actual concepts and not the mean of representative concepts, as is the case in many other algorithms. We have implemented our proposed algorithm and compared the results with the interval based technique of Fouad Khan. We see that our method outperforms the interval based method. We have also compared our method with the original random K means and K Means++ algorithms.
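One plausible reading of a density-with-distance seed picker (the paper's exact scoring function is not reproduced here; the radius parameter and the product score are assumptions):

```python
import math

def density_distance_seeds(points, k, radius=1.0):
    """Pick k seeds from the actual points: the first seed is the densest
    point; each further seed maximizes density times distance to the
    nearest already-chosen seed, pushing seeds into distinct clusters."""
    def dist(p, q):
        return math.dist(p, q)
    # density = number of neighbours within the radius (point counts itself)
    density = [sum(1 for q in points if dist(p, q) <= radius) for p in points]
    seeds = [points[max(range(len(points)), key=lambda i: density[i])]]
    while len(seeds) < k:
        def score(i):
            return density[i] * min(dist(points[i], s) for s in seeds)
        seeds.append(points[max(range(len(points)), key=score)])
    return seeds
```

An already-chosen seed has zero distance to itself and hence score zero, so it is never picked twice; the seeds then initialize a standard K means run.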
Simple nuclear norm based algorithms for imputing missing data and forecasting in time series
Butcher, Holly Louise; Gillard, Jonathan William
2017-01-01
There has been much recent progress on the use of the nuclear norm for the so-called matrix completion problem (the problem of imputing missing values of a matrix). In this paper we investigate the use of the nuclear norm for modelling time series, with particular attention to imputing missing data and forecasting. We introduce a simple alternating projections type algorithm based on the nuclear norm for these tasks, and consider a number of practical examples.
A simple algorithm for estimation of source-to-detector distance in Compton imaging
International Nuclear Information System (INIS)
Rawool-Sullivan, Mohini W.; Sullivan, John P.; Tornga, Shawn R.; Brumby, Steven P.
2008-01-01
Compton imaging is used to predict the location of gamma-emitting radiation sources. The X and Y coordinates of the source can be obtained using a back-projected image and a two-dimensional peak-finding algorithm. The emphasis of this work is to estimate the source-to-detector distance (Z). The algorithm presented uses the solid angle subtended by the reconstructed image at various source-to-detector distances. This algorithm was validated using both measured data from the prototype Compton imager (PCI) constructed at Los Alamos National Laboratory and simulated data of the same imager. Results show this method can be applied successfully to estimate Z, and it provides a way of determining Z without prior knowledge of the source location. This method is faster than methods that employ maximum likelihood estimation because it is based on simple back projections of Compton scatter data.
International Nuclear Information System (INIS)
Lu, Ning; Qin, Jun; Yang, Kun; Sun, Jiulin
2011-01-01
Surface global solar radiation (GSR) is the primary renewable energy in nature. Geostationary satellite data are used to map GSR in many inversion algorithms in which ground GSR measurements merely serve to validate the satellite retrievals. In this study, a simple algorithm with artificial neural network (ANN) modeling is proposed to explore the non-linear physical relationship between ground daily GSR measurements and Multi-functional Transport Satellite (MTSAT) all-channel observations in an effort to fully exploit information contained in both data sets. Singular value decomposition is implemented to extract the principal signals from satellite data and a novel method is applied to enhance ANN performance at high altitude. A three-layer feed-forward ANN model is trained with one year of daily GSR measurements at ten ground sites. This trained ANN is then used to map continuous daily GSR for two years, and its performance is validated at all 83 ground sites in China. The evaluation result demonstrates that this algorithm can quickly and efficiently build the ANN model that estimates daily GSR from geostationary satellite data with good accuracy in both space and time. -- Highlights: → A simple and efficient algorithm to estimate GSR from geostationary satellite data. → ANN model fully exploits both the information from satellite and ground measurements. → Good performance of the ANN model is comparable to that of the classical models. → Surface elevation and infrared information enhance GSR inversion.
Directory of Open Access Journals (Sweden)
Milinkovitch Michel C
2007-11-01
Background: Distance matrix methods constitute a major family of phylogenetic estimation methods, and the minimum evolution (ME) principle (aiming at recovering the phylogeny with shortest length) is one of the most commonly used optimality criteria for estimating phylogenetic trees. The major difficulty for its application is that the number of possible phylogenies grows exponentially with the number of taxa analyzed, and the minimum evolution principle is known to belong to the NP-hard class of problems. Results: In this paper, we introduce an Ant Colony Optimization (ACO) algorithm to estimate phylogenies under the minimum evolution principle. ACO is an optimization technique inspired by the foraging behavior of real ant colonies. This behavior is exploited in artificial ant colonies for the search of approximate solutions to discrete optimization problems. Conclusion: We show that the ACO algorithm is potentially competitive in comparison with state-of-the-art algorithms for the minimum evolution principle. This is the first application of an ACO algorithm to the phylogenetic estimation problem.
Hamiltonians and variational principles for Alfvén simple waves
International Nuclear Information System (INIS)
Webb, G M; Hu, Q; Roux, J A le; Dasgupta, B; Zank, G P
2012-01-01
The evolution equations for the magnetic field induction B with the wave phase for Alfvén simple waves are expressed as variational principles and in the Hamiltonian form. The evolution of B with the phase (which is a function of the space and time variables) depends on the generalized Frenet–Serret equations, in which the wave normal n (which is a function of the phase) is taken to be tangent to a curve X, in a 3D Cartesian geometry vector space. The physical variables (the gas density, fluid velocity, gas pressure and magnetic field induction) in the wave depend only on the phase. Three approaches are developed. One approach exploits the fact that the Frenet equations may be written as a 3D Hamiltonian system, which can be described using the Nambu bracket. It is shown that B as a function of the phase satisfies a modified version of the Frenet equations, and hence the magnetic field evolution equations can be expressed in the Hamiltonian form. A second approach develops an Euler–Poincaré variational formulation. A third approach uses the Frenet frame formulation, in which the hodograph of B moves on a sphere of constant radius and uses a stereographic projection transformation due to Darboux. The equations for the projected field components reduce to a complex Riccati equation. By using a Cole–Hopf transformation, the Riccati equation reduces to a linear second order differential equation for the new variable. A Hamiltonian formulation of the second order differential equation then allows the system to be written in the Hamiltonian form. Alignment dynamics equations for Alfvén simple waves give rise to a complex Riccati equation or, equivalently, to a quaternionic Riccati equation, which can be mapped onto the Riccati equation obtained by stereographic projection. (paper)
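The Riccati-to-linear reduction invoked above can be stated concretely for a generic scalar Riccati equation; the coefficients q0, q1, q2 here are placeholders, not the specific Alfvénic expressions obtained after the stereographic projection:

```latex
\[
\frac{dy}{dx} = q_0(x) + q_1(x)\,y + q_2(x)\,y^2,
\qquad
y = -\frac{1}{q_2}\,\frac{u'}{u}
\;\Longrightarrow\;
u'' - \Bigl(q_1 + \frac{q_2'}{q_2}\Bigr)u' + q_0\,q_2\,u = 0 .
\]
```

This Cole-Hopf substitution turns the nonlinear first-order Riccati equation into a linear second-order ODE for u, which then admits the Hamiltonian formulation described in the abstract.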
Calculation of propellant gas pressure by simple extended corresponding state principle
Directory of Open Access Journals (Sweden)
Bin Xu
2016-04-01
Full Text Available The virial equation can well describe the gas state at high temperature and pressure, but the difficulties in virial coefficient calculation limit the use of the virial equation. The simple extended corresponding state principle (SE-CSP) is introduced in the virial equation. Based on a corresponding state equation including three characteristic parameters, an extended parameter is introduced to describe the second virial coefficient expressions of the main products of propellant gas. The modified SE-CSP second virial coefficient expression was extrapolated beyond the experimental temperature range of the virial coefficients, and the second virial coefficients obtained are in good agreement with the experimental data at low temperature and with the theoretical values at high temperature. The maximum pressure in the closed bomb test was calculated with the modified SE-CSP virial coefficient expressions with a calculated error of less than 2%, and the error was smaller than the result calculated with the reported values under the same calculation conditions. The modified SE-CSP virial coefficient expression provides a convenient and efficient method for practical virial coefficient calculation without resorting to complicated molecular model design and integral calculation.
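For reference, the virial equation of state referred to above has the standard density-series form, with B(T) the second virial coefficient targeted by the SE-CSP expressions (the truncation level shown is illustrative, not the paper's working form):

```latex
\frac{p\,V_m}{R\,T} \;=\; 1 \;+\; \frac{B(T)}{V_m} \;+\; \frac{C(T)}{V_m^{2}} \;+\; \cdots
```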
International Nuclear Information System (INIS)
Göktürkler, G; Balkaya, Ç
2012-01-01
Three naturally inspired meta-heuristic algorithms—the genetic algorithm (GA), simulated annealing (SA) and particle swarm optimization (PSO)—were used to invert some of the self-potential (SP) anomalies originating from polarized bodies with simple geometries. Both synthetic and field data sets were considered. The tests with the synthetic data comprised solutions with both noise-free and noisy data; in the tests with the field data, some SP anomalies observed over a copper belt (India), graphite deposits (Germany) and metallic sulfide (Turkey) were inverted. The model parameters included the electric dipole moment, polarization angle, depth, shape factor and origin of the anomaly. The estimated parameters were compared with those from previous studies using various optimization algorithms, mainly least-squares approaches, on the same data sets. During the test studies the solutions by GA, PSO and SA were characterized as being consistent with each other; a good starting model was not a requirement to reach the global minimum. It can be concluded that the global optimization algorithms considered in this study were able to yield solutions compatible with those from widely used local optimization algorithms. (paper)
Energy Technology Data Exchange (ETDEWEB)
Wrede, D E; Dawalibi, H [King Faisal Specialist Hospital and Research Centre, Department of Medical Physics. Riyadh (Saudi Arabia)
1980-01-01
A simple mathematical algorithm is derived from experimental data for dose rates from 137Cs sources in a finite tissue equivalent medium corresponding to the female pelvis. An analytical expression for a point source of 137Cs along with a simple numerical integration routine allows for rapid as well as accurate dose rate calculations at points of interest for gynecologic insertions. When compared with theoretical models assuming an infinite unit density medium, the measured dose rates are found to be systematically lower at distances away from a single source; 5 per cent at 2 cm and 10 per cent at 7 cm along the transverse axis. Allowance in the program for printout of dose rates from individual sources to a given point and the feature of source strength modification allows for optimization in terms of increasing the difference in dose rate between reference treatment points and sensitive structures such as the bladder, rectum and colon.
International Nuclear Information System (INIS)
Wrede, D.E.; Dawalibi, H.
1980-01-01
A simple mathematical algorithm is derived from experimental data for dose rates from 137Cs sources in a finite tissue equivalent medium corresponding to the female pelvis. An analytical expression for a point source of 137Cs along with a simple numerical integration routine allows for rapid as well as accurate dose rate calculations at points of interest for gynecologic insertions. When compared with theoretical models assuming an infinite unit density medium, the measured dose rates are found to be systematically lower at distances away from a single source; 5 per cent at 2 cm and 10 per cent at 7 cm along the transverse axis. Allowance in the program for printout of dose rates from individual sources to a given point and the feature of source strength modification allows for optimization in terms of increasing the difference in dose rate between reference treatment points and sensitive structures such as the bladder, rectum and colon. (Auth.)
International Nuclear Information System (INIS)
Lue Kunhan; Lin Hsinhon; Chuang Kehshih; Kao Chihhao, K.; Hsieh Hungjen; Liu Shuhsin
2014-01-01
In positron emission tomography (PET) of the dopaminergic system, quantitative measurements of nigrostriatal dopamine function are useful for differential diagnosis. A subregional analysis of striatal uptake enables the diagnostic performance to be more powerful. However, the partial volume effect (PVE) induces an underestimation of the true radioactivity concentration in small structures. This work proposes a simple algorithm for subregional analysis of striatal uptake with partial volume correction (PVC) in dopaminergic PET imaging. The PVC algorithm analyzes the separate striatal subregions and takes into account the PVE based on the recovery coefficient (RC). The RC is defined as the ratio of the PVE-uncorrected to PVE-corrected radioactivity concentration, and is derived from a combination of the traditional volume of interest (VOI) analysis and the large VOI technique. The clinical studies, comprising 11 patients with Parkinson's disease (PD) and 6 healthy subjects, were used to assess the impact of PVC on the quantitative measurements. Simulations on a numerical phantom that mimicked realistic healthy and neurodegenerative situations were used to evaluate the performance of the proposed PVC algorithm. In both the clinical and the simulation studies, the striatal-to-occipital ratio (SOR) values for the entire striatum and its subregions were calculated with and without PVC. In the clinical studies, the SOR values in each structure (caudate, anterior putamen, posterior putamen, putamen, and striatum) were significantly higher by using PVC in contrast to those without. Among the PD patients, the SOR values in each structure and quantitative disease severity ratings were shown to be significantly related only when PVC was used. For the simulation studies, the average absolute percentage error of the SOR estimates before and after PVC were 22.74% and 1.54% in the healthy situation, respectively; those in the neurodegenerative situation were 20.69% and 2
Multiway simple cycle separators and I/O-efficient algorithms for planar graphs
DEFF Research Database (Denmark)
Arge, L.; Walderveen, Freek van; Zeh, Norbert
2013-01-01
... in internal memory, thereby completely negating the performance gain achieved by minimizing the number of disk accesses. In this paper, we show how to make these algorithms simultaneously efficient in internal and external memory so they achieve I/O complexity O(sort(N)) and take O(N log N) time in internal memory, where sort(N) is the number of I/Os needed to sort N items in external memory. The key, and the main technical contribution of this paper, is a multiway version of Miller's simple cycle separator theorem. We show how to compute these separators in linear time in internal memory, and using O(sort(N)) I/Os and O(N log N) (internal-memory computation) time in external memory.
A simple model based magnet sorting algorithm for planar hybrid undulators
International Nuclear Information System (INIS)
Rakowsky, G.
2010-01-01
Various magnet sorting strategies have been used to optimize undulator performance, ranging from intuitive pairing of high- and low-strength magnets, to full 3D FEM simulation with 3-axis Helmholtz coil magnet data. In the extreme, swapping magnets in a full field model to minimize trajectory wander and rms phase error can be time consuming. This paper presents a simpler approach, extending the field error signature concept to obtain trajectory displacement, kick angle and phase error signatures for each component of magnetization error from a Radia model of a short hybrid-PM undulator. We demonstrate that steering errors and phase errors are essentially decoupled and scalable from measured X, Y and Z components of magnetization. Then, for any given sequence of magnets, rms trajectory and phase errors are obtained from simple cumulative sums of the scaled displacements and phase errors. The cost function (a weighted sum of these errors) is then minimized by swapping magnets, using one's favorite optimization algorithm. This approach was applied recently at NSLS to a short in-vacuum undulator, which required no subsequent trajectory or phase shimming. Trajectory and phase signatures are also obtained for some mechanical errors, to guide 'virtual shimming' and specifying mechanical tolerances. Some simple inhomogeneities are modeled to assess their error contributions.
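Once the per-magnet signatures are known, the scheme described above reduces to evaluating cumulative sums and minimizing a weighted cost by swapping magnets. A minimal sketch, assuming a single scalar trajectory-displacement and phase-error signature per magnet and a plain swap-based hill climb (the actual Radia-derived signatures, weights and optimizer are specific to the paper):

```python
import math
import random

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def cost(order, traj_sig, phase_sig, w_traj=1.0, w_phase=1.0):
    # Cumulative sums of the per-magnet signatures approximate the
    # trajectory wander and phase error along the undulator.
    traj, phase = [], []
    ct = cp = 0.0
    for i in order:
        ct += traj_sig[i]
        cp += phase_sig[i]
        traj.append(ct)
        phase.append(cp)
    return w_traj * rms(traj) + w_phase * rms(phase)

def sort_magnets(traj_sig, phase_sig, iters=2000, seed=0):
    """Swap-based hill climb over magnet orderings (illustrative only)."""
    rng = random.Random(seed)
    n = len(traj_sig)
    order = list(range(n))
    best = cost(order, traj_sig, phase_sig)
    for _ in range(iters):
        i, j = rng.randrange(n), rng.randrange(n)
        order[i], order[j] = order[j], order[i]
        c = cost(order, traj_sig, phase_sig)
        if c < best:
            best = c
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
    return order, best
```

Any favorite optimizer (simulated annealing, genetic search) can replace the hill climb; the cheap cumulative-sum cost is what makes the swapping fast.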
International Nuclear Information System (INIS)
Alarco, J.A.; Talbot, P.C.
2012-01-01
A simple phenomenological model for the relationship between structure and composition of the high Tc cuprates is presented. The model is based on two simple crystal chemistry principles: unit cell doping and charge balance within unit cells. These principles are inspired by key experimental observations of how the materials accommodate large deviations from stoichiometry. Consistent explanations for significant HTSC properties can be given without any additional assumptions, while retaining valuable insight for geometric interpretation. Combining these two chemical principles with a review of Crystal Field Theory (CFT) or Ligand Field Theory (LFT), it becomes clear that the two oxidation states in the conduction planes (typically d8 and d9) belong to the most strongly divergent d-levels as a function of deformation from regular octahedral coordination. This observation offers a link to a range of coupling effects relating vibrations and spin waves through application of Hund’s rules. An indication of this model’s capacity to predict physical properties for HTSC is provided and will be elaborated in subsequent publications. Simple criteria for the relationship between structure and composition in HTSC systems may guide chemical syntheses within new material systems.
Using a Simple Neural Network to Delineate Some Principles of Distributed Economic Choice.
Balasubramani, Pragathi P; Moreno-Bote, Rubén; Hayden, Benjamin Y
2018-01-01
The brain uses a mixture of distributed and modular organization to perform computations and generate appropriate actions. While the principles under which the brain might perform computations using modular systems have been more amenable to modeling, the principles by which the brain might make choices using distributed principles have not been explored. Our goal in this perspective is to delineate some of those distributed principles using a neural network method and use its results as a lens through which to reconsider some previously published neurophysiological data. To allow for direct comparison with our own data, we trained the neural network to perform binary risky choices. We find that value correlates are ubiquitous and are always accompanied by non-value information, including spatial information (i.e., no pure value signals). Evaluation, comparison, and selection were not distinct processes; indeed, value signals even in the earliest stages contributed directly, albeit weakly, to action selection. There was no place, other than at the level of action selection, at which dimensions were fully integrated. No units were specialized for specific offers; rather, all units encoded the values of both offers in an anti-correlated format, thus contributing to comparison. Individual network layers corresponded to stages in a continuous rotation from input to output space rather than to functionally distinct modules. While our network is likely to not be a direct reflection of brain processes, we propose that these principles should serve as hypotheses to be tested and evaluated for future studies.
Using a Simple Neural Network to Delineate Some Principles of Distributed Economic Choice
Directory of Open Access Journals (Sweden)
Pragathi P. Balasubramani
2018-03-01
Full Text Available The brain uses a mixture of distributed and modular organization to perform computations and generate appropriate actions. While the principles under which the brain might perform computations using modular systems have been more amenable to modeling, the principles by which the brain might make choices using distributed principles have not been explored. Our goal in this perspective is to delineate some of those distributed principles using a neural network method and use its results as a lens through which to reconsider some previously published neurophysiological data. To allow for direct comparison with our own data, we trained the neural network to perform binary risky choices. We find that value correlates are ubiquitous and are always accompanied by non-value information, including spatial information (i.e., no pure value signals). Evaluation, comparison, and selection were not distinct processes; indeed, value signals even in the earliest stages contributed directly, albeit weakly, to action selection. There was no place, other than at the level of action selection, at which dimensions were fully integrated. No units were specialized for specific offers; rather, all units encoded the values of both offers in an anti-correlated format, thus contributing to comparison. Individual network layers corresponded to stages in a continuous rotation from input to output space rather than to functionally distinct modules. While our network is likely to not be a direct reflection of brain processes, we propose that these principles should serve as hypotheses to be tested and evaluated for future studies.
Loss avoidance as selection principle: evidence from simple stag-hunt games
Czech Academy of Sciences Publication Activity Database
Rydval, Ondřej; Ortmann, Andreas
2005-01-01
Roč. 88, č. 1 (2005), s. 101-107 ISSN 0165-1765 Institutional research plan: CEZ:AV0Z70850503 Keywords : loss avoidance * selection principles * stag-hunt games Subject RIV: AH - Economics Impact factor: 0.381, year: 2005 http://dx.doi.org/10.1016/j.econlet.2004.12.027
Directory of Open Access Journals (Sweden)
Murray Christopher JL
2011-08-01
Full Text Available Abstract Background Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science.
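The additive scoring at the heart of Tariff can be sketched as follows; the tariff definition used here (cause-specific symptom frequency minus overall frequency) is a simplification for illustration, not the paper's exact robust statistic:

```python
def train_tariffs(records, causes, symptoms):
    """Compute a tariff (score) per (cause, symptom) from validated VA data.

    Each record is {"cause": str, "symptoms": {symptom: 0 or 1}}. The tariff
    here measures how much more often a symptom is endorsed for a cause than
    across all causes; the published method uses a robust variant.
    """
    tariffs = {}
    for c in causes:
        sub = [r for r in records if r["cause"] == c]
        for s in symptoms:
            f_cause = sum(r["symptoms"][s] for r in sub) / max(len(sub), 1)
            f_all = sum(r["symptoms"][s] for r in records) / len(records)
            tariffs[(c, s)] = f_cause - f_all  # higher = more indicative of c
    return tariffs

def predict_cause(symptoms_present, tariffs, causes):
    # Sum the tariffs over the endorsed symptoms; highest total score wins.
    return max(causes, key=lambda c: sum(tariffs[(c, s)]
                                         for s in symptoms_present))
```

Population-level CSMFs then follow by tallying the predicted causes over a dataset.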
Directory of Open Access Journals (Sweden)
Ramakrishna R. Nemani
2012-01-01
Full Text Available Algorithms that use remotely-sensed vegetation indices to estimate gross primary production (GPP), a key component of the global carbon cycle, have gained a lot of popularity in the past decade. Yet despite the amount of research on the topic, the most appropriate approach is still under debate. As an attempt to address this question, we compared the performance of different vegetation indices from the Moderate Resolution Imaging Spectroradiometer (MODIS) in capturing the seasonal and the annual variability of GPP estimates from an optimal network of 21 FLUXNET forest tower sites. The tested indices include the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Leaf Area Index (LAI), and Fraction of Photosynthetically Active Radiation absorbed by plant canopies (FPAR). Our results indicated that single vegetation indices captured 50–80% of the variability of tower-estimated GPP, but no one index performed universally well in all situations. In particular, EVI outperformed the other MODIS products in tracking seasonal variations in tower-estimated GPP, but annual mean MODIS LAI was the best estimator of the spatial distribution of annual flux-tower GPP (GPP = 615 × LAI − 376, where GPP is in g C/m2/year). This simple algorithm rehabilitated earlier approaches linking ground measurements of LAI to flux-tower estimates of GPP and produced annual GPP estimates comparable to the MODIS 17 GPP product. As such, remote sensing-based estimates of GPP continue to offer a useful alternative to estimates from biophysical models, and the choice of the most appropriate approach depends on whether the estimates are required at annual or sub-annual temporal resolution.
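The reported annual regression is simple enough to state directly; a sketch using the fitted coefficients from the abstract (valid only for annual means over comparable forest sites):

```python
def gpp_from_lai(annual_mean_lai):
    """Annual GPP (g C per m2 per year) from annual mean MODIS LAI,
    using the linear fit reported above: GPP = 615 * LAI - 376."""
    return 615.0 * annual_mean_lai - 376.0
```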
CHESS-changing horizon efficient set search: A simple principle for multiobjective optimization
DEFF Research Database (Denmark)
Borges, Pedro Manuel F. C.
2000-01-01
This paper presents a new concept for generating approximations to the non-dominated set in multiobjective optimization problems. The approximation set A is constructed by solving several single-objective minimization problems in which a particular function D(A, z) is minimized. A new algorithm t...
A simple biota removal algorithm for 35 GHz cloud radar measurements
Kalapureddy, Madhu Chandra R.; Sukanya, Patra; Das, Subrata K.; Deshpande, Sachin M.; Pandithurai, Govindan; Pazamany, Andrew L.; Ambuj K., Jha; Chakravarty, Kaustav; Kalekar, Prasad; Krishna Devisetty, Hari; Annam, Sreenivas
2018-03-01
promisingly simple in realization but powerful in performance due to the flexibility in constraining, identifying and filtering out the biota and screening out the true cloud content, especially the CBL clouds. Therefore, the TEST algorithm is superior for screening out the low-level clouds that are strongly linked to the rainmaking mechanism associated with the Indian Summer Monsoon region's CVS.
Crittenden, Barry D.
1991-01-01
A simple liquid-liquid equilibrium (LLE) system involving a constant partition coefficient based on solute ratios is used to develop an algebraic understanding of multistage contacting in a first-year separation processes course. This algebraic approach to the LLE system is shown to be operable for the introduction of graphical techniques…
Water productivity using SAFER - Simple Algorithm for Evapotranspiration Retrieving in watershed
Directory of Open Access Journals (Sweden)
Daniel N. Coaguila
Full Text Available ABSTRACT The Cabeceira Comprida stream’s watershed, located in Santa Fé do Sul, São Paulo state, has great environmental importance. It is essential for supplying water to the population and generating surpluses for sewage dilution. This study aimed to evaluate the annual performance of the components of water productivity from Landsat-8 images of 2015, using the Simple Algorithm for Evapotranspiration Retrieving (SAFER), calculating the actual evapotranspiration (ETa), biomass (BIO) and water productivity (WP). The annual averages of ETa, BIO and WP were 1.03 mm day-1, 36.04 kg ha-1 day-1 and 3.19 kg m-3, respectively. The average annual values of ETa for land use and occupation were 1.40, 1.23, 1.05, 0.97 and 1.08 mm day-1 for the remaining forest (RF), invasive species (IS), pasture (Pa), annual crop (AC) and perennial crop (PC), respectively, with BIO of 57.64, 46.10, 36.78, 32.69, 40.03 kg ha-1 day-1 for RF, IS, Pa, AC and PC, respectively, resulting in WP of 3.94, 3.59, 3.25, 3.09, 3.35 kg m-3 for RF, IS, Pa, AC and PC, respectively. The ETa, BIO and WP adjust to the seasonality of the region, and RF and IS stood out with the highest values.
Directory of Open Access Journals (Sweden)
Dong-Sup Lee
2015-01-01
Full Text Available Independent Component Analysis (ICA), one of the blind source separation methods, can be applied for extracting unknown source signals only from received signals. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm of the conventional ICA has been proposed to mitigate these problems. The proposed method to extract more stable source signals having valid order includes an iterative and reordering process of the extracted mixing matrix to reconstruct finally converged source signals, referring to the magnitudes of correlation coefficients between the intermediately separated signals and the signals measured on or nearby sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate applicability of the proposed method to the real problem of a complex structure, an experiment has been carried out for a scaled submarine mockup. The results show that the proposed method could resolve the inherent problems of a conventional ICA technique.
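The reordering step described above, matching separated components to signals measured on or near the sources by correlation magnitude, can be sketched as follows (a greedy one-pass version; the paper's method iterates and reconstructs the mixing matrix):

```python
import numpy as np

def match_sources(separated, references):
    """Reorder and sign-correct ICA outputs against reference channels.

    `separated` (k x n) are ICA component estimates; `references` (k x n)
    are signals measured on or near the true sources. Greedy assignment by
    absolute correlation; an illustrative sketch, not the paper's full
    iterative scheme.
    """
    k = separated.shape[0]
    # corr[r, c] = correlation of component r with reference c.
    corr = np.corrcoef(np.vstack([separated, references]))[:k, k:]
    pairs = sorted(((abs(corr[r, c]), r, c)
                    for r in range(k) for c in range(k)), reverse=True)
    out = np.empty_like(references, dtype=float)
    used_r, used_c = set(), set()
    for _, r, c in pairs:
        if r in used_r or c in used_c:
            continue
        # Flip the sign if the component is anti-correlated with its match.
        out[c] = np.sign(corr[r, c]) * separated[r]
        used_r.add(r)
        used_c.add(c)
    return out
```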
DEFF Research Database (Denmark)
Frydendall, Jan; Brandt, J.; Christensen, J. H.
2009-01-01
A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark. In this paper, the algorithm and the results from experiments designed to find the optimal setup of the algorithm are described. The algorithm has been developed and optimized via eight different experiments where the results from different model setups have been tested against measurements from the EMEP (European Monitoring and Evaluation Programme) network covering a half-year period, April–September 1999. The best performing setup of the data assimilation algorithm for surface ozone concentrations has been found, including the combination of determining the covariances using the Hollingsworth method, varying the correlation length according to the number of adjacent observation stations and applying the assimilation routine at three successive hours during the morning.
Xu, Lei; Jeavons, Peter
2015-11-01
Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message-passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
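A minimal simulation of such a one-bit protocol in a complete network might look like this; the broadcast-probability schedule and the termination rule are assumptions for illustration, not the authors' algorithm:

```python
import random

def elect_leader(n, rounds=500, seed=1):
    """One-bit broadcast leader election in a complete anonymous network.

    Sketch of the fly-inspired idea: each round, every still-active node
    broadcasts with some probability; a node that broadcasts while hearing
    silence from everyone else declares itself leader. Nodes only
    distinguish silence from the arrival of one or more messages.
    """
    rng = random.Random(seed)
    active = list(range(n))
    for _ in range(rounds):
        p = 1.0 / max(len(active), 1)
        speakers = [v for v in active if rng.random() < p]
        if len(speakers) == 1:
            return speakers[0]   # unique speaker heard only silence: leader
        if speakers:
            active = speakers    # silent listeners drop out of contention
    return None                  # no leader within the round budget
```

Per round the chance of a unique speaker stays bounded away from zero, so election terminates quickly with high probability.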
Indian Academy of Sciences (India)
will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...
Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth
2015-07-01
Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress, in developing countries including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory Distress and Y=Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP. To validate the TRY CPAP algorithm for neonates with respiratory distress in a low-resource setting. We constructed an algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital, in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training using the neonatologist's assessment as the reference standard. The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings.
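As a purely illustrative sketch of how such a bedside rule composes tone, respiratory distress and birth weight into a single decision, one might write it as below; the weight threshold and the exact combination are hypothetical, not the validated TRY criteria:

```python
def needs_cpap(tone_good, respiratory_distress, birth_weight_g,
               min_weight_g=1000):
    """Hypothetical TRY-style decision rule (illustration only).

    The real TRY CPAP algorithm defines its own validated criteria and
    cut-offs; min_weight_g here is an assumed placeholder threshold.
    """
    return bool(tone_good and respiratory_distress
                and birth_weight_g >= min_weight_g)
```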
A simple algorithm for large-scale mapping of evergreen forests in tropical America, Africa and Asia
Xiangming Xiao; Chandrashekhar M. Biradar; Christina Czarnecki; Tunrayo Alabi; Michael Keller
2009-01-01
The areal extent and spatial distribution of evergreen forests in the tropical zones are important for the study of climate, carbon cycle and biodiversity. However, frequent cloud cover in the tropical regions makes mapping evergreen forests a challenging task. In this study we developed a simple and novel mapping algorithm that is based on the temporal profile...
Directory of Open Access Journals (Sweden)
Alexandr Victorovich Budylskiy
2014-06-01
Full Text Available This article considers a multicriteria optimization approach using a modified genetic algorithm to solve the project-scheduling problem under duration and cost constraints. The work lists the available choices for solving this problem and justifies the multicriteria optimization approach. The study describes the Pareto principles, which are used in the modified genetic algorithm. We define the mathematical model of the project-scheduling problem and introduce the modified genetic algorithm, the ranking strategies and the elitism approaches. The article includes an example.
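The Pareto principles mentioned above rest on the dominance relation; a minimal sketch for minimization objectives (independent of the paper's specific GA encoding and ranking strategy):

```python
def dominates(a, b):
    # a Pareto-dominates b (minimization): no worse in every objective
    # and strictly better in at least one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors (O(n^2) scan)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

A Pareto-ranking GA repeatedly applies such a scan to assign fitness layers, with elitism preserving the current front across generations.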
International Nuclear Information System (INIS)
Phan Thanh An
2008-06-01
The convex rope problem, posed by Peshkin and Sanderson in IEEE J. Robotics Automat, 2 (1986) pp. 53-58, is to find the counterclockwise and clockwise convex ropes starting at the vertex a and ending at the vertex b of a simple polygon, where a is on the boundary of the convex hull of the polygon and b is visible from infinity. In this paper, we present a linear time algorithm for solving this problem without resorting to a linear-time triangulation algorithm and without resorting to a convex hull algorithm for the polygon. The counterclockwise (clockwise, respectively) convex rope consists of two polylines obtained in a basic incremental strategy described in convex hull algorithms for the polylines forming the polygon from a to b. (author)
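The "basic incremental strategy" used for the convex-rope polylines is in the spirit of a Graham-scan chain update; a sketch for computing a single convex chain of a polyline (the paper's handling of the hull boundary and visibility from infinity is omitted):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a left turn at a.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_chain(points, left_turns=True):
    """Incremental convex chain of a polyline from its first to last vertex.

    Keeps only the vertices at which the chain turns consistently one way,
    popping earlier vertices that would violate convexity. Runs in linear
    time since each vertex is pushed and popped at most once.
    """
    sign = 1 if left_turns else -1
    chain = []
    for p in points:
        while len(chain) >= 2 and sign * cross(chain[-2], chain[-1], p) <= 0:
            chain.pop()
        chain.append(p)
    return chain
```

Running the update in both turn directions over the polylines from a to b yields the two candidate ropes.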
Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric
2014-03-12
This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations, leading to high resolution, while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated which has to be compared with the 168 µm measurement range.
Directory of Open Access Journals (Sweden)
J. Frydendall
2009-08-01
Full Text Available A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM, applied for air pollution forecasting at the National Environmental Research Institute (NERI, Denmark. In this paper, the algorithm and the results from experiments designed to find the optimal setup of the algorithm are described. The algorithm has been developed and optimized via eight different experiments where the results from different model setups have been tested against measurements from the EMEP (European Monitoring and Evaluation Programme network covering a half-year period, April–September 1999. The best performing setup of the data assimilation algorithm for surface ozone concentrations has been found, including the combination of determining the covariances using the Hollingsworth method, varying the correlation length according to the number of adjacent observation stations and applying the assimilation routine at three successive hours during the morning. Improvements in the correlation coefficient in the range of 0.1 to 0.21 between the results from the reference and the optimal configuration of the data assimilation algorithm, were found. The data assimilation algorithm will in the future be used in the operational THOR integrated air pollution forecast system, which includes the DEOM.
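A single statistical-interpolation analysis step has a compact generic form; the sketch below uses a matrix observation operator (DEOM's operational configuration, with Hollingsworth-estimated covariances and varying correlation lengths, is more elaborate):

```python
import numpy as np

def optimal_interpolation(xb, B, H, y, R):
    """One statistical-interpolation (optimal interpolation) analysis step.

    xb: background model state, B: background error covariance,
    H: observation operator, y: observations, R: observation error
    covariance. Returns the analysis xa = xb + K (y - H xb).
    """
    S = H @ B @ H.T + R             # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)  # gain matrix
    return xb + K @ (y - H @ xb)
```

Observed-minus-forecast innovations are thus spread onto the model grid with weights set by the covariance structure, which is why the correlation-length choices above matter.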
A simple algorithm for computing positively weighted straight skeletons of monotone polygons
Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter
2015-01-01
We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon. PMID:25648376
A simple algorithm for computing positively weighted straight skeletons of monotone polygons.
Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter
2015-02-01
We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon.
A simple algorithm for calculating the area of an arbitrary polygon
Directory of Open Access Journals (Sweden)
K.R. Wijeweera
2017-06-01
Full Text Available Computing the area of an arbitrary polygon is a popular problem in pure mathematics. The two methods used are the Shoelace Method (SM) and the Orthogonal Trapezoids Method (OTM). In OTM, the polygon is partitioned into trapezoids by drawing either horizontal or vertical lines through its vertices. The area of each trapezoid is computed and the resultant areas are added up. In SM, a formula which is a generalization of Green's Theorem for the discrete case is used. Most of the available systems are based on SM. Since an algorithm for OTM is not available in the literature, this paper proposes an algorithm for OTM along with an efficient implementation. Conversion of a pure mathematical method into an efficient computer program is not straightforward. In order to reduce the run time, minimal computation needs to be achieved. Handling indeterminate forms and special cases separately can support this. On the other hand, precision error should also be avoided. A salient feature of the proposed algorithm is that it successfully handles these situations while achieving minimum run time. Experimental results of the proposed method are compared against those of the existing algorithm. Moreover, the proposed algorithm suggests a way to partition a polygon into orthogonal trapezoids, which is not an easy task. Additionally, the proposed algorithm uses only basic mathematical concepts, while Green's Theorem uses complicated mathematical concepts. The proposed algorithm can be used when simplicity is more important than speed.
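For reference, the Shoelace Method mentioned above reduces to a few lines; this is the textbook formula, not the paper's OTM implementation.

```python
def shoelace_area(vertices):
    """Shoelace formula: half the absolute value of the signed sum
    of cross-products of consecutive vertices (either orientation)."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]   # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

print(shoelace_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0
```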
Directory of Open Access Journals (Sweden)
Rozana Alik
2016-03-01
Full Text Available This paper presents a simple checking algorithm for the maximum power point tracking (MPPT) technique for photovoltaic (PV) systems using the Perturb and Observe (P&O) algorithm. The main benefit of this checking algorithm is the simplicity and efficiency of the system: the duty cycle produced by the MPPT is smoother and changes faster according to the maximum power point (MPP). This checking algorithm can determine the maximum power first, before the P&O algorithm takes place, to identify the voltage at the MPP (VMPP), which is needed to calculate the duty cycle for the boost converter. To test the effectiveness of the algorithm, a simulation model of the PV system has been carried out using MATLAB/Simulink under different levels of irradiation, i.e. partially shaded conditions of the PV array. The results from the system using the proposed approach show a faster response and lower ripple. Besides, the results are close to the desired outputs and exhibit a system efficiency of approximately 98.25%. On the other hand, the system with the conventional P&O MPPT appears unstable and has a higher percentage of error. In summary, the proposed method is useful under varying levels of irradiation, with higher system efficiency.
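A minimal sketch of the basic P&O rule referred to above (without the proposed checking step, and with a toy power curve standing in for a real PV model):

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.01):
    """One P&O step: if the last perturbation raised the power, keep
    perturbing the voltage in the same direction, otherwise reverse."""
    if (p - p_prev) * (v - v_prev) > 0:
        return v + step      # power rose with voltage: keep increasing
    return v - step          # power fell: step back

# Toy power-voltage curve with its maximum power point at v = 0.6
# (illustrative stand-in, not a real PV characteristic):
power = lambda v: 1.0 - (v - 0.6) ** 2

v_prev, v = 0.30, 0.31
for _ in range(100):
    v_next = perturb_and_observe(v, power(v), v_prev, power(v_prev))
    v_prev, v = v, v_next
print(round(v, 2))   # settles into a small oscillation around the MPP at 0.6
```

The steady-state oscillation around the MPP is exactly the "ripple" the checking algorithm above aims to reduce.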
International Nuclear Information System (INIS)
Tougaard, Sven
2003-01-01
It is well known that, due to inelastic electron scattering, the measured x-ray photoelectron spectroscopy peak intensity depends strongly on the in-depth atom distribution. Quantification based only on the peak intensity can therefore give large errors. The problem was basically solved by developing algorithms for the detailed analysis of the energy distribution of emitted electrons. These algorithms have been extensively tested experimentally and found to be able to determine the depth distribution of atoms with nanometer resolution. Practical application of these algorithms has increased after ready-to-use software packages were made available, and they are now being used in laboratories worldwide. These software packages are easy to use but need operator interaction. They are not well suited for automatic data processing, and there is an additional need for simplified quantification strategies that can be automated. In this article we report on a very simple algorithm. It is a slightly more accurate version of our previous algorithm. The algorithm gives the amount of atoms within the outermost three inelastic mean free paths and also gives a rough estimate of the in-depth distribution. An experimental example of its application is also presented.
The simple mono-canal algorithm for the temperature estimating of ...
African Journals Online (AJOL)
30 June 2010 ... the brightness temperature (Tb) at the sensor level. This algorithm ..... texture attributes and the fusion of segmentations: application to the zone ... retrieved from thermal infrared single-channel remote sensing data. 2004 ...
Directory of Open Access Journals (Sweden)
Huan Zhou
2017-09-01
Full Text Available Aimed at solving the problem of decreased filtering precision in maneuvering target tracking caused by non-Gaussian distributions and sensor faults, we developed an efficient interacting multiple model-unscented Kalman filter (IMM-UKF) algorithm. By dividing the IMM-UKF into two links, the algorithm introduces the cubature principle to approximate the probability density of the random variable after the interaction, considering the external link of the IMM-UKF; this constitutes the cubature-principle-assisted IMM method (CPIMM) for solving the non-Gaussian problem and leads to an adaptive matrix to balance the contribution of the state. The algorithm provides filtering solutions by considering the internal link of the IMM-UKF, called the new adaptive UKF algorithm (NAUKF), to address sensor faults. The proposed CPIMM-NAUKF is evaluated in a numerical simulation and two practical experiments, including one navigation experiment and one maneuvering target tracking experiment. The simulation and experiment results show that the proposed CPIMM-NAUKF has greater filtering precision and faster convergence than the existing IMM-UKF. The proposed algorithm achieves a very good tracking performance, and will be effective and applicable in the field of maneuvering target tracking.
Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands
Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas
2010-01-01
This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form, to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm, which includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).
International Nuclear Information System (INIS)
Yeh, W.-C.
2004-01-01
A MP/minimal cutset (MC) is a path/cut set such that if any edge is removed from this path/cut set, then the remaining set is no longer a path/cut set. An intuitive method is proposed to evaluate the reliability in terms of MCs in a stochastic-flow network subject to both edge and node failures, under the condition that all of the MCs are given in advance. This extends the best-known algorithms for the d-MC problem (a special MC formatted as a system-state vector, where d is the lower bound of the system capacity level) from stochastic-flow networks without unreliable nodes to networks with unreliable nodes, by introducing some simple concepts first developed in the literature to reduce the number of d-MC candidates. The method is more efficient than the best-known existing algorithms regardless of whether or not the network has unreliable nodes. Two examples illustrate how the reliability is determined using the proposed algorithm in networks with and without unreliable nodes. The computational complexity of the proposed algorithm is analyzed and compared with the existing methods.
Actuator Disc Model Using a Modified Rhie-Chow/SIMPLE Pressure Correction Algorithm
DEFF Research Database (Denmark)
Rethore, Pierre-Elouan; Sørensen, Niels
2008-01-01
An actuator disc model for the flow solver EllipSys (2D & 3D) is proposed. It is based on a correction of the Rhie-Chow algorithm for using discrete body forces in a collocated-variable finite volume CFD code. It is compared with three cases where an analytical solution is known.
A new model and simple algorithms for multi-label mumford-shah problems
Hong, Byungwoo; Lu, Zhaojin; Sundaramoorthi, Ganesh
2013-01-01
is that the underlying variables: the labels and the functions are less coupled than in the original formulation, and the labels can be computed from the functions with more global updates. The resulting algorithms can be tuned to the desired level of locality
A new model and simple algorithms for multi-label mumford-shah problems
Hong, Byungwoo
2013-06-01
In this work, we address the multi-label Mumford-Shah problem, i.e., the problem of jointly estimating a partitioning of the domain of the image and functions defined within regions of the partition. We create algorithms that are efficient, robust to undesirable local minima, and easy to implement. Our algorithms are formulated by slightly modifying the underlying statistical model from which the multi-label Mumford-Shah functional is derived. The advantage of this statistical model is that the underlying variables (the labels and the functions) are less coupled than in the original formulation, and the labels can be computed from the functions with more global updates. The resulting algorithms can be tuned to the desired level of locality of the solution: from fully global updates to more local updates. We demonstrate our algorithm on two applications: joint multi-label segmentation and denoising, and joint multi-label motion segmentation and flow estimation. We compare with the state of the art in multi-label Mumford-Shah problems and show that we achieve more promising results. © 2013 IEEE.
Optimal Power Flow in Islanded Microgrids Using a Simple Distributed Algorithm
DEFF Research Database (Denmark)
Sanseverino, Eleonora Riva; Di Silvestre, Maria Luisa; Badalamenti, Romina
2015-01-01
In this paper, the problem of distributed power losses minimization in islanded distribution systems is dealt with. The problem is formulated in a very simple manner and a solution is reached after a few iterations. The considered distribution system, a microgrid, will not need large bandwidth co...
A simple algorithm for identifying periods of snow accumulation on a radiometer
Lapo, Karl E.; Hinkelman, Laura M.; Landry, Christopher C.; Massmann, Adam K.; Lundquist, Jessica D.
2015-09-01
Downwelling solar, Qsi, and longwave, Qli, irradiances at the earth's surface are the primary energy inputs for many hydrologic processes, and uncertainties in measurements of these two terms confound evaluations of estimated irradiances and negatively impact hydrologic modeling. Observations of Qsi and Qli in cold environments are subject to conditions that create additional uncertainties not encountered in other climates, specifically the accumulation of snow on uplooking radiometers. To address this issue, we present an automated method for estimating these periods of snow accumulation. Our method is based on forest interception of snow and uses common meteorological observations. In this algorithm, snow accumulation must exceed a threshold to obscure the sensor and is only removed through scouring by wind or melting. The algorithm is evaluated at two sites representing different mountain climates: (1) Snoqualmie Pass, Washington (maritime) and (2) the Senator Beck Basin Study Area, Colorado (continental). The algorithm agrees well with time-lapse camera observations at the Washington site and with multiple measurements at the Colorado site, with 70-80% of observed snow accumulation events correctly identified. We suggest using the method for quality controlling irradiance observations in snow-dominated climates where regular, daily maintenance is not possible.
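The accumulation/scouring logic described above can be caricatured in a few lines; all thresholds below are illustrative placeholders, not the calibrated values used at the two study sites.

```python
def snow_flag_series(precip, temp, wind, accum_thresh=2.0,
                     wind_thresh=5.0, melt_temp=0.5):
    """Flag time steps where snow likely obscures an uplooking radiometer.

    Illustrative thresholds only.  precip: snowfall per step (mm w.e.),
    temp: air temperature (deg C), wind: wind speed (m/s).
    """
    flags, load = [], 0.0
    for p, t, w in zip(precip, temp, wind):
        if t < melt_temp:
            load += p                      # snow accumulates on the dome
        if w > wind_thresh or t >= melt_temp:
            load = 0.0                     # scoured by wind or melted off
        flags.append(load > accum_thresh)  # enough snow to obscure sensor
    return flags

# Cold snowfall builds up, then a windy step clears the dome:
print(snow_flag_series(precip=[1, 2, 0, 0], temp=[-5, -5, -5, 2],
                       wind=[1, 1, 10, 1]))   # [False, True, False, False]
```

Flagged periods would then be excluded (or gap-filled) during quality control of the irradiance record.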
Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia
2017-04-01
Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The 'predictAL-10' risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the 'predictAL-9'), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. © British Journal of General Practice 2017.
A simple but usually fast branch-and-bound algorithm for the capacitated facility location problem
DEFF Research Database (Denmark)
Görtz, Simon; Klose, Andreas
2012-01-01
This paper presents a simple branch-and-bound method based on Lagrangean relaxation and subgradient optimization for solving large instances of the capacitated facility location problem (CFLP) to optimality. To guess a primal solution to the Lagrangean dual, we average solutions to the Lagrangean subproblem. Branching decisions are then based on this estimated (fractional) primal solution. Extensive numerical results reveal that the method is much faster and more robust than other state-of-the-art methods for solving the CFLP exactly.
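The Lagrangean-dual machinery referred to above (subgradient ascent plus averaging of subproblem solutions) can be illustrated on a toy convex problem; this is a generic sketch, not the paper's CFLP-specific method, and the step length is a Polyak-type choice tuned to this toy example.

```python
def subgradient_dual(steps=200):
    """Subgradient ascent on the Lagrangean dual of the toy problem
        min x^2  subject to  x >= 3,
    with a running (ergodic) average of the relaxed solutions as a
    primal guess -- the averaging idea used for branching above."""
    lam, x_avg = 0.0, 0.0
    for k in range(1, steps + 1):
        x = lam / 2.0                   # minimiser of x^2 + lam*(3 - x)
        g = 3.0 - x                     # subgradient of the dual at lam
        lam = max(0.0, lam + 2.0 * g / k)   # diminishing step, lam >= 0
        x_avg += (x - x_avg) / k        # running average of subproblem solutions
    return lam, x_avg

lam, x_avg = subgradient_dual()
print(lam, x_avg)   # dual multiplier -> 6, averaged primal -> 3
```

Individual subproblem solutions can oscillate, but their average converges to a sensible (here optimal) primal point, which is what makes it a useful basis for branching decisions.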
Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin
2018-04-18
Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee global convergence of the proposed algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and the structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
Yeo, Lami; Romero, Roberto; Jodicke, Cristiano; Oggè, Giovanna; Lee, Wesley; Kusanovic, Juan Pedro; Vaisbuch, Edi; Hassan, Sonia S.
2010-01-01
Objective To describe a novel and simple algorithm (FAST Echo: Four chamber view And Swing Technique) to visualize standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). Methods We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) “swings” through the ductal arch image (“swing technique”), providing an infinite number of cardiac planes in sequence. Each line generated the following plane(s): 1) Line 1: three-vessels and trachea view; 2) Line 2: five-chamber view and long axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); 3) Line 3: four-chamber view; and 4) “Swing” line: three-vessels and trachea view, five-chamber view and/or long axis view of the aorta, four-chamber view, and stomach. The algorithm was then tested in 50 normal hearts (15.3 – 40 weeks of gestation) and visualization rates for cardiac diagnostic planes were calculated. To determine if the algorithm could identify planes that departed from the normal images, we tested the algorithm in 5 cases with proven congenital heart defects. Results In normal cases, the FAST Echo algorithm (3 locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long axis view of the aorta, four-chamber view): 1) individually in 100% of cases [except for the three-vessel and trachea view, which was seen in 98% (49/50)]; and 2) simultaneously in 98% (49/50). The “swing technique” was able to generate the three-vessels and trachea view, five
Yeo, L; Romero, R; Jodicke, C; Oggè, G; Lee, W; Kusanovic, J P; Vaisbuch, E; Hassan, S
2011-04-01
To describe a novel and simple algorithm (four-chamber view and 'swing technique' (FAST) echo) for visualization of standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) 'swings' through the ductal arch image (swing technique), providing an infinite number of cardiac planes in sequence. Each line generates the following plane(s): (a) Line 1: three-vessels and trachea view; (b) Line 2: five-chamber view and long-axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); (c) Line 3: four-chamber view; and (d) 'swing line': three-vessels and trachea view, five-chamber view and/or long-axis view of the aorta, four-chamber view and stomach. The algorithm was then tested in 50 normal hearts in fetuses at 15.3-40 weeks' gestation and visualization rates for cardiac diagnostic planes were calculated. To determine whether the algorithm could identify planes that departed from the normal images, we tested the algorithm in five cases with proven congenital heart defects. In normal cases, the FAST echo algorithm (three locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long-axis view of the aorta, four-chamber view) individually in 100% of cases (except for the three-vessels and trachea view, which was seen in 98% (49/50)) and simultaneously in 98% (49/50). The swing technique was able to generate the three-vessels and trachea view, five-chamber view and/or long
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics, which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming...
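Euclid's algorithm itself, mentioned in the snippet above, is a few lines in any programming language:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)
    until the remainder vanishes; the last nonzero value is the GCD."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 198))   # 18
```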
Wojdyga, Krzysztof; Malicki, Marcin
2017-11-01
The constant drive to improve energy efficiency forces activities aimed at reducing energy consumption and hence the amount of pollutant emissions to the atmosphere. Cooling demand, both for air conditioning and process cooling, plays an increasingly important role in the balance of the Polish electricity generation and distribution system in summer. In recent years, demand for electricity during the summer months has been increasing steadily and significantly, leading to deficits of energy availability during particularly hot periods. This has increased the importance of, and interest in, trigeneration power sources and heat recovery systems producing chilled water. The key component of such a system is a thermally driven chiller, most often an absorption chiller based on a lithium bromide and water mixture. Absorption cooling systems also exist in Poland as stand-alone systems, supplied with heat from various sources, generated solely for them or recovered as waste or otherwise unused energy. The publication presents a simple algorithm, designed to reduce the amount of heat supplied to absorption chillers producing chilled water for air conditioning purposes by reducing the temperature of the cooling water, and its impact on decreasing emissions of harmful substances into the atmosphere. The scale of the environmental benefits has been rated for specific sources, which enabled an estimate of the effect of implementing the simple algorithm at sources existing nationally.
Enhancing a Simple MODIS Cloud Mask Algorithm for the Landsat Data Continuity Mission
Wilson, Michael J.; Oreopoulos, Lazarous
2011-01-01
The presence of clouds in images acquired by the Landsat series of satellites is usually an undesirable, but generally unavoidable, fact. With the emphasis of the program being on land imaging, the suspended liquid/ice particles of which clouds are made fully or partially obscure the desired observational target. Knowing the amount and location of clouds in a Landsat scene is therefore valuable information for scene selection, for making clear-sky composites from multiple scenes, and for scheduling future acquisitions. The two instruments in the upcoming Landsat Data Continuity Mission (LDCM) will include new channels that will enhance our ability to detect high clouds, which are often also thin in the sense that a large fraction of solar radiation can pass through them. This work studies the potential impact of these new channels on enhancing LDCM's cloud detection capabilities compared to previous Landsat missions. We revisit a previously published scheme for cloud detection and add new tests to capture more of the thin clouds that are harder to detect with the more limited arsenal of channels. Since there are no Landsat data yet that include the new LDCM channels, we resort to data from another instrument, MODIS, which has these bands as well as the other bands of LDCM, to test the capabilities of our new algorithm. By comparing our revised scheme's performance against that of the official MODIS cloud detection scheme, we conclude that the new scheme performs better than the earlier scheme, which was not very good at thin cloud detection.
Entropic sampling of simple polymer models within Wang-Landau algorithm
International Nuclear Information System (INIS)
Vorontsov-Velyaminov, P N; Volkov, N A; Yurchenko, A A
2004-01-01
In this paper we apply a new simulation technique proposed in Wang and Landau (WL) (2001 Phys. Rev. Lett. 86 2050) to sampling of three-dimensional lattice and continuous models of polymer chains. Distributions obtained by homogeneous (unconditional) random walks are compared with results of entropic sampling (ES) within the WL algorithm. While homogeneous sampling gives reliable results typically in the range of 4-5 orders of magnitude, the WL entropic sampling yields them in the range of 20-30 orders and even larger with comparable computer effort. A combination of homogeneous and WL sampling provides reliable data for events with probabilities down to 10⁻³⁵. For the lattice model we consider both the athermal case (self-avoiding walks, SAWs) and the thermal case when an energy is attributed to each contact between nonbonded monomers in a self-avoiding walk. For short chains the simulation results are checked by comparison with the exact data. In WL calculations for chain lengths up to N = 300, scaling relations for SAWs are well reproduced. In the thermal case the distribution over the number of contacts is obtained in the N-range up to N = 100 and the canonical averages - internal energy, heat capacity, excess canonical entropy, mean square end-to-end distance - are calculated as a result over a wide temperature range. The continuous model is studied in the athermal case. By sorting conformations of a continuous phantom freely jointed N-bonded chain with a unit bond length over a stochastic variable, the minimum distance between nonbonded beads, we determine the probability distribution for the N-bonded chain with hard sphere monomer units over its diameter a in the complete diameter range, 0 ≤ a ≤ 2, within a single ES run. This distribution provides us with the excess specific entropy for a set of diameters a in this range. Calculations were made for chain lengths up to N = 100 and results were extrapolated to N → ∞ for a in the range 0 ≤ a ≤ 1.25.
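The Wang-Landau idea used above can be demonstrated on an even simpler system than a polymer chain; the toy below estimates the density of states of n independent two-state units (exactly the binomial coefficients), not the paper's chain models, and the flatness criterion and sweep length are illustrative choices.

```python
import math, random

def wang_landau(n=10, lnf_final=1e-4, flatness=0.8):
    """Wang-Landau estimate of the density of states g(E) for a toy
    system of n two-state units, where E = number of 'up' units
    (so the exact answer is g(E) = C(n, E))."""
    random.seed(1)
    lng = [0.0] * (n + 1)          # ln g(E), known only up to a constant
    state = [0] * n
    E, lnf = 0, 1.0
    while lnf > lnf_final:
        hist = [0] * (n + 1)
        for _ in range(20000):
            i = random.randrange(n)                   # propose one flip
            E_new = E + (1 if state[i] == 0 else -1)
            # Accept with probability min(1, g(E)/g(E_new)):
            if random.random() < math.exp(min(0.0, lng[E] - lng[E_new])):
                state[i] ^= 1
                E = E_new
            lng[E] += lnf                             # update visited level
            hist[E] += 1
        if min(hist) > flatness * sum(hist) / len(hist):
            lnf /= 2.0             # histogram flat enough: refine f
    # Normalise so that g(0) = 1 (exactly one all-down state):
    return [math.exp(v - lng[0]) for v in lng]

g = wang_landau()
print([round(x) for x in g])       # approaches the binomials C(10, E)
```

Because the walk is biased by 1/g(E), rare states (here the single all-up state) are visited as often as common ones, which is exactly what gives ES its enormous dynamic range.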
Development of a Two-Phase Flow Analysis Code based on an Unstructured-Mesh SIMPLE Algorithm
Energy Technology Data Exchange (ETDEWEB)
Kim, Jong Tae; Park, Ik Kyu; Cho, Heong Kyu; Yoon, Han Young; Kim, Kyung Doo; Jeong, Jae Jun
2008-09-15
For analyses of multi-phase flows in a water-cooled nuclear power plant, a three-dimensional SIMPLE-algorithm based hydrodynamic solver, CUPID-S, has been developed. As governing equations, it adopts a two-fluid three-field model for the two-phase flows. The three fields represent a continuous liquid, a dispersed droplet field, and a vapour field. The governing equations are discretized by a finite volume method on an unstructured grid to handle the geometrical complexity of nuclear reactors. The phasic momentum equations are coupled and solved with a sparse block Gauss-Seidel matrix solver to increase numerical stability. The pressure correction equation, derived by summing the phasic volume fraction equations, is applied on the unstructured mesh in the context of a cell-centered co-located scheme. This paper presents the numerical method and the preliminary results of the calculations.
International Nuclear Information System (INIS)
Wigneron, J.P.; Chanzy, A.; Calvet, J.C.; Bruguier, N.
1995-01-01
A simple algorithm to retrieve soil moisture and vegetation water content from passive microwave measurements is analyzed in this study. The approach is based on a zeroth-order solution of the radiative transfer equations in a vegetation layer. In this study, the single scattering albedo accounts for scattering effects and two parameters account for the dependence of the optical thickness on polarization, incidence angle, and frequency. The algorithm requires only ancillary information about crop type and surface temperature. Retrievals of the surface parameters from two radiometric data sets acquired over a soybean and a wheat crop have been attempted. The model parameters have been fitted in order to achieve the best match between measured and retrieved surface data. The results of the inversion are analyzed for different configurations of the radiometric observations: one or several look angles, L-band, C-band, or (L-band and C-band). Sensitivity of the retrievals to the best-fit values of the model parameters has also been investigated. The best configurations, requiring simultaneous measurements at L- and C-band, produce retrievals of soil moisture and biomass with a 15% estimated precision (about 0.06 m³/m³ for soil moisture and 0.3 kg/m² for biomass) and exhibit a limited sensitivity to the best-fit parameters. (author)
Directory of Open Access Journals (Sweden)
Wing Kam Fung
2010-02-01
Full Text Available The case-control study is an important design for testing association between genetic markers and a disease. The Cochran-Armitage trend test (CATT) is one of the most commonly used statistics for the analysis of case-control genetic association studies. The asymptotically optimal CATT can be used when the underlying genetic model (mode of inheritance) is known. However, for most complex diseases, the underlying genetic models are unknown. Thus, tests robust to genetic model misspecification are preferable to the model-dependent CATT. Two robust tests, MAX3 and the genetic model selection (GMS), were recently proposed. Their asymptotic null distributions are often obtained by Monte-Carlo simulations, because they either have not been fully studied or involve multiple integrations. In this article, we study how the components of each robust statistic are correlated, and find a linear dependence among the components. Using this new finding, we propose simple algorithms to calculate asymptotic null distributions for MAX3 and GMS, which greatly reduce the computing intensity. Furthermore, we have developed the R package Rassoc implementing the proposed algorithms to calculate the empirical and asymptotic p values for MAX3 and GMS as well as other commonly used tests in case-control association studies. For illustration, Rassoc is applied to the analysis of case-control data for the 17 most significant SNPs reported in four genome-wide association studies.
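The CATT statistic discussed above, and the MAX3 idea of maximizing over model-specific scores, can be sketched directly. Note that, as the article stresses, the null distribution of MAX3 is not standard normal because the three component statistics are correlated; the asymptotic formula below applies only to a single CATT.

```python
import math

def catt_z(cases, controls, scores=(0, 1, 2)):
    """Cochran-Armitage trend test Z statistic for a 2 x 3 genotype table.

    cases/controls: counts per genotype; scores: genotype scores
    ((0,1,2) = additive, (0,1,1) = dominant, (0,0,1) = recessive).
    """
    n = [a + b for a, b in zip(cases, controls)]       # genotype totals
    N, R = sum(n), sum(cases)
    mean_x = sum(x * ni for x, ni in zip(scores, n)) / N
    # Score statistic: observed minus expected case counts, weighted:
    U = sum(x * (r - R * ni / N) for x, r, ni in zip(scores, cases, n))
    var = R * (N - R) / N * sum(ni * (x - mean_x) ** 2
                                for x, ni in zip(scores, n)) / N
    return U / math.sqrt(var)

cases, controls = [20, 40, 60], [60, 40, 20]
z = catt_z(cases, controls)
p = math.erfc(abs(z) / math.sqrt(2))          # two-sided asymptotic p-value
# MAX3 takes the largest |Z| over the three model-specific score sets:
max3 = max(abs(catt_z(cases, controls, s))
           for s in [(0, 0, 1), (0, 1, 2), (0, 1, 1)])
```

Obtaining a valid p value for `max3` is exactly the problem the article's linear-dependence result addresses.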
Directory of Open Access Journals (Sweden)
A. G. Xia
2011-07-01
A new method is proposed to simplify complex atmospheric chemistry reaction schemes, while preserving SOA formation properties, using genetic algorithms. The method is first applied in this study to the gas-phase α-pinene oxidation scheme. The simple unified volatility-based scheme (SUVS) reflects the multi-generation evolution of chemical species from a near-explicit master chemical mechanism (MCM) and, at the same time, uses volatility-basis-set speciation for condensable products. The SUVS also unifies reactions between SOA precursors and different oxidants under different atmospheric conditions. A total of 412 unknown parameters (product yields of parameterized products, reaction rates, etc.) of the SUVS are estimated by using genetic algorithms operating on the detailed mechanism. The number of organic species was reduced from 310 in the detailed mechanism to 31 in the SUVS. Output species profiles, obtained from the original subset of the MCM reaction scheme for α-pinene oxidation, are reproduced with a maximum fractional error of 0.10 for scenarios under a wide range of ambient HC/NOx conditions. Ultimately, the same SUVS with updated parameters could be used to describe SOA formation from different precursors.
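A genetic-algorithm fit of reduced-scheme parameters against a detailed reference mechanism can be sketched with a toy real-coded GA. The operators (truncation selection, averaging crossover, Gaussian point mutation) and the 2-parameter toy "mechanism" are illustrative assumptions, not the study's setup:

```python
import random

def fit_parameters(model, target, n_params, bounds=(0.0, 1.0),
                   pop=40, gens=200, seed=1):
    """Toy real-coded genetic algorithm for fitting unknown model parameters.

    `model(params)` returns predictions that are compared to `target`
    (the detailed mechanism's output) by a sum-of-squares error.
    """
    rng = random.Random(seed)
    lo, hi = bounds

    def err(p):
        return sum((a - b) ** 2 for a, b in zip(model(p), target))

    popn = [[rng.uniform(lo, hi) for _ in range(n_params)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=err)
        elite = popn[: pop // 2]                         # truncation selection
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # averaging crossover
            i = rng.randrange(n_params)                  # Gaussian point mutation
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        popn = elite + children
    return min(popn, key=err)

# Recover the parameters of a toy 2-parameter "mechanism".
true = [0.3, 0.7]
model = lambda p: [p[0] + p[1], p[0] * p[1], p[0] - p[1]]
best = fit_parameters(model, model(true), n_params=2)
```

The real application replaces the toy model with the SUVS forward run and the target with MCM output profiles, over 412 parameters rather than 2.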
Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Joana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen
2016-04-01
Processes driving methane (CH4) emissions in wetland ecosystems are highly complex. In particular, separating CH4 emissions into ebullition- and diffusion-derived flux components, a prerequisite for mechanistic process understanding and the identification of potential environmental drivers, is rather challenging. We present a simple calculation algorithm, based on an adaptive R-script, which separates open-water, closed-chamber CH4 flux measurements into diffusion- and ebullition-derived components. Hence, flux-component-specific dynamics are revealed and potential environmental drivers identified. Flux separation is based on a statistical approach, using ebullition-related sudden concentration changes obtained during high-resolution CH4 concentration measurements. By applying the lower and upper quartiles ± the interquartile range (IQR) as a variable threshold, diffusion-dominated periods of the flux measurement are filtered. Subsequently, flux calculation and separation are performed. The algorithm was verified in a laboratory experiment and tested under field conditions, using flux measurement data (July to September 2013) from a flooded former fen grassland site. Erratic ebullition events contributed 46% of total CH4 emissions, which is comparable to values reported in the literature. Additionally, a shift in the diurnal trend of diffusive fluxes throughout the measurement period, driven by the water temperature gradient, was revealed.
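The IQR-threshold idea can be sketched as follows. This is a simplified stand-in for the published adaptive R-script, with a synthetic concentration series and arbitrary units:

```python
def separate_fluxes(conc, dt=1.0):
    """Split a closed-chamber CH4 concentration series into diffusion- and
    ebullition-dominated components.

    Per-step concentration changes outside [Q1 - IQR, Q3 + IQR] are treated
    as ebullition events; the rest are diffusion-dominated. Returns mean
    rates per unit time over the whole deployment.
    """
    deltas = [b - a for a, b in zip(conc, conc[1:])]
    s = sorted(deltas)
    q1, q3 = s[len(s) // 4], s[(3 * len(s)) // 4]
    iqr = q3 - q1
    lo, hi = q1 - iqr, q3 + iqr
    diff = [d for d in deltas if lo <= d <= hi]      # smooth diffusive steps
    ebul = [d for d in deltas if d < lo or d > hi]   # sudden jumps -> bubbles
    return sum(diff) / len(deltas) / dt, sum(ebul) / len(deltas) / dt

# Steady 0.125-per-step diffusive rise with two +5 bubble events:
conc, c = [], 0.0
for i in range(20):
    c += 5.0 if i in (8, 15) else 0.125
    conc.append(c)
d_flux, e_flux = separate_fluxes(conc)
```

Here the two large jumps are classified as ebullition and contribute 10/19 concentration units per step, while the residual slow rise is attributed to diffusion.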
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
Aiyoshi, Eitaro; Masuda, Kazuaki
On the basis of market fundamentalism, new types of social systems based on the market mechanism, such as electricity trading markets and carbon dioxide (CO2) emission trading markets, have been developed. However, few textbooks in science and technology explain that Lagrange multipliers can be interpreted as market prices. This tutorial paper explains how (1) the steepest descent method for dual problems in optimization, and (2) the Gauss-Seidel method for solving the stationary conditions of Lagrange problems with market principles, can formulate the mechanism of market pricing, which works even in the information-oriented modern society. The authors expect readers to acquire basic knowledge of optimization theory and algorithms related to economics and to utilize them in designing the mechanisms of more complicated markets.
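The multiplier-as-price interpretation can be sketched with a dual steepest-ascent iteration on a toy allocation problem: the price (Lagrange multiplier on the supply constraint) rises when aggregate demand exceeds supply and falls otherwise. The log-utility consumers and the step size are illustrative assumptions:

```python
def market_price(demands_at, supply, lam=1.0, step=0.05, iters=2000):
    """Dual (steepest-ascent) iteration: the Lagrange multiplier `lam` acts
    as a market price, updated in proportion to excess demand."""
    for _ in range(iters):
        excess = demands_at(lam) - supply   # aggregate demand minus supply
        lam = max(1e-9, lam + step * excess)  # tatonnement price update
    return lam

# Two log-utility consumers: agent i maximizes a_i*ln(x) - p*x,
# so its demand at price p is a_i / p.
a = (2.0, 3.0)
demand = lambda p: sum(ai / p for ai in a)
p_star = market_price(demand, supply=10.0)  # analytic equilibrium: 5/10 = 0.5
```

At the fixed point the market clears (demand equals supply), which is exactly the stationarity condition of the Lagrangian with respect to the multiplier.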
Directory of Open Access Journals (Sweden)
Cagdas Yilmaz
2017-05-01
The production of nanofiber tissue scaffolds is quite important for enhancing success in tissue engineering. The electrospinning method is frequently used to produce these scaffolds. In this study a simple and novel expression, derived using the artificial bee colony (ABC) optimization algorithm, is presented to calculate the average fiber diameter (AFD) of electrospun gelatin-bioactive glass (Gt-BG) scaffolds. The diameter of fibers produced by the electrospinning technique depends on various process, solution, and environmental parameters. Experimental results previously published in the literature, which include one solution parameter (BG content) and two process parameters (tip-to-collector distance and solution flow rate) related to the production of electrospun Gt-BG nanofibers, have been used for the optimization process. First, the AFD expression was constructed from the solution and process parameters, and then the unknown coefficients of this expression were accurately determined using the ABC algorithm. Of the 19 experimental data sets, 15 were used in the optimization phase while the remaining 4 were used in the verification phase. The average percentage errors between the calculated and experimental average fiber diameters were 2.2% and 5.7% for the optimization and verification phases, respectively. The results obtained from the proposed expression have also been confirmed by comparison with those of an AFD expression reported elsewhere. It is illustrated that the AFD of electrospun Gt-BG can be accurately calculated by the proposed expression without requiring any complicated or sophisticated knowledge of the mathematical and physical background.
A simple, practical and complete O(n^3/log(n))-time Algorithm for RNA folding using the Four-Russians Speedup
Directory of Open Access Journals (Sweden)
Gusfield Dan
2010-01-01
Abstract Background The problem of computationally predicting the secondary structure (or folding) of RNA molecules was first introduced more than thirty years ago and yet continues to be an area of active research and development. The basic RNA-folding problem of finding a maximum cardinality, non-crossing matching of complementary nucleotides in an RNA sequence of length n has an O(n^3)-time dynamic programming solution that is widely applied. It is known that an o(n^3) worst-case time solution is possible, but the published and suggested methods are complex and have not been established to be practical. Significant practical improvements to the original dynamic programming method have been introduced, but they retain the O(n^3) worst-case time bound when n is the only problem parameter used in the bound. Surprisingly, the most widely used general technique to achieve a worst-case (and often practical) speed-up of dynamic programming, the Four-Russians technique, has not been previously applied to the RNA-folding problem. This is perhaps due to technical issues in adapting the technique to RNA folding. Results In this paper, we give a simple, complete, and practical Four-Russians algorithm for the basic RNA-folding problem, achieving a worst-case time bound of O(n^3/log(n)). Conclusions We show that this time bound can also be obtained for richer nucleotide-matching scoring schemes, and that the method achieves consistent speed-ups in practice. The contribution is both theoretical and practical, since the basic RNA-folding problem is often solved multiple times in the inner loop of more complex algorithms, and for long RNA molecules in the study of RNA virus genomes.
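The basic O(n^3) dynamic program that the Four-Russians method accelerates (maximum non-crossing matching of complementary nucleotides, i.e. the Nussinov-style recursion) can be sketched as:

```python
def nussinov(seq, min_loop=0):
    """Basic O(n^3) dynamic program for the maximum number of non-crossing
    complementary base pairs in an RNA sequence (Watson-Crick pairs only)."""
    pair = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # base j left unpaired
            for k in range(i, j - min_loop):     # base j paired with base k
                if (seq[k], seq[j]) in pair:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

count = nussinov("GGGAAACCC")  # three nested G-C pairs
```

The inner loop over k is the O(n) step that the Four-Russians table lookup compresses to O(n/log n), giving the paper's overall bound.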
Lyamuya, Eligius F; Aboud, Said; Urassa, Willy K; Sufi, Jaffer; Mbwana, Judica; Ndugulile, Faustin; Massambu, Charles
2009-02-18
Suitable algorithms based on a combination of two or more simple rapid HIV assays have been shown to have a diagnostic accuracy comparable to double enzyme-linked immunosorbent assay (ELISA) or double ELISA with Western Blot strategies. The aims of this study were to evaluate the performance of five simple rapid HIV assays using whole blood samples from HIV-infected patients, pregnant women, voluntary counseling and testing attendees and blood donors, and to formulate an alternative confirmatory strategy based on rapid HIV testing algorithms suitable for use in Tanzania. Five rapid HIV assays: Determine HIV-1/2 (Inverness Medical), SD Bioline HIV 1/2 3.0 (Standard Diagnostics Inc.), First Response HIV Card 1-2.0 (PMC Medical India Pvt Ltd), HIV1/2 Stat-Pak Dipstick (Chembio Diagnostic System, Inc) and Uni-Gold HIV-1/2 (Trinity Biotech) were evaluated between June and September 2006 using 1433 whole blood samples from hospital patients, pregnant women, voluntary counseling and testing attendees and blood donors. All samples that were reactive on all or any of the five rapid assays and 10% of non-reactive samples were tested on a confirmatory Inno-Lia HIV I/II immunoblot assay (Immunogenetics). Three hundred and ninety samples were confirmed HIV-1 antibody positive, while 1043 were HIV negative. The sensitivity at initial testing of Determine, SD Bioline and Uni-Gold was 100% (95% CI; 99.1-100) while First Response and Stat-Pak had sensitivity of 99.5% (95% CI; 98.2-99.9) and 97.7% (95% CI; 95.7-98.9), respectively, which increased to 100% (95% CI; 99.1-100) on repeat testing. The initial specificity of the Uni-Gold assay was 100% (95% CI; 99.6-100) while specificities were 99.6% (95% CI; 99-99.9), 99.4% (95% CI; 98.8-99.7), 99.6% (95% CI; 99-99.9) and 99.8% (95% CI; 99.3-99.9) for Determine, SD Bioline, First Response and Stat-Pak assays, respectively. No sample was concordantly false positive on the Uni-Gold, Determine and SD Bioline assays.
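Under an assumption that the two assays err independently (an idealisation; real assays can share false positives), the combined accuracy of a serial two-assay confirmatory algorithm can be sketched as:

```python
def serial_algorithm(se1, sp1, se2, sp2):
    """Serial two-test algorithm: the second assay only confirms initial
    reactives, so both must be reactive for a positive result.
    Returns (combined sensitivity, combined specificity), assuming
    independent errors."""
    return se1 * se2, 1.0 - (1.0 - sp1) * (1.0 - sp2)

def ppv(se, sp, prevalence):
    """Positive predictive value at a given prevalence (Bayes' rule)."""
    tp = se * prevalence
    fp = (1.0 - sp) * (1.0 - prevalence)
    return tp / (tp + fp)

# Illustrative numbers in the range reported above (not the exact estimates):
se, sp = serial_algorithm(1.00, 0.996, 1.00, 0.994)
ppv_5pct = ppv(se, sp, 0.05)
```

This shows why a serial algorithm is attractive: combined specificity climbs close to 1 while sensitivity is the product of the (already high) individual sensitivities.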
Mahalakshmi; Murugesan, R.
2018-04-01
This paper deals with the minimization of the total cost of greenhouse gas (GHG) efficiency in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost and discount cost of GHG emission of the AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage the positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the later stage the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, which shows that the proposed algorithm yields better results compared to the others.
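A minimal clonal-selection sketch (clone the best candidates, mutate the clones in inverse proportion to their rank, keep the improvements) illustrates the second stage on a toy cost function. The first-stage positive-selection pruning and all numeric settings here are illustrative assumptions, not the paper's:

```python
import random

def clonal_selection(cost, dim, bounds=(-5.0, 5.0), pop=20, gens=200, seed=0):
    """Minimal clonal-selection principle: rank antibodies by cost, give
    better ranks more clones and smaller mutation steps, then keep the
    best survivors (elitist selection)."""
    rng = random.Random(seed)
    lo, hi = bounds
    ab = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        ab.sort(key=cost)
        clones = []
        for rank, parent in enumerate(ab[: pop // 2]):
            for _ in range(pop // 2 - rank):        # better rank -> more clones
                sigma = 0.5 * (rank + 1) / pop      # better rank -> finer steps
                child = [min(hi, max(lo, x + rng.gauss(0, sigma)))
                         for x in parent]
                clones.append(child)
        ab = sorted(ab + clones, key=cost)[:pop]    # select survivors
    return ab[0]

# Toy stand-in for the GHG cost model: minimize a simple quadratic cost.
best = clonal_selection(lambda v: sum(x * x for x in v), dim=3)
```

In the paper's setting the cost function would instead evaluate the tax, penalty and discount costs of a candidate AS/RS configuration.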
Luger, Thomas J; Garoscio, Ivo; Rehder, Peter; Oberladstätter, Jürgen; Voelckel, Wolfgang
2008-06-01
In practice, trauma and orthopedic surgery during spinal anesthesia are often performed with routine urethral catheterization of the bladder to prevent an overdistention of the bladder. However, use of a catheter has inherent risks. Ultrasound examination of the bladder (Bladderscan) can precisely determine the bladder volume. Thus, the aim of this study was to identify parameters indicative of urinary retention after low-dose spinal anesthesia and to develop a simple algorithm for patient care. This prospective pilot study approved by the Ethics Committee enrolled 45 patients after obtaining their written informed consent. Patients who underwent arthroscopic knee surgery received low-dose spinal anesthesia with 1.4 ml 0.5% bupivacaine at level L3/L4. Bladder volume was measured by urinary bladder scanning at baseline, at the end of surgery and up to 4 h later. The incidence of spontaneous urination versus catheterization was assessed and the relative risk for catheterization was calculated. Mann-Whitney test, chi(2) test with Fischer Exact test and the relative odds ratio were performed as appropriate. *P 300 ml postoperatively had a 6.5-fold greater likelihood for urinary retention. In the management of patients with short-lasting spinal anesthesia for arthroscopic knee surgery we recommend monitoring bladder volume by Bladderscan instead of routine catheterization. Anesthesiologists or nurses under protocol should assess bladder volume preoperatively and at the end of surgery. If bladder volume is >300 ml, catheterization should be performed in the OR. Patients with a bladder volume of 500 ml.
Numerical Method based on SIMPLE Algorithm for a Two-Phase Flow with Non-condensable Gas
International Nuclear Information System (INIS)
Kim, Jong Tae
2009-08-01
In this study, a numerical method based on the SIMPLE algorithm for a two-phase flow with non-condensable gas has been developed in order to simulate thermal hydraulics in the containment of a nuclear power plant. As governing equations, it adopts a two-fluid three-field model for the two-phase flows. The three fields are gas, drops, and continuous liquid. The gas field can contain vapor and non-condensable gases such as air and hydrogen. In order to resolve mixing phenomena of gas species, gas transport equations for each species, based on the gas mass fractions, are solved together with the gas-phase governing equations for mass, momentum and energy. Methods to evaluate the properties of the gas species were implemented in the code: constant values or polynomial functions based on user input, and a property library from Chemkin and the JANAF table for gas specific heat. Properties of the gas mixture, which depend on the mole fractions of the gas species, were evaluated by a mixing rule.
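A mole-fraction-weighted mixing rule of the kind described can be sketched as follows; the specific-heat values are approximate textbook numbers near 300 K, used only for illustration and not taken from the code's property library:

```python
def mixture_property(mole_fractions, pure_values):
    """Mole-fraction-weighted mixing rule for a gas-mixture property
    (e.g. molar specific heat) from pure-species values."""
    assert abs(sum(mole_fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(x * v for x, v in zip(mole_fractions, pure_values))

# Vapor / air / hydrogen molar cp (approximate J/(mol K) values near 300 K):
cp_mix = mixture_property([0.5, 0.4, 0.1], [33.6, 29.1, 28.8])
```

More elaborate rules (e.g. mass-fraction weighting for specific heats per unit mass, or Wilke's rule for viscosity) follow the same pattern of combining pure-species values with composition.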
Hosker, Bill S.
2018-01-01
A highly simplified variation on the do-it-yourself spectrophotometer using a smartphone's light sensor as a detector and an app to calculate and display absorbance values was constructed and tested. This simple version requires no need for electronic components or postmeasurement spectral analysis. Calibration graphs constructed from two…
Indian Academy of Sciences (India)
ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...
International Nuclear Information System (INIS)
Guerra, J.G.; Rubiano, J.G.; Winter, G.; Guerra, A.G.; Alonso, H.; Arnedo, M.A.; Tejera, A.; Gil, J.M.; Rodríguez, R.; Martel, P.; Bolivar, J.P.
2015-01-01
The determination in a sample of the activity concentration of a specific radionuclide by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) for the energy of interest. The difficulties related to experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulation of the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself) and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process, by establishing a method for optimizing the parameters characterizing the detector through a computational procedure which could be reproduced at a standard research lab. This method consists in the determination of the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process, based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. - Highlights: • A computational method for characterizing an HPGe spectrometer has been developed. • Detector characterized using as reference photopeak efficiencies obtained experimentally or by Monte Carlo calibration. • The characterization obtained has been validated for samples with different geometries and composition. • Good agreement
Zaiwani, B. E.; Zarlis, M.; Efendi, S.
2018-03-01
This research improves on the hybridization of the Fuzzy Analytic Hierarchy Process (FAHP) with the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (FTOPSIS) used to select the best bank chief inspector based on several qualitative and quantitative criteria with various priorities. To improve performance, a hybridization of the FAHP algorithm with the Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW) algorithm was adopted, applying FAHP to the weighting process and SAW to the ranking process to determine the promotion of employees at a government institution. The average Efficiency Rate (ER) improved to 85.24%, which means that this research succeeded in improving on the previous result of 77.82%. Keywords: Ranking and Selection, Fuzzy AHP, Fuzzy TOPSIS, FMADM-SAW.
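The SAW ranking stage can be sketched as follows. The candidates, criteria and weights here are illustrative; in the hybrid scheme the weights would come from the FAHP stage:

```python
def saw_rank(matrix, weights, benefit):
    """Simple Additive Weighting: normalize each criterion column
    (max-based for benefit criteria, min-based for cost criteria),
    then rank alternatives by their weighted sums."""
    cols = list(zip(*matrix))
    scores = []
    for row in matrix:
        s = 0.0
        for j, (x, w) in enumerate(zip(row, weights)):
            norm = x / max(cols[j]) if benefit[j] else min(cols[j]) / x
            s += w * norm
        scores.append(s)
    order = sorted(range(len(matrix)), key=lambda i: -scores[i])
    return order, scores

# Three candidates; criteria: test score (benefit), cost (cost), years (benefit)
ranking, scores = saw_rank(
    [[80, 4, 5], [90, 6, 3], [70, 3, 8]],
    weights=[0.5, 0.2, 0.3], benefit=[True, False, True])
```

Candidate 2 wins here despite the lowest test score, because it dominates the cost and experience criteria after normalization.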
Indian Academy of Sciences (India)
algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).
Indian Academy of Sciences (India)
algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...
Artrith, Nongnuch; Urban, Alexander; Ceder, Gerbrand
2018-06-01
The atomistic modeling of amorphous materials requires structure sizes and sampling statistics that are challenging to achieve with first-principles methods. Here, we propose a methodology to speed up the sampling of amorphous and disordered materials using a combination of a genetic algorithm and a specialized machine-learning potential based on artificial neural networks (ANNs). We show for the example of the amorphous LiSi alloy that around 1000 first-principles calculations are sufficient for the ANN-potential assisted sampling of low-energy atomic configurations in the entire amorphous LixSi phase space. The obtained phase diagram is validated by comparison with the results from an extensive sampling of LixSi configurations using molecular dynamics simulations and a general ANN potential trained to ~45 000 first-principles calculations. This demonstrates the utility of the approach for the first-principles modeling of amorphous materials.
Bueno, Marta; Camacho, Carlos J; Sancho, Javier
2007-09-01
The bioinformatics revolution of the last decade has been instrumental in the development of empirical potentials to quantitatively estimate protein interactions for modeling and design. Although computationally efficient, these potentials hide most of the relevant thermodynamics in 5-to-40 parameters that are fitted against a large experimental database. Here, we revisit this longstanding problem and show that a careful consideration of the change in hydrophobicity, electrostatics, and configurational entropy between the folded and unfolded state of aliphatic point mutations predicts 20-30% fewer false positives and yields more accurate predictions than any published empirical energy function. This significant improvement is achieved with essentially no free parameters, validating past theoretical and experimental efforts to understand the thermodynamics of protein folding. Our first-principles analysis strongly suggests that both the solute-solute van der Waals interactions in the folded state and the electrostatic free energy change of exposed aliphatic mutations are almost completely compensated by similar interactions operating in the unfolded ensemble. Not surprisingly, the problem of properly accounting for the solvent contribution to the free energy of polar and charged group mutations, as well as of mutations that disrupt the protein backbone, remains open.
Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie
2018-01-01
As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station, time-sharing measurement with a laser tracker is introduced in this paper on the basis of the global positioning system (GPS) principle. For the proposed method, how to accurately determine the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, the analytical algorithm concerning base station calibration and measuring point determination is deduced without selecting an initial iterative value in the calculation. However, when the motion area of the machine tool is in a 2D plane, the coefficient matrix of the base station calibration is singular, which generates a distorted result. In order to overcome the limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and some iterative algorithms, such as the Gauss-Newton algorithm and the Levenberg-Marquardt algorithm. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the corresponding influence of measurement error on the calibration result of the base station, depending on the condition number of the coefficient matrix, is analyzed.
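The GPS-like determination of a measuring point from multi-station distance measurements can be sketched for the exactly-determined, noise-free case (four stations, three unknowns): subtracting the first station's range equation linearizes the system. Real use would solve an over-determined least-squares problem with more stations and noisy ranges; this sketch is not the paper's improved algorithm:

```python
def trilaterate(stations, dists):
    """Recover a 3D point from its distances to four known stations.

    Subtracting station 0's range equation |x - s_i|^2 = d_i^2 gives three
    linear equations 2(s_i - s_0).x = d_0^2 - d_i^2 + |s_i|^2 - |s_0|^2,
    solved here by Cramer's rule."""
    (x0, y0, z0), d0 = stations[0], dists[0]
    A, b = [], []
    for (xi, yi, zi), di in zip(stations[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0), 2 * (zi - z0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2 + zi**2 - z0**2)
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                     - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                     + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(A)
    sol = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        sol.append(det(M) / D)
    return sol

stations = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
p = (3.0, 4.0, 5.0)
dists = [sum((a - c) ** 2 for a, c in zip(p, s)) ** 0.5 for s in stations]
est = trilaterate(stations, dists)
```

The singularity discussed in the abstract corresponds to D approaching zero, e.g. when the measured points and stations lie near a common plane.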
International Nuclear Information System (INIS)
Hu Jicun; Tam, Kwok; Johnson, Roger H
2004-01-01
We derive and analyse a simple algorithm first proposed by Kudo et al (2001 Proc. 2001 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine (Pacific Grove, CA) pp 7-10) for long object imaging from truncated helical cone beam data via a novel definition of region of interest (ROI). Our approach is based on the theory of short object imaging by Kudo et al (1998 Phys. Med. Biol. 43 2885-909). One of the key findings in their work is that filtering of the truncated projection can be divided into two parts: one, finite in the axial direction, results from ramp filtering the data within the Tam window. The other, infinite in the z direction, results from unbounded filtering of ray sums over PI lines only. We show that for an ROI defined by PI lines emanating from the initial and final source positions on a helical segment, the boundary data which would otherwise contaminate the reconstruction of the ROI can be completely excluded. This novel definition of the ROI leads to a simple algorithm for long object imaging. The overscan of the algorithm is analytically calculated and it is the same as that of the zero boundary method. The reconstructed ROI can be divided into two regions: one is minimally contaminated by the portion outside the ROI, while the other is reconstructed free of contamination. We validate the algorithm with a 3D Shepp-Logan phantom and a disc phantom
Våge, Selina; Thingstad, T. Frede
2015-01-01
Trophic interactions are highly complex and modern sequencing techniques reveal enormous biodiversity across multiple scales in marine microbial communities. Within the chemically and physically relatively homogeneous pelagic environment, this calls for an explanation beyond spatial and temporal heterogeneity. Based on observations of simple parasite-host and predator-prey interactions occurring at different trophic levels and levels of phylogenetic resolution, we present a theoretical perspective on this enormous biodiversity, discussing in particular self-similar aspects of pelagic microbial food web organization. Fractal methods have been used to describe a variety of natural phenomena, with studies of habitat structures being an application in ecology. In contrast to mathematical fractals, where pattern-generating rules are readily known, identifying mechanisms that lead to natural fractals is not straightforward. Here we put forward the hypothesis that trophic interactions between pelagic microbes may be organized in a fractal-like manner, with the emergent network resembling the structure of the Sierpinski triangle. We discuss a mechanism that could underlie the formation of repeated patterns at different trophic levels and discuss how this may help understand characteristic biomass size-spectra that hint at scale-invariant properties of the pelagic environment. If the idea of simple underlying principles leading to a fractal-like organization of the pelagic food web could be formalized, this would extend an ecologist's mindset on how biological complexity could be accounted for. It may furthermore benefit ecosystem modeling by facilitating adequate model resolution across multiple scales.
Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca
2007-03-01
A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm-3) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm-3), normal lung (ρ = 0.20 g cm-3) and cortical bone tissue (ρ = 1.80 g cm-3). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm2) and elongated rectangular (2.8 × 13 cm2) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, a distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm-3), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.
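The gamma-index analysis used above can be sketched in 1D. The 3 mm distance-to-agreement matches the study's setting; the 3% global dose tolerance and the toy profiles are illustrative assumptions:

```python
def gamma_index(ref, ev, spacing, dta=3.0, dose_tol=0.03):
    """1D gamma analysis: for each reference sample, find the minimum
    combined distance-to-agreement / dose-difference metric over the
    evaluated profile. Dose tolerance is relative to the reference
    maximum (global normalization); gamma <= 1 counts as agreement."""
    dmax = max(ref)
    gammas = []
    for i, rd in enumerate(ref):
        best = float("inf")
        for j, ed in enumerate(ev):
            dist = (i - j) * spacing                   # spatial separation (mm)
            ddose = (ed - rd) / (dose_tol * dmax)      # normalized dose error
            best = min(best, ((dist / dta) ** 2 + ddose ** 2) ** 0.5)
        gammas.append(best)
    return gammas

ref = [1.0, 1.0, 1.0, 0.5, 0.0]          # reference (e.g. Monte Carlo) profile
ev = [1.0, 1.0, 0.98, 0.5, 0.0]          # evaluated profile, 2% error at one point
g = gamma_index(ref, ev, spacing=1.0)     # 1 mm grid
pass_rate = sum(x <= 1.0 for x in g) / len(g)
```

Distinguishing gamma statistics inside the inserts from those in uniform water, as the study does, amounts to computing this pass rate over the corresponding index ranges separately.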
Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil
2016-06-01
An efficient approach to estimate model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, has been presented. We have shown the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt at applying DE to the parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively) and the shape factors (q and η). The error energy maps generated for some parameter pairs have successfully revealed the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies have been evaluated with success via DE/best/1/bin, which is a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies have been considered, and the results obtained show the efficiency of the algorithm. Then, using the strategy applied in the synthetic examples, some field anomalies observed in various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base metal sulphide deposit (Quebec, Canada), have been considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm has also been investigated by the Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions of both synthetic and field data examples the algorithm has provided reliable parameter estimations being within the sampling limits of
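The DE/best/1/bin strategy named above can be sketched as follows. The toy "anomaly" misfit, a Lorentzian-like profile parameterized by amplitude, depth and origin, is an illustrative stand-in for the actual gravity forward model of the paper:

```python
import random

def de_best_1_bin(cost, bounds, pop=30, gens=300, f=0.7, cr=0.9, seed=2):
    """DE/best/1/bin: mutant = best + F*(r1 - r2), binomial crossover with
    a forced index, greedy one-to-one selection. `bounds` lists (low, high)
    per model parameter."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    costs = [cost(x) for x in xs]
    for _ in range(gens):
        best = xs[min(range(pop), key=lambda i: costs[i])]
        for i in range(pop):
            r1, r2 = rng.sample([j for j in range(pop) if j != i], 2)
            jrand = rng.randrange(dim)               # index forced to cross over
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if j == jrand or rng.random() < cr:
                    v = best[j] + f * (xs[r1][j] - xs[r2][j])
                    trial.append(min(hi, max(lo, v)))
                else:
                    trial.append(xs[i][j])
            c = cost(trial)
            if c <= costs[i]:                        # greedy selection
                xs[i], costs[i] = trial, c
    return min(zip(costs, xs))[1]

# Toy misfit: recover (A, z0, x0) of a synthetic Lorentzian-like "anomaly".
true = (100.0, 5.0, 2.0)
obs = [true[0] * true[1] / ((x - true[2]) ** 2 + true[1] ** 2)
       for x in range(-20, 21)]
misfit = lambda p: sum((o - p[0] * p[1] / ((x - p[2]) ** 2 + p[1] ** 2)) ** 2
                       for x, o in zip(range(-20, 21), obs))
best = de_best_1_bin(misfit, [(1, 500), (0.5, 20), (-10, 10)])
```

The error-energy maps discussed above correspond to evaluating this misfit over a grid of one parameter pair while holding the others fixed.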
International Nuclear Information System (INIS)
Wang Yanfang; Liu Li; Yan Yonglian; Shan Baoci; Tang Xiaowei
2007-01-01
A new algorithm for segmenting contour series of images is presented, which achieves three-dimensional reconstruction with parametric recognition in reverse engineering based on X-ray CT. First, to obtain the nesting relationship between contours, a ray cast at a certain angle is used. Second, to locate the contours within one slice, another approach is presented that generates the contour tree by scanning the relevant vector only once. Last, a judging algorithm is put forward to accomplish contour matching between slices by adopting qualitative and quantitative properties. The example shows that this algorithm can segment contour series of CT parts rapidly and precisely. (authors)
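The first step — deciding which contours nest inside which by casting a ray — is essentially even-odd point-in-polygon testing. A minimal sketch (the contour data and the smallest-container parent rule are illustrative assumptions, not the paper's implementation):

```python
def point_in_polygon(pt, poly):
    """Even-odd rule: cast a horizontal ray from pt and count crossings."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                  # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                       # crossing to the right
                inside = not inside
    return inside

def area(poly):
    """Shoelace formula (absolute value)."""
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))) / 2

def nesting_parents(contours):
    """For each contour, the index of the smallest contour containing it."""
    parents = []
    for i, c in enumerate(contours):
        holders = [j for j, other in enumerate(contours)
                   if j != i and point_in_polygon(c[0], other)]
        parents.append(min(holders, key=lambda j: area(contours[j]),
                           default=None))
    return parents

outer = [(0, 0), (10, 0), (10, 10), (0, 10)]
inner = [(2, 2), (8, 2), (8, 8), (2, 8)]
hole = [(4, 4), (6, 4), (6, 6), (4, 6)]
parents = nesting_parents([outer, inner, hole])
```

The parent links computed this way are exactly the edges of a contour tree for one slice.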
Energy Technology Data Exchange (ETDEWEB)
Spolaore, P.
2016-03-11
A simple analysis of gamma spectra selected to represent the performance of different detection systems, or, for one same system, different operation modes or states of progress of the system development, allows to compare the relative average-sensitivities of the represented systems themselves, as operated in the selected cases. The obtained SP figure-of-merit takes into account and correlates the main parameters commonly used to estimate the performance of a system. An example of application is given.
Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth
2015-01-01
Background Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress, in developing countries including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory Distress and Y=Yes) CPAP, a s...
Directory of Open Access Journals (Sweden)
Mark D McDonnell
Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
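The core ELM training step — a fixed random hidden layer, with only the linear readout solved in closed form — is easy to sketch. This toy version omits the paper's receptive-field sparsity and uses an assumed tanh nonlinearity, a small ridge term, and a synthetic two-class problem:

```python
import numpy as np

def elm_train(X, y, n_hidden=200, ridge=1e-3, seed=0):
    """Fix a random hidden layer; solve only the linear readout (ridge)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                   # random nonlinear features
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# synthetic two-class task: points inside vs. outside a circle of radius 0.6
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = np.where(np.hypot(X[:, 0], X[:, 1]) < 0.6, 1.0, -1.0)
W, b, beta = elm_train(X, y)
train_acc = float(np.mean(np.sign(elm_predict(X, W, b, beta)) == y))
```

Because training is one linear solve, the "∼10 minutes" figure quoted for MNIST-scale problems is plausible even with many thousands of hidden units.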
Energy Technology Data Exchange (ETDEWEB)
Hack, Christopher A.; Benner, W. Henry
2001-10-31
A simple mathematical technique for improving the mass calibration accuracy of linear delayed extraction matrix-assisted laser desorption ionization time-of-flight mass spectrometry (DE MALDI-TOF MS) spectra is presented. The method involves fitting a parabola to a plot of Δm vs. mass, where Δm is the difference between the theoretical mass of calibrants and the mass obtained from a linear relationship between the square root of m/z and ion time of flight. The quadratic equation that describes the parabola is then used to correct the mass of unknowns by subtracting the deviation predicted by the quadratic equation from the measured data. By subtracting the value of the parabola at each mass from the calibrated data, the accuracy of mass data points can be improved by a factor of 10 or more. This method produces highly similar results whether or not initial ion velocity is accounted for in the calibration equation; consequently, there is no need to depend on that uncertain parameter when using the quadratic correction. This method can be used to correct the internally calibrated masses of protein digest peaks. The effect of nitrocellulose as a matrix additive is also briefly discussed, and it is shown that using nitrocellulose as an additive to a CHCA matrix does not significantly change initial ion velocity but does change the average position of ions relative to the sample electrode at the instant the extraction voltage is applied.
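The correction procedure can be illustrated numerically. The residual-error coefficients and calibrant masses below are invented for the demonstration, and the sign convention (Δm taken as the linear calibration's deviation, with the fitted parabola subtracted from measured masses) is one plausible reading of the method:

```python
import numpy as np

# Assumed quadratic error of the initial linear calibration (illustrative)
err_coeffs = [2e-9, -1e-5, 0.02]

# Calibrants: theoretical masses and the masses the linear calibration yields
calib_true = np.array([1000., 2000., 3000., 4000., 5000., 6000.])
calib_meas = calib_true + np.polyval(err_coeffs, calib_true)
dm = calib_meas - calib_true           # deviation of the linear calibration

# Fit a parabola to dm vs. measured mass ...
quad = np.polyfit(calib_meas, dm, 2)

# ... and subtract its prediction to correct an unknown
true_mass = 3500.0
measured = true_mass + np.polyval(err_coeffs, true_mass)
corrected = measured - np.polyval(quad, measured)
```

With a smooth quadratic residual like this, the correction reduces the mass error by well over the factor of 10 quoted in the abstract.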
Min, Junwei; Yao, Baoli; Ketelhut, Steffi; Kemper, Björn
2017-02-01
The modular combination of optical microscopes with digital holographic microscopy (DHM) has been proven to be a powerful tool for quantitative live cell imaging. The introduction of a condenser and different microscope objectives (MO) simplifies the usage of the technique and makes it easier to measure different kinds of specimens at different magnifications. However, the high flexibility of illumination and imaging also causes variable phase aberrations that need to be eliminated for high resolution quantitative phase imaging. The existing phase aberration compensation methods either require adding additional elements into the reference arm or need specimen-free reference areas or separate reference holograms to build suitable digital phase masks. These inherent requirements make them impractical for use with highly variable illumination and imaging systems and prevent on-line monitoring of living cells. In this paper, we present a simple numerical method for phase aberration compensation based on the analysis of holograms in the spatial frequency domain, with capabilities for on-line quantitative phase imaging. From a single-shot off-axis hologram, the whole phase aberration can be eliminated automatically without numerical fitting or pre-knowledge of the setup. The capabilities and robustness for quantitative phase imaging of living cancer cells are demonstrated.
Seaux, Liesbeth; Van Houcke, Sofie; Dumoulin, Els; Fiers, Tom; Lecocq, Elke; Delanghe, Joris R
2014-08-01
Analytical interferences due to the presence of various exogenous UV-absorbing substances in serum have been described. Iodine-based X-ray contrast agents and various antibiotics have been reported to interfere with the interpretation of serum protein pherograms, resulting in a false diagnosis of paraproteinemia. In the present study, we have explored the possibility of measuring UV absorbance at two distinct wavelengths (210 and 246 nm) to distinguish between true and false paraproteins on a Helena V8 clinical electrophoresis instrument. This study demonstrates that most substances potentially interfering with serum protein electrophoresis show UV-absorption spectra that are distinct from those of serum proteins. Scanning at 246 nm allows detection of all described interfering agents. Comparing pherograms recorded at both wavelengths (210 and 246 nm) makes it possible to distinguish paraproteins from UV-absorbing substances. In the case of a true paraprotein, the peak with an electrophoretic mobility in the gamma-region decreases, whereas the X-ray contrast media and antibiotics show an increased absorption when compared to the basic setting (210 nm). The finding of iodine-containing contrast media interfering with serum protein electrophoresis is not uncommon. In a clinical series, interference induced by contrast media was reported in 54 cases (of 13 237 analyses), corresponding to a prevalence of 0.4%. In the same series, 1631 true paraproteins (12.3%) were detected. Implementation of the proposed algorithm may significantly improve the interpretation of routine electrophoresis results. However, attention should still be paid to possible interference due to the presence of atypical protein fractions (e.g., tumor markers, C3). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Directory of Open Access Journals (Sweden)
Dwi Marisa Efendi
2018-04-01
Cassava is one type of plant that can be grown in tropical climates, and the cassava commodity is one of the leading sub-sectors in the plantation area. Cassava is the main ingredient of sago (tapioca) flour, which is now experiencing a price decline. The abundant supply of sago or tapioca flour is due to increased cassava planting by individual farmers. With the growing amount of cassava planted on farmers' plantations, the price of cassava received by farmers has become unsuitable, and factories making sago or tapioca flour often buy an excess of raw cassava; as a result, much cassava rots and factories buy cassava at a low price. Based on this problem, this research applies data mining modeled with a multiple linear regression algorithm, which aims to estimate the amount of sago or tapioca flour that can be produced, so that in the future the balance between the amount of cassava supply and tapioca production can be improved. The variables used in the linear regression analysis are a dependent variable and an independent variable. From the data obtained, the dependent variable is the amount of tapioca in kg (symbolized by Y), while the independent variable is milled cassava (symbolized by X). From the results obtained at a 95% confidence level, the coefficient of determination (R²) is 1.00, while the estimation results are very close to the actual data values, with an average error of 0.00.
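A least-squares fit of tapioca output against milled cassava input can be sketched directly; the figures below are hypothetical stand-ins for the farm records used in the study:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    return a, b

# hypothetical records: milled cassava (kg, X) vs tapioca produced (kg, Y)
cassava = [1000, 1500, 2000, 2500, 3000]
tapioca = [240, 365, 480, 610, 720]       # roughly a 24% extraction rate
a, b = fit_line(cassava, tapioca)

# goodness of fit (coefficient of determination R^2)
pred = [a + b * xi for xi in cassava]
mean_y = sum(tapioca) / len(tapioca)
ss_res = sum((p - yi) ** 2 for p, yi in zip(pred, tapioca))
ss_tot = sum((yi - mean_y) ** 2 for yi in tapioca)
r2 = 1 - ss_res / ss_tot

forecast = a + b * 4000                    # expected output for 4000 kg input
```

An R² near 1, as reported in the abstract, simply means the fitted line leaves almost no residual variance; the forecast then balances supply purchasing against expected flour output.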
Directory of Open Access Journals (Sweden)
Yu Shi
2017-02-01
We introduce a finite-difference frequency-domain algorithm for coupled acousto-optic simulations. First-principles acousto-optic simulation in time domain has been challenging due to the fact that the acoustic and optical frequencies differ by many orders of magnitude. We bypass this difficulty by formulating the interactions between the optical and acoustic waves rigorously as a system of coupled nonlinear equations in frequency domain. This approach is particularly suited for on-chip devices that are based on a variety of acousto-optic interactions such as the stimulated Brillouin scattering. We validate our algorithm by simulating a stimulated Brillouin scattering process in a suspended waveguide structure and find excellent agreement with coupled-mode theory. We further provide an example of a simulation for a compact on-chip resonator device that greatly enhances the effect of stimulated Brillouin scattering. Our algorithm should facilitate the design of nanophotonic on-chip devices for the harnessing of photon-phonon interactions.
Directory of Open Access Journals (Sweden)
V. S. Kudryashov
2017-01-01
The article is devoted to the development of an algorithm for controlling the heating phase of a rubber compound for CJSC “Voronezh tyre plant”. The algorithm is designed for implementation on a Siemens S-300 controller to control the RS-270 mixer. To compile the algorithm, a systematic analysis of the heating process was performed treating it as a control object, and a mathematical model of the heating phase was developed on the basis of the heat balance equation, which describes the heating of a heat-transfer agent in the heat exchanger and the further heating of the mixture in the mixer. The dynamic temperature characteristics of the heat exchanger and the rubber mixer were obtained. Taking into account the complexity and nonlinearity of the control object (a rubber mixer), as well as the available methods and extensive experience in managing this machine in an industrial environment, the algorithm was implemented using the Pontryagin maximum principle. The optimization problem reduces to determining the optimal control (heating steam supply) and the optimal path of the object's output coordinates (the temperature of the mixture) which ensure the least flow of steam while heating a rubber compound in a limited time. To do this, the mathematical model of the heating phase was written in matrix form. Coefficient matrices for each state of the control, control and disturbance vectors were created, the Hamilton function was obtained, and time switching points were found for constructing the optimal control and escape path of the object. Analysis of the model experiments and practical research results during programming of the controller showed a 24.4% decrease in heating steam consumption during the heating phase of the rubber compound.
Clementel, N.; Madura, T. I.; Kruip, C. J. H.; Icke, V.; Gull, T. R.
2014-01-01
Eta Carinae is an ideal astrophysical laboratory for studying massive binary interactions and evolution, and stellar wind-wind collisions. Recent three-dimensional (3D) simulations set the stage for understanding the highly complex 3D flows in Eta Car. Observations of different broad high- and low-ionization forbidden emission lines provide an excellent tool to constrain the orientation of the system, the primary's mass-loss rate, and the ionizing flux of the hot secondary. In this work we present the first steps towards generating synthetic observations to compare with available and future HST/STIS data. We present initial results from full 3D radiative transfer simulations of the interacting winds in Eta Car. We use the SimpleX algorithm to post-process the output from 3D SPH simulations and obtain the ionization fractions of hydrogen and helium assuming three different mass-loss rates for the primary star. The resultant ionization maps of both species constrain the regions where the observed forbidden emission lines can form. Including collisional ionization is necessary to achieve a better description of the ionization states, especially in the areas shielded from the secondary's radiation. We find that reducing the primary's mass-loss rate increases the volume of ionized gas, creating larger areas where the forbidden emission lines can form. We conclude that post processing 3D SPH data with SimpleX is a viable tool to create ionization maps for Eta Car.
Frid, Yelena; Gusfield, Dan
2010-01-04
The problem of computationally predicting the secondary structure (or folding) of RNA molecules was first introduced more than thirty years ago and yet continues to be an area of active research and development. The basic RNA-folding problem of finding a maximum cardinality, non-crossing matching of complementary nucleotides in an RNA sequence of length n has an O(n³)-time dynamic programming solution that is widely applied. It is known that an o(n³) worst-case time solution is possible, but the published and suggested methods are complex and have not been established to be practical. Significant practical improvements to the original dynamic programming method have been introduced, but they retain the O(n³) worst-case time bound when n is the only problem-parameter used in the bound. Surprisingly, the most widely-used general technique to achieve a worst-case (and often practical) speed up of dynamic programming, the Four-Russians technique, has not been previously applied to the RNA-folding problem. This is perhaps due to technical issues in adapting the technique to RNA-folding. In this paper, we give a simple, complete, and practical Four-Russians algorithm for the basic RNA-folding problem, achieving a worst-case time-bound of O(n³/log(n)). We show that this time-bound can also be obtained for richer nucleotide matching scoring-schemes, and that the method achieves consistent speed-ups in practice. The contribution is both theoretical and practical, since the basic RNA-folding problem is often solved multiple times in the inner-loop of more complex algorithms, and for long RNA molecules in the study of RNA virus genomes.
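The basic O(n³) dynamic program that the Four-Russians method accelerates is the classic Nussinov-style recursion; a minimal version (Watson-Crick pairs only, with no minimum loop length by default — both simplifying assumptions):

```python
def nussinov(seq, min_loop=0):
    """O(n^3) dynamic program: M[i][j] = maximum number of non-crossing
    complementary pairs achievable within seq[i..j]."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
    n = len(seq)
    M = [[0] * n for _ in range(n)]
    for span in range(1, n):                     # increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = M[i][j - 1]                   # case 1: j left unpaired
            for k in range(i, j - min_loop):     # case 2: j paired with k
                if (seq[k], seq[j]) in pairs:
                    left = M[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + M[k + 1][j - 1])
            M[i][j] = best
    return M[0][n - 1]
```

The inner loop over k is the O(n) factor that the Four-Russians table lookup replaces with O(n/log n) work.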
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Liu, Yongliang; Kim, Hee-Jin
2017-06-22
With cotton fiber growth or maturation, the cellulose content in cotton fibers markedly increases. Traditional chemical methods have been developed to determine cellulose content, but they are time-consuming and labor-intensive, mostly owing to the slow hydrolysis of fiber cellulose components. As one alternative, the attenuated total reflection Fourier transform infrared (ATR FT-IR) spectroscopy technique has also been utilized to monitor cotton cellulose formation, by implementing various spectral interpretation strategies, including multivariate principal component analysis (PCA) and 1-, 2- or 3-band/-variable intensities or intensity ratios. The main objective of this study was to compare the correlations between cellulose content determined by chemical analysis and ATR FT-IR spectral indices acquired by the reported procedures, among developmental Texas Marker-1 (TM-1) and immature fiber (im) mutant cotton fibers. It was observed that the R value, CI_IR, and the integrated intensity of the 895 cm⁻¹ band exhibited strong and linear relationships with cellulose content. The results demonstrate the suitability and utility of ATR FT-IR spectroscopy, combined with a simple algorithm analysis, in assessing cotton fiber cellulose content, maturity, and crystallinity in a manner which is rapid, routine, and non-destructive.
MM Algorithms for Geometric and Signomial Programming.
Lange, Kenneth; Zhou, Hua
2014-02-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
DEFF Research Database (Denmark)
Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk
2007-01-01
A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam...... a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (rho = 0.035 g cm(-3)), enhanced for the most energetic beam. For denser...
Jaeger, Christian; Hemmann, Felix
2014-01-01
Elimination of Artifacts in NMR SpectroscopY (EASY) is a simple but very effective tool to remove simultaneously any real NMR probe background signal, any spectral distortions due to deadtime ringdown effects and, specifically, severe acoustic ringing artifacts in NMR spectra of low-gamma nuclei. EASY enables and maintains quantitative NMR (qNMR) as only a single pulse (preferably 90°) is used for data acquisition. After the acquisition of the first scan (it contains the wanted NMR signal and the background/deadtime/ringing artifacts), the same experiment is repeated immediately afterwards, before the T1 waiting delay. This second scan contains only the background/deadtime/ringing parts. Hence, the simple difference of both yields clean NMR line shapes free of artifacts. In this Part I, various examples of complete ¹H, ¹¹B, ¹³C and ¹⁹F probe background removal due to construction parts of the NMR probes are presented. Furthermore, ²⁵Mg EASY of Mg(OH)2 is presented, and this example shows how extremely strong acoustic ringing can be suppressed (by more than a factor of 200) such that phase and baseline correction for spectra acquired with a single pulse is no longer a problem. EASY is also a step towards deadtime-free data acquisition, as these effects are also canceled completely. EASY can be combined with any other NMR experiment, including 2D NMR, if baseline distortions are a big problem. © 2013 Published by Elsevier Inc.
DEFF Research Database (Denmark)
Ødegård, Jørgen; Meuwissen, Theo HE; Heringstad, Bjørg
2010-01-01
Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where...... records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to the sire-dam model). Conclusions The new algorithm to estimate genetic parameters via Gibbs sampling solves the bias problems typically occurring in animal...... individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative...
International Nuclear Information System (INIS)
Penfold, S; Casiraghi, M; Dou, T; Schulte, R; Censor, Y
2015-01-01
Purpose: To investigate the applicability of feasibility-seeking cyclic orthogonal projections to the field of intensity modulated proton therapy (IMPT) inverse planning. Feasibility of constraints only, as opposed to optimization of a merit function, is less demanding algorithmically and holds a promise of parallel computation capability with non-cyclic orthogonal projection algorithms such as string-averaging or block-iterative strategies. Methods: A virtual 2D geometry was designed containing a C-shaped planning target volume (PTV) surrounding an organ at risk (OAR). The geometry was pixelized into 1 mm pixels. Four beams containing a subset of proton pencil beams were simulated in Geant4 to provide the system matrix A whose elements a_ij correspond to the dose delivered to pixel i by a unit intensity pencil beam j. A cyclic orthogonal projections algorithm was applied with the goal of finding a pencil beam intensity distribution that would meet the following dose requirements: D_OAR < 54 Gy and 57 Gy < D_PTV < 64.2 Gy. The cyclic algorithm was based on the concept of orthogonal projections onto half-spaces according to the Agmon-Motzkin-Schoenberg algorithm, also known as ‘ART for inequalities’. Results: The cyclic orthogonal projections algorithm resulted in less than 5% of the PTV pixels and less than 1% of the OAR pixels violating their dose constraints, respectively. Because of the abutting OAR-PTV geometry and the realistic modelling of the pencil beam penumbra, complete satisfaction of the dose objectives was not achieved, although this would be a clinically acceptable plan for a meningioma abutting the brainstem, for example. Conclusion: The cyclic orthogonal projections algorithm was demonstrated to be an effective tool for inverse IMPT planning in the 2D test geometry described. We plan to further develop this linear algorithm to be capable of incorporating dose-volume constraints into the feasibility-seeking algorithm
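The Agmon-Motzkin-Schoenberg step — project the current solution onto each violated half-space in turn — is compact. The sketch below uses a tiny made-up two-variable constraint set in place of the Geant4-derived pencil-beam system matrix:

```python
import numpy as np

def amss_project(A, b, x0, sweeps=200, relax=1.0):
    """Cyclic orthogonal projection onto half-spaces {x : a_i . x <= b_i}
    (Agmon-Motzkin-Schoenberg, 'ART for inequalities')."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            viol = a_i @ x - b_i
            if viol > 0:                       # project only if violated
                x -= relax * viol * a_i / (a_i @ a_i)
    return x

# toy "dose" constraints on two pencil-beam intensities x = (x1, x2):
#   x1 + x2 <= 10 (upper bound), x1 + x2 >= 6 (lower bound), x >= 0
A = np.array([[1., 1.], [-1., -1.], [-1., 0.], [0., -1.]])
b = np.array([10., -6., 0., 0.])
x = amss_project(A, b, x0=[20., -5.])
residuals = A @ x - b                          # feasible iff all <= 0
```

Because each update touches one row of A at a time, the method parallelizes naturally once the cyclic sweep is replaced by string-averaging or block-iterative variants, as the abstract notes.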
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.
Cazzola, Mario; Calzetta, Luigino; Matera, Maria Gabriella; Muscoli, Saverio; Rogliani, Paola; Romeo, Francesco
2015-08-01
Chronic obstructive pulmonary disease (COPD) is often associated with coronary artery disease (CAD), representing a potential and independent risk factor for cardiovascular morbidity. Therefore, the aim of this study was to identify an algorithm for predicting the risk of CAD in COPD patients. We analyzed data of patients referred to the Cardiology ward and the Respiratory Diseases outpatient clinic of Tor Vergata University (2010-2012, 1596 records). The study population was clustered as a training population (COPD patients undergoing coronary arteriography), a control population (non-COPD patients undergoing coronary arteriography) and a test population (COPD patients whose records reported information on coronary status). The predictive model was built via causal relationships between variables, stepwise binary logistic regression and Hosmer-Lemeshow analysis. The algorithm was validated via the split-sample validation method and receiver operating characteristic (ROC) curve analysis, and its diagnostic accuracy was assessed. In the training population, the variable gender (men/women OR: 1.7, 95%CI: 1.237-2.5) was associated with CAD in COPD patients, whereas in the control population age and diabetes were also correlated. The stepwise binary logistic regressions permitted building a well-fitting predictive model for the training population but not for the control population. The predictive algorithm showed a diagnostic accuracy of 81.5% (95%CI: 77.78-84.71) and an AUC of 0.81 (95%CI: 0.78-0.85) for the validation set. The proposed algorithm is effective for predicting the risk of CAD in COPD patients via a rapid, inexpensive and non-invasive approach. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cosmological principles. II. Physical principles
International Nuclear Information System (INIS)
Harrison, E.R.
1974-01-01
The discussion of cosmological principles covers the uniformity principle of the laws of physics, the gravitation and cognizability principles, and the Dirac creation, chaos, and bootstrap principles. (U.S.)
Pont, Frédéric; Fournié, Jean Jacques
2010-03-01
MS, the reference technology for proteomics, routinely produces large numbers of protein lists whose fast comparison would prove very useful. Unfortunately, most software tools only allow comparisons of two to three lists at once. We introduce here nwCompare, a simple tool for n-way comparison of several protein lists without any query language, and exemplify its use with differential and shared cancer cell proteomes. As the software compares character strings, it can be applied to any type of data mining, such as genomic or metabolomic data lists.
First-principles molecular dynamics for metals
International Nuclear Information System (INIS)
Fernando, G.W.; Qian, G.; Weinert, M.; Davenport, J.W.
1989-01-01
A Car-Parrinello-type first-principles molecular-dynamics approach capable of treating the partial occupancy of electronic states that occurs at the Fermi level in a metal is presented. The algorithms used to study metals are both simple and computationally efficient. We also discuss the connection between ordinary electronic-structure calculations and molecular-dynamics simulations as well as the role of Brillouin-zone sampling. This extension should be useful not only for metallic solids but also for solids that become metals in their liquid and/or amorphous phases
Temiz, Burak Kagan; Yavuz, Ahmet
2015-01-01
This study was done to develop a simple and inexpensive wave driver that can be used in experiments on string waves. The wave driver was made using a battery-operated toy car, and the apparatus can be used to produce string waves at a fixed frequency. The working principle of the apparatus is as follows: shortly after the car is turned on, the…
Energy Technology Data Exchange (ETDEWEB)
Vuckovic, V.; Vukosavic, S. (Electrical Engineering Inst. Nikola Tesla, Viktora Igoa 3, Belgrade, 11000 (Yugoslavia))
1992-01-01
This paper presents a control algorithm for VSI-fed induction motor drives based on converter DC link current feedback. It is shown that the speed and flux can be controlled quite satisfactorily over a wide speed and load range for simpler drives. The base commands of both the inverter voltage and frequency are proportional to the reference speed, but each of them is further modified by signals derived from the DC current sensor. The algorithm is based on equations well known from vector control theory, and is aimed at obtaining constant rotor flux and proportionality between the electrical torque, the slip frequency and the active component of the stator current. In this way, the problems of slip compensation, Ri compensation and correction of U/f characteristics are solved at the same time. Analytical considerations and computer simulations of the proposed control structure are in close agreement with the experimental results measured on a prototype drive.
Graybill, George
2007-01-01
Just how simple are simple machines? With our ready-to-use resource, they are simple to teach and easy to learn! Chock-full of information and activities, we begin with a look at force, motion and work, and examples of simple machines in daily life are given. With this background, we move on to different kinds of simple machines including: Levers, Inclined Planes, Wedges, Screws, Pulleys, and Wheels and Axles. An exploration of some compound machines follows, such as the can opener. Our resource is a real time-saver as all the reading passages and student activities are provided. Presented in s
Hewitt, Paul G.
2004-01-01
Some teachers have difficulty understanding Bernoulli's principle particularly when the principle is applied to the aerodynamic lift. Some teachers favor using Newton's laws instead of Bernoulli's principle to explain the physics behind lift. Some also consider Bernoulli's principle too difficult to explain to students and avoid teaching it…
Quantum Computation and Algorithms
International Nuclear Information System (INIS)
Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.
1999-01-01
It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
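The amplitude recursion mentioned here can be iterated directly. Writing k for the marked-state amplitude and l for the common unmarked amplitude gives a standard two-variable formulation (the exact notation used in the talk may differ):

```python
import math

def grover_amplitudes(N, steps):
    """Iterate the exact recursion for the amplitude of one marked item (k)
    and of each unmarked item (l) under Grover iterations."""
    k = l = 1 / math.sqrt(N)                 # uniform initial superposition
    for _ in range(steps):
        k, l = ((N - 2) / N) * k + (2 * (N - 1) / N) * l, \
               ((N - 2) / N) * l - (2 / N) * k
    return k, l

# N = 4: a single iteration finds the marked item with certainty
k4, l4 = grover_amplitudes(4, 1)

# N = 1024: ~(pi/4)*sqrt(N) = 25 iterations; success probability k^2 -> 1,
# and k matches the closed form sin((2t+1)*theta), theta = asin(1/sqrt(N))
k, l = grover_amplitudes(1024, 25)
```

The recursion conserves the norm k² + (N-1)l² = 1, which makes it a convenient exact check on the closed-form sine solution.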
Hashemi, Zohreh; Rafiezadeh, Shohreh; Hafizi, Roohollah; Hashemifar, S. Javad; Akbarzadeh, Hadi
2018-04-01
Evolutionary algorithm is combined with full-potential ab initio calculations to investigate conformational space of (MoS2)n and (MoSe2)n (n = 1-10) nanoclusters and to identify the lowest energy structural isomers of these systems. It is argued that within both BLYP and PBE functionals, these nanoclusters favor sandwiched planar configurations, similar to their ideal planar sheets. The second order difference in total energy (Δ2 E) of the lowest energy isomers is computed to estimate the abundance of the clusters at different sizes and to determine the magic sizes of (MoS2)n and (MoSe2)n nanoclusters. In order to investigate the electronic properties of nanoclusters, their energy gap is calculated by several methods, including hybrid functionals (B3LYP and PBE0), GW approach, and Δ scf method. At the end, the vibrational modes of the lowest lying isomers are calculated by using the force constants method and the IR active modes of the systems are identified. The vibrational spectra are used to calculate the Helmholtz free energy of the systems and then to investigate abundance of the nanoclusters at finite temperatures.
Casanova, Henri; Robert, Yves
2008-01-01
"…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
A New Modified Firefly Algorithm
Directory of Open Access Journals (Sweden)
Medha Gupta
2016-07-01
Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such recent swarm-based meta-heuristic, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been used successfully to solve various optimization problems. In this work, we propose a new modified version of the firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm and various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
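For readers unfamiliar with the standard firefly algorithm that MoFA modifies, a minimal sketch follows; all parameter values (beta0, gamma, alpha and its decay) are illustrative assumptions, not the paper's settings:

```python
import math
import random

def firefly(obj, dim=2, n=15, iters=100, beta0=1.0, gamma=0.01, alpha=0.2, seed=0):
    """Minimal standard firefly algorithm: every firefly moves toward each
    brighter (lower-objective) one, with attractiveness beta0*exp(-gamma*r^2),
    plus a small random walk scaled by alpha."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        I = [obj(x) for x in X]  # brightness: lower objective is brighter
        for i in range(n):
            for j in range(n):
                if I[j] < I[i]:
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
                    I[i] = obj(X[i])
        alpha *= 0.97  # gradually damp the randomization
    return min(X, key=obj)

best = firefly(lambda x: sum(v * v for v in x))  # minimize the sphere function
print(best)
```

On the sphere function the swarm contracts toward the origin as the random-walk term is damped; modified variants such as MoFA typically change the attractiveness or randomization schedule.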
Algorithm Preserving Mass Fraction Maximum Principle for Multi-component Flows
Institute of Scientific and Technical Information of China (English)
唐维军; 蒋浪; 程军波
2014-01-01
We propose a new method for compressible multi-component flows with a Mie-Grüneisen equation of state, based on mass fraction. The model preserves conservation of mass, momentum and total energy for the mixture flow, as well as conservation of mass for each individual component. Moreover, it prevents pressure and velocity jumps across the interfaces that separate regions of different fluid components. The wave propagation method is used to discretize this quasi-conservative system. The numerical method is modified for the conservative equation of the mass fraction, so that the maximum principle for the mass fraction is preserved; the unmodified wave propagation method applied to the component mass conservation equations cannot keep the mass fraction within the interval [0,1]. Numerical results confirm the validity of the method.
International Nuclear Information System (INIS)
Ponce, W.A.; Zepeda, A.
1987-08-01
We present the results obtained from our systematic search of a simple Lie group that unifies weak and electromagnetic interactions in a single truly unified theory. We work with fractionally charged quarks, and allow for particles and antiparticles to belong to the same irreducible representation. We found that models based on SU(6), SU(7), SU(8) and SU(10) are viable candidates for simple unification. (author). 23 refs
A simple algorithm for computing canonical forms
Ford, H.; Hunt, L. R.; Renjeng, S.
1986-01-01
It is well known that all linear time-invariant controllable systems can be transformed to Brunovsky canonical form by a transformation consisting only of coordinate changes and linear feedback. However, the actual procedures for doing this have tended to be overly complex. The technique introduced here is envisioned as an on-line procedure and is inspired by George Meyer's tangent model for nonlinear systems. The process utilizes Meyer's block triangular form as an intermediate step in going to Brunovsky form. The method also involves orthogonal matrices, thus eliminating the need to compute matrix inverses. In addition, the Kronecker indices can be computed as a by-product of the transformation, so it is not necessary to know them in advance.
Directory of Open Access Journals (Sweden)
A. E. Karateev
2017-01-01
Enhancing the efficacy and safety of nonsteroidal anti-inflammatory drugs (NSAIDs), a class of essential medications used to treat acute and chronic pain, is an important and urgent task. To this end, in 2015 Russian experts provided an NSAID selection algorithm based on the assessment of risk factors (RFs) for drug-induced complications and on the prescription of drugs with the least negative effect on the gastrointestinal tract and cardiovascular system. The PRINCIPLE project was implemented to test the effectiveness of this algorithm. Subjects and methods. The study group consisted of 439 patients (65% women, 35% men; mean age 51.3±14.4 years) with severe musculoskeletal pain, who were prescribed NSAIDs using the above algorithm. The majority of patients had RFs: gastrointestinal and cardiovascular ones in 62% and 88% of the patients, respectively. Given these RFs, eight NSAIDs were used: aceclofenac, diclofenac, ibuprofen, ketoprofen, meloxicam, naproxen, nimesulide, and celecoxib, the last being prescribed most commonly (in 57.4% of cases). NSAIDs were combined with proton pump inhibitors in 30.2% of the patients. The follow-up period was 28 days. The investigators evaluated the efficacy of therapy (pain changes on a 10-point numeric rating scale, NRS) and the development of adverse events (AEs). Results and discussion. Pain was completely relieved in the overwhelming majority (94.9%) of patients. There were no significant differences in the efficacy of the different NSAIDs according to NRS scores. The number of AEs was minimal and did not differ between NSAIDs, with the exception of a higher frequency of dyspepsia with diclofenac (15.7%). There were no serious complications, and no therapy was discontinued because of AEs. Conclusion. The use of the NSAID selection algorithm allows effective and relatively safe therapy with these drugs in real clinical practice.
Convex hull ranking algorithm for multi-objective evolutionary algorithms
Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.
2012-01-01
Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity
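The non-dominance principle the abstract refers to can be made concrete with the baseline pairwise front-ranking step whose cost motivates convex-hull-based ranking; this is a generic sketch, not the authors' algorithm:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Rank points into successive non-dominated fronts by pairwise
    dominance checks; this comparison-heavy step is what faster ranking
    schemes such as convex hull ranking aim to accelerate."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
fronts = non_dominated_fronts(pts)
print(fronts)  # [[0, 1, 2], [3], [4]]
```

Here the first three points are mutually non-dominated (front 0), while (3, 3) and (5, 5) fall into successive dominated fronts.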
Fuzzy tracking algorithm with feedback based on the maximum entropy principle
Institute of Scientific and Technical Information of China (English)
刘智; 陈丰; 黄继平
2012-01-01
Aiming at the high computational cost and poor extensibility of matrix-weighted fusion algorithms, this paper proposes a multisensor fuzzy maximum-entropy fusion algorithm with feedback, based on fuzzy C-means (FCM) clustering and the maximum entropy principle (MEP). The algorithm combines FCM and MEP to compute the weight of each component of the state vector, taking into account the influence of every component on the fusion estimate as a whole while avoiding complex matrix operations, so it has good real-time performance. Compared with matrix-weighted algorithms, the proposed fusion algorithm is also easy to extend and can be applied directly to tracking systems with more than two sensors. Simulation results show that the accuracy of the fusion estimate is essentially consistent with that of matrix-weighted fusion, validating the effectiveness of the algorithm.
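As a rough illustration of the fuzzy C-means half of the proposed fusion scheme, here is a plain one-dimensional FCM sketch; the maximum-entropy weighting and the feedback loop of the paper are not reproduced, and the deterministic initialization from the data extremes is an assumption:

```python
def fcm(data, m=2.0, iters=50):
    """Plain 1-D fuzzy C-means with two clusters: alternate the membership
    update u_ik = 1 / sum_j (d_ik/d_ij)^(2/(m-1)) with membership-weighted
    centroid updates. Centers start at the data extremes for determinism."""
    centers = [min(data), max(data)]  # two clusters, deterministic start
    c = 2
    for _ in range(iters):
        U = []
        for x in data:
            d = [max(abs(x - v), 1e-12) for v in centers]  # avoid divide-by-zero
            U.append([1.0 / sum((d[k] / d[j]) ** (2 / (m - 1)) for j in range(c))
                      for k in range(c)])
        # Centroids are means weighted by memberships raised to the fuzzifier m.
        centers = [sum(U[i][k] ** m * data[i] for i in range(len(data))) /
                   sum(U[i][k] ** m for i in range(len(data)))
                   for k in range(c)]
    return sorted(centers)

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
centers = fcm(data)
print(centers)
```

On this toy data the two centers settle near 1.0 and 5.0; in the paper, the resulting memberships feed the maximum-entropy weighting of the state-vector components.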
Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle
Ettl, Svenja
2015-04-01
'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.
Simple concurrent garbage collection almost without synchronization
Hesselink, Wim H.; Lali, M.I.
We present two simple mark and sweep algorithms, A and B, for concurrent garbage collection by a single collector running concurrently with a number of mutators that concurrently modify shared data. Both algorithms are based on the ideas of Ben-Ari's classical algorithm for on-the-fly garbage
Ant-based extraction of rules in simple decision systems over ontological graphs
Directory of Open Access Journals (Sweden)
Pancerz Krzysztof
2015-06-01
In the paper, the problem of extracting complex decision rules in simple decision systems over ontological graphs is considered. The extracted rules are consistent with a dominance principle similar to that applied in the dominance-based rough set approach (DRSA). In our study, we propose to use a heuristic algorithm, based on the ant-based clustering approach, that searches the semantic spaces of concepts represented by means of ontological graphs. Concepts included in the semantic spaces are values of the attributes describing objects in simple decision systems.
Improved FHT Algorithms for Fast Computation of the Discrete Hartley Transform
Directory of Open Access Journals (Sweden)
M. T. Hamood
2013-05-01
In this paper, using the symmetry properties of the discrete Hartley transform (DHT), an improved radix-2 fast Hartley transform (FHT) algorithm is developed with arithmetic complexity comparable to that of the real-valued fast Fourier transform (RFFT). It has a simple and regular butterfly structure and possesses the in-place computation property. Furthermore, using the same principles, the development can be extended to more efficient radix-based FHT algorithms. An example for the improved radix-4 FHT algorithm is given to show the validity of the presented method. The arithmetic complexity of the new algorithms is computed and compared with existing FHT algorithms. These comparisons show that the developed algorithms reduce the number of multiplications and additions considerably.
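The transform being accelerated can be stated directly: the DHT replaces the complex exponential of the DFT with the cas kernel cos + sin, and for real input it equals the real part minus the imaginary part of the DFT. A direct O(N²) reference sketch, useful for validating any fast FHT implementation:

```python
import cmath
import math

def dht(x):
    """Direct O(N^2) discrete Hartley transform with the cas kernel
    cas(t) = cos(t) + sin(t); fast radix algorithms must match this."""
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * n * k / N) +
                        math.sin(2 * math.pi * n * k / N))
                for n in range(N))
            for k in range(N)]

def dft(x):
    """Direct DFT, used here only to check the DHT/DFT relation."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * n * k / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 0.0, -1.0, 3.0, 2.0, 1.0, 0.5]
H = dht(x)
# For real input, the DHT equals Re(DFT) - Im(DFT), bin by bin.
check = [f.real - f.imag for f in dft(x)]
```

Unlike the DFT, the DHT maps real input to real output and is its own inverse up to a 1/N factor, which is why real-valued FFT costs are the natural benchmark.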
Deane, Sharon
2003-01-01
ASP Made Simple provides a brief introduction to ASP for the person who favours self teaching and/or does not have expensive computing facilities to learn on. The book will demonstrate how the principles of ASP can be learned with an ordinary PC running Personal Web Server, MS Access and a general text editor like Notepad.After working through the material readers should be able to:* Write ASP scripts that can display changing information on a web browser* Request records from a remote database or add records to it* Check user names & passwords and take this knowledge forward, either for their
Schilstra, Maria J; Martin, Stephen R
2009-01-01
Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific for the particular application and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field to apply such techniques in their own work might seem at first sight to be a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend to be not particularly well covered in specialist literature, and shows that interesting information may even be obtained using very simple operations in a conventional spreadsheet.
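The core of Gillespie's direct method really is only a few lines, in keeping with the article's point that the fundamental principles are simple: draw an exponentially distributed waiting time from the total propensity, then choose a reaction with probability proportional to its propensity. A minimal sketch for the reversible isomerization A ⇌ B (rate constants and copy numbers are illustrative):

```python
import math
import random

def gillespie(k1, k2, a0, b0, t_end, seed=0):
    """Gillespie's direct method for the reversible isomerization A <-> B:
    sample an exponential waiting time from the total propensity, then pick
    the next reaction with probability proportional to its propensity."""
    rng = random.Random(seed)
    t, a, b = 0.0, a0, b0
    while t < t_end:
        p1, p2 = k1 * a, k2 * b  # propensities of A->B and B->A
        total = p1 + p2
        if total == 0:
            break  # no reaction can fire
        t += -math.log(1.0 - rng.random()) / total  # exponential waiting time
        if rng.random() * total < p1:
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
    return a, b

a, b = gillespie(k1=1.0, k2=1.0, a0=100, b0=0, t_end=50.0)
print(a, b)  # copy number is conserved: a + b == 100
```

With equal rate constants the trajectory fluctuates around a ≈ b ≈ 50, and for these small copy numbers the run-to-run spread is exactly the randomness the deterministic approach misses.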
Moiseiwitsch, B L
2004-01-01
This graduate-level text's primary objective is to demonstrate the expression of the equations of the various branches of mathematical physics in the succinct and elegant form of variational principles (and thereby illuminate their interrelationship). Its related intentions are to show how variational principles may be employed to determine the discrete eigenvalues for stationary state problems and to illustrate how to find the values of quantities (such as the phase shifts) that arise in the theory of scattering. Chapter-by-chapter treatment consists of analytical dynamics; optics, wave mecha
Directory of Open Access Journals (Sweden)
V. A. Grinenko
2011-06-01
The material in the article is arranged so that the reader gains a complete picture of the concept of "safety", its intrinsic characteristics, and the possibilities for its formalization. Principles and possible safety strategies are considered. The article is intended for experts working on problems of safety.
Energy Technology Data Exchange (ETDEWEB)
Levine, R.B.; Stassi, J.; Karasick, D.
1985-04-01
Anterior displacement of the tibial tubercle is a well-accepted orthopedic procedure in the treatment of certain patellofemoral disorders. The radiologic appearance of surgical procedures utilizing the Maquet principle has not been described in the radiologic literature. Familiarity with the physiologic and biochemical basis for the procedure and its postoperative appearance is necessary for appropriate roentgenographic evaluation and the radiographic recognition of complications.
International Nuclear Information System (INIS)
Wesson, P.S.
1979-01-01
The Cosmological Principle states: the universe looks the same to all observers regardless of where they are located. To most astronomers today the Cosmological Principle means the universe looks the same to all observers because the density of galaxies is the same in all places. A new Cosmological Principle is proposed, called the Dimensional Cosmological Principle. It uses the properties of matter in the universe: density (ρ), pressure (p), and mass (m) within some region of space of length (l). The laws of physics require incorporation of constants for gravity (G) and the speed of light (c). After combining the six parameters into dimensionless numbers, the best choices are 8πGl²ρ/c², 8πGl²p/c⁴, and 2Gm/c²l (the Schwarzschild factor). The Dimensional Cosmological Principle came about because old ideas conflicted with the rapidly-growing body of observational evidence indicating that galaxies in the universe have a clumpy rather than uniform distribution.
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you've hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it's not the ocean but the internet we're talking about, and it's not a TV show producer but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to 'tame the algorithmic tiger'. While this is a valuable and often inspiring approach, we…
An Ordering Linear Unification Algorithm
Institute of Scientific and Technical Information of China (English)
胡运发
1989-01-01
In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order, so the OLU algorithm can also be applied to infinite tree data structures, and a higher efficiency can be expected. The paper focuses upon the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. The algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is simple and efficient.
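The details of OLU are not recoverable from the abstract, but the classical Robinson-style unification it builds on can be sketched in a few lines; the term representation (uppercase strings as variables, tuples as compound terms) is an illustrative convention, not the paper's:

```python
def is_var(t):
    """Variables are strings starting with an uppercase letter."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    """Follow variable bindings in substitution s to the representative term."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Occurs check: does variable v appear inside term t under s?"""
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def bind(v, t, s):
    if occurs(v, t, s):
        return None  # reject circular bindings like X = f(X)
    out = dict(s)
    out[v] = t
    return out

def unify(x, y, s=None):
    """Robinson-style unification; returns a substitution dict or None.
    Compound terms are tuples (functor, arg1, ...)."""
    if s is None:
        s = {}
    x, y = walk(x, s), walk(y, s)
    if x == y:
        return s
    if is_var(x):
        return bind(x, y, s)
    if is_var(y):
        return bind(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) \
            and len(x) == len(y) and x[0] == y[0]:
        for a, b in zip(x[1:], y[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

s = unify(("f", "X", ("g", "Y")), ("f", ("g", "a"), ("g", "X")))
print(s)  # {'X': ('g', 'a'), 'Y': ('g', 'a')}
```

The occurs check above is exactly the cost that the paper's cyclic-graph idea sidesteps: allowing cyclic bindings is what makes unification over infinite (rational) trees possible.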
Manpower Administration (DOL), Washington, DC. Job Corps.
This self-study program for high-school level contains lessons on: Speed, Acceleration, and Velocity; Force, Mass, and Distance; Types of Motion and Rest; Electricity and Magnetism; Electrical, Magnetic, and Gravitational Fields; The Conservation and Conversion of Matter and Energy; Simple Machines and Work; Gas Laws; Principles of Heat Engines;…
Ogunfunmi, Tokunbo
2010-01-01
It is becoming increasingly apparent that all forms of communication, including voice, will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the
A Flocking Based algorithm for Document Clustering Analysis
Energy Technology Data Exchange (ETDEWEB)
Cui, Xiaohui [ORNL; Gao, Jinzhu [ORNL; Potok, Thomas E [ORNL
2006-01-01
Social animals or insects in nature often exhibit a form of emergent collective behavior known as flocking. In this paper, we present a novel Flocking-based approach for document clustering analysis. Our Flocking clustering algorithm uses stochastic and heuristic principles discovered from observing bird flocks or fish schools. Unlike other partition clustering algorithms such as K-means, the Flocking-based algorithm does not require initial partitional seeds. The algorithm generates a clustering of a given set of data through the embedding of the high-dimensional data items on a two-dimensional grid for easy clustering result retrieval and visualization. Inspired by the self-organized behavior of bird flocks, we represent each document object with a flock boid. The simple local rules followed by each flock boid result in the entire document flock generating complex global behaviors, which eventually result in a clustering of the documents. We evaluate the efficiency of our algorithm with both a synthetic dataset and a real document collection that includes 100 news articles collected from the Internet. Our results show that the Flocking clustering algorithm achieves better performance compared to the K-means and Ant clustering algorithms for real document clustering.
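A toy sketch of the flocking idea, not the authors' implementation: each document is a boid on a 2-D plane, and within a neighborhood radius similar documents attract while dissimilar ones repel, so spatial clusters emerge. All parameters and the binary similarity function are illustrative assumptions:

```python
import math
import random

def flock_cluster(docs, sim, iters=300, radius=3.0, step=0.05, seed=0):
    """Toy flocking clustering: each item is a boid on a 2-D plane; neighbors
    within `radius` exert a unit-vector pull if similar and a push if not,
    so similar items end up spatially grouped."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 4), rng.uniform(0, 4)] for _ in docs]
    for _ in range(iters):
        for i in range(len(docs)):
            fx = fy = 0.0
            for j in range(len(docs)):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                d = math.hypot(dx, dy)
                if 1e-9 < d < radius:
                    # Cohesion for similar boids, separation for dissimilar ones.
                    sign = 1.0 if sim(docs[i], docs[j]) > 0.5 else -1.0
                    fx += sign * dx / d
                    fy += sign * dy / d
            pos[i][0] += step * fx
            pos[i][1] += step * fy
    return pos

docs = ["a"] * 5 + ["b"] * 5  # two "topics" with binary similarity
pos = flock_cluster(docs, lambda u, v: 1.0 if u == v else 0.0)
```

Reading the clusters back off the 2-D grid, rather than from a partition vector, is the visualization convenience the abstract describes.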
Semioptimal practicable algorithmic cooling
International Nuclear Information System (INIS)
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-01-01
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
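The flavor of a single compression step in practicable algorithmic cooling can be conveyed with an idealized 3-spin majority-vote boost, which raises a small bias ε to (3ε - ε³)/2, roughly 1.5ε; this is a generic textbook step, not the SOPAC schedule itself:

```python
def compress_polarization(eps):
    """Idealized 3-spin compression: majority-vote three spins of bias eps
    onto one target spin. Algebraically the new bias is (3*eps - eps**3)/2,
    about a 1.5x boost for small eps."""
    p = (1 + eps) / 2  # probability that one spin points 'up'
    p_majority = p ** 3 + 3 * p ** 2 * (1 - p)  # at least two of three 'up'
    return 2 * p_majority - 1  # convert back to a bias in [-1, 1]

eps = 0.01
boosted = compress_polarization(eps)
print(boosted)  # algebraically (3*eps - eps**3)/2
```

Recursive algorithms like PAC and SOPAC apply such steps level by level, interleaved with resets of the warmed spins against the bath, which is how they climb past the per-step 1.5x limit.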
Majorization arrow in quantum-algorithm design
International Nuclear Information System (INIS)
Latorre, J.I.; Martin-Delgado, M.A.
2002-01-01
We apply majorization theory to study the quantum algorithms known so far and find that there is a majorization principle underlying the way they operate. Grover's algorithm is a neat instance of this principle, where majorization works step by step until the optimal target state is found. Extensions of this situation are also found in algorithms based on quantum adiabatic evolution and in the family of quantum phase-estimation algorithms, including Shor's algorithm. We state that in quantum algorithms the time arrow is a majorization arrow.
Principle Paradigms Revisiting the Dublin Core 1:1 Principle
Urban, Richard J.
2012-01-01
The Dublin Core "1:1 Principle" asserts that "related but conceptually different entities, for example a painting and a digital image of the painting, are described by separate metadata records" (Woodley et al., 2005). While this seems to be a simple requirement, studies of metadata quality have found that cultural heritage…
A matrix S for all simple current extensions
International Nuclear Information System (INIS)
Fuchs, J.; Schellekens, A.N.; Schweigert, C.
1996-01-01
A formula is presented for the modular transformation matrix S for any simple current extension of the chiral algebra of a conformal field theory. This provides in particular an algorithm for resolving arbitrary simple current fixed points, in such a way that the matrix S we obtain is unitary and symmetric and furnishes a modular group representation. The formalism works in principle for any conformal field theory. A crucial ingredient is a set of matrices S^J_ab, where J is a simple current and a and b are fixed points of J. We expect that these input matrices realize the modular group for the torus one-point functions of the simple currents. In the case of WZW-models these matrices can be identified with the S-matrices of the orbit Lie algebras that were introduced recently. As a special case of our conjecture we obtain the modular matrix S for WZW-theories based on group manifolds that are not simply connected, as well as for most coset models. (orig.)
Wilkesman, Jeff; Kurz, Liliana
2017-01-01
Zymography, the detection, identification, and even quantification of enzyme activity fractionated by gel electrophoresis, has received increasing attention in the last years, as revealed by the number of articles published. A number of enzymes are routinely detected by zymography, especially with clinical interest. This introductory chapter reviews the major principles behind zymography. New advances of this method are basically focused towards two-dimensional zymography and transfer zymography as will be explained in the rest of the chapters. Some general considerations when performing the experiments are outlined as well as the major troubleshooting and safety issues necessary for correct development of the electrophoresis.
International Nuclear Information System (INIS)
Wilson, P.D.
1996-01-01
Some basic explanations are given of the principles underlying the nuclear fuel cycle, starting with the physics of atomic and nuclear structure and continuing with nuclear energy and reactors, fuel and waste management, and finally a discussion of economics and the future. An important aspect of the fuel cycle concerns the possibility of "closing the back end", i.e. reprocessing the waste or unused fuel in order to re-use it in reactors of various kinds. The alternative, the "once-through" cycle, discards the discharged fuel completely. An interim measure involves the prolonged storage of highly radioactive waste fuel. (UK)
Unsupervised Classification Using Immune Algorithm
Al-Muallim, M. T.; El-Kouatly, R.
2012-01-01
An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The new algorithm is data-driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
Quantum Computations: Fundamentals and Algorithms
International Nuclear Information System (INIS)
Duplij, S.A.; Shapoval, I.I.
2007-01-01
Basic concepts of quantum information theory and the principles of quantum computation are considered, together with the possibility of building on this basis a device, called a quantum computer, that is unique in its computational power and operating principle. The main building blocks of quantum logic, schemes for implementing quantum computations, and some of the effective quantum algorithms known today, designed to realize the advantages of quantum over classical computation, are presented. Among them, a special place is taken by Shor's factorization algorithm and Grover's unsorted-database search algorithm. The phenomenon of decoherence, its influence on the stability of a quantum computer, and methods of quantum error correction are described.
Demonstrating Fermat's Principle in Optics
Paleiov, Orr; Pupko, Ofir; Lipson, S. G.
2011-01-01
We demonstrate Fermat's principle in optics by a simple experiment using reflection from an arbitrarily shaped one-dimensional reflector. We investigated a range of possible light paths from a lamp to a fixed slit by reflection in a curved reflector and showed by direct measurement that the paths along which light is concentrated have either…
De Götzen , Amalia; Mion , Luca; Tache , Olivier
2007-01-01
We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
Essential algorithms a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Maximum-entropy clustering algorithm and its global convergence analysis
Institute of Scientific and Technical Information of China (English)
[No author listed]
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
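The abstract does not give the update equations; a minimal sketch of one common entropy-regularized soft-clustering iteration (memberships form a Boltzmann distribution over squared distances, with a temperature-like parameter beta; the parameter names and the 1-D setting are assumptions, not the paper's formulation) might look like:

```python
import math
import random

def max_entropy_cluster(points, k, beta, iters=50):
    """Soft clustering of 1-D points: memberships are a Boltzmann
    distribution over squared distances; as beta -> infinity the
    assignments harden and the iteration reduces to hard C-means."""
    centers = random.sample(points, k)
    for _ in range(iters):
        # E-step: soft memberships r[i][j] proportional to exp(-beta * (x_i - c_j)^2)
        memberships = []
        for x in points:
            w = [math.exp(-beta * (x - c) ** 2) for c in centers]
            s = sum(w)
            memberships.append([wi / s for wi in w])
        # M-step: each center is the membership-weighted mean of the data
        for j in range(k):
            num = sum(memberships[i][j] * points[i] for i in range(len(points)))
            den = sum(memberships[i][j] for i in range(len(points)))
            centers[j] = num / den
    return sorted(centers)
```

On two well-separated 1-D clusters the centers converge to the cluster means, and raising beta sharpens the memberships toward the hard C-means limit.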
VARIATIONAL PRINCIPLE FOR PLANETARY INTERIORS
International Nuclear Information System (INIS)
Zeng, Li; Jacobsen, Stein B.
2016-01-01
In the past few years, the number of confirmed planets has grown above 2000. It is clear that they represent a diversity of structures not seen in our own solar system. In addition to very detailed interior modeling, it is valuable to have a simple analytical framework for describing planetary structures. The variational principle is a fundamental principle in physics, entailing that a physical system follows the trajectory which minimizes its action. It is an alternative to the differential equation formulation of a physical system. Applying the variational principle to the planetary interior can beautifully summarize the set of differential equations into one, which provides us with some insight into the problem. From this principle, a universal mass–radius relation, an estimate of the error propagation from the equation of state to the mass–radius relation, and a form of the virial theorem applicable to planetary interiors are derived.
Microscopic Description of Le Chatelier's Principle
Novak, Igor
2005-01-01
A simple approach that "demystifies" Le Chatelier's principle (LCP) and stimulates students to think about the fundamental physical background behind the well-known principle is presented. The approach uses microscopic descriptors of matter like energy levels and populations and does not require any assumption about the fixed amount of substance being…
Mao-Gilles Stabilization Algorithm
Directory of Open Access Journals (Sweden)
Jérôme Gilles
2013-07-01
Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.
Deriving the Normalized Min-Sum Algorithm from Cooperative Optimization
Huang, Xiaofei
2006-01-01
The normalized min-sum algorithm can achieve near-optimal performance at decoding LDPC codes. However, it is a critical question to understand the mathematical principle underlying the algorithm. Traditionally, people thought that the normalized min-sum algorithm is a good approximation to the sum-product algorithm, the best known algorithm for decoding LDPC codes and Turbo codes. This paper offers an alternative approach to understanding the normalized min-sum algorithm: the algorithm is derived from cooperative optimization.
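For context, the check-node update that distinguishes normalized min-sum from plain min-sum can be sketched as follows (the normalization factor alpha = 0.8 is a typical illustrative value, not one taken from this paper):

```python
def normalized_min_sum_check(messages, alpha=0.8):
    """Check-node update of normalized min-sum decoding for LDPC codes.

    For each edge i, the outgoing message is the product of the signs of
    all *other* incoming messages times alpha * (minimum of their
    magnitudes). The factor alpha < 1 damps min-sum's systematic
    overestimate of the magnitude relative to sum-product."""
    out = []
    for i in range(len(messages)):
        others = messages[:i] + messages[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        out.append(alpha * sign * min(abs(m) for m in others))
    return out
```

For incoming log-likelihood ratios [1.5, -0.8, 2.0] the outgoing messages keep the correct parity-driven signs while their magnitudes are scaled down by alpha.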
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
Mandal, Abhijit; Ram, Chhape; Mourya, Ankur; Singh, Navin
2017-01-01
To establish trends of the estimation error of dose calculations by the anisotropic analytical algorithm (AAA) with respect to doses measured by thermoluminescent dosimeters (TLDs) in air-water heterogeneity for small-field-size photon beams. TLDs were irradiated along the central axis of the photon beam in four different solid water phantom geometries using three small-field-size single beams. The depth dose profiles were estimated using the AAA calculation model for each field size. The estimated and measured depth dose profiles were compared. The overestimation (OE) within the air cavity depends on field size (f) and distance (x) from the solid water-air interface, and was formulated as OE = -(0.63f + 9.40)x² + (-2.73f + 58.11)x + (0.06f² - 1.42f + 15.67). At the post-cavity point adjacent to the interface and at points distal from the interface, the dependence on field size (f) is OE = 0.42f² - 8.17f + 71.63 and OE = 0.84f² - 1.56f + 17.57, respectively. The trend of the estimation error of the AAA dose calculation algorithm with respect to measured values has been formulated throughout the radiation path length along the central axis of a 6 MV photon beam in an air-water heterogeneity combination for small-field-size photon beams generated by a 6 MV linear accelerator.
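The fitted in-cavity formula from the abstract can be evaluated directly; a minimal transcription (the abstract does not state the units of f and x, so treat the inputs as being in whatever units the fit used):

```python
def oe_in_cavity(f, x):
    """Overestimation OE of the AAA dose inside the air cavity, per the
    fitted model quoted in the abstract:
    OE = -(0.63 f + 9.40) x^2 + (-2.73 f + 58.11) x
         + (0.06 f^2 - 1.42 f + 15.67),
    where f is the field size and x the distance from the
    solid-water/air interface (units as in the original fit)."""
    return (-(0.63 * f + 9.40) * x ** 2
            + (-2.73 * f + 58.11) * x
            + (0.06 * f ** 2 - 1.42 * f + 15.67))
```

At the interface itself (x = 0) only the constant term in f survives, so the formula reduces to OE = 0.06f² - 1.42f + 15.67.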
The action uncertainty principle and quantum gravity
Mensky, Michael B.
1992-02-01
Results of the path-integral approach to the quantum theory of continuous measurements have been formulated in a preceding paper in the form of an inequality of the type of the uncertainty principle. The new inequality was called the action uncertainty principle (AUP). It was shown that the AUP allows one to find in a simple way which outputs of the continuous measurements will occur with high probability. Here a simpler form of the AUP is formulated: δS ≳ ħ. When applied to quantum gravity, it leads in a very simple way to the Rosenfeld inequality for measurability of the average curvature.
A Simple Inexpensive Procedure for Illustrating Some Principles of Tomography
Darvey, Ivan G.
2013-01-01
The experiment proposed here illustrates some concepts of tomography via a qualitative determination of the relative concentration of various dilutions of food dye without "a priori" knowledge of the concentration of each dye mixture. This is performed in a manner analogous to computed tomography (CT) scans. In order to determine the…
Nonlinear optics principles and applications
Li, Chunfei
2017-01-01
This book reflects the latest advances in nonlinear optics. Besides the simple, strict mathematical deduction, it also discusses the experimental verification and possible future applications, such as the all-optical switches. It consistently uses the practical unit system throughout. It employs simple physical images, such as "light waves" and "photons" to systematically explain the main principles of nonlinear optical effects. It uses the first-order nonlinear wave equation in frequency domain under the condition of “slowly varying amplitude approximation" and the classical model of the interaction between the light and electric dipole. At the same time, it also uses the rate equations based on the energy-level transition of particle systems excited by photons and the energy and momentum conservation principles to explain the nonlinear optical phenomenon. The book is intended for researchers, engineers and graduate students in the field of the optics, optoelectronics, fiber communication, information tech...
Directory of Open Access Journals (Sweden)
Elena ANGHEL
2015-07-01
Full Text Available "I wish the Law this: that all legal obligations should be executed with the scrupulosity with which moral obligations are performed by those people who feel bound by them ...", as so beautifully expressed in Nicolae Titulescu's words. Life in society means more than a simple coexistence of human beings; it actually means living together, collaborating and cooperating. That is why I always have to relate to other people and to be aware that only by limiting my freedom of action is the freedom of others feasible. Neminem laedere should be a principle of life for each of us. The individual is a responsible being. But responsibility exceeds legal prescriptions. The Romanian Constitution underlines that I have to exercise my rights and freedoms in good faith, without infringing the rights and freedoms of others. The legal norm, a developer of the constitutional principles, is endowed with sanction, which grants it exigibility. But I wonder: if I choose to obey the law, is my decision essentially determined only by the fear of punishment? Or is it because I am a rational being, who has developed during its life a conscience of values, and thus I understand that I have to respect the law and choose to comply with it?
Storage capacity of the Tilinglike Learning Algorithm
International Nuclear Information System (INIS)
Buhot, Arnaud; Gordon, Mirta B.
2001-01-01
The storage capacity of an incremental learning algorithm for the parity machine, the Tilinglike Learning Algorithm, is analytically determined in the limit of a large number of hidden perceptrons. Different learning rules for the simple perceptron are investigated. The usual Gardner-Derrida rule leads to a storage capacity close to the upper bound, which is independent of the learning algorithm considered
On König's root finding algorithms
DEFF Research Database (Denmark)
Buff, Xavier; Henriksen, Christian
2003-01-01
In this paper, we first recall the definition of a family of root-finding algorithms known as König's algorithms. We establish some local and some global properties of those algorithms. We give a characterization of rational maps which arise as König's methods of polynomials with simple roots. We...
The gauge principle vs. the equivalence principle
International Nuclear Information System (INIS)
Gates, S.J. Jr.
1984-01-01
Within the context of field theory, it is argued that the role of the equivalence principle may be replaced by the principle of gauge invariance to provide a logical framework for theories of gravitation
Equivalence principles and electromagnetism
Ni, W.-T.
1977-01-01
The implications of the weak equivalence principles are investigated in detail for electromagnetic systems in a general framework. In particular, it is shown that the universality of free-fall trajectories (Galileo weak equivalence principle) does not imply the validity of the Einstein equivalence principle. However, the Galileo principle plus the universality of free-fall rotation states does imply the Einstein principle.
Substoichiometric method in the simple radiometric analysis
International Nuclear Information System (INIS)
Ikeda, N.; Noguchi, K.
1979-01-01
The substoichiometric method is applied to simple radiometric analysis. Two methods - the standard reagent method and the standard sample method - are proposed. The validity of the principle of the methods is verified experimentally in the determination of silver by the precipitation method, or of zinc by the ion-exchange or solvent-extraction method. The proposed methods are simple and rapid compared with the conventional superstoichiometric method. (author)
Online learning algorithm for ensemble of decision rules
Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2011-01-01
We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach
Simple Electromagnetic Analysis in Cryptography
Directory of Open Access Journals (Sweden)
Zdenek Martinasek
2012-07-01
Full Text Available The article describes the main principles and methods of simple electromagnetic analysis (SEMA) and thus provides an overview of the technique. The introductory chapters describe the specific SPA attack using visual inspection of EM traces, the template-based attack, and the collision attack. After reading the article, the reader is sufficiently informed of the context of SEMA. Another aim of the article is a practical realization of SEMA focused on an AES implementation. The visual inspection of the EM trace of AES is performed step by step, and the result is the determination of the secret key's Hamming weight. On the resulting EM trace, the Hamming weights of the secret key, 1 to 8, were clearly visible. This method allows a reduction of the number of possible keys for a subsequent brute-force attack.
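The key-space reduction described above rests on the Hamming-weight leakage model: each key byte's EM amplitude reveals its number of set bits, which shrinks the candidate set per byte from 256 to C(8, hw). A minimal sketch of that accounting (illustrative only; not the paper's measurement code):

```python
from math import comb

def hamming_weight(byte):
    """Number of set bits in a byte; the quantity SEMA reads off the trace."""
    return bin(byte & 0xFF).count("1")

def keys_remaining(hw_per_byte):
    """Candidate keys surviving once each byte's Hamming weight is known:
    the product over bytes of C(8, hw), versus 256 per byte without it."""
    total = 1
    for hw in hw_per_byte:
        total *= comb(8, hw)
    return total
```

Even in the worst case (every byte has weight 4, the largest binomial), a 16-byte AES key space shrinks from 256^16 to 70^16, a reduction of roughly 2^30 in brute-force work.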
Automatic bounding estimation in modified NLMS algorithm
International Nuclear Information System (INIS)
Shahtalebi, K.; Doost-Hoseini, A.M.
2002-01-01
The Modified Normalized Least Mean Square algorithm, a sign form of NLMS based on set-membership (SM) theory in the class of optimal bounding ellipsoid (OBE) algorithms, requires a priori knowledge of error bounds, which are unknown in most applications. For a special but common case of measurement noise, a simple bound-estimation algorithm is proposed. With some simulation examples the performance of the algorithm is compared with Modified Normalized Least Mean Square
International Nuclear Information System (INIS)
Zane, L.I.
1982-01-01
A simple model of a two-party arms race is developed based on the principle that the race will continue so long as either side can unleash an effective first strike against the other side. The model is used to examine how secrecy, the ABM, MIRV-ing, and an MX system affect the arms race
Simple Numerical Simulation of Strain Measurement
Tai, H.
2002-01-01
By adopting the basic principle of the reflection (and transmission) of a plane polarized electromagnetic wave incident normal to a stack of films of alternating refractive index, a simple numerical code was written to simulate the maximum reflectivity (transmittivity) of a fiber optic Bragg grating corresponding to various non-uniform strain conditions including photo-elastic effect in certain cases.
Directory of Open Access Journals (Sweden)
Lizeth Torres
2018-05-01
Full Text Available The principal aim of a spectral observer is twofold: the reconstruction of a time signal via state estimation and the decomposition of such a signal into the frequencies that make it up. A spectral observer can be catalogued as an online algorithm for time-frequency analysis because it is a method that can compute on the fly the Fourier transform (FT) of a signal, without having the entire signal available from the start. In this regard, this paper presents a novel spectral observer with an adjustable constant gain for reconstructing a given signal by means of the recursive identification of the coefficients of a Fourier series. The reconstruction or estimation of a signal in the context of this work means to find the coefficients of a linear combination of sines and cosines that fits the signal such that it can be reproduced. The design procedure of the spectral observer is presented along with the following applications: (1) the reconstruction of a simple periodic signal, (2) the approximation of both a square and a triangular signal, (3) edge detection in signals by using the Fourier coefficients, (4) the fitting of the historical Bitcoin market data from 1 December 2014 to 8 January 2018 and (5) the estimation of an input force acting upon a Duffing oscillator. To round out this paper, we present a detailed discussion about the results of the applications as well as a comparative analysis of the proposed spectral observer vis-à-vis the Short Time Fourier Transform (STFT), which is a well-known method for time-frequency analysis.
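The paper's observer itself is not reproduced here; a minimal constant-gain recursive estimator of Fourier-series coefficients (an LMS-style sketch, with the gain value and single-harmonic setup as assumptions) conveys the idea of fitting sines and cosines on the fly:

```python
import math

def spectral_observer(samples, dt, omega, n_harmonics, gain=0.1):
    """Recursively fit y(t) ~ a0 + sum_k (a_k cos(k w t) + b_k sin(k w t))
    with a constant-gain gradient update on each incoming sample.
    This is an LMS-style sketch of the recursive-identification idea;
    the paper formulates the observer in state space."""
    theta = [0.0] * (1 + 2 * n_harmonics)  # [a0, a1, b1, a2, b2, ...]
    for n, y in enumerate(samples):
        t = n * dt
        phi = [1.0]  # regressor of basis functions evaluated at t
        for k in range(1, n_harmonics + 1):
            phi.append(math.cos(k * omega * t))
            phi.append(math.sin(k * omega * t))
        err = y - sum(p * th for p, th in zip(phi, theta))
        theta = [th + gain * err * p for th, p in zip(theta, phi)]
    return theta
```

Fed a noise-free signal y(t) = 1 + 2 sin(2πt) sampled at 100 Hz for 20 s, the estimate converges to roughly [1, 0, 2], i.e. the offset and the sine coefficient.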
Elementary functions algorithms and implementation
Muller, Jean-Michel
2016-01-01
This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...
A Parallel Butterfly Algorithm
Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing
2014-01-01
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
A retrodictive stochastic simulation algorithm
International Nuclear Information System (INIS)
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-01-01
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
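For a discrete-time Markov chain the retrodictive idea reduces to Bayes' rule: the forward transition probabilities give the likelihood of the observed final state under each initial state, which is combined with a prior over initial states. A minimal sketch (a two-state toy chain, not the paper's master-equation algorithm):

```python
def retrodict(prior, transition, steps, final_state):
    """Posterior P(x0 | x_T = final_state) proportional to
    P(x_T | x0) * P(x0), for a discrete-time Markov chain with
    row-stochastic matrix `transition` run for `steps` steps."""
    n = len(prior)
    # T^steps by repeated multiplication (fine for small chains).
    power = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(steps):
        power = [[sum(power[i][k] * transition[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
    unnorm = [prior[i] * power[i][final_state] for i in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

With a uniform prior and a chain that tends to stay in its current state, observing state 0 at the end makes state 0 the most probable ancestor, mirroring the gene-sequence ancestry example in the abstract.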
International Nuclear Information System (INIS)
Collins, T.
1985-08-01
A simple criterion governs the beam distortion and/or loss of protons on a fast resonance crossing. Results from numerical integrations are illustrated for simple sextupole, octupole, and 10-pole resonances
First-principles study of complex material systems
He, Lixin
This thesis covers several topics concerning the study of complex materials systems by first-principles methods. It contains four chapters. A brief, introductory motivation of this work will be given in Chapter 1. In Chapter 2, I will give a short overview of the first-principles methods, including density-functional theory (DFT), planewave pseudopotential methods, and the Berry-phase theory of polarization in crystalline insulators. I then discuss in detail the locality and exponential decay properties of Wannier functions and of related quantities such as the density matrix, and their application in linear-scaling algorithms. In Chapter 3, I investigate the interaction of oxygen vacancies and 180° domain walls in tetragonal PbTiO3 using first-principles methods. Our calculations indicate that the oxygen vacancies have a lower formation energy in the domain wall than in the bulk, thereby confirming the tendency of these defects to migrate to, and pin, the domain walls. The pinning energies are reported for each of the three possible orientations of the original Ti-O-Ti bonds, and attempts to model the results with simple continuum models are discussed. CaCu3Ti4O12 (CCTO) has attracted a lot of attention recently because it was found to have an enormous dielectric response over a very wide temperature range. In Chapter 4, I study the electronic and lattice structure, and the lattice dynamical properties, of this system. Our first-principles calculations together with experimental results point towards an extrinsic mechanism as the origin of the unusual dielectric response.
Cognitive radio resource allocation based on coupled chaotic genetic algorithm
International Nuclear Information System (INIS)
Zu Yun-Xiao; Zhou Jie; Zeng Chang-Chang
2010-01-01
A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and a coupled Logistic map, is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations are conducted for cognitive radio resource allocation using the coupled chaotic genetic algorithm, a simple genetic algorithm, and a dynamic allocation algorithm, respectively. The simulation results show that, compared with the simple genetic and dynamic allocation algorithms, the coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in the cognitive radio system, and converges faster.
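The abstract does not specify the coupling scheme, but the basic ingredient, using a Logistic map as a deterministic "chaotic" variate stream for a genetic algorithm, can be sketched as follows (the thresholding-at-0.5 population scheme is an illustrative assumption):

```python
def logistic_sequence(x0, n, r=4.0):
    """Iterate the Logistic map x_{k+1} = r * x_k * (1 - x_k).
    With r = 4 the orbit is chaotic on (0, 1) and can stand in for a
    pseudo-random stream when seeding or mutating a GA population."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def chaotic_bit_population(pop_size, n_bits, x0=0.3):
    """Binary GA population obtained by thresholding chaotic values at 0.5."""
    stream = logistic_sequence(x0, pop_size * n_bits)
    return [[1 if stream[i * n_bits + j] > 0.5 else 0
             for j in range(n_bits)]
            for i in range(pop_size)]
```

Because the map is deterministic, the whole population is reproducible from the single seed x0, while the chaotic orbit still spreads values across (0, 1).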
Structural and computational aspects of simple and influence games
Riquelme Csori, Fabián
2014-01-01
Simple games are a fundamental class of cooperative games. They have a huge relevance in several areas of computer science, social sciences and discrete applied mathematics. The algorithmic and computational complexity aspects of simple games have been gaining notoriety in the recent years. In this thesis we review different computational problems related to properties, parameters, and solution concepts of simple games. We consider different forms of representation of simple games, regular...
From properties to materials: An efficient and simple approach.
Huwig, Kai; Fan, Chencheng; Springborg, Michael
2017-12-21
We present an inverse-design method, the poor man's materials optimization, that is designed to identify materials within a very large class with optimized values for a pre-chosen property. The method combines an efficient genetic-algorithm-based optimization, an automatic approach for generating modified molecules, a simple approach for calculating the property of interest, and a mathematical formulation of the quantity whose value shall be optimized. In order to illustrate the performance of our approach, we study the properties of organic molecules related to those used in dye-sensitized solar cells, whereby we, for the sake of proof of principle, consider benzene as a simple test system. Using a genetic algorithm, the substituents attached to the organic backbone are varied and the best performing molecules are identified. We consider several properties to describe the performance of organic molecules, including the HOMO-LUMO gap, the sunlight absorption, the spatial distance of the orbitals, and the reorganisation energy. The results show that our method is able to identify a large number of good candidate structures within a short time. In some cases, chemical/physical intuition can be used to rationalize the substitution pattern of the best structures, although this is not always possible. The present investigations provide a solid foundation for dealing with more complex and technically relevant systems such as porphyrins. Furthermore, our "properties first, materials second" approach is not limited to solar-energy harvesting but can be applied to many other fields, as briefly is discussed in the paper.
An investigation of genetic algorithms
International Nuclear Information System (INIS)
Douglas, S.R.
1995-04-01
Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
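The search process described above can be illustrated with a minimal GA on the standard one-max toy problem (all parameter values here are illustrative, not taken from the report):

```python
import random

def simple_ga(n_bits=20, pop_size=30, generations=40,
              mutation_rate=0.02, seed=0):
    """Minimal genetic algorithm maximizing the number of 1-bits (one-max),
    with tournament selection, one-point crossover, bit-flip mutation,
    and elitism (the best individual always survives)."""
    rng = random.Random(seed)
    fitness = sum  # one-max: fitness is the count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=fitness)
        next_pop = [best[:]]  # elitism
        while len(next_pop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)  # tournament of 3
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < mutation_rate else b
                     for b in child]                   # bit-flip mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

Elitism makes the best fitness non-decreasing across generations, so the run reliably climbs toward the all-ones string; short schemata of 1-bits are preserved and recombined by crossover, which is the schema intuition the report introduces.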
Robustness of Multiple Clustering Algorithms on Hyperspectral Images
National Research Council Canada - National Science Library
Williams, Jason P
2007-01-01
Various clustering algorithms were employed, including a hierarchical method, ISODATA, K-means, and X-means, and were used on a simple two-dimensional dataset in order to discover potential problems with the algorithms...
Principles of Chemistry (by Michael Munowitz)
Kovac, Reviewed By Jeffrey
2000-05-01
At a time when almost all general chemistry textbooks seem to have become commodities designed by marketing departments to offend no one, it is refreshing to find a book with a unique perspective. Michael Munowitz has written what I can only describe as a delightful chemistry book, full of conceptual insight, that uses a novel and interesting pedagogic strategy. This is a book that has much to recommend it. This is the best-written general chemistry book I have ever read. An editor with whom I have worked recently remarked that he felt his job was to help authors make their writing sing. Well, the writing in Principles of Chemistry sings with the full, rich harmonies and creative inventiveness of the King's Singers or Chanticleer. Here is the first sentence of the introduction: "Central to any understanding of the physical world is one discovery of paramount importance, a truth disarmingly simple yet profound in its implications: matter is not continuous." This is prose to be savored and celebrated. Principles of Chemistry has a distinct perspective on chemistry: the perspective of the physical chemist. The focus is on simplicity, what is common about molecules and reactions; begin with the microscopic and build bridges to the macroscopic. The author's perspective is clear from the organization of the book. After three rather broad introductory chapters, there are four chapters that develop the quantum mechanical theory of atoms and molecules, including a strong treatment of molecular orbital theory. Unlike many books, Principles of Chemistry presents the molecular orbital approach first and introduces valence bond theory later only as an approximation for dealing with more complicated molecules. The usual chapters on descriptive inorganic chemistry are absent (though there is an excellent chapter on organic and biological molecules and reactions as well as one on transition metal complexes). Instead, descriptive chemistry is integrated into the development of
International Nuclear Information System (INIS)
Creutz, M.
1987-11-01
A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^{4/3}.
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Blackwell, William C., Jr.
2004-01-01
In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.
Radiation protection principles
International Nuclear Information System (INIS)
Ismail Bahari
2007-01-01
The presentation outlines aspects of radiation protection principles. It discusses the following subjects: radiation hazards and risk; the objectives of radiation protection; and the three principles of the system, namely justification of practice, optimization of protection and safety, and dose limits.
Principles of project management
1982-01-01
The basic principles of project management as practiced by NASA management personnel are presented. These principles are given as ground rules and guidelines to be used in the performance of research, development, construction or operational assignments.
Fast algorithm of track detection
International Nuclear Information System (INIS)
Nehrguj, B.
1980-01-01
A fast algorithm of variable-slope histograms is proposed, which allows a considerable reduction of computer memory size and is quite simple to carry out. Corresponding FORTRAN subprograms, giving a triple speed gain, have been included in the spiral reader data handling software.
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
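The kinematic condition mentioned in the abstract can be made concrete with a bicycle-model approximation: the turning center lies at the intersection of the perpendiculars to the front and rear wheel headings. A minimal sketch (the bicycle model and the function name are assumptions for illustration, not the paper's full 4WS dynamics):

```python
import math

def turning_center(wheelbase, delta_f, delta_r):
    """Kinematic center of rotation for a bicycle model with front and rear
    steering angles (radians). Frame: rear axle at the origin, vehicle
    pointing along +x; returns (x, y) of the turning center.

    The center is where the perpendiculars to the two wheel headings meet:
        x = -y * tan(delta_r)   (rear-wheel perpendicular through the origin)
        x = L - y * tan(delta_f) (front-wheel perpendicular through (L, 0))
    """
    denom = math.tan(delta_f) - math.tan(delta_r)
    if abs(denom) < 1e-12:
        raise ValueError("straight-line motion: no finite turning center")
    y = wheelbase / denom       # lateral position (signed turning radius part)
    x = -y * math.tan(delta_r)  # longitudinal position of the center
    return x, y
```

For a TWS vehicle (delta_r = 0) the center sits on the rear-axle line (x = 0), which is the familiar low-speed Ackermann geometry.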
The certainty principle (review)
Arbatsky, D. A.
2006-01-01
The certainty principle (2005) made it possible to conceptualize, from more fundamental grounds, both the Heisenberg uncertainty principle (1927) and the Mandelshtam-Tamm relation (1945). In this review I give a detailed explanation and discussion of the certainty principle, oriented to all physicists, both theorists and experimenters.
Quantum Action Principle with Generalized Uncertainty Principle
Gu, Jie
2013-01-01
One of the common features in all promising candidates of quantum gravity is the existence of a minimal length scale, which naturally emerges with a generalized uncertainty principle, or equivalently a modified commutation relation. Schwinger's quantum action principle was modified to incorporate this modification, and was applied to the calculation of the kernel of a free particle, partly recovering the result previously studied using path integral.
Basic principles of forest fuel reduction treatments
James K. Agee; Carl N. Skinner
2005-01-01
Successful fire exclusion in the 20th century has created severe fire problems across the West. Not every forest is at risk of uncharacteristically severe wildfire, but drier forests are in need of active management to mitigate fire hazard. We summarize a set of simple principles important to address in fuel reduction treatments: reduction of surface fuels, increasing...
Simple algorithm in the management of fetal sacrococcygeal
African Journals Online (AJOL)
significant effect on the health of the pregnant mother as it may be associated with severe anaemia, cardiac failure, maternal mirror syndrome and even death. The developing fetus with SCT is prone to high output ... echocardiography and Doppler echocardiography. Fetal MRI is observed to give a clearer configuration of.
A simple consensus algorithm for distributed averaging in random ...
Indian Academy of Sciences (India)
Random geographical networks are realistic models for wireless sensor ... work are cheap, unreliable, with limited computational power and limited .... signal xj from node j, j does not need to transmit its degree to i in order to let i compute.
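Although the abstract above is fragmentary, the standard distributed-averaging iteration it refers to is simple to state: each node repeatedly moves toward its neighbors' values, and on a connected graph every node converges to the global average. A minimal sketch (the step size, example graph, and function name are assumptions, not taken from the paper):

```python
def consensus_average(values, neighbors, eps=0.1, iters=500):
    """Linear consensus iteration: x_i <- x_i + eps * sum_j (x_j - x_i).

    `values` holds each node's initial measurement; `neighbors[i]` lists the
    nodes adjacent to node i (undirected graph). With eps smaller than
    1/max_degree on a connected graph, all nodes converge to the average,
    and the sum of the values is preserved at every step.
    """
    x = list(values)
    for _ in range(iters):
        x = [xi + eps * sum(x[j] - xi for j in nbrs)
             for xi, nbrs in zip(x, neighbors)]
    return x

# A small connected 4-node path graph: 0-1-2-3 (hypothetical example).
path = [[1], [0, 2], [1, 3], [2]]
```

Each node only needs its neighbors' current values, which matches the cheap, local communication constraint of wireless sensor networks.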
Adaboost Ensemble with Simple Genetic Algorithm for Student Prediction Mode
Ahmed Sharaf ElDen; Malaka A. Moustafa; Hany M. Harb; Abdel H. Emara
2013-01-01
Predicting student performance is a great concern of higher education management. This prediction helps to identify and improve students' performance. Several factors may improve this performance. In the present study, we employ data mining processes, particularly classification, to enhance the quality of the higher educational system. Recently, a new direction has been taken to improve classification accuracy by combining classifiers. In this paper, we design and evaluate a f...
The Principle of Energetic Consistency
Cohn, Stephen E.
2009-01-01
A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of
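The quadratic identity behind this statement can be written in one line. Assuming the total energy is the sum of squares of the natural energy variables, E(x) = xᵀx (a notational assumption for illustration, not taken from the abstract), taking conditional expectations gives

```latex
\mathbb{E}\bigl[E(x)\mid y\bigr]
  = \mathbb{E}\bigl[x^{\mathsf{T}}x \mid y\bigr]
  = \bar{x}^{\mathsf{T}}\bar{x} + \operatorname{tr}P
  = E(\bar{x}) + \operatorname{tr}P,
\qquad \bar{x} \equiv \mathbb{E}[x \mid y],
```

where P is the conditional covariance. If the dynamics conserve E(x), the left-hand side is constant between observations, so any numerical loss of total variance tr P must reappear as spurious energy in the conditional mean.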
Dimensional cosmological principles
International Nuclear Information System (INIS)
Chi, L.K.
1985-01-01
The dimensional cosmological principles proposed by Wesson require that the density, pressure, and mass of cosmological models be functions of the dimensionless variables which are themselves combinations of the gravitational constant, the speed of light, and the spacetime coordinates. The space coordinate is not the comoving coordinate. In this paper, the dimensional cosmological principle and the dimensional perfect cosmological principle are reformulated by using the comoving coordinate. The dimensional perfect cosmological principle is further modified to allow the possibility that mass creation may occur. Self-similar spacetimes are found to be models obeying the new dimensional cosmological principle
Approximation algorithms for guarding holey polygons ...
African Journals Online (AJOL)
Guarding edges of polygons is a version of art gallery problem.The goal is finding the minimum number of guards to cover the edges of a polygon. This problem is NP-hard, and to our knowledge there are approximation algorithms just for simple polygons. In this paper we present two approximation algorithms for guarding ...
The Porter Stemming Algorithm: Then and Now
Willett, Peter
2006-01-01
Purpose: In 1980, Porter presented a simple algorithm for stemming English language words. This paper summarises the main features of the algorithm, and highlights its role not just in modern information retrieval research, but also in a range of related subject domains. Design/methodology/approach: Review of literature and research involving use…
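Porter's algorithm is short enough that its first step can be shown directly. The sketch below implements only step 1a of the 1980 paper (plural endings), using Porter's own examples; the full stemmer adds four more steps gated by a vowel-consonant measure, so this is an illustration rather than a complete implementation:

```python
def porter_step1a(word):
    """Step 1a of Porter's 1980 stemmer: plural suffixes only.

    Rules (applied longest match first):
        SSES -> SS, IES -> I, SS -> SS (unchanged), S -> (removed).
    """
    if word.endswith("sses"):
        return word[:-2]          # caresses -> caress
    if word.endswith("ies"):
        return word[:-2]          # ponies -> poni
    if word.endswith("ss"):
        return word               # caress -> caress
    if word.endswith("s"):
        return word[:-1]          # cats -> cat
    return word
```

The longest-match-first ordering matters: testing `"s"` before `"ss"` would wrongly strip the final letter of "caress".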
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
Kinetics of enzyme action: essential principles for drug hunters
National Research Council Canada - National Science Library
Stein, Ross L
2011-01-01
... field. Beginning with the most basic principles pertaining to simple, one-substrate enzyme reactions and their inhibitors, and progressing to a thorough treatment of two-substrate enzymes, Kinetics of Enzyme Action...
The serial message-passing schedule for LDPC decoding algorithms
Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue
2015-12-01
The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. It has the disadvantage that updated messages cannot be used until the next iteration, which reduces the convergence speed. To address this, the layered decoding algorithm (LBP), based on a serial message-passing schedule, has been proposed. In this paper the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are then proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the LBP algorithm's decoding speed while maintaining good decoding performance.
A simple technique to increase profits in wood products marketing
George B. Harpole
1971-01-01
Mathematical models can be used to solve quickly some simple day-to-day marketing problems. This note explains how a sawmill production manager, who has an essentially fixed-capacity mill, can solve several optimization problems by using pencil and paper, a forecast of market prices, and a simple algorithm. One such problem is to maximize profits in an operating period...
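The note's algorithm is not reproduced in the abstract, but a fixed-capacity profit-maximization of this kind reduces to ranking products by profit per unit of scarce capacity and filling demand in that order, which is exactly a pencil-and-paper computation (the one-constraint LP / fractional-knapsack argument). A hypothetical sketch with invented product names and numbers:

```python
def best_product_mix(capacity, products):
    """Greedy allocation of a fixed mill capacity (e.g. machine-hours).

    `products` maps name -> (profit_per_unit, capacity_per_unit, max_demand).
    With a single capacity constraint, sorting by profit per unit of
    capacity consumed and filling demand in that order is optimal.
    Returns (production plan, total profit).
    """
    plan, total = {}, 0.0
    ranked = sorted(products.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    for name, (profit, cap_per_unit, demand) in ranked:
        units = min(demand, capacity / cap_per_unit)
        if units <= 0:
            continue
        capacity -= units * cap_per_unit
        plan[name] = units
        total += units * profit
    return plan, total
```

With two products this is small enough to check by hand against a forecast of market prices, which is the spirit of the note.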
Hardware modules of the RSA algorithm
Directory of Open Access Journals (Sweden)
Škobić Velibor
2014-01-01
This paper describes the basic principles of data protection using the RSA algorithm, as well as algorithms for its calculation. The RSA algorithm is implemented on the FPGA integrated circuit EP4CE115F29C7, family Cyclone IV, Altera. Four modules of the Montgomery algorithm are designed using VHDL. Synthesis and simulation are done using Quartus II software and ModelSim. The modules are analyzed for different key lengths (16 to 1024) in terms of the number of logic elements, the maximum frequency and speed.
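The arithmetic those hardware modules accelerate can be sketched in software. The toy below is textbook RSA with deliberately tiny primes (the paper's FPGA design and Montgomery reduction details are not reproduced; each modular multiplication here is what a Montgomery multiplier would compute in hardware):

```python
def modexp(base, exp, mod):
    """Square-and-multiply modular exponentiation, the core RSA operation."""
    result = 1
    base %= mod
    while exp:
        if exp & 1:
            result = result * base % mod
        base = base * base % mod
        exp >>= 1
    return result

# Textbook-sized toy parameters (real keys are 1024+ bits).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent (modular inverse, Python 3.8+)

cipher = modexp(42, e, n)   # encrypt message m = 42
plain = modexp(cipher, d, n)
```

The key-length analysis in the paper corresponds to varying the bit width of `n`; the loop above runs once per key bit, which is why logic usage and maximum frequency scale with key length.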
MANAGER PRINCIPLES AS BASIS OF MANAGEMENT STYLE TRANSFORMATION
R. A. Kopytov
2011-01-01
The paper considers an approach which is based on non-conventional mechanisms of management style formation. The preset level of sustainable management is maintained by a self-organized environment created in the process of transforming the management style into efficient management principles. Their efficiency is checked within an adaptive algorithm. The algorithm is developed on the basis of a combination of evaluative tools and a base of operational proofs. The operating algorithm capability is te...
Application of the maximum entropy production principle to electrical systems
International Nuclear Information System (INIS)
Christen, Thomas
2006-01-01
For a simple class of electrical systems, the principle of the maximum entropy production rate (MaxEP) is discussed. First, we compare the MaxEP principle and the principle of the minimum entropy production rate and illustrate the superiority of the MaxEP principle for the example of two parallel constant resistors. Secondly, we show that the Steenbeck principle for the electric arc as well as the ohmic contact behaviour of space-charge limited conductors follow from the MaxEP principle. In line with work by Dewar, the investigations seem to suggest that the MaxEP principle can also be applied to systems far from equilibrium, provided appropriate information is available that enters the constraints of the optimization problem. Finally, we apply the MaxEP principle to a mesoscopic system and show that the universal conductance quantum, e^2/h, of a one-dimensional ballistic conductor can be estimated.
DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM
Directory of Open Access Journals (Sweden)
TAYSEER S. ATIA
2014-08-01
The Blowfish algorithm is a strong, simple block cipher used to encrypt data in 64-bit blocks. The key and S-box generation process in this algorithm requires time and memory space, which makes the algorithm inconvenient for use in smart cards or in applications that require changing the secret key frequently. In this paper a new key and S-box generation process was developed based on the Self-Synchronization Stream Cipher (SSS) algorithm, whose key generation process was modified to be used with the Blowfish algorithm. Test results show that the generation process requires relatively little time and reasonably low memory, which enhances the algorithm and gives it the possibility of different usage.
International Nuclear Information System (INIS)
Fuchs, J.
1990-08-01
A complete classification of simple currents of WZW theory is obtained. The proof is based on an analysis of the quantum dimensions of the primary fields. Simple currents are precisely the primaries with unit quantum dimension; for WZW theories explicit formulae for the quantum dimensions can be derived so that an identification of the fields with unit quantum dimension is possible. (author). 19 refs.; 2 tabs
Clarifying the Misconception about the Principle of Floatation
Yadav, Manoj K.
2014-01-01
This paper aims to clarify the misconception about the violation of the principle of floatation. Improper understanding of the definition of "displaced fluid" by a floating body leads to the misconception. With the help of simple experiments, this article shows that there is no violation of the principle of floatation.
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n^2 variables). Included are subroutines for rearranging a matrix whose upper or lower triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel
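The packed-storage idea is easy to demonstrate even without the authors' cache-friendly block hybrid layout. A minimal sketch assuming plain lower-triangular column-major packing (n(n+1)/2 entries), not the format of the article:

```python
import math

def cholesky_packed(a_packed, n):
    """Cholesky factorization A = L L^T on lower-triangular packed storage.

    `a_packed` holds the lower triangle of a symmetric positive-definite
    n-by-n matrix column by column in n(n+1)/2 entries. Returns the factor
    L in the same packed layout.
    """
    L = list(a_packed)
    # Offset of entry (i, j), i >= j: columns 0..j-1 hold n, n-1, ... entries.
    idx = lambda i, j: j * n - j * (j - 1) // 2 + (i - j)
    for j in range(n):
        # Left-looking update: subtract contributions of earlier columns.
        for k in range(j):
            ljk = L[idx(j, k)]
            for i in range(j, n):
                L[idx(i, j)] -= L[idx(i, k)] * ljk
        d = math.sqrt(L[idx(j, j)])
        for i in range(j, n):
            L[idx(i, j)] /= d
    return L
```

The speed advantage the article reports comes from replacing this scalar triple loop with blocked kernels that keep square tiles of the packed triangle in cache.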
Biomechanics principles and practices
Peterson, Donald R
2014-01-01
Presents Current Principles and Applications. Biomedical engineering is considered to be the most expansive of all the engineering sciences. Its function involves the direct combination of core engineering sciences as well as knowledge of nonengineering disciplines such as biology and medicine. Drawing on material from the biomechanics section of The Biomedical Engineering Handbook, Fourth Edition and utilizing the expert knowledge of respected published scientists in the application and research of biomechanics, Biomechanics: Principles and Practices discusses the latest principles and applicat
Dolan, Thomas James
2013-01-01
Fusion Research, Volume I: Principles provides a general description of the methods and problems of fusion research. The book contains three main parts: Principles, Experiments, and Technology. The Principles part describes the conditions necessary for a fusion reaction, as well as the fundamentals of plasma confinement, heating, and diagnostics. The Experiments part details about forty plasma confinement schemes and experiments. The last part explores various engineering problems associated with reactor design, vacuum and magnet systems, materials, plasma purity, fueling, blankets, neutronics
Combinatorial structures to modeling simple games and applications
Molinero, Xavier
2017-09-01
We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the basis for representing some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs. This lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms. This lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains open to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.
Database principles programming performance
O'Neil, Patrick
2014-01-01
Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi
National Research Council Canada - National Science Library
Walker, C. H
2012-01-01
"Now in its fourth edition, this exceptionally accessible text provides students with a multidisciplinary perspective and a grounding in the fundamental principles required for research in toxicology today...
Rack Protection Monitor - A Simple System
International Nuclear Information System (INIS)
Orr, S.
1997-12-01
The Rack Protection Monitor is a simple, fail-safe device to monitor smoke, temperature and ventilation sensors. It accepts inputs from redundant sensors and has a hardwired algorithm to prevent nuisance power trips due to random sensor failures. When a sensor is triggered, the Rack Protection Monitor latches and annunciates the alarm. If another sensor is triggered, the Rack Protection Monitor locally shuts down the power to the relay rack and sends an alarm to central control.
Fermat's principle and nonlinear traveltime tomography
International Nuclear Information System (INIS)
Berryman, J.G. (Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, New York 10012)
1989-01-01
Fermat's principle shows that a definite convex set of feasible slowness models, depending only on the traveltime data, exists for the fully nonlinear traveltime inversion problem. In a new iterative reconstruction algorithm, the minimum number of nonfeasible ray paths is used as a figure of merit to determine the optimum size of the model correction at each step. The numerical results show that the new algorithm is robust, stable, and produces very good reconstructions even for high contrast materials where standard methods tend to diverge
Eisenhardt, K M; Sull, D N
2001-01-01
The success of Yahoo!, eBay, Enron, and other companies that have become adept at morphing to meet the demands of changing markets can't be explained using traditional thinking about competitive strategy. These companies have succeeded by pursuing constantly evolving strategies in market spaces that were considered unattractive according to traditional measures. In this article--the third in an HBR series by Kathleen Eisenhardt and Donald Sull on strategy in the new economy--the authors ask, what are the sources of competitive advantage in high-velocity markets? The secret, they say, is strategy as simple rules. The companies know that the greatest opportunities for competitive advantage lie in market confusion, but they recognize the need for a few crucial strategic processes and a few simple rules. In traditional strategy, advantage comes from exploiting resources or stable market positions. In strategy as simple rules, advantage comes from successfully seizing fleeting opportunities. Key strategic processes, such as product innovation, partnering, or spinout creation, place the company where the flow of opportunities is greatest. Simple rules then provide the guidelines within which managers can pursue such opportunities. Simple rules, which grow out of experience, fall into five broad categories: how-to rules, boundary conditions, priority rules, timing rules, and exit rules. Companies with simple-rules strategies must follow the rules religiously and avoid the temptation to change them too frequently. A consistent strategy helps managers sort through opportunities and gain short-term advantage by exploiting the attractive ones. In stable markets, managers rely on complicated strategies built on detailed predictions of the future. But when business is complicated, strategy should be simple.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms
International Nuclear Information System (INIS)
Wang Guobao; Qi Jinyi
2010-01-01
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
APPLYING THE PRINCIPLES OF ACCOUNTING IN
NAGY CRISTINA MIHAELA; SABĂU CRĂCIUN; ”Tibiscus” University of Timişoara, Faculty of Economic Science
2015-01-01
The application of accounting principles (the accrual-basis principle; the principle of business continuity; the method consistency principle; the prudence principle; the independence principle; the principle of separate valuation of assets and liabilities; the intangibility principle; the non-compensation principle; the principle of substance over form; the principle of threshold significance) to companies that are in bankruptcy procedure has a number of particularities. Thus, some principl...
Optimisation combinatoire Theorie et algorithmes
Korte, Bernhard; Fonlupt, Jean
2010-01-01
This book is the French translation of the fourth and final edition of Combinatorial Optimization: Theory and Algorithms, written by two eminent specialists in the field: Bernhard Korte and Jens Vygen of the University of Bonn in Germany. It emphasizes the theoretical aspects of combinatorial optimization as well as efficient, exact algorithms for solving problems. In this it differs from the simpler heuristic approaches often described elsewhere. The book contains numerous concise and elegant proofs of difficult results. Intended for students...
A new algorithm for coding geological terminology
Apon, W.
The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Erroneous codes resulting from using this algorithm are less than 2%.
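The three steps of the direct method can be sketched with a toy code dictionary; the terms, codes, and correction table below are invented for illustration and are not the survey's actual auxiliary files:

```python
# Hypothetical code dictionary and correction table (step 3 auxiliary file).
TERMS = {"coarse sand": "Z3", "fine sand": "Z1", "sand": "Z2", "clay": "K"}
CORRECTIONS = {("Z1", "Z3"): ("Z2",)}   # contradictory pair -> replacement

def encode_log(description):
    text = description.lower()
    codes = []
    # Step 1: search for defined word combinations (longest phrases first,
    # so "fine sand" is matched before the bare term "sand").
    for phrase in sorted(TERMS, key=len, reverse=True):
        if phrase in text:
            codes.append(TERMS[phrase])
            text = text.replace(phrase, " ")
    # Step 2: delete duplicated codes, keeping first occurrences.
    seen, unique = set(), []
    for c in codes:
        if c not in seen:
            seen.add(c)
            unique.append(c)
    # Step 3: correct incorrect code combinations via the auxiliary table.
    for bad, repl in CORRECTIONS.items():
        if all(b in unique for b in bad):
            unique = [c for c in unique if c not in bad] + list(repl)
    return unique
```

Longest-match-first ordering in step 1 is what makes the direct method workable on free-text logs, since compound terms would otherwise be shadowed by their substrings.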
The theory of hybrid stochastic algorithms
International Nuclear Information System (INIS)
Kennedy, A.D.
1989-01-01
These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs
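As a concrete reference point for the simplest building block named above, here is a minimal random-walk Metropolis sampler for a one-dimensional target (an illustrative sketch, not the lattice field theory setting of the lectures; all names and parameters are assumptions):

```python
import math
import random

def metropolis(log_prob, x0, steps, scale=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + scale*u with u ~ U(-1, 1),
    accept with probability min(1, p(x')/p(x)). Returns the chain."""
    rng = random.Random(seed)
    x, lp = x0, log_prob(x0)
    chain = []
    for _ in range(steps):
        prop = x + scale * rng.uniform(-1.0, 1.0)
        lp_prop = log_prob(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Sample a standard Gaussian: log p(x) = -x^2/2 up to a constant,
# which only needs the unnormalized density, as in lattice simulations.
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000, scale=2.0)
```

The hybrid and Langevin algorithms of the lectures replace the blind random-walk proposal with one guided by molecular dynamics, which is what buys their improved volume scaling.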
The genetic difference principle.
Farrelly, Colin
2004-01-01
In the newly emerging debates about genetics and justice three distinct principles have begun to emerge concerning what the distributive aim of genetic interventions should be. These principles are: genetic equality, a genetic decent minimum, and the genetic difference principle. In this paper, I examine the rationale of each of these principles and argue that genetic equality and a genetic decent minimum are ill-equipped to tackle what I call the currency problem and the problem of weight. The genetic difference principle is the most promising of the three principles and I develop this principle so that it takes seriously the concerns of just health care and distributive justice in general. Given the strains on public funds for other important social programmes, the costs of pursuing genetic interventions and the nature of genetic interventions, I conclude that a more lax interpretation of the genetic difference principle is appropriate. This interpretation stipulates that genetic inequalities should be arranged so that they are to the greatest reasonable benefit of the least advantaged. Such a proposal is consistent with prioritarianism and provides some practical guidance for non-ideal societies--that is, societies that do not have the endless amount of resources needed to satisfy every requirement of justice.
International Nuclear Information System (INIS)
Unnikrishnan, C.S.
1994-01-01
Principle of equivalence was the fundamental guiding principle in the formulation of the general theory of relativity. What are its key elements? What are the empirical observations which establish it? What is its relevance to some new experiments? These questions are discussed in this article. (author). 11 refs., 5 figs
van Heerwaarden, A.E.; Kaas, R.
1992-01-01
A premium principle is derived, in which the loading for a risk is the reinsurance loading for an excess-of-loss cover. It is shown that the principle is well-behaved in the sense that it results in larger premiums for risks that are larger in stop-loss order or in stochastic dominance.
International Nuclear Information System (INIS)
Fatmi, H.A.; Resconi, G.
1988-01-01
In 1954 while reviewing the theory of communication and cybernetics the late Professor Dennis Gabor presented a new mathematical principle for the design of advanced computers. During our work on these computers it was found that the Gabor formulation can be further advanced to include more recent developments in Lie algebras and geometric probability, giving rise to a new computing principle
International Nuclear Information System (INIS)
Carr, B.J.
1982-01-01
The anthropic principle (the conjecture that certain features of the world are determined by the existence of Man) is discussed with the listing of the objections, and is stated that nearly all the constants of nature may be determined by the anthropic principle which does not give exact values for the constants but only their orders of magnitude. (J.T.)
Principles and Algorithms for Natural and Engineered Systems
2014-12-16
settings (chasing a free-flying praying mantis and competing with a conspecific to catch a tethered mealworm) we provide evidence to show the... [Figure 3 (SW_Architecture.pdf): Software Architecture Hierarchy]
Principles of a new treatment algorithm in multiple sclerosis
DEFF Research Database (Denmark)
Hartung, Hans-Peter; Montalban, Xavier; Sorensen, Per Soelberg
2011-01-01
We are entering a new era in the management of patients with multiple sclerosis (MS). The first oral treatment (fingolimod) has now gained US FDA approval, addressing an unmet need for patients with MS who wish to avoid parenteral administration. A second agent (cladribine) is currently being...
Alabdulmohsin, Ibrahim M.
2018-03-07
We will begin our treatment of summability calculus by analyzing what will be referred to, throughout this book, as simple finite sums. Even though the results of this chapter are particular cases of the more general results presented in later chapters, they are important to start with for a few reasons. First, this chapter serves as an excellent introduction to what summability calculus can markedly accomplish. Second, simple finite sums are encountered more often and, hence, they deserve special treatment. Third, the results presented in this chapter for simple finite sums will, themselves, be used as building blocks for deriving the most general results in subsequent chapters. Among others, we establish that fractional finite sums are well-defined mathematical objects and show how various identities related to the Euler constant as well as the Riemann zeta function can actually be derived in an elementary manner using fractional finite sums.
The Basic Principles and Methods of the System Approach to Compression of Telemetry Data
Levenets, A. V.
2018-01-01
The task of compressing measurement data remains pressing for information-measurement systems. This paper proposes basic principles for designing highly effective systems for the compression of telemetric information. The foundation of these principles is the representation of a telemetry frame as a single information space in which existing correlations can be found. Methods of data transformation and compression algorithms realizing these principles are described. The compression ratio of the proposed algorithm is about 1.8 times higher than that of a classic algorithm. The results of this study of the methods and algorithms thus demonstrate their promise.
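The idea of treating the frame as one information space with exploitable correlation can be sketched with the simplest such transformation, inter-frame delta encoding (an illustrative stand-in; the paper's actual transform is not reproduced here):

```python
def delta_encode(frames):
    """Delta-encode a sequence of telemetry frames (lists of ints):
    the first frame is stored verbatim, later frames as per-channel
    differences from the previous frame. Correlated, slowly varying
    channels yield many small residuals, which a back-end entropy
    coder can then compress well."""
    encoded = [list(frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        encoded.append([c - p for p, c in zip(prev, cur)])
    return encoded

def delta_decode(encoded):
    """Invert delta_encode by accumulating residuals."""
    frames = [list(encoded[0])]
    for residual in encoded[1:]:
        frames.append([p + r for p, r in zip(frames[-1], residual)])
    return frames

frames = [[100, 50, 7], [101, 50, 7], [103, 51, 7]]
enc = delta_encode(frames)
assert delta_decode(enc) == frames
assert enc[1] == [1, 0, 0]  # small residuals for correlated channels
```

The transform itself saves nothing; the gain comes from the residuals having a much narrower distribution than the raw samples, which is what the entropy-coding stage exploits.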
International Nuclear Information System (INIS)
Khoury, Justin; Parikh, Maulik
2009-01-01
Mach's principle is the proposition that inertial frames are determined by matter. We put forth and implement a precise correspondence between matter and geometry that realizes Mach's principle. Einstein's equations are not modified and no selection principle is applied to their solutions; Mach's principle is realized wholly within Einstein's general theory of relativity. The key insight is the observation that, in addition to bulk matter, one can also add boundary matter. Given a space-time, and thus the inertial frames, we can read off both boundary and bulk stress tensors, thereby relating matter and geometry. We consider some global conditions that are necessary for the space-time to be reconstructible, in principle, from bulk and boundary matter. Our framework is similar to that of the black hole membrane paradigm and, in asymptotically anti-de Sitter space-times, is consistent with holographic duality.
Variational principles in physics
Basdevant, Jean-Louis
2007-01-01
Optimization under constraints is an essential part of everyday life. Indeed, we routinely solve problems by striking a balance between contradictory interests, individual desires and material contingencies. This notion of equilibrium was dear to thinkers of the enlightenment, as illustrated by Montesquieu’s famous formulation: "In all magistracies, the greatness of the power must be compensated by the brevity of the duration." Astonishingly, natural laws are guided by a similar principle. Variational principles have proven to be surprisingly fertile. For example, Fermat used variational methods to demonstrate that light follows the fastest route from one point to another, an idea which came to be known as Fermat’s principle, a cornerstone of geometrical optics. Variational Principles in Physics explains variational principles and charts their use throughout modern physics. The heart of the book is devoted to the analytical mechanics of Lagrange and Hamilton, the basic tools of any physicist. Prof. Basdev...
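Fermat's principle mentioned above lends itself to a compact numerical check (a sketch with hypothetical speeds v1 and v2, not an example from the book): minimizing the travel time of a ray crossing an interface recovers Snell's law, sin θ1 / v1 = sin θ2 / v2.

```python
import math

# Light goes from A = (0, 1) in medium 1 (speed v1) to B = (1, -1) in
# medium 2 (speed v2), crossing the interface y = 0 at the point (x, 0).
v1, v2 = 1.0, 0.5  # hypothetical speeds (v = c/n)

def travel_time(x):
    return math.hypot(x, 1.0) / v1 + math.hypot(1.0 - x, 1.0) / v2

# travel_time is strictly convex, so a ternary search finds its minimum.
lo, hi = 0.0, 1.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1
x_star = (lo + hi) / 2

# Fermat's principle predicts Snell's law: sin(t1)/v1 == sin(t2)/v2.
sin_t1 = x_star / math.hypot(x_star, 1.0)
sin_t2 = (1.0 - x_star) / math.hypot(1.0 - x_star, 1.0)
assert abs(sin_t1 / v1 - sin_t2 / v2) < 1e-6
```

The refraction law falls out of the stationarity condition alone; nothing about waves or rays is assumed beyond the travel-time functional.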
The Top Ten Algorithms in Data Mining
Wu, Xindong
2009-01-01
From classification and clustering to statistical learning, association analysis, and link mining, this book covers the most important topics in data mining research. It presents the ten most influential algorithms used in the data mining community today. Each chapter provides a detailed description of the algorithm, a discussion of available software implementation, advanced topics, and exercises. With a simple data set, examples illustrate how each algorithm works and highlight the overall performance of each algorithm in a real-world application. Featuring contributions from leading researc
Principles of broadband switching and networking
Liew, Soung C
2010-01-01
An authoritative introduction to the roles of switching and transmission in broadband integrated services networks Principles of Broadband Switching and Networking explains the design and analysis of switch architectures suitable for broadband integrated services networks, emphasizing packet-switched interconnection networks with distributed routing algorithms. The text examines the mathematical properties of these networks, rather than specific implementation technologies. Although the pedagogical explanations in this book are in the context of switches, many of the fundamenta
Katz, Abbott
2011-01-01
Get the most out of Excel 2010 with Excel 2010 Made Simple - learn the key features, understand what's new, and utilize dozens of time-saving tips and tricks to get your job done. Over 500 screen visuals and clear-cut instructions guide you through the features of Excel 2010, from formulas and charts to navigating around a worksheet and understanding Visual Basic for Applications (VBA) and macros. Excel 2010 Made Simple takes a practical and highly effective approach to using Excel 2010, showing you the best way to complete your most common spreadsheet tasks. You'll learn how to input, format,
Mazo, Gary
2011-01-01
If you have a Droid series smartphone - Droid, Droid X, Droid 2, or Droid 2 Global - and are eager to get the most out of your device, Droids Made Simple is perfect for you. Authors Martin Trautschold, Gary Mazo and Marziah Karch guide you through all of the features, tips, and tricks using their proven combination of clear instructions and detailed visuals. With hundreds of annotated screenshots and step-by-step directions, Droids Made Simple will transform you into a Droid expert, improving your productivity, and most importantly, helping you take advantage of all of the cool features that c
International Nuclear Information System (INIS)
Sator, N.
2003-01-01
This article concerns the correspondence between thermodynamics and the morphology of simple fluids in terms of clusters. Definitions of clusters providing a geometric interpretation of the liquid-gas phase transition are reviewed with an eye to establishing their physical relevance. The author emphasizes their main features and basic hypotheses, and shows how these definitions lead to a recent approach based on self-bound clusters. Although theoretical, this tutorial review is also addressed to readers interested in experimental aspects of clustering in simple fluids
Thouzery, Michel
2014-01-01
Founded by the producers of the Syndicat Inter-Massifs pour la Production et l'Économie des Simples (S.I.M.P.L.E.S), the association bases its work on the pursuit and maintenance of quality production (herbalism and plant-based preparations) that respects the environment and sustains small producers in mountain areas. Training activities: introductory courses on wild medicinal flora, and courses on the cultivation and processing of medicinal plants...
International Nuclear Information System (INIS)
Dobrzynski, L; Akjouj, A; Djafari-Rouhani, B; Al-Wahsh, H; Zielinski, P
2003-01-01
We present a simple multiplexing device made of two atomic chains coupled by two other transition metal atoms. We show that this simple atomic device can transfer electrons at a given energy from one wire to the other, leaving all other electron states unaffected. Closed-form relations between the transmission coefficients and the inter-atomic distances are given to optimize the desired directional electron ejection. Such devices can be adsorbed on insulating substrates and characterized by current surface technologies. (letter to the editor)
Using the Perceptron Algorithm to Find Consistent Hypotheses
Anthony, M.; Shawe-Taylor, J.
1993-01-01
The perceptron learning algorithm yields quite naturally an algorithm for finding a linearly separable boolean function consistent with a sample of such a function. Using the idea of a specifying sample, we give a simple proof that this algorithm is not efficient, in general.
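The consistent-hypothesis procedure the abstract refers to can be sketched as the standard perceptron update loop (a minimal illustration, not the authors' specifying-sample construction): cycle through the labelled sample, correcting the weight vector on every mistake, until no example is misclassified. The loop terminates exactly when the sample is linearly separable.

```python
def perceptron_consistent(samples):
    """Given labelled boolean examples (x, y) with x a tuple of 0/1
    features and y in {-1, +1}, run perceptron updates until every
    example is classified correctly, then return (weights, bias)."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    while True:
        mistakes = 0
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # mistake (ties count as mistakes)
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                mistakes += 1
        if mistakes == 0:
            return w, b

# A sample of boolean OR, which is linearly separable.
sample = [((0, 0), -1), ((0, 1), +1), ((1, 0), +1), ((1, 1), +1)]
w, b = perceptron_consistent(sample)
assert all((sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == (y > 0)
           for x, y in sample)
```

The paper's point is that, although this loop always halts on separable data, the number of updates it needs is not polynomially bounded in general.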
International Nuclear Information System (INIS)
Bolognesi, Tommaso
2011-01-01
In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view at the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.
Rules Extraction with an Immune Algorithm
Directory of Open Access Journals (Sweden)
Deqin Yan
2007-12-01
In this paper, a method for extracting rules from information systems with immune algorithms is proposed. The immune algorithm is designed around a sharing mechanism for rule extraction. The principle of sharing and competing for resources in the sharing mechanism is consistent with the relationship of sharing and rivalry among rules. In order to extract rules efficiently, a new concept of flexible confidence and rule measurement is introduced. Experiments demonstrate that the proposed method is effective.
A novel hybrid algorithm of GSA with Kepler algorithm for numerical optimization
Directory of Open Access Journals (Sweden)
Soroor Sarafrazi
2015-07-01
It is now well recognized that pure algorithms can be promisingly improved by hybridization with other techniques. One of the relatively new metaheuristic algorithms is the Gravitational Search Algorithm (GSA), which is based on Newton's laws. In this paper, to enhance the performance of GSA, a novel algorithm called "Kepler", inspired by astrophysics, is introduced. The Kepler algorithm is based on the principle of Kepler's first law. The hybridization of GSA and the Kepler algorithm is an efficient approach to providing much stronger specialization in intensification and/or diversification. The performance of GSA-Kepler is evaluated by applying it to 14 benchmark functions with 20-1000 dimensions and to the optimal approximation of a linear system as a practical optimization problem. The results obtained reveal that the proposed hybrid algorithm is robust enough to optimize the benchmark functions and practical optimization problems.
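For readers unfamiliar with GSA, the gravitational update it builds on can be sketched as follows (a heavily simplified, hypothetical rendition for a minimization problem: masses from normalized fitness, a Newton-like attraction, no velocity memory, and none of the Kepler hybridization):

```python
import numpy as np

def gsa_step(X, fitness, G=1.0, eps=1e-12, rng=None):
    """One simplified GSA step: better agents get bigger masses,
    every agent accelerates toward the others under a Newton-like
    attraction, and positions are updated accordingly."""
    rng = rng or np.random.default_rng()
    f = fitness(X)                                  # shape (n,)
    worst, best = f.max(), f.min()
    m = (worst - f + eps) / (worst - best + eps)    # lower fitness -> bigger mass
    M = m / m.sum()
    n, _ = X.shape
    A = np.zeros_like(X)
    for i in range(n):
        diff = X - X[i]                             # vectors toward every agent
        dist = np.linalg.norm(diff, axis=1) + eps
        # a_i = G * sum_j rand * M_j * (x_j - x_i) / d_ij  (stochastic pull)
        A[i] = G * (rng.random((n, 1)) * M[:, None] * diff / dist[:, None]).sum(axis=0)
    return X + A                                    # velocity term omitted for brevity

sphere = lambda X: (X ** 2).sum(axis=1)             # classic benchmark function
rng = np.random.default_rng(1)
X = rng.uniform(-5, 5, size=(20, 2))
for _ in range(50):
    X = gsa_step(X, sphere, rng=rng)
```

Iterating the step typically contracts the swarm toward the better-placed agents; the Kepler component of the paper then perturbs agents along elliptical trajectories to balance this intensification with diversification.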
DEFF Research Database (Denmark)
Rosendahl, Mads
2002-01-01
-like language. Our aim is to extract a simple notion of driving and show that even in this tamed form it has much of the power of more general notions of driving. Our driving technique may be used to simplify functional programs which use function composition and will often be able to remove intermediate data...
Dix, M. G.; Harrison, D. R.; Edwards, T. M.
1982-01-01
Bubble vial with external aluminum-foil electrodes is sensing element for simple indicating tiltmeter. To measure bubble displacement, bridge circuit detects difference in capacitance between two sensing electrodes and reference electrode. Tiltmeter was developed for experiment on forecasting seismic events by changes in Earth's magnetic field.
Eggen, Per-Odd
2009-01-01
This article describes the construction of an inexpensive, robust, and simple hydrogen electrode, as well as the use of this electrode to measure "standard" potentials. In the experiment described here the students can measure the reduction potentials of metal-metal ion pairs directly, without using a secondary reference electrode. Measurements…
International Nuclear Information System (INIS)
Blain, J.F.
1969-01-01
On the one hand, the results obtained by applying to argon and sodium the two important methods of studying the structure of liquids, X-ray and neutron scattering, are presented. On the other hand, the principal models employed for reconstituting the structure of simple liquids are described: mathematical models, lattice models and their derivatives, and experimental models. (author) [fr]
International Nuclear Information System (INIS)
De Luca, R; Faella, O
2014-01-01
Mathematical fireworks are reproduced in two dimensions by means of simple notions in kinematics and Newtonian mechanics. Extension of the analysis in three dimensions is proposed and the geometric figures the falling tiny particles make on the ground after explosion are determined. (paper)
African Journals Online (AJOL)
In the present study, 78 mapped simple sequence repeat (SSR) markers representing 11 linkage groups of adzuki bean were evaluated for transferability to mungbean and related Vigna spp. 41 markers amplified characteristic bands in at least one Vigna species. The transferability percentage across the genotypes ranged ...
Intelligent instrumentation principles and applications
Bhuyan, Manabendra
2011-01-01
With the advent of microprocessors and digital-processing technologies as catalyst, classical sensors capable of simple signal conditioning operations have evolved rapidly to take on higher and more specialized functions including validation, compensation, and classification. This new category of sensor expands the scope of incorporating intelligence into instrumentation systems, yet with such rapid changes, there has developed no universal standard for design, definition, or requirement with which to unify intelligent instrumentation. Explaining the underlying design methodologies of intelligent instrumentation, Intelligent Instrumentation: Principles and Applications provides a comprehensive and authoritative resource on the scientific foundations from which to coordinate and advance the field. Employing a textbook-like language, this book translates methodologies to more than 80 numerical examples, and provides applications in 14 case studies for a complete and working understanding of the material. Beginn...
Limitations of Boltzmann's principle
International Nuclear Information System (INIS)
Lavenda, B.H.
1995-01-01
The usual form of Boltzmann's principle assures that maximum entropy, or entropy reduction, occurs with maximum probability, implying a unimodal distribution. Boltzmann's principle cannot be applied to nonunimodal distributions, like the arcsine law, because the entropy may be concave only over a limited portion of the interval. The method of subordination shows that the arcsine distribution corresponds to a process with a single degree of freedom, thereby confirming the invalidation of Boltzmann's principle. The fractalization of time leads to a new distribution in which arcsine and Cauchy distributions can coexist simultaneously for nonintegral degrees of freedom between √2 and 2
Biomedical engineering principles
Ritter, Arthur B; Valdevit, Antonio; Ascione, Alfred N
2011-01-01
Introduction: Modeling of Physiological ProcessesCell Physiology and TransportPrinciples and Biomedical Applications of HemodynamicsA Systems Approach to PhysiologyThe Cardiovascular SystemBiomedical Signal ProcessingSignal Acquisition and ProcessingTechniques for Physiological Signal ProcessingExamples of Physiological Signal ProcessingPrinciples of BiomechanicsPractical Applications of BiomechanicsBiomaterialsPrinciples of Biomedical Capstone DesignUnmet Clinical NeedsEntrepreneurship: Reasons why Most Good Designs Never Get to MarketAn Engineering Solution in Search of a Biomedical Problem
Modern electronic maintenance principles
Garland, DJ
2013-01-01
Modern Electronic Maintenance Principles reviews the principles of maintaining modern, complex electronic equipment, with emphasis on preventive and corrective maintenance. Unfamiliar subjects such as the half-split method of fault location, functional diagrams, and fault finding guides are explained. This book consists of 12 chapters and begins by stressing the need for maintenance principles and discussing the problem of complexity as well as the requirements for a maintenance technician. The next chapter deals with the connection between reliability and maintenance and defines the terms fai
Pérez-Soba Díez del Corral, Juan José
2008-01-01
Bioethics arises around the technological problems of intervening in human life, and with it arises the problem of determining moral limits, since these seem external to the practice itself. The Bioethics of Principles draws its rationality from teleological thinking and from autonomism. This divergence reveals an epistemological fragility and the great difficulty of 'moral' thinking. This is evident in the formulation of the principle of autonomy, which lacks the ethical content of Kant's proposal. We need a new ethical rationality, with fresh reflection on new principles that emerge from basic ethical experiences.
Hill, Rodney
2013-01-01
Principles of Dynamics presents classical dynamics primarily as an exemplar of scientific theory and method. This book is divided into three major parts concerned with gravitational theory of planetary systems; general principles of the foundations of mechanics; and general motion of a rigid body. Some of the specific topics covered are Keplerian Laws of Planetary Motion; gravitational potential and potential energy; and fields of axisymmetric bodies. The principles of work and energy, fictitious body-forces, and inertial mass are also looked into. Other specific topics examined are kinematics
Hamilton's principle for beginners
International Nuclear Information System (INIS)
Brun, J L
2007-01-01
I find that students have difficulty with Hamilton's principle, at least the first time they come into contact with it, and therefore it is worth designing some examples to help students grasp its complex meaning. This paper supplies the simplest example to consolidate the learning of the quoted principle: that of a free particle moving along a line. Next, students are challenged to add gravity to reinforce the argument and, finally, a two-dimensional motion in a vertical plane is considered. Furthermore these examples force us to be very clear about such an abstract principle
Developing principles of growth
DEFF Research Database (Denmark)
Neergaard, Helle; Fleck, Emma
of the principles of growth among women-owned firms. Using an in-depth case study methodology, data were collected from women-owned firms in Denmark and Ireland, as these countries are similar in contextual terms, e.g. population and business composition, dominated by micro, small and medium-sized enterprises. ... Extending principles put forward in effectuation theory, we propose that women grow their firms according to five principles which enable women's enterprises to survive in the face of crises such as the current financial world crisis.
Is weak violation of the Pauli principle possible?
International Nuclear Information System (INIS)
Ignat'ev, A.Yu.; Kuz'min, V.A.
1987-01-01
The question considered in the work is whether there are models which can account for small violation of the Pauli principle. A simple algebra is constructed for the creation-annihilation operators, which contains a parameter β and describes small violation of the Pauli principle (the Pauli principle is valid exactly for β=0). The commutation relations in this algebra are trilinear. A model is presented, based upon this commutator algebra, which allows transitions violating the Pauli principle, their probability being suppressed by a factor of β² (even though the Hamiltonian does not contain small parameters)
Is a weak violation of the Pauli principle possible?
International Nuclear Information System (INIS)
Ignat'ev, A.Y.; Kuz'min, V.A.
1987-01-01
We examine models in which there is a weak violation of the Pauli principle. A simple algebra of creation and annihilation operators is constructed which contains a parameter β and describes a weak violation of the Pauli principle (when β = 0 the Pauli principle is satisfied exactly). The commutation relations in this algebra turn out to be trilinear. A model based on this algebra is described. It allows transitions in which the Pauli principle is violated, but the probability of these transitions is suppressed by the quantity β² (even though the interaction Hamiltonian does not contain small parameters)
Empirical study of parallel LRU simulation algorithms
Carr, Eric; Nicol, David M.
1994-01-01
This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The two others are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from the execution of three SPEC benchmark programs.
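The serial algorithm that the parallel versions build on computes, for each reference, its LRU stack distance; a minimal single-pass sketch (illustrative, not the paper's implementation):

```python
def stack_distances(trace):
    """For each reference, report the depth of its tag in the LRU
    stack (1-based), or None on a cold miss. A fully associative LRU
    cache of size C hits exactly when the distance is <= C, so one
    pass over the trace yields hit ratios for every cache size."""
    stack, dists = [], []
    for tag in trace:
        if tag in stack:
            depth = stack.index(tag) + 1  # distance from the top
            stack.remove(tag)
            dists.append(depth)
        else:
            dists.append(None)            # cold (infinite-distance) miss
        stack.insert(0, tag)              # tag becomes most recently used
    return dists

trace = ["a", "b", "c", "a", "b", "a"]
d = stack_distances(trace)
assert d == [None, None, None, 3, 3, 2]
hits_in_cache_of_2 = sum(1 for x in d if x is not None and x <= 2)
assert hits_in_cache_of_2 == 1
```

The linear scan of the stack is what makes the naive version costly; the inclusion property of LRU (distance ≤ C ⇒ hit at size C) is what lets a single simulation cover all cache sizes at once, which is the payoff the parallel algorithms preserve.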
Vaccinology: principles and practice
National Research Council Canada - National Science Library
Morrow, John
2012-01-01
... principles to implementation. This is an authoritative textbook that details a comprehensive and systematic approach to the science of vaccinology focusing on not only basic science, but the many stages required to commercialize...
Energy Technology Data Exchange (ETDEWEB)
Moller-Nielsen, Thomas [University of Oxford (United Kingdom)
2014-07-01
Physicists and philosophers have long claimed that the symmetries of our physical theories - roughly speaking, those transformations which map solutions of the theory into solutions - can provide us with genuine insight into what the world is really like. According to this 'Invariance Principle', only those quantities which are invariant under a theory's symmetries should be taken to be physically real, while those quantities which vary under its symmetries should not. Physicists and philosophers, however, are generally divided (or, indeed, silent) when it comes to explaining how such a principle is to be justified. In this paper, I spell out some of the problems inherent in other theorists' attempts to justify this principle, and sketch my own proposed general schema for explaining how - and when - the Invariance Principle can indeed be used as a legitimate tool of metaphysical inference.
Principles of applied statistics
National Research Council Canada - National Science Library
Cox, D. R; Donnelly, Christl A
2011-01-01
.... David Cox and Christl Donnelly distil decades of scientific experience into usable principles for the successful application of statistics, showing how good statistical strategy shapes every stage of an investigation...
Minimum entropy production principle
Czech Academy of Sciences Publication Activity Database
Maes, C.; Netočný, Karel
2013-01-01
Roč. 8, č. 7 (2013), s. 9664-9677 ISSN 1941-6016 Institutional support: RVO:68378271 Keywords : MINEP Subject RIV: BE - Theoretical Physics http://www.scholarpedia.org/article/Minimum_entropy_production_principle
Global ethics and principlism.
Gordon, John-Stewart
2011-09-01
This article examines the special relation between common morality and particular moralities in the four-principles approach and its use for global ethics. It is argued that the special dialectical relation between common morality and particular moralities is the key to bridging the gap between ethical universalism and relativism. The four-principles approach is a good model for a global bioethics by virtue of its ability to mediate successfully between universal demands and cultural diversity. The principle of autonomy (i.e., the idea of individual informed consent), however, does need to be revised so as to make it compatible with alternatives such as family- or community-informed consent. The upshot is that the contribution of the four-principles approach to global ethics lies in the so-called dialectical process and its power to deal with cross-cultural issues against the background of universal demands by joining them together.
Analytic representation for first-principles pseudopotentials
International Nuclear Information System (INIS)
Lam, P.K.; Cohen, M.L.; Zunger, A.
1980-01-01
The first-principles pseudopotentials developed by Zunger and Cohen are fit with a simple analytic form chosen to model the main physical properties of the potentials. The fitting parameters for the first three rows of the Periodic Table are presented, and the quality of the fit is discussed. The parameters reflect chemical trends of the elements. We find that a minimum of three parameters is required to reproduce the regularities of the Periodic Table. Application of these analytic potentials is also discussed
Microprocessors principles and applications
Debenham, Michael J
1979-01-01
Microprocessors: Principles and Applications deals with the principles and applications of microprocessors and covers topics ranging from computer architecture and programmed machines to microprocessor programming, support systems and software, and system design. A number of microprocessor applications are considered, including data processing, process control, and telephone switching. This book is comprised of 10 chapters and begins with a historical overview of computers and computing, followed by a discussion on computer architecture and programmed machines, paying particular attention to t
Electrical and electronic principles
Knight, S A
1991-01-01
Electrical and Electronic Principles, 2, Second Edition covers the syllabus requirements of BTEC Unit U86/329, including the principles of control systems and elements of data transmission. The book first tackles series and parallel circuits, electrical networks, and capacitors and capacitance. Discussions focus on flux density, electric force, permittivity, Kirchhoff's laws, superposition theorem, arrangement of resistors, internal resistance, and powers in a circuit. The text then takes a look at capacitors in circuit, magnetism and magnetization, electromagnetic induction, and alternating v
Microwave system engineering principles
Raff, Samuel J
1977-01-01
Microwave System Engineering Principles focuses on the calculus, differential equations, and transforms of microwave systems. This book discusses the basic nature and principles that can be derived from thermal noise; statistical concepts and binomial distribution; incoherent signal processing; basic properties of antennas; and beam widths and useful approximations. The fundamentals of propagation; LaPlace's Equation and Transmission Line (TEM) waves; interfaces between homogeneous media; modulation, bandwidth, and noise; and communications satellites are also deliberated in this text. This bo
Electrical and electronic principles
Knight, SA
1988-01-01
Electrical and Electronic Principles, 3 focuses on the principles involved in electrical and electronic circuits, including impedance, inductance, capacitance, and resistance.The book first deals with circuit elements and theorems, D.C. transients, and the series circuits of alternating current. Discussions focus on inductance and resistance in series, resistance and capacitance in series, power factor, impedance, circuit magnification, equation of charge, discharge of a capacitor, transfer of power, and decibels and attenuation. The manuscript then examines the parallel circuits of alternatin
Remark on Heisenberg's principle
International Nuclear Information System (INIS)
Noguez, G.
1988-01-01
Application of Heisenberg's principle to inertial frame transformations allows a distinction between three commutative groups of reciprocal transformations along one direction: Galilean transformations, dual transformations, and Lorentz transformations. These are three conjugate groups and for a given direction, the related commutators are all proportional to one single conjugation transformation which compensates for uniform and rectilinear motions. The three transformation groups correspond to three complementary ways of measuring space-time as a whole. Heisenberg's Principle then gets another explanation [fr
Algorithms for worst-case tolerance optimization
DEFF Research Database (Denmark)
Schjær-Jacobsen, Hans; Madsen, Kaj
1979-01-01
New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances...... is denoted the zero tolerance problem (ZTP). For solution of the WCP we suggest application of interval arithmetic and also alternative methods. For solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP...... is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP- algorithm. The application of the algorithm is demonstrated by means of relatively simple numerical examples. Basic properties, such as convergence properties, are displayed based on the examples....
Cottrell, William; Montero, Miguel
2018-02-01
In this note we investigate the role of Lloyd's computational bound in holographic complexity. Our goal is to translate the assumptions behind Lloyd's proof into the bulk language. In particular, we discuss the distinction between orthogonalizing and 'simple' gates and argue that these notions are useful for diagnosing holographic complexity. We show that large black holes constructed from series circuits necessarily employ simple gates, and thus do not satisfy Lloyd's assumptions. We also estimate the degree of parallel processing required in this case for elementary gates to orthogonalize. Finally, we show that for small black holes at fixed chemical potential, the orthogonalization condition is satisfied near the phase transition, supporting a possible argument for the Weak Gravity Conjecture first advocated in [1].
Unicameral (simple) bone cysts.
Baig, Rafath; Eady, John L
2006-09-01
Since their original description by Virchow, simple bone cysts have been studied repeatedly. Although these defects are not true neoplasms, simple bone cysts may create major structural defects of the humerus, femur, and os calcis. They are commonly discovered incidentally when x-rays are taken for other reasons, or on presentation due to a pathologic fracture. Various treatment strategies have been employed, but the only reliable predictor of success of any treatment strategy is the age of the patient: those older than 10 years of age heal their cysts at a higher rate than those under 10. The goal of management is the formation of a bone that can withstand the stresses of use by the patient without evidence of continued bone destruction as determined by serial radiographic follow-up. The goal is not a normal-appearing x-ray, but a functionally stable bone.
Information technology made simple
Carter, Roger
1991-01-01
Information Technology: Made Simple covers the full range of information technology topics, including more traditional subjects such as programming languages, data processing, and systems analysis. The book discusses information revolution, including topics about microchips, information processing operations, analog and digital systems, information processing system, and systems analysis. The text also describes computers, computer hardware, microprocessors, and microcomputers. The peripheral devices connected to the central processing unit; the main types of system software; application soft
Modern mathematics made simple
Murphy, Patrick
1982-01-01
Modern Mathematics: Made Simple presents topics in modern mathematics, from elementary mathematical logic and switching circuits to multibase arithmetic and finite systems. Sets and relations, vectors and matrices, tessellations, and linear programming are also discussed. Comprised of 12 chapters, this book begins with an introduction to sets and basic operations on sets, as well as solving problems with Venn diagrams. The discussion then turns to elementary mathematical logic, with emphasis on inductive and deductive reasoning; conjunctions and disjunctions; compound statements and conditional
Data structures and algorithm analysis in C++
Shaffer, Clifford A
2011-01-01
With its focus on creating efficient data structures and algorithms, this comprehensive text helps readers understand how to select or design the tools that will best solve specific problems. It uses Microsoft C++ as the programming language and is suitable for second-year data structure courses and computer science courses in algorithm analysis. Techniques for representing data are presented within the context of assessing costs and benefits, promoting an understanding of the principles of algorithm analysis and the effects of a chosen physical medium. The text also explores tradeoff issues, f
Data structures and algorithm analysis in Java
Shaffer, Clifford A
2011-01-01
With its focus on creating efficient data structures and algorithms, this comprehensive text helps readers understand how to select or design the tools that will best solve specific problems. It uses Java as the programming language and is suitable for second-year data structure courses and computer science courses in algorithm analysis. Techniques for representing data are presented within the context of assessing costs and benefits, promoting an understanding of the principles of algorithm analysis and the effects of a chosen physical medium. The text also explores tradeoff issues, familiari
Simple substrates for complex cognition
Directory of Open Access Journals (Sweden)
Peter Dayan
2008-12-01
Complex cognitive tasks present a range of computational and algorithmic challenges for neural accounts of both learning and inference. In particular, it is extremely hard to solve them using the sort of simple policies that have been extensively studied as solutions to elementary Markov decision problems. There has thus been recent interest in architectures for the instantiation and even learning of policies that are formally more complicated than these, involving operations such as gated working memory. However, the focus of these ideas and methods has largely been on what might best be considered as automatized, routine or, in the sense of animal conditioning, habitual, performance. Thus, they have yet to provide a route towards understanding the workings of rule-based control, which is critical for cognitively sophisticated competence. Here, we review a recent suggestion for a uniform architecture for habitual and rule-based execution, discuss some of the habitual mechanisms that underpin the use of rules, and consider a statistical relationship between rules and habits.
Directory of Open Access Journals (Sweden)
Bradley Christopher Lowekamp
2013-12-01
SimpleITK is a new interface to the Insight Segmentation and Registration Toolkit (ITK) designed to facilitate rapid prototyping, education, and scientific activities via high-level programming languages. ITK is a templated C++ library of image processing algorithms and frameworks for biomedical and other applications, and it was designed to be generic, flexible and extensible. Initially, ITK provided a direct wrapping interface to languages such as Python and Tcl through the WrapITK system. Unlike WrapITK, which exposed ITK's complex templated interface, SimpleITK was designed to provide an easy-to-use and simplified interface to ITK's algorithms. It includes procedural methods, hides ITK's demand-driven pipeline, and provides a template-less layer. SimpleITK also provides practical conveniences such as binary distribution packages and overloaded operators. Our user-friendly design goals dictated a departure from the direct interface wrapping approach of WrapITK, towards a new facade class structure that only exposes the required functionality, hiding ITK's extensive template use. Internally, SimpleITK utilizes a manual description of each filter with code generation and advanced C++ meta-programming to provide the higher-level interface, bringing the capabilities of ITK to a wider audience. SimpleITK is licensed as open source software under the Apache License Version 2.0 and more information about downloading it can be found at http://www.simpleitk.org.
National Aeronautics and Space Administration — This chapter described at a very high level some of the considerations that need to be made when designing algorithms for a vehicle health management application....
Solving simple stochastic games with few coin toss positions
DEFF Research Database (Denmark)
Ibsen-Jensen, Rasmus; Miltersen, Peter Bro
2011-01-01
Gimbert and Horn gave an algorithm for solving simple stochastic games with running time O(r! n), where n is the number of positions of the simple stochastic game and r is the number of its coin toss positions. Chatterjee et al. pointed out that a variant of strategy iteration can be implemented to solve this problem in time 4^r r^{O(1)} n^{O(1)}. In this paper, we show that an algorithm combining value iteration with retrograde analysis achieves a time bound of O(r 2^r (r log r + n)), thus improving both time bounds. While the algorithm is simple, the analysis leading to this time bound...
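The value-iteration half of the approach can be sketched as follows. The game encoding, position names, and fixed sweep count below are my own simplifications; the paper's contribution lies in combining this basic iteration with retrograde analysis to obtain the stated bound:

```python
# Plain value iteration for a simple stochastic game (SSG), for illustration.
# Each position is ("max"|"min"|"avg", (succ1, succ2)) or ("sink", value):
# "max"/"min" positions belong to the two players, "avg" positions are fair
# coin tosses, and sinks hold the payoff (probability of the max player winning).

def value_iteration(game, sweeps=1000):
    v = {p: 0.0 for p in game}
    for _ in range(sweeps):
        for p, (kind, arg) in game.items():
            if kind == "sink":
                v[p] = float(arg)
            elif kind == "max":
                v[p] = max(v[s] for s in arg)
            elif kind == "min":
                v[p] = min(v[s] for s in arg)
            else:  # "avg": a coin toss position
                v[p] = 0.5 * (v[arg[0]] + v[arg[1]])
    return v

game = {
    "lose": ("sink", 0), "win": ("sink", 1),
    "c": ("avg", ("win", "lose")),   # fair coin: value 1/2
    "m": ("max", ("c", "lose")),     # max player prefers the coin toss
    "n": ("min", ("m", "win")),      # min player prefers value 1/2 over 1
}
v = value_iteration(game)
```

In general, value iteration converges only in the limit; the exact-arithmetic bounds in the paper rely on the additional structure exploited by retrograde analysis.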
Particle swarm genetic algorithm and its application
International Nuclear Information System (INIS)
Liu Chengxiang; Yan Changxiang; Wang Jianjun; Liu Zhenhai
2012-01-01
To solve the problems of slow convergence speed and the tendency to fall into local optima of standard particle swarm optimization when dealing with nonlinear constrained optimization problems, a particle swarm genetic algorithm is designed. The proposed algorithm adopts the feasibility principle to handle constraint conditions, avoiding the difficulty of selecting a punishment factor in the penalty function method; it generates an initial feasible population randomly, which accelerates particle swarm convergence; and it introduces genetic algorithm crossover and mutation strategies to prevent the particle swarm from falling into a local optimum. Optimization calculations on typical test functions show that the particle swarm genetic algorithm has better optimization performance. The algorithm is applied to nuclear power plant optimization, and the optimization results are significant. (authors)
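A minimal sketch of the hybrid idea, standard PSO plus a GA-style mutation step to help escape local optima, might look like the following. All parameter values, the mutation scheme, and the test function are illustrative assumptions, not taken from the paper (which additionally handles constraints via the feasibility principle):

```python
import random

def sphere(x):
    # simple unconstrained test function with minimum 0 at the origin
    return sum(v * v for v in x)

def pso_ga(f, dim=2, n=30, iters=200, mutation_rate=0.1, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # standard PSO velocity update (inertia + cognitive + social)
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                # GA-style mutation: occasional random reset of a coordinate
                if rng.random() < mutation_rate / dim:
                    pos[i][d] = rng.uniform(-5, 5)
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso_ga(sphere)
```

Because personal and global bests are only ever replaced by improvements, the mutation step can only help exploration; it never degrades the reported solution.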
Applying Kitaev's algorithm in an ion trap quantum computer
International Nuclear Information System (INIS)
Travaglione, B.; Milburn, G.J.
2000-01-01
Kitaev's algorithm is a method of estimating eigenvalues associated with an operator. Shor's factoring algorithm, which enables a quantum computer to crack RSA encryption codes, is a specific example of Kitaev's algorithm. It has been proposed that the algorithm can also be used to generate eigenstates. We extend this proposal for small quantum systems, identifying the conditions under which the algorithm can successfully generate eigenstates. We then propose an implementation scheme based on an ion trap quantum computer. This scheme allows us to illustrate a simple example, in which the algorithm effectively generates eigenstates.
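The eigenvalue-estimation task can be illustrated with a small classical simulation of textbook quantum phase estimation, of which Kitaev's algorithm is a variant. The register size and the exactly representable phase below are my choices; the ion-trap implementation details are of course not reproduced:

```python
import numpy as np

# For U|u> = exp(2*pi*i*phi)|u>, controlled-U^j operations leave the counting
# register holding amplitudes proportional to exp(2*pi*i*j*phi); an inverse
# QFT then concentrates the probability at k ~= phi * 2**t.

t = 3                      # number of counting qubits
phi = 1 / 8                # eigenvalue phase, exactly representable in t bits
N = 2 ** t

j = np.arange(N)
register = np.exp(2j * np.pi * j * phi) / np.sqrt(N)

# Inverse-QFT amplitudes: (1/sqrt(N)) * sum_j register[j] * exp(-2*pi*i*j*k/N),
# which matches numpy's forward-FFT sign convention up to the 1/sqrt(N) factor.
amps = np.fft.fft(register) / np.sqrt(N)
probs = np.abs(amps) ** 2

estimated_phi = int(np.argmax(probs)) / N
```

With an exactly representable phase the measurement outcome is deterministic; for general phases the distribution peaks near the true phase instead.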
An improved VSS NLMS algorithm for active noise cancellation
Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan
2017-08-01
In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared to other traditional adaptive filtering algorithms, but there is a contradiction between convergence speed and steady-state error that affects the performance of the NLMS algorithm. We propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily set parameters, and it effectively resolves the contradiction in NLMS. Simulation results show that the proposed algorithm has good tracking ability, a fast convergence rate, and low steady-state error simultaneously.
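A sketch of the general idea in a system-identification setting: NLMS with a step size that varies over time. The specific decay rule `mu(n)` below is an illustrative stand-in, not the error-and-iteration-dependent formula the paper proposes:

```python
import random

def vss_nlms(x, d, taps=4, mu_max=1.0, eps=1e-8):
    # NLMS adaptive filter with a (placeholder) variable step size.
    w = [0.0] * taps
    errors = []
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]       # x[n], x[n-1], ..., x[n-taps+1]
        y = sum(wi * ui for wi, ui in zip(w, u))
        e = d[n] - y
        mu = mu_max / (1.0 + 0.001 * n)       # step size shrinks over time
        norm = sum(ui * ui for ui in u) + eps # NLMS input-power normalization
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
        errors.append(e)
    return w, errors

rng = random.Random(0)
h = [0.5, -0.3, 0.2, 0.1]                     # unknown system to identify
x = [rng.gauss(0, 1) for _ in range(2000)]
d = [sum(h[k] * x[n - k] for k in range(4) if n - k >= 0) for n in range(len(x))]
w, errors = vss_nlms(x, d)
```

In this noiseless setting the filter weights converge to the unknown system `h`; shrinking the step size matters most when measurement noise makes the steady-state error visible.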
Simple models of equilibrium and nonequilibrium phenomena
International Nuclear Information System (INIS)
Lebowitz, J.L.
1987-01-01
This volume consists of two chapters of particular interest to researchers in the field of statistical mechanics. The first chapter is based on the premise that the best way to understand the qualitative properties that characterize many-body (i.e. macroscopic) systems is to study 'a number of the more significant model systems which, at least in principle are susceptible of complete analysis'. The second chapter deals exclusively with nonequilibrium phenomena. It reviews the theory of fluctuations in open systems to which they have made important contributions. Simple but interesting model examples are emphasised
A Principle of Intentionality.
Turner, Charles K
2017-01-01
The mainstream theories and models of the physical sciences, including neuroscience, are all consistent with the principle of causality. Wholly causal explanations make sense of how things go, but are inherently value-neutral, providing no objective basis for true beliefs being better than false beliefs, nor for it being better to intend wisely than foolishly. Dennett (1987) makes a related point in calling the brain a syntactic (procedure-based) engine. He says that you cannot get to a semantic (meaning-based) engine from there. He suggests that folk psychology revolves around an intentional stance that is independent of the causal theories of the brain, and accounts for constructs such as meanings, agency, true belief, and wise desire. Dennett proposes that the intentional stance is so powerful that it can be developed into a valid intentional theory. This article expands Dennett's model into a principle of intentionality that revolves around the construct of objective wisdom. This principle provides a structure that can account for all mental processes, and for the scientific understanding of objective value. It is suggested that science can develop a far more complete worldview with a combination of the principles of causality and intentionality than would be possible with scientific theories that are consistent with the principle of causality alone.
General principles of radiotherapy
International Nuclear Information System (INIS)
Easson, E.C.
1985-01-01
The daily practice of any established branch of medicine should be based on some acceptable principles. This chapter is concerned with the general principles on which the radiotherapy of the Manchester school is based. Though many radiotherapists in other centres would doubtless accept these principles, there are sufficiently wide differences in practice throughout the world to suggest that some therapists adhere to a fundamentally different philosophy. The authors believe it is important, especially for those beginning their formal training in radiotherapy, to subscribe to an internally consistent school of thought, employing methods of treatment for each type of lesion in each anatomical site that are based on accepted principles and subjected to continuous rigorous scrutiny to test their effectiveness. Not only must each therapeutic technique be evaluated, but the underlying principles too must be questioned if and when this seems indicated. It is a feature of this hospital that similar lesions are all treated by the same technique, so long as statistical evidence justifies such a policy. All members of the staff adhere to the accepted policy until or unless reliable reasons are adduced to change this policy
The traveltime holographic principle
Huang, Yunsong; Schuster, Gerard T.
2015-01-01
Fermat's interferometric principle is used to compute interior transmission traveltimes τpq from exterior transmission traveltimes τsp and τsq. Here, the exterior traveltimes are computed for sources s on a boundary B that encloses a volume V of interior points p and q. Once the exterior traveltimes are computed, no further ray tracing is needed to calculate the interior times τpq. Therefore this interferometric approach can be more efficient than explicitly computing interior traveltimes τpq by ray tracing. Moreover, the memory requirement of the traveltimes is reduced by one dimension, because the boundary B is of one fewer dimension than the volume V. An application of this approach is demonstrated with interbed multiple (IM) elimination. Here, the IMs in the observed data are predicted from the migration image and are subsequently removed by adaptive subtraction. This prediction is enabled by the knowledge of interior transmission traveltimes τpq computed according to Fermat's interferometric principle. We denote this principle as the 'traveltime holographic principle', by analogy with the holographic principle in cosmology where information in a volume is encoded on the region's boundary.
Dimensional analysis made simple
International Nuclear Information System (INIS)
Lira, Ignacio
2013-01-01
An inductive strategy is proposed for teaching dimensional analysis to second- or third-year students of physics, chemistry, or engineering. In this strategy, Buckingham's theorem is seen as a consequence and not as the starting point. In order to concentrate on the basics, the mathematics is kept as elementary as possible. Simple examples are suggested for classroom demonstrations of the power of the technique and others are put forward for homework or experimentation, but instructors are encouraged to produce examples of their own. (paper)
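As a concrete instance of the inductive approach, a classroom staple (the pendulum example is my choice, not necessarily one of the article's):

```latex
% Period of a simple pendulum by dimensional analysis.
% Assume T depends on length l, gravity g, and mass m, and try a monomial:
%   T = C\, l^a g^b m^c, with [T] = \mathsf{T}, [l] = \mathsf{L},
%   [g] = \mathsf{L}\mathsf{T}^{-2}, [m] = \mathsf{M}.
\[
  \mathsf{T}^{1} = \mathsf{L}^{a+b}\,\mathsf{T}^{-2b}\,\mathsf{M}^{c}
  \;\Longrightarrow\; a + b = 0,\qquad -2b = 1,\qquad c = 0,
\]
% so b = -1/2, a = 1/2, and the mass drops out:
\[
  T = C\sqrt{l/g},
\]
% where the dimensionless constant (C = 2\pi for small oscillations)
% must come from dynamics, not from dimensional analysis.
```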
Applied mathematics made simple
Murphy, Patrick
1982-01-01
Applied Mathematics: Made Simple provides an elementary study of the three main branches of classical applied mathematics: statics, hydrostatics, and dynamics. The book begins with discussion of the concepts of mechanics, parallel forces and rigid bodies, kinematics, motion with uniform acceleration in a straight line, and Newton's law of motion. Separate chapters cover vector algebra and coplanar motion, relative motion, projectiles, friction, and rigid bodies in equilibrium under the action of coplanar forces. The final chapters deal with machines and hydrostatics. The standard and conte
Wooldridge, Susan
2013-01-01
Data Processing: Made Simple, Second Edition presents discussions of a number of trends and developments in the world of commercial data processing. The book covers the rapid growth of micro- and mini-computers for both home and office use; word processing and the 'automated office'; the advent of distributed data processing; and the continued growth of database-oriented systems. The text also discusses modern digital computers; fundamental computer concepts; information and data processing requirements of commercial organizations; and the historical perspective of the computer industry. The
Hansen, Jean-Pierre
1986-01-01
This book gives a comprehensive and up-to-date treatment of the theory of "simple" liquids. The new second edition has been rearranged and considerably expanded to give a balanced account both of basic theory and of the advances of the past decade. It presents the main ideas of modern liquid state theory in a way that is both pedagogical and self-contained. The book should be accessible to graduate students and research workers, both experimentalists and theorists, who have a good background in elementary mechanics. Key Features: * Compares theoretical deductions with experimental r
Ethical principles of scientific communication
Directory of Open Access Journals (Sweden)
Baranov G. V.
2017-03-01
The article presents the principles of ethical management of scientific communication. The author affirms the priority of the ethical principle of the scientist's social responsibility.
Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance.
Vandersypen, L M; Steffen, M; Breyta, G; Yannoni, C S; Sherwood, M H; Chuang, I L
The number of steps any classical computer requires in order to find the prime factors of an l-digit integer N increases exponentially with l, at least using algorithms known at present. Factoring large integers is therefore conjectured to be intractable classically, an observation underlying the security of widely used cryptographic codes. Quantum computers, however, could factor integers in only polynomial time, using Shor's quantum factoring algorithm. Although important for the study of quantum computers, experimental demonstration of this algorithm has proved elusive. Here we report an implementation of the simplest instance of Shor's algorithm: factorization of N = 15 (whose prime factors are 3 and 5). We use seven spin-1/2 nuclei in a molecule as quantum bits, which can be manipulated with room temperature liquid-state nuclear magnetic resonance techniques. This method of using nuclei to store quantum information is in principle scalable to systems containing many quantum bits, but such scalability is not implied by the present work. The significance of our work lies in the demonstration of experimental and theoretical techniques for precise control and modelling of complex quantum computers. In particular, we present a simple, parameter-free but predictive model of decoherence effects in our system.
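The classical scaffolding around the quantum order-finding step is easy to reproduce for the N = 15 instance reported above. In this sketch the quantum subroutine is replaced by brute force (function names are mine):

```python
from math import gcd

# Classical skeleton of Shor's algorithm for N = 15, base a = 7.
# Only find_order would run on a quantum computer; the rest is the
# standard classical pre- and post-processing.

def find_order(a, N):
    # smallest positive r with a**r == 1 (mod N); brute force stands in
    # for the quantum period-finding subroutine
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    g = gcd(a, N)
    if g != 1:
        return g, N // g                      # lucky guess: a shares a factor
    r = find_order(a, N)
    if r % 2 == 1:
        raise ValueError("odd order: pick another base a")
    y = pow(a, r // 2, N)
    if y == N - 1:
        raise ValueError("trivial square root: pick another base a")
    return gcd(y - 1, N), gcd(y + 1, N)

factors = shor_factor(15, 7)
```

For a = 7 the order is r = 4, so y = 7² mod 15 = 4 and gcd(3, 15), gcd(5, 15) recover the factors 3 and 5, matching the experiment's result.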
The mGA1.0: A common LISP implementation of a messy genetic algorithm
Goldberg, David E.; Kerzic, Travis
1990-01-01
Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et. al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variations are described. Thereafter brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.
NeatSort - A practical adaptive algorithm
La Rocca, Marcello; Cantone, Domenico
2014-01-01
We present a new adaptive sorting algorithm which is optimal for most disorder metrics and, more importantly, has a simple and quick implementation. On input X, our algorithm has a theoretical Ω(|X|) lower bound and an O(|X| log |X|) upper bound, exhibiting amazing adaptive properties which make it run closer to its lower bound as disorder (computed on different metrics) diminishes. From a practical point of view, NeatSort has proven itself competitive with (and of...
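NeatSort itself is not specified in this abstract; a natural merge sort conveys the same adaptivity idea, since its cost falls with the number of pre-existing runs, reaching linear time on already sorted input:

```python
# Natural merge sort: an adaptive sort whose work depends on the number of
# maximal non-decreasing runs already present in the input. Illustrative
# only; this is not the NeatSort algorithm from the paper.

def runs(xs):
    # split the input into maximal non-decreasing runs
    out, cur = [], [xs[0]]
    for a, b in zip(xs, xs[1:]):
        if b >= a:
            cur.append(b)
        else:
            out.append(cur)
            cur = [b]
    out.append(cur)
    return out

def merge(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def natural_merge_sort(xs):
    if not xs:
        return []
    level = runs(xs)
    while len(level) > 1:
        # merge runs pairwise; fully sorted input needs zero merge passes
        level = [merge(level[k], level[k + 1]) if k + 1 < len(level) else level[k]
                 for k in range(0, len(level), 2)]
    return level[0]

result = natural_merge_sort([5, 1, 4, 2, 8, 0, 2])
```

With r runs on n elements the cost is O(n log r), which interpolates between O(n) for sorted input and O(n log n) for fully disordered input.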
Space mapping optimization algorithms for engineering design
DEFF Research Database (Denmark)
Koziel, Slawomir; Bandler, John W.; Madsen, Kaj
2006-01-01
A simple, efficient optimization algorithm based on space mapping (SM) is presented. It utilizes input SM to reduce the misalignment between the coarse and fine models of the optimized object over a region of interest, and output space mapping (OSM) to ensure matching of response and first… to a benchmark problem. In comparison with SMIS, the models presented are simple and have a small number of parameters that need to be extracted. The new algorithm is applied to the optimization of a coupled-line band-pass filter.
Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm
Institute of Scientific and Technical Information of China (English)
Haidong Xu; Mingyan Jiang; Kun Xu
2015-01-01
The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use the social information and lacks the knowledge of the problem structure, which leads to insufficiency in both convergent speed and searching precision. Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees to search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.
DEFF Research Database (Denmark)
Kautz, Karlheinz
2012-01-01
Contemporary Information Systems Development (ISD) takes place in a dynamic environment and is generally acknowledged as a complex activity. We investigate whether complex adaptive systems (CAS) theory is relevant as a theoretical foundation for understanding ISD, and if so, what kind of conception can be achieved by utilizing the theory? We introduce key CAS concepts and describe an emergent method framework for understanding ISD. Extending existing research, our main contribution is twofold: We first show how CAS and CAS principles are advantageous for comprehending and organizing ISD in general, beyond any particular development approach chosen for the execution of a project such as agile development. Thereby, we contribute to a complexity theory of ISD. Second, we back up our argument with a coherent empirical account of contemporary ISD, and thus contribute with practical advice…
An inductive algorithm for smooth approximation of functions
International Nuclear Information System (INIS)
Kupenova, T.N.
2011-01-01
An inductive algorithm is presented for smooth approximation of functions, based on the Tikhonov regularization method and applied to a specific kind of the Tikhonov parametric functional. The discrepancy principle is used for estimation of the regularization parameter. The principle of heuristic self-organization is applied for assessment of some parameters of the approximating function
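A minimal sketch of the discrepancy principle for choosing a Tikhonov regularization parameter, assuming a simple second-difference smoothing functional; the paper's inductive scheme and specific parametric functional are not reproduced:

```python
import numpy as np

# Tikhonov-regularized smoothing: minimize ||f - y||^2 + lam * ||D2 f||^2,
# where D2 is the second-difference operator. The discrepancy principle
# picks lam so that the residual ||f - y|| matches the known noise level.

def smooth(y, lam):
    n = len(y)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + lam * D2.T @ D2       # normal equations of the functional
    return np.linalg.solve(A, y)

def discrepancy_lambda(y, noise_norm, lo=1e-8, hi=1e8, iters=60):
    # the residual grows monotonically with lam, so bisect on a log scale
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if np.linalg.norm(smooth(y, mid) - y) < noise_norm:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
sigma = 0.1
y = np.sin(2 * np.pi * t) + rng.normal(0, sigma, t.size)
target = sigma * np.sqrt(t.size)          # expected norm of the noise
lam = discrepancy_lambda(y, target)
f = smooth(y, lam)
```

The design choice here is standard: under-regularizing (small λ) fits the noise, over-regularizing flattens the signal; matching the residual to the noise norm sits between the two.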
A simple, remote, video based breathing monitor.
Regev, Nir; Wulich, Dov
2017-07-01
Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist in the market, some with vital-signs monitoring capabilities, but none remote. This paper presents a simple, yet efficient, real-time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked using the well-known Lucas-Kanade algorithm on a frame-by-frame basis. A generalized likelihood ratio test is then utilized on each of the many interest points to detect which are moving in a harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum likelihood algorithm optimally estimates the breathing rate using all points considered.
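The spectral-estimation step can be sketched in isolation. This toy assumes a single clean sinusoid and omits the optical-flow tracking, likelihood-ratio test, and fusion stages of the actual pipeline:

```python
import numpy as np

# Pisarenko harmonic decomposition for one real sinusoid: the eigenvector of
# the 3x3 autocorrelation matrix belonging to the smallest eigenvalue encodes
# the sinusoid's frequency.

def pisarenko_frequency(x):
    x = np.asarray(x, dtype=float)
    # biased-by-length autocorrelation estimates r[0], r[1], r[2]
    r = [np.dot(x[:len(x) - k], x[k:]) / (len(x) - k) for k in range(3)]
    R = np.array([[r[0], r[1], r[2]],
                  [r[1], r[0], r[1]],
                  [r[2], r[1], r[0]]])
    w, V = np.linalg.eigh(R)
    a = V[:, 0]                 # eigenvector of the smallest eigenvalue
    # for a single sinusoid the noise eigenvector is symmetric (a0 == a2),
    # and the roots of a0 z^2 + a1 z + a0 lie at exp(+/- i omega)
    cos_omega = -a[1] / (2 * a[0])
    return np.arccos(np.clip(cos_omega, -1.0, 1.0))   # omega in rad/sample

fs = 10.0                       # sampling rate, Hz (e.g. camera frame rate)
f_true = 0.3                    # breathing-like frequency, Hz (18 BPM)
n = np.arange(5000)
x = np.sin(2 * np.pi * f_true * n / fs)
f_est = pisarenko_frequency(x) * fs / (2 * np.pi)
```

In the paper this estimate is updated frame by frame per interest point and then fused across points; here a single batch estimate illustrates the decomposition itself.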
Ethical principles and theories.
Schultz, R C
1993-01-01
Ethical theory about what is right and good in human conduct lies behind the issues practitioners face and the codes they turn to for guidance; it also provides guidance for actions, practices, and policies. Principles of obligation, such as egoism, utilitarianism, and deontology, offer general answers to the question, "Which acts/practices are morally right?" A re-emerging alternative to using such principles to assess individual conduct is to center normative theory on personal virtues. For structuring society's institutions, principles of social justice offer alternative answers to the question, "How should social benefits and burdens be distributed?" But human concerns about right and good call for more than just theoretical responses. Some critics (eg, the postmodernists and the feminists) charge that normative ethical theorizing is a misguided enterprise. However, that charge should be taken as a caution and not as a refutation of normative ethical theorizing.
Principles of musical acoustics
Hartmann, William M
2013-01-01
Principles of Musical Acoustics focuses on the basic principles in the science and technology of music. Musical examples and specific musical instruments demonstrate the principles. The book begins with a study of vibrations and waves, in that order. These topics constitute the basic physical properties of sound, one of two pillars supporting the science of musical acoustics. The second pillar is the human element, the physiological and psychological aspects of acoustical science. The perceptual topics include loudness, pitch, tone color, and localization of sound. With these two pillars in place, it is possible to go in a variety of directions. The book treats in turn, the topics of room acoustics, audio both analog and digital, broadcasting, and speech. It ends with chapters on the traditional musical instruments, organized by family. The mathematical level of this book assumes that the reader is familiar with elementary algebra. Trigonometric functions, logarithms and powers also appear in the book, but co...
Probabilistic simple sticker systems
Selvarajoo, Mathuri; Heng, Fong Wan; Sarmin, Nor Haniza; Turaev, Sherzod
2017-04-01
A model for DNA computing using the recombination behavior of DNA molecules, known as a sticker system, was introduced by L. Kari, G. Paun, G. Rozenberg, A. Salomaa, and S. Yu in the paper entitled "DNA computing, sticker systems and universality" in Acta Informatica, vol. 35, pp. 401-420, 1998. A sticker system uses the Watson-Crick complementary feature of DNA molecules: starting from the incomplete double stranded sequences, and iteratively using sticking operations until a complete double stranded sequence is obtained. It is known that sticker systems with finite sets of axioms and sticker rules generate only regular languages. Hence, different types of restrictions have been considered to increase the computational power of sticker systems. Recently, a variant of restricted sticker systems, called probabilistic sticker systems, has been introduced [4]. In this variant, the probabilities are initially associated with the axioms, and the probability of a generated string is computed by multiplying the probabilities of all occurrences of the initial strings in the computation of the string. Strings for the language are selected according to some probabilistic requirements. In this paper, we study fundamental properties of probabilistic simple sticker systems. We prove that the probabilistic enhancement increases the computational power of simple sticker systems.
Mechanical engineering principles
Bird, John
2014-01-01
A student-friendly introduction to core engineering topics. This book introduces mechanical principles and technology through examples and applications, enabling students to develop a sound understanding of both engineering principles and their use in practice. These theoretical concepts are supported by 400 fully worked problems, 700 further problems with answers, and 300 multiple-choice questions, all of which add up to give the reader a firm grounding on each topic. The new edition is up to date with the latest BTEC National specifications and can also be used on undergraduate courses in mecha
Itch Management: General Principles.
Misery, Laurent
2016-01-01
Like pain, itch is a challenging condition that needs to be managed. Within this setting, the first principle of itch management is to get an appropriate diagnosis to perform an etiology-oriented therapy. In several cases it is not possible to treat the cause, the etiology is undetermined, there are several causes, or the etiological treatment is not effective enough to alleviate itch completely. This is also why there is need for symptomatic treatment. In all patients, psychological support and associated pragmatic measures might be helpful. General principles and guidelines are required, yet patient-centered individual care remains fundamental. © 2016 S. Karger AG, Basel.
Born, Max; Wolf, Emil
1999-10-01
Principles of Optics is one of the classic science books of the twentieth century, and probably the most influential book in optics published in the past forty years. This edition has been thoroughly revised and updated, with new material covering the CAT scan, interference with broad-band light and the so-called Rayleigh-Sommerfeld diffraction theory. This edition also details scattering from inhomogeneous media and presents an account of the principles of diffraction tomography to which Emil Wolf has made a basic contribution. Several new appendices are also included. This new edition will be invaluable to advanced undergraduates, graduate students and researchers working in most areas of optics.
Electrical principles 3 checkbook
Bird, J O
2013-01-01
Electrical Principles 3 Checkbook aims to introduce students to the basic electrical principles needed by technicians in electrical engineering, electronics, and telecommunications. The book first tackles circuit theorems, single-phase series A.C. circuits, and single-phase parallel A.C. circuits. Discussions focus on worked problems on parallel A.C. circuits, worked problems on series A.C. circuits, main points concerned with D.C. circuit analysis, worked problems on circuit theorems, and further problems on circuit theorems. The manuscript then examines three-phase systems and D.C. transients
Bulmer, M G
1979-01-01
There are many textbooks which describe current methods of statistical analysis, while neglecting related theory. There are equally many advanced textbooks which delve into the far reaches of statistical theory, while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again fo
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial-time observer with black-box access to the algorithm. We show a necessary an...
Online learning algorithm for ensemble of decision rules
Chikalov, Igor
2011-01-01
We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach. © 2011 Springer-Verlag.
Clustered K nearest neighbor algorithm for daily inflow forecasting
Akbari, M.; Van Overloop, P.J.A.T.M.; Afshar, A.
2010-01-01
Instance based learning (IBL) algorithms are a common choice among data driven algorithms for inflow forecasting. They are based on the similarity principle and prediction is made by the finite number of similar neighbors. In this sense, the similarity of a query instance is estimated according to
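The similarity-based prediction described above can be sketched as a minimal k-nearest-neighbour forecaster. The feature vectors, targets and choice of Euclidean distance below are illustrative assumptions, not the paper's data:

```python
# Minimal instance-based (k-nearest-neighbour) forecast sketch.
# History pairs and k are invented for illustration.

def knn_forecast(history, query, k=2):
    """history: list of (feature_vector, next_day_inflow) pairs.
    Predict inflow for `query` as the mean target of its k most similar
    neighbours, with similarity measured by Euclidean distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda pair: dist(pair[0], query))[:k]
    return sum(target for _, target in nearest) / k

hist = [((10.0, 3.0), 12.0), ((11.0, 3.2), 13.0), ((25.0, 8.0), 30.0)]
pred = knn_forecast(hist, (10.5, 3.1), k=2)  # mean of 12.0 and 13.0 -> 12.5
```

Clustering the instance base first, as the title suggests, would restrict the neighbour search to the cluster most similar to the query instance.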
Bio Inspired Algorithms in Single and Multiobjective Reliability Optimization
DEFF Research Database (Denmark)
Madsen, Henrik; Albeanu, Grigore; Burtschy, Bernard
2014-01-01
Non-traditional search and optimization methods based on natural phenomena have been proposed recently in order to avoid local or unstable behavior when run towards an optimum state. This paper describes the principles of bio inspired algorithms and reports on Migration Algorithms and Bees...
ALGORITHMS FOR TETRAHEDRAL NETWORK (TEN) GENERATION
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
The Tetrahedral Network (TEN) is a powerful 3-D vector structure in GIS with many advantages, such as a simple structure, fast topological relation processing and rapid visualization. The difficulty in applying TEN lies in creating the data structure automatically. Although a raster algorithm has been introduced by some authors, problems of accuracy, memory requirement, speed and integrity remain. In this paper, the raster algorithm is completed and a vector algorithm is presented, after a 3-D data model and the structure of TEN have been introduced. Finally, experiments, conclusions and future work are discussed.
Beyond Simple Headquarters Configurations
DEFF Research Database (Denmark)
Dellestrand, Henrik; Kappen, Philip; Nell, Phillip Christopher
We investigate “dual headquarters involvement”, i.e. corporate and divisional headquarters’ simultaneous involvement in subsidiaries’ innovation development projects. Analyses draw on 85 innovation projects in 23 multibusiness firms and reveal that cross-divisional innovation importance, i.e., an innovation that is important for the firm beyond the divisional boundaries, drives dual headquarters involvement in innovation development. Contrary to expectations, on average, a non-significant effect of cross-divisional embeddedness on dual headquarters involvement is found. Yet, both cross-divisional importance and embeddedness effects are contingent on the overall complexity of the innovation project as signified by the size of the development network. The results lend support for the notion that parenting in complex structures entails complex headquarters structures and that we need to go beyond simple...
Givant, Steven
2017-01-01
This monograph details several different methods for constructing simple relation algebras, many of which are new with this book. By drawing these seemingly different methods together, all are shown to be aspects of one general approach, for which several applications are given. These tools for constructing and analyzing relation algebras are of particular interest to mathematicians working in logic, algebraic logic, or universal algebra, but will also appeal to philosophers and theoretical computer scientists working in fields that use mathematics. The book is written with a broad audience in mind and features a careful, pedagogical approach; an appendix contains the requisite background material in relation algebras. Over 400 exercises provide ample opportunities to engage with the material, making this a monograph equally appropriate for use in a special topics course or for independent study. Readers interested in pursuing an extended background study of relation algebras will find a comprehensive treatme...
Energy Technology Data Exchange (ETDEWEB)
Graham, Peter W.; /Stanford U., ITP; Horn, Bart; Kachru, Shamit; /Stanford U., ITP /SLAC; Rajendran, Surjeet; /Johns Hopkins U. /Stanford U., ITP; Torroba, Gonzalo; /Stanford U., ITP /SLAC
2011-12-14
We explore simple but novel bouncing solutions of general relativity that avoid singularities. These solutions require curvature k = +1, and are supported by a negative cosmological term and matter with -1 < w < -1/3. In the case of moderate bounces (where the ratio of the maximal scale factor a_+ to the minimal scale factor a_- is O(1)), the solutions are shown to be classically stable and cycle through an infinite set of bounces. For more extreme cases with large a_+/a_-, the solutions can still oscillate many times before classical instabilities take them out of the regime of validity of our approximations. In this regime, quantum particle production also leads eventually to a departure from the realm of validity of semiclassical general relativity, likely yielding a singular crunch. We briefly discuss possible applications of these models to realistic cosmology.
SIMPLE for industrial radiography
International Nuclear Information System (INIS)
Azhar Azmi; Abd Nassir Ibrahim; Siti Madiha Muhammad Amir; Glam Hadzir Patai Mohamad; Saidi Rajab
2004-01-01
The first thing industrial radiographers have to do before commencing radiography work is to determine manually the correct exposure the film needs in order to obtain the right density. The amount of exposure depends on many variables such as the type of radioisotope, the type of film, the nature of the test-object and its orientation, and the specific arrangement of object location and configuration. In many cases radiography work is rejected because the radiographs fail to meet certain reference criteria as defined in the applicable standard. One of the main reasons for radiograph rejection is inadequate exposure received by the films. SIMPLE is a software package specially developed to facilitate the calculation of gamma-radiography exposure. By using this software and knowing the radiographic parameters to be encountered during the work, it is expected that human error will be minimized, thus enhancing the quality and productivity of NDT jobs. (Author)
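One ingredient of such an exposure calculation, the inverse-square dependence on source-to-film distance, can be sketched as follows. `exposure_factor` is a hypothetical constant lumping film, source and object variables; a real SIMPLE calculation involves more parameters than shown here:

```python
# Hedged sketch of one ingredient of a gamma-radiography exposure
# calculation: required exposure (activity x time) grows with the square
# of the source-to-film distance.  `exposure_factor` is a hypothetical
# lumped constant, not a value from the SIMPLE software.

def exposure_time_minutes(exposure_factor, distance_m, activity_ci):
    """Time needed so the film receives the target exposure."""
    return exposure_factor * distance_m ** 2 / activity_ci

t_half_m = exposure_time_minutes(60.0, 0.5, 20.0)  # 0.75 min at 0.5 m
t_one_m = exposure_time_minutes(60.0, 1.0, 20.0)   # doubling distance -> 4x time
```

The point of automating even this simple scaling is exactly the one the abstract makes: manual arithmetic under field conditions invites the exposure errors that get radiographs rejected.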
Molecular genetics made simple
Directory of Open Access Journals (Sweden)
Heba Sh. Kassem
2012-07-01
Genetics have undoubtedly become an integral part of biomedical science and clinical practice, with important implications in deciphering disease pathogenesis and progression, identifying diagnostic and prognostic markers, as well as designing better targeted treatments. The exponential growth of our understanding of different genetic concepts is paralleled by a growing list of genetic terminology that can easily intimidate the unfamiliar reader. Rendering genetics incomprehensible to the clinician however, defeats the very essence of genetic research: its utilization for combating disease and improving quality of life. Herein we attempt to correct this notion by presenting the basic genetic concepts along with their usefulness in the cardiology clinic. Bringing genetics closer to the clinician will enable its harmonious incorporation into clinical care, thus not only restoring our perception of its simple and elegant nature, but importantly ensuring the maximal benefit for our patients.
Molecular genetics made simple
Kassem, Heba Sh.; Girolami, Francesca; Sanoudou, Despina
2012-01-01
Genetics have undoubtedly become an integral part of biomedical science and clinical practice, with important implications in deciphering disease pathogenesis and progression, identifying diagnostic and prognostic markers, as well as designing better targeted treatments. The exponential growth of our understanding of different genetic concepts is paralleled by a growing list of genetic terminology that can easily intimidate the unfamiliar reader. Rendering genetics incomprehensible to the clinician however, defeats the very essence of genetic research: its utilization for combating disease and improving quality of life. Herein we attempt to correct this notion by presenting the basic genetic concepts along with their usefulness in the cardiology clinic. Bringing genetics closer to the clinician will enable its harmonious incorporation into clinical care, thus not only restoring our perception of its simple and elegant nature, but importantly ensuring the maximal benefit for our patients. PMID:25610837
DuBay, William H.
2004-01-01
The principles of readability are in every style manual. Readability formulas are in every writing aid. What is missing is the research and theory on which they stand. This short review of readability research spans 100 years. The first part covers the history of adult literacy studies in the U.S., establishing the stratified nature of the adult…
Schwartz, Melvin
1972-01-01
This advanced undergraduate- and graduate-level text by the 1988 Nobel Prize winner establishes the subject's mathematical background, reviews the principles of electrostatics, then introduces Einstein's special theory of relativity and applies it throughout the book in topics ranging from Gauss' theorem and Coulomb's law to electric and magnetic susceptibility.
Principles of Bridge Reliability
DEFF Research Database (Denmark)
Thoft-Christensen, Palle; Nowak, Andrzej S.
The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated, and as an example the reliability profile and a sensitivity analysis for a corroded reinforced concrete bridge are shown.
Siyanova-Chanturia, Anna; Martinez, Ron
2015-01-01
John Sinclair's Idiom Principle famously posited that most texts are largely composed of multi-word expressions that "constitute single choices" in the mental lexicon. At the time that assertion was made, little actual psycholinguistic evidence existed in support of that holistic, "single choice," view of formulaic language. In…
Indian Academy of Sciences (India)
his exclusion principle, the quantum theory was a mess. Moreover, it could ... This is a function of all the coordinates and 'internal variables', such as spin, of all the ... must remain basically the same (i.e. change by a phase factor at most) if we ...
The traveltime holographic principle
Huang, Y.; Schuster, Gerard T.
2014-01-01
Fermat's interferometric principle is used to compute interior transmission traveltimes τpq from exterior transmission traveltimes τsp and τsq. Here, the exterior traveltimes are computed for sources s on a boundary B that encloses a volume V of interior points p and q. Once the exterior traveltimes are computed, no further ray tracing is needed to calculate the interior times τpq. Therefore this interferometric approach can be more efficient than explicitly computing interior traveltimes τpq by ray tracing. Moreover, the memory requirement of the traveltimes is reduced by one dimension, because the boundary B is of one fewer dimension than the volume V. An application of this approach is demonstrated with interbed multiple (IM) elimination. Here, the IMs in the observed data are predicted from the migration image and are subsequently removed by adaptive subtraction. This prediction is enabled by the knowledge of interior transmission traveltimes τpq computed according to Fermat's interferometric principle. We denote this principle as the ‘traveltime holographic principle’, by analogy with the holographic principle in cosmology where information in a volume is encoded on the region's boundary.
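A quick numerical check of the idea in a homogeneous medium (unit velocity, straight rays), assuming the stationary max-difference form t_pq = max over s of (t_sq - t_sp); the circular boundary and interior points are illustrative choices, not the paper's geometry:

```python
# Sketch: recover an interior traveltime t_pq from exterior traveltimes
# t_sp, t_sq for sources s on an enclosing circle B.  Homogeneous medium
# (v = 1) so traveltimes are straight-ray distances; the max-difference
# form assumed here is exact when some s lies on the extension of the
# ray from q through p.
import math

v = 1.0
p, q = (0.3, 0.0), (-0.2, 0.1)            # interior points of volume V
boundary = [(math.cos(a), math.sin(a))     # sources s on enclosing circle B
            for a in (2 * math.pi * i / 3600 for i in range(3600))]

t = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1]) / v   # traveltime

t_pq_direct = t(p, q)                                    # by "ray tracing"
t_pq_interf = max(t(s, q) - t(s, p) for s in boundary)   # from exterior times
# the two agree to discretization error of the boundary sampling
```

As the abstract notes, the saving is that only boundary times need storing: the circle here is one dimension smaller than the disc of interior points.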
The Bohr Correspondence Principle
Indian Academy of Sciences (India)
IAS Admin
Deepak Dhar. Keywords: correspondence principle, hydrogen atom, Kepler orbit. Deepak Dhar works at the Tata Institute of Fundamental Research, Mumbai. His research interests are mainly in the area of statistical physics. We consider the quantum-mechanical non-relativistic hydrogen atom. We show that for bound.
International Nuclear Information System (INIS)
Abdelmalik, W.E.Y.
2011-01-01
This work presents a summary of the IAEA Safety Standards Series publication No. SF-1, entitled Fundamental Safety Principles, published in 2006. This publication states the fundamental safety objective and ten associated safety principles, and briefly describes their intent and purposes. Safety measures and security measures have in common the aim of protecting human life and health and the environment. These safety principles are: 1) Responsibility for safety, 2) Role of the government, 3) Leadership and management for safety, 4) Justification of facilities and activities, 5) Optimization of protection, 6) Limitation of risks to individuals, 7) Protection of present and future generations, 8) Prevention of accidents, 9) Emergency preparedness and response and 10) Protective action to reduce existing or unregulated radiation risks. The safety principles concern the security of facilities and activities to the extent that they apply to measures that contribute to both safety and security. Safety measures and security measures must be designed and implemented in an integrated manner so that security measures do not compromise safety and safety measures do not compromise security.
DEFF Research Database (Denmark)
Sharp, Robin
This is a new and updated edition of a book first published in 1994. The book introduces the reader to the principles used in the construction of a large range of modern data communication protocols, as used in distributed computer systems of all kinds. The approach taken is rather a formal one...
The traveltime holographic principle
Huang, Y.
2014-11-06
Fermat's interferometric principle is used to compute interior transmission traveltimes τpq from exterior transmission traveltimes τsp and τsq. Here, the exterior traveltimes are computed for sources s on a boundary B that encloses a volume V of interior points p and q. Once the exterior traveltimes are computed, no further ray tracing is needed to calculate the interior times τpq. Therefore this interferometric approach can be more efficient than explicitly computing interior traveltimes τpq by ray tracing. Moreover, the memory requirement of the traveltimes is reduced by one dimension, because the boundary B is of one fewer dimension than the volume V. An application of this approach is demonstrated with interbed multiple (IM) elimination. Here, the IMs in the observed data are predicted from the migration image and are subsequently removed by adaptive subtraction. This prediction is enabled by the knowledge of interior transmission traveltimes τpq computed according to Fermat's interferometric principle. We denote this principle as the ‘traveltime holographic principle’, by analogy with the holographic principle in cosmology where information in a volume is encoded on the region's boundary.
Kamat, R. V.
1991-01-01
A principle is presented to show that, if the time of passage of light is expressible as a function of discrete variables, one may dispense with the more general method of the calculus of variations. The calculus of variations and the alternative are described. The phenomenon of mirage is discussed. (Author/KR)
Principles of economics textbooks
DEFF Research Database (Denmark)
Madsen, Poul Thøis
2012-01-01
Has the financial crisis already changed US principles of economics textbooks? Rather little has changed in individual textbooks, but taken as a whole ten of the best-selling textbooks suggest rather encompassing changes of core curriculum. A critical analysis of these changes shows how individual...
Directory of Open Access Journals (Sweden)
Ahmet YILDIRIM
2014-07-01
The economy is one of the most important phenomena shaping the lives of individuals in our century. It makes itself felt in almost every aspect of people's lives, presenting itself as their sole determinant. The most obvious aim of the economy is to induce people to consume by triggering needs and motives. Consumer culture thus pervades every aspect of people's situation, so that whatever is consumed is consumed in the name of culture, beauty and value. The way out of this siege on our moral and religious values is to return to them. Based on local cultural and religious values, which increasingly come to the fore today, the Muslim way of life appears close to the plain, lean life preferred by many people. The simple life has even become widely accepted in the Western world as a way of life, a conception of life, a philosophy, a movement. In determining the Muslim way of life, the kind of life lived by the Prophet (sa) is known to be a very important model, example and determinant. The prophets, as carriers of religious values, have always been examples and models for the societies to which they were sent, because every aspect of human life, its style and its surroundings, bears their mark. It is not possible to live our religion without knowing, learning and understanding his life, which carries these values. In our presentation, we mainly scrutinize Islam's outlook on life and the lifestyle it envisages, including the simple life and lifestyle of the Prophet of Islam (sa), and in short we try to find answers to questions regarding how Islam has embraced life and how the Prophet lived.
A Simple Inquiry-Based Lab for Teaching Osmosis
Taylor, John R.
2014-01-01
This simple inquiry-based lab was designed to teach the principle of osmosis while also providing an experience for students to use the skills and practices commonly found in science. Students first design their own experiment using very basic equipment and supplies, which generally results in mixed, but mostly poor, outcomes. Classroom "talk…
A Medieval Clock Made out of Simple Materials
Danese, B.; Oss, S.
2008-01-01
A cheap replica of the verge-and-foliot clock has been built from simple materials. It is a didactic tool of great power for physics teaching at every stage of schooling, in particular at university level. An account is given of its construction and its working principles, together with motivated examples of a few activities. (Contains 3 tables…
Trophic dynamics of a simple model ecosystem.
Bell, Graham; Fortier-Dubois, Étienne
2017-09-13
We have constructed a model of community dynamics that is simple enough to enumerate all possible food webs, yet complex enough to represent a wide range of ecological processes. We use the transition matrix to predict the outcome of succession and then investigate how the transition probabilities are governed by resource supply and immigration. Low-input regimes lead to simple communities whereas trophically complex communities develop when there is an adequate supply of both resources and immigrants. Our interpretation of trophic dynamics in complex communities hinges on a new principle of mutual replenishment, defined as the reciprocal alternation of state in a pair of communities linked by the invasion and extinction of a shared species. Such neutral couples are the outcome of succession under local dispersal and imply that food webs will often be made up of suites of trophically equivalent species. When immigrants arrive from an external pool of fixed composition a similar principle predicts a dynamic core of webs constituting a neutral interchange network, although communities may express an extensive range of other webs whose membership is only in part predictable. The food web is not in general predictable from whole-community properties such as productivity or stability, although it may profoundly influence these properties. © 2017 The Author(s).
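The transition-matrix prediction of succession can be sketched with a toy three-state chain; the matrix below is invented for illustration and is not one of the paper's enumerated food webs:

```python
# Toy succession sketch: community states evolve by a transition matrix T
# (row: current web, column: next web).  Iterating T from an initial state
# predicts the long-run mixture of webs.  The 3-state matrix is invented.

T = [[0.7, 0.2, 0.1],   # hypothetical: bare -> {bare, simple, complex}
     [0.1, 0.6, 0.3],   # simple -> ...
     [0.0, 0.2, 0.8]]   # complex -> ...

def step(dist, T):
    """One transition: new distribution over community states."""
    n = len(T)
    return [sum(dist[i] * T[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]          # start from the bare state
for _ in range(500):            # power iteration toward the stationary mix
    dist = step(dist, T)
# `dist` now approximates the long-run frequencies of the three webs
```

In this toy the high self-transition probability of the third state plays the role of an adequate supply of resources and immigrants sustaining the trophically complex community.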
Extremum principles for irreversible processes
International Nuclear Information System (INIS)
Hillert, M.; Agren, J.
2006-01-01
Hamilton's extremum principle is a powerful mathematical tool in classical mechanics. Onsager's extremum principle may play a similar role in irreversible thermodynamics and may also become a valuable tool. His principle may formally be regarded as a principle of maximum rate of entropy production but does not have a clear physical interpretation. Prigogine's principle of minimum rate of entropy production has a physical interpretation when it applies, but is not strictly valid except for a very special case
The parallel plate avalanche counter: a simple, rugged, imaging X-ray counter
International Nuclear Information System (INIS)
Joensen, K.D.; Budtz-Joergensen, C.; Bahnsen, A.; Madsen, M.M.; Olesen, C.; Schnopper, H.W.
1995-01-01
A two-dimensional parallel gap proportional counter has been developed at the Danish Space Research Institute. Imaging over the 120 mm diameter active area is obtained using the positive ion component of the avalanche signals as recorded by a system of wedge- and strip-electrodes. An electronically simple, but very effective background rejection is obtained by using the fast electron component of the avalanche signal. Gas gains up to 8×10^5 have been achieved. An energy resolution of 16% and a sub-millimeter spatial resolution have been measured at 5.9 keV for an operating gas gain of 10^5. In principle, the position coordinates are linear functions of electronic readouts. The present model, however, exhibits non-linearities, caused by imperfections in the wedge and strip-electrode pattern. These non-linearities are corrected by using a bilinear correction algorithm. We conclude that the rugged construction, the simple electronics, the effectiveness of the background rejection and the actual imaging performance makes this a very attractive laboratory detector for low and intermediate count rate imaging applications. ((orig.))
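Wedge-and-strip position decoding in its textbook form can be sketched as follows; the linear formula and the scale factor are assumptions for illustration, not expressions quoted from the paper:

```python
# Sketch of textbook wedge-and-strip decoding (assumed form, not taken
# from the paper): the charge shared among the wedge (Qw), strip (Qs)
# and common (Qz) electrodes encodes the avalanche position, and ideally
# the coordinates are linear in these readouts.

def wedge_strip_position(Qw, Qs, Qz):
    """Normalized (x, y) from the three electrode charges."""
    total = Qw + Qs + Qz
    return 2 * Qw / total, 2 * Qs / total   # illustrative scale factor 2

x, y = wedge_strip_position(Qw=2.0, Qs=1.0, Qz=1.0)   # -> (1.0, 0.5)
```

In the detector described above, residual imperfections in the electrode pattern are then handled by applying a bilinear correction to these raw coordinates.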
Self-organized modularization in evolutionary algorithms.
Dauscher, Peter; Uthmann, Thomas
2005-01-01
The principle of modularization has proven to be extremely successful in the field of technical applications and particularly for Software Engineering purposes. The question to be answered within the present article is whether mechanisms can also be identified within the framework of Evolutionary Computation that cause a modularization of solutions. We will concentrate on processes, where modularization results only from the typical evolutionary operators, i.e. selection and variation by recombination and mutation (and not, e.g., from special modularization operators). This is what we call Self-Organized Modularization. Based on a combination of two formalizations by Radcliffe and Altenberg, some quantitative measures of modularity are introduced. Particularly, we distinguish Built-in Modularity as an inherent property of a genotype and Effective Modularity, which depends on the rest of the population. These measures can easily be applied to a wide range of present Evolutionary Computation models. It will be shown, both theoretically and by simulation, that under certain conditions, Effective Modularity (as defined within this paper) can be a selection factor. This causes Self-Organized Modularization to take place. The experimental observations emphasize the importance of Effective Modularity in comparison with Built-in Modularity. Although the experimental results have been obtained using a minimalist toy model, they can lead to a number of consequences for existing models as well as for future approaches. Furthermore, the results suggest a complex self-amplification of highly modular equivalence classes in the case of respected relations. Since the well-known Holland schemata are just the equivalence classes of respected relations in most Simple Genetic Algorithms, this observation emphasizes the role of schemata as Building Blocks (in comparison with arbitrary subsets of the search space).
Algorithmic Mechanism Design of Evolutionary Computation.
Pei, Yan
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm.
General Quantum Interference Principle and Duality Computer
International Nuclear Information System (INIS)
Long Guilu
2006-01-01
In this article, we propose a general principle of quantum interference for quantum systems, and based on this we propose a new type of computing machine, the duality computer, that may outperform in principle both the classical computer and the quantum computer. According to the general principle of quantum interference, the very essence of quantum interference is the interference of the sub-waves of the quantum system itself. A quantum system considered here can be any quantum system: a single microscopic particle, a composite quantum system such as an atom or a molecule, or a loose collection of a few quantum objects such as two independent photons. In the duality computer, the wave of the duality computer is split into several sub-waves and they pass through different routes, where different computing gate operations are performed. These sub-waves are then re-combined to interfere to give the computational results. The quantum computer, however, has only used the particle nature of quantum objects. In a duality computer, it may be possible to find a marked item from an unsorted database using only a single query, and all NP-complete problems may have polynomial algorithms. Two proof-of-principle designs of the duality computer are presented: the giant molecule scheme and the nonlinear quantum optics scheme. We also propose a thought experiment to check the related fundamental issues, the measurement efficiency of a partial wave function.
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.
2015-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.
2014-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
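The idea of emitting intermediate solutions that approximate the final answer increasingly well can be sketched with a toy progressive computation; the bounding-box problem is chosen for illustration and is not an example from the paper:

```python
# Sketch of the progressive-algorithm idea: while scanning the input in
# chunks, yield an intermediate solution after each chunk; here each
# intermediate bounding box is contained in (hence approximates) the
# final one.  Problem choice and data are illustrative.

def progressive_bbox(points, chunk=2):
    lo_x = lo_y = float("inf")
    hi_x = hi_y = float("-inf")
    for i, (x, y) in enumerate(points, 1):
        lo_x, hi_x = min(lo_x, x), max(hi_x, x)
        lo_y, hi_y = min(lo_y, y), max(hi_y, y)
        if i % chunk == 0 or i == len(points):
            yield (lo_x, lo_y, hi_x, hi_y)   # intermediate solution

pts = [(0, 0), (2, 1), (1, 5), (-3, 2), (4, 4)]
solutions = list(progressive_bbox(pts))
# solutions[-1] is the exact bounding box of all five points
```

Analyzing such an algorithm, in the framework the abstract describes, means bounding both the total running time and how quickly the intermediate solutions converge to the complete one.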
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
How does awareness of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself.
Energy Technology Data Exchange (ETDEWEB)
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
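The BR algorithm itself is not available in standard libraries; the sketch below only sets up the kind of input it targets, a narrowband, nearly tridiagonal upper Hessenberg matrix, and finds its eigenvalues with the standard QR-based solver that BR is benchmarked against. The matrix is a textbook test case, not one from the paper:

```python
# Problem setting for the BR algorithm (illustrative): eigenvalues of a
# tridiagonal (hence upper Hessenberg) matrix, here solved with NumPy's
# QR-based eigensolver -- the baseline the BR algorithm is compared to.
import numpy as np

n = 6
# Tridiagonal test matrix with known spectrum 2 - 2*cos(k*pi/(n+1))
H = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

eigs = np.sort(np.linalg.eigvals(H).real)
expected = np.sort(2 - 2 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1)))
# eigs and expected agree to machine precision
```

The reported advantage of BR on such narrowband matrices is that it preserves the band structure during bulge-chasing, which is where the factor of 30-60 in time and over 100 in storage comes from.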
Directory of Open Access Journals (Sweden)
2005-11-01
Full Text Available Quasispecies are clouds of genotypes that appear in a population at mutation-selection balance. This concept has recently attracted the attention of virologists, because many RNA viruses appear to generate high levels of genetic variation that may enhance the evolution of drug resistance and immune escape. The literature on these important evolutionary processes is, however, quite challenging. Here we use simple models to link mutation-selection balance theory to the most novel property of quasispecies: the error threshold, a mutation rate below which populations equilibrate in a traditional mutation-selection balance and above which the population experiences an error catastrophe, that is, the loss of the favored genotype through frequent deleterious mutations. These models show that a single fitness landscape may contain multiple, hierarchically organized error thresholds and that an error threshold is affected by the extent of back mutation and redundancy in the genotype-to-phenotype map. Importantly, an error threshold is distinct from an extinction threshold, which is the complete loss of the population through lethal mutations. Based on this framework, we argue that the lethal mutagenesis of a viral infection by mutation-inducing drugs is not a true error catastrophe, but is an extinction catastrophe.
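The error-threshold behaviour described above can be reproduced with a minimal two-type model. This is a sketch under simplifying assumptions (a single master genotype of fitness w, an error class of fitness 1, per-genome mutation rate u, no back mutation); it is not one of the paper's models.

```python
# Two-type quasispecies sketch: x is the master-genotype frequency, and the
# master replicates error-free with probability q = 1 - u (back mutation
# ignored). Iterating the selection-mutation map to its fixed point gives
# the equilibrium frequency.
def master_equilibrium(w, u, steps=10000):
    q = 1.0 - u
    x = 0.5                               # initial master frequency
    for _ in range(steps):
        mean_fitness = w * x + (1.0 - x)
        x = w * q * x / mean_fitness      # normalized replication step
    return x
```

For w = 2 the threshold sits at u = 0.5: below it the master equilibrates at the positive frequency (wq - 1)/(w - 1); above it the frequency collapses to zero, i.e. the error catastrophe.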
International Nuclear Information System (INIS)
Narayanan, R.; Kalavathy, K.R.
1989-01-01
In any nuclear reactor, the start-up channels monitor the neutron flux during the start-up operation and give the alarm signals for safety purposes. Normally, a fission chamber is used as a detector to detect the low level neutron fluxes. The output of the detector after amplification and discrimination is shaped in a pulse shaper to provide constant width, constant height pulses for further processing in rate meters. The shaped pulses also go to a scaler timer, where they are counted for fixed time intervals and the accumulated counts displayed. The scaler timer described in this paper uses LSIs to get at a simple, compact and reliable unit. The design is centered around two LSIs. MOS Counter Timebase LSI type MK 5009P (U1) is used to generate the gating pulses. A 1 MHz crystal is used to generate the system clock. A 4 bit address selects the desired gating intervals of 1 or 10 or 100 seconds. In fact, MK 5009 is a very versatile LSI in a 16 pin DIP package, consisting of a MOS oscillator and divider chain. It is binary encoded for frequency division selection ranging from 1 to 36 x 10. With an input frequency of 1 MHz, MK 5009 provides the time periods of 1 μs to 100 seconds, one minute, ten minute and one hour periods. (author)
A Simple Accelerometer Calibrator
International Nuclear Information System (INIS)
Salam, R A; Islamy, M R F; Khairurrijal; Munir, M M; Latief, H; Irsyam, M
2016-01-01
A high likelihood of earthquakes can lead to a high number of victims. Earthquakes can also trigger other hazards such as tsunamis and landslides, so a system that can monitor earthquake occurrence is required. One possible way to detect earthquakes is a vibration sensor system based on an accelerometer, whose output usually takes the form of acceleration data. A calibrator is therefore needed for the accelerometer that senses the vibration. In this study, a simple accelerometer calibrator has been developed using a 12 V DC motor, an optocoupler, a Liquid Crystal Display (LCD) and an AVR 328 microcontroller as the controller system. The system uses Pulse Width Modulation (PWM) from the microcontroller to control the motor rotational speed in response to the vibration frequency. The vibration frequency was read by the optocoupler and used as feedback to the system. The results show that the system could control the rotational speed and the vibration frequencies in accordance with the defined PWM. (paper)
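The feedback loop described above (PWM drive, optocoupler frequency readback) can be caricatured in a few lines. The linear motor model and the gain values here are assumptions for illustration, not the hardware's actual response.

```python
# Toy closed loop: a proportional controller nudges the PWM duty value until
# the measured vibration frequency matches the setpoint. motor_gain stands in
# for the DC motor + optocoupler path (assumed linear here).
def calibrate_pwm(target_hz, motor_gain=0.5, kp=0.4, steps=200):
    pwm = 0.0
    for _ in range(steps):
        measured_hz = motor_gain * pwm          # stand-in for the optocoupler reading
        pwm += kp * (target_hz - measured_hz)   # proportional correction
    return pwm, motor_gain * pwm
```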
Simple-MSSM: a simple and efficient method for simultaneous multi-site saturation mutagenesis.
Cheng, Feng; Xu, Jian-Miao; Xiang, Chao; Liu, Zhi-Qiang; Zhao, Li-Qing; Zheng, Yu-Guo
2017-04-01
To develop a practically simple and robust multi-site saturation mutagenesis (MSSM) method that enables simultaneously recombination of amino acid positions for focused mutant library generation. A general restriction enzyme-free and ligase-free MSSM method (Simple-MSSM) based on prolonged overlap extension PCR (POE-PCR) and Simple Cloning techniques. As a proof of principle of Simple-MSSM, the gene of eGFP (enhanced green fluorescent protein) was used as a template gene for simultaneous mutagenesis of five codons. Forty-eight randomly selected clones were sequenced. Sequencing revealed that all the 48 clones showed at least one mutant codon (mutation efficiency = 100%), and 46 out of the 48 clones had mutations at all the five codons. The obtained diversities at these five codons are 27, 24, 26, 26 and 22, respectively, which correspond to 84, 75, 81, 81, 69% of the theoretical diversity offered by NNK-degeneration (32 codons; NNK, K = T or G). The enzyme-free Simple-MSSM method can simultaneously and efficiently saturate five codons within one day, and therefore avoid missing interactions between residues in interacting amino acid networks.
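The NNK numbers quoted above are easy to verify: N ranges over all four bases and K over G/T, giving 32 codons that cover all 20 amino acids plus the TAG stop. A small enumeration using the standard genetic code (TCAG ordering) confirms this:

```python
from itertools import product

# Standard genetic code, third/second/first base cycling in TCAG order.
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

def translate(codon):
    i = (BASES.index(codon[0]) * 16
         + BASES.index(codon[1]) * 4
         + BASES.index(codon[2]))
    return AA[i]

# NNK codon set: N = A/C/G/T, K = G/T -> 32 codons.
nnk = ["".join(c) for c in product("ACGT", "ACGT", "GT")]
residues = {translate(c) for c in nnk}
# 32 codons covering all 20 amino acids plus the TAG stop ('*')
```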
Scheidegger, Adrian E
1982-01-01
Geodynamics is commonly thought to be one of the subjects which provide the basis for understanding the origin of the visible surface features of the Earth: the latter are usually assumed as having been built up by geodynamic forces originating inside the Earth ("endogenetic" processes) and then as having been degraded by geomorphological agents originating in the atmosphere and ocean ("exogenetic" agents). The modern view holds that the sequence of events is not as neat as it was once thought to be, and that, in effect, both geodynamic and geomorphological processes act simultaneously ("Principle of Antagonism"); however, the division of theoretical geology into the principles of geodynamics and those of theoretical geomorphology seems to be useful for didactic purposes. It has therefore been maintained in the present writer's works. This present treatise on geodynamics is the first part of the author's treatment of theoretical geology, the treatise on Theoretical Geomorphology (also published by the Sprin...
Mobus, George E
2015-01-01
This pioneering text provides a comprehensive introduction to systems structure, function, and modeling as applied in all fields of science and engineering. Systems understanding is increasingly recognized as a key to a more holistic education and greater problem solving skills, and is also reflected in the trend toward interdisciplinary approaches to research on complex phenomena. The subject of systems science, as a basis for understanding the components and drivers of phenomena at all scales, should be viewed with the same importance as a traditional liberal arts education. Principles of Systems Science contains many graphs, illustrations, side bars, examples, and problems to enhance understanding. From basic principles of organization, complexity, abstract representations, and behavior (dynamics) to deeper aspects such as the relations between information, knowledge, computation, and system control, to higher order aspects such as auto-organization, emergence and evolution, the book provides an integrated...
Common principles and multiculturalism.
Zahedi, Farzaneh; Larijani, Bagher
2009-01-01
Judgment on the rightness and wrongness of beliefs and behaviors is a main issue in bioethics. Over the centuries, great philosophers and ethicists have debated the suitable tools to determine which acts are morally sound and which are not. The emergence of contemporary bioethics in the West has resulted in a misconception that absolute westernized principles would be appropriate tools for ethical decision making in different cultures. We discuss this issue by introducing a clinical case. Considering the various cultural beliefs around the world, though it is not logical to consider all of them ethically acceptable, we can agree on some general fundamental principles instead of going to the extremes of relativism and absolutism. Islamic teachings, according to the evidence presented in this paper, fall in with this idea.
Principles of Mobile Communication
Stüber, Gordon L
2012-01-01
This mathematically rigorous overview of physical layer wireless communications is now in a third, fully revised and updated edition. Along with coverage of basic principles sufficient for novice students, the volume includes plenty of finer details that will satisfy the requirements of graduate students aiming to research the topic in depth. It also has a role as a handy reference for wireless engineers. The content stresses core principles that are applicable to a broad range of wireless standards. Beginning with a survey of the field that introduces an array of issues relevant to wireless communications and which traces the historical development of today’s accepted wireless standards, the book moves on to cover all the relevant discrete subjects, from radio propagation to error probability performance and cellular radio resource management. A valuable appendix provides a succinct and focused tutorial on probability and random processes, concepts widely used throughout the book. This new edition, revised...
Principles of mathematical modeling
Dym, Clive
2004-01-01
Science and engineering students depend heavily on concepts of mathematical modeling. In an age where almost everything is done on a computer, author Clive Dym believes that students need to understand and "own" the underlying mathematics that computers are doing on their behalf. His goal for Principles of Mathematical Modeling, Second Edition, is to engage the student reader in developing a foundational understanding of the subject that will serve them well into their careers. The first half of the book begins with a clearly defined set of modeling principles, and then introduces a set of foundational tools including dimensional analysis, scaling techniques, and approximation and validation techniques. The second half demonstrates the latest applications for these tools to a broad variety of subjects, including exponential growth and decay in fields ranging from biology to economics, traffic flow, free and forced vibration of mechanical and other systems, and optimization problems in biology, structures, an...
Principles of Stellar Interferometry
Glindemann, Andreas
2011-01-01
Over the last decade, stellar interferometry has developed from a specialist tool to a mainstream observing technique, attracting scientists whose research benefits from milliarcsecond angular resolution. Stellar interferometry has become part of the astronomer's toolbox, complementing single-telescope observations by providing unique capabilities that will advance astronomical research. This carefully written book is intended to provide a solid understanding of the principles of stellar interferometry to students starting an astronomical research project in this field or developing instruments, and to astronomers using interferometry but who are not interferometrists per se. Illustrated by excellent drawings and calculated graphs, the imaging process in stellar interferometers is explained starting from first principles of light propagation and diffraction. Wave propagation through turbulence is described in detail using Kolmogorov statistics, and the impact of turbulence on the imaging process is discussed both f...
Principles of Fourier analysis
Howell, Kenneth B
2001-01-01
Fourier analysis is one of the most useful and widely employed sets of tools for the engineer, the scientist, and the applied mathematician. As such, students and practitioners in these disciplines need a practical and mathematically solid introduction to its principles. They need straightforward verifications of its results and formulas, and they need clear indications of the limitations of those results and formulas. Principles of Fourier Analysis furnishes all this and more. It provides a comprehensive overview of the mathematical theory of Fourier analysis, including the development of Fourier series, "classical" Fourier transforms, generalized Fourier transforms and analysis, and the discrete theory. Much of the author's development is strikingly different from typical presentations. His approach to defining the classical Fourier transform results in a much cleaner, more coherent theory that leads naturally to a starting point for the generalized theory. He also introduces a new generalized theory based ...
Principles of mobile communication
Stüber, Gordon L
2017-01-01
This mathematically rigorous overview of physical layer wireless communications is now in a 4th, fully revised and updated edition. The new edition features new content on 4G cellular systems, 5G cellular outlook, bandpass signals and systems, and polarization, among many other topics, in addition to a new chapters on channel assignment techniques. Along with coverage of fundamentals and basic principles sufficient for novice students, the volume includes finer details that satisfy the requirements of graduate students aiming to conduct in-depth research. The book begins with a survey of the field, introducing issues relevant to wireless communications. The book moves on to cover relevant discrete subjects, from radio propagation, to error probability performance, and cellular radio resource management. An appendix provides a tutorial on probability and random processes. The content stresses core principles that are applicable to a broad range of wireless standards. New examples are provided throughout the bo...
Liu, Jia-Ming
2016-01-01
With this self-contained and comprehensive text, students will gain a detailed understanding of the fundamental concepts and major principles of photonics. Assuming only a basic background in optics, readers are guided through key topics such as the nature of optical fields, the properties of optical materials, and the principles of major photonic functions regarding the generation, propagation, coupling, interference, amplification, modulation, and detection of optical waves or signals. Numerous examples and problems are provided throughout to enhance understanding, and a solutions manual containing detailed solutions and explanations is available online for instructors. This is the ideal resource for electrical engineering and physics undergraduates taking introductory, single-semester or single-quarter courses in photonics, providing them with the knowledge and skills needed to progress to more advanced courses on photonic devices, systems and applications.
Common Principles and Multiculturalism
Zahedi, Farzaneh; Larijani, Bagher
2009-01-01
Judgment on the rightness and wrongness of beliefs and behaviors is a main issue in bioethics. Over the centuries, great philosophers and ethicists have debated the suitable tools to determine which acts are morally sound and which are not. The emergence of contemporary bioethics in the West has resulted in a misconception that absolute westernized principles would be appropriate tools for ethical decision making in different cultures. We discuss this issue by introducing a clinical case. Considering the various cultural beliefs around the world, though it is not logical to consider all of them ethically acceptable, we can agree on some general fundamental principles instead of going to the extremes of relativism and absolutism. Islamic teachings, according to the evidence presented in this paper, fall in with this idea. PMID:23908720
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
Simple and robust image-based autofocusing for digital microscopy.
Yazdanfar, Siavash; Kenny, Kevin B; Tasimi, Krenar; Corwin, Alex D; Dixon, Elizabeth L; Filkins, Robert J
2008-06-09
A simple image-based autofocusing scheme for digital microscopy is demonstrated that uses as few as two intermediate images to bring the sample into focus. The algorithm is adapted to a commercial inverted microscope and used to automate brightfield and fluorescence imaging of histopathology tissue sections.
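The abstract does not spell out the focus metric, but image-based autofocus schemes of this kind rank candidate images by a sharpness score. A minimal stand-in (an assumption, not the paper's metric) is the sum of squared neighbour differences, which is larger for an in-focus, high-contrast image:

```python
# Sharpness score: sum of squared horizontal pixel differences. A defocused
# (blurred) image has smaller local contrast, hence a smaller score, so the
# autofocus loop would move the stage toward the higher-scoring position.
def sharpness(img):
    return sum((row[x + 1] - row[x]) ** 2
               for row in img
               for x in range(len(row) - 1))

sharp = [[0, 10, 0, 10], [10, 0, 10, 0]]    # high local contrast
blurry = [[5, 5, 5, 5], [5, 6, 5, 6]]       # low local contrast
```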
Polyhedral Computations for the Simple Graph Partitioning Problem
DEFF Research Database (Denmark)
Sørensen, Michael Malmros
The simple graph partitioning problem is to partition an edge-weighted graph into mutually disjoint subgraphs, each containing no more than b nodes, such that the sum of the weights of all edges in the subgraphs is maximal. In this paper we present a branch-and-cut algorithm for the problem that ...
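The branch-and-cut algorithm itself is beyond a short sketch, but the problem statement can be illustrated with a naive greedy heuristic (an illustrative stand-in, not the paper's method): assign each node to the part, of size at most b, where its incident edge weight is largest.

```python
# Greedy sketch for simple graph partitioning: maximize the weight of edges
# kept inside parts, subject to each part holding at most b nodes.
def greedy_partition(edges, nodes, b):
    # edges: dict {(u, v): weight}; returns a list of node sets
    parts = []
    for u in nodes:
        best, best_gain = None, 0.0
        for part in parts:
            if len(part) < b:
                gain = sum(w for (a, c), w in edges.items()
                           if (a == u and c in part) or (c == u and a in part))
                if gain > best_gain:
                    best, best_gain = part, gain
        if best is None:
            parts.append({u})     # open a new part
        else:
            best.add(u)           # join the most profitable part
    return parts
```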
Simple simulation schemes for CIR and Wishart processes
DEFF Research Database (Denmark)
Pisani, Camilla
2013-01-01
We develop some simple simulation algorithms for CIR and Wishart processes. The main idea is the splitting of their generator into the sum of the square of an Ornstein-Uhlenbeck matrix process and a deterministic process. Joint work with Paolo Baldi, Tor Vergata University, Rome...
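A flavour of such composition schemes in the scalar CIR case, dX = a(b - X)dt + sigma*sqrt(X)dW: when 4ab/sigma^2 equals an integer n, X is the sum of n squared Ornstein-Uhlenbeck processes, each of which can be stepped exactly. The sketch below (with an evenly split initial condition as a simplification) is illustrative only, not the authors' scheme.

```python
import math
import random

# One time step of CIR via its squared-OU representation. Each OU component
# follows dY = -(a/2) Y dt + (sigma/2) dW, which has an exact Gaussian
# transition; summing n squared components recovers the CIR dynamics.
def cir_step(x, a, b, sigma, dt, rng=random):
    n = round(4 * a * b / sigma ** 2)       # assumed to be an integer here
    decay = math.exp(-0.5 * a * dt)
    var = sigma ** 2 / (4 * a) * (1 - decay ** 2)
    y = math.sqrt(x / n)                    # split x evenly (a simplification)
    return sum((decay * y + math.sqrt(var) * rng.gauss(0, 1)) ** 2
               for _ in range(n))
```

Note that positivity of the CIR state is preserved by construction, since the step returns a sum of squares.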
Physics Without Physics. The Power of Information-theoretical Principles
D'Ariano, Giacomo Mauro
2017-01-01
David Finkelstein was very fond of the new information-theoretic paradigm of physics advocated by John Archibald Wheeler and Richard Feynman. Only recently, however, the paradigm has concretely shown its full power, with the derivation of quantum theory (Chiribella et al., Phys. Rev. A 84:012311, 2011; D'Ariano et al., 2017) and of free quantum field theory (D'Ariano and Perinotti, Phys. Rev. A 90:062106, 2014; Bisio et al., Phys. Rev. A 88:032301, 2013; Bisio et al., Ann. Phys. 354:244, 2015; Bisio et al., Ann. Phys. 368:177, 2016) from informational principles. The paradigm has opened for the first time the possibility of avoiding physical primitives in the axioms of the physical theory, allowing a re-foundation of the whole physics over logically solid grounds. In addition to such methodological value, the new information-theoretic derivation of quantum field theory is particularly interesting for establishing a theoretical framework for quantum gravity, with the idea of obtaining gravity itself as emergent from the quantum information processing, as also suggested by the role played by information in the holographic principle (Susskind, J. Math. Phys. 36:6377, 1995; Bousso, Rev. Mod. Phys. 74:825, 2002). In this paper I review how free quantum field theory is derived without using mechanical primitives, including space-time, special relativity, Hamiltonians, and quantization rules. The theory is simply provided by the simplest quantum algorithm encompassing a countable set of quantum systems whose network of interactions satisfies the three following simple principles: homogeneity, locality, and isotropy. The inherent discrete nature of the informational derivation leads to an extension of quantum field theory in terms of a quantum cellular automata and quantum walks. A simple heuristic argument sets the scale to the Planck one, and the currently observed regime where discreteness is not visible is the so-called "relativistic regime" of small wavevectors, which
Algorithm for Compressing Time-Series Data
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
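A bare-bones version of the block-wise fit can be written using the discrete orthogonality of Chebyshev polynomials. Sampling at Chebyshev nodes is an assumption of this sketch that makes the coefficients exact for low-degree data; the flight algorithm's fitting details differ.

```python
import math

# Fit a Chebyshev series of the given order to one block of samples taken at
# the Chebyshev nodes x_k = cos(pi*(k + 0.5)/n); the coefficients are the
# compressed representation of the block.
def cheb_fit(samples, order):
    n = len(samples)
    coeffs = []
    for j in range(order + 1):
        c = 2.0 / n * sum(samples[k] * math.cos(math.pi * j * (k + 0.5) / n)
                          for k in range(n))
        coeffs.append(c / 2 if j == 0 else c)   # halve the j = 0 term
    return coeffs

# Decompression: evaluate the series, using T_j(x) = cos(j * arccos(x)).
def cheb_eval(coeffs, x):
    return sum(c * math.cos(j * math.acos(x)) for j, c in enumerate(coeffs))
```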
Principles of (Behavioral) Economics
David Laibson; John A. List
2015-01-01
Behavioral economics has become an important and integrated component of modern economics. Behavioral economists embrace the core principles of economics—optimization and equilibrium—and seek to develop and extend those ideas to make them more empirically accurate. Behavioral models assume that economic actors try to pick the best feasible option and those actors sometimes make mistakes. Behavioral ideas should be incorporated throughout the first-year undergraduate course. Instructors should...
International Nuclear Information System (INIS)
Kreider, J.F.
1985-01-01
This book is an introduction on fluid mechanics incorporating computer applications. Topics covered are as follows: brief history; what is a fluid; two classes of fluids: liquids and gases; the continuum model of a fluid; methods of analyzing fluid flows; important characteristics of fluids; fundamentals and equations of motion; fluid statics; dimensional analysis and the similarity principle; laminar internal flows; ideal flow; external laminar and channel flows; turbulent flow; compressible flow; fluid flow measurements
Principles of electrical safety
Sutherland, Peter E
2015-01-01
Principles of Electrical Safety discusses current issues in electrical safety, which are accompanied by series' of practical applications that can be used by practicing professionals, graduate students, and researchers. . Provides extensive introductions to important topics in electrical safety Comprehensive overview of inductance, resistance, and capacitance as applied to the human body Serves as a preparatory guide for today's practicing engineers
International Nuclear Information System (INIS)
Martens, Hans.
1991-01-01
The subject of this thesis is the uncertainty principle (UP). The UP is one of the most characteristic points of difference between quantum and classical mechanics. The starting point of this thesis is the work of Niels Bohr, which is both discussed and analyzed. For the discussion of the different aspects of the UP the formalism of Davies and Ludwig is used instead of the more commonly used formalism of von Neumann and Dirac. (author). 214 refs.; 23 figs
PREFERENCE, PRINCIPLE AND PRACTICE
DEFF Research Database (Denmark)
Skovsgaard, Morten; Bro, Peter
2011-01-01
Legitimacy has become a central issue in journalism, since the understanding of what journalism is and who journalists are has been challenged by developments both within and outside the newsrooms. Nonetheless, little scholarly work has been conducted to aid conceptual clarification as to how jou...... distinct, but interconnected categories: preference, principle, and practice. Through this framework, historical attempts to justify journalism and journalists are described and discussed in the light of the present challenges for the profession....
Advertisement without Ethical Principles?
Wojciech Słomski
2007-01-01
The article replies to the question whether advertisement can exist without ethical principles, or whether ethics should be the basis of advertisement. One can say that the ethical assessment of an advertisement does not depend exclusively on the content and form of the advertising message, but also on the recipient's consciousness. The advertisement appeals to the emotions more than to the intellect, thus restricting the scope for conscious choice based on rational premises, so it is morally bad. It...
General Principles Governing Liability
International Nuclear Information System (INIS)
Reyners, P.
1998-01-01
This paper contains a brief review of the basic principles which govern the special regime of liability and compensation for nuclear damage originating on nuclear installations, in particular the strict and exclusive liability of the nuclear operator, the provision of a financial security to cover this liability and the limits applicable both in amount and in time. The paper also reviews the most important international agreements currently in force which constitute the foundation of this special regime. (author)
The Principle of Proportionality
DEFF Research Database (Denmark)
Bennedsen, Morten; Meisner Nielsen, Kasper
2005-01-01
Recent policy initiatives within the harmonization of European company laws have promoted a so-called "principle of proportionality" through proposals that regulate mechanisms opposing a proportional distribution of ownership and control. We scrutinize the foundation for these initiatives...... in relationship to the process of harmonization of the European capital markets. JEL classifications: G30, G32, G34 and G38. Keywords: Ownership Structure, Dual Class Shares, Pyramids, EU company laws.
International Nuclear Information System (INIS)
Levine, R.B.; Stassi, J.; Karasick, D.
1985-01-01
Anterior displacement of the tibial tubercle is a well-accepted orthopedic procedure in the treatment of certain patellofemoral disorders. The radiologic appearance of surgical procedures utilizing the Maquet principle has not been described in the radiologic literature. Familiarity with the physiologic and biomechanical basis for the procedure and its postoperative appearance is necessary for appropriate roentgenographic evaluation and the radiographic recognition of complications. (orig.)
Principles of lake sedimentology
International Nuclear Information System (INIS)
Janasson, L.
1983-01-01
This book presents a comprehensive outline of the basic sedimentological principles for lakes, and focuses on environmental aspects and matters related to lake management and control, that is, on lake ecology rather than lake geology. This is a guide for those who plan, perform and evaluate lake sedimentological investigations. Contents abridged: Lake types and sediment types. Sedimentation in lakes and water dynamics. Lake bottom dynamics. Sediment dynamics and sediment age. Sediments in aquatic pollution control programmes. Subject index
Principles of artificial intelligence
Nilsson, Nils J
1980-01-01
A classic introduction to artificial intelligence intended to bridge the gap between theory and practice, Principles of Artificial Intelligence describes fundamental AI ideas that underlie applications such as natural language processing, automatic programming, robotics, machine vision, automatic theorem proving, and intelligent data retrieval. Rather than focusing on the subject matter of the applications, the book is organized around general computational concepts involving the kinds of data structures used, the types of operations performed on the data structures, and the properties of th
Economic uncertainty principle?
Alexander Harin
2006-01-01
The economic principle of (hidden) uncertainty is presented. New probability formulas are offered. Examples of solutions of three types of fundamental problems are reviewed.
BIBLIO: A Reprint File Management Algorithm
Zelnio, Robert N.; And Others
1977-01-01
The development of a simple computer algorithm designed for use by the individual educator or researcher in maintaining and searching reprint files is reported. Called BIBLIO, the system is inexpensive and easy to operate and maintain without sacrificing flexibility and utility. (LBH)
Figuring Control in the Algorithmic Era
DEFF Research Database (Denmark)
Markham, Annette; Bossen, Claus
Drawing on actor network theory, we follow how algorithms, information, selfhood and identity-for-others tangle in interesting and unexpected ways. Starting with simple moments in everyday life that might be described as having implications for ‘control,’ we focus attention on the ways in which t...
Answer Markup Algorithms for Southeast Asian Languages.
Henry, George M.
1991-01-01
Typical markup methods for providing feedback to foreign language learners are not applicable to languages not written in a strictly linear fashion. A modification of Hart's edit markup software is described, along with a second variation based on a simple edit distance algorithm adapted to a general Southeast Asian font system. (10 references)…
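The "simple edit distance algorithm" referred to is presumably the classic Levenshtein dynamic program, whose core recurrence fits in a few lines (the font-system adaptation described above is built on top of this):

```python
# Levenshtein edit distance with a rolling row: prev holds the cost row for
# the previous character of a, cur is built for the current character.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```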
Adler's overrelaxation algorithm for Goldstone bosons
International Nuclear Information System (INIS)
Neuberger, H.
1987-01-01
A very simple derivation of a closed-form solution to the stochastic evolution defined by Adler's overrelaxation algorithm is given for free massive and massless scalar fields on a finite lattice with periodic boundary conditions and checkerboard updating. It is argued that the results are directly relevant when critical slowing down reflects the existence of Goldstone bosons in the system
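For a single Gaussian site, Adler's stochastic overrelaxation has the closed-form update phi' = mu + (1 - w)(phi - mu) + sqrt(w(2 - w)) sigma eta, which leaves the conditional distribution invariant for 0 < w < 2 (w = 1 is an ordinary heatbath, w -> 2 a deterministic reflection). The sketch below applies it to a free massive scalar field on a 1-D periodic lattice with checkerboard updating; the 1-D geometry and parameter names are illustrative choices, not the paper's setup.

```python
import math
import random

# One checkerboard sweep for the action S = sum 0.5*(phi[x+1]-phi[x])**2
# + 0.5*m2*phi[x]**2. The local conditional is Gaussian with mean
# mu = (left + right)/(2 + m2) and variance 1/(2 + m2).
def sweep(phi, m2, w, rng=random):
    n = len(phi)
    sigma = 1.0 / math.sqrt(2.0 + m2)
    for parity in (0, 1):                  # even sites, then odd sites
        for x in range(parity, n, 2):
            mu = (phi[x - 1] + phi[(x + 1) % n]) / (2.0 + m2)
            noise = math.sqrt(w * (2 - w)) * sigma * rng.gauss(0, 1)
            phi[x] = mu + (1 - w) * (phi[x] - mu) + noise
    return phi
```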
VLSI PARTITIONING ALGORITHM WITH ADAPTIVE CONTROL PARAMETER
Directory of Open Access Journals (Sweden)
P. N. Filippenko
2013-03-01
Full Text Available The article deals with the problem of very large-scale integration circuit partitioning. A graph is selected as a mathematical model describing the integrated circuit. A modification of the ant colony optimization algorithm, used to solve the graph partitioning problem, is presented. Ant colony optimization is a method based on the principles of self-organization and other useful features of ant behavior. The proposed search system is based on the ant colony optimization algorithm with an improved method of initial distribution and dynamic adjustment of the control search parameters. The experimental results and performance comparison show that the proposed method of very large-scale integration circuit partitioning provides better search performance than other well-known algorithms.
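The paper's modified algorithm is not reproduced here, but the pheromone mechanics that any ant colony optimizer shares (roulette-wheel selection, evaporation, reinforcement of good solutions) can be sketched over a toy flat candidate list; all names and constants below are assumptions:

```python
import random

# Bare ACO skeleton: tau holds per-candidate pheromone, ants sample
# candidates in proportion to tau, pheromone evaporates at rate rho and the
# best-so-far solution is reinforced each iteration.
def aco(candidates, score, ants=10, iters=50, rho=0.1, rng=random):
    tau = {c: 1.0 for c in candidates}          # uniform initial pheromone
    best, best_score = None, float("-inf")
    for _ in range(iters):
        for _ in range(ants):
            total = sum(tau.values())
            r, acc, pick = rng.random() * total, 0.0, candidates[-1]
            for c in candidates:                # roulette-wheel selection
                acc += tau[c]
                if r <= acc:
                    pick = c
                    break
            s = score(pick)
            if s > best_score:
                best, best_score = pick, s
        for c in tau:
            tau[c] *= 1.0 - rho                 # evaporation
        tau[best] += 1.0                        # reinforce best-so-far
    return best
```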
Principled Missing Data Treatments.
Lang, Kyle M; Little, Todd D
2018-04-01
We review a number of issues regarding missing data treatments for intervention and prevention researchers. Many of the common missing data practices in prevention research are still, unfortunately, ill-advised (e.g., use of listwise and pairwise deletion, insufficient use of auxiliary variables). Our goal is to promote better practice in the handling of missing data. We review the current state of missing data methodology and recent missing data reporting in prevention research. We describe antiquated, ad hoc missing data treatments and discuss their limitations. We discuss two modern, principled missing data treatments: multiple imputation and full information maximum likelihood, and we offer practical tips on how to best employ these methods in prevention research. The principled missing data treatments that we discuss are couched in terms of how they improve causal and statistical inference in the prevention sciences. Our recommendations are firmly grounded in missing data theory and well-validated statistical principles for handling the missing data issues that are ubiquitous in biosocial and prevention research. We augment our broad survey of missing data analysis with references to more exhaustive resources.
Design principle and structure of the ANI data centre
International Nuclear Information System (INIS)
Akopov, N.Z.; Arutyunyan, S.Kh.; Chilingaryan, A.A.; Galfayan, S.Kh.; Matevosyan, V.Kh.; Zazyan, M.Z.
1985-01-01
The design principles and structure of the applied statistical programs used for processing data from the ANI experiments are described. Nonparametric algorithms enable the development of highly efficient methods for simultaneous analysis of computerized and experimental data from cosmic ray experiments. A relational database for unified data storage, protection, updating and deletion, as well as for fast and convenient information retrieval, is considered
David A. Marquis; Rodney Jacobs
1989-01-01
Forest stands are managed to achieve some combination of desired products or values. These products or values may include income and tangible benefits from timber production or fees for hunting rights and other recreational activities. The values may be intangible, such as the enjoyment of seeing wildlife or flowering plants, or the simple satisfaction of knowing that...
Solving Simple Stochastic Games with Few Coin Toss Positions
DEFF Research Database (Denmark)
Ibsen-Jensen, Rasmus; Miltersen, Peter Bro
2012-01-01
Gimbert and Horn gave an algorithm for solving simple stochastic games with running time O(r!·n), where n is the number of positions of the simple stochastic game and r is the number of its coin toss positions. Chatterjee et al. pointed out that a variant of strategy iteration can be implemented … to solve this problem in time 4^r·n^O(1). In this paper, we show that an algorithm combining value iteration with retrograde analysis achieves a time bound of O(r·2^r·(r·log r + n)), thus improving both time bounds. We also improve the analysis of Chatterjee et al. and show that their algorithm in fact has…
Nuclear detectors. Physical principles of operation
International Nuclear Information System (INIS)
Pochet, Th.
2005-01-01
Nuclear detection is used in several domains of activity from the physics research, the nuclear industry, the medical and industrial sectors, the security etc. The particles of interest are the α, β, X, γ and neutrons. This article treats of the basic physical properties of radiation detection, the general characteristics of the different classes of existing detectors and the particle/matter interactions: 1 - general considerations; 2 - measurement types and definitions: pulse mode, current mode, definitions; 3 - physical principles of direct detection: introduction and general problem, materials used in detection, simple device, junction semiconductor device, charges generation and transport inside matter, signal generation; 4 - physical principles of indirect detection: introduction, scintillation mechanisms, definition and properties of scintillators. (J.S.)
Soft magnetic tweezers: a proof of principle.
Mosconi, Francesco; Allemand, Jean François; Croquette, Vincent
2011-03-01
We present here the principle of soft magnetic tweezers which improve the traditional magnetic tweezers allowing the simultaneous application and measurement of an arbitrary torque to a deoxyribonucleic acid (DNA) molecule. They take advantage of a nonlinear coupling regime that appears when a fast rotating magnetic field is applied to a superparamagnetic bead immersed in a viscous fluid. In this work, we present the development of the technique and we compare it with other techniques capable of measuring the torque applied to the DNA molecule. In this proof of principle, we use standard electromagnets to achieve our experiments. Despite technical difficulties related to the present implementation of these electromagnets, the agreement of measurements with previous experiments is remarkable. Finally, we propose a simple way to modify the experimental design of electromagnets that should bring the performances of the device to a competitive level.
Maximal frustration as an immunological principle.
de Abreu, F Vistulo; Mostardinha, P
2009-03-06
A fundamental problem in immunology is that of understanding how the immune system selects promptly which cells to kill without harming the body. This problem poses an apparent paradox. Strong reactivity against pathogens seems incompatible with perfect tolerance towards self. We propose a different view on cellular reactivity to overcome this paradox: effector functions should be seen as the outcome of cellular decisions which can be in conflict with other cells' decisions. We argue that if cellular systems are frustrated, then extensive cross-reactivity among the elements in the system can decrease the reactivity of the system as a whole and induce perfect tolerance. Using numerical and mathematical analyses, we discuss two simple models that perform optimal pathogenic detection with no autoimmunity if cells are maximally frustrated. This study strongly suggests that a principle of maximal frustration could be used to build artificial immune systems. It would be interesting to test this principle in the real adaptive immune system.
Optimized theory for simple and molecular fluids.
Marucho, M; Montgomery Pettitt, B
2007-03-28
An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between Percus-Yevick and hypernetted chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool to estimate, from first principles, the numerical value of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, PGATS, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). Several improvement strategies are also adopted: a stochastic disturbance factor is added to particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with many extrema and many parameters. This is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is validated on standard benchmark sequences, both Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the computed protein sequence energy value, which proves it to be an effective way to predict protein structure.
Principles of linear algebra with Mathematica
Shiskowski, Kenneth M
2013-01-01
A hands-on introduction to the theoretical and computational aspects of linear algebra using Mathematica® Many topics in linear algebra are simple, yet computationally intensive, and computer algebra systems such as Mathematica® are essential not only for learning to apply the concepts to computationally challenging problems, but also for visualizing many of the geometric aspects within this field of study. Principles of Linear Algebra with Mathematica uniquely bridges the gap between beginning linear algebra and computational linear algebra that is often encountered in applied settings,
Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.
Giridhar, K.
The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed -forward neural network-based equalizer, and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery is required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers make them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by -symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI. Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal
International Nuclear Information System (INIS)
Tian Wenxi; Su, G.H.; Qiu Suizheng; Jia Dounan
2004-01-01
The field synergy principle, proposed by Guo (1998) on the basis of 2-D laminar boundary-layer flow, resulted from a second look at the mechanism of convective heat transfer. Numerical verification of this principle's validity for turbulent flow has been carried out by very few researchers, mostly using commercial software such as FLUENT, CFX, etc. In this paper, a numerical simulation of turbulent flow with recirculation was developed using the SIMPLE algorithm with the two-equation k-ε model. The computational-region extension method and the wall function method were adopted to regularize the geometry of the whole computational region. Keeping the inlet Reynolds number constant at 10,000 and varying the height of the solid obstacle, the simulation showed that the wall heat flux decreased as the angle between the velocity vector and the temperature gradient increased. It is thus validated that the field synergy principle, based on 2-D laminar boundary-layer flow, can also be applied to complex turbulent flow, even with recirculation. (author)
An algorithm for learning real-time automata
Verwer, S.E.; De Weerdt, M.M.; Witteveen, C.
2007-01-01
We describe an algorithm for learning simple timed automata, known as real-time automata. The transitions of real-time automata can have a temporal constraint on the time of occurrence of the current symbol relative to the previous symbol. The learning algorithm is similar to the red-blue fringe
Comparison of two (geometric) algorithms for auto OMA
DEFF Research Database (Denmark)
Juul, Martin; Olsen, Peter; Balling, Ole
2018-01-01
parameters. The two algorithms are compared and illustrated on simulated data. Different choices of distance measures are discussed and evaluated. It is illustrated how a simple distance measure outperforms traditional distance measures from other Auto OMA algorithms. Traditional measures are unable...
Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images
Tzimiropoulos, Georgios; Pantic, Maja
2016-01-01
Fitting algorithms for Active Appearance Models (AAMs) are usually considered to be robust but slow, or fast but less able to generalize well to unseen variations. In this paper, we look into AAM fitting algorithms and make the following orthogonal contributions: We present a simple "project-out"…
Efficiency principles of consulting entrepreneurship
Moroz Yustina S.; Drozdov Igor N.
2015-01-01
The article reviews the primary goals and problems of consulting entrepreneurship. The principles defining efficiency of entrepreneurship in the field of consulting are generalized. The special attention is given to the importance of ethical principles of conducting consulting entrepreneurship activity.
Practical boundary surveying legal and technical principles
Gay, Paul
2015-01-01
This guide to boundary surveying provides landowners, land surveyors, students and others with the necessary foundation to understand boundary surveying techniques and the common legal issues that govern boundary establishment. Boundary surveying is sometimes mistakenly considered a strictly technical discipline with simple and straightforward technical solutions. In reality, boundary establishment is often a difficult and complex matter, requiring years of experience and a thorough understanding of boundary law. This book helps readers to understand the challenges often encountered by boundary surveyors and some of the available solutions. Using only simple and logically explained mathematics, the principles and practice of boundary surveying are demystified for those without prior experience, and the focused coverage of pivotal issues such as easements and setting lot corners will aid even licensed practitioners in untangling thorny cases. Practical advice on using both basic and advanced instruments ...
The Effect of Swarming on a Voltage Potential-Based Conflict Resolution Algorithm
Maas, J.B.; Sunil, E.; Ellerbroek, J.; Hoekstra, J.M.; Tra, M.A.P.
2016-01-01
Several conflict resolution algorithms for airborne self-separation rely on principles derived from the repulsive forces that exist between similarly charged particles. This research investigates whether the performance of the Modified Voltage Potential algorithm, which is based on this principle,
An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.
Gonzales, Michael G.
1984-01-01
Suggests a moving pictorial tool to help teach principles of the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
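The kind of empirical run-time derivation the abstract describes can be illustrated by instrumenting bubble sort with a comparison counter (the paper's pictorial tool is not reproduced here; names and details are illustrative). For a reversed list of length n, the count reaches the worst case n(n-1)/2:

```python
def bubble_sort_count(xs):
    """Bubble sort that returns the sorted list together with the
    number of comparisons performed, for empirical run-time study."""
    xs = list(xs)
    comparisons = 0
    n = len(xs)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):       # unsorted prefix shrinks each pass
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                swapped = True
        if not swapped:                  # early exit: already sorted
            break
    return xs, comparisons
```

Plotting the count against n for random, sorted and reversed inputs recovers the familiar best-case O(n) and worst-case O(n^2) behaviour empirically.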
An Educational System for Learning Search Algorithms and Automatically Assessing Student Performance
Grivokostopoulou, Foteini; Perikos, Isidoros; Hatzilygeroudis, Ioannis
2017-01-01
In this paper, first we present an educational system that assists students in learning and tutors in teaching search algorithms, an artificial intelligence topic. Learning is achieved through a wide range of learning activities. Algorithm visualizations demonstrate the operational functionality of algorithms according to the principles of active…
DEFF Research Database (Denmark)
Sifa, Rafet; Bauckhage, Christian; Drachen, Anders
2014-01-01
be derived from this large-scale analysis, notably that playtime as a function of time, across the thousands of games in the dataset, and irrespective of local differences in the playtime frequency distribution, can be modeled using the same model: the Weibull distribution. This suggests...... that there are fundamental properties governing player engagement as it evolves over time, which we here refer to as the Playtime Principle. Additionally, the analysis shows that there are distinct clusters, or archetypes, in the playtime frequency distributions of the investigated games. These archetypal groups correspond...
Complex Correspondence Principle
International Nuclear Information System (INIS)
Bender, Carl M.; Meisinger, Peter N.; Hook, Daniel W.; Wang Qinghai
2010-01-01
Quantum mechanics and classical mechanics are distinctly different theories, but the correspondence principle states that quantum particles behave classically in the limit of high quantum number. In recent years much research has been done on extending both quantum and classical mechanics into the complex domain. These complex extensions continue to exhibit a correspondence, and this correspondence becomes more pronounced in the complex domain. The association between complex quantum mechanics and complex classical mechanics is subtle and demonstrating this relationship requires the use of asymptotics beyond all orders.
Principles of chemical kinetics
House, James E
2007-01-01
James House's revised Principles of Chemical Kinetics provides a clear and logical description of chemical kinetics in a manner unlike any other book of its kind. Clearly written with detailed derivations, the text allows students to move rapidly from theoretical concepts of rates of reaction to concrete applications. Unlike other texts, House presents a balanced treatment of kinetic reactions in gas, solution, and solid states. The entire text has been revised and includes many new sections and an additional chapter on applications of kinetics. The topics covered include quantitative rela
Lehpamer, Harvey
2012-01-01
This revised edition of the Artech House bestseller, RFID Design Principles, serves as an up-to-date and comprehensive introduction to the subject. The second edition features numerous updates and brand new and expanded material on emerging topics such as the medical applications of RFID and new ethical challenges in the field. This practical book offers you a detailed understanding of RFID design essentials, key applications, and important management issues. The book explores the role of RFID technology in supply chain management, intelligent building design, transportation systems, military
Krinov, E L
1960-01-01
Principles of Meteoritics examines the significance of meteorites in relation to cosmogony and to the origin of the planetary system. The book discusses the science of meteoritics and the sources of meteorites. Scientists study the morphology of meteorites to determine their motion in the atmosphere. The scope of such study includes all forms of meteorites, the circumstances of their fall to earth, their motion in the atmosphere, and their orbits in space. Meteoric bodies vary in sizes; in calculating their motion in interplanetary space, astronomers apply the laws of Kepler. In the region of
Kadane, Joseph B
2011-01-01
An intuitive and mathematical introduction to subjective probability and Bayesian statistics. An accessible, comprehensive guide to the theory of Bayesian statistics, Principles of Uncertainty presents the subjective Bayesian approach, which has played a pivotal role in game theory, economics, and the recent boom in Markov Chain Monte Carlo methods. Both rigorous and friendly, the book contains: Introductory chapters examining each new concept or assumption Just-in-time mathematics -- the presentation of ideas just before they are applied Summary and exercises at the end of each chapter Discus
DEFF Research Database (Denmark)
Kohlenbach, Ulrich Wilhelm
2002-01-01
We show that the so-called weak Markov's principle (WMP), which states that every pseudo-positive real number is positive, is underivable in E-HAω + AC. Since this system allows one to formalize (at least large parts of) Bishop's constructive mathematics, this makes it unlikely that WMP can be proved within the framework of Bishop-style mathematics (which has been open for about 20 years). The underivability even holds if the ineffective schema of full comprehension (in all types) for negated formulas (in particular for ∃-free formulas) is added, which allows one to derive the law of excluded middle...
Principles of quantum chemistry
George, David V
2013-01-01
Principles of Quantum Chemistry focuses on the application of quantum mechanics in physical models and experiments of chemical systems.This book describes chemical bonding and its two specific problems - bonding in complexes and in conjugated organic molecules. The very basic theory of spectroscopy is also considered. Other topics include the early development of quantum theory; particle-in-a-box; general formulation of the theory of quantum mechanics; and treatment of angular momentum in quantum mechanics. The examples of solutions of Schroedinger equations; approximation methods in quantum c
Kaufman, Myron
2002-01-01
Ideal for one- or two-semester courses that assume elementary knowledge of calculus, This text presents the fundamental concepts of thermodynamics and applies these to problems dealing with properties of materials, phase transformations, chemical reactions, solutions and surfaces. The author utilizes principles of statistical mechanics to illustrate key concepts from a microscopic perspective, as well as develop equations of kinetic theory. The book provides end-of-chapter question and problem sets, some using Mathcad™ and Mathematica™; a useful glossary containing important symbols, definitions, and units; and appendices covering multivariable calculus and valuable numerical methods.
Principles of fluorescence techniques
2016-01-01
Fluorescence techniques are being used and applied increasingly in academics and industry. The Principles of Fluorescence Techniques course will outline the basic concepts of fluorescence techniques and the successful utilization of the currently available commercial instrumentation. The course is designed for students who utilize fluorescence techniques and instrumentation and for researchers and industrial scientists who wish to deepen their knowledge of fluorescence applications. Key scientists in the field will deliver theoretical lectures. The lectures will be complemented by the direct utilization of steady-state and lifetime fluorescence instrumentation and confocal microscopy for FLIM and FRET applications provided by leading companies.
Algorithms for Brownian first-passage-time estimation
Adib, Artur B.
2009-09-01
A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
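A minimal sketch of the flavour of such calculations (this is not the paper's algorithm, which works in continuous time): for a symmetric nearest-neighbour walk on a 1-D lattice with a reflecting boundary at site 0 and an absorbing boundary at site N, the mean first-passage times satisfy T_i = 1 + (T_{i-1} + T_{i+1})/2, which is a tridiagonal linear system with the closed-form solution T_i = N^2 - i^2:

```python
def mfpt_symmetric_walk(N):
    """MFPTs T[0..N] to absorption at site N for a symmetric walk,
    reflecting at 0, solved exactly by the Thomas algorithm."""
    n = N                                  # unknowns T[0..N-1]; T[N] = 0
    a = [0.0] + [-0.5] * (n - 1)           # sub-diagonal
    b = [1.0] * n                          # main diagonal
    c = [-1.0] + [-0.5] * (n - 2) + [0.0]  # super-diagonal (row 0: T0 - T1 = 1)
    d = [1.0] * n                          # right-hand side (one step per move)
    for i in range(1, n):                  # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    T = [0.0] * n
    T[-1] = d[-1] / b[-1]                  # back substitution
    for i in range(n - 2, -1, -1):
        T[i] = (d[i] - c[i] * T[i + 1]) / b[i]
    return T + [0.0]
```

The exact answer, independent of any lattice refinement, matches the quadratic form T_i = N^2 - i^2, in the same spirit as the exact linear-potential result claimed in the abstract.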
Geometry correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network
Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao
2018-03-01
Aiming at the disadvantages of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the algorithm and solution steps of AGA-RBF are presented in order to realize geometry correction for UAV remote sensing. Correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network with the AGA and LMS algorithms, respectively. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, a high running rate and strong generalization ability.
The principle of general tovariance
Heunen, C.; Landsman, N.P.; Spitters, B.A.W.; Loja Fernandes, R.; Picken, R.
2008-01-01
We tentatively propose two guiding principles for the construction of theories of physics, which should be satisfied by a possible future theory of quantum gravity. These principles are inspired by those that led Einstein to his theory of general relativity, viz. his principle of general covariance
Fermat and the Minimum Principle
Indian Academy of Sciences (India)
Arguably, least action and minimum principles were offered or applied much earlier. These principles are among the fundamental, basic, unifying or organizing ones used to describe a variety of natural phenomena. They consider the amount of energy expended in performing a given action to be the least required ...
Fundamental Principle for Quantum Theory
Khrennikov, Andrei
2002-01-01
We propose the principle, the law of statistical balance for basic physical observables, which specifies quantum statistical theory among all other statistical theories of measurements. It seems that this principle might play in quantum theory the role that is similar to the role of Einstein's relativity principle.
Principles for School Drug Education
Meyer, Lois
2004-01-01
This document presents a revised set of principles for school drug education. The principles for drug education in schools comprise an evolving framework that has proved useful over a number of decades in guiding the development of effective drug education. The first edition of "Principles for Drug Education in Schools" (Ballard et al.…
Principles of Mechanical Excavation
International Nuclear Information System (INIS)
Lislerud, A.
1997-12-01
Mechanical excavation of rock today includes several methods such as tunnel boring, raiseboring, roadheading and various continuous mining systems. Of these raiseboring is one potential technique for excavating shafts in the repository for spent nuclear fuel and dry blind boring is promising technique for excavation of deposition holes, as demonstrated in the Research Tunnel at Olkiluoto. In addition, there is potential for use of other mechanical excavation techniques in different parts of the repository. One of the main objectives of this study was to analyze the factors which affect the feasibility of mechanical rock excavation in hard rock conditions and to enhance the understanding of factors which affect rock cutting so as to provide an improved basis for excavator performance prediction modeling. The study included the following four main topics: (a) phenomenological model based on similarity analysis for roller disk cutting, (b) rock mass properties which affect rock cuttability and tool life, (c) principles for linear and field cutting tests and performance prediction modeling and (d) cutter head lacing design procedures and principles. As a conclusion of this study, a test rig was constructed, field tests were planned and started up. The results of the study can be used to improve the performance prediction models used to assess the feasibility of different mechanical excavation techniques at various repository investigation sites. (orig.)
Genetic algorithms and supernovae type Ia analysis
International Nuclear Information System (INIS)
Bogdanos, Charalampos; Nesseris, Savvas
2009-01-01
We introduce genetic algorithms as a means to analyze supernovae type Ia data and extract model-independent constraints on the evolution of the Dark Energy equation of state w(z) ≡ p_DE/ρ_DE. Specifically, we will give a brief introduction to the genetic algorithms along with some simple examples to illustrate their advantages, and finally we will apply them to the supernovae type Ia data. We find that genetic algorithms can lead to results in line with already established parametric and non-parametric reconstruction methods and could be used as a complementary way of treating SNIa data. As a non-parametric method, genetic algorithms provide a model-independent way to analyze data and can minimize bias due to premature choice of a dark energy model
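To make the ingredients of a genetic algorithm concrete (selection, crossover, mutation, elitism), here is a toy real-coded GA minimizing a one-dimensional function. All parameter choices are illustrative and unrelated to the paper's SNIa analysis:

```python
import random

def genetic_minimize(f, bounds, pop_size=40, generations=120, seed=0):
    """Toy real-coded genetic algorithm: tournament selection,
    blend crossover, Gaussian mutation, and one-elite survival."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if f(a) < f(b) else b
        children = [min(pop, key=f)]                # elitism: keep incumbent
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            alpha = rng.random()
            child = alpha * p1 + (1 - alpha) * p2   # blend crossover
            child += rng.gauss(0, 0.05 * (hi - lo)) # Gaussian mutation
            children.append(min(max(child, lo), hi))
        pop = children
    return min(pop, key=f)

best = genetic_minimize(lambda x: (x - 3.0) ** 2, (-10.0, 10.0))
```

In the paper's setting the "individuals" are candidate functional forms for w(z) rather than scalars, but the evolutionary loop has the same shape.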
Basic economic principles of road pricing: From theory to applications
Rouwendal, J.; Verhoef, E.T.
2006-01-01
This paper presents a non-technical introduction to the economic principles relevant for transport pricing design and analysis. We provide the basic rationale behind pricing of externalities, discuss why simple Pigouvian tax rules that equate charges to marginal external costs are not optimal in
Some special features of the le chatelier-braun principle
Nesis, E. I.; Skibin, Yu. N.
2000-07-01
The relaxation response of a system, which follows from the Le Chatelier-Braun principle and weakens the effect of an external influence, turns out to be more intense under a complex action. A method for the quantitative determination of the weakening effect under simple and complex actions is suggested.
Babinet principle and diffraction losses in laser resonators
International Nuclear Information System (INIS)
Kubarev, V V
2000-01-01
A simple analytical technique, based on the Babinet principle, for calculating low diffraction losses of different kinds in stable resonators is described. The technique was verified by comparison with the known numerical and analytical calculations of the losses in specific diffraction problems. (laser applications and other topics in quantum electronics)
A simple excitation control
African Journals Online (AJOL)
eobe
The field voltages determined follow a simple quadratic relationship that offers a very simple control scheme, dependent only on the stator current. Keywords: saturated reactances, no-load field voltage, excitation control, synchronous generators. 1. Introduction. The commonest generator in use today is ...
A Data-Guided Lexisearch Algorithm for the Asymmetric Traveling Salesman Problem
Directory of Open Access Journals (Sweden)
Zakir Hussain Ahmed
2011-01-01
A simple lexisearch algorithm that uses the path representation method for the asymmetric traveling salesman problem (ATSP) is proposed, along with an illustrative example, to obtain an exact optimal solution to the problem. Then a data-guided lexisearch algorithm is presented. First, the cost matrix of the problem is transposed depending on the variance of rows and columns, and then the simple lexisearch algorithm is applied. It is shown that this minor preprocessing of the data before the simple lexisearch algorithm is applied improves the computational time substantially. The efficiency of our algorithms has been examined against two existing algorithms for some TSPLIB and random instances of various sizes. The results show remarkably better performance of our algorithms, especially our data-guided algorithm.
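The abstract describes transposing the ATSP cost matrix depending on the variance of rows and columns before running the lexisearch. A hedged sketch of such a preprocessing step follows; the exact variance criterion used by the paper is an assumption here, and `maybe_transpose` is a hypothetical helper name. Note that an optimal tour of the transposed matrix is simply the reverse of an optimal tour of the original, so the transposition changes only the search order, not the optimum:

```python
from statistics import pvariance

def maybe_transpose(cost):
    """Data-guided preprocessing (sketch): transpose the ATSP cost
    matrix when total column variance exceeds total row variance,
    so the lexisearch branches on the more discriminating dimension.
    Returns (matrix, transposed_flag)."""
    n = len(cost)
    row_var = sum(pvariance(row) for row in cost)
    col_var = sum(pvariance([cost[i][j] for i in range(n)]) for j in range(n))
    if col_var > row_var:
        transposed = [[cost[j][i] for j in range(n)] for i in range(n)]
        return transposed, True
    return cost, False
```

For example, a matrix whose columns vary while its rows are constant would be transposed before the lexisearch begins.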
International Nuclear Information System (INIS)
Chandrasekharan, Shailesh
2000-01-01
Cluster algorithms have recently been used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular, such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions, we discuss the ideas underlying the algorithm.
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.