WorldWideScience

Sample records for state machine decomposition

  1. Ultra-precision machining induced phase decomposition at surface of Zn-Al based alloy

    International Nuclear Information System (INIS)

    To, S.; Zhu, Y.H.; Lee, W.B.

    2006-01-01

    The microstructural changes and phase transformation of an ultra-precision machined Zn-Al based alloy were examined using X-ray diffraction and back-scattered electron microscopy techniques. Decomposition of the Zn-rich η phase and the related changes in crystal orientation were detected at the surface of the ultra-precision machined alloy specimen. The effects of the machining parameters, such as cutting speed and depth of cut, on the phase decomposition were discussed in comparison with the tensile- and rolling-induced microstructural changes and phase decomposition.

  2. Gear fault diagnosis under variable conditions with intrinsic time-scale decomposition-singular value decomposition and support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Zhanqiang; Qu, Jianfeng; Chai, Yi; Tang, Qiu; Zhou, Yuming [Chongqing University, Chongqing (China)

    2017-02-15

    Because the gear vibration signal is nonlinear and non-stationary, gear fault diagnosis under variable conditions has always been unsatisfactory. To solve this problem, an intelligent fault diagnosis method based on Intrinsic time-scale decomposition (ITD)-Singular value decomposition (SVD) and Support vector machine (SVM) is proposed in this paper. The ITD method is adopted to decompose the vibration signal of the gearbox into several Proper rotation components (PRCs). Subsequently, singular value decomposition is applied to obtain the singular value vectors of the proper rotation components and improve the robustness of feature extraction under variable conditions. Finally, the Support vector machine is applied to classify the fault type of the gear. According to the experimental results, the performance of ITD-SVD exceeds that of the time-frequency analysis methods using EMD and WPT combined with SVD for feature extraction, and the SVM classifier outperforms the K-nearest neighbors (K-NN) and Back propagation (BP) classifiers. Moreover, the proposed approach can accurately diagnose and identify different fault types of gear under variable conditions.
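
    As a rough illustration of the last two stages of such a pipeline, the sketch below (Python; a crude moving-average decomposition stands in for ITD, which is not available in standard libraries, and the SVM parameters are illustrative) turns each vibration record into a singular-value feature vector and trains an SVM classifier.

```python
import numpy as np
from sklearn.svm import SVC

def crude_decompose(signal, levels=4, win=8):
    """Stand-in for ITD: successive moving-average smoothing. Each level's
    detail (signal minus its smoothed baseline) plays the role of a proper
    rotation component; a real implementation would use ITD here instead."""
    comps, residual = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        baseline = np.convolve(residual, np.ones(win) / win, mode="same")
        comps.append(residual - baseline)
        residual = baseline
    comps.append(residual)
    return np.vstack(comps)                      # shape: (levels + 1, len(signal))

def svd_features(components):
    """Singular values of the component matrix, used as the feature vector."""
    return np.linalg.svd(components, compute_uv=False)

def train_fault_classifier(signals, labels):
    """signals: iterable of 1-D vibration records; labels: fault classes."""
    X = np.array([svd_features(crude_decompose(s)) for s in signals])
    return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
```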

  3. An effective secondary decomposition approach for wind power forecasting using extreme learning machine trained by crisscross optimization

    International Nuclear Information System (INIS)

    Yin, Hao; Dong, Zhen; Chen, Yunlong; Ge, Jiafei; Lai, Loi Lei; Vaccaro, Alfredo; Meng, Anbo

    2017-01-01

    Highlights: • A secondary decomposition approach is applied in the data pre-processing. • The empirical mode decomposition is used to decompose the original time series. • IMF1 continues to be decomposed by applying wavelet packet decomposition. • Crisscross optimization algorithm is applied to train extreme learning machine. • The proposed SHD-CSO-ELM outperforms other previous methods in the literature. - Abstract: Large-scale integration of wind energy into the electric grid is restricted by its inherent intermittence and volatility. So the increased utilization of wind power necessitates its accurate prediction. The contribution of this study is to develop a new hybrid forecasting model for short-term wind power prediction by using a secondary hybrid decomposition approach. In the data pre-processing phase, the empirical mode decomposition is used to decompose the original time series into several intrinsic mode functions (IMFs). A unique feature is that the generated IMF1 continues to be decomposed into appropriate and detailed components by applying wavelet packet decomposition. In the training phase, all the transformed sub-series are forecasted with an extreme learning machine trained by our recently developed crisscross optimization algorithm (CSO). The final predicted values are obtained by aggregation. The results show that: (a) The performance of empirical mode decomposition can be significantly improved with its IMF1 decomposed by wavelet packet decomposition. (b) The CSO algorithm has satisfactory performance in addressing the premature convergence problem when applied to optimize extreme learning machine. (c) The proposed approach has great advantage over other previous hybrid models in terms of prediction accuracy.
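
    A minimal sketch of the secondary decomposition step described above, assuming the third-party packages PyEMD and PyWavelets are available; the CSO-trained extreme learning machine forecasting stage is not shown here.

```python
import numpy as np
import pywt                   # PyWavelets, assumed installed
from PyEMD import EMD         # PyEMD, assumed installed

def secondary_decompose(series, wavelet="db4", level=3):
    """EMD splits the series into IMFs; the highest-frequency IMF (IMF1) is
    then further split with a wavelet packet decomposition."""
    imfs = EMD().emd(np.asarray(series, dtype=float))     # primary decomposition
    imf1, rest = imfs[0], list(imfs[1:])
    wp = pywt.WaveletPacket(data=imf1, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    imf1_parts = [node.data for node in wp.get_level(level, order="natural")]
    return imf1_parts, rest

# Each returned sub-series would then be forecast separately (e.g. by an ELM)
# and the individual forecasts aggregated into the final wind power prediction.
```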

  4. Separable decompositions of bipartite mixed states

    Science.gov (United States)

    Li, Jun-Li; Qiao, Cong-Feng

    2018-04-01

    We present a practical scheme for the decomposition of a bipartite mixed state into a sum of direct products of local density matrices, using the technique developed in Li and Qiao (Sci. Rep. 8:1442, 2018). In the scheme, the correlation matrix which characterizes the bipartite entanglement is first decomposed into two matrices composed of the Bloch vectors of local states. Then, we show that the symmetries of Bloch vectors are consistent with that of the correlation matrix, and the magnitudes of the local Bloch vectors are lower bounded by the correlation matrix. Concrete examples for the separable decompositions of bipartite mixed states are presented for illustration.
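
    As a small numerical companion, the snippet below builds the correlation matrix T_ij = Tr[ρ(σ_i ⊗ σ_j)] that the decomposition scheme starts from, for the two-qubit case; the example state is an assumption chosen for illustration.

```python
import numpy as np

# Pauli matrices
PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def correlation_matrix(rho):
    """rho: 4x4 density matrix of a bipartite qubit-qubit state."""
    T = np.empty((3, 3))
    for i, si in enumerate(PAULI):
        for j, sj in enumerate(PAULI):
            T[i, j] = np.real(np.trace(rho @ np.kron(si, sj)))
    return T

# Example: the maximally mixed two-qubit state carries no correlations.
rho_mixed = np.eye(4) / 4
print(correlation_matrix(rho_mixed))    # ~ zero matrix
```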

  5. Autocoding State Machine in Erlang

    DEFF Research Database (Denmark)

    Guo, Yu; Hoffman, Torben; Gunder, Nicholas

    2008-01-01

    This paper presents an autocoding tool suite, which supports development of state machines in a model-driven fashion, where models are central to all phases of the development process. The tool suite, which is built on the Eclipse platform, provides facilities for the graphical specification...... of a state machine model. Once the state machine is specified, it is used as input to a code generation engine that generates source code in Erlang....

  6. Collaborative Systems – Finite State Machines

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2011-01-01

    Full Text Available In this paper, finite state machines are defined and formalized. Collaborative banking systems are presented and their correspondence with finite state machines is established. The paper highlights the role of finite state machines in complexity analysis and models operations on very large virtual databases as finite state machines. It builds the state diagram and presents the command and document transitions between the states of the collaborative systems. The paper analyzes the data sets from the Collaborative Multicash Servicedesk application and performs a combined analysis in order to determine certain statistics. Indicators are obtained, such as the number of requests by category and the load degree of an agent in the collaborative system.

  7. Exact complexity: The spectral decomposition of intrinsic computation

    International Nuclear Information System (INIS)

    Crutchfield, James P.; Ellison, Christopher J.; Riechers, Paul M.

    2016-01-01

    We give exact formulae for a wide family of complexity measures that capture the organization of hidden nonlinear processes. The spectral decomposition of operator-valued functions leads to closed-form expressions involving the full eigenvalue spectrum of the mixed-state presentation of a process's ϵ-machine causal-state dynamic. Measures include correlation functions, power spectra, past-future mutual information, transient and synchronization informations, and many others. As a result, a direct and complete analysis of intrinsic computation is now available for the temporal organization of finitary hidden Markov models and nonlinear dynamical systems with generating partitions and for the spatial organization in one-dimensional systems, including spin systems, cellular automata, and complex materials via chaotic crystallography. - Highlights: • We provide exact, closed-form expressions for a hidden stationary process' intrinsic computation. • These include information measures such as the excess entropy, transient information, and synchronization information and the entropy-rate finite-length approximations. • The method uses an epsilon-machine's mixed-state presentation. • The spectral decomposition of the mixed-state presentation relies on the recent development of meromorphic functional calculus for nondiagonalizable operators.
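
    As a small numerical illustration of the spectral ingredient, the snippet below computes the eigenvalue spectrum and the stationary state distribution of the labelled transition operator of a simple two-state ϵ-machine (the Golden Mean process, used here as an assumed example; the paper's full mixed-state construction is not reproduced).

```python
import numpy as np

# Symbol-labelled transition matrices T^(x) of the Golden Mean process;
# their sum T is the (row-stochastic) state-to-state transition operator.
T0 = np.array([[0.5, 0.0],
               [1.0, 0.0]])    # emit symbol 0
T1 = np.array([[0.0, 0.5],
               [0.0, 0.0]])    # emit symbol 1
T = T0 + T1

eigvals = np.linalg.eigvals(T)
print("spectrum:", np.sort(eigvals.real))          # [-0.5, 1.0]

# Stationary causal-state distribution: left eigenvector for eigenvalue 1.
w, vl = np.linalg.eig(T.T)
pi = np.real(vl[:, np.argmax(w.real)])
pi /= pi.sum()
print("stationary distribution:", pi)              # approx [2/3, 1/3]
```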

  8. Refining Nodes and Edges of State Machines

    DEFF Research Database (Denmark)

    Hallerstede, Stefan; Snook, Colin

    2011-01-01

    State machines are hierarchical automata that are widely used to structure complex behavioural specifications. We develop two notions of refinement of state machines, node refinement and edge refinement. We compare the two notions by means of examples and argue that, by adopting simple conventions...... refinement theory and UML-B state machine refinement influences the style of node refinement. Hence we propose a method with direct proof of state machine refinement avoiding the detour via Event-B that is needed by UML-B....

  9. Combined spatial/angular domain decomposition SN algorithms for shared memory parallel machines

    International Nuclear Information System (INIS)

    Hunter, M.A.; Haghighat, A.

    1993-01-01

    Several parallel processing algorithms on the basis of spatial and angular domain decomposition methods are developed and incorporated into a two-dimensional discrete ordinates transport theory code. These algorithms divide the spatial and angular domains into independent subdomains so that the flux calculations within each subdomain can be processed simultaneously. Two spatial parallel algorithms (Block-Jacobi, red-black), one angular parallel algorithm (η-level), and their combinations are implemented on an eight processor CRAY Y-MP. Parallel performances of the algorithms are measured using a series of fixed source RZ geometry problems. Some of the results are also compared with those executed on an IBM 3090/600J machine. (orig.)

  10. Decomposition of the compound Atwood machine

    Science.gov (United States)

    Lopes Coelho, R.

    2017-11-01

    Non-standard solving strategies for the compound Atwood machine problem have been proposed. The present strategy is based on a very simple idea. Taking an Atwood machine and replacing one of its bodies by another Atwood machine, we have a compound machine. As this operation can be repeated, we can construct any compound Atwood machine. This rule of construction is transferred to a mathematical model, whereby the equations of motion are obtained. The only difference between the machine and its model is that instead of pulleys and bodies, we have reference frames that move together with these objects. This model provides us with the accelerations in the non-inertial frames of the bodies, which we will use to obtain the equations of motion. This approach to the problem will be justified by the Lagrange method and exemplified by machines with six and eight bodies.
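
    For reference, the single-pulley case can be checked symbolically with the Lagrange method mentioned above; the sketch below (using sympy, with a massless pulley and string assumed) recovers the familiar acceleration (m1 − m2)g/(m1 + m2).

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, g, m1, m2 = sp.symbols("t g m1 m2", positive=True)
x = sp.Function("x")(t)               # displacement of body 1; body 2 moves by -x

# Lagrangian L = T - V for the simple Atwood machine
L = sp.Rational(1, 2) * (m1 + m2) * sp.diff(x, t) ** 2 + (m1 - m2) * g * x

eq = euler_equations(L, [x], t)[0]    # Euler-Lagrange equation for x(t)
accel = sp.solve(eq, sp.diff(x, t, 2))[0]
print(sp.simplify(accel))             # (m1 - m2)*g/(m1 + m2)
```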

  11. SwingStates: adding state machines to the swing toolkit

    OpenAIRE

    Appert , Caroline; Beaudouin-Lafon , Michel

    2006-01-01

    International audience; This article describes SwingStates, a library that adds state machines to the Java Swing user interface toolkit. Unlike traditional approaches, which use callbacks or listeners to define interaction, state machines provide a powerful control structure and localize all of the interaction code in one place. SwingStates takes advantage of Java's inner classes, providing programmers with a natural syntax and making it easier to follow and debug the resulting code. SwingSta...

  12. Fourier decomposition of segmented magnets with radial magnetization in surface-mounted PM machines

    Science.gov (United States)

    Tiang, Tow Leong; Ishak, Dahaman; Lim, Chee Peng

    2017-11-01

    This paper presents a generic field model of the radial magnetization (RM) pattern produced by multiple segmented magnets per rotor pole in surface-mounted permanent magnet (PM) machines. The magnetization vectors from either odd or even numbers of magnet blocks per pole are described. Fourier decomposition is first employed to derive the field model, which is later integrated with the exact 2D analytical subdomain method to predict the magnetic field distributions and other motor global quantities. For assessment purposes, a 12-slot/8-pole surface-mounted PM motor with two segmented magnets per pole is investigated using the proposed field model. The electromagnetic performance of the PM machine is comprehensively predicted by the proposed magnet field model, including the magnetic field distributions, airgap flux density, phase back-EMF, cogging torque, and output torque under both open-circuit and on-load operating conditions. The analytical results are evaluated and compared with those obtained from both 2D and 3D finite element analyses (FEA), where an excellent agreement is achieved.
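
    As a rough numerical analogue of the Fourier-decomposition step, the sketch below builds a piecewise radial magnetization waveform with two magnet blocks per pole and extracts its space harmonics; the pole number, block arcs and remanence are illustrative assumptions, not the machine studied in the paper.

```python
import numpy as np

def segmented_Mr(theta, poles=8, block_arc=0.35, gap=0.1, M0=1.0):
    """Radial magnetization M_r(theta): two blocks per pole, each spanning
    `block_arc` of the pole pitch, separated by `gap` (per unit of pitch),
    with alternating polarity from pole to pole."""
    pitch = 2 * np.pi / poles
    pos = np.mod(theta, pitch) / pitch               # position within one pole pitch
    pole_idx = np.floor(theta / pitch).astype(int)
    left = (pos >= 0.5 - gap / 2 - block_arc) & (pos < 0.5 - gap / 2)
    right = (pos > 0.5 + gap / 2) & (pos <= 0.5 + gap / 2 + block_arc)
    return np.where(left | right, M0 * (-1.0) ** pole_idx, 0.0)

theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
coeffs = np.fft.rfft(segmented_Mr(theta)) / theta.size
amplitudes = 2 * np.abs(coeffs)                      # space-harmonic amplitudes
for n in (4, 12, 20):                                # odd multiples of the pole-pair number
    print(f"space harmonic {n}: {amplitudes[n]:.3f}")
```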

  13. State Machine Framework And Its Use For Driving LHC Operational states

    CERN Document Server

    Misiowiec, M; Solfaroli Camilloci, M

    2011-01-01

    The LHC follows a complex operational cycle with 12 major phases that include equipment tests, preparation, beam injection, ramping and squeezing, finally followed by the physics phase. This cycle is modelled and enforced with a state machine, whereby each operational phase is represented by a state. On each transition, before entering the next state, a series of conditions is verified to make sure the LHC is ready to move on. The State Machine framework was developed to cater for building independent or embedded state machines. They safely drive between the states executing tasks bound to transitions and broadcast related information to interested parties. The framework encourages users to program their own actions. Simple configuration management allows the operators to define and maintain complex models themselves. An emphasis was also put on easy interaction with the remote state machine instances through standard communication protocols. On top of its core functionality, the framework offers a transparen...

  14. Machinery Bearing Fault Diagnosis Using Variational Mode Decomposition and Support Vector Machine as a Classifier

    Science.gov (United States)

    Rama Krishna, K.; Ramachandran, K. I.

    2018-02-01

    Crack propagation is a major cause of failure in rotating machines. It adversely affects productivity, safety, and machining quality. Hence, detecting the crack's severity accurately is imperative for the predictive maintenance of such machines. Fault diagnosis is an established concept for identifying faults by observing the non-linear behaviour of the vibration signals under various operating conditions. In this work, we find the classification efficiencies for both the original and the reconstructed vibration signals. The reconstructed signals are obtained using Variational Mode Decomposition (VMD), by splitting the original signal into three intrinsic mode functional components and framing them accordingly. Feature extraction, feature selection and feature classification are the three phases in obtaining the classification efficiencies. All the statistical features of the original and reconstructed signals are computed individually in the feature extraction phase. A few statistical parameters are selected in the feature selection phase and are classified using the SVM classifier. The obtained results show the best parameters and the appropriate kernel of the SVM classifier for detecting faults in bearings. We conclude that better results are obtained with the VMD-plus-SVM process than with SVM on the raw signals, owing to the denoising and filtering of the raw vibration signals.

  15. SwingStates: Adding state machines to Java and the Swing toolkit

    OpenAIRE

    Appert , Caroline; Beaudouin-Lafon , Michel

    2008-01-01

    International audience; This article describes SwingStates, a Java toolkit designed to facilitate the development of graphical user interfaces and bring advanced interaction techniques to the Java platform. SwingStates is based on the use of finite-state machines specified directly in Java to describe the behavior of interactive systems. State machines can be used to redefine the behavior of existing Swing widgets or, in combination with a new canvas widget that features a rich graphical mode...

  16. Dynamic thermal analysis of machines in running state

    CERN Document Server

    Wang, Lihui

    2014-01-01

    With the increasing complexity and dynamism in today’s machine design and development, more precise, robust and practical approaches and systems are needed to support machine design. Existing design methods treat the targeted machine as stationary. Analysis and simulation are mostly performed at the component level. Although there are some computer-aided engineering tools capable of motion analysis and vibration simulation etc., the machine itself is in the dry-run state. For effective machine design, understanding its thermal behaviours is crucial in achieving the desired performance in real situation. Dynamic Thermal Analysis of Machines in Running State presents a set of innovative solutions to dynamic thermal analysis of machines when they are put under actual working conditions. The objective is to better understand the thermal behaviours of a machine in real situation while at the design stage. The book has two major sections, with the first section presenting a broad-based review of the key areas of ...

  17. Towards Measuring the Abstractness of State Machines based on Mutation Testing

    Directory of Open Access Journals (Sweden)

    Thomas Baar

    2017-01-01

    Full Text Available Abstract. The notation of state machines is widely adopted as a formalism to describe the behaviour of systems. Usually, multiple state machine models can be developed for the very same software system. Some of these models might turn out to be equivalent, but, in many cases, different state machines describing the same system also differ in their level of abstraction. In this paper, we present an approach to actually measure the abstractness level of state machines w.r.t. a given implemented software system. A state machine is considered to be less abstract when it is conceptually closer to the implemented system. In our approach, this distance between state machine and implementation is measured by applying coverage criteria known from software mutation testing. The abstractness of state machines can be considered a new metric. As for other metrics, a known value for the abstractness of a given state machine allows its quality to be assessed in terms of a simple number. In model-based software development projects, the abstractness metric can help to prevent model degradation, since it can actually measure the semantic distance from the behavioural specification of a system in the form of a state machine to the current implementation of the system. In contrast to other metrics for state machines, the abstractness cannot be statically computed based on the state machine’s structure, but requires executing both the state machine and the corresponding system implementation. The article is published in the author’s wording.

  18. Evaluating litter decomposition and soil organic matter dynamics in earth system models: contrasting analysis of long-term litter decomposition and steady-state soil carbon

    Science.gov (United States)

    Bonan, G. B.; Wieder, W. R.

    2012-12-01

    Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. Here, we demonstrate a protocol to document model performance with respect to both long-term (10 year) litter decomposition and steady-state soil carbon stocks. First, we test the soil organic matter parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We show results for 10-year litter decomposition simulations compared with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes. We show additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results reveal large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, despite using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. Nitrogen immobilization is similarly biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, without requirement for nitrogen limitation of decomposition. Second, we compare global observationally-based datasets of soil carbon with simulated steady-state soil carbon stocks for both models. The model simulations were forced with observationally-based estimates of annual
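
    The single-pool comparison that underlies such protocols can be sketched as a first-order decay fit to fractional litter mass remaining, M(t) = exp(−kt); the data points below are illustrative placeholders, not LIDET values.

```python
import numpy as np
from scipy.optimize import curve_fit

years = np.array([0, 1, 2, 3, 5, 7, 10], dtype=float)
mass_remaining = np.array([1.00, 0.72, 0.55, 0.44, 0.30, 0.22, 0.15])  # placeholder data

def single_pool(t, k):
    """Single-pool, first-order litter decay model."""
    return np.exp(-k * t)

(k_fit,), _ = curve_fit(single_pool, years, mass_remaining, p0=[0.3])
print(f"fitted decay constant k = {k_fit:.3f} per year")
```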

  19. Hierarchical State Machines as Modular Horn Clauses

    Directory of Open Access Journals (Sweden)

    Pierre-Loïc Garoche

    2016-07-01

    Full Text Available In model-based development, embedded systems are modeled using a mix of a dataflow formalism, which captures the flow of computation, and hierarchical state machines, which capture the modal behaviour of the system. For safety analysis, existing approaches rely on a compilation scheme that transforms the original model (dataflow and state machines) into a pure dataflow formalism. Such compilation often results in the loss of important structural information that captures the modal behaviour of the system. In previous work we have developed a compilation technique from a dataflow formalism into modular Horn clauses. In this paper, we present a novel technique that faithfully compiles hierarchical state machines into modular Horn clauses. Our compilation technique preserves the structural and modal behaviour of the system, making the safety analysis of such models more tractable.

  20. State machine operation of the MICE cooling channel

    International Nuclear Information System (INIS)

    Hanlet, Pierrick

    2014-01-01

    The Muon Ionization Cooling Experiment (MICE) is a demonstration experiment to prove the feasibility of cooling a beam of muons for use in a Neutrino Factory and/or Muon Collider. The MICE cooling channel is a section of a modified Study II cooling channel which will provide a 10% reduction in beam emittance. In order to ensure a reliable measurement, MICE will measure the beam emittance before and after the cooling channel at the level of 1%, a relative measurement of 0.001. This renders MICE a precision experiment which requires strict controls and monitoring of all experimental parameters in order to control systematic errors. The MICE Controls and Monitoring system is based on EPICS and integrates with the DAQ, data monitoring systems, and a configuration database. The cooling channel for MICE has between 12 and 18 superconducting solenoid coils in 3 to 7 magnets, depending on the staged development of the experiment. The magnets are coaxial and in close proximity which requires coordinated operation of the magnets when ramping, responding to quench conditions, and quench recovery. To reliably manage the operation of the magnets, MICE is implementing state machines for each magnet and an over-arching state machine for the magnets integrated in the cooling channel. The state machine transitions and operating parameters are stored/restored to/from the configuration database and coupled with MICE Run Control. Proper implementation of the state machines will not only ensure safe operation of the magnets, but will help ensure reliable data quality. A description of MICE, details of the state machines, and lessons learned from use of the state machines in recent magnet training tests will be discussed.

  1. Daily Peak Load Forecasting Based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-01-01

    Full Text Available Daily peak load forecasting is an important part of power load forecasting. The accuracy of its prediction has great influence on the formulation of power generation plan, power grid dispatching, power grid operation and power supply reliability of power system. Therefore, it is of great significance to construct a suitable model to realize the accurate prediction of the daily peak load. A novel daily peak load forecasting model, CEEMDAN-MGWO-SVM (Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), is proposed in this paper. Firstly, the model uses the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm to decompose the daily peak load sequence into multiple sub sequences. Then, the model of modified grey wolf optimization and support vector machine (MGWO-SVM) is adopted to forecast the sub sequences. Finally, the forecasting sequence is reconstructed and the forecasting result is obtained. Using CEEMDAN can realize noise reduction for non-stationary daily peak load sequence, which makes the daily peak load sequence more regular. The model adopts the grey wolf optimization algorithm improved by introducing the population dynamic evolution operator and the nonlinear convergence factor to enhance the global search ability and avoid falling into the local optimum, which can better optimize the parameters of the SVM algorithm for improving the forecasting accuracy of daily peak load. In this paper, three cases are used to test the forecasting accuracy of the CEEMDAN-MGWO-SVM model. We choose the models EEMD-MGWO-SVM (Ensemble Empirical Mode Decomposition and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), MGWO-SVM (Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), GWO-SVM (Support Vector Machine Optimized by Grey Wolf Optimization Algorithm), SVM (Support Vector

  2. Machine learning topological states

    Science.gov (United States)

    Deng, Dong-Ling; Li, Xiaopeng; Das Sarma, S.

    2017-11-01

    Artificial neural networks and machine learning have now reached a new era after several decades of improvement where applications are to explode in many fields of science, industry, and technology. Here, we use artificial neural networks to study an intriguing phenomenon in quantum physics—the topological phases of matter. We find that certain topological states, either symmetry-protected or with intrinsic topological order, can be represented with classical artificial neural networks. This is demonstrated by using three concrete spin systems, the one-dimensional (1D) symmetry-protected topological cluster state and the 2D and 3D toric code states with intrinsic topological orders. For all three cases, we show rigorously that the topological ground states can be represented by short-range neural networks in an exact and efficient fashion—the required number of hidden neurons is as small as the number of physical spins and the number of parameters scales only linearly with the system size. For the 2D toric-code model, we find that the proposed short-range neural networks can describe the excited states with Abelian anyons and their nontrivial mutual statistics as well. In addition, by using reinforcement learning we show that neural networks are capable of finding the topological ground states of nonintegrable Hamiltonians with strong interactions and studying their topological phase transitions. Our results demonstrate explicitly the exceptional power of neural networks in describing topological quantum states, and at the same time provide valuable guidance to machine learning of topological phases in generic lattice models.

  3. An integrated condition-monitoring method for a milling process using reduced decomposition features

    International Nuclear Information System (INIS)

    Liu, Jie; Wu, Bo; Hu, Youmin; Wang, Yan

    2017-01-01

    Complex and non-stationary cutting chatter affects productivity and quality in the milling process. Developing an effective condition-monitoring approach is critical to accurately identify cutting chatter. In this paper, an integrated condition-monitoring method is proposed, where reduced features are used to efficiently recognize and classify machine states in the milling process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition, and Shannon power spectral entropy is calculated to extract features from the decomposed signals. Principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the probabilistic neural network model is used to recognize and classify the machine states, including stable, transition, and chatter states. Experimental studies are conducted, and results show that the proposed method can effectively detect cutting chatter during different milling operation conditions. This monitoring method is also efficient enough to satisfy fast machine state recognition and classification. (paper)
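
    One of the features named above, the Shannon power spectral entropy, can be sketched as follows (the PCA reduction and the probabilistic neural network are not shown; the window length and sampling rate are illustrative).

```python
import numpy as np
from scipy.signal import welch

def spectral_entropy(signal, fs):
    """Shannon entropy of the normalised Welch power spectrum, in bits."""
    _, psd = welch(signal, fs=fs, nperseg=min(1024, len(signal)))
    p = psd / psd.sum()                      # normalise to a probability mass
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Example: a pure tone has much lower spectral entropy than broadband noise.
fs = 2000.0
t = np.arange(0, 1.0, 1 / fs)
print(spectral_entropy(np.sin(2 * np.pi * 50 * t), fs))
print(spectral_entropy(np.random.randn(t.size), fs))
```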

  4. Solid-state resistor for pulsed power machines

    Science.gov (United States)

    Stoltzfus, Brian; Savage, Mark E.; Hutsel, Brian Thomas; Fowler, William E.; MacRunnels, Keven Alan; Justus, David; Stygar, William A.

    2016-12-06

    A flexible solid-state resistor comprises a string of ceramic resistors that can be used to charge the capacitors of a linear transformer driver (LTD) used in a pulsed power machine. The solid-state resistor is able to absorb the energy of a switch prefire, thereby limiting LTD cavity damage, yet has a sufficiently low RC charge time to allow the capacitor to be recharged without disrupting the operation of the pulsed power machine.

  5. Identification method for gas-liquid two-phase flow regime based on singular value decomposition and least square support vector machine

    International Nuclear Information System (INIS)

    Sun Bin; Zhou Yunlong; Zhao Peng; Guan Yuebo

    2007-01-01

    Aiming at the non-stationary characteristics of differential pressure fluctuation signals of gas-liquid two-phase flow, and at the slow convergence and tendency to fall into local minima of BP neural networks, a flow regime identification method based on Singular Value Decomposition (SVD) and Least Square Support Vector Machine (LS-SVM) is presented. First of all, the Empirical Mode Decomposition (EMD) method is used to decompose the differential pressure fluctuation signals of gas-liquid two-phase flow into a number of stationary Intrinsic Mode Functions (IMFs), from which the initial feature vector matrix is formed. By applying the singular value decomposition technique to the initial feature vector matrices, the singular values are obtained. Finally, the singular values serve as the flow regime characteristic vector fed to the LS-SVM classifier, and flow regimes are identified by the output of the classifier. The identification results for four typical flow regimes of air-water two-phase flow in a horizontal pipe show that this method achieves a higher identification rate. (authors)

  6. Primal Domain Decomposition Method with Direct and Iterative Solver for Circuit-Field-Torque Coupled Parallel Finite Element Method to Electric Machine Modelling

    Directory of Open Access Journals (Sweden)

    Daniel Marcsa

    2015-01-01

    Full Text Available The analysis and design of electromechanical devices involve the solution of large sparse linear systems and therefore require high performance algorithms. In this paper, the primal Domain Decomposition Method (DDM) with a parallel forward-backward solver and with a parallel Preconditioned Conjugate Gradient (PCG) solver is introduced into a two-dimensional parallel time-stepping finite element formulation to analyze a rotating machine considering the electromagnetic field, external circuit and rotor movement. The proposed parallel direct and iterative solvers with two preconditioners are analyzed concerning their computational efficiency and the number of iterations of the solver with different preconditioners. Simulation results of a rotating machine are also presented.

  7. Using Pipelined XNOR Logic to Reduce SEU Risks in State Machines

    Science.gov (United States)

    Le, Martin; Zheng, Xin; Katanyoutant, Sunant

    2008-01-01

    Single-event upsets (SEUs) pose great threats to the state machine control logic of avionic systems, which is frequently used to control sequences of events and to qualify protocols. The risks of SEUs manifest in two ways: (a) the state machine's state information is changed, causing the state machine to unexpectedly transition to another state; (b) due to the asynchronous nature of an SEU, the state machine's state registers become metastable, consequently causing any combinational logic associated with the metastable registers to malfunction temporarily. Effect (a) can be mitigated with methods such as triple-modular redundancy (TMR). However, effect (b) cannot be eliminated and can degrade the effectiveness of any mitigation of effect (a). Although there is no way to completely eliminate the risk of SEU-induced errors, the risk can be made very small by use of a combination of very fast state-machine logic and error-detection logic. Therefore, one of the two main elements of the present method is to design the fastest state-machine logic circuitry by basing it on the fastest generic state-machine design, which is that of a one-hot state machine. The other main design element is to design fast error-detection logic circuitry and to optimize it for implementation in a field-programmable gate array (FPGA) architecture: in the resulting design, the one-hot state machine is fitted with a multiple-input XNOR gate for detection of illegal states. The XNOR gate is implemented with lookup tables and with pipelines for high speed. In this method, the task of designing all the logic must be performed manually because no currently available logic synthesis software tool can produce optimal solutions of design problems of this type. However, some assistance is provided by a script, written for this purpose in the Python language (an object-oriented interpretive computer language) to automatically generate hardware description language (HDL) code from state
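
    A behavioural software model of the illegal-state check (not the FPGA lookup-table/XNOR netlist itself) is sketched below: a one-hot register is legal exactly when a single bit is set, so an SEU that sets or clears a bit is flagged.

```python
def one_hot_is_legal(state_bits: int, n_states: int) -> bool:
    """True when exactly one of the n_states register bits is set."""
    in_range = 0 < state_bits < (1 << n_states)
    single_bit = (state_bits & (state_bits - 1)) == 0   # power of two => one bit set
    return in_range and single_bit

IDLE, LOAD, RUN, DONE = (1 << i for i in range(4))       # one-hot state encoding

state = RUN
assert one_hot_is_legal(state, 4)

state_after_seu = state | IDLE       # a single-event upset sets a second bit
assert not one_hot_is_legal(state_after_seu, 4)          # detector flags the fault
```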

  8. Relating dynamic brain states to dynamic machine states: Human and machine solutions to the speech recognition problem.

    Directory of Open Access Journals (Sweden)

    Cai Wingfield

    2017-09-01

    Full Text Available There is widespread interest in the relationship between the neurobiological systems supporting human cognition and emerging computational systems capable of emulating these capacities. Human speech comprehension, poorly understood as a neurobiological process, is an important case in point. Automatic Speech Recognition (ASR) systems with near-human levels of performance are now available, which provide a computationally explicit solution for the recognition of words in continuous speech. This research aims to bridge the gap between speech recognition processes in humans and machines, using novel multivariate techniques to compare incremental 'machine states', generated as the ASR analysis progresses over time, to the incremental 'brain states', measured using combined electro- and magneto-encephalography (EMEG), generated as the same inputs are heard by human listeners. This direct comparison of dynamic human and machine internal states, as they respond to the same incrementally delivered sensory input, revealed a significant correspondence between neural response patterns in human superior temporal cortex and the structural properties of ASR-derived phonetic models. Spatially coherent patches in human temporal cortex responded selectively to individual phonetic features defined on the basis of machine-extracted regularities in the speech to lexicon mapping process. These results demonstrate the feasibility of relating human and ASR solutions to the problem of speech recognition, and suggest the potential for further studies relating complex neural computations in human speech comprehension to the rapidly evolving ASR systems that address the same problem domain.

  9. Prediction Model of Machining Failure Trend Based on Large Data Analysis

    Science.gov (United States)

    Li, Jirong

    2017-12-01

    Mechanical machining involves high complexity, strong coupling and many control factors, and the process is therefore prone to failure. To improve the accuracy of fault detection in large mechanical equipment, research on fault trend prediction requires a machining fault trend prediction model built from fault data. Genetic-algorithm-based K-means clustering is used to process the machining data, features reflecting the correlation dimension of faults are extracted, and the spectral characteristics of abnormal vibration during the machining of complex mechanical parts are analysed. The abnormal vibration is characterised by combining multi-component spectral decomposition with Hilbert-based empirical mode decomposition for feature extraction, and the decomposition results are used to establish the knowledge base of an intelligent expert system; combined with big data analysis, this realizes prediction of the machining fault trend. The simulation results show that the method predicts the fault trend of mechanical machining with good accuracy, judges faults in the machining process reliably, and has good application value for analysis and fault diagnosis in the machining process.

  10. A novel hybrid model for air quality index forecasting based on two-phase decomposition technique and modified extreme learning machine.

    Science.gov (United States)

    Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier

    2017-02-15

    The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, the complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high frequency IMFs which will increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high frequency IMF is obtained through adding up the forecast results of all corresponding VMs, and the forecast series of AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016 collected from Beijing and Shanghai located in China are taken as the test cases to conduct the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models for its higher forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    Science.gov (United States)

    Hartmann, L.

    2002-01-01

    As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on- board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing i/o. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. That is, the system is able to react
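
    A minimal structural sketch of such a hierarchy is given below; the class names, goals and conditions are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Decomposition:
    gate: Callable[[dict], bool]          # gating condition over the system state
    subgoals: List["Goal"]

@dataclass
class Goal:
    name: str
    activation: Callable[[dict], bool]    # global condition for achieving the goal
    decompositions: List[Decomposition] = field(default_factory=list)
    action: Optional[Callable[[dict], bool]] = None      # leaf-level behaviour

    def achieve(self, state: dict) -> bool:
        if not self.activation(state):
            return False
        if self.action is not None:                      # leaf goal: execute directly
            return self.action(state)
        for d in self.decompositions:                    # first true gate wins
            if d.gate(state):
                return all(g.achieve(state) for g in d.subgoals)
        return False

# Usage sketch: "point camera" decomposes differently depending on whether the
# target is already in view (dict.update returns None, so "or True" reports success).
slew = Goal("slew", lambda s: True, action=lambda s: s.update(in_view=True) or True)
track = Goal("track", lambda s: True, action=lambda s: True)
point = Goal("point_camera", lambda s: s["power_ok"], [
    Decomposition(lambda s: s["in_view"], [track]),
    Decomposition(lambda s: not s["in_view"], [slew, track]),
])
print(point.achieve({"power_ok": True, "in_view": False}))   # True
```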

  12. On a decomposition theorem for density operators of a pure quantum state

    International Nuclear Information System (INIS)

    Giannoni, M.J.

    1979-03-01

    Conditions for the existence of a decomposition of a Hermitian projector ρ into two Hermitian and time-reversal-invariant operators ρ₀ and χ of the form ρ = e^(iχ) ρ₀ e^(−iχ) are investigated. Sufficient conditions are given, and an explicit construction of a decomposition is performed when they are fulfilled. A stronger theorem of existence and unicity is studied. All the proofs are valid for any p-body reduced density operator of a pure state of a system of bosons as well as fermions. The decomposition studied in this work has already been used in Nuclear Physics, and may be of interest in other fields of Physics

  13. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for the wind speed forecasting. • Four algorithms are used for the wind speed multi-scale decomposition. • The extreme learning machines are employed for the wind speed forecasting. • All the proposed hybrid models can generate accurate results. - Abstract: Realization of accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed to realize accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal decomposing algorithms (e.g., Wavelet Decomposition/Wavelet Packet Decomposition/Empirical Mode Decomposition/Fast Ensemble Empirical Mode Decomposition) and Extreme Learning Machines. The originality of the study is to investigate the improvement that those mainstream signal decomposing algorithms bring to the Extreme Learning Machines in multiple-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) the Extreme Learning Machine method is suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid algorithms have better performance than the single Extreme Learning Machines; (3) in the comparison of the decomposing algorithms within the proposed hybrid architecture, the Fast Ensemble Empirical Mode Decomposition has the best performance in the three-step forecasting results, while the Wavelet Packet Decomposition has the best performance in the one- and two-step forecasting results. At the same time, the Wavelet Packet Decomposition and the Fast Ensemble Empirical Mode Decomposition are better than the Wavelet Decomposition and the Empirical Mode Decomposition, respectively, in all the step predictions; and (4) the proposed algorithms are effective in accurate wind speed prediction
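
    For reference, the core of an Extreme Learning Machine (a random hidden layer followed by a least-squares output layer) can be sketched as follows; the network size, activation and lag structure are illustrative assumptions.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine for one-step-ahead regression."""
    def __init__(self, n_hidden=40, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid layer

    def fit(self, X, y):
        X = np.atleast_2d(X)
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(X)) @ y        # least-squares output weights
        return self

    def predict(self, X):
        return self._hidden(np.atleast_2d(X)) @ self.beta

# Example: forecast a (decomposed) sub-series from its last 8 lagged values.
series = np.sin(np.linspace(0, 20, 400))
lags = 8
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
model = ELM().fit(X[:300], y[:300])
print("test MSE:", np.mean((model.predict(X[300:]) - y[300:]) ** 2))
```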

  14. Decomposition of toluene in a steady-state atmospheric-pressure glow discharge

    International Nuclear Information System (INIS)

    Trushkin, A. N.; Grushin, M. E.; Kochetov, I. V.; Trushkin, N. I.; Akishev, Yu. S.

    2013-01-01

    Results are presented from experimental studies of decomposition of toluene (C6H5CH3) in a polluted air flow by means of a steady-state atmospheric pressure glow discharge at different water vapor contents in the working gas. The experimental results on the degree of C6H5CH3 removal are compared with the results of computer simulations conducted in the framework of the developed kinetic model of plasma chemical decomposition of toluene in the N2:O2:H2O gas mixture. A substantial influence of the gas flow humidity on toluene decomposition in the atmospheric pressure glow discharge is demonstrated. The main mechanisms of the influence of humidity on C6H5CH3 decomposition are determined. The existence of two stages in the process of toluene removal, which differ in their duration and in the intensity of plasma chemical decomposition of C6H5CH3, is established. Based on the results of computer simulations, the composition of the products of plasma chemical reactions at the output of the reactor is analyzed as a function of the specific energy deposition and gas flow humidity. The existence of a catalytic cycle, in which the hydroxyl radical OH acts as a catalyst and which substantially accelerates the recombination of oxygen atoms and suppresses ozone generation when the plasma-forming gas contains water vapor, is established.

  15. Decomposition of toluene in a steady-state atmospheric-pressure glow discharge

    Science.gov (United States)

    Trushkin, A. N.; Grushin, M. E.; Kochetov, I. V.; Trushkin, N. I.; Akishev, Yu. S.

    2013-02-01

    Results are presented from experimental studies of decomposition of toluene (C6H5CH3) in a polluted air flow by means of a steady-state atmospheric pressure glow discharge at different water vapor contents in the working gas. The experimental results on the degree of C6H5CH3 removal are compared with the results of computer simulations conducted in the framework of the developed kinetic model of plasma chemical decomposition of toluene in the N2:O2:H2O gas mixture. A substantial influence of the gas flow humidity on toluene decomposition in the atmospheric pressure glow discharge is demonstrated. The main mechanisms of the influence of humidity on C6H5CH3 decomposition are determined. The existence of two stages in the process of toluene removal, which differ in their duration and in the intensity of plasma chemical decomposition of C6H5CH3, is established. Based on the results of computer simulations, the composition of the products of plasma chemical reactions at the output of the reactor is analyzed as a function of the specific energy deposition and gas flow humidity. The existence of a catalytic cycle, in which the hydroxyl radical OH acts as a catalyst and which substantially accelerates the recombination of oxygen atoms and suppresses ozone generation when the plasma-forming gas contains water vapor, is established.

  16. Artificial emotional model based on finite state machine

    Institute of Scientific and Technical Information of China (English)

    MENG Qing-mei; WU Wei-guo

    2008-01-01

    According to basic emotion theory, an artificial emotional model based on the finite state machine (FSM) was presented. In the finite state machine model of emotion, the emotional space includes the basic emotional space and the multiple emotional spaces. The emotion-switching diagram was defined and the transition function was developed using a Markov chain and a linear interpolation algorithm. The simulation model was built using the Stateflow and Simulink toolboxes on the Matlab platform. The model includes three subsystems: the input subsystem, the emotion subsystem and the behavior subsystem. In the emotional subsystem, the responses of different personalities to external stimuli were described by defining a personal space. The model takes states from an emotional space and updates its state depending on its current state and the state of its input (also a state-emotion). The simulation model realizes the process of switching the emotion from the neutral state to the other basic emotions. The simulation result is shown to correspond to the emotion-switching behaviour of human beings.
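
    The probabilistic switching between basic emotions can be sketched as a small Markov chain; the states and transition probabilities below are illustrative assumptions, not the paper's values.

```python
import random

# Row-stochastic transition probabilities P[current][next] between basic emotions.
P = {
    "neutral": {"neutral": 0.7, "happy": 0.1,  "sad": 0.1,  "angry": 0.1},
    "happy":   {"neutral": 0.3, "happy": 0.6,  "sad": 0.05, "angry": 0.05},
    "sad":     {"neutral": 0.3, "happy": 0.05, "sad": 0.6,  "angry": 0.05},
    "angry":   {"neutral": 0.3, "happy": 0.05, "sad": 0.05, "angry": 0.6},
}

def step(state: str, rng: random.Random) -> str:
    """Draw the next emotion according to the current state's transition row."""
    nxt = list(P[state])
    return rng.choices(nxt, weights=[P[state][s] for s in nxt])[0]

rng = random.Random(1)
state, trace = "neutral", ["neutral"]
for _ in range(10):
    state = step(state, rng)
    trace.append(state)
print(" -> ".join(trace))    # e.g. neutral -> neutral -> happy -> ...
```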

  17. An Approach for Implementing State Machines with Online Testability

    Directory of Open Access Journals (Sweden)

    P. K. Lala

    2010-01-01

    Full Text Available During the last two decades, a significant amount of research has been performed to simplify the detection of transient or soft errors in VLSI-based digital systems. This paper proposes an approach for implementing state machines that uses a 2-hot code for state encoding. State machines designed using this approach allow online detection of soft errors in registers and output logic. The 2-hot code considerably reduces the number of required flip-flops and leads to a relatively straightforward implementation of the next state and output logic. A new way of designing output logic for online fault detection has also been presented.

  18. The Design of Finite State Machine for Asynchronous Replication Protocol

    Science.gov (United States)

    Wang, Yanlong; Li, Zhanhuai; Lin, Wei; Hei, Minglei; Hao, Jianhua

    Data replication is a key way to design a disaster tolerance system and to achieve reliability and availability. It is difficult for a replication protocol to deal with diverse and complex environments, which means that data is less well replicated than it ought to be. To reduce data loss and to optimize replication protocols, we (1) present a finite state machine, (2) run it to manage an asynchronous replication protocol and (3) report a simple evaluation of the asynchronous replication protocol based on our state machine. It is shown that our state machine keeps the asynchronous replication protocol running in the proper state, to the largest extent possible, in the face of various possible events. It can also help to build replication-based disaster tolerance systems that ensure business continuity.

  19. Comparison Between Wind Power Prediction Models Based on Wavelet Decomposition with Least-Squares Support Vector Machine (LS-SVM) and Artificial Neural Network (ANN)

    Directory of Open Access Journals (Sweden)

    Maria Grazia De Giorgi

    2014-08-01

    Full Text Available A high penetration of wind energy into the electricity market requires a parallel development of efficient wind power forecasting models. Different hybrid forecasting methods were applied to wind power prediction, using historical data and numerical weather predictions (NWP). A comparative study was carried out for the prediction of the power production of a wind farm located in complex terrain. The performances of Least-Squares Support Vector Machine (LS-SVM) with Wavelet Decomposition (WD) were evaluated at different time horizons and compared to hybrid Artificial Neural Network (ANN)-based methods. It is acknowledged that hybrid methods based on LS-SVM with WD mostly outperform other methods. A decomposition of the commonly known root mean square error was beneficial for a better understanding of the origin of the differences between prediction and measurement and to compare the accuracy of the different models. A sensitivity analysis was also carried out in order to underline the impact that each input had in the network training process for ANN. In the case of ANN with the WD technique, the sensitivity analysis was repeated on each component obtained by the decomposition.

  20. State Machine Modeling of the Space Launch System Solid Rocket Boosters

    Science.gov (United States)

    Harris, Joshua A.; Patterson-Hine, Ann

    2013-01-01

    The Space Launch System is a Shuttle-derived heavy-lift vehicle currently in development to serve as NASA's premier launch vehicle for space exploration. The Space Launch System is a multistage rocket with two Solid Rocket Boosters and multiple payloads, including the Multi-Purpose Crew Vehicle. Planned Space Launch System destinations include near-Earth asteroids, the Moon, Mars, and Lagrange points. The Space Launch System is a complex system with many subsystems, requiring considerable systems engineering and integration. To this end, state machine analysis offers a method to support engineering and operational efforts, identify and avert undesirable or potentially hazardous system states, and evaluate system requirements. Finite State Machines model a system as a finite number of states, with transitions between states controlled by state-based and event-based logic. State machines are a useful tool for understanding complex system behaviors and evaluating "what-if" scenarios. This work contributes to a state machine model of the Space Launch System developed at NASA Ames Research Center. The Space Launch System Solid Rocket Booster avionics and ignition subsystems are modeled using MATLAB/Stateflow software. This model is integrated into a larger model of Space Launch System avionics used for verification and validation of Space Launch System operating procedures and design requirements. This includes testing both nominal and off-nominal system states and command sequences.

  1. Rethinking State Politics: The Withering of State Dominant Machines in Brazil

    Directory of Open Access Journals (Sweden)

    André Borges

    2007-03-01

    Full Text Available Research on Brazilian federalism and state politics has focused mainly on the impact of federal arrangements on national political systems, whereas comparative analyses of the workings of state political institutions and patterns of political competition and decision-making have often been neglected. The article contributes to an emerging comparative literature on state politics by developing a typology that systematizes the variation in political competitiveness and the extent of state elites’ control over the electoral arena across Brazilian states. It relies on factor analysis to create an index of “electoral dominance”, comprised of a set of indicators of party and electoral competitiveness at the state level, which measures state elites’ capacity to control the state electoral arena over time. Based on this composite index and on available case-study evidence, the article applies the typological classificatory scheme to all 27 Brazilian states. Further, the article relies on the typological classification to assess the recent evolution of state-level political competitiveness. The empirical analysis demonstrates that state politics is becoming more competitive and fragmented, including in those states that have been characterized as bastions of oligarchism and political bossism. In view of these findings, the article argues that the power of state political machines rests on fragile foundations: in Brazil’s multiparty federalism, vertical competition between the federal and state governments in the provision of social policies works as a constraint on state bosses’ machine-building strategies. It is concluded that our previous views on state political dynamics are in serious need of re-evaluation.

  2. Amplitude-cyclic frequency decomposition of vibration signals for bearing fault diagnosis based on phase editing

    Science.gov (United States)

    Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.

    2018-03-01

    In rotating machine diagnosis different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are usually refined, computationally complex to implement, and require the oversight of an expert user. This paper introduces an intuitive and easy to implement method for vibration analysis: amplitude cyclic frequency decomposition. This method firstly separates vibration signals according to their spectral amplitudes and secondly uses the squared envelope spectrum to reveal the presence of cyclostationarity in each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, both at stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison to the spectral correlation method is presented.
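
    For illustration, a short sketch of the standard squared envelope spectrum, the building block referred to above, is given here. This is not the full amplitude-cyclic frequency decomposition, and the input signal is synthetic.

        # Squared envelope spectrum (SES) via the Hilbert transform.
        import numpy as np
        from scipy.signal import hilbert

        fs = 20000.0                          # sampling frequency [Hz] (assumed)
        t = np.arange(0, 1.0, 1.0 / fs)
        x = np.random.randn(t.size)           # placeholder for a vibration signal

        envelope_sq = np.abs(hilbert(x)) ** 2          # squared envelope
        envelope_sq -= envelope_sq.mean()              # remove the DC component
        ses = np.abs(np.fft.rfft(envelope_sq)) / x.size
        cyclic_freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)

        # Peaks of `ses` at bearing defect frequencies would indicate cyclostationarity.
        print(cyclic_freqs[np.argmax(ses[1:]) + 1])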

  3. Machine Learning Applications to Resting-State Functional MR Imaging Analysis.

    Science.gov (United States)

    Billings, John M; Eder, Maxwell; Flood, William C; Dhami, Devendra Singh; Natarajan, Sriraam; Whitlow, Christopher T

    2017-11-01

    Machine learning is one of the most exciting and rapidly expanding fields within computer science. Academic and commercial research entities are investing in machine learning methods, especially in personalized medicine via patient-level classification. There is great promise that machine learning methods combined with resting state functional MR imaging will aid in diagnosis of disease and guide potential treatment for conditions thought to be impossible to identify based on imaging alone, such as psychiatric disorders. We discuss machine learning methods and explore recent advances.

  4. Removing the Restrictions Imposed on Finite State Machines ...

    African Journals Online (AJOL)

    This study determines an effective method of removing the fixed and finite amount of memory that restricts finite state machines from carrying out compilation jobs that require larger amounts of memory. The study is ... The conclusion reviewed the various steps followed and made projections for further reading. Keyword: ...

  5. A decomposition heuristics based on multi-bottleneck machines for large-scale job shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Yingni Zhai

    2014-10-01

    Full Text Available Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job, and the solution of the large-scale JSP can then be obtained by iteratively solving the sub-problems. In order to improve the sub-problems' solving efficiency and the solution quality, a detection method for multi-bottleneck machines based on the critical path is proposed, whereby the unscheduled operations can be decomposed into bottleneck operations and non-bottleneck operations. According to the principle of "the bottleneck leads the performance of the whole manufacturing system" in TOC (Theory of Constraints), the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules to improve solving efficiency. Findings: In the process of constructing the sub-problems, partial operations in the previously scheduled sub-problem are divided into the successive sub-problem for re-optimization; this strategy can improve the solution quality of the algorithm. In the process of solving the sub-problems, the strategy of evaluating a chromosome's fitness by predicting the global scheduling objective value can also improve the solution quality. Research limitations/implications: In this research, there are some assumptions which reduce the complexity of the large-scale scheduling problem: the processing route of each job is predetermined, the processing time of each operation is fixed, there is no machine breakdown, and no preemption of the operations is allowed. These assumptions should be considered if the algorithm is used in an actual job shop. Originality/value: The research provides an efficient scheduling method for the

  6. Assessing the extent of decomposition of natural organic materials using solid-state 13C NMR spectroscopy

    International Nuclear Information System (INIS)

    Baddock, J.A.; Oades, J.M.; Nelson, P.N.; Skene, T.M.; Golchin, A.; Clarke, P.

    1997-01-01

    Solid-state 13C nuclear magnetic resonance (NMR) spectroscopy has become an important tool for examining the chemical structure of natural organic materials and the chemical changes associated with decomposition. In this paper, solid-state 13C NMR data pertaining to changes in the chemical composition of a diverse range of natural organic materials, including wood, peat, composts, forest litter layers, and organic materials in surface layers of mineral soils, were reviewed with the objective of deriving an index of the extent of decomposition of such organic materials based on changes in chemical composition. Chemical changes associated with the decomposition of wood varied considerably and were dependent on a strong interaction between the species of wood examined and the species composition of the microbial decomposer community, making the derivation of a single general index applicable to wood decomposition unlikely. For the remaining forms of natural organic residues, decomposition was almost always associated with an increased content of alkyl C and a decreased content of O-alkyl C. The concomitant increase and decrease in alkyl and O-alkyl C contents, respectively, suggested that the ratio of alkyl to O-alkyl carbon (A/O-A ratio) may provide a sensitive index of the extent of decomposition. Contrary to the traditional view that humic substances with an aromatic core accumulate as decomposition proceeds, changes in the aromatic region were variable and suggested a relationship with the activity of lignin-degrading fungi. The A/O-A ratio did appear to provide a sensitive index of extent of decomposition provided that its use was restricted to situations where the organic materials were derived from a common starting material. In addition, the potential for adsorption of highly decomposable materials on mineral soil surfaces and the impacts which such an adsorption may have on bioavailability required consideration when the A/O-A ratio was used to assess the

  7. Mode decomposition for a synchronous state and its applications

    International Nuclear Information System (INIS)

    Xiong Xiaohua; Wang Junwei; Zhang Yanbin; Zhou Tianshou

    2007-01-01

    Synchronization of coupled dynamical systems including periodic and chaotic systems is investigated both analytically and numerically. A novel method, mode decomposition, of treating the stability of a synchronous state is proposed based on the Floquet theory. A rigorous criterion is then derived, which can be applied to arbitrary coupled systems. Two typical numerical examples: coupled Van der Pol systems (corresponding to the case of coupled periodic oscillators) and coupled Lorenz systems (corresponding to the case of chaotic systems) are used to demonstrate the theoretical analysis.

  8. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    Full Text Available To effectively extract the typical features of a bearing, a new method relating local mean decomposition Shannon entropy and an improved kernel principal component analysis model is proposed. First, features are extracted by a time–frequency domain method, local mean decomposition, and the Shannon entropy is used to process the separated product functions so as to obtain the original features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature processing technique, kernel principal component analysis, is introduced to fuse the features. The kernel principal component analysis is improved by a weight factor. The extracted characteristic features were input into a Morlet wavelet kernel support vector machine to obtain the bearing running-state classification model, and the bearing running state was thereby identified. Test cases and actual cases were analyzed.

  9. Distributed state machine supervision for long-baseline gravitational-wave detectors

    International Nuclear Information System (INIS)

    Rollins, Jameson Graef

    2016-01-01

    The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two identical yet independent, widely separated, long-baseline gravitational-wave detectors. Each Advanced LIGO detector consists of complex optical-mechanical systems isolated from the ground by multiple layers of active seismic isolation, all controlled by hundreds of fast, digital, feedback control systems. This article describes a novel state machine-based automation platform developed to handle the automation and supervisory control challenges of these detectors. The platform, called Guardian, consists of distributed, independent, state machine automaton nodes organized hierarchically for full detector control. User code is written in standard Python and the platform is designed to facilitate the fast-paced development process associated with commissioning the complicated Advanced LIGO instruments. While developed specifically for the Advanced LIGO detectors, Guardian is a generic state machine automation platform that is useful for experimental control at all levels, from simple table-top setups to large-scale multi-million dollar facilities.

  10. Distributed state machine supervision for long-baseline gravitational-wave detectors

    Energy Technology Data Exchange (ETDEWEB)

    Rollins, Jameson Graef, E-mail: jameson.rollins@ligo.org [LIGO Laboratory, California Institute of Technology, Pasadena, California 91125 (United States)

    2016-09-15

    The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two identical yet independent, widely separated, long-baseline gravitational-wave detectors. Each Advanced LIGO detector consists of complex optical-mechanical systems isolated from the ground by multiple layers of active seismic isolation, all controlled by hundreds of fast, digital, feedback control systems. This article describes a novel state machine-based automation platform developed to handle the automation and supervisory control challenges of these detectors. The platform, called Guardian, consists of distributed, independent, state machine automaton nodes organized hierarchically for full detector control. User code is written in standard Python and the platform is designed to facilitate the fast-paced development process associated with commissioning the complicated Advanced LIGO instruments. While developed specifically for the Advanced LIGO detectors, Guardian is a generic state machine automation platform that is useful for experimental control at all levels, from simple table-top setups to large-scale multi-million dollar facilities.
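
    To make the node idea concrete, an illustrative sketch of a hierarchical state-machine automaton node in plain Python follows. Guardian's real API, state names, and node names differ; everything below is hypothetical.

        # One automaton node: named states, each returning the next state (or None to stay).
        class Node:
            def __init__(self, name, states, initial):
                self.name, self.states, self.state = name, states, initial

            def step(self):
                self.state = self.states[self.state](self) or self.state

        def down(node):     return "LOCKING"
        def locking(node):  return "LOCKED"      # in reality: run servo checks, etc.
        def locked(node):   return None          # stay until an event requests a change

        cavity = Node("CAVITY", {"DOWN": down, "LOCKING": locking, "LOCKED": locked}, "DOWN")
        for _ in range(3):
            cavity.step()
        print(cavity.name, cavity.state)         # -> CAVITY LOCKED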

  11. A PARALLEL NONOVERLAPPING DOMAIN DECOMPOSITION METHOD FOR STOKES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Mei-qun Jiang; Pei-liang Dai

    2006-01-01

    A nonoverlapping domain decomposition iterative procedure is developed and analyzed for generalized Stokes problems and their finite element approximate problems in R^N (N = 2, 3). The method is based on a mixed-type consistency condition with two parameters as a transmission condition together with a derivative-free transmission data updating technique on the artificial interfaces. The method can be applied to a general multi-subdomain decomposition and implemented on parallel machines with local simple communications naturally.

  12. PLA realizations for VLSI state machines

    Science.gov (United States)

    Gopalakrishnan, S.; Whitaker, S.; Maki, G.; Liu, K.

    1990-01-01

    A major problem associated with state assignment procedures for VLSI controllers is obtaining an assignment that produces minimal or near minimal logic. The key item in Programmable Logic Array (PLA) area minimization is the number of unique product terms required by the design equations. This paper presents a state assignment algorithm for minimizing the number of product terms required to implement a finite state machine using a PLA. Partition algebra with predecessor state information is used to derive a near optimal state assignment. A maximum bound on the number of product terms required can be obtained by inspecting the predecessor state information. The state assignment algorithm presented is much simpler than existing procedures and leads to the same number of product terms or less. An area-efficient PLA structure implemented in a 1.0 micron CMOS process is presented along with a summary of the performance for a controller implemented using this design procedure.

  13. ATC calculation with steady-state security constraints using Benders decomposition

    International Nuclear Information System (INIS)

    Shaaban, M.; Yan, Z.; Ni, Y.; Wu, F.; Li, W.; Liu, H.

    2003-01-01

    Available transfer capability (ATC) is an important indicator of the usable amount of transmission capacity accessible by assorted parties for commercial trading. ATC calculation is nontrivial when steady-state security constraints are included. In this paper, the Benders decomposition method is proposed to partition the ATC problem with steady-state security constraints into a base case master problem and a series of subproblems relevant to various contingencies to include their impacts on ATC. The mathematical model is formulated and two solution schemes are presented. Computer testing on the 4-bus system and IEEE 30-bus system shows the effectiveness of the proposed method and the solution schemes. (Author)
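
    For reference, the textbook form of the Benders scheme (generic, not the paper's ATC-specific formulation) reads as follows: a problem min_{x,y} c^T x + d^T y subject to A x + B y >= b is split into a master problem over the complicating variables y and a value estimate theta, min_{y,theta} d^T y + theta subject to theta >= u_k^T (b - B y) (optimality cuts) and 0 >= r_j^T (b - B y) (feasibility cuts), and a subproblem for fixed y_hat, min_x c^T x subject to A x >= b - B y_hat, x >= 0, whose dual solution u_k (or dual ray r_j) generates the next cut. In the formulation above, the contingency-related subproblems presumably play this cut-generating role for the base case master problem.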

  14. Control of discrete event systems modeled as hierarchical state machines

    Science.gov (United States)

    Brave, Y.; Heymann, M.

    1991-01-01

    The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.

  15. Effect of charged and excited states on the decomposition of 1,1-diamino-2,2-dinitroethylene molecules

    International Nuclear Information System (INIS)

    Kimmel, Anna V.; Sushko, Peter V.; Shluger, Alexander L.; Kuklja, Maija M.

    2007-01-01

    The authors have calculated the electronic structure of individual 1,1-diamino-2,2-dinitroethylene molecules (FOX-7) in the gas phase by means of density functional theory with the hybrid B3LYP functional and 6-31+G(d,p) basis set and considered their dissociation pathways. Positively and negatively charged states as well as the lowest excited states of the molecule were simulated. They found that charging and excitation can not only reduce the activation barriers for decomposition reactions but also change the dominating chemistry from endo- to exothermic type. In particular, they found that there are two competing primary initiation mechanisms of FOX-7 decomposition: C-NO2 bond fission and C-NO2 to CONO isomerization. Electronic excitation or charging of FOX-7 disfavors CONO formation and, thus, terminates this channel of decomposition. However, if CONO is formed from the neutral FOX-7 molecule, charge trapping and/or excitation results in spontaneous splitting of an NO group accompanied by the energy release. Intramolecular hydrogen transfer is found to be a rare event in FOX-7 unless free electrons are available in the vicinity of the molecule, in which case HONO formation is a feasible exothermic reaction with a relatively low energy barrier. The effect of charged and excited states on other possible reactions is also studied. Implications of the obtained results to FOX-7 decomposition in condensed state are discussed

  16. Deterministic and probabilistic interval prediction for short-term wind power generation based on variational mode decomposition and machine learning methods

    International Nuclear Information System (INIS)

    Zhang, Yachao; Liu, Kaipei; Qin, Liang; An, Xueli

    2016-01-01

    Highlights: • Variational mode decomposition is adopted to process original wind power series. • A novel combined model based on machine learning methods is established. • An improved differential evolution algorithm is proposed for weight adjustment. • Probabilistic interval prediction is performed by quantile regression averaging. - Abstract: Due to the increasingly significant energy crisis nowadays, the exploitation and utilization of new clean energy gains more and more attention. As an important category of renewable energy, wind power generation has become the most rapidly growing renewable energy in China. However, the intermittency and volatility of wind power has restricted the large-scale integration of wind turbines into power systems. High-precision wind power forecasting is an effective measure to alleviate the negative influence of wind power generation on the power systems. In this paper, a novel combined model is proposed to improve the prediction performance for the short-term wind power forecasting. Variational mode decomposition is firstly adopted to handle the instability of the raw wind power series, and the subseries can be reconstructed by measuring sample entropy of the decomposed modes. Then the base models can be established for each subseries respectively. On this basis, the combined model is developed based on the optimal virtual prediction scheme, the weight matrix of which is dynamically adjusted by a self-adaptive multi-strategy differential evolution algorithm. Besides, a probabilistic interval prediction model based on quantile regression averaging and variational mode decomposition-based hybrid models is presented to quantify the potential risks of the wind power series. The simulation results indicate that: (1) the normalized mean absolute errors of the proposed combined model from one-step to three-step forecasting are 4.34%, 6.49% and 7.76%, respectively, which are much lower than those of the base models and the hybrid

  17. Implementing finite state machines in a computer-based teaching system

    Science.gov (United States)

    Hacker, Charles H.; Sitte, Renate

    1999-09-01

    Finite State Machines (FSM) are models for functions commonly implemented in digital circuits such as timers, remote controls, and vending machines. Teaching FSM is core in the curriculum of many university digital electronic or discrete mathematics subjects. Students often have difficulties grasping the theoretical concepts in the design and analysis of FSM. This has prompted the author to develop MS-Windows™-compatible software, WinState, which provides a tutorial-style teaching aid for understanding the mechanisms of FSM. The animated computer screen is ideal for visually conveying the required design and analysis procedures. WinState complements other software for combinatorial logic previously developed by the author, and enhances the existing teaching package by adding sequential logic circuits. WinState enables the construction of a student's own FSM, which can be simulated, to test the design for functionality and possible errors.

  18. Reverse Engineering Integrated Circuits Using Finite State Machine Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Oler, Kiri J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Miller, Carl H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-04-12

    In this paper, we present a methodology for reverse engineering integrated circuits, including a mathematical verification of a scalable algorithm used to generate minimal finite state machine representations of integrated circuits.

  19. Twentieth Century evolution of machining in the United States – An ...

    Indian Academy of Sciences (India)


    beginning of the Industrial Revolution in the late 1700's, virtually no ... expected that, by the middle of the 19th Century, as machine tools began to be manufactured ...

  20. An Embeddable Virtual Machine for State Space Generation

    NARCIS (Netherlands)

    Weber, M.; Bosnacki, D.; Edelkamp, S.

    2007-01-01

    The semantics of modelling languages are not always specified in a precise and formal way, and their rather complex underlying models make it a non-trivial exercise to reuse them in newly developed tools. We report on experiments with a virtual machine-based approach for state space generation. The

  1. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    Science.gov (United States)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving the machining accuracy of CNC machines by applying innovative methods in modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory and the efficiency of decomposition methods; it also has visual clarity, which is inherent in both topological models and structural matrices, as well as the resiliency of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type, and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the stages of design and exploitation. Having researched the impact of the system dynamics on the component quality, the authors have developed a range of practical recommendations which have enabled one to considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min-1, and improve machining accuracy.

  2. Implementation of a Microcode-controlled State Machine and Simulator in AVR Microcontrollers (MICoSS

    Directory of Open Access Journals (Sweden)

    S. Korbel

    2005-01-01

    Full Text Available This paper describes the design of a microcode-controlled state machine and its software implementation in Atmel AVR microcontrollers. In particular, ATmega103 and ATmega128 microcontrollers are used. This design is closely related to the software implementation of a simulator in AVR microcontrollers. This simulator communicates with the designed state machine and presents a complete design environment for microcode development and debugging. These two devices can be interconnected by a flat cable and linked to a computer through a serial or USB interface. Both devices share the control software that allows us to create and edit microprograms and to control the whole state machine. It is possible to start, cancel or step through the execution of the microprograms. The operator can also observe the current state of the state machine. The second part of the control software enables the operator to create and compile simulation programs. The control software communicates with both devices using commands. All the results of this communication are well arranged in dialog boxes and windows.

  3. Technical Note: Linking climate change and downed woody debris decomposition across forests of the eastern United States

    Science.gov (United States)

    Russell, Matthew B.; Woodall, Christopher W.; D'Amato, Anthony W.; Fraver, Shawn; Bradford, John B.

    2014-01-01

    Forest ecosystems play a critical role in mitigating greenhouse gas emissions. Forest carbon (C) is stored through photosynthesis and released via decomposition and combustion. Relative to C fixation in biomass, much less is known about C depletion through decomposition of woody debris, particularly under a changing climate. It is assumed that the increased temperatures and longer growing seasons associated with projected climate change will increase the decomposition rates (i.e., more rapid C cycling) of downed woody debris (DWD); however, the magnitude of this increase has not been previously addressed. Using DWD measurements collected from a national forest inventory of the eastern United States, we show that the residence time of DWD may decrease (i.e., more rapid decomposition) by as much as 13% over the next 200 years, depending on various future climate change scenarios and forest types. Although existing dynamic global vegetation models account for the decomposition process, they typically do not include the effect of a changing climate on DWD decomposition rates. We expect that an increased understanding of decomposition rates, as presented in this current work, will be needed to adequately quantify the fate of woody detritus in future forests. Furthermore, we hope these results will lead to improved models that incorporate climate change scenarios for depicting future dead wood dynamics in addition to a traditional emphasis on live-tree demographics.

  4. Exponentially Biased Ground-State Sampling of Quantum Annealing Machines with Transverse-Field Driving Hamiltonians.

    Science.gov (United States)

    Mandrà, Salvatore; Zhu, Zheng; Katzgraber, Helmut G

    2017-02-17

    We study the performance of the D-Wave 2X quantum annealing machine on systems with well-controlled ground-state degeneracy. While obtaining the ground state of a spin-glass benchmark instance represents a difficult task, the gold standard for any optimization algorithm or machine is to sample all solutions that minimize the Hamiltonian with more or less equal probability. Our results show that while naive transverse-field quantum annealing on the D-Wave 2X device can find the ground-state energy of the problems, it is not well suited to identifying all degenerate ground-state configurations associated with a particular instance. Even worse, some states are exponentially suppressed, in agreement with previous studies on toy model problems [New J. Phys. 11, 073021 (2009); doi:10.1088/1367-2630/11/7/073021]. These results suggest that more complex driving Hamiltonians are needed in future quantum annealing machines to ensure a fair sampling of the ground-state manifold.

  5. Solid state green synthesis and catalytic activity of CuO nanorods in thermal decomposition of potassium periodate

    Science.gov (United States)

    Patel, Vinay Kumar; Bhattacharya, Shantanu

    2017-09-01

    The present study reports a facile solid state green synthesis process using the leaf extracts of Hibiscus rosa-sinensis to synthesize CuO nanorods with average diameters of 15-20 nm and lengths up to 100 nm. The as-synthesized CuO nanorods were characterized by x-ray diffraction, Fourier transform infrared spectroscopy, transmission electron microscopy and selected area electron diffraction. The formation mechanism of CuO nanorods has been explained by involving the individual role of amide I (amino groups) and carboxylate groups under excess hydroxyl ions released from NaOH. The catalytic activity of CuO nanorods in the thermal decomposition of potassium periodate microparticles (µ-KIO4) was studied by thermogravimetric analysis. The original size (~100 µm) of commercially procured potassium periodate was reduced to about one-tenth by a PEG200-assisted emulsion process. The CuO nanorods prepared by the solid state green route were found to catalyze the thermal decomposition of µ-KIO4 with a reduction of 18 °C in the final thermal decomposition temperature of potassium periodate.

  6. A rule-based approach to model checking of UML state machines

    Science.gov (United States)

    Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz

    2016-12-01

    In the paper a new approach to formal verification of a control process specification expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases the assurance that the implemented system meets the user-defined requirements.

  7. A highly efficient autothermal microchannel reactor for ammonia decomposition: Analysis of hydrogen production in transient and steady-state regimes

    Science.gov (United States)

    Engelbrecht, Nicolaas; Chiuta, Steven; Bessarabov, Dmitri G.

    2018-05-01

    The experimental evaluation of an autothermal microchannel reactor for H2 production from NH3 decomposition is described. The reactor design incorporates an autothermal approach, with added NH3 oxidation, for coupled heat supply to the endothermic decomposition reaction. An alternating catalytic plate arrangement is used to accomplish this thermal coupling in a cocurrent flow strategy. Detailed analysis of the transient operating regime associated with reactor start-up and steady-state results is presented. The effects of operating parameters on reactor performance are investigated, specifically, the NH3 decomposition flow rate, NH3 oxidation flow rate, and fuel-oxygen equivalence ratio. Overall, the reactor exhibits rapid response time during start-up; within 60 min, H2 production is approximately 95% of steady-state values. The recommended operating point for steady-state H2 production corresponds to an NH3 decomposition flow rate of 6 NL min-1, NH3 oxidation flow rate of 4 NL min-1, and fuel-oxygen equivalence ratio of 1.4. Under these flows, NH3 conversion of 99.8% and H2 equivalent fuel cell power output of 0.71 kWe is achieved. The reactor shows good heat utilization with a thermal efficiency of 75.9%. An efficient autothermal reactor design is therefore demonstrated, which may be upscaled to a multi-kW H2 production system for commercial implementation.

  8. The application of state machine based on labview for solid target transfer control system at BATAN’s cyclotron

    International Nuclear Information System (INIS)

    Heranudin; Rajiman; Parwanto; Edy Slamet R

    2015-01-01

    The software for the new solid target transfer control system was programmed according to the working principle of each subsystem. System modeling with a state machine diagram was chosen because it simplified the complex design of the control system. The state machine implementation of this system was performed by creating basic states drawn from the working principle of each subsystem. All states, with their described inputs, outputs and algorithms, were compiled in the sequential state machine diagram. In order to ease operation, three modes, namely automatic, major states and micro states, were created. Testing of the system has been conducted and the system worked properly. The state machine implementation based on LabVIEW has several advantages, such as faster and easier programming and the capability for further development. (author)

  9. Learning Extended Finite State Machines

    Science.gov (United States)

    Cassel, Sofia; Howar, Falk; Jonsson, Bengt; Steffen, Bernhard

    2014-01-01

    We present an active learning algorithm for inferring extended finite state machines (EFSM)s, combining data flow and control behavior. Key to our learning technique is a novel learning model based on so-called tree queries. The learning algorithm uses the tree queries to infer symbolic data constraints on parameters, e.g., sequence numbers, time stamps, identifiers, or even simple arithmetic. We describe sufficient conditions for the properties that the symbolic constraints provided by a tree query in general must have to be usable in our learning model. We have evaluated our algorithm in a black-box scenario, where tree queries are realized through (black-box) testing. Our case studies include connection establishment in TCP and a priority queue from the Java Class Library.

  10. Single-Trial Classification of Bistable Perception by Integrating Empirical Mode Decomposition, Clustering, and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Hualou Liang

    2008-04-01

    Full Text Available We propose an empirical mode decomposition (EMD)-based method to extract features from the multichannel recordings of local field potential (LFP), collected from the middle temporal (MT) visual cortex in a macaque monkey, for decoding its bistable structure-from-motion (SFM) perception. The feature extraction approach consists of three stages. First, we employ EMD to decompose nonstationary single-trial time series into narrowband components called intrinsic mode functions (IMFs) with time scales dependent on the data. Second, we adopt unsupervised K-means clustering to group the IMFs and residues into several clusters across all trials and channels. Third, we use the supervised common spatial patterns (CSP) approach to design spatial filters for the clustered spatiotemporal signals. We exploit the support vector machine (SVM) classifier on the extracted features to decode the reported perception on a single-trial basis. We demonstrate that the CSP feature of the cluster in the gamma frequency band outperforms the features in other frequency bands and leads to the best decoding performance. We also show that the EMD-based feature extraction can be useful for evoked potential estimation. Our proposed feature extraction approach may have potential for many applications involving nonstationary multivariable time series such as brain-computer interfaces (BCI).
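
    A sketch of this kind of pipeline on a synthetic signal follows. It assumes the PyEMD package (installed as EMD-signal) and scikit-learn, and the clustering/CSP stage is reduced here to simple per-IMF energy features for brevity.

        import numpy as np
        from PyEMD import EMD                  # pip install EMD-signal (assumed)
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        def imf_energy_features(signal, n_imfs=4):
            imfs = EMD().emd(signal)                        # stage 1: decompose into IMFs
            energies = [np.sum(imf ** 2) for imf in imfs[:n_imfs]]
            energies += [0.0] * (n_imfs - len(energies))    # pad if fewer IMFs are found
            return np.log1p(energies)

        # Synthetic "trials": two classes differing in the power of a fast component.
        X, y = [], []
        t = np.linspace(0, 1, 512)
        for label in (0, 1):
            for _ in range(20):
                trial = (np.sin(2 * np.pi * 5 * t)
                         + (0.2 + 0.8 * label) * np.sin(2 * np.pi * 60 * t)
                         + 0.3 * rng.standard_normal(t.size))
                X.append(imf_energy_features(trial))
                y.append(label)

        clf = SVC(kernel="rbf").fit(X[::2], y[::2])         # stage 3: train on half the trials
        print("held-out accuracy:", clf.score(X[1::2], y[1::2]))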

  11. Static Object Detection Based on a Dual Background Model and a Finite-State Machine

    Directory of Open Access Journals (Sweden)

    Heras Evangelio Rubén

    2011-01-01

    Full Text Available Detecting static objects in video sequences has a high relevance in many surveillance applications, such as the detection of abandoned objects in public areas. In this paper, we present a system for the detection of static objects in crowded scenes. Based on the detection results of two background models learning at different rates, pixels are classified with the help of a finite-state machine. The background is modelled by two mixtures of Gaussians with identical parameters except for the learning rate. The state machine provides the means for the interpretation of the results obtained from background subtraction; it can be implemented as a look-up table with negligible computational cost and it can be easily extended. Due to the definition of the states in the state machine, the system can be used either fully automatically or interactively, making it extremely suitable for real-life surveillance applications. The system was successfully validated with several public datasets.
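
    The look-up-table idea mentioned above can be sketched as follows in Python. The state set and transition table are illustrative, not the paper's exact design; the intuition encoded is that a stopped object is absorbed quickly by the fast-learning model while remaining foreground in the slow-learning one.

        # Key: (current_state, foreground_in_short_term_model, foreground_in_long_term_model)
        TRANSITIONS = {
            ("BACKGROUND", False, False): "BACKGROUND",
            ("BACKGROUND", True,  True):  "MOVING",
            ("MOVING",     True,  True):  "MOVING",
            ("MOVING",     False, True):  "CANDIDATE_STATIC",   # absorbed by the fast model only
            ("MOVING",     False, False): "BACKGROUND",
            ("CANDIDATE_STATIC", False, True):  "STATIC_OBJECT",
            ("CANDIDATE_STATIC", False, False): "BACKGROUND",
            ("STATIC_OBJECT", False, True):  "STATIC_OBJECT",
            ("STATIC_OBJECT", False, False): "BACKGROUND",      # object removed again
        }

        def update(state, fg_short, fg_long):
            # Unlisted combinations fall back to the current state.
            return TRANSITIONS.get((state, fg_short, fg_long), state)

        state = "BACKGROUND"
        for fg_short, fg_long in [(True, True), (False, True), (False, True), (False, True)]:
            state = update(state, fg_short, fg_long)
        print(state)   # -> STATIC_OBJECT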

  12. Constrained state-feedback control of an externally excited synchronous machine

    NARCIS (Netherlands)

    Carpiuc, S.C.; Lazar, M.

    2013-01-01

    State-feedback control of externally excited synchronous machines employed in applications such as hybrid electric vehicles and full electric vehicles is a challenging problem. Indeed, these applications are characterized by fast dynamics that are subject to hard physical and control constraints.

  13. Time-frequency feature analysis and recognition of fission neutrons signal based on support vector machine

    International Nuclear Information System (INIS)

    Jin Jing; Wei Biao; Feng Peng; Tang Yuelin; Zhou Mi

    2010-01-01

    Based on the interdependent relationship between fission neutrons (252Cf) and the fission chain (235U system), the paper presents time-frequency feature analysis and recognition of the fission neutron signal based on a support vector machine (SVM), through analysis of the signal characteristics and the measuring principle of the 252Cf fission neutron signal. The time-frequency characteristics and energy features of the fission neutron signal are extracted by using wavelet decomposition and de-noising wavelet packet decomposition, and then applied to training and classification by means of a support vector machine based on statistical learning theory. The results show that it is effective to obtain features of the nuclear signal via wavelet decomposition and de-noising wavelet packet decomposition, and the latter can reflect the internal characteristics of the fission neutron system better. With the training accomplished, the SVM classifier achieves an accuracy rate above 70%, overcoming the lack of training samples and verifying the effectiveness of the algorithm. (authors)
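
    In the spirit of the record above, a sketch of wavelet-packet energy features feeding an SVM classifier is shown below. It uses PyWavelets and scikit-learn; the synthetic signals stand in for detector pulse trains, which are not reproduced here.

        import numpy as np
        import pywt
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)

        def wp_energy_features(x, wavelet="db4", level=3):
            wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric", maxlevel=level)
            energies = np.array([np.sum(node.data ** 2)
                                 for node in wp.get_level(level, order="natural")])
            return energies / energies.sum()          # relative energy per sub-band

        # Two synthetic classes with different spectral content.
        X, y = [], []
        t = np.linspace(0, 1, 256)
        for label in (0, 1):
            for _ in range(30):
                x = np.sin(2 * np.pi * (8 + 40 * label) * t) + 0.5 * rng.standard_normal(t.size)
                X.append(wp_energy_features(x))
                y.append(label)

        clf = SVC(kernel="rbf").fit(X[::2], y[::2])
        print("held-out accuracy:", clf.score(X[1::2], y[1::2]))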

  14. Implementation and application of the gaussian decomposition software for NaI gamma-ray spectrometry data

    International Nuclear Information System (INIS)

    Zeng Lihui; Wang Nanping; Tian Gui

    2012-01-01

    In order to extract the information of peaks at different energies from overlapping-peak data in environmental gamma-ray spectrometry, Gaussian decomposition software for spectrum data was designed based on a least-squares Gaussian fitting method. The software has a friendly interface and can quickly complete the decomposition of overlapping peaks in a gamma-ray spectrum in a man-machine interactive way. The results of decomposing field-measured data with this software indicate that the Gaussian decomposition software can efficiently extract 137Cs spectra from overlapping peaks, which is significant for estimating the human nuclide contamination in the environment. (authors)
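
    The fitting principle behind such software can be sketched with SciPy's least-squares curve fitting, as below. Channel numbers, peak positions, and amplitudes are synthetic and purely illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
            return (a1 * np.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) +
                    a2 * np.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)))

        channels = np.arange(0, 200, 1.0)
        rng = np.random.default_rng(2)
        spectrum = two_gaussians(channels, 120, 90, 6, 80, 105, 7) + rng.normal(0, 2, channels.size)

        p0 = [100, 85, 5, 100, 110, 5]                       # rough initial guesses
        popt, _ = curve_fit(two_gaussians, channels, spectrum, p0=p0)
        print("fitted peak centroids:", popt[1], popt[4])    # separated peak positions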

  15. Implementation and application of the gaussian decomposition software for NaI gamma-ray spectrometry data

    International Nuclear Information System (INIS)

    Zeng Lihui; Wang Nanping Tian Gui

    2011-01-01

    In order to extract the information of peaks at different energies from overlapping-peak data in environmental gamma spectrometry, Gaussian decomposition software for spectrum data was designed based on a least-squares Gaussian fitting method. The software has a friendly interface and can quickly complete the decomposition of overlapping peaks in a gamma-ray spectrum in a man-machine interactive way. The results of applying the software to field-measurement data indicate that the Gaussian decomposition software can efficiently extract 137Cs from overlapping peaks, which is significant for assessing the human nuclide contamination of the environment. (authors)

  16. Distributed Dynamic State Estimation with Extended Kalman Filter

    Energy Technology Data Exchange (ETDEWEB)

    Du, Pengwei; Huang, Zhenyu; Sun, Yannan; Diao, Ruisheng; Kalsi, Karanjit; Anderson, Kevin K.; Li, Yulan; Lee, Barry

    2011-08-04

    Increasing complexity associated with large-scale renewable resources and novel smart-grid technologies necessitates real-time monitoring and control. Our previous work applied the extended Kalman filter (EKF) with the use of phasor measurement unit (PMU) data for dynamic state estimation. However, high computation complexity creates significant challenges for real-time applications. In this paper, the problem of distributed dynamic state estimation is investigated. A domain decomposition method is proposed to utilize decentralized computing resources. The performance of distributed dynamic state estimation is tested on a 16-machine, 68-bus test system.
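
    For readers unfamiliar with the estimator, a generic EKF predict/update step is sketched below. Only the filter mechanics are shown; the power-system process and measurement models (machine swing equations, PMU measurement functions) are not reproduced here.

        import numpy as np

        def ekf_step(x, P, u, z, f, h, F, H, Q, R):
            """One EKF cycle. x, P: state estimate and covariance; z: PMU measurement.
            f, h: process/measurement functions; F, H: their Jacobians at the estimate."""
            # Predict
            x_pred = f(x, u)
            F_k = F(x, u)
            P_pred = F_k @ P @ F_k.T + Q
            # Update
            H_k = H(x_pred)
            S = H_k @ P_pred @ H_k.T + R
            K = P_pred @ H_k.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - h(x_pred))
            P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
            return x_new, P_new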

  17. Forecasting of Energy Consumption in China Based on Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-04-01

    Full Text Available For social development, energy is a crucial material whose consumption affects the stable and sustained development of the natural environment and economy. Currently, China has become the largest energy consumer in the world. Therefore, establishing an appropriate energy consumption prediction model and accurately forecasting energy consumption in China have practical significance, and can provide a scientific basis for China to formulate a reasonable energy production plan and energy-saving and emissions-reduction-related policies to boost sustainable development. For forecasting the energy consumption in China accurately, considering the main driving factors of energy consumption, a novel model, EEMD-ISFLA-LSSVM (Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm), is proposed in this article. The prediction accuracy of energy consumption is influenced by various factors. In this article, first considering population, GDP (Gross Domestic Product), industrial structure (the proportion of the secondary industry added value), energy consumption structure, energy intensity, carbon emissions intensity, total imports and exports and other influencing factors of energy consumption, the main driving factors of energy consumption are screened as the model input according to the sorting of grey relational degrees to realize feature dimension reduction. Then, the original energy consumption sequence of China is decomposed into multiple subsequences by Ensemble Empirical Mode Decomposition for de-noising. Next, the ISFLA-LSSVM (Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm) model is adopted to forecast each subsequence, and the prediction sequences are reconstructed to obtain the forecasting result. After that, the data from 1990 to 2009 are taken as the training set, and the data from 2010 to 2016 are taken as the test set to make an

  18. Entanglement and tensor product decomposition for two fermions

    International Nuclear Information System (INIS)

    Caban, P; Podlaski, K; Rembielinski, J; Smolinski, K A; Walczak, Z

    2005-01-01

    The problem of the choice of tensor product decomposition in a system of two fermions with the help of Bogoliubov transformations of creation and annihilation operators is discussed. The set of physical states of the composite system is restricted by the superselection rule forbidding the superposition of fermions and bosons. It is shown that the Wootters concurrence is not the proper entanglement measure in this case. The explicit formula for the entanglement of formation is found. This formula shows that the entanglement of a given state depends on the tensor product decomposition of a Hilbert space. It is shown that the set of separable states is narrower than in the two-qubit case. Moreover, there exist states which are separable with respect to all tensor product decompositions of the Hilbert space. (letter to the editor)

  19. Singular value decomposition based feature extraction technique for physiological signal analysis.

    Science.gov (United States)

    Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C

    2012-06-01

    Multiscale entropy (MSE) is one of the popular techniques to calculate and describe the complexity of the physiological signal. Many studies use this approach to detect changes in the physiological conditions in the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted to replace MSE to extract the features of physiological signals, and adopt the support vector machine (SVM) to classify the different physiological states. A test data set based on the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal could attain a classification accuracy rate of 89.157%, which is higher than that using the MSE value (71.084%). The results show the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could be used as a reference for doctors in diagnosis of congestive heart failure (CHF) disease.
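
    A sketch of SVD-based feature extraction followed by an SVM classifier is given below. The embedding of the 1-D signal into a Hankel (trajectory) matrix is an assumption made here for illustration; the record does not specify the exact construction, and the signals are synthetic rather than PhysioNet data.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)

        def svd_features(signal, window=32, n_values=8):
            # Build a trajectory matrix whose rows are sliding windows of the signal.
            rows = [signal[i:i + window] for i in range(len(signal) - window + 1)]
            s = np.linalg.svd(np.array(rows), compute_uv=False)
            s = s / s.sum()                      # normalized singular value spectrum
            return s[:n_values]

        X, y = [], []
        t = np.linspace(0, 4, 512)
        for label in (0, 1):                     # two synthetic "physiological states"
            for _ in range(25):
                sig = np.sin(2 * np.pi * (1.0 + label) * t) + 0.4 * rng.standard_normal(t.size)
                X.append(svd_features(sig))
                y.append(label)

        clf = SVC(kernel="rbf").fit(X[::2], y[::2])
        print("held-out accuracy:", clf.score(X[1::2], y[1::2]))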

  20. A Novel Memetic Algorithm Based on Decomposition for Multiobjective Flexible Job Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Chun Wang

    2017-01-01

    Full Text Available A novel multiobjective memetic algorithm based on decomposition (MOMAD) is proposed to solve the multiobjective flexible job shop scheduling problem (MOFJSP), which simultaneously minimizes makespan, total workload, and critical workload. Firstly, a population is initialized by employing an integration of different machine assignment and operation sequencing strategies. Secondly, the multiobjective memetic algorithm based on decomposition is presented by introducing a local search to MOEA/D. The Tchebycheff approach of MOEA/D converts the three-objective optimization problem to several single-objective optimization subproblems, and the weight vectors are grouped by K-means clustering. Some good individuals corresponding to different weight vectors are selected by the tournament mechanism of a local search. In the experiments, the influence of three different aggregation functions is first studied. Moreover, the effect of the proposed local search is investigated. Finally, MOMAD is compared with eight state-of-the-art algorithms on a series of well-known benchmark instances and the experimental results show that the proposed algorithm outperforms or at least has comparable performance to the other algorithms.
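
    The Tchebycheff scalarization used by MOEA/D-style algorithms such as this one can be sketched as follows: each weight vector turns the multi-objective problem into a single-objective subproblem. The objective values, ideal point, and weights below are made up for illustration.

        import numpy as np

        def tchebycheff(f, lam, z_star):
            """g(x | lambda, z*) = max_i lambda_i * |f_i(x) - z*_i|."""
            return np.max(np.asarray(lam) * np.abs(np.asarray(f) - np.asarray(z_star)))

        z_star = [100.0, 300.0, 80.0]                 # ideal point: makespan, total/critical workload
        lam = [0.5, 0.3, 0.2]                         # one weight vector of the decomposition
        print(tchebycheff([120.0, 340.0, 95.0], lam, z_star))   # scalar fitness of one schedule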

  1. Machine Control System of Steady State Superconducting Tokamak-1

    Energy Technology Data Exchange (ETDEWEB)

    Masand, Harish, E-mail: harish@ipr.res.in; Kumar, Aveg; Bhandarkar, M.; Mahajan, K.; Gulati, H.; Dhongde, J.; Patel, K.; Chudasma, H.; Pradhan, S.

    2016-11-15

    Highlights: • Central Control System. • SST-1. • Machine Control System. - Abstract: The Central Control System (CCS) of the Steady State Superconducting Tokamak-1 (SST-1) controls and monitors around 25 plant and experiment subsystems of SST-1 located remotely from the Central Control room. The Machine Control System (MCS) is a supervisory system that sits at the top of the CCS hierarchy and implements the CCS state diagram. MCS ensures the software interlock between the SST-1 subsystems and the CCS; any subsystem communication failure or local error does not prohibit the execution of the MCS and, in turn, the CCS operation. MCS also periodically monitors the subsystems' status and their vital process parameters throughout the campaign. It also provides the platform for the Central Control operator to visualize and exchange remotely the operational and experimental configuration parameters with the sub-systems. MCS remains operational 24 × 7 from the commencement to the termination of the SST-1 campaign. The developed MCS has performed robustly and flawlessly during all the last campaigns of SST-1 carried out so far. This paper will describe various aspects of the development of MCS.

  2. Support vector machines for nuclear reactor state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Zavaljevski, N.; Gross, K. C.

    2000-02-14

    Validation of nuclear power reactor signals is often performed by comparing signal prototypes with the actual reactor signals. The signal prototypes are often computed based on empirical data. The implementation of an estimation algorithm which can make predictions on limited data is an important issue. A new machine learning algorithm called support vector machines (SVMs) recently developed by Vladimir Vapnik and his coworkers enables a high level of generalization with finite high-dimensional data. The improved generalization in comparison with standard methods like neural networks is due mainly to the following characteristics of the method. The input data space is transformed into a high-dimensional feature space using a kernel function, and the learning problem is formulated as a convex quadratic programming problem with a unique solution. In this paper the authors have applied the SVM method for data-based state estimation in nuclear power reactors. In particular, they implemented and tested kernels developed at Argonne National Laboratory for the Multivariate State Estimation Technique (MSET), a nonlinear, nonparametric estimation technique with a wide range of applications in nuclear reactors. The methodology has been applied to three data sets from experimental and commercial nuclear power reactor applications. The results are promising. The combination of MSET kernels with the SVM method has better noise reduction and generalization properties than the standard MSET algorithm.

  3. Support vector machines for nuclear reactor state estimation

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Gross, K. C.

    2000-01-01

    Validation of nuclear power reactor signals is often performed by comparing signal prototypes with the actual reactor signals. The signal prototypes are often computed based on empirical data. The implementation of an estimation algorithm which can make predictions on limited data is an important issue. A new machine learning algorithm called support vector machines (SVMs) recently developed by Vladimir Vapnik and his coworkers enables a high level of generalization with finite high-dimensional data. The improved generalization in comparison with standard methods like neural networks is due mainly to the following characteristics of the method. The input data space is transformed into a high-dimensional feature space using a kernel function, and the learning problem is formulated as a convex quadratic programming problem with a unique solution. In this paper the authors have applied the SVM method for data-based state estimation in nuclear power reactors. In particular, they implemented and tested kernels developed at Argonne National Laboratory for the Multivariate State Estimation Technique (MSET), a nonlinear, nonparametric estimation technique with a wide range of applications in nuclear reactors. The methodology has been applied to three data sets from experimental and commercial nuclear power reactor applications. The results are promising. The combination of MSET kernels with the SVM method has better noise reduction and generalization properties than the standard MSET algorithm.

  4. Advances in independent component analysis and learning machines

    CERN Document Server

    Bingham, Ella; Laaksonen, Jorma; Lampinen, Jouko

    2015-01-01

    In honour of Professor Erkki Oja, one of the pioneers of Independent Component Analysis (ICA), this book reviews key advances in the theory and application of ICA, as well as its influence on signal processing, pattern recognition, machine learning, and data mining. Examples of topics which have developed from the advances of ICA, which are covered in the book are: A unifying probabilistic model for PCA and ICA Optimization methods for matrix decompositions Insights into the FastICA algorithmUnsupervised deep learning Machine vision and image retrieval A review of developments in the t

  5. Complete permutation Gray code implemented by finite state machine

    Directory of Open Access Journals (Sweden)

    Li Peng

    2014-09-01

    Full Text Available An enumerating method of complete permutation arrays is proposed. The list of n! permutations based on a Gray code defined over the finite symbol set Z(n) = {1, 2, …, n} is implemented by a finite state machine, named n-RPGCF. An RPGCF can be used to search for permutation codes and provide improved lower bounds on the maximum cardinality of a permutation code in some cases.
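
    To make the notion of a permutation Gray code concrete, the sketch below implements one classical construction, the Steinhaus-Johnson-Trotter ordering, in which consecutive permutations of {1, ..., n} differ by a single adjacent transposition. This is an illustration only; it is not the n-RPGCF construction of the record above.

        def sjt_permutations(n):
            perm = list(range(1, n + 1))
            direction = [-1] * n          # -1: element "looks" left, +1: looks right
            yield tuple(perm)
            while True:
                # Find the largest mobile element (one looking at a smaller neighbour).
                mobile = -1
                for i in range(n):
                    j = i + direction[i]
                    if 0 <= j < n and perm[j] < perm[i] and (mobile < 0 or perm[i] > perm[mobile]):
                        mobile = i
                if mobile < 0:
                    return
                j = mobile + direction[mobile]
                perm[mobile], perm[j] = perm[j], perm[mobile]
                direction[mobile], direction[j] = direction[j], direction[mobile]
                moved = perm[j]
                for k in range(n):        # reverse direction of all elements larger than the moved value
                    if perm[k] > moved:
                        direction[k] = -direction[k]
                yield tuple(perm)

        print(list(sjt_permutations(3)))
        # [(1, 2, 3), (1, 3, 2), (3, 1, 2), (3, 2, 1), (2, 3, 1), (2, 1, 3)]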

  6. Underlying finite state machine for the social engineering attack detection model

    CSIR Research Space (South Africa)

    Mouton, Francois

    2017-08-01

    Full Text Available one to have a clearer overview of the mental processing performed within the model. While the current model provides a general procedural template for implementing detection mechanisms for social engineering attacks, the finite state machine provides a...

  7. Employing finite-state machines in data integrity problems

    Directory of Open Access Journals (Sweden)

    Malikov Andrey

    2016-01-01

    Full Text Available This paper explores the issue of group integrity of tuple subsets regarding corporate integrity constraints in relational databases. A solution may be found by applying the finite-state machine theory to guarantee group integrity of data. We present a practical guide to coding such an automaton. After creating SQL queries to manipulate data and control its integrity for real data domains, we study the issue of query performance, determine the level of transaction isolation, and generate query plans.

  8. Nutrient Dynamics and Litter Decomposition in Leucaena ...

    African Journals Online (AJOL)

    Nutrient contents and rate of litter decomposition were investigated in Leucaena leucocephala plantation in the University of Agriculture, Abeokuta, Ogun State, Nigeria. Litter bag technique was used to study the pattern and rate of litter decomposition and nutrient release of Leucaena leucocephala. Fifty grams of oven-dried ...

  9. Multiresolution signal decomposition transforms, subbands, and wavelets

    CERN Document Server

    Akansu, Ali N

    1992-01-01

    This book provides an in-depth, integrated, and up-to-date exposition of the topic of signal decomposition techniques. Application areas of these techniques include speech and image processing, machine vision, information engineering, High-Definition Television, and telecommunications. The book will serve as the major reference for those entering the field, instructors teaching some or all of the topics in an advanced graduate course and researchers needing to consult an authoritative source. The first book to give a unified and coherent exposition of multiresolutional signal decompos

  10. Mathematical modelling of the decomposition of explosives

    International Nuclear Information System (INIS)

    Smirnov, Lev P

    2010-01-01

    Studies on mathematical modelling of the molecular and supramolecular structures of explosives and the elementary steps and overall processes of their decomposition are analyzed. Investigations on the modelling of combustion and detonation taking into account the decomposition of explosives are also considered. It is shown that solution of problems related to the decomposition kinetics of explosives requires the use of a complex strategy based on the methods and concepts of chemical physics, solid state physics and theoretical chemistry instead of empirical approach.

  11. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    Science.gov (United States)

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections and deep learning extensions as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKM are obtained by coupling the RKMs. The method is illustrated for deep RKM, consisting of three levels with a least squares support vector machine regression level and two kernel PCA levels. In its primal form also deep feedforward neural networks can be trained within this framework.
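
    As background for one of the building blocks named above, the following is a plain kernel PCA sketch with an RBF kernel; it shows only the classical formulation, not the conjugate-feature-duality representation proposed in the letter, and the kernel parameter and component count are illustrative.

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    """Classical kernel PCA with an RBF kernel (illustrative parameters)."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                   # double-center the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]    # largest eigenvalues first
    alphas = vecs[:, order] / np.sqrt(np.maximum(vals[order], 1e-12))
    return Kc @ alphas                               # projections of the training points

X = np.random.default_rng(0).standard_normal((50, 3))
print(rbf_kernel_pca(X).shape)                       # (50, 2)
```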

  12. State of the Art Review on Theoretical Tribology of Fluid Power Displacement Machines

    DEFF Research Database (Denmark)

    Cerimagic, Remzija; Johansen, Per; Andersen, Torben O.

    2016-01-01

    … and wear mechanisms in the lubricating gaps in fluid power machines is confined to simulation models, as experimental treatments of these mechanisms are very difficult. The aim of this paper is a state of the art review on the theoretical work for the design and optimization of fluid power displacement machines, and also the work done to validate the theoretical models. This review is not a complete historical account, but aims to describe current trends in fluid power displacement machine tribology. The review considers the rheological models used in the theoretical approaches, the modeling of elastohydrodynamic effects, the modeling of thermal effects, and finally the experimental validation of the theoretical models.

  13. Practical programmable circuits a guide to PLDs, state machines, and microcontrollers

    CERN Document Server

    Broesch, James D

    1991-01-01

    This is a practical guide to programmable logic devices. It covers all devices related to PLD: PALs, PGAs, state machines, and microcontrollers. Usefulness is evaluated; support needed in order to effectively use the devices is discussed. All examples are based on real-world circuits.

  14. Reducing Projection Calculation in Quantum Teleportation by Virtue of the IWOP Technique and Schmidt Decomposition of |η〉 State

    Institute of Scientific and Technical Information of China (English)

    FAN Hong-Yi; FAN Yue

    2002-01-01

    By virtue of the technique of integration within an ordered product of operators and the Schmidt decomposition of the entangled state |η〉, we reduce the general projection calculation in the theory of quantum teleportation to as simple a form as possible and present a general formalism for teleporting quantum states of continuous variables.
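
    Numerically, the Schmidt decomposition of a pure bipartite state is obtained from an SVD of its coefficient matrix; the sketch below is a generic numerical illustration only, not the operator-ordering (IWOP) calculation used in the paper, and the Bell-state example is added for checking.

```python
import numpy as np

def schmidt_decomposition(psi, dim_a, dim_b):
    """Schmidt coefficients and local bases of a pure bipartite state |psi>."""
    M = np.asarray(psi, dtype=complex).reshape(dim_a, dim_b)   # coefficient matrix psi_{ij}
    U, s, Vh = np.linalg.svd(M)
    # |psi> = sum_k s[k] |u_k>|v_k>, with |u_k> = U[:, k] and |v_k> = Vh[k, :]
    return s, U, Vh

# Example: a two-qubit Bell state has two equal Schmidt coefficients 1/sqrt(2).
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
print(schmidt_decomposition(bell, 2, 2)[0])    # [0.7071... 0.7071...]
```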

  15. The efficacy of support vector machines (SVM)

    Indian Academy of Sciences (India)

    (2006) by applying an SVM statistical learning machine on the time-scale wavelet decomposition methods. We used the data of 108 events in central Japan with magnitude ranging from 3 to 7.4 recorded at KiK-net network stations, for a source–receiver distance of up to 150 km during the period 1998–2011. We applied a ...

  16. Effect of Built-Up Edge Formation during Stable State of Wear in AISI 304 Stainless Steel on Machining Performance and Surface Integrity of the Machined Part.

    Science.gov (United States)

    Ahmed, Yassmin Seid; Fox-Rabinovich, German; Paiva, Jose Mario; Wagg, Terry; Veldhuis, Stephen Clarence

    2017-10-25

    During machining of stainless steels at low cutting speeds, workpiece material tends to adhere to the cutting tool at the tool-chip interface, forming a built-up edge (BUE). BUE has a great importance in machining processes; it can significantly modify the phenomena in the cutting zone, directly affecting the workpiece surface integrity, cutting tool forces, and chip formation. The American Iron and Steel Institute (AISI) 304 stainless steel has a high tendency to form an unstable BUE, leading to deterioration of the surface quality. Therefore, it is necessary to understand the nature of the surface integrity induced during machining operations. Although many reports have been published on the effect of tool wear during machining of AISI 304 stainless steel on surface integrity, studies on the influence of the BUE phenomenon in the stable state of wear have not been investigated so far. The main goal of the present work is to investigate the close link between BUE formation, surface integrity and cutting forces in the stable state of wear for an uncoated cutting tool during cutting tests of AISI 304 stainless steel. The cutting parameters were chosen to induce BUE formation during machining. The X-ray diffraction (XRD) method was used for measuring superficial residual stresses of the machined surface through the stable state of wear in the cutting and feed directions. In addition, surface roughness of the machined surface was investigated using the Alicona microscope, and Scanning Electron Microscopy (SEM) was used to reveal the surface distortions created during the cutting process, combined with chip undersurface analyses. The investigated BUE formation during the stable state of wear showed that the BUE can cause a significant improvement in the surface integrity and cutting forces. Moreover, it can be used to compensate for tool wear through changing the tool geometry, leading to the protection of the cutting tool from wear.

  17. Abstract quantum computing machines and quantum computational logics

    Science.gov (United States)

    Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto

    2016-06-01

    Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.
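
    The first two notions can be illustrated with a toy Python sketch; the states, symbols and probabilities below are made up, and the quantum state machine is not reproduced here.

```python
import numpy as np

# A deterministic state machine is a transition function; a classical probabilistic state
# machine replaces it with a stochastic matrix per input symbol and evolves a distribution.
det_delta = {("s0", "a"): "s1", ("s1", "a"): "s0"}         # deterministic transitions

prob_delta = {"a": np.array([[0.9, 0.1],                   # row i: distribution over next
                             [0.2, 0.8]])}                  # states on reading "a" in state i

def run_deterministic(state, word):
    for sym in word:
        state = det_delta[(state, sym)]
    return state

def run_probabilistic(dist, word):
    for sym in word:
        dist = dist @ prob_delta[sym]   # push the state distribution through the stochastic matrix
    return dist

print(run_deterministic("s0", "aaa"))                      # s1
print(run_probabilistic(np.array([1.0, 0.0]), "aaa"))      # distribution over {s0, s1}
```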

  18. Limitations Of The Current State Space Modelling Approach In Multistage Machining Processes Due To Operation Variations

    Science.gov (United States)

    Abellán-Nebot, J. V.; Liu, J.; Romero, F.

    2009-11-01

    The State Space modelling approach has been recently proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effect of the spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.
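
    The State Space model referred to above is usually written as a linear variation-propagation recursion. The sketch below, with made-up numbers for a two-station process, shows that form and where operation variations such as thermal distortion or tool wear would have to enter; it is an assumed illustration, not the experimental case study of the paper.

```python
import numpy as np

# Hypothetical two-station example of x_k = A_k x_{k-1} + B_k u_k + w_k.
# x_k: part deviation state, u_k: fixture/datum errors at station k,
# w_k: lumped operation variations (spindle thermal expansion, flank wear, ...),
# which the current State Space model does not represent explicitly.
A = [np.eye(3), np.array([[1.0, 0.2, 0.0],
                          [0.0, 1.0, 0.0],
                          [0.0, 0.0, 1.0]])]
B = [np.eye(3), np.eye(3)]
u = [np.array([0.01, 0.0, 0.005]),      # assumed fixture errors, station 1
     np.array([0.0, 0.02, 0.0])]        # assumed fixture errors, station 2

x = np.zeros(3)
for k in range(2):
    w = np.zeros(3)                     # zero here; nonzero once operation variations are modeled
    x = A[k] @ x + B[k] @ u[k] + w
print(x)                                # predicted deviation after the final station
```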

  19. Steady-State Characteristics Analysis of Hybrid-Excited Flux-Switching Machines with Identical Iron Laminations

    Directory of Open Access Journals (Sweden)

    Gan Zhang

    2015-11-01

    Full Text Available Since the air-gap field of flux-switching permanent magnet (FSPM) machines is difficult to regulate as it is produced by the stator magnets alone, a type of hybrid-excited flux-switching (HEFS) machine is obtained by reducing the magnet length of an original FSPM machine and introducing a set of field windings into the saved space. In this paper, the steady-state characteristics, especially the loaded performances, of four prototyped HEFS machines, namely, PM-top, PM-middle-1, PM-middle-2, and PM-bottom, are comprehensively compared and evaluated based on both 2D and 3D finite element analysis. Also, the influences of the PM materials, namely ferrite and NdFeB, on the characteristics of HEFS machines are covered. Particularly, the impacts of magnet movement in the corresponding slot on flux-regulating performances are studied in depth. The best overall performances employing NdFeB can be obtained when magnets are located near the air-gap. The FEA predictions are validated by experimental measurements on corresponding machine prototypes.

  20. Laser Beam Machining (LBM), State of the Art and New Opportunities

    NARCIS (Netherlands)

    Meijer, J.

    2004-01-01

    An overview is given of the state of the art of laser beam machining in general with special emphasis on applications of short and ultrashort lasers. In laser welding the trend is to apply optical sensors for process control. Laser surface treatment is mostly used to apply corrosion and wear

  1. Canonical Polyadic Decomposition With Auxiliary Information for Brain-Computer Interface.

    Science.gov (United States)

    Li, Junhua; Li, Chao; Cichocki, Andrzej

    2017-01-01

    Physiological signals are often organized in the form of multiple dimensions (e.g., channel, time, task, and 3-D voxel), so it is better to preserve the original organization structure when processing them. Unlike vector-based methods that destroy data structure, canonical polyadic decomposition (CPD) aims to process physiological signals in the form of a multiway array, which considers relationships between dimensions and preserves the structure information contained in the physiological signal. Nowadays, CPD is utilized as an unsupervised method for feature extraction in a classification problem. After that, a classifier, such as a support vector machine, is required to classify those features. In this manner, the classification task is achieved in two isolated steps. We proposed supervised CPD by directly incorporating auxiliary label information during decomposition, by which a classification task can be achieved without an extra step of classifier training. The proposed method merges the decomposition and classifier learning together, so it simplifies the classification procedure compared with carrying out decomposition and classification separately. In order to evaluate the performance of the proposed method, three different kinds of signals, synthetic signals, EEG signals, and MEG signals, were used. The results based on evaluations of synthetic and real signals demonstrated that the proposed method is effective and efficient.
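
    For reference, a plain (unsupervised) CP decomposition of a three-way array can be computed with alternating least squares as sketched below in NumPy; this shows only the standard CPD step, without the auxiliary label information that the proposed supervised variant incorporates, and the rank and iteration count are illustrative.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return (A[:, None, :] * B[None, :, :]).reshape(A.shape[0] * B.shape[0], A.shape[1])

def cp_als(X, rank, n_iter=100, seed=0):
    """Plain rank-R CP decomposition of a 3-way array via alternating least squares
    (fixed iteration count, no convergence check, for illustration only)."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X1 = X.reshape(I, J * K)                         # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)      # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)      # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```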

  2. Superconducting rotating machines

    International Nuclear Information System (INIS)

    Smith, J.L. Jr.; Kirtley, J.L. Jr.; Thullen, P.

    1975-01-01

    The opportunities and limitations of the applications of superconductors in rotating electric machines are given. The relevant properties of superconductors and the fundamental requirements for rotating electric machines are discussed. The current state-of-the-art of superconducting machines is reviewed. Key problems, future developments and the long range potential of superconducting machines are assessed

  3. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    Science.gov (United States)

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend that is emerging in many real world contexts, extending from web mining, gene expression analysis, protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that the increasing dimensionality poses impeding effects on the performances of classifiers, which is termed as the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performances. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide and conquer approximation scheme is introduced to maintain computational tractability on high volume data. The resultant algorithm proposed is labeled here as Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess the FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrated the superior generalization performance and efficiency of the FSVD-H-ELM. Copyright © 2016 Elsevier Ltd. All rights reserved.
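
    A rough sketch of the idea follows, under assumed details (sigmoid activation, ridge-regularized regression-style output weights, equal-sized random subsets); it is not the authors' implementation, and the function and parameter names are invented for illustration.

```python
import numpy as np

def svd_hidden_elm(X, y, n_hidden=60, n_subsets=3, subset_size=200, reg=1e-3, seed=0):
    """Sketch: hidden weights taken from SVDs of random data subsets, then a standard ELM solve."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    per_subset = max(1, n_hidden // n_subsets)
    weights = []
    for _ in range(n_subsets):
        idx = rng.choice(n, size=min(subset_size, n), replace=False)
        _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)   # dominant directions of the subset
        weights.append(Vt[:per_subset])
    W = np.vstack(weights)                                      # hidden-node weight matrix
    b = rng.standard_normal(W.shape[0])                         # random biases, as in classical ELM
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))                    # sigmoid hidden-layer outputs
    # Ridge-regularized least squares for the output weights (the usual ELM closed form).
    beta = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ y)
    return W, b, beta
```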

  4. A study on optimal task decomposition of networked parallel computing using PVM

    International Nuclear Information System (INIS)

    Seong, Kwan Jae; Kim, Han Gyoo

    1998-01-01

    A numerical study is performed to investigate the effect of task decomposition on networked parallel processes using Parallel Virtual Machine (PVM). In our study, a PVM program distributed over a network of workstations is used in solving a finite difference version of a one-dimensional heat equation, where the natural choice of PVM programming structure is the master-slave paradigm, with the aim of finding an optimal configuration resulting in the least computing time including communication overhead among machines. Given a set of PVM tasks comprising one master and five slave programs, it is found that there exists a pseudo-optimal number of machines, which does not necessarily coincide with the number of tasks, that yields the best performance when the network is under light usage. Increasing the number of machines beyond this optimum does not improve computing performance, since the increase in communication overhead among the excess number of machines offsets the decrease in CPU time obtained by distributing the PVM tasks among these machines. However, when the network traffic is heavy, the results exhibit a more random characteristic that is explained by the random nature of data transfer times.

  5. Single Directional SMO Algorithm for Least Squares Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Xigao Shao

    2013-01-01

    Full Text Available Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for the selection of the working set in sequential minimal optimization (SMO)-type decomposition methods is proposed. With the new method, we can select a single direction to achieve the convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the classification accuracy of the new method is not largely different from that of existing methods, but the training speed is faster than existing ones.

  6. A Hybrid Model Based on Wavelet Decomposition-Reconstruction in Track Irregularity State Forecasting

    Directory of Open Access Journals (Sweden)

    Chaolong Jia

    2015-01-01

    Full Text Available The wavelet transform adapts automatically to the requirements of time-frequency signal analysis, can focus on any detail of the signal, and decomposes a function into a representation over a series of simple basis functions. It is of theoretical and practical significance. Therefore, this paper subdivides track irregularity time series based on the idea of wavelet decomposition-reconstruction and tries to find the best-fitting forecast models for the detail signals and the approximation signal obtained from the wavelet decomposition of the track irregularity time series. Following this approach, piecewise gray-ARMA recursive models based on wavelet decomposition and reconstruction (PG-ARMARWDR) and piecewise ANN-ARMA recursive models based on wavelet decomposition and reconstruction (PANN-ARMARWDR) are proposed. Comparison and analysis of the two models show that both can achieve higher accuracy.
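
    The decomposition-reconstruction step can be illustrated with PyWavelets (assumed available); the wavelet name, decomposition level and test series are illustrative, and the ARMA/ANN forecast models fitted to each component are not shown.

```python
import numpy as np
import pywt   # PyWavelets, assumed available

def decompose_reconstruct(series, wavelet="db4", level=3):
    """Split a series into one approximation and several detail components, each
    reconstructed to full length so a separate forecast model can be fitted to it."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    components = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(kept, wavelet)[: len(series)])
    return components      # components[0] is the approximation signal, the rest are details

series = np.sin(np.linspace(0, 20, 256)) + 0.1 * np.random.default_rng(1).standard_normal(256)
parts = decompose_reconstruct(series)
print(len(parts), [len(p) for p in parts])   # the components sum (approximately) back to the series
```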

  7. Long-term decomposition of sugarcane harvest residues in Sao Paulo state, Brazil

    International Nuclear Information System (INIS)

    Fortes, Caio; Trivelin, Paulo Cesar Ocheuze; Vitti, Andre Cesar

    2012-01-01

    Crop residues returned to the soil are important to preserve fertility and sustainability. This research addressed the long-term decomposition of sugarcane post-harvest residues (trash) under reduced tillage; therefore field renewal was performed with herbicide followed by subsoiling, and ratoons were deprived of interrow scarification. The trial was conducted in the northern Sao Paulo State, Brazil during four consecutive crops (2005–2008) where litter bags containing 15N-labeled trash were disposed in the field attempting to simulate two distinct situations: the previous crop trash (PCT) or residues incorporated in the field after tillage, and post-harvest trash (PHT) or the remains of plant-cane harvest. Decomposition rates regarding dry matter (DM), carbon (C), root growth, plant nutrients (N, P, K, Ca, Mg and S), lignin (LIG), cellulose (CEL) and hemicellulose (HCEL) contents were assessed for PCT (2005–2008) and for PHT (2006–2008). There were significant reductions in DM and C:N ratio due to C losses and root growth within the litter bags over time. The DM from PCT and PHT decreased 96% and 73% after four and three crops, respectively, and the highest nutrient releases were found for K, Ca and N. The LIG, CEL and HCEL concentrations in PCT decreased 60%, 29%, 70% after four crops and 47%, 35%, 70% from PHT after three crops, respectively. Trash decomposition was driven mainly by residue biochemical composition, root growth within the trash blanket and the climatic conditions during the crop cycles. -- Highlights: ► Degradation of sugarcane previous or post-harvest trash (PCT or PHT) was evaluated. ► Dry matter and C decreased due to microbial and root growth within trash blankets. ► C:N ratio of PCT linearly decreased 23% per year during four consecutive crops. ► Lignin, cellulose and hemicellulose concentrations declined on average by 54, 41 and 70%. ► PCT and PHT are long-term sources of C, K, Ca and N to the soil-plant system.

  8. Decomposition of intact chicken feathers by a thermophile in combination with an acidulocomposting garbage-treatment process.

    Science.gov (United States)

    Shigeri, Yasushi; Matsui, Tatsunobu; Watanabe, Kunihiko

    2009-11-01

    In order to develop a practical method for the decomposition of intact chicken feathers, a moderate thermophile strain, Meiothermus ruber H328, having strong keratinolytic activity, was used in a bio-type garbage-treatment machine working with an acidulocomposting process. The addition of strain H328 cells (15 g) combined with acidulocomposting in the garbage machine resulted in 70% degradation of intact chicken feathers (30 g) within 14 d. This degradation efficiency is comparable to a previous result employing the strain as a single bacterium in flask culture, and it indicates that strain H328 can promote intact feather degradation activity in a garbage machine currently on the market.

  9. Thermal decomposition process of silver behenate

    International Nuclear Information System (INIS)

    Liu Xianhao; Lu Shuxia; Zhang Jingchang; Cao Weiliang

    2006-01-01

    The thermal decomposition processes of silver behenate have been studied by infrared spectroscopy (IR), X-ray diffraction (XRD), combined thermogravimetry-differential thermal analysis-mass spectrometry (TG-DTA-MS), transmission electron microscopy (TEM) and UV-vis spectroscopy. The TG-DTA and the higher temperature IR and XRD measurements indicated that complicated structural changes took place while heating silver behenate, but there were two distinct thermal transitions. During the first transition at 138 deg. C, the alkyl chains of silver behenate were transformed from an ordered into a disordered state. During the second transition at about 231 deg. C, a structural change took place for silver behenate, which was the decomposition of silver behenate. The major products of the thermal decomposition of silver behenate were metallic silver and behenic acid. Upon heating up to 500 deg. C, the final product of the thermal decomposition was metallic silver. The combined TG-MS analysis showed that the gas products of the thermal decomposition of silver behenate were carbon dioxide, water, hydrogen, acetylene and some small molecule alkenes. TEM and UV-vis spectroscopy were used to investigate the process of the formation and growth of metallic silver nanoparticles

  10. SEBAL-based Daily Actual Evapotranspiration Forecasting using Wavelets Decomposition Analysis and Multivariate Relevance Vector Machines

    Science.gov (United States)

    Torres, A. F.

    2011-12-01

    … two excellent tools from the machine learning field known as Wavelet Decomposition Analysis (WDA) and the Multivariate Relevance Vector Machine (MVRVM) to forecast the results obtained from the SEBAL algorithm using LandSat imagery and soil moisture maps. The predictive capability of this novel hybrid WDA-RVM actual evapotranspiration forecasting technique is tested by comparing the crop water requirements and delivered crop water in the Lower Sevier River Basin, Utah, for the period 2007-2011. This location was selected because of its success in increasing the efficiency of water use and control along the entire irrigation system. Research is currently ongoing to assess the efficacy of the WDA-RVM technique along the irrigation season, which is required to enhance water use efficiency and minimize the climate change impact on the Sevier River Basin.

  11. Scaling up liquid state machines to predict over address events from dynamic vision sensors.

    Science.gov (United States)

    Kaiser, Jacques; Stal, Rainer; Subramoney, Anand; Roennau, Arne; Dillmann, Rüdiger

    2017-09-01

    Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently. In theoretical neuroscience, liquid state machines have been proposed as a biologically inspired method to perform asynchronous prediction without a model. However, they have so far only been demonstrated in simulation or small scale pre-processed camera images. In this paper, we use a liquid state machine to predict over the whole [Formula: see text] event stream provided by a real dynamic vision sensor (DVS, or silicon retina). Thanks to the event-based nature of the DVS, the liquid is constantly fed with data when an object is in motion, fully embracing the asynchronicity of spiking neural networks. We propose a smooth continuous representation of the event stream for the short-term visual prediction task. Moreover, compared to previous works (2002 Neural Comput. 2525 282-93 and Burgsteiner H et al 2007 Appl. Intell. 26 99-109), we scale the input dimensionality that the liquid operates on by two orders of magnitude. We also expose the current limits of our method by running experiments in a challenging environment where multiple objects are in motion. This paper is a step towards integrating biologically inspired algorithms derived in theoretical neuroscience into real-world robotic setups. We believe that liquid state machines could complement current prediction algorithms used in robotics, especially when dealing with asynchronous sensors.

  12. Logic synthesis for FPGA-based finite state machines

    CERN Document Server

    Barkalov, Alexander; Kolopienczyk, Malgorzata; Mielcarek, Kamil; Bazydlo, Grzegorz

    2016-01-01

    This book discusses control units represented by the model of a finite state machine (FSM). It contains various original methods and takes into account the peculiarities of field-programmable gate array (FPGA) chips and an FSM model. It shows that one of the peculiarities of FPGA chips is the existence of embedded memory blocks (EMBs). The book is devoted to the solution of problems of logic synthesis and reduction of hardware amount in control units. The book will be interesting and useful for researchers and PhD students in the area of Electrical Engineering and Computer Science, as well as for designers of modern digital systems.

  13. Parallel algorithms for testing finite state machines:Generating UIO sequences

    OpenAIRE

    Hierons, RM; Turker, UC

    2016-01-01

    This paper describes an efficient parallel algorithm that uses many-core GPUs for automatically deriving Unique Input Output sequences (UIOs) from Finite State Machines. The proposed algorithm uses the global scope of the GPU's global memory through coalesced memory access and minimises the transfer between CPU and GPU memory. The results of experiments indicate that the proposed method yields considerably better results compared to a single core UIO construction algorithm. Our algorithm is s...

  14. Spectral decomposition of tent maps using symmetry considerations

    International Nuclear Information System (INIS)

    Ordonez, G.E.; Driebe, D.J.

    1996-01-01

    The spectral decomposition of the Frobenius-Perron operator of maps composed of many tents is determined from symmetry considerations. The eigenstates involve Euler as well as Bernoulli polynomials. The authors have introduced some new techniques, based on symmetry considerations, enabling the construction of spectral decompositions in a much simpler way than previous construction algorithms. Here we utilize these techniques to construct the spectral decomposition for one-dimensional maps of the unit interval composed of many tents. The construction uses the knowledge of the spectral decomposition of the r-adic map, which involves Bernoulli polynomials and their duals. It will be seen that the spectral decomposition of the tent maps involves both Bernoulli polynomials and Euler polynomials along with the appropriate dual states.

  15. Generalized decompositions of dynamic systems and vector Lyapunov functions

    Science.gov (United States)

    Ikeda, M.; Siljak, D. D.

    1981-10-01

    The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.

  16. Using Expert Systems in Evaluation of the State of High Voltage Machine Insulation Systems

    Directory of Open Access Journals (Sweden)

    K. Záliš

    2000-01-01

    Full Text Available Expert systems are used for evaluating the actual state and future behavior of insulating systems of high voltage electrical machines and equipment. Several rule-based expert systems have been developed in cooperation with top diagnostic workplaces in the Czech Republic for this purpose. The IZOLEX expert system evaluates diagnostic measurement data from commonly used offline diagnostic methods for the diagnostics of high voltage insulation of rotating machines, non-rotating machines and insulating oils. The CVEX expert system evaluates the discharge activity on high voltage electrical machines and equipment by means of an off-line measurement. The CVEXON expert system is for evaluating the discharge activity by on-line measurement, and the ALTONEX expert system is the expert system for on-line monitoring of rotating machines. These expert systems are also used for educating students (in bachelor, master and post-graduate studies) and in courses organized for practicing engineers, technicians and specialists in the electrical power engineering branch. A complex project has recently been set up to evaluate the measurement of partial discharges. Two parallel expert systems for evaluating partial discharge activity on high voltage electrical machines will work at the same time in this complex evaluating system.

  17. FSM-F: Finite State Machine Based Framework for Denial of Service and Intrusion Detection in MANET.

    Science.gov (United States)

    N Ahmed, Malik; Abdullah, Abdul Hanan; Kaiwartya, Omprakash

    2016-01-01

    Due to the continuous advancements in wireless communication in terms of quality of communication and affordability of the technology, the application areas of Mobile Adhoc Networks (MANETs) are growing significantly, particularly in military and disaster management. Considering the sensitivity of the application areas, security in terms of detection of Denial of Service (DoS) attacks and intrusions has become a prime concern in research and development in the area. The security systems suggested in the past have a state recognition problem, where the system is not able to accurately identify the actual state of the network nodes due to the absence of a clear definition of the states of the nodes. In this context, this paper proposes a framework based on a Finite State Machine (FSM) for denial of service and intrusion detection in MANETs. In particular, an Interruption Detection system for Adhoc On-demand Distance Vector (ID-AODV) protocol is presented based on a finite state machine. The packet dropping and sequence number attacks are closely investigated and detection systems for both types of attacks are designed. The major functional modules of ID-AODV include a network monitoring system, a finite state machine and an attack detection model. Simulations are carried out in the network simulator NS-2 to evaluate the performance of the proposed framework. A comparative evaluation of the performance is also performed with the state-of-the-art techniques: RIDAN and AODV. The performance evaluations attest to the benefits of the proposed framework in terms of providing better security for denial of service and intrusion detection attacks.
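
    The state recognition idea can be sketched with a small finite state machine; the state names and events below are hypothetical and only illustrate how observed node behaviour could drive transitions, not the actual ID-AODV design.

```python
# Hypothetical node-state FSM for packet-drop monitoring; state names and events are
# illustrative only and are not taken from the ID-AODV specification.
TRANSITIONS = {
    ("NORMAL", "forward"): "NORMAL",
    ("NORMAL", "drop"): "SUSPECT",
    ("SUSPECT", "forward"): "NORMAL",
    ("SUSPECT", "drop"): "MALICIOUS",
    ("MALICIOUS", "forward"): "MALICIOUS",
    ("MALICIOUS", "drop"): "MALICIOUS",
}

def classify_node(observed_events, start="NORMAL"):
    """Run a node's observed per-packet events through the FSM and return its final state."""
    state = start
    for event in observed_events:
        state = TRANSITIONS[(state, event)]
    return state

print(classify_node(["forward", "drop", "forward"]))   # NORMAL: an isolated drop is forgiven
print(classify_node(["drop", "drop", "drop"]))         # MALICIOUS: repeated drops trigger detection
```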

  18. Basis adaptation and domain decomposition for steady-state partial differential equations with random coefficients

    Energy Technology Data Exchange (ETDEWEB)

    Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.

    2017-12-01

    We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.

  19. Towards Integration of Object-Oriented Languages and State Machines

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann

    1999-01-01

    The goal of this paper is to obtain a one-to-one correspondence between state machines as e.g. used in UML and object-oriented programming languages. A proposal is made for a language mechanism that makes it possible for an object to change its virtual bindings at run-time. A state of an object may then be represented as a set of virtual bindings. One advantage of object-orientation is that it provides an integrating perspective on many phases of software development, including analysis, design and implementation. For the static set of OO language constructs there is almost a one-to-one correspondence between analysis/design notations and OO programming languages. No such correspondence exists for the dynamic aspects, but the proposed state mechanism is a contribution to a better correspondence. The proposal is based on previous work by Antero Taivalsaari and compared to the more complex features for changing...

  20. Towards an automatic model transformation mechanism from UML state machines to DEVS models

    Directory of Open Access Journals (Sweden)

    Ariel González

    2015-08-01

    Full Text Available The development of complex event-driven systems requires studies and analysis prior to deployment with the goal of detecting unwanted behavior. UML is a language widely used by the software engineering community for modeling these systems through state machines, among other mechanisms. Currently, these models do not have appropriate execution and simulation tools to analyze the real behavior of systems. Existing tools do not provide appropriate libraries (sampling from a probability distribution, plotting, etc.) both to build and to analyze models. Modeling and simulation for design and prototyping of systems are widely used techniques to predict, investigate and compare the performance of systems. In particular, the Discrete Event System Specification (DEVS) formalism separates the modeling and simulation; there are several tools available on the market that run and collect information from DEVS models. This paper proposes a model transformation mechanism from UML state machines to DEVS models in the Model-Driven Development (MDD) context, through the declarative QVT Relations language, in order to perform simulations using tools such as PowerDEVS. A mechanism to validate the transformation is proposed. Moreover, examples of application to analyze the behavior of an automatic banking machine and a control system of an elevator are presented.

  1. A Finite State Machine Approach to Algorithmic Lateral Inhibition for Real-Time Motion Detection †

    Directory of Open Access Journals (Sweden)

    María T. López

    2018-05-01

    Full Text Available Many researchers have explored the relationship between recurrent neural networks and finite state machines. Finite state machines constitute the best-characterized computational model, whereas artificial neural networks have become a very successful tool for modeling and problem solving. The neurally-inspired lateral inhibition method, and its application to motion detection tasks, have been successfully implemented in recent years. In this paper, control knowledge of the algorithmic lateral inhibition (ALI) method is described and applied by means of finite state machines, in which the state space is constituted from the set of distinguishable cases of accumulated charge in a local memory. The article describes an ALI implementation for a motion detection task. For the implementation, we have chosen to use one of the members of the 16-nm Kintex UltraScale+ family of Xilinx FPGAs. FPGAs provide the necessary accuracy, resolution, and precision to run neural algorithms alongside current sensor technologies. The results offered in this paper demonstrate that this implementation provides accurate object tracking performance on several datasets, obtaining a high F-score value (0.86) for the most complex sequence used. Moreover, it outperforms implementations of a complete ALI algorithm and a simplified version of the ALI algorithm—named “accumulative computation”—which was run about ten years ago, now reaching real-time processing times that were simply not achievable at that time for ALI.

  2. Algorithm for determining two-periodic steady-states in AC machines directly in time domain

    Directory of Open Access Journals (Sweden)

    Sobczyk Tadeusz J.

    2016-09-01

    Full Text Available This paper describes an algorithm for finding steady states in AC machines for the cases of their two-periodic nature. The algorithm enables the steady-state solution to be identified directly in the time domain despite the fact that two-periodic waveforms do not repeat in any finite time interval. The basis for such an algorithm is a discrete differential operator that specifies the values of the derivative of the two-periodic function at a selected set of points on the basis of the values of that function at the same set of points. This allows algebraic equations to be developed that define the steady-state solution at the chosen point set for the nonlinear differential equations describing AC machines, when the electrical and mechanical equations have to be solved together. That set of values allows the steady-state solution to be determined at any time instant up to infinity. The algorithm described in this paper is competitive with the approach known in the literature, which is based on the harmonic balance method and operates in the frequency domain.
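
    As a generic illustration of a discrete differentiation operator acting on periodically sampled values (not the specific two-periodic operator constructed in the paper), an FFT-based version for a single-period signal can be written as follows; the test signal is added only to show that samples of the function determine samples of its derivative.

```python
import numpy as np

def periodic_derivative(samples, period):
    """Derivative of a periodic waveform from its equidistant samples, via the FFT.
    A generic single-period operator, not the paper's two-periodic construction."""
    n = len(samples)
    freqs = np.fft.fftfreq(n, d=period / n)            # frequencies in cycles per unit time
    return np.fft.ifft(2j * np.pi * freqs * np.fft.fft(samples)).real

t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
print(np.allclose(periodic_derivative(np.sin(t), 2.0 * np.pi), np.cos(t), atol=1e-9))   # True
```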

  3. Decomposition of the Strategic Plan for Restructuring a Machine-Building Enterprise in View of Continuity of the Plans for Adjacent Periods

    Directory of Open Access Journals (Sweden)

    Kozyr-Chepurna Mariia A.

    2017-09-01

    Full Text Available The aim of the article is to validate in practice the authors' multi-level hierarchical approach to the strategic planning of industrial enterprise restructuring, using the example of solving the problem of disaggregating the strategic plan for restructuring a machine-building enterprise of the electrical industry that provides for organizing the production of railroad freight cars at the enterprise. In addition, the effectiveness of the mechanisms for coordinating the plans of adjacent hierarchical levels and time periods, which are included in the corresponding mathematical support, is demonstrated. In the course of the practical validation, different variants of formulating the problem of decomposing the strategic plan into plans of lower hierarchical levels, differing in terms of coordination of the plans of adjacent hierarchical levels and adjacent planning periods, are considered, and the solutions of the corresponding optimal planning problems are analyzed. It is shown that the developed methodological approach, which is based on methods of statistical optimization, demonstrates quite satisfactory performance in solving the problem of coordinating the plans of adjacent time periods in the sliding planning mode during decomposition of the strategic plan into lower-level plans.

  4. Microbiological decomposition of bagasse after radiation pasteurization

    International Nuclear Information System (INIS)

    Ito, Hitoshi; Ishigaki, Isao

    1987-01-01

    Microbiological decomposition of bagasse was studied for upgrading to animal feeds after radiation pasteurization. Solid-state culture media of bagasse were prepared with the addition of some inorganic salts as a nitrogen source, and after irradiation, the media were inoculated with fungi for cultivation. In this study, many kinds of cellulosic fungi such as Pleurotus ostreatus, P. flavellatus, Verticillium sp., Coprinus cinereus, Lentinus edodes, Aspergillus niger, Trichoderma koningi and T. viride were used for comparison of the decomposition of crude fibers. In alkali-nontreated bagasse, P. ostreatus, P. flavellatus, C. cinereus and Verticillium sp. could decompose crude fibers from 25 to 34 % after one month of cultivation, whereas other fungi such as A. niger, T. koningi, T. viride and L. edodes decomposed below 10 %. On the contrary, alkali treatment enhanced the decomposition of crude fiber by A. niger, T. koningi and T. viride to 29 to 47 %, as well as by Pleurotus species or C. cinereus. Other species of mushrooms such as L. edodes showed little decomposition ability even after alkali treatment. Radiation treatment with 10 kGy could not enhance the decomposition of bagasse compared with steam treatment, whereas higher doses of radiation treatment slightly enhanced the decomposition of crude fibers by microorganisms. (author)

  5. Microbiological decomposition of bagasse after radiation pasteurization

    Energy Technology Data Exchange (ETDEWEB)

    Ito, Hitoshi; Ishigaki, Isao

    1987-11-01

    Microbiological decomposition of bagasse was studied for upgrading to animal feeds after radiation pasteurization. Solid-state culture media of bagasse were prepared with the addition of some inorganic salts as a nitrogen source, and after irradiation, the media were inoculated with fungi for cultivation. In this study, many kinds of cellulosic fungi such as Pleurotus ostreatus, P. flavellatus, Verticillium sp., Coprinus cinereus, Lentinus edodes, Aspergillus niger, Trichoderma koningi and T. viride were used for comparison of the decomposition of crude fibers. In alkali-nontreated bagasse, P. ostreatus, P. flavellatus, C. cinereus and Verticillium sp. could decompose crude fibers from 25 to 34 % after one month of cultivation, whereas other fungi such as A. niger, T. koningi, T. viride and L. edodes decomposed below 10 %. On the contrary, alkali treatment enhanced the decomposition of crude fiber by A. niger, T. koningi and T. viride to 29 to 47 %, as well as by Pleurotus species or C. cinereus. Other species of mushrooms such as L. edodes showed little decomposition ability even after alkali treatment. Radiation treatment with 10 kGy could not enhance the decomposition of bagasse compared with steam treatment, whereas higher doses of radiation treatment slightly enhanced the decomposition of crude fibers by microorganisms.

  6. Approximate multi-state reliability expressions using a new machine learning technique

    International Nuclear Information System (INIS)

    Rocco S, Claudio M.; Muselli, Marco

    2005-01-01

    The machine-learning-based methodology, previously proposed by the authors for approximating binary reliability expressions, is now extended to develop a new algorithm, based on the procedure of Hamming Clustering, which is capable of dealing with multi-state systems and any success criterion. The proposed technique is presented in detail and verified on literature cases: experimental results show that the new algorithm yields excellent predictions.

  7. Thermal decomposition of zirconium compounds with some aromatic hydroxycarboxylic acids

    Energy Technology Data Exchange (ETDEWEB)

    Koshel, A V; Malinko, L A; Karlysheva, K F; Sheka, I A; Shchepak, N I [AN Ukrainskoj SSR, Kiev. Inst. Obshchej i Neorganicheskoj Khimii

    1980-02-01

    Processes of the thermal decomposition of different zirconium compounds with mandelic, parabromomandelic, salicylic and sulphosalicylic acids are investigated by thermogravimetry. For identification of the decomposition products, the specimens were kept at the temperatures of the thermal effects until constant weight was reached. IR spectra and X-ray diffraction patterns were recorded, and elemental analysis of the decomposition products was carried out. It is found that the thermal decomposition of the investigated compounds proceeds in stages; the final product of thermolysis is ZrO2. Non-hydrolyzed compounds are stable on heating in air up to 200-265 deg. Hydroxy compounds begin to decompose at a lower temperature (80-100 deg).

  8. Quantum cloning machines and the applications

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Heng, E-mail: hfan@iphy.ac.cn [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China); Collaborative Innovation Center of Quantum Matter, Beijing 100190 (China); Wang, Yi-Nan; Jing, Li [School of Physics, Peking University, Beijing 100871 (China); Yue, Jie-Dong [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China); Shi, Han-Duo; Zhang, Yong-Liang; Mu, Liang-Zhu [School of Physics, Peking University, Beijing 100871 (China)

    2014-11-20

    No-cloning theorem is fundamental for quantum mechanics and for quantum information science that states an unknown quantum state cannot be cloned perfectly. However, we can try to clone a quantum state approximately with the optimal fidelity, or instead, we can try to clone it perfectly with the largest probability. Thus various quantum cloning machines have been designed for different quantum information protocols. Specifically, quantum cloning machines can be designed to analyze the security of quantum key distribution protocols such as BB84 protocol, six-state protocol, B92 protocol and their generalizations. Some well-known quantum cloning machines include universal quantum cloning machine, phase-covariant cloning machine, the asymmetric quantum cloning machine and the probabilistic quantum cloning machine. In the past years, much progress has been made in studying quantum cloning machines and their applications and implementations, both theoretically and experimentally. In this review, we will give a complete description of those important developments about quantum cloning and some related topics. On the other hand, this review is self-consistent, and in particular, we try to present some detailed formulations so that further study can be taken based on those results.

  9. Quantum cloning machines and the applications

    International Nuclear Information System (INIS)

    Fan, Heng; Wang, Yi-Nan; Jing, Li; Yue, Jie-Dong; Shi, Han-Duo; Zhang, Yong-Liang; Mu, Liang-Zhu

    2014-01-01

    No-cloning theorem is fundamental for quantum mechanics and for quantum information science that states an unknown quantum state cannot be cloned perfectly. However, we can try to clone a quantum state approximately with the optimal fidelity, or instead, we can try to clone it perfectly with the largest probability. Thus various quantum cloning machines have been designed for different quantum information protocols. Specifically, quantum cloning machines can be designed to analyze the security of quantum key distribution protocols such as BB84 protocol, six-state protocol, B92 protocol and their generalizations. Some well-known quantum cloning machines include universal quantum cloning machine, phase-covariant cloning machine, the asymmetric quantum cloning machine and the probabilistic quantum cloning machine. In the past years, much progress has been made in studying quantum cloning machines and their applications and implementations, both theoretically and experimentally. In this review, we will give a complete description of those important developments about quantum cloning and some related topics. On the other hand, this review is self-consistent, and in particular, we try to present some detailed formulations so that further study can be taken based on those results

  10. A generic finite state machine framework for the ACNET control system

    International Nuclear Information System (INIS)

    Carmichael, L.; Warner, A.

    2009-01-01

    A significant level of automation and flexibility has been added to the ACNET control system through the development of a Java-based Finite State Machine (FSM) infrastructure. These FSMs are integrated into ACNET and allow users to easily build, test and execute scripts that have full access to ACNET's functionality. In this paper, a description will be given of the FSM design and its ties to the Java-based Data Acquisition Engine (DAE) framework. Each FSM is part of a client-server model with FSM display clients using Remote Method Invocation (RMI) to communicate with DAE servers heavily coupled to ACNET. A web-based monitoring system that allows users to utilize browsers to observe persistent FSMs will also be discussed. Finally, some key implementations such as the crash recovery FSM developed for the Electron Cooling machine protection system will be presented.

  11. Design of Online Monitoring and Fault Diagnosis System for Belt Conveyors Based on Wavelet Packet Decomposition and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Wei Li

    2013-01-01

    Full Text Available Belt conveyors are widely used in coal mines and other manufacturing factories; their main components are a number of idlers. Faults in belt conveyors can directly affect daily production. In this paper, a fault diagnosis method combining wavelet packet decomposition (WPD) and support vector machine (SVM) is proposed for monitoring belt conveyors, with the focus on the detection of idler faults. Since the number of idlers can be large, one acceleration sensor is applied to gather the vibration signals of several idlers in order to reduce the number of sensors. The vibration signals are decomposed with WPD, and the energy of each frequency band is extracted as the feature. Then, the features are employed to train an SVM to realize the detection of idler faults. The proposed fault diagnosis method is firstly tested on a testbed, and then an online monitoring and fault diagnosis system is designed for belt conveyors. An experiment is also carried out on a belt conveyor in service, and it is verified that the proposed system can locate the position of the faulty idlers with a limited number of sensors, which is important for operating belt conveyors in practice.
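
    The feature-extraction step can be sketched with PyWavelets and scikit-learn (both assumed available); the wavelet name, decomposition level, and the commented usage lines are illustrative assumptions rather than the deployed system's settings.

```python
import numpy as np
import pywt                      # PyWavelets, assumed available
from sklearn.svm import SVC      # scikit-learn, assumed available

def wpd_energy_features(signal, wavelet="db4", level=3):
    """Energy of each terminal wavelet-packet node, normalized to form the feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    return energies / energies.sum()

# Hypothetical usage: `signals` holds vibration windows, `labels` marks healthy/faulty idlers.
# X = np.array([wpd_energy_features(s) for s in signals])
# clf = SVC(kernel="rbf").fit(X, labels)
```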

  12. FSM-F: Finite State Machine Based Framework for Denial of Service and Intrusion Detection in MANET.

    Directory of Open Access Journals (Sweden)

    Malik N Ahmed

    Full Text Available Due to the continuous advancements in wireless communication in terms of quality of communication and affordability of the technology, the application areas of Mobile Adhoc Networks (MANETs) are growing significantly, particularly in military and disaster management. Considering the sensitivity of the application areas, security in terms of detection of Denial of Service (DoS) attacks and intrusions has become a prime concern in research and development in the area. The security systems suggested in the past have a state recognition problem, where the system is not able to accurately identify the actual state of the network nodes due to the absence of a clear definition of the states of the nodes. In this context, this paper proposes a framework based on a Finite State Machine (FSM) for denial of service and intrusion detection in MANETs. In particular, an Interruption Detection system for Adhoc On-demand Distance Vector (ID-AODV) protocol is presented based on a finite state machine. The packet dropping and sequence number attacks are closely investigated and detection systems for both types of attacks are designed. The major functional modules of ID-AODV include a network monitoring system, a finite state machine and an attack detection model. Simulations are carried out in the network simulator NS-2 to evaluate the performance of the proposed framework. A comparative evaluation of the performance is also performed with the state-of-the-art techniques: RIDAN and AODV. The performance evaluations attest to the benefits of the proposed framework in terms of providing better security for denial of service and intrusion detection attacks.

  13. Ozone decomposition

    Directory of Open Access Journals (Sweden)

    Batakliev Todor

    2014-06-01

    Full Text Available Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is of first order. A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates.

  14. Laser-induced diffusion decomposition in Fe–V thin-film alloys

    Energy Technology Data Exchange (ETDEWEB)

    Polushkin, N.I., E-mail: nipolushkin@fc.ul.pt [Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal); Instituto de Ciência e Engenharia de Materiais e Superfícies, 1049-001 Lisboa (Portugal); Duarte, A.C.; Conde, O. [Departamento de Física, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa (Portugal); Instituto de Ciência e Engenharia de Materiais e Superfícies, 1049-001 Lisboa (Portugal); Alves, E. [Associação Euratom/IST e Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal); Barradas, N.P. [Centro de Ciências e Tecnologias Nucleares, Instituto Superior Técnico, Universidade de Lisboa, 2695-066 Bobadela LRS (Portugal); García-García, A.; Kakazei, G.N.; Ventura, J.O.; Araujo, J.P. [Departamento de Física, Universidade do Porto e IFIMUP, 4169-007 Porto (Portugal); Oliveira, V. [Instituto de Ciência e Engenharia de Materiais e Superfícies, 1049-001 Lisboa (Portugal); Instituto Superior de Engenharia de Lisboa, 1959-007 Lisboa (Portugal); Vilar, R. [Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal); Instituto de Ciência e Engenharia de Materiais e Superfícies, 1049-001 Lisboa (Portugal)

    2015-05-01

    Highlights: • Irradiation of an Fe–V alloy by femtosecond laser triggers diffusion decomposition. • The decomposition occurs with strongly enhanced (∼4 orders) atomic diffusivity. • This anomaly is associated with the metallic glassy state achievable under laser quenching. • The ultrafast diffusion decomposition is responsible for laser-induced ferromagnetism. - Abstract: We investigate the origin of ferromagnetism induced in thin-film (∼20 nm) Fe–V alloys by their irradiation with subpicosecond laser pulses. We find with Rutherford backscattering that the magnetic modifications follow a thermally stimulated process of diffusion decomposition, with formation of a-few-nm-thick Fe enriched layer inside the film. Surprisingly, similar transformations in the samples were also found after their long-time (∼10³ s) thermal annealing. However, the laser action provides much higher diffusion coefficients (∼4 orders of magnitude) than those obtained under standard heat treatments. We get a hint that this ultrafast diffusion decomposition occurs in the metallic glassy state achievable in laser-quenched samples. This vitrification is thought to be a prerequisite for the laser-induced onset of ferromagnetism that we observe.

  15. Real Time Robot Soccer Game Event Detection Using Finite State Machines with Multiple Fuzzy Logic Probability Evaluators

    Directory of Open Access Journals (Sweden)

    Elmer P. Dadios

    2009-01-01

    Full Text Available This paper presents a new algorithm for real time event detection using Finite State Machines with multiple Fuzzy Logic Probability Evaluators (FLPEs). A machine referee for a robot soccer game is developed and used as the platform to test the proposed algorithm. A novel technique to detect collisions and other events in a microrobot soccer game under inaccurate and insufficient information is presented. The robots' collisions are used to determine goalkeeper charging and goal score events, which are crucial for the machine referee's decisions. The Main State Machine (MSM) handles the schedule of event activation. The FLPEs calculate the probabilities of the true occurrence of the events. Final decisions about the occurrences of events are evaluated and compared against threshold crisp probability values. The outputs of FLPEs can be combined to calculate the probability of an event composed of subevents. Using multiple fuzzy logic systems, each FLPE requires only a minimal number of rules and can be tuned individually. Experimental results show the accuracy and robustness of the proposed algorithm.
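    As a minimal illustration of this FSM-plus-FLPE pattern (not the authors' referee system; the states, membership functions and thresholds below are invented), a main state machine can activate a fuzzy evaluator and fire an event only when the evaluated probability crosses a crisp threshold:

```python
# Illustrative sketch (not the paper's code): a main state machine activates a
# fuzzy-logic probability evaluator (FLPE) and fires an event only when the
# evaluated probability exceeds a crisp threshold.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

class CollisionFLPE:
    """Toy evaluator: probability that two robots collided, from their
    distance and relative speed (hypothetical ranges in metres and m/s)."""
    def evaluate(self, distance, rel_speed):
        near = triangular(distance, 0.0, 0.02, 0.10)
        fast = triangular(rel_speed, 0.2, 1.0, 1.8)
        return min(near, fast)          # simple fuzzy AND of the memberships

class MainStateMachine:
    """Schedules which evaluators are active and applies the crisp threshold."""
    def __init__(self, threshold=0.6):
        self.state = "MONITOR_PLAY"
        self.threshold = threshold
        self.flpe = CollisionFLPE()

    def step(self, distance, rel_speed):
        if self.state == "MONITOR_PLAY":
            if self.flpe.evaluate(distance, rel_speed) >= self.threshold:
                self.state = "COLLISION_DETECTED"
        return self.state

print(MainStateMachine().step(distance=0.04, rel_speed=0.9))   # COLLISION_DETECTED
print(MainStateMachine().step(distance=0.50, rel_speed=0.1))   # MONITOR_PLAY
```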

  16. On Coding the States of Sequential Machines with the Use of Partition Pairs

    DEFF Research Database (Denmark)

    Zahle, Torben U.

    1966-01-01

    This article introduces a new technique of making state assignment for sequential machines. The technique is in line with the approach used by Hartmanis [l], Stearns and Hartmanis [3], and Curtis [4]. It parallels the work of Dolotta and McCluskey [7], although it was developed independently...

  17. Nickel Oxide (NiO) nanoparticles prepared by solid-state thermal decomposition of a Nickel(II) Schiff base precursor

    Directory of Open Access Journals (Sweden)

    Aliakbar Dehno Khalaji

    2015-06-01

    Full Text Available In this paper, plate-like NiO nanoparticles were prepared by one-pot solid-state thermal decomposition of a nickel(II) Schiff base complex as a new precursor. First, the nickel(II) Schiff base precursor was prepared by solid-state grinding of nickel(II) nitrate hexahydrate, Ni(NO3)2·6H2O, and the Schiff base ligand N,N′-bis(salicylidene)benzene-1,4-diamine for 30 min without using any solvent, catalyst, template or surfactant. It was characterized by Fourier transform infrared spectroscopy (FT-IR) and elemental analysis (CHN). The resultant solid was subsequently annealed in an electrical furnace at 450 °C for 3 h in air atmosphere. Nanoparticles of NiO were produced and characterized by X-ray powder diffraction (XRD) over a 2θ range of 0-140°, FT-IR spectroscopy, scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The XRD and FT-IR results showed that the product is pure and has good crystallinity with a cubic structure, since no characteristic impurity peaks were observed, while the SEM and TEM results showed that the obtained product consists of small, aggregated plate-like particles with a narrow size distribution and an average size in the range 10-40 nm. The results show that the solid-state thermal decomposition method is simple, environmentally friendly, safe and suitable for the preparation of NiO nanoparticles. This method can also be used to synthesize nanoparticles of other metal oxides.

  18. Adaptive image denoising based on support vector machine and wavelet description

    Science.gov (United States)

    An, Feng-Ping; Zhou, Xian-Wei

    2017-12-01

    The adaptive image denoising method decomposes the original image into a series of basic pattern feature images on the basis of a wavelet description and constructs a support vector machine (SVM) regression function to realize the wavelet description of the original image. The support vector machine method allows the linear expansion of the signal to be expressed as a nonlinear function of the parameters associated with the SVM. Using the radial basis kernel function of the SVM, the original image can be expanded into a Mexican hat function and a residual trend. This Mexican hat component represents a basic image feature pattern. If the residual does not fluctuate, it can also be represented as a characteristic pattern. If the residual fluctuates significantly, it is treated as a new image and the same decomposition process is repeated until the residual obtained by the decomposition no longer fluctuates significantly. Experimental results show that the proposed method performs well; in particular, it satisfactorily addresses the problem of image noise removal. It may provide a new tool and method for image denoising.
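    The decompose-and-recurse idea can be sketched on a 1-D signal standing in for an image row; the snippet below is an illustrative sketch assuming scikit-learn and NumPy are available (not the paper's implementation): it fits an RBF-kernel support vector regressor, keeps the fit as a basic pattern, and recurses on the residual until the residual stops fluctuating.

```python
# Minimal sketch: iteratively fit an RBF-kernel support vector regressor to a
# noisy 1-D signal, keep each fit as a "basic pattern", and recurse on the
# residual until it no longer fluctuates appreciably.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200).reshape(-1, 1)
signal = np.sin(6 * np.pi * x.ravel()) + 0.5 * x.ravel() + 0.1 * rng.normal(size=200)

patterns, residual = [], signal.copy()
for _ in range(5):                              # hard cap on decomposition depth
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(x, residual)
    pattern = model.predict(x)
    patterns.append(pattern)
    residual = residual - pattern
    if residual.std() < 0.05:                   # "does not fluctuate significantly"
        break

denoised = np.sum(patterns, axis=0)             # reconstruction without the final residual
print(f"{len(patterns)} pattern(s) extracted; "
      f"residual std = {residual.std():.3f}; denoised std = {denoised.std():.3f}")
```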

  19. Machine learning properties of materials and molecules with entropy-regularized kernels

    Science.gov (United States)

    Ceriotti, Michele; Bartók, Albert; Csányi, Gábor; De, Sandip

    Application of machine-learning methods to physics, chemistry and materials science is gaining traction as a strategy to obtain accurate predictions of the properties of matter at a fraction of the typical cost of quantum mechanical electronic structure calculations. In this endeavor, one can leverage general-purpose frameworks for supervised learning. It is however very important that the input data - for instance the positions of atoms in a molecule or solid - is processed into a form that reflects all the underlying physical symmetries of the problem, and that possesses the regularity properties that are required by machine-learning algorithms. Here we introduce a general strategy to build a representation of this kind. We will start from existing approaches to compare local environments (basically, groups of atoms), and combine them using techniques borrowed from optimal transport theory, discussing the relation between this idea and additive energy decompositions. We will present a few examples demonstrating the potential of this approach as a tool to predict molecular and materials' properties with an accuracy on par with state-of-the-art electronic structure methods. MARVEL NCCR (Swiss National Science Foundation) and ERC StG HBMAP (European Research Council, G.A. 677013).
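    The optimal-transport ingredient can be illustrated with a toy entropy-regularized match between two sets of local-environment descriptors; the vectors and parameters below are random placeholders (in practice they would come from descriptors such as SOAP vectors), and the snippet is a sketch of the idea rather than the authors' implementation.

```python
# Sketch of an entropy-regularized "best match" score between two structures,
# each represented as a set of local-environment feature vectors.
import numpy as np

def regularized_match(envs_a, envs_b, gamma=0.1, n_iter=200):
    """Sinkhorn-style average of the environment kernel K_ij = exp(-||a_i - b_j||^2)."""
    d2 = ((envs_a[:, None, :] - envs_b[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2)                              # environment-environment similarity
    M = np.exp(K / gamma)                        # Sinkhorn scaling matrix
    a = np.full(len(envs_a), 1.0 / len(envs_a))  # uniform weights over environments
    b = np.full(len(envs_b), 1.0 / len(envs_b))
    v = np.ones(len(envs_b))
    for _ in range(n_iter):                      # alternating marginal projections
        u = a / (M @ v)
        v = b / (M.T @ u)
    P = u[:, None] * M * v[None, :]              # transport plan with marginals a, b
    return float((P * K).sum())                  # regularized match score

rng = np.random.default_rng(1)
structure_1 = rng.normal(size=(8, 16))           # 8 environments, 16-dim descriptors
structure_2 = structure_1 + 0.05 * rng.normal(size=(8, 16))
print(regularized_match(structure_1, structure_1))  # self-similarity
print(regularized_match(structure_1, structure_2))  # slightly perturbed copy
```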

  20. Automatic Test Pattern Generator for Fuzzing Based on Finite State Machine

    Directory of Open Access Journals (Sweden)

    Ming-Hung Wang

    2017-01-01

    Full Text Available With the rapid development of the Internet, several emerging technologies are adopted to construct fancy, interactive, and user-friendly websites. Among these technologies, HTML5 is a popular one and is widely used in establishing modern sites. However, the security issues in the new web technologies are also raised and are worthy of investigation. For vulnerability investigation, many previous studies used fuzzing and focused on generation-based approaches to produce test cases for fuzzing; however, these methods require a significant amount of knowledge and mental efforts to develop test patterns for generating test cases. To decrease the entry barrier of conducting fuzzing, in this study, we propose a test pattern generation algorithm based on the concept of finite state machines. We apply graph analysis techniques to extract paths from finite state machines and use these paths to construct test patterns automatically. According to the proposal, fuzzing can be completed through inputting a regular expression corresponding to the test target. To evaluate the performance of our proposal, we conduct an experiment in identifying vulnerabilities of the input attributes in HTML5. According to the results, our approach is not only efficient but also effective for identifying weak validators in HTML5.
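    The path-extraction step can be pictured with a toy finite state machine whose transitions emit string fragments; enumerating start-to-accept paths yields the test patterns. The FSM below (a fragment of an email-like HTML5 input value) is invented for illustration and is not the authors' tool.

```python
# Illustrative sketch: enumerate paths through a small finite state machine and
# turn each path into a test pattern for a fuzzer.
FSM = {                      # state -> list of (emitted token, next state)
    "START": [("user", "LOCAL")],
    "LOCAL": [("@", "AT"), ("+tag", "LOCAL")],
    "AT":    [("example.com", "ACCEPT"), ("", "ACCEPT")],
}
ACCEPTING = {"ACCEPT"}

def paths(state, prefix=(), max_len=6):
    """Depth-first enumeration of token sequences that reach an accepting state."""
    if state in ACCEPTING:
        yield prefix
        return
    if len(prefix) >= max_len:
        return
    for token, nxt in FSM.get(state, []):
        yield from paths(nxt, prefix + (token,), max_len)

test_patterns = ["".join(p) for p in paths("START")]
for pattern in test_patterns:
    print(repr(pattern))     # e.g. 'user@example.com', 'user@', 'user+tag@example.com', ...
```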

  1. FESA class for off-momentum lossmaps and decomposition of beam losses at LHC

    CERN Document Server

    Wyszynski, Michal Jakub; Pojer, Mirko; Salvachua Ferrando, Belen Maria; Valentino, Gianluca; CERN. Geneva. ATS Department

    2016-01-01

    The project consisted of two main parts. The first part was to build a FESA class to serve as a lossmap feedback controller for off-momentum lossmaps, capable of handling 100 Hz BLM data, unlike the existing controller. Thanks to efficient management of the RF frequency, beam dumps during this procedure would be avoided and machine availability would improve by shortening the duration of machine validation after technical stops. The second part concerned the identification of beam losses at the LHC. It was a continuation of the author's work done as a Summer Student project. The aim was to identify issues with the existing loss decomposition matrix for flat top, apply the necessary corrections and construct an analogous matrix for injection.

  2. Solid State Multinuclear Magnetic Resonance Investigation of Electrolyte Decomposition Products on Lithium Ion Electrodes

    Science.gov (United States)

    DeSilva, J. H. S. R.; Udinwe, V.; Sideris, P. J.; Smart, M. C.; Krause, F. C.; Hwang, C.; Smith, K. A.; Greenbaum, S. G.

    2012-01-01

    Solid electrolyte interphase (SEI) formation in lithium ion cells prepared with advanced electrolytes is investigated by solid state multinuclear (7Li, 19F, 31P) magnetic resonance (NMR) measurements of electrode materials harvested from cycled cells subjected to an accelerated aging protocol. The electrolyte composition is varied to include the addition of fluorinated carbonates and triphenyl phosphate (TPP, a flame retardant). In addition to species associated with LiPF6 decomposition, cathode NMR spectra are characterized by the presence of compounds originating from the TPP additive. Substantial amounts of LiF are observed in the anodes as well as compounds originating from the fluorinated carbonates.

  3. Thermal decomposition of 2-methylbenzoates of rare earth elements

    International Nuclear Information System (INIS)

    Brzyska, W.; Szubartowski, L.

    1980-01-01

    The conditions of thermal decomposition of La, Ce(III), Pr, Nd, Sm and Y 2-methylbenzoates were examined. On the basis of the obtained results it was found that the hydrated 2-methylbenzoates undergo dehydration to the anhydrous salts and then decompose into oxides. The activation energies of the dehydration and decomposition reactions of the lanthanide, La and Y 2-methylbenzoates were determined. (author)

  4. Machine medical ethics

    CERN Document Server

    Pontier, Matthijs

    2015-01-01

    The essays in this book, written by researchers from both humanities and sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine in medical settings. Medical machines are in close proximity with human beings, and getting closer: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. In such contexts, machines are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines to be modeled? Is a capacity for e...

  5. Advanced Machine learning Algorithm Application for Rotating Machine Health Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Kanemoto, Shigeru; Watanabe, Masaya [The University of Aizu, Aizuwakamatsu (Japan); Yusa, Noritaka [Tohoku University, Sendai (Japan)

    2014-08-15

    The present paper evaluates the applicability of conventional sound analysis techniques and modern machine learning algorithms to rotating machine health monitoring. These techniques include the support vector machine, deep learning neural networks, etc. Inner ring defect and misalignment anomaly sound data measured on a rotating machine mockup test facility are used to verify the various algorithms. Although we did not find a remarkable difference in anomaly discrimination performance, some methods yield very interesting eigen-patterns corresponding to normal and abnormal states. These results will be useful for future, more sensitive and robust anomaly monitoring technology.

  6. Advanced Machine learning Algorithm Application for Rotating Machine Health Monitoring

    International Nuclear Information System (INIS)

    Kanemoto, Shigeru; Watanabe, Masaya; Yusa, Noritaka

    2014-01-01

    The present paper evaluates the applicability of conventional sound analysis techniques and modern machine learning algorithms to rotating machine health monitoring. These techniques include the support vector machine, deep learning neural networks, etc. Inner ring defect and misalignment anomaly sound data measured on a rotating machine mockup test facility are used to verify the various algorithms. Although we did not find a remarkable difference in anomaly discrimination performance, some methods yield very interesting eigen-patterns corresponding to normal and abnormal states. These results will be useful for future, more sensitive and robust anomaly monitoring technology.

  7. Rotating electrical machines

    CERN Document Server

    Le Doeuff, René

    2013-01-01

    In this book a general matrix-based approach to modeling electrical machines is promulgated. The model uses instantaneous quantities for key variables and enables the user to easily take into account associations between rotating machines and static converters (such as in variable speed drives). General equations of electromechanical energy conversion are established early in the treatment of the topic and then applied to synchronous, induction and DC machines. The primary characteristics of these machines are established for steady state behavior as well as for variable speed scenarios.

  8. Developing a PLC-friendly state machine model: lessons learned

    Science.gov (United States)

    Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans

    2014-07-01

    Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low-level control, and conventional software and platforms for higher level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model-based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ... ? What are the timing and concurrency problems to be expected for the higher level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA. One that does not aim to capture all possible states of a system, but rather one that attempts to organize the coarse-grained structure and behaviour of a system. In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we

   9. Thermal decomposition of biphenyl; Decomposition thermique du biphenyle

    Energy Technology Data Exchange (ETDEWEB)

    Lutz, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1966-03-01

    Liquid and vapour phase pyrolysis of very pure biphenyl obtained by methods described in the text was carried out at 400 C in sealed ampoules, the fraction transformed being always less than 0.1 per cent. The main products were hydrogen, benzene, terphenyls, and a deposit of polyphenyls strongly adhering to the walls. Small quantities of the lower aliphatic hydrocarbons were also found. The variation of the yields of these products with a) the pyrolysis time, b) the state (gas or liquid) of the biphenyl, and c) the pressure of the vapour was measured. Varying the area and nature of the walls showed that in the absence of a liquid phase, the pyrolytic decomposition takes place in the adsorbed layer, and that metallic walls promote the reaction more actively than do those of glass (pyrex or silica). A mechanism is proposed to explain the results pertaining to this decomposition in the adsorbed phase. The adsorption seems to obey a Langmuir isotherm, and the chemical act which determines the overall rate of decomposition is unimolecular. (author)

  10. AUTONOMOUS GAUSSIAN DECOMPOSITION

    International Nuclear Information System (INIS)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John

    2015-01-01

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes
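    The derivative-guided initialization at the heart of this approach can be sketched in a few lines; the snippet below is a toy reimplementation of the idea with SciPy, not the published AGD code, and the synthetic spectrum and thresholds are illustrative.

```python
# Sketch of the core idea: use a smoothed second derivative to guess the number
# and positions of Gaussian components, then refine amplitudes, centres and
# widths by nonlinear least squares.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit
from scipy.signal import argrelmin

def multi_gauss(x, *params):                     # params = (a1, mu1, sig1, a2, ...)
    y = np.zeros_like(x)
    for a, mu, sig in zip(*[iter(params)] * 3):
        y += a * np.exp(-0.5 * ((x - mu) / sig) ** 2)
    return y

rng = np.random.default_rng(2)
x = np.linspace(-10, 10, 500)
truth = multi_gauss(x, 1.0, -3.0, 1.0, 0.6, 2.0, 0.7)
spectrum = truth + 0.02 * rng.normal(size=x.size)

# Initial guesses: minima of the smoothed second derivative mark candidate peaks.
d2 = gaussian_filter1d(spectrum, sigma=5, order=2)
candidates = [i for i in argrelmin(d2)[0] if spectrum[i] > 0.1]   # simple amplitude cut
p0 = []
for i in candidates:
    p0 += [spectrum[i], x[i], 1.0]               # (amplitude, centre, width) guess

popt, _ = curve_fit(multi_gauss, x, spectrum, p0=p0)
print(np.round(popt, 2))                         # recovered component parameters
```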

  11. AUTONOMOUS GAUSSIAN DECOMPOSITION

    Energy Technology Data Exchange (ETDEWEB)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States); Heiles, Carl [Radio Astronomy Lab, UC Berkeley, 601 Campbell Hall, Berkeley, CA 94720 (United States); Hennebelle, Patrick [Laboratoire AIM, Paris-Saclay, CEA/IRFU/SAp-CNRS-Université Paris Diderot, F-91191 Gif-sur Yvette Cedex (France); Goss, W. M. [National Radio Astronomy Observatory, P.O. Box O, 1003 Lopezville, Socorro, NM 87801 (United States); Dickey, John, E-mail: rlindner@astro.wisc.edu [University of Tasmania, School of Maths and Physics, Private Bag 37, Hobart, TAS 7001 (Australia)

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.

  12. Decomposition techniques

    Science.gov (United States)

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as they function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here, nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  13. Heat-machine control by quantum-state preparation: from quantum engines to refrigerators.

    Science.gov (United States)

    Gelbwaser-Klimovsky, D; Kurizki, G

    2014-08-01

    We explore the dependence of the performance bounds of heat engines and refrigerators on the initial quantum state and the subsequent evolution of their piston, modeled by a quantized harmonic oscillator. Our goal is to provide a fully quantized treatment of self-contained (autonomous) heat machines, as opposed to their prevailing semiclassical description that consists of a quantum system alternately coupled to a hot or a cold heat bath and parametrically driven by a classical time-dependent piston or field. Here, by contrast, there is no external time-dependent driving. Instead, the evolution is caused by the stationary simultaneous interaction of two heat baths (having distinct spectra and temperatures) with a single two-level system that is in turn coupled to the quantum piston. The fully quantized treatment we put forward allows us to investigate work extraction and refrigeration by the tools of quantum-optical amplifier and dissipation theory, particularly, by the analysis of amplified or dissipated phase-plane quasiprobability distributions. Our main insight is that quantum states may be thermodynamic resources and can provide a powerful handle, or control, on the efficiency of the heat machine. In particular, a piston initialized in a coherent state can cause the engine to produce work at an efficiency above the Carnot bound in the linear amplification regime. In the refrigeration regime, the coefficient of performance can transgress the Carnot bound if the piston is initialized in a Fock state. The piston may be realized by a vibrational mode, as in nanomechanical setups, or an electromagnetic field mode, as in cavity-based scenarios.

  14. Virtual Machine Language 2.1

    Science.gov (United States)

    Riedel, Joseph E.; Grasso, Christopher A.

    2012-01-01

    VML (Virtual Machine Language) is an advanced computing environment that allows spacecraft to operate using mechanisms ranging from simple, time-oriented sequencing to advanced, multicomponent reactive systems. VML has developed in four evolutionary stages. VML 0 is a core execution capability providing multi-threaded command execution, integer data types, and rudimentary branching. VML 1 added named parameterized procedures, extensive polymorphism, data typing, branching, looping issuance of commands using run-time parameters, and named global variables. VML 2 added for loops, data verification, telemetry reaction, and an open flight adaptation architecture. VML 2.1 contains major advances in control flow capabilities for executable state machines. On the resource requirements front, VML 2.1 features a reduced memory footprint in order to fit more capability into modestly sized flight processors, and endian-neutral data access for compatibility with Intel little-endian processors. Sequence packaging has been improved with object-oriented programming constructs and the use of implicit (rather than explicit) time tags on statements. Sequence event detection has been significantly enhanced with multi-variable waiting, which allows a sequence to detect and react to conditions defined by complex expressions with multiple global variables. This multi-variable waiting serves as the basis for implementing parallel rule checking, which in turn, makes possible executable state machines. The new state machine feature in VML 2.1 allows the creation of sophisticated autonomous reactive systems without the need to develop expensive flight software. Users specify named states and transitions, along with the truth conditions required, before taking transitions. Transitions with the same signal name allow separate state machines to coordinate actions: the conditions distributed across all state machines necessary to arm a particular signal are evaluated, and once found true, that
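    The multi-variable waiting idea can be illustrated with a toy state machine whose transition guards are predicates over several global variables, re-evaluated as telemetry arrives; the names and thresholds below are invented and this is not VML syntax.

```python
# Toy illustration (not VML syntax): an executable state machine whose
# transitions fire when truth conditions over multiple global variables become
# true, mimicking "multi-variable waiting".
globals_db = {"battery_v": 27.5, "sun_angle": 12.0}

class StateMachine:
    def __init__(self, name, transitions, state):
        self.name, self.transitions, self.state = name, transitions, state

    def poll(self, db):
        """Evaluate the conditions of transitions leaving the current state."""
        for src, dst, condition in self.transitions:
            if src == self.state and condition(db):
                print(f"{self.name}: {src} -> {dst}")
                self.state = dst

power_sm = StateMachine(
    "POWER",
    [("SAFE", "NOMINAL", lambda db: db["battery_v"] > 28.0 and db["sun_angle"] < 10.0)],
    state="SAFE",
)

# Telemetry updates arrive; the rule is re-checked each cycle.
for battery_v, sun_angle in [(27.8, 11.0), (28.4, 9.0)]:
    globals_db.update(battery_v=battery_v, sun_angle=sun_angle)
    power_sm.poll(globals_db)

print(power_sm.state)   # NOMINAL once both conditions hold simultaneously
```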

  15. Formal modeling of virtual machines

    Science.gov (United States)

    Cremers, A. B.; Hibbard, T. N.

    1978-01-01

    Systematic software design can be based on the development of a 'hierarchy of virtual machines', each representing a 'level of abstraction' of the design process. The reported investigation presents the concept of 'data space' as a formal model for virtual machines. The presented model of a data space combines the notions of data type and mathematical machine to express the close interaction between data and control structures which takes place in a virtual machine. One of the main objectives of the investigation is to show that control-independent data type implementation is only of limited usefulness as an isolated tool of program development, and that the representation of data is generally dictated by the control context of a virtual machine. As a second objective, a better understanding is to be developed of virtual machine state structures than was heretofore provided by the view of the state space as a Cartesian product.

  16. Self-organized critical pinball machine

    DEFF Research Database (Denmark)

    Flyvbjerg, H.

    2004-01-01

    The nature of self-organized criticality (SOC) is pin-pointed with a simple mechanical model: a pinball machine. Its phase space is fully parameterized by two integer variables, one describing the state of an on-going game, the other describing the state of the machine. This is the simplest...

  17. Multi-Label Classification by Semi-Supervised Singular Value Decomposition.

    Science.gov (United States)

    Jing, Liping; Shen, Chenyang; Yang, Liu; Yu, Jian; Ng, Michael K

    2017-10-01

    Multi-label problems arise in various domains, including automatic multimedia data categorization, and have generated significant interest in the computer vision and machine learning communities. However, existing methods do not adequately address two key challenges: exploiting correlations between labels and making up for scarce or missing labelled data. In this paper, we proposed to use a semi-supervised singular value decomposition (SVD) to handle these two challenges. The proposed model takes advantage of the nuclear norm regularization on the SVD to effectively capture the label correlations. Meanwhile, it introduces manifold regularization on the mapping to capture the intrinsic structure among the data, which provides a good way to reduce the amount of required labelled data while improving the classification performance. Furthermore, we designed an efficient algorithm to solve the proposed model based on the alternating direction method of multipliers, and thus, it can efficiently deal with large-scale data sets. Experimental results for synthetic and real-world multimedia data sets demonstrate that the proposed method can exploit the label correlations and obtain promising and better label prediction results than the state-of-the-art methods.
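    The nuclear-norm ingredient alone can be illustrated by completing a partially observed label matrix with iterative singular value soft-thresholding; the manifold regularization term and the ADMM solver of the full model are omitted here, and the data below are synthetic.

```python
# Minimal sketch of the nuclear-norm ingredient only: complete a partially
# observed binary label matrix by iterative singular value soft-thresholding.
import numpy as np

rng = np.random.default_rng(3)
true_labels = (rng.random((60, 2)) @ rng.random((2, 8)) > 0.5).astype(float)  # correlated labels
mask = rng.random(true_labels.shape) < 0.5          # 50% of labels observed
observed = np.where(mask, true_labels, 0.0)

def svt(M, tau):
    """Singular value soft-thresholding operator."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

X = observed.copy()
for _ in range(100):
    X = svt(X, tau=0.5)                             # shrink towards low rank
    X[mask] = observed[mask]                        # keep the known labels fixed

predicted = (X > 0.5).astype(float)
accuracy = (predicted[~mask] == true_labels[~mask]).mean()
print(f"held-out label accuracy: {accuracy:.2f}")
```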

  18. Kinetics of thermal decomposition of aluminium hydride: I-non-isothermal decomposition under vacuum and in inert atmosphere (argon)

    International Nuclear Information System (INIS)

    Ismail, I.M.K.; Hawkins, T.

    2005-01-01

    Recently, interest in aluminium hydride (alane) as a rocket propulsion ingredient has been renewed due to improvements in its manufacturing process and an increase in thermal stability. When alane is added to solid propellant formulations, rocket performance is enhanced and the specific impulse increases. Preliminary work was performed at AFRL on the characterization and evaluation of two alane samples. Decomposition kinetics were determined from gravimetric TGA data and volumetric vacuum thermal stability (VTS) results. Chemical analysis showed the samples had 88.30% (by weight) aluminium and 9.96% hydrogen. The average density, as measured by helium pycnometry, was 1.486 g/cc. Scanning electron microscopy showed that the particles were mostly composed of sharp-edged crystallographic polyhedra such as simple cubes, cubic octahedrons and hexagonal prisms. Thermogravimetric analysis was utilized to investigate the decomposition kinetics of alane in argon atmosphere and to shed light on the mechanism of alane decomposition. Two kinetic models were successfully developed and used to propose a mechanism for the complete decomposition of alane and to predict its shelf-life during storage. Alane decomposes in two steps. The slowest (rate-determining) step is solely controlled by solid-state nucleation of aluminium crystals; the fastest step is due to growth of the crystals. Thus, during decomposition, hydrogen gas is liberated and the initial polyhedral AlH3 crystals yield a final mix of amorphous aluminium and aluminium crystals. After establishing the kinetic model, prediction calculations indicated that alane can be stored in inert atmosphere at temperatures below 10 deg. C for long periods of time (e.g., 15 years) without significant decomposition. After 15 years of storage, the kinetic model predicts ∼0.1% decomposition, but storage at higher temperatures (e.g. 30 deg. C) is not recommended.
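    The flavour of such a nucleation-and-growth prediction can be conveyed with an Avrami-type conversion law and an Arrhenius rate constant; all numerical values below are placeholders chosen only to reproduce the qualitative behaviour described, and are not the fitted parameters from this work.

```python
# Purely illustrative numbers (not the fitted parameters of this work): an
# Avrami-type conversion law alpha(t) = 1 - exp(-(k t)^n) with an Arrhenius
# rate constant k(T), used to extrapolate isothermal decomposition.
import numpy as np

R = 8.314          # gas constant, J/(mol K)
Ea = 1.2e5         # activation energy, placeholder [J/mol]
A = 1.0e12         # pre-exponential factor, placeholder [1/s]
n = 2.0            # Avrami exponent, placeholder

def conversion(t_seconds, T_kelvin):
    k = A * np.exp(-Ea / (R * T_kelvin))         # Arrhenius rate constant
    return 1.0 - np.exp(-(k * t_seconds) ** n)   # nucleation-and-growth conversion

fifteen_years = 15 * 365.25 * 24 * 3600
for T_c in (10, 30):
    alpha = conversion(fifteen_years, T_c + 273.15)
    print(f"{T_c:2d} deg C after 15 years: {100 * alpha:.3f} % decomposed")
```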

  19. State sales tax rates for soft drinks and snacks sold through grocery stores and vending machines, 2007.

    Science.gov (United States)

    Chriqui, Jamie F; Eidson, Shelby S; Bates, Hannalori; Kowalczyk, Shelly; Chaloupka, Frank J

    2008-07-01

    Junk food consumption is associated with rising obesity rates in the United States. While a "junk food" specific tax is a potential public health intervention, a majority of states already impose sales taxes on certain junk food and soft drinks. This study reviews the state sales tax variance for soft drinks and selected snack products sold through grocery stores and vending machines as of January 2007. Sales taxes vary by state, intended retail location (grocery store vs. vending machine), and product. Vended snacks and soft drinks are taxed at a higher rate than grocery items and other food products, generally, indicative of a "disfavored" tax status attributed to vended items. Soft drinks, candy, and gum are taxed at higher rates than are other items examined. Similar tax schemes in other countries and the potential implications of these findings relative to the relationship between price and consumption are discussed.

  20. Improved Wind Speed Prediction Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2018-05-01

    Full Text Available The wind power industry plays an important role in promoting the development of the low-carbon economy and the energy transformation of the world. However, the randomness and volatility of wind speed series restrict the healthy development of the wind power industry. Accurate wind speed prediction is the key to realizing the stability of wind power integration and to guaranteeing the safe operation of the power system. In this paper, combining Empirical Mode Decomposition (EMD), the Radial Basis Function Neural Network (RBF) and the Least Squares Support Vector Machine (LS-SVM), an improved wind speed prediction model based on Empirical Mode Decomposition (EMD-RBF-LS-SVM) is proposed. The prediction results indicate that, compared with the traditional prediction models (RBF, LS-SVM), the EMD-RBF-LS-SVM model can weaken the random fluctuation to a certain extent and improve the short-term accuracy of wind speed prediction significantly. In a word, this research will significantly reduce the impact of wind power instability on the power grid, ensure the power grid supply and demand balance, reduce the operating costs of grid-connected systems, and enhance the market competitiveness of wind power.
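    The decompose-predict-aggregate pattern can be sketched as follows; the snippet assumes the PyEMD package (pip name EMD-signal) and scikit-learn are available, and an RBF-kernel SVR stands in for the paper's RBF network and LS-SVM components.

```python
# Compact sketch of the decompose-predict-aggregate pattern on synthetic data.
import numpy as np
from PyEMD import EMD
from sklearn.svm import SVR

def lagged(series, n_lags=6):
    """Build (X, y) pairs where X holds the previous n_lags values."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

rng = np.random.default_rng(4)
t = np.arange(600)
wind = 8 + 2 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 0.8, t.size)  # synthetic wind speed

imfs = EMD().emd(wind)                      # intrinsic mode functions (+ residue)
train, horizon = 500, 100
forecast = np.zeros(horizon)
for imf in imfs:                            # one model per component
    X, y = lagged(imf[:train])
    model = SVR(kernel="rbf", C=10.0).fit(X, y)
    window = list(imf[train - 6:train])
    for h in range(horizon):                # recursive multi-step forecast
        nxt = model.predict(np.array(window[-6:]).reshape(1, -1))[0]
        forecast[h] += nxt                  # aggregate across components
        window.append(nxt)

rmse = np.sqrt(np.mean((forecast - wind[train:train + horizon]) ** 2))
print(f"100-step RMSE: {rmse:.2f} m/s")
```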

  1. Joint optimization of maintenance, buffers and machines in manufacturing lines

    Science.gov (United States)

    Nahas, Nabil; Nourelfath, Mustapha

    2018-01-01

    This article considers a series manufacturing line composed of several machines separated by intermediate buffers of finite capacity. The goal is to find the optimal number of preventive maintenance actions performed on each machine, the optimal selection of machines and the optimal buffer allocation plan that minimize the total system cost, while providing the desired system throughput level. The mean times between failures of all machines are assumed to increase when applying periodic preventive maintenance. To estimate the production line throughput, a decomposition method is used. The decision variables in the formulated optimal design problem are buffer levels, types of machines and times between preventive maintenance actions. Three heuristic approaches are developed to solve the formulated combinatorial optimization problem. The first heuristic consists of a genetic algorithm, the second is based on the nonlinear threshold accepting metaheuristic and the third is an ant colony system. The proposed heuristics are compared and their efficiency is shown through several numerical examples. It is found that the nonlinear threshold accepting algorithm outperforms the genetic algorithm and ant colony system, while the genetic algorithm provides better results than the ant colony system for longer manufacturing lines.
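    The threshold-accepting search itself is easy to sketch; the throughput model below is a crude surrogate with diminishing returns in buffer size, not the decomposition-based evaluation used in the article, and all numbers are illustrative.

```python
# Toy illustration of a threshold-accepting search for buffer allocation.
import random

random.seed(0)
N_BUFFERS, TARGET_THROUGHPUT = 4, 0.85

def throughput(buffers):                      # surrogate model, illustration only
    return 0.95 * min(1 - 1.0 / (2 + b) for b in buffers)

def cost(buffers):
    c = sum(buffers)                          # buffer slots are the only cost here
    penalty = 1000 if throughput(buffers) < TARGET_THROUGHPUT else 0
    return c + penalty

current = [10] * N_BUFFERS
best = list(current)
threshold = 5.0
for _ in range(2000):
    cand = list(current)
    i = random.randrange(N_BUFFERS)
    cand[i] = max(1, cand[i] + random.choice([-1, 1]))      # local move
    if cost(cand) - cost(current) < threshold:              # accept non-improving moves too
        current = cand
    if cost(current) < cost(best):
        best = list(current)
    threshold *= 0.998                                       # shrink acceptance threshold

print(best, f"throughput = {throughput(best):.3f}, cost = {cost(best)}")
```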

  2. Diagnostics of the Technical State of Bearings of Mining Machines Base Assemblies

    Science.gov (United States)

    Gerike, Boris L.; Mokrushev, Andrey A.

    2017-10-01

    The article reviews the methods of technical diagnostics of equipment used during maintenance of mining machines in accordance with their actual technical state, and considers the basics of vibration parameter measurement. The classification of existing methods for diagnosing the technical condition of rolling bearings is given. The advantages and disadvantages of these methods are considered. The main defects of rolling bearings arising during manufacturing, transportation, storage, and operation are considered.

  3. Humanizing machines: Anthropomorphization of slot machines increases gambling.

    Science.gov (United States)

    Riva, Paolo; Sacchi, Simona; Brambilla, Marco

    2015-12-01

    Do people gamble more on slot machines if they think that they are playing against humanlike minds rather than mathematical algorithms? Research has shown that people have a strong cognitive tendency to attribute humanlike mental states to nonhuman entities (i.e., anthropomorphism). The present research tested whether anthropomorphizing slot machines would increase gambling. Four studies manipulated slot machine anthropomorphization and found that exposing people to an anthropomorphized description of a slot machine increased gambling behavior and reduced gambling outcomes. Such findings emerged using tasks that focused on gambling behavior (Studies 1 to 3) as well as in experimental paradigms that included gambling outcomes (Studies 2 to 4). We found that gambling outcomes decrease because participants primed with the anthropomorphic slot machine gambled more (Study 4). Furthermore, we found that high-arousal positive emotions (e.g., feeling excited) played a role in the effect of anthropomorphism on gambling behavior (Studies 3 and 4). Our research indicates that the psychological process of gambling-machine anthropomorphism can be advantageous for the gaming industry; however, this may come at great expense for gamblers' (and their families') economic resources and psychological well-being. (c) 2015 APA, all rights reserved.

  4. Machining dynamics fundamentals, applications and practices

    CERN Document Server

    Cheng, Kai

    2008-01-01

    Machining dynamics are vital to the performance of machine tools and machining processes in manufacturing. This book discusses the state-of-the-art applications, practices and research in machining dynamics. It presents basic theory, analysis and control methodology. It is useful for manufacturing engineers, supervisors, engineers and designers.

  5. Unified universal quantum cloning machine and fidelities

    Energy Technology Data Exchange (ETDEWEB)

    Wang Yinan; Shi Handuo; Xiong Zhaoxi; Jing Li; Mu Liangzhu [School of Physics, Peking University, Beijing 100871 (China); Ren Xijun [School of Physics and Electronics, Henan University, Kaifeng 4750011 (China); Fan Heng [Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China)

    2011-09-15

    We present a unified universal quantum cloning machine, which combines several different existing universal cloning machines, including the asymmetric case. In this unified framework, the identical pure states are projected equally into each copy, initially constituted by the input and one half of the maximally entangled states. We show explicitly that the output states of those universal cloning machines are the same. One important feature of this unified cloning machine is that the cloning process is always a symmetric projection, which dramatically reduces the difficulties of implementation. Also, it is found that this unified cloning machine can be directly modified to the general asymmetric case. Besides the global fidelity and the single-copy fidelity, we also present all possible arbitrary-copy fidelities.

  6. Analysis of the steady-state operation of vacuum systems for fusion machines

    International Nuclear Information System (INIS)

    Roose, T.R.; Hoffman, M.A.; Carlson, G.A.

    1975-01-01

    A computer code named GASBAL was written to calculate the steady-state vacuum system performance of multi-chamber mirror machines as well as rather complex conventional multichamber vacuum systems. Application of the code, with some modifications, to the quasi-steady tokamak operating period should also be possible. Basically, GASBAL analyzes free molecular gas flow in a system consisting of a central chamber (the plasma chamber) connected by conductances to an arbitrary number of one- or two-chamber peripheral tanks. Each of the peripheral tanks may have vacuum pumping capability (pumping speed), sources of cold gas, and sources of energetic atoms. The central chamber may have actual vacuum pumping capability, as well as a plasma capable of ionizing injected atoms and impinging gas molecules and ''pumping'' them to a peripheral chamber. The GASBAL code was used in the preliminary design of a large mirror machine experiment--LLL's MX

  7. Short-range clustering and decomposition in copper-nickel and copper-nickel-iron alloys

    International Nuclear Information System (INIS)

    Aalders, T.J.A.

    1982-07-01

    The thermodynamic equilibrium state of short-range clustering and the kinetics of short-range clustering and decomposition have been studied for a number of CuNi(Fe) alloys by means of neutron scattering. The validity of the theories usually applied to describe spinodal decomposition, nucleation and growth, coarsening etc. was investigated. It was shown that, for the investigated substances, the conventional theory of spinodal decomposition is valid for the relaxation of short-range clustering only in the case that the initial and final states do not differ too much. The dynamical scaling procedure described by Lebowitz et al. did not lead to a time-independent scaled function F(x) for the relaxation of short-range clustering, for the early stages of decomposition, or for the case that an alloy which was already decomposed at the quench temperature T1 was annealed at a second temperature T2. For the later stages of decomposition, however, the scaling procedure was indeed successful. The coarsening of the alloys could, except for the later stages, be described by the Lifshitz-Slyozov theory. (Auth.)

  8. Linking climate change and downed woody debris decomposition across forests of the eastern United States

    Science.gov (United States)

    M.B. Russell; C.W. Woodall; A.W. D'Amato; S. Fraver; J.B. Bradford

    2014-01-01

    Forest ecosystems play a critical role in mitigating greenhouse gas emissions. Forest carbon (C) is stored through photosynthesis and released via decomposition and combustion. Relative to C fixation in biomass, much less is known about C depletion through decomposition of woody debris, particularly under a changing climate. It is assumed that the increased...

  9. Equivalence of restricted Boltzmann machines and tensor network states

    Science.gov (United States)

    Chen, Jing; Cheng, Song; Xie, Haidong; Wang, Lei; Xiang, Tao

    2018-02-01

    The restricted Boltzmann machine (RBM) is one of the fundamental building blocks of deep learning. RBM finds wide applications in dimensionality reduction, feature extraction, and recommender systems via modeling the probability distributions of a variety of input data including natural images, speech signals, and customer ratings, etc. We build a bridge between RBM and tensor network states (TNS) widely used in quantum many-body physics research. We devise efficient algorithms to translate an RBM into the commonly used TNS. Conversely, we give sufficient and necessary conditions to determine whether a TNS can be transformed into an RBM of given architectures. Revealing these general and constructive connections can cross-fertilize both deep learning and quantum many-body physics. Notably, by exploiting the entanglement entropy bound of TNS, we can rigorously quantify the expressive power of RBM on complex data sets. Insights into TNS and its entanglement capacity can guide the design of more powerful deep learning architectures. On the other hand, RBM can represent quantum many-body states with fewer parameters compared to TNS, which may allow more efficient classical simulations.
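    The object being translated can be made concrete with the standard RBM parameterization of a wavefunction, in which the hidden units are traced out analytically; the small example below only enumerates the amplitudes of a random RBM state and does not reproduce the paper's RBM-to-TNS mapping.

```python
# Small numeric example of the standard RBM wavefunction parameterization
# psi(s) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ij s_i),
# with the hidden units traced out analytically.
import itertools
import numpy as np

rng = np.random.default_rng(5)
n_visible, n_hidden = 4, 3
a = 0.1 * rng.normal(size=n_visible)            # visible biases
b = 0.1 * rng.normal(size=n_hidden)             # hidden biases
W = 0.1 * rng.normal(size=(n_visible, n_hidden))

def amplitude(spins):
    s = np.array(spins, dtype=float)            # spins in {-1, +1}
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + s @ W))

basis = list(itertools.product([-1, 1], repeat=n_visible))
psi = np.array([amplitude(s) for s in basis])
psi /= np.linalg.norm(psi)                      # normalize over the full basis
print(psi.round(3))                             # 2^4 amplitudes of the RBM state
```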

  10. High Performance Polar Decomposition on Distributed Memory Systems

    KAUST Repository

    Sukkari, Dalal E.

    2016-08-08

    The polar decomposition of a dense matrix is an important operation in linear algebra. It can be directly calculated through the singular value decomposition (SVD) or iteratively using the QR dynamically-weighted Halley algorithm (QDWH). The former is difficult to parallelize due to the preponderant number of memory-bound operations during the bidiagonal reduction. We investigate the latter scenario, which performs more floating-point operations but exposes at the same time more parallelism, and therefore, runs closer to the theoretical peak performance of the system, thanks to more compute-bound matrix operations. Profiling results show the performance scalability of QDWH for calculating the polar decomposition using around 9200 MPI processes on well and ill-conditioned matrices of 100K×100K problem size. We study then the performance impact of the QDWH-based polar decomposition as a pre-processing step toward calculating the SVD itself. The new distributed-memory implementation of the QDWH-SVD solver achieves up to five-fold speedup against current state-of-the-art vendor SVD implementations. © Springer International Publishing Switzerland 2016.
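    For orientation, the polar factor targeted here can be computed in a few lines via the SVD, or by a simple Newton iteration on the orthogonal factor; QDWH itself uses a dynamically weighted Halley iteration, which is not reproduced in this sketch.

```python
# Sketch: polar decomposition A = U H computed two ways, via the SVD and via a
# simple Newton iteration X <- (X + X^{-T}) / 2 for real nonsingular A.
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(200, 200))

# Route 1: through the SVD.
W, s, Vt = np.linalg.svd(A)
U_svd = W @ Vt
H = Vt.T @ np.diag(s) @ Vt                        # symmetric positive-definite factor

# Route 2: Newton iteration on the orthogonal factor.
X = A.copy()
for _ in range(40):
    X = 0.5 * (X + np.linalg.inv(X).T)

print(np.allclose(U_svd, X, atol=1e-8))           # both give the same polar factor
print(np.allclose(A, U_svd @ H, atol=1e-8))       # and A = U H is recovered
```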

  11. Enter the machine

    Science.gov (United States)

    Palittapongarnpim, Pantita; Sanders, Barry C.

    2018-05-01

    Quantum tomography infers quantum states from measurement data, but it becomes infeasible for large systems. Machine learning enables tomography of highly entangled many-body states and suggests a new powerful approach to this problem.

  12. Asymmetric quantum cloning machines

    International Nuclear Information System (INIS)

    Cerf, N.J.

    1998-01-01

    A family of asymmetric cloning machines for quantum bits and N-dimensional quantum states is introduced. These machines produce two approximate copies of a single quantum state that emerge from two distinct channels. In particular, an asymmetric Pauli cloning machine is defined that makes two imperfect copies of a quantum bit, while the overall input-to-output operation for each copy is a Pauli channel. A no-cloning inequality is derived, characterizing the impossibility of copying imposed by quantum mechanics. If p and p′ are the probabilities of the depolarizing channels associated with the two outputs, the domain in (√p, √p′)-space located inside a particular ellipse representing close-to-perfect cloning is forbidden. This ellipse tends to a circle when copying an N-dimensional state with N→∞, which has a simple semi-classical interpretation. The symmetric Pauli cloning machines are then used to provide an upper bound on the quantum capacity of the Pauli channel of probabilities p_x, p_y and p_z. The capacity is proven to be vanishing if (√p_x, √p_y, √p_z) lies outside an ellipsoid whose pole coincides with the depolarizing channel that underlies the universal cloning machine. Finally, the tradeoff between the quality of the two copies is shown to result from a complementarity akin to the Heisenberg uncertainty principle. (author)

  13. An Improved Abstract State Machine Based Choreography Specification and Execution Algorithm for Semantic Web Services

    Directory of Open Access Journals (Sweden)

    Shahin Mehdipour Ataee

    2018-01-01

    Full Text Available We identify significant weaknesses in the original Abstract State Machine (ASM) based choreography algorithm of the Web Service Modeling Ontology (WSMO), which make it impractical for use in semantic web service choreography engines. We present an improved algorithm which rectifies the weaknesses of the original algorithm, as well as a practical, fully functional choreography engine implementation in Flora-2 based on the improved algorithm. Our improvements to the choreography algorithm include (i) the linking of the initial state of the ASM to the precondition of the goal, (ii) the introduction of the concept of a final state in the execution of the ASM and its linking to the postcondition of the goal, and (iii) modification of the execution of the ASM so that it stops when the final state condition is satisfied by the current configuration of the machine. Our choreography engine takes as input semantic web service specifications written in the Flora-2 dialect of F-logic. Furthermore, we prove the equivalence of ASMs (evolving algebras) and evolving ontologies in the sense that one can simulate the other, a first in the literature. Finally, we present a visual editor which facilitates the design and deployment of our F-logic based web service and goal specifications.
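    The described execution semantics can be pictured with a toy run loop (not the Flora-2 engine): the initial state must satisfy the goal precondition, guarded rules update the configuration, and execution stops once the postcondition holds; the choreography below is hypothetical.

```python
# Toy run loop illustrating the described improvements: precondition check on
# the initial state, ASM-style guarded rules, and termination on the
# postcondition (final state).
def run_asm(state, rules, precondition, postcondition, max_steps=100):
    if not precondition(state):
        raise ValueError("initial state does not satisfy the goal precondition")
    for _ in range(max_steps):
        if postcondition(state):                      # final-state check
            return state
        fired = False
        for guard, update in rules:                   # ASM rules: if guard then update
            if guard(state):
                update(state)
                fired = True
        if not fired:
            break                                     # no rule applicable: stuck
    raise RuntimeError("postcondition never reached")

# Hypothetical purchasing choreography state.
state = {"order_placed": True, "paid": False, "shipped": False}
rules = [
    (lambda s: s["order_placed"] and not s["paid"], lambda s: s.update(paid=True)),
    (lambda s: s["paid"] and not s["shipped"],      lambda s: s.update(shipped=True)),
]
final = run_asm(
    state, rules,
    precondition=lambda s: s["order_placed"],         # goal precondition
    postcondition=lambda s: s["shipped"],              # goal postcondition
)
print(final)
```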

  14. Implementation of domain decomposition and data decomposition algorithms in RMC code

    International Nuclear Information System (INIS)

    Liang, J.G.; Cai, Y.; Wang, K.; She, D.

    2013-01-01

    The application of the Monte Carlo method in reactor physics analysis is somewhat restricted due to the excessive memory demand in solving large-scale problems. Memory demand in MC simulation is analyzed first; it comprises geometry data, nuclear cross-section data, particle data, and tally data. It appears that tally data dominate the memory cost and should be the focus in solving the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a strategy of 'divide and rule', which means problems are divided into different sub-domains to be dealt with separately and some rules are established to make sure the whole results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatially parallel method. As for the tally data decomposition algorithms, memory size is greatly reduced.

  15. Benders’ Decomposition for Curriculum-Based Course Timetabling

    DEFF Research Database (Denmark)

    Bagger, Niels-Christian F.; Sørensen, Matias; Stidsen, Thomas R.

    2018-01-01

    In this paper we applied Benders' decomposition to the Curriculum-Based Course Timetabling (CBCT) problem. The objective of the CBCT problem is to assign a set of lectures to time slots and rooms. Our approach was based on segmenting the problem into time scheduling and room allocation problems ... feasibility. We compared our algorithm with other approaches from the literature for a total of 32 data instances. We obtained a lower bound on 23 of the instances, which were at least as good as the lower bounds obtained by the state-of-the-art, and on eight of these, our lower bounds were higher. On two of the instances, our lower bound was an improvement of the currently best-known. Lastly, we compared our decomposition to the model without the decomposition on an additional six instances, which are much larger than the other 32. To our knowledge, this was the first time that lower bounds were calculated...

  16. Thermal decomposition of biphenyl (1963); Decomposition thermique du biphenyle (1963)

    Energy Technology Data Exchange (ETDEWEB)

    Clerc, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1962-06-15

    The rates of formation of the decomposition products of biphenyl (hydrogen, methane, ethane, ethylene, as well as triphenyls) have been measured in the vapour and liquid phases at 460 deg. C. The study of the decomposition products of biphenyl at different temperatures between 400 and 460 deg. C has provided values of the activation energies of the reactions yielding the main products of pyrolysis in the vapour phase. Product and activation energy: Hydrogen 73 ± 2 kcal/mole; Benzene 76 ± 2 kcal/mole; Meta-triphenyl 53 ± 2 kcal/mole; Biphenyl decomposition 64 ± 2 kcal/mole. The rate of disappearance of biphenyl is only very approximately first order. These results show the major role played at the start of the decomposition by organic impurities which are not detectable by conventional physico-chemical analysis methods and whose presence accelerates noticeably the decomposition rate. It was possible to eliminate these impurities by zone-melting carried out until the initial gradient of the formation curves for the products became constant. The composition of the high-molecular-weight products (over 250) was deduced from the mean molecular weight and the determination of aromatic C-H bonds by infrared spectrophotometry. As a result the existence in the tars of hydrogenated tetra-, penta- and hexaphenyl has been demonstrated. (author)

  17. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
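    The price-coordination mechanism can be reduced to a one-step toy example with quadratic local costs and a single shared-resource constraint; this shows the dual-decomposition update only, not the chapter's MPC formulation, and all numbers are illustrative.

```python
# Minimal price-coordination example: two subsystems share a resource,
# u1 + u2 <= capacity, and a coordinator updates the price by subgradient ascent.
import numpy as np

capacity = 6.0
targets = np.array([5.0, 4.0])          # each subsystem would like u_i = target_i

def local_response(price):
    """Each subsystem minimizes (u_i - target_i)^2 + price * u_i locally."""
    return np.maximum(targets - price / 2.0, 0.0)

price = 0.0
for _ in range(200):                    # coordinator: subgradient ascent on the price
    u = local_response(price)
    violation = u.sum() - capacity      # coupling-constraint subgradient
    price = max(0.0, price + 0.1 * violation)

print(f"price = {price:.2f}, allocations = {np.round(local_response(price), 2)}")
```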

  18. Learning Activity Packets for Milling Machines. Unit I--Introduction to Milling Machines.

    Science.gov (United States)

    Oklahoma State Board of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.

    This learning activity packet (LAP) outlines the study activities and performance tasks covered in a related curriculum guide on milling machines. The course of study in this LAP is intended to help students learn to identify parts and attachments of vertical and horizontal milling machines, identify work-holding devices, state safety rules, and…

  19. Using support vector machines in the multivariate state estimation technique

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Gross, K.C.

    1999-01-01

    One approach to validate nuclear power plant (NPP) signals makes use of pattern recognition techniques. This approach often assumes that there is a set of signal prototypes that are continuously compared with the actual sensor signals. These signal prototypes are often computed based on empirical models with little or no knowledge about physical processes. A common problem of all data-based models is their limited ability to make predictions on the basis of available training data. Another problem is related to suboptimal training algorithms. Both of these potential shortcomings with conventional approaches to signal validation and sensor operability validation are successfully resolved by adopting a recently proposed learning paradigm called the support vector machine (SVM). The work presented here is a novel application of SVM for data-based modeling of system state variables in an NPP, integrated with a nonlinear, nonparametric technique called the multivariate state estimation technique (MSET), an algorithm developed at Argonne National Laboratory for a wide range of nuclear plant applications

  20. Tracking an open quantum system using a finite state machine: Stability analysis

    International Nuclear Information System (INIS)

    Karasik, R. I.; Wiseman, H. M.

    2011-01-01

    A finite-dimensional Markovian open quantum system will undergo quantum jumps between pure states, if we can monitor the bath to which it is coupled with sufficient precision. In general these jumps, plus the between-jump evolution, create a trajectory which passes through infinitely many different pure states, even for ergodic systems. However, as shown recently by us [Phys. Rev. Lett. 106, 020406 (2011)], it is possible to construct adaptive monitorings which restrict the system to jumping between a finite number of states. That is, it is possible to track the system using a finite state machine as the apparatus. In this paper we consider the question of the stability of these monitoring schemes. Restricting to cyclic jumps for a qubit, we give a strong analytical argument that these schemes are always stable and supporting analytical and numerical evidence for the example of resonance fluorescence. This example also enables us to explore a range of behaviors in the evolution of individual trajectories, for several different monitoring schemes.

  1. A Hybrid Least Square Support Vector Machine Model with Parameters Optimization for Stock Forecasting

    Directory of Open Access Journals (Sweden)

    Jian Chai

    2015-01-01

    Full Text Available This paper proposes an EMD-LSSVM (empirical mode decomposition least squares support vector machine) model to analyze the CSI 300 index. A WD-LSSVM (wavelet denoising least squares support vector machine) is also proposed as a benchmark to compare with the performance of EMD-LSSVM. Since parameter selection is vital to the performance of the model, different optimization methods are used, including simplex, GS (grid search), PSO (particle swarm optimization), and GA (genetic algorithm). Experimental results show that the EMD-LSSVM model with the GS algorithm outperforms the other methods in predicting stock market movement direction.
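
    A rough sketch of the decompose-then-forecast idea with grid search (assumptions: the PyEMD package provides the EMD step, and scikit-learn's SVR stands in for the paper's LS-SVM):

```python
import numpy as np
from PyEMD import EMD              # assumed: the PyEMD (EMD-signal) package
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

series = np.sin(np.linspace(0, 20, 400)) + 0.3 * np.random.randn(400)
imfs = EMD().emd(series)           # intrinsic mode functions plus residue

lag = 5
forecast = np.zeros(1)
for imf in imfs:
    # Build lagged samples for one-step-ahead prediction of this IMF.
    X = np.array([imf[i:i + lag] for i in range(len(imf) - lag)])
    y = imf[lag:]
    grid = GridSearchCV(SVR(kernel="rbf"),
                        {"C": [1, 10, 100], "gamma": ["scale", 0.1]}, cv=3)
    grid.fit(X, y)
    # Forecast the next value of this IMF and aggregate across IMFs.
    forecast += grid.predict(imf[-lag:].reshape(1, -1))

print("one-step-ahead forecast:", forecast[0])
```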

  2. Gas hydrates forming and decomposition conditions analysis

    Directory of Open Access Journals (Sweden)

    А. М. Павленко

    2017-07-01

    Full Text Available The concept of gas hydrates has been defined; their brief description has been given; factors that affect the formation and decomposition of the hydrates have been reported; their distribution, structure and thermodynamic conditions determining the gas hydrates formation disposition in gas pipelines have been considered. Advantages and disadvantages of the known methods for removing gas hydrate plugs in the pipeline have been analyzed, the necessity of their further studies has been proved. In addition to the negative impact on the process of gas extraction, the hydrates properties make it possible to outline the following possible fields of their industrial use: obtaining ultrahigh pressures in confined spaces at the hydrate decomposition; separating hydrocarbon mixtures by successive transfer of individual components through the hydrate given the mode; obtaining cold due to heat absorption at the hydrate decomposition; elimination of the open gas fountain by means of hydrate plugs in the bore hole of the gushing gasser; seawater desalination, based on the hydrate ability to only bind water molecules into the solid state; wastewater purification; gas storage in the hydrate state; dispersion of high temperature fog and clouds by means of hydrates; water-hydrates emulsion injection into the productive strata to raise the oil recovery factor; obtaining cold in the gas processing to cool the gas, etc.

  3. Fast implementation of the 1\\rightarrow3 orbital state quantum cloning machine

    Science.gov (United States)

    Lin, Jin-Zhong

    2018-05-01

    We present a scheme to implement a 1→3 orbital state quantum cloning machine assisted by quantum Zeno dynamics. By constructing shortcuts to adiabatic passage with transitionless quantum driving, we can complete this scheme effectively and quickly in one step. The effects of decoherence, including spontaneous emission and the decay of the cavity, are also discussed. The numerical simulation results show that high fidelity can be obtained and the feasibility analysis indicates that this can also be realized in experiments.

  4. A machine-learning approach for damage detection in aircraft structures using self-powered sensor data

    Science.gov (United States)

    Salehi, Hadi; Das, Saptarshi; Chakrabartty, Shantanu; Biswas, Subir; Burgueño, Rigoberto

    2017-04-01

    This study proposes a novel strategy for damage identification in aircraft structures. The strategy was evaluated based on the simulation of binary data generated from self-powered wireless sensors employing a pulse switching architecture. The energy-aware pulse switching communication protocol uses single pulses instead of multi-bit packets for information delivery, resulting in discrete binary data. A system employing this energy-efficient technology requires dealing with time-delayed binary data due to the management of power budgets for sensing and communication. This paper presents an intelligent machine-learning framework based on a combination of low-rank matrix decomposition and pattern recognition (PR) methods. Further, data fusion is employed as part of the machine-learning framework to take into account the effect of data time delay on its interpretation. Simulated time-delayed binary data from self-powered sensors were used to determine damage indicator variables. The performance and accuracy of the damage detection strategy were examined and tested for the case of an aircraft horizontal stabilizer. Damage states were simulated on a finite element model by reducing stiffness in a region of the stabilizer's skin. The proposed strategy shows satisfactory performance in identifying the presence and location of the damage, even with noisy and incomplete data. It is concluded that PR is a promising machine-learning algorithm for damage detection with time-delayed binary data from novel self-powered wireless sensors.

  5. Combination of Empirical Mode Decomposition Components of HRV Signals for Discriminating Emotional States

    Directory of Open Access Journals (Sweden)

    Ateke Goshvarpour

    2016-06-01

    Full Text Available Introduction: Automatic human emotion recognition is one of the most interesting topics in the field of affective computing. However, developing a reliable approach with a reasonable recognition rate is a challenging task. The main objective of the present study was to propose a robust method for discriminating emotional responses through examination of heart rate variability (HRV). In the present study, considering the non-stationary and non-linear characteristics of HRV, the empirical mode decomposition technique was utilized as a feature extraction approach. Materials and Methods: In order to induce the emotional states, images representing four emotional states, i.e., happiness, peacefulness, sadness, and fearfulness, were presented. Simultaneously, HRV was recorded in 47 college students. The signals were decomposed into intrinsic mode functions (IMFs). For each IMF and different IMF combinations, 17 standard and non-linear parameters were extracted. The Wilcoxon test was conducted to assess the difference between IMF parameters in different emotional states. Afterwards, a probabilistic neural network was used to classify the features into emotional classes. Results: Based on the findings, maximum classification rates were achieved when all IMFs were fed into the classifier. Under such circumstances, the proposed algorithm could discriminate the affective states with sensitivity, specificity, and correct classification rate of 99.01%, 100%, and 99.09%, respectively. In contrast, the lowest discrimination rates were attained by the IMF1 frequency and its combinations. Conclusion: The high performance of the present approach indicates that the proposed method is applicable for automatic emotion recognition.

  6. Preliminary Development of Real Time Usage-Phase Monitoring System for CNC Machine Tools with a Case Study on CNC Machine VMC 250

    Science.gov (United States)

    Budi Harja, Herman; Prakosa, Tri; Raharno, Sri; Yuwana Martawirya, Yatna; Nurhadi, Indra; Setyo Nogroho, Alamsyah

    2018-03-01

    The production characteristic of the job-shop industry, in which products have wide variety but small volumes, means that every machine tool is shared to conduct production processes under dynamic load. This dynamic operating condition directly affects the reliability of machine tool components. Hence, the maintenance schedule for every component should be calculated based on the actual usage of each machine tool component. This paper describes a study on the development of a monitoring system for obtaining information about the usage of each CNC machine tool component in real time, approached by grouping components based on their operation phase. A special device has been developed for monitoring machine tool component usage by utilizing usage-phase activity data taken from certain electronic components within the CNC machine. These components are the adaptor, servo driver and spindle driver, as well as some additional components such as a microcontroller and relays. The obtained data are used to detect machine utilization phases such as the power-on state, machine-ready state or spindle-running state. Experimental results have shown that the developed CNC machine tool monitoring system is capable of obtaining phase information on machine tool usage as well as its duration, and of displaying the information in the user interface application.

  7. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition.

    Science.gov (United States)

    Yuan, Rui; Lv, Yong; Song, Gangbing

    2018-04-16

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. The adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy in order to alleviate power imbalances. The filter bank properties of APIT-MEMD are also adopted to enable more accurate and stable intrinsic mode functions (IMFs), and to ease mode mixing problems in multi-fault frequency extraction. By aligning IMF sets into a third order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is used to conduct correlation analysis in order to determine effective IMFs; the characteristic frequencies of multiple faults can then be extracted. Numerical simulations and the application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals.

  8. Universality of Schmidt decomposition and particle identity

    Science.gov (United States)

    Sciara, Stefania; Lo Franco, Rosario; Compagno, Giuseppe

    2017-03-01

    Schmidt decomposition is a widely employed tool of quantum theory which plays a key role for distinguishable particles in scenarios such as entanglement characterization, theory of measurement and state purification. Yet, its formulation for identical particles remains controversial, jeopardizing its application to analyze general many-body quantum systems. Here we prove, using a newly developed approach, a universal Schmidt decomposition which allows faithful quantification of the physical entanglement due to the identity of particles. We find that it is affected by single-particle measurement localization and state overlap. We study paradigmatic two-particle systems where identical qubits and qutrits are located in the same place or in separated places. For the case of two qutrits in the same place, we show that their entanglement behavior, whose physical interpretation is given, differs from that obtained before by different methods. Our results are generalizable to multiparticle systems and open the way for further developments in quantum information processing exploiting particle identity as a resource.
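
    For the familiar distinguishable-particle case, the Schmidt coefficients of a bipartite pure state are simply the singular values of its coefficient matrix; a small numpy illustration (this is the textbook construction, not the identical-particle decomposition developed in the paper):

```python
import numpy as np

# Amplitudes of a two-qubit state |psi> = sum_ij c_ij |i>|j>, here a
# slightly perturbed Bell state, arranged as a 2x2 coefficient matrix.
c = np.array([[1.0, 0.1],
              [0.1, 0.9]])
c = c / np.linalg.norm(c)

u, s, vh = np.linalg.svd(c)
print("Schmidt coefficients:", s)                    # squares sum to 1
print("entanglement entropy:", -np.sum(s**2 * np.log2(s**2)))
```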

  9. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from the discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite-dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
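
    The conjugate gradient iteration referred to above is compact enough to sketch for a symmetric positive-definite system (no domain-decomposition preconditioner is applied here; the matrix is an illustrative 1-D Poisson stencil):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain (unpreconditioned) CG for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1-D Poisson matrix as a stand-in for a discretized PDE.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
print(np.linalg.norm(A @ conjugate_gradient(A, b) - b))
```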

  10. Optimization and Assessment of Wavelet Packet Decompositions with Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    Schell Thomas

    2003-01-01

    Full Text Available In image compression, the wavelet transformation is a state-of-the-art component. Recently, wavelet packet decomposition has received considerable interest. A popular approach for wavelet packet decomposition is the near-best-basis algorithm using nonadditive cost functions. In contrast to additive cost functions, the wavelet packet decomposition of the near-best-basis algorithm is only suboptimal. We apply methods from the field of evolutionary computation (EC) to test the quality of the near-best-basis results. We observe a phenomenon: the results of the near-best-basis algorithm are inferior in terms of cost-function optimization but are superior in terms of rate/distortion performance compared to the EC methods.
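
    A hedged sketch of a full wavelet packet decomposition with an additive entropy-style cost, assuming the PyWavelets (pywt) package; it illustrates the kind of cost evaluation involved, not the near-best-basis or evolutionary search of the paper:

```python
import numpy as np
import pywt  # assumed: the PyWavelets package

signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.2 * np.random.randn(1024)
wp = pywt.WaveletPacket(data=signal, wavelet="db4", maxlevel=3)

def shannon_cost(coeffs):
    """Additive entropy-style cost often used for best-basis selection."""
    c2 = coeffs**2
    c2 = c2[c2 > 0]
    return float(-np.sum(c2 * np.log(c2)))

# Cost of every leaf node at the deepest level, ordered by frequency band.
for node in wp.get_level(3, order="freq"):
    print(node.path, shannon_cost(node.data))
```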

  11. Dynamic State Estimation for Multi-Machine Power System by Unscented Kalman Filter With Enhanced Numerical Stability

    Energy Technology Data Exchange (ETDEWEB)

    Qi, Junjian; Sun, Kai; Wang, Jianhui; Liu, Hui

    2018-03-01

    In this paper, in order to enhance the numerical stability of the unscented Kalman filter (UKF) used for power system dynamic state estimation, a new UKF with guaranteed positive semidefinite estimation error covariance (UKFGPS) is proposed and compared with five existing approaches, including UKFschol, UKF-kappa, UKFmodified, UKF-Delta Q, and the square-root UKF (SRUKF). These methods and the extended Kalman filter (EKF) are tested by performing dynamic state estimation on the WSCC 3-machine 9-bus system and the NPCC 48-machine 140-bus system. For the WSCC system, all methods obtain good estimates. However, for the NPCC system, both the EKF and the classic UKF fail. It is found that UKFschol, UKF-kappa, and UKF-Delta Q do not work well in some estimations, while UKFGPS works well in most cases. UKFmodified and SRUKF always work well, indicating their better scalability, mainly due to their enhanced numerical stability.
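
    The numerical problem being addressed is that the estimation-error covariance can drift away from positive semidefiniteness, which breaks the Cholesky factorization used to generate sigma points. One generic repair (a sketch of the common eigenvalue-clipping idea, not necessarily the UKFGPS method of the paper) is:

```python
import numpy as np

def nearest_psd(P, eps=1e-9):
    """Symmetrize and clip negative eigenvalues so Cholesky succeeds."""
    P = 0.5 * (P + P.T)
    w, V = np.linalg.eigh(P)
    w = np.clip(w, eps, None)
    return V @ np.diag(w) @ V.T

# A covariance that has drifted slightly indefinite during filtering.
P_bad = np.array([[1.0, 0.999, 0.0],
                  [0.999, 1.0, 0.0],
                  [0.0, 0.0, -1e-6]])
P_fix = nearest_psd(P_bad)
np.linalg.cholesky(P_fix)   # now succeeds; sigma points can be generated
print(np.linalg.eigvalsh(P_fix))
```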

  12. Estimation of the Dynamic States of Synchronous Machines Using an Extended Particle Filter

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ning; Meng, Da; Lu, Shuai

    2013-11-11

    In this paper, an extended particle filter (PF) is proposed to estimate the dynamic states of a synchronous machine using phasor measurement unit (PMU) data. A PF propagates the mean and covariance of states via Monte Carlo simulation, is easy to implement, and can be directly applied to a non-linear system with non-Gaussian noise. The extended PF modifies a basic PF to improve robustness. Using Monte Carlo simulations with practical noise and model uncertainty considerations, the extended PF’s performance is evaluated and compared with the basic PF and an extended Kalman filter (EKF). The extended PF results showed high accuracy and robustness against measurement and model noise.
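
    A minimal bootstrap particle filter for a scalar toy model, showing only the propagate-weight-resample loop (the paper's extended PF and synchronous-machine model are considerably more elaborate; all dynamics and noise levels below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_steps = 500, 50
q, r = 0.1, 0.2                      # process and measurement noise std

# Simulate a scalar "true" state and noisy measurements of it.
x_true = np.zeros(n_steps)
z = np.zeros(n_steps)
for k in range(1, n_steps):
    x_true[k] = 0.95 * x_true[k - 1] + 1.0 + q * rng.standard_normal()
    z[k] = x_true[k] + r * rng.standard_normal()

particles = rng.standard_normal(n_particles)
estimates = []
for k in range(1, n_steps):
    # Propagate every particle through the same dynamics.
    particles = 0.95 * particles + 1.0 + q * rng.standard_normal(n_particles)
    # Weight by the measurement likelihood, then normalise.
    w = np.exp(-0.5 * ((z[k] - particles) / r) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))
    # Multinomial resampling to avoid weight degeneracy.
    particles = rng.choice(particles, size=n_particles, p=w)

print("final estimate vs truth:", estimates[-1], x_true[-1])
```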

  13. Customer requirement modeling and mapping of numerical control machine

    Directory of Open Access Journals (Sweden)

    Zhongqi Sheng

    2015-10-01

    Full Text Available In order to better obtain information about customer requirements and develop products that meet them, it is necessary to systematically analyze and handle the customer requirements. This article uses the product service system of a numerical control machine as its research object and studies customer requirement modeling and mapping oriented toward configuration design. It introduces the concept of the requirement unit, expounds the customer requirement decomposition rules, and establishes a customer requirement model; it builds the house of quality using quality function deployment and determines the weights of the technical features of the product and service; it explores the relevance rules between data using rough set theory, establishes a rule database, and solves for the target values of the technical features of the product. Using the economical turning center series of numerical control machines as an example, it verifies the rationality of the proposed customer requirement model.

  14. Energy-efficient algorithm for classification of states of wireless sensor network using machine learning methods

    Science.gov (United States)

    Yuldashev, M. N.; Vlasov, A. I.; Novikov, A. N.

    2018-05-01

    This paper focuses on the development of an energy-efficient algorithm for classification of states of a wireless sensor network using machine learning methods. The proposed algorithm reduces energy consumption by: 1) elimination of monitoring of parameters that do not affect the state of the sensor network, 2) reduction of communication sessions over the network (the data are transmitted only if their values can affect the state of the sensor network). The studies of the proposed algorithm have shown that at classification accuracy close to 100%, the number of communication sessions can be reduced by 80%.

  15. Prediction of electronically nonadiabatic decomposition mechanisms of isolated gas phase nitrogen-rich energetic salt: Guanidium-triazolate

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Jayanta; Bhattacharya, Atanu, E-mail: atanub@ipc.iisc.ernet.in

    2016-01-13

    Highlights: • Decomposition mechanisms of model energetic salt, guanidium triazolate, are explored. • Decomposition pathways are electronically nonadiabatic. • CASPT2, CASMP2 and CASSCF methodologies are employed. • N{sub 2} and NH{sub 3} are predicted to be the most possible initial decomposition products. - Abstract: Electronically nonadiabatic decomposition pathways of guanidium triazolate are explored theoretically. Nonadiabatically coupled potential energy surfaces are explored at the complete active space self-consistent field (CASSCF) level of theory. For better estimation of energies complete active space second order perturbation theories (CASPT2 and CASMP2) are also employed. Density functional theory (DFT) with B3LYP functional and MP2 level of theory are used to explore subsequent ground state decomposition pathways. In comparison with all possible stable decomposition products (such as, N{sub 2}, NH{sub 3}, HNC, HCN, NH{sub 2}CN and CH{sub 3}NC), only NH{sub 3} (with NH{sub 2}CN) and N{sub 2} are predicted to be energetically most accessible initial decomposition products. Furthermore, different conical intersections between the S{sub 1} and S{sub 0} surfaces, which are computed at the CASSCF(14,10)/6-31G(d) level of theory, are found to play an essential role in the excited state deactivation process of guanidium triazolate. This is the first report on the electronically nonadiabatic decomposition mechanisms of isolated guanidium triazolate salt.

  16. Decompositions of manifolds

    CERN Document Server

    Daverman, Robert J

    2007-01-01

    Decomposition theory studies decompositions, or partitions, of manifolds into simple pieces, usually cell-like sets. Since its inception in 1929, the subject has become an important tool in geometric topology. The main goal of the book is to help students interested in geometric topology to bridge the gap between entry-level graduate courses and research at the frontier as well as to demonstrate interrelations of decomposition theory with other parts of geometric topology. With numerous exercises and problems, many of them quite challenging, the book continues to be strongly recommended to eve

  17. Thermal decomposition of pyrite

    International Nuclear Information System (INIS)

    Music, S.; Ristic, M.; Popovic, S.

    1992-01-01

    Thermal decomposition of natural pyrite (cubic, FeS 2 ) has been investigated using X-ray diffraction and 57 Fe Moessbauer spectroscopy. X-ray diffraction analysis of pyrite ore from different sources showed the presence of associated minerals, such as quartz, szomolnokite, stilbite or stellerite, micas and hematite. Hematite, maghemite and pyrrhotite were detected as thermal decomposition products of natural pyrite. The phase composition of the thermal decomposition products depends on the temperature, time of heating and starting size of the pyrite crystals. Hematite is the end product of the thermal decomposition of natural pyrite. (author) 24 refs.; 6 figs.; 2 tabs

  18. Danburite decomposition by sulfuric acid

    International Nuclear Information System (INIS)

    Mirsaidov, U.; Mamatov, E.D.; Ashurov, N.A.

    2011-01-01

    The present article is devoted to the decomposition of danburite from the Ak-Arkhar deposit of Tajikistan by sulfuric acid. The process of decomposition of danburite concentrate by sulfuric acid was studied. The chemical nature of the decomposition process of the boron-containing ore was determined. The influence of temperature on the rate of extraction of boron and iron oxides was defined. The dependence of the decomposition of boron and iron oxides on process duration, dosage of H 2 SO 4 , acid concentration and size of danburite particles was determined. The kinetics of danburite decomposition by sulfuric acid was studied as well. The apparent activation energy of the process of danburite decomposition by sulfuric acid was calculated. The flowsheet of danburite processing by sulfuric acid was elaborated.

  19. Human Machine Learning Symbiosis

    Science.gov (United States)

    Walsh, Kenneth R.; Hoque, Md Tamjidul; Williams, Kim H.

    2017-01-01

    Human Machine Learning Symbiosis is a cooperative system where both the human learner and the machine learner learn from each other to create an effective and efficient learning environment adapted to the needs of the human learner. Such a system can be used in online learning modules so that the modules adapt to each learner's learning state both…

  20. How state taxes and policies targeting soda consumption modify the association between school vending machines and student dietary behaviors: a cross-sectional analysis.

    Science.gov (United States)

    Taber, Daniel R; Chriqui, Jamie F; Vuillaume, Renee; Chaloupka, Frank J

    2014-01-01

    Sodas are widely sold in vending machines and other school venues in the United States, particularly in high school. Research suggests that policy changes have reduced soda access, but the impact of reduced access on consumption is unclear. This study was designed to identify student, environmental, or policy characteristics that modify the associations between school vending machines and student dietary behaviors. Data on school vending machine access and student diet were obtained as part of the National Youth Physical Activity and Nutrition Study (NYPANS) and linked to state-level data on soda taxes, restaurant taxes, and state laws governing the sale of soda in schools. Regression models were used to: 1) estimate associations between vending machine access and soda consumption, fast food consumption, and lunch source, and 2) determine if associations were modified by state soda taxes, restaurant taxes, laws banning in-school soda sales, or student characteristics (race/ethnicity, sex, home food access, weight loss behaviors.). Contrary to the hypothesis, students tended to consume 0.53 fewer servings of soda/week (95% CI: -1.17, 0.11) and consume fast food on 0.24 fewer days/week (95% CI: -0.44, -0.05) if they had in-school access to vending machines. They were also less likely to consume soda daily (23.9% vs. 27.9%, average difference  =  -4.02, 95% CI: -7.28, -0.76). However, these inverse associations were observed primarily among states with lower soda and restaurant tax rates (relative to general food tax rates) and states that did not ban in-school soda sales. Associations did not vary by any student characteristics except for weight loss behaviors. Isolated changes to the school food environment may have unintended consequences unless policymakers incorporate other initiatives designed to discourage overall soda consumption.

  1. How state taxes and policies targeting soda consumption modify the association between school vending machines and student dietary behaviors: a cross-sectional analysis.

    Directory of Open Access Journals (Sweden)

    Daniel R Taber

    Full Text Available Sodas are widely sold in vending machines and other school venues in the United States, particularly in high school. Research suggests that policy changes have reduced soda access, but the impact of reduced access on consumption is unclear. This study was designed to identify student, environmental, or policy characteristics that modify the associations between school vending machines and student dietary behaviors. Data on school vending machine access and student diet were obtained as part of the National Youth Physical Activity and Nutrition Study (NYPANS) and linked to state-level data on soda taxes, restaurant taxes, and state laws governing the sale of soda in schools. Regression models were used to: 1) estimate associations between vending machine access and soda consumption, fast food consumption, and lunch source, and 2) determine if associations were modified by state soda taxes, restaurant taxes, laws banning in-school soda sales, or student characteristics (race/ethnicity, sex, home food access, weight loss behaviors). Contrary to the hypothesis, students tended to consume 0.53 fewer servings of soda/week (95% CI: -1.17, 0.11) and consume fast food on 0.24 fewer days/week (95% CI: -0.44, -0.05) if they had in-school access to vending machines. They were also less likely to consume soda daily (23.9% vs. 27.9%, average difference = -4.02, 95% CI: -7.28, -0.76). However, these inverse associations were observed primarily among states with lower soda and restaurant tax rates (relative to general food tax rates) and states that did not ban in-school soda sales. Associations did not vary by any student characteristics except for weight loss behaviors. Isolated changes to the school food environment may have unintended consequences unless policymakers incorporate other initiatives designed to discourage overall soda consumption.

  2. How State Taxes and Policies Targeting Soda Consumption Modify the Association between School Vending Machines and Student Dietary Behaviors: A Cross-Sectional Analysis

    Science.gov (United States)

    Taber, Daniel R.; Chriqui, Jamie F.; Vuillaume, Renee; Chaloupka, Frank J.

    2014-01-01

    Background Sodas are widely sold in vending machines and other school venues in the United States, particularly in high school. Research suggests that policy changes have reduced soda access, but the impact of reduced access on consumption is unclear. This study was designed to identify student, environmental, or policy characteristics that modify the associations between school vending machines and student dietary behaviors. Methods Data on school vending machine access and student diet were obtained as part of the National Youth Physical Activity and Nutrition Study (NYPANS) and linked to state-level data on soda taxes, restaurant taxes, and state laws governing the sale of soda in schools. Regression models were used to: 1) estimate associations between vending machine access and soda consumption, fast food consumption, and lunch source, and 2) determine if associations were modified by state soda taxes, restaurant taxes, laws banning in-school soda sales, or student characteristics (race/ethnicity, sex, home food access, weight loss behaviors.) Results Contrary to the hypothesis, students tended to consume 0.53 fewer servings of soda/week (95% CI: -1.17, 0.11) and consume fast food on 0.24 fewer days/week (95% CI: -0.44, -0.05) if they had in-school access to vending machines. They were also less likely to consume soda daily (23.9% vs. 27.9%, average difference = -4.02, 95% CI: -7.28, -0.76). However, these inverse associations were observed primarily among states with lower soda and restaurant tax rates (relative to general food tax rates) and states that did not ban in-school soda sales. Associations did not vary by any student characteristics except for weight loss behaviors. Conclusion Isolated changes to the school food environment may have unintended consequences unless policymakers incorporate other initiatives designed to discourage overall soda consumption. PMID:25083906

  3. An Elgamal Encryption Scheme of Fibonacci Q-Matrix and Finite State Machine

    Directory of Open Access Journals (Sweden)

    B. Ravi Kumar

    2015-12-01

    Full Text Available Cryptography is the science of writing messages in unknown form using mathematical models. In cryptography, several ciphers have been introduced for encryption schemes. Recent research focuses on designing mathematical models in such a way that tracing the inverse of the designed model is infeasible for eavesdroppers. In the present work, the ElGamal encryption scheme is executed using the generator of a cyclic group formed by the points on a chosen elliptic curve, finite state machines, and key matrices obtained from the Fibonacci sequences.

  4. Identifying student stuck states in programmingassignments using machine learning

    OpenAIRE

    Lindell, Johan

    2014-01-01

    Intelligent tutors are becoming more popular with the increased use of computers and hand-held devices in the education sphere. An area of research is investigating how machine learning can be used to improve the precision and feedback of the tutor. This thesis compares machine learning clustering algorithms with various distance functions in an attempt to cluster together code snapshots of students solving a programming task. It investigates whether a general non-problem specific implementation of...

  5. Research and application of a novel hybrid decomposition-ensemble learning paradigm with error correction for daily PM10 forecasting

    Science.gov (United States)

    Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang

    2018-03-01

    In this paper, a hybrid decomposition-ensemble learning paradigm combined with error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of the following two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to decompose the original PM10 concentration series and the error sequence, respectively. The extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. The final predicted values are obtained by aggregation. In order to prove the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China, are adopted to conduct the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates the superior performance of the proposed model.

  6. Soft computing in machine learning

    CERN Document Server

    Park, Jooyoung; Inoue, Atsushi

    2014-01-01

    As users and consumers demand smarter devices, intelligent systems are being revolutionized by machine learning. Machine learning, as part of intelligent systems, is already one of the most critical components in everyday tools, ranging from search engines and credit card fraud detection to stock market analysis. Machines can be trained to perform certain tasks so that they can automatically detect, diagnose, and solve a variety of problems. Intelligent systems have made rapid progress in advancing the state of the art in machine learning based on smart and deep perception. Using machine learning, intelligent systems find wide application in automated speech recognition, natural language processing, medical diagnosis, bioinformatics, and robot locomotion. This book introduces how to handle substantial amounts of data, teach machines, and improve decision-making models, and it specializes in the development of advanced intelligent systems through machine learning. It...

  7. Studies on the thermal decomposition of lanthanum(III) valerate and lanthanum(III) caproate in argon

    Energy Technology Data Exchange (ETDEWEB)

    Grivel, J.-C., E-mail: jean@dtu.dk [Department of Energy Conversion and Storage, Technical University of Denmark, Frederiksborgvej 399, DK - 4000 Roskilde (Denmark); Zhao, Y.; Suarez Guevara, M.J. [Department of Energy Conversion and Storage, Technical University of Denmark, Frederiksborgvej 399, DK - 4000 Roskilde (Denmark); Watenphul, A. [Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22603 Hamburg (Germany); Institute of Mineralogy and Petrography, University of Hamburg, Grindelallee 48, 20146 Hamburg (Germany)

    2015-07-20

    Highlights: • The thermal decomposition of Lathanum valerate and caproate has been studied in Ar. • The compounds melt prior to decomposition. • Gas release in the molten state results in irregular mass loss. • CO{sub 2} and symmetrical ketones are the main evolving gas species. - Abstract: The decomposition of La-valerate (La(C{sub 4}H{sub 9}CO{sub 2}){sub 3}·xH{sub 2}O (x ≈ 0.45)) and La-caproate (La(C{sub 5}H{sub 11}CO{sub 2}){sub 3}·xH{sub 2}O (x ≈ 0.30)) was studied upon heating at 5 °C/min in a flow of argon. Using a variety of techniques including simultaneous TG-DTA, FTIR, X-ray diffraction with both laboratory Cu Kα and synchrotron sources as well as hot-stage microscopy, it was found that both compounds melt prior to decomposition and that the main decomposition stage from the molten, anhydrous state leads to the formation of La-dioxycarbonate (La{sub 2}O{sub 2}CO{sub 3}) via an unstable intermediate product and release of symmetrical ketones. Final decomposition to La{sub 2}O{sub 3} takes place with release of CO{sub 2}.

  8. Optimization of pocket machining strategy in HSM

    OpenAIRE

    Msaddek, El Bechir; Bouaziz, Zoubeir; Dessein, Gilles; Baili, Maher

    2012-01-01

    International audience; Our two major concerns, which should be taken into consideration as soon as we start selecting the machining parameters, are minimizing the machining time and keeping the high-speed machining machine in good condition. The manufacturing strategy is one of the parameters that most directly influences the machining time of the different geometrical forms, as well as the machine itself. In this article, we propose an optimization methodology for the ...

  9. Azimuthal decomposition of optical modes

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2012-07-01

    Full Text Available This presentation analyses the azimuthal decomposition of optical modes. Decomposition of azimuthal modes requires two steps, namely generation and decomposition. An azimuthally varying phase (bounded by a ring-slit) placed in the spatial frequency...

  10. Thermal decomposition of lutetium propionate

    DEFF Research Database (Denmark)

    Grivel, Jean-Claude

    2010-01-01

    The thermal decomposition of lutetium(III) propionate monohydrate (Lu(C2H5CO2)3·H2O) in argon was studied by means of thermogravimetry, differential thermal analysis, IR-spectroscopy and X-ray diffraction. Dehydration takes place around 90 °C. It is followed by the decomposition of the anhydrous...... °C. Full conversion to Lu2O3 is achieved at about 1000 °C. Whereas the temperatures and solid reaction products of the first two decomposition steps are similar to those previously reported for the thermal decomposition of lanthanum(III) propionate monohydrate, the final decomposition...... of the oxycarbonate to the rare-earth oxide proceeds in a different way, which is here reminiscent of the thermal decomposition path of Lu(C3H5O2)·2CO(NH2)2·2H2O...

  11. Long-term litter decomposition controlled by manganese redox cycling.

    Science.gov (United States)

    Keiluweit, Marco; Nico, Peter; Harmon, Mark E; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus

    2015-09-22

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn(2+) provided by fresh plant litter to produce oxidative Mn(3+) species at sites of active decay, with Mn eventually accumulating as insoluble Mn(3+/4+) oxides. Formation of reactive Mn(3+) species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn(3+)-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn(3+) species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant-soil system may have a profound impact on litter decomposition rates.

  12. Unifying neural-network quantum states and correlator product states via tensor networks

    Science.gov (United States)

    Clark, Stephen R.

    2018-04-01

    Correlator product states (CPS) are a powerful and very broad class of states for quantum lattice systems whose (unnormalised) amplitudes in a fixed basis can be sampled exactly and efficiently. They work by gluing together states of overlapping clusters of sites on the lattice, called correlators. Recently Carleo and Troyer (2017 Science 355 602) introduced a new type of sampleable ansatz called neural-network quantum states (NQS) that is inspired by the restricted Boltzmann machine used in machine learning. By employing the formalism of tensor networks we show that NQS are a special form of CPS with novel properties. Diagrammatically, a number of simple observations become transparent. Namely, NQS are CPS built from extensively sized GHZ-form correlators, making them uniquely unbiased geometrically. The appearance of GHZ correlators also relates NQS to canonical polyadic decompositions of tensors. Another immediate implication of the NQS equivalence to CPS is that we are able to formulate exact NQS representations for a wide range of paradigmatic states, including superpositions of weighted-graph states, the Laughlin state, toric code states, and the resonating valence bond state. These examples reveal the potential of using higher dimensional hidden units and a second hidden layer in NQS. The major outlook of this study is the elevation of NQS to correlator operators, allowing them to enhance conventional well-established variational Monte Carlo approaches for strongly correlated fermions.
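
    For concreteness, the (unnormalised) NQS amplitude of a spin configuration under the restricted-Boltzmann-machine ansatz, with the hidden units summed out analytically, can be evaluated as below (a generic textbook form with random, untrained parameters; it is not the CPS/tensor-network construction of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 12
a = 0.1 * rng.standard_normal(n_visible)              # visible biases
b = 0.1 * rng.standard_normal(n_hidden)               # hidden biases
W = 0.1 * rng.standard_normal((n_hidden, n_visible))  # couplings

def nqs_amplitude(s):
    """Unnormalised psi(s) = exp(a.s) * prod_j 2 cosh(b_j + W_j . s)."""
    theta = b + W @ s
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

s = rng.choice([-1.0, 1.0], size=n_visible)           # one spin configuration
print(nqs_amplitude(s))
```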

  13. Electromagnetic Comparison of 3-, 5- and 7-phases Permanent-Magnet Synchronous Machines : Mild Hybrid Traction Application

    Directory of Open Access Journals (Sweden)

    D. Ouamara

    2016-09-01

    Full Text Available The authors compare the electromagnetic performances of three multi-phase permanent-magnet (PM) synchronous machines (PMSM) for a mild hybrid traction application. This comparison was made using two-dimensional (2-D) numerical simulations in transient magnetics with the eddy-current reaction field in the PMs. The best machine was determined using an energetic analysis (i.e., losses, torque and efficiency) according to the specifications. In this study, the non-overlapping winding with double layer (i.e., all-teeth-wound type) was used. The winding synthesis is based on the "Star of slots" method as well as the Fourier series decomposition of the magnetomotive force (MMF).

  14. Comparison of three methods for the calibration of cobalt-60 teletherapy machine

    International Nuclear Information System (INIS)

    Adewole, O.O.; Akinlade, B.I.; Oyekunle, O.E.; Ejeh, J.

    2011-01-01

    Two methods of indirect determination of the dose rate (machine output) of Cobalt-60 teletherapy machines have been reviewed and compared with conventional measurement with dosimetry devices. The dose rates were determined by: (i) conventional measurement, (ii) application of the law of radioactive decay, and (iii) the assumption of 1% radioactive decomposition per month. The dose rates at the depth of maximum dose (Zmax), a collimator size of 10 cm x 10 cm and a Source-to-Skin Distance (SSD) of 80 cm obtained from these methods were 203.7200 cGy/min, 203.8090 cGy/min and 203.9530 cGy/min, respectively. The ratio of the dose rate obtained from measurement to that from calculation is within the tolerance value of 2%.
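
    For context, a worked comparison of the two indirect estimates mentioned (exact exponential decay using the 5.27-year Co-60 half-life versus a flat 1% loss per month); the reference dose rate is taken from the abstract, while the elapsed time is an illustrative assumption:

```python
import math

half_life_years = 5.27            # Co-60 half-life
months = 12                       # assumed time since the reference calibration
d0 = 203.72                       # reference dose rate, cGy/min (from abstract)

decay_factor = 0.5 ** (months / (12.0 * half_life_years))
approx_factor = (1.0 - 0.01) ** months   # "1% per month" rule of thumb

print("exponential decay:", d0 * decay_factor)   # ~178.6 cGy/min after a year
print("1%/month estimate:", d0 * approx_factor)  # ~180.6 cGy/min after a year
```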

  15. How State Taxes and Policies Targeting Soda Consumption Modify the Association between School Vending Machines and Student Dietary Behaviors: A Cross-Sectional Analysis

    OpenAIRE

    Taber, Daniel R.; Chriqui, Jamie F.; Vuillaume, Renee; Chaloupka, Frank J.

    2014-01-01

    Background: Sodas are widely sold in vending machines and other school venues in the United States, particularly in high school. Research suggests that policy changes have reduced soda access, but the impact of reduced access on consumption is unclear. This study was designed to identify student, environmental, or policy characteristics that modify the associations between school vending machines and student dietary behaviors. Methods: Data on school vending machine access and student diet we...

  16. Preliminary assessment of the possibility of supporting the decomposition of biodegradable packaging

    Directory of Open Access Journals (Sweden)

    Niekraś Lidia

    2017-01-01

    Full Text Available This article presents a preliminary evaluation of the possibility of using grass biomass from a sports field as a compost ingredient that positively affects the degree of decomposition of biodegradable wrappings. For 5 months, biodegradable bags, both empty and filled with organic waste, were stored in a heap of grass clippings. After that period, fragments of the bags were observed under the microscope and their state of decomposition was assessed. The results indicate that the biomass used favourably affected the process of bag degradation; however, the empty bags decomposed more quickly than the bags filled with organic waste.

  17. Three-pattern decomposition of global atmospheric circulation: part I—decomposition model and theorems

    Science.gov (United States)

    Hu, Shujuan; Chou, Jifan; Cheng, Jianbo

    2018-04-01

    In order to study the interactions between the atmospheric circulations at the middle-high and low latitudes from the global perspective, the authors proposed the mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations with which the actual atmospheric circulation is expanded. This novel decomposition method is proved to accurately describe the actual atmospheric circulation dynamics. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of those three-pattern circulations, and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model is more accurate to capture the major features of global three dimensional atmospheric motions, compared to the traditional definitions of Rossby wave, Hadley circulation and Walker circulation. The decomposition model for the first time realized the decomposition of global atmospheric circulation using three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitudes and low latitudes circulations.

  18. Trait Mindfulness, Problem-Gambling Severity, Altered State of Awareness and Urge to Gamble in Poker-Machine Gamblers.

    Science.gov (United States)

    McKeith, Charles F A; Rock, Adam J; Clark, Gavin I

    2017-06-01

    In Australia, poker-machine gamblers represent a disproportionate number of problem gamblers. To cultivate a greater understanding of the psychological mechanisms involved in poker-machine gambling, a repeated measures cue-reactivity protocol was administered. A community sample of 38 poker-machine gamblers was assessed for problem-gambling severity and trait mindfulness. Participants were also assessed regarding altered state of awareness (ASA) and urge to gamble at baseline, following a neutral cue, and following a gambling cue. Results indicated that: (a) urge to gamble significantly increased from neutral cue to gambling cue, while controlling for baseline urge; (b) cue-reactive ASA did not significantly mediate the relationship between problem-gambling severity and cue-reactive urge (from neutral cue to gambling cue); (c) trait mindfulness was significantly negatively associated with both problem-gambling severity and cue-reactive urge (i.e., from neutral cue to gambling cue, while controlling for baseline urge); and (d) trait mindfulness did not significantly moderate the effect of problem-gambling severity on cue-reactive urge (from neutral cue to gambling cue). This is the first study to demonstrate a negative association between trait mindfulness and cue-reactive urge to gamble in a population of poker-machine gamblers. Thus, this association merits further evaluation both in relation to poker-machine gambling and other gambling modalities.

  19. High-Density Liquid-State Machine Circuitry for Time-Series Forecasting.

    Science.gov (United States)

    Rosselló, Josep L; Alomar, Miquel L; Morro, Antoni; Oliver, Antoni; Canals, Vincent

    2016-08-01

    Spiking neural networks (SNN) are the latest generation of neural networks, which try to mimic the real behavior of biological neurons. Although most research in this area is done through software applications, it is in hardware implementations that the intrinsic parallelism of these computing systems is more efficiently exploited. Liquid state machines (LSM) have arisen as a strategic technique to implement recurrent designs of SNN with a simple learning methodology. In this work, we show a new low-cost methodology to implement high-density LSM by using Boolean gates. The proposed method is based on the use of probabilistic computing concepts to reduce hardware requirements, thus considerably increasing the neuron count per chip. The result is a highly functional system that is applied to high-speed time series forecasting.

  20. Detecting Mental States by Machine Learning Techniques: The Berlin Brain-Computer Interface

    Science.gov (United States)

    Blankertz, Benjamin; Tangermann, Michael; Vidaurre, Carmen; Dickhaus, Thorsten; Sannelli, Claudia; Popescu, Florin; Fazli, Siamac; Danóczy, Márton; Curio, Gabriel; Müller, Klaus-Robert

    The Berlin Brain-Computer Interface (BBCI) uses a machine learning approach to extract user-specific patterns from high-dimensional EEG features optimized for revealing the user's mental state. Classical BCI applications are brain-actuated tools for patients, such as prostheses (see Section 4.1) or mental text entry systems ([1] and see [2-5] for an overview of BCI). In these applications, the BBCI uses natural motor skills of the users and specifically tailored pattern recognition algorithms for detecting the user's intent. But beyond rehabilitation, there is a wide range of possible applications in which BCI technology is used to monitor other mental states, often even covert ones (see also [6] in the fMRI realm). While this field is still largely unexplored, two examples from our studies are presented in Sections 4.3 and 4.4.

  1. High speed operation of permanent magnet machines

    Science.gov (United States)

    El-Refaie, Ayman M.

    This work proposes methods to extend the high-speed operating capabilities of both the interior PM (IPM) and surface PM (SPM) machines. For interior PM machines, this research has developed and presented the first thorough analysis of how a new bi-state magnetic material can be usefully applied to the design of IPM machines. Key elements of this contribution include identifying how the unique properties of the bi-state magnetic material can be applied most effectively in the rotor design of an IPM machine by "unmagnetizing" the magnet cavity center posts rather than the outer bridges. The importance of elevated rotor speed in making the best use of the bi-state magnetic material while recognizing its limitations has been identified. For surface PM machines, this research has provided, for the first time, a clear explanation of how fractional-slot concentrated windings can be applied to SPM machines in order to achieve the necessary conditions for optimal flux weakening. A closed-form analytical procedure for analyzing SPM machines designed with concentrated windings has been developed. Guidelines for designing SPM machines using concentrated windings in order to achieve optimum flux weakening are provided. Analytical and numerical finite element analysis (FEA) results have provided promising evidence of the scalability of the concentrated winding technique with respect to the number of poles, machine aspect ratio, and output power rating. Useful comparisons between the predicted performance characteristics of SPM machines equipped with concentrated windings and both SPM and IPM machines designed with distributed windings are included. Analytical techniques have been used to evaluate the impact of the high pole number on various converter performance metrics. Both analytical techniques and FEA have been used for evaluating the eddy-current losses in the surface magnets due to the stator winding subharmonics. Techniques for reducing these losses have been

  2. Domain decomposition methods for fluid dynamics

    International Nuclear Information System (INIS)

    Clerc, S.

    1995-01-01

    A domain decomposition method for steady-state, subsonic fluid dynamics calculations is proposed. The method is derived from the Schwarz alternating method used for elliptic problems, extended to non-linear hyperbolic problems. Particular emphasis is given to the treatment of boundary conditions. Numerical results are shown for a realistic three-dimensional two-phase flow problem with the FLICA-4 code for PWR cores. (from author). 4 figs., 8 refs

  3. Dispersion and betatron function correction in the Advanced Photon Source storage ring using singular value decomposition

    International Nuclear Information System (INIS)

    Emery, L.

    1999-01-01

    Magnet errors and off-center orbits through sextupoles perturb the dispersion and beta functions in a storage ring (SR), which affects machine performance. In a large ring such as the Advanced Photon Source (APS), the magnet errors are difficult to determine with beam-based methods. Also, the non-zero orbits through sextupoles result from user requests for steering at light source points. For expediency, a singular value decomposition (SVD) matrix method analogous to orbit correction was adopted to make global corrections to these functions using the strengths of several quadrupoles as correcting elements. The direct response matrix is calculated from the model of the perfect lattice. The inverse is calculated by SVD with a selected number of singular vectors. The resulting improvement in the lattice functions and machine performance will be presented.
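
    A hedged sketch of the truncated-SVD correction step itself (a generic response-matrix inversion with illustrative dimensions, not the APS lattice model): the measured function errors are mapped to quadrupole strength changes by pseudo-inverting the model response matrix with only the leading singular vectors retained.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_corr = 40, 8                      # monitored points, quadrupole knobs
R = rng.standard_normal((n_obs, n_corr))   # model response matrix (illustrative)
error = R @ rng.standard_normal(n_corr) + 0.05 * rng.standard_normal(n_obs)

# Truncated SVD pseudo-inverse: keep only the k strongest singular vectors.
k = 6
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_pinv = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

delta_quad = -R_pinv @ error               # correcting strength changes
print("residual rms:", np.std(error + R @ delta_quad))
```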

  4. Communication strategies for angular domain decomposition of transport calculations on message passing multiprocessors

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1997-01-01

    The effect of three communication schemes for solving Arbitrarily High Order Transport (AHOT) methods of the Nodal type on parallel performance is examined via direct measurements and performance models. The target architecture in this study is Oak Ridge National Laboratory's 128-node Paragon XP/S 5 computer, and the parallelization is based on the Parallel Virtual Machine (PVM) library. However, the conclusions reached can be easily generalized to a large class of message passing platforms and communication software. The three schemes considered here are: (1) PVM's global operations (broadcast and reduce), which utilize the Paragon's native corresponding operations based on spanning tree routing; (2) the Bucket algorithm, wherein the angular domain decomposition of the mesh sweep is complemented with a spatial domain decomposition of the accumulation process of the scalar flux from the angular flux and of the convergence test; (3) a distributed memory version of the Bucket algorithm that pushes the spatial domain decomposition one step further by actually distributing the fixed source and flux iterates over the memories of the participating processes. The conclusion is that the Bucket algorithm is the most efficient of the three if all participating processes have sufficient memory to hold the entire problem arrays. Otherwise, the third scheme becomes necessary, at an additional cost to speedup and parallel efficiency that is quantifiable via the parallel performance model

  5. A Tensor Decomposition-Based Approach for Detecting Dynamic Network States From EEG.

    Science.gov (United States)

    Mahyari, Arash Golibagh; Zoltowski, David M; Bernat, Edward M; Aviyente, Selin

    2017-01-01

    Functional connectivity (FC), defined as the statistical dependency between distinct brain regions, has been an important tool in understanding cognitive brain processes. Most of the current works in FC have focused on the assumption of temporally stationary networks. However, recent empirical work indicates that FC is dynamic due to cognitive functions. The purpose of this paper is to understand the dynamics of FC for understanding the formation and dissolution of networks of the brain. In this paper, we introduce a two-step approach to characterize the dynamics of functional connectivity networks (FCNs) by first identifying change points at which the network connectivity across subjects shows significant changes and then summarizing the FCNs between consecutive change points. The proposed approach is based on a tensor representation of FCNs across time and subjects yielding a four-mode tensor. The change points are identified using a subspace distance measure on low-rank approximations to the tensor at each time point. The network summarization is then obtained through tensor-matrix projections across the subject and time modes. The proposed framework is applied to electroencephalogram (EEG) data collected during a cognitive control task. The detected change-points are consistent with a priori known ERN interval. The results show significant connectivities in medial-frontal regions which are consistent with widely observed ERN amplitude measures. The tensor-based method outperforms conventional matrix-based methods such as singular value decomposition in terms of both change-point detection and state summarization. The proposed tensor-based method captures the topological structure of FCNs which provides more accurate change-point-detection and state summarization.

  6. Identification of liquid-phase decomposition species and reactions for guanidinium azotetrazolate

    International Nuclear Information System (INIS)

    Kumbhakarna, Neeraj R.; Shah, Kaushal J.; Chowdhury, Arindrajit; Thynell, Stefan T.

    2014-01-01

    Highlights: • Guanidinium azotetrazolate (GzT) is a high-nitrogen energetic material. • FTIR spectroscopy and ToFMS spectrometry were used for species identification. • Quantum mechanics was used to identify transition states and decomposition pathways. • Important reactions in the GzT liquid-phase decomposition process were identified. • Initiation of decomposition occurs via ring opening, releasing N2. - Abstract: The objective of this work is to analyze the decomposition of guanidinium azotetrazolate (GzT) in the liquid phase by using a combined experimental and computational approach. The experimental part involves the use of Fourier transform infrared (FTIR) spectroscopy to acquire the spectral transmittance of the evolved gas-phase species from rapid thermolysis, as well as to acquire spectral transmittance of the condensate and residue formed from the decomposition. Time-of-flight mass spectrometry (ToFMS) is also used to acquire mass spectra of the evolved gas-phase species. Sub-milligram samples of GzT were heated at rates of about 2000 K/s to a set temperature (553–573 K) where decomposition occurred under isothermal conditions. N2, NH3, HCN, guanidine and melamine were identified as products of decomposition. The computational approach is based on using quantum mechanics for confirming the identity of the species observed in experiments and for identifying elementary chemical reactions that formed these species. In these ab initio techniques, various levels of theory and basis sets were used. Based on the calculated enthalpy and free energy values of various molecular structures, important reaction pathways were identified. Initiation of decomposition of GzT occurs via ring opening to release N2.

  7. Global decomposition experiment shows soil animal impacts on decomposition are climate-dependent

    Czech Academy of Sciences Publication Activity Database

    Wall, D.H.; Bradford, M.A.; John, M.G.St.; Trofymow, J.A.; Behan-Pelletier, V.; Bignell, D.E.; Dangerfield, J.M.; Parton, W.J.; Rusek, Josef; Voigt, W.; Wolters, V.; Gardel, H.Z.; Ayuke, F. O.; Bashford, R.; Beljakova, O.I.; Bohlen, P.J.; Brauman, A.; Flemming, S.; Henschel, J.R.; Johnson, D.L.; Jones, T.H.; Kovářová, Marcela; Kranabetter, J.M.; Kutny, L.; Lin, K.-Ch.; Maryati, M.; Masse, D.; Pokarzhevskii, A.; Rahman, H.; Sabará, M.G.; Salamon, J.-A.; Swift, M.J.; Varela, A.; Vasconcelos, H.L.; White, D.; Zou, X.

    2008-01-01

    Vol. 14, No. 11 (2008), pp. 2661-2677, ISSN 1354-1013. Institutional research plan: CEZ:AV0Z60660521; CEZ:AV0Z60050516. Keywords: climate decomposition index * decomposition * litter. Subject RIV: EH - Ecology, Behaviour. Impact factor: 5.876, year: 2008

  8. Quantum machine learning for quantum anomaly detection

    Science.gov (United States)

    Liu, Nana; Rebentrost, Patrick

    2018-04-01

    Anomaly detection is used for identifying data that deviate from "normal" data patterns. Its usage on classical data finds diverse applications in many important areas such as finance, fraud detection, medical diagnoses, data cleaning, and surveillance. With the advent of quantum technologies, anomaly detection of quantum data, in the form of quantum states, may become an important component of quantum applications. Machine-learning algorithms are playing pivotal roles in anomaly detection using classical data. Two widely used algorithms are the kernel principal component analysis and the one-class support vector machine. We find corresponding quantum algorithms to detect anomalies in quantum states. We show that these two quantum algorithms can be performed using resources that are logarithmic in the dimensionality of quantum states. For pure quantum states, these resources can also be logarithmic in the number of quantum states used for training the machine-learning algorithm. This makes these algorithms potentially applicable to big quantum data applications.
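
    The two classical algorithms named above have standard implementations; as a point of reference for what the quantum subroutines are meant to accelerate, here is a minimal classical one-class SVM anomaly detector using scikit-learn. The data and the nu/gamma settings are illustrative assumptions, not values from the paper.

```python
# Minimal classical one-class SVM anomaly detection sketch.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
normal_data = rng.normal(0.0, 1.0, size=(200, 4))            # training set of "normal" samples
test_data = np.vstack([rng.normal(0.0, 1.0, size=(5, 4)),
                       rng.normal(6.0, 1.0, size=(5, 4))])    # last five rows are anomalous

detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_data)
print(detector.predict(test_data))                            # +1 = normal, -1 = anomaly
```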

  9. Decomposition methods for unsupervised learning

    DEFF Research Database (Denmark)

    Mørup, Morten

    2008-01-01

    This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding...... methods and clustering problems is derived both in terms of classical point clustering but also in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity...... in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography...

  10. Online Artifact Removal for Brain-Computer Interfaces Using Support Vector Machines and Blind Source Separation

    OpenAIRE

    Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang

    2007-01-01

    We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic...

  11. Dictionary-Based Tensor Canonical Polyadic Decomposition

    Science.gov (United States)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. Performances of the proposed algorithms are evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.

  12. Cellular decomposition in vikalloys

    International Nuclear Information System (INIS)

    Belyatskaya, I.S.; Vintajkin, E.Z.; Georgieva, I.Ya.; Golikov, V.A.; Udovenko, V.A.

    1981-01-01

    Austenite decomposition in Fe-Co-V and Fe-Co-V-Ni alloys at 475–600 °C is investigated. The cellular decomposition in ternary alloys results in the formation of bcc (ordered) and fcc structures, and in quaternary alloys - bcc (ordered) and 12R structures. The cellular 12R structure results from the emergence of stacking faults in the fcc lattice with irregular spacing in four layers. The cellular decomposition results in a high-dispersion structure and magnetic properties approaching the level of well-known vikalloys.

  13. A machine learning approach to galaxy-LSS classification - I. Imprints on halo merger trees

    Science.gov (United States)

    Hui, Jianan; Aragon, Miguel; Cui, Xinping; Flegal, James M.

    2018-04-01

    The cosmic web plays a major role in the formation and evolution of galaxies and defines, to a large extent, their properties. However, the relation between galaxies and environment is still not well understood. Here, we present a machine learning approach to study imprints of environmental effects on the mass assembly of haloes. We present a galaxy-LSS machine learning classifier based on galaxy properties sensitive to the environment. We then use the classifier to assess the relevance of each property. Correlations between galaxy properties and their cosmic environment can be used to predict galaxy membership to void/wall or filament/cluster with an accuracy of 93 per cent. Our study unveils environmental information encoded in properties of haloes not normally considered directly dependent on the cosmic environment such as merger history and complexity. Understanding the physical mechanism by which the cosmic web is imprinted in a halo can lead to significant improvements in galaxy formation models. This is accomplished by extracting features from galaxy properties and merger trees, computing feature scores for each feature and then applying a support vector machine (SVM) to different feature sets. To this end, we have discovered that the shape and depth of the merger tree, formation time, and density of the galaxy are strongly associated with the cosmic environment. We describe a significant improvement in the original classification algorithm by performing LU decomposition of the distance matrix computed from the feature vectors and then using the output of the decomposition as input vectors for SVM.
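
    A minimal sketch of the classification step described in the last sentence: build a pairwise distance matrix from feature vectors, take its LU decomposition, and feed the decomposition output to an SVM. The random features and labels below are placeholders for the halo and merger-tree properties used in the paper.

```python
# Minimal sketch: LU decomposition of a distance matrix as SVM input vectors.
import numpy as np
from scipy.linalg import lu
from scipy.spatial.distance import cdist
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
features = rng.random((300, 8))                                   # stand-in for halo/merger-tree features
labels = (features[:, 0] + features[:, 1] > 1.0).astype(int)      # stand-in for void/wall vs filament/cluster

dist = cdist(features, features)           # pairwise distance matrix between objects
_, _, u = lu(dist)                         # LU decomposition; use its output as new inputs
inputs = u.T                               # one transformed vector per object

x_train, x_test, y_train, y_test = train_test_split(inputs, labels, random_state=0)
clf = SVC(kernel="rbf").fit(x_train, y_train)
print(f"accuracy on placeholder data: {clf.score(x_test, y_test):.2f}")
```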

  14. Generating feasible transition paths for testing from an extended finite state machine (EFSM) with the counter problem

    OpenAIRE

    Kalaji, AS; Hierons, RM; Swift, S

    2009-01-01

    The extended finite state machine (EFSM) is a powerful approach for modeling state-based systems. However, testing from EFSMs is complicated by the existence of infeasible paths. One important problem is the existence of a transition with a guard that references a counter variable whose value depends on previous transitions. The presence of such transitions in paths often leads to infeasible paths. This paper proposes a novel approach to bypass the counter problem. The proposed approach is ev...

  15. State but not District Nutrition Policies Are Associated with Less Junk Food in Vending Machines and School Stores in US Public Schools

    Science.gov (United States)

    KUBIK, MARTHA Y.; WALL, MELANIE; SHEN, LIJUAN; NANNEY, MARILYN S.; NELSON, TOBEN F.; LASKA, MELISSA N.; STORY, MARY

    2012-01-01

    Background Policy that targets the school food environment has been advanced as one way to increase the availability of healthy food at schools and healthy food choice by students. Although both state- and district-level policy initiatives have focused on school nutrition standards, it remains to be seen whether these policies translate into healthy food practices at the school level, where student behavior will be impacted. Objective To examine whether state- and district-level nutrition policies addressing junk food in school vending machines and school stores were associated with less junk food in school vending machines and school stores. Junk food was defined as foods and beverages with low nutrient density that provide calories primarily through fats and added sugars. Design A cross-sectional study design was used to assess self-report data collected by computer-assisted telephone interviews or self-administered mail questionnaires from state-, district-, and school-level respondents participating in the School Health Policies and Programs Study 2006. The School Health Policies and Programs Study, administered every 6 years since 1994 by the Centers for Disease Control and Prevention, is considered the largest, most comprehensive assessment of school health policies and programs in the United States. Subjects/setting A nationally representative sample (n = 563) of public elementary, middle, and high schools was studied. Statistical analysis Logistic regression adjusted for school characteristics, sampling weights, and clustering was used to analyze data. Policies were assessed for strength (required, recommended, neither required nor recommended prohibiting junk food) and whether strength was similar for school vending machines and school stores. Results School vending machines and school stores were more prevalent in high schools (93%) than middle (84%) and elementary (30%) schools. For state policies, elementary schools that required prohibiting junk food

  16. State but not district nutrition policies are associated with less junk food in vending machines and school stores in US public schools.

    Science.gov (United States)

    Kubik, Martha Y; Wall, Melanie; Shen, Lijuan; Nanney, Marilyn S; Nelson, Toben F; Laska, Melissa N; Story, Mary

    2010-07-01

    Policy that targets the school food environment has been advanced as one way to increase the availability of healthy food at schools and healthy food choice by students. Although both state- and district-level policy initiatives have focused on school nutrition standards, it remains to be seen whether these policies translate into healthy food practices at the school level, where student behavior will be impacted. To examine whether state- and district-level nutrition policies addressing junk food in school vending machines and school stores were associated with less junk food in school vending machines and school stores. Junk food was defined as foods and beverages with low nutrient density that provide calories primarily through fats and added sugars. A cross-sectional study design was used to assess self-report data collected by computer-assisted telephone interviews or self-administered mail questionnaires from state-, district-, and school-level respondents participating in the School Health Policies and Programs Study 2006. The School Health Policies and Programs Study, administered every 6 years since 1994 by the Centers for Disease Control and Prevention, is considered the largest, most comprehensive assessment of school health policies and programs in the United States. A nationally representative sample (n=563) of public elementary, middle, and high schools was studied. Logistic regression adjusted for school characteristics, sampling weights, and clustering was used to analyze data. Policies were assessed for strength (required, recommended, neither required nor recommended prohibiting junk food) and whether strength was similar for school vending machines and school stores. School vending machines and school stores were more prevalent in high schools (93%) than middle (84%) and elementary (30%) schools. For state policies, elementary schools that required prohibiting junk food in school vending machines and school stores offered less junk food than

  17. Preliminary assessment of the possibility of supporting the decomposition of biodegradable packaging

    OpenAIRE

    Niekraś Lidia; Moliszewska Ewa

    2017-01-01

    This article presents a preliminary evaluation of the possibility of using grass biomass from a sports field as a compost ingredient which positively affects the degree of decomposition of the biodegradable wrappings. For 5 months the biodegradable bags were stored, both empty and filled with organic waste, in a heap of grass clippings. After that period, fragments of the bags were observed under the microscope and the state of their decomposition was assessed. The results indicate that the...

  18. TensorFlow: A system for large-scale machine learning

    OpenAIRE

    Abadi, Martín; Barham, Paul; Chen, Jianmin; Chen, Zhifeng; Davis, Andy; Dean, Jeffrey; Devin, Matthieu; Ghemawat, Sanjay; Irving, Geoffrey; Isard, Michael; Kudlur, Manjunath; Levenberg, Josh; Monga, Rajat; Moore, Sherry; Murray, Derek G.

    2016-01-01

    TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexib...

  19. Machine learning and medical imaging

    CERN Document Server

    Shen, Dinggang; Sabuncu, Mert

    2016-01-01

    Machine Learning and Medical Imaging presents state-of- the-art machine learning methods in medical image analysis. It first summarizes cutting-edge machine learning algorithms in medical imaging, including not only classical probabilistic modeling and learning methods, but also recent breakthroughs in deep learning, sparse representation/coding, and big data hashing. In the second part leading research groups around the world present a wide spectrum of machine learning methods with application to different medical imaging modalities, clinical domains, and organs. The biomedical imaging modalities include ultrasound, magnetic resonance imaging (MRI), computed tomography (CT), histology, and microscopy images. The targeted organs span the lung, liver, brain, and prostate, while there is also a treatment of examining genetic associations. Machine Learning and Medical Imaging is an ideal reference for medical imaging researchers, industry scientists and engineers, advanced undergraduate and graduate students, a...

  20. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
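
    For reference, the two figures of merit quoted above can be computed as follows; this is a generic sketch with assumed definitions (CR as a ratio of bit counts, PRD relative to the original signal energy), not the authors' code.

```python
# Minimal sketch of the compression ratio (CR) and percentage root mean square
# difference (PRD) between an original and a reconstructed ECG signal.
import numpy as np

def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    return original_bits / compressed_bits

def prd(original: np.ndarray, reconstructed: np.ndarray) -> float:
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) / np.sum(original ** 2))

# Toy example with a noisy sine wave standing in for an ECG record.
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t)
reconstructed = signal + 0.01 * np.random.default_rng(3).normal(size=signal.size)
print(compression_ratio(11 * signal.size, 310))    # illustrative bit counts
print(f"PRD = {prd(signal, reconstructed):.2f}%")
```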

  1. Tomography and generative training with quantum Boltzmann machines

    Science.gov (United States)

    Kieferová, Mária; Wiebe, Nathan

    2017-12-01

    The promise of quantum neural nets, which utilize quantum effects to model complex data sets, has made their development an aspirational goal for quantum machine learning and quantum computing in general. Here we provide methods of training quantum Boltzmann machines. Our work generalizes existing methods and provides additional approaches for training quantum neural networks that compare favorably to existing methods. We further demonstrate that quantum Boltzmann machines enable a form of partial quantum state tomography that further provides a generative model for the input quantum state. Classical Boltzmann machines are incapable of this. This verifies the long-conjectured connection between tomography and quantum machine learning. Finally, we prove that classical computers cannot simulate our training process in general unless BQP=BPP, provide lower bounds on the complexity of the training procedures and numerically investigate training for small nonstoquastic Hamiltonians.

  2. The Machine Scoring of Writing

    Science.gov (United States)

    McCurry, Doug

    2010-01-01

    This article provides an introduction to the kind of computer software that is used to score student writing in some high stakes testing programs, and that is being promoted as a teaching and learning tool to schools. It sketches the state of play with machines for the scoring of writing, and describes how these machines work and what they do.…

  3. Rate of litter decomposition and microbial activity in an area of Caatinga

    Directory of Open Access Journals (Sweden)

    Patrícia Carneiro Souto

    2013-12-01

    In order to evaluate the decomposition of litter and microbial activity in an area of preserved Caatinga, an experiment was conducted at the Tamanduá Farm Private Natural Heritage Reserve in Santa Terezinha county, State of Paraiba. The decomposition rate was determined by using litter bags containing 30 g of litter, which were placed on the soil surface in September 2003; 20 bags were collected each month until September 2005. The collected material was oven-dried and weighed to assess weight loss relative to the initial weight. Microbial activity was estimated monthly by quantifying the carbon dioxide (CO2) released from the soil surface by soil respiration and captured in a KOH solution. Weight loss of litter after one year was 41.19% and, after two years, was 48.37%, indicating faster decomposition in the first year. Data analysis showed the influence of season on litter decomposition and of temperature on microbial activity.

  4. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  5. Continuous-variable quantum Gaussian process regression and quantum singular value decomposition of nonsparse low-rank matrices

    Science.gov (United States)

    Das, Siddhartha; Siopsis, George; Weedbrook, Christian

    2018-02-01

    With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers under certain assumptions regarding distribution of data and availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the possibility of exponentially reducing the time to compute. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method of nonsparse low rank matrices and forms an important subroutine in our Gaussian process regression algorithm.
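
    For orientation, the classical Gaussian process regression that the quantum algorithm is designed to speed up can be run in a few lines with scikit-learn; the kernel choice and toy data below are illustrative assumptions.

```python
# Minimal classical Gaussian process regression sketch (the classical counterpart
# of the quantum subroutine described above).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
x_train = rng.uniform(0, 10, size=(40, 1))
y_train = np.sin(x_train).ravel() + 0.1 * rng.normal(size=40)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(0.01))
gpr.fit(x_train, y_train)

x_test = np.linspace(0, 10, 5).reshape(-1, 1)
mean, std = gpr.predict(x_test, return_std=True)
print(mean, std)
```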

  6. An Autonomous Connectivity Restoration Algorithm Based on Finite State Machine for Wireless Sensor-Actor Networks

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2018-01-01

    With the development of autonomous unmanned intelligent systems, such as the unmanned boats, unmanned planes and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and the network connectivity will be prone to be lost. Network self-healing mechanisms have become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node will be relocated to replace the abnormal node. In the case of multiple node abnormality, a regional network restoration algorithm is introduced. It is designed to reduce the overhead of node movements while restoration happens. Simulation results indicate the proposed algorithm has better performance on the total moving distance and the number of total relocated nodes compared with some other representative restoration algorithms.
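
    A minimal sketch of the restoration idea (see also the two records that follow): flag critical nodes and relocate the nearest non-critical node when a critical node fails. The sketch uses networkx articulation points as the criticality test instead of the paper's finite state machine, and assumes known node positions and a communication range.

```python
# Minimal sketch: critical-node detection and single-node relocation.
import networkx as nx

def build_graph(positions, comm_range):
    """Connectivity graph: an edge exists when two nodes are within communication range."""
    g = nx.Graph()
    g.add_nodes_from(positions)
    for a, pa in positions.items():
        for b, pb in positions.items():
            if a < b and ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5 <= comm_range:
                g.add_edge(a, b)
    return g

def restore(positions, failed, comm_range=1.5):
    g = build_graph(positions, comm_range)
    critical = set(nx.articulation_points(g))      # cut vertices stand in for "critical" nodes
    if failed not in critical:
        del positions[failed]
        return positions                           # losing a non-critical node keeps connectivity
    # Relocate the nearest non-critical node to the failed node's position.
    candidates = [n for n in g.nodes if n != failed and n not in critical]
    target = positions[failed]
    mover = min(candidates,
                key=lambda n: (positions[n][0] - target[0]) ** 2 + (positions[n][1] - target[1]) ** 2)
    positions[mover] = target
    del positions[failed]
    return positions

nodes = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (3, 0), 4: (1, 1)}
print(restore(nodes, failed=2))
```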

  7. An Autonomous Connectivity Restoration Algorithm Based on Finite State Machine for Wireless Sensor-Actor Networks.

    Science.gov (United States)

    Zhang, Ying; Wang, Jun; Hao, Guan

    2018-01-08

    With the development of autonomous unmanned intelligent systems, such as the unmanned boats, unmanned planes and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and the network connectivity will be prone to be lost. Network self-healing mechanisms have become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node will be relocated to replace the abnormal node. In the case of multiple node abnormality, a regional network restoration algorithm is introduced. It is designed to reduce the overhead of node movements while restoration happens. Simulation results indicate the proposed algorithm has better performance on the total moving distance and the number of total relocated nodes compared with some other representative restoration algorithms.

  8. An Autonomous Connectivity Restoration Algorithm Based on Finite State Machine for Wireless Sensor-Actor Networks

    Science.gov (United States)

    Zhang, Ying; Wang, Jun; Hao, Guan

    2018-01-01

    With the development of autonomous unmanned intelligent systems, such as the unmanned boats, unmanned planes and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and the network connectivity will be prone to be lost. Network self-healing mechanisms have become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node will be relocated to replace the abnormal node. In the case of multiple node abnormality, a regional network restoration algorithm is introduced. It is designed to reduce the overhead of node movements while restoration happens. Simulation results indicate the proposed algorithm has better performance on the total moving distance and the number of total relocated nodes compared with some other representative restoration algorithms. PMID:29316702

  9. Symmetric Tensor Decomposition

    DEFF Research Database (Denmark)

    Brachat, Jerome; Comon, Pierre; Mourrain, Bernard

    2010-01-01

    We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....

  10. Thermal decomposition of beryllium perchlorate tetrahydrate

    International Nuclear Information System (INIS)

    Berezkina, L.G.; Borisova, S.I.; Tamm, N.S.; Novoselova, A.V.

    1975-01-01

    Thermal decomposition of Be(ClO4)2·4H2O was studied by the differential flow technique in a helium stream. The kinetics was followed by an exchange reaction of the perchloric acid appearing on decomposition with potassium carbonate. The rate of CO2 liberation in this process was recorded by a heat conductivity detector. The exchange reaction yielding CO2 is quantitative, it is not the limiting step and it does not distort the kinetics of the perchlorate decomposition. The solid products of decomposition were studied by infrared and NMR spectroscopy, roentgenography, thermography and chemical analysis. A mechanism suggested for the decomposition involves intermediate formation of a hydroxyperchlorate: Be(ClO4)2·4H2O → Be(OH)ClO4 + HClO4 + 3H2O; Be(OH)ClO4 → BeO + HClO4. Decomposition is accompanied by melting of the sample. The mechanism of decomposition is hydrolytic. At room temperature the hydroxyperchlorate is a thick syrup-like compound that crystallizes after long storage.

  11. Decomposition of gas-phase diphenylether at 473 K by electron beam generated plasma

    CERN Document Server

    Kim, H H; Kojima, T

    2003-01-01

    Decomposition of gas-phase diphenylether (DPE) at levels of several parts per million by volume (ppmv) was studied as a model compound of dioxin using a flow-type electron-beam reactor at an elevated temperature of 473 K. The ground-state oxygen (3P) atoms played an important role in the decomposition of DPE, resulting in the formation of 1,4-hydroquinone (HQ) as a major ring-retaining product. The high yield of hydroquinone indicated that breakage of the ether bond (C-O) is important in the initial step of DPE decomposition. Ring cleavage products were CO and CO2, and NO2 was also produced from the background N2-O2. The sum of the yields of HQ, CO2 and CO accounts for over 90% of the removed DPE. Hydroxyl radicals (OH) were less important in the dilute DPE decomposition at a high water content, and were mostly consumed by recombination reactions to form hydrogen peroxide. The smaller the initial DPE concentrations, the higher the decomposition efficiency and the lower the yield...

  12. Clustering Categories in Support Vector Machines

    DEFF Research Database (Denmark)

    Carrizosa, Emilio; Nogales-Gómez, Amaya; Morales, Dolores Romero

    2017-01-01

    The support vector machine (SVM) is a state-of-the-art method in supervised classification. In this paper the Cluster Support Vector Machine (CLSVM) methodology is proposed with the aim to increase the sparsity of the SVM classifier in the presence of categorical features, leading to a gain in in...

  13. Asynchronous Task-Based Polar Decomposition on Manycore Architectures

    KAUST Repository

    Sukkari, Dalal

    2016-10-25

    This paper introduces the first asynchronous, task-based implementation of the polar decomposition on manycore architectures. Based on a new formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original and hostile LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is also capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been severely weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations (i.e., Intel MKL and Elemental) for the polar decomposition on latest shared-memory vendors' systems (i.e., Intel Haswell/Broadwell/Knights Landing, NVIDIA K80/P100 GPUs and IBM Power8), while maintaining high numerical accuracy.
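
    For intuition about what QDWH computes, the sketch below is a plain dense Halley iteration for the polar decomposition A = UH; it uses the fixed weights (3, 1, 3) rather than QDWH's dynamically chosen weights and QR-based updates, and it runs serially rather than as a StarPU task graph.

```python
# Minimal sketch: polar decomposition via (non-dynamically-weighted) Halley iteration.
import numpy as np

def polar_halley(a, tol=1e-12, max_iter=50):
    alpha = np.linalg.norm(a, 2)
    x = a / alpha                                   # scale so all singular values are <= 1
    eye = np.eye(a.shape[1])
    for _ in range(max_iter):
        w = x.conj().T @ x
        x_new = x @ (3 * eye + w) @ np.linalg.inv(eye + 3 * w)
        if np.linalg.norm(x_new - x, "fro") < tol:
            x = x_new
            break
        x = x_new
    u = x                                           # orthogonal polar factor
    h = u.conj().T @ a                              # symmetric positive semi-definite factor
    return u, 0.5 * (h + h.conj().T)

a = np.random.default_rng(5).random((6, 6))
u, h = polar_halley(a)
print(np.allclose(u @ h, a), np.allclose(u.T @ u, np.eye(6)))
```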

  14. 22 CFR 121.10 - Forgings, castings and machined bodies.

    Science.gov (United States)

    2010-04-01

    22 CFR Foreign Relations, The United States Munitions List, Enumeration of Articles, § 121.10 Forgings, castings and machined bodies (2010-04-01). Articles on the U.S. Munitions List include articles in a partially completed state (such as forgings...

  15. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)]

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  16. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    Science.gov (United States)

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and then achieve a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.

  17. Proof of the 1-factorization and Hamilton decomposition conjectures

    CERN Document Server

    Csaba, Béla; Lo, Allan; Osthus, Deryk; Treglown, Andrew

    2016-01-01

    In this paper the authors prove the following results (via a unified approach) for all sufficiently large $n$: (i) [1-factorization conjecture] Suppose that $n$ is even and $D \geq 2\lceil n/4\rceil - 1$. Then every $D$-regular graph $G$ on $n$ vertices has a decomposition into perfect matchings. Equivalently, $\chi'(G)=D$. (ii) [Hamilton decomposition conjecture] Suppose that $D \ge \lfloor n/2 \rfloor$. Then every $D$-regular graph $G$ on $n$ vertices has a decomposition into Hamilton cycles and at most one perfect matching. (iii) [Optimal packings of Hamilton cycles] Suppose that $G$ is a graph on $n$ vertices with minimum degree $\delta \ge n/2$. Then $G$ contains at least $\mathrm{reg}_{\mathrm{even}}(n,\delta)/2 \ge (n-2)/8$ edge-disjoint Hamilton cycles. Here $\mathrm{reg}_{\mathrm{even}}(n,\delta)$ denotes the degree of the largest even-regular spanning subgraph one can guarantee in a graph on $n$ vertices with minimum degree $\delta$. (i) was first explicitly stated by Chetwynd and Hilton. (ii) and the special case $\delta = \lceil n/2 \rceil$ of (iii) answe...

  18. Reaction mechanism of reductive decomposition of FGD gypsum with anthracite

    International Nuclear Information System (INIS)

    Zheng, Da; Lu, Hailin; Sun, Xiuyun; Liu, Xiaodong; Han, Weiqing; Wang, Lianjun

    2013-01-01

    Highlights: • The reaction mechanism differs with the molar ratio of C/CaSO4. • The yield of CaO rises with an increase in temperature. • The optimal ratio is C/CaSO4 = 1.2:1. • The decomposition process is mainly an apparent solid–solid reaction with a liquid phase involved. - Abstract: The decomposition reaction between flue gas desulfurization (FGD) gypsum and anthracite is complex and depends on the reaction conditions and atmosphere. In this study, thermogravimetric analysis with Fourier transform infrared spectroscopy (TGA-FTIR), X-ray diffraction (XRD), scanning electron microscopy (SEM) and experiments in a tubular reactor were used to characterize the decomposition reaction in a nitrogen atmosphere under different conditions. The reaction mechanism analysis showed that the decomposition reaction process and mechanism changed when the molar proportion of C/CaSO4 was changed. The experimental results showed that an appropriate increase in the C/CaSO4 proportion and higher temperatures favored the formation of the main product, CaO, which helps in understanding the solid-state reaction mechanism. Via kinetic analysis of the reaction between anthracite and FGD gypsum at the optimal molar ratio of C/CaSO4, the mechanism model of the reaction was confirmed and the decomposition process was shown to be a two-step reaction in accordance with an apparent solid–solid reaction.

  19. Three-photon polarization ququarts: polarization, entanglement and Schmidt decompositions

    International Nuclear Information System (INIS)

    Fedorov, M V; Miklin, N I

    2015-01-01

    We consider polarization states of three photons, propagating collinearly and having equal given frequencies but with arbitrary distributed horizontal or vertical polarizations of photons. A general form of such states is a superposition of four basic three-photon polarization modes, to be referred to as the three-photon polarization ququarts (TPPQ). All such states can be considered as consisting of one- and two-photon parts, which can be entangled with each other. The degrees of entanglement and polarization, as well as the Schmidt decomposition and Stokes vectors of TPPQ are found and discussed. (paper)

  20. Decomposition of Multi-player Games

    Science.gov (United States)

    Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael

    Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.

  1. Neural-Network Quantum States, String-Bond States, and Chiral Topological States

    Science.gov (United States)

    Glasser, Ivan; Pancotti, Nicola; August, Moritz; Rodriguez, Ivan D.; Cirac, J. Ignacio

    2018-01-01

    Neural-network quantum states have recently been introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between neural-network quantum states in the form of restricted Boltzmann machines and some classes of tensor-network states in arbitrary dimensions. In particular, we demonstrate that short-range restricted Boltzmann machines are entangled plaquette states, while fully connected restricted Boltzmann machines are string-bond states with a nonlocal geometry and low bond dimension. These results shed light on the underlying architecture of restricted Boltzmann machines and their efficiency at representing many-body quantum states. String-bond states also provide a generic way of enhancing the power of neural-network quantum states and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of tensor networks and the efficiency of neural-network quantum states into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional tensor networks, we show that, because of their nonlocal geometry, neural-network quantum states and their string-bond-state extension can describe a lattice fractional quantum Hall state exactly. In addition, we provide numerical evidence that neural-network quantum states can approximate a chiral spin liquid with better accuracy than entangled plaquette states and local string-bond states. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of string-bond states as a tool in more traditional machine-learning applications.
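
    A minimal sketch of the restricted-Boltzmann-machine wave-function Ansatz discussed above: after tracing out the hidden units, the unnormalized amplitude of a spin configuration s in {-1, +1}^n is exp(a·s) multiplied by a product of 2cosh terms. The parameter values below are random placeholders, not a trained state.

```python
# Minimal sketch of an RBM wave-function amplitude for a spin configuration.
import numpy as np

def rbm_amplitude(spins, a, b, w):
    """Unnormalized amplitude: exp(a.s) * prod_j 2cosh(b_j + (W s)_j)."""
    theta = b + w @ spins
    return np.exp(a @ spins) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(6)
n_visible, n_hidden = 6, 12
a = 0.1 * rng.normal(size=n_visible)
b = 0.1 * rng.normal(size=n_hidden)
w = 0.1 * rng.normal(size=(n_hidden, n_visible))

spins = rng.choice([-1.0, 1.0], size=n_visible)
print(rbm_amplitude(spins, a, b, w))
```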

  2. NearFar: A computer program for nearside farside decomposition of heavy-ion elastic scattering amplitude

    Science.gov (United States)

    Cha, Moon Hoe

    2007-02-01

    The NearFar program is a package for carrying out an interactive nearside-farside decomposition of heavy-ion elastic scattering amplitude. The program is implemented in Java to perform numerical operations on the nearside and farside angular distributions. It contains a graphical display interface for the numerical results. A test run has been applied to the elastic 16O + 28Si scattering at E = 1503 MeV. Program summary: Title of program: NearFar. Catalogue identifier: ADYP_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYP_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computers: designed for any machine capable of running Java, developed on PC-Pentium-4. Operating systems under which the program has been tested: Microsoft Windows XP (Home Edition). Program language used: Java. Number of bits in a word: 64. Memory required to execute with typical data: case dependent. No. of lines in distributed program, including test data, etc.: 3484. Number of bytes in distributed program, including test data, etc.: 142 051. Distribution format: tar.gz. Other software required: a Java runtime interpreter, or the Java Development Kit, version 5.0. Nature of physical problem: interactive nearside-farside decomposition of heavy-ion elastic scattering amplitude. Method of solution: the user must supply an external data file or PPSM parameters from which theoretical values of the quantities to be decomposed are calculated. Typical running time: problem dependent; in a test run, it is about 35 s on a 2.40 GHz Intel P4-processor machine.

  3. Analysis of machining and machine tools

    CERN Document Server

    Liang, Steven Y

    2016-01-01

    This book delivers the fundamental science and mechanics of machining and machine tools by presenting systematic and quantitative knowledge in the form of process mechanics and physics. It gives readers a solid command of machining science and engineering, and familiarizes them with the geometry and functionality requirements of creating parts and components in today’s markets. The authors address traditional machining topics, such as: single and multiple point cutting processes; grinding; components accuracy and metrology; shear stress in cutting; cutting temperature and analysis; chatter. They also address non-traditional machining, such as: electrical discharge machining; electrochemical machining; laser and electron beam machining. A chapter on biomedical machining is also included. This book is appropriate for advanced undergraduate and graduate mechanical engineering students, manufacturing engineers, and researchers. Each chapter contains examples, exercises and their solutions, and homework problems that re...

  4. Asynchronous Task-Based Polar Decomposition on Single Node Manycore Architectures

    KAUST Repository

    Sukkari, Dalal E.; Ltaief, Hatem; Faverge, Mathieu; Keyes, David E.

    2017-01-01

    This paper introduces the first asynchronous, task-based formulation of the polar decomposition and its corresponding implementation on manycore architectures. Based on a formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations for the polar decomposition on latest shared-memory vendors' systems, while maintaining numerical accuracy.

  5. Asynchronous Task-Based Polar Decomposition on Single Node Manycore Architectures

    KAUST Repository

    Sukkari, Dalal E.

    2017-09-29

    This paper introduces the first asynchronous, task-based formulation of the polar decomposition and its corresponding implementation on manycore architectures. Based on a formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations for the polar decomposition on latest shared-memory vendors' systems, while maintaining numerical accuracy.

  6. Challenges for coexistence of machine to machine and human to human applications in mobile network

    DEFF Research Database (Denmark)

    Sanyal, R.; Cianca, E.; Prasad, Ramjee

    2012-01-01

    A key factor for the evolution of the mobile networks towards 4G is to bring to fruition high bandwidth per mobile node. Eventually, due to the advent of a new class of applications, namely, Machine-to-Machine, we foresee new challenges where bandwidth per user is no longer the primary driver...... be evolved to address various nuances of the mobile devices used by humans and machines. The bigger question is as follows. Is the state-of-the-art mobile network designed optimally to cater to both Human-to-Human and Machine-to-Machine applications? This paper presents the primary challenges.... As an immediate impact of the high penetration of M2M devices, we envisage a surge in the signaling messages for mobility and location management. The cell size will shrink due to high tele-density, resulting in even more signaling messages related to handoff and location updates. The mobile network should

  7. Stochastic thermodynamics, fluctuation theorems and molecular machines

    International Nuclear Information System (INIS)

    Seifert, Udo

    2012-01-01

    Stochastic thermodynamics as reviewed here systematically provides a framework for extending the notions of classical thermodynamics such as work, heat and entropy production to the level of individual trajectories of well-defined non-equilibrium ensembles. It applies whenever a non-equilibrium process is still coupled to one (or several) heat bath(s) of constant temperature. Paradigmatic systems are single colloidal particles in time-dependent laser traps, polymers in external flow, enzymes and molecular motors in single molecule assays, small biochemical networks and thermoelectric devices involving single electron transport. For such systems, a first-law like energy balance can be identified along fluctuating trajectories. For a basic Markovian dynamics implemented either on the continuum level with Langevin equations or on a discrete set of states as a master equation, thermodynamic consistency imposes a local-detailed balance constraint on noise and rates, respectively. Various integral and detailed fluctuation theorems, which are derived here in a unifying approach from one master theorem, constrain the probability distributions for work, heat and entropy production depending on the nature of the system and the choice of non-equilibrium conditions. For non-equilibrium steady states, particularly strong results hold like a generalized fluctuation–dissipation theorem involving entropy production. Ramifications and applications of these concepts include optimal driving between specified states in finite time, the role of measurement-based feedback processes and the relation between dissipation and irreversibility. Efficiency and, in particular, efficiency at maximum power can be discussed systematically beyond the linear response regime for two classes of molecular machines, isothermal ones such as molecular motors, and heat engines such as thermoelectric devices, using a common framework based on a cycle decomposition of entropy production. (review article)

  8. Decomposition of diesel oil by various microorganisms

    Energy Technology Data Exchange (ETDEWEB)

    Suess, A; Netzsch-Lehner, A

    1969-01-01

    Previous experiments demonstrated the decomposition of diesel oil in different soils. In this experiment the decomposition of 14C-n-hexadecane-labelled diesel oil by special microorganisms was studied. The results were as follows: (1) In the experimental soils the microorganisms Mycoccus ruber, Mycobacterium luteum and Trichoderma hamatum are responsible for the diesel oil decomposition. (2) By adding microorganisms to the soil, an increase in the decomposition rate was found only at the beginning of the experiments. (3) Maximum decomposition of diesel oil was reached 2-3 weeks after incubation.

  9. Riemann-Theta Boltzmann Machine arXiv

    CERN Document Server

    Krefl, Daniel; Haghighat, Babak; Kahlen, Jens

    A general Boltzmann machine with continuous visible and discrete integer valued hidden states is introduced. Under mild assumptions about the connection matrices, the probability density function of the visible units can be solved for analytically, yielding a novel parametric density function involving a ratio of Riemann-Theta functions. The conditional expectation of a hidden state for given visible states can also be calculated analytically, yielding a derivative of the logarithmic Riemann-Theta function. The conditional expectation can be used as activation function in a feedforward neural network, thereby increasing the modelling capacity of the network. Both the Boltzmann machine and the derived feedforward neural network can be successfully trained via standard gradient- and non-gradient-based optimization techniques.

  10. Short-Term Distribution System State Forecast Based on Optimal Synchrophasor Sensor Placement and Extreme Learning Machine

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang; Zhang, Yingchen

    2016-11-14

    This paper proposes an approach for distribution system state forecasting, which aims to provide an accurate and high speed state forecast with an optimal synchrophasor sensor placement (OSSP) based state estimator and an extreme learning machine (ELM) based forecaster. Specifically, considering the sensor installation cost and measurement error, an OSSP algorithm is proposed to reduce the number of synchrophasor sensors and keep the whole distribution system numerically and topologically observable. Then, the weighted least square (WLS) based system state estimator is used to produce the training data for the proposed forecaster. Traditionally, the artificial neural network (ANN) and support vector regression (SVR) are widely used in forecasting due to their nonlinear modeling capabilities. However, the ANN involves a heavy computation load and the best parameters for SVR are difficult to obtain. In this paper, the ELM, which overcomes these drawbacks, is used to forecast the future system states from the historical system states. The proposed approach is effective and accurate based on the testing results.
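
    A minimal sketch of the extreme learning machine used as the forecaster: random input weights and biases, a tanh hidden layer, and output weights obtained in one least-squares solve. The lag length, hidden-layer size and toy series are illustrative assumptions.

```python
# Minimal extreme learning machine (ELM) sketch for one-step-ahead forecasting.
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, x, y):
        self.w_in = self.rng.normal(size=(x.shape[1], self.n_hidden))
        self.bias = self.rng.normal(size=self.n_hidden)
        h = np.tanh(x @ self.w_in + self.bias)               # random hidden layer
        self.w_out, *_ = np.linalg.lstsq(h, y, rcond=None)   # analytic output weights
        return self

    def predict(self, x):
        return np.tanh(x @ self.w_in + self.bias) @ self.w_out

# Forecast the next value from the previous 4 values of a toy series.
series = np.sin(np.linspace(0, 20, 500))
lags = 4
x = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
model = ELM().fit(x[:400], y[:400])
print(np.mean((model.predict(x[400:]) - y[400:]) ** 2))      # test mean squared error
```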

  11. Mechanistic Aspects of the Thermal Decomposition of Dicyclopentadienyl Titanium(IV) Dibenzyl

    NARCIS (Netherlands)

    Boekel, C.P.; Teuben, J.H.; Liefde Meijer, H.J. de

    1975-01-01

    The thermal decomposition of dicyclopentadienyltitanium(IV) dibenzyl in the solid state and in hydrocarbon solvents has been investigated. The compound decomposes via intermolecular abstraction of hydrogen atoms from the cyclopentadienyl rings with quantitative formation of toluene. The reaction was

  12. Finite Element Method in Machining Processes

    CERN Document Server

    Markopoulos, Angelos P

    2013-01-01

    Finite Element Method in Machining Processes provides a concise study on the way the Finite Element Method (FEM) is used in the case of manufacturing processes, primarily in machining. The basics of this kind of modeling are detailed to create a reference that will provide guidelines for those who start to study this method now, but also for scientists already involved in FEM and want to expand their research. A discussion on FEM, formulations and techniques currently in use is followed up by machining case studies. Orthogonal cutting, oblique cutting, 3D simulations for turning and milling, grinding, and state-of-the-art topics such as high speed machining and micromachining are explained with relevant examples. This is all supported by a literature review and a reference list for further study. As FEM is a key method for researchers in the manufacturing and especially in the machining sector, Finite Element Method in Machining Processes is a key reference for students studying manufacturing processes but al...

  13. Superconducting three element synchronous ac machine

    International Nuclear Information System (INIS)

    Boyer, L.; Chabrerie, J.P.; Mailfert, A.; Renard, M.

    1975-01-01

    There is growing interest in ac superconducting machines. Of the several new concepts proposed for these machines in recent years, one of the most promising appears to be the "three elements" concept, which allows cancellation of the torque acting on the superconducting field winding, thus overcoming some of the major constraints. This concept leads to an induction-type generator device. A synchronous, three-element superconducting ac machine is described, in which a room-temperature, dc-fed rotating winding is inserted between the superconducting field winding and the ac armature. The steady-state machine theory is developed, the flux linkages are established, and the torque expressions are derived. The condition for zero torque on the field winding, as well as the resulting electrical equations of the machine, are given. The theoretical behavior of the machine is studied using phasor diagrams, assuming either a constant-current or a constant-flux condition for the superconducting field winding.

  14. Multilinear operators for higher-order decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
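
    As a rough illustration of the two operators described above, the sketch below (in Python/NumPy, which the report itself does not use) implements an n-mode product applied over every mode (the Tucker operator) and a sum of column-wise outer products (the Kruskal operator). The tensor sizes and ranks are arbitrary examples.

      import numpy as np

      def mode_n_product(T, M, n):
          """n-mode product: multiply tensor T along mode n by matrix M."""
          Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)        # unfold along mode n
          out = M @ Tn
          new_shape = (M.shape[0],) + tuple(np.delete(T.shape, n))
          return np.moveaxis(out.reshape(new_shape), 0, n)         # fold back

      def tucker_operator(G, factors):
          """Apply an n-mode product for every mode of the core tensor G."""
          T = G
          for n, M in enumerate(factors):
              T = mode_n_product(T, M, n)
          return T

      def kruskal_operator(factors):
          """Sum of outer products of corresponding columns of the factor matrices."""
          R = factors[0].shape[1]
          T = np.zeros(tuple(M.shape[0] for M in factors))
          for r in range(R):
              outer = factors[0][:, r]
              for M in factors[1:]:
                  outer = np.multiply.outer(outer, M[:, r])
              T += outer
          return T

      # Toy usage with arbitrary sizes.
      G = np.random.rand(2, 3, 4)                                   # Tucker core
      A, B, C = np.random.rand(5, 2), np.random.rand(6, 3), np.random.rand(7, 4)
      X_tucker = tucker_operator(G, [A, B, C])                      # 5 x 6 x 7 tensor
      X_cp = kruskal_operator([np.random.rand(5, 3), np.random.rand(6, 3), np.random.rand(7, 3)])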

  15. Measurements for stresses in machine components

    CERN Document Server

    Yakovlev, V F

    1964-01-01

    Measurements for Stresses in Machine Components focuses on the state of stress and strain of components and members, which determines the service life and strength of machines and structures. This book is divided into four chapters. Chapter I describes the physical basis of several methods of measuring strains, which includes strain gauges, photoelasticity, X-ray diffraction, brittle coatings, and dividing grids. The basic concepts of the electric strain gauge method for measuring stresses inside machine components are covered in Chapter II. Chapter III elaborates on the results of experim

  16. Decomposition of tetrachloroethylene by ionizing radiation

    International Nuclear Information System (INIS)

    Hakoda, T.; Hirota, K.; Hashimoto, S.

    1998-01-01

    The decomposition of tetrachloroethylene and other chloroethenes by ionizing radiation was examined to obtain information relevant to the treatment of industrial off-gas. Model gases (air containing chloroethenes) were confined in batch reactors and irradiated with an electron beam and with gamma rays. The G-values of decomposition decreased in the order tetrachloro- > trichloro- > trans-dichloro- > cis-dichloro- > monochloroethylene for electron beam irradiation, and tetrachloro-, trichloro-, trans-dichloro- > cis-dichloro- > monochloroethylene for gamma ray irradiation. For tetrachloro-, trichloro- and trans-dichloroethylene, the G-values of decomposition under EB irradiation increased with the number of chlorine atoms in the molecule, while those under gamma ray irradiation remained almost constant. The G-value of decomposition for tetrachloroethylene under EB irradiation was the largest among all the chloroethenes. To examine the effect of initial concentration on the G-value of decomposition, air containing 300 to 1,800 ppm of tetrachloroethylene was irradiated with an electron beam and with gamma rays. The G-values of decomposition under both types of irradiation increased with the initial concentration, and those under electron beam irradiation were two times larger than those under gamma ray irradiation

  17. Decomposition of Sodium Tetraphenylborate

    International Nuclear Information System (INIS)

    Barnes, M.J.

    1998-01-01

    The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on determining the additives and/or variables that influence NaTPB decomposition. This document describes work aimed at providing a better understanding of the relationship of copper(II), solution temperature, and solution pH to NaTPB stability

  18. Proper Orthogonal Decomposition and Dynamic Mode Decomposition in the Right Ventricle after Repair of Tetralogy of Fallot

    Science.gov (United States)

    Mikhail, Amanda; Kadem, Lyes; di Labbio, Giuseppe

    2017-11-01

    Tetralogy of Fallot accounts for 5% of all cyanotic congenital heart defects, making it the most common such defect today; approximately 1660 cases per year are seen in the United States alone. Although the defect is repaired at a very young age, symptoms such as pulmonary valve regurgitation tend to arise two to three decades after the initial operation. Currently, little is understood about the blood flow in the right ventricle of the heart when regurgitation is present. In this study, the interaction between the diastolic interventricular flow and the regurgitating pulmonary valve is investigated. This experimental work aims to simulate and characterize this detrimental flow in a right-heart simulator using time-resolved particle image velocimetry. Seven severities of regurgitation were simulated. Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) revealed intricate coherent flow structures. With increasing regurgitation severity, the modal energies from POD become more distributed among the modes, while DMD reveals more unstable modes. This study can contribute to further investigation of the detrimental effects of right ventricle regurgitation.
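
    For readers unfamiliar with how the POD modal energies mentioned above are usually obtained, the following sketch computes POD modes and their energy fractions from a snapshot matrix via the singular value decomposition. The grid size, snapshot count, and random data are placeholders, not the experimental PIV fields.

      import numpy as np

      # Snapshot matrix: each column is one flattened velocity-field snapshot.
      rng = np.random.default_rng(1)
      snapshots = rng.standard_normal((5000, 120))                # 5000 grid points x 120 time steps
      fluct = snapshots - snapshots.mean(axis=1, keepdims=True)   # POD of the fluctuating field

      # POD via the thin SVD: columns of U are spatial modes, singular values give modal energy.
      U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
      modal_energy = s**2 / np.sum(s**2)                          # fraction of fluctuation energy per mode
      print("energy captured by the first 4 modes:", modal_energy[:4].sum())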

  19. Acceleration of saddle-point searches with machine learning

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, Andrew A., E-mail: andrew-peterson@brown.edu [School of Engineering, Brown University, Providence, Rhode Island 02912 (United States)

    2016-08-21

    In atomistic simulations, the location of the saddle point on the potential-energy surface (PES) gives important information on transitions between local minima, for example, via transition-state theory. However, the search for saddle points often involves hundreds or thousands of ab initio force calls, which are typically all done at full accuracy. This results in the vast majority of the computational effort being spent calculating the electronic structure of states not important to the researcher, and very little time performing the calculation of the saddle point state itself. In this work, we describe how machine learning (ML) can reduce the number of intermediate ab initio calculations needed to locate saddle points. Since machine-learning models can learn from, and thus mimic, atomistic simulations, the saddle-point search can be conducted rapidly in the machine-learning representation. The saddle-point prediction can then be verified by an ab initio calculation; if it is incorrect, this strategically has identified regions of the PES where the machine-learning representation has insufficient training data. When these training data are used to improve the machine-learning model, the estimates greatly improve. This approach can be systematized, and in two simple example problems we demonstrate a dramatic reduction in the number of ab initio force calls. We expect that this approach and future refinements will greatly accelerate searches for saddle points, as well as other searches on the potential energy surface, as machine-learning methods see greater adoption by the atomistics community.

  20. Acceleration of saddle-point searches with machine learning

    International Nuclear Information System (INIS)

    Peterson, Andrew A.

    2016-01-01

    In atomistic simulations, the location of the saddle point on the potential-energy surface (PES) gives important information on transitions between local minima, for example, via transition-state theory. However, the search for saddle points often involves hundreds or thousands of ab initio force calls, which are typically all done at full accuracy. This results in the vast majority of the computational effort being spent calculating the electronic structure of states not important to the researcher, and very little time performing the calculation of the saddle point state itself. In this work, we describe how machine learning (ML) can reduce the number of intermediate ab initio calculations needed to locate saddle points. Since machine-learning models can learn from, and thus mimic, atomistic simulations, the saddle-point search can be conducted rapidly in the machine-learning representation. The saddle-point prediction can then be verified by an ab initio calculation; if it is incorrect, this strategically has identified regions of the PES where the machine-learning representation has insufficient training data. When these training data are used to improve the machine-learning model, the estimates greatly improve. This approach can be systematized, and in two simple example problems we demonstrate a dramatic reduction in the number of ab initio force calls. We expect that this approach and future refinements will greatly accelerate searches for saddle points, as well as other searches on the potential energy surface, as machine-learning methods see greater adoption by the atomistics community.

  1. Acceleration of saddle-point searches with machine learning.

    Science.gov (United States)

    Peterson, Andrew A

    2016-08-21

    In atomistic simulations, the location of the saddle point on the potential-energy surface (PES) gives important information on transitions between local minima, for example, via transition-state theory. However, the search for saddle points often involves hundreds or thousands of ab initio force calls, which are typically all done at full accuracy. This results in the vast majority of the computational effort being spent calculating the electronic structure of states not important to the researcher, and very little time performing the calculation of the saddle point state itself. In this work, we describe how machine learning (ML) can reduce the number of intermediate ab initio calculations needed to locate saddle points. Since machine-learning models can learn from, and thus mimic, atomistic simulations, the saddle-point search can be conducted rapidly in the machine-learning representation. The saddle-point prediction can then be verified by an ab initio calculation; if it is incorrect, this strategically has identified regions of the PES where the machine-learning representation has insufficient training data. When these training data are used to improve the machine-learning model, the estimates greatly improve. This approach can be systematized, and in two simple example problems we demonstrate a dramatic reduction in the number of ab initio force calls. We expect that this approach and future refinements will greatly accelerate searches for saddle points, as well as other searches on the potential energy surface, as machine-learning methods see greater adoption by the atomistics community.

  2. Thermal decomposition of γ-irradiated lead nitrate

    International Nuclear Information System (INIS)

    Nair, S.M.K.; Kumar, T.S.S.

    1990-01-01

    The thermal decomposition of unirradiated and γ-irradiated lead nitrate was studied by the gas evolution method. The decomposition proceeds through initial gas evolution, a short induction period, an acceleratory stage and a decay stage. The acceleratory and decay stages follow the Avrami-Erofeev equation. Irradiation enhances the decomposition but does not affect the shape of the decomposition curve. (author) 10 refs.; 7 figs.; 2 tabs
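
    For reference, the Avrami-Erofeev rate law invoked above is commonly written as follows, with alpha the decomposed fraction, k the rate constant, t time, and n an exponent obtained from the fit (the specific n values found in this study are not given in the abstract):

      [-\ln(1-\alpha)]^{1/n} = k\,t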

  3. Decomposing Nekrasov decomposition

    International Nuclear Information System (INIS)

    Morozov, A.; Zenkevich, Y.

    2016-01-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  4. Decomposing Nekrasov decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); Institute for Information Transmission Problems,19-1 Bolshoy Karetniy, Moscow, 127051 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Zenkevich, Y. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Institute for Nuclear Research of Russian Academy of Sciences,6a Prospekt 60-letiya Oktyabrya, Moscow, 117312 (Russian Federation)

    2016-02-16

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  5. DOM composition and transformation in boreal forest soils: The effects of temperature and organic-horizon decomposition state

    Science.gov (United States)

    O’Donnell, Jonathan A.; Aiken, George R.; Butler, Kenna D.; Guillemette, Francois; Podgorski, David C.; Spencer, Robert G. M.

    2016-01-01

    The boreal region stores large amounts of organic carbon (C) in organic-soil horizons, which are vulnerable to destabilization via warming and disturbance. Decomposition of soil organic matter (SOM) contributes to the production and turnover of dissolved organic matter (DOM). While temperature is a primary control on rates of SOM and DOM cycling, little is known about temperature effects on DOM composition in soil leachate. Here we conducted a 30 day incubation to examine the effects of temperature (20 versus 5°C) and SOM decomposition state (moss versus fibric versus amorphous horizons) on DOM composition in organic soils of interior Alaska. We characterized DOM using bulk dissolved organic C (DOC) concentration, chemical fractionation, optical properties, and ultrahigh-resolution mass spectrometry. We observed an increase in DOC concentration and DOM aromaticity in the 20°C treatment compared to the 5°C treatment. Leachate from fibric horizons had higher DOC concentration than shallow moss or deep amorphous horizons. We also observed chemical shifts in DOM leachate over time, including increases in hydrophobic organic acids, polyphenols, and condensed aromatics and decreases in low-molecular weight hydrophilic compounds and aliphatics. We compared ultrahigh-resolution mass spectrometry and optical data and observed strong correlations between polyphenols, condensed aromatics, SUVA254, and humic-like fluorescence intensities. These findings suggest that biolabile DOM was preferentially mineralized, and the magnitude of this transformation was determined by kinetics (i.e., temperature) and substrate quality (i.e., soil horizon). With future warming, our findings indicate that organic soils may release higher concentrations of aromatic DOM to aquatic ecosystems.

  6. Freeman-Durden Decomposition with Oriented Dihedral Scattering

    Directory of Open Access Journals (Sweden)

    Yan Jian

    2014-10-01

    Full Text Available In this paper, when the azimuth direction of a polarimetric Synthetic Aperture Radar (SAR) differs from the planting direction of crops, the double bounce of the incident electromagnetic waves from the terrain surface to the growing crops is investigated and compared with the normal double bounce. An oriented dihedral scattering model is developed to explain the investigated double bounce and is introduced into the Freeman-Durden decomposition. The decomposition algorithm corresponding to the improved decomposition is then proposed. Airborne polarimetric SAR data for agricultural land covering two flight tracks are chosen to validate the algorithm; the decomposition results show that, for agricultural vegetated land, the improved Freeman-Durden decomposition has the advantage of increasing the decomposition coherency among the polarimetric SAR data along the different flight tracks.

  7. Young Children's Thinking About Decomposition: Early Modeling Entrees to Complex Ideas in Science

    Science.gov (United States)

    Ero-Tolliver, Isi; Lucas, Deborah; Schauble, Leona

    2013-10-01

    This study was part of a multi-year project on the development of elementary students' modeling approaches to understanding the life sciences. Twenty-three first grade students conducted a series of coordinated observations and investigations on decomposition, a topic that is rarely addressed in the early grades. The instruction included in-class observations of different types of soil and soil profiling, visits to the school's compost bin, structured observations of decaying organic matter of various kinds, study of organisms that live in the soil, and models of environmental conditions that affect rates of decomposition. Both before and after instruction, students completed a written performance assessment that asked them to reason about the process of decomposition. Additional information was gathered through one-on-one interviews with six focus students who represented variability of performance across the class. During instruction, researchers collected video of classroom activity, student science journal entries, and charts and illustrations produced by the teacher. After instruction, the first-grade students showed a more nuanced understanding of the composition and variability of soils, the role of visible organisms in decomposition, and environmental factors that influence rates of decomposition. Through a variety of representational devices, including drawings, narrative records, and physical models, students came to regard decomposition as a process, rather than simply as an end state that does not require explanation.

  8. Ab initio kinetics and thermal decomposition mechanism of mononitrobiuret and 1,5-dinitrobiuret

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Hongyan, E-mail: hongyan.sun1@gmail.com; Vaghjiani, Ghanshyam L., E-mail: ghanshyam.vaghjiani@us.af.mil [Propellants Branch, Rocket Propulsion Division, Aerospace Systems Directorate, Air Force Research Laboratory, AFRL/RQRP, 10 E. Saturn Blvd., Edwards AFB, California 93524 (United States)

    2015-05-28

    Mononitrobiuret (MNB) and 1,5-dinitrobiuret (DNB) are tetrazole-free, nitrogen-rich, energetic compounds. For the first time, a comprehensive ab initio kinetics study on the thermal decomposition mechanisms of MNB and DNB is reported here. In particular, the intramolecular interactions of the amine H-atom with the electronegative nitro and carbonyl O-atoms have been analyzed for biuret, MNB, and DNB at the M06-2X/aug-cc-pVTZ level of theory. The results show that the MNB and DNB molecules are stabilized through six-member-ring moieties via intramolecular H-bonding with interatomic distances between 1.8 and 2.0 Å, due to electrostatic as well as polarization and dispersion interactions. Furthermore, it was found that the stable molecules in the solid state have the smallest dipole moment amongst all the conformers in the nitrobiuret series of compounds, thus revealing a simple way for evaluating the reactivity of fuel conformers. The potential energy surface for thermal decomposition of MNB was characterized by spin-restricted coupled cluster theory at the RCCSD(T)/cc-pV∞Z//M06-2X/aug-cc-pVTZ level. It was found that the thermal decomposition of MNB is initiated by the elimination of HNCO and HNN(O)OH intermediates. Intramolecular transfer of an H-atom from the terminal NH{sub 2} group to the adjacent carbonyl O-atom via a six-member-ring transition state eliminates HNCO with an energy barrier of 35 kcal/mol, while transfer from the central NH group to the adjacent nitro O-atom eliminates HNN(O)OH with an energy barrier of 34 kcal/mol. Elimination of HNN(O)OH is also the primary process involved in the thermal decomposition of DNB, which possesses C{sub 2v} symmetry. The rate coefficients for the primary decomposition channels for MNB and DNB were quantified as functions of temperature and pressure. In addition, the thermal decomposition of HNN(O)OH was analyzed via Rice–Ramsperger–Kassel–Marcus/multi-well master equation simulations, the results of which

  9. Danburite decomposition by hydrochloric acid

    International Nuclear Information System (INIS)

    Mamatov, E.D.; Ashurov, N.A.; Mirsaidov, U.

    2011-01-01

    This article is devoted to the decomposition of danburite from the Ak-Arkhar Deposit of Tajikistan by hydrochloric acid. The interaction of boron-containing ores of the Ak-Arkhar Deposit with mineral acids, including hydrochloric acid, was studied, and the optimal conditions for extracting the valuable components from danburite were determined. The chemical composition of the Ak-Arkhar danburite was determined as well. The kinetics of decomposition of calcined danburite by hydrochloric acid was studied, and the apparent activation energy of the decomposition process was calculated.

  10. Research on intelligent machine self-perception method based on LSTM

    Science.gov (United States)

    Wang, Qiang; Cheng, Tao

    2018-05-01

    In this paper, we exploit the advantages of LSTM in extracting features from high-dimensional, complex nonlinear data and apply it to the autonomous perception of intelligent machines. Compared with a traditional multi-layer neural network, this model has memory and can handle time-series information of arbitrary length. Since the multi-physical-domain signals of processing machines have an inherent temporal structure, with contextual relationships between successive states, using this deep learning method to realize the self-perception of intelligent processing machines offers strong versatility and adaptability. The experimental results show that the proposed method clearly improves sensing accuracy under various working conditions of the intelligent machine, and that the algorithm can effectively support self-perception in intelligent processing machines.
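
    A minimal sketch of the kind of recurrent model described above is given below in PyTorch. The channel count, window length, hidden size, and number of machine states are invented for illustration; the paper's actual network architecture and signal set are not specified in the abstract.

      import torch
      import torch.nn as nn

      class StateLSTM(nn.Module):
          """Minimal LSTM classifier over multi-channel machine signals."""
          def __init__(self, n_channels=8, hidden=64, n_states=5):
              super().__init__()
              self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
              self.head = nn.Linear(hidden, n_states)

          def forward(self, x):              # x: (batch, time, channels), any sequence length
              _, (h_n, _) = self.lstm(x)     # h_n: final hidden state, shape (1, batch, hidden)
              return self.head(h_n[-1])      # logits over the machine states

      model = StateLSTM()
      signals = torch.randn(16, 200, 8)      # 16 windows, 200 time steps, 8 sensor channels
      logits = model(signals)                # (16, 5) scores, one per hypothetical machine state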

  11. LMDI decomposition approach: A guide for implementation

    International Nuclear Information System (INIS)

    Ang, B.W.

    2015-01-01

    Since it was first used by researchers to analyze industrial electricity consumption in the early 1980s, index decomposition analysis (IDA) has been widely adopted in energy and emission studies. Lately its use as the analytical component of accounting frameworks for tracking economy-wide energy efficiency trends has attracted considerable attention and interest among policy makers. The last comprehensive literature review of IDA was reported in 2000, some years back. After giving an update and presenting the key trends in the last 15 years, this study focuses on the implementation issues of the logarithmic mean Divisia index (LMDI) decomposition methods in view of their dominance in IDA in recent years. Eight LMDI models are presented and their origin, decomposition formulae, and strengths and weaknesses are summarized. Guidelines on the choice among these models are provided to assist users in implementation. - Highlights: • Guidelines for implementing the LMDI decomposition approach are provided. • Eight LMDI decomposition models are summarized and compared. • The development of the LMDI decomposition approach is presented. • The latest developments of index decomposition analysis are briefly reviewed.
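
    For orientation, one representative member of the LMDI family reviewed here is the additive LMDI-I formula, in which the aggregate V is a sum over subsectors i of a product of factors x_k. The effect attributed to factor x_k between a base year 0 and a terminal year T is commonly written as

      \Delta V_{x_k} = \sum_i L\left(V_i^T, V_i^0\right)\,\ln\frac{x_{k,i}^T}{x_{k,i}^0},
      \qquad
      L(a,b) = \frac{a-b}{\ln a - \ln b}

    where L is the logarithmic mean. The eight models discussed in the paper differ in the weighting scheme used and in whether the decomposition is additive or multiplicative.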

  12. Thermal models of pulse electrochemical machining

    International Nuclear Information System (INIS)

    Kozak, J.

    2004-01-01

    Pulse electrochemical machining (PECM) provides an economical and effective method for machining high-strength, heat-resistant materials into complex shapes such as turbine blades, dies, molds and micro-cavities. PECM involves the application of a voltage pulse at high current density in the anodic dissolution process. A small interelectrode gap, a low electrolyte flow rate, and gap-state recovery during the pulse off-times lead to improved machining accuracy and surface finish compared with ECM using continuous current. This paper presents a mathematical model for PECM and employs this model in a computer simulation of the PECM process to determine the thermal limitation and energy consumption in PECM. The experimental results and a discussion of the characteristics of PECM are presented. (authors)

  13. Interacting effects of insects and flooding on wood decomposition.

    Directory of Open Access Journals (Sweden)

    Michael D Ulyshen

    Full Text Available Saproxylic arthropods are thought to play an important role in wood decomposition, but very few efforts have been made to quantify their contributions to the process, and the factors controlling their activities are not well understood. In the current study, mesh exclusion bags were used to quantify how arthropods affect loblolly pine (Pinus taeda L.) decomposition rates in both seasonally flooded and unflooded forests over a 31-month period in the southeastern United States. Wood specific gravity (based on initial wood volume) was significantly lower in bolts placed in unflooded forests and for those unprotected from insects. Approximately 20.5% and 13.7% of specific gravity loss after 31 months was attributable to insect activity in flooded and unflooded forests, respectively. Importantly, minimal between-treatment differences in water content and the results from a novel test carried out separately suggest the mesh bags had no significant impact on wood mass loss beyond the exclusion of insects. Subterranean termites (Isoptera: Rhinotermitidae: Reticulitermes spp.) were 5-6 times more active below-ground in unflooded forests compared to flooded forests based on wooden monitoring stakes. They were also slightly more active above-ground in unflooded forests, but these differences were not statistically significant. Similarly, seasonal flooding had no detectable effect on above-ground beetle (Coleoptera) richness or abundance. Although seasonal flooding strongly reduced Reticulitermes activity below-ground, it can be concluded from an insignificant interaction between forest type and exclusion treatment that reduced above-ground decomposition rates in seasonally flooded forests were due largely to suppressed microbial activity at those locations. The findings from this study indicate that southeastern U.S. arthropod communities accelerate above-ground wood decomposition significantly and to a similar extent in both flooded and unflooded forests.

  14. Machine learning concepts in coherent optical communication systems

    DEFF Research Database (Denmark)

    Zibar, Darko; Schäffer, Christian G.

    2014-01-01

    Powerful statistical signal processing methods, used by the machine learning community, are addressed and linked to current problems in coherent optical communication. Bayesian filtering methods are presented and applied for nonlinear dynamic state tracking. © 2014 OSA.

  15. FDG decomposition products

    International Nuclear Information System (INIS)

    Macasek, F.; Buriova, E.

    2004-01-01

    In this presentation the authors present the results of an analysis of the decomposition products of [18F]fluorodeoxyglucose. It is concluded that the coupling of liquid chromatography - mass spectrometry with electrospray ionisation is a suitable tool for quantitative analysis of the FDG radiopharmaceutical, i.e. assay of basic components (FDG, glucose), impurities (Kryptofix) and decomposition products (gluconic and glucuronic acids etc.); 2-[18F]fluoro-deoxyglucose (FDG) is sufficiently stable and resistant towards autoradiolysis; the content of radiochemical impurities (2-[18F]fluoro-gluconic and 2-[18F]fluoro-glucuronic acids) in expired FDG did not exceed 1%

  16. comparative study of moore and mealy machine models adaptation

    African Journals Online (AJOL)

    user

    An automata model was developed for an ABS manufacturing process using Moore and Mealy Finite State Machines. Simulation ... The simulation results showed that the Mealy Machine is faster than the Moore ... random numbers from MATLAB.
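
    To make the distinction being compared above concrete, here is a minimal Python sketch of the two machine types: a Moore machine emits an output per state, a Mealy machine per transition. The states, inputs, and outputs are invented placeholders, not the ABS process model from the paper.

      # Moore: output depends on the state alone; Mealy: output depends on state and input.
      moore = {
          "transitions": {("idle", "start"): "running", ("running", "stop"): "idle"},
          "output":      {"idle": "lamp_off", "running": "lamp_on"},                    # output per state
      }
      mealy = {
          "transitions": {("idle", "start"): "running", ("running", "stop"): "idle"},
          "output":      {("idle", "start"): "lamp_on", ("running", "stop"): "lamp_off"},  # output per edge
      }

      def run_moore(machine, state, inputs):
          outputs = [machine["output"][state]]                      # Moore emits on entering each state
          for sym in inputs:
              state = machine["transitions"].get((state, sym), state)
              outputs.append(machine["output"][state])
          return outputs

      def run_mealy(machine, state, inputs):
          outputs = []
          for sym in inputs:
              outputs.append(machine["output"].get((state, sym)))   # Mealy emits on the transition
              state = machine["transitions"].get((state, sym), state)
          return outputs

      print(run_moore(moore, "idle", ["start", "stop"]))   # ['lamp_off', 'lamp_on', 'lamp_off']
      print(run_mealy(mealy, "idle", ["start", "stop"]))   # ['lamp_on', 'lamp_off']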

  17. Modeling multipulsing transition in ring cavity lasers with proper orthogonal decomposition

    International Nuclear Information System (INIS)

    Ding, Edwin; Shlizerman, Eli; Kutz, J. Nathan

    2010-01-01

    A low-dimensional model is constructed via the proper orthogonal decomposition (POD) to characterize the multipulsing phenomenon in a ring cavity laser mode locked by a saturable absorber. The onset of the multipulsing transition is characterized by an oscillatory state (created by a Hopf bifurcation) that is then itself destabilized to a double-pulse configuration (by a fold bifurcation). A four-mode POD analysis, which uses the principal components, or singular value decomposition modes, of the mode-locked laser, provides a simple analytic framework for a complete characterization of the entire transition process and its associated bifurcations. These findings are in good agreement with the full governing equation.

  18. A Function-Behavior-State Approach to Designing Human Machine Interface for Nuclear Power Plant Operators

    Science.gov (United States)

    Lin, Y.; Zhang, W. J.

    2005-02-01

    This paper presents an approach to human-machine interface design for control room operators of nuclear power plants. The first step in designing an interface for a particular application is to determine the information content that needs to be displayed. The design methodology for this step is called the interface design framework (the framework, for short). Several frameworks have been proposed for applications at varying levels, including process plants. However, none is based on the design and manufacture of the plant system for which the interface is designed. This paper presents an interface design framework which originates from design theory and methodology for general technical systems. Specifically, the framework is based on a set of core concepts of a function-behavior-state model originally proposed by the artificial intelligence research community and widely applied in the design research community. Benefits of this new framework include the provision of a model-based fault diagnosis facility, and the seamless integration of the design (manufacture, maintenance) of plants and the design of human-machine interfaces. The missing linkage between design and operation of a plant was one of the causes of the Three Mile Island nuclear reactor incident. A simulated plant system is presented to explain how to apply this framework in designing an interface. The resulting human-machine interface is discussed; specifically, several fault diagnosis examples are elaborated to demonstrate how this interface could support operators' fault diagnosis in an unanticipated situation.

  19. Grassmann integral and Balian–Brézin decomposition in Hartree–Fock–Bogoliubov matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro, E-mail: mizusaki@isc.senshu-u.ac.jp [Institute of Natural Sciences, Senshu University, 3-8-1 Kanda-Jinbocho, Chiyoda-ku, Tokyo 101-8425 (Japan); Oi, Makito [Institute of Natural Sciences, Senshu University, 3-8-1 Kanda-Jinbocho, Chiyoda-ku, Tokyo 101-8425 (Japan); Chen, Fang-Qi [Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240 (China); Sun, Yang [Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240 (China); Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China)

    2013-08-09

    We present a new formula to calculate matrix elements of a general unitary operator with respect to Hartree–Fock–Bogoliubov states allowing multiple quasi-particle excitations. The Balian–Brézin decomposition of the unitary operator [R. Balian, E. Brézin, Il Nuovo Cimento B 64 (1969) 37] is employed in the derivation. We found that this decomposition is extremely well suited to an application of Fermion coherent states and Grassmann integrals in the quasi-particle basis. The resultant formula is compactly expressed in terms of the Pfaffian, and shows a bipartite structure similar to that of the formula we previously derived in the bare-particle basis [T. Mizusaki, M. Oi, Phys. Lett. B 715 (2012) 219].

  20. Probabilistic machine learning and artificial intelligence.

    Science.gov (United States)

    Ghahramani, Zoubin

    2015-05-28

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

  1. Probabilistic machine learning and artificial intelligence

    Science.gov (United States)

    Ghahramani, Zoubin

    2015-05-01

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

  2. Management intensity alters decomposition via biological pathways

    Science.gov (United States)

    Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory

    2011-01-01

    Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage (or extent) of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future

  3. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

    Directory of Open Access Journals (Sweden)

    Sang-Hoon Hong

    2015-07-01

    Full Text Available The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol) synthetic aperture radar (PolSAR) data for classifying wetland vegetation in the Everglades. We processed quad-pol data using the Hong & Wdowinski four-component decomposition, which accounts for double bounce scattering in the cross-polarization signal. The calculated decomposition images consist of four scattering mechanisms (single, co- and cross-pol double, and volume scattering). We applied an object-oriented image analysis approach to classify vegetation types with the decomposition results. We also used a high-resolution multispectral optical RapidEye image to compare statistics and classification results with Synthetic Aperture Radar (SAR) observations. The calculated classification accuracy was higher than 85%, suggesting that the TerraSAR-X quad-pol SAR signal had a high potential for distinguishing different vegetation types. Scattering components from SAR acquisition were particularly advantageous for classifying mangroves along tidal channels. We conclude that the typical scattering behaviors from model-based decomposition are useful for discriminating among different wetland vegetation types.

  4. Partial discharge signal denoising with spatially adaptive wavelet thresholding and support vector machines

    Energy Technology Data Exchange (ETDEWEB)

    Mota, Hilton de Oliveira; Rocha, Leonardo Chaves Dutra da [Department of Computer Science, Federal University of Sao Joao del-Rei, Visconde do Rio Branco Ave., Colonia do Bengo, Sao Joao del-Rei, MG, 36301-360 (Brazil); Salles, Thiago Cunha de Moura [Department of Computer Science, Federal University of Minas Gerais, 6627 Antonio Carlos Ave., Pampulha, Belo Horizonte, MG, 31270-901 (Brazil); Vasconcelos, Flavio Henrique [Department of Electrical Engineering, Federal University of Minas Gerais, 6627 Antonio Carlos Ave., Pampulha, Belo Horizonte, MG, 31270-901 (Brazil)

    2011-02-15

    In this paper an improved method to denoise partial discharge (PD) signals is presented. The method is based on the wavelet transform (WT) and support vector machines (SVM) and is distinct from other WT-based denoising strategies in the sense that it exploits the high spatial correlations exhibited by PD wavelet decompositions as a way to identify and select the relevant coefficients. PD spatial correlations are characterized by WT modulus maxima propagating along decomposition levels (scales), which are a strong indication of their time of occurrence. Denoising is performed by identification and separation of PD-related maxima lines by an SVM pattern classifier. The results obtained confirm that this method has superior denoising capabilities when compared to other WT-based methods found in the literature for the processing of Gaussian and discrete spectral interferences. Moreover, its greatest advantages become clear when the interference has a pulsating or localized shape, a situation in which traditional methods usually fail. (author)
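
    For context, the sketch below shows the generic wavelet soft-thresholding baseline that such methods build on, using the PyWavelets package: decompose, threshold the detail coefficients, reconstruct. The test signal, wavelet choice, and universal threshold are illustrative assumptions; the paper's SVM-based selection of modulus-maxima lines is not reproduced here.

      import numpy as np
      import pywt

      # Two PD-like pulses buried in Gaussian noise (synthetic, illustrative data only).
      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 2048)
      clean = np.exp(-((t - 0.3) / 0.005) ** 2) + 0.6 * np.exp(-((t - 0.7) / 0.004) ** 2)
      noisy = clean + 0.1 * rng.standard_normal(t.size)

      coeffs = pywt.wavedec(noisy, "db4", level=5)                    # multilevel wavelet decomposition
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745                  # noise estimate from finest scale
      thr = sigma * np.sqrt(2 * np.log(noisy.size))                   # universal threshold
      denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
      denoised = pywt.waverec(denoised_coeffs, "db4")                 # reconstructed, denoised signal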

  5. A stochastic model for an urea decomposition system

    Directory of Open Access Journals (Sweden)

    VSS Yadavalli

    2005-12-01

    Full Text Available Availability is an important measure for describing the performance of a system. The availability of a decomposition process in a urea production system in the fertilizer industry is considered in this paper. The system contains four subsystems and is supported by a standby unit. An estimation study of the steady-state availability of the system is performed and illustrated by means of a numerical example.
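
    As background (and not the paper's four-subsystem model), the steady-state availability of a single repairable unit with constant failure rate lambda and repair rate mu reduces to the textbook expression

      A = \frac{\mu}{\lambda + \mu} = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}}

    The estimation study summarized above extends this idea to a multi-subsystem process supported by a standby unit.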

  6. Non-equilibrium theory of arrested spinodal decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Olais-Govea, José Manuel; López-Flores, Leticia; Medina-Noyola, Magdaleno [Instituto de Física “Manuel Sandoval Vallarta,” Universidad Autónoma de San Luis Potosí, Álvaro Obregón 64, 78000 San Luis Potosí, SLP (Mexico)

    2015-11-07

    The non-equilibrium self-consistent generalized Langevin equation theory of irreversible relaxation [P. E. Ramírez-González and M. Medina-Noyola, Phys. Rev. E 82, 061503 (2010); 82, 061504 (2010)] is applied to the description of the non-equilibrium processes involved in the spinodal decomposition of suddenly and deeply quenched simple liquids. For model liquids with hard-sphere plus attractive (Yukawa or square well) pair potential, the theory predicts that the spinodal curve, besides being the threshold of the thermodynamic stability of homogeneous states, is also the borderline between the regions of ergodic and non-ergodic homogeneous states. It also predicts that the high-density liquid-glass transition line, whose high-temperature limit corresponds to the well-known hard-sphere glass transition, at lower temperature intersects the spinodal curve and continues inside the spinodal region as a glass-glass transition line. Within the region bounded from below by this low-temperature glass-glass transition and from above by the spinodal dynamic arrest line, we can recognize two distinct domains with qualitatively different temperature dependence of various physical properties. We interpret these two domains as corresponding to full gas-liquid phase separation conditions and to the formation of physical gels by arrested spinodal decomposition. The resulting theoretical scenario is consistent with the corresponding experimental observations in a specific colloidal model system.

  7. Machine rates for selected forest harvesting machines

    Science.gov (United States)

    R.W. Brinker; J. Kinard; Robert Rummer; B. Lanford

    2002-01-01

    Very little new literature has been published on the subject of machine rates and machine cost analysis since 1989 when the Alabama Agricultural Experiment Station Circular 296, Machine Rates for Selected Forest Harvesting Machines, was originally published. Many machines discussed in the original publication have undergone substantial changes in various aspects, not...

  8. Enhanced Flexibility and Reusability through State Machine-Based Architectures for Multisensor Intelligent Robotics

    Directory of Open Access Journals (Sweden)

    Héctor Herrero

    2017-05-01

    Full Text Available This paper presents a state machine-based architecture, which enhances the flexibility and reusability of industrial robots, more concretely dual-arm multisensor robots. The proposed architecture, in addition to allowing absolute control of the execution, eases the programming of new applications by increasing the reusability of the developed modules. Through an easy-to-use graphical user interface, operators are able to create, modify, reuse and maintain industrial processes, increasing the flexibility of the cell. Moreover, the proposed approach is applied in a real use case in order to demonstrate its capabilities and feasibility in industrial environments. A comparative analysis is presented for evaluating the presented approach versus traditional robot programming techniques.

  9. Automatic welding machine for piping

    International Nuclear Information System (INIS)

    Yoshida, Kazuhiro; Koyama, Takaichi; Iizuka, Tomio; Ito, Yoshitoshi; Takami, Katsumi.

    1978-01-01

    A remotely controlled automatic special welding machine for piping was developed. This machine is utilized for long distance pipe lines, chemical plants, thermal power generating plants and nuclear power plants effectively from the viewpoint of good quality control, reduction of labor and good controllability. The function of this welding machine is to inspect the shape and dimensions of edge preparation before welding work by the sense of touch, to detect the temperature of melt pool, inspect the bead form by the sense of touch, and check the welding state by ITV during welding work, and to grind the bead surface and inspect the weld metal by ultrasonic test automatically after welding work. The construction of this welding system, the main specification of the apparatus, the welding procedure in detail, the electrical source of this welding machine, the cooling system, the structure and handling of guide ring, the central control system and the operating characteristics are explained. The working procedure and the effect by using this welding machine, and the application to nuclear power plants and the other industrial field are outlined. The HIDIC 08 is used as the controlling computer. This welding machine is useful for welding SUS piping as well as carbon steel piping. (Nakai, Y.)

  10. Theoretical study of the decomposition pathways and products of C5- perfluorinated ketone (C5 PFK)

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Yuwei; Wang, Xiaohua, E-mail: xhw@mail.xjtu.edu.cn, E-mail: mzrong@mail.xjtu.edu.cn; Li, Xi; Yang, Aijun; Wu, Yi; Rong, Mingzhe, E-mail: xhw@mail.xjtu.edu.cn, E-mail: mzrong@mail.xjtu.edu.cn [State Key Laboratory of Electrical Insulation and Power Equipment, Xi’an Jiaotong University, No. 28 XianNing West Road, Xi’an, Shaanxi Province 710049 (China); Han, Guohui; Lu, Yanhui [Pinggao Group Co. Ltd., Pingdingshan, Henan Province 467001 (China)

    2016-08-15

    Due to the high global warming potential (GWP) of SF{sub 6} and increasing environmental concerns, the search for alternative gases to SF{sub 6}, which is predominantly used as the insulating and interrupting medium in high-voltage equipment, has become a hot topic in recent decades. C5-perfluorinated ketone (C5 PFK), which overcomes the drawbacks of the existing candidate gases, was reported as a promising gas with remarkable insulation capacity and a low GWP of approximately 1. Experimental measurements of the dielectric strength of this novel gas and its mixtures have been carried out, but the chemical decomposition pathways and products of C5 PFK during breakdown, which are essential factors in evaluating the electric strength of this gas in high-voltage equipment, are still unknown. Therefore, this paper is devoted to exploring all the possible decomposition pathways and species of C5 PFK by density functional theory (DFT). The structural optimizations, vibrational frequency calculations and energy calculations of the species involved in a considered pathway were carried out with the DFT-(U)B3LYP/6-311G(d,p) method. The detailed potential energy surface was then investigated thoroughly by the same method. Lastly, six decomposition pathways of C5 PFK, involving fission reactions and reactions through transition states, were obtained. Important intermediate products were also determined. Among all the pathways studied, the favorable decomposition reactions of C5 PFK were found to involve C-C bond ruptures producing Ia and Ib in pathway I, followed by subsequent C-C bond ruptures and internal F-atom transfers in the decomposition of Ia and Ib presented in pathways II + III and IV + V, respectively. Possible routes were pointed out in pathway III that lead to the decomposition of IIa, which is the main intermediate product found in pathway II of Ia decomposition. We also investigated the decomposition of Ib, which can undergo unimolecular reactions to give the

  11. Photochemical decomposition of catecholamines

    International Nuclear Information System (INIS)

    Mol, N.J. de; Henegouwen, G.M.J.B. van; Gerritsma, K.W.

    1979-01-01

    During photochemical decomposition (λ = 254 nm), adrenaline, isoprenaline and noradrenaline in aqueous solution were converted to the corresponding aminochromes to the extent of 65, 56 and 35%, respectively. In determining this conversion, the photochemical instability of the aminochromes was taken into account. Irradiations were performed in solutions dilute enough that neglect of the inner filter effect is permissible. Furthermore, quantum yields for the decomposition of the aminochromes in aqueous solution are given. (Author)

  12. Book review: A first course in Machine Learning

    DEFF Research Database (Denmark)

    Ortiz-Arroyo, Daniel

    2016-01-01

    "The new edition of A First Course in Machine Learning by Rogers and Girolami is an excellent introduction to the use of statistical methods in machine learning. The book introduces concepts such as mathematical modeling, inference, and prediction, providing ‘just in time’ the essential background...... to change models and parameter values to make [it] easier to understand and apply these models in real applications. The authors [also] introduce more advanced, state-of-the-art machine learning methods, such as Gaussian process models and advanced mixture models, which are used across machine learning....... This makes the book interesting not only to students with little or no background in machine learning but also to more advanced graduate students interested in statistical approaches to machine learning." —Daniel Ortiz-Arroyo, Associate Professor, Aalborg University Esbjerg, Denmark...

  13. Investigating hydrogel dosimeter decomposition by chemical methods

    International Nuclear Information System (INIS)

    Jordan, Kevin

    2015-01-01

    The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products

  14. Performance of a neutron transport code with full phase space decomposition on the Cray Research T3D

    International Nuclear Information System (INIS)

    Dorr, M.R.; Salo, E.M.

    1995-01-01

    We present performance results obtained on a 128-node Cray Research T3D computer by a neutron transport code implementing a standard multigroup, discrete ordinates algorithm on a three-dimensional Cartesian grid. After summarizing the implementation strategy used to obtain a full decomposition of phase space (i.e., simultaneous parallelization of the neutron energy, directional and spatial variables), we investigate the scalability of the fundamental source iteration step with respect to each phase space variable. We also describe enhancements that have enabled performance rates approaching 10 gigaflops on the full 128-node machine

  15. Decomposition and nutrient release of leguminous plants in coffee agroforestry systems

    Directory of Open Access Journals (Sweden)

    Eduardo da Silva Matos

    2011-02-01

    Full Text Available Leguminous plants used as green manure are an important nutrient source for coffee plantations, especially for soils with low nutrient levels. Field experiments were conducted in the Zona da Mata of Minas Gerais State, Brazil to evaluate the decomposition and nutrient release rates of four leguminous species used as green manures (Arachis pintoi, Calopogonium mucunoides, Stizolobium aterrimum and Stylosanthes guianensis) in a coffee agroforestry system under two different climate conditions. The initial N contents in plant residues varied from 25.7 to 37.0 g kg-1 and P from 2.4 to 3.0 g kg-1. The lignin/N, lignin/polyphenol and (lignin+polyphenol)/N ratios were low in all residues studied. Mass loss rates were highest in the first 15 days, when 25 % of the residues were decomposed. From 15 to 30 days, the decomposition rate decreased on both farms. On the farm in Pedra Dourada (PD), the decomposition constant k increased in the order C. mucunoides < S. aterrimum < S. guianensis < A. pintoi. On the farm in Araponga (ARA), there was no difference in the decomposition rate among leguminous plants. The N release rates varied from 0.0036 to 0.0096 d-1. Around 32 % of the total N content in the plant material was released in the first 15 days. In ARA, the N concentration in the S. aterrimum residues was always significantly higher than in the other residues. At the end of 360 days, the N released was 78 % of the initial content in ARA and 89 % in PD. Phosphorus was the most rapidly released nutrient (k values from 0.0165 to 0.0394 d-1). Residue decomposition and nutrient release did not correlate with initial residue chemistry and biochemistry, but differences in climatic conditions between the two study sites modified the decomposition rate constants.
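
    The decomposition constants k quoted above are consistent with the single-exponential decay model commonly fitted to litterbag data (an assumption here, since the abstract does not state the fitting model explicitly): the mass or nutrient fraction remaining after time t is

      \frac{X_t}{X_0} = e^{-k t}

    so a phosphorus release constant of k = 0.0394 d-1, for example, corresponds to roughly 44 % of the initial P remaining after 21 days.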

  16. Four-level time decomposition quasi-static power flow and successive disturbances analysis. [Power system disturbances

    Energy Technology Data Exchange (ETDEWEB)

    Jovanovic, S M [Nikola Tesla Inst., Belgrade (YU)

    1990-01-01

    This paper presents a model and an appropriate numerical procedure for a four-level time decomposition quasi-static power flow and successive disturbances analysis of power systems. The analysis consists of the sequential computation of the zero, primary, secondary and tertiary quasi-static states and of the estimation of successive structural disturbances during the 1200 s dynamics after a structural disturbance. The model is developed by detailed inspection of the time decomposition characteristics of automatic protection and control devices. Adequate speed of the numerical procedure is attained by a specific application of the matrix inversion lemma and the decoupled model constant coefficient matrices. The four-level time decomposition quasi-static method is intended for security and emergency analysis. (author).

  17. Phase decomposition and ordering in Ni-11.3 at.% Ti studied with atom probe tomography

    KAUST Repository

    Al-Kassab, Talaat

    2014-09-01

    The decomposition behavior of Ni-rich Ni-Ti was reassessed using the Tomographic Atom Probe (TAP) and the Laser Assisted Wide Angle Tomographic Atom Probe. Single crystalline specimens of Ni-11.3 at.% Ti were investigated; the states selected from the decomposition path were the metastable γ″ and γ′ states introduced on the basis of small-angle neutron scattering (SANS) and the two-phase model used for its evaluation. The composition values of the precipitates in these states could not be confirmed by the APT data, as the interface of the ordered precipitates may not be neglected. The present results rather suggest applying a three-phase model for the interpretation of SANS measurements, in which the width of the interface remains nearly unchanged and the L12 structure close to 3:1 stoichiometry is maintained in the core of the precipitates from the γ″ to the γ′ state. © 2014 Elsevier Ltd.

  18. Three-dimensional decomposition models for carbon productivity

    International Nuclear Information System (INIS)

    Meng, Ming; Niu, Dongxiao

    2012-01-01

    This paper presents decomposition models for the change in carbon productivity, which is considered a key indicator reflecting contributions to the control of greenhouse gases. The carbon productivity differential was used to indicate the beginning of the decomposition. After integrating the differential equation and designing the Log Mean Divisia Index equations, a three-dimensional absolute decomposition model for carbon productivity was derived. Using this model, the absolute change of carbon productivity was decomposed into a summation of the absolute quantitative influences of each industrial sector, for each influence factor (technological innovation and industrial structure adjustment) in each year. Furthermore, the relative decomposition model was built using a similar process. Finally, these models were applied to demonstrate the decomposition process in China. The decomposition results reveal several important conclusions: (a) technological innovation plays a far more important role than industrial structure adjustment; (b) industry and export trade exhibit great influence; (c) assigning the responsibility for CO2 emission control to local governments, optimizing the structure of exports, and eliminating backward industrial capacity are highly essential to further increase China's carbon productivity. -- Highlights: ► Using the change of carbon productivity to measure a country's contribution. ► Absolute and relative decomposition models for carbon productivity are built. ► The change is decomposed into the quantitative influences of three dimensions. ► Decomposition results can be used for improving a country's carbon productivity.
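
    For orientation, additive Log Mean Divisia Index equations of the kind referred to above have, in a generic two-factor form, roughly the following shape (a textbook sketch, not the paper's exact three-dimensional model):

        % Additive LMDI split of the change in an aggregate V = \sum_i x_i y_i
        % into an x-effect and a y-effect, using the logarithmic mean L(a,b).
        \Delta V = V^{T} - V^{0}
                 = \sum_{i} L\left(V_i^{T}, V_i^{0}\right) \ln\frac{x_i^{T}}{x_i^{0}}
                 + \sum_{i} L\left(V_i^{T}, V_i^{0}\right) \ln\frac{y_i^{T}}{y_i^{0}},
        \qquad L(a,b) = \frac{a-b}{\ln a - \ln b}.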

  19. Multilevel index decomposition analysis: Approaches and application

    International Nuclear Information System (INIS)

    Xu, X.Y.; Ang, B.W.

    2014-01-01

    With the growing interest in using the technique of index decomposition analysis (IDA) in energy and energy-related emission studies, such as to analyze the impacts of activity structure change or to track economy-wide energy efficiency trends, the conventional single-level IDA may not be able to meet certain needs in policy analysis. In this paper, some limitations of single-level IDA studies which can be addressed through applying multilevel decomposition analysis are discussed. We then introduce and compare two multilevel decomposition procedures, which are referred to as the multilevel-parallel (M-P) model and the multilevel-hierarchical (M-H) model. The former uses a similar decomposition procedure as in the single-level IDA, while the latter uses a stepwise decomposition procedure. Since the stepwise decomposition procedure is new in the IDA literature, the applicability of the popular IDA methods in the M-H model is discussed and cases where modifications are needed are explained. Numerical examples and application studies using the energy consumption data of the US and China are presented. - Highlights: • We discuss the limitations of single-level decomposition in IDA applied to energy study. • We introduce two multilevel decomposition models, study their features and discuss how they can address the limitations. • To extend from single-level to multilevel analysis, necessary modifications to some popular IDA methods are discussed. • We further discuss the practical significance of the multilevel models and present examples and cases to illustrate

  20. Numerical identifiability of the parameters of induction machines

    Energy Technology Data Exchange (ETDEWEB)

    Corcoles, F.; Pedra, J.; Salichs, M. [Dep. d' Eng. Electrica ETSEIB. UPC, Barcelona (Spain)

    2000-08-01

    This paper analyses the numerical identifiability of the electrical parameters of induction machines. Relations between parameters are derived, and it is shown that not all of them can be estimated when only external measurements (voltage, current, speed and torque) are used. Formulations of the single- and double-cage induction machine, with and without core losses in both models, are developed. The proposed solution is to formulate the machine equations using the minimum number of parameters (which are the identifiable parameters). As an application example, the parameters of a double-cage induction machine are identified using steady-state measurements corresponding to different angular speeds. (orig.)

  1. Primary decomposition of torsion R[X]-modules

    Directory of Open Access Journals (Sweden)

    William A. Adkins

    1994-01-01

    Full Text Available This paper is concerned with studying hereditary properties of primary decompositions of torsion R[X]-modules M which are torsion free as R-modules. Specifically, if an R[X]-submodule of M is pure as an R-submodule, then the primary decomposition of M determines a primary decomposition of the submodule. This is a generalization of the classical fact from linear algebra that a diagonalizable linear transformation on a vector space restricts to a diagonalizable linear transformation of any invariant subspace. Additionally, primary decompositions are considered under direct sums and tensor products.

  2. Tensor network decompositions in the presence of a global symmetry

    International Nuclear Information System (INIS)

    Singh, Sukhwinder; Pfeifer, Robert N. C.; Vidal, Guifre

    2010-01-01

    Tensor network decompositions offer an efficient description of certain many-body states of a lattice system and are the basis of a wealth of numerical simulation algorithms. We discuss how to incorporate a global symmetry, given by a compact, completely reducible group G, in tensor network decompositions and algorithms. This is achieved by considering tensors that are invariant under the action of the group G. Each symmetric tensor decomposes into two types of tensors: degeneracy tensors, containing all the degrees of freedom, and structural tensors, which only depend on the symmetry group. In numerical calculations, the use of symmetric tensors ensures the preservation of the symmetry, allows selection of a specific symmetry sector, and significantly reduces computational costs. On the other hand, the resulting tensor network can be interpreted as a superposition of exponentially many spin networks. Spin networks are used extensively in loop quantum gravity, where they represent states of quantum geometry. Our work highlights their importance in the context of tensor network algorithms as well, thus setting the stage for cross-fertilization between these two areas of research.

  3. Differential Decomposition Among Pig, Rabbit, and Human Remains.

    Science.gov (United States)

    Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe

    2018-03-30

    While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.

  4. Global sensitivity analysis by polynomial dimensional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2011-07-15

    This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.
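
    For reference, the global sensitivity indices obtained from such an ANOVA-type decomposition take the standard variance-ratio form (a generic definition, not the paper's expansion-coefficient formulae):

        % Variance-based sensitivity index of the input subset u in the
        % ANOVA-type decomposition y(X) = y_0 + \sum_{u} y_u(X_u),
        % with mutually orthogonal component functions y_u.
        S_u = \frac{\operatorname{Var}\left[\, y_u(X_u) \,\right]}{\operatorname{Var}\left[\, y(X) \,\right]},
        \qquad \sum_{\emptyset \neq u \subseteq \{1,\dots,N\}} S_u = 1 .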

  5. Modelling machine ensembles with discrete event dynamical system theory

    Science.gov (United States)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks for a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
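
    As a rough illustration of the local-model formalism described above (a set of states, an event alphabet, a partial transition function and event durations), the following Python sketch models a single hypothetical submachine; the state and event names are invented and are not taken from the paper:

        # Minimal sketch of a DEDS-style local model: states, an event alphabet,
        # a partial transition function, and a duration for each event.
        # State and event names are hypothetical, for illustration only.

        class LocalModel:
            def __init__(self, initial_state, transitions):
                # transitions maps (state, event) -> (next_state, duration_in_seconds)
                self.state = initial_state
                self.transitions = transitions
                self.elapsed = 0.0

            def fire(self, event):
                key = (self.state, event)
                if key not in self.transitions:
                    raise ValueError(f"event {event!r} not enabled in state {self.state!r}")
                next_state, duration = self.transitions[key]
                self.state = next_state
                self.elapsed += duration
                return next_state

        # A hypothetical pick-and-place submachine.
        arm = LocalModel(
            initial_state="idle",
            transitions={
                ("idle", "pick"): ("holding", 2.0),
                ("holding", "place"): ("idle", 3.0),
            },
        )
        arm.fire("pick")
        arm.fire("place")
        print(arm.state, arm.elapsed)   # idle 5.0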

  6. Exploring Patterns of Soil Organic Matter Decomposition with Students and the Public Through the Global Decomposition Project (GDP)

    Science.gov (United States)

    Wood, J. H.; Natali, S.

    2014-12-01

    The Global Decomposition Project (GDP) is a program designed to introduce and educate students and the general public about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. This easy-to-use hands-on activity focuses on questions such as "How do environmental conditions control decomposition of organic matter in soil?" and "Why do some areas accumulate organic matter and others do not?" Soil organic matter is important to local ecosystems because it affects soil structure, regulates soil moisture and temperature, and provides energy and nutrients to soil organisms. It is also important globally because it stores a large amount of carbon, and when microbes "eat", or decompose, organic matter, they release greenhouse gases such as carbon dioxide and methane into the atmosphere, which affects the earth's climate. The protocol describes a commonly used method to measure decomposition using a paper made of cellulose, a component of plant cell walls. Participants can receive pre-made cellulose decomposition bags, or make decomposition bags using instructions in the protocol and easily obtained materials (e.g., window screen and lignin-free paper). Individual results will be shared with all participants and the broader public through an online database. We will present decomposition bag results from a research site in Alaskan tundra, as well as from a middle-school-student led experiment in California. The GDP demonstrates how scientific methods can be extended to educate broader audiences, while at the same time, data collected by students and the public can provide new insight into global patterns of soil decomposition. The GDP provides a pathway for scientists and educators to interact and reach meaningful education and research goals.

  7. Classification of fMRI resting-state maps using machine learning techniques: A comparative study

    Science.gov (United States)

    Gallos, Ioannis; Siettos, Constantinos

    2017-11-01

    We compare the efficiency of Principal Component Analysis (PCA) and nonlinear manifold learning algorithms (ISOMAP and Diffusion Maps) for classifying brain maps between groups of schizophrenia patients and healthy controls from fMRI scans acquired during a resting-state experiment. After a standard pre-processing pipeline, we applied spatial Independent Component Analysis (ICA) to reduce (a) noise and (b) the spatial-temporal dimensionality of the fMRI maps. On the cross-correlation matrix of the ICA components, we applied PCA, ISOMAP and Diffusion Maps to find an embedded low-dimensional space. Finally, support vector machines (SVM) and k-NN algorithms were used to evaluate the performance of the algorithms in classifying between the two groups.
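
    A minimal sketch of this kind of pipeline, dimensionality reduction on correlation-derived features followed by an SVM with cross-validation, written with scikit-learn; the data are synthetic and the parameters are placeholders rather than those of the study:

        # Sketch: reduce correlation-derived features with PCA, then classify with an SVM.
        # Synthetic data and placeholder parameters; not the study's actual pipeline.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 190))   # e.g. upper triangle of a 20x20 correlation matrix per subject
        y = rng.integers(0, 2, size=40)  # two groups (patients vs. controls)

        clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf", C=1.0))
        scores = cross_val_score(clf, X, y, cv=5)
        print("mean CV accuracy:", scores.mean())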

  8. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment

    Science.gov (United States)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye

    2016-01-01

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively. PMID:27271840

  9. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment

    Science.gov (United States)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye

    2016-06-01

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively.

  10. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment.

    Science.gov (United States)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S; Phoon, Sin Ye

    2016-06-07

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively.

  11. Finite State Machine Analysis of Remote Sensor Data

    International Nuclear Information System (INIS)

    Barbson, John M.

    1999-01-01

    The use of unattended monitoring systems for monitoring the status of high value assets and processes has proven to be less costly and less intrusive than the on-site inspections which they are intended to replace. However, these systems present a classic information overload problem to anyone trying to analyze the resulting sensor data. These data are typically so voluminous and contain information at such a low level that the significance of any single reading (e.g., a door open event) is not obvious. Sophisticated, automated techniques are needed to extract expected patterns in the data and to isolate and characterize the remaining patterns that are due to undeclared activities. This paper describes a data analysis engine that runs a state machine model of each facility and its sensor suite. It analyzes the raw sensor data, converting and combining the inputs from many sensors into operator-domain-level information. It compares the resulting activities against a set of activities declared by an inspector or operator, and then presents the differences in a form comprehensible to an inspector. Although the current analysis engine was written with international nuclear material safeguards, nonproliferation, and transparency in mind, the software contains no information about any particular facility, so there is no reason why it cannot be applied anywhere it is important to verify that processes are occurring as expected, to detect intrusion into a secured area, or to detect the diversion of valuable assets.
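
    The engine described above is essentially a state machine that consumes low-level sensor events and flags any event not expected in the current state. A toy sketch of that idea follows; the facility states, events and expected transitions are invented for illustration:

        # Toy facility model: a state machine consuming raw sensor events and flagging
        # any event that is not expected in the current state. Names are illustrative.

        EXPECTED = {
            ("secure", "door_open"): "occupied",
            ("occupied", "door_close"): "secure",
            ("occupied", "motion"): "occupied",
        }

        def analyze(events, state="secure"):
            anomalies = []
            for timestamp, event in events:
                nxt = EXPECTED.get((state, event))
                if nxt is None:
                    anomalies.append((timestamp, state, event))  # undeclared activity
                else:
                    state = nxt
            return state, anomalies

        state, anomalies = analyze([(0, "door_open"), (5, "motion"), (9, "door_close"), (12, "motion")])
        print(state, anomalies)   # -> secure [(12, 'secure', 'motion')]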

  12. Online State Space Model Parameter Estimation in Synchronous Machines

    Directory of Open Access Journals (Sweden)

    Z. Gallehdari

    2014-06-01

    The suggested approach is evaluated for a sample synchronous machine model. Estimated parameters are tested for different inputs at different operating conditions. The effect of noise is also considered in this study. Simulation results show that the proposed approach provides good accuracy for parameter estimation.

  13. Pitfalls in VAR based return decompositions: A clarification

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

    Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component in their analysis is not "cashflow news" but "interest rate news", which should not be zero. Consequently, in contrast to what Chen and Zhao claim, their decomposition does not serve as a valid caution against VAR based decompositions. Second, we point out that in order for VAR based decompositions to be valid...

  14. Scikit-learn: Machine Learning in Python

    OpenAIRE

    Pedregosa, Fabian; Varoquaux, Gaël; Gramfort, Alexandre; Michel, Vincent; Thirion, Bertrand; Grisel, Olivier; Blondel, Mathieu; Prettenhofer, Peter; Weiss, Ron; Dubourg, Vincent; Vanderplas, Jake; Passos, Alexandre; Cournapeau, David; Brucher, Matthieu; Perrot, Matthieu

    2011-01-01

    International audience; Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic ...

  15. Scikit-learn: Machine Learning in Python

    OpenAIRE

    Pedregosa, Fabian; Varoquaux, Gaël; Gramfort, Alexandre; Michel, Vincent; Thirion, Bertrand; Grisel, Olivier; Blondel, Mathieu; Louppe, Gilles; Prettenhofer, Peter; Weiss, Ron; Dubourg, Vincent; Vanderplas, Jake; Passos, Alexandre; Cournapeau, David; Brucher, Matthieu

    2012-01-01

    Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings....
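
    As a quick illustration of the package's high-level estimator API (a standard introductory example, not code taken from the paper):

        # Fit and evaluate a classifier in a few lines with scikit-learn's estimator API.
        from sklearn.datasets import load_iris
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        X, y = load_iris(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)
        print("test accuracy:", model.score(X_test, y_test))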

  16. [A new machinability test machine and the machinability of composite resins for core built-up].

    Science.gov (United States)

    Iwasaki, N

    2001-06-01

    A new machinability test machine especially for dental materials was devised. The purpose of this study was to evaluate the effects of grinding conditions on the machinability of core built-up resins using this machine, and to confirm the relationship between machinability and other properties of composite resins. The experimental machinability test machine consisted of a dental air-turbine handpiece, a control weight unit, a driving unit for the stage fixing the test specimen, and so on. The machinability was evaluated as the change in volume after grinding with a diamond point. Five kinds of core built-up resins and human teeth were used in this study. The machinability of these composite resins increased with an increasing load during grinding, and decreased with repeated grinding. There was no obvious correlation between machinability and Vickers hardness; however, a negative correlation was observed between machinability and scratch width.

  17. Simulation Tools for Electrical Machines Modelling: Teaching and ...

    African Journals Online (AJOL)

    Simulation tools are used both for research and teaching to allow a good comprehension of the systems under study before practical implementations. This paper illustrates the way MATLAB is used to model non-linearites in synchronous machine. The machine is modeled in rotor reference frame with currents as state ...

  18. Random non-proportional fatigue tests with planar tri-axial fatigue testing machine

    OpenAIRE

    Inoue, T.; Nagao, R.; Takeda, N.

    2016-01-01

    Complex stresses, which occur on the mechanical surfaces of transport machinery in service, bring a drastic degradation in fatigue life. However, it is hard to reproduce such complex stress states for evaluating the fatigue life with conventional multiaxial fatigue machines. We have developed a fatigue testing machine that enables reproduction of such complex stresses. The testing machine can reproduce arbitrary in-plane stress states by applying three independent loads to the test specimen u...

  19. Machine learning for healthcare technologies

    CERN Document Server

    Clifton, David A

    2016-01-01

    This book brings together chapters on the state-of-the-art in machine learning (ML) as it applies to the development of patient-centred technologies, with a special emphasis on 'big data' and mobile data.

  20. Plant Species Rather Than Climate Greatly Alters the Temporal Pattern of Litter Chemical Composition During Long-Term Decomposition

    Science.gov (United States)

    Li, Yongfu; Chen, Na; Harmon, Mark E.; Li, Yuan; Cao, Xiaoyan; Chappell, Mark A.; Mao, Jingdong

    2015-10-01

    A feedback between decomposition and litter chemical composition occurs with decomposition altering composition that in turn influences the decomposition rate. Elucidating the temporal pattern of chemical composition is vital to understand this feedback, but the effects of plant species and climate on chemical changes remain poorly understood, especially over multiple years. In a 10-year decomposition experiment with litter of four species (Acer saccharum, Drypetes glauca, Pinus resinosa, and Thuja plicata) from four sites that range from the arctic to tropics, we determined the abundance of 11 litter chemical constituents that were grouped into waxes, carbohydrates, lignin/tannins, and proteins/peptides using advanced 13C solid-state NMR techniques. Decomposition generally led to an enrichment of waxes and a depletion of carbohydrates, whereas the changes of other chemical constituents were inconsistent. Inconsistent convergence in chemical compositions during decomposition was observed among different litter species across a range of site conditions, whereas one litter species converged under different climate conditions. Our data clearly demonstrate that plant species rather than climate greatly alters the temporal pattern of litter chemical composition, suggesting the decomposition-chemistry feedback varies among different plant species.

  1. An analogue of Wagner's theorem for decompositions of matrix algebras

    International Nuclear Information System (INIS)

    Ivanov, D N

    2004-01-01

    Wagner's celebrated theorem states that a finite affine plane whose collineation group is transitive on lines is a translation plane. The notion of an orthogonal decomposition (OD) of a classically semisimple associative algebra introduced by the author allows one to draw an analogy between finite affine planes of order n and ODs of the matrix algebra M n (C) into a sum of subalgebras conjugate to the diagonal subalgebra. These ODs are called WP-decompositions and are equivalent to the well-known ODs of simple Lie algebras of type A n-1 into a sum of Cartan subalgebras. In this paper we give a detailed and improved proof of the analogue of Wagner's theorem for WP-decompositions of the matrix algebra of odd non-square order, an outline of which was published earlier in a short note in 'Russian Math. Surveys' in 1994. In addition, in the framework of the theory of ODs of associative algebras, based on the method of idempotent bases, we obtain an elementary proof of the well-known Kostrikin-Tiep theorem on irreducible ODs of Lie algebras of type A n-1 in the case where n is a prime power.

  2. A Digital Liquid State Machine With Biologically Inspired Learning and Its Application to Speech Recognition.

    Science.gov (United States)

    Zhang, Yong; Li, Peng; Jin, Yingyezhe; Choe, Yoonsuck

    2015-11-01

    This paper presents a bioinspired digital liquid-state machine (LSM) for low-power very-large-scale-integration (VLSI)-based machine learning applications. To the best of the authors' knowledge, this is the first work that employs a bioinspired spike-based learning algorithm for the LSM. With the proposed online learning, the LSM extracts information from input patterns on the fly without needing intermediate data storage as required in offline learning methods such as ridge regression. The proposed learning rule is local such that each synaptic weight update is based only upon the firing activities of the corresponding presynaptic and postsynaptic neurons without incurring global communications across the neural network. Compared with the backpropagation-based learning, the locality of computation in the proposed approach lends itself to efficient parallel VLSI implementation. We use subsets of the TI46 speech corpus to benchmark the bioinspired digital LSM. To reduce the complexity of the spiking neural network model without performance degradation for speech recognition, we study the impacts of synaptic models on the fading memory of the reservoir and hence the network performance. Moreover, we examine the tradeoffs between synaptic weight resolution, reservoir size, and recognition performance and present techniques to further reduce the overhead of hardware implementation. Our simulation results show that in terms of isolated word recognition evaluated using the TI46 speech corpus, the proposed digital LSM rivals the state-of-the-art hidden Markov-model-based recognizer Sphinx-4 and outperforms all other reported recognizers including the ones that are based upon the LSM or neural networks.
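
    The locality property stressed above means that each weight update uses only the activity of its own pre- and postsynaptic neurons. A heavily simplified sketch of such a rule is given below; it is an illustrative delta-style update with placeholder constants, not the paper's actual spike-based learning algorithm:

        # Simplified local update: each weight changes using only the firing activities
        # of its own pre- and postsynaptic neurons plus a per-neuron teacher signal.
        # Illustrative rule with invented constants; not the paper's exact algorithm.
        import numpy as np

        def local_update(weights, pre_activity, post_activity, teacher, lr=0.01):
            # pre_activity: (n_pre,) recent firing counts of presynaptic (reservoir) neurons
            # post_activity: (n_post,) recent firing counts of readout neurons
            # teacher: (n_post,) desired activity; the error sets the sign of each update
            error = teacher - post_activity
            return weights + lr * np.outer(error, pre_activity)

        w = np.zeros((2, 5))
        w = local_update(w,
                         pre_activity=np.array([3, 0, 1, 2, 0]),
                         post_activity=np.array([1.0, 4.0]),
                         teacher=np.array([2.0, 2.0]))
        print(w)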

  3. An investigation on thermal decomposition of DNTF-CMDB propellants

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Wei; Wang, Jiangning; Ren, Xiaoning; Zhang, Laying; Zhou, Yanshui [Xi' an Modern Chemistry Research Institute, Xi' an 710065 (China)

    2007-12-15

    The thermal decomposition of DNTF-CMDB propellants was investigated by pressure differential scanning calorimetry (PDSC) and thermogravimetry (TG). The results show that there is only one decomposition peak on DSC curves, because the decomposition peak of DNTF cannot be separated from that of the NC/NG binder. The decomposition of DNTF can be obviously accelerated by the decomposition products of the NC/NG binder. The kinetic parameters of thermal decompositions for four DNTF-CMDB propellants at 6 MPa were obtained by the Kissinger method. It is found that the reaction rate decreases with increasing content of DNTF. (Abstract Copyright [2007], Wiley Periodicals, Inc.)

  4. Responses of primary production, leaf litter decomposition and associated communities to stream eutrophication

    International Nuclear Information System (INIS)

    Dunck, Bárbara; Lima-Fernandes, Eva; Cássio, Fernanda; Cunha, Ana; Rodrigues, Liliana; Pascoal, Cláudia

    2015-01-01

    We assessed the effects of eutrophication on leaf litter decomposition and primary production, and on periphytic algae, fungi and invertebrates. According to the subsidy-stress model, we expected that when algae and decomposers were nutrient limited, their activity and diversity would increase at moderate levels of nutrient enrichment, but decrease at high levels of nutrients, because eutrophication would lead to the presence of other stressors and overwhelm the subsidy effect. Chestnut leaves (Castanea sativa Mill) were enclosed in mesh bags and immersed in five streams of the Ave River basin (northwest Portugal) to assess leaf decomposition and colonization by invertebrates and fungi. In parallel, polyethylene slides were attached to the mesh bags to allow colonization by algae and to assess primary production. Communities of periphytic algae and decomposers discriminated the streams according to the trophic state. Primary production, decomposition and biodiversity were lower in streams at both ends of the trophic gradient. - Highlights: • Algae and decomposers discriminated the streams according to the eutrophication level. • Primary production and litter decomposition are stimulated by moderate eutrophication. • Biodiversity and process rates were reduced in highly eutrophic streams. • Subsidy-stress model explained biodiversity and process rates under eutrophication. - Rates of leaf litter decomposition, primary production and richness of periphytic algae, fungi and invertebrates were lower in streams at both ends of the trophic gradient

  5. Art of spin decomposition

    International Nuclear Information System (INIS)

    Chen Xiangsong; Sun Weimin; Wang Fan; Goldman, T.

    2011-01-01

    We analyze the problem of spin decomposition for an interacting system from a natural perspective of constructing angular-momentum eigenstates. We split, from the total angular-momentum operator, a proper part which can be separately conserved for a stationary state. This part commutes with the total Hamiltonian and thus specifies the quantum angular momentum. We first show how this can be done in a gauge-dependent way, by seeking a specific gauge in which part of the total angular-momentum operator vanishes identically. We then construct a gauge-invariant operator with the desired property. Our analysis clarifies what is the most pertinent choice among the various proposals for decomposing the nucleon spin. A similar analysis is performed for extracting a proper part from the total Hamiltonian to construct energy eigenstates.

  6. Task Decomposition Module For Telerobot Trajectory Generation

    Science.gov (United States)

    Wavering, Albert J.; Lumia, Ron

    1988-10-01

    A major consideration in the design of trajectory generation software for a Flight Telerobotic Servicer (FTS) is that the FTS will be called upon to perform tasks which require a diverse range of manipulator behaviors and capabilities. In a hierarchical control system where tasks are decomposed into simpler and simpler subtasks, the task decomposition module which performs trajectory planning and execution should therefore be able to accommodate a wide range of algorithms. In some cases, it will be desirable to plan a trajectory for an entire motion before manipulator motion commences, as when optimizing over the entire trajectory. Many FTS motions, however, will be highly sensory-interactive, such as moving to attain a desired position relative to a non-stationary object whose position is periodically updated by a vision system. In this case, the time-varying nature of the trajectory may be handled either by frequent replanning using updated sensor information, or by using an algorithm which creates a less specific state-dependent plan that determines the manipulator path as the trajectory is executed (rather than a priori). This paper discusses a number of trajectory generation techniques from these categories and how they may be implemented in a task decomposition module of a hierarchical control system. The structure, function, and interfaces of the proposed trajectory generation module are briefly described, followed by several examples of how different algorithms may be performed by the module. The proposed task decomposition module provides a logical structure for trajectory planning and execution, and supports a large number of published trajectory generation techniques.

  7. Field emission study of ammonia absorption and catalytic decomposition on individual molybdenum planes

    International Nuclear Information System (INIS)

    Abon, M.; Bergeret, G.; Tardy, B.

    1977-01-01

    A probe-hole field emission microscope was used to investigate the crystallographic specificity of ammonia adsorption at 200 and 300 K on (110), (100), (211) and (111) molybdenum crystal planes. Chemisorbed NH3 causes a large work function decrease, especially at 200 K, in agreement with an associative adsorption model which can also explain that this decrease is more important on the crystal planes of highest work function (at 200 K, Δψ = -2.25 eV on Mo(110) compared to Δψ = -1.55 eV on Mo(111)). The decomposition of NH3 was followed by measuring the work function changes for stepwise heating of the Mo tip covered with NH3 at 200 K. On the four studied planes NH3 decomposition and H2 desorption are completed at about 400 K. Δψ changes above 400 K depend on the crystal planes and have been related to two different nitrogen surface states. No inactive plane towards NH3 adsorption and decomposition has been found, but the noted crystallographic anisotropy in this low pressure study is relevant to the structure sensitive character of the NH3 decomposition and synthesis reactions. (Auth.)

  8. Local Fractional Adomian Decomposition and Function Decomposition Methods for Laplace Equation within Local Fractional Operators

    Directory of Open Access Journals (Sweden)

    Sheng-Ping Yan

    2014-01-01

    Full Text Available We perform a comparison between the local fractional Adomian decomposition and local fractional function decomposition methods applied to the Laplace equation. The operators are taken in the local sense. The results illustrate the significant features of the two methods which are both very effective and straightforward for solving the differential equations with local fractional derivative.

  9. Constructive quantum Shannon decomposition from Cartan involutions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, Byron; Love, Peter [Department of Physics, 370 Lancaster Ave., Haverford College, Haverford, PA 19041 (United States)], E-mail: plove@haverford.edu

    2008-10-03

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions.

  10. Constructive quantum Shannon decomposition from Cartan involutions

    International Nuclear Information System (INIS)

    Drury, Byron; Love, Peter

    2008-01-01

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions

  11. The Machine within the Machine

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Although Virtual Machines are widespread across CERN, you probably won't have heard of them unless you work for an experiment. Virtual machines - known as VMs - allow you to create a separate machine within your own, allowing you to run Linux on your Mac, or Windows on your Linux - whatever combination you need.   Using a CERN Virtual Machine, Linux analysis software runs on a MacBook. When it comes to LHC data, one of the primary issues collaborations face is the diversity of computing environments among collaborators spread across the world. What if an institute cannot run the analysis software because they use different operating systems? "That's where the CernVM project comes in," says Gerardo Ganis, PH-SFT staff member and leader of the CernVM project. "We were able to respond to experimentalists' concerns by providing a virtual machine package that could be used to run experiment software. This way, no matter what hardware they have ...

  12. Chemical physics of decomposition of energetic materials. Problems and prospects

    International Nuclear Information System (INIS)

    Smirnov, Lev P

    2004-01-01

    The review is concerned with an analysis of the results obtained in kinetic and mechanistic studies on the decomposition of energetic materials (explosives, powders and solid propellants). It is shown that the state of the art in this field falls short of the potential of modern chemical kinetics and chemical physics. Unsolved problems are outlined and ways of solving them are proposed.

  13. Hybrid Forecasting Approach Based on GRNN Neural Network and SVR Machine for Electricity Demand Forecasting

    Directory of Open Access Journals (Sweden)

    Weide Li

    2017-01-01

    Full Text Available Accurate electric power demand forecasting plays a key role in electricity markets and power systems. Electric power demand is usually a non-linear problem due to various unknown reasons, which makes it difficult to obtain accurate predictions by traditional methods. The purpose of this paper is to propose a novel hybrid forecasting method for managing and scheduling electricity power. The proposed new method, EEMD-SCGRNN-PSVR, combines ensemble empirical mode decomposition (EEMD), seasonal adjustment (S), cross validation (C), general regression neural network (GRNN) and a support vector regression machine optimized by the particle swarm optimization algorithm (PSVR). The main idea of EEMD-SCGRNN-PSVR is to forecast the waveform and trend components hidden in the demand series separately, instead of directly forecasting the original electric demand. EEMD-SCGRNN-PSVR is used to predict the half-hourly electricity demand one week ahead in two data sets (New South Wales (NSW) and Victorian State (VIC) in Australia). Experimental results show that the new hybrid model outperforms the other three models in terms of forecasting accuracy and model robustness.
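
    The overall structure, decompose the series, forecast each component separately, and recombine, can be sketched as follows. The component extraction and the per-component forecaster below are deliberately crude stand-ins (a seasonal split and a naive repeat-last-cycle model), not the EEMD, GRNN or PSVR machinery of the paper:

        # Structure of a hybrid decomposition-forecast model: split the series into
        # components, forecast each one separately, and sum the component forecasts.
        # The decomposition and forecaster below are simple stand-ins for illustration.
        import numpy as np

        def decompose(series, period=48):
            # Stand-in for EEMD: a crude split into a seasonal component and a remainder.
            seasonal = np.array([series[i % period::period].mean() for i in range(len(series))])
            remainder = series - seasonal
            return [seasonal, remainder]

        def forecast_component(component, horizon, period=48):
            # Naive forecaster: repeat the last full period of the component.
            last_cycle = component[-period:]
            reps = int(np.ceil(horizon / period))
            return np.tile(last_cycle, reps)[:horizon]

        def hybrid_forecast(series, horizon=48):
            return sum(forecast_component(c, horizon) for c in decompose(series))

        demand = np.sin(np.linspace(0, 40 * np.pi, 2000)) + np.linspace(0, 2, 2000)
        print(hybrid_forecast(demand, horizon=5))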

  14. Operation of a quantum dot in the finite-state machine mode: Single-electron dynamic memory

    International Nuclear Information System (INIS)

    Klymenko, M. V.; Klein, M.; Levine, R. D.; Remacle, F.

    2016-01-01

    A single electron dynamic memory is designed based on the non-equilibrium dynamics of charge states in electrostatically defined metallic quantum dots. Using the orthodox theory for computing the transfer rates and a master equation, we model the dynamical response of devices consisting of a charge sensor coupled to either a single or a double quantum dot subjected to a pulsed gate voltage. We show that transition rates between charge states in metallic quantum dots are characterized by an asymmetry that can be controlled by the gate voltage. This effect is more pronounced when the switching between charge states corresponds to a Markovian process involving electron transport through a chain of several quantum dots. By simulating the dynamics of electron transport we demonstrate that the quantum box operates as a finite-state machine that can be addressed by choosing suitable shapes and switching rates of the gate pulses. We further show that writing times in the ns range and retention memory times six orders of magnitude longer, in the ms range, can be achieved on the double quantum dot system using experimentally feasible parameters, thereby demonstrating that the device can operate as a dynamic single electron memory.

  15. Operation of a quantum dot in the finite-state machine mode: Single-electron dynamic memory

    Energy Technology Data Exchange (ETDEWEB)

    Klymenko, M. V. [Department of Chemistry, University of Liège, B4000 Liège (Belgium); Klein, M. [The Fritz Haber Center for Molecular Dynamics and the Institute of Chemistry, The Hebrew University of Jerusalem, Jerusalem 91904 (Israel); Levine, R. D. [The Fritz Haber Center for Molecular Dynamics and the Institute of Chemistry, The Hebrew University of Jerusalem, Jerusalem 91904 (Israel); Crump Institute for Molecular Imaging and Department of Molecular and Medical Pharmacology, David Geffen School of Medicine and Department of Chemistry and Biochemistry, University of California, Los Angeles, California 90095 (United States); Remacle, F., E-mail: fremacle@ulg.ac.be [Department of Chemistry, University of Liège, B4000 Liège (Belgium); The Fritz Haber Center for Molecular Dynamics and the Institute of Chemistry, The Hebrew University of Jerusalem, Jerusalem 91904 (Israel)

    2016-07-14

    A single electron dynamic memory is designed based on the non-equilibrium dynamics of charge states in electrostatically defined metallic quantum dots. Using the orthodox theory for computing the transfer rates and a master equation, we model the dynamical response of devices consisting of a charge sensor coupled to either a single or a double quantum dot subjected to a pulsed gate voltage. We show that transition rates between charge states in metallic quantum dots are characterized by an asymmetry that can be controlled by the gate voltage. This effect is more pronounced when the switching between charge states corresponds to a Markovian process involving electron transport through a chain of several quantum dots. By simulating the dynamics of electron transport we demonstrate that the quantum box operates as a finite-state machine that can be addressed by choosing suitable shapes and switching rates of the gate pulses. We further show that writing times in the ns range and retention memory times six orders of magnitude longer, in the ms range, can be achieved on the double quantum dot system using experimentally feasible parameters, thereby demonstrating that the device can operate as a dynamic single electron memory.

  16. Method and system employing finite state machine modeling to identify one of a plurality of different electric load types

    Science.gov (United States)

    Du, Liang; Yang, Yi; Harley, Ronald Gordon; Habetler, Thomas G.; He, Dawei

    2016-08-09

    A system is for a plurality of different electric load types. The system includes a plurality of sensors structured to sense a voltage signal and a current signal for each of the different electric loads; and a processor. The processor acquires a voltage and current waveform from the sensors for a corresponding one of the different electric load types; calculates a power or current RMS profile of the waveform; quantizes the power or current RMS profile into a set of quantized state-values; evaluates a state-duration for each of the quantized state-values; evaluates a plurality of state-types based on the power or current RMS profile and the quantized state-values; generates a state-sequence that describes a corresponding finite state machine model of a generalized load start-up or transient profile for the corresponding electric load type; and identifies the corresponding electric load type.
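
    The quantization and state-sequence step described above can be illustrated with a small sketch; the thresholds, profile values and state labels are invented, and the patent's actual procedure is more involved:

        # Sketch: quantize a power RMS profile into discrete state-values and collapse
        # repeats into a (state, duration) sequence. Thresholds are illustrative only.
        import numpy as np

        def to_state_sequence(rms_profile, bin_edges):
            states = np.digitize(rms_profile, bin_edges).tolist()  # quantized state-values
            sequence = []
            for s in states:
                if sequence and sequence[-1][0] == s:
                    sequence[-1] = (s, sequence[-1][1] + 1)        # extend the state-duration
                else:
                    sequence.append((s, 1))
            return sequence

        profile = np.array([0.1, 0.1, 5.2, 5.0, 5.1, 1.2, 1.1, 1.2, 0.1])  # e.g. a start-up transient
        print(to_state_sequence(profile, bin_edges=[0.5, 2.0, 4.0]))
        # [(0, 2), (3, 3), (1, 3), (0, 1)]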

  17. Automation of a universal machine

    International Nuclear Information System (INIS)

    Rodriguez S, J.

    1997-01-01

    The development of the hardware and software of a control system for a servo-hydraulic machine is presented. The universal machine is an Instron, model 1331, used to perform mechanical tests. The software includes the acquisition of data from the measurements, processing, and graphic presentation of the results for tension-type tests. The control is based on a PPI (Programmable Peripheral Interface) 8255, in which the different states of the machine are set. The control functions of the machine are: a) start of a test, b) pause in the test, c) end of the test, d) choice of the control mode of the machine, which can be load, stroke or strain. For the data acquisition, a commercial card, National Products, model DAS-16, plugged into a slot of a PC, was used. Three transducers provide the analog signals: a load cell, an LVDT and an extensometer. All the data are digitized and handled in order to obtain the results in the appropriate working units. A stress-strain graph is obtained on the screen of the PC for a tension test of a specific material. The points of maximum stress, rupture stress and yield stress of the material under test are shown. (Author)

  18. In situ study of glasses decomposition layer

    International Nuclear Information System (INIS)

    Zarembowitch-Deruelle, O.

    1997-01-01

    The aim of this work is to understand the mechanisms involved in the decomposition of glasses by water and their consequences for the morphology of the decomposition layer, in particular in the case of a nuclear glass, R7T7. Since the chemical composition of this glass is very complicated, it is difficult to know the influence of the different elements on the decomposition kinetics and on the resulting morphology, because several atoms exhibit the same behaviour. Glasses with a simplified composition (only 5 elements) were therefore synthesized. The morphological and structural characteristics of these glasses are given. They were then decomposed by water. The leaching curves do not reflect the decomposition kinetics but the solubility of the different elements at every moment. The three steps of the leaching are: 1) de-alkalinization, 2) lattice rearrangement, 3) solubilization of the heavy elements. Two types of decomposition layer have also been revealed, depending on the heavy element content of the glass. (O.M.)

  19. Energy index decomposition methodology at the plant level

    Science.gov (United States)

    Kumphai, Wisit

    Scope and method of study. The dissertation explores the use of a high level energy intensity index as a facility-level energy performance monitoring indicator, with the goal of developing a methodology for an economically based energy performance monitoring system that incorporates production information. The performance measure closely monitors energy usage, production quantity, and product mix and determines the production efficiency as part of an ongoing process that would enable facility managers to keep track of, and in the future be able to predict, when to perform a recommissioning process. The study focuses on the use of the index decomposition methodology and explores several high level (industry, sector, and country level) energy utilization indexes, namely Additive Log Mean Divisia, Multiplicative Log Mean Divisia, and Additive Refined Laspeyres. One level of index decomposition is performed. The indexes are decomposed into intensity and product mix effects. These indexes are tested on a flow shop brick manufacturing plant model in three different climates in the United States. The indexes obtained are analyzed by fitting an ARIMA model and testing for dependency between the two decomposed indexes. Findings and conclusions. The results indicate that the Additive Refined Laspeyres index decomposition methodology is suitable for use in a flow shop, non-air-conditioned production environment as an energy performance monitoring indicator. It is likely that this research can be further expanded into predicting when to perform a recommissioning process.
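
    A compact numerical sketch of an additive Log Mean Divisia decomposition of the kind named above, splitting a change in total energy use into activity, structure (product mix) and intensity effects; the two-period plant data are invented:

        # Additive LMDI: decompose the change in total energy E = sum_i (q * s_i * e_i)
        # into activity, structure (product mix) and intensity effects.
        # The two-period plant data below are invented for illustration.
        import math

        def logmean(a, b):
            return a if a == b else (a - b) / (math.log(a) - math.log(b))

        def lmdi(q0, s0, e0, q1, s1, e1):
            effects = {"activity": 0.0, "structure": 0.0, "intensity": 0.0}
            for i in range(len(s0)):
                w = logmean(q1 * s1[i] * e1[i], q0 * s0[i] * e0[i])
                effects["activity"] += w * math.log(q1 / q0)
                effects["structure"] += w * math.log(s1[i] / s0[i])
                effects["intensity"] += w * math.log(e1[i] / e0[i])
            return effects

        # Two products: total output q, output shares s_i, energy intensities e_i.
        print(lmdi(q0=100, s0=[0.6, 0.4], e0=[2.0, 1.0],
                   q1=110, s1=[0.5, 0.5], e1=[1.8, 1.1]))
        # The three effects sum exactly to the change in total energy use.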

  20. Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.

    Science.gov (United States)

    Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone

    2017-12-26

    Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments", which occurs when exploring new fragments for a defined compound series, and (2) "new static core and transformations", which resembles, for instance, the identification of a new compound series. Very good results were achieved by all employed machine learning methods, especially for the new fragments case, but overall deep neural network models performed best, allowing reliable predictions also for the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have a higher generalizability compared to models trained on focused series and can extend beyond the chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach to make high quality predictions on various data sets and in different compound optimization scenarios.

  1. Decomposition studies of group 6 hexacarbonyl complexes. Pt. 2. Modelling of the decomposition process

    Energy Technology Data Exchange (ETDEWEB)

    Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)

    2016-11-01

    The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO)6) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO)6 complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO)6 and W(CO)6 are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory are used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO)6 is suggested which is sensitive enough to benchmark our theoretical understanding of the bond stability in carbonyl compounds of the heaviest elements.

  2. Empirical mode decomposition and k-nearest embedding vectors for timely analyses of antibiotic resistance trends.

    Science.gov (United States)

    Teodoro, Douglas; Lovis, Christian

    2013-01-01

    Antibiotic resistance is a major worldwide public health concern. In clinical settings, timely antibiotic resistance information is key for care providers as it allows appropriate targeted treatment or improved empirical treatment when the specific results of the patient are not yet available. To improve antibiotic resistance trend analysis algorithms by building a novel, fully data-driven forecasting method from the combination of trend extraction and machine learning models for enhanced biosurveillance systems. We investigate a robust model for extraction and forecasting of antibiotic resistance trends using a decade of microbiology data. Our method consists of breaking down the resistance time series into independent oscillatory components via the empirical mode decomposition technique. The resulting waveforms describing intrinsic resistance trends serve as the input for the forecasting algorithm. The algorithm applies the delay coordinate embedding theorem together with the k-nearest neighbor framework to project mappings from past events into the future dimension and estimate the resistance levels. The algorithms that decompose the resistance time series and filter out high frequency components showed statistically significant performance improvements in comparison with a benchmark random walk model. We present further qualitative use-cases of antibiotic resistance trend extraction, where empirical mode decomposition was applied to highlight the specificities of the resistance trends. The decomposition of the raw signal was found not only to yield valuable insight into the resistance evolution, but also to produce novel models of resistance forecasters with boosted prediction performance, which could be utilized as a complementary method in the analysis of antibiotic resistance trends.
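
    The delay-coordinate embedding and k-nearest-neighbour step can be sketched as follows: embed one (already decomposed) trend component into lag vectors, find the historical vectors closest to the current one, and average their successors. The parameters and the toy series are illustrative only, not those of the study:

        # Sketch of k-nearest-neighbour forecasting on delay-coordinate embeddings.
        # The toy series stands in for one smooth component of a resistance trend.
        import numpy as np

        def knn_forecast(series, dim=3, k=5):
            # Lag vectors x_t = (y_{t-dim+1}, ..., y_t) with known successors y_{t+1}.
            X = np.array([series[t - dim + 1:t + 1] for t in range(dim - 1, len(series) - 1)])
            targets = series[dim:]
            query = series[-dim:]
            dist = np.linalg.norm(X - query, axis=1)
            nearest = np.argsort(dist)[:k]
            return targets[nearest].mean()   # predicted next value

        trend = np.sin(np.linspace(0, 12 * np.pi, 300)) * 0.3 + 0.5
        print(knn_forecast(trend))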

  3. Spatial domain decomposition for neutron transport problems

    International Nuclear Information System (INIS)

    Yavuz, M.; Larsen, E.W.

    1989-01-01

    A spatial Domain Decomposition method is proposed for modifying the Source Iteration (SI) and Diffusion Synthetic Acceleration (DSA) algorithms for solving discrete ordinates problems. The method, which consists of subdividing the spatial domain of the problem and performing the transport sweeps independently on each subdomain, has the advantage of being parallelizable because the calculations in each subdomain can be performed on separate processors. In this paper we describe the details of this spatial decomposition and study, by numerical experimentation, the effect of this decomposition on the SI and DSA algorithms. Our results show that the spatial decomposition has little effect on the convergence rates until the subdomains become optically thin (less than about a mean free path in thickness)

  4. Making molecular machines work

    NARCIS (Netherlands)

    Browne, Wesley R.; Feringa, Ben L.

    2006-01-01

    In this review we chart recent advances in what is at once an old and very new field of endeavour: the achievement of control of motion at the molecular level, including solid-state and surface-mounted rotors, and its natural progression to the development of synthetic molecular machines. Besides a...

  5. Aging-driven decomposition in zolpidem hemitartrate hemihydrate and the single-crystal structure of its decomposition products.

    Science.gov (United States)

    Vega, Daniel R; Baggio, Ricardo; Roca, Mariana; Tombari, Dora

    2011-04-01

    The "aging-driven" decomposition of zolpidem hemitartrate hemihydrate (form A) has been followed by X-ray powder diffraction (XRPD), and the crystal and molecular structures of the decomposition products studied by single-crystal methods. The process is very similar to the "thermally driven" one, recently described in the literature for form E (Halasz and Dinnebier. 2010. J Pharm Sci 99(2): 871-874), resulting in a two-phase system: the neutral free base (common to both decomposition processes) and, in the present case, a novel zolpidem tartrate monohydrate, unique to the "aging-driven" decomposition. Our room-temperature single-crystal analysis gives for the free base comparable results as the high-temperature XRPD ones already reported by Halasz and Dinnebier: orthorhombic, Pcba, a = 9.6360(10) Å, b = 18.2690(5) Å, c = 18.4980(11) Å, and V = 3256.4(4) Å(3) . The unreported zolpidem tartrate monohydrate instead crystallizes in monoclinic P21 , which, for comparison purposes, we treated in the nonstandard setting P1121 with a = 20.7582(9) Å, b = 15.2331(5) Å, c = 7.2420(2) Å, γ = 90.826(2)°, and V = 2289.73(14) Å(3) . The structure presents two complete moieties in the asymmetric unit (z = 4, z' = 2). The different phases obtained in both decompositions are readily explained, considering the diverse genesis of both processes. Copyright © 2010 Wiley-Liss, Inc.

  6. Effect of Machining Velocity in Nanoscale Machining Operations

    International Nuclear Information System (INIS)

    Islam, Sumaiya; Khondoker, Noman; Ibrahim, Raafat

    2015-01-01

    The aim of this study is to investigate the generated forces and deformations of single crystal Cu with (100), (110) and (111) crystallographic orientations in nanoscale machining operations. A nanoindenter equipped with a nanoscratching attachment was used for machining operations and in-situ observation of a nanoscale groove. As a machining parameter, the machining velocity was varied to measure the normal and cutting forces. At a fixed machining velocity, different levels of normal and cutting forces were generated due to the different crystallographic orientations of the specimens. Moreover, after the machining operation the percentage of elastic recovery was measured, and it was found that both elastic and plastic deformations were responsible for producing a nanoscale groove within the range of machining velocities from 250 to 1000 nm/s. (paper)

  7. Human decomposition and the reliability of a 'Universal' model for post mortem interval estimations.

    Science.gov (United States)

    Cockle, Diane L; Bell, Lynne S

    2015-08-01

    Human decomposition is a complex biological process driven by an array of variables which are not clearly understood. The medico-legal community have long been searching for a reliable method to establish the post-mortem interval (PMI) for those whose deaths have either been hidden or gone unnoticed. To date, attempts to develop a PMI estimation method based on the state of the body either at the scene or at autopsy have been unsuccessful. One recent study has proposed that two simple formulae, based on the level of decomposition, humidity and temperature, could be used to accurately calculate the PMI for bodies outside, on or under the surface worldwide. This study attempted to validate 'Formula I' [1] (for bodies on the surface) using 42 Canadian cases with known PMIs. The results indicated that, for bodies exposed to warm temperatures, Formula I consistently overestimated the known PMI by a large and inconsistent margin, while for bodies exposed to cold and freezing temperatures (less than 4°C) the PMI was dramatically underestimated. The ability of 'Formula II' to estimate the PMI for buried bodies was also examined using a set of 22 known Canadian burial cases. As the cases used in this study are retrospective, some of the data needed for Formula II were not available. The value of 4.6 used in Formula II to represent the standard ratio by which burial decelerates the rate of decomposition was also examined. The average time taken to achieve each stage of decomposition both on and under the surface was compared for the 118 known cases. It was found that the rate of decomposition was not consistent throughout all stages of decomposition. The rates of autolysis above and below the ground were equivalent, with the buried cases staying in a state of putrefaction for a prolonged period of time. It is suggested that differences in temperature extremes and humidity levels between geographic regions may make it impractical to apply formulas developed in

  8. Self-decomposition of radiochemicals. Principles, control, observations and effects

    International Nuclear Information System (INIS)

    Evans, E.A.

    1976-01-01

    The aim of the booklet is to remind the established user of radiochemicals of the problems of self-decomposition and to inform those investigators who are new to the applications of radiotracers. The section headings are: introduction; radionuclides; mechanisms of decomposition; effects of temperature; control of decomposition; observations of self-decomposition (sections for compounds labelled with (a) carbon-14, (b) tritium, (c) phosphorus-32, (d) sulphur-35, (e) gamma- or X-ray emitting radionuclides, decomposition of labelled macromolecules); effects of impurities in radiotracer investigations; stability of labelled compounds during radiotracer studies. (U.K.)

  9. A logical correspondence between natural semantics and abstract machines

    DEFF Research Database (Denmark)

    Simmons, Robert J.; Zerny, Ian

    2013-01-01

    We present a logical correspondence between natural semantics and abstract machines. This correspondence enables the mechanical and fully-correct construction of an abstract machine from a natural semantics. Our logical correspondence mirrors the Reynolds functional correspondence, but we...... manipulate semantic specifications encoded in a logical framework instead of manipulating functional programs. Natural semantics and abstract machines are instances of substructural operational semantics. As a byproduct, using a substructural logical framework, we bring concurrent and stateful models...

  10. Method of control of machining accuracy of low-rigidity elastic-deformable shafts

    Directory of Open Access Journals (Sweden)

    Antoni Świć

    Full Text Available The paper presents an analysis of the possibility of increasing the accuracy and stability of machining of low-rigidity shafts while ensuring high efficiency and economy of their machining. An effective way of improving the accuracy of machining of such shafts is to increase their rigidity through an oriented change of the elastic-deformable state, achieved by applying a tensile force which, combined with the machining force, forms longitudinal-lateral strains. The paper also presents mathematical models describing the changes of the elastic-deformable state resulting from the application of the tensile force. It presents the results of experimental studies on the deformation of elastic low-rigidity shafts, performed on a special test stand developed on the basis of a lathe. The effectiveness of the method of controlling the elastic-deformable state was estimated using the tensile force and eccentricity as the regulating effects. It was demonstrated that, by controlling these two parameters, tensile force and eccentricity, one can improve the accuracy of machining and thus achieve a theoretically assumed level of accuracy.

  11. A Functional Correspondence between Monadic Evaluators and Abstract Machines for Languages with Computational Effects

    DEFF Research Database (Denmark)

    Ager, Mads Sig; Danvy, Olivier; Midtgaard, Jan

    2005-01-01

    We extend our correspondence between evaluators and abstract machines from the pure setting of the lambda-calculus to the impure setting of the computational lambda-calculus. We show how to derive new abstract machines from monadic evaluators for the computational lambda-calculus. Starting from (1......) a generic evaluator parameterized by a monad and (2) a monad specifying a computational effect, we inline the components of the monad in the generic evaluator to obtain an evaluator written in a style that is specific to this computational effect. We then derive the corresponding abstract machine by closure......-converting, CPS-transforming, and defunctionalizing this specific evaluator. We illustrate the construction first with the identity monad, obtaining the CEK machine, and then with a lifting monad, a state monad, and with a lifted state monad, obtaining variants of the CEK machine with error handling, state...

  12. Detection of inter-turn short-circuit at start-up of induction machine based on torque analysis

    Directory of Open Access Journals (Sweden)

    Pietrowski Wojciech

    2017-12-01

    Full Text Available Recently, growing interest in new diagnostic methods in the field of induction machines has been observed. The research presented in the paper concerns the diagnostics of an induction machine based on torque pulsation under an inter-turn short-circuit during machine start-up. Three numerical techniques were used: finite element analysis, signal analysis and artificial neural networks (ANN). The numerical model of the faulty machine consists of field, circuit and motion equations. A voltage-excited supply allowed the torque waveform during start-up to be determined. The inter-turn short-circuit was treated as a galvanic connection between two points of the stator winding. The waveforms were calculated for different numbers of shorted turns, from 0 to 55. Because the waveforms are non-stationary, wavelet packet decomposition was used to analyse the torque, and the results of the analysis were used as the input vector for the ANN. The response of the neural network was the number of shorted turns in the stator winding. Special attention was paid to comparing the responses of a general regression neural network (GRNN) and a multi-layer perceptron neural network (MLP). Based on the results of the research, the efficiency of the developed algorithm can be inferred.
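
    The processing chain in this record (wavelet packet decomposition of a non-stationary torque waveform followed by a neural-network regressor) could be prototyped roughly as below. The sketch assumes the PyWavelets and scikit-learn packages, an arbitrary db4 wavelet, and synthetic torque waveforms in place of the finite element results of the study.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def wp_energy_features(signal, wavelet="db4", level=3):
    """Energy of each terminal node of a wavelet packet tree, used as a feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.sum(np.square(node.data)) for node in nodes])

rng = np.random.default_rng(0)
X, y = [], []
for shorted_turns in range(0, 56, 5):          # hypothetical label: number of shorted turns
    t = np.linspace(0, 1, 2048)
    # Crude stand-in for a start-up torque waveform whose ripple grows with fault severity
    torque = np.exp(-3 * t) * np.sin(2 * np.pi * 50 * t) \
             + 0.02 * shorted_turns * np.sin(2 * np.pi * 100 * t) \
             + rng.normal(0, 0.05, t.size)
    X.append(wp_energy_features(torque))
    y.append(shorted_turns)

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0).fit(X, y)
print(model.predict([X[3]]))                   # should be close to the corresponding label
```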

  13. Running and machine studies in 1990

    International Nuclear Information System (INIS)

    1991-03-01

    This annual report describes the GANIL performance and machine studies. During the year 1990, the machine was operated for 36 weeks divided into periods of 5, 6 or 7 weeks; consequently the number of beam set-ups was reduced. Of the 5682 hours of scheduled beam, 3239 hours were delivered on target. Very heavy ions (Pb, U) are now accelerated owing to the OAE modification. Many experiments have been completed with the new medium-energy beam facility. The machine studies were devoted to the development of the following items: production of {sup 157}Gd{sup 19+} ions, acceleration of {sup 238}U{sup 59+} at 24 MeV/u, SSC1 orbit precession, charge state distribution and energy spread after stripping. [fr]

  14. Effect of Copper Oxide, Titanium Dioxide, and Lithium Fluoride on the Thermal Behavior and Decomposition Kinetics of Ammonium Nitrate

    Science.gov (United States)

    Vargeese, Anuj A.; Mija, S. J.; Muralidharan, Krishnamurthi

    2014-07-01

    Ammonium nitrate (AN) is crystallized along with copper oxide, titanium dioxide, and lithium fluoride. Thermal kinetic constants for the decomposition reaction of the samples were calculated by model-free (Friedman's differential and Vyazovkin's nonlinear integral) and model-fitting (Coats-Redfern) methods. To determine the decomposition mechanisms, 12 solid-state mechanisms were tested using the Coats-Redfern method. The results of the Coats-Redfern method show that the decomposition mechanism for all samples is the contracting cylinder mechanism. The phase behavior of the obtained samples was evaluated by differential scanning calorimetry (DSC), and structural properties were determined by X-ray powder diffraction (XRPD). The results indicate that copper oxide modifies the phase transition behavior and can catalyze AN decomposition, whereas LiF inhibits AN decomposition, and TiO2 shows no influence on the rate of decomposition. Possible explanations for these results are discussed. Supplementary materials are available for this article. Go to the publisher's online edition of the Journal of Energetic Materials to view the free supplemental file.
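
    The Coats-Redfern procedure mentioned above fits ln[g(α)/T²] against 1/T for a candidate solid-state mechanism; the slope gives the activation energy and the intercept the pre-exponential factor. Below is a minimal sketch for the contracting-cylinder model g(α) = 1 − (1 − α)^(1/2), using entirely synthetic conversion data and an assumed heating rate in place of the thermal analysis measurements of the paper.

```python
import numpy as np

R = 8.314          # gas constant, J mol^-1 K^-1
beta = 10.0 / 60   # heating rate, K s^-1 (10 K/min, an assumed value)

# Synthetic conversion-temperature data standing in for a thermogravimetric run (illustrative only)
T = np.linspace(500.0, 560.0, 25)                       # temperature, K
alpha = np.clip((T - 500.0) / 62.0, 1e-3, 0.97)         # fractional conversion

g = 1.0 - np.sqrt(1.0 - alpha)                          # contracting cylinder (R2) model
y = np.log(g / T**2)
slope, intercept = np.polyfit(1.0 / T, y, 1)            # ln[g/T^2] = ln(AR/(beta*E)) - E/(R*T)

E = -slope * R                                          # activation energy, J/mol
A = beta * E / R * np.exp(intercept)                    # pre-exponential factor, s^-1
print(f"E = {E/1000:.1f} kJ/mol, A = {A:.3e} 1/s")
```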

  15. Decomposition of thermal-equilibrium states

    International Nuclear Information System (INIS)

    Gu Lei

    2010-01-01

    It is shown that a thermal-equilibrium state can be decomposed into a tensor product of the operators in subspaces of single-particle energy. On the basis of this form, a straightforward derivation of the Fermi-Dirac and the Bose-Einstein distribution is performed. The derivation can be generalized for systems with weak interaction to obtain an approximate distribution in momentum.
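
    For a non-interacting system the idea can be written compactly. The following LaTeX sketch gives the standard textbook relations (stated here for illustration, not copied from the paper): the grand-canonical state factorizes over single-particle modes, and the resulting mode occupations are the Fermi-Dirac and Bose-Einstein distributions.

```latex
\[
  \hat{\rho} \;=\; \frac{e^{-\beta(\hat{H}-\mu\hat{N})}}{Z}
  \;=\; \bigotimes_{k} \hat{\rho}_{k},
  \qquad
  \hat{\rho}_{k} \;=\; \frac{e^{-\beta(\varepsilon_{k}-\mu)\hat{n}_{k}}}{Z_{k}},
\]
\[
  \langle \hat{n}_{k} \rangle \;=\; \frac{1}{e^{\beta(\varepsilon_{k}-\mu)} \pm 1}
  \qquad (+\ \text{Fermi--Dirac},\; -\ \text{Bose--Einstein}).
\]
```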

  16. Reviewing the current state of machine learning for artificial intelligence with regards to the use of contextual information

    OpenAIRE

    Kinch, Martin W.; Melis, Wim J.C.; Keates, Simeon

    2017-01-01

    This paper will consider the current state of Machine Learning for Artificial Intelligence, more specifically for applications, such as: Speech Recognition, Game Playing and Image Processing. The artificial world tends to make limited use of context in comparison to what currently happens in human life, while it would benefit from improvements in this area. Additionally, the process of transferring knowledge between application domains is another important area where artificial system can imp...

  17. A Subspace Approach to the Structural Decomposition and Identification of Ankle Joint Dynamic Stiffness.

    Science.gov (United States)

    Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E

    2017-06-01

    The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque to intrinsic, reflexive, and voluntary torques and identification of joint dynamic stiffness. First, it formulates a novel state-space representation for the joint dynamic stiffness modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space representation matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) the decomposition of the intrinsic and reflex pathways and 2) the identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over some other methods. Thus, SDSS was more robust under high noise conditions, converging where others failed; it was more accurate, giving estimates with lower bias and random errors. The method also worked well in practice and yielded high-quality estimates of intrinsic and reflex stiffnesses when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions. It has important clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that change the muscle tone.

  18. A Machine Learning Approach to Automated Gait Analysis for the Noldus Catwalk System.

    Science.gov (United States)

    Frohlich, Holger; Claes, Kasper; De Wolf, Catherine; Van Damme, Xavier; Michel, Anne

    2018-05-01

    Gait analysis of animal disease models can provide valuable insights into in vivo compound effects and thus help in preclinical drug development. The purpose of this paper is to establish a computational gait analysis approach for the Noldus Catwalk system, in which footprints are automatically captured and stored. We present what is, to our knowledge, the first machine learning based approach for the Catwalk system, which comprises a step decomposition, definition and extraction of meaningful features, multivariate step sequence alignment, feature selection, and training of different classifiers (gradient boosting machine, random forest, and elastic net). Using animal-wise leave-one-out cross validation we demonstrate that with our method we can reliably separate movement patterns of a putative Parkinson's disease animal model and several control groups. Furthermore, we show that we can predict the time point after, and the type of, different brain lesions, and can even forecast the brain region where the intervention was applied. We provide an in-depth analysis of the features involved in our classifiers via statistical techniques for model interpretation. A machine learning method for automated analysis of data from the Noldus Catwalk system was established. Our work shows the ability of machine learning to discriminate pharmacologically relevant animal groups based on their walking behavior in a multivariate manner. Further interesting aspects of the approach include the ability to learn from past experiments, to improve as more data arrive, and to make predictions for single animals in future studies.
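
    The animal-wise leave-one-out validation described above corresponds to grouped cross-validation. A rough scikit-learn sketch with invented feature, label and group arrays (the real Catwalk features and classifiers are not reproduced here) is shown below.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
n_animals, runs_per_animal, n_features = 12, 8, 20

X = rng.normal(size=(n_animals * runs_per_animal, n_features))   # placeholder gait features
groups = np.repeat(np.arange(n_animals), runs_per_animal)         # one group id per animal
y = (groups % 2).astype(int)                                       # toy label: lesioned vs control
X[y == 1] += 0.5                                                   # inject a separable signal

# Animal-wise leave-one-out: every fold holds out all runs of one animal
scores = cross_val_score(
    GradientBoostingClassifier(random_state=0), X, y,
    groups=groups, cv=LeaveOneGroupOut())
print("mean accuracy:", scores.mean())
```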

  19. Magic Coset Decompositions

    CERN Document Server

    Cacciatori, Sergio L; Marrani, Alessio

    2013-01-01

    By exploiting a "mixed" non-symmetric Freudenthal-Rozenfeld-Tits magic square, two types of coset decompositions are analyzed for the non-compact special Kähler symmetric rank-3 coset E7(-25)/[(E6(-78) x U(1))/Z_3], occurring in supergravity as the vector multiplets' scalar manifold in N=2, D=4 exceptional Maxwell-Einstein theory. The first decomposition exhibits maximal manifest covariance, whereas the second (triality-symmetric) one is of Iwasawa type, with maximal SO(8) covariance. Generalizations to conformal non-compact, real forms of non-degenerate, simple groups "of type E7" are presented for both classes of coset parametrizations, and relations to rank-3 simple Euclidean Jordan algebras and normed trialities over division algebras are also discussed.

  20. Patched bimetallic surfaces are active catalysts for ammonia decomposition.

    Science.gov (United States)

    Guo, Wei; Vlachos, Dionisios G

    2015-10-07

    Ammonia decomposition is often used as an archetypical reaction for predicting new catalytic materials and for understanding why some reactions are sensitive to a material's structure. Core-shell or surface-segregated bimetallic nanoparticles exhibit outstanding activity for many heterogeneously catalysed reactions, but the reasons remain elusive owing to the difficulties in experimentally characterizing active sites. Here, by performing multiscale simulations of ammonia decomposition on various nickel loadings on platinum (111), we show that the very high activity of core-shell structures requires patches of the guest metal to create and sustain dual active sites: nickel terraces catalyse N-H bond breaking and nickel edge sites drive atomic nitrogen association. The structure sensitivity on these active catalysts depends profoundly on reaction conditions due to kinetically competing relevant elementary reaction steps. We expose a remarkable difference in active sites between transient and steady-state studies and provide insights into optimal material design.

  1. Aligning observed and modelled behaviour based on workflow decomposition

    Science.gov (United States)

    Wang, Lu; Du, YuYue; Liu, Wei

    2017-09-01

    When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement for appropriate process models, is increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the amount of event log data. Therefore, a new process mining technique based on a workflow decomposition method is proposed in this paper. Petri nets (PNs) are used to describe business processes, and then conformance checking of event logs and process models is investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.

  2. Dynamic mode decomposition for plasma diagnostics and validation

    Science.gov (United States)

    Taylor, Roy; Kutz, J. Nathan; Morgan, Kyle; Nelson, Brian A.

    2018-05-01

    We demonstrate the application of the Dynamic Mode Decomposition (DMD) for the diagnostic analysis of the nonlinear dynamics of a magnetized plasma in resistive magnetohydrodynamics. The DMD method is an ideal spatio-temporal matrix decomposition that correlates spatial features of computational or experimental data while simultaneously associating the spatial activity with periodic temporal behavior. DMD can produce low-rank, reduced order surrogate models that can be used to reconstruct the state of the system with high fidelity. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem, even if the data are sparsely sampled. We demonstrate the use of the method on both numerical and experimental data, showing that it is a successful mathematical architecture for characterizing the helicity injected torus with steady inductive (HIT-SI) magnetohydrodynamics. Importantly, the DMD produces interpretable, dominant mode structures, including a stationary mode consistent with our understanding of a HIT-SI spheromak accompanied by a pair of injector-driven modes. In combination, the 3-mode DMD model produces excellent dynamic reconstructions across the domain of analyzed data.
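
    To make the DMD recipe concrete, here is a compact exact-DMD sketch in NumPy (the generic textbook algorithm, not the HIT-SI analysis code): snapshot matrices, truncated SVD, the reduced operator, and the mode amplitudes used for reconstruction. The toy data and the chosen rank are assumptions for illustration.

```python
import numpy as np

def dmd(X, rank):
    """Exact dynamic mode decomposition of a snapshot matrix X (space x time)."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh.conj().T[:, :rank]
    A_tilde = U.conj().T @ X2 @ V / s                 # low-rank approximation of the propagator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ V / s @ W                            # exact DMD modes
    amps = np.linalg.lstsq(modes, X[:, 0].astype(complex), rcond=None)[0]
    return eigvals, modes, amps

# Toy data: two travelling waves sampled on a 1-D grid
x = np.linspace(0, 10, 200)[:, None]
t = np.linspace(0, 4 * np.pi, 100)[None, :]
X = np.sin(x - t) + 0.5 * np.sin(2 * x + 3 * t)

eigvals, modes, amps = dmd(X, rank=4)
X0_rec = modes @ amps                                 # reconstruction of the first snapshot
print("reconstruction error:", np.linalg.norm(X0_rec - X[:, 0]) / np.linalg.norm(X[:, 0]))
```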

  3. Deuterium isotope effects in condensed-phase thermochemical decomposition reactions of octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine

    International Nuclear Information System (INIS)

    Shackelford, S.A.; Coolidge, M.B.; Goshgarian, B.B.; Loving, B.A.; Rogers, R.N.; Janney, J.L.; Ebinger, M.H.

    1985-01-01

    The deuterium isotope effect was applied to condensed-phase thermochemical reactions of HMX and HMX-d{sub 8} by using isothermal techniques. Dissimilar deuterium isotope effects revealed a mechanistic dependence of HMX upon different physical states which may singularly predominate in a specific type of thermal event. Solid-state HMX thermochemical decomposition produces a primary deuterium isotope effect (DIE), indicating that covalent C-H bond rupture is the rate-controlling step in this phase. An apparent inverse DIE is displayed by the mixed melt phase and can be attributed to C-H bond contraction during a weakening of molecular lattice forces as the solid HMX liquefies. The liquid-state decomposition rate appears to be controlled by ring C-N bond cleavage as evidenced by a secondary DIE and higher molecular weight products. These results reveal a dependence of the HMX decomposition process on physical state and lead to a broader mechanistic interpretation which explains the seemingly contradictory data found in current literature reviews. 33 references, 9 figures, 5 tables

  4. TF.Learn: TensorFlow's High-level Module for Distributed Machine Learning

    OpenAIRE

    Tang, Yuan

    2016-01-01

    TF.Learn is a high-level Python module for distributed machine learning inside TensorFlow. It provides an easy-to-use Scikit-learn style interface to simplify the process of creating, configuring, training, evaluating, and experimenting with a machine learning model. TF.Learn integrates a wide range of state-of-the-art machine learning algorithms built on top of TensorFlow's low-level APIs for small to large-scale supervised and unsupervised problems. This module focuses on bringing machine learning t...

  5. Prediction of Machine Tool Condition Using Support Vector Machine

    International Nuclear Information System (INIS)

    Wang Peigong; Meng Qingfeng; Zhao Jian; Li Junjie; Wang Xiufeng

    2011-01-01

    Condition monitoring and prediction for CNC machine tools are investigated in this paper. Considering that only small numbers of samples are typically available for CNC machine tools, a condition prediction method based on support vector machines (SVMs) is proposed, and one-step and multi-step condition prediction models are constructed. The support vector machine prediction models are used to predict the trends of the working condition of a certain type of CNC worm wheel and gear grinding machine by applying vibration signal sequence data collected during machining. The relationship between different eigenvalues of the CNC vibration signal and machining quality is also discussed. The test results show that the trend of the vibration signal peak-to-peak value in the surface normal direction is most relevant to the trend of the surface roughness value. In predicting working-condition trends, the support vector machine has higher prediction accuracy in both short-term (one-step) and long-term (multi-step) prediction compared to an autoregressive (AR) model and an RBF neural network. Experimental results show that it is feasible to apply support vector machines to CNC machine tool condition prediction.
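
    A bare-bones version of the one-step condition-trend prediction described here can be put together with scikit-learn's support vector regression. The windowing scheme, the kernel parameters and the synthetic "peak-to-peak" series below are assumptions for illustration, not the authors' data or settings.

```python
import numpy as np
from sklearn.svm import SVR

def make_windows(series, lag):
    """Turn a 1-D condition series into (past window -> next value) training pairs."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

# Synthetic peak-to-peak vibration trend: slow degradation plus oscillation plus noise
t = np.arange(300)
trend = 0.01 * t + 0.2 * np.sin(t / 15) + np.random.normal(0, 0.05, t.size)

lag = 10
X, y = make_windows(trend, lag)
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

one_step = model.predict(trend[-lag:].reshape(1, -1))[0]      # one-step-ahead prediction
print("predicted next peak-to-peak value:", one_step)
```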

  6. Gelcasting compositions having improved drying characteristics and machinability

    Science.gov (United States)

    Janney, Mark A.; Walls, Claudia A. H.

    2001-01-01

    A gelcasting composition has improved drying behavior, machinability and shelf life in the dried and unfired state. The composition includes an inorganic powder, solvent, monomer system soluble in the solvent, an initiator system for polymerizing the monomer system, and a plasticizer soluble in the solvent. Dispersants and other processing aids to control slurry properties can be added. The plasticizer imparts an ability to dry thick-section parts, to store samples in the dried state without cracking under conditions of varying relative humidity, and to machine dry gelcast parts without cracking or chipping. A method of making gelcast parts is also disclosed.

  7. Neutron small-angle scattering study of phase decomposition in Au-Pt

    International Nuclear Information System (INIS)

    Singhal, S.P.; Herman, H.

    1978-01-01

    Isothermal decomposition of a Au-60 at.% Pt alloy, quenched from the solid as well as the liquid state, has been studied with the D11 neutron small-angle scattering spectrometer at ILL, Grenoble. An incident neutron wavelength of 6.7 Å was used and measurements were carried out in the range of scattering vector (β = 4π sin θ/λ) from 2.8×10⁻² to 21×10⁻² Å⁻¹. The preliminary results indicate that decomposition of this alloy at 550 °C takes place by a spinodal mode, although deviations were observed from linear spinodal theory, even at very early times. Slower aging kinetics were observed in liquid-quenched alloy as compared with solid-quenched. Liquid quenching is more efficient in suppressing quench clustering than is solid quenching. However, liquid quenching yields an extremely fine-grained material, which thereby enhances discontinuous precipitation at grain boundaries, competing with decomposition in the bulk. A Rundman-Hilliard analysis was used for the early stages of the spinodal reaction to obtain an interdiffusion coefficient of the order of 10⁻¹⁶ cm² s⁻¹ at 550 °C for the solid-quenched alloy. (Auth.)

  8. The influence of inorganic matrices on the decomposition of Eucalyptus litter

    International Nuclear Information System (INIS)

    Skene, T.M.; Oades, J.M.; Clarke, P.J.; Skjemstad, J.O.; Oades, J.M.; Skjemstad, J.O.

    1997-01-01

    The decomposition of Eucalyptus litter (EL) in the presence and absence of inorganic matrices [sand (S), sand + kaolin (S+K), loamy sand (LS)] with and without added N (urea) was followed over 48 weeks using chemical and spectroscopic means. At the end of the incubation, the residual organic matter in different density and particle size fractions was examined. Urea addition inhibited the mineralisation of C from the litter in all treatments except EL+S+N, whereas the inorganic matrices had little influence on mineralisation. Solid-state {sup 13}C CP/MAS NMR spectra of the whole samples suggested there were no differences between the treatments, despite significant differences in the amount of C mineralised. The NMR spectra of the whole samples suggest that a reaction between aromatic-C and urea occurred during the first week of the incubation which may have rendered the N unavailable to microorganisms. The results were quite different from a similar study on the decomposition of straw. These differences suggest that, for high quality substrates, physical protection by inorganic matrices is the limiting factor to decomposition, whereas for low quality substrates, chemical protection is the limiting factor. 13 refs., 2 tabs., 6 figs

  9. Environmentally Friendly Machining

    CERN Document Server

    Dixit, U S; Davim, J Paulo

    2012-01-01

    Environment-Friendly Machining provides an in-depth overview of environmentally-friendly machining processes, covering numerous different types of machining in order to identify which practice is the most environmentally sustainable. The book discusses three systems at length: machining with minimal cutting fluid, air-cooled machining and dry machining. Also covered is a way to conserve energy during machining processes, along with useful data and detailed descriptions for developing and utilizing the most efficient modern machining tools. Researchers and engineers looking for sustainable machining solutions will find Environment-Friendly Machining to be a useful volume.

  10. A comparison between decomposition rates of buried and surface remains in a temperate region of South Africa.

    Science.gov (United States)

    Marais-Werner, Anátulie; Myburgh, J; Becker, P J; Steyn, M

    2018-01-01

    Several studies have been conducted on decomposition patterns and rates of surface remains; however, much less is known about this process for buried remains. Understanding the process of decomposition in buried remains is extremely important and aids in criminal investigations, especially when attempting to estimate the post mortem interval (PMI). The aim of this study was to compare the rates of decomposition between buried and surface remains. For this purpose, 25 pigs (Sus scrofa; 45-80 kg) were buried and excavated at different post mortem intervals (7, 14, 33, 92, and 183 days). The observed total body scores were then compared to those of surface remains decomposing at the same location. Stages of decomposition were scored according to separate categories for different anatomical regions based on standardised methods. Variation in the degree of decomposition was considerable, especially among the buried 7-day-interval pigs, which displayed different degrees of discolouration in the lower abdomen and trunk. At 14 and 33 days, buried pigs displayed features commonly associated with the early stages of decomposition, but with less variation. A state of advanced decomposition was then reached, after which little change was observed over the next ±90-183 days after interment. Although the patterns of decomposition for buried and surface remains were very similar, the rates differed considerably. Based on the observations made in this study, guidelines for the estimation of PMI are proposed. These pertain to buried remains found at a depth of approximately 0.75 m in the Central Highveld of South Africa.

  11. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. Bearings are the most frequently and easily failed components in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before transmission to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performances under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component obtains a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
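
    The EEMD step at the heart of this compression scheme can be reproduced with the PyEMD package. The snippet below is only a sketch: the ensemble size, noise level and the simple energy-based IMF selection are assumptions, and the noise-level optimisation described in the paper is not implemented.

```python
import numpy as np
from PyEMD import EEMD   # assumed third-party package

fs = 12000                                    # sampling rate, Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
# Toy bearing-like signal: repetitive impulses at ~110 Hz buried in broadband noise
impulses = np.sin(2 * np.pi * 3000 * t) * (np.mod(t, 1 / 110) < 0.0008).astype(float)
signal = impulses + 0.4 * np.random.randn(t.size)

eemd = EEMD(trials=50, noise_width=0.2)       # ensemble size and added white-noise level
imfs = eemd.eemd(signal)                      # rows are IMFs, highest frequency first

# Keep only the highest-energy IMF as the fault-related component to be compressed/transmitted
energies = np.sum(imfs ** 2, axis=1)
print("number of IMFs:", imfs.shape[0], "| selected IMF index:", int(np.argmax(energies)))
```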

  12. On the hadron mass decomposition

    Science.gov (United States)

    Lorcé, Cédric

    2018-02-01

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the gluon attractive force.

  13. On the hadron mass decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Lorce, Cedric [Universite Paris-Saclay, Centre de Physique Theorique, Ecole Polytechnique, CNRS, Palaiseau (France)

    2018-02-15

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the gluon attractive force. (orig.)

  14. Machine Translation Tools - Tools of The Translator's Trade

    DEFF Research Database (Denmark)

    Kastberg, Peter

    2012-01-01

    In this article three of the more common types of translation tools are presented, discussed and critically evaluated. The types of translation tools dealt with in this article are: Fully Automated Machine Translation (or FAMT), Human Aided Machine Translation (or HAMT) and Machine Aided Human...... Translation (or MAHT). The strengths and weaknesses of the different types of tools are discussed and evaluated by means of a number of examples. The article aims at two things: at presenting a sort of state of the art of what is commonly referred to as “machine translation” as well as at providing the reader...... with a sound basis for considering what translation tool (if any) is the most appropriate in order to meet his or her specific translation needs....

  15. Novel cloning machine with supplementary information

    International Nuclear Information System (INIS)

    Qiu Daowen

    2006-01-01

    Probabilistic cloning was first proposed by Duan and Guo. Then Pati established a novel cloning machine (NCM) for copying superposition of multiple clones simultaneously. In this paper, we deal with the novel cloning machine with supplementary information (NCMSI). For the case of cloning two states, we demonstrate that the optimal efficiency of the NCMSI in which the original party and the supplementary party can perform quantum communication equals that achieved by a two-step cloning protocol wherein classical communication is only allowed between the original and the supplementary parties. From this equivalence, it follows that NCMSI may increase the success probabilities for copying. Also, an upper bound on the unambiguous discrimination of two nonorthogonal pure product states is derived. Our investigation generalizes and completes the results in the literature

  16. Time-varying singular value decomposition for periodic transient identification in bearing fault diagnosis

    Science.gov (United States)

    Zhang, Shangbin; Lu, Siliang; He, Qingbo; Kong, Fanrang

    2016-09-01

    For rotating machines, bearing defects are generally manifested as periodic transient impulses in the acquired signals. The extraction of transient features from signals has been a key issue for fault diagnosis. However, in practice, background noise reduces the identification performance for periodic faults. This paper proposes a time-varying singular value decomposition (TSVD) method to enhance the identification of periodic faults. The proposed method is inspired by the sliding window method. By applying singular value decomposition (SVD) to the signal under a sliding window, we can obtain a time-varying singular value matrix (TSVM). Each column in the TSVM is occupied by the singular values of the corresponding sliding window, and each row represents the intrinsic structure of the raw signal, namely the time-singular-value-sequence (TSVS). Theoretical and experimental analyses show that the frequency of the TSVS is exactly twice that of the corresponding intrinsic structure. Moreover, the signal-to-noise ratio (SNR) of the TSVS is improved significantly in comparison with the raw signal. The proposed method takes advantage of the TSVS in noise suppression and feature extraction to enhance the fault frequency for diagnosis. The effectiveness of the TSVD is verified by means of simulation studies and applications to the diagnosis of bearing faults. Results indicate that the proposed method is superior to traditional methods for bearing fault diagnosis.
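
    A rough numerical illustration of the sliding-window construction behind the TSVM/TSVS idea follows. Forming a small Hankel matrix inside each window before taking the SVD is my assumption about how a 1-D signal is turned into a matrix, not necessarily the authors' exact choice, and the signal and window sizes are synthetic.

```python
import numpy as np

def time_varying_singular_values(signal, window=64, step=8, hankel_rows=8):
    """Column j holds the singular values of a Hankel matrix built from window j."""
    columns = []
    for start in range(0, len(signal) - window + 1, step):
        seg = signal[start:start + window]
        # Hankel matrix: shifted copies of the windowed segment
        H = np.array([seg[i:i + window - hankel_rows + 1] for i in range(hankel_rows)])
        columns.append(np.linalg.svd(H, compute_uv=False))
    return np.array(columns).T              # time-varying singular value matrix (TSVM)

fs = 10000
t = np.arange(0, 1, 1 / fs)
# Periodic transients at 20 Hz plus broadband noise
signal = (np.mod(t, 1 / 20) < 0.001) * np.sin(2 * np.pi * 2000 * t) + 0.5 * np.random.randn(t.size)

tsvm = time_varying_singular_values(signal)
tsvs = tsvm[0]                              # one row of the TSVM is a time-singular-value-sequence
print("TSVM shape:", tsvm.shape, "| dominant TSVS length:", tsvs.size)
```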

  17. Wavelet Packet Entropy in Speaker-Independent Emotional State Detection from Speech Signal

    Directory of Open Access Journals (Sweden)

    Mina Kadkhodaei Elyaderani

    2015-01-01

    Full Text Available In this paper, wavelet packet entropy is proposed for speaker-independent emotion detection from speech. After pre-processing, a wavelet packet decomposition using wavelet db3 at level 4 is computed and the Shannon entropy of its nodes is used as a feature. In addition, prosodic features such as the first four formants, jitter (pitch deviation amplitude) and shimmer (energy variation amplitude), besides MFCC features, are applied to complete the feature vector. Then, a Support Vector Machine (SVM) is used to classify the vectors in a multi-class (all emotions) or two-class (each emotion versus the normal state) format. 46 different utterances of a single sentence from the Berlin Emotional Speech Dataset are selected. These are uttered by 10 speakers in sadness, happiness, fear, boredom, anger, and the normal emotional state. Experimental results show that the proposed features can improve emotional state detection accuracy in the multi-class situation. Furthermore, adding the wavelet entropy coefficients to the other features increases the accuracy of two-class detection for anger, fear, and happiness.
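
    A short sketch of the wavelet-packet-entropy feature described above (db3 decomposition at level 4, Shannon entropy per terminal node) is given below using PyWavelets. The normalisation of node energies before taking the entropy is an assumption, and the speech frame is synthetic.

```python
import numpy as np
import pywt

def wavelet_packet_entropy(frame, wavelet="db3", level=4):
    """Shannon entropy of the normalised coefficient energies in each level-4 packet node."""
    wp = pywt.WaveletPacket(data=frame, wavelet=wavelet, maxlevel=level)
    entropies = []
    for node in wp.get_level(level, order="freq"):
        e = np.square(node.data)
        p = e / (e.sum() + 1e-12)              # normalise energies within the node
        entropies.append(-np.sum(p * np.log2(p + 1e-12)))
    return np.array(entropies)                 # 2**level = 16 features per frame

fs = 16000
t = np.arange(0, 0.025, 1 / fs)                # one 25 ms speech-like frame
frame = np.sin(2 * np.pi * 220 * t) * np.hanning(t.size) + 0.05 * np.random.randn(t.size)

features = wavelet_packet_entropy(frame)
print("feature vector length:", features.size)
```

    In a full pipeline these 16 entropy values would be concatenated with the prosodic and MFCC features before SVM classification.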

  18. Aridity and decomposition processes in complex landscapes

    Science.gov (United States)

    Ossola, Alessandro; Nyman, Petter

    2015-04-01

    Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain the fine-scale variation in microclimate (and hence water availability) as a result of slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22) where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above-ground (0.2 mm mesh) and microbes below-ground (2 cm depth, 0.2 mm mesh). Four replicates for each set of bags were installed at each site and bags were collected at 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. Then the dryness index was related to decomposition rates to evaluate if small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally

  19. Early stage litter decomposition across biomes

    Science.gov (United States)

    Ika Djukic; Sebastian Kepfer-Rojas; Inger Kappel Schmidt; Klaus Steenberg Larsen; Claus Beier; Björn Berg; Kris Verheyen; Adriano Caliman; Alain Paquette; Alba Gutiérrez-Girón; Alberto Humber; Alejandro Valdecantos; Alessandro Petraglia; Heather Alexander; Algirdas Augustaitis; Amélie Saillard; Ana Carolina Ruiz Fernández; Ana I. Sousa; Ana I. Lillebø; Anderson da Rocha Gripp; André-Jean Francez; Andrea Fischer; Andreas Bohner; Andrey Malyshev; Andrijana Andrić; Andy Smith; Angela Stanisci; Anikó Seres; Anja Schmidt; Anna Avila; Anne Probst; Annie Ouin; Anzar A. Khuroo; Arne Verstraeten; Arely N. Palabral-Aguilera; Artur Stefanski; Aurora Gaxiola; Bart Muys; Bernard Bosman; Bernd Ahrends; Bill Parker; Birgit Sattler; Bo Yang; Bohdan Juráni; Brigitta Erschbamer; Carmen Eugenia Rodriguez Ortiz; Casper T. Christiansen; E. Carol Adair; Céline Meredieu; Cendrine Mony; Charles A. Nock; Chi-Ling Chen; Chiao-Ping Wang; Christel Baum; Christian Rixen; Christine Delire; Christophe Piscart; Christopher Andrews; Corinna Rebmann; Cristina Branquinho; Dana Polyanskaya; David Fuentes Delgado; Dirk Wundram; Diyaa Radeideh; Eduardo Ordóñez-Regil; Edward Crawford; Elena Preda; Elena Tropina; Elli Groner; Eric Lucot; Erzsébet Hornung; Esperança Gacia; Esther Lévesque; Evanilde Benedito; Evgeny A. Davydov; Evy Ampoorter; Fabio Padilha Bolzan; Felipe Varela; Ferdinand Kristöfel; Fernando T. Maestre; Florence Maunoury-Danger; Florian Hofhansl; Florian Kitz; Flurin Sutter; Francisco Cuesta; Francisco de Almeida Lobo; Franco Leandro de Souza; Frank Berninger; Franz Zehetner; Georg Wohlfahrt; George Vourlitis; Geovana Carreño-Rocabado; Gina Arena; Gisele Daiane Pinha; Grizelle González; Guylaine Canut; Hanna Lee; Hans Verbeeck; Harald Auge; Harald Pauli; Hassan Bismarck Nacro; Héctor A. Bahamonde; Heike Feldhaar; Heinke Jäger; Helena C. Serrano; Hélène Verheyden; Helge Bruelheide; Henning Meesenburg; Hermann Jungkunst; Hervé Jactel; Hideaki Shibata; Hiroko Kurokawa; Hugo López Rosas; Hugo L. Rojas Villalobos; Ian Yesilonis; Inara Melece; Inge Van Halder; Inmaculada García Quirós; Isaac Makelele; Issaka Senou; István Fekete; Ivan Mihal; Ivika Ostonen; Jana Borovská; Javier Roales; Jawad Shoqeir; Jean-Christophe Lata; Jean-Paul Theurillat; Jean-Luc Probst; Jess Zimmerman; Jeyanny Vijayanathan; Jianwu Tang; Jill Thompson; Jiří Doležal; Joan-Albert Sanchez-Cabeza; Joël Merlet; Joh Henschel; Johan Neirynck; Johannes Knops; John Loehr; Jonathan von Oppen; Jónína Sigríður Þorláksdóttir; Jörg Löffler; José-Gilberto Cardoso-Mohedano; José-Luis Benito-Alonso; Jose Marcelo Torezan; Joseph C. Morina; Juan J. Jiménez; Juan Dario Quinde; Juha Alatalo; Julia Seeber; Jutta Stadler; Kaie Kriiska; Kalifa Coulibaly; Karibu Fukuzawa; Katalin Szlavecz; Katarína Gerhátová; Kate Lajtha; Kathrin Käppeler; Katie A. Jennings; Katja Tielbörger; Kazuhiko Hoshizaki; Ken Green; Lambiénou Yé; Laryssa Helena Ribeiro Pazianoto; Laura Dienstbach; Laura Williams; Laura Yahdjian; Laurel M. Brigham; Liesbeth van den Brink; Lindsey Rustad; al. et

    2018-01-01

    Through litter decomposition enormous amounts of carbon are emitted to the atmosphere. Numerous large-scale decomposition experiments have been conducted focusing on this fundamental soil process in order to understand the controls on the terrestrial carbon transfer to the atmosphere. However, previous studies were mostly based on site-specific litter and methodologies...

  20. Optimization and kinetics decomposition of monazite using NaOH

    International Nuclear Information System (INIS)

    MV Purwani; Suyanti; Deddy Husnurrofiq

    2015-01-01

    Decomposition of monazite with NaOH has been studied. Decomposition was performed at high temperature in a furnace. The parameters studied were the NaOH/monazite ratio, temperature and decomposition time. From the decomposition experiments on 100 grams of monazite with NaOH, it can be concluded that the greater the NaOH/monazite ratio, the greater the conversion. Over the temperature range investigated (400-700°C), the decomposition rate constant increased with increasing temperature. The optimum NaOH/monazite ratio was 1.5 and the optimum time was 3 hours. The relation between the NaOH/monazite ratio (x) and the conversion (y) follows the polynomial equation y = 0.1579x² − 0.2855x + 0.8301. The decomposition reaction of monazite with NaOH is a second-order reaction; the relationship between temperature (T) and the reaction rate constant (k) is ln k = −1006.8/T + 6.106, i.e. k = 448.541·e^(−1006.8/T), with frequency factor A = 448.541 and activation energy E = 8.371 kJ/mol. (author)
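
    The Arrhenius parameters quoted above can be recovered from rate constants by a straight-line fit of ln k against 1/T. The sketch below uses invented (T, k) pairs that are merely consistent with the reported ln k = −1006.8/T + 6.106 relation, so the fitted values should come out close to the reported A and E.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical rate constants generated from the reported relation, with a little scatter
T = np.array([673.0, 773.0, 873.0, 973.0])                  # K (roughly 400-700 degC)
k = 448.541 * np.exp(-1006.8 / T) * (1 + np.random.normal(0, 0.01, T.size))

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)        # ln k = ln A - E/(R*T)
E = -slope * R           # activation energy, J/mol
A = np.exp(intercept)    # frequency factor

print(f"E = {E/1000:.2f} kJ/mol (reported 8.371), A = {A:.1f} (reported 448.541)")
```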

  1. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  2. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-01-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  3. Chatter identification in milling of Inconel 625 based on recurrence plot technique and Hilbert vibration decomposition

    Directory of Open Access Journals (Sweden)

    Lajmert Paweł

    2018-01-01

    Full Text Available In the paper, cutting stability in the milling of the nickel-based alloy Inconel 625 is analysed. This problem is often considered theoretically, but the theoretical findings do not always agree with experimental results. For this reason, the paper presents different methods for instability identification during a real machining process. A stability lobe diagram is created based on data obtained in an impact test of an end mill. Next, cutting tests were conducted in which the axial depth of cut was gradually increased in order to find the stability limit. Finally, based on the cutting force measurements, the stability estimation problem is investigated using the recurrence plot technique and the Hilbert vibration decomposition method.
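
    A minimal recurrence-plot computation of the kind used for chatter detection is sketched below with NumPy. The embedding parameters, the threshold and the synthetic "cutting force" signals are arbitrary assumptions; the real analysis would be run on measured force data.

```python
import numpy as np

def recurrence_plot(series, dim=3, delay=2, eps=0.2):
    """Binary recurrence matrix of a delay-embedded scalar signal."""
    n = len(series) - (dim - 1) * delay
    emb = np.array([series[i:i + (dim - 1) * delay + 1:delay] for i in range(n)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (dists < eps).astype(int)

t = np.linspace(0, 1, 600)
stable = np.sin(2 * np.pi * 35 * t)                          # forced vibration only
chatter = stable + 0.8 * np.sin(2 * np.pi * 57.3 * t + 1.0)  # added incommensurate component

# The structure of the recurrence matrix (e.g. its recurrence rate and diagonal lines)
# changes when the cut loses stability; here we simply report the recurrence rate.
print("recurrence rate, stable :", recurrence_plot(stable).mean())
print("recurrence rate, chatter:", recurrence_plot(chatter).mean())
```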

  4. Decompositional equivalence: A fundamental symmetry underlying quantum theory

    OpenAIRE

    Fields, Chris

    2014-01-01

    Decompositional equivalence is the principle that there is no preferred decomposition of the universe into subsystems. It is shown here, by using simple thought experiments, that quantum theory follows from decompositional equivalence together with Landauer's principle. This demonstration raises within physics a question previously left to psychology: how do human - or any - observers agree about what constitutes a "system of interest"?

  5. Performance analysis of a composite dual-winding reluctance machine

    International Nuclear Information System (INIS)

    Anih, Linus U.; Obe, Emeka S.

    2009-01-01

    The electromagnetic energy conversion process of a composite dual-winding asynchronous reluctance machine is presented. The mechanism of torque production is explained using the magnetic field distributions. The dynamic model, developed in the dq rotor reference frame from first principles, depicts the machine operation and its response to sudden load changes. The device is self-starting in the absence of rotor conductors, and its starting current is lower than that of a conventional induction machine. Although the machine possesses salient-pole rotors, it is clearly shown that its performance is that of an induction motor operating at half the synchronous speed. Hence the device produces synchronous torque while operating asynchronously. Simple tests were conducted on a prototype demonstration machine and the results obtained are consistent with the theory and the steady-state calculations.

  6. Human-machine interaction in nuclear power plants

    International Nuclear Information System (INIS)

    Yoshikawa, Hidekazu

    2005-01-01

    Advanced nuclear power plants are generally large complex systems automated by computers. Whenever a rare plant emergency occurs, the plant operators must cope with it under severe mental stress without committing any fatal errors. Furthermore, the operators must train to improve and maintain their ability to cope with every conceivable situation, though it is almost impossible to be fully prepared for an infinite variety of situations. In view of the limited capability of operators in emergency situations, there has been a new approach to preventing the human error caused by improper human-machine interaction. The new approach has been triggered by the introduction of advanced information systems that help operators recognize and counteract plant emergencies. In this paper, the adverse effect of automation in human-machine systems is explained. The discussion then focuses on how to configure a joint human-machine system for ideal human-machine interaction. Finally, there is a new proposal on how to organize technologies that recognize the different states of such a joint human-machine system

  7. Mechanistic Aspects of the Thermal Decomposition of Dicyclopentadienyltitanium(IV)diaryl Compounds

    NARCIS (Netherlands)

    Boekel, C.P.; Teuben, J.H.; Liefde Meijer, H.J. de

    1975-01-01

    The thermal decomposition of a number of compounds Cp2TiR2 (R = aryl) was studied in the solid state and in various solvents. A first-order reaction was observed and activation energies of 20-30 kcal mol⁻¹ were found depending on the nature of R. The activation energy for Cp2Ti(C6H5)2 (20-22 kcal

  8. Magnet management in electric machines

    Science.gov (United States)

    Reddy, Patel Bhageerath; El-Refaie, Ayman Mohamed Fawzi; Huh, Kum Kang

    2017-03-21

    A magnet management method of controlling a ferrite-type permanent magnet electrical machine includes receiving and/or estimating the temperature of the permanent magnets; determining if that temperature is below a predetermined temperature; and if so, then: selectively heating the magnets in order to prevent demagnetization and/or derating the machine. A similar method provides for controlling magnetization level by analyzing flux or magnetization level. Controllers that employ various methods are disclosed. The present invention has been described in terms of specific embodiment(s), and it is recognized that equivalents, alternatives, and modifications, aside from those expressly stated, are possible and within the scope of the appended claims.

  9. Theoretical evidence of the observed kinetic order dependence on temperature during the N(2)O decomposition over Fe-ZSM-5.

    Science.gov (United States)

    Guesmi, Hazar; Berthomieu, Dorothee; Bromley, Bryan; Coq, Bernard; Kiwi-Minsker, Lioubov

    2010-03-28

    The characterization of Fe/ZSM5 zeolite materials, the nature of Fe-sites active in N(2)O direct decomposition, as well as the rate-limiting step are still a matter of debate. The mechanism of N(2)O decomposition on the binuclear oxo-hydroxo bridged extraframework iron core site [Fe(II)(mu-O)(mu-OH)Fe(II)](+) inside the ZSM-5 zeolite has been studied by combining theoretical and experimental approaches. The overall calculated path of N(2)O decomposition involves the oxidation of binuclear Fe(II) core sites by N(2)O (atomic alpha-oxygen formation) and the recombination of two surface alpha-oxygen atoms leading to the formation of molecular oxygen. Rate parameters computed using standard statistical mechanics and transition state theory reveal that elementary catalytic steps involved in N(2)O decomposition are strongly dependent on the temperature. This theoretical result was compared to the experimentally observed steady state kinetics of the N(2)O decomposition and temperature-programmed desorption (TPD) experiments. A switch of the reaction order with respect to N(2)O pressure from zero to one occurs at around 800 K, suggesting a change of the rate determining step from the alpha-oxygen recombination to alpha-oxygen formation. The TPD results on the molecular oxygen desorption confirmed the mechanism proposed.
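    The temperature dependence of the elementary steps can be illustrated with the transition-state-theory (Eyring) expression for a rate constant; the free-energy barriers used below are placeholders, not the values computed in the study.

```python
import numpy as np

# Physical constants (SI)
KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(delta_g_act_kj_mol, temperature_k):
    """Transition-state-theory rate constant k = (kB*T/h) * exp(-dG_act / (R*T))."""
    dg = delta_g_act_kj_mol * 1e3  # kJ/mol -> J/mol
    return (KB * temperature_k / H) * np.exp(-dg / (R * temperature_k))

# Illustrative barriers for two competing steps (hypothetical numbers)
for T in (600.0, 800.0, 1000.0):
    k_form  = eyring_rate(150.0, T)  # alpha-oxygen formation step
    k_recom = eyring_rate(180.0, T)  # alpha-oxygen recombination step
    print(f"T = {T:6.0f} K   k_formation = {k_form:9.3e} s^-1   k_recombination = {k_recom:9.3e} s^-1")
```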

  10. Hybrid machining processes perspectives on machining and finishing

    CERN Document Server

    Gupta, Kapil; Laubscher, R F

    2016-01-01

    This book describes various hybrid machining and finishing processes. It gives a critical review of the past work based on them as well as the current trends and research directions. For each hybrid machining process presented, the authors list the method of material removal, machining system, process variables and applications. This book provides a deep understanding of the need, application and mechanism of hybrid machining processes.

  11. Generalized Fisher index or Siegel-Shapley decomposition?

    International Nuclear Information System (INIS)

    De Boer, Paul

    2009-01-01

    It is generally believed that index decomposition analysis (IDA) and input-output structural decomposition analysis (SDA) [Rose, A., Casler, S., Input-output structural decomposition analysis: a critical appraisal, Economic Systems Research 1996; 8; 33-62; Dietzenbacher, E., Los, B., Structural decomposition techniques: sense and sensitivity. Economic Systems Research 1998;10; 307-323] are different approaches in energy studies; see for instance Ang et al. [Ang, B.W., Liu, F.L., Chung, H.S., A generalized Fisher index approach to energy decomposition analysis. Energy Economics 2004; 26; 757-763]. In this paper it is shown that the generalized Fisher approach, introduced in IDA by Ang et al. [Ang, B.W., Liu, F.L., Chung, H.S., A generalized Fisher index approach to energy decomposition analysis. Energy Economics 2004; 26; 757-763] for the decomposition of an aggregate change in a variable in r = 2, 3 or 4 factors is equivalent to SDA. They base their formulae on the very complicated generic formula that Shapley [Shapley, L., A value for n-person games. In: Kuhn H.W., Tucker A.W. (Eds), Contributions to the theory of games, vol. 2. Princeton University: Princeton; 1953. p. 307-317] derived for his value of n-person games, and mention that Siegel [Siegel, I.H., The generalized 'ideal' index-number formula. Journal of the American Statistical Association 1945; 40; 520-523] gave their formulae using a different route. In this paper tables are given from which the formulae of the generalized Fisher approach can easily be derived for the cases of r = 2, 3 or 4 factors. It is shown that these tables can easily be extended to cover the cases of r = 5 and r = 6 factors. (author)
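    For the simplest case of r = 2 factors, the generalized Fisher ("ideal") index decomposition can be written down directly; the sketch below implements that two-factor case on made-up data (the variable names and numbers are illustrative only, not from the cited studies).

```python
import numpy as np

def fisher_two_factor(x0, y0, x1, y1):
    """Multiplicative Fisher decomposition of V = sum_i x_i * y_i into an x-effect and a y-effect."""
    x0, y0, x1, y1 = map(np.asarray, (x0, y0, x1, y1))
    V0, V1 = np.sum(x0 * y0), np.sum(x1 * y1)
    # Fisher ("ideal") index for each factor: geometric mean of the Laspeyres and Paasche indices
    D_x = np.sqrt((np.sum(x1 * y0) / V0) * (V1 / np.sum(x0 * y1)))
    D_y = np.sqrt((np.sum(x0 * y1) / V0) * (V1 / np.sum(x1 * y0)))
    return D_x, D_y, V1 / V0

# Toy example: energy use = activity (x) * intensity (y) over three subsectors, two periods
x0, y0 = [10.0, 20.0, 5.0], [2.0, 1.5, 3.0]
x1, y1 = [12.0, 22.0, 6.0], [1.8, 1.4, 2.9]
D_x, D_y, ratio = fisher_two_factor(x0, y0, x1, y1)
print(D_x, D_y, D_x * D_y, ratio)  # the two factor effects multiply back exactly to the aggregate ratio
```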

  12. Machine learning techniques in optical communication

    DEFF Research Database (Denmark)

    Zibar, Darko; Piels, Molly; Jones, Rasmus Thomas

    2015-01-01

    Techniques from the machine learning community are reviewed and employed for laser characterization, signal detection in the presence of nonlinear phase noise, and nonlinearity mitigation. Bayesian filtering and expectation maximization are employed within a nonlinear state-space framework...

  13. 4th International Conference on Man–Machine Interactions

    CERN Document Server

    Brachman, Agnieszka; Kozielski, Stanisław; Czachórski, Tadeusz

    2016-01-01

    This book provides an overview of the current state of research on development and application of methods, algorithms, tools and systems associated with the studies on man-machine interaction. Modern machines and computer systems are designed not only to process information, but also to work in dynamic environments, supporting or even replacing human activities in areas such as business, industry, medicine or military. The interdisciplinary field of research on man-machine interactions focuses on a broad range of aspects related to the ways in which humans make or use computational artifacts, systems and infrastructure. This monograph is the fourth edition in the series and presents new concepts concerning analysis, design and evaluation of man-machine systems. The selection of high-quality, original papers covers a wide scope of research topics focused on the main problems and challenges encountered within rapidly evolving new forms of human-machine relationships. The presented material is structured into fol...

  14. Machine learning techniques in optical communication

    DEFF Research Database (Denmark)

    Zibar, Darko; Piels, Molly; Jones, Rasmus Thomas

    2016-01-01

    Machine learning techniques relevant for nonlinearity mitigation, carrier recovery, and nanoscale device characterization are reviewed and employed. Markov Chain Monte Carlo in combination with Bayesian filtering is employed within the nonlinear state-space framework and demonstrated for parameter...

  15. European CO2 emission trends: A decomposition analysis for water and aviation transport sectors

    International Nuclear Information System (INIS)

    Andreoni, V.; Galmarini, S.

    2012-01-01

    A decomposition analysis is used to investigate the main factors influencing the CO2 emissions of European transport activities for the period 2001–2008. The decomposition method developed by Sun has been used to investigate the carbon dioxide emissions intensity, the energy intensity, the structural changes and the economic activity growth effects for the water and the aviation transport sectors. The analysis is based on Eurostat data and results are presented for 14 Member States, Norway and EU27. Results indicate that economic growth has been the main factor behind the carbon dioxide emissions increase in EU27 both for water and aviation transport activities. -- Highlights: ► Decomposition analysis is used to investigate factors that influenced the energy-related CO2 emissions of European transport. ► Economic growth has been the main factor affecting the energy-related CO2 emissions increases. ► Investigating the drivers of CO2 emissions is the first step toward defining energy efficiency policies and emission reduction strategies.
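    As a minimal illustration of a Sun-type complete decomposition, the sketch below handles the two-factor case, splitting the jointly created term equally between the factors; the paper applies a multi-factor version, and the numbers here are made up.

```python
import numpy as np

def sun_two_factor(x0, y0, x1, y1):
    """Sun's complete decomposition of dV for V = sum_i x_i * y_i between two periods.
    The joint term dx*dy is split equally ("jointly created and equally distributed")."""
    x0, y0, x1, y1 = map(np.asarray, (x0, y0, x1, y1))
    dx, dy = x1 - x0, y1 - y0
    x_effect = np.sum(dx * y0 + 0.5 * dx * dy)
    y_effect = np.sum(x0 * dy + 0.5 * dx * dy)
    return x_effect, y_effect

# Toy example: emissions = activity (x) * emission intensity (y) for two transport modes
x0, y0 = [100.0, 50.0], [0.8, 1.2]
x1, y1 = [120.0, 55.0], [0.75, 1.1]
ax, ay = sun_two_factor(x0, y0, x1, y1)
total = sum(a * b for a, b in zip(x1, y1)) - sum(a * b for a, b in zip(x0, y0))
print(ax, ay, ax + ay, total)  # the two effects sum exactly to the total change
```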

  16. Introducing Stable Radicals into Molecular Machines.

    Science.gov (United States)

    Wang, Yuping; Frasconi, Marco; Stoddart, J Fraser

    2017-09-27

    Ever since their discovery, stable organic radicals have received considerable attention from chemists because of their unique optical, electronic, and magnetic properties. Currently, one of the most appealing challenges for the chemical community is to develop sophisticated artificial molecular machines that can do work by consuming external energy, after the manner of motor proteins. In this context, radical-pairing interactions are important in addressing the challenge: they not only provide supramolecular assistance in the synthesis of molecular machines but also open the door to developing multifunctional systems relying on the various properties of the radical species. In this Outlook, by taking the radical cationic state of 1,1'-dialkyl-4,4'-bipyridinium (BIPY•+) as an example, we highlight our research on the art and science of introducing radical-pairing interactions into functional systems, from prototypical molecular switches to complex molecular machines, followed by a discussion of the (i) limitations of the current systems and (ii) future research directions for designing BIPY•+-based molecular machines with useful functions.

  17. Phase decomposition and ordering in Ni-11.3 at.% Ti studied with atom probe tomography

    KAUST Repository

    Al-Kassab, Talaat; Kompatscher, Michael; Kirchheim, Reiner; Kostorz, Gernot; Schö nfeld, Bernd

    2014-01-01

    The decomposition behavior of Ni-rich Ni-Ti was reassessed using Tomographic Atom Probe (TAP) and Laser Assisted Wide Angle Tomographic Atom Probe. Single crystalline specimens of Ni-11.3 at.% Ti were investigated, the states selected from

  18. Automated reasoning in man-machine control systems

    International Nuclear Information System (INIS)

    Stratton, R.C.; Lusk, E.L.

    1983-01-01

    This paper describes a project being undertaken at Argonne National Laboratory to demonstrate the usefulness of automated reasoning techniques in the implementation of a man-machine control system being designed at the EBR-II nuclear power plant. It is shown how automated reasoning influences the choice of optimal roles for both man and machine in the system control process, both for normal and off-normal operation. In addition, the requirements imposed by such a system for a rigorously formal specification of operating states, subsystem states, and transition procedures have a useful impact on the analysis phase. The definitions and rules are discussed for a prototype system which is physically simple yet illustrates some of the complexities inherent in real systems

  19. Reactivity continuum modeling of leaf, root, and wood decomposition across biomes

    Science.gov (United States)

    Koehler, Birgit; Tranvik, Lars J.

    2015-07-01

    Large amounts of carbon dioxide are released to the atmosphere during organic matter decomposition. Yet the large-scale and long-term regulation of this critical process in global carbon cycling by litter chemistry and climate remains poorly understood. We used reactivity continuum (RC) modeling to analyze the decadal data set of the "Long-term Intersite Decomposition Experiment," in which fine litter and wood decomposition was studied in eight biome types (224 time series). In 32 and 46% of all sites the litter content of the acid-unhydrolyzable residue (AUR, formerly referred to as lignin) and the AUR/nitrogen ratio, respectively, retarded initial decomposition rates. This initial rate-retarding effect generally disappeared within the first year of decomposition, and rate-stimulating effects of nutrients and a rate-retarding effect of the carbon/nitrogen ratio became more prevalent. For needles and leaves/grasses, the influence of climate on decomposition decreased over time. For fine roots, the climatic influence was initially smaller but increased toward later-stage decomposition. The climate decomposition index was the strongest climatic predictor of decomposition. The fact that initial decomposition rates were similarly variable across litter categories as across biome types suggested that future changes in decomposition may be dominated by warming-induced changes in plant community composition. In general, the RC model parameters successfully predicted independent decomposition data for the different litter-biome combinations (196 time series). We argue that parameterization of large-scale decomposition models with RC model parameters, as opposed to the currently common discrete multiexponential models, could significantly improve their mechanistic foundation and predictive accuracy across climate zones and litter categories.
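    A common reactivity continuum formulation (of the Boudreau-Ruddick type) expresses the fraction of initial mass remaining as (a/(a+t))^v; the sketch below fits that form to hypothetical litter-bag data, and is not the parameterization reported in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def rc_mass_remaining(t, a, v):
    """Reactivity continuum (gamma-distributed reactivity) model: fraction of initial mass left at time t."""
    return (a / (a + t)) ** v

# Hypothetical litter-bag observations: time in years, fraction of initial mass remaining
t_obs = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 10.0])
m_obs = np.array([1.0, 0.82, 0.70, 0.55, 0.40, 0.33, 0.25])

(a_hat, v_hat), _ = curve_fit(rc_mass_remaining, t_obs, m_obs, p0=(1.0, 1.0), bounds=(1e-6, np.inf))
print(f"a = {a_hat:.3f} yr, v = {v_hat:.3f}")
# The instantaneous decomposition rate k(t) = v / (a + t) declines as the labile fraction is used up
print("initial rate k(0) =", v_hat / a_hat, "per yr")
```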

  20. Determination of the Lowest-Energy States for the Model Distribution of Trained Restricted Boltzmann Machines Using a 1000 Qubit D-Wave 2X Quantum Computer.

    Science.gov (United States)

    Koshka, Yaroslav; Perera, Dilina; Hall, Spencer; Novotny, M A

    2017-07-01

    The possibility of using a quantum computer D-Wave 2X with more than 1000 qubits to determine the global minimum of the energy landscape of trained restricted Boltzmann machines is investigated. In order to overcome the problem of limited interconnectivity in the D-Wave architecture, the proposed RBM embedding combines multiple qubits to represent a particular RBM unit. The results for the lowest-energy (the ground state) and some of the higher-energy states found by the D-Wave 2X were compared with those of the classical simulated annealing (SA) algorithm. In many cases, the D-Wave machine successfully found the same RBM lowest-energy state as that found by SA. In some examples, the D-Wave machine returned a state corresponding to one of the higher-energy local minima found by SA. The inherently nonperfect embedding of the RBM into the Chimera lattice explored in this work (i.e., multiple qubits combined into a single RBM unit were found not to be guaranteed to be all aligned) and the existence of small, persistent biases in the D-Wave hardware may cause a discrepancy between the D-Wave and the SA results. In some of the investigated cases, introduction of a small bias field into the energy function or optimization of the chain-strength parameter in the D-Wave embedding successfully addressed difficulties of the particular RBM embedding. With further development of the D-Wave hardware, the approach will be suitable for much larger numbers of RBM units.
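    For context, the quantity being minimized is the standard RBM energy E(v, h) = -a·v - b·h - vᵀWh over binary units; the sketch below finds the ground state of a tiny, randomly weighted RBM by exhaustive enumeration, which is only feasible at toy scale (the weights are not from any trained model).

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 3

# Random weights and biases standing in for a trained RBM
W = rng.normal(scale=1.0, size=(n_visible, n_hidden))
a = rng.normal(scale=0.5, size=n_visible)   # visible biases
b = rng.normal(scale=0.5, size=n_hidden)    # hidden biases

def rbm_energy(v, h):
    """E(v, h) = -a.v - b.h - v^T W h for binary units v, h."""
    return -(a @ v) - (b @ h) - (v @ W @ h)

# Exhaustive search over all 2^(n_visible + n_hidden) joint states
best = min(
    ((np.array(v), np.array(h)) for v in itertools.product((0, 1), repeat=n_visible)
                                for h in itertools.product((0, 1), repeat=n_hidden)),
    key=lambda vh: rbm_energy(*vh),
)
print("ground state v =", best[0], "h =", best[1], "E =", rbm_energy(*best))
```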

  1. Kinetic study of lithium-cadmium ternary amalgam decomposition

    International Nuclear Information System (INIS)

    Cordova, M.H.; Andrade, C.E.

    1992-01-01

    The effect of metals, which form stable lithium phases in binary alloys, on the formation of intermetallic species in ternary amalgams and their effect on thermal decomposition in contact with water is analyzed. Cd is selected as the ternary metal, based on general experimental selection criteria. Cd(Hg) binary amalgams are prepared by direct Cd-Hg contact, whereas Li is introduced by electrolysis of aqueous LiOH using a liquid Cd(Hg) cathodic well. The decomposition kinetics of the Li-Cd(Hg) ternary amalgam in contact with 0.6 M LiOH are studied as a function of ageing and temperature, and these results are compared with the decomposition of the binary Li(Hg) amalgam. The decomposition rate is constant during one hour for the binary and ternary systems. Ageing does not affect the binary systems but increases the decomposition activation energy of the ternary systems. A reaction mechanism that considers an intermetallic species participating in the activated complex is proposed and a kinetic law is suggested. (author)
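    Activation energies such as those discussed above are typically obtained from an Arrhenius analysis of rate constants measured at several temperatures; the sketch below shows the two-temperature estimate with hypothetical rate constants, not the values measured for these amalgams.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(k1, T1, k2, T2):
    """Arrhenius estimate from rate constants at two temperatures:
    Ea = R * ln(k2/k1) / (1/T1 - 1/T2)."""
    return R * np.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

# Hypothetical decomposition rate constants (per hour) at two bath temperatures
k1, T1 = 0.012, 298.0
k2, T2 = 0.035, 318.0
Ea = activation_energy(k1, T1, k2, T2)
print(f"Ea ≈ {Ea / 1e3:.1f} kJ/mol")
```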

  2. Thermodynamic anomaly in magnesium hydroxide decomposition

    International Nuclear Information System (INIS)

    Reis, T.A.

    1983-08-01

    The origin of the discrepancy in the equilibrium water vapor pressure measurements for the reaction Mg(OH)2(s) = MgO(s) + H2O(g) when determined by Knudsen effusion and static manometry at the same temperature was investigated. For this reaction undergoing continuous thermal decomposition in Knudsen cells, Kay and Gregory observed that by extrapolating the steady-state apparent equilibrium vapor pressure measurements to zero orifice, the vapor pressure was approx. 10^-4 of that previously established by Giauque and Archibald as the true thermodynamic equilibrium vapor pressure using statistical mechanical entropy calculations for the entropy of water vapor. This large difference in vapor pressures suggests the possibility of the formation in a Knudsen cell of a higher-energy MgO that is thermodynamically metastable by about 48 kJ/mol. It has been shown here that experimental results are qualitatively independent of the type of Mg(OH)2 used as a starting material, which confirms the inferences of Kay and Gregory. Thus, most forms of Mg(OH)2 are considered to be the stable thermodynamic equilibrium form. X-ray diffraction results show that during the course of the reaction only the equilibrium NaCl-type MgO is formed, and no different phases result from samples prepared in Knudsen cells. Surface area data indicate that the MgO molar surface area remains constant throughout the course of the reaction at low decomposition temperatures, and no significant annealing occurs at less than 400 °C. Scanning electron microscope photographs show no change in particle size or particle surface morphology. Solution calorimetric measurements indicate no inherent higher energy content in the MgO from the solid produced in Knudsen cells. The Knudsen cell vapor pressure discrepancy may reflect the formation of a transient metastable MgO or Mg(OH)2-MgO solid solution during continuous thermal decomposition in Knudsen cells.

  3. Crop residue decomposition in Minnesota biochar amended plots

    OpenAIRE

    S. L. Weyers; K. A. Spokas

    2014-01-01

    Impacts of biochar application at laboratory scales are routinely studied, but impacts of biochar application on decomposition of crop residues at field scales have not been widely addressed. The priming or hindrance of crop residue decomposition could have a cascading impact on soil processes, particularly those influencing nutrient availability. Our objectives were to evaluate biochar effects on field decomposition of crop residue, using plots that were amended with ...

  4. Testing and Modeling of Machine Properties in Resistance Welding

    DEFF Research Database (Denmark)

    Wu, Pei

    The objective of this work has been to test and model the machine properties including the mechanical properties and the electrical properties in resistance welding. The results are used to simulate the welding process more accurately. The state of the art in testing and modeling machine properties...... as real projection welding tests, is easy to realize in industry, since tests may be performed in situ. In part II, an approach of characterizing the electrical properties of AC resistance welding machines is presented, involving testing and mathematical modelling of the weld current, the firing angle...... in resistance welding has been described based on a comprehensive literature study. The present thesis has been subdivided into two parts: Part I: Mechanical properties of resistance welding machines. Part II: Electrical properties of resistance welding machines. In part I, the electrode force in the squeeze...

  5. AC machine control : robust and sensorless control by parameter independency

    Energy Technology Data Exchange (ETDEWEB)

    Samuelsen, Dag Andreas Hals

    2009-06-15

    In this thesis it is first presented how robust control can be used to give AC motor drive systems competitive dynamic performance under parameter variations. These variations are common to all AC machines, and are a result of temperature change in the machine and imperfect machine models. This robust control is, however, dependent on sensor operation in the sense that the rotor position is needed in the control loop. Elimination of this sensor dependence has been for many years, and still is, a main research area of AC machine control systems. An integrated PWM modulator and sampler unit has been developed and tested. The sampler unit is able to give current and voltage measurements with a reduced noise component. It is further used to give the true derivative of currents and voltages in the machine and the power converter, as an average over a PWM period, and as separate values for all states of the power converter. In this way, it can give measurements of the currents as well as the derivative of the currents, at the start and at the end of a single power inverter state. This gave a large degree of freedom in parameter and state identification during uninterrupted operation of the induction machine. The special measurement scheme of the system achieved three main goals: By avoiding the time frame where the transistors commutate and the noise in the measurement of the current is large, filtering of the current measurement is no longer needed. The true derivative of the current in the machine can be measured with far fewer noise components. This was extended to give any separate derivative in all three switching states of the power converter. Using the computational resources of the FPGA, more advanced information was supplied to the control system, in order to facilitate sensorless operation, with low computational demands on the DSP. As shown in the papers, this extra information was first used to estimate some of the states of the machine, in some or all of the

  6. Excimer laser decomposition of silicone

    International Nuclear Information System (INIS)

    Laude, L.D.; Cochrane, C.; Dicara, Cl.; Dupas-Bruzek, C.; Kolev, K.

    2003-01-01

    Excimer laser irradiation of silicone foils is shown in this work to induce decomposition, ablation and activation of such materials. Thin (100 μm) laminated silicone foils are irradiated at 248 nm as a function of impacting laser fluence and number of pulsed irradiations at 1 s intervals. Above a threshold fluence of 0.7 J/cm2, the material starts decomposing. At higher fluences, this decomposition develops and gives rise to (i) swelling of the irradiated surface and then (ii) emission of matter (ablation) at a rate that is not proportional to the number of pulses. Taking into consideration the polymer structure and the foil lamination process, these results help define the phenomenology of silicone ablation. The polymer decomposition results in two parts: one which is organic and volatile, and another which is inorganic and remains, forming an ever-thickening screen against light penetration as the number of light pulses increases. A mathematical model is developed that accounts successfully for this physical screening effect.

  7. Salient Object Detection via Structured Matrix Decomposition.

    Science.gov (United States)

    Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J

    2016-05-04

    Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.
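    The baseline behind such models is the plain low-rank-plus-sparse split (robust PCA); the sketch below implements the standard ADMM iteration for that unstructured problem on synthetic data. It does not include the tree-structured sparsity or Laplacian regularization that distinguish the proposed model.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def shrink(X, tau):
    """Soft thresholding: proximal operator of the elementwise L1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def rpca(M, lam=None, mu=None, n_iter=200):
    """Decompose M into low-rank L ("background") plus sparse S ("salient" outliers) via ADMM."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S

# Synthetic test: rank-2 "background" plus a few large sparse "objects"
rng = np.random.default_rng(1)
background = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 60))
sparse = np.zeros((50, 60))
sparse[rng.integers(0, 50, 30), rng.integers(0, 60, 30)] = 10.0

L, S = rpca(background + sparse)
print("leading singular values of L:", np.linalg.svd(L, compute_uv=False)[:5].round(3))
print("entries flagged as sparse   :", int((np.abs(S) > 1e-3).sum()))
```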

  8. 1.7. Acid decomposition of kaolin clays of Ziddi Deposit. 1.7.1. The hydrochloric acid decomposition of kaolin clays and siallites

    International Nuclear Information System (INIS)

    Mirsaidov, U.M.; Mirzoev, D.Kh.; Boboev, Kh.E.

    2016-01-01

    This section of the book is devoted to the hydrochloric acid decomposition of kaolin clays and siallites. The chemical composition of the kaolin clays and siallites was determined. The influence of temperature, process duration and acid concentration on the hydrochloric acid decomposition of kaolin clays and siallites was studied, and the optimal conditions of the hydrochloric acid decomposition were determined.

  9. Multi hollow needle to plate plasmachemical reactor for pollutant decomposition

    International Nuclear Information System (INIS)

    Pekarek, S.; Kriha, V.; Viden, I.; Pospisil, M.

    2001-01-01

    Modification of the classical multipin to plate plasmachemical reactor for pollutant decomposition is proposed in this paper. In this modified reactor a mixture of air and pollutant flows through the needles, contrary to the classical reactor where the mixture of air and pollutant flows around the pins or through the channel plus through the hollow needles. We compare the toluene decomposition efficiency of (a) a reactor in which the main stream of the mixture flows through the channel around the needles with only a small flow rate through the needles and (b) the modified reactor. It was found that for similar flow rates and similar energy deposition, the decomposition efficiency of toluene was increased more than six times in the modified reactor. This new modified reactor was also experimentally tested for the decomposition of volatile hydrocarbons from the gasoline distillation range. An average VOC decomposition efficiency of about 25% was reached. However, significant differences in the decomposition of various hydrocarbon types were observed. The best results were obtained for the decomposition of olefins (reaching 90%) and methyl-tert-butyl ether (about 50%). Moreover, the number of carbon atoms in the molecule affects the quality of VOC decomposition. (author)

  10. Advanced Oxidation: Oxalate Decomposition Testing With Ozone

    International Nuclear Information System (INIS)

    Ketusky, E.; Subramanian, K.

    2012-01-01

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion to the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) Degraded evaporator operation; (2) Resultant oxalate precipitates taking away critically needed operating volume; and (3) Eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of ozone based Advanced Oxidation Process (AOP) were investigated. In general AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials, and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed, where 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and H-area simulant (i.e., H area modified Purex = high Al/Fe concentration) after nearing

  11. ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE

    Energy Technology Data Exchange (ETDEWEB)

    Ketusky, E.; Subramanian, K.

    2012-02-29

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion to the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) Degraded evaporator operation; (2) Resultant oxalate precipitates taking away critically needed operating volume; and (3) Eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of ozone based Advanced Oxidation Process (AOP) were investigated. In general AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials, and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed, where 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and H-area simulant (i.e., H area modified Purex = high Al/Fe concentration

  12. A handbook of decomposition methods in analytical chemistry

    International Nuclear Information System (INIS)

    Bok, R.

    1984-01-01

    Decomposition methods for metals, alloys, fluxes, slags, calcines, inorganic salts, oxides, nitrides, carbides, borides, sulfides, ores, minerals, rocks, concentrates, glasses, ceramics, organic substances, polymers, and phyto- and biological materials are described from the viewpoint of sample preparation for analysis. The methods are systematized according to the decomposition principle: thermal (with the use of electricity or irradiation), and dissolution with or without the participation of chemical reactions. Special equipment for the different decomposition methods is described. The bibliography contains 3420 references.

  13. SDE decomposition and A-type stochastic interpretation in nonequilibrium processes

    Science.gov (United States)

    Yuan, Ruoshi; Tang, Ying; Ao, Ping

    2017-12-01

    An innovative theoretical framework for stochastic dynamics based on the decomposition of a stochastic differential equation (SDE) into a dissipative component, a detailed-balance-breaking component, and a dual-role potential landscape has been developed, which has fruitful applications in physics, engineering, chemistry, and biology. It introduces the A-type stochastic interpretation of the SDE beyond the traditional Ito or Stratonovich interpretation or even the α-type interpretation for multidimensional systems. The potential landscape serves as a Hamiltonian-like function in nonequilibrium processes without detailed balance, which extends this important concept from equilibrium statistical physics to the nonequilibrium region. A question on the uniqueness of the SDE decomposition was recently raised. Our review of both the mathematical and physical aspects shows that uniqueness is guaranteed. The demonstration leads to a better understanding of the robustness of the novel framework. In addition, we discuss related issues including the limitations of an approach to obtaining the potential function from a steady-state distribution.

  14. A Longitudinal Study on Human Outdoor Decomposition in Central Texas.

    Science.gov (United States)

    Suckling, Joanna K; Spradley, M Katherine; Godde, Kanya

    2016-01-01

    The development of a methodology that estimates the postmortem interval (PMI) from stages of decomposition is a goal for which forensic practitioners strive. A proposed equation (Megyesi et al. 2005) that utilizes total body score (TBS) and accumulated degree days (ADD) was tested using longitudinal data collected from human remains donated to the Forensic Anthropology Research Facility (FARF) at Texas State University-San Marcos. Exact binomial tests examined how often the equation successfully predicted ADD. Statistically significant differences were found between the ADD estimated by the equation and the observed value for each decomposition stage. Differences remained significant after carnivore-scavenged donations were removed from analysis. The low success rate of the equation in predicting ADD from TBS and the wide standard errors demonstrate the need to re-evaluate the use of this equation and methodology for PMI estimation in different environments; rather, multivariate methods and equations should be derived that are environmentally specific. © 2015 American Academy of Forensic Sciences.
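    The regression being tested is commonly quoted as log10(ADD) = 0.002·TBS² + 1.81; the short sketch below evaluates that form, with the coefficients taken here as an assumption rather than from the original publication.

```python
def add_from_tbs(tbs):
    """Accumulated degree days predicted from total body score, using the commonly cited form of the
    Megyesi et al. (2005) regression, log10(ADD) = 0.002 * TBS^2 + 1.81 (coefficients assumed here)."""
    return 10 ** (0.002 * tbs ** 2 + 1.81)

for tbs in (5, 10, 15, 20, 25, 30):
    print(f"TBS = {tbs:2d}  ->  predicted ADD ≈ {add_from_tbs(tbs):8.1f} degree-days")
```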

  15. Brain-machine and brain-computer interfaces.

    Science.gov (United States)

    Friehs, Gerhard M; Zerris, Vasilios A; Ojakangas, Catherine L; Fellows, Mathew R; Donoghue, John P

    2004-11-01

    The idea of connecting the human brain to a computer or machine directly is not novel and its potential has been explored in science fiction. With the rapid advances in the areas of information technology, miniaturization and neurosciences there has been a surge of interest in turning fiction into reality. In this paper the authors review the current state-of-the-art of brain-computer and brain-machine interfaces including neuroprostheses. The general principles and requirements to produce a successful connection between human and artificial intelligence are outlined and the authors' preliminary experience with a prototype brain-computer interface is reported.

  16. Decomposition of silicon carbide at high pressures and temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Daviau, Kierstin; Lee, Kanani K. M.

    2017-11-01

    We measure the onset of decomposition of silicon carbide, SiC, to silicon and carbon (e.g., diamond) at high pressures and high temperatures in a laser-heated diamond-anvil cell. We identify decomposition through x-ray diffraction and multiwavelength imaging radiometry coupled with electron microscopy analyses on quenched samples. We find that B3 SiC (also known as 3C or zinc blende SiC) decomposes at high pressures and high temperatures, following a phase boundary with a negative slope. The high-pressure decomposition temperatures measured are considerably lower than those at ambient, with our measurements indicating that SiC begins to decompose at ~ 2000 K at 60 GPa as compared to ~ 2800 K at ambient pressure. Once B3 SiC transitions to the high-pressure B1 (rocksalt) structure, we no longer observe decomposition, despite heating to temperatures in excess of ~ 3200 K. The temperature of decomposition and the nature of the decomposition phase boundary appear to be strongly influenced by the pressure-induced phase transitions to higher-density structures in SiC, silicon, and carbon. The decomposition of SiC at high pressure and temperature has implications for the stability of naturally forming moissanite on Earth and in carbon-rich exoplanets.

  17. Radiation decomposition of alcohols and chloro phenols in micellar systems

    International Nuclear Information System (INIS)

    Moreno A, J.

    1998-01-01

    The effect of surfactants on the radiation decomposition yield of alcohols and chloro phenols has been studied with gamma doses of 2, 3, and 5 kGy. These compounds were used as typical pollutants in wastewater, and the effect of the water solubility, chemical structure, and the nature of the surfactant, anionic or cationic, was studied. The results show that anionic surfactants like sodium dodecylsulfate (SDS) improve the radiation decomposition yield of ortho-chloro phenol, while cationic surfactants like cetyl trimethylammonium chloride (CTAC) improve the radiation decomposition yield of butyl alcohol. A similar behavior is expected for those alcohols with water solubility close to the studied ones. Surfactant concentrations below the critical micellar concentration (CMC) inhibited radiation decomposition for both types of alcohols. However, the radiation decomposition yield increased when surfactant concentrations exceeded the CMC. Decomposition of aromatic alcohols was more marked than that of linear alcohols. For a mixture of alcohols and chloro phenols in aqueous solution, the radiation decomposition yield decreased with increasing surfactant concentration. Nevertheless, there were competitive reactions between the alcohols, surfactant dimers, hydroxyl radicals and other reactive species formed during water radiolysis, producing a positive catalytic effect in the decomposition of alcohols. Chemical structure and the number of carbons were not important factors in the radiation decomposition. When an alcohol like ortho-chloro phenol contained an additional chlorine atom, the decomposition of this compound was almost constant. In conclusion, the micellar effect depends on both the nature of the surfactant (anionic or cationic) and the chemical structure of the alcohols. The results of this study are useful for wastewater treatment plants based on the oxidant effect of the hydroxyl radical, like in advanced oxidation processes, or in combined treatment such as

  18. Machine terms dictionary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1979-04-15

    This book gives descriptions of machine terms covering machine design, drawing, machining methods, machine tools, machine materials, automobiles, measurement and control, electricity, basics of electronics, information technology, quality assurance, AutoCAD, and FA terms, as well as important formulas of mechanical engineering.

  19. Note on Symplectic SVD-Like Decomposition

    Directory of Open Access Journals (Sweden)

    AGOUJIL Said

    2016-02-01

    Full Text Available The aim of this study was to introduce a constructive method to compute a symplectic singular value decomposition (SVD-like decomposition) of a 2n-by-m rectangular real matrix A, based on symplectic reflectors. This approach uses a canonical Schur form of a skew-symmetric matrix and allows us to compute eigenvalues of structured matrices such as the Hamiltonian matrix JAA^T.
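    The structural property exploited above is easy to verify numerically: for the standard symplectic form J, the matrix JAAᵀ is Hamiltonian, i.e., J(JAAᵀ) is symmetric. The sketch below checks this for a random 2n-by-m matrix A.

```python
import numpy as np

n, m = 3, 5
rng = np.random.default_rng(42)
A = rng.normal(size=(2 * n, m))           # a 2n-by-m real matrix, as in the paper's setting

# Standard symplectic form J = [[0, I], [-I, 0]]
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

H = J @ A @ A.T                            # the structured matrix JAA^T
# A matrix H is Hamiltonian iff J @ H is symmetric (here J @ H = -A A^T, which is symmetric)
JH = J @ H
print("J A A^T is Hamiltonian:", np.allclose(JH, JH.T))
```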

  20. Residual stresses generated in F-522 steel by different machining processes

    International Nuclear Information System (INIS)

    Gracia-Navas, V.; Ferreres, I.; Maranon, J. A.; Garcia-Rosales, C.; Gil-Sevillano, J.

    2005-01-01

    Machining operations induce plastic deformation and heat generation in the near-surface area of the machined part, giving rise to residual stresses. Depending on their magnitude and sign, these stresses can be detrimental or beneficial to the service life of the part. The final stress state depends on the machining process applied, as well as on the machining parameters. Therefore, the establishment of adequate machining guidelines requires the measurement of the residual stresses generated both at the surface and inside the material. In this work, the residual stresses generated in F-522 steel by two hard turning (conventional and laser assisted) and two grinding (production and finishing) processes were measured by X-ray diffraction. Additionally, depth profiles of the volume fraction of retained austenite, microstructure and nanohardness were obtained in order to correlate those results with the residual stress state obtained for each machining process. It has been observed that turning generates tensile stresses at the surface while grinding causes compressive stresses. Below the surface, grinding generates weak tensile or nearly null stresses, whereas turning generates strong compressive stresses. These results show that the optimum machining process (disregarding economic considerations) is a combination of turning followed by removal of a small thickness by final grinding. (Author) 19 refs

  1. Reliability analysis in intelligent machines

    Science.gov (United States)

    Mcinroy, John E.; Saridis, George N.

    1990-01-01

    Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques for finding the reliability corresponding to alternative subsets of control and sensing strategies are proposed such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed torque-control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.

  2. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Science.gov (United States)

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving prediction performance.
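    A minimal sketch of the kind of nonparametric regression comparison described above, using scikit-learn's SVR and Gaussian process regressor on a synthetic three-variable data set; the input variables, target relation and hyperparameters are invented stand-ins for the proprietary transaction data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for the transaction data: three inputs (e.g. normalized trade size,
# volatility, spread) and a noisy nonlinear "market impact" target (hypothetical relation only).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 3))
y = 0.5 * np.sqrt(X[:, 0]) * X[:, 1] + 0.1 * X[:, 2] + 0.02 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {"SVR": SVR(C=10.0, epsilon=0.01),
          "Gaussian process": GaussianProcessRegressor(alpha=1e-3)}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:17s} MAE = {mean_absolute_error(y_te, model.predict(X_te)):.4f}")
```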

  3. The decomposition of estuarine macrophytes under different ...

    African Journals Online (AJOL)

    The aim of this study was to determine the decomposition characteristics of the most dominant submerged macrophyte and macroalgal species in the Great Brak Estuary. Laboratory experiments were conducted to determine the effect of different temperature regimes on the rate of decomposition of 3 macrophyte species ...

  4. Decomposition and flame structure of hydrazinium nitroformate

    NARCIS (Netherlands)

    Louwers, J.; Parr, T.; Hanson-Parr, D.

    1999-01-01

    The decomposition of hydrazinium nitroformate (HNF) was studied in a hot quartz cell and by dropping small amounts of HNF on a hot plate. The species formed during the decomposition were identified by ultraviolet-visible absorption experiments. These experiments reveal that first HONO is formed. The

  5. Ultrashort pulse laser machining of metals and alloys

    Science.gov (United States)

    Perry, Michael D.; Stuart, Brent C.

    2003-09-16

    The invention consists of a method for high precision machining (cutting, drilling, sculpting) of metals and alloys. By using pulses of a duration in the range of 10 femtoseconds to 100 picoseconds, extremely precise machining can be achieved with essentially no heat or shock affected zone. Because the pulses are so short, there is negligible thermal conduction beyond the region removed, resulting in negligible thermal stress or shock to the material beyond approximately 0.1-1 micron (dependent upon the particular material) from the laser machined surface. Due to the short duration, the high intensity (>10^12 W/cm^2) associated with the interaction converts the material directly from the solid state into an ionized plasma. Hydrodynamic expansion of the plasma eliminates the need for any ancillary techniques to remove material and produces extremely high quality machined surfaces with negligible redeposition either within the kerf or on the surface. Since there is negligible heating beyond the depth of material removed, the composition of the remaining material is unaffected by the laser machining process. This enables high precision machining of alloys and even pure metals with no change in grain structure.

  6. New mechanism for autocatalytic decomposition of H2CO3 in the vapor phase.

    Science.gov (United States)

    Ghoshal, Sourav; Hazra, Montu K

    2014-04-03

    In this article, we present high-level ab initio calculations investigating the energetics of a new autocatalytic decomposition mechanism for carbonic acid (H2CO3) in the vapor phase. The calculations have been performed at the MP2 level of theory in conjunction with aug-cc-pVDZ, aug-cc-pVTZ, and 6-311++G(3df,3pd) basis sets as well as at the CCSD(T)/aug-cc-pVTZ level. The present study suggests that this new decomposition mechanism is effectively a near-barrierless process at room temperature and makes the vapor phase of H2CO3 unstable even in the absence of water molecules. Our calculation at the MP2/aug-cc-pVTZ level predicts that the effective barrier, defined as the difference between the zero-point vibrational energy (ZPE) corrected energy of the transition state and the total energy of the isolated starting reactants in terms of bimolecular encounters, is nearly zero for the autocatalytic decomposition mechanism. The results at the CCSD(T)/aug-cc-pVTZ level suggest that the effective barrier, as defined above, is sensitive to some extent to the level of calculation used; nevertheless, we find that the effective barrier height predicted at the CCSD(T)/aug-cc-pVTZ level is very small, i.e., the autocatalytic decomposition mechanism presented in this work is a near-barrierless process, as noted above. Thus, we suggest that this new autocatalytic decomposition mechanism has to be considered as the primary mechanism for the decomposition of carbonic acid, especially at its source, where the vapor phase concentration of H2CO3 molecules reaches its highest levels.
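    The "effective barrier" used above is simple arithmetic once the electronic energies and zero-point corrections are available: the ZPE-corrected transition-state energy minus the summed energies of the isolated reactants. The sketch below uses placeholder numbers, not the ab initio values from the paper.

```python
# Effective barrier for a bimolecular channel: ZPE-corrected transition-state energy minus
# the summed energies of the isolated reactants.  Numbers below are placeholders (hartree units
# assumed), not the computed values from the study.
HARTREE_TO_KCAL = 627.509

e_ts, zpe_ts = -529.4312, 0.0715                        # hypothetical TS energy and ZPE
reactants = [(-264.7201, 0.0362), (-264.7150, 0.0355)]  # two isolated reactant molecules (E, ZPE)

e_eff = (e_ts + zpe_ts) - sum(e + z for e, z in reactants)
print(f"effective barrier ≈ {e_eff * HARTREE_TO_KCAL:.2f} kcal/mol")
```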

  7. Decomposition of forest products buried in landfills

    International Nuclear Information System (INIS)

    Wang, Xiaoming; Padgett, Jennifer M.; Powell, John S.; Barlaz, Morton A.

    2013-01-01

    Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than
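    A rough sketch of the bookkeeping behind such indices is given below: carbohydrate (C + H) loss estimated with lignin treated as a conservative internal tracer, and carbon storage expressed per gram of initial dry material. Both formulas are plausible readings for illustration only and may differ from the exact HOD and CSF definitions in the paper.

```python
def ch_loss_percent(c0, h0, l0, c_t, h_t, l_t):
    """Percent loss of cellulose + hemicellulose, using lignin as a conservative internal tracer.
    This lignin-normalized form is an assumed, illustrative reading of a holocellulose
    decomposition index, not necessarily the definition used in the paper."""
    remaining = ((c_t + h_t) / l_t) / ((c0 + h0) / l0)
    return 100.0 * (1.0 - remaining)

def carbon_storage_factor(oc_remaining_g, dry_mass_initial_g):
    """Carbon storage expressed as grams of organic carbon remaining per gram of initial dry
    material (assumed definition for illustration)."""
    return oc_remaining_g / dry_mass_initial_g

# Hypothetical composition (% of dry mass) before burial and after ~2.5 yr in the landfill
print(f"C+H loss ≈ {ch_loss_percent(45.0, 20.0, 25.0, 30.0, 10.0, 28.0):.1f} %")
print(f"CSF ≈ {carbon_storage_factor(0.30, 1.0):.2f} g OC per g dry material")
```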

  8. Decomposition of forest products buried in landfills

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xiaoming, E-mail: xwang25@ncsu.edu [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States); Padgett, Jennifer M. [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States); Powell, John S. [Department of Chemical and Biomolecular Engineering, Campus Box 7905, North Carolina State University, Raleigh, NC 27695-7905 (United States); Barlaz, Morton A. [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States)

    2013-11-15

    Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than

  9. Some relations between quantum Turing machines and Turing machines

    OpenAIRE

    Sicard, Andrés; Vélez, Mario

    1999-01-01

    For quantum Turing machines we present three elements: its components, its time evolution operator and its local transition function. The components are related to the components of deterministic Turing machines, the time evolution operator is related to the evolution of reversible Turing machines and the local transition function is related to the transition function of probabilistic and reversible Turing machines.

  10. Parallel processing for pitch splitting decomposition

    Science.gov (United States)

    Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris

    2009-10-01

    Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude.
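    The geometric distribution idea, partitioning the layout into tiles whose operations have a bounded range of influence and processing the tiles independently, can be sketched as below; the rectangles, tile size, halo and local "operation" are all illustrative assumptions, not the production decomposition flow.

```python
from multiprocessing import Pool

# Each "polygon" is an axis-aligned rectangle (xmin, ymin, xmax, ymax).  Operations whose
# influence is bounded by HALO can run independently per tile, which is the kind of geometric
# distribution described above (illustrative sketch only).
TILE, HALO = 100.0, 5.0

def tile_key(rect):
    xmin, ymin, _, _ = rect
    return (int(xmin // TILE), int(ymin // TILE))

def process_tile(args):
    key, rects = args
    # Stand-in for a local polygon operation, e.g. sizing each rectangle by the halo
    return key, [(x0 - HALO, y0 - HALO, x1 + HALO, y1 + HALO) for x0, y0, x1, y1 in rects]

def partition(rects):
    tiles = {}
    for r in rects:
        tiles.setdefault(tile_key(r), []).append(r)
    return tiles

if __name__ == "__main__":
    layout = [(i * 37.0 % 400, i * 53.0 % 400, i * 37.0 % 400 + 3, i * 53.0 % 400 + 3)
              for i in range(1000)]
    tiles = partition(layout)
    with Pool(4) as pool:
        results = dict(pool.map(process_tile, tiles.items()))
    print(f"{len(tiles)} tiles processed, {sum(len(v) for v in results.values())} polygons total")
```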

  11. Cost Forecasting of Substation Projects Based on Cuckoo Search Algorithm and Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Dongxiao Niu

    2018-01-01

    Full Text Available Accurate prediction of substation project cost is helpful for improving investment management and sustainability, and it is directly related to the economy of the substation project. Ensemble Empirical Mode Decomposition (EEMD) can decompose variables with non-stationary sequence signals into components with significant regularity and periodicity, which helps improve the accuracy of the prediction model. Adding a Gaussian perturbation to the traditional Cuckoo Search (CS) algorithm can improve its search vigor and precision. In this way, the parameters and kernel functions of the Support Vector Machine (SVM) model are optimized. Comparison of the prediction results with those of other models shows that this model has higher prediction accuracy.
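    The overall decompose-forecast-aggregate skeleton can be sketched as follows. For brevity the sketch substitutes a simple moving-average trend/residual split for EEMD and fixes the SVR hyperparameters instead of tuning them with the Gaussian-perturbed cuckoo search; the cost series itself is synthetic.

```python
import numpy as np
from sklearn.svm import SVR

def lagged_matrix(series, n_lags=6):
    """Build (X, y) pairs where the previous n_lags values predict the next one."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

def decompose(series, window=12):
    """Stand-in decomposition (moving-average trend + residual); the paper uses EEMD instead."""
    trend = np.convolve(series, np.ones(window) / window, mode="same")
    return [trend, series - trend]

rng = np.random.default_rng(0)
t = np.arange(300)
cost = 100 + 0.2 * t + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(scale=2.0, size=t.size)

train, test = cost[:280], cost[280:]
forecast = np.zeros(len(test))
for comp in decompose(train):
    X, y = lagged_matrix(comp)
    model = SVR(C=100.0, epsilon=0.1).fit(X, y)   # hyperparameters fixed here; the paper tunes them
    # Recursive multi-step forecast of this component, then aggregate across components
    hist = list(comp[-6:])
    comp_fc = []
    for _ in range(len(test)):
        nxt = float(model.predict(np.array(hist[-6:]).reshape(1, -1))[0])
        comp_fc.append(nxt)
        hist.append(nxt)
    forecast += np.array(comp_fc)

print("MAE on hold-out:", float(np.mean(np.abs(forecast - test))))
```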

  12. Climate fails to predict wood decomposition at regional scales

    Science.gov (United States)

    Mark A. Bradford; Robert J. Warren; Petr Baldrian; Thomas W. Crowther; Daniel S. Maynard; Emily E. Oldfield; William R. Wieder; Stephen A. Wood; Joshua R. King

    2014-01-01

    Decomposition of organic matter strongly influences ecosystem carbon storage [1]. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter [2-5]. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on...

  13. Support vector machine in machine condition monitoring and fault diagnosis

    Science.gov (United States)

    Widodo, Achmad; Yang, Bo-Suk

    2007-08-01

    Recently, the issue of machine condition monitoring and fault diagnosis as a part of maintenance systems has attracted global attention due to the potential advantages to be gained from reduced maintenance costs, improved productivity and increased machine availability. This paper presents a survey of machine condition monitoring and fault diagnosis using the support vector machine (SVM). It attempts to summarize and review the recent research and developments of SVM in machine condition monitoring and diagnosis. Numerous methods have been developed based on intelligent systems such as artificial neural networks, fuzzy expert systems, condition-based reasoning, random forests, etc. However, the use of SVM for machine condition monitoring and fault diagnosis is still rare. SVM has excellent performance in generalization, so it can produce high accuracy in classification for machine condition monitoring and diagnosis. Until 2006, the use of SVM in machine condition monitoring and fault diagnosis was tending to develop towards expertise orientation and problem-oriented domains. Finally, the ability to continually change and obtain novel ideas for machine condition monitoring and fault diagnosis using SVM will be the subject of future work.
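    A minimal sketch of the SVM-based condition monitoring workflow described in such surveys: compute a few time-domain indicators per vibration window and classify healthy versus faulty windows with an SVM. The synthetic signal, fault signature and feature set are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def features(window):
    """Simple time-domain condition indicators for one vibration window."""
    rms = np.sqrt(np.mean(window ** 2))
    return [rms,                                  # overall vibration level
            kurtosis(window),                     # impulsiveness (grows with localized faults)
            np.max(np.abs(window)) / rms]         # crest factor

rng = np.random.default_rng(0)
def make_window(faulty):
    sig = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 2000)) + 0.2 * rng.normal(size=2000)
    if faulty:  # add periodic impacts, a crude stand-in for a bearing/gear defect signature
        sig[::200] += rng.uniform(3, 5, size=sig[::200].shape)
    return sig

labels = [0] * 60 + [1] * 60
X = np.array([features(make_window(lab)) for lab in labels])
y = np.array(labels)
print("CV accuracy:", cross_val_score(SVC(kernel="rbf", C=10.0, gamma="scale"), X, y, cv=5).mean())
```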

  14. Machine Learning for Neuroimaging with Scikit-Learn

    Directory of Open Access Journals (Sweden)

    Alexandre eAbraham

    2014-02-01

    Full Text Available Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g. multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g. resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain.

  15. Machine learning for neuroimaging with scikit-learn.

    Science.gov (United States)

    Abraham, Alexandre; Pedregosa, Fabian; Eickenberg, Michael; Gervais, Philippe; Mueller, Andreas; Kossaifi, Jean; Gramfort, Alexandre; Thirion, Bertrand; Varoquaux, Gaël

    2014-01-01

    Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain.
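
    The decoding setting described above can be sketched with a plain scikit-learn pipeline. The feature matrix below is random noise standing in for masked fMRI features; a real analysis would first extract voxel features from the images before fitting the estimator.

```python
# Minimal "decoding" sketch with scikit-learn; the data is a synthetic stand-in.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5000))       # 120 scans x 5000 voxel features (synthetic)
y = rng.integers(0, 2, size=120)       # binary condition label per scan

decoder = make_pipeline(StandardScaler(),
                        LogisticRegression(penalty="l2", C=1.0, max_iter=1000))
print("cross-validated accuracy:", cross_val_score(decoder, X, y, cv=5).mean())
```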

  16. Formation of volatile decomposition products by self-radiolysis of tritiated thymidine

    International Nuclear Information System (INIS)

    Shiba, Kazuhiro; Mori, Hirofumi

    1997-01-01

    In order to estimate the internal exposure dose in experiments using tritiated thymidine, the rate of volatile 3H decomposition of several tritiated thymidine samples was measured. The decomposition rate of (methyl-3H)thymidine in water was over 80% in less than one year after initial analysis. (Methyl-3H)thymidine decomposed into volatile and non-volatile 3H decomposition products. The proportion of volatile 3H decomposition products increased with the decomposition rate of (methyl-3H)thymidine. The volatile 3H decomposition products consisted of two components, of which the main component was tritiated water. The internal exposure dose caused by inhalation of such volatile 3H decomposition products of (methyl-3H)thymidine was estimated to be several μSv. (author)

  17. Are litter decomposition and fire linked through plant species traits?

    Science.gov (United States)

    Cornelissen, Johannes H C; Grootemaat, Saskia; Verheijen, Lieneke M; Cornwell, William K; van Bodegom, Peter M; van der Wal, René; Aerts, Rien

    2017-11-01

    SUMMARY: Biological decomposition and wildfire are connected carbon release pathways for dead plant material: slower litter decomposition leads to fuel accumulation. Are decomposition and surface fires also connected through plant community composition, via the species' traits? Our central concept involves two axes of trait variation related to decomposition and fire. The 'plant economics spectrum' (PES) links biochemistry traits to the litter decomposability of different fine organs. The 'size and shape spectrum' (SSS) includes litter particle size and shape and their consequent effect on fuel bed structure, ventilation and flammability. Our literature synthesis revealed that PES-driven decomposability is largely decoupled from predominantly SSS-driven surface litter flammability across species; this finding needs empirical testing in various environmental settings. Under certain conditions, carbon release will be dominated by decomposition, while under other conditions litter fuel will accumulate and fire may dominate carbon release. Ecosystem-level feedbacks between decomposition and fire, for example via litter amounts, litter decomposition stage, community-level biotic interactions and altered environment, will influence the trait-driven effects on decomposition and fire. Yet, our conceptual framework, explicitly comparing the effects of two plant trait spectra on litter decomposition vs fire, provides a promising new research direction for better understanding and predicting Earth surface carbon dynamics. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  18. Surface and elemental alterations of dental alloys induced by electro discharge machining (EDM).

    Science.gov (United States)

    Zinelis, Spiros

    2007-05-01

    To evaluate the surface and elemental alterations induced by electro discharge machining (EDM) on the surface of dental cast alloys used for the fabrication of implant retained meso- and super-structures. A completed cast model of an arch that received dental implants was used for the preparation of six wax patterns which were divided into three groups (Au, Co and Ti). The wax patterns of the Au and Co groups were invested with conventional phosphate-bonded silica-based investment material and the Ti group with magnesia-based investment material. The investment rings of the Au and Co groups were cast with an Au-Ag alloy (Stabilor G) and a Co-Cr base alloy (Okta C), respectively, while the investment rings of group Ti were cast with cp Ti (Biotan). One casting of each group was subjected to electro discharge machining (EDM); the other was conventionally ground and polished. The surface morphology and the elemental compositions of conventionally and EDM-finished surfaces were studied by SEM/X-ray EDS analysis. Six spectra were collected from each surface employing the area scan mode and the mean value of each element between conventionally and EDM-finished surfaces was statistically analyzed by t-test (α=0.05). Then the specimens of each group were cut perpendicular to their longitudinal axis and, after metallographic grinding and polishing, the cross-sections were studied under the SEM. The EDM surfaces showed a significant increase in C due to the decomposition of the dielectric fluid during spark erosion. Moreover, a significant Cu uptake was noted on these surfaces from the decomposition of the Cu electrodes used for EDM. Cross-sectional analysis showed that all alloys developed a superficial zone (recast layer) varying from 2 μm for the Au-Ag alloy to 10 μm for the Co-Cr alloy. The elemental composition of dental alloy surfaces is significantly altered after EDM treatment.

  19. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  20. Thermal decomposition of lanthanide and actinide tetrafluorides

    International Nuclear Information System (INIS)

    Gibson, J.K.; Haire, R.G.

    1988-01-01

    The thermal stabilities of several lanthanide/actinide tetrafluorides have been studied using mass spectrometry to monitor the gaseous decomposition products, and powder X-ray diffraction (XRD) to identify solid products. The tetrafluorides, TbF4, CmF4, and AmF4, have been found to thermally decompose to their respective solid trifluorides with accompanying release of fluorine, while cerium tetrafluoride has been found to be significantly more thermally stable and to congruently sublime as CeF4 prior to appreciable decomposition. The results of these studies are discussed in relation to other relevant experimental studies and the thermodynamics of the decomposition processes. 9 refs., 3 figs

  1. Thermal decomposition of UO3·2H2O

    International Nuclear Information System (INIS)

    Flament, T.A.

    1998-01-01

    The first part of the report summarizes the literature data regarding the uranium trioxide water system. In the second part, the experimental aspects are presented. An experimental program has been set up to determine the steps and species involved in decomposition of uranium oxide di-hydrate. Particular attention has been paid to determine both loss of free water (moisture in the fuel) and loss of chemically bound water (decomposition of hydrates). The influence of water pressure on decomposition has been taken into account

  2. On the decomposition of a dynamical system into non-interacting subsystems.

    Science.gov (United States)

    Rosen, R.

    1972-01-01

    It is shown that, under rather general conditions, it is possible to formally decompose the dynamics of an n-dimensional dynamical system into a number of non-interacting subsystems. It is shown that these decompositions are in general not simply related to the kinds of observational procedures in terms of which the original state variables of the system are defined. Some consequences of this construction for reductionism in biology are discussed.
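
    For a linear system the decomposition described above has a familiar concrete form: a change of coordinates built from the eigenvectors of the system matrix decouples the dynamics into non-interacting one-dimensional subsystems. The matrix in the sketch below is an arbitrary example, not one from the paper.

```python
# Illustration for dx/dt = A x: coordinates z = V^{-1} x built from the eigenvectors
# of A decouple the dynamics into independent subsystems dz_i/dt = lambda_i z_i.
# The matrix A is arbitrary and serves only as an example.
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [-2.0, -0.5, 0.0],
              [1.0, 0.0, -1.0]])

eigvals, V = np.linalg.eig(A)
A_decoupled = np.linalg.inv(V) @ A @ V   # diagonal (up to round-off) in new coordinates

print(np.round(A_decoupled, 6))          # off-diagonal entries ~ 0
print("subsystem rates:", eigvals)
```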

  3. Steganography based on pixel intensity value decomposition

    Science.gov (United States)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
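
    As a concrete example of pixel intensity value decomposition in the same family as the schemes evaluated above, the sketch below writes an 8-bit intensity in the Fibonacci (Zeckendorf) representation, producing twelve virtual bit-planes; the paper's own 16-plane representation is a different, specific scheme.

```python
# Sketch: decompose a pixel intensity into Fibonacci "virtual bit-planes".
def fib_basis(limit=255):
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= limit:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs                                 # [1, 2, 3, 5, 8, ..., 233]

def decompose(value, basis):
    bits = []
    for f in reversed(basis):                   # greedy Zeckendorf decomposition
        if f <= value:
            bits.append(1)
            value -= f
        else:
            bits.append(0)
    return bits[::-1]                           # least-significant plane first

basis = fib_basis()
planes = decompose(200, basis)
print(planes)
print("reconstructed:", sum(f for f, b in zip(basis, planes) if b))   # 200
```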

  4. Human and machine perception communication, interaction, and integration

    CERN Document Server

    Cantoni, Virginio; Setti, Alessandra

    2005-01-01

    The theme of this book on human and machine perception is communication, interaction, and integration. For each basic topic there are invited lectures, corresponding to approaches in nature and machines, and a panel discussion. The lectures present the state of the art, outlining open questions and stressing synergies among the disciplines related to perception. The panel discussions are forums for open debate. The wide spectrum of topics allows comparison and synergy and can stimulate new approaches.

  5. Wood decomposition as influenced by invertebrates.

    Science.gov (United States)

    Ulyshen, Michael D

    2016-02-01

    The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial ecosystems. Three broad conclusions can be reached from the available literature. First, wood decomposition is largely driven by microbial activity but invertebrates also play a significant role in both temperate and tropical environments. Primary mechanisms include enzymatic digestion (involving both endogenous enzymes and those produced by endo- and ectosymbionts), substrate alteration (tunnelling and fragmentation), biotic interactions and nitrogen fertilization (i.e. promoting nitrogen fixation by endosymbiotic and free-living bacteria). Second, the effects of individual invertebrate taxa or functional groups can be accelerative or inhibitory but the cumulative effect of the entire community is generally to accelerate wood decomposition, at least during the early stages of the process (most studies are limited to the first 2-3 years). Although methodological differences and design limitations preclude meta-analysis, studies aimed at quantifying the contributions of invertebrates to wood decomposition commonly attribute 10-20% of wood loss to these organisms. Finally, some taxa appear to be particularly influential with respect to promoting wood decomposition. These include large wood-boring beetles (Coleoptera) and termites (Termitoidae), especially fungus-farming macrotermitines. The presence or absence of these species may be more consequential than species richness and the influence of invertebrates is likely to vary biogeographically. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.

  6. A comparative analysis of support vector machines and extreme learning machines.

    Science.gov (United States)

    Liu, Xueyi; Gao, Chuanhou; Li, Ping

    2012-09-01

    The theory of extreme learning machines (ELMs) has recently become increasingly popular. As a new learning algorithm for single-hidden-layer feed-forward neural networks, an ELM offers the advantages of low computational cost, good generalization ability, and ease of implementation. Hence the comparison and model selection between ELMs and other kinds of state-of-the-art machine learning approaches has become significant and has attracted many research efforts. This paper performs a comparative analysis of the basic ELMs and support vector machines (SVMs) from two viewpoints that are different from previous works: one is the Vapnik-Chervonenkis (VC) dimension, and the other is their performance under different training sample sizes. It is shown that the VC dimension of an ELM is equal to the number of hidden nodes of the ELM with probability one. Additionally, their generalization ability and computational complexity are exhibited with changing training sample size. ELMs have weaker generalization ability than SVMs for small sample but can generalize as well as SVMs for large sample. Remarkably, great superiority in computational speed especially for large-scale sample problems is found in ELMs. The results obtained can provide insight into the essential relationship between them, and can also serve as complementary knowledge for their past experimental and theoretical comparisons. Copyright © 2012 Elsevier Ltd. All rights reserved.
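
    A minimal ELM of the kind compared above can be written in a few lines: the hidden-layer weights stay random and fixed, and only the linear output weights are solved by least squares. The toy regression data and layer size below are illustrative choices.

```python
# Minimal extreme learning machine for regression on a synthetic toy problem.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights (never trained)
b = rng.normal(size=n_hidden)                    # random biases

H = np.tanh(X @ W + b)                           # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights by least squares

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(np.tanh(X_test @ W + b) @ beta)            # ELM predictions
```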

  7. Finding Hierarchical and Overlapping Dense Subgraphs using Nucleus Decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Seshadhri, Comandur [The Ohio State Univ., Columbus, OH (United States); Pinar, Ali [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sariyuce, Ahmet Erdem [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Catalyurek, Umit [The Ohio State Univ., Columbus, OH (United States)

    2014-11-01

    Finding dense substructures in a graph is a fundamental graph mining operation, with applications in bioinformatics, social networks, and visualization to name a few. Yet most standard formulations of this problem (like clique, quasi-clique, k-densest subgraph) are NP-hard. Furthermore, the goal is rarely to find the "true optimum", but to identify many (if not all) dense substructures, understand their distribution in the graph, and ideally determine a hierarchical structure among them. Current dense subgraph finding algorithms usually optimize some objective, and only find a few such subgraphs without providing any hierarchy. It is also not clear how to account for overlaps in dense substructures. We define the nucleus decomposition of a graph, which represents the graph as a forest of nuclei. Each nucleus is a subgraph where smaller cliques are present in many larger cliques. The forest of nuclei is a hierarchy by containment, where the edge density increases as we proceed towards leaf nuclei. Sibling nuclei can have limited intersections, which allows for discovery of overlapping dense subgraphs. With the right parameters, the nucleus decomposition generalizes the classic notions of k-cores and k-trusses. We give provably efficient algorithms for nucleus decompositions, and empirically evaluate their behavior in a variety of real graphs. The tree of nuclei consistently gives a global, hierarchical snapshot of dense substructures, and outputs dense subgraphs of higher quality than other state-of-the-art solutions. Our algorithm can process graphs with tens of millions of edges in less than an hour.
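
    Since the nucleus decomposition generalizes k-cores and k-trusses, a small concrete anchor is the classic k-core decomposition, sketched below with networkx on an arbitrary toy graph (not data from the paper).

```python
# Classic k-core decomposition of a toy graph; the edge list is arbitrary.
import networkx as nx

G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6), (4, 6), (3, 6)])

core = nx.core_number(G)                 # maximum k such that each node lies in a k-core
print(core)
print(nx.k_core(G, k=2).edges())         # subgraph forming the 2-core
```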

  8. Decomposition of 14C-15N straw incorporated at three depths in a red-yellow latosol at Pernambuco State, Brazil

    International Nuclear Information System (INIS)

    Sampaio, E.V.S.; Salcedo, I.H.; Bettany, J.

    1990-01-01

    The decomposition of 14C-15N labelled straw, incorporated at three depths in a Red-yellow Latosol from the humid coastal zone of Pernambuco State, Brazil, was measured during two years. The straw was ground, mixed with soil portions, placed in 400-mesh bags and returned to the original field sites at 10, 30 and 60 cm depth. The decomposition was also followed in the laboratory using soil from the superficial layer. Straw carbon losses in the field reached 52% during the first month and about 80% after two years. In the first 4 months mineralization was faster in the superficial layer, with no differences thereafter. In the laboratory, mineralization was slower than in the field, reaching 34 and 50% after one month and two years, respectively. During the first month, most of the soil microbial biomass was apparently formed from straw-derived material, but the contribution from the straw decreased to 15-30% after two months and to less than 1% after two years. Straw N losses reached 25% in the first month and 40-50% after two years, with significant differences among soil depths in the first six months, when losses were higher in the deeper layers. There were no plants to absorb the mineral N, which accumulated in the soil to a concentration of up to 32 μg/g soil. The contribution of straw N to this mineral N decreased with incubation period but was always less than 12%. The C:N ratio of straw-derived material (14C-15N) decreased from 22:1 to 8-10:1 at all depths. (author)

  9. Radiolytic decomposition of 4-bromodiphenyl ether

    International Nuclear Information System (INIS)

    Tang Liang; Xu Gang; Wu Wenjing; Shi Wenyan; Liu Ning; Bai Yulei; Wu Minghong

    2010-01-01

    Polybrominated diphenyl ethers (PBDEs), which are widely spread in the environment, are mainly removed by photochemical and anaerobic microbial degradation. In this paper, the decomposition of 4-bromodiphenyl ether (BDE-3), a PBDE homologue, is investigated by electron beam irradiation of its ethanol/water solution (reduction system) and acetonitrile/water solution (oxidation system). The radiolytic products were determined by GC coupled with an electron capture detector, and the reaction rate constant of the solvated electron (e_sol^-) in the reduction system was measured as 2.7 × 10^10 L·mol^-1·s^-1 by pulse radiolysis. The results show that the BDE-3 concentration strongly affects the decomposition ratio in alkaline solution, and that the reduction system has a higher BDE-3 decomposition rate than the oxidation system. This indicates that BDE-3 was reduced by effectively capturing e_sol^- in the radiolytic process. (authors)

  10. Time Series Decomposition into Oscillation Components and Phase Estimation.

    Science.gov (United States)

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-02-01

    Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished by this model like the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes' method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and detecting the phase reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
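
    The core of the state-space approach above can be sketched with a single stochastic oscillator: a noisy 2-D rotation as the state transition, a Kalman filter to extract the component, and the arctangent of the filtered state as its phase. The frequency, noise levels, and synthetic signal below are illustrative assumptions, not the paper's settings.

```python
# Sketch: one stochastic-oscillator state-space model + Kalman filter + phase readout.
import numpy as np

dt, f, a = 1.0, 0.05, 0.99                           # sample step, frequency, damping
theta = 2 * np.pi * f * dt
F = a * np.array([[np.cos(theta), -np.sin(theta)],   # state transition: noisy rotation
                  [np.sin(theta),  np.cos(theta)]])
H = np.array([[1.0, 0.0]])                           # we observe the first coordinate
Q = 0.05 * np.eye(2)                                 # process noise covariance
R = np.array([[0.5]])                                # observation noise covariance

rng = np.random.default_rng(0)
t = np.arange(400)
y = np.sin(2 * np.pi * f * t) + 0.7 * rng.normal(size=t.size)   # noisy oscillation

x, P, states = np.zeros(2), np.eye(2), []
for obs in y:                                        # standard Kalman filter recursion
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ (np.array([obs]) - H @ x)            # update
    P = (np.eye(2) - K @ H) @ P
    states.append(x.copy())

states = np.array(states)
phase = np.arctan2(states[:, 1], states[:, 0])       # instantaneous phase of the component
print("extracted component (first 5):", np.round(states[:5, 0], 3))
print("phase (first 5):", np.round(phase[:5], 3))
```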

  11. Simulations of Quantum Turing Machines by Quantum Multi-Stack Machines

    OpenAIRE

    Qiu, Daowen

    2005-01-01

    As is well known, in classical computation, Turing machines, circuits, multi-stack machines, and multi-counter machines are equivalent, that is, they can simulate each other in polynomial time. In quantum computation, Yao [11] first proved that for any quantum Turing machine $M$, there exists a quantum Boolean circuit $(n,t)$-simulating $M$, where $n$ denotes the length of input strings, and $t$ is the number of move steps before the machine stops. However, the simulations of quantum Turing ma...

  12. Active learning machine learns to create new quantum experiments.

    Science.gov (United States)

    Melnikov, Alexey A; Poulsen Nautrup, Hendrik; Krenn, Mario; Dunjko, Vedran; Tiersch, Markus; Zeilinger, Anton; Briegel, Hans J

    2018-02-06

    How useful can machine learning be in a quantum laboratory? Here we raise the question of the potential of intelligent machines in the context of scientific research. A major motivation for the present work is the unknown reachability of various entanglement classes in quantum experiments. We investigate this question by using the projective simulation model, a physics-oriented approach to artificial intelligence. In our approach, the projective simulation system is challenged to design complex photonic quantum experiments that produce high-dimensional entangled multiphoton states, which are of high interest in modern quantum experiments. The artificial intelligence system learns to create a variety of entangled states and improves the efficiency of their realization. In the process, the system autonomously (re)discovers experimental techniques which are only now becoming standard in modern quantum optical experiments-a trait which was not explicitly demanded from the system but emerged through the process of learning. Such features highlight the possibility that machines could have a significantly more creative role in future research.

  13. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

    Full Text Available Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed the state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide the predictive model that is versatile in the number of variables. We collected a large amount of real single transaction data of the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving prediction performance.

  14. Quantum cloning of mixed states in symmetric subspaces

    International Nuclear Information System (INIS)

    Fan Heng

    2003-01-01

    Quantum-cloning machine for arbitrary mixed states in symmetric subspaces is proposed. This quantum-cloning machine can be used to copy part of the output state of another quantum-cloning machine and is useful in quantum computation and quantum information. The shrinking factor of this quantum cloning achieves the well-known upper bound. When the input is identical pure states, two different fidelities of this cloning machine are optimal

  15. The Effects of Different Electrode Types for Obtaining Surface Machining Shape on Shape Memory Alloy Using Electrochemical Machining

    Science.gov (United States)

    Choi, S. G.; Kim, S. H.; Choi, W. K.; Moon, G. C.; Lee, E. S.

    2017-06-01

    Shape memory alloy (SMA) is an important material for the medical and aerospace industries because of the shape memory effect, in which a deformed alloy recovers its original state through the application of temperature or stress. Modern applications demand dimensional stability in parts, and electrochemical machining is one method for meeting these requirements. Some SMA parts require fine patterns, for which electrochemical machining is well suited. For precision electrochemical machining with electrodes of different shapes, the current density must be controlled precisely, and a suitable electrode shape is required. Precise square holes can be obtained on the SMA when an insulation layer suppresses the unnecessary current between electrode and workpiece; controlling this unnecessary current to obtain the desired shape would be a significant contribution to the medical and aerospace industries. A square electrode without an insulation layer produces inexact square holes because of the unnecessary current, whereas an electrode insulated only on its sides produces precise square holes. The removal rate is also improved with the insulated electrode because the insulation layer concentrates the applied current in the machining zone.

  16. Tensor decompositions for the analysis of atomic resolution electron energy loss spectra

    Energy Technology Data Exchange (ETDEWEB)

    Spiegelberg, Jakob; Rusz, Ján [Department of Physics and Astronomy, Uppsala University, Box 516, S-751 20 Uppsala (Sweden); Pelckmans, Kristiaan [Department of Information Technology, Uppsala University, Box 337, S-751 05 Uppsala (Sweden)

    2017-04-15

    A selection of tensor decomposition techniques is presented for the detection of weak signals in electron energy loss spectroscopy (EELS) data. The focus of the analysis lies on the correct representation of the simulated spatial structure. An analysis scheme for EEL spectra combining two-dimensional and n-way decomposition methods is proposed. In particular, the performance of robust principal component analysis (ROBPCA), Tucker Decompositions using orthogonality constraints (Multilinear Singular Value Decomposition (MLSVD)) and Tucker decomposition without imposed constraints, canonical polyadic decomposition (CPD) and block term decompositions (BTD) on synthetic as well as experimental data is examined. - Highlights: • A scheme for compression and analysis of EELS or EDX data is proposed. • Several tensor decomposition techniques are presented for BSS on hyperspectral data. • Robust PCA and MLSVD are discussed for denoising of raw data.
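
    One of the methods listed above, the multilinear singular value decomposition (MLSVD/HOSVD), can be sketched in plain NumPy: take the truncated left singular vectors of each mode unfolding as factor matrices and project the data cube onto them to obtain the core tensor. The random data cube and ranks below are placeholders for a real EELS spectrum image.

```python
# Pure-NumPy MLSVD/HOSVD sketch for a 3-way (x, y, energy-loss) data cube.
import numpy as np

rng = np.random.default_rng(0)
cube = rng.normal(size=(20, 20, 100))                # x, y, energy channels (synthetic)

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Factor matrices = truncated left singular vectors of each mode unfolding.
ranks = (5, 5, 8)
U = [np.linalg.svd(unfold(cube, m), full_matrices=False)[0][:, :r]
     for m, r in enumerate(ranks)]

# Core tensor: project the cube onto the three factor matrices, mode by mode.
core = cube
for m, Um in enumerate(U):
    core = np.moveaxis(np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1), 0, m)

print("core shape:", core.shape)                     # (5, 5, 8)
```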

  17. Comparison of decomposition rates between autopsied and non-autopsied human remains.

    Science.gov (United States)

    Bates, Lennon N; Wescott, Daniel J

    2016-04-01

    Penetrating trauma has been cited as a significant factor in the rate of decomposition. Therefore, penetrating trauma may have an effect on estimations of time-since-death in medicolegal investigations and on research examining decomposition rates and processes when autopsied human bodies are used. The goal of this study was to determine if there are differences in the rate of decomposition between autopsied and non-autopsied human remains in the same environment. The purpose is to shed light on how large incisions, such as those from a thoracoabdominal autopsy, affect time-since-death estimations and research on the rate of decomposition that uses both autopsied and non-autopsied human remains. In this study, 59 non-autopsied and 24 autopsied bodies were studied. The number of accumulated degree days required to reach each decomposition stage was then compared between autopsied and non-autopsied remains. Additionally, both types of bodies were examined for seasonal differences in decomposition rates. As temperature affects the rate of decomposition, this study also compared the internal body temperatures of autopsied and non-autopsied remains to see if differences between the two may be leading to differential decomposition. For this portion of this study, eight non-autopsied and five autopsied bodies were investigated. Internal temperature was collected once a day for two weeks. The results showed that the difference in the decomposition rate between autopsied and non-autopsied remains was not statistically significant, though the average ADD needed to reach each stage of decomposition was slightly lower for autopsied bodies than non-autopsied bodies. There was also no significant difference between autopsied and non-autopsied bodies in the rate of decomposition by season or in internal temperature. Therefore, this study suggests that it is unnecessary to separate autopsied and non-autopsied remains when studying gross stages of human decomposition in Central Texas.

  18. The platinum catalysed decomposition of hydrazine in acidic media

    International Nuclear Information System (INIS)

    Ananiev, A.V.; Tananaev, I.G.; Brossard, Ph.; Broudic, J.C.

    2000-01-01

    Kinetic study of the hydrazine decomposition in the solutions of HClO4, H2SO4 and HNO3 in the presence of Pt/SiO2 catalyst has been undertaken. It was shown that the kinetics of the hydrazine catalytic decomposition in HClO4 and H2SO4 are identical. The process is determined by the heterogeneous catalytic auto-decomposition of N2H4 on the catalyst's surface. The platinum catalysed hydrazine decomposition in the nitric acid solutions is a complex process, including heterogeneous catalytic auto-decomposition of N2H4, reaction of hydrazine with catalytically generated nitrous acid and the catalytic oxidation of hydrazine by nitric acid. The kinetic parameters of these reactions have been determined. The contribution of each reaction in the total process is determined by the liquid phase composition and by the temperature. (authors)

  19. In situ XAS of the solvothermal decomposition of dithiocarbamate complexes

    NARCIS (Netherlands)

    Islam, H.-U.; Roffey, A.; Hollingsworth, N.; Catlow, R.; Wolthers, M.; de Leeuw, N.H.; Bras, W.; Sankar, G.; Hogarth, G.

    2012-01-01

    An in situ XAS study of the solvothermal decomposition of iron and nickel dithiocarbamate complexes was performed in order to gain understanding of the decomposition mechanisms. This work has given insight into the steps involved in the decomposition, showing variation in reaction pathways between

  20. High Performance Polar Decomposition on Distributed Memory Systems

    KAUST Repository

    Sukkari, Dalal E.; Ltaief, Hatem; Keyes, David E.

    2016-01-01

    The polar decomposition of a dense matrix is an important operation in linear algebra. It can be directly calculated through the singular value decomposition (SVD) or iteratively using the QR dynamically-weighted Halley algorithm (QDWH). The former
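
    The SVD route mentioned above is short enough to sketch directly: from A = U S V^T, the orthogonal polar factor is U V^T and the symmetric factor is V S V^T (scipy.linalg.polar wraps an equivalent computation). The test matrix below is arbitrary.

```python
# Polar decomposition A = U_p H computed via the SVD.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))

U, s, Vt = np.linalg.svd(A)
U_p = U @ Vt                              # orthogonal polar factor
H = Vt.T @ np.diag(s) @ Vt                # symmetric positive semi-definite factor

print(np.allclose(A, U_p @ H))            # True
print(np.allclose(U_p.T @ U_p, np.eye(5)))
```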

  1. Fast approximate convex decomposition using relative concavity

    KAUST Repository

    Ghosh, Mukulika; Amato, Nancy M.; Lu, Yanyan; Lien, Jyh-Ming

    2013-01-01

    Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.

  2. Fast approximate convex decomposition using relative concavity

    KAUST Repository

    Ghosh, Mukulika

    2013-02-01

    Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.

  3. Climate history shapes contemporary leaf litter decomposition

    Science.gov (United States)

    Michael S. Strickland; Ashley D. Keiser; Mark A. Bradford

    2015-01-01

    Litter decomposition is mediated by multiple variables, of which climate is expected to be a dominant factor at global scales. However, like other organisms, traits of decomposers and their communities are shaped not just by the contemporary climate but also their climate history. Whether or not this affects decomposition rates is underexplored. Here we source...

  4. Nuclear power plant risk assembly and decomposition for risk management

    International Nuclear Information System (INIS)

    Iden, D.C.

    1985-01-01

    The state-of-the-art method for analyzing the risk from nuclear power plants is probabilistic risk assessment (PRA). The intermediate results of a PRA are first assembled to quantify the risk from operating a nuclear power plant in the form of (1) core damage (or core melt) frequency, (2) plant damage state frequencies, (3) release category frequencies, and (4) the frequency of exceeding specific levels of offsite consequences. Once the overall PRA results have been quantified, the next step is to decompose those results into the individual contributors to each of the four forms of risk in some rank order. The way in which the PRA model is set up to assemble and decompose the plant risk determines the ease and usefulness of the PRA model as a risk management tool for evaluating perturbations to the PRA model. These perturbations can take the form of technical specification changes, hardware modifications, procedural changes, etc. The matrix formalism developed by Dr. Stan Kaplan for risk assembly and decomposition represents a significant breakthrough in making the PRA model an effective risk management tool. The key to understanding the matrix formalism and making it a useful tool for managing nuclear power plant risk is the structure of the PRA model. PRA risk model structure and decomposition of the risk results are discussed with the Seabrook PRA as an example
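
    The matrix formalism for risk assembly mentioned above can be caricatured as propagating initiating-event frequencies through conditional-probability matrices to plant damage states and then to release categories. The numbers in the sketch below are invented for illustration and bear no relation to any actual plant.

```python
# Toy sketch of matrix-based risk assembly; all frequencies and probabilities are invented.
import numpy as np

init_freq = np.array([1e-2, 3e-3, 5e-4])       # /yr, three initiating events

# P_damage[i, j] = P(plant damage state j | initiating event i)
P_damage = np.array([[1e-4, 1e-5],
                     [5e-4, 1e-4],
                     [1e-3, 5e-4]])

# P_release[j, k] = P(release category k | plant damage state j)
P_release = np.array([[0.9, 0.1],
                      [0.5, 0.5]])

damage_freq = init_freq @ P_damage             # plant damage state frequencies
release_freq = damage_freq @ P_release         # release category frequencies
print("plant damage state frequencies:", damage_freq)
print("release category frequencies:  ", release_freq)
```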

  5. First-principles calculated decomposition pathways for LiBH4 nanoclusters

    Science.gov (United States)

    Huang, Zhi-Quan; Chen, Wei-Chih; Chuang, Feng-Chuan; Majzoub, Eric H.; Ozoliņš, Vidvuds

    2016-05-01

    We analyze thermodynamic stability and decomposition pathways of LiBH4 nanoclusters using grand-canonical free-energy minimization based on total energies and vibrational frequencies obtained from density-functional theory (DFT) calculations. We consider (LiBH4)n nanoclusters with n = 2 to 12 as reactants, while the possible products include (Li)n, (B)n, (LiB)n, (LiH)n, and Li2BnHn; off-stoichiometric LinBnHm (m ≤ 4n) clusters were considered for n = 2, 3, and 6. Cluster ground-state configurations have been predicted using prototype electrostatic ground-state (PEGS) and genetic algorithm (GA) based structural optimizations. Free-energy calculations show hydrogen release pathways markedly differ from those in bulk LiBH4. While experiments have found that the bulk material decomposes into LiH and B, with Li2B12H12 as a kinetically inhibited intermediate phase, (LiBH4)n nanoclusters with n ≤ 12 are predicted to decompose into mixed LinBn clusters via a series of intermediate clusters of LinBnHm (m ≤ 4n). The calculated pressure-composition isotherms and temperature-pressure isobars exhibit sloping plateaus due to finite size effects on reaction thermodynamics. Generally, decomposition temperatures of free-standing clusters are found to increase with decreasing cluster size due to thermodynamic destabilization of reaction products.

  6. Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.

    Science.gov (United States)

    Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin

    2017-11-15

    Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  7. Decomposition of dioxin analogues and ablation study for carbon nanotube

    International Nuclear Information System (INIS)

    Yamauchi, Toshihiko

    2002-01-01

    Two application studies associated with the free electron laser are presented separately, under the titles 'Decomposition of Dioxin Analogues' and 'Ablation Study for Carbon Nanotube'. The decomposition of dioxin analogues by infrared (IR) laser irradiation includes thermal destruction and multiple-photon dissociation. It is important to choose a laser wavelength that is strongly absorbed for the decomposition. Thermal decomposition takes place under irradiation at low IR laser power. Considering the model of thermal decomposition, it is proposed that adjacent water molecules assist the decomposition of dioxin analogues in addition to the thermal decomposition by direct laser absorption. The laser ablation study is performed with the aim of carbon nanotube synthesis. The vapor produced by ablation is weakly ionized at laser powers of several hundred megawatts. The internal energy of the plasma is retained more than 8.5 times longer in the enclosed gas than in vacuum. Clusters were produced from the weakly ionized gas in the enclosed gas; at low laser power they consist of coarser particles, while at high power they consist of finer particles. (J.P.N.)

  8. Decomposition of oxalate precipitates by photochemical reaction

    International Nuclear Information System (INIS)

    Jae-Hyung Yoo; Eung-Ho Kim

    1999-01-01

    A photo-radiation method was applied to decompose oxalate precipitates so that they can be dissolved into dilute nitric acid. This work has been studied as a part of the partitioning of minor actinides. Minor actinides can be recovered from high-level wastes as oxalate precipitates, but they tend to be coprecipitated together with lanthanide oxalates. This requires another partitioning step for mutual separation of the actinide and lanthanide groups. In this study, therefore, experimental work on the photochemical decomposition of oxalate was carried out to prove its feasibility as a step of the partitioning process. The decomposition of oxalic acid in the presence of nitric acid was performed in advance in order to understand the mechanistic behaviour of oxalate destruction, and then the decomposition of neodymium oxalate, which was chosen as a stand-in compound representing minor actinide and lanthanide oxalates, was examined. The decomposition rate of neodymium oxalate was found to be 0.003 mole/hr under the conditions of 0.5 M HNO3 and room temperature when a mercury lamp was used as the light source. (author)

  9. Abstract decomposition theorem and applications

    CERN Document Server

    Grossberg, R; Grossberg, Rami; Lessmann, Olivier

    2005-01-01

    Let K be an Abstract Elementary Class. Under the assumptions that K has a nicely behaved forking-like notion, regular types and existence of some prime models we establish a decomposition theorem for such classes. The decomposition implies a main gap result for the class K. The setting is general enough to cover \aleph_0-stable first-order theories (proved by Shelah in 1982), Excellent Classes of atomic models of a first-order theory (proved by Grossberg and Hart in 1987) and the class of submodels of a large sequentially homogeneous \aleph_0-stable model (which is new).

  10. Machine tool structures

    CERN Document Server

    Koenigsberger, F

    1970-01-01

    Machine Tool Structures, Volume 1 deals with fundamental theories and calculation methods for machine tool structures. Experimental investigations into stiffness are discussed, along with the application of the results to the design of machine tool structures. Topics covered range from static and dynamic stiffness to chatter in metal cutting, stability in machine tools, and deformations of machine tool structures. This volume is divided into three sections and opens with a discussion on stiffness specifications and the effect of stiffness on the behavior of the machine under forced vibration c

  11. Forest products decomposition in municipal solid waste landfills

    International Nuclear Information System (INIS)

    Barlaz, Morton A.

    2006-01-01

    Cellulose and hemicellulose are present in paper and wood products and are the dominant biodegradable polymers in municipal waste. While their conversion to methane in landfills is well documented, there is little information on the rate and extent of decomposition of individual waste components, particularly under field conditions. Such information is important for the landfill carbon balance as methane is a greenhouse gas that may be recovered and converted to a CO2-neutral source of energy, while non-degraded cellulose and hemicellulose are sequestered. This paper presents a critical review of research on the decomposition of cellulosic wastes in landfills and identifies additional work that is needed to quantify the ultimate extent of decomposition of individual waste components. Cellulose to lignin ratios as low as 0.01-0.02 have been measured for well decomposed refuse, with corresponding lignin concentrations of over 80% due to the depletion of cellulose and resulting enrichment of lignin. Only a few studies have even tried to address the decomposition of specific waste components at field-scale. Long-term controlled field experiments with supporting laboratory work will be required to measure the ultimate extent of decomposition of individual waste components.

  12. The trait contribution to wood decomposition rates of 15 Neotropical tree species.

    Science.gov (United States)

    van Geffen, Koert G; Poorter, Lourens; Sass-Klaassen, Ute; van Logtestijn, Richard S P; Cornelissen, Johannes H C

    2010-12-01

    The decomposition of dead wood is a critical uncertainty in models of the global carbon cycle. Despite this, relatively few studies have focused on dead wood decomposition, with a strong bias to higher latitudes. Especially the effect of interspecific variation in species traits on differences in wood decomposition rates remains unknown. In order to fill these gaps, we applied a novel method to study long-term wood decomposition of 15 tree species in a Bolivian semi-evergreen tropical moist forest. We hypothesized that interspecific differences in species traits are important drivers of variation in wood decomposition rates. Wood decomposition rates (fractional mass loss) varied between 0.01 and 0.31 yr⁻¹. We measured 10 different chemical, anatomical, and morphological traits for all species. The species' average traits were useful predictors of wood decomposition rates, particularly the average diameter (dbh) of the tree species (R² = 0.41). Lignin concentration further increased the proportion of explained inter-specific variation in wood decomposition (both negative relations, cumulative R² = 0.55), although it did not significantly explain variation in wood decomposition rates if considered alone. When dbh values of the actual dead trees sampled for decomposition rate determination were used as a predictor variable, the final model (including dead tree dbh and lignin concentration) explained even more variation in wood decomposition rates (R² = 0.71), underlining the importance of dbh in wood decomposition. Other traits, including wood density, wood anatomical traits, macronutrient concentrations, and the amount of phenolic extractives could not significantly explain the variation in wood decomposition rates. The surprising results of this multi-species study, in which for the first time a large set of traits is explicitly linked to wood decomposition rates, merits further testing in other forest ecosystems.

  13. Application of Machine Learning to Rotorcraft Health Monitoring

    Science.gov (United States)

    Cody, Tyler; Dempsey, Paula J.

    2017-01-01

    Machine learning is a powerful tool for data exploration and model building with large data sets. This project aimed to use machine learning techniques to explore the inherent structure of data from rotorcraft gear tests, relationships between features and damage states, and to build a system for predicting gear health for future rotorcraft transmission applications. Classical machine learning techniques are difficult, if not irresponsible, to apply to time series data because many make the assumption of independence between samples. To overcome this, Hidden Markov Models were used to create a binary classifier for identifying scuffing transitions and Recurrent Neural Networks were used to leverage long distance relationships in predicting discrete damage states. When combined in a workflow, where the binary classifier acted as a filter for the fatigue monitor, the system was able to demonstrate accuracy in damage state prediction and scuffing identification. The time dependent nature of the data restricted data exploration to collecting and analyzing data from the model selection process. The limited amount of available data was unable to give useful information, and the division of training and testing sets tended to heavily influence the scores of the models across combinations of features and hyper-parameters. This work built a framework for tracking scuffing and fatigue on streaming data and demonstrates that machine learning has much to offer rotorcraft health monitoring by using Bayesian learning and deep learning methods to capture the time dependent nature of the data. Suggested future work is to implement the framework developed in this project using a larger variety of data sets to test the generalization capabilities of the models and allow for data exploration.
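
    The binary-classifier step described above (a Hidden Markov Model flagging scuffing transitions) can be sketched with a two-state Gaussian HMM on a condition-indicator series. The hmmlearn package and the synthetic indicator below are assumptions for illustration; they are not the project's data or code.

```python
# Sketch: two-state Gaussian HMM decoding "normal" vs "scuffing" regimes; data is synthetic.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.3, size=300)               # healthy regime
scuffing = rng.normal(2.0, 0.6, size=100)             # shifted, noisier regime
indicator = np.concatenate([normal, scuffing]).reshape(-1, 1)

hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100, random_state=0)
hmm.fit(indicator)
states = hmm.predict(indicator)                        # 0/1 label per time step

transition = int(np.argmax(np.diff(states) != 0)) + 1  # first decoded regime change
print("first decoded regime change at sample", transition)
```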

  14. Thermoanalytical study of the decomposition of yttrium trifluoroacetate thin films

    International Nuclear Information System (INIS)

    Eloussifi, H.; Farjas, J.; Roura, P.; Ricart, S.; Puig, T.; Obradors, X.; Dammak, M.

    2013-01-01

    We present the use of thermal analysis techniques to study the decomposition of yttrium trifluoroacetate thin films. In situ analysis was done by means of thermogravimetry, differential thermal analysis, and evolved gas analysis. Solid residues at different stages and the final product have been characterized by X-ray diffraction and scanning electron microscopy. The thermal decomposition of yttrium trifluoroacetate thin films results in the formation of yttria and presents the same succession of intermediates as the decomposition of powders; however, yttria and all intermediates but YF3 appear at significantly lower temperatures. We also observe a dependence on the water partial pressure that was not observed in the decomposition of yttrium trifluoroacetate powders. Finally, a dependence on the substrate chemical composition is discerned. - Highlights: • Thermal decomposition of yttrium trifluoroacetate films. • Very different behavior of films with respect to powders. • Decomposition is enhanced in films. • Application of thermal analysis to chemical solution deposition synthesis of films

  15. Moved range monitor of a refueling machine

    International Nuclear Information System (INIS)

    Nakajima, Masaaki; Sakanaka, Tadao; Kayano, Hiroyuki.

    1976-01-01

    Purpose: To incorporate light-receiving and light-emitting elements in a face monitor, thereby increasing accuracy and reliability and facilitating handling during refueling of a BWR power plant. Constitution: In the present invention, the refueling machine and the face-monitoring light-receiving and light-emitting elements are coupled analogously, so that the elements move along a path analogous to the route travelled by the refueling machine. A shielding plate is positioned between the light-receiving and light-emitting elements and is machined so as to lie outside the range of action. The range of action of the refueling machine can then be monitored from the light-receiving state of the receiving element. Since the present invention utilizes transmitted light as described above, positions can be detected more accurately than with a mechanical switch. In addition, the detection section is of the non-contact type and the light-receiving element comprises a photocell, so the service life is extended and reliability is high. (Nakamura, S.)

  16. Intrinsic Scene Decomposition from RGB-D Images

    KAUST Repository

    Hachama, Mohammed; Ghanem, Bernard; Wonka, Peter

    2015-01-01

    In this paper, we address the problem of computing an intrinsic decomposition of the colors of a surface into an albedo and a shading term. The surface is reconstructed from a single or multiple RGB-D images of a static scene obtained from different views. We thereby extend and improve existing works in the area of intrinsic image decomposition. In a variational framework, we formulate the problem as a minimization of an energy composed of two terms: a data term and a regularity term. The first term is related to the image formation process and expresses the relation between the albedo, the surface normals, and the incident illumination. We use an affine shading model, a combination of a Lambertian model, and an ambient lighting term. This model is relevant for Lambertian surfaces. When available, multiple views can be used to handle view-dependent non-Lambertian reflections. The second term contains an efficient combination of l2 and l1-regularizers on the illumination vector field and albedo respectively. Unlike most previous approaches, especially Retinex-like techniques, these terms do not depend on the image gradient or texture, thus reducing the mixing shading/reflectance artifacts and leading to better results. The obtained non-linear optimization problem is efficiently solved using a cyclic block coordinate descent algorithm. Our method outperforms a range of state-of-the-art algorithms on a popular benchmark dataset.

  17. Intrinsic Scene Decomposition from RGB-D Images

    KAUST Repository

    Hachama, Mohammed

    2015-12-07

    In this paper, we address the problem of computing an intrinsic decomposition of the colors of a surface into an albedo and a shading term. The surface is reconstructed from one or multiple RGB-D images of a static scene obtained from different views. We thereby extend and improve existing works in the area of intrinsic image decomposition. In a variational framework, we formulate the problem as a minimization of an energy composed of two terms: a data term and a regularity term. The first term is related to the image formation process and expresses the relation between the albedo, the surface normals, and the incident illumination. We use an affine shading model: a combination of a Lambertian model and an ambient lighting term. This model is relevant for Lambertian surfaces. When available, multiple views can be used to handle view-dependent non-Lambertian reflections. The second term contains an efficient combination of l2- and l1-regularizers on the illumination vector field and the albedo, respectively. Unlike most previous approaches, especially Retinex-like techniques, these terms do not depend on the image gradient or texture, thus reducing shading/reflectance mixing artifacts and leading to better results. The obtained non-linear optimization problem is efficiently solved using a cyclic block coordinate descent algorithm. Our method outperforms a range of state-of-the-art algorithms on a popular benchmark dataset.

  18. Joint Matrices Decompositions and Blind Source Separation

    Czech Academy of Sciences Publication Activity Database

    Chabriel, G.; Kleinsteuber, M.; Moreau, E.; Shen, H.; Tichavský, Petr; Yeredor, A.

    2014-01-01

    Vol. 31, No. 3 (2014), pp. 34-43. ISSN 1053-5888. R&D Projects: GA ČR GA102/09/1278. Institutional support: RVO:67985556. Keywords: joint matrices decomposition * tensor decomposition * blind source separation. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 5.852, year: 2014. http://library.utia.cas.cz/separaty/2014/SI/tichavsky-0427607.pdf

  19. Extreme learning machine for reduced order modeling of turbulent geophysical flows

    Science.gov (United States)

    San, Omer; Maulik, Romit

    2018-04-01

    We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.
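
    As a rough illustration of the extreme learning machine idea (a single hidden layer with random, fixed weights and a linear read-out obtained from a closed-form least-squares solve rather than backpropagation), the Python sketch below regresses a closure target from resolved POD modal coefficients. The array shapes, hidden-layer size, ridge constant, and variable names are assumptions made here for illustration, not the authors' implementation.

        import numpy as np

        def train_elm(X, Y, n_hidden=50, ridge=1e-6, seed=0):
            """Fit an extreme learning machine: random hidden layer, linear read-out."""
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, never trained
            b = rng.normal(size=n_hidden)                # random biases, never trained
            H = np.tanh(X @ W + b)                       # hidden-layer activations
            # Output weights from a ridge-regularized least-squares solve (no backpropagation)
            beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
            return W, b, beta

        def predict_elm(X, W, b, beta):
            """Evaluate the trained network on new modal coefficients."""
            return np.tanh(X @ W + b) @ beta

        # Hypothetical usage: X holds resolved POD modal coefficients at training snapshots,
        # Y the corresponding eddy-viscosity closure targets; predict_elm would then be
        # evaluated at every step of the forward reduced-order-model integration.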

  20. Electricity of machine tool

    International Nuclear Information System (INIS)

    Gijeon media editorial department

    1977-10-01

    This book is divided into three parts. The first part deals with electrical machinery, covering topics from generators to motors, the motor as the power source of a machine tool, and electrical equipment for machine tools such as main-circuit switches, automatic switching devices, knife switches and push buttons, snap switches, protection devices, timers, solenoids, and rectifiers. The second part handles wiring diagrams, including the basic electrical circuits of machine tools and the wiring diagrams of individual machines such as milling machines, planers, and grinding machines. The third part introduces machine fault diagnosis, giving practical solutions according to the diagnosis together with diagnostic methods based on voltage and resistance measurements with a tester.