WorldWideScience

Sample records for energy minimized sequences

  1. Free energy minimization to predict RNA secondary structures and computational RNA design.

    Science.gov (United States)

    Churkin, Alexander; Weinbrand, Lina; Barash, Danny

    2015-01-01

    Determining the RNA secondary structure from sequence data by computational predictions is a long-standing problem. Its solution has been approached in two distinctive ways. If a multiple sequence alignment of a collection of homologous sequences is available, the comparative method uses phylogeny to determine conserved base pairs that are more likely to form as a result of billions of years of evolution than by chance. In the case of single sequences, recursive algorithms that compute free energy structures by using empirically derived energy parameters have been developed. This latter approach of RNA folding prediction by energy minimization is widely used to predict RNA secondary structure from sequence. For a significant number of RNA molecules, the secondary structure of the RNA molecule is indicative of its function, and its computational prediction by minimizing its free energy is important for its functional analysis. The standard method for free energy minimization to predict RNA secondary structures is dynamic programming, although other optimization methods have also been developed alongside the empirically derived energy parameters. In this chapter, we introduce and illustrate by examples the approach of free energy minimization to predict RNA secondary structures.
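
    The dynamic-programming idea behind such folding programs can be sketched with the classic Nussinov-style recursion. The sketch below maximizes base pairs, scoring each pair as a fixed "energy" of -1, rather than using the full empirically derived nearest-neighbor parameters; the function name, scoring, and minimum loop size are our own simplifications, so this illustrates the recursion, not a production predictor.

```python
# Nussinov-style dynamic programming: a simplified stand-in for full
# free-energy minimization, scoring -1 per Watson-Crick or wobble pair.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    n = len(seq)
    # E[i][j] = minimal "energy" (negative pair count) of subsequence i..j
    E = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = E[i + 1][j]  # case 1: position i is unpaired
            for k in range(i + min_loop + 1, j + 1):  # case 2: i pairs with k
                if (seq[i], seq[k]) in PAIRS:
                    left = E[i + 1][k - 1]
                    right = E[k + 1][j] if k + 1 <= j else 0
                    best = min(best, left + right - 1)
            E[i][j] = best
    return -E[0][n - 1]  # number of base pairs in the optimal structure

print(nussinov("GGGAAAUCCC"))  # 3: three G-C pairs closing an AAAU hairpin loop
```

    Real tools such as Mfold or RNAfold use the same divide-and-conquer structure but score stacks, loops, and bulges with measured thermodynamic parameters instead of a flat pair bonus.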

  2. Predicting Consensus Structures for RNA Alignments Via Pseudo-Energy Minimization

    Directory of Open Access Journals (Sweden)

    Junilda Spirollari

    2009-01-01

    Full Text Available Thermodynamic processes with free energy parameters are often used in algorithms that solve the free energy minimization problem to predict secondary structures of single RNA sequences. While results from these algorithms are promising, an observation is that single sequence-based methods have moderate accuracy and more information is needed to improve on RNA secondary structure prediction, such as covariance scores obtained from multiple sequence alignments. We present in this paper a new approach to predicting the consensus secondary structure of a set of aligned RNA sequences via pseudo-energy minimization. Our tool, called RSpredict, takes into account sequence covariation and employs effective heuristics for accuracy improvement. RSpredict accepts, as input data, a multiple sequence alignment in FASTA or ClustalW format and outputs the consensus secondary structure of the input sequences in both the Vienna style Dot Bracket format and the Connectivity Table format. Our method was compared with some widely used tools including KNetFold, Pfold and RNAalifold. A comprehensive test on different datasets including Rfam sequence alignments and a multiple sequence alignment obtained from our study on the Drosophila X chromosome reveals that RSpredict is competitive with the existing tools on the tested datasets. RSpredict is freely available online as a web server and also as a jar file for download at http://datalab.njit.edu/biology/RSpredict.
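
    One common covariation score of the kind consensus-structure methods combine with thermodynamic terms is the mutual information between two alignment columns: compensatory substitutions that preserve base pairing show up as high MI. A hedged, generic sketch (this is not RSpredict's actual scoring code; the alignment and function are illustrative):

```python
import math
from collections import Counter

def column_mi(alignment, i, j):
    """Mutual information (in bits) between columns i and j of an
    alignment -- a simple covariation score for candidate base pairs."""
    col_i = [row[i] for row in alignment]
    col_j = [row[j] for row in alignment]
    n = len(alignment)
    p_i = Counter(col_i)            # marginal counts for column i
    p_j = Counter(col_j)            # marginal counts for column j
    p_ij = Counter(zip(col_i, col_j))  # joint counts
    mi = 0.0
    for (a, b), c in p_ij.items():
        pab = c / n
        mi += pab * math.log2(pab * n * n / (p_i[a] * p_j[b]))
    return mi

aln = ["GCCAAAGGC",
       "GGCAAAGCC",
       "CGCAAAGCG",
       "CCGAAACGG"]
print(column_mi(aln, 0, 8))  # 1.0: columns 0 and 8 covary (G<->C swaps)
print(column_mi(aln, 3, 4))  # 0.0: invariant columns carry no MI signal
```

    High-MI column pairs are evidence for a conserved base pair even when the individual nucleotides vary across the alignment.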

  3. The Use of Trust Regions in Kohn-Sham Total Energy Minimization

    International Nuclear Information System (INIS)

    Yang, Chao; Meza, Juan C.; Wang, Lin-wang

    2006-01-01

    The Self Consistent Field (SCF) iteration, widely used for computing the ground state energy and the corresponding single particle wave functions associated with a many-electron atomistic system, is viewed in this paper as an optimization procedure that minimizes the Kohn-Sham total energy indirectly by minimizing a sequence of quadratic surrogate functions. We point out the similarity and difference between the total energy and the surrogate, and show how the SCF iteration can fail when the minimizer of the surrogate produces an increase in the KS total energy. A trust region technique is introduced as a way to restrict the update of the wave functions within a small neighborhood of an approximate solution at which the gradient of the total energy agrees with that of the surrogate. The use of trust regions in SCF is not new. However, it has been observed that directly applying a trust region based SCF (TRSCF) to the Kohn-Sham total energy often leads to slow convergence. We propose to use TRSCF within a direct constrained minimization (DCM) algorithm we developed. The key ingredients of the DCM algorithm involve projecting the total energy function into a sequence of subspaces of small dimensions and seeking the minimizer of the total energy function within each subspace. The minimizer of a subspace energy function, which is computed by TRSCF, not only provides a search direction along which the KS total energy function decreases but also gives an optimal 'step-length' that yields a sufficient decrease in total energy. A numerical example is provided to demonstrate that the combination of TRSCF and DCM is more efficient than SCF alone.
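
    The restriction step at the heart of such trust-region schemes can be illustrated with the textbook Cauchy point: minimize a quadratic surrogate m(p) = g^T p + 0.5 p^T B p along the steepest-descent direction while keeping ||p|| within the trust radius. This is a generic toy sketch of the restriction idea, not the TRSCF/DCM code itself; the vectors and matrix below are made up.

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Minimize m(p) = g.p + 0.5 p^T B p along -g, subject to ||p|| <= delta.
    Standard Cauchy-point formula from trust-region theory."""
    gBg = g @ B @ g
    gnorm = np.linalg.norm(g)
    if gBg <= 0:
        tau = 1.0  # model decreases without bound along -g: go to the boundary
    else:
        tau = min(1.0, gnorm**3 / (delta * gBg))
    return -tau * (delta / gnorm) * g

g = np.array([4.0, -2.0])                  # gradient of the surrogate
B = np.array([[2.0, 0.0], [0.0, 10.0]])    # surrogate Hessian
p_small = cauchy_point(g, B, delta=0.1)    # radius active: step hits boundary
p_large = cauchy_point(g, B, delta=10.0)   # unconstrained Cauchy step inside
print(np.linalg.norm(p_small))             # 0.1: restricted to the trust region
```

    When the surrogate disagrees with the true energy, shrinking delta forces smaller, safer updates, which is exactly the failure mode of plain SCF that the trust region guards against.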

  4. A constrained optimization algorithm for total energy minimization in electronic structure calculations

    International Nuclear Information System (INIS)

    Yang, Chao; Meza, Juan C.; Wang, Lin-wang

    2006-01-01

    A new direct constrained optimization algorithm for minimizing the Kohn-Sham (KS) total energy functional is presented in this paper. The key ingredients of this algorithm involve projecting the total energy functional into a sequence of subspaces of small dimensions and seeking the minimizer of the total energy functional within each subspace. The minimizer of a subspace energy functional not only provides a search direction along which the KS total energy functional decreases but also gives an optimal 'step-length' to move along this search direction. Numerical examples are provided to demonstrate that this new direct constrained optimization algorithm can be more efficient than the self-consistent field (SCF) iteration.

  5. Ten scenarios from early radiation to late time acceleration with a minimally coupled dark energy

    Energy Technology Data Exchange (ETDEWEB)

    Fay, Stéphane, E-mail: steph.fay@gmail.com [Palais de la Découverte, Astronomy Department, Avenue Franklin Roosevelt, 75008 Paris (France)

    2013-09-01

    We consider General Relativity with matter, radiation and a minimally coupled dark energy defined by an equation of state w. Using the dynamical systems method, we find the equilibrium points of such a theory assuming an expanding Universe and a positive dark energy density. Two of these points correspond to the classical radiation and matter dominated epochs of the Universe. For the other points, dark energy mimics matter, mimics radiation or accelerates the expansion of the Universe. We then look for possible sequences of epochs describing a Universe starting with some radiation dominated epoch(s) (mimicked or not by dark energy), then matter dominated epoch(s) (mimicked or not by dark energy) and ending with an accelerated expansion. We find ten sequences able to reproduce this history of the Universe without singular behaviour of w at some saddle points. Most of them are new in the dark energy literature. To get more than these ten sequences, w has to be singular at some specific saddle equilibrium points. This is an unusual mathematical property of the equation of state in the dark energy literature, whose physical consequences tend to be discarded by observations. This thus distinguishes the ten above sequences from an infinity of ways to describe the expansion of the Universe.

  7. Finding minimal action sequences with a simple evaluation of actions

    Science.gov (United States)

    Shah, Ashvin; Gurney, Kevin N.

    2014-01-01

    Animals are able to discover the minimal number of actions that achieves an outcome (the minimal action sequence). In most accounts of this, actions are associated with a measure of behavior that is higher for actions that lead to the outcome with a shorter action sequence, and learning mechanisms find the actions associated with the highest measure. In this sense, previous accounts focus on more than the simple binary signal of “was the outcome achieved?”; they focus on “how well was the outcome achieved?” However, such mechanisms may not govern all types of behavioral development. In particular, in the process of action discovery (Redgrave and Gurney, 2006), actions are reinforced if they simply lead to a salient outcome because biological reinforcement signals occur too quickly to evaluate the consequences of an action beyond an indication of the outcome's occurrence. Thus, action discovery mechanisms focus on the simple evaluation of “was the outcome achieved?” and not “how well was the outcome achieved?” Notwithstanding this impoverishment of information, can the process of action discovery find the minimal action sequence? We address this question by implementing computational mechanisms, referred to in this paper as no-cost learning rules, in which each action that leads to the outcome is associated with the same measure of behavior. No-cost rules focus on “was the outcome achieved?” and are consistent with action discovery. No-cost rules discover the minimal action sequence in simulated tasks and execute it for a substantial amount of time. Extensive training, however, results in extraneous actions, suggesting that a separate process (which has been proposed in action discovery) must attenuate learning if no-cost rules participate in behavioral development. We describe how no-cost rules develop behavior, what happens when attenuation is disrupted, and relate the new mechanisms to a wider computational and biological context. PMID:25506326
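
    The flat "was the outcome achieved?" signal can be illustrated with a tiny hypothetical task (this is our own toy model, not the paper's simulations): an agent must take three "step" actions in a row to reach an outcome, and after every successful trial each (state, action) used on that trial receives the same credit, with no grading by efficiency. The necessary action co-occurs with the outcome on every success, while the extraneous one only sometimes does, so the minimal sequence still dominates.

```python
import random

random.seed(1)

# Toy "no-cost" rule: after any trial in which the outcome occurs, every
# (state, action) used on that trial gets the SAME credit -- a binary
# outcome signal with no measure of how efficiently it was achieved.
GOAL, MAX_STEPS, TRIALS = 3, 20, 500
w = {(s, a): 0 for s in range(GOAL) for a in ("step", "stay")}

for _ in range(TRIALS):
    s, used = 0, set()
    for _ in range(MAX_STEPS):
        a = random.choice(("step", "stay"))   # undirected exploration
        used.add((s, a))
        if a == "step":
            s += 1
        if s == GOAL:
            for sa in used:                   # flat, equal credit
                w[sa] += 1
            break

# "step" is credited on every success; "stay" only on some of them,
# so the greedy read-out recovers the minimal action sequence.
policy = {s: max(("step", "stay"), key=lambda a: w[(s, a)]) for s in range(GOAL)}
print(policy)  # {0: 'step', 1: 'step', 2: 'step'}
```

    Note that this read-out matches the paper's qualitative point; the drift toward extraneous actions under extensive training requires the richer behavioral machinery the authors model.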

  8. 3D motion analysis via energy minimization

    Energy Technology Data Exchange (ETDEWEB)

    Wedel, Andreas

    2009-10-16

    This work deals with 3D motion analysis from stereo image sequences for driver assistance systems. It consists of two parts: the estimation of motion from the image data and the segmentation of moving objects in the input images. The content can be summarized with the technical term machine visual kinesthesia, the sensation or perception and cognition of motion. In the first three chapters, the importance of motion information is discussed for driver assistance systems, for machine vision in general, and for the estimation of ego motion. The next two chapters elaborate on motion perception, analyzing the apparent movement of pixels in image sequences for both a monocular and binocular camera setup. Then, the obtained motion information is used to segment moving objects in the input video. Thus, one can clearly identify the thread from analyzing the input images to describing the input images by means of stationary and moving objects. Finally, I present possibilities for future applications based on the contents of this thesis. Previous work in each case is presented in the respective chapters. Although the overarching issue of motion estimation from image sequences is related to practice, there is nothing as practical as a good theory (Kurt Lewin). Several problems in computer vision are formulated as intricate energy minimization problems. In this thesis, motion analysis in image sequences is thoroughly investigated, showing that splitting an original complex problem into simplified sub-problems yields improved accuracy, increased robustness, and a clear and accessible approach to state-of-the-art motion estimation techniques. In Chapter 4, optical flow is considered. Optical flow is commonly estimated by minimizing the combined energy, consisting of a data term and a smoothness term. These two parts are decoupled, yielding a novel and iterative approach to optical flow. The derived Refinement Optical Flow framework is a clear and straightforward approach to

  9. cgDNA: a software package for the prediction of sequence-dependent coarse-grain free energies of B-form DNA.

    Science.gov (United States)

    Petkevičiūtė, D; Pasi, M; Gonzalez, O; Maddocks, J H

    2014-11-10

    cgDNA is a package for the prediction of sequence-dependent configuration-space free energies for B-form DNA at the coarse-grain level of rigid bases. For a fragment of any given length and sequence, cgDNA calculates the configuration of the associated free energy minimizer, i.e. the relative positions and orientations of each base, along with a stiffness matrix, which together govern differences in free energies. The model predicts non-local (i.e. beyond base-pair step) sequence dependence of the free energy minimizer. Configurations can be input or output in either the Curves+ definition of the usual helical DNA structural variables, or as a PDB file of coordinates of base atoms. We illustrate the cgDNA package by comparing predictions of free energy minimizers from (a) the cgDNA model, (b) time-averaged atomistic molecular dynamics (or MD) simulations, and (c) NMR or X-ray experimental observation, for (i) the Dickerson-Drew dodecamer and (ii) three oligomers containing A-tracts. The cgDNA predictions are rather close to those of the MD simulations, but many orders of magnitude faster to compute. Both the cgDNA and MD predictions are in reasonable agreement with the available experimental data. Our conclusion is that cgDNA can serve as a highly efficient tool for studying structural variations in B-form DNA over a wide range of sequences. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
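
    The abstract describes a free energy governed by a minimizer plus a stiffness matrix, i.e. a quadratic form. A hedged sketch of how such a model evaluates configurations (toy 3-DOF numbers, not cgDNA's real sequence-dependent parameters):

```python
import numpy as np

# Quadratic coarse-grain free energy of the kind described above:
# U(x) = 0.5 * (x - mu)^T K (x - mu), with free-energy minimizer mu
# (the "ground-state" configuration) and stiffness matrix K.
mu = np.array([0.2, -0.1, 0.5])            # free-energy minimizer
K = np.array([[4.0, 1.0, 0.0],             # symmetric positive-definite
              [1.0, 3.0, 0.5],             # stiffness matrix
              [0.0, 0.5, 2.0]])

def free_energy(x):
    d = x - mu
    return 0.5 * d @ K @ d

# The minimizer has zero relative free energy; any perturbation costs energy.
print(free_energy(mu))                               # 0.0
print(free_energy(mu + np.array([0.1, 0.0, 0.0])))   # ~0.02 = 0.5 * 4 * 0.1**2

# Equivalently, the minimizer solves grad U = 0, i.e. K (x - mu) = 0:
x_star = np.linalg.solve(K, K @ mu)
print(np.allclose(x_star, mu))                       # True
```

    In the full package the state vector holds the relative position and orientation coordinates of every base, and both mu and K depend non-locally on the sequence.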

  10. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue.

    Science.gov (United States)

    Jezernik, Saso; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.
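
    The flavor of such energy-minimal waveform problems can be sketched numerically. Below is a hedged toy version (our own discretization, not the paper's optimal-control derivation): for a passive leaky-RC membrane, the current waveform that reaches threshold at a fixed time with minimal energy sum(i^2) is the least-norm solution of one linear constraint, and it rises toward the end of the pulse.

```python
import numpy as np

# Toy passive membrane, forward Euler: V[k+1] = decay*V[k] + (dt/C)*i[k].
# Find i[] with minimal energy sum(i^2)*dt such that V reaches threshold
# theta exactly at time T. All constants are illustrative, not physiological.
tau, C, dt, T, theta = 5e-3, 1.0, 1e-4, 5e-3, 1.0
N = round(T / dt)
decay = 1.0 - dt / tau
# V[N] = sum_k A[k] * i[k]: each input is attenuated by the leak afterwards
A = (dt / C) * decay ** (N - 1 - np.arange(N))

i_opt = A * theta / (A @ A)   # least-norm solution of the constraint A @ i = theta

V = 0.0
for k in range(N):            # simulate: threshold is reached exactly at T
    V = decay * V + (dt / C) * i_opt[k]
print(abs(V - theta) < 1e-9)  # True
print(i_opt[-1] > i_opt[0])   # True: the optimal waveform rises toward the end
```

    The least-norm current is proportional to the constraint row A, so later samples (attenuated less by the leak) carry larger amplitudes; this matches the qualitative shape of energy-optimal stimuli for leaky membranes, though the paper's optimal-control treatment also handles charge constraints and nonlinear fiber models.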

  11. Evaluation of the suitability of free-energy minimization using nearest-neighbor energy parameters for RNA secondary structure prediction

    Directory of Open Access Journals (Sweden)

    Cobaugh Christian W

    2004-08-01

    Full Text Available Abstract Background A detailed understanding of an RNA's correct secondary and tertiary structure is crucial to understanding its function and mechanism in the cell. Free energy minimization with energy parameters based on the nearest-neighbor model and comparative analysis are the primary methods for predicting an RNA's secondary structure from its sequence. Version 3.1 of Mfold has been available since 1999. This version contains an expanded sequence dependence of energy parameters and the ability to incorporate coaxial stacking into free energy calculations. We test Mfold 3.1 by performing the largest and most phylogenetically diverse comparison of rRNA and tRNA structures predicted by comparative analysis and Mfold, and we use the results of our tests on 16S and 23S rRNA sequences to assess the improvement between Mfold 2.3 and Mfold 3.1. Results The average prediction accuracy for a 16S or 23S rRNA sequence with Mfold 3.1 is 41%, while the prediction accuracies for the majority of 16S and 23S rRNA structures tested are between 20% and 60%, with some having less than 20% prediction accuracy. The average prediction accuracy was 71% for 5S rRNA and 69% for tRNA. The majority of the 5S rRNA and tRNA sequences have prediction accuracies greater than 60%. The prediction accuracy of 16S rRNA base-pairs decreases exponentially as the number of nucleotides intervening between the 5' and 3' halves of the base-pair increases. Conclusion Our analysis indicates that the current set of nearest-neighbor energy parameters in conjunction with the Mfold folding algorithm is unable to consistently and reliably predict an RNA's correct secondary structure. For 16S or 23S rRNA structure prediction, Mfold 3.1 offers little improvement over Mfold 2.3. However, the nearest-neighbor energy parameters do work well for shorter RNA sequences such as tRNA or 5S rRNA, or for larger rRNAs when the contact distance between the base-pairs is less than 100 nucleotides.

  12. Energy Cost Minimization in Heterogeneous Cellular Networks with Hybrid Energy Supplies

    Directory of Open Access Journals (Sweden)

    Bang Wang

    2016-01-01

    Full Text Available The ever increasing data demand has led to the significant increase of energy consumption in cellular mobile networks. Recent advancements in heterogeneous cellular networks and green energy supplied base stations provide promising solutions for cellular communications industry. In this article, we first review the motivations and challenges as well as approaches to address the energy cost minimization problem for such green heterogeneous networks. Owing to the diversities of mobile traffic and renewable energy, the energy cost minimization problem involves both temporal and spatial optimization of resource allocation. We next present a new solution to illustrate how to combine the optimization of the temporal green energy allocation and spatial mobile traffic distribution. The whole optimization problem is decomposed into four subproblems, and correspondingly our proposed solution is divided into four parts: energy consumption estimation, green energy allocation, user association, and green energy reallocation. Simulation results demonstrate that our proposed algorithm can significantly reduce the total energy cost.

  13. Sculpting proteins interactively: continual energy minimization embedded in a graphical modeling system.

    Science.gov (United States)

    Surles, M C; Richardson, J S; Richardson, D C; Brooks, F P

    1994-02-01

    We describe a new paradigm for modeling proteins in interactive computer graphics systems--continual maintenance of a physically valid representation, combined with direct user control and visualization. This is achieved by a fast algorithm for energy minimization, capable of real-time performance on all atoms of a small protein, plus graphically specified user tugs. The modeling system, called Sculpt, rigidly constrains bond lengths, bond angles, and planar groups (similar to existing interactive modeling programs), while it applies elastic restraints to minimize the potential energy due to torsions, hydrogen bonds, and van der Waals and electrostatic interactions (similar to existing batch minimization programs), and user-specified springs. The graphical interface can show bad and/or favorable contacts, and individual energy terms can be turned on or off to determine their effects and interactions. Sculpt finds a local minimum of the total energy that satisfies all the constraints using an augmented Lagrange-multiplier method; calculation time increases only linearly with the number of atoms because the matrix of constraint gradients is sparse and banded. On a 100-MHz MIPS R4000 processor (Silicon Graphics Indigo), Sculpt achieves 11 updates per second on a 20-residue fragment and 2 updates per second on an 80-residue protein, using all atoms except non-H-bonding hydrogens, and without electrostatic interactions. Applications of Sculpt are described: to reverse the direction of bundle packing in a designed 4-helix bundle protein, to fold up a 2-stranded beta-ribbon into an approximate beta-barrel, and to design the sequence and conformation of a 30-residue peptide that mimics one partner of a protein subunit interaction. 
Computer models that are both interactive and physically realistic (within the limitations of a given force field) have 2 significant advantages: (1) they make feasible the modeling of very large changes (such as needed for de novo design), and

  14. Energy minimization strategies and renewable energy utilization for desalination: a review.

    Science.gov (United States)

    Subramani, Arun; Badruzzaman, Mohammad; Oppenheimer, Joan; Jacangelo, Joseph G

    2011-02-01

    Energy is a significant cost in the economics of desalinating waters, but water scarcity is driving the rapid expansion in global installed capacity of desalination facilities. Conventional fossil fuels have been utilized as their main energy source, but recent concerns over greenhouse gas (GHG) emissions have promoted global development and implementation of energy minimization strategies and cleaner energy supplies. In this paper, a comprehensive review of energy minimization strategies for membrane-based desalination processes and utilization of lower GHG emission renewable energy resources is presented. The review covers the utilization of energy efficient design, high efficiency pumping, energy recovery devices, advanced membrane materials (nanocomposite, nanotube, and biomimetic), innovative technologies (forward osmosis, ion concentration polarization, and capacitive deionization), and renewable energy resources (solar, wind, and geothermal). Utilization of energy efficient design combined with high efficiency pumping and energy recovery devices have proven effective in full-scale applications. Integration of advanced membrane materials and innovative technologies for desalination show promise but lack long-term operational data. Implementation of renewable energy resources depends upon geography-specific abundance, a feasible means of handling renewable energy power intermittency, and solving technological and economic scale-up and permitting issues. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Minimal Self-Models and the Free Energy Principle

    Directory of Open Access Journals (Sweden)

    Jakub Limanowski

    2013-09-01

    Full Text Available The term "minimal phenomenal selfhood" describes the basic, pre-reflective experience of being a self (Blanke & Metzinger, 2009). Theoretical accounts of the minimal self have long recognized the importance and the ambivalence of the body as both part of the physical world, and the enabling condition for being in this world (Gallagher, 2005; Grafton, 2009). A recent account of minimal phenomenal selfhood (MPS; Metzinger, 2004a) centers on the consideration that minimal selfhood emerges as the result of basic self-modeling mechanisms, thereby being founded on pre-reflective bodily processes. The free energy principle (FEP; Friston, 2010) is a novel unified theory of cortical function that builds upon the imperative that self-organizing systems entail hierarchical generative models of the causes of their sensory input, which are optimized by minimizing free energy as an approximation of the log-likelihood of the model. The implementation of the FEP via predictive coding mechanisms and in particular the active inference principle emphasizes the role of embodiment for predictive self-modeling, which has been appreciated in recent publications. In this review, we provide an overview of these conceptions and illustrate thereby the potential power of the FEP in explaining the mechanisms underlying minimal selfhood and its key constituents, multisensory integration, interoception, agency, perspective, and the experience of mineness. We conclude that the conceptualization of MPS can be well mapped onto a hierarchical generative model furnished by the free energy principle and may constitute the basis for higher-level, cognitive forms of self-referral, as well as the understanding of other minds.

  16. Optimal Allocation of Renewable Energy Sources for Energy Loss Minimization

    Directory of Open Access Journals (Sweden)

    Vaiju Kalkhambkar

    2017-03-01

    Full Text Available Optimal allocation of renewable distributed generation (RDG), i.e., solar and wind, in a distribution system becomes challenging due to intermittent generation and uncertainty of loads. This paper proposes an optimal allocation methodology for single and hybrid RDGs for energy loss minimization. The deterministic generation-load model integrated with optimal power flow provides optimal solutions for single and hybrid RDG. Considering the complexity of the proposed nonlinear, constrained optimization problem, it is solved by a robust and high-performance meta-heuristic, the Symbiotic Organisms Search (SOS) algorithm. Results obtained from the SOS algorithm offer better solutions than the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and the Firefly Algorithm (FFA). Economic analysis is carried out to quantify the economic benefits of energy loss minimization over the life span of RDGs.

  17. Dimensionality of Local Minimizers of the Interaction Energy

    KAUST Repository

    Balagué, D.; Carrillo, J. A.; Laurent, T.; Raoul, G.

    2013-01-01

    In this work we consider local minimizers (in the topology of transport distances) of the interaction energy associated with a repulsive-attractive potential. We show how the dimensionality of the support of local minimizers is related to the repulsive strength of the potential at the origin. © 2013 Springer-Verlag Berlin Heidelberg.

  19. Energy Efficient Smartphones: Minimizing the Energy Consumption of Smartphone GPUs using DVFS Governors

    KAUST Repository

    Ahmad, Enas M.

    2013-01-01

    , they significantly add an overhead on the limited energy of the battery. This thesis aims at enhancing the energy efficiency of modern smartphones and increasing their battery life by minimizing the energy consumption of the smartphone's Graphical Processing Unit (GPU).

  20. Energy-efficient ECG compression on wireless biosensors via minimal coherence sensing and weighted ℓ₁ minimization reconstruction.

    Science.gov (United States)

    Zhang, Jun; Gu, Zhenghui; Yu, Zhu Liang; Li, Yuanqing

    2015-03-01

    Low energy consumption is crucial for body area networks (BANs). In BAN-enabled ECG monitoring, continuous monitoring requires the sensor nodes to transmit a large volume of data to the sink node, which leads to excessive energy consumption. To reduce airtime over energy-hungry wireless links, this paper presents an energy-efficient compressed sensing (CS)-based approach for on-node ECG compression. At first, an algorithm called minimal mutual coherence pursuit is proposed to construct sparse binary measurement matrices, which can be used to encode the ECG signals with superior performance and extremely low complexity. Second, in order to minimize the data rate required for faithful reconstruction, a weighted ℓ1 minimization model is derived by exploring the multisource prior knowledge in the wavelet domain. Experimental results on the MIT-BIH arrhythmia database reveal that the proposed approach can obtain a higher compression ratio than state-of-the-art CS-based methods. Together with its low encoding complexity, our approach can achieve significant energy savings in both the encoding process and wireless transmission.
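
    The way per-coefficient weights enter a weighted ℓ1 reconstruction can be sketched with a generic ISTA loop: the weights simply scale the soft-threshold applied to each coefficient. This is a hedged illustration, not the paper's solver — we use a dense Gaussian matrix and handpicked weights instead of the paper's sparse binary matrices and wavelet-domain priors.

```python
import numpy as np

rng = np.random.default_rng(0)

# min_x 0.5*||y - A x||^2 + lam * sum_j w_j * |x_j|, solved by ISTA.
# Small weights w_j encode prior knowledge that coefficient j is likely active.
n, m, k = 64, 256, 8
A = rng.standard_normal((n, m)) / np.sqrt(n)     # measurement matrix (toy)
support = rng.choice(m, size=k, replace=False)
x_true = np.zeros(m)
x_true[support] = rng.standard_normal(k) * 3     # k-sparse ground truth
y = A @ x_true                                   # compressed measurements

weights = np.ones(m)
weights[support] = 0.1        # prior: penalize likely-active coefficients less
lam = 0.05
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the data-term gradient

x = np.zeros(m)
for _ in range(300):
    g = A.T @ (A @ x - y)                       # gradient of the data term
    z = x - g / L
    thr = lam * weights / L                     # per-coefficient thresholds
    x = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)  # weighted soft-threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error
```

    Each iteration is a gradient step on the data term followed by a weighted soft-threshold, so lower-weight coefficients survive thresholding more easily; that is exactly the mechanism by which prior knowledge reduces the measurement rate needed for faithful reconstruction.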

  1. TH-C-BRD-07: Minimizing Dose Uncertainty for Spot Scanning Beam Proton Therapy of Moving Tumor with Optimization of Delivery Sequence

    International Nuclear Information System (INIS)

    Li, H; Zhang, X; Zhu, X; Li, Y

    2014-01-01

    Purpose: Intensity modulated proton therapy (IMPT) has been shown to reduce dose to normal tissue compared to intensity modulated photon radiotherapy (IMRT), and has been implemented for selected lung cancer patients. However, respiratory motion-induced dose uncertainty remains one of the major concerns for the radiotherapy of lung cancer, and the utility of IMPT for lung patients has been limited because of the proton dose uncertainty induced by motion. Strategies such as repainting and tumor tracking have been proposed and studied, but repainting could result in unacceptably long delivery times and tracking is not yet clinically available. We propose a novel delivery strategy for spot scanning proton beam therapy. Method: The effective number of deliveries (END) for each spot position in a treatment plan was calculated based on the parameters of the delivery system, including the time required for each spot, spot size and energy. The dose uncertainty was then calculated with an analytical formula. The spot delivery sequence was optimized to maximize END and minimize the dose uncertainty. 2D measurements with a detector array on a 1D moving platform were performed to validate the calculated results. Results: 143 2D measurements on a moving platform were performed for different delivery sequences of a single-layer uniform pattern. The measured dose uncertainty is a strong function of the delivery sequence: the worst delivery sequence results in dose errors of up to 70%, while the optimized delivery sequence results in dose errors of <5%. END vs. measured dose uncertainty follows the analytical formula. Conclusion: With an optimized delivery sequence, it is feasible to minimize the dose uncertainty due to motion in spot scanning proton therapy.

  2. Definable Group Extensions and o-Minimal Group Cohomology via Spectral Sequences

    OpenAIRE

    BARRIGA, ELIANA

    2013-01-01

    We provide the theoretical foundation for the Lyndon-Hochschild-Serre spectral sequence as a tool to study the group cohomology and, with this, the group extensions in the category of definable groups. We also present various results on definable modules and actions, definable extensions and group cohomology of definable groups. These have applications to the study of non-definably compact groups definable in o-minimal theories (see [1]).

  3. Energy-efficient approach to minimizing the energy consumption in an extended job-shop scheduling problem

    Science.gov (United States)

    Tang, Dunbing; Dai, Min

    2015-09-01

    Traditional production planning and scheduling problems consider performance indicators like time, cost, and quality as optimization objectives in manufacturing processes. However, environmentally friendly factors like the energy consumption of production have not been fully taken into consideration. Against this background, this paper addresses an approach to modify a given schedule generated by a production planning and scheduling system on a job shop floor, where machine tools can work at different cutting speeds. It adjusts the cutting speeds of the operations, while keeping the original assignment and processing sequence of operations of each job fixed, in order to obtain energy savings. First, the proposed approach, based on a mixed integer programming mathematical model, changes the total idle time of the given schedule to minimize energy consumption on the job shop floor while accepting the optimal solution of the scheduling objective, makespan. Then, a genetic-simulated annealing algorithm is used to explore the optimal solution, because the problem is strongly NP-hard. Finally, the effectiveness of the approach is evaluated on small- and large-size instances, respectively. The experimental results show that the approach can save 5%-10% of the average energy consumption while accepting the optimal solution of the makespan in small-size instances. In addition, the average maximum energy saving ratio can reach 13%. It can also save approximately 1%-4% of the average energy consumption and approximately 2.4% of the average maximum energy while accepting a near-optimal solution of the makespan in large-size instances. The proposed research provides a starting point for exploring energy-aware schedule optimization for a traditional production planning and scheduling problem.
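
    The speed-adjustment idea can be sketched with a toy annealer (the cost model is an assumption for illustration, not the paper's MIP): each operation i takes b_i / v time units at cutting speed v and consumes energy proportional to b_i * v, so slower cutting saves energy but lengthens the schedule. Plain simulated annealing over the speed vector, under a fixed makespan cap, stands in for the paper's genetic-simulated annealing hybrid.

```python
import math
import random

def anneal_speeds(base_times, makespan_cap, steps=5000, seed=0):
    """Anneal per-operation cutting speeds in [0.5, 2.0] to minimize
    energy = sum(b * v) subject to makespan = sum(b / v) <= makespan_cap."""
    rng = random.Random(seed)

    def makespan(v):
        return sum(b / s for b, s in zip(base_times, v))

    def energy(v):
        return sum(b * s for b, s in zip(base_times, v))

    speeds = [2.0] * len(base_times)          # fastest (feasible) starting point
    best, best_e = list(speeds), energy(speeds)
    temp = 1.0
    for _ in range(steps):
        cand = list(speeds)
        i = rng.randrange(len(cand))
        cand[i] = min(2.0, max(0.5, cand[i] + rng.uniform(-0.1, 0.1)))
        if makespan(cand) > makespan_cap:     # reject infeasible moves
            continue
        delta = energy(cand) - energy(speeds)
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
            speeds = cand
            if energy(speeds) < best_e:
                best, best_e = list(speeds), energy(speeds)
        temp *= 0.999                          # geometric cooling
    return best, best_e
```

    With `base_times=[4, 6, 8]` and `makespan_cap=12`, the annealer lowers the speeds from the all-fast start (energy 36) toward the energy-optimal feasible schedule while never violating the makespan cap.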

  4. Hölder continuity of energy minimizer maps between Riemannian polyhedra

    International Nuclear Information System (INIS)

    Bouziane, Taoufik

    2004-10-01

    The goal of the present paper is to establish a regularity result for energy minimizer maps between Riemannian polyhedra. More precisely, we show the Hölder continuity of local energy minimizers between Riemannian polyhedra when the target space is without focal points. With this new result, we also complete our existence theorem obtained elsewhere, and consequently we generalize completely, to the case of target polyhedra without focal points (a weaker geometric condition than nonpositivity of the curvature), the Eells-Fuglede existence and regularity theorem, which is the modern version of the famous Eells-Sampson theorem. (author)

  5. Molecular mechanics calculations of proteins. Comparison of different energy minimization strategies

    DEFF Research Database (Denmark)

    Christensen, I T; Jørgensen, Flemming Steen

    1997-01-01

    A general strategy for performing energy minimization of proteins using the SYBYL molecular modelling program has been developed. The influence of several variables, including energy minimization procedure, solvation, dielectric function and dielectric constant, has been investigated in order … to develop a general method which is capable of producing high-quality protein structures. Avian pancreatic polypeptide (APP) and bovine pancreatic phospholipase A2 (BP PLA2) were selected for the calculations, because high-quality X-ray structures exist and because all classes of secondary structure … for this protein. Energy minimized structures of the trimeric PLA2 from Indian cobra (N.n.n. PLA2) were used for assessing the impact of protein-protein interactions. Based on the above mentioned criteria, it could be concluded that using the following conditions: dielectric constant epsilon = 4 or 20; a distance

  6. Sequence co-evolutionary information is a natural partner to minimally-frustrated models of biomolecular dynamics [version 1; referees: 3 approved]

    Directory of Open Access Journals (Sweden)

    Jeffrey K Noel

    2016-01-01

    Full Text Available Experimentally derived structural constraints have been crucial to the implementation of computational models of biomolecular dynamics. For example, not only does crystallography provide essential starting points for molecular simulations, but high-resolution structures also permit parameterization of simplified models. Since the energy landscapes for proteins and other biomolecules have been shown to be minimally frustrated and therefore funneled, these structure-based models have played a major role in understanding the mechanisms governing folding and many functions of these systems. Structural information, however, may be limited in many interesting cases. Recently, the statistical analysis of residue co-evolution in families of protein sequences has provided a complementary method of discovering residue-residue contact interactions involved in functional configurations. These functional configurations are often transient and difficult to capture experimentally. Thus, co-evolutionary information can be merged with that available for experimentally characterized low free-energy structures, in order to more fully capture the true underlying biomolecular energy landscape.

  7. Energy Efficient Smartphones: Minimizing the Energy Consumption of Smartphone GPUs using DVFS Governors

    KAUST Repository

    Ahmad, Enas M.

    2013-05-15

    Modern smartphones are being designed with increasing processing power, memory capacity, network communication, and graphics performance. Although all of these features enrich and expand the experience of a smartphone user, they add significant overhead on the limited energy of the battery. This thesis aims at enhancing the energy efficiency of modern smartphones and increasing their battery life by minimizing the energy consumption of the smartphone's graphics processing unit (GPU). Smartphone operating systems are becoming fully hardware-accelerated, which implies relying on the GPU for rendering all application graphics. In addition, the GPUs installed in smartphones are becoming more and more powerful by the day, which raises an energy consumption concern. We present a novel implementation of GPU scaling governors, a dynamic voltage and frequency scaling (DVFS) scheme implemented in the Android kernel to dynamically scale the GPU. The scheme includes four main governors: Performance, Powersave, Ondemand, and Conservative. Unlike previous studies, which looked into the power efficiency of mobile GPUs only through simulation and power estimations, we have implemented our approach on a real modern smartphone GPU and acquired actual energy measurements using an external power monitor. Our results show that the energy consumption of smartphones can be reduced by up to 15% using the Conservative governor in 2D rendering mode, and by up to 9% in 3D rendering mode, with minimal effect on performance.
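
    The decision rule of a conservative-style governor can be sketched as follows (frequency table and thresholds are invented for illustration; this is not the actual Android kernel code): the governor walks a fixed frequency table one level at a time, stepping the clock up when utilization is high and down when it is low, which avoids the large jumps an on-demand policy would make.

```python
FREQS_MHZ = [128, 200, 320, 400, 450]   # assumed GPU operating-point table

def conservative_step(level, utilization, up=0.80, down=0.30):
    """Return the next frequency index given the current index and a GPU
    utilization sample in [0, 1]; moves at most one level per decision."""
    if utilization > up and level < len(FREQS_MHZ) - 1:
        return level + 1                 # busy: step one level up
    if utilization < down and level > 0:
        return level - 1                 # idle-ish: step one level down
    return level                         # in the dead band: hold
```

    Calling `conservative_step` once per sampling period yields the gradual ramping behavior that trades a little performance for lower average voltage and frequency, and hence lower energy.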

  8. Subspace Correction Methods for Total Variation and $\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on ℓ1-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.

  9. Minimization of local impact of energy systems through exergy analysis

    International Nuclear Information System (INIS)

    Cassetti, Gabriele; Colombo, Emanuela

    2013-01-01

    Highlights: • The model proposed aims at minimizing local impact of energy systems. • The model is meant to minimize the impact starting from system thermodynamics. • The formulation combines exergy analysis and quantitative risk analysis. • The approach of the model is dual to Thermoeconomics. - Abstract: For the acceptability of energy systems, environmental impacts are becoming more and more important. One primary way for reducing impacts related to processes is by improving efficiency of plants. A key instrument currently used to verify such improvements is exergy analysis, extended to include also the environmental externalities generated by systems. Through exergy-based analyses, it is possible indeed to evaluate the overall amount of resources consumed along all the phases of the life cycle of a system, from construction to dismantling. However, resource consumption is a dimension of the impact of a system at global level, while it may not be considered a measure of its local impact. In the paper a complementary approach named Combined Risk and Exergy Analysis (CRExA) to assess impacts from major accidents in energy systems is proposed, based on the combination of classical exergy analysis and quantitative risk analysis (QRA). Impacts considered are focused on effects on human health. The approach leads to the identification of solutions to minimize damages of major accidents by acting on the energy system design

  10. Minimizing the number of segments in a delivery sequence for intensity-modulated radiation therapy with a multileaf collimator

    International Nuclear Information System (INIS)

    Dai Jianrong; Zhu Yunping

    2001-01-01

    This paper proposes a sequencing algorithm for intensity-modulated radiation therapy with a multileaf collimator in the static mode. The algorithm aims to minimize the number of segments in a delivery sequence. For a machine with a long verification and recording overhead time (e.g., 15 s per segment), minimizing the number of segments is equivalent to minimizing the delivery time. The proposed new algorithm is based on checking numerous candidates for a segment and selecting the candidate that results in a residual intensity matrix with the least complexity. When more than one candidate results in the same complexity, the candidate with the largest size is selected. The complexity of an intensity matrix is measured in the new algorithm in terms of the number of segments in the delivery sequence obtained by using a published algorithm. The beam delivery efficiency of the proposed algorithm and the influence of different published algorithms used to calculate the complexity of an intensity matrix were tested with clinical intensity-modulated beams. The results show that no matter which published algorithm is used to calculate the complexity of an intensity matrix, the sequence generated by the algorithm proposed here is always more efficient than that generated by the published algorithm itself. The results also show that the algorithm used to calculate the complexity of an intensity matrix affects the efficiency of beam delivery. The delivery sequences are most frequently most efficient when the algorithm of Bortfeld et al. is used to calculate the complexity of an intensity matrix. Because no single variation is most efficient for all beams tested, we suggest implementing multiple variations of our algorithm.
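
    The candidate-selection rule described above can be sketched as follows (an illustrative assumption, not the authors' code: the complexity measure used here is the classic sweep bound of Bortfeld et al., i.e. the largest per-row sum of positive jumps in the intensity profile).

```python
def complexity(matrix):
    """Sweep lower bound: max over leaf-pair rows of the summed positive
    jumps in the row's intensity profile (rows enter and leave at 0)."""
    worst = 0
    for row in matrix:
        prev, rising = 0, 0
        for v in row:
            if v > prev:
                rising += v - prev
            prev = v
        worst = max(worst, rising)
    return worst

def best_candidate(matrix, candidates):
    """Choose the segment whose subtraction leaves the least-complex residual
    intensity matrix; ties go to the candidate with the larger open area."""
    def residual(seg):
        return [[v - s for v, s in zip(mr, sr)] for mr, sr in zip(matrix, seg)]

    def size(seg):
        return sum(v > 0 for row in seg for v in row)

    return min(candidates, key=lambda seg: (complexity(residual(seg)), -size(seg)))
```

    Repeatedly subtracting `best_candidate(matrix, candidates)` from the intensity matrix until it is zero yields a delivery sequence, with the tie-breaking on segment size matching the rule stated in the abstract.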

  11. Minimization of Dead-Periods in MRI Pulse Sequences for Imaging Oblique Planes

    Science.gov (United States)

    Atalar, Ergin; McVeigh, Elliot R.

    2007-01-01

    With the advent of breath-hold MR cardiac imaging techniques, the minimization of TR and TE for oblique planes has become a critical issue. The slew rates and maximum currents of gradient amplifiers limit the minimum possible TR and TE by adding dead-periods to the pulse sequences. We propose a method of designing gradient waveforms that will be applied to the amplifiers instead of the slice, readout, and phase encoding waveforms. Because this method ensures that the gradient amplifiers will always switch at their maximum slew rate, it results in the minimum possible dead-period for given imaging parameters and scan plane position. A GRASS pulse sequence has been designed and ultra-short TR and TE values have been obtained with standard gradient amplifiers and coils. For some oblique slices, we have achieved shorter TR and TE values than those for nonoblique slices. PMID:7869900

  12. Minimal nuclear energy density functional

    Science.gov (United States)

    Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi; Perez, Rodrigo Navarro; Schunck, Nicolas

    2018-04-01

    We present a minimal nuclear energy density functional (NEDF) called "SeaLL1" that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV, two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV respectively, and the charge radii of 345 even-even nuclei with a mean error ɛr = 0.022 fm and a standard deviation σr = 0.025 fm. SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body (NN) interactions at the next-to-next-to-next-to leading order (N3LO) level and three-body (NNN) interactions at the next-to-next-to leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. We identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.

  13. A strategy to find minimal energy nanocluster structures.

    Science.gov (United States)

    Rogan, José; Varas, Alejandro; Valdivia, Juan Alejandro; Kiwi, Miguel

    2013-11-05

    An unbiased strategy to search for the global and local minimal-energy structures of free-standing nanoclusters is presented. Our objectives are twofold: to find a diverse set of low-lying local minima, as well as the global minimum. To do so, we make massive use of the fast inertial relaxation engine (FIRE) algorithm as an efficient local minimizer. This procedure turns out to be quite efficient in reaching the global minimum, and also most of the local minima. We test the method with the Lennard-Jones (LJ) potential, for which an abundant literature exists, and obtain novel results, which include a new local minimum for LJ13, 10 new local minima for LJ14, and thousands of new local minima for 15 ≤ N ≤ 65. Insights on how to choose the initial configurations, analyzing the effectiveness of the method in reaching low-energy structures, including the global minimum, are developed as a function of the number of atoms of the cluster. Also, a novel characterization of the potential energy surface, analyzing properties of the local minima basins, is provided. The procedure constitutes a promising tool to generate a diverse set of cluster conformations, both two- and three-dimensional, that can be used as input for refinement by means of ab initio methods. Copyright © 2013 Wiley Periodicals, Inc.
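
    Local relaxation on the Lennard-Jones surface can be sketched in reduced units with a bare-bones gradient descent (a stand-in for the FIRE minimizer named above; FIRE adds inertia and adaptive time steps, but plain descent already illustrates relaxation into the nearest basin):

```python
import math

def lj_energy_grad(pos):
    """Energy and gradient of an LJ cluster, pairwise V(r) = 4*(r^-12 - r^-6)."""
    n = len(pos)
    energy = 0.0
    grad = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = [pos[i][k] - pos[j][k] for k in range(3)]
            r2 = sum(c * c for c in d)
            inv6 = 1.0 / r2 ** 3                         # r^-6
            energy += 4.0 * (inv6 * inv6 - inv6)
            coef = (24.0 * inv6 - 48.0 * inv6 * inv6) / r2   # (dV/dr)/r
            for k in range(3):
                grad[i][k] += coef * d[k]
                grad[j][k] -= coef * d[k]
    return energy, grad

def relax(pos, lr=0.005, steps=4000):
    """Steepest-descent relaxation toward the nearest local minimum."""
    for _ in range(steps):
        _, g = lj_energy_grad(pos)
        pos = [[p[k] - lr * g_i[k] for k in range(3)] for p, g_i in zip(pos, g)]
    return pos
```

    Relaxing a dimer started at separation 1.5 drives it to the known pair minimum at r = 2^(1/6) with energy -1, the simplest check before moving on to larger clusters.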

  14. Sectors of solutions and minimal energies in classical Liouville theories for strings

    International Nuclear Information System (INIS)

    Johansson, L.; Kihlberg, A.; Marnelius, R.

    1984-01-01

    All classical solutions of the Liouville theory for strings having finite stable minimum energies are calculated explicitly together with their minimal energies. Our treatment automatically includes the set of natural solitonlike singularities described by Jorjadze, Pogrebkov, and Polivanov. Since the number of such singularities is preserved in time, a sector of solutions is not only characterized by its boundary conditions but also by its number of singularities. Thus, e.g., the Liouville theory with periodic boundary conditions has three different sectors of solutions with stable minimal energies containing zero, one, and two singularities. (Solutions with more singularities have no stable minimum energy.) It is argued that singular solutions do not make the string singular and therefore may be included in the string quantization

  15. Low-Energy Electron-Induced Strand Breaks in Telomere-Derived DNA Sequences-Influence of DNA Sequence and Topology.

    Science.gov (United States)

    Rackwitz, Jenny; Bald, Ilko

    2018-03-26

    During cancer radiation therapy, high-energy radiation is used to reduce tumour tissue. The irradiation produces a shower of secondary low-energy electrons, which can damage DNA very efficiently by dissociative electron attachment. Recently, it was suggested that low-energy electron-induced DNA strand breaks strongly depend on the specific DNA sequence, with a high sensitivity of G-rich sequences. Here, we use DNA origami platforms to expose G-rich telomere sequences to low-energy (8.8 eV) electrons to determine absolute cross sections for strand breakage and to study the influence of sequence modifications and topology of telomeric DNA on the strand breakage. We find that the telomeric DNA 5'-(TTA GGG)2 is more sensitive to low-energy electrons than an intermixed sequence 5'-(TGT GTG A)2, confirming the unique electronic properties resulting from G-stacking. With increasing length of the oligonucleotide (i.e., going from 5'-(GGG ATT)2 to 5'-(GGG ATT)4), both the variety of topology and the electron-induced strand break cross sections increase. Addition of K+ ions decreases the strand break cross section for all sequences that are able to fold into G-quadruplexes or G-intermediates, whereas the strand break cross section for the intermixed sequence remains unchanged. These results indicate that telomeric DNA is rather sensitive towards low-energy electron-induced strand breakage, suggesting that significant telomere shortening can also occur during cancer radiation therapy. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Three-Dimensional Dirac Oscillator with Minimal Length: Novel Phenomena for Quantized Energy

    Directory of Open Access Journals (Sweden)

    Malika Betrouche

    2013-01-01

    We study quantum features of the Dirac oscillator under the condition that the position and momentum operators obey generalized commutation relations that lead to the appearance of a minimal length of the order of the Planck length, Δx_min = ℏ√(3β + β′), where β and β′ are two small positive parameters. Wave functions of the system and the corresponding energy spectrum are derived rigorously. The presence of the minimal length introduces a quadratic dependence of the energy spectrum on the quantum number n, implying the property of hard confinement of the system. It is shown that the infinite degeneracy of energy levels appearing in the usual Dirac oscillator is removed by the presence of the minimal length so long as β ≠ 0. Not only in the nonrelativistic limit but also in the limit of the standard case (β = β′ = 0), our results reduce to the well-known usual ones.

  17. Efficient modified Jacobi relaxation for minimizing the energy functional

    International Nuclear Information System (INIS)

    Park, C.H.; Lee, I.; Chang, K.J.

    1993-01-01

    We present an efficient scheme for diagonalizing large Hamiltonian matrices in a self-consistent manner. In the framework of the preconditioned conjugate-gradient minimization of the energy functional, we employ modified Jacobi relaxation for preconditioning and use, for band-by-band minimization, the restricted-block Davidson algorithm, in which only the previous wave functions and the relaxation vectors are additionally included for subspace diagonalization. Our scheme is found to be comparable with the preconditioned conjugate-gradient method for both large ordered and disordered Si systems, while it converges more rapidly for systems with transition-metal elements.
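
    The common core of these iterative diagonalizers can be illustrated with the simplest member of the family (a toy for exposition, not the paper's implementation): gradient descent on the Rayleigh quotient <x, Hx>/<x, x> extracts the lowest eigenpair of a symmetric matrix without ever factorizing it, just as band-by-band minimization does for each wave function.

```python
import math

def lowest_eigenpair(H, steps=500, lr=0.1):
    """Minimize the Rayleigh quotient of symmetric H by gradient descent,
    renormalizing after each step; returns (lowest eigenvalue, eigenvector)."""
    n = len(H)
    x = [1.0 + 0.5 * i for i in range(n)]   # asymmetric start avoids saddles
    lam = 0.0
    for _ in range(steps):
        Hx = [sum(H[i][j] * x[j] for j in range(n)) for i in range(n)]
        xx = sum(v * v for v in x)
        lam = sum(a * b for a, b in zip(x, Hx)) / xx        # Rayleigh quotient
        g = [2.0 * (Hx[i] - lam * x[i]) / xx for i in range(n)]  # its gradient
        x = [x[i] - lr * g[i] for i in range(n)]
        norm = math.sqrt(sum(v * v for v in x))
        x = [v / norm for v in x]
    return lam, x
```

    Preconditioning (e.g., the modified Jacobi relaxation above) amounts to rescaling the gradient `g` before the update so that stiff directions of the functional converge as quickly as soft ones.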

  18. Foraging site selection of two subspecies of Bar-tailed Godwit Limosa lapponica: time minimizers accept greater predation danger than energy minimizers

    NARCIS (Netherlands)

    Duijns, S.; Dijk, van J.G.B.; Spaans, B.; Jukema, J.; Boer, de W.F.; Piersma, Th.

    2009-01-01

    Different spatial distributions of food abundance and predators may urge birds to make a trade-off between food intake and danger. Such a trade-off might be solved in different ways in migrant birds that either follow a time-minimizing or energy-minimizing strategy; these strategies have been

  19. Foraging site selection of two subspecies of Bar-tailed Godwit Limosa lapponica : time minimizers accept greater predation danger than energy minimizers

    NARCIS (Netherlands)

    Duijns, Sjoerd; van Dijk, Jacintha G. B.; Spaans, Bernard; Jukema, Joop; de Boer, Willem F.; Piersma, Theunis

    2009-01-01

    Different spatial distributions of food abundance and predators may urge birds to make a trade-off between food intake and danger. Such a trade-off might be solved in different ways in migrant birds that either follow a time-minimizing or energy-minimizing strategy; these strategies have been

  20. Detection of Cavities by Inverse Heat Conduction Boundary Element Method Using Minimal Energy Technique

    International Nuclear Information System (INIS)

    Choi, C. Y.

    1997-01-01

    A geometrical inverse heat conduction problem is solved for infrared scanning cavity detection by the boundary element method using a minimal energy technique. By minimizing the kinetic energy of the temperature field, the boundary element equations are converted to a quadratic programming problem. A hypothetical inner boundary is defined such that the actual cavity is located interior to the domain. Temperatures at the hypothetical inner boundary are determined to meet the constraints of the measurement error of the surface temperature obtained by infrared scanning, and then boundary element analysis is performed for the position of the unknown boundary (cavity). A cavity detection algorithm is provided, and the effects of the minimal energy technique on the inverse solution method are investigated by means of numerical analysis.

  1. Prognostic value of deep sequencing method for minimal residual disease detection in multiple myeloma

    Science.gov (United States)

    Lahuerta, Juan J.; Pepin, François; González, Marcos; Barrio, Santiago; Ayala, Rosa; Puig, Noemí; Montalban, María A.; Paiva, Bruno; Weng, Li; Jiménez, Cristina; Sopena, María; Moorhead, Martin; Cedena, Teresa; Rapado, Immaculada; Mateos, María Victoria; Rosiñol, Laura; Oriol, Albert; Blanchard, María J.; Martínez, Rafael; Bladé, Joan; San Miguel, Jesús; Faham, Malek; García-Sanz, Ramón

    2014-01-01

    We assessed the prognostic value of minimal residual disease (MRD) detection in multiple myeloma (MM) patients using a sequencing-based platform in bone marrow samples from 133 MM patients in at least very good partial response (VGPR) after front-line therapy. Deep sequencing was carried out in patients in whom a high-frequency myeloma clone was identified, and MRD was assessed using the IGH-VDJH, IGH-DJH, and IGK assays. The results were contrasted with those of multiparametric flow cytometry (MFC) and allele-specific oligonucleotide polymerase chain reaction (ASO-PCR). The applicability of deep sequencing was 91%. Concordance between sequencing and MFC and ASO-PCR was 83% and 85%, respectively. Patients who were MRD− by sequencing had a significantly longer time to tumor progression (TTP) (median 80 vs 31 months; P < .0001) and overall survival (median not reached vs 81 months; P = .02) compared with patients who were MRD+. When stratifying patients by different levels of MRD, the respective TTP medians were: MRD ≥ 10^−3, 27 months; MRD 10^−3 to 10^−5, 48 months; and MRD < 10^−5, 80 months (P = .003 to .0001). Ninety-two percent of VGPR patients were MRD+. In complete response patients, the TTP remained significantly longer for MRD− compared with MRD+ patients (131 vs 35 months; P = .0009). PMID:24646471

  2. Free energy minimization and information gain: The devil is in the details

    NARCIS (Netherlands)

    Kwisthout, J.H.P.; Rooij, I.J.E.I. van

    2015-01-01

    Contrary to Friston's previous work, this paper describes free energy minimization using categorical probability distributions over discrete states. This alternative mathematical framework exposes a fundamental, yet unnoticed challenge for the free energy principle. When considering discrete state

  3. Energy levels of one-dimensional systems satisfying the minimal length uncertainty relation

    Energy Technology Data Exchange (ETDEWEB)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    2016-10-15

    The standard approach to calculating the energy levels for quantum systems satisfying the minimal length uncertainty relation is to solve an eigenvalue problem involving a fourth- or higher-order differential equation in quasiposition space. It is shown that the problem can be reformulated so that the energy levels of these systems can be obtained by solving only a second-order quasiposition eigenvalue equation. Through this formulation the energy levels are calculated for the following potentials: particle in a box, harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well. For the particle in a box, the second-order quasiposition eigenvalue equation is a second-order differential equation with constant coefficients. For the harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well, a method that involves using Wronskians has been used to solve the second-order quasiposition eigenvalue equation. It is observed for all of these quantum systems that the introduction of a nonzero minimal length uncertainty induces a positive shift in the energy levels. It is shown that the calculation of energy levels in systems satisfying the minimal length uncertainty relation is not limited to a small number of problems like particle in a box and the harmonic oscillator but can be extended to a wider class of problems involving potentials such as the Pöschl–Teller and Gaussian wells.

  4. Energy Hub’s Structural and Operational Optimization for Minimal Energy Usage Costs in Energy Systems

    Directory of Open Access Journals (Sweden)

    Thanh Tung Ha

    2018-03-01

    The structure and optimal operation of an Energy Hub (EH) have a tremendous influence on the hub's performance and reliability. This paper envisions an innovative methodology that prominently increases the synergy between structural and operational optimization and targets system cost affordability. The generalized energy system structure is presented theoretically with all selective hub sub-modules, including the electric heater (EHe) and solar source block sub-modules. To minimize energy usage cost, an energy hub is proposed that consists of 12 kinds of elements (i.e., energy resources, conversion, and storage functions) and is modeled mathematically in the General Algebraic Modeling System (GAMS), which indicates the optimal hub structure's corresponding elements with binary variables (0, 1). Simulation results are compared across 144 scenarios, one for each of the 144 categories of hub structures; for each scenario the corresponding optimal operation cost is calculated beforehand. These case studies demonstrate the effectiveness of the suggested model and methodology. Finally, avenues for future research are outlined.
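
    The structural search with binary on/off variables can be sketched as a toy stand-in for the GAMS model (element names, capacities, and costs below are invented for illustration): enumerate all structures and keep the cheapest one whose capacities still cover the electric and heat demand.

```python
import itertools

ELEMENTS = {                      # name: (elec capacity, heat capacity, cost)
    "CHP":    (8.0,  6.0, 5.0),
    "boiler": (0.0, 10.0, 2.0),
    "grid":   (12.0, 0.0, 6.0),
    "PV":     (4.0,  0.0, 1.0),
}

def best_structure(demand_elec, demand_heat):
    """Exhaustively search the binary structure variables and return the
    cheapest feasible element subset and its cost."""
    best, best_cost = None, float("inf")
    names = list(ELEMENTS)
    for bits in itertools.product([0, 1], repeat=len(names)):
        chosen = [n for n, b in zip(names, bits) if b]
        elec = sum(ELEMENTS[n][0] for n in chosen)
        heat = sum(ELEMENTS[n][1] for n in chosen)
        cost = sum(ELEMENTS[n][2] for n in chosen)
        if elec >= demand_elec and heat >= demand_heat and cost < best_cost:
            best, best_cost = chosen, cost
    return best, best_cost
```

    A real hub model additionally couples the carriers through conversion efficiencies and dispatches over time, which is why the paper solves it as a mathematical program rather than by enumeration; the binary-structure idea, however, is the same.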

  5. Rigid Body Energy Minimization on Manifolds for Molecular Docking.

    Science.gov (United States)

    Mirzaei, Hanieh; Beglov, Dmitri; Paschalidis, Ioannis Ch; Vajda, Sandor; Vakili, Pirooz; Kozakov, Dima

    2012-11-13

    Virtually all docking methods include some local continuous minimization of an energy/scoring function in order to remove steric clashes and obtain more reliable energy values. In this paper, we describe an efficient rigid-body optimization algorithm that, compared to the most widely used algorithms, converges approximately an order of magnitude faster to conformations with equal or slightly lower energy. The space of rigid-body transformations is a nonlinear manifold, namely, a space which locally resembles a Euclidean space. We use a canonical parametrization of the manifold, called the exponential parametrization, to map the Euclidean tangent space of the manifold onto the manifold itself. Thus, we locally transform the rigid-body optimization to an optimization over a Euclidean space where basic optimization algorithms are applicable. Compared to commonly used methods, this formulation substantially reduces the dimension of the search space. As a result, it requires far fewer costly function and gradient evaluations and leads to a more efficient algorithm. We have selected the L-BFGS quasi-Newton method for local optimization, since it uses only gradient information to obtain second-order information about the energy function and avoids the far more costly direct Hessian evaluations. Two applications, one in protein-protein docking and the other in protein-small molecule interactions, as part of macromolecular docking protocols, are presented. The code is available to the community under an open source license and, with minimal effort, can be incorporated into any molecular modeling package.
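
    The exponential parametrization referred to above is, for the rotational part, the standard Rodrigues formula (sketched here from the textbook definition, not taken from the authors' code): a 6-vector (ω, t) in the Euclidean tangent space maps to the rigid motion with rotation exp([ω]×) and translation t, so a generic unconstrained optimizer such as L-BFGS can work in R^6 instead of on the curved manifold.

```python
import math

def rodrigues(omega):
    """Rotation matrix exp([omega]_x) for an axis-angle vector omega."""
    wx, wy, wz = omega
    theta = math.sqrt(wx * wx + wy * wy + wz * wz)
    K = [[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]]   # [omega]_x
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    if theta < 1e-12:
        return I                                # exp(0) = identity
    a = math.sin(theta) / theta
    b = (1.0 - math.cos(theta)) / theta ** 2
    KK = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    return [[I[i][j] + a * K[i][j] + b * KK[i][j] for j in range(3)]
            for i in range(3)]

def apply_rigid(omega, t, p):
    """Apply the rigid motion parametrized by (omega, t) to point p."""
    R = rodrigues(omega)
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]
```

    In a docking-style inner loop, the energy is evaluated as a function of (ω, t) through `apply_rigid`, and the optimizer updates the 6-vector directly, re-centering the parametrization at the current pose as needed.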

  6. Periodic-cylinder vesicle with minimal energy

    International Nuclear Information System (INIS)

    Xiao-Hua, Zhou

    2010-01-01

    We give some details about the periodic cylindrical solution found by Zhang and Ou-Yang in [1996 Phys. Rev. E 53 4206] for the general shape equation of vesicles. Three different kinds of periodic cylindrical surfaces and a special closed cylindrical surface are obtained. Using elliptic functions, we find that this periodic shape has the minimal total energy per period when the period-amplitude ratio β ≈ 1.477, and point out that the transition between the plane and this periodic shape is a discontinuous deformation. Our results are also applicable to DNA and multi-walled carbon nanotubes (MWNTs). (cross-disciplinary physics and related areas of science and technology)

  7. A Comparative Study for Orthogonal Subspace Projection and Constrained Energy Minimization

    National Research Council Canada - National Science Library

    Du, Qian; Ren, Hsuan; Chang, Chein-I

    2003-01-01

    ...: orthogonal subspace projection (OSP) and constrained energy minimization (CEM). It is shown that they are closely related and essentially equivalent provided that the noise is white with large SNR...

  8. Microgrids: Energy management by loss minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Basu, A.K. [Electrical Engineering Dept., Jadavpur University & 20/2, Khanpur Road, Kolkata 700047 (India); Chowdhury, S.; Chowdhury, S.P. [Electrical Engineering Department, University of Cape Town & Private Bag X3, Menzies Building, Room-517, Rondebosch, Cape Town 7701 (India)

    2011-07-01

    Energy management is a techno-economic issue which dictates, in the context of microgrids, how optimal investment in technology can deliver optimal power quality and reliability (PQR) of supply to the consumers. Investment in distributed energy resources (DERs), with their connection to the utility grid at optimal locations and with optimal sizes, saves energy in the form of line loss reduction. Line loss reduction is an indirect benefit to the microgrid owner, who may recover it as an incentive from the utility. The present paper focuses on planning the optimal siting and sizing of DERs based on minimization of line loss. Optimal siting is performed here using the loss sensitivity index (LSI) method, and optimal sizing by differential evolution (DE) algorithms, which are in turn compared with the particle swarm optimization (PSO) technique. Studies are conducted on 6-bus and 14-bus radial networks under islanded mode of operation with an electric demand profile. Islanding helps in planning the DER capacity of a microgrid that is self-sufficient enough to serve its own consumers without the utility's support.

  9. Development of a waste minimization plan for a Department of Energy remedial action program: Ideas for minimizing waste in remediation scenarios

    International Nuclear Information System (INIS)

    Hubbard, Linda M.; Galen, Glen R.

    1992-01-01

    Waste minimization has become an important consideration in the management of hazardous waste because of regulatory as well as cost considerations. Waste minimization techniques are often process specific or industry specific and generally are not applicable to site remediation activities. This paper will examine ways in which waste can be minimized in a remediation setting such as the U.S. Department of Energy's Formerly Utilized Sites Remedial Action Program, where the bulk of the waste produced results from remediating existing contamination, not from generating new waste. (author)

  10. Ultra-fast evaluation of protein energies directly from sequence.

    Directory of Open Access Journals (Sweden)

    Gevorg Grigoryan

    2006-06-01

    Full Text Available The structure, function, stability, and many other properties of a protein in a fixed environment are fully specified by its sequence, but in a manner that is difficult to discern. We present a general approach for rapidly mapping sequences directly to their energies on a pre-specified rigid backbone, an important sub-problem in computational protein design and in some methods for protein structure prediction. The cluster expansion (CE) method that we employ can, in principle, be extended to model any computable or measurable protein property directly as a function of sequence. Here we show how CE can be applied to the problem of computational protein design, and use it to derive excellent approximations of physical potentials. The approach provides several attractive advantages. First, following a one-time derivation of a CE expansion, the amount of time necessary to evaluate the energy of a sequence adopting a specified backbone conformation is reduced by a factor of 10⁷ compared to standard full-atom methods for the same task. Second, the agreement between two full-atom methods that we tested and their CE sequence-based expressions is very high (root mean square deviation 1.1-4.7 kcal/mol, R² = 0.7-1.0). Third, the functional form of the CE energy expression is such that individual terms of the expansion have clear physical interpretations. We derived expressions for the energies of three classic protein design targets (a coiled coil, a zinc finger, and a WW domain) as functions of sequence, and examined the most significant terms. Single-residue and residue-pair interactions are sufficient to accurately capture the energetics of the dimeric coiled coil, whereas higher-order contributions are important for the two more globular folds. For the task of designing novel zinc-finger sequences, a CE-derived energy function provides significantly better solutions than a standard design protocol, in comparable computation time. Given these advantages
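As a sketch of the cluster-expansion idea, the following toy model (a made-up 2-letter alphabet with random coefficients, not the paper's physical potentials) shows why the approach works: an energy composed of single-residue and residue-pair terms is exactly linear in indicator features, so a one-time least-squares fit reproduces it and every later evaluation is just a dot product.

```python
import numpy as np
from itertools import product

# Toy cluster expansion: 5 positions, 2-letter alphabet, random "true" terms.
rng = np.random.default_rng(1)
L, A = 5, 2
true_single = rng.normal(size=(L, A))
true_pair = rng.normal(size=(L, L, A, A))

def energy(seq):
    """Sum of single-residue and residue-pair contributions."""
    e = sum(true_single[i, seq[i]] for i in range(L))
    e += sum(true_pair[i, j, seq[i], seq[j]]
             for i in range(L) for j in range(i + 1, L))
    return e

def features(seq):
    """Indicator features for every single-residue and pair term."""
    f = [1.0 if seq[i] == a else 0.0 for i in range(L) for a in range(A)]
    f += [1.0 if (seq[i], seq[j]) == (a, b) else 0.0
          for i in range(L) for j in range(i + 1, L)
          for a in range(A) for b in range(A)]
    return f

# One-time "CE derivation": fit the coefficients on all 2^5 sequences.
seqs = [list(s) for s in product(range(A), repeat=L)]
X = np.array([features(s) for s in seqs])
y = np.array([energy(s) for s in seqs])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.max(np.abs(X @ coef - y)))  # ~0: the linear expansion is exact
```

In the paper the expansion is fitted to expensive full-atom energies, and the payoff is that the fitted linear form evaluates in microseconds.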

  11. Cooperative Content Distribution over Wireless Networks for Energy and Delay Minimization

    KAUST Repository

    Atat, Rachad

    2012-06-01

    Content distribution with mobile-to-mobile cooperation is studied. Data is sent to mobile terminals on a long range link then the terminals exchange the content using an appropriate short range wireless technology. Unicasting and multicasting are investigated, both on the long range and short range links. Energy minimization is formulated as an optimization problem for each scenario, and the optimal solutions are determined in closed form. Moreover, the schemes are applied in public safety vehicular networks, where Long Term Evolution (LTE) network is used for the long range link, while IEEE 802.11p is considered for inter-vehicle collaboration on the short range links. Finally, relay-based multicasting is applied in high speed trains for energy and delay minimization. Results show that cooperative schemes outperform non-cooperative ones and other previous related work in terms of energy and delay savings. Furthermore, practical implementation aspects of the proposed methods are also discussed.

  12. Minimizing Energy Spread In The REX/HIE-ISOLDE Linac

    CERN Document Server

    Yucemoz, Mert

    2017-01-01

    This report attempts to minimize the energy spread of the beam at the end of the REX/HIE-ISOLDE linac by using the last RF cavity as a buncher. Beams with very low energy spread are often required by the users of the facility. In addition, one of the main reasons to minimize the energy spread in longitudinal phase space is that a higher beam energy spread translates into a position spread after interaction with a target; the positions of different particles then overlap, making them difficult to distinguish. Hence, in order to find the operating settings for minimum energy spread at the end of the REX/HIE-ISOLDE linac and to inspect the underlying physics, several Matlab functions were created that run the beam dynamics program "TRACKV39", which produces graphs and values for analysis.

  13. Learning sequences on the subject of energy

    International Nuclear Information System (INIS)

    1986-01-01

    The ten learning sequences follow on from one another, each picking out a particular aspect of the energy field. The subject notebooks are self-contained and can therefore be used independently. Apart from actual data and energy-related information, the information for the teacher contains: proposals for teaching, suggestions for further activities, sample solutions for the pupils' sheets, and references to the literature and media. The pupils' worksheets are differentiated, so it should be possible to use the learning sequences in all classes of secondary school stage 1. The multicoloured transparencies for projectors should motivate on the one hand, and on the other hand help to check the results of learning. (orig./HP) [de

  14. Energy minimization in medical image analysis: Methodologies and applications.

    Science.gov (United States)

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous methods and discrete methods. The former include the Newton-Raphson, gradient descent, conjugate gradient, proximal gradient, coordinate descent, and genetic algorithm-based methods, while the latter cover the graph cuts, belief propagation, tree-reweighted message passing, linear programming, maximum margin learning, simulated annealing, and iterated conditional modes methods. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview of those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.
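Gradient descent, the simplest of the continuous methods listed above, can be illustrated on a tiny Tikhonov-style denoising energy over a hypothetical 1-D "image" (this example is a generic illustration, not taken from the survey):

```python
import numpy as np

# Denoising energy E(u) = ||u - f||^2 + lam * ||grad u||^2 on a noisy 1-D signal.
rng = np.random.default_rng(2)
f = np.sin(np.linspace(0, 3, 50)) + 0.3 * rng.normal(size=50)
u, lam, step = f.copy(), 5.0, 0.02

def energy(u):
    return np.sum((u - f) ** 2) + lam * np.sum(np.diff(u) ** 2)

for _ in range(500):
    d = np.diff(u)
    grad = 2 * (u - f)       # data-fidelity term
    grad[:-1] -= 2 * lam * d  # smoothness term, contribution of u_k to d_k
    grad[1:] += 2 * lam * d   # contribution of u_k to d_{k-1}
    u -= step * grad          # plain gradient descent step

print(energy(f), ">", energy(u))  # the energy decreases
```

The same descent structure carries over to 2-D image energies; the survey's more sophisticated schemes mostly replace the fixed step with smarter search directions.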

  15. Free-energy minimization and the dark-room problem.

    Science.gov (United States)

    Friston, Karl; Thornton, Christopher; Clark, Andy

    2012-01-01

    Recent years have seen the emergence of an important new fundamental theory of brain function. This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of expectation). The most comprehensive such treatment is the "free-energy minimization" formulation due to Karl Friston (see e.g., Friston and Stephan, 2007; Friston, 2010a,b - see also Fiorillo, 2010; Thornton, 2010). A recurrent puzzle raised by critics of these models is that biological systems do not seem to avoid surprises. We do not simply seek a dark, unchanging chamber, and stay there. This is the "Dark-Room Problem." Here, we describe the problem and further unpack the issues to which it speaks. Using the same format as the prolog of Eddington's Space, Time, and Gravitation (Eddington, 1920) we present our discussion as a conversation between: an information theorist (Thornton), a physicist (Friston), and a philosopher (Clark).

  16. [Possible changes in energy-minimizer mechanisms of locomotion due to chronic low back pain - a literature review].

    Science.gov (United States)

    de Carvalho, Alberito Rodrigo; Andrade, Alexandro; Peyré-Tartaruga, Leonardo Alexandre

    2015-01-01

    One goal of locomotion is to move the body through space in the most economical way possible. However, little is known about the mechanical and energetic aspects of locomotion that are affected by low back pain, and, when some impairment does occur, about how the mechanical and energetic characteristics of locomotion manifest in functional activities, especially with respect to the energy-minimizer mechanisms of locomotion. This study aimed: a) to describe the main energy-minimizer mechanisms of locomotion; b) to check for signs that chronic low back pain (CLBP) impairs the mechanical and energetic characteristics of locomotion in ways that may compromise the energy-minimizer mechanisms. This study is characterized as a narrative literature review. The main theory that explains the minimization of energy expenditure during locomotion is the inverted pendulum mechanism, by which kinetic energy is converted into potential energy of the center of mass, and vice versa, during the step. This mechanism is strongly influenced by spatio-temporal gait (locomotion) parameters such as step length and preferred walking speed, which, in turn, may be severely altered in patients with chronic low back pain. However, much remains to be understood about the effects of chronic low back pain on the individual's ability to walk economically, because functional impairment may compromise the mechanical and energetic characteristics of this type of gait, making it more costly. Thus, there are indications that such changes may compromise the functional energy-minimizer mechanisms. Copyright © 2014 Elsevier Editora Ltda. All rights reserved.

  17. New insights gained on mechanisms of low-energy proton-induced SEUs by minimizing energy straggle

    International Nuclear Information System (INIS)

    Dodds, Nathaniel Anson; Dodd, Paul E.; Shaneyfelt, Marty R.; Sexton, Frederick W.; Martinez, Marino J.; Black, Jeffrey D.; Marshall, P. W.; Reed, R. A.; McCurdy, M. W.; Weller, R. A.; Pellish, J. A.; Rodbell, K. P.; Gordon, M. S.

    2015-01-01

    In this study, we present low-energy proton single-event upset (SEU) data on a 65 nm SOI SRAM whose substrate has been completely removed. Since the protons only had to penetrate a very thin buried oxide layer, these measurements were affected by far less energy loss, energy straggle, flux attrition, and angular scattering than previous datasets. The minimization of these common sources of experimental interference allows more direct interpretation of the data and deeper insight into SEU mechanisms. The results show a strong angular dependence, demonstrate that energy straggle, flux attrition, and angular scattering affect the measured SEU cross sections, and prove that proton direct ionization is the dominant mechanism for low-energy proton-induced SEUs in these circuits

  18. Cooperative relay-based multicasting for energy and delay minimization

    KAUST Repository

    Atat, Rachad

    2012-08-01

    Relay-based multicasting for the purpose of cooperative content distribution is studied. Optimized relay selection is performed with the objective of minimizing the energy consumption or the content distribution delay within a cluster of cooperating mobiles. Two schemes are investigated. The first consists of the BS sending the data only to the relay, and the second scheme considers the scenario of threshold-based multicasting by the BS, where a relay is selected to transmit the data to the mobiles that were not able to receive the multicast data. Both schemes show significant superiority compared to the non-cooperative scenarios, in terms of energy consumption and delay reduction. © 2012 IEEE.

  19. A novel constraint for thermodynamically designing DNA sequences.

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    Full Text Available Biotechnological and biomolecular advances have introduced novel uses for DNA such as DNA computing, storage, and encryption. For these applications, DNA sequence design requires maximal desired (and minimal undesired) hybridizations, in which two single DNA strands anneal to form a new duplex. Here, we propose a novel constraint for designing DNA sequences based on thermodynamic properties. Existing constraints for DNA design are based on the Hamming distance, a constraint that does not address the thermodynamic properties of the DNA sequence. Using a unique, improved genetic algorithm, we designed DNA sequence sets which satisfy different distance constraints, and we employ a free energy gap based on the minimum free energy (MFE) to gauge DNA sequence sets by their thermodynamic properties. When compared to the best Hamming distance constraints, our method yielded sequence sets with better thermodynamic quality. We then used our improved genetic algorithm to obtain lower-bound DNA sequence sets. Here, we discuss the effects of the novel constraint parameters on the free energy gap.
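The Hamming distance constraints that the paper contrasts with its thermodynamic constraint are straightforward to state in code. The sketch below checks the classic combinatorial constraints for a DNA code (pairwise distance, distance to reverse complements, and 50% GC content); the parameter names and example sequences are illustrative only:

```python
import itertools

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def revcomp(s):
    return s[::-1].translate(str.maketrans("ACGT", "TGCA"))

def gc_content(s):
    return (s.count("G") + s.count("C")) / len(s)

def satisfies(seqs, n, d):
    """Classic constraints for a DNA code of word length n:
    pairwise Hamming distance >= d, Hamming distance of every word to every
    reverse complement (including its own) >= d, and 50% GC content."""
    for s in seqs:
        if len(s) != n or gc_content(s) != 0.5:
            return False
    for a, b in itertools.combinations(seqs, 2):
        if hamming(a, b) < d:
            return False
    for a in seqs:
        for b in seqs:
            if hamming(a, revcomp(b)) < d:
                return False
    return True

print(satisfies(["AAGC", "CCTA"], n=4, d=2))  # True
```

The paper's point is that such purely combinatorial checks say nothing about free energies, which is why it adds an MFE-based gap on top of them.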

  20. An existence result of energy minimizer maps between Riemannian polyhedra

    International Nuclear Information System (INIS)

    Bouziane, T.

    2004-06-01

    In this paper, we prove the existence of energy minimizers in each free homotopy class of maps between polyhedra with target space without focal points. Our proof involves a careful study of some geometric properties of Riemannian polyhedra without focal points. Among other things, we show that on the relevant polyhedra, there exists a convex supporting function. (author)

  1. Improving the performance of minimizers and winnowing schemes.

    Science.gov (United States)

    Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl

    2017-07-15

    The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of its worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git . gmarcais@cs.cmu.edu or carlk@cs.cmu.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
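The basic minimizers scheme, and the paper's suggestion to swap the lexicographic order for a randomized one, can be sketched as follows (Python's built-in `hash` stands in for a proper random ordering; a universal-hitting-set ordering would be more involved):

```python
def minimizers(seq, k, w, order=None):
    """Select one k-mer (the 'minimizer') from every window of w consecutive
    k-mers, using the given ordering (default: lexicographic). Returns the
    sorted start positions of the selected k-mers."""
    order = order or (lambda kmer: kmer)
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    selected = set()
    for i in range(len(kmers) - w + 1):
        window = kmers[i:i + w]
        j = min(range(w), key=lambda x: (order(window[x]), x))  # leftmost min
        selected.add(i + j)  # positions, so a k-mer shared by windows counts once
    return sorted(selected)

seq = "ACGTACGTGGA"
lex = minimizers(seq, k=3, w=4)
# A randomized ordering often yields a lower density than lexicographic,
# which is what the paper recommends when no universal hitting set is at hand.
rnd = minimizers(seq, k=3, w=4, order=lambda km: hash(km))
print(lex, rnd)
```

Adjacent windows usually share their minimizer, which is why the number of selected positions (the "density" analyzed in the paper) is much smaller than the number of windows.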

  2. Minimal and contributing sequence determinants of the cis-acting locus of transfer (clt) of streptomycete plasmid pIJ101 occur within an intrinsically curved plasmid region.

    Science.gov (United States)

    Ducote, M J; Prakash, S; Pettis, G S

    2000-12-01

    Efficient interbacterial transfer of streptomycete plasmid pIJ101 requires the pIJ101 tra gene, as well as a cis-acting plasmid function known as clt. Here we show that the minimal pIJ101 clt locus consists of a sequence no greater than 54 bp in size that includes essential inverted-repeat and direct-repeat sequences and is located in close proximity to the 3' end of the korB regulatory gene. Evidence that sequences extending beyond the minimal locus and into the korB open reading frame influence clt transfer function and demonstration that clt-korB sequences are intrinsically curved raise the possibility that higher-order structuring of DNA and protein within this plasmid region may be an inherent feature of efficient pIJ101 transfer.

  3. Inference with minimal Gibbs free energy in information field theory

    International Nuclear Information System (INIS)

    Ensslin, Torsten A.; Weig, Cornelius

    2010-01-01

    Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the Gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a Gaussian signal with unknown spectrum, and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.

  4. Market clearing of joint energy and reserves auctions using augmented payment minimization

    International Nuclear Information System (INIS)

    Amjady, N.; Aghaei, J.; Shayanfar, H.A.

    2009-01-01

    This paper presents the market clearing of joint energy and reserves auctions and its mathematical formulation, focusing on a possible implementation of Payment Cost Minimization (PCM). It also discusses another key point in the debate: whether the market clearing algorithm should minimize offer costs or payment costs. An aggregated simultaneous market clearing approach is proposed for the provision of ancillary services as well as energy, in the form of a Mixed Integer Nonlinear Programming (MINLP) formulation. In the MINLP formulation of the market clearing process, the objective function (payment cost or offer cost) is optimized while meeting AC power flow constraints, system reserve requirements, and lost opportunity cost (LOC) considerations. The model is applied to the IEEE 24-bus Reliability Test System (IEEE 24-bus RTS), and simulation studies are carried out to examine the effectiveness of each objective function. (author)

  5. Integrated analysis of 454 and Illumina transcriptomic sequencing characterizes carbon flux and energy source for fatty acid synthesis in developing Lindera glauca fruits for woody biodiesel.

    Science.gov (United States)

    Lin, Zixin; An, Jiyong; Wang, Jia; Niu, Jun; Ma, Chao; Wang, Libing; Yuan, Guanshen; Shi, Lingling; Liu, Lili; Zhang, Jinsong; Zhang, Zhixiang; Qi, Ji; Lin, Shanzhi

    2017-01-01

    Lindera glauca fruit with high quality and quantity of oil has emerged as a novel potential source of biodiesel in China, but the molecular regulatory mechanism of carbon flux and energy source for oil biosynthesis in developing fruits is still unknown. To better develop fruit oils of L. glauca as woody biodiesel, a combination of two different sequencing platforms (454 and Illumina) and qRT-PCR analysis was used to define a minimal reference transcriptome of developing L. glauca fruits, and to construct carbon and energy metabolic model for regulation of carbon partitioning and energy supply for FA biosynthesis and oil accumulation. We first analyzed the dynamic patterns of growth tendency, oil content, FA compositions, biodiesel properties, and the contents of ATP and pyridine nucleotide of L. glauca fruits from seven different developing stages. Comprehensive characterization of transcriptome of the developing L. glauca fruit was performed using a combination of two different next-generation sequencing platforms, of which three representative fruit samples (50, 125, and 150 DAF) and one mixed sample from seven developing stages were selected for Illumina and 454 sequencing, respectively. The unigenes separately obtained from long and short reads (201, and 259, respectively, in total) were reconciled using TGICL software, resulting in a total of 60,031 unigenes (mean length = 1061.95 bp) to describe a transcriptome for developing L. glauca fruits. Notably, 198 genes were annotated for photosynthesis, sucrose cleavage, carbon allocation, metabolite transport, acetyl-CoA formation, oil synthesis, and energy metabolism, among which some specific transporters, transcription factors, and enzymes were identified to be implicated in carbon partitioning and energy source for oil synthesis by an integrated analysis of transcriptomic sequencing and qRT-PCR. Importantly, the carbon and energy metabolic model was well established for oil biosynthesis of developing L

  6. Wormholes minimally violating the null energy condition

    Energy Technology Data Exchange (ETDEWEB)

    Bouhmadi-López, Mariam [Departamento de Física, Universidade da Beira Interior, 6200 Covilhã (Portugal); Lobo, Francisco S N; Martín-Moruno, Prado, E-mail: mariam.bouhmadi@ehu.es, E-mail: fslobo@fc.ul.pt, E-mail: pmmoruno@fc.ul.pt [Centro de Astronomia e Astrofísica da Universidade de Lisboa, Campo Grande, Edifício C8, 1749-016 Lisboa (Portugal)

    2014-11-01

    We consider novel wormhole solutions supported by a matter content that minimally violates the null energy condition. More specifically, we consider an equation of state in which the sum of the energy density and radial pressure is proportional to a constant with a value smaller than that of the inverse area characterising the system, i.e., the area of the wormhole mouth. This approach is motivated by a recently proposed cosmological event, denoted 'the little sibling of the big rip', where the Hubble rate and the scale factor blow up but the cosmic derivative of the Hubble rate does not [1]. By using the cut-and-paste approach, we match interior spherically symmetric wormhole solutions to an exterior Schwarzschild geometry, and analyse the stability of the thin-shell to linearized spherically symmetric perturbations around static solutions, by choosing suitable properties for the exotic material residing on the junction interface radius. Furthermore, we also consider an inhomogeneous generalization of the equation of state considered above and analyse the respective stability regions. In particular, we obtain a specific wormhole solution with an asymptotic behaviour corresponding to a global monopole.

  7. Fuzzy-TLBO optimal reactive power control variables planning for energy loss minimization

    International Nuclear Information System (INIS)

    Moghadam, Ahmad; Seifi, Ali Reza

    2014-01-01

    Highlights: • A new approach to the problem of optimal reactive power control variables planning is proposed. • The energy loss minimization problem has been formulated by modeling the load of the system as a Load Duration Curve. • To solve the energy loss problem, classic methods and evolutionary methods are used. • A new proposed fuzzy teaching–learning based algorithm is applied to the energy loss problem. • Simulations are done to show the effectiveness and superiority of the proposed algorithm compared with other methods. - Abstract: This paper offers a new approach to the problem of optimal reactive power control variables planning (ORPVCP). The basic idea is the division of the Load Duration Curve (LDC) into several time intervals with constant active power demand in each interval, and then solving the energy loss minimization (ELM) problem to obtain an optimal initial set of control variables of the system that is valid for all time intervals and can be used as an initial operating condition of the system. In this paper, the ELM problem has been solved by linear programming (LP), fuzzy linear programming (Fuzzy-LP), and evolutionary algorithms, i.e. MHBMO and TLBO, and the results are compared with the proposed Fuzzy-TLBO method. In the proposed method, both the objective function and the constraints are evaluated by membership functions. The inequality constraints are embedded into the fitness function by the membership function of the fuzzy decision, and the problem is modeled by fuzzy set theory. The proposed Fuzzy-TLBO method is applied to the IEEE 30 bus test system, considering two different LDCs; it is shown that this method achieves a lower objective function value than the original TLBO and other optimization techniques, confirming its potential to solve the ORPVCP problem with ELM as the objective function

  8. Minimizing the Energy Consumption in ‎Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mohammed Saad Talib

    2017-12-01

    Full Text Available Energy in Wireless Sensor Networks (WSNs) represents an essential factor in designing, controlling, and operating sensor networks. Minimizing the energy consumed in WSN applications is a crucial issue for network effectiveness and efficiency in terms of lifetime, cost, and operation. A number of algorithms and protocols have been proposed and implemented to decrease energy consumption. WSNs operate with battery-powered sensors; sensor batteries have limited power and are not easily recharged, and network failures frequently occur because sensors run out of energy. MAC protocols in WSNs achieve low duty cycles by employing periodic sleep and wakeup. The Predictive Wakeup MAC (PW-MAC) protocol makes use of asynchronous duty cycling; it reduces node energy consumption by allowing senders to predict the receivers' wakeup times. The WSN must be deployed in an efficient manner that utilizes the sensor nodes and their energy to ensure good network throughput. Predicting the WSN lifetime prior to its installation represents a significant concern. To ensure energy efficiency, the sensors' duty cycles must be adjusted appropriately to meet network traffic demands. The energy consumed in each node due to its switching between the active and idle states was also estimated. The sensors are assumed to be randomly deployed. This paper aims to improve the lifetime of the randomly deployed network by scheduling the effects of the transmission, reception, and sleep states on sensor node energy consumption. Results for these states with many performance metrics are also studied and discussed
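A back-of-the-envelope model of the per-node energy budget under duty cycling, including the switching cost between states, might look like this (all power and energy figures below are assumptions for illustration, not measurements from the paper):

```python
def node_energy(duty_cycle, t_total, p_active, p_sleep, e_switch, n_switches):
    """Energy (joules) consumed by a sensor node that alternates between
    active and sleep states, including the cost of state transitions."""
    t_active = duty_cycle * t_total
    t_sleep = t_total - t_active
    return t_active * p_active + t_sleep * p_sleep + n_switches * e_switch

# One hour at a 2% duty cycle; 60 mW active, 3 uW asleep, 0.1 mJ per switch.
e = node_energy(duty_cycle=0.02, t_total=3600, p_active=0.060,
                p_sleep=0.000003, e_switch=0.0001, n_switches=360)
print(round(e, 3), "J")
```

Even in this crude model the active-state term dominates, which is why duty-cycle scheduling (as in PW-MAC) is the main lever on node lifetime.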

  9. Novel DNA sequence detection method based on fluorescence energy transfer

    International Nuclear Information System (INIS)

    Kobayashi, S.; Tamiya, E.; Karube, I.

    1987-01-01

    Recently, the detection of specific DNA sequences (DNA analysis) has become increasingly important for the diagnosis of viral genomes causing infectious diseases and of human sequences related to inherited disorders. These methods typically involve electrophoresis, the immobilization of DNA on a solid support, hybridization to a complementary probe, detection using probes labeled with ³²P or nonisotopically with a biotin-avidin-enzyme system, and so on. These techniques are highly effective, but they are very time-consuming and expensive. The principle of fluorescence energy transfer is that the light energy from an excited donor (fluorophore) is transferred to an acceptor (fluorophore) if the acceptor exists in the vicinity of the donor and the emission spectrum of the donor overlaps the excitation spectrum of the acceptor. In this study, fluorescence energy transfer was applied to the detection of a specific DNA sequence using the hybridization method. The analyte, a single-stranded DNA labeled with the donor fluorophore, is hybridized to a probe DNA labeled with the acceptor. Because complementary DNA duplex formation brings the two fluorophores into close proximity, fluorescence energy transfer occurs
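The distance dependence that makes this detection scheme work is captured by the standard Förster relation, E = 1/(1 + (r/R0)^6), which the abstract does not state explicitly but which underlies fluorescence energy transfer:

```python
def fret_efficiency(r, r0):
    """Förster resonance energy transfer efficiency at donor-acceptor
    distance r, given the Förster radius r0 (same units for both)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# Hybridization brings donor and acceptor close, so transfer becomes efficient;
# without a duplex the pair stays far apart and transfer is negligible.
for r in (2.0, 5.0, 10.0):
    print(r, round(fret_efficiency(r, r0=5.0), 3))
# 2.0 0.996 / 5.0 0.5 / 10.0 0.015
```

The sixth-power falloff is what gives the assay its sharp on/off contrast between hybridized and free probes.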

  10. Beyond Group: Multiple Person Tracking via Minimal Topology-Energy-Variation.

    Science.gov (United States)

    Gao, Shan; Ye, Qixiang; Xing, Junliang; Kuijper, Arjan; Han, Zhenjun; Jiao, Jianbin; Ji, Xiangyang

    2017-12-01

    Tracking multiple persons is a challenging task when persons move in groups and occlude each other. Existing group-based methods have extensively investigated how to make group division more accurate in a tracking-by-detection framework; however, few of them quantify the group dynamics from the perspective of targets' spatial topology or consider the group in a dynamic view. Inspired by the sociological properties of pedestrians, we propose a novel socio-topology model with a topology-energy function to factor the group dynamics of moving persons and groups. In this model, minimizing the topology-energy variation in a two-level energy form is expected to produce smooth topology transitions, stable group tracking, and accurate target association. To search for the strong minimum in energy variation, we design discrete group-tracklet jump moves embedded in the gradient descent method, which ensures that the moves reduce the energy variation of group and trajectory alternately in the varying topology dimension. Experimental results on both RGB and RGB-D datasets show the superiority of our proposed model for multiple person tracking in crowd scenes.

  11. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and minimization of the free energy. The method for calculating chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
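As an illustration of the free-energy-minimization approach to equilibrium (a minimal sketch, not the program from this record), the code below finds the equilibrium composition of a hypothetical ideal-mixture isomerization A ⇌ B by numerically minimizing the total Gibbs energy, then checks the result against the analytic answer from the equilibrium constant. The ΔG° value is an arbitrary assumption.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def gibbs(x: float, dg0: float, T: float) -> float:
    """Total Gibbs energy (relative to pure A, per mole of mixture) for an
    ideal A <-> B mixture, where x is the mole fraction of B."""
    mixing = R * T * ((1 - x) * math.log(1 - x) + x * math.log(x))
    return x * dg0 + mixing

def equilibrium_x(dg0: float, T: float, tol: float = 1e-10) -> float:
    """Locate the composition minimizing G by ternary search on (0, 1);
    G is strictly convex in x, so the search is well-posed."""
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if gibbs(m1, dg0, T) < gibbs(m2, dg0, T):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Hypothetical reaction with standard Gibbs energy change -2 kJ/mol at 298 K.
x_eq = equilibrium_x(-2000.0, 298.15)
K = math.exp(2000.0 / (R * 298.15))   # analytic equilibrium constant
print(x_eq, K / (1 + K))              # the two values should agree
```

Setting dG/dx = 0 gives x/(1 − x) = exp(−ΔG°/RT) = K, i.e. x = K/(1 + K), which the numerical minimizer reproduces.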

  12. Optimal replacement of residential air conditioning equipment to minimize energy, greenhouse gas emissions, and consumer cost in the US

    International Nuclear Information System (INIS)

    De Kleine, Robert D.; Keoleian, Gregory A.; Kelly, Jarod C.

    2011-01-01

    A life cycle optimization of the replacement of residential central air conditioners (CACs) was conducted in order to identify replacement schedules that minimize three separate objectives: life cycle energy consumption, greenhouse gas (GHG) emissions, and consumer cost. The analysis was conducted for the time period 1985-2025 for Ann Arbor, MI and San Antonio, TX. Using annual sales-weighted efficiencies of residential CAC equipment, the tradeoff between potential operational savings and the burdens of producing new, more efficient equipment was evaluated. The optimal replacement schedule for each objective was identified for each location and service scenario. In general, minimizing energy consumption required frequent replacement (4-12 replacements), minimizing GHG emissions required fewer replacements (2-5), and minimizing cost required the fewest (1-3) over the time horizon. Scenario analyses of different federal efficiency standards, regional standards, and Energy Star purchases were conducted to quantify each policy's impact. For example, a 16 SEER regional standard in Texas was shown to reduce either primary energy consumption by 13%, GHG emissions by 11%, or cost by 6-7% when performing optimal replacement of CACs from 2005 or before. The results also indicate that proper servicing should be a higher priority than optimal replacement to minimize environmental burdens. - Highlights: → Optimal replacement schedules for residential central air conditioners were found. → Minimizing energy required more frequent replacement than minimizing consumer cost. → Significant variation in optimal replacement was observed for Michigan and Texas. → Rebates for altering replacement patterns are not cost effective for GHG abatement. → Maintenance levels were significant in determining the energy and GHG impacts.

  13. Interactive seismic interpretation with piecewise global energy minimization

    KAUST Repository

    Hollt, Thomas; Beyer, Johanna; Gschwantner, Fritz M.; Muigg, Philipp; Doleisch, Helmut; Heinemann, Gabor F.; Hadwiger, Markus

    2011-01-01

    Increasing demands in world-wide energy consumption and oil depletion of large reservoirs have resulted in the need for exploring smaller and more complex oil reservoirs. Planning of the reservoir valorization usually starts with creating a model of the subsurface structures, including seismic faults and horizons. However, seismic interpretation and horizon tracing is a difficult and error-prone task, often resulting in hours of work needing to be manually repeated. In this paper, we propose a novel, interactive workflow for horizon interpretation based on well positions, which include additional geological and geophysical data captured by actual drillings. Instead of interpreting the volume slice-by-slice in 2D, we propose 3D seismic interpretation based on well positions. We introduce a combination of 2D and 3D minimal cost path and minimal cost surface tracing for extracting horizons with very little user input. By processing the volume based on well positions rather than slice-based, we are able to create a piecewise optimal horizon surface at interactive rates. We have integrated our system into a visual analysis platform which supports multiple linked views for fast verification, exploration and analysis of the extracted horizons. The system is currently being evaluated by our collaborating domain experts. © 2011 IEEE.

  14. Interactive seismic interpretation with piecewise global energy minimization

    KAUST Repository

    Hollt, Thomas

    2011-03-01

    Increasing demands in world-wide energy consumption and oil depletion of large reservoirs have resulted in the need for exploring smaller and more complex oil reservoirs. Planning of the reservoir valorization usually starts with creating a model of the subsurface structures, including seismic faults and horizons. However, seismic interpretation and horizon tracing is a difficult and error-prone task, often resulting in hours of work needing to be manually repeated. In this paper, we propose a novel, interactive workflow for horizon interpretation based on well positions, which include additional geological and geophysical data captured by actual drillings. Instead of interpreting the volume slice-by-slice in 2D, we propose 3D seismic interpretation based on well positions. We introduce a combination of 2D and 3D minimal cost path and minimal cost surface tracing for extracting horizons with very little user input. By processing the volume based on well positions rather than slice-based, we are able to create a piecewise optimal horizon surface at interactive rates. We have integrated our system into a visual analysis platform which supports multiple linked views for fast verification, exploration and analysis of the extracted horizons. The system is currently being evaluated by our collaborating domain experts. © 2011 IEEE.

  15. Low-dose dual-energy cone-beam CT using a total-variation minimization algorithm

    International Nuclear Information System (INIS)

    Min, Jong Hwan

    2011-02-01

    Dual-energy cone-beam CT is an important imaging modality in diagnostic applications, and may also find use in other applications such as therapeutic image guidance. Despite its clinical value, the relatively high radiation dose of a dual-energy scan may pose a challenge to its wide use. In this work, we investigated a low-dose, pre-reconstruction type of dual-energy cone-beam CT (CBCT) using a total-variation minimization algorithm for image reconstruction. An empirical dual-energy calibration method was used to prepare material-specific projection data: raw data at high and low tube voltages are converted into a set of basis functions, which can be linearly combined to produce material-specific data using the coefficients obtained through the calibration process. From far fewer views than are conventionally used, material-specific images are reconstructed by the total-variation minimization algorithm. An experimental study was performed to demonstrate the feasibility of the proposed method using a micro-CT system. We reconstructed images of the phantoms from only 90 projections acquired at tube voltages of 40 kVp and 90 kVp each. Aluminum-only and acryl-only images were successfully decomposed. We evaluated the quality of the reconstructed images by contrast-to-noise ratio and detectability. A low-dose dual-energy CBCT can thus be realized via the proposed method by greatly reducing the number of projections.
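The role of the total-variation term can be illustrated on a toy 1-D denoising problem. This is only the regularizer idea, not the authors' CBCT reconstruction pipeline; the signal, weight, smoothing constant, and step size below are arbitrary assumptions.

```python
import math

def tv_denoise_1d(y, lam=0.2, eps=1e-3, step=0.01, iters=5000):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps),
    a smoothed total-variation objective (illustrative sketch only)."""
    x = list(y)
    n = len(x)
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(n)]   # data-fidelity gradient
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            w = d / math.sqrt(d * d + eps)    # derivative of smoothed |d|
            g[i] -= lam * w
            g[i + 1] += lam * w
        x = [x[i] - step * g[i] for i in range(n)]
    return x

# Noisy step signal: TV flattens the plateaus while keeping the jump.
noisy = [0.1, -0.05, 0.03, 1.1, 0.95, 1.02]
clean = tv_denoise_1d(noisy)
print(clean)
```

The same piecewise-constancy prior is what lets TV-based reconstruction tolerate very sparse projection data in the CT setting.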

  16. Is the climate system an anticipatory system that minimizes free energy?

    Science.gov (United States)

    Rubin, Sergio; Crucifix, Michel

    2017-04-01

    All systems, whether alive or not, are structure-determined systems, i.e., their present state x(t) depends on past states x(t − α). However, it has been suggested [Rosen, 1985; Friston, 2013] that systems that contain life are capable of anticipation and active inference. The underlying principle is that state changes in living systems are best modelled as a function of past, present and future states: x(t) = f(x(t − α), x(t), x(t + β)). The reason for this is that living systems contain a predictive model of their ambiance on which they act: they appear to model their ambiance to preserve their integrity and homeorhesis. We therefore formulate the following hypotheses: can the climate system be interpreted as an anticipatory system that minimizes free energy? Can its variability (catastrophes, bifurcations and/or tipping points) be interpreted in terms of active inference and anticipation failure? Here we present a mathematical formulation of the climate system as an anticipatory system that minimizes free energy, and discuss its possible implications for future climate predictability. References: Rosen, R. (1985). Anticipatory systems. In Anticipatory systems (pp. 313-370). Springer New York. Friston, K. (2013). Life as we know it. Journal of the Royal Society Interface, 10(86), 20130475.

  17. Free Energy Minimization Calculation of Complex Chemical Equilibria. Reduction of Silicon Dioxide with Carbon at High Temperature.

    Science.gov (United States)

    Wai, C. M.; Hutchinson, S. G.

    1989-01-01

    Discusses the calculation of free energy in reactions between silicon dioxide and carbon. Describes several computer programs for calculating the free energy minimization and their uses in chemistry classrooms. Lists 16 references. (YP)

  18. Energy-minimized design in all-optical networks using unicast/multicast traffic grooming

    Science.gov (United States)

    Puche, William S.; Amaya, Ferney O.; Sierra, Javier E.

    2013-09-01

    The increased bandwidth required by applications tends to raise the amount of optical equipment; for this reason, it is essential to maintain a balance between wavelength allocation, available capacity and the number of optical devices to achieve the lowest power consumption. We propose a model that minimizes energy consumption using unicast/multicast traffic grooming in optical networks.

  19. Outage Probability Minimization for Energy Harvesting Cognitive Radio Sensor Networks

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2017-01-01

    The incorporation of cognitive radio (CR) capability in wireless sensor networks yields a promising network paradigm known as CR sensor networks (CRSNs), which are able to provide spectrum-efficient data communication. However, due to the high energy consumption resulting from spectrum sensing, as well as subsequent data transmission, the energy supply of conventional battery-powered sensor nodes is regarded as a severe bottleneck for sustainable operation. The energy harvesting (EH) technique, which gathers energy from the ambient environment, is regarded as a promising solution to perpetually power up energy-limited devices with a continual source of energy. Applying the EH technique in CRSNs therefore facilitates the self-sustainability of energy-limited sensors. The primary concern of this study is to design sensing-transmission policies that minimize the long-term outage probability of EH-powered CR sensor nodes. We formulate this problem as an infinite-horizon discounted Markov decision process and propose an ϵ-optimal sensing-transmission (ST) policy using the value iteration algorithm, where ϵ is the error bound between the ST policy and the optimal policy, which can be predefined according to actual needs. Moreover, for the special case in which the signal-to-noise (SNR) power ratio is sufficiently high, we present an efficient transmission (ET) policy and prove that the ET policy achieves the same performance as the ST policy. Finally, extensive simulations are conducted to evaluate the performance of the proposed policies and the impact of various network parameters.
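The value iteration machinery behind such an ϵ-optimal policy can be sketched on a toy energy-harvesting sensor MDP. The two states, transition probabilities, and rewards below are hypothetical illustrations, not the paper's model; the stopping threshold ϵ(1 − γ)/2γ is the standard bound guaranteeing an ϵ-optimal greedy policy.

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-4):
    """eps-optimal value iteration: stop when the sup-norm update falls
    below eps * (1 - gamma) / (2 * gamma)."""
    V = {s: 0.0 for s in states}

    def q(s, a, V):
        return R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)].items())

    while True:
        newV = {s: max(q(s, a, V) for a in actions[s]) for s in states}
        diff = max(abs(newV[s] - V[s]) for s in states)
        V = newV
        if diff < eps * (1 - gamma) / (2 * gamma):
            break
    policy = {s: max(actions[s], key=lambda a: q(s, a, V)) for s in states}
    return V, policy

# Toy sensor: 'high'/'low' battery. Transmitting from 'high' earns reward 1
# (a successful report) but may drain the battery; harvesting recharges it.
states = ['low', 'high']
actions = {'low': ['harvest'], 'high': ['transmit', 'harvest']}
P = {('low', 'harvest'): {'high': 0.8, 'low': 0.2},
     ('high', 'transmit'): {'low': 0.7, 'high': 0.3},
     ('high', 'harvest'): {'high': 1.0}}
R = {('low', 'harvest'): 0.0, ('high', 'transmit'): 1.0, ('high', 'harvest'): 0.0}

V, policy = value_iteration(states, actions, P, R)
print(V, policy)
```

With these numbers the greedy policy transmits whenever the battery is high, since idling in the high state earns nothing.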

  20. Structural differences of matrix metalloproteinases. Homology modeling and energy minimization of enzyme-substrate complexes

    DEFF Research Database (Denmark)

    Terp, G E; Christensen, I T; Jørgensen, Flemming Steen

    2000-01-01

    Matrix metalloproteinases are extracellular enzymes taking part in the remodeling of extracellular matrix. The structures of the catalytic domains of MMP1, MMP3, MMP7 and MMP8 are known, but structures of other enzymes belonging to this family still remain to be determined. A general approach to the homology modeling of matrix metalloproteinases, exemplified by the modeling of MMP2, MMP9, MMP12 and MMP14, is described. The models were refined using an energy minimization procedure developed for matrix metalloproteinases. This procedure includes incorporation of parameters for zinc and calcium ions in the AMBER 4.1 force field, applying a non-bonded approach and a full ion charge representation. Energy minimization of the apoenzymes yielded structures with distorted active sites, while reliable three-dimensional structures of the enzymes containing a substrate in the active site were obtained. The structural...

  1. Segmentation of Synchrotron Radiation micro-Computed Tomography Images using Energy Minimization via Graph Cuts

    International Nuclear Information System (INIS)

    Meneses, Anderson A.M.; Giusti, Alessandro; Almeida, André P. de; Nogueira, Liebert; Braz, Delson; Almeida, Carlos E. de; Barroso, Regina C.

    2012-01-01

    The research on applications of segmentation algorithms to Synchrotron Radiation X-Ray micro-Computed Tomography (SR-μCT) is an open problem, due to the interesting and well-known characteristics of SR images, such as the phase contrast effect. The Energy Minimization via Graph Cuts (EMvGC) algorithm represents a state-of-the-art segmentation algorithm with enormous potential for application in SR-μCT imaging. We describe the application of the EMvGC algorithm with swap moves for the segmentation of bone images acquired at the ELETTRA Laboratory (Trieste, Italy). - Highlights: ► Microstructures of Wistar rats' ribs are investigated with Synchrotron Radiation μCT imaging. ► The present work is part of a research on the effects of radiotherapy on the thoracic region. ► Application of the Energy Minimization via Graph Cuts algorithm for segmentation is described.

  2. Power allocation strategies to minimize energy consumption in wireless body area networks.

    Science.gov (United States)

    Kailas, Aravind

    2011-01-01

    The wide-scale deployment of wireless body area networks (WBANs) hinges on designing energy-efficient communication protocols that support reliable communication and prolong the network lifetime. Cooperative communications, a relatively new idea in wireless communications, offers the benefits of multi-antenna systems, thereby improving link reliability and boosting energy efficiency. In this short paper, the advantages of resorting to cooperative communications for WBANs in terms of minimized energy consumption are investigated. Adopting an energy model that encompasses energy consumption in the transmitter and receiver circuits, and transmission energy per bit, it is seen that cooperative transmission can improve the energy efficiency of the wireless network. In particular, the problem of optimal power allocation is studied under a targeted outage probability constraint. Two power allocation strategies are considered: with and without posture state information. Using analysis and simulation-based results, two key points are demonstrated: (i) allocating power to the on-body sensors making use of posture information can reduce the total energy consumption of the WBAN; and (ii) when the channel condition is good, it is better to recruit fewer relays for cooperation to enhance energy efficiency.

  3. Online Speed Scaling Based on Active Job Count to Minimize Flow Plus Energy

    DEFF Research Database (Denmark)

    Lam, Tak-Wah; Lee, Lap Kei; To, Isaac K. K.

    2013-01-01

    This paper is concerned with online scheduling algorithms that aim at minimizing the total flow time plus energy usage. The results are divided into two parts. First, we consider the well-studied “simple” speed scaling model and show how to analyze a speed scaling algorithm (called AJC) that chan...

  4. Metal-insulator transition in one-dimensional lattices with chaotic energy sequences

    International Nuclear Information System (INIS)

    Pinto, R.A.; Rodriguez, M.; Gonzalez, J.A.; Medina, E.

    2005-01-01

    We study electronic transport through a one-dimensional array of sites by using a tight binding Hamiltonian, whose site-energies are drawn from a chaotic sequence. The correlation degree between these energies is controlled by a parameter regulating the dynamic Lyapunov exponent measuring the degree of chaos. We observe the effect of chaotic sequences on the localization length, conductance, conductance distribution and wave function, finding evidence of a metal-insulator transition (MIT) at a critical degree of chaos. The one-dimensional metallic phase is characterized by a Gaussian conductance distribution and exhibits a peculiar non-selfaveraging.
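A minimal numerical version of this setup estimates the localization length from the growth rate of the transfer-matrix product along the chain. The sketch below uses the logistic map as an illustrative chaotic source of site energies; the map, its parameters, and the energy scale are assumptions, not the authors' choices.

```python
import math

def site_energies_logistic(n, r=4.0, x0=0.3, scale=1.0):
    """Site energies drawn from the logistic map x -> r*x*(1-x),
    a simple chaotic sequence (illustrative choice), centred at zero."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(scale * (x - 0.5))
    return xs

def lyapunov_exponent(E, energies, t=1.0):
    """Growth rate of the tight-binding transfer-matrix product at energy E
    (hopping t); its inverse is the localization length in lattice spacings.
    The running vector is renormalized each step to avoid overflow."""
    v = (1.0, 0.0)
    log_norm = 0.0
    for e in energies:
        v = ((E - e) / t * v[0] - v[1], v[0])
        n = math.hypot(v[0], v[1])
        log_norm += math.log(n)
        v = (v[0] / n, v[1] / n)
    return log_norm / len(energies)

energies = site_energies_logistic(20000, scale=2.0)
lam = lyapunov_exponent(0.0, energies)
print("Lyapunov exponent:", lam, "-> localization length ~", 1.0 / lam, "sites")
```

For an uncorrelated chaotic sequence the exponent is positive, i.e. all states at the band centre are localized; the MIT discussed in the abstract emerges only when the control parameter tunes the correlations of the sequence.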

  5. Metal-insulator transition in one-dimensional lattices with chaotic energy sequences

    Energy Technology Data Exchange (ETDEWEB)

    Pinto, R.A. [Laboratorio de Fisica Estadistica, Centro de Fisica, Instituto Venezolano de Investigaciones Cientificas, Apartado 21827, Caracas 1020-A (Venezuela)]. E-mail: ripinto@ivic.ve; Rodriguez, M. [Laboratorio de Fisica Estadistica, Centro de Fisica, Instituto Venezolano de Investigaciones Cientificas, Apartado 21827, Caracas 1020-A (Venezuela); Gonzalez, J.A. [Laboratorio de Fisica Computacional, Centro de Fisica, Instituto Venezolano de Investigaciones Cientificas, Apartado 21827, Caracas 1020-A (Venezuela); Medina, E. [Laboratorio de Fisica Estadistica, Centro de Fisica, Instituto Venezolano de Investigaciones Cientificas, Apartado 21827, Caracas 1020-A (Venezuela)

    2005-06-20

    We study electronic transport through a one-dimensional array of sites by using a tight binding Hamiltonian, whose site-energies are drawn from a chaotic sequence. The correlation degree between these energies is controlled by a parameter regulating the dynamic Lyapunov exponent measuring the degree of chaos. We observe the effect of chaotic sequences on the localization length, conductance, conductance distribution and wave function, finding evidence of a metal-insulator transition (MIT) at a critical degree of chaos. The one-dimensional metallic phase is characterized by a Gaussian conductance distribution and exhibits a peculiar non-selfaveraging.

  6. Smart HVAC Control in IoT: Energy Consumption Minimization with User Comfort Constraints

    Directory of Open Access Journals (Sweden)

    Jordi Serra

    2014-01-01

    Smart grid is one of the main applications of the Internet of Things (IoT) paradigm. Within this context, this paper addresses the efficient energy consumption management of heating, ventilation, and air conditioning (HVAC) systems in smart grids with variable energy price. To that end, first, we propose an energy scheduling method that minimizes the energy consumption cost for a particular time interval, taking into account the energy price and a set of comfort constraints, that is, a range of temperatures according to the user's preferences for a given room. Then, we propose an energy scheduler where the user may choose to relax the temperature constraints to save more energy. Moreover, thanks to the IoT paradigm, the user may interact remotely with the HVAC control system. In particular, the user may decide remotely the temperature of comfort, while the temperature and energy consumption information is sent through the Internet and displayed on the end user's device. The proposed algorithms have been implemented in a real testbed, highlighting the potential gains that can be achieved in terms of both energy and cost.

  7. On the uniqueness of minimizers for a class of variational problems with Polyconvex integrand

    KAUST Repository

    Awi, Romeo

    2017-02-05

    We prove existence and uniqueness of minimizers for a family of energy functionals that arises in Elasticity and involves polyconvex integrands over a certain subset of displacement maps. This work extends previous results by Awi and Gangbo to a larger class of integrands. First, we study these variational problems over displacements for which the determinant is positive. Second, we consider a limit case in which the functionals are degenerate. In that case, the set of admissible displacements reduces to that of incompressible displacements which are measure preserving maps. Finally, we establish that the minimizer over the set of incompressible maps may be obtained as a limit of minimizers corresponding to a sequence of minimization problems over general displacements provided we have enough regularity on the dual problems. We point out that these results defy the direct methods of the calculus of variations.

  8. Segmentation of Synchrotron Radiation micro-Computed Tomography Images using Energy Minimization via Graph Cuts

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Anderson A.M. [Federal University of Western Para (Brazil); Physics Institute, Rio de Janeiro State University (Brazil); Giusti, Alessandro [IDSIA (Dalle Molle Institute for Artificial Intelligence), University of Lugano (Switzerland); Almeida, Andre P. de, E-mail: apalmeid@gmail.com [Physics Institute, Rio de Janeiro State University (Brazil); Nuclear Engineering Program, Federal University of Rio de Janeiro (Brazil); Nogueira, Liebert; Braz, Delson [Nuclear Engineering Program, Federal University of Rio de Janeiro (Brazil); Almeida, Carlos E. de [Radiological Sciences Laboratory, Rio de Janeiro State University (Brazil); Barroso, Regina C. [Physics Institute, Rio de Janeiro State University (Brazil)

    2012-07-15

    The research on applications of segmentation algorithms to Synchrotron Radiation X-Ray micro-Computed Tomography (SR-μCT) is an open problem, due to the interesting and well-known characteristics of SR images, such as the phase contrast effect. The Energy Minimization via Graph Cuts (EMvGC) algorithm represents a state-of-the-art segmentation algorithm with enormous potential for application in SR-μCT imaging. We describe the application of the EMvGC algorithm with swap moves for the segmentation of bone images acquired at the ELETTRA Laboratory (Trieste, Italy). - Highlights: ► Microstructures of Wistar rats' ribs are investigated with Synchrotron Radiation μCT imaging. ► The present work is part of a research on the effects of radiotherapy on the thoracic region. ► Application of the Energy Minimization via Graph Cuts algorithm for segmentation is described.

  9. Energy minimization of mobile video devices with a hardware H.264/AVC encoder based on energy-rate-distortion optimization

    Science.gov (United States)

    Kang, Donghun; Lee, Jungeon; Jung, Jongpil; Lee, Chul-Hee; Kyung, Chong-Min

    2014-09-01

    In battery-powered mobile video systems, reducing the encoder's compression energy consumption is critical to prolonging battery lifetime. Previous energy-rate-distortion (E-R-D) optimization methods based on software codecs are not suitable for practical mobile camera systems because their energy consumption is too large and their encoding rate is too low. In this paper, we propose an E-R-D model for a hardware codec based on a gate-level simulation framework that measures switching activity and energy consumption. From the proposed E-R-D model, an energy-minimizing algorithm for mobile video camera sensors has been developed, with the GOP (Group of Pictures) size and QP (Quantization Parameter) as run-time control variables. Our experimental results show that the proposed algorithm provides up to 31.76% energy savings while satisfying the rate and distortion constraints.

  10. Smart HVAC control in IoT: energy consumption minimization with user comfort constraints.

    Science.gov (United States)

    Serra, Jordi; Pubill, David; Antonopoulos, Angelos; Verikoukis, Christos

    2014-01-01

    Smart grid is one of the main applications of the Internet of Things (IoT) paradigm. Within this context, this paper addresses the efficient energy consumption management of heating, ventilation, and air conditioning (HVAC) systems in smart grids with variable energy price. To that end, first, we propose an energy scheduling method that minimizes the energy consumption cost for a particular time interval, taking into account the energy price and a set of comfort constraints, that is, a range of temperatures according to user's preferences for a given room. Then, we propose an energy scheduler where the user may select to relax the temperature constraints to save more energy. Moreover, thanks to the IoT paradigm, the user may interact remotely with the HVAC control system. In particular, the user may decide remotely the temperature of comfort, while the temperature and energy consumption information is sent through Internet and displayed at the end user's device. The proposed algorithms have been implemented in a real testbed, highlighting the potential gains that can be achieved in terms of both energy and cost.
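The scheduling idea, minimizing energy cost under a comfort band when prices vary by hour, can be sketched with a toy first-order thermal model and brute-force search over on/off sequences. All numbers, the model, and the function names below are illustrative assumptions, not the authors' testbed or algorithm.

```python
from itertools import product

def simulate(u_seq, t0, t_out, a, b):
    """Simple first-order thermal model: each hour the room relaxes toward
    the outdoor temperature, and running the AC (u=1) cools it by |b| deg."""
    temps, t = [], t0
    for u, to in zip(u_seq, t_out):
        t = t + a * (to - t) + b * u
        temps.append(t)
    return temps

def cheapest_schedule(prices, t_out, t0, band, a=0.3, b=-2.0):
    """Exhaustive search over on/off sequences (fine for short horizons):
    minimize energy cost subject to the comfort band each hour."""
    best = None
    for u in product((0, 1), repeat=len(prices)):
        temps = simulate(u, t0, t_out, a, b)
        if all(band[0] <= T <= band[1] for T in temps):
            cost = sum(p * ui for p, ui in zip(prices, u))
            if best is None or cost < best[0]:
                best = (cost, u)
    return best

# Hypothetical hourly prices on a hot afternoon; comfort band 20-26 C.
prices = [1, 5, 2, 6, 3, 1]
t_out = [30] * 6
best = cheapest_schedule(prices, t_out, t0=23.0, band=(20.0, 26.0))
print(best)   # (cost, on/off sequence)
```

Relaxing the band, as the paper's second scheduler lets the user do, enlarges the feasible set and can only lower the optimal cost.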

  11. Evaluation of the accuracy of the free-energy-minimization method

    International Nuclear Information System (INIS)

    Najafabadi, R.; Srolovitz, D.J.

    1995-01-01

    We have made a detailed comparison between three competing methods for determining the free energies of solids and their defects: thermodynamic integration of Monte Carlo (TIMC) data, the quasiharmonic (QH) model, and the free-energy-minimization (FEM) method. The accuracy of these methods decreases from TIMC to QH to FEM, while the computational efficiency improves in that order. All three methods yield perfect-crystal lattice parameters and free energies at finite temperatures which are in good agreement for three different Cu interatomic potentials [embedded atom method (EAM), Morse and Lennard-Jones]. The FEM error (relative to the TIMC) in the (001) surface free energy and in the vacancy formation energy was found to be much larger for the EAM potential than for the other two potentials. Part of the errors in the FEM determination of the free energies is associated with anharmonicities in the interatomic potentials, with the remainder attributed to decoupling of the atomic vibrations. The anharmonicity of the EAM potential was found to be unphysically large compared with experimental determinations of the vacancy formation entropy. Based upon these results, we show that the FEM method provides a reasonable compromise between accuracy and computational demands. However, the accuracy of this approach is sensitive to the choice of interatomic potential and the nature of the defect to which it is applied. The accuracy of the FEM is best in high-symmetry environments (perfect crystal, high-symmetry defects, etc.) and when used to describe materials where the anharmonicity is not too large.

  12. Evaluation of the carotid artery stenosis based on minimization of mechanical energy loss of the blood flow.

    Science.gov (United States)

    Sia, Sheau Fung; Zhao, Xihai; Li, Rui; Zhang, Yu; Chong, Winston; He, Le; Chen, Yu

    2016-11-01

    Internal carotid artery stenosis requires accurate risk assessment for the prevention of stroke. Although the internal carotid artery area stenosis ratio at the common carotid artery bifurcation can be used as one diagnostic method for internal carotid artery stenosis, the accuracy of the results still depends on the measurement technique. The purpose of this study is to propose a novel method to estimate the effect of internal carotid artery stenosis on the blood flow based on the concept of minimization of energy loss. Eight internal carotid arteries from different medical centers were diagnosed as stenosed, with plaques found at different locations on the vessel. A computational fluid dynamics solver was developed based on an open-source code (OpenFOAM) to compute the flow ratio and energy loss of these stenosed internal carotid arteries. For comparison, a healthy internal carotid artery and an idealized internal carotid artery model were also tested and compared with the stenosed arteries in terms of flow ratio and energy loss. We found that at a given common carotid artery bifurcation, there must be a certain flow distribution between the internal and external carotid arteries for which the total energy loss at the bifurcation is at a minimum, and that for a given common carotid artery flow rate, an irregularly shaped plaque at the bifurcation consistently resulted in a large minimized energy loss. Thus, minimization of energy loss can be used as an indicator for the estimation of internal carotid artery stenosis.
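The minimum-energy-loss flow split can be illustrated with a lumped resistance model of the bifurcation. This is a deliberate simplification: the quadratic loss law and the resistance values are assumptions for illustration, not the paper's CFD solver, and a stenosis is represented simply as an increased lumped resistance.

```python
def total_loss(q_ica, q_total, r_ica, r_eca):
    """Lumped quadratic loss model: dissipation ~ resistance * flow^2
    in each branch; the ECA carries whatever the ICA does not."""
    q_eca = q_total - q_ica
    return r_ica * q_ica ** 2 + r_eca * q_eca ** 2

def min_loss_split(q_total, r_ica, r_eca, steps=10_000):
    """Scan ICA flows on a grid and keep the split minimizing total loss."""
    best_q, best_loss = 0.0, float("inf")
    for i in range(steps + 1):
        q = q_total * i / steps
        loss = total_loss(q, q_total, r_ica, r_eca)
        if loss < best_loss:
            best_q, best_loss = q, loss
    return best_q, best_loss

# Hypothetical resistances: a stenosed ICA (higher resistance) shifts the
# minimum-loss flow distribution toward the ECA.
q_healthy, _ = min_loss_split(10.0, 1.0, 1.0)
q_stenosed, _ = min_loss_split(10.0, 4.0, 1.0)
print(q_healthy, q_stenosed)   # analytic optimum: Q * r_eca / (r_ica + r_eca)
```

Minimizing r1·q² + r2·(Q − q)² gives q* = Q·r2/(r1 + r2), so the equal-resistance case splits 50/50 while the stenosed case diverts flow away from the narrowed branch, mirroring the indicator the study proposes.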

  13. Learning sequences on the subject of energy. Secondary school stage 1. Lernsequenzen zum Thema Energie. Sekundarstufe 1

    Energy Technology Data Exchange (ETDEWEB)

    1986-01-01

    The ten learning sequences build on one another, each taking up a particular aspect of the energy field. The subject notebooks are self-contained and can therefore be used independently. Apart from factual data and energy-related information, the material for the teacher contains: proposals for teaching, suggestions for further activities, sample solutions for the pupils' worksheets, and references to the literature and media. The pupils' worksheets are differentiated, so the learning sequences can be used in all classes of secondary school stage 1. The multicoloured transparencies for overhead projectors are intended both to motivate and to help check learning outcomes.

  14. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x_k. The auxiliary functions g_k(x): D ⊂ R^J → R_+ are nonnegative on the set D, each x_k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = D̄, the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k−1}(x) − G_{k−1}(x_{k−1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x_k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x_k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x_k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton–Raphson method. The proof techniques used for SUMMA can be extended to obtain related results.
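A minimal instance of sequential unconstrained minimization is the log-barrier method, one of the particular cases named in the abstract: minimize f(x) = x² subject to x ≥ 1 by solving a sequence of unconstrained problems with a shrinking barrier weight. (The log barrier does not literally satisfy the nonnegativity condition on g_k; this sketch only illustrates the sequential-unconstrained idea, with arbitrary barrier weights.)

```python
import math

def argmin_1d(g, lo, hi, tol=1e-12):
    """Ternary search for the minimizer of a unimodal function on [lo, hi]."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def barrier_method(f, boundary, mus):
    """At step k, minimize G_k(x) = f(x) - mu_k * log(x - boundary); the
    barrier keeps the iterates feasible, and mu_k -> 0 drives them to the
    constrained minimizer."""
    xs = []
    for mu in mus:
        G = lambda x: f(x) - mu * math.log(x - boundary)
        xs.append(argmin_1d(G, boundary + 1e-9, 10.0))
    return xs

# minimize f(x) = x^2 subject to x >= 1; the constrained minimizer is x = 1.
f = lambda x: x * x
xs = barrier_method(f, 1.0, [1.0, 0.1, 0.01, 0.001, 1e-5])
print(xs)   # iterates decrease toward 1
```

Each unconstrained subproblem is convex, and the sequence of its minimizers approaches the constrained solution from the interior, exactly the limiting behaviour the abstract describes.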

  15. Design of Protein Multi-specificity Using an Independent Sequence Search Reduces the Barrier to Low Energy Sequences.

    Directory of Open Access Journals (Sweden)

    Alexander M Sevy

    2015-07-01

    Full Text Available Computational protein design has found great success in engineering proteins for thermodynamic stability, binding specificity, or enzymatic activity in a 'single state' design (SSD) paradigm. Multi-specificity design (MSD), on the other hand, involves considering the stability of multiple protein states simultaneously. We have developed a novel MSD algorithm, which we refer to as REstrained CONvergence in multi-specificity design (RECON). The algorithm allows each state to adopt its own sequence throughout the design process rather than enforcing a single sequence on all states. Convergence to a single sequence is encouraged through an incrementally increasing convergence restraint for corresponding positions. Compared to MSD algorithms that enforce (constrain) an identical sequence on all states, the energy landscape is simplified, which accelerates the search drastically. As a result, RECON can readily be used in simulations with a flexible protein backbone. We have benchmarked RECON on two design tasks. First, we designed antibodies derived from a common germline gene against their diverse targets to assess recovery of the germline, polyspecific sequence. Second, we designed "promiscuous", polyspecific proteins against all binding partners and measured recovery of the native sequence. We show that RECON is able to efficiently recover native-like, biologically relevant sequences in this diverse set of protein complexes.
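
    The convergence-restraint mechanism can be illustrated with a deliberately tiny toy (two states, three candidate residues, invented energies; RECON itself operates on full-atom energy functions and rotamer sets): each state first optimizes its own choice, and a ramping penalty for disagreement drives the states to a single consensus sequence.

```python
# Toy state-specific energies for three candidate residues (invented numbers).
E = {"state1": {"A": 0.0, "L": 1.0, "V": 3.0},
     "state2": {"A": 3.0, "L": 1.0, "V": 0.0}}
seq = {"state1": "A", "state2": "V"}   # each state starts at its own optimum

w = 0.25                               # convergence-restraint weight
while seq["state1"] != seq["state2"]:
    for s in seq:
        others = [seq[t] for t in seq if t != s]
        # each state re-optimizes its own choice, penalized for disagreement
        seq[s] = min(E[s], key=lambda r: E[s][r] + w * sum(r != o for o in others))
    w *= 2.0                           # ramp the restraint every design round

print(seq)   # both states have converged to a single residue
```

    The point of the sketch is the mechanism, not optimality: the restraint starts weak so each state explores its own low-energy choices, then grows until a single sequence remains.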

  16. Formal definition of coherency and computation of minimal cut sequences for binary dynamic and repairable systems

    International Nuclear Information System (INIS)

    Chaux, Pierre-Yves

    2013-01-01

    Preventive risk assessment of a complex system relies on dynamic models which describe the link between the system failure and the scenarios of failure and repair events of its components. The qualitative analysis of a binary dynamic and repairable system aims at computing and analysing the scenarios that lead to the system failure. Since such systems describe a large set of scenarios, only the most representative ones, called Minimal Cut Sequences (MCS), are of interest for the safety engineer. The lack of a formal definition for the MCS has generated multiple definitions, either specific to a given model (and thus not generic) or informal. This work proposes i) a formal framework and definition for the MCS that remain independent of the reliability model used, ii) a methodology to compute them using properties extracted from their formal definition, iii) an extension of the formal framework to multi-state components in order to perform the qualitative analysis of Boolean logic Driven Markov Processes (BDMP) models. Under the hypothesis that the scenarios implicitly described by any reliability model can always be represented by a finite automaton, this work defines the coherency of dynamic and repairable systems as a way to give a minimal representation of all scenarios that lead to the system failure. (author)

  17. A non-minimally coupled quintom dark energy model on the warped DGP brane

    International Nuclear Information System (INIS)

    Nozari, K; Azizi, T; Setare, M R; Behrouz, N

    2009-01-01

    We construct a quintom dark energy model with two non-minimally coupled scalar fields, one quintessence and the other phantom field, confined to the warped Dvali-Gabadadze-Porrati (DGP) brane. We show that this model accounts for crossing of the phantom divide line in appropriate subspaces of the model parameter space. This crossing occurs for both normal and self-accelerating branches of this DGP-inspired setup.

  18. Life on arginine for Mycoplasma hominis: clues from its minimal genome and comparison with other human urogenital mycoplasmas.

    Directory of Open Access Journals (Sweden)

    Sabine Pereyre

    2009-10-01

    Full Text Available Mycoplasma hominis is an opportunistic human mycoplasma. Two other pathogenic human species, M. genitalium and Ureaplasma parvum, reside within the same natural niche as M. hominis: the urogenital tract. These three species have overlapping, but distinct, pathogenic roles. They have minimal genomes and, thus, reduced metabolic capabilities characterized by distinct energy-generating pathways. Analysis of the M. hominis PG21 genome sequence revealed that it is the second smallest genome among self-replicating, free-living organisms (665,445 bp, 537 coding sequences (CDSs)). Five clusters of genes were predicted to have undergone horizontal gene transfer (HGT) between M. hominis and the phylogenetically distant U. parvum species. We reconstructed M. hominis metabolic pathways from the predicted genes, with particular emphasis on energy-generating pathways. The Embden-Meyerhof-Parnas pathway was incomplete, with a single enzyme absent. We identified the three proteins constituting the arginine dihydrolase pathway. This pathway was found to be essential for growth in vivo. The predicted presence of dimethylarginine dimethylaminohydrolase suggested that arginine catabolism is more complex than initially described. This enzyme may have been acquired by HGT from non-mollicute bacteria. Comparison of the three minimal mollicute genomes showed that 247 CDSs were common to all three genomes, whereas 220 CDSs were specific to M. hominis, 172 CDSs were specific to M. genitalium, and 280 CDSs were specific to U. parvum. Within these species-specific genes, two major sets of genes could be identified: one including genes involved in various energy-generating pathways, depending on the energy source used (glucose, urea, or arginine), and another involved in cytadherence and virulence.
Therefore, a minimal mycoplasma cell, not including cytadherence and virulence-related genes, could be envisaged containing a core genome (247 genes, plus a set of genes required for

  19. Identification of DNA-binding protein target sequences by physical effective energy functions: free energy analysis of lambda repressor-DNA complexes.

    Directory of Open Access Journals (Sweden)

    Caselle Michele

    2007-09-01

    Full Text Available Abstract Background Specific binding of proteins to DNA is one of the most common ways gene expression is controlled. Although general rules for DNA-protein recognition can be derived, the ambiguous and complex nature of this mechanism precludes a simple recognition code; therefore, the prediction of DNA target sequences is not straightforward. DNA-protein interactions can be studied using computational methods, which can complement the current experimental methods and offer some advantages. In the present work we use physical effective potentials to evaluate the DNA-protein binding affinities for the λ repressor-DNA complex, for which structural and thermodynamic experimental data are available. Results The binding free energy of two molecules can be expressed as the sum of an intermolecular energy (evaluated using a molecular mechanics forcefield), a solvation free energy term and an entropic term. Different solvation models are used, including distance-dependent dielectric constants, solvent accessible surface tension models and the Generalized Born model. The effect of conformational sampling by Molecular Dynamics simulations on the computed binding energy is assessed; results show that this effect is in general negative and that the reproducibility of the experimental values decreases as the simulation time considered increases. The free energy of binding for non-specific complexes, estimated using the best energetic model, agrees with earlier theoretical suggestions. As a result of these analyses, we propose a protocol for the prediction of DNA-binding target sequences. The possibility of searching for regulatory elements within the bacteriophage λ genome using this protocol is explored. Our analysis shows good prediction capabilities, even in the absence of any thermodynamic data and information on the naturally recognized sequence. Conclusion This study supports the conclusion that physics-based methods can offer a completely complementary

  20. Potential Evaluation of Energy Supply System in Grid Power System, Commercial, and Residential Sectors by Minimizing Energy Cost

    Science.gov (United States)

    Oda, Takuya; Akisawa, Atushi; Kashiwagi, Takao

    If economic activity in the commercial and residential sectors continues to grow, improvement in the conversion efficiencies of energy supply systems is necessary for CO2 mitigation. In recent years, the electricity-driven hot water heat pump (EDHP) and solar photovoltaics (PV) have been commercialized. Fuel cell (FC) co-generation systems (CGS) for the commercial and residential sectors will be commercialized in the future. The aim is to indicate the ideal energy supply system for the user sectors, one that manages both economic cost and CO2 mitigation, in coordination with the grid power system. In the paper, the cooperative Japanese energy supply systems are modeled by linear programming. The model includes the grid power system and the energy systems of five commercial sectors and a residential sector. Sector demands are given for the target years 2005 to 2025. Twenty-four-hour loads for each of three annual seasons are considered. The energy systems are simulated to minimize the total cost of energy supply and to mitigate CO2 emissions. As a result, the ideal energy system for 2025 is shown: the CGS capacity grows to 30% (62 GW) of the total power system, and the EDHP capacity is 26 GW, in the commercial and residential sectors.
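
    The cost-minimizing dispatch at the core of such a linear program can be illustrated with a merit-order toy using hypothetical capacities and unit costs; for this single-period model with only capacity and demand constraints, cheapest-first dispatch coincides with the LP optimum.

```python
# Plants: (name, capacity in GW, unit cost) -- hypothetical numbers.
plants = [("PV", 20, 0), ("FC-CGS", 62, 50), ("grid", 100, 80)]
demand = 100.0   # load to be met in this single period

dispatch, cost = [], 0.0
for name, cap, unit_cost in sorted(plants, key=lambda p: p[2]):  # cheapest first
    q = min(cap, demand)            # run each source up to capacity or remaining need
    dispatch.append((name, q))
    cost += q * unit_cost
    demand -= q

print(dispatch, cost)   # the zero-cost PV and the CGS saturate before the costlier grid
```

    The paper's actual model adds intertemporal load profiles and CO2 constraints, which is what makes a full LP (rather than this greedy pass) necessary.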

  1. Impairment in explicit visuomotor sequence learning is related to loss of microstructural integrity of the corpus callosum in multiple sclerosis patients with minimal disability.

    Science.gov (United States)

    Bonzano, L; Tacchino, A; Roccatagliata, L; Sormani, M P; Mancardi, G L; Bove, M

    2011-07-15

    Sequence learning can be investigated by serial reaction-time (SRT) paradigms. Explicit learning occurs when subjects have to recognize a test sequence, and it has been shown to activate the frontoparietal network in both the contralateral and ipsilateral hemispheres. Thus, the left and right superior longitudinal fasciculi (SLF), connecting the intra-hemispheric frontoparietal circuits, could have a role in explicit unimanual visuomotor learning. Also, as both hemispheres are involved, we could hypothesize that the corpus callosum (CC) has a role in this process. Pathological damage in both the SLF and the CC has been detected in patients with Multiple Sclerosis (PwMS), and microstructural alterations can be quantified by Diffusion Tensor Imaging (DTI). In light of these findings, we inquired whether PwMS with minimal disability showed impairments in explicit visuomotor sequence learning and whether this could be due to loss of white matter integrity in these intra- and inter-hemispheric white matter pathways. Thus, we combined DTI analysis with a modified version of the SRT task based on finger opposition movements in a group of PwMS with minimal disability. We found that performance in explicit sequence learning was significantly reduced in these patients with respect to healthy subjects; the amount of sequence-specific learning was found to be more strongly correlated with fractional anisotropy (FA) in the CC (r=0.93) than in the left (r=0.28) and right SLF (r=0.27) (p for interaction = 0.005 and 0.04, respectively). This finding suggests that an inter-hemispheric information exchange between the homologous areas is required to successfully accomplish the task and indirectly supports the role of the right (ipsilateral) hemisphere in explicit visuomotor learning. On the other hand, we found no significant correlation of the FA in the CC and in the SLFs with nonspecific learning (assessed when stimuli are randomly presented), supporting the hypothesis that inter

  2. Minimizing Characterization - Derived Waste at the Department of Energy Savannah River Site, Aiken, South Carolina

    Energy Technology Data Exchange (ETDEWEB)

    Van Pelt, R. S.; Amidon, M. B.; Reboul, S. H.

    2002-02-25

    Environmental restoration activities at the Department of Energy Savannah River Site (SRS) utilize innovative site characterization approaches and technologies that minimize waste generation. Characterization is typically conducted in phases, first by collecting large quantities of inexpensive data, followed by targeted minimally invasive drilling to collect depth-discrete soil/groundwater data, and concluded with the installation of permanent multi-level groundwater monitoring wells. Waste-reducing characterization methods utilize non-traditional drilling practices (sonic drilling), minimally intrusive (geoprobe, cone penetrometer) and non-intrusive (3-D seismic, ground penetration radar, aerial monitoring) investigative tools. Various types of sensor probes (moisture sensors, gamma spectroscopy, Raman spectroscopy, laser induced and X-ray fluorescence) and hydrophobic membranes (FLUTe) are used in conjunction with depth-discrete sampling techniques to obtain high-resolution 3-D plume profiles. Groundwater monitoring (short/long-term) approaches utilize multi-level sampling technologies (Strata-Sampler, Cone-Sipper, Solinst Waterloo, Westbay) and low-cost diffusion samplers for seepline/surface water sampling. Upon collection of soil and groundwater data, information is portrayed in a Geographic Information Systems (GIS) format for interpretation and planning purposes. At the SRS, the use of non-traditional drilling methods and minimally/non intrusive investigation approaches along with in-situ sampling methods has minimized waste generation and improved the effectiveness and efficiency of characterization activities.

  3. Minimizing Characterization - Derived Waste at the Department of Energy Savannah River Site, Aiken, South Carolina

    International Nuclear Information System (INIS)

    Van Pelt, R. S.; Amidon, M. B.; Reboul, S. H.

    2002-01-01

    Environmental restoration activities at the Department of Energy Savannah River Site (SRS) utilize innovative site characterization approaches and technologies that minimize waste generation. Characterization is typically conducted in phases, first by collecting large quantities of inexpensive data, followed by targeted minimally invasive drilling to collect depth-discrete soil/groundwater data, and concluded with the installation of permanent multi-level groundwater monitoring wells. Waste-reducing characterization methods utilize non-traditional drilling practices (sonic drilling), minimally intrusive (geoprobe, cone penetrometer) and non-intrusive (3-D seismic, ground penetration radar, aerial monitoring) investigative tools. Various types of sensor probes (moisture sensors, gamma spectroscopy, Raman spectroscopy, laser induced and X-ray fluorescence) and hydrophobic membranes (FLUTe) are used in conjunction with depth-discrete sampling techniques to obtain high-resolution 3-D plume profiles. Groundwater monitoring (short/long-term) approaches utilize multi-level sampling technologies (Strata-Sampler, Cone-Sipper, Solinst Waterloo, Westbay) and low-cost diffusion samplers for seepline/surface water sampling. Upon collection of soil and groundwater data, information is portrayed in a Geographic Information Systems (GIS) format for interpretation and planning purposes. At the SRS, the use of non-traditional drilling methods and minimally/non intrusive investigation approaches along with in-situ sampling methods has minimized waste generation and improved the effectiveness and efficiency of characterization activities

  4. Minimization of complementary energy to predict shear modulus of laminates with intralaminar cracks

    International Nuclear Information System (INIS)

    Giannadakis, K; Varna, J

    2012-01-01

    The most common damage mode, and the one examined in this work, is the formation of intralaminar cracks in the layers of laminates. These cracks can occur when the composite structure is subjected to mechanical and/or thermal loading, and they eventually lead to degradation of thermo-elastic properties. In the present work, the shear modulus reduction due to cracking is studied. Mathematical models exist in the literature for the simple case of cross-ply laminates; the in-plane shear modulus of a damaged laminate is considered in only a few studies. In the current work, the shear modulus reduction in cross-plies is analysed based on the principle of minimization of complementary energy. Hashin investigated the in-plane shear modulus reduction of cross-ply laminates with cracks inside the 90-layer using this variational approach, assuming that the in-plane shear stress in the layers does not depend on the thickness coordinate. In the present study, a more detailed and accurate approach to stress estimation is followed, using shape functions for this dependence with parameters obtained by minimization. The results for the complementary energy are then compared with the respective results from the literature, and finally an expression for shear modulus degradation is derived.

  5. The analysis of energy-time sequences in the nuclear power plants construction

    International Nuclear Information System (INIS)

    Milivojevic, S.; Jovanovic, V.; Riznic, J.

    1983-01-01

    The current development of nuclear energy poses many problems; one of them is nuclear power plant construction. Energy and time features of the construction, and their relative ratios, are evaluated through analysis of available data. The results point to the efficiency achieved in construction and, at the same time, provide a basis for realistic estimation of the energy-time sequences of future construction. (author)

  6. A Novel Low Energy Electron Microscope for DNA Sequencing and Surface Analysis

    Science.gov (United States)

    Mankos, M.; Shadman, K.; Persson, H.H.J.; N’Diaye, A.T.; Schmid, A.K.; Davis, R.W.

    2014-01-01

    Monochromatic, aberration-corrected, dual-beam low energy electron microscopy (MAD-LEEM) is a novel technique that is directed towards imaging nanostructures and surfaces with sub-nanometer resolution. The technique combines a monochromator, a mirror aberration corrector, an energy filter, and dual beam illumination in a single instrument. The monochromator reduces the energy spread of the illuminating electron beam, which significantly improves spectroscopic and spatial resolution. Simulation results predict that the novel aberration corrector design will eliminate the second rank chromatic and third and fifth order spherical aberrations, thereby improving the resolution into the sub-nanometer regime at landing energies as low as one hundred electron-Volts. The energy filter produces a beam that can extract detailed information about the chemical composition and local electronic states of non-periodic objects such as nanoparticles, interfaces, defects, and macromolecules. The dual flood illumination eliminates charging effects that are generated when a conventional LEEM is used to image insulating specimens. A potential application for MAD-LEEM is in DNA sequencing, which requires high resolution to distinguish the individual bases and high speed to reduce the cost. The MAD-LEEM approach images the DNA with low electron impact energies, which provides nucleobase contrast mechanisms without organometallic labels. Furthermore, the micron-size field of view when combined with imaging on the fly provides long read lengths, thereby reducing the demand on assembling the sequence. Experimental results from bulk specimens with immobilized single-base oligonucleotides demonstrate that base specific contrast is available with reflected, photo-emitted, and Auger electrons. 
Image contrast simulations of model rectangular features mimicking the individual nucleotides in a DNA strand have been developed to translate measurements of contrast on bulk DNA to the detectability of

  7. A novel low energy electron microscope for DNA sequencing and surface analysis.

    Science.gov (United States)

    Mankos, M; Shadman, K; Persson, H H J; N'Diaye, A T; Schmid, A K; Davis, R W

    2014-10-01

    Monochromatic, aberration-corrected, dual-beam low energy electron microscopy (MAD-LEEM) is a novel technique that is directed towards imaging nanostructures and surfaces with sub-nanometer resolution. The technique combines a monochromator, a mirror aberration corrector, an energy filter, and dual beam illumination in a single instrument. The monochromator reduces the energy spread of the illuminating electron beam, which significantly improves spectroscopic and spatial resolution. Simulation results predict that the novel aberration corrector design will eliminate the second rank chromatic and third and fifth order spherical aberrations, thereby improving the resolution into the sub-nanometer regime at landing energies as low as one hundred electron-Volts. The energy filter produces a beam that can extract detailed information about the chemical composition and local electronic states of non-periodic objects such as nanoparticles, interfaces, defects, and macromolecules. The dual flood illumination eliminates charging effects that are generated when a conventional LEEM is used to image insulating specimens. A potential application for MAD-LEEM is in DNA sequencing, which requires high resolution to distinguish the individual bases and high speed to reduce the cost. The MAD-LEEM approach images the DNA with low electron impact energies, which provides nucleobase contrast mechanisms without organometallic labels. Furthermore, the micron-size field of view when combined with imaging on the fly provides long read lengths, thereby reducing the demand on assembling the sequence. Experimental results from bulk specimens with immobilized single-base oligonucleotides demonstrate that base specific contrast is available with reflected, photo-emitted, and Auger electrons. 
Image contrast simulations of model rectangular features mimicking the individual nucleotides in a DNA strand have been developed to translate measurements of contrast on bulk DNA to the detectability of

  8. Minimization of energy consumption in HVAC systems with data-driven models and an interior-point method

    International Nuclear Information System (INIS)

    Kusiak, Andrew; Xu, Guanglin; Zhang, Zijun

    2014-01-01

    Highlights: • We study the energy saving of HVAC systems with a data-driven approach. • We conduct an in-depth analysis of the topology of the developed Neural-Network-based HVAC model. • We apply the interior-point method to solve a Neural-Network-based HVAC optimization model. • The uncertain building occupancy is incorporated in the minimization of HVAC energy consumption. • A significant potential for saving HVAC energy is discovered. - Abstract: In this paper, a data-driven approach is applied to minimize the energy consumption of a heating, ventilating, and air conditioning (HVAC) system while maintaining the thermal comfort of a building with an uncertain occupancy level. The uncertainty of the arrival and departure rates of occupants is modeled by the Poisson and uniform distributions, respectively. The internal heating gain is calculated from the stochastic process of the building occupancy. Based on the observed and simulated data, a multilayer perceptron algorithm is employed to model and simulate the HVAC system. The data-driven models accurately predict future performance of the HVAC system based on the control settings and the observed historical information. An optimization model is formulated and solved with the interior-point method. The optimization results are compared with the results produced by the simulation models.
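
    The paper's workflow, fit a data-driven model of the plant and then optimize its settings within comfort constraints, can be sketched with a quadratic surrogate standing in for the multilayer perceptron and a closed-form minimizer standing in for the interior-point solver (all numbers below are hypothetical):

```python
def plant_energy(T):
    """Stand-in for the data-driven HVAC model: energy (kWh) vs. setpoint (deg C)."""
    return 2.0 * (30.0 - T) + 0.5 * (T - 24.0) ** 2   # invented cooling-load curve

# Fit a quadratic surrogate a*T^2 + b*T + c through three observed setpoints.
(x0, y0), (x1, y1), (x2, y2) = [(T, plant_energy(T)) for T in (22.0, 24.0, 26.0)]
a = ((y2 - y1) / (x2 - x1) - (y1 - y0) / (x1 - x0)) / (x2 - x0)
b = (y1 - y0) / (x1 - x0) - a * (x0 + x1)

# Minimize the surrogate inside the comfort band [22, 26] deg C.
T_star = min(max(-b / (2.0 * a), 22.0), 26.0)   # clamp the vertex to the band
print(T_star, plant_energy(T_star))
```

    The structure mirrors the abstract: a surrogate trained on observed data replaces the physical plant, and the optimization runs on the surrogate subject to comfort constraints.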

  9. Minimal free resolutions over complete intersections

    CERN Document Server

    Eisenbud, David

    2016-01-01

    This book introduces a theory of higher matrix factorizations for regular sequences and uses it to describe the minimal free resolutions of high syzygy modules over complete intersections. Such resolutions have attracted attention ever since the elegant construction of the minimal free resolution of the residue field by Tate in 1957. The theory extends the theory of matrix factorizations of a non-zero divisor, initiated by Eisenbud in 1980, which yields a description of the eventual structure of minimal free resolutions over a hypersurface ring. Matrix factorizations have had many other uses in a wide range of mathematical fields, from singularity theory to mathematical physics.
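
    A matrix factorization of f, in Eisenbud's sense, is a pair of matrices (A, B) with AB = BA = f·I. The classical first example, for f = x² + y², can be checked numerically:

```python
def matmul2(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def factorization(x, y):
    """Matrix factorization of f = x^2 + y^2: a pair (A, B) with A*B = B*A = f*I."""
    A = [[x, y], [-y, x]]
    B = [[x, -y], [y, x]]
    return A, B

x, y = 3.0, 4.0
A, B = factorization(x, y)
print(matmul2(A, B))   # (x^2 + y^2) times the 2x2 identity
```

    Over the hypersurface ring defined by f, alternately applying A and B yields the eventually periodic minimal free resolution that the book's higher matrix factorizations generalize to complete intersections.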

  10. Minimizing the energy spread within a single bunch by shaping its charge distribution

    International Nuclear Information System (INIS)

    Loew, G.A.; Wang, J.

    1984-06-01

    When electrons or positrons in a bunch pass through the periodic structure of a linear accelerator, they leave behind them energy in the form of longitudinal wake fields. The longitudinal fields left behind by early particles in a bunch decrease the energy of later particles. For a linear collider, the energy spread introduced within the bunches by this beam loading effect must be minimized because it limits the degree to which the particles can be focused to a small spot due to chromatic effects in the final focus system. For example, for the SLC, the allowable energy spread is ±0.5%. It has been known for some time that partial compensation of the longitudinal wake field effects can be obtained for any bunch by placing it ahead of the accelerating crest (in space), thereby letting the positive rising sinusoidal field offset the negative beam loading field. The work presented in this report shows that it is possible to obtain complete compensation, i.e., to reduce the energy spread essentially to zero by properly shaping the longitudinal charge distribution of the bunch and by placing it at the correct position on the wave.
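
    The compensation idea can be made concrete with a toy bunch model (slice spacing, wake strength, and scan range are invented numbers): each slice gains energy from the rf wave at its phase and loses energy to the wakes of the slices ahead of it, and scanning the bunch phase shows the spread shrinking when the bunch rides ahead of the crest.

```python
import math

slices = [i * 0.01 for i in range(11)]      # rf phase offsets, head (n = 0) to tail
wake_per_slice = 0.005                      # beam-loading loss per preceding slice (toy)

def energy_spread(phi0):
    """Peak-to-peak energy deviation for a bunch whose head sits at rf phase phi0."""
    gains = [math.cos(phi0 + s) - wake_per_slice * n    # rf gain minus wake loss
             for n, s in enumerate(slices)]
    return max(gains) - min(gains)

# Scan bunch phases ahead of the crest (phi0 <= 0, on the rising side of the wave).
best_phi = min((p * 1e-3 for p in range(-700, 1)), key=energy_spread)
print(best_phi, energy_spread(best_phi))
```

    The residual spread at the best phase is what the report's charge-shaping technique removes: with the right longitudinal charge distribution, the wake slope can be matched slice by slice rather than only on average.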

  11. On minimal energy Hartree-Fock states for the 2DEG at fractional fillings

    International Nuclear Information System (INIS)

    Cabo Montes Oca, A. de.

    1995-08-01

    Approximate minimal energy solutions of the previously discussed general class of Hartree-Fock (HF) states of the 2DEG at 1/3 and 2/3 filling factors are determined. Their self-energy spectrum is evaluated. Wannier states associated with the filled Bloch states are introduced in a lattice having three flux quanta per cell. They allow the ν = 1/3 HF Hamiltonian to be rewritten, approximately, as a sum of three independent tight-binding model Hamiltonians, one describing the dynamics in the band of occupied states and the others in the two bands of excited states. The magnitude of the hopping integral indicates the enhanced role that correlation energy should have in the present situation with respect to the case of the Yoshioka and Lee second-order energy calculation for the lowest energy HF state. Finally, the discussion also suggests the Wannier function, which spreads an electron over a three-quanta area, as a physical model for the composite-fermion mean-field one-particle state. (author). 11 refs, 5 figs

  12. A Hybrid Metaheuristic Approach for Minimizing the Total Flow Time in A Flow Shop Sequence Dependent Group Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Antonio Costa

    2014-07-01

    Full Text Available Production processes in Cellular Manufacturing Systems (CMS) often involve groups of parts sharing the same technological requirements in terms of tooling and setup. The issue of scheduling such parts through a flow-shop production layout is known as the Flow-Shop Group Scheduling (FSGS) problem or, when setup times are sequence-dependent, the Flow-Shop Sequence-Dependent Group Scheduling (FSDGS) problem. This paper addresses the FSDGS issue, proposing a hybrid metaheuristic procedure integrating features from Genetic Algorithms (GAs) and Biased Random Sampling (BRS) search techniques with the aim of minimizing the total flow time, i.e., the sum of the completion times of all jobs. A well-known benchmark of test cases, entailing problems with two, three, and six machines, is employed both for tuning the relevant parameters of the developed procedure and for assessing its performance against two metaheuristic algorithms recently presented in the literature. The obtained results, together with a properly arranged ANOVA analysis, highlight the superiority of the proposed approach in tackling the scheduling problem under investigation.
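
    The total flow time objective is easy to state in code. The sketch below simplifies the problem to a single machine with sequence-dependent setups (the paper's setting is a multi-machine flow shop with part groups) and brute-forces the permutations; with all setups zero, it recovers the classical shortest-processing-time rule.

```python
from itertools import permutations

def total_flow_time(seq, proc, setup):
    """Sum of job completion times with sequence-dependent setup times."""
    t, tft, prev = 0.0, 0.0, "start"
    for job in seq:
        t += setup[prev][job] + proc[job]   # setup from previous job, then process
        tft += t                            # accumulate this job's completion time
        prev = job
    return tft

proc = {"A": 2.0, "B": 3.0, "C": 1.0}       # hypothetical processing times
setup = {p: {j: 0.0 for j in proc} for p in list(proc) + ["start"]}  # zero setups

best = min(permutations(proc), key=lambda s: total_flow_time(s, proc, setup))
print(best, total_flow_time(best, proc, setup))
```

    Brute force is only viable for tiny instances; nonzero sequence-dependent setups break the simple shortest-processing-time ordering, which is why the paper resorts to GA and BRS metaheuristics.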

  13. The importance of regret minimization in the choice for renewable energy programmes: Evidence from a discrete choice experiment

    International Nuclear Information System (INIS)

    Boeri, Marco; Longo, Alberto

    2017-01-01

    This study provides a methodologically rigorous attempt to disentangle the impact of various factors – unobserved heterogeneity, information and environmental attitudes – on the inclination of individuals to exhibit either a utility maximization or a regret minimization behaviour in a discrete choice experiment for renewable energy programmes described by four attributes: greenhouse gas emissions, power outages, employment in the energy sector, and electricity bill. We explore the ability of different models – multinomial logit, random parameters logit, and hybrid latent class – and of different choice paradigms – utility maximization and regret minimization – in explaining people's choices for renewable energy programmes. The "pure" random regret random parameters logit model explains the choices of our respondents better than the other models, indicating that regret is an important choice paradigm and that choices for renewable energy programmes are mostly driven by regret rather than by rejoicing. In particular, we find that our respondents' choices are driven more by changes in greenhouse gas emissions than by reductions in power outages. Finally, we find that changing the level of information on one attribute has no effect on choices, and that being a member of an environmental organization makes a respondent more likely to be associated with the utility maximization choice framework. - Highlights: • The first paper to use the Random Regret Minimization choice paradigm in energy economics. • With a hybrid latent class model, choices conform to either utility or pure random regret. • The pure random regret random parameters logit model outperforms other models. • Reducing greenhouse gas emissions is more important than reducing power outages.
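
    The regret-based choice paradigm can be sketched in the binary-comparison form of random regret minimization (attribute levels and taste parameters below are invented): each alternative accumulates regret from pairwise attribute comparisons against the other alternatives, and choice probabilities follow a logit over negative regrets.

```python
import math

# Hypothetical programmes: (greenhouse gas cut in %, power outage hours per year).
X = {"wind": (40.0, 8.0), "solar": (30.0, 6.0), "biomass": (20.0, 10.0)}
beta = {"emissions": 0.05, "outages": -0.10}    # assumed taste parameters

def regret(i):
    """Systematic regret of alternative i: binary comparisons on each attribute."""
    r = 0.0
    for j in X:
        if j != i:
            for m, b in enumerate(beta.values()):
                r += math.log(1.0 + math.exp(b * (X[j][m] - X[i][m])))
    return r

R = {i: regret(i) for i in X}
Z = sum(math.exp(-r) for r in R.values())
P = {i: math.exp(-R[i]) / Z for i in X}         # logit over negative regrets
print(P)    # the dominated "biomass" programme receives the lowest probability
```

    Unlike utility maximization, regret depends on the full choice set: an alternative that is beaten on some attribute by any competitor accumulates regret even if its own levels are unchanged.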

  14. Deterministic and stochastic algorithms for resolving the flow fields in ducts and networks using energy minimization

    Science.gov (United States)

    Sochi, Taha

    2016-09-01

    Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
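
    A minimal instance of the energy-minimization principle for the Newtonian case: for two ducts in parallel carrying a fixed total flow, minimizing the viscous dissipation R1·q1² + R2·q2² reproduces the equal-pressure-drop split obtained from the conservation-based method (resistances and total flow below are hypothetical):

```python
# Two Newtonian ducts in parallel with hydraulic resistances R1, R2 and fixed
# total volumetric flow Q; dissipation is R1*q1^2 + R2*q2^2 with q2 = Q - q1.
R1, R2, Q = 2.0, 3.0, 10.0

q1 = 5.0                                   # initial guess for the flow in duct 1
for _ in range(100):                       # plain gradient descent on dissipation
    grad = 2.0 * R1 * q1 - 2.0 * R2 * (Q - q1)
    q1 -= 0.05 * grad

print(q1)   # converges to the equal-pressure-drop split Q * R2 / (R1 + R2)
```

    Gradient descent stands in here for the paper's deterministic optimizers; for non-Newtonian fluids the dissipation term is no longer quadratic in the flow, which is where the stochastic and derivative-free algorithms become useful.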

  15. Cost minimization in a full-scale conventional wastewater treatment plant: associated costs of biological energy consumption versus sludge production.

    Science.gov (United States)

    Sid, S; Volant, A; Lesage, G; Heran, M

    2017-11-01

    Energy consumption and sludge production minimization represent rising challenges for wastewater treatment plants (WWTPs). The goal of this study is to investigate how energy is consumed throughout the whole plant and how operating conditions affect this energy demand. A WWTP based on the activated sludge process was selected as a case study. Simulations were performed using a pre-compiled model implemented in GPS-X simulation software. Model validation was carried out by comparing experimental and modeling data of the dynamic behavior of the mixed liquor suspended solids (MLSS) concentration and nitrogen compounds concentration, energy consumption for aeration, mixing and sludge treatment and annual sludge production over a three-year period. In this plant, the energy required for bioreactor aeration was calculated at approximately 44% of the total energy demand. A cost optimization strategy was applied by varying the MLSS concentrations (from 1 to 8 gTSS/L) while recording energy consumption, sludge production and effluent quality. An increase in MLSS led to an increase in the oxygen requirement for biomass aeration, but it also reduced total sludge production. Results permit identification of a key MLSS concentration representing the best compromise between the required level of treatment, biological energy demand and sludge production while minimizing the overall costs.
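The trade-off described above can be caricatured numerically. The cost curves below are invented for illustration (the study's actual figures come from its calibrated GPS-X model): aeration cost grows with MLSS while sludge-disposal cost falls, so scanning the 1-8 gTSS/L range exposes an interior compromise concentration.

```python
# Toy illustration with invented cost curves (not the study's model): rising
# aeration energy versus falling sludge-disposal cost as MLSS increases.

def total_cost(mlss, aeration_coeff=1.0, sludge_coeff=16.0):
    """Hypothetical annual cost as a function of MLSS (gTSS/L)."""
    aeration = aeration_coeff * mlss ** 2   # oxygen demand rises with biomass
    sludge = sludge_coeff / mlss            # sludge production falls as MLSS rises
    return aeration + sludge

candidates = [x / 10 for x in range(10, 81)]   # scan 1.0 to 8.0 gTSS/L
best = min(candidates, key=total_cost)
assert 1.0 < best < 8.0                        # interior optimum, not a boundary value
```

With these assumed coefficients the minimum lands strictly inside the scanned range, mirroring the study's finding of a single compromise MLSS concentration.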

  16. MINIMIZATION OF IMPACTS PERTAINING TO EXTERNAL AND INTERNAL ENERGY SECURITY THREATS OF THERMAL POWER PLANTS

    Directory of Open Access Journals (Sweden)

    V. N. Nagornov

    2012-01-01

    Full Text Available The paper contains a classification of internal and external threats for thermal power plants and recommendations on minimization of these risks. A set of concrete measures aimed at ensuring TPP energy security is presented in the paper. The system comprises preventive measures aimed at reducing the possibility that internal and external threats emerge and are carried out; measures that decrease the susceptibility of fuel- and energy-supply systems to these threats; and liquidation measures that ensure elimination of emergency-situation consequences and restoration of fuel and power supply to consumers.

  17. MINIMIZING THE MHD POTENTIAL ENERGY FOR THE CURRENT HOLE REGION IN TOKAMAKS

    International Nuclear Information System (INIS)

    CHU, M.S; PARKS, P.B

    2004-01-01

    The current hole region in the tokamak has been observed to arise naturally during the development of internal transport barriers. The magnetohydrodynamic (MHD) potential energy in the current hole region is shown to be determined completely in terms of the displacements at the edge of the current hole. For modes with finite toroidal mode number n ≠ 0, the minimized potential energy is the same as if the current hole region were a vacuum region. For modes with toroidal mode number n = 0, the displacement is a superposition of three types of independent displacements: a vertical displacement or displacements that compress only the plasma, or the toroidal field uniformly. Thus for ideal MHD perturbations of plasma with a current hole, the plasma behaves as if it were bordered by an extra "internal vacuum region". The relevance of the present work to computer simulations of plasma with a current hole region is also discussed.

  18. Minimizing the magnetohydrodynamic potential energy for the current hole region in tokamaks

    International Nuclear Information System (INIS)

    Chu, M.S.; Parks, P.B.

    2004-01-01

    The current hole region in the tokamak has been observed to arise naturally during the development of internal transport barriers. The magnetohydrodynamic (MHD) potential energy in the current hole region is shown to be determined completely in terms of the displacements at the edge of the current hole. For modes with finite toroidal mode number n≠0, the minimized potential energy is the same as if the current hole region were a vacuum region. For modes with toroidal mode number n=0, the displacement is a superposition of three types of independent displacements: a vertical displacement or displacements that compress only the plasma, or the toroidal field uniformly. Thus for ideal MHD perturbations of plasma with a current hole, the plasma behaves as if it were bordered by an extra "internal vacuum region." The relevance of the present work to computer simulations of plasma with a current hole region is also discussed.

  19. On the normalization of the minimum free energy of RNAs by sequence length.

    Science.gov (United States)

    Trotta, Edoardo

    2014-01-01

    The minimum free energy (MFE) of ribonucleic acids (RNAs) increases at an apparent linear rate with sequence length. Simple indices, obtained by dividing the MFE by the number of nucleotides, have been used for a direct comparison of the folding stability of RNAs of various sizes. Although this normalization procedure has been used in several studies, the relationship between normalized MFE and length has not yet been investigated in detail. Here, we demonstrate that the variation of MFE with sequence length is not linear and is significantly biased by the mathematical formula used for the normalization procedure. For this reason, the normalized MFEs strongly decrease as hyperbolic functions of length and produce unreliable results when applied for the comparison of sequences with different sizes. We also propose a simple modification of the normalization formula that corrects the bias enabling the use of the normalized MFE for RNAs longer than 40 nt. Using the new corrected normalized index, we analyzed the folding free energies of different human RNA families showing that most of them present an average MFE density more negative than expected for a typical genomic sequence. Furthermore, we found that a well-defined and restricted range of MFE density characterizes each RNA family, suggesting the use of our corrected normalized index to improve RNA prediction algorithms. Finally, in coding and functional human RNAs the MFE density appears scarcely correlated with sequence length, consistent with a negligible role of thermodynamic stability demands in determining RNA size.
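The bias described above follows from simple algebra: if MFE grows affinely with length, MFE(L) ≈ aL + b with b ≠ 0, then the naive density MFE/L = a + b/L varies hyperbolically with L, whereas subtracting the intercept before dividing removes the length dependence. The numbers below are hypothetical, chosen only to make the effect visible; they are not the paper's fitted values, and the paper's actual corrected formula may differ.

```python
# Illustrative sketch with hypothetical parameters: why dividing MFE by raw
# length biases the comparison, and how an intercept correction removes it.

a, b = -0.3, 5.0           # assumed slope (kcal/mol per nt) and offset (kcal/mol)

def mfe(length):
    """Toy affine model of minimum free energy versus sequence length."""
    return a * length + b

lengths = (50, 100, 400)
naive = [mfe(L) / L for L in lengths]          # = a + b/L, hyperbolic in L
corrected = [(mfe(L) - b) / L for L in lengths]  # = a, length-free

# The naive index drifts with length; the corrected one does not.
assert max(naive) - min(naive) > 0.08
assert all(abs(x - a) < 1e-9 for x in corrected)
```

This is exactly the behavior the abstract describes: a spurious hyperbolic trend in MFE/L that disappears once the normalization formula accounts for the non-zero intercept.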

  20. Environmental Restoration Program Waste Minimization and Pollution Prevention Awareness Program Plan

    Energy Technology Data Exchange (ETDEWEB)

    Grumski, J. T.; Swindle, D. W.; Bates, L. D.; DeLozier, M. F.P.; Frye, C. E.; Mitchell, M. E.

    1991-09-30

    In response to DOE Order 5400.1, this plan outlines the requirements for a Waste Minimization and Pollution Prevention Awareness Program for the Environmental Restoration (ER) Program at Martin Marietta Energy Systems, Inc. Statements of the national, Department of Energy, Energy Systems, and Energy Systems ER Program policies on waste minimization are included and reflect the attitudes of these organizations and their commitment to the waste minimization effort. Organizational responsibilities for the waste minimization effort are clearly defined and discussed, and the program objectives and goals are set forth. Waste assessment is addressed as being a key element in developing the waste generation baseline. There are discussions on the scope of ER-specific waste minimization techniques and approaches to employee awareness and training. There is also a discussion on the process for continual evaluation of the Waste Minimization Program. Appendixes present an implementation schedule for the Waste Minimization and Pollution Prevention Program, the program budget, an organization chart, and the ER waste minimization policy.

  1. Environmental Restoration Program Waste Minimization and Pollution Prevention Awareness Program Plan

    International Nuclear Information System (INIS)

    1991-01-01

    In response to DOE Order 5400.1, this plan outlines the requirements for a Waste Minimization and Pollution Prevention Awareness Program for the Environmental Restoration (ER) Program at Martin Marietta Energy Systems, Inc. Statements of the national, Department of Energy, Energy Systems, and Energy Systems ER Program policies on waste minimization are included and reflect the attitudes of these organizations and their commitment to the waste minimization effort. Organizational responsibilities for the waste minimization effort are clearly defined and discussed, and the program objectives and goals are set forth. Waste assessment is addressed as being a key element in developing the waste generation baseline. There are discussions on the scope of ER-specific waste minimization techniques and approaches to employee awareness and training. There is also a discussion on the process for continual evaluation of the Waste Minimization Program. Appendixes present an implementation schedule for the Waste Minimization and Pollution Prevention Program, the program budget, an organization chart, and the ER waste minimization policy.

  2. The exponentiated Hencky-logarithmic strain energy. Part II: Coercivity, planar polyconvexity and existence of minimizers

    Science.gov (United States)

    Neff, Patrizio; Lankeit, Johannes; Ghiba, Ionel-Dumitrel; Martin, Robert; Steigmann, David

    2015-08-01

    We consider a family of isotropic volumetric-isochoric decoupled strain energies W(F) = (μ/k) e^(k ‖dev log U‖²) + (κ/(2k̂)) e^(k̂ (tr log U)²) based on the Hencky-logarithmic (true, natural) strain tensor log U, where μ > 0 is the infinitesimal shear modulus, κ = (2μ + 3λ)/3 is the infinitesimal bulk modulus with λ the first Lamé constant, k and k̂ are dimensionless parameters, F = ∇φ is the gradient of deformation, U = √(FᵀF) is the right stretch tensor and dev log U is the deviatoric part (the projection onto the traceless tensors) of the strain tensor log U. For small elastic strains, the energies reduce to first order to the classical quadratic Hencky energy, which is known to be not rank-one convex. The main result in this paper is that in plane elastostatics the energies of the family are polyconvex for suitable values of the dimensionless parameter k, extending a previous finding on their rank-one convexity. Our method uses a judicious application of Steigmann's polyconvexity criteria based on the representation of the energy in terms of the principal invariants of the stretch tensor U. These energies also satisfy suitable growth and coercivity conditions. We formulate the equilibrium equations, and we prove the existence of minimizers by the direct methods of the calculus of variations.

  3. GROUPING WEB ACCESS SEQUENCES USING SEQUENCE ALIGNMENT METHOD

    OpenAIRE

    BHUPENDRA S CHORDIA; KRISHNAKANT P ADHIYA

    2011-01-01

    In web usage mining, grouping of web access sequences can be used to determine the behavior or intent of a set of users. A key issue in grouping web sessions is how to measure the similarity between web sessions, and traditional measurement methods have many shortcomings. The task of grouping web sessions based on similarity, which consists of maximizing the intra-group similarity while minimizing the inter-group similarity, is done using a sequence alignment method. This paper introduces a new method to group we...
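A sequence-alignment similarity between web sessions can be sketched with a standard global alignment. The scoring scheme and the normalization below are generic choices for illustration, not the paper's exact method; sessions are treated as sequences of page identifiers.

```python
# Minimal sketch: Needleman-Wunsch global alignment score between two web
# sessions (lists of page identifiers), normalized to a similarity in [0, 1].
# Match/mismatch/gap weights are illustrative assumptions.

def alignment_score(s, t, match=2, mismatch=-1, gap=-1):
    """Global alignment score via dynamic programming."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap
    for j in range(1, n + 1):
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # align s[i-1] with t[j-1]
                           dp[i - 1][j] + gap,       # gap in t
                           dp[i][j - 1] + gap)       # gap in s
    return dp[m][n]

def session_similarity(s, t):
    """Normalize by the score of a perfect alignment of the longer session."""
    best = 2 * max(len(s), len(t))
    return alignment_score(s, t) / best if best else 1.0

a = ["home", "products", "cart", "checkout"]
b = ["home", "products", "checkout"]
c = ["blog", "about"]
assert session_similarity(a, a) == 1.0
assert session_similarity(a, b) > session_similarity(a, c)
```

Similarities computed this way can feed any standard clustering step that maximizes intra-group and minimizes inter-group similarity.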

  4. 10 CFR 20.1406 - Minimization of contamination.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Minimization of contamination. 20.1406 Section 20.1406... License Termination § 20.1406 Minimization of contamination. (a) Applicants for licenses, other than early... procedures for operation will minimize, to the extent practicable, contamination of the facility and the...

  5. Minimal and non-minimal standard models: Universality of radiative corrections

    International Nuclear Information System (INIS)

    Passarino, G.

    1991-01-01

    The possibility of describing electroweak processes by means of models with a non-minimal Higgs sector is analyzed. The renormalization procedure which leads to a set of fitting equations for the bare parameters of the lagrangian is first reviewed for the minimal standard model. A solution of the fitting equations is obtained, which correctly includes large higher-order corrections. Predictions for physical observables, notably the W boson mass and the Z⁰ partial widths, are discussed in detail. Finally, the extension to non-minimal models is described under the assumption that new physics will appear only inside the vector boson self-energies, and the concept of universality of radiative corrections is introduced, showing that to a large extent they are insensitive to the details of the enlarged Higgs sector. Consequences for the bounds on the top quark mass are also discussed. (orig.)

  6. Potential pollution prevention and waste minimization for Department of Energy operations

    International Nuclear Information System (INIS)

    Griffin, J.; Ischay, C.; Kennicott, M.; Pemberton, S.; Tull, D.

    1995-10-01

    With the tightening of budgets and limited resources, it is important to ensure operations are carried out in a cost-effective and productive manner. Implementing an effective Pollution Prevention strategy can help to reduce the costs of waste management and prevent harmful releases to the environment. This document provides an estimate of the Department of Energy's waste reduction potential from the implementation of Pollution Prevention opportunities. A team of Waste Minimization and Pollution Prevention professionals was formed to collect the data and make the estimates. The report includes a list of specific reduction opportunities for various waste generating operations and waste types. A generic set of recommendations to achieve these reduction opportunities is also provided as well as a general discussion of the approach and assumptions made for each waste generating operation

  7. Non-minimal derivative coupling scalar field and bulk viscous dark energy

    Energy Technology Data Exchange (ETDEWEB)

    Mostaghel, Behrang [Shahid Beheshti University, Department of Physics, Tehran (Iran, Islamic Republic of); Moshafi, Hossein [Institute for Advanced Studies in Basic Sciences, Department of Physics, Zanjan (Iran, Islamic Republic of); Movahed, S.M.S. [Shahid Beheshti University, Department of Physics, Tehran (Iran, Islamic Republic of); Institute for Research in Fundamental Sciences (IPM), School of Physics, Tehran (Iran, Islamic Republic of)

    2017-08-15

    Inspired by thermodynamical dissipative phenomena, we consider bulk viscosity for dark fluid in a spatially flat two-component Universe. Our viscous dark energy model represents phantom-crossing which avoids big-rip singularity. We propose a non-minimal derivative coupling scalar field with zero potential leading to accelerated expansion of the Universe in the framework of bulk viscous dark energy model. In this approach, the coupling constant, κ, is related to viscosity coefficient, γ, and the present dark energy density, Ω{sub DE}{sup 0}. This coupling is bounded as κ ∈ [-1/(9H{sub 0}{sup 2}(1 - Ω{sub DE}{sup 0})), 0]. We implement recent observational data sets including a joint light-curve analysis (JLA) for SNIa, gamma ray bursts (GRBs) for most luminous astrophysical objects at high redshifts, baryon acoustic oscillations (BAO) from different surveys, Hubble parameter from HST project, Planck CMB power spectrum and lensing to constrain model free parameters. The joint analysis of JLA + GRBs + BAO + HST shows that Ω{sub DE}{sup 0} = 0.696 ± 0.010, γ = 0.1404 ± 0.0014 and H{sub 0} = 68.1 ± 1.3. Planck TT observation provides γ = 0.32{sup +0.31}{sub -0.26} in the 68% confidence limit for the viscosity coefficient. The cosmographic distance ratio indicates that current observed data prefer to increase bulk viscosity. The competition between phantom and quintessence behavior of the viscous dark energy model can accommodate cosmological old objects reported as a sign of age crisis in the ΛCDM model. Finally, tension in the Hubble parameter is alleviated in this model. (orig.)

  8. On the normalization of the minimum free energy of RNAs by sequence length.

    Directory of Open Access Journals (Sweden)

    Edoardo Trotta

    Full Text Available The minimum free energy (MFE) of ribonucleic acids (RNAs) increases at an apparent linear rate with sequence length. Simple indices, obtained by dividing the MFE by the number of nucleotides, have been used for a direct comparison of the folding stability of RNAs of various sizes. Although this normalization procedure has been used in several studies, the relationship between normalized MFE and length has not yet been investigated in detail. Here, we demonstrate that the variation of MFE with sequence length is not linear and is significantly biased by the mathematical formula used for the normalization procedure. For this reason, the normalized MFEs strongly decrease as hyperbolic functions of length and produce unreliable results when applied for the comparison of sequences with different sizes. We also propose a simple modification of the normalization formula that corrects the bias enabling the use of the normalized MFE for RNAs longer than 40 nt. Using the new corrected normalized index, we analyzed the folding free energies of different human RNA families showing that most of them present an average MFE density more negative than expected for a typical genomic sequence. Furthermore, we found that a well-defined and restricted range of MFE density characterizes each RNA family, suggesting the use of our corrected normalized index to improve RNA prediction algorithms. Finally, in coding and functional human RNAs the MFE density appears scarcely correlated with sequence length, consistent with a negligible role of thermodynamic stability demands in determining RNA size.

  9. Operational tank leak detection and minimization during retrieval

    International Nuclear Information System (INIS)

    Hertzel, J.S.

    1996-03-01

    This report evaluates the activities associated with the retrieval of wastes from the single-shell tanks proposed under the initial Single-Shell Tank Retrieval System. This report focuses on minimizing leakage during retrieval by using effective leak detection and mitigating actions. After reviewing the historical data available on single-shell tank leakage and evaluating current leak detection technology, this report concludes that the only currently available leak detection method which can function within the most probable leakage range is the mass balance system. If utilized after each sluicing campaign, this method should allow detection at a leakage value well below the value, calculated for each tank, at which significant health effects occur. Furthermore, this report concludes that the planned sequence of sluicing activities will serve to further minimize the probability and volume of leaks by keeping liquid away from areas with the greatest potential for leaking. Finally, this report identifies a series of operational responses which, when used in conjunction with the recommended sluicing sequence and leak detection methods, will minimize worker exposure and environmental, safety, and health risks.
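A mass-balance leak check of the kind referenced above compares the volume removed from the tank with the volume received downstream, flagging a leak when the unaccounted-for volume exceeds a detection threshold. The sketch below is a hedged illustration only; the volumes, threshold, and function names are invented and do not reflect the report's actual procedure or figures.

```python
# Hedged illustration (invented numbers, not the report's procedure):
# a mass-balance residual is the liquid volume unaccounted for between the
# source tank and the receiver after a sluicing campaign.

def mass_balance_residual(volume_removed, volume_received, additions=0.0):
    """Unaccounted-for volume: removed plus additions minus received."""
    return volume_removed + additions - volume_received

def leak_suspected(residual, threshold):
    """Flag a possible leak when the residual exceeds the detection limit."""
    return residual > threshold

removed, received = 5000.0, 4910.0   # gallons, hypothetical campaign
threshold = 100.0                    # hypothetical detection limit
r = mass_balance_residual(removed, received)
assert r == 90.0
assert not leak_suspected(r, threshold)
```

The usable threshold is set by measurement uncertainty, which is why the report ties detectability to the most probable leakage range.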

  10. A novel low energy electron microscope for DNA sequencing and surface analysis

    Energy Technology Data Exchange (ETDEWEB)

    Mankos, M., E-mail: marian@electronoptica.com [Electron Optica Inc., 1000 Elwell Court #110, Palo Alto, CA 94303 (United States); Shadman, K. [Electron Optica Inc., 1000 Elwell Court #110, Palo Alto, CA 94303 (United States); Persson, H.H.J. [Stanford Genome Technology Center, Stanford University School of Medicine, 855 California Avenue, Palo Alto, CA 94304 (United States); N’Diaye, A.T. [Electron Optica Inc., 1000 Elwell Court #110, Palo Alto, CA 94303 (United States); NCEM, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Schmid, A.K. [NCEM, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Davis, R.W. [Stanford Genome Technology Center, Stanford University School of Medicine, 855 California Avenue, Palo Alto, CA 94304 (United States)

    2014-10-15

    Monochromatic, aberration-corrected, dual-beam low energy electron microscopy (MAD-LEEM) is a novel technique that is directed towards imaging nanostructures and surfaces with sub-nanometer resolution. The technique combines a monochromator, a mirror aberration corrector, an energy filter, and dual beam illumination in a single instrument. The monochromator reduces the energy spread of the illuminating electron beam, which significantly improves spectroscopic and spatial resolution. Simulation results predict that the novel aberration corrector design will eliminate the second rank chromatic and third and fifth order spherical aberrations, thereby improving the resolution into the sub-nanometer regime at landing energies as low as one hundred electron-Volts. The energy filter produces a beam that can extract detailed information about the chemical composition and local electronic states of non-periodic objects such as nanoparticles, interfaces, defects, and macromolecules. The dual flood illumination eliminates charging effects that are generated when a conventional LEEM is used to image insulating specimens. A potential application for MAD-LEEM is in DNA sequencing, which requires high resolution to distinguish the individual bases and high speed to reduce the cost. The MAD-LEEM approach images the DNA with low electron impact energies, which provides nucleobase contrast mechanisms without organometallic labels. Furthermore, the micron-size field of view when combined with imaging on the fly provides long read lengths, thereby reducing the demand on assembling the sequence. Experimental results from bulk specimens with immobilized single-base oligonucleotides demonstrate that base specific contrast is available with reflected, photo-emitted, and Auger electrons. 
Image contrast simulations of model rectangular features mimicking the individual nucleotides in a DNA strand have been developed to translate measurements of contrast on bulk DNA to the detectability of

  11. Is spontaneous breaking of R-parity feasible in minimal low-energy supergravity

    International Nuclear Information System (INIS)

    Gato, B.; Leon, J.; Perez-Mercader, J.; Quiros, M.

    1985-01-01

    Spontaneous violation of lepton number without breaking Lorentz invariance can, in principle, be incorporated in models with softly broken supersymmetry. We study the situation for minimal low-energy supergravity models coming from a GUT (hence not having hierarchy destabilizing light singlets) and where the SU(2)xU(1) breaking is radiative. It is found that for this type of model, R-parity breaking requires either too heavy a top quark for a realistic superpartner spectrum or too light a superpartner spectrum for a realistic top quark, making the spontaneous violation of lepton number in the third generation incompatible with present experimental data. We do not discard the possibility of having it in a fourth, heavier, generation. (orig.)

  12. Tool Sequence Trends in Minimally Invasive Surgery: Statistical Analysis and Implications for Predictive Control of Multifunction Instruments

    Directory of Open Access Journals (Sweden)

    Carl A. Nelson

    2012-01-01

    Full Text Available This paper presents an analysis of 67 minimally invasive surgical procedures covering 11 different procedure types to determine patterns of tool use. A new graph-theoretic approach was taken to organize and analyze the data. Through grouping surgeries by type, trends of common tool changes were identified. Using the concept of signal/noise ratio, these trends were found to be statistically strong. The tool-use trends were used to generate tool placement patterns for modular (multi-tool, cartridge-type) surgical tool systems, and the same 67 surgeries were numerically simulated to determine the optimality of these tool arrangements. The results indicate that aggregated tool-use data (by procedure type) can be employed to predict tool-use sequences with good accuracy, and also indicate the potential for artificial intelligence as a means of preoperative and/or intraoperative planning. Furthermore, this suggests that the use of multifunction surgical tools can be optimized to streamline surgical workflow.

  13. A Localization-Free Interference and Energy Holes Minimization Routing for Underwater Wireless Sensor Networks.

    Science.gov (United States)

    Khan, Anwar; Ahmedy, Ismail; Anisi, Mohammad Hossein; Javaid, Nadeem; Ali, Ihsan; Khan, Nawsher; Alsaqer, Mohammed; Mahmood, Hasan

    2018-01-09

    Interference and energy holes formation in underwater wireless sensor networks (UWSNs) threaten the reliable delivery of data packets from a source to a destination. Interference also causes inefficient utilization of the limited battery power of the sensor nodes in that more power is consumed in the retransmission of the lost packets. Energy holes are dead nodes close to the surface of water, and their early death interrupts data delivery even when the network has live nodes. This paper proposes a localization-free interference and energy holes minimization (LF-IEHM) routing protocol for UWSNs. The proposed algorithm overcomes interference during data packet forwarding by defining a unique packet holding time for every sensor node. The energy holes formation is mitigated by a variable transmission range of the sensor nodes. As compared to the conventional routing protocols, the proposed protocol does not require the localization information of the sensor nodes, which is cumbersome and difficult to obtain, as nodes change their positions with water currents. Simulation results show superior performance of the proposed scheme in terms of packets received at the final destination and end-to-end delay.
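The holding-time mechanism described above can be sketched in a few lines. The linear depth-based formula below is a common form in depth-based underwater routing and is our assumption for illustration; it is not claimed to be LF-IEHM's exact equation.

```python
# Illustrative sketch only: each candidate forwarder derives a holding time
# from its depth, so the shallowest node transmits first and deeper nodes,
# overhearing that transmission, suppress their duplicates. The linear
# formula is an assumed stand-in for the protocol's actual expression.

def holding_time(depth, max_depth, max_hold=2.0):
    """Packet holding time in seconds: shallower nodes wait less."""
    return max_hold * depth / max_depth

depths = [120.0, 300.0, 450.0]          # candidate forwarder depths (meters)
schedule = sorted((holding_time(d, 500.0), d) for d in depths)

# The shallowest candidate has the shortest holding time and forwards first.
assert schedule[0][1] == 120.0
```

Because the timer depends only on a node's own depth, no localization information is needed, which matches the protocol's localization-free design goal.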

  14. A Localization-Free Interference and Energy Holes Minimization Routing for Underwater Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Anwar Khan

    2018-01-01

    Full Text Available Interference and energy holes formation in underwater wireless sensor networks (UWSNs) threaten the reliable delivery of data packets from a source to a destination. Interference also causes inefficient utilization of the limited battery power of the sensor nodes in that more power is consumed in the retransmission of the lost packets. Energy holes are dead nodes close to the surface of water, and their early death interrupts data delivery even when the network has live nodes. This paper proposes a localization-free interference and energy holes minimization (LF-IEHM) routing protocol for UWSNs. The proposed algorithm overcomes interference during data packet forwarding by defining a unique packet holding time for every sensor node. The energy holes formation is mitigated by a variable transmission range of the sensor nodes. As compared to the conventional routing protocols, the proposed protocol does not require the localization information of the sensor nodes, which is cumbersome and difficult to obtain, as nodes change their positions with water currents. Simulation results show superior performance of the proposed scheme in terms of packets received at the final destination and end-to-end delay.

  15. Dark energy in scalar-tensor theories

    International Nuclear Information System (INIS)

    Moeller, J.

    2007-12-01

    We investigate several aspects of dynamical dark energy in the framework of scalar-tensor theories of gravity. We provide a classification of scalar-tensor coupling functions admitting cosmological scaling solutions. In particular, we recover that Brans-Dicke theory with inverse power-law potential allows for a sequence of background dominated scaling regime and scalar field dominated, accelerated expansion. Furthermore, we compare minimally and non-minimally coupled models, with respect to the small redshift evolution of the dark energy equation of state. We discuss the possibility to discriminate between different models by a reconstruction of the equation-of-state parameter from available observational data. The non-minimal coupling characterizing scalar-tensor models can - in specific cases - alleviate fine tuning problems, which appear if (minimally coupled) quintessence is required to mimic a cosmological constant. Finally, we perform a phase-space analysis of a family of biscalar-tensor models characterized by a specific type of σ-model metric, including two examples from recent literature. In particular, we generalize an axion-dilaton model of Sonner and Townsend, incorporating a perfect fluid background consisting of (dark) matter and radiation. (orig.)

  16. Dark energy in scalar-tensor theories

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, J.

    2007-12-15

    We investigate several aspects of dynamical dark energy in the framework of scalar-tensor theories of gravity. We provide a classification of scalar-tensor coupling functions admitting cosmological scaling solutions. In particular, we recover that Brans-Dicke theory with inverse power-law potential allows for a sequence of background dominated scaling regime and scalar field dominated, accelerated expansion. Furthermore, we compare minimally and non-minimally coupled models, with respect to the small redshift evolution of the dark energy equation of state. We discuss the possibility to discriminate between different models by a reconstruction of the equation-of-state parameter from available observational data. The non-minimal coupling characterizing scalar-tensor models can - in specific cases - alleviate fine tuning problems, which appear if (minimally coupled) quintessence is required to mimic a cosmological constant. Finally, we perform a phase-space analysis of a family of biscalar-tensor models characterized by a specific type of {sigma}-model metric, including two examples from recent literature. In particular, we generalize an axion-dilaton model of Sonner and Townsend, incorporating a perfect fluid background consisting of (dark) matter and radiation. (orig.)

  17. Nonlinear Synchronization for Automatic Learning of 3D Pose Variability in Human Motion Sequences

    Directory of Open Access Journals (Sweden)

    Mozerov M

    2010-01-01

    Full Text Available A dense matching algorithm that solves the problem of synchronizing prerecorded human motion sequences, which show different speeds and accelerations, is proposed. The approach is based on minimization of MRF energy and solves the problem by using Dynamic Programming. Additionally, an optimal sequence is automatically selected from the input dataset to be a time-scale pattern for all other sequences. The paper utilizes an action specific model which automatically learns the variability of 3D human postures observed in a set of training sequences. The model is trained using the public CMU motion capture dataset for the walking action, and a mean walking performance is automatically learnt. Additionally, statistics about the observed variability of the postures and motion direction are also computed at each time step. The synchronized motion sequences are used to learn a model of human motion for action recognition and full-body tracking purposes.
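The synchronization step above, minimization of an MRF energy solved by dynamic programming, has a classical one-dimensional analogue in dynamic time warping. The sketch below uses DTW on scalar "poses" as a simple stand-in for the paper's formulation; it is our illustration, not the authors' algorithm.

```python
# Sketch of the core idea: align two motion sequences recorded at different
# speeds by dynamic programming. Classical dynamic time warping (DTW) is
# shown here on 1-D poses as a stand-in for the paper's MRF-energy model.

def dtw_cost(s, t):
    """Minimum cumulative alignment cost between sequences s and t."""
    INF = float("inf")
    m, n = len(s), len(t)
    dp = [[INF] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d = abs(s[i - 1] - t[j - 1])
            dp[i][j] = d + min(dp[i - 1][j],      # stretch s
                               dp[i][j - 1],      # stretch t
                               dp[i - 1][j - 1])  # advance both
    return dp[m][n]

slow = [0.0, 0.0, 1.0, 1.0, 2.0, 2.0]   # same motion performed at half speed
fast = [0.0, 1.0, 2.0]
assert dtw_cost(slow, fast) == 0.0       # perfect alignment despite speed change
assert dtw_cost(slow, [5.0, 6.0]) > dtw_cost(slow, fast)
```

In the paper, the per-frame cost is a full 3D posture distance and one sequence from the training set is selected as the common time-scale pattern for all others.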

  18. A sequence-dependent rigid-base model of DNA

    Science.gov (United States)

    Gonzalez, O.; Petkevičiutė, D.; Maddocks, J. H.

    2013-02-01

    A novel hierarchy of coarse-grain, sequence-dependent, rigid-base models of B-form DNA in solution is introduced. The hierarchy depends on both the assumed range of energetic couplings, and the extent of sequence dependence of the model parameters. A significant feature of the models is that they exhibit the phenomenon of frustration: each base cannot simultaneously minimize the energy of all of its interactions. As a consequence, an arbitrary DNA oligomer has an intrinsic or pre-existing stress, with the level of this frustration dependent on the particular sequence of the oligomer. Attention is focussed on the particular model in the hierarchy that has nearest-neighbor interactions and dimer sequence dependence of the model parameters. For a Gaussian version of this model, a complete coarse-grain parameter set is estimated. The parameterized model allows, for an oligomer of arbitrary length and sequence, a simple and explicit construction of an approximation to the configuration-space equilibrium probability density function for the oligomer in solution. The training set leading to the coarse-grain parameter set is itself extracted from a recent and extensive database of a large number of independent, atomic-resolution molecular dynamics (MD) simulations of short DNA oligomers immersed in explicit solvent. The Kullback-Leibler divergence between probability density functions is used to make several quantitative assessments of our nearest-neighbor, dimer-dependent model, which is compared against others in the hierarchy to assess various assumptions pertaining both to the locality of the energetic couplings and to the level of sequence dependence of its parameters. It is also compared directly against all-atom MD simulation to assess its predictive capabilities. The results show that the nearest-neighbor, dimer-dependent model can successfully resolve sequence effects both within and between oligomers. 
For example, due to the presence of frustration, the model can

  19. A sequence-dependent rigid-base model of DNA.

    Science.gov (United States)

    Gonzalez, O; Petkevičiūtė, D; Maddocks, J H

    2013-02-07

    A novel hierarchy of coarse-grain, sequence-dependent, rigid-base models of B-form DNA in solution is introduced. The hierarchy depends on both the assumed range of energetic couplings, and the extent of sequence dependence of the model parameters. A significant feature of the models is that they exhibit the phenomenon of frustration: each base cannot simultaneously minimize the energy of all of its interactions. As a consequence, an arbitrary DNA oligomer has an intrinsic or pre-existing stress, with the level of this frustration dependent on the particular sequence of the oligomer. Attention is focussed on the particular model in the hierarchy that has nearest-neighbor interactions and dimer sequence dependence of the model parameters. For a Gaussian version of this model, a complete coarse-grain parameter set is estimated. The parameterized model allows, for an oligomer of arbitrary length and sequence, a simple and explicit construction of an approximation to the configuration-space equilibrium probability density function for the oligomer in solution. The training set leading to the coarse-grain parameter set is itself extracted from a recent and extensive database of a large number of independent, atomic-resolution molecular dynamics (MD) simulations of short DNA oligomers immersed in explicit solvent. The Kullback-Leibler divergence between probability density functions is used to make several quantitative assessments of our nearest-neighbor, dimer-dependent model, which is compared against others in the hierarchy to assess various assumptions pertaining both to the locality of the energetic couplings and to the level of sequence dependence of its parameters. It is also compared directly against all-atom MD simulation to assess its predictive capabilities. The results show that the nearest-neighbor, dimer-dependent model can successfully resolve sequence effects both within and between oligomers. 
For example, due to the presence of frustration, the model can
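
    The Kullback-Leibler divergence used above to compare Gaussian coarse-grain models has a closed form for multivariate Gaussians. A minimal sketch of that formula (our illustration, not the authors' code; variable names are ours):

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL(N0 || N1) between multivariate Gaussians N(mu0, S0) and N(mu1, S1).
    Closed form: 0.5 * [tr(S1^-1 S0) + (mu1-mu0)^T S1^-1 (mu1-mu0)
                        - k + ln(det S1 / det S0)], with k the dimension."""
    mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
    S0, S1 = np.asarray(S0, float), np.asarray(S1, float)
    k = mu0.size
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```

A divergence of zero means the two model densities agree exactly; for two unit-variance 1-D Gaussians whose means differ by 1, the divergence is 0.5.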

  20. Unused energy sources inducing minimal pollution

    Energy Technology Data Exchange (ETDEWEB)

    Voss, A [Inst. fur Reaktorentwicklung, Kernforschungsanlage Julich GmbH, German Federal Republic]

    1974-01-01

    The contribution of hydroelectricity to the growing worldwide energy demand is not expected to exceed 6%. As the largest amount of hydroelectric potential is located in developing nations, it will find its greatest development outside the currently industrialized sphere. The potential of 60 GW ascribed to tidal and geothermal energy is a negligible quantity. Solar energy represents an essentially inexhaustible source, but technological problems will preclude any major contribution from it during this century. The environmental problems caused by these 'new' energy sources are different from those engendered by fossil and nuclear power plants, but they are not negligible. It is irresponsible and misleading to describe them as pollution-free.

  1. A Fast Event Preprocessor and Sequencer for the Simbol-X Low Energy Detector

    Science.gov (United States)

    Schanz, T.; Tenzer, C.; Maier, D.; Kendziorra, E.; Santangelo, A.

    2009-05-01

    The Simbol-X Low Energy Detector (LED), a 128×128 pixel DEPFET (Depleted Field Effect Transistor) array, will be read out at a very high rate (8000 frames/second) and therefore requires very fast on-board electronics. We present FPGA-based LED camera electronics consisting of an Event Preprocessor (EPP) for on-board data preprocessing and filtering of the Simbol-X Low Energy Detector and a related Sequencer (SEQ) to generate the necessary signals to control the readout.

  2. A Fast Event Preprocessor and Sequencer for the Simbol-X Low Energy Detector

    International Nuclear Information System (INIS)

    Schanz, T.; Tenzer, C.; Maier, D.; Kendziorra, E.; Santangelo, A.

    2009-01-01

    The Simbol-X Low Energy Detector (LED), a 128×128 pixel DEPFET (Depleted Field Effect Transistor) array, will be read out at a very high rate (8000 frames/second) and therefore requires very fast on-board electronics. We present FPGA-based LED camera electronics consisting of an Event Preprocessor (EPP) for on-board data preprocessing and filtering of the Simbol-X Low Energy Detector and a related Sequencer (SEQ) to generate the necessary signals to control the readout.

  3. Crystal Engineering on Industrial Diaryl Pigments Using Lattice Energy Minimizations and X-ray Powder Diffraction

    International Nuclear Information System (INIS)

    Schmidt, M.; Dinnebier, R.; Kalkhof, H.

    2007-01-01

    Diaryl azo pigments play an important role as yellow pigments for printing inks, with an annual pigment production of more than 50,000 t. The crystal structures of Pigment Yellow 12 (PY12), Pigment Yellow 13 (PY13), Pigment Yellow 14 (PY14), and Pigment Yellow 83 (PY83) were determined from X-ray powder data using lattice energy minimizations and subsequent Rietveld refinements. Details of the lattice energy minimization procedure and of the development of a torsion potential for the biphenyl fragment are given. The Rietveld refinements were carried out using rigid bodies, or constraints. It was also possible to refine all atomic positions individually without any constraint or restraint, even for PY12 having 44 independent non-hydrogen atoms per asymmetric unit. For PY14 (23 independent non-hydrogen atoms), additionally all atomic isotropic temperature factors could be refined individually. PY12 crystallized in a herringbone arrangement with twisted biaryl fragments. PY13 and PY14 formed a layer structure of planar molecules. PY83 showed a herringbone structure with planar molecules. According to quantum mechanical calculations, the twisting of the biaryl fragment results in a lower color strength of the pigments, whereas changes in the substitution pattern have almost no influence on the color strength of a single molecule. Hence, the experimentally observed lower color strength of PY12 in comparison with that of PY13 and PY83 can be explained as a pure packing effect. Further lattice energy calculations explained that the four investigated pigments crystallize in three different structures because these structures are the energetically most favorable ones for each compound. For example, for PY13, PY14, or PY83, a PY12-analogous crystal structure would lead to considerably poorer lattice energies and lower densities. In contrast, lattice energy calculations revealed that PY12 could adopt a PY13-type structure with only slightly poorer energy. 
This structure was

  4. emMAW: computing minimal absent words in external memory.

    Science.gov (United States)

    Héliou, Alice; Pissis, Solon P; Puglisi, Simon J

    2017-09-01

    The biological significance of minimal absent words has been investigated in genomes of organisms from all domains of life. For instance, three minimal absent words of the human genome were found in Ebola virus genomes. There exists an O(n)-time and O(n)-space algorithm for computing all minimal absent words of a sequence of length n on a fixed-sized alphabet based on suffix arrays. A standard implementation of this algorithm, when applied to a large sequence of length n, requires more than 20n bytes of RAM. Such memory requirements are a significant hurdle to the computation of minimal absent words in large datasets. We present emMAW, the first external-memory algorithm for computing minimal absent words. A free open-source implementation of our algorithm is made available. This allows for computation of minimal absent words on far bigger data sets than was previously possible. Our implementation requires less than 3 h on a standard workstation to process the full human genome when as little as 1 GB of RAM is made available. We stress that our implementation, despite making use of external memory, is fast; indeed, even on relatively smaller datasets when enough RAM is available to hold all necessary data structures, it is less than two times slower than state-of-the-art internal-memory implementations. https://github.com/solonas13/maw (free software under the terms of the GNU GPL). alice.heliou@lix.polytechnique.fr or solon.pissis@kcl.ac.uk. Supplementary data are available at Bioinformatics online.
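
    The defining property behind these algorithms can be checked directly on small inputs: a word is a minimal absent word of a sequence if it does not occur in the sequence but both the word minus its last letter and the word minus its first letter do occur. A brute-force sketch for illustration only (exponential in the word length, unlike the linear-time suffix-array algorithm or the external-memory algorithm of the paper):

```python
from itertools import product

def minimal_absent_words(s, max_len=4, alphabet=None):
    """Brute-force minimal absent words of s up to length max_len.
    A word w is a minimal absent word of s if w does not occur in s
    but both w[:-1] and w[1:] do occur in s."""
    if alphabet is None:
        alphabet = sorted(set(s))
    # All substrings of s (O(n^2) space -- fine for a toy example).
    factors = {s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)}
    maws = []
    for k in range(2, max_len + 1):
        for w in map("".join, product(alphabet, repeat=k)):
            if w not in factors and w[:-1] in factors and w[1:] in factors:
                maws.append(w)
    return maws

# For s = "AACA" the minimal absent words up to length 3 are
# CC, AAA, CAA and CAC.
print(minimal_absent_words("AACA", max_len=3))
```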

  5. Annual Waste Minimization Summary Report

    International Nuclear Information System (INIS)

    Haworth, D.M.

    2011-01-01

    This report summarizes the waste minimization efforts undertaken by National Security Technologies, LLC, for the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office (NNSA/NSO), during calendar year 2010. The NNSA/NSO Pollution Prevention Program establishes a process to reduce the volume and toxicity of waste generated by NNSA/NSO activities and ensures that proposed methods of treatment, storage, and/or disposal of waste minimize potential threats to human health and the environment.

  6. Minimizing the Levelized Cost of Energy in Single-Phase Photovoltaic Systems with an Absolute Active Power Control

    DEFF Research Database (Denmark)

    Yang, Yongheng; Koutroulis, Eftichios; Sangwongwanich, Ariya

    2015-01-01

    . An increase of the inverter lifetime and a reduction of the energy yield can alter the cost of energy, demanding an optimization of the power limitation. Therefore, aiming at minimizing the Levelized Cost of Energy (LCOE), the power limit is optimized for the AAPC strategy in this paper. The optimization...... control strategy, the Absolute Active Power Control (AAPC) can effectively solve the overloading issues by limiting the maximum possible PV power to a certain level (i.e., the power limitation), and also benefit the inverter reliability. However, its feasibility is challenged by the energy loss......, compared to the conventional PV inverter operating only in the maximum power point tracking mode. In the presented case study, the minimum of LCOE is achieved for the system when the power limit is optimized to a certain level of the designed maximum feed-in power (i.e., 3 kW). In addition, the proposed...

  7. Optimal allocation and sizing of PV/Wind/Split-diesel/Battery hybrid energy system for minimizing life cycle cost, carbon emission and dump energy of remote residential building

    International Nuclear Information System (INIS)

    Ogunjuyigbe, A.S.O.; Ayodele, T.R.; Akinola, O.A.

    2016-01-01

    Highlights:
    • Genetic Algorithm is used for tri-objective design of a hybrid energy system.
    • The objective is minimizing the Life Cycle Cost, CO2 emissions and dump energy.
    • Small split diesel generators are used in place of a big single diesel generator.
    • The split diesel generators are aggregable based on a certain set of rules.
    • The proposed algorithm achieves the set objectives (LCC, CO2 emission and dump).
    Abstract: In this paper, a Genetic Algorithm (GA) is utilized to implement a tri-objective design of a grid-independent PV/Wind/Split-diesel/Battery hybrid energy system for a typical residential building, with the objective of minimizing the Life Cycle Cost (LCC), CO2 emissions and dump energy. To achieve some of these objectives, small split Diesel generators are used in place of a single big Diesel generator and are aggregable based on a certain set of rules depending on the available renewable energy resources and the state of charge of the battery. The algorithm was utilized to study five scenarios (PV/Battery, Wind/Battery, single big Diesel generator, aggregable 3-split Diesel generators, PV/Wind/Split-diesel/Battery) for a typical load profile of a residential house using typical wind and solar radiation data. The results obtained revealed that the PV/Wind/Split-diesel/Battery is the most attractive (optimal) scenario, having an LCC of $11,273, COE of 0.13 $/kWh, net dump energy of 3 MWh, and net CO2 emission of 13,273 kg. It offers 46%, 28%, 82% and 94% reductions in LCC, COE, CO2 emission and dump energy respectively when compared to the single big Diesel generator scenario.

  8. Quantum scattering in one-dimensional systems satisfying the minimal length uncertainty relation

    Energy Technology Data Exchange (ETDEWEB)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    2016-12-15

    In quantum gravity theories, when the scattering energy is comparable to the Planck energy the Heisenberg uncertainty principle breaks down and is replaced by the minimal length uncertainty relation. In this paper, the consequences of the minimal length uncertainty relation on one-dimensional quantum scattering are studied using an approach involving a recently proposed second-order differential equation. An exact analytical expression for the tunneling probability through a locally-periodic rectangular potential barrier system is obtained. Results show that the existence of a non-zero minimal length uncertainty tends to shift the resonant tunneling energies to the positive direction. Scattering through a locally-periodic potential composed of double-rectangular potential barriers shows that the first band of resonant tunneling energies widens for minimal length cases when the double-rectangular potential barrier is symmetric but narrows down when the double-rectangular potential barrier is asymmetric. A numerical solution which exploits the use of Wronskians is used to calculate the transmission probabilities through the Pöschl–Teller well, Gaussian barrier, and double-Gaussian barrier. Results show that the probability of passage through the Pöschl–Teller well and Gaussian barrier is smaller in the minimal length cases compared to the non-minimal length case. For the double-Gaussian barrier, the probability of passage for energies that are more positive than the resonant tunneling energy is larger in the minimal length cases compared to the non-minimal length case. The approach is exact and applicable to many types of scattering potential.
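
    For context, the ordinary (non-minimal-length) quantum-mechanical transmission probability through a single rectangular barrier, which the paper's minimal-length treatment modifies, can be sketched as follows. This is the standard textbook result, not the paper's modified equation; units are chosen so that hbar = 2m = 1:

```python
import math

def transmission(E, V0, a):
    """Transmission probability through a rectangular barrier of height V0
    and width a for a particle of energy E, in units hbar = 2m = 1.
    Below the barrier (E < V0) the wavefunction decays with kappa =
    sqrt(V0 - E); above it, it oscillates with k2 = sqrt(E - V0)."""
    if E <= 0 or E == V0:
        raise ValueError("need 0 < E and E != V0")
    if E < V0:
        kappa = math.sqrt(V0 - E)
        return 1.0 / (1.0 + V0**2 * math.sinh(kappa * a)**2
                      / (4.0 * E * (V0 - E)))
    k2 = math.sqrt(E - V0)
    return 1.0 / (1.0 + V0**2 * math.sin(k2 * a)**2
                  / (4.0 * E * (E - V0)))
```

Above the barrier, resonant tunneling (T = 1) occurs whenever k2 * a is a multiple of pi; it is these resonant tunneling energies that the abstract reports being shifted in the positive direction by a non-zero minimal length.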

  9. Pediatric neuro MRI. Tricks to minimize sedation

    Energy Technology Data Exchange (ETDEWEB)

    Barkovich, Matthew J.; Desikan, Rahul S. [University of California, San Francisco, Department of Radiology and Diagnostic Imaging, San Francisco, CA (United States); Xu, Duan; Barkovich, A.J. [University of California, San Francisco, Department of Radiology and Diagnostic Imaging, San Francisco, CA (United States); UCSF-Benioff Children's Hospital, Department of Radiology, San Francisco, CA (United States); Williams, Cassandra [UCSF-Benioff Children's Hospital, Department of Radiology, San Francisco, CA (United States)

    2018-01-15

    Magnetic resonance imaging (MRI) is the workhorse modality in pediatric neuroimaging because it provides excellent soft-tissue contrast without ionizing radiation. Until recently, studies were uninterpretable without sedation; however, given development of shorter sequences, sequences that correct for motion, and studies showing the potentially deleterious effects of sedation on immature laboratory animals, it is prudent to minimize sedation when possible. This manuscript provides basic guidelines for performing pediatric neuro MRI without sedation by both modifying technical factors to reduce scan time and noise, and using a multi-disciplinary team to coordinate imaging with the patient's biorhythms. (orig.)

  10. Use of Binary Partition Tree and energy minimization for object-based classification of urban land cover

    Science.gov (United States)

    Li, Mengmeng; Bijker, Wietske; Stein, Alfred

    2015-04-01

    Two main challenges are faced when classifying urban land cover from very high resolution satellite images: obtaining an optimal image segmentation and distinguishing buildings from other man-made objects. For optimal segmentation, this work proposes a hierarchical representation of an image by means of a Binary Partition Tree (BPT) and an unsupervised evaluation of image segmentations by energy minimization. For building extraction, we apply fuzzy sets to create a fuzzy landscape of shadows which in turn involves a two-step procedure. The first step is a preliminary image classification at a fine segmentation level to generate vegetation and shadow information. The second step models the directional relationship between building and shadow objects to extract building information at the optimal segmentation level. We conducted the experiments on two datasets of Pléiades images from Wuhan City, China. To demonstrate its performance, the proposed classification is compared at the optimal segmentation level with Maximum Likelihood Classification and Support Vector Machine classification. The results show that the proposed classification produced the highest overall accuracies and kappa coefficients, and the smallest over-classification and under-classification geometric errors. We conclude first that integrating BPT with energy minimization offers an effective means for image segmentation. Second, we conclude that the directional relationship between building and shadow objects represented by a fuzzy landscape is important for building extraction.

  11. Fast computational methods for predicting protein structure from primary amino acid sequence

    Science.gov (United States)

    Agarwal, Pratul Kumar [Knoxville, TN

    2011-07-19

    The present invention provides a method utilizing primary amino acid sequence of a protein, energy minimization, molecular dynamics and protein vibrational modes to predict three-dimensional structure of a protein. The present invention also determines possible intermediates in the protein folding pathway. The present invention has important applications to the design of novel drugs as well as protein engineering. The present invention predicts the three-dimensional structure of a protein independent of size of the protein, overcoming a significant limitation in the prior art.

  12. Principle of minimal work fluctuations.

    Science.gov (United States)

    Xiao, Gaoyang; Gong, Jiangbin

    2015-08-01

    Understanding and manipulating work fluctuations in microscale and nanoscale systems are of both fundamental and practical interest. For example, in considering the Jarzynski equality ⟨e^(-βW)⟩ = e^(-βΔF), a change in the fluctuations of e^(-βW) may impact how rapidly the statistical average of e^(-βW) converges towards the theoretical value e^(-βΔF), where W is the work, β is the inverse temperature, and ΔF is the free energy difference between two equilibrium states. Motivated by our previous study aiming at the suppression of work fluctuations, here we obtain a principle of minimal work fluctuations. In brief, adiabatic processes as treated in quantum and classical adiabatic theorems yield the minimal fluctuations in e^(-βW). In the quantum domain, if a system initially prepared at thermal equilibrium is subjected to a work protocol but isolated from a bath during the time evolution, then a quantum adiabatic process without energy level crossing (or an assisted adiabatic process reaching the same final states as in a conventional adiabatic process) yields the minimal fluctuations in e^(-βW), where W is the quantum work defined by two energy measurements at the beginning and at the end of the process. In the classical domain, where the classical work protocol is realizable by an adiabatic process, the classical adiabatic process likewise yields the minimal fluctuations in e^(-βW). Numerical experiments based on a Landau-Zener process confirm our theory in the quantum domain, and our theory in the classical domain explains our previous numerical findings regarding the suppression of classical work fluctuations [G. Y. Xiao and J. B. Gong, Phys. Rev. E 90, 052132 (2014)].
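
    The role of fluctuations in e^(-βW) for the convergence of the Jarzynski average can be illustrated numerically. The sketch below is our illustration, not the paper's Landau-Zener experiment: it uses Gaussian work distributions, for which the Jarzynski equality holds exactly with ΔF = μ − βσ²/2, and shows that larger work fluctuations make the sample estimate of ΔF noisier:

```python
import numpy as np

# For W ~ N(mu, sigma^2), <e^(-beta W)> = e^(-beta mu + beta^2 sigma^2 / 2),
# so the Jarzynski equality gives dF = mu - beta * sigma**2 / 2 exactly.
# Larger sigma means larger fluctuations in e^(-beta W) and slower
# convergence of the sample average toward e^(-beta dF).
rng = np.random.default_rng(0)
beta, mu, n = 1.0, 1.0, 100_000

estimates = {}
for sigma in (0.1, 1.0):
    W = rng.normal(mu, sigma, size=n)
    dF_est = -np.log(np.exp(-beta * W).mean()) / beta
    dF_exact = mu - beta * sigma**2 / 2
    estimates[sigma] = (dF_est, dF_exact)
    print(f"sigma={sigma}: dF estimate {dF_est:.4f} vs exact {dF_exact:.4f}")
```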

  13. Improved Model for Predicting the Free Energy Contribution of Dinucleotide Bulges to RNA Duplex Stability.

    Science.gov (United States)

    Tomcho, Jeremy C; Tillman, Magdalena R; Znosko, Brent M

    2015-09-01

    Predicting the secondary structure of RNA is an intermediate in predicting RNA three-dimensional structure. Commonly, determining RNA secondary structure from sequence uses free energy minimization and nearest neighbor parameters. Current algorithms utilize a sequence-independent model to predict free energy contributions of dinucleotide bulges. To determine if a sequence-dependent model would be more accurate, short RNA duplexes containing dinucleotide bulges with different sequences and nearest neighbor combinations were optically melted to derive thermodynamic parameters. These data suggested energy contributions of dinucleotide bulges were sequence-dependent, and a sequence-dependent model was derived. This model assigns free energy penalties based on the identity of nucleotides in the bulge (3.06 kcal/mol for two purines, 2.93 kcal/mol for two pyrimidines, 2.71 kcal/mol for 5'-purine-pyrimidine-3', and 2.41 kcal/mol for 5'-pyrimidine-purine-3'). The predictive model also includes a 0.45 kcal/mol penalty for an A-U pair adjacent to the bulge and a -0.28 kcal/mol bonus for a G-U pair adjacent to the bulge. The new sequence-dependent model results in predicted values within, on average, 0.17 kcal/mol of experimental values, a significant improvement over the sequence-independent model. This model and new experimental values can be incorporated into algorithms that predict RNA stability and secondary structure from sequence.
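
    The sequence-dependent model quoted above is simple enough to state as code. A sketch using the penalties and bonuses from the abstract (the function interface, and the treatment of each adjacent closing pair independently, are our assumptions, not the authors' specification):

```python
def bulge_dG(bulge, closing_pairs):
    """Free energy penalty (kcal/mol) of an RNA dinucleotide bulge under the
    sequence-dependent model described in the abstract.

    bulge         -- the two bulged nucleotides, 5'->3', e.g. "AG"
    closing_pairs -- base pairs adjacent to the bulge, e.g. ["AU", "GC"]
                     (applying the A-U penalty / G-U bonus once per adjacent
                     pair is our reading of the abstract)
    """
    purines = set("AG")
    first, second = bulge[0] in purines, bulge[1] in purines
    if first and second:
        dG = 3.06            # two purines
    elif not first and not second:
        dG = 2.93            # two pyrimidines
    elif first:
        dG = 2.71            # 5'-purine-pyrimidine-3'
    else:
        dG = 2.41            # 5'-pyrimidine-purine-3'
    for pair in closing_pairs:
        kind = frozenset(pair)
        if kind == frozenset("AU"):
            dG += 0.45       # penalty for adjacent A-U pair
        elif kind == frozenset("GU"):
            dG -= 0.28       # bonus for adjacent G-U pair
    return round(dG, 2)
```

For example, a 5'-pyrimidine-purine-3' bulge next to an A-U closing pair would score 2.41 + 0.45 = 2.86 kcal/mol under this reading.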

  14. MINIMIZE ENERGY AND COSTS REQUIREMENT OF WEEDING AND FERTILIZING PROCESS FOR FIBER CROPS IN SMALL FARMS

    Directory of Open Access Journals (Sweden)

    Tarek FOUDA

    2015-06-01

    Full Text Available The experimental work was carried out during the 2014 agricultural summer season at the experimental farm of Gemmiza Research Station, Gharbiya governorate, to minimize energy and costs in the weeding and fertilizing processes for fiber crops (kenaf and roselle) in small farms. The performance of the manufactured multipurpose unit was studied as a function of machine forward speed (2.2, 2.8, 3.4 and 4 km/h) and fertilizing rate (30, 45 and 60 kg N fed-1), at a constant average soil moisture content of 20% (d.b.). Performance of the manufactured machine was evaluated in terms of fuel consumption, power and energy requirements, effective field capacity, theoretical field capacity, field efficiency, and operational costs. The experimental results revealed that the manufactured machine decreased energy use and increased effective field capacity and efficiency under the following conditions: machine forward speed of 2.2 km/h and average moisture content of 20%.
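
    The field-capacity measures evaluated in this kind of study follow a standard agricultural-machinery relation: theoretical field capacity is forward speed times working width, and effective field capacity is the theoretical value times field efficiency. A minimal sketch, assuming a working width in metres and the feddan (4200 m²) as the area unit; neither the width nor the unit is stated in the abstract:

```python
def field_capacities(speed_kmh, width_m, field_eff):
    """Return (theoretical, effective) field capacity in feddan/h.
    Standard relation: theoretical capacity = speed * working width;
    effective capacity = theoretical capacity * field efficiency.
    1 feddan = 4200 m^2 (Egyptian area unit, our assumption here)."""
    theoretical = speed_kmh * 1000.0 * width_m / 4200.0  # feddan/h
    return theoretical, theoretical * field_eff

# e.g. at 2.2 km/h with a hypothetical 1.2 m working width and 80% efficiency:
theo, eff = field_capacities(2.2, 1.2, 0.80)
print(f"theoretical {theo:.3f} fed/h, effective {eff:.3f} fed/h")
```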

  15. Non-minimally coupled quintessence dark energy model with a cubic galileon term: a dynamical system analysis

    Science.gov (United States)

    Bhattacharya, Somnath; Mukherjee, Pradip; Roy, Amit Singha; Saha, Anirban

    2018-03-01

    We consider a scalar field which is generally non-minimally coupled to gravity and has a characteristic cubic Galileon-like term and a generic self-interaction, as a candidate Dark Energy model. The system is dynamically analyzed and novel fixed points with perturbative stability are demonstrated. The evolution of the system is numerically studied near a novel fixed point which owes its existence to the Galileon character of the model. It turns out that demanding the stability of this novel fixed point puts a strong restriction on the allowed non-minimal coupling and the choice of the self-interaction. The evolution of the equation of state parameter is studied, which shows that our model predicts an accelerated universe throughout, and the phantom limit is only approached closely but never crossed. Our result thus extends the findings of Coley (Dynamical Systems and Cosmology, Kluwer Academic Publishers, Boston, 2013) to more general non-minimal couplings than linear and quadratic ones.

  16. Prediction of glutathionylation sites in proteins using minimal sequence information and their experimental validation.

    Science.gov (United States)

    Pal, Debojyoti; Sharma, Deepak; Kumar, Mukesh; Sandur, Santosh K

    2016-09-01

    S-glutathionylation of proteins plays an important role in various biological processes and is known to be a protective modification during oxidative stress. Since experimental detection of S-glutathionylation is labor intensive and time consuming, a bioinformatics-based approach is a viable alternative. Available methods require relatively long sequence information, which may prevent prediction if sequence information is incomplete. Here, we present a model to predict glutathionylation sites from pentapeptide sequences. It is based upon the differential association of amino acids with glutathionylated and non-glutathionylated cysteines in a database of experimentally verified sequences. These data were used to calculate position-dependent F-scores, which measure how a particular amino acid at a particular position may affect the likelihood of a glutathionylation event. A glutathionylation score (G-score), indicating the propensity of a sequence to undergo glutathionylation, was calculated using the position-dependent F-scores for each amino acid. Cut-off values were used for prediction. Our model returned an accuracy of 58% with a Matthews correlation coefficient (MCC) of 0.165. On an independent dataset, our model outperformed the currently available model, in spite of needing much less sequence information. Pentapeptide motifs with high abundance among glutathionylated proteins were identified. A list of potential glutathionylation hotspot sequences was obtained by assigning G-scores, and subsequent Protein-BLAST analysis revealed a total of 254 putative glutathionylatable proteins, a number of which were already known to be glutathionylated. Our model predicted glutathionylation sites in 93.93% of experimentally verified glutathionylated proteins. The outcome of this study may assist in discovering novel glutathionylation sites and finding candidate proteins for glutathionylation.
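
    The scoring scheme described above can be sketched as follows. The F-score values here are invented purely for illustration; the real ones come from the curated dataset of glutathionylated and non-glutathionylated cysteines, which the abstract does not reproduce:

```python
# Hypothetical position-dependent F-scores for a pentapeptide centered on the
# candidate cysteine: {position offset from the cysteine: {amino acid: score}}.
# All numeric values below are made up for illustration only.
F_SCORES = {
    -2: {"K": 0.4, "L": -0.2},
    -1: {"R": 0.3, "P": -0.3},
     1: {"S": 0.2, "G": 0.1},
     2: {"E": 0.5, "W": -0.4},
}

def g_score(pentapeptide):
    """Sum position-dependent F-scores over a 5-mer with 'C' at the center;
    residues without a tabulated F-score contribute 0."""
    assert len(pentapeptide) == 5 and pentapeptide[2] == "C"
    return sum(F_SCORES.get(i, {}).get(pentapeptide[2 + i], 0.0)
               for i in (-2, -1, 1, 2))

# A site is then predicted glutathionylated if its G-score exceeds a
# cut-off tuned on the training data.
print(g_score("KRCSE"))
```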

  17. No evidence that mRNAs have lower folding free energies than random sequences with the same dinucleotide distribution

    DEFF Research Database (Denmark)

    Workman, Christopher; Krogh, Anders Stærmose

    1999-01-01

    This work investigates whether mRNA has a lower estimated folding free energy than random sequences. The free energy estimates are calculated by the mfold program for prediction of RNA secondary structures. For a set of 46 mRNAs it is shown that the predicted free energy is not significantly diff...

  18. Radio frequency energy for non-invasive and minimally invasive skin tightening.

    Science.gov (United States)

    Mulholland, R Stephen

    2011-07-01

    This article reviews the non-invasive and minimally invasive options for skin tightening, focusing on peer-reviewed articles and presentations and those technologies with the most proven or promising RF non-excisional skin-tightening results for excisional surgeons. RF has been the mainstay of non-invasive skin tightening and has emerged as the "cutting edge" technology in the minimally invasive skin-tightening field. Because these RF skin-tightening technologies are capital equipment purchases with a significant cost associated, this article also discusses some business issues and models that have proven to work in the plastic surgeon's office for non-invasive and minimally invasive skin-tightening technologies.

  19. National Institutes of Health: Mixed waste minimization and treatment

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

    The Appalachian States Low-Level Radioactive Waste Commission requested the US Department of Energy's National Low-Level Waste Management Program (NLLWMP) to assist the biomedical community in becoming more knowledgeable about its mixed waste streams, to help minimize the mixed waste stream generated by the biomedical community, and to identify applicable treatment technologies for these mixed waste streams. As the first step in the waste minimization process, liquid low-level radioactive mixed waste (LLMW) streams generated at the National Institutes of Health (NIH) were characterized and combined into similar process categories. This report identifies possible waste minimization and treatment approaches for the LLMW generated by the biomedical community identified in DOE/LLW-208. In development of the report, on site meetings were conducted with NIH personnel responsible for generating each category of waste identified as lacking disposal options. Based on the meetings and general waste minimization guidelines, potential waste minimization options were identified.

  20. Polyadenylated Sequencing Primers Enable Complete Readability of PCR Amplicons Analyzed by Dideoxynucleotide Sequencing

    Directory of Open Access Journals (Sweden)

    Martin Beránek

    2012-01-01

    Full Text Available Dideoxynucleotide DNA sequencing is one of the principal procedures in molecular biology. Loss of an initial part of nucleotides behind the 3' end of the sequencing primer limits the readability of sequenced amplicons. We present a method which extends the readability by using sequencing primers modified by polyadenylated tails attached to their 5' ends. Performing a polymerase chain reaction, we amplified eight amplicons of six human genes (AMELX, APOE, HFE, MBL2, SERPINA1 and TGFB1) ranging from 106 bp to 680 bp. Polyadenylation of the sequencing primers minimized the loss of bases in all amplicons. Complete sequences of shorter products (AMELX 106 bp, SERPINA1 121 bp, HFE 208 bp, APOE 244 bp, MBL2 317 bp) were obtained. In addition, in the case of the TGFB1 products (366 bp, 432 bp, and 680 bp, respectively), the lengths of sequencing readings were significantly longer if adenylated primers were used. Thus, single strand dideoxynucleotide sequencing with adenylated primers enables complete or near complete readability of short PCR amplicons.

  1. The prospect of minimally invasive therapy for oncology in 21st century

    International Nuclear Information System (INIS)

    Wu Peihong

    2005-01-01

    Minimally invasive therapy and biotherapy are two major trends in 21st-century medicine. Minimally invasive therapy offers precise localization and treatment, little pain, and fast recovery. Through the host's self-defence mechanism and biological agents, confining the tumor and decreasing recurrence will improve the patient's quality of life. The major trends of minimally invasive therapy in the 21st century are: 1. Close integration with new technology; 2. Precise localization and treatment; 3. Sequentially combined treatment modes; 4. Combination with immunotherapy; 5. Radical cure of tumors by minimally invasive therapy. The new mode of minimally invasive therapy combined with biotherapy is expected to become an important ingredient of oncotherapy in the 21st century. (authors)

  2. Gravitino problem in minimal supergravity inflation

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, Fuminori [Institute for Cosmic Ray Research, The University of Tokyo, Kashiwa, Chiba 277-8582 (Japan); Mukaida, Kyohei [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583 (Japan); Nakayama, Kazunori [Department of Physics, Faculty of Science, The University of Tokyo, Bunkyo-ku, Tokyo 133-0033 (Japan); Terada, Takahiro, E-mail: terada@kias.re.kr [School of Physics, Korea Institute for Advanced Study (KIAS), Seoul 02455 (Korea, Republic of); Yamada, Yusuke [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA 94305 (United States)

    2017-04-10

    We study non-thermal gravitino production in the minimal supergravity inflation. In this minimal model utilizing orthogonal nilpotent superfields, the particle spectrum includes only graviton, gravitino, inflaton, and goldstino. We find that a substantial fraction of the cosmic energy density can be transferred to the longitudinal gravitino due to non-trivial change of its sound speed. This implies either a breakdown of the effective theory after inflation or a serious gravitino problem.

  3. Gravitino problem in minimal supergravity inflation

    Directory of Open Access Journals (Sweden)

    Fuminori Hasegawa

    2017-04-01

    Full Text Available We study non-thermal gravitino production in the minimal supergravity inflation. In this minimal model utilizing orthogonal nilpotent superfields, the particle spectrum includes only graviton, gravitino, inflaton, and goldstino. We find that a substantial fraction of the cosmic energy density can be transferred to the longitudinal gravitino due to non-trivial change of its sound speed. This implies either a breakdown of the effective theory after inflation or a serious gravitino problem.

  4. Minimal Walking Technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; A. Ryttov, T.

    2007-01-01

    Different theoretical and phenomenological aspects of the Minimal and Nonminimal Walking Technicolor theories have recently been studied. The goal here is to make the models ready for collider phenomenology. We do this by constructing the low energy effective theory containing scalars......, pseudoscalars, vector mesons and other fields predicted by the minimal walking theory. We construct their self-interactions and interactions with standard model fields. Using the Weinberg sum rules, opportunely modified to take into account the walking behavior of the underlying gauge theory, we find...... interesting relations for the spin-one spectrum. We derive the electroweak parameters using the newly constructed effective theory and compare the results with the underlying gauge theory. Our analysis is sufficiently general such that the resulting model can be used to represent a generic walking technicolor...

  5. Energy consumption during simulated minimal access surgery with and without using an armrest.

    Science.gov (United States)

    Jafri, Mansoor; Brown, Stuart; Arnold, Graham; Abboud, Rami; Wang, Weijie

    2013-03-01

    Minimal access surgery (MAS) can be a lengthy procedure compared to open surgery; surgeon fatigue therefore becomes an important issue, and fatigued surgeons may expose themselves to chronic injuries and make more errors. The few studies on this topic have used only questionnaires and electromyography rather than direct measurement of energy expenditure (EE). The aim of this study was to investigate whether the use of an armrest could reduce the EE of surgeons during MAS. Sixteen surgeons performed simulated MAS with and without an armrest. They were required to perform the time-consuming task of using scissors to cut a rubber glove through its top layer in a triangular fashion with the help of a laparoscopic camera. Energy consumption was measured using the Oxycon Mobile system during all procedures. The error rate and duration of the simulated surgery were recorded. After performing the simulated surgery, subjects scored how comfortable they felt using the armrest. O₂ uptake (VO₂) was found to be 5% lower when surgeons used the armrest. The error rate with the armrest was 35%, compared with 42.29% without it. Additionally, comfort levels with the armrest were higher than without it, and 75% of surgeons indicated a preference for using the armrest during the simulated surgery. The armrest provides support for surgeons and cuts energy consumption during simulated MAS.

  6. Minimization In Digital Design As A Meta-Planning Problem

    Science.gov (United States)

    Ho, William P. C.; Wu, Jung-Gen

    1987-05-01

    In our model-based expert system for automatic digital system design, we formalize the design process into three sub-processes: compiling high-level behavioral specifications into primitive behavioral operations, grouping primitive operations into behavioral functions, and grouping functions into modules. Consideration of design minimization explicitly controls decision-making in the last two sub-processes. Design minimization, a key task in the automatic design of digital systems, is complicated by the high degree of interaction among the time sequence and content of design decisions. In this paper, we present an AI approach which directly addresses these interactions and their consequences by modeling the minimization problem as a planning problem, and the management of design decision-making as a meta-planning problem.

  7. Automatic classification of minimally invasive instruments based on endoscopic image sequences

    Science.gov (United States)

    Speidel, Stefanie; Benzko, Julia; Krappe, Sebastian; Sudra, Gunther; Azad, Pedram; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger

    2009-02-01

    Minimally invasive surgery is nowadays a frequently applied technique and can be regarded as a major breakthrough in surgery. The surgeon has to adopt special operation techniques and deal with difficulties such as complex hand-eye coordination and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing context-aware assistance using augmented reality techniques. To analyze the current situation for context-aware assistance, we need intraoperatively gained sensor data and a model of the intervention. A situation consists of information about the performed activity, the used instruments, the surgical objects and the anatomical structures, and defines the state of an intervention for a given moment in time. The endoscopic images provide a rich source of information which can be used for image-based analysis. Different visual cues are observed in order to perform an image-based analysis with the objective of gaining as much information as possible about the current situation. An important visual cue is the automatic recognition of the instruments which appear in the scene. In this paper we present the classification of minimally invasive instruments using endoscopic images. The instruments are not modified by markers. The system segments the instruments in the current image and recognizes the instrument type based on three-dimensional instrument models.

  8. Maximizing cellulosic ethanol potentials by minimizing wastewater generation and energy consumption: Competing with corn ethanol.

    Science.gov (United States)

    Liu, Gang; Bao, Jie

    2017-12-01

    Energy consumption and wastewater generation in cellulosic ethanol production are among the determinant factors for overall cost and technology penetration into the fuel ethanol industry. This study analyzed the energy consumption and wastewater generation of the new biorefining process technology, dry acid pretreatment and biodetoxification (DryPB), as well as of the current mainstream technologies. DryPB minimizes steam consumption to 8.63 GJ and wastewater generation to 7.71 tons in the core steps of the biorefining process per metric ton of ethanol produced, close to the 7.83 GJ and 8.33 tons, respectively, of corn ethanol production. The relatively higher electricity consumption is compensated by the large electricity surplus from lignin residue combustion. The minimum ethanol selling price (MESP) of DryPB is below $2/gal and falls into the range of corn ethanol production costs. The work indicates that the technical and economic gap between cellulosic ethanol and corn ethanol has been almost closed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Homogenization of non-uniformly bounded periodic diffusion energies in dimension two

    International Nuclear Information System (INIS)

    Braides, Andrea; Briane, Marc; Casado-Díaz, Juan

    2009-01-01

    This paper deals with the homogenization of two-dimensional oscillating convex functionals, the densities of which are equicoercive but not uniformly bounded from above. Using a uniform-convergence result for the minimizers, which holds for this type of scalar problems in dimension two, we prove in particular that the limit energy is local and recover the validity of the analogue of the well-known periodic homogenization formula in this degenerate case. However, in the present context the classical argument leading to integral representation based on the use of cut-off functions is useless due to the unboundedness of the densities. In its place we build sequences with bounded energy, which converge uniformly to piecewise-affine functions, taking point-wise extrema of recovery sequences for affine functions

  10. Energy Resources Consumption Minimization in Housing Construction

    Directory of Open Access Journals (Sweden)

    Balastov Alexey

    2017-01-01

    Full Text Available The article analyzes energy savings during the operation of buildings, provides the heat balance of residential premises, and considers options for energy-efficient solutions for hot water supply systems in buildings. As technical facilities that allow the use of secondary heat sources and solar energy, systems with heat recovery from “gray” wastewater, heat pumps, solar collectors and photoelectric converters are also considered.

  11. Inflationary models with non-minimally derivative coupling

    International Nuclear Information System (INIS)

    Yang, Nan; Fei, Qin; Gong, Yungui; Gao, Qing

    2016-01-01

    We derive the general formulae for the scalar and tensor spectral tilts to the second order for the inflationary models with non-minimally derivative coupling without taking the high friction limit. The non-minimally kinetic coupling to Einstein tensor brings the energy scale in the inflationary models down to be sub-Planckian. In the high friction limit, the Lyth bound is modified with an extra suppression factor, so that the field excursion of the inflaton is sub-Planckian. The inflationary models with non-minimally derivative coupling are more consistent with observations in the high friction limit. In particular, with the help of the non-minimally derivative coupling, the quartic power law potential is consistent with the observational constraint at 95% CL. (paper)

  12. Unifying principles of irreversibility minimization for efficiency maximization in steady-flow chemically-reactive engines

    International Nuclear Information System (INIS)

    Ramakrishnan, Sankaran; Edwards, Christopher F.

    2014-01-01

    Systems research has led to the conception and development of various steady-flow, chemically-reactive engine cycles for stationary power generation and propulsion. However, the question that remains unanswered is: What is the maximum-efficiency steady-flow chemically-reactive engine architecture permitted by physics? On the one hand, the search for higher-efficiency cycles continues, often involving newer processes and devices (fuel cells, carbon separation, etc.); on the other hand, the design parameters for existing cycles are continually optimized in response to improvements in device engineering. In this paper we establish that any variation in engine architecture, whether a parametric change or a process-sequence change, contributes to an efficiency increase via one of only two possible ways to minimize total irreversibility. These two principles help us unify our understanding from a large number of parametric analyses and cycle-optimization studies for any steady-flow chemically-reactive engine, and set a framework to systematically identify maximum-efficiency engine architectures. - Highlights: • A unified thermodynamic model to study chemically-reactive engine architectures is developed. • All parametric analyses of efficiency are unified by two irreversibility-minimization principles. • Variations in internal energy transfers yield a net work increase that is greater than the engine irreversibility reduced. • Variations in external energy transfers yield a net work increase that is less than the engine irreversibility reduced.

  13. Commercial radioactive waste minimization program development guidance

    International Nuclear Information System (INIS)

    Fischer, D.K.

    1991-01-01

    This document is one of two prepared by the EG&G Idaho, Inc., Waste Management Technical Support Program Group, National Low-Level Waste Management Program Unit. One of several Department of Energy responsibilities stated in the Amendments Act of 1985 is to provide technical assistance to compact regions, Host States, and nonmember States (to the extent provided in appropriations acts) in establishing waste minimization program plans. Technical assistance includes, among other things, the development of technical guidelines for volume reduction options. Pursuant to this defined responsibility, the Department of Energy (through EG&G Idaho, Inc.) has prepared this report, which includes guidance on defining a program, State/compact commission participation, and waste minimization program plans

  14. Transformation of general binary MRF minimization to the first-order case.

    Science.gov (United States)

    Ishikawa, Hiroshi

    2011-06-01

    We introduce a transformation of general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction that limits the representational power of the models so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, as well as comparing the new method experimentally with other such techniques.
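
A rough, self-contained illustration of this kind of higher-order-to-first-order reduction (a sketch of the classical construction for a single cubic term with a non-positive coefficient, not the paper's full framework): an auxiliary binary variable w turns a·x1·x2·x3 into a pairwise energy whose minimum over w agrees with the original term on every assignment.

```python
import itertools

def cubic_term(a, x1, x2, x3):
    # Original third-order clique potential with binary labels.
    return a * x1 * x2 * x3

def reduced_min(a, x1, x2, x3):
    # Auxiliary binary variable w; the higher-order term becomes the
    # first-order (pairwise-representable) energy min_w a*w*(x1+x2+x3-2).
    return min(a * w * (x1 + x2 + x3 - 2) for w in (0, 1))

a = -3.0  # this simple form of the reduction requires a <= 0
for x in itertools.product((0, 1), repeat=3):
    assert cubic_term(a, *x) == reduced_min(a, *x)
print("reduction verified for all 8 binary assignments")
```

The positive-coefficient case needs a different construction, and the framework described in the abstract additionally combines such reductions with the fusion-move and QPBO algorithms for multi-label energies.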

  15. The non-minimal heterotic pure spinor string in a curved background

    Energy Technology Data Exchange (ETDEWEB)

    Chandia, Osvaldo [Facultad de Artes Liberales and Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez,Diagonal Las Torres 2640, Peñalolén, Santiago (Chile)

    2014-03-21

    We study the non-minimal pure spinor string in a curved background. We find that the minimal BRST invariance implies the existence of a non-trivial stress-energy tensor for the minimal and non-minimal variables in the heterotic curved background. We find constraint equations for the b ghost. We construct the b ghost as a solution of these constraints.

  16. Minimizing water consumption when producing hydropower

    Science.gov (United States)

    Leon, A. S.

    2015-12-01

    In 2007, hydropower accounted for only 16% of the world's electricity production, with other renewable sources totaling 3%. It is therefore not surprising that when alternatives are evaluated for new energy developments, there is a strong impulse toward fossil fuel or nuclear energy as opposed to renewable sources. However, as hydropower schemes are often part of a multipurpose water resources development project, they can often help to finance other components of the project. In addition, hydropower systems and their associated dams and reservoirs provide human well-being benefits, such as flood control and irrigation, and societal benefits such as increased recreational activities and improved navigation. Furthermore, hydropower, due to its associated reservoir storage, can provide flexibility and reliability for energy production in integrated energy systems. The storage capability of hydropower systems acts as a regulating mechanism by which other intermittent and variable renewable energy sources (wind, wave, solar) can play a larger role in providing electricity of commercial quality. Minimizing water consumption for producing hydropower is critical given that overuse of water for energy production may result in a shortage of water for other purposes such as irrigation, navigation or fish passage. This paper presents a dimensional analysis for finding the optimal flow discharge and optimal penstock diameter when designing impulse and reaction water turbines for hydropower systems. The objective of this analysis is to provide general insights for minimizing water consumption when producing hydropower. The analysis is based on the geometric and hydraulic characteristics of the penstock, the total hydraulic head and the desired power production. As part of this analysis, various dimensionless relationships between power production, flow discharge and head losses were derived. These relationships were used to draw general insights on determining optimal flow discharge and
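
The kind of relationship between power production, discharge and head loss discussed above can be sketched numerically. The following minimal illustration uses entirely hypothetical system parameters and a Darcy-Weisbach head-loss model standing in for the paper's dimensionless analysis: it finds the smallest discharge on the rising branch of the power curve that meets a given power demand, i.e. the least water consumed for the required production.

```python
import math

# Hypothetical system parameters (illustrative only, not from the paper)
rho, g, eta = 1000.0, 9.81, 0.90      # water density, gravity, turbine efficiency
H, L, D, f = 50.0, 200.0, 1.0, 0.02   # gross head (m), penstock length/diameter (m), friction factor

def power(Q):
    """Net hydropower (W) at discharge Q (m^3/s), with friction head loss."""
    A = math.pi * D**2 / 4            # penstock cross-section
    V = Q / A
    hL = f * (L / D) * V**2 / (2 * g) # Darcy-Weisbach head loss
    return eta * rho * g * Q * (H - hL)

# Locate the discharge that maximizes power; below it, power rises with Q
Qs = [i * 0.01 for i in range(1, 2000)]
Q_peak = max(Qs, key=power)

def discharge_for(P_target):
    """Smallest Q meeting P_target (least water per unit energy produced)."""
    lo, hi = 0.0, Q_peak
    for _ in range(60):               # bisection on the rising branch
        mid = (lo + hi) / 2
        if power(mid) < P_target:
            lo = mid
        else:
            hi = mid
    return hi

Q = discharge_for(1.0e6)              # hypothetical 1 MW demand
print(Q, power(Q))
```

Operating on the rising branch matters: past the peak, extra discharge is dissipated as friction loss, so more water is consumed for less power.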

  17. Subspace Correction Methods for Total Variation and $\\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo; Schönlieb, Carola-Bibiane

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a

  18. Hartree-Fock energies of the doubly excited states of the boron isoelectronic sequence

    International Nuclear Information System (INIS)

    El-Sherbini, T.M.; Mansour, H.M.; Farrag, A.A.; Rahman, A.A.

    1985-08-01

    Hartree-Fock energies of the 1s²2s2p ns (⁴P), 1s²2s2p np (⁴P, ⁴D) and 1s²2s2p nd (⁴P, ⁴D); n = 3–6 states in the boron isoelectronic sequence are reported. The results show fairly good agreement with the experimental data of Bromander for O IV. (author)

  19. In I isoelectronic sequence: wavelengths and energy levels for Xe VI through La IX

    International Nuclear Information System (INIS)

    Kaufman, V.; Sugar, J.

    1987-01-01

    Spectra of Xe, Cs, Ba, and La produced with a high-voltage spark discharge were observed photographically with the National Bureau of Standards 10.7-m normal- and grazing-incidence spectrographs. Identified lines of the In I isoelectronic sequence were used to determine the energy levels of the 5s²5p, 5s5p², 5s²5d, and 5s²6s configurations. Their interactions with unobserved configurations that include a 4f electron are discussed. Fitted values of the radial energy integrals were determined from the known levels

  20. Scheduling Agreeable Jobs On A Single Machine To Minimize ...

    African Journals Online (AJOL)

    Journal of Applied Science and Technology ... The NP-hard problem of scheduling a number of jobs on a single machine to minimize the weighted number of early and tardy jobs, where the earliest start times and latest due dates are agreeable (i.e. the earliest start times increase in the same sequence as the latest due dates), has been considered.

  1. Amelioration of the cooling load based chiller sequencing control

    International Nuclear Information System (INIS)

    Huang, Sen; Zuo, Wangda; Sohn, Michael D.

    2016-01-01

    Highlights: • We developed a new approach for the optimal load distribution for chillers. • We proposed a new approach to optimize the number of operating chillers. • We provided a holistic solution to address chiller sequencing control problems. - Abstract: Cooling Load based Control (CLC) of chiller sequencing is a commonly used control strategy for multiple-chiller plants. To improve the energy efficiency of these chiller plants, researchers have proposed various CLC optimization approaches, which can be divided into two groups: studies that optimize the load distribution and studies that identify the optimal number of operating chillers. However, both groups have their own deficiencies and do not consider the impact of each other. This paper aims to improve CLC by proposing three new approaches. The first optimizes the load distribution by adjusting the critical points for chiller staging, which is easier to implement than existing approaches. In addition, by considering the impact of the load distribution on the cooling tower and pump energy consumption, this approach can achieve better energy savings. The second optimizes the number of operating chillers by modulating the critical points and the condenser water set point in order to achieve the minimal energy consumption of the entire chiller plant, which may not be guaranteed by existing approaches. The third combines the first two approaches to provide a holistic solution. The three proposed approaches were evaluated via a case study. The results show that the total energy consumption saving for the studied chiller plant is 0.5%, 5.3% and 5.6% for the three approaches, respectively. An energy saving of 4.9–11.8% can be achieved for the chillers at the cost of more energy consumption by the cooling towers (increases of 5.8–43.8%). The pumps’ energy saving varies from −8.6% to 2.0%, depending on the approach.
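
A minimal sketch of the cooling-load-based staging decision described above, with an assumed chiller capacity and an illustrative part-load COP curve (neither is from the paper): enumerate the feasible numbers of operating chillers, split the load equally, and keep the count with the lowest total chiller power.

```python
# All numbers below are hypothetical, for illustration only.
CAPACITY = 500.0   # kW of cooling per identical chiller (assumed)
N_MAX = 4          # chillers installed in the plant (assumed)

def chiller_power(load_kw):
    """Electric power of one chiller at a given cooling load (assumed curve)."""
    plr = load_kw / CAPACITY               # part-load ratio
    if plr <= 0 or plr > 1:
        return float('inf')                # infeasible operating point
    cop = 5.5 * (0.2 + 1.6 * plr - 0.8 * plr**2)   # COP rises with load, peaks at full load
    return load_kw / cop

def best_staging(total_load):
    """Number of operating chillers minimizing plant power under equal load split."""
    options = {}
    for n in range(1, N_MAX + 1):
        per_chiller = total_load / n       # cooling-load-based equal distribution
        options[n] = n * chiller_power(per_chiller)
    return min(options, key=options.get)

print(best_staging(400.0), best_staging(1600.0))
```

With this assumed curve, part loads are inefficient, so a low load is best served by one chiller while a high load forces all four on; the paper's approaches additionally fold cooling tower and pump consumption into the decision.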

  2. The thermodynamic approach to boron chemical vapour deposition based on a computer minimization of the total Gibbs free energy

    International Nuclear Information System (INIS)

    Naslain, R.; Thebault, J.; Hagenmuller, P.; Bernard, C.

    1979-01-01

    A thermodynamic approach based on the minimization of the total Gibbs free energy of the system is used to study the chemical vapour deposition (CVD) of boron from BCl₃–H₂ or BBr₃–H₂ mixtures on various types of substrates (at 1000 < T < 1900 K and 1 atm). In this approach it is assumed that states close to equilibrium are reached in the boron CVD apparatus. (Auth.)

  3. Coal consumption minimizing by increasing thermal energy efficiency at ROMAG-PROD Heavy Water Plant

    International Nuclear Information System (INIS)

    Preda, Marius Cristian

    2006-01-01

    The ROMAG-PROD Heavy Water Plant is a large thermal energy consumer, using almost all the steam output of the ROMAG-TERMO Power Plant; the steam cost accounts for about 40% of the total heavy water price. Minimizing steam consumption by modernizing the isotopic exchange facilities and engineering development at ROMAG-PROD results in a corresponding decrease of the coal amount burned in the ROMAG-TERMO boilers. This decrease could be achieved mainly in the following ways: - Facility wrapping integrity; - High-performance heat exchangers; - Refurbished heat insulation; - Modified condenser-collecting pipeline routes; - High-performance steam traps; - Electric heating wire. When coal is burned in power plant burners to obtain thermal energy, toxic emissions result in the flue gases, such as: - CO₂ and NOₓ, with impact on climate warming; - SO₂, which results in ozone layer thinning and acid rain falls. From the steam output per unit of burned coal (1 Gcal steam = 1.41 tonnes steam = 0.86 thermal MW = 1.1911 tonnes burned coal (lignite)), it is evident that by decreasing the thermal energy consumption provided to ROMAG-PROD, the coal amount is estimated to decrease by about 45 t/h, or about 394,200 t/year, which means about 10% of the current coal consumption at the ROMAG-TERMO PP. At the same time, by reducing the burned coal amount, a yearly decrease in emissions into the air of about 400,000 tonnes of CO₂ is expected

  4. Automated degenerate PCR primer design for high-throughput sequencing improves efficiency of viral sequencing

    Directory of Open Access Journals (Sweden)

    Li Kelvin

    2012-11-01

    Full Text Available Abstract Background In a high-throughput environment, to PCR amplify and sequence a large set of viral isolates from populations that are potentially heterogeneous and continuously evolving, the use of degenerate PCR primers is an important strategy. Degenerate primers allow for the PCR amplification of a wider range of viral isolates with only one set of pre-mixed primers, thus increasing amplification success rates and minimizing the need for genome finishing activities. To successfully select the large set of degenerate PCR primers necessary to tile across an entire viral genome and to maximize their success, this process is best performed computationally. Results We have developed a fully automated degenerate PCR primer design system that plays a key role in the J. Craig Venter Institute’s (JCVI) high-throughput viral sequencing pipeline. A consensus viral genome, or a set of consensus segment sequences in the case of a segmented virus, is specified using IUPAC ambiguity codes in the consensus template sequence to represent the allelic diversity of the target population. PCR primer pairs are then selected computationally to produce a minimal amplicon set capable of tiling across the full length of the specified target region. As part of the tiling process, primer pairs are computationally screened to meet the criteria for successful PCR with one of two described amplification protocols. The actual sequencing success rates of designed primers for measles virus, mumps virus, human parainfluenza virus 1 and 3, human respiratory syncytial virus A and B, and human metapneumovirus are described, where >90% of designed primer pairs were able to consistently and successfully amplify >75% of the isolates. Conclusions Augmenting our previously developed and published JCVI Primer Design Pipeline, we achieved similarly high sequencing success rates with only minor software modifications. The recommended methodology for the construction of the consensus

  5. Strategic planning for minimizing CO2 emissions using LP model based on forecasted energy demand by PSO Algorithm and ANN

    Energy Technology Data Exchange (ETDEWEB)

    Yousefi, M.; Omid, M.; Rafiee, Sh. [Department of Agricultural Machinery Engineering, University of Tehran, Karaj (Iran, Islamic Republic of); Ghaderi, S. F. [Department of Industrial Engineering, University of Tehran, Tehran (Iran, Islamic Republic of)

    2013-07-01

    Iran's primary energy consumption (PEC) was modeled as a linear function of five socioeconomic and meteorological explanatory variables using particle swarm optimization (PSO) and artificial neural network (ANN) techniques. Results revealed that the ANN outperforms the PSO model in predicting the test data. However, the PSO technique is simple and provides a closed-form expression to forecast PEC. Energy demand was forecasted by PSO and ANN under the represented scenario. Finally, adopting about 10% renewable energy revealed that, based on the developed linear programming (LP) model under minimum CO2 emissions, Iran will emit about 2520 million metric tons of CO2 in 2025. The LP model indicated that the maximum possible development of hydropower, geothermal and wind energy resources will satisfy the aim of minimizing CO2 emissions. Therefore, the main strategic policy for reducing CO2 emissions would be the exploitation of these resources.
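
The structure of such an emissions-minimizing LP can be sketched with a single demand constraint; all capacities and emission factors below are hypothetical. With one equality constraint and simple bounds, the optimum is obtained by dispatching sources in order of increasing emission factor, which mirrors the conclusion that the low-carbon resources should be developed to their maximum.

```python
# Illustrative sketch of the single-constraint LP: meet demand D at minimum
# CO2, i.e. min sum(e_i * x_i) s.t. sum(x_i) = D, 0 <= x_i <= cap_i.
# All numbers are assumed for illustration, not taken from the study.

sources = [   # (name, capacity in TWh, tCO2 per MWh)
    ("hydro",      30.0, 0.01),
    ("wind",       10.0, 0.01),
    ("geothermal",  5.0, 0.04),
    ("gas",       200.0, 0.45),
    ("oil",       150.0, 0.70),
]

def min_emissions(demand_twh):
    """Greedy (and here exact) LP solution: fill cheapest-emitting sources first."""
    total = 0.0
    plan = {}
    for name, cap, e in sorted(sources, key=lambda s: s[2]):
        take = min(cap, demand_twh)
        plan[name] = take
        total += take * e * 1e6     # TWh -> MWh gives tonnes of CO2
        demand_twh -= take
        if demand_twh <= 0:
            break
    return total, plan

tons, plan = min_emissions(100.0)
print(plan)
```

The greedy order is optimal here because the objective and constraint are both separable and linear; a general multi-constraint LP would need a simplex or interior-point solver.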

  6. Methylene blue binding to DNA with alternating AT base sequence: minor groove binding is favored over intercalation.

    Science.gov (United States)

    Rohs, Remo; Sklenar, Heinz

    2004-04-01

    The results presented in this paper on methylene blue (MB) binding to DNA with an AT-alternating base sequence complement the data obtained in two earlier modeling studies of MB binding to GC-alternating DNA. In the light of the large amount of experimental data for both systems, this theoretical study focuses on a detailed energetic analysis and comparison in order to understand their different behavior. Since experimental high-resolution structures of the complexes are not available, the analysis is based on energy-minimized structural models of the complexes in different binding modes. For both sequences, four different intercalation structures and two models for MB binding in the minor and major groove have been proposed. Solvent electrostatic effects were included in the energetic analysis by using electrostatic continuum theory, and the dependence of MB binding on salt concentration was investigated by solving the non-linear Poisson-Boltzmann equation. We find that the relative stability of the different complexes is similar for the two sequences, in agreement with the interpretation of spectroscopic data. Subtle differences, however, are seen in the energy decompositions and can be attributed to the change from symmetric 5'-YpR-3' intercalation to minor groove binding with increasing salt concentration, which is experimentally observed for the AT sequence at a lower salt concentration than for the GC sequence. According to our results, this difference is due to the significantly lower non-electrostatic energy of the minor groove complex with AT-alternating DNA, whereas the slightly lower binding energy to this sequence is caused by a higher deformation energy of the DNA. The energetic data are in agreement with the conclusions derived from different spectroscopic studies and can also be structurally interpreted on the basis of the modeled complexes. The simple static modeling technique and the neglect of entropy terms and of non-electrostatic solute

  7. Cost-Effective Method for Free-Energy Minimization in Complex Systems with Elaborated Ab Initio Potentials.

    Science.gov (United States)

    Bistafa, Carlos; Kitamura, Yukichi; Martins-Costa, Marilia T C; Nagaoka, Masataka; Ruiz-López, Manuel F

    2018-05-22

    We describe a method to locate stationary points in the free-energy hypersurface of complex molecular systems using high-level correlated ab initio potentials. In this work, we assume a combined QM/MM description of the system although generalization to full ab initio potentials or other theoretical schemes is straightforward. The free-energy gradient (FEG) is obtained as the mean force acting on relevant nuclei using a dual level strategy. First, a statistical simulation is carried out using an appropriate, low-level quantum mechanical force-field. Free-energy perturbation (FEP) theory is then used to obtain the free-energy derivatives for the target, high-level quantum mechanical force-field. We show that this composite FEG-FEP approach is able to reproduce the results of a standard free-energy minimization procedure with high accuracy, while simultaneously allowing for a drastic reduction of both computational and wall-clock time. The method has been applied to study the structure of the water molecule in liquid water at the QCISD/aug-cc-pVTZ level of theory, using the sampling from QM/MM molecular dynamics simulations at the B3LYP/6-311+G(d,p) level. The obtained values for the geometrical parameters and for the dipole moment of the water molecule are within the experimental error, and they also display an excellent agreement when compared to other theoretical estimations. The developed methodology represents therefore an important step toward the accurate determination of the mechanism, kinetics, and thermodynamic properties of processes in solution, in enzymes, and in other disordered chemical systems using state-of-the-art ab initio potentials.

  8. Effect of energy level sequences and neutron–proton interaction on α-particle preformation probability

    International Nuclear Information System (INIS)

    Ismail, M.; Adel, A.

    2013-01-01

    A realistic density-dependent nucleon–nucleon (NN) interaction with a finite-range exchange part, which reproduces the nuclear matter saturation curve and the energy dependence of the nucleon–nucleus optical model potential, is used to calculate the preformation probability, Sα, of α-decay from different isotones with neutron numbers N = 124, 126, 128, 130 and 132. We studied the variation of Sα with the proton number, Z, for each isotone and found the effect of the neutron and proton energy levels of the parent nuclei on the behavior of the α-particle preformation probability. We found that Sα increases regularly with the proton number when the proton pair in the α-particle is emitted from the same level and the neutron level sequence is not changed during the Z-variation. In this case the neutron–proton (n–p) interaction of the two levels contributing to the emission process is too small. On the contrary, if the proton or neutron level sequence is changed during the emission process, Sα behaves irregularly; the irregular behavior increases if both proton and neutron levels are changed. This behavior is accompanied by a change or rapid increase in the strength of the n–p interaction

  9. Fast selection of miRNA candidates based on large-scale pre-computed MFE sets of randomized sequences.

    Science.gov (United States)

    Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter

    2014-01-13

    Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation, allowing on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification.
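    The interpolation-based P-value can be sketched as follows. The table of (mean, std) pairs below is a hypothetical placeholder; the real values come from the large-scale grid pre-computation described in the record, and a real table would be indexed by full nucleotide composition rather than GC fraction alone.

```python
import math

# Hypothetical pre-computed table: mean and standard deviation of the MFE
# (kcal/mol) of randomized sequences, indexed by GC fraction.
MFE_TABLE = {0.3: (-18.0, 3.5), 0.4: (-24.0, 3.8), 0.5: (-30.0, 4.0),
             0.6: (-36.0, 4.2)}

def interpolate(gc):
    """Linear interpolation of (mean, std) between tabulated GC fractions."""
    keys = sorted(MFE_TABLE)
    if gc <= keys[0]:
        return MFE_TABLE[keys[0]]
    if gc >= keys[-1]:
        return MFE_TABLE[keys[-1]]
    for lo, hi in zip(keys, keys[1:]):
        if lo <= gc <= hi:
            t = (gc - lo) / (hi - lo)
            m0, s0 = MFE_TABLE[lo]
            m1, s1 = MFE_TABLE[hi]
            return m0 + t * (m1 - m0), s0 + t * (s1 - s0)

def mfe_p_value(mfe, gc):
    """P(random sequence folds at least as low as `mfe`) under the fitted
    normal distribution: the lower-tail probability Phi((mfe - mean)/std)."""
    mean, std = interpolate(gc)
    z = (mfe - mean) / std
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_candidate = mfe_p_value(-45.0, 0.45)   # candidate with unusually low MFE
```

    A small P-value means the candidate folds much more stably than random sequences of the same composition, the pre-miRNA signature the record exploits.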

  10. A procedure to compute equilibrium concentrations in multicomponent systems by Gibbs energy minimization on spreadsheets

    International Nuclear Information System (INIS)

    Lima da Silva, Aline; Heck, Nestor Cesar

    2003-01-01

    Equilibrium concentrations are traditionally calculated with the help of equilibrium constant equations from selected reactions. This procedure, however, is only useful for simpler problems. Analysis of the equilibrium state in a multicomponent and multiphase system necessarily involves the solution of several simultaneous equations, and, as the number of system components grows, the required computation becomes more complex and tedious. A more direct and general method for solving the problem is the direct minimization of the Gibbs energy function. The solution of the nonlinear problem consists of minimizing the objective function (the Gibbs energy of the system) subject to the constraints of the elemental mass balance. To solve it, usually a computer code is developed, which requires considerable testing and debugging effort. In this work, a simple method to predict equilibrium composition in multicomponent systems is presented, which makes use of an electronic spreadsheet. The ability to carry out these calculations within a spreadsheet environment has several advantages. First, spreadsheets are available 'universally' on nearly all personal computers. Second, the input and output capabilities of spreadsheets can be effectively used to monitor calculated results. Third, no additional systems or programs need to be learned. In this way, spreadsheets are as suitable for computing equilibrium concentrations as they are for use as teaching and learning aids. This work describes, therefore, the use of the Solver tool, contained in the Microsoft Excel spreadsheet package, for computing equilibrium concentrations in a multicomponent system by the method of direct Gibbs energy minimization. The four-phase Fe-Cr-O-C-Ni system is used as an example to illustrate the proposed method. The pure stoichiometric phases considered in the equilibrium calculations are Cr2O3(s) and FeO·Cr2O3(s). The atmosphere consists of the constituents O2, CO and CO2. The liquid iron
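    The principle of direct Gibbs energy minimization can be shown on a much smaller system than the Fe-Cr-O-C-Ni example above. The sketch below treats the ideal-gas reaction CO + 1/2 O2 -> CO2 at 298 K; parameterizing by the extent of reaction enforces the elemental mass balance exactly, reducing the constrained minimization to a one-dimensional search (a spreadsheet Solver would instead vary the mole numbers directly under mass-balance constraints).

```python
import math

R, T = 8.314, 298.15                     # J/(mol*K), K
# Standard Gibbs energies of formation at 298 K (J/mol), textbook values.
G0 = {"CO": -137_200.0, "O2": 0.0, "CO2": -394_400.0}

def gibbs(xi):
    """Total Gibbs energy (J) of an ideal-gas mixture for the reaction
    CO + 1/2 O2 -> CO2, starting from 1 mol CO + 0.5 mol O2 at 1 bar,
    as a function of the extent of reaction xi in (0, 1)."""
    n = {"CO": 1.0 - xi, "O2": 0.5 * (1.0 - xi), "CO2": xi}
    ntot = sum(n.values())
    return sum(ni * (G0[s] + R * T * math.log(ni / ntot))
               for s, ni in n.items() if ni > 0.0)

# Golden-section search for the minimum of G(xi) on (0, 1).
lo, hi = 1e-9, 1.0 - 1e-9
phi = (math.sqrt(5.0) - 1.0) / 2.0
while hi - lo > 1e-12:
    a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
    if gibbs(a) < gibbs(b):
        hi = b
    else:
        lo = a
xi_eq = 0.5 * (lo + hi)   # equilibrium extent; ~1 here, reaction goes to completion
```

    At 298 K the reaction Gibbs energy is strongly negative, so the minimum lies essentially at complete conversion; at higher temperatures the ideal-mixing entropy terms shift the minimum into the interior.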

  11. Minimally invasive approach of panfacial fractures

    Directory of Open Access Journals (Sweden)

    Yudi Wijaya

    2017-08-01

    Full Text Available Background. Panfacial fractures involve fractures of several bones of the face. They are associated with malocclusion, dish-face deformity, enophthalmos, diplopia, cerebrospinal fluid leak and soft tissue injuries. Purpose. The purpose of this paper is to present a case in which the surgical wound and morbidity were minimized. Case. A 40-year-old female presented with severe maxillofacial injuries caused by a motor vehicle collision about 5 days prior to admission. The assessment of the patient was mild head injury, panfacial fractures, lacerated facial wounds, and rupture of the left ocular globe. Open reduction and internal fixation (ORIF) and enucleation of the left globe were performed. An intraoral vestibular incision was made in the upper and lower vestibular regions. Mucoperiosteal flap elevation exposed the anterior maxilla and the mandibular fractures. Intermaxillary fixation was maintained for 3 weeks, and aesthetics were restored with a prosthetic eyeball and a denture. Discussion. The goal of treatment of panfacial fractures is to restore both function and the pre-injury 3-dimensional facial contours. To achieve this goal, two common sequences of management of panfacial fractures are proposed: "bottom up and inside out" or "top down and outside in". Other sequences exist, but they are variations of these two major approaches. Conclusion. A minimally invasive approach to the fracture site is an alternative method to manage panfacial fractures that is simple and effective, with a lower complication rate.

  12. Human Inferences about Sequences: A Minimal Transition Probability Model.

    Directory of Open Access Journals (Sweden)

    Florent Meyniel

    2016-12-01

    Full Text Available The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations includes explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge.
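    A minimal sketch of such a transition-probability learner follows. It is not the paper's exact Bayesian formulation; it uses leaky transition counts (the leak playing the role of the single free parameter that discounts remote observations) and reports surprise as -log2 of the predicted probability of each observation.

```python
import math

def transition_surprise(seq, leak=0.9):
    """Maintain leaky counts of transitions between two stimuli (0/1),
    turn them into probability estimates, and report the surprise
    -log2 p(next | prev) for each observation. `leak` implements
    exponential forgetting of remote observations."""
    counts = [[1.0, 1.0], [1.0, 1.0]]     # Laplace-smoothed transition counts
    surprises = []
    for prev, nxt in zip(seq, seq[1:]):
        p = counts[prev][nxt] / (counts[prev][0] + counts[prev][1])
        surprises.append(-math.log2(p))
        for a in (0, 1):                  # forget, then count the new transition
            for b in (0, 1):
                counts[a][b] *= leak
        counts[prev][nxt] += 1.0
    return surprises

# A repetition-heavy sequence: the final alternation is more surprising
# than the preceding repetitions, the asymmetry discussed above.
s = transition_surprise([0, 0, 0, 0, 0, 1])
```

    After a run of repetitions the estimated p(0 -> 1) is small, so the closing alternation produces a large surprise signal.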

  13. Reduction efficiency prediction of CENIBRA's recovery boiler by direct minimization of gibbs free energy

    Directory of Open Access Journals (Sweden)

    W. L. Silva

    2008-09-01

    Full Text Available The reduction efficiency is an important variable during the black liquor burning process in the Kraft recovery boiler. Its value is obtained by slow experimental routines, and the delay of this measurement disturbs the customary control of the pulp and paper industry. This paper describes an optimization approach for determining the reduction efficiency in the furnace bottom of the recovery boiler based on the minimization of the Gibbs free energy. The industrial data used in this study were obtained directly from CENIBRA's data acquisition system. The resulting approach is able to predict the steady-state behavior of the chemical composition of the recovery boiler furnace, especially the reduction efficiency, when different operational conditions are used. This result confirms the potential of this approach in the analysis of the daily operation of the recovery boiler.

  14. Adenovirus sequences required for replication in vivo.

    OpenAIRE

    Wang, K; Pearson, G D

    1985-01-01

    We have studied the in vivo replication properties of plasmids carrying deletion mutations within cloned adenovirus terminal sequences. Deletion mapping located the adenovirus DNA replication origin entirely within the first 67 bp of the adenovirus inverted terminal repeat. This region could be further subdivided into two functional domains: a minimal replication origin and an adjacent auxiliary region which boosted the efficiency of replication by more than 100-fold. The minimal origin occup...

  15. Towards a Logical Distinction Between Swarms and Aftershock Sequences

    Science.gov (United States)

    Gardine, M.; Burris, L.; McNutt, S.

    2007-12-01

    The distinction between swarms and aftershock sequences has, up to this point, been fairly arbitrary and non-uniform. Typically a difference of 0.5 to 1 order of magnitude between the mainshock and the largest aftershock has been the traditional choice, but there are many exceptions. Seismologists have generally assumed that the mainshock carries most of the energy, but this is only true if it is sufficiently large compared to the size and number of aftershocks. Here we present a systematic division based on the energy of the aftershock sequence compared to the energy of the largest event of the sequence. The amount of aftershock energy in the sequence can be calculated using the b-value of the frequency-magnitude relation with a fixed choice of magnitude separation (M-mainshock minus M-largest aftershock). Assuming that the energy of an aftershock sequence is less than the energy of the mainshock, the b-value at which the aftershock energy exceeds the mainshock energy determines the boundary between aftershock sequences and swarms. The amount of energy for various choices of b-value is also calculated using different values of magnitude separation. When the minimum b-value at which the sequence energy exceeds that of the largest event is plotted against the magnitude separation, a linear trend emerges. Values plotting above this line represent swarms and values plotting below it represent aftershock sequences. This scheme has the advantage that it represents a physical quantity, energy, rather than only statistical features of earthquake distributions. As such it may be useful in distinguishing swarms from mainshock/aftershock sequences and in better determining the underlying causes of earthquake swarms.
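    The energy comparison can be sketched numerically. The code below uses the standard Gutenberg-Richter energy-magnitude relation (log10 E = 1.5 M + 4.8, E in joules) and a simplified Gutenberg-Richter magnitude distribution for the sequence; the normalization and magnitude grid are illustrative choices, not the authors' exact calculation.

```python
def radiated_energy(m):
    """Gutenberg-Richter energy-magnitude relation (energy in joules)."""
    return 10.0 ** (1.5 * m + 4.8)

def sequence_energy(m_largest, b, m_min=0.0, dm=0.1):
    """Total energy of a sequence whose magnitudes follow a
    Gutenberg-Richter distribution with slope b, normalized to a single
    event at m_largest. Because energy grows as 10**(1.5*m), the sum is
    dominated by the largest events whenever b < 1.5."""
    total, m = 0.0, m_min
    while m <= m_largest + 1e-9:
        n = 10.0 ** (b * (m_largest - m))   # number of events of magnitude m
        total += n * radiated_energy(m)
        m += dm
    return total

def is_swarm(m_mainshock, m_largest_other, b):
    """Swarm if the rest of the sequence carries more energy than the
    largest event; otherwise a mainshock/aftershock sequence."""
    return sequence_energy(m_largest_other, b) > radiated_energy(m_mainshock)

low_b = is_swarm(6.0, 5.0, 0.8)    # large separation, modest b: aftershocks
high_b = is_swarm(6.0, 5.9, 1.4)   # small separation, high b: swarm
```

    Raising b or shrinking the magnitude separation pushes the sequence energy past the largest event's energy, which is the linear boundary between swarms and aftershock sequences described above.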

  16. The minimally tuned minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Essig, Rouven; Fortin, Jean-Francois

    2008-01-01

    The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

  17. Efficient construction of an inverted minimal H1 promoter driven siRNA expression cassette: facilitation of promoter and siRNA sequence exchange.

    Directory of Open Access Journals (Sweden)

    Hoorig Nassanian

    2007-08-01

    Full Text Available RNA interference (RNAi, mediated by small interfering RNA (siRNA, is an effective method used to silence gene expression at the post-transcriptional level. Upon introduction into target cells, siRNAs incorporate into the RNA-induced silencing complex (RISC. The antisense strand of the siRNA duplex then "guides" the RISC to the homologous mRNA, leading to target degradation and gene silencing. In recent years, various vector-based siRNA expression systems have been developed which utilize opposing polymerase III promoters to independently drive expression of the sense and antisense strands of the siRNA duplex from the same template.We show here the use of a ligase chain reaction (LCR to develop a new vector system called pInv-H1 in which a DNA sequence encoding a specific siRNA is placed between two inverted minimal human H1 promoters (approximately 100 bp each. Expression of functional siRNAs from this construct has led to efficient silencing of both reporter and endogenous genes. Furthermore, the inverted H1 promoter-siRNA expression cassette was used to generate a retrovirus vector capable of transducing and silencing expression of the targeted protein by>80% in target cells.The unique design of this construct allows for the efficient exchange of siRNA sequences by the directional cloning of short oligonucleotides via asymmetric restriction sites. This provides a convenient way to test the functionality of different siRNA sequences. Delivery of the siRNA cassette by retroviral transduction suggests that a single copy of the siRNA expression cassette efficiently knocks down gene expression at the protein level. We note that this vector system can potentially be used to generate a random siRNA library. The flexibility of the ligase chain reaction suggests that additional control elements can easily be introduced into this siRNA expression cassette.

  18. Strategic planning for minimizing CO2 emissions using LP model based on forecasted energy demand by PSO Algorithm and ANN

    Energy Technology Data Exchange (ETDEWEB)

    Yousefi, M.; Omid, M.; Rafiee, Sh. [Department of Agricultural Machinery Engineering, University of Tehran, Karaj (Iran, Islamic Republic of); Ghaderi, S.F. [Department of Industrial Engineering, University of Tehran, Tehran (Iran, Islamic Republic of)

    2013-07-01

    Iran's primary energy consumption (PEC) was modeled as a linear function of five socioeconomic and meteorological explanatory variables using particle swarm optimization (PSO) and artificial neural network (ANN) techniques. Results revealed that the ANN outperforms the PSO model in predicting test data. However, the PSO technique is simple and provides a closed-form expression to forecast PEC. Energy demand was forecasted by PSO and ANN using the presented scenario. Finally, adopting about 10% renewable energy revealed that, based on the developed linear programming (LP) model under minimum CO2 emissions, Iran will emit about 2520 million metric tons of CO2 in 2025. The LP model indicated that the maximum possible development of hydropower, geothermal and wind energy resources will satisfy the aim of minimizing CO2 emissions. Therefore, the main strategic policy for reducing CO2 emissions would be exploitation of these resources.

  19. Evolved Minimal Frustration in Multifunctional Biomolecules.

    Science.gov (United States)

    Röder, Konstantin; Wales, David J

    2018-05-25

    Protein folding is often viewed in terms of a funnelled potential or free energy landscape. A variety of experiments now indicate the existence of multifunnel landscapes, associated with multifunctional biomolecules. Here, we present evidence that these systems have evolved to exhibit the minimal number of funnels required to fulfil their cellular functions, suggesting an extension to the principle of minimum frustration. We find that minimal disruptive mutations result in additional funnels, and the associated structural ensembles become more diverse. The same trends are observed in an atomic cluster. These observations suggest guidelines for rational design of engineered multifunctional biomolecules.

  20. On balanced minimal repeated measurements designs

    Directory of Open Access Journals (Sweden)

    Shakeel Ahmad Mir

    2014-10-01

    Full Text Available Repeated measurements designs are concerned with scientific experiments in which each experimental unit is assigned more than once to a treatment, either different or identical. This class of designs has the property that unbiased estimators for elementary contrasts among direct and residual effects are obtainable. Afsarinejad (1983) provided a method of constructing balanced minimal repeated measurements designs for p < t, when t is odd or a prime power; one or more treatments may occur more than once in some sequences, and the designs so constructed no longer remain uniform in periods. In this paper an attempt has been made to provide a new method to overcome this drawback. Specifically, two cases have been considered: RM[t, n = t(t-1)/(p-1), p], λ2 = 1, for balanced minimal repeated measurements designs, and RM[t, n = 2t(t-1)/(p-1), p], λ2 = 2, for balanced repeated measurements designs. In addition, a method has been provided for constructing extra-balanced minimal designs for the special case RM[t, n = t²/(p-1), p], λ2 = 1.

  1. The minimal non-minimal standard model

    International Nuclear Information System (INIS)

    Bij, J.J. van der

    2006-01-01

    In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

  2. Waste Minimization and Pollution Prevention Awareness Plan

    International Nuclear Information System (INIS)

    1992-01-01

    The purpose of this plan is to document the Lawrence Livermore National Laboratory (LLNL) Waste Minimization and Pollution Prevention Awareness Program. The plan specifies those activities and methods that are or will be employed to reduce the quantity and toxicity of wastes generated at the site. It is intended to satisfy Department of Energy (DOE) and other legal requirements that are discussed in Section C, below. The Pollution Prevention Awareness Program is included with the Waste Minimization Program as suggested by DOE Order 5400.1. The intent of this plan is to respond to and comply with the Department's policy and guidelines concerning the need for pollution prevention. The Plan is composed of a LLNL Waste Minimization and Pollution Prevention Awareness Program Plan and, as attachments, Directorate-, Program- and Department-specific waste minimization plans. This format reflects the fact that waste minimization is considered a line management responsibility and is to be addressed by each of the Directorates, Programs and Departments. Several Directorates have been reorganized, necessitating changes in the Directorate plans that were published in 1991

  3. Perturbed Yukawa textures in the minimal seesaw model

    Energy Technology Data Exchange (ETDEWEB)

    Rink, Thomas; Schmitz, Kai [Max Planck Institute for Nuclear Physics (MPIK),69117 Heidelberg (Germany)

    2017-03-29

    We revisit the minimal seesaw model, i.e., the type-I seesaw mechanism involving only two right-handed neutrinos. This model represents an important minimal benchmark scenario for future experimental updates on neutrino oscillations. It features four real parameters that cannot be fixed by the current data: two CP-violating phases, δ and σ, as well as one complex parameter, z, that is experimentally inaccessible at low energies. The parameter z controls the structure of the neutrino Yukawa matrix at high energies, which is why it may be regarded as a label or index for all UV completions of the minimal seesaw model. The fact that z encompasses only two real degrees of freedom allows us to systematically scan the minimal seesaw model over all of its possible UV completions. In doing so, we address the following question: suppose δ and σ should be measured at particular values in the future — to what extent is one then still able to realize approximate textures in the neutrino Yukawa matrix? Our analysis, thus, generalizes previous studies of the minimal seesaw model based on the assumption of exact texture zeros. In particular, our study allows us to assess the theoretical uncertainty inherent to the common texture ansatz. One of our main results is that a normal light-neutrino mass hierarchy is, in fact, still consistent with a two-zero Yukawa texture, provided that the two texture zeros receive corrections at the level of O(10 %). While our numerical results pertain to the minimal seesaw model only, our general procedure appears to be applicable to other neutrino mass models as well.

  4. Intermixing in heteroepitaxial islands: fast, self-consistent calculation of the concentration profile minimizing the elastic energy

    International Nuclear Information System (INIS)

    Gatti, R; UhlIk, F; Montalenti, F

    2008-01-01

    We present a novel computational method for finding the concentration profile which minimizes the elastic energy stored in heteroepitaxial islands. Based on a suitable combination of continuum elasticity theory and configurational Monte Carlo, we show that such profiles can be readily found by a simple, yet fully self-consistent, iterative procedure. We apply the method to SiGe/Si islands, considering realistic three-dimensional shapes (pyramids, domes and barns), finding strongly non-uniform distributions of Si and Ge atoms, in qualitative agreement with several experiments. Moreover, our simulated selective-etching profiles display, in some cases, a remarkable resemblance to the experimental ones, opening intriguing questions on the interplay between kinetic, entropic and elastic effects

  5. Development of a waste minimization plan for the Department of Energy's Naval petroleum reserve No. 3

    International Nuclear Information System (INIS)

    Falconer, K.L.; Lane, T.C.

    1991-01-01

    A Waste Minimization Program Plan for the U.S. Department of Energy's (DOE) Naval Petroleum Reserve No. 3 (NPR-3) was prepared in response to DOE Order 5400.1, 'General Environmental Protection Program'. The NPR-3 Waste Minimization Program Plan encompasses all ongoing operations at the Naval Petroleum Reserve and is consistent with the principles set forth in the mission statement for NPR-3. The mission of the NPR-3 is to apply project management, engineering and scientific capabilities to produce oil and gas from subsurface zones at the maximum efficiency rate for the United States Government. NPR-3 generates more than 60 discrete waste streams, many of significant volume. Most of these waste streams are categorized as wastes from the exploration, development and production of oil and gas and, as such, are exempt from Subtitle C of RCRA as indicated in the regulatory determination published in the Federal Register on July 6, 1988. However, because so many of these waste streams contain hazardous substances and because of an increasingly more restrictive regulatory environment, in 1990 an overall effort was made to characterize all waste streams produced and institute the best waste management practice economically practical to reduce the volume and toxicity of the waste generated.

  6. Hydrogen atom in momentum space with a minimal length

    International Nuclear Information System (INIS)

    Bouaziz, Djamil; Ferkous, Nourredine

    2010-01-01

    A momentum representation treatment of the hydrogen atom problem with a generalized uncertainty relation, which leads to a minimal length (ΔX_i)_min = ħ√(3β + β′), is presented. We show that the distance squared operator can be factorized in the case β′ = 2β. We analytically solve the s-wave bound-state equation. The leading correction to the energy spectrum caused by the minimal length depends on √β. An upper bound for the minimal length is found to be about 10⁻⁹ fm.

  7. Design and Validation of Real-Time Optimal Control with ECMS to Minimize Energy Consumption for Parallel Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Aiyun Gao

    2017-01-01

    Full Text Available A real-time optimal control of parallel hybrid electric vehicles (PHEVs) with the equivalent consumption minimization strategy (ECMS) is presented in this paper, whose purpose is to minimize the total equivalent fuel consumption while maintaining the battery state of charge (SOC) within its operating range at all times. Vehicle and assembly models of PHEVs are established, which provide the foundation for the following calculations. The ECMS is described in detail: an instantaneous cost function including the fuel energy and the electrical energy is proposed, with emphasis on the computation of the equivalence factor. The real-time optimal control strategy is designed by taking the minimum of the total equivalent fuel consumption as the control objective and the torque split factor as the control variable. The validity of the proposed control strategy is demonstrated both in the MATLAB/Simulink/Advisor environment and under actual transportation conditions by comparing the fuel economy, charge sustainability, and component performance with three other control strategies under different driving cycles, including standard, actual, and real-time road conditions. Through numerical simulations and real vehicle tests, the accuracy of the approach used for the evaluation of the equivalence factor is confirmed, and the potential of the proposed control strategy in terms of fuel economy and keeping SOC deviations at a low level is illustrated.
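    The instantaneous ECMS decision can be sketched as a grid search over the torque split factor. The fuel and battery power maps, the SOC-adaptation law for the equivalence factor, and all constants below are toy placeholders, not the paper's calibrated vehicle models.

```python
def ecms_split(t_demand, soc, s0=1.0, k_soc=2.0,
               t_eng_max=120.0, t_mot_max=60.0):
    """Grid-search sketch of ECMS: at each instant choose the torque split
    u (motor share of the demanded torque) minimizing fuel power plus
    s * battery power, where the equivalence factor s is adapted with the
    SOC deviation from its target to keep the battery in range."""
    s = s0 + k_soc * (0.6 - soc)          # raise s when SOC drops below target
    best_u, best_cost = 0.0, float("inf")
    for i in range(101):                   # torque split factor u in [0, 1]
        u = i / 100.0
        t_mot = u * t_demand
        t_eng = t_demand - t_mot
        if t_eng > t_eng_max or t_mot > t_mot_max:
            continue                       # infeasible split
        fuel_power = 0.30 * t_eng + 0.002 * t_eng ** 2   # toy fuel map
        elec_power = 0.25 * t_mot + 0.004 * t_mot ** 2   # toy battery map
        cost = fuel_power + s * elec_power
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

u_low_soc = ecms_split(80.0, soc=0.35)    # depleted battery: lean on the engine
u_high_soc = ecms_split(80.0, soc=0.80)   # full battery: lean on the motor
```

    Because the equivalence factor penalizes electrical energy more when the SOC is low, the optimizer automatically shifts torque toward the engine, which is how ECMS sustains charge without an explicit SOC constraint at each instant.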

  8. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  9. Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.

    Science.gov (United States)

    Will, Sebastian; Jabbari, Hosna

    2016-01-01

    RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible in a trivial extension of the method for simple energy models. Then, we present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold that guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is intentionally generalizable to complex prediction algorithms; due to their high space demands, algorithms like pseudoknot prediction and RNA-RNA-interaction prediction are expected to profit even more strongly than "standard" MFE folding. SparseMFEFold is free
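    The kind of dynamic-programming recurrence that SparseMFEFold sparsifies can be illustrated with the much simpler Nussinov base-pair maximization, a pseudo-energy model of exactly the base-pair-based kind the record mentions. This sketch is not SparseMFEFold itself: it tracks pair counts instead of Turner loop energies and uses the full O(n²) matrix without sparsification.

```python
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
         ("G", "U"), ("U", "G")}
MIN_LOOP = 3   # minimum number of unpaired bases in a hairpin loop

def nussinov(seq):
    """Maximize the number of base pairs by dynamic programming. The real
    Turner-model recurrences are structurally similar (decompose [i, j]
    by the pairing partner of j) but minimize loop energies instead."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(MIN_LOOP + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                   # case 1: j unpaired
            for k in range(i, j - MIN_LOOP):      # case 2: j pairs with k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

pairs = nussinov("GGGAAAUCCC")   # three G-C pairs close a hairpin
```

    Sparsification prunes the inner loop over k to a short list of "candidate" partners that can still be optimal, which is the source of the Z- and T-dependent complexities quoted above.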

  10. Minimality of critical scenarios with linear logic and cutsets

    African Journals Online (AJOL)

    DK

    Keywords: Dependability - Mechatronic systems - Petri net - Linear logic - Minimal feared scenarios - Cutsets.

  11. Ab initio study on stacking sequences, free energy, dynamical stability and potential energy surfaces of graphite structures

    International Nuclear Information System (INIS)

    Anees, P; Valsakumar, M C; Chandra, Sharat; Panigrahi, B K

    2014-01-01

    Ab initio simulations have been performed to study the structure, energetics and stability of several plausible stacking sequences in graphite. These calculations suggest that in addition to the standard structures, graphite can also exist in AA-simple hexagonal, AB-orthorhombic and ABC-hexagonal type stacking. The free energy difference between these structures is very small (∼1 meV/atom), and hence all the structures can coexist from purely energetic considerations. Calculated x-ray diffraction patterns are similar to those of the standard structures for 2θ ⩽ 70°. The shear elastic constant C44 is negative in the AA-simple hexagonal, AB-orthorhombic and ABC-hexagonal structures, suggesting that these structures are mechanically unstable. Phonon dispersions show that the frequencies of some modes along the Γ–A direction in the Brillouin zone are imaginary in all of the new structures, implying that these structures are dynamically unstable. Incorporation of zero-point vibrational energy via the quasi-harmonic approximation does not result in the restoration of dynamical stability. Potential energy surfaces for the unstable normal modes are seen to have the topography of a potential hill for all the new structures, confirming that all of the new structures are inherently unstable. The fact that the potential energy surface is not in the form of a double well implies that the structures are linearly as well as globally unstable. (paper)

  12. Effects of thermal fluctuations on non-minimal regular magnetic black hole

    International Nuclear Information System (INIS)

    Jawad, Abdul; Shahzad, M.U.

    2017-01-01

    We analyze the effects of thermal fluctuations on a regular black hole (RBH) of the non-minimal Einstein-Yang-Mill theory with gauge field of magnetic Wu-Yang type and a cosmological constant. We consider the logarithmic corrected entropy in order to analyze the thermal fluctuations corresponding to non-minimal RBH thermodynamics. In this scenario, we develop various important thermodynamical quantities, such as entropy, pressure, specific heats, Gibbs free energy and Helmholtz free energy. We investigate the first law of thermodynamics in the presence of logarithmic corrected entropy and non-minimal RBH. We also discuss the stability of this RBH using various frameworks such as the γ factor (the ratio of heat capacities), phase transition, grand canonical ensemble and canonical ensemble. It is observed that the non-minimal RBH becomes globally and locally more stable if we increase the value of the cosmological constant. (orig.)

  13. Effects of thermal fluctuations on non-minimal regular magnetic black hole

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)

    2017-05-15

    We analyze the effects of thermal fluctuations on a regular black hole (RBH) of the non-minimal Einstein-Yang-Mills theory with a gauge field of magnetic Wu-Yang type and a cosmological constant. We consider the logarithmic corrected entropy in order to analyze the thermal fluctuations corresponding to non-minimal RBH thermodynamics. In this scenario, we develop various important thermodynamical quantities, such as entropy, pressure, specific heats, Gibbs free energy and Helmholtz free energy. We investigate the first law of thermodynamics in the presence of logarithmic corrected entropy and the non-minimal RBH. We also discuss the stability of this RBH using various frameworks such as the γ factor (the ratio of heat capacities), phase transition, grand canonical ensemble and canonical ensemble. It is observed that the non-minimal RBH becomes globally and locally more stable as the value of the cosmological constant increases. (orig.)

  14. Power Minimization techniques for Networked Data Centers

    International Nuclear Information System (INIS)

    Low, Steven; Tang, Kevin

    2011-01-01

    Our objective is to develop a mathematical model to optimize energy consumption at multiple levels in networked data centers, and to develop abstract algorithms that optimize not only individual servers but also coordinate the energy consumption of clusters of servers within a data center and across geographically distributed data centers, to minimize the overall energy cost and brown-energy consumption of an enterprise. In this project, we have formulated a variety of optimization models, some stochastic and others deterministic, and have obtained a variety of qualitative results on the structural properties, robustness, and scalability of the optimal policies. We have also systematically derived from these models decentralized algorithms to optimize energy efficiency and analyzed their optimality and stability properties. Finally, we have conducted preliminary numerical simulations to illustrate the behavior of these algorithms. We draw the following conclusions. First, there is a substantial opportunity to minimize both the amount and the cost of electricity consumption in a network of data centers, by exploiting the fact that traffic load, electricity cost, and availability of renewable generation fluctuate over time and across geographical locations. Judiciously matching these stochastic processes can optimize the tradeoff between brown energy consumption, electricity cost, and response time. Second, given the stochastic nature of these three processes, real-time dynamic feedback should form the core of any optimization strategy. The key is to develop decentralized algorithms that can be implemented at different parts of the network as simple, local algorithms that coordinate through asynchronous message passing.
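The cheapest-first flavor of this idea can be sketched in a few lines. This is not the authors' actual model; the site names, prices, and capacities below are invented for illustration of routing a divisible workload to data centers with time-varying electricity prices:

```python
# Illustrative sketch only: allocate a divisible workload across data centers,
# filling the cheapest sites first. For a linear cost model with capacity
# limits, cheapest-first is cost-optimal. All data below are invented.

def route_load(total_load, sites):
    """sites: list of (name, price_per_unit, capacity). Returns allocation dict."""
    allocation = {name: 0.0 for name, _, _ in sites}
    remaining = total_load
    for name, price, capacity in sorted(sites, key=lambda s: s[1]):
        served = min(remaining, capacity)
        allocation[name] = served
        remaining -= served
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("insufficient total capacity")
    return allocation

# Hypothetical snapshot of prices ($/unit) and capacities at one time step:
sites = [("us-east", 0.12, 40.0), ("us-west", 0.09, 30.0), ("eu", 0.15, 50.0)]
alloc = route_load(60.0, sites)
cost = sum(alloc[n] * p for n, p, _ in sites)
```

Re-running the allocation as prices and load fluctuate over time is what gives the "judicious matching" of stochastic processes described above; a real deployment would also fold in response-time and renewable-availability terms.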

  15. Robustness analysis of chiller sequencing control

    International Nuclear Information System (INIS)

    Liao, Yundan; Sun, Yongjun; Huang, Gongsheng

    2015-01-01

    Highlights: • Uncertainties in chiller sequencing control were systematically quantified. • Robustness of chiller sequencing control was systematically analyzed. • Different sequencing control strategies were sensitive to different uncertainties. • A numerical method was developed for easy selection of chiller sequencing control. - Abstract: Multiple-chiller plants are commonly employed in heating, ventilating and air-conditioning systems to increase operational feasibility and energy efficiency under part-load conditions. In a multiple-chiller plant, chiller sequencing control plays a key role in achieving overall energy efficiency without sacrificing the cooling sufficiency needed for indoor thermal comfort. Various sequencing control strategies have been developed and implemented in practice. Based on the observations that (i) uncertainty, which cannot be avoided in chiller sequencing control, has a significant impact on the control performance and may cause the control to fail to achieve the expected control and/or energy performance, and (ii) few studies in the current literature have systematically addressed this issue, this paper presents a robustness analysis of chiller sequencing control in order to understand the robustness of various chiller sequencing control strategies under different types of uncertainty. Based on the robustness analysis, a simple and applicable method is developed to select the most robust control strategy for a given chiller plant in the presence of uncertainties, which is verified using case studies.
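To illustrate why uncertainty matters for sequencing, the following sketch (not from the paper; the chiller capacity, the total-cooling-load staging rule, and the uniform measurement-noise model are all assumptions) estimates how often a measurement error flips a load-based staging decision:

```python
# Illustrative sketch: a total-cooling-load sequencing strategy stages enough
# chillers to cover the measured load. Near a staging threshold, measurement
# uncertainty can flip the decision. Capacity and noise model are invented.
import random

CHILLER_CAPACITY = 500.0  # kW per chiller (hypothetical)

def chillers_needed(measured_load):
    """Stage the minimum number of chillers covering the measured load."""
    n = 0
    while n * CHILLER_CAPACITY < measured_load:
        n += 1
    return n

def staging_error_rate(true_load, rel_uncertainty, trials=10000, seed=1):
    """Fraction of noisy measurements that change the staging decision."""
    rng = random.Random(seed)
    correct = chillers_needed(true_load)
    errors = sum(
        1 for _ in range(trials)
        if chillers_needed(true_load * (1 + rng.uniform(-rel_uncertainty, rel_uncertainty))) != correct
    )
    return errors / trials
```

With a true load far from any threshold (e.g. 750 kW) a 5% sensor error never changes the decision, while a load just below a threshold (e.g. 990 kW) is frequently mis-staged; comparing such error rates across strategies is one simple way to operationalize "robustness" here.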

  16. δ-hydride habit plane determination in α-zirconium by strain energy minimization technique at 25 and 300 deg C

    International Nuclear Information System (INIS)

    Singh, R.N.; Stahle, P.; Sairam, K.; Ristmana, Matti; Banerjee, S.

    2008-01-01

    The objective of the present investigation is to predict the habit plane of δ-hydride precipitating in α-Zr at 25 and 300 deg C using the strain energy minimization technique. The δ-hydride phase is modeled to undergo isotropic elastic and plastic deformation. The α-Zr phase was modeled to undergo transversely isotropic elastic deformation; both isotropic plastic and transversely isotropic plastic deformations of α-Zr were considered. Further, both perfect and linear work-hardening plastic behaviors of zirconium and its hydride were considered. The accommodation strain energy of δ-hydrides forming in the α-Zr crystal was computed using the initial strain method as a function of hydride nucleus orientation. The hydride was modeled as a disk with a circular edge. The simulation was carried out using material properties reported at 25 and 300 deg C. Contrary to the several habit planes reported in the literature for δ-hydrides precipitating in the α-Zr crystal, the total accommodation energy minimum suggests only the basal plane, i.e. (0001), as the habit plane. (author)

  17. Iterated greedy algorithms to minimize the total family flow time for job-shop scheduling with job families and sequence-dependent set-ups

    Science.gov (United States)

    Kim, Ji-Su; Park, Jung-Hyeon; Lee, Dong-Ho

    2017-10-01

    This study addresses a variant of job-shop scheduling in which jobs are grouped into job families but are processed individually. The problem can be found in various industrial systems, especially in the reprocessing shops of remanufacturing systems. If the reprocessing shop is of a job-shop type and has component-matching requirements, it can be regarded as a job shop with job families, since the components of a product constitute a job family. In particular, sequence-dependent set-ups, in which the set-up time depends on the job just completed and the next job to be processed, are also considered. The objective is to minimize the total family flow time, where the flow time of a job family is the maximum among the completion times of the jobs within that family. A mixed-integer programming model is developed and two iterated greedy algorithms with different local search methods are proposed. Computational experiments were conducted on modified benchmark instances and the results are reported.
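The iterated-greedy skeleton (destruction followed by greedy best-insertion reconstruction) can be sketched on a simplified single-machine analogue of this problem. The job data, set-up times, and parameters below are invented, and the paper's actual job-shop setting and local search methods are not reproduced:

```python
# Sketch of an iterated greedy algorithm on a single-machine analogue:
# jobs belong to families, set-ups are sequence-dependent, and the objective
# is the total family flow time (sum over families of the family's latest
# completion time). All instance data are invented.
import random

def total_family_flow_time(seq, proc, family, setup):
    t, prev, completion = 0.0, None, {}
    for j in seq:
        t += setup.get((prev, j), 0.0) + proc[j]
        completion[j] = t
        prev = j
    family_done = {}
    for j, c in completion.items():
        f = family[j]
        family_done[f] = max(family_done.get(f, 0.0), c)
    return sum(family_done.values())

def iterated_greedy(jobs, proc, family, setup, d=2, iters=200, seed=0):
    rng = random.Random(seed)
    best = list(jobs)
    best_val = total_family_flow_time(best, proc, family, setup)
    cur = list(best)
    for _ in range(iters):
        removed = rng.sample(cur, d)                 # destruction phase
        partial = [j for j in cur if j not in removed]
        for j in removed:                            # greedy best-insertion
            partial = min(
                (partial[:k] + [j] + partial[k:] for k in range(len(partial) + 1)),
                key=lambda s: total_family_flow_time(s, proc, family, setup),
            )
        val = total_family_flow_time(partial, proc, family, setup)
        if val <= best_val:
            best, best_val = partial, val
        cur = list(best)
    return best, best_val

# Tiny invented instance: family A = {a1, a2}, family B = {b1},
# with a set-up of 1 whenever the family changes.
jobs = ["a1", "a2", "b1"]
proc = {"a1": 2.0, "a2": 3.0, "b1": 1.0}
family = {"a1": "A", "a2": "A", "b1": "B"}
setup = {("a1", "b1"): 1.0, ("b1", "a1"): 1.0, ("a2", "b1"): 1.0, ("b1", "a2"): 1.0}
seq, val = iterated_greedy(jobs, proc, family, setup)
```

On this instance the optimum schedules the single-job family first (total family flow time 8.0), which the destruction/reconstruction loop finds quickly.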

  18. Low energy implications of minimal superstring unification

    International Nuclear Information System (INIS)

    Khalil, S.; Vissani, F.; Masiero, A.

    1995-11-01

    We study the phenomenological implications of effective supergravities based on string vacua with N=1 supersymmetry spontaneously broken by dilaton and moduli F-terms. We further require Minimal String Unification, namely that large string threshold corrections ensure the correct unification of the gauge couplings at the grand unification scale. The whole supersymmetric mass spectrum turns out to be determined in terms of only two independent parameters, the dilaton-moduli mixing angle and the gravitino mass. In particular we discuss the region of the parameter space where at least one superpartner is ''visible'' at LEPII. We find that the most likely candidates are the scalar partner of the right-handed electron and the lightest chargino, with interesting correlations between their masses and with the mass of the lightest Higgs. We show how discovering SUSY particles at LEPII might rather sharply discriminate between scenarios with pure dilaton SUSY breaking and mixed dilaton-moduli breaking. (author). 10 refs, 7 figs

  19. Performance potential of mechanical ventilation systems with minimized pressure loss

    DEFF Research Database (Denmark)

    Terkildsen, Søren; Svendsen, Svend

    2013-01-01

    In many locations mechanical ventilation has been the most widely used principle of ventilation over the last 50 years, but the conventional system design must be revised to comply with future energy requirements. This paper examines the options and describes a concept for the design of mechanical ventilation systems with minimal pressure loss and minimal energy use. This can provide comfort ventilation and avoid overheating through increased ventilation and night cooling. Based on this concept, a test system was designed for a fictive office building and its performance was documented using building simulations that quantify fan power consumption, heating demand and indoor environmental conditions. The system was designed with minimal pressure loss in the duct system and heat exchanger. Also, it uses state-of-the-art components such as electrostatic precipitators, diffuse ceiling inlets and demand...

  20. Controlling the bond scission sequence of oxygenates for energy applications

    Science.gov (United States)

    Stottlemyer, Alan L.

    The so-called "Holy Grail" of heterogeneous catalysis is a fundamental understanding of catalyzed chemical transformations that spans multidimensional scales of both length and time, enabling rational catalyst design. Such an undertaking is realizable only with an atomic-level understanding of bond formation and destruction with respect to intrinsic properties of the metal catalyst. In this study, we investigate the bond scission sequence of small oxygenates (methanol, ethanol, ethylene glycol) on bimetallic transition metal catalysts and transition metal carbide catalysts. Oxygenates are of interest both as hydrogen carriers for reforming to H2 and CO and as fuels in direct alcohol fuel cells (DAFC). To address the so-called "materials gap" and "pressure gap", this work adopted three parallel research approaches: (1) ultra-high vacuum (UHV) studies including temperature programmed desorption (TPD) and high-resolution electron energy loss spectroscopy (HREELS) on polycrystalline surfaces; (2) DFT studies including thermodynamic and kinetic calculations; and (3) electrochemical studies including cyclic voltammetry (CV) and chronoamperometry (CA). Recent studies have suggested that tungsten monocarbide (WC) may behave similarly to Pt for the electrooxidation of oxygenates. TPD was used to quantify the activity and selectivity of oxygenate decomposition for WC and Pt-modified WC (Pt/WC) as compared to Pt. While decomposition activity was generally higher on WC than on Pt, scission of the C-O bond resulted in alkane/alkene formation on WC, an undesired product for DAFC. When Pt was added to WC by physical vapor deposition, C-O bond scission was limited, suggesting that Pt synergistically modifies WC to improve the selectivity toward C-H bond scission to produce H2 and CO. Additionally, TPD confirmed WC and Pt/WC to be more CO tolerant than Pt. HREELS results verified that surface intermediates were different on Pt/WC as compared to Pt or WC and evidence of aldehyde

  1. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

    Directory of Open Access Journals (Sweden)

    Knol Dirk L

    2006-08-01

    Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed, which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between these approaches. Some authors have tried to come to a uniform measure for the MIC, such as 0.5 standard deviation or the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have merely focused on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
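The distribution-based quantities mentioned above are straightforward to compute; the sketch below uses invented example numbers, and note that a MIC proper would still require an external anchor rather than these formulas:

```python
# Sketch of the distribution-based benchmarks discussed above: the standard
# error of measurement (SEM) and the 95% minimally detectable change (MDC95).
# The SD and reliability values are invented example numbers.
import math

def sem(sd, reliability):
    """SEM = SD * sqrt(1 - reliability), with reliability e.g. a test-retest ICC."""
    return sd * math.sqrt(1.0 - reliability)

def mdc95(sd, reliability):
    """Smallest individual change distinguishable from measurement error at 95%."""
    return 1.96 * math.sqrt(2.0) * sem(sd, reliability)

s = sem(10.0, 0.91)                 # SD of 10 scale points, ICC 0.91 -> SEM = 3.0
change_threshold = mdc95(10.0, 0.91)
```

An observed change smaller than `change_threshold` cannot be distinguished from measurement error, which is exactly why an anchor-based MIC below the MDC is problematic in practice.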

  2. Minimizing temperature instability of heat recovery hot water system utilizing optimized thermal energy storage

    Science.gov (United States)

    Suamir, I. N.; Sukadana, I. B. P.; Arsana, M. E.

    2018-01-01

    One energy-saving technology that is gaining attention for hotel applications in Indonesia is the utilization of waste heat from a central air-conditioning system to heat water for the domestic hot water supply system. When the technology was implemented at a hotel, it was found that the hot water generated from the heat recovery system could satisfy the domestic hot water demand of the hotel; the gas boilers installed to back up the system have never been used. The hot water supply, however, was found to be unstable, with the supply temperature fluctuating between 45 °C and 62 °C. This temperature fluctuation of up to 17 K is considered unstable and can reduce the comfort of hot water usage. This research aims to optimize the thermal energy storage in order to minimize the temperature instability of the heat recovery hot water supply system. The research is a case study based on the cooling and hot water demands of a hotel in Jakarta, Indonesia, that has applied water-cooled chillers with heat recovery. Operation of the hotel, with 329 guest rooms and 8 function rooms, showed that hot water production in the heat recovery system, even with a 5 m3 thermal energy storage (TES) tank, could not hold the hot water supply temperature constant. The day-by-day variations of the cooling and hot water demands were identified, and a significant mismatch was found between the hours of peak cooling demand, which directly drives hot water production in the heat recovery system, and the hours of hot water usage. The available TES could not store the heat rejected from the condenser of the chiller during the cooling demand peak between 14.00 and 18.00 hours; the extra heat from the heat recovery system consequently raised the hot water temperature to 62 °C, about 12 K above the 50 °C hot water temperature required by the hotel. In contrast, the TES could not deliver proper
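The charge/discharge mismatch can be illustrated with a toy hourly energy balance on the storage tank. All profiles and tank parameters below are invented and this is not the study's simulation model:

```python
# Illustrative hourly energy balance for an undersized hot-water storage tank:
# recovered condenser heat charges the tank while hot-water draw discharges it.
# When charging far exceeds draw (peak cooling hours), the tank overheats.
# All numbers below are invented.

CP_WATER = 4.186  # kJ/(kg*K), specific heat of water

def simulate_tank(volume_m3, t_start, recovered_kwh, demand_kwh):
    """Return the hourly tank temperature for hourly charge/discharge energies."""
    mass = volume_m3 * 1000.0               # kg of stored water
    kwh_per_k = mass * CP_WATER / 3600.0    # kWh needed to raise the tank 1 K
    temps, t = [], t_start
    for q_in, q_out in zip(recovered_kwh, demand_kwh):
        t += (q_in - q_out) / kwh_per_k     # well-mixed tank, no losses
        temps.append(t)
    return temps

# Hypothetical peak-cooling afternoon: heavy heat recovery, light draw.
recovered = [40.0, 60.0, 60.0, 40.0]   # kWh charged per hour
demand = [10.0, 10.0, 10.0, 10.0]      # kWh drawn per hour
temps = simulate_tank(5.0, 50.0, recovered, demand)
overshoot = max(temps) - 50.0
```

Sizing the TES (or rescheduling the draw) so that the cumulative net charge never exceeds the tank's acceptable temperature band is the balance the study's optimization is after.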

  3. Deformation energy of a toroidal nucleus and plane fragmentation barriers

    International Nuclear Information System (INIS)

    Fauchard, C.; Royer, G.

    1996-01-01

    The path leading to pumpkin-like configurations and toroidal shapes is investigated using a one-parameter shape sequence. The deformation energy is determined within the analytical expressions obtained for the various shape-dependent functions and the generalized rotating liquid drop model, taking into account the proximity energy and the temperature. With increasing mass and angular momentum, a potential well appears in the toroidal shape path. For the heaviest systems, the pocket is large and locally favourable with respect to the plane fragmentation barriers, which might allow the formation of evanescent toroidal systems that would rapidly decay into several fragments to minimize the surface tension. (orig.)

  4. Minimal massive 3D gravity

    International Nuclear Information System (INIS)

    Bergshoeff, Eric; Merbis, Wout; Hohm, Olaf; Routh, Alasdair J; Townsend, Paul K

    2014-01-01

    We present an alternative to topologically massive gravity (TMG) with the same ‘minimal’ bulk properties; i.e. a single local degree of freedom that is realized as a massive graviton in linearization about an anti-de Sitter (AdS) vacuum. However, in contrast to TMG, the new ‘minimal massive gravity’ has both a positive energy graviton and positive central charges for the asymptotic AdS-boundary conformal algebra. (paper)

  5. From Never Born Proteins to Minimal Living Cells: two projects in synthetic biology.

    Science.gov (United States)

    Luisi, Pier Luigi; Chiarabelli, Cristiano; Stano, Pasquale

    2006-12-01

    The Never Born Proteins (NBPs) and the Minimal Cell projects are two currently developed research lines belonging to the field of synthetic biology. The first deals with the investigation of structural and functional properties of de novo proteins with random sequences, selected and isolated using phage display methods. The minimal cell is the simplest cellular construct that displays living properties, such as self-maintenance, self-reproduction and evolvability. The semi-synthetic approach to minimal cells involves the use of extant genes and proteins in order to build a supramolecular construct based on lipid vesicles. Results and outlooks of these two research lines are briefly discussed, mainly focusing on their relevance to origin of life studies.

  6. Minim typing--a rapid and low cost MLST based typing tool for Klebsiella pneumoniae.

    Science.gov (United States)

    Andersson, Patiyan; Tong, Steven Y C; Bell, Jan M; Turnidge, John D; Giffard, Philip M

    2012-01-01

    Here we report a single nucleotide polymorphism (SNP) based genotyping method for Klebsiella pneumoniae utilising high-resolution melting (HRM) analysis of fragments within the multilocus sequence typing (MLST) loci. The approach is termed mini-MLST or Minim typing and it has previously been applied to Streptococcus pyogenes, Staphylococcus aureus and Enterococcus faecium. Six SNPs were derived from concatenated MLST sequences on the basis of maximisation of Simpson's Index of Diversity (D). DNA fragments incorporating these SNPs and predicted to be suitable for HRM analysis were designed. Using the assumption that HRM alleles are defined by G+C content, Minim typing using six fragments was predicted to provide a D = 0.979 against known STs. The method was tested against 202 K. pneumoniae using a blinded approach in which the MLST analyses were performed after the HRM analyses. The HRM-based alleles were indeed in accordance with G+C content, and the Minim typing identified known STs and flagged new STs. The tonB MLST locus was determined to be very diverse, and the two Minim fragments located herein contribute greatly to the resolving power. However, these fragments are refractory to amplification in a minority of isolates. Therefore, we assessed the performance of two additional formats: one using only the four fragments located outside the tonB gene (D = 0.929), and the other using HRM data from these four fragments in conjunction with sequencing of the tonB MLST fragment (D = 0.995). The HRM assays were developed on the Rotorgene 6000, and the method was shown to also be robust on the LightCycler 480, allowing a 384-well high-throughput format. The assay provides rapid, robust and low-cost typing with fully portable results that can directly be related to current MLST data. Minim typing in combination with molecular screening for antibiotic resistance markers can be a powerful surveillance tool kit.
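Simpson's Index of Diversity, the quantity maximized when selecting the SNP set above, can be computed directly from type counts; the isolate counts in this sketch are invented:

```python
# Simpson's Index of Diversity: D = 1 - sum n_i(n_i - 1) / (N(N - 1)),
# i.e. the probability that two isolates drawn without replacement belong
# to different types. The example counts are invented.
from collections import Counter

def simpsons_d(type_assignments):
    counts = Counter(type_assignments).values()
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# 10 hypothetical isolates split across 4 sequence types:
d = simpsons_d(["ST1"] * 4 + ["ST2"] * 3 + ["ST3"] * 2 + ["ST4"])
```

Choosing the SNP subset whose induced partition of known STs maximizes D is what yields the quoted values such as D = 0.979 for six fragments.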

  7. Minim typing--a rapid and low cost MLST based typing tool for Klebsiella pneumoniae.

    Directory of Open Access Journals (Sweden)

    Patiyan Andersson

    Here we report a single nucleotide polymorphism (SNP) based genotyping method for Klebsiella pneumoniae utilising high-resolution melting (HRM) analysis of fragments within the multilocus sequence typing (MLST) loci. The approach is termed mini-MLST or Minim typing and it has previously been applied to Streptococcus pyogenes, Staphylococcus aureus and Enterococcus faecium. Six SNPs were derived from concatenated MLST sequences on the basis of maximisation of Simpson's Index of Diversity (D). DNA fragments incorporating these SNPs and predicted to be suitable for HRM analysis were designed. Using the assumption that HRM alleles are defined by G+C content, Minim typing using six fragments was predicted to provide a D = 0.979 against known STs. The method was tested against 202 K. pneumoniae using a blinded approach in which the MLST analyses were performed after the HRM analyses. The HRM-based alleles were indeed in accordance with G+C content, and the Minim typing identified known STs and flagged new STs. The tonB MLST locus was determined to be very diverse, and the two Minim fragments located herein contribute greatly to the resolving power. However, these fragments are refractory to amplification in a minority of isolates. Therefore, we assessed the performance of two additional formats: one using only the four fragments located outside the tonB gene (D = 0.929), and the other using HRM data from these four fragments in conjunction with sequencing of the tonB MLST fragment (D = 0.995). The HRM assays were developed on the Rotorgene 6000, and the method was shown to also be robust on the LightCycler 480, allowing a 384-well high-throughput format. The assay provides rapid, robust and low-cost typing with fully portable results that can directly be related to current MLST data. Minim typing in combination with molecular screening for antibiotic resistance markers can be a powerful surveillance tool kit.

  8. Minimal surfaces

    CERN Document Server

    Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

    2010-01-01

    Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently

  9. Minimizing fuel wood consumption through the evolution of hot ston ...

    African Journals Online (AJOL)

    The central objective of this paper is to minimize fuelwood consumption through evolving alternative domestic energy sources. Data on alternative domestic energy sources and on fuelwood consumption during scarcity of petroleum were collected using structured questionnaires. Data on time spent to cook yam, rice and beans ...

  10. Observation of the energy transfer sequence in an organic host–guest system of a luminescent polymer and a phosphorescent molecule

    International Nuclear Information System (INIS)

    Basel, Tek; Sun, Dali; Gautam, Bhoj; Valy Vardeny, Z.

    2014-01-01

    We used steady state optical spectroscopies such as photoluminescence and photoinduced absorption (PA), and magnetic-field PA (MPA), for studying the energy transfer dynamics in films and organic light emitting diodes (OLED) based on host–guest blends with different guest concentrations of the fluorescent polymer poly-[2-methoxy, 5-(2′-ethyl-hexyloxy)phenylene vinylene] (MEHPPV; host) and the phosphorescent molecule PtII-tetraphenyltetrabenzoporphyrin [Pt(tpbp); guest]. We show that the energy transfer process between the excited states of the host polymer and guest molecule takes a 'ping-pong' type sequence, because the lowest guest triplet exciton energy, E_T(guest), lies higher than that of the host, E_T(host). Upon photon excitation the photogenerated singlet excitons in the host polymer chains first undergo a Förster resonant energy transfer process to the guest singlet manifold, which subsequently reaches E_T(guest) by intersystem crossing. Because E_T(guest) > E_T(host), there is a subsequent Dexter-type energy transfer from E_T(guest) to E_T(host). This energy transfer sequence has a profound influence on the photoluminescence and electroluminescence emission spectra in both films and OLED devices based on the MEHPPV-Pt(tpbp) system. - Highlights: • We studied electroluminescence of OLEDs based on host–guest blends. • The emission efficiency decreases with the guest concentration. • We found a dominant Dexter energy transfer from the triplet(guest) to triplet(host). • Energy transfer occurs from the host to the guest and back to the host again

  11. National Institutes of Health: Mixed waste minimization and treatment

    International Nuclear Information System (INIS)

    1995-08-01

    The Appalachian States Low-Level Radioactive Waste Commission requested the US Department of Energy's National Low-Level Waste Management Program (NLLWMP) to assist the biomedical community in becoming more knowledgeable about its mixed waste streams, to help minimize the mixed waste stream generated by the biomedical community, and to identify applicable treatment technologies for these mixed waste streams. As the first step in the waste minimization process, liquid low-level radioactive mixed waste (LLMW) streams generated at the National Institutes of Health (NIH) were characterized and combined into similar process categories. This report identifies possible waste minimization and treatment approaches for the LLMW generated by the biomedical community identified in DOE/LLW-208. In developing the report, on-site meetings were conducted with NIH personnel responsible for generating each category of waste identified as lacking disposal options. Based on the meetings and general waste minimization guidelines, potential waste minimization options were identified.

  12. Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs

    Directory of Open Access Journals (Sweden)

    Long Wan

    2015-01-01

    This paper investigates a single-machine two-agent scheduling problem to minimize the maximum costs with position-dependent jobs. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our scheduling setting, the actual position-dependent processing time of a job is characterized by a variable function dependent on the position of the job in the sequence. Each agent wants to fulfil the objective of minimizing the maximum cost of its own jobs. We develop a feasible method to achieve all the Pareto optimal points in polynomial time.

  13. Waste minimization activity report for 1991

    International Nuclear Information System (INIS)

    Shoemaker, J.D.

    1992-01-01

    This is a waste reduction report for the Lawrence Livermore National Laboratory (LLNL) for 1991. The report covers the Main Site at Livermore and Site 300. Each research program at LLNL is described by its operation, administrative procedures, and waste minimization. Examples of the programs at LLNL are biomedical and environmental research, chemistry and materials science, and energy program and earth sciences. (MB)

  14. Minimal Residual Disease Detection and Evolved IGH Clones Analysis in Acute B Lymphoblastic Leukemia Using IGH Deep Sequencing.

    Science.gov (United States)

    Wu, Jinghua; Jia, Shan; Wang, Changxi; Zhang, Wei; Liu, Sixi; Zeng, Xiaojing; Mai, Huirong; Yuan, Xiuli; Du, Yuanping; Wang, Xiaodong; Hong, Xueyu; Li, Xuemei; Wen, Feiqiu; Xu, Xun; Pan, Jianhua; Li, Changgang; Liu, Xiao

    2016-01-01

    Acute B lymphoblastic leukemia (B-ALL) is one of the most common types of childhood cancer worldwide and chemotherapy is the main treatment approach. Despite good response rates to chemotherapy regimens, many patients eventually relapse, and minimal residual disease (MRD) is the leading risk factor for relapse. The evolution of leukemic clones during disease development and treatment may have clinical significance. In this study, we performed immunoglobulin heavy chain (IGH) repertoire high-throughput sequencing (HTS) on the diagnostic and post-treatment samples of 51 pediatric B-ALL patients. We identified leukemic IGH clones in 92.2% of the diagnostic samples, and nearly half of the patients were polyclonal. About one-third of the leukemic clones have a correct open reading frame in the complementarity determining region 3 (CDR3) of IGH, which demonstrates that the leukemic B cells were in an early developmental stage. We also demonstrated the higher sensitivity of HTS in MRD detection and investigated the clinical value of using peripheral blood in MRD detection and of monitoring the clonal IGH evolution. In addition, we found that leukemic clones were extensively undergoing continuous clonal IGH evolution by variable gene replacement. Dynamic frequency changes and newly emerged evolved IGH clones were identified under the pressure of chemotherapy. In summary, we confirmed the high sensitivity and universal applicability of HTS in MRD detection. We also reported the ubiquitous evolved IGH clones in B-ALL samples and their response to chemotherapy during treatment.
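A minimal sketch of the MRD read-out implied above: take the dominant CDR3 clonotype at diagnosis as the leukemic clone and report its read fraction in a follow-up repertoire sample. The sequences and read counts below are invented, and a real pipeline would also track evolved descendants of the clone:

```python
# Sketch of sequencing-based MRD quantification: the dominant CDR3 at
# diagnosis is taken as the leukemic clonotype, and MRD is its read fraction
# in a follow-up sample. All sequences and counts are invented.
from collections import Counter

def leukemic_clone(diagnosis_reads):
    """Most frequent CDR3 at diagnosis is taken as the leukemic clonotype."""
    return Counter(diagnosis_reads).most_common(1)[0][0]

def mrd_fraction(clone, followup_reads):
    """Fraction of follow-up reads matching the leukemic clonotype."""
    counts = Counter(followup_reads)
    return counts.get(clone, 0) / sum(counts.values())

# Invented example: one clone dominates at diagnosis, then nearly clears.
diagnosis = ["CARDYW"] * 900 + ["CASSLG"] * 60 + ["CAWSVG"] * 40
followup = ["CASSLG"] * 4995 + ["CARDYW"] * 5
clone = leukemic_clone(diagnosis)
mrd = mrd_fraction(clone, followup)
```

The depth of the follow-up sample bounds the detectable MRD level, which is why deep sequencing reaches sensitivities that flow cytometry cannot.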

  15. Minimizing liability risks under the ACMG recommendations for reporting incidental findings in clinical exome and genome sequencing

    Science.gov (United States)

    Evans, Barbara J.

    2014-01-01

    Recent recommendations by the American College of Medical Genetics and Genomics (ACMG) for reporting incidental findings present novel ethical and legal issues. This article expresses no views on the ethical aspects of these recommendations and focuses strictly on liability risks and how to minimize them. The recommendations place labs and clinicians in a new liability environment that exposes them to intentional tort lawsuits as well as to traditional suits for negligence. Intentional tort suits are especially troubling because of their potential to inflict ruinous personal financial losses on individual clinicians and laboratory personnel. This article surveys this new liability landscape and describes analytical approaches for minimizing tort liabilities. To a considerable degree, liability risks can be controlled by structuring activities in ways that make future lawsuits nonviable before the suits ever arise. Proactive liability analysis is an effective tool for minimizing tort liabilities in connection with the testing and reporting activities that the ACMG recommends. PMID:24030435

  16. Minimizing liability risks under the ACMG recommendations for reporting incidental findings in clinical exome and genome sequencing.

    Science.gov (United States)

    Evans, Barbara J

    2013-12-01

    Recent recommendations by the American College of Medical Genetics and Genomics (ACMG) for reporting incidental findings present novel ethical and legal issues. This article expresses no views on the ethical aspects of these recommendations and focuses strictly on liability risks and how to minimize them. The recommendations place labs and clinicians in a new liability environment that exposes them to intentional tort lawsuits as well as to traditional suits for negligence. Intentional tort suits are especially troubling because of their potential to inflict ruinous personal financial losses on individual clinicians and laboratory personnel. This article surveys this new liability landscape and describes analytical approaches for minimizing tort liabilities. To a considerable degree, liability risks can be controlled by structuring activities in ways that make future lawsuits nonviable before the suits ever arise. Proactive liability analysis is an effective tool for minimizing tort liabilities in connection with the testing and reporting activities that the ACMG recommends.

  17. Advanced pyrochemical technologies for minimizing nuclear waste

    International Nuclear Information System (INIS)

    Bronson, M.C.; Dodson, K.E.; Riley, D.C.

    1994-01-01

    The Department of Energy (DOE) is seeking to reduce the size of the current nuclear weapons complex and consequently minimize operating costs. To meet this DOE objective, the national laboratories have been asked to develop advanced technologies that take uranium and plutonium from retired weapons and prepare them for new weapons, long-term storage, and/or final disposition. Current pyrochemical processes generate residue salts and ceramic wastes that require aqueous processing to remove and recover the actinides. However, the aqueous treatment of these residues generates an estimated 100 liters of acidic transuranic (TRU) waste per kilogram of plutonium in the residue. Lawrence Livermore National Laboratory (LLNL) is developing pyrochemical techniques to eliminate, minimize, or more efficiently treat these residue streams. This paper will present technologies being developed at LLNL on advanced materials for actinide containment, reactors that minimize residues, and pyrochemical processes that remove actinides from waste salts.

  18. Minimizing energy consumption of accelerators and storage ring facilities

    International Nuclear Information System (INIS)

    The discussion of energy usage falls naturally into three parts. The first is a review of what the problem is, the second is a description of steps that can be taken to conserve energy at existing facilities, and the third is a review of the implications of energy consumption on future facilities

  19. Environmental Restoration Program waste minimization and pollution prevention self-assessment

    International Nuclear Information System (INIS)

    1994-10-01

    The Environmental Restoration (ER) Program within Martin Marietta Energy Systems, Inc. is currently developing a more active waste minimization and pollution prevention program. To determine areas of programmatic improvements within the ER Waste Minimization and Pollution Prevention Awareness Program, the ER Program required an evaluation of the program across the Oak Ridge K-25 Site, the Oak Ridge National Laboratory, the Oak Ridge Y-12 Plant, the Paducah Environmental Restoration and Waste Minimization Site, and the Portsmouth Environmental Restoration and Waste Minimization Site. This document presents the status of the overall program as of fourth quarter FY 1994, presents pollution prevention cost avoidance data associated with FY 1994 activities, and identifies areas for improvement. Results of this assessment indicate that the ER Waste Minimization and Pollution Prevention Awareness Program is firmly established and is developing rapidly. Several procedural goals were met in FY 1994 and many of the sites implemented ER waste minimization options. Additional growth is needed, however, for the ER Waste Minimization and Pollution Prevention Awareness Program

  20. Computer simulation of replacement sequences in copper

    International Nuclear Information System (INIS)

    Schiffgens, J.O.; Schwartz, D.W.; Ariyasu, R.G.; Cascadden, S.E.

    1978-01-01

    Results of computer simulations of , , and replacement sequences in copper are presented, including displacement thresholds, focusing energies, energy losses per replacement, and replacement sequence lengths. These parameters are tabulated for six interatomic potentials and shown to vary in a systematic way with potential stiffness and range. Comparisons of results from calculations made with ADDES, a quasi-dynamical code, and COMENT, a dynamical code, show excellent agreement, demonstrating that the former can be calibrated and used satisfactorily in the analysis of low energy displacement cascades. Upper limits on , , and replacement sequences were found to be approximately 10, approximately 30, and approximately 14 replacements, respectively. (author)

  1. Attentional load and implicit sequence learning.

    Science.gov (United States)

    Shanks, David R; Rowland, Lee A; Ranger, Mandeep S

    2005-06-01

    A widely employed conceptualization of implicit learning hypothesizes that it makes minimal demands on attentional resources. This conjecture was investigated by comparing learning under single-task and dual-task conditions in the sequential reaction time (SRT) task. Participants learned probabilistic sequences, with dual-task participants additionally having to perform a counting task using stimuli that were targets in the SRT display. Both groups were then tested for sequence knowledge under single-task (Experiments 1 and 2) or dual-task (Experiment 3) conditions. Participants also completed a free generation task (Experiments 2 and 3) under inclusion or exclusion conditions to determine if sequence knowledge was conscious or unconscious in terms of its access to intentional control. The experiments revealed that the secondary task impaired sequence learning and that sequence knowledge was consciously accessible. These findings disconfirm both the notion that implicit learning proceeds normally under conditions of divided attention and the notion that the acquired knowledge is inaccessible to consciousness. A unitary framework for conceptualizing implicit and explicit learning is proposed.

  2. Disk-based compression of data from genome sequencing.

    Science.gov (United States)

    Grabowski, Szymon; Deorowicz, Sebastian; Roguski, Łukasz

    2015-05-01

    High-coverage sequencing data have significant, yet hard to exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA stream of large datasets, since the redundancy between overlapping reads cannot be easily captured in the (relatively small) main memory. More interesting solutions for this problem are disk based; the better of these, from Cox et al. (2012), is based on the Burrows-Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0 Gbp human genome sequencing collection with almost 45-fold coverage. We propose overlapping reads compression with minimizers, a compression algorithm dedicated to sequencing reads (DNA only). Our method makes use of the conceptually simple and easily parallelizable idea of minimizers to obtain a compression ratio of 0.317 bits per base, allowing the 134.0 Gbp dataset to fit into only 5.31 GB of space. http://sun.aei.polsl.pl/orcom under a free license. sebastian.deorowicz@polsl.pl Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
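    The minimizer idea at the heart of this approach is easy to sketch. The toy Python below (an illustration of the concept, not the ORCOM implementation; the function names and the simple bin-then-compress framing are assumptions) groups reads by their lexicographically smallest k-mer, so overlapping reads tend to land in the same bin, where a local compressor can exploit their redundancy:

```python
def kmers(seq, k):
    """All k-mers of seq, left to right."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def minimizer(seq, k):
    """Lexicographically smallest k-mer of the read (a simplified
    stand-in for the canonical minimizers used to bin reads)."""
    return min(kmers(seq, k))

def bin_reads(reads, k=8):
    """Group reads by minimizer: overlapping reads usually share
    their smallest k-mer and end up in the same bin, exposing the
    inter-read redundancy to a local compressor."""
    bins = {}
    for read in reads:
        bins.setdefault(minimizer(read, k), []).append(read)
    return bins

# Two overlapping reads share minimizer "ACGT"; the third does not.
bins = bin_reads(["ACGTACGTGGTT", "CGTACGTGGTTA", "TTTTGGGGCCCC"], k=4)
```

    Each bin can then be reordered and compressed locally, which is what lets the method work within modest memory on disk-resident data.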

  3. Inflation in non-minimal matter-curvature coupling theories

    Energy Technology Data Exchange (ETDEWEB)

    Gomes, C.; Bertolami, O. [Departamento de Física e Astronomia and Centro de Física do Porto, Faculdade de Ciências da Universidade do Porto, Rua do Campo Alegre s/n, 4169-007 Porto (Portugal); Rosa, J.G., E-mail: claudio.gomes@fc.up.pt, E-mail: joao.rosa@ua.pt, E-mail: orfeu.bertolami@fc.up.pt [Departamento de Física da Universidade de Aveiro and CIDMA, Campus de Santiago, 3810-183 Aveiro (Portugal)

    2017-06-01

    We study inflationary scenarios driven by a scalar field in the presence of a non-minimal coupling between matter and curvature. We show that the Friedmann equation can be significantly modified when the energy density during inflation exceeds a critical value determined by the non-minimal coupling, which in turn may considerably modify the spectrum of primordial perturbations and the inflationary dynamics. In particular, we show that these models are characterised by a consistency relation between the tensor-to-scalar ratio and the tensor spectral index that can differ significantly from the predictions of general relativity. We also give examples of observational predictions for some of the most commonly considered potentials and use the results of the Planck collaboration to set limits on the scale of the non-minimal coupling.
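    For context, the baseline that these models deviate from is the standard single-field slow-roll consistency relation of general relativity, which ties the tensor-to-scalar ratio r to the tensor spectral index n_t:

```latex
r = -8\, n_t
```

    The abstract's claim is that the non-minimal matter-curvature coupling modifies this relation, so a measured violation of it could in principle signal such a coupling.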

  4. Minim Typing – A Rapid and Low Cost MLST Based Typing Tool for Klebsiella pneumoniae

    Science.gov (United States)

    Andersson, Patiyan; Tong, Steven Y. C.; Bell, Jan M.; Turnidge, John D.; Giffard, Philip M.

    2012-01-01

    Here we report a single nucleotide polymorphism (SNP) based genotyping method for Klebsiella pneumoniae utilising high-resolution melting (HRM) analysis of fragments within the multilocus sequence typing (MLST) loci. The approach is termed mini-MLST or Minim typing and it has previously been applied to Streptococcus pyogenes, Staphylococcus aureus and Enterococcus faecium. Six SNPs were derived from concatenated MLST sequences on the basis of maximisation of Simpson's Index of Diversity (D). DNA fragments incorporating these SNPs and predicted to be suitable for HRM analysis were designed. Using the assumption that HRM alleles are defined by G+C content, Minim typing using six fragments was predicted to provide a D = 0.979 against known STs. The method was tested against 202 K. pneumoniae isolates using a blinded approach in which the MLST analyses were performed after the HRM analyses. The HRM-based alleles were indeed in accordance with G+C content, and the Minim typing identified known STs and flagged new STs. The tonB MLST locus was determined to be very diverse, and the two Minim fragments located herein contribute greatly to the resolving power. However, these fragments are refractory to amplification in a minority of isolates. Therefore, we assessed the performance of two additional formats: one using only the four fragments located outside the tonB gene (D = 0.929), and the other using HRM data from these four fragments in conjunction with sequencing of the tonB MLST fragment (D = 0.995). The HRM assays were developed on the Rotorgene 6000, and the method was shown to also be robust on the LightCycler 480, allowing a 384-well high-throughput format. The assay provides rapid, robust and low-cost typing with fully portable results that can directly be related to current MLST data. Minim typing in combination with molecular screening for antibiotic resistance markers can be a powerful surveillance tool kit. PMID:22428067
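    The discriminatory powers quoted above (D = 0.979, 0.929, 0.995) refer to Simpson's Index of Diversity. A minimal computation in its standard Hunter-Gaston form (an illustration, not code from the study) is:

```python
def simpsons_d(type_counts):
    """Simpson's Index of Diversity: the probability that two
    isolates sampled at random without replacement belong to
    different types.  D = 1 - sum n_i(n_i - 1) / (N(N - 1))."""
    n = sum(type_counts)
    return 1.0 - sum(c * (c - 1) for c in type_counts) / (n * (n - 1))

# Ten isolates split over four types: moderate discrimination.
print(round(simpsons_d([4, 3, 2, 1]), 3))   # 0.778
```

    A scheme that assigns every isolate its own type yields D = 1; a scheme that lumps all isolates together yields D = 0.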

  5. SeqLib: a C ++ API for rapid BAM manipulation, sequence alignment and sequence assembly.

    Science.gov (United States)

    Wala, Jeremiah; Beroukhim, Rameen

    2017-03-01

    We present SeqLib, a C++ API and command line tool that provides a rapid and user-friendly interface to BAM/SAM/CRAM files, global sequence alignment operations and sequence assembly. Four C libraries perform core operations in SeqLib: HTSlib for BAM access, BWA-MEM and BLAT for sequence alignment and Fermi for error correction and sequence assembly. Benchmarking indicates that SeqLib has lower CPU and memory requirements than leading C++ sequence analysis APIs. We demonstrate an example of how minimal SeqLib code can extract, error-correct and assemble reads from a CRAM file and then align with BWA-MEM. SeqLib also provides additional capabilities, including chromosome-aware interval queries and read plotting. Command line tools are available for performing integrated error correction, micro-assemblies and alignment. SeqLib is available on Linux and OSX for the C++98 standard and later at github.com/walaj/SeqLib. SeqLib is released under the Apache2 license. Additional capabilities for BLAT alignment are available under the BLAT license. jwala@broadinstitute.org; rameen@broadinstitute.org. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  6. Observation of the energy transfer sequence in an organic host–guest system of a luminescent polymer and a phosphorescent molecule

    Energy Technology Data Exchange (ETDEWEB)

    Basel, Tek; Sun, Dali; Gautam, Bhoj; Valy Vardeny, Z., E-mail: val@physics.utah.edu

    2014-11-15

    We used steady state optical spectroscopies such as photoluminescence and photoinduced absorption (PA), and magnetic-field PA (MPA) for studying the energy transfer dynamics in films and organic light emitting diodes (OLED) based on host–guest blends with different guest concentrations of the fluorescent polymer poly-[2-methoxy, 5-(2′-ethyl-hexyloxy)phenylene vinylene] (MEHPPV-host), and phosphorescent molecule PtII-tetraphenyltetrabenzoporphyrin [Pt(tpbp); guest]. We show that the energy transfer process between the excited states of the host polymer and guest molecule takes a ‘ping-pong’ type sequence, because the lowest guest triplet exciton energy, E_T(guest), lies higher than that of the host, E_T(host). Upon photon excitation the photogenerated singlet excitons in the host polymer chains first undergo a Förster resonant energy transfer process to the guest singlet manifold, which subsequently reaches E_T(guest) by intersystem crossing. Because E_T(guest) > E_T(host) there is a subsequent Dexter type energy transfer from E_T(guest) to E_T(host). This energy transfer sequence has profound influence on the photoluminescence and electroluminescence emission spectra in both films and OLED devices based on the MEHPPV-Pt(tpbp) system. - Highlights: • We studied electroluminescence of OLEDs based on host–guest blends. • The emission efficiency decreases with the guest concentration. • We found a dominant Dexter energy transfer from the triplet(guest) to triplet(host). • Energy transfer occurs from the host to guest and back to the host again.

  7. Waste minimization at Chalk River Laboratories

    Energy Technology Data Exchange (ETDEWEB)

    Kranz, P.; Wong, P.C.F. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2011-07-01

    Waste minimization supports Atomic Energy of Canada Limited (AECL) Environment Policy with regard to pollution prevention and has positive impacts on the environment, human health and safety, and economy. In accordance with the principle of pollution prevention, the quantities and degree of hazard of wastes requiring storage or disposition at facilities within or external to AECL sites shall be minimized, following the principles of Prevent, Reduce, Reuse, and Recycle, to the extent practical. Waste minimization is an important element in the Waste Management Program. The Waste Management Program has implemented various initiatives for waste minimization since 2007. The key initiatives have focused on waste reduction, segregation and recycling, and included: 1) developed waste minimization requirements and recycling procedure to establish the framework for applying the Waste Minimization Hierarchy; 2) performed waste minimization assessments for the facilities, which generate significant amounts of waste, to identify the opportunities for waste reduction and assist the waste generators to develop waste reduction targets and action plans to achieve the targets; 3) implemented the colour-coded, standardized waste and recycling containers to enhance waste segregation; 4) established partnership with external agents for recycling; 5) extended the likely clean waste and recyclables collection to selected active areas; 6) provided on-going communications to promote waste reduction and increase awareness for recycling; and 7) continually monitored performance, with respect to waste minimization, to identify opportunities for improvement and to communicate these improvements. After implementation of waste minimization initiatives at CRL, the solid waste volume generated from routine operations at CRL has significantly decreased, while the amount of recyclables diverted from the onsite landfill has significantly increased since 2007. 
The overall refuse volume generated at

  8. Waste minimization at Chalk River Laboratories

    International Nuclear Information System (INIS)

    Kranz, P.; Wong, P.C.F.

    2011-01-01

    Waste minimization supports Atomic Energy of Canada Limited (AECL) Environment Policy with regard to pollution prevention and has positive impacts on the environment, human health and safety, and economy. In accordance with the principle of pollution prevention, the quantities and degree of hazard of wastes requiring storage or disposition at facilities within or external to AECL sites shall be minimized, following the principles of Prevent, Reduce, Reuse, and Recycle, to the extent practical. Waste minimization is an important element in the Waste Management Program. The Waste Management Program has implemented various initiatives for waste minimization since 2007. The key initiatives have focused on waste reduction, segregation and recycling, and included: 1) developed waste minimization requirements and recycling procedure to establish the framework for applying the Waste Minimization Hierarchy; 2) performed waste minimization assessments for the facilities, which generate significant amounts of waste, to identify the opportunities for waste reduction and assist the waste generators to develop waste reduction targets and action plans to achieve the targets; 3) implemented the colour-coded, standardized waste and recycling containers to enhance waste segregation; 4) established partnership with external agents for recycling; 5) extended the likely clean waste and recyclables collection to selected active areas; 6) provided on-going communications to promote waste reduction and increase awareness for recycling; and 7) continually monitored performance, with respect to waste minimization, to identify opportunities for improvement and to communicate these improvements. After implementation of waste minimization initiatives at CRL, the solid waste volume generated from routine operations at CRL has significantly decreased, while the amount of recyclables diverted from the onsite landfill has significantly increased since 2007. 
The overall refuse volume generated at

  9. Late-time acceleration and phantom divide line crossing with non-minimal coupling and Lorentz-invariance violation

    International Nuclear Information System (INIS)

    Nozari, Kourosh; Sadatian, S.D.

    2008-01-01

    We consider two alternative dark-energy models: a Lorentz-invariance preserving model with a non-minimally coupled scalar field and a Lorentz-invariance violating model with a minimally coupled scalar field. We study accelerated expansion and the dynamics of the equation of state parameter in these scenarios. While a minimally coupled scalar field does not have the capability to be a successful dark-energy candidate whose equation of state crosses the cosmological-constant line (the phantom divide), a non-minimally coupled scalar field in the presence of Lorentz invariance or a minimally coupled scalar field with Lorentz-invariance violation has this capability. In the latter case, accelerated expansion and phantom divide line crossing are the results of the interactive nature of this Lorentz-violating scenario. (orig.)

  10. Mean-field approximation minimizes relative entropy

    International Nuclear Information System (INIS)

    Bilbro, G.L.; Snyder, W.E.; Mann, R.C.

    1991-01-01

    The authors derive the mean-field approximation from the information-theoretic principle of minimum relative entropy instead of by minimizing Peierls's inequality for the Weiss free energy of statistical physics theory. They show that information theory leads to the statistical mechanics procedure. As an example, they consider a problem in binary image restoration. They find that mean-field annealing compares favorably with the stochastic approach
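    The mean-field update behind such restoration schemes replaces stochastic sampling with a deterministic fixed-point iteration in which each pixel's mean value relaxes to the hyperbolic tangent of its local field. The 1-D toy below (the chain geometry, couplings J and h, and temperature T are assumptions chosen for illustration, not the authors' setup) shows one flipped pixel being smoothed back:

```python
import math

def mean_field_denoise(obs, J=1.0, h=1.0, T=0.5, iters=50):
    """Deterministic mean-field iteration for a 1-D chain of binary
    pixels: Ising smoothness coupling J between neighbours and data
    coupling h to the observed pixels.  Each mean m_i relaxes to
    tanh(local field / T)."""
    m = [0.0] * len(obs)
    for _ in range(iters):
        for i in range(len(obs)):
            field = h * obs[i]
            if i > 0:
                field += J * m[i - 1]
            if i < len(obs) - 1:
                field += J * m[i + 1]
            m[i] = math.tanh(field / T)
    return m

# A single flipped pixel inside a run of +1s is restored to +1.
noisy = [1, 1, 1, -1, 1, 1, 1]
restored = [1 if v > 0 else -1 for v in mean_field_denoise(noisy)]
```

    In mean-field annealing, T would additionally be lowered over the iterations; the fixed-T version above already shows the smoothing behaviour.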

  11. Cosmological perturbations of non-minimally coupled quintessence in the metric and Palatini formalisms

    International Nuclear Information System (INIS)

    Fan, Yize; Wu, Puxun; Yu, Hongwei

    2015-01-01

    Cosmological perturbations of the non-minimally coupled scalar field dark energy in both the metric and Palatini formalisms are studied in this paper. We find that on the large scales with the energy density of dark energy becoming more and more important in the low redshift region, the gravitational potential becomes smaller and smaller, and the effect of non-minimal coupling becomes more and more apparent. In the metric formalism the value of the gravitational potential in the non-minimally coupled case with a positive coupling constant is less than that in the minimally coupled case, while it is larger if the coupling constant is negative. This is different from that in the Palatini formalism where the value of gravitational potential is always smaller. Based upon the quasi-static approximation on the sub-horizon scales, the linear growth of matter is also analyzed. We obtain that the effective Newton's constants in the metric and Palatini formalisms have different forms. A negative coupling constant enhances the gravitational interaction, while a positive one weakens it. Although the metric and Palatini formalisms give different linear growth rates, the difference is very small and the current observation cannot distinguish them effectively

  12. Image denoising by a direct variational minimization

    Directory of Open Access Journals (Sweden)

    Pilipović Stevan

    2011-01-01

    Full Text Available Abstract In this article we introduce a novel method for image de-noising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses a fractional gradient. The minimization is performed on every predefined patch of the image, independently. By doing so, we avoid the use of an artificial time PDE model with its inherent problems of finding the optimal stopping time, as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using the information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still obtain minimal degradation, we combine a Ritz variational method for the actual minimization on a patch, and a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, by comparing the proposed method with a couple of PDE-based methods, where we get significantly better denoising results, especially on oscillatory regions.
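    The structure of such a direct per-patch minimization can be illustrated with plain gradient descent. The sketch below substitutes an ordinary quadratic gradient penalty for the article's fractional-gradient minimal-surface regularizer (the functional, step size, and patch values are assumptions), keeping only the two ingredients the abstract describes: a data-fidelity term and a patch-specific Lagrange multiplier lam:

```python
def denoise_patch(f, lam=2.0, step=0.1, iters=200):
    """Gradient descent on E(u) = sum_i (u_i - f_i)^2
    + lam * sum_i (u_{i+1} - u_i)^2 for one 1-D patch.  A quadratic
    stand-in for the fractional-gradient minimal-surface energy,
    illustrating direct per-patch energy minimization."""
    u = list(f)
    for _ in range(iters):
        g = [2.0 * (u[i] - f[i]) for i in range(len(u))]  # data term
        for i in range(len(u) - 1):                       # smoothness term
            d = 2.0 * lam * (u[i + 1] - u[i])
            g[i] -= d
            g[i + 1] += d
        u = [u[i] - step * g[i] for i in range(len(u))]
    return u

# An isolated spike is flattened; a larger lam smooths the patch more.
noisy = [0.0, 0.1, -0.1, 1.2, 0.0, 0.05]
smooth = denoise_patch(noisy)
```

    Choosing lam per patch, as the article does from a pre-processed discontinuity estimate, lets flat patches be smoothed strongly while edge-bearing patches are preserved.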

  13. ASAP: an environment for automated preprocessing of sequencing data

    Directory of Open Access Journals (Sweden)

    Torstenson Eric S

    2013-01-01

    Full Text Available Abstract Background Next-generation sequencing (NGS) has yielded an unprecedented amount of data for genetics research. It is a daunting task to process the data from raw sequence reads to variant calls, and manually processing this data can significantly delay downstream analysis and increase the possibility for human error. The research community has produced tools to properly prepare sequence data for analysis and established guidelines on how to apply those tools to achieve the best results; however, existing pipeline programs to automate the process through its entirety are either inaccessible to investigators, or web-based and require a certain amount of administrative expertise to set up. Findings Advanced Sequence Automated Pipeline (ASAP) was developed to provide a framework for automating the translation of sequencing data into annotated variant calls with the goal of minimizing user involvement without the need for dedicated hardware or administrative rights. ASAP works both on computer clusters and on standalone machines with minimal human involvement and maintains high data integrity, while allowing complete control over the configuration of its component programs. It offers an easy-to-use interface for submitting and tracking jobs as well as resuming failed jobs. It also provides tools for quality checking and for dividing jobs into pieces for maximum throughput. Conclusions ASAP provides an environment for building an automated pipeline for NGS data preprocessing. This environment is flexible for use and future development. It is freely available at http://biostat.mc.vanderbilt.edu/ASAP.
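    The resume-failed-jobs behaviour described above boils down to checkpointing completed step names to disk so a rerun can skip finished work. The sketch below is a hypothetical illustration of that idea (the step names, state-file format, and run_pipeline helper are inventions for this example, not ASAP's actual job format):

```python
import json
import os
import tempfile

def run_pipeline(steps, state_path):
    """Resumable preprocessing sketch: each named step runs once,
    and completed step names are checkpointed to disk, so a rerun
    after a failure skips finished work."""
    done = set()
    if os.path.exists(state_path):
        with open(state_path) as fh:
            done = set(json.load(fh))
    for name, fn in steps:
        if name in done:
            continue                       # finished in a prior run
        fn()
        done.add(name)
        with open(state_path, "w") as fh:  # checkpoint after every step
            json.dump(sorted(done), fh)

log = []
steps = [("align", lambda: log.append("align")),
         ("call_variants", lambda: log.append("call_variants"))]
state = os.path.join(tempfile.mkdtemp(), "progress.json")
run_pipeline(steps, state)
run_pipeline(steps, state)  # second invocation re-runs nothing
```

    Because each step is checkpointed as soon as it completes, a crash during `call_variants` would leave `align` marked done and only the failed step would rerun.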

  14. ASAP: an environment for automated preprocessing of sequencing data.

    Science.gov (United States)

    Torstenson, Eric S; Li, Bingshan; Li, Chun

    2013-01-04

    Next-generation sequencing (NGS) has yielded an unprecedented amount of data for genetics research. It is a daunting task to process the data from raw sequence reads to variant calls, and manually processing this data can significantly delay downstream analysis and increase the possibility for human error. The research community has produced tools to properly prepare sequence data for analysis and established guidelines on how to apply those tools to achieve the best results; however, existing pipeline programs to automate the process through its entirety are either inaccessible to investigators, or web-based and require a certain amount of administrative expertise to set up. Advanced Sequence Automated Pipeline (ASAP) was developed to provide a framework for automating the translation of sequencing data into annotated variant calls with the goal of minimizing user involvement without the need for dedicated hardware or administrative rights. ASAP works both on computer clusters and on standalone machines with minimal human involvement and maintains high data integrity, while allowing complete control over the configuration of its component programs. It offers an easy-to-use interface for submitting and tracking jobs as well as resuming failed jobs. It also provides tools for quality checking and for dividing jobs into pieces for maximum throughput. ASAP provides an environment for building an automated pipeline for NGS data preprocessing. This environment is flexible for use and future development. It is freely available at http://biostat.mc.vanderbilt.edu/ASAP.

  15. ASAP: an environment for automated preprocessing of sequencing data

    Science.gov (United States)

    2013-01-01

    Background Next-generation sequencing (NGS) has yielded an unprecedented amount of data for genetics research. It is a daunting task to process the data from raw sequence reads to variant calls, and manually processing this data can significantly delay downstream analysis and increase the possibility for human error. The research community has produced tools to properly prepare sequence data for analysis and established guidelines on how to apply those tools to achieve the best results; however, existing pipeline programs to automate the process through its entirety are either inaccessible to investigators, or web-based and require a certain amount of administrative expertise to set up. Findings Advanced Sequence Automated Pipeline (ASAP) was developed to provide a framework for automating the translation of sequencing data into annotated variant calls with the goal of minimizing user involvement without the need for dedicated hardware or administrative rights. ASAP works both on computer clusters and on standalone machines with minimal human involvement and maintains high data integrity, while allowing complete control over the configuration of its component programs. It offers an easy-to-use interface for submitting and tracking jobs as well as resuming failed jobs. It also provides tools for quality checking and for dividing jobs into pieces for maximum throughput. Conclusions ASAP provides an environment for building an automated pipeline for NGS data preprocessing. This environment is flexible for use and future development. It is freely available at http://biostat.mc.vanderbilt.edu/ASAP. PMID:23289815

  16. Selection of optimal pulse sequences for conventional and dynamic MR imaging with Gd-DTPA; A fundamental study

    Energy Technology Data Exchange (ETDEWEB)

    Maeda, Miho; Kita, Keisuke; Maeda, Masayuki (Wakayama Medical Coll. (Japan)) (and others)

    1989-11-01

    Gadolinium-DTPA (Gd-DTPA) enhances contrast between tissues in magnetic resonance (MR) imaging. The enhancement of tissues depends partly upon the pulse sequences, and the optimal pulse sequence is also influenced by the tissue concentration of Gd-DTPA. We prepared phantoms of 25% albumin solutions with various concentrations of Gd-DTPA, and imaged them using various pulse sequences with a 1.5-T MR system. We also performed MR imaging of 16 patients with tumors (10 brain tumors and 6 hepatic tumors) before and after intravenous administration of Gd-DTPA (0.1 mmol/kg); 6 patients with hepatic tumors underwent dynamic MR imaging during suspended respiration. We derived a theoretical equation to calculate the concentration of Gd-DTPA and estimated its tissue concentration in tumors at 0-0.2 mmol/kg. Within these tissue concentrations, the enhancement-to-noise (E/N) ratio was larger in FISP (flip angle of 90°, TR of 300 msec, minimal TE) and SE (TR of 400 msec, minimal TE) sequences than in the other sequences observed. These sequences may be preferable for conventional enhanced MRI. Among the pulse sequences with TR of less than 100 msec, FISP (flip angle of 90°, TR of less than 100 msec, minimal TE) had the largest E/N ratio, which may be useful for dynamic MRI during suspended respiration. The importance of selecting the optimal pulse sequences according to the imaging modality used will be discussed. (author).
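    The dependence of enhancement on Gd-DTPA concentration and pulse timing follows from two textbook relations: relaxivity, 1/T1 = 1/T1_0 + r1·C, and the idealized spin-echo signal S = M0(1 - exp(-TR/T1))exp(-TE/T2). The sketch below uses assumed tissue values (T1_0 = 900 ms, T2 = 80 ms, r1 = 4.5 per mM per s), not numbers from the article:

```python
import math

def t1_with_gd(t1_base_ms, conc_mm, r1=4.5):
    """Shortened T1 in the presence of contrast agent:
    1/T1 = 1/T1_0 + r1 * C, with r1 an assumed typical Gd-DTPA
    relaxivity in 1/(mM*s) and C in mM."""
    rate_per_s = 1000.0 / t1_base_ms + r1 * conc_mm
    return 1000.0 / rate_per_s          # back to milliseconds

def se_signal(tr_ms, te_ms, t1_ms, t2_ms=80.0, m0=1.0):
    """Idealized spin-echo signal S = M0 (1 - e^(-TR/T1)) e^(-TE/T2)."""
    return m0 * (1 - math.exp(-tr_ms / t1_ms)) * math.exp(-te_ms / t2_ms)

# Enhancement for the SE sequence from the abstract (TR 400 ms, short TE):
pre = se_signal(400, 15, t1_with_gd(900, 0.0))   # no contrast
post = se_signal(400, 15, t1_with_gd(900, 0.2))  # 0.2 mM Gd-DTPA
```

    Sweeping TR (and, for gradient-echo sequences, the flip angle) in such a model is the usual way to compare candidate sequences for a given expected tissue concentration range.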

  17. Detection of non-coding RNAs on the basis of predicted secondary structure formation free energy change

    Directory of Open Access Journals (Sweden)

    Uzilov Andrew V

    2006-03-01

    Full Text Available Abstract Background Non-coding RNAs (ncRNAs) have a multitude of roles in the cell, many of which remain to be discovered. However, it is difficult to detect novel ncRNAs in biochemical screens. To advance biological knowledge, computational methods that can accurately detect ncRNAs in sequenced genomes are therefore desirable. The increasing number of genomic sequences provides a rich dataset for computational comparative sequence analysis and detection of novel ncRNAs. Results Here, Dynalign, a program for predicting secondary structures common to two RNA sequences on the basis of minimizing folding free energy change, is utilized as a computational ncRNA detection tool. The Dynalign-computed optimal total free energy change, which scores the structural alignment and the free energy change of folding into a common structure for two RNA sequences, is shown to be an effective measure for distinguishing ncRNA from randomized sequences. To make the classification as a ncRNA, the total free energy change of an input sequence pair can either be compared with the total free energy changes of a set of control sequence pairs, or be used in combination with sequence length and nucleotide frequencies as input to a classification support vector machine. The latter method is much faster, but slightly less sensitive at a given specificity. Additionally, the classification support vector machine method is shown to be sensitive and specific on genomic ncRNA screens of two different Escherichia coli and Salmonella typhi genome alignments, in which many ncRNAs are known. The Dynalign computational experiments are also compared with two other ncRNA detection programs, RNAz and QRNA. Conclusion The Dynalign-based support vector machine method is more sensitive for known ncRNAs in the test genomic screens than RNAz and QRNA. Additionally, both Dynalign-based methods are more sensitive than RNAz and QRNA at low sequence pair identities. Dynalign can be used as a
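    The first classification scheme in the abstract, comparing the total free energy change of an input pair with those of a set of control sequence pairs, amounts to a z-score test. The energies and threshold below are placeholders for illustration, not Dynalign output:

```python
import statistics

def classify_by_controls(delta_g, control_delta_gs, z_cut=-2.0):
    """Flag a sequence pair as ncRNA when its total free energy
    change is far more negative than the distribution seen for
    control (e.g. shuffled) pairs: z = (dG - mean) / sd."""
    mu = statistics.mean(control_delta_gs)
    sd = statistics.stdev(control_delta_gs)
    return (delta_g - mu) / sd <= z_cut

controls = [-10.0, -11.5, -9.8, -10.7, -11.0]   # placeholder control dGs
print(classify_by_controls(-25.0, controls))    # True: far below controls
print(classify_by_controls(-10.5, controls))    # False: typical of controls
```

    The SVM variant described in the abstract replaces this single-feature test with a classifier over the total free energy change, sequence length, and nucleotide frequencies, trading a little sensitivity for much greater speed.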

  18. Implementation of Waste Minimization at a complex R&D site

    International Nuclear Information System (INIS)

    Lang, R.E.; Thuot, J.R.; Devgun, J.S.

    1995-01-01

    Under the 1994 Waste Minimization/Pollution Prevention Crosscut Plan, the Department of Energy (DOE) has set a goal of 50% reduction in waste at its facilities by the end of 1999. Each DOE site is required to set site-specific goals to reduce generation of all types of waste, including hazardous, radioactive, and mixed. To meet these goals, Argonne National Laboratory (ANL), Argonne, IL, has developed and implemented a comprehensive Pollution Prevention/Waste Minimization (PP/WMin) Program. The facilities and activities at the site range from basic science and nuclear fuel cycle research to high energy physics and decontamination and decommissioning projects. As a multidisciplinary R&D facility and a multiactivity site, ANL generates waste streams that are varied in physical form as well as in chemical constituents. This in turn presents a significant challenge to putting a cohesive site-wide PP/WMin Program into action. In this paper, we will describe ANL's key activities and waste streams, the regulatory drivers for waste minimization, and the DOE goals in this area, and we will discuss ANL's strategy for waste minimization and its implementation across the site

  19. Quantum N-body problem with a minimal length

    International Nuclear Information System (INIS)

    Buisseret, Fabien

    2010-01-01

    The quantum N-body problem is studied in the context of nonrelativistic quantum mechanics with a one-dimensional deformed Heisenberg algebra of the form [x,p] = i(1+βp²), leading to the existence of a minimal observable length √(β). For a generic pairwise interaction potential, analytical formulas are obtained that allow estimation of the ground-state energy of the N-body system by finding the ground-state energy of a corresponding two-body problem. It is first shown that in the harmonic oscillator case, the β-dependent term grows faster with increasing N than the β-independent term. Then, it is argued that such a behavior should also be observed with generic potentials and for D-dimensional systems. Consequently, quantum N-body bound states might be interesting places to look at nontrivial manifestations of a minimal length, since the more particles that are present, the more the system deviates from standard quantum-mechanical predictions.
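
    The minimal observable length cited above follows from the deformed commutator through the generalized uncertainty relation; a standard derivation (in units with ℏ = 1, taking ⟨p⟩ = 0 so that ⟨p²⟩ = Δp²) runs:

```latex
\Delta x\,\Delta p \;\ge\; \tfrac{1}{2}\bigl|\langle[x,p]\rangle\bigr|
                   \;=\; \tfrac{1}{2}\bigl(1+\beta\langle p^{2}\rangle\bigr)
                   \;=\; \tfrac{1}{2}\bigl(1+\beta\,\Delta p^{2}\bigr)
\quad\Longrightarrow\quad
\Delta x \;\ge\; \frac{1}{2}\Bigl(\frac{1}{\Delta p}+\beta\,\Delta p\Bigr),
```

    which is minimized at Δp = 1/√β, giving the minimal length Δx_min = √β quoted in the abstract.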

  20. The Functionality of Minimal PiggyBac Transposons in Mammalian Cells

    Directory of Open Access Journals (Sweden)

    Boris Troyanovsky

    2016-01-01

    Full Text Available Minimal piggyBac vectors are a modified single-plasmid version of the classical piggyBac delivery system that can be used for stable transgene integration. These vectors have a truncated terminal domain in the delivery cassette and thus, integrate significantly less flanking transposon DNA into host cell chromatin than classical piggyBac vectors. Herein, we test various characteristics of this modified transposon. The integration efficiency of minimal piggyBac vectors was inversely related to the size of both the transposon and the entire plasmid, but inserts as large as 15 kb were efficiently integrated. Open and super-coiled vectors demonstrated the same integration efficiency while DNA methylation decreased the integration efficiency and silenced the expression of previously integrated sequences in some cell types. Importantly, the incidence of plasmid backbone integration was not increased above that seen in nontransposon control vectors. In BALB/c mice, we demonstrated prolonged expression of two transgenes (intracellular mCherry and secretable Gaussia luciferase when delivered by the minimal piggyBac that resulted in a more sustained antibody production against the immunogenic luciferase than when delivered by a transient (nontransposon vector plasmid. We conclude that minimal piggyBac vectors are an effective alternative to other integrative systems for stable DNA delivery in vitro and in vivo.

  1. Numerical solution of large nonlinear boundary value problems by quadratic minimization techniques

    International Nuclear Information System (INIS)

    Glowinski, R.; Le Tallec, P.

    1984-01-01

    The objective of this paper is to describe the numerical treatment of large, highly nonlinear two- or three-dimensional boundary value problems by quadratic minimization techniques. In all the different situations where these techniques were applied, the methodology remains the same and is organized as follows: 1) derive a variational formulation of the original boundary value problem, and approximate it by Galerkin methods; 2) transform this variational formulation into a quadratic minimization problem (least squares methods) or into a sequence of quadratic minimization problems (augmented Lagrangian decomposition); 3) solve each quadratic minimization problem by a conjugate gradient method with preconditioning, the preconditioning matrix being sparse, positive definite, and fixed once and for all in the iterative process. This paper illustrates the methodology above on two different examples: the description of least squares solution methods and their application to the solution of the unsteady Navier-Stokes equations for incompressible viscous fluids; and the description of augmented Lagrangian decomposition techniques and their application to the solution of equilibrium problems in finite elasticity.
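
    Step 3 of the methodology, conjugate gradient iterations with a sparse preconditioner fixed throughout the process, can be illustrated with a Jacobi (diagonal) preconditioner on a small symmetric positive definite system. The matrix below is a toy example, not one of the paper's discretized problems.

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for A x = b, A symmetric positive
    definite. The preconditioner is the diagonal of A (Jacobi), computed
    once and kept fixed for all iterations."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual b - A x0, with x0 = 0
    minv = [1.0 / A[i][i] for i in range(n)]   # inverse of diagonal preconditioner
    z = [minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
```

    In the paper's setting A would be the (sparse) Galerkin matrix of one quadratic subproblem, and the fixed preconditioner amortizes its setup cost over the whole sequence of minimizations.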

  2. Starting-up sequence of the AWEC-60 wind turbine

    International Nuclear Information System (INIS)

    Avia, F.; Cruz, M. de la.

    1991-01-01

    One of the most critical phases of wind turbine operation is the starting-up sequence and the connection to the grid, since the actuating loads can reach several times the loads during operation at rated conditions. For this reason, the control strategy during the starting-up sequence is very important in order to minimize the loads on the machine. For this purpose it is necessary to analyze the behaviour of the wind turbine during that sequence under different wind and machine conditions. This report presents graphic information on fifty starting-up sequences of the AWEC-60 wind turbine, of 60 m diameter and 1200 kW rated power, recorded in April 1991 at wind speeds up to cut-out. (author)

  3. Stars of bosons with non-minimal energy-momentum tensor

    International Nuclear Information System (INIS)

    van der Bij, J.J.; Gleiser, M.

    1987-02-01

    We obtain spherically symmetric solutions for scalar fields with a non-minimal coupling ξ|φ|²R to gravity. We find, for fields of mass m, maximum masses and numbers of particles of order M_max ∼ 0.73 ξ^(1/2) M_Planck²/m and N_max ∼ 0.88 ξ^(1/2) M_Planck²/m², respectively, for large positive ξ. For large negative ξ we find M_max ∼ 0.66 |ξ|^(1/2) M_Planck²/m and N_max ∼ 0.72 |ξ|^(1/2) M_Planck²/m²

  4. Minimization of mixed waste in explosive testing operations

    International Nuclear Information System (INIS)

    Gonzalez, M.A.; Sator, F.E.; Simmons, L.F.

    1993-02-01

    In the 1970s and 1980s, efforts to manage mixed waste and reduce pollution focused largely on post-process measures. In the late 1980s, the approach to waste management and pollution control changed, focusing on minimization and prevention rather than abatement, treatment, and disposal. The new approach, and the formulated guidance from the US Department of Energy, was to take all necessary measures to minimize waste and prevent the release of pollutants to the environment. Two measures emphasized in particular were source reduction (reducing the volume and toxicity of the waste source) and recycling. In 1988, a waste minimization and pollution prevention program was initiated at Site 300, where the Lawrence Livermore National Laboratory (LLNL) conducts explosives testing. LLNL's Defense Systems/Nuclear Design (DS/ND) Program has adopted a variety of conservation techniques to minimize waste generation and cut disposal costs associated with ongoing operations. The techniques include minimizing the generation of depleted uranium and lead mixed waste through inventory control and material substitution measures and through developing a management system to recycle surplus explosives. The changes implemented have reduced annual mixed waste volumes by more than 95% and reduced overall radioactive waste generation (low-level and mixed) by more than 75%. The measures employed were cost-effective and easily implemented

  5. Optimized core loading sequence for Ukraine WWER-1000 reactors

    International Nuclear Information System (INIS)

    Dye, M.; Shah, H.

    2015-01-01

    Westinghouse Fuel Assemblies (WFAs) experienced mechanical damage of the grids during loading at both South Ukraine 2 (SU2) and South Ukraine 3 (SU3). The grids were damaged by high lateral loads exceeding their strength limit. The high lateral loads were caused by a combination of distortion and stiffness of the mixed core fuel assemblies and significant fuel assembly-to-fuel assembly interaction, combined with the core loading sequence being used. To prevent damage of the WFA grids during core loading, Westinghouse has developed a loading sequence technique and loading aids (smooth-sided dummies and top nozzle loading guides) designed to minimize fuel assembly-to-fuel assembly interaction while maximizing the potential for successful loading (i.e., no fuel assembly damage and minimized loading time). The loading sequence technique accounts for cycle-specific core loading patterns and is based on previous Westinghouse WWER core loading experience and fundamental principles. The loading aids are designed either to open up the target core location or to provide guidance into it. The Westinghouse optimized core loading sequence and smooth-sided dummies were utilized during the successful loading of the SU3 Cycle 25 mixed core in March 2015, with no instances of fuel assembly damage, while still providing considerable time savings relative to the 2012 and 2013 SU3 reload campaigns. (authors)

  6. RNAPattMatch: a web server for RNA sequence/structure motif detection based on pattern matching with flexible gaps

    Science.gov (United States)

    Drory Retwitzer, Matan; Polishchuk, Maya; Churkin, Elena; Kifer, Ilona; Yakhini, Zohar; Barash, Danny

    2015-01-01

    Searching for RNA sequence-structure patterns is becoming an essential tool for RNA practitioners. Novel discoveries of regulatory non-coding RNAs in targeted organisms and the motivation to find them across a wide range of organisms have prompted the use of computational RNA pattern matching as an enhancement to sequence similarity. State-of-the-art programs differ by the flexibility of patterns allowed as queries and by their simplicity of use. In particular, no existing method is available as a user-friendly web server. A general program that searches for RNA sequence-structure patterns is RNA Structator. However, it is not available as a web server and does not provide the option to allow flexible gap pattern representation with an upper bound of the gap length being specified at any position in the sequence. Here, we introduce RNAPattMatch, a web-based application that is user friendly and makes sequence/structure RNA queries accessible to practitioners of various background and proficiency. It also extends RNA Structator and allows a more flexible variable gaps representation, in addition to analysis of results using energy minimization methods. RNAPattMatch service is available at http://www.cs.bgu.ac.il/rnapattmatch. A standalone version of the search tool is also available to download at the site. PMID:25940619
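
    At the sequence level only, the bounded flexible gaps described above can be sketched by compiling a pattern into a regular expression. This is a rough illustration, not RNAPattMatch's algorithm (which also matches structure); the pattern syntax and helper names below are invented for the example.

```python
import re

# IUPAC nucleotide codes to character classes (subset, for illustration).
IUPAC = {"A": "A", "C": "C", "G": "G", "U": "U",
         "N": "[ACGU]", "R": "[AG]", "Y": "[CU]"}

def pattern_to_regex(pattern):
    """Translate a toy pattern such as 'GGC N{2,8} GCC', i.e. fixed motifs
    separated by length-bounded flexible gaps, into a compiled regex.
    Each gap token carries an explicit upper bound on its length."""
    parts = []
    for token in pattern.split():
        m = re.fullmatch(r"N\{(\d+),(\d+)\}", token)
        if m:  # flexible gap with lower and upper length bounds
            parts.append("[ACGU]{%s,%s}" % (m.group(1), m.group(2)))
        else:  # literal motif, expanding IUPAC ambiguity codes
            parts.append("".join(IUPAC[c] for c in token))
    return re.compile("".join(parts))

rx = pattern_to_regex("GGC N{2,8} GCC")
hit = rx.search("AUGGCAUUGCCGA")
```

    A real tool would additionally check that the matched region can fold into the queried secondary structure, e.g. by energy minimization over the candidate hits.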

  7. Westinghouse Hanford Company waste minimization and pollution prevention awareness program plan

    International Nuclear Information System (INIS)

    Craig, P.A.; Nichols, D.H.; Lindsey, D.W.

    1991-08-01

    The purpose of this plan is to establish the Westinghouse Hanford Company's Waste Minimization Program. The plan specifies activities and methods that will be employed to reduce the quantity and toxicity of waste generated at Westinghouse Hanford Company (Westinghouse Hanford). It is designed to satisfy the US Department of Energy (DOE) and other legal requirements that are discussed in Subsection C of this section. The Pollution Prevention Awareness Program is included with the Waste Minimization Program as permitted by DOE Order 5400.1 (DOE 1988a). This plan is based on the Hanford Site Waste Minimization and Pollution Prevention Awareness Program Plan, which directs DOE Field Office, Richland contractors to develop and maintain a waste minimization program. This waste minimization program is an organized, comprehensive, and continual effort to systematically reduce waste generation. The Westinghouse Hanford Waste Minimization Program is designed to prevent or minimize pollutant releases to all environmental media from all aspects of Westinghouse Hanford operations and offers increased protection of public health and the environment. 14 refs., 2 figs., 1 tab

  8. Management of High-Throughput DNA Sequencing Projects: Alpheus.

    Science.gov (United States)

    Miller, Neil A; Kingsmore, Stephen F; Farmer, Andrew; Langley, Raymond J; Mudge, Joann; Crow, John A; Gonzalez, Alvaro J; Schilkey, Faye D; Kim, Ryan J; van Velkinburgh, Jennifer; May, Gregory D; Black, C Forrest; Myers, M Kathy; Utsey, John P; Frost, Nicholas S; Sugarbaker, David J; Bueno, Raphael; Gullans, Stephen R; Baxter, Susan M; Day, Steve W; Retzel, Ernest F

    2008-12-26

    High-throughput DNA sequencing has enabled systems biology to begin to address areas in health, agricultural and basic biological research. Concomitant with the opportunities is an absolute necessity to manage significant volumes of high-dimensional and inter-related data and analysis. Alpheus is an analysis pipeline, database and visualization software for use with massively parallel DNA sequencing technologies that feature multi-gigabase throughput characterized by relatively short reads, such as Illumina-Solexa (sequencing-by-synthesis), Roche-454 (pyrosequencing) and Applied Biosystems' SOLiD (sequencing-by-ligation). Alpheus enables alignment to reference sequence(s), detection of variants and enumeration of sequence abundance, including expression levels in transcriptome sequence. Alpheus is able to detect several types of variants, including non-synonymous and synonymous single nucleotide polymorphisms (SNPs), insertions/deletions (indels), premature stop codons, and splice isoforms. Variant detection is aided by the ability to filter variant calls based on consistency, expected allele frequency, sequence quality, coverage, and variant type in order to minimize false positives while maximizing the identification of true positives. Alpheus also enables comparisons of genes with variants between cases and controls or bulk segregant pools. Sequence-based differential expression comparisons can be developed, with data export to SAS JMP Genomics for statistical analysis.
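
    The variant-filtering idea described above can be sketched as a simple post-filter on coverage, base quality and observed allele frequency. The thresholds, field names and records below are illustrative only and do not reflect Alpheus's actual schema or defaults.

```python
def filter_variants(calls, min_coverage=10, min_quality=20.0, min_freq=0.2):
    """Keep only variant calls passing coverage, mean base quality and
    allele-frequency thresholds, to curb false positives while retaining
    well-supported true positives."""
    kept = []
    for v in calls:
        freq = v["alt_reads"] / v["coverage"]  # observed allele frequency
        if (v["coverage"] >= min_coverage
                and v["mean_quality"] >= min_quality
                and freq >= min_freq):
            kept.append(v)
    return kept

calls = [
    {"pos": 101, "coverage": 42, "alt_reads": 20, "mean_quality": 31.0},
    {"pos": 215, "coverage": 6,  "alt_reads": 5,  "mean_quality": 33.0},  # low coverage
    {"pos": 388, "coverage": 55, "alt_reads": 3,  "mean_quality": 30.0},  # low frequency
]
passed = filter_variants(calls)
```

    In a real pipeline these thresholds would typically also depend on variant type and the expected allele frequency of the experiment (e.g. diploid calls versus pooled samples).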

  9. Minimization of the occupational doses during the liquidation of the radiation accident consequences

    International Nuclear Information System (INIS)

    Kuryndina, Lidia; Stroganov, Anatoly; Kuryndin, Anton

    2008-01-01

    Full text: The accident at the Chernobyl NPP is, as is well known, the most severe in the history of nuclear energy. It showed how high radiation levels at a damaged nuclear facility can be. Nevertheless, Russian radiation protection specialists developed and successfully applied a concept for working in such conditions during the liquidation of the accident consequences. The concept, based on the ALARA principle, included methods for analyzing the structure of the radiation fields and made it possible to minimize occupational doses during the liquidation operations. Its main idea is the strong dependence of the dose received by the personnel performing the liquidation operations on the specific sequence of those operations. For the approach to succeed, it is also necessary to periodically obtain experimental information on the dynamics of the radiation situation at the damaged facility and to perform variant calculations for optimization. These calculations comprise a variable part, reflecting the actual state of the facility before and after the accident, and an invariable part, determined by the geometric and shielding characteristics of the facility. The second part is the larger and more complicated one; therefore, most of the calculations required for a successful liquidation of accident consequences can be performed at the facility design stage. If this is done, the following tasks can be solved in case of an accident: 1) estimate the distribution of the contamination source using the indications of the radiation control system; 2) determine the contribution of each source to the dose rate in any contaminated area; 3) estimate the radiation doses of the personnel participating in the liquidation of the accident consequences; 4) select and realize the sequence of liquidation operations giving the minimal doses.
The paper will overview the description

  10. Ontogeny of hepatic energy metabolism genes in mice as revealed by RNA-sequencing.

    Directory of Open Access Journals (Sweden)

    Helen J Renaud

    Full Text Available The liver plays a central role in metabolic homeostasis by coordinating synthesis, storage, breakdown, and redistribution of nutrients. Hepatic energy metabolism is dynamically regulated throughout different life stages due to different demands for energy during growth and development. However, changes in gene expression patterns throughout ontogeny for factors important in hepatic energy metabolism are not well understood. We performed detailed transcript analysis of energy metabolism genes during various stages of liver development in mice. Livers from male C57BL/6J mice were collected at twelve ages, including perinatal and postnatal time points (n = 3/age. The mRNA was quantified by RNA-Sequencing, with transcript abundance estimated by Cufflinks. One thousand sixty energy metabolism genes were examined; 794 were above detection, of which 627 were significantly changed during at least one developmental age compared to adult liver. Two-way hierarchical clustering revealed three major clusters dependent on age: GD17.5-Day 5 (perinatal-enriched, Day 10-Day 20 (pre-weaning-enriched, and Day 25-Day 60 (adolescence/adulthood-enriched. Clustering analysis of cumulative mRNA expression values for individual pathways of energy metabolism revealed three patterns of enrichment: glycolysis, ketogenesis, and glycogenesis were all perinatally-enriched; glycogenolysis was the only pathway enriched during pre-weaning ages; whereas lipid droplet metabolism, cholesterol and bile acid metabolism, gluconeogenesis, and lipid metabolism were all enriched in adolescence/adulthood. This study reveals novel findings such as the divergent expression of the fatty acid β-oxidation enzymes Acyl-CoA oxidase 1 and Carnitine palmitoyltransferase 1a, indicating a switch from mitochondrial to peroxisomal β-oxidation after weaning; as well as the dynamic ontogeny of genes implicated in obesity such as Stearoyl-CoA desaturase 1 and Elongation of very long chain fatty

  11. A perturbation technique for shield weight minimization

    International Nuclear Information System (INIS)

    Watkins, E.F.; Greenspan, E.

    1993-01-01

    The radiation shield optimization code SWAN (Ref. 1) was originally developed for minimizing the thickness of a shield that will meet a given dose (or another) constraint or for extremizing a performance parameter of interest (e.g., maximizing energy multiplication or minimizing dose) while maintaining the shield volume constraint. The SWAN optimization process proved to be highly effective (e.g., see Refs. 2, 3, and 4). The purpose of this work is to investigate the applicability of the SWAN methodology to problems in which the weight rather than the volume is the relevant shield characteristic. Such problems are encountered in shield design for space nuclear power systems. The investigation is carried out using SWAN with the coupled neutron-photon cross-section library FLUNG (Ref. 5)

  12. A fast one-chip event-preprocessor and sequencer for the Simbol-X Low Energy Detector

    Science.gov (United States)

    Schanz, T.; Tenzer, C.; Maier, D.; Kendziorra, E.; Santangelo, A.

    2010-12-01

    We present an FPGA-based digital camera electronics consisting of an Event-Preprocessor (EPP) for on-board data preprocessing and a related Sequencer (SEQ) to generate the necessary signals to control the readout of the detector. The device has been originally designed for the Simbol-X low energy detector (LED). The EPP operates on 64×64 pixel images and has a real-time processing capability of more than 8000 frames per second. The already working releases of the EPP and the SEQ are now combined into one Digital-Camera-Controller-Chip (D3C).

  13. A fast one-chip event-preprocessor and sequencer for the Simbol-X Low Energy Detector

    Energy Technology Data Exchange (ETDEWEB)

    Schanz, T., E-mail: schanz@astro.uni-tuebingen.d [Kepler Center for Astro- and Particlephysics, Institut fuer Astronomie und Astrophysik Tuebingen, Sand 1, 72076 Tuebingen (Germany); Tenzer, C., E-mail: tenzer@astro.uni-tuebingen.d [Kepler Center for Astro- and Particlephysics, Institut fuer Astronomie und Astrophysik Tuebingen, Sand 1, 72076 Tuebingen (Germany); Maier, D.; Kendziorra, E.; Santangelo, A. [Kepler Center for Astro- and Particlephysics, Institut fuer Astronomie und Astrophysik Tuebingen, Sand 1, 72076 Tuebingen (Germany)

    2010-12-11

    We present an FPGA-based digital camera electronics consisting of an Event-Preprocessor (EPP) for on-board data preprocessing and a related Sequencer (SEQ) to generate the necessary signals to control the readout of the detector. The device has been originally designed for the Simbol-X low energy detector (LED). The EPP operates on 64x64 pixel images and has a real-time processing capability of more than 8000 frames per second. The already working releases of the EPP and the SEQ are now combined into one Digital-Camera-Controller-Chip (D3C).

  14. A fast one-chip event-preprocessor and sequencer for the Simbol-X Low Energy Detector

    International Nuclear Information System (INIS)

    Schanz, T.; Tenzer, C.; Maier, D.; Kendziorra, E.; Santangelo, A.

    2010-01-01

    We present an FPGA-based digital camera electronics consisting of an Event-Preprocessor (EPP) for on-board data preprocessing and a related Sequencer (SEQ) to generate the necessary signals to control the readout of the detector. The device has been originally designed for the Simbol-X low energy detector (LED). The EPP operates on 64x64 pixel images and has a real-time processing capability of more than 8000 frames per second. The already working releases of the EPP and the SEQ are now combined into one Digital-Camera-Controller-Chip (D3C).

  15. Hybridization Capture Using Short PCR Products Enriches Small Genomes by Capturing Flanking Sequences (CapFlank)

    DEFF Research Database (Denmark)

    Tsangaras, Kyriakos; Wales, Nathan; Sicheritz-Pontén, Thomas

    2014-01-01

    nucleotides) can result in enrichment across entire mitochondrial and bacterial genomes. Our findings suggest that some of the off-target sequences derived in capture experiments are non-randomly enriched, and that CapFlank will facilitate targeted enrichment of large contiguous sequences with minimal prior...

  16. A Streaming Distance Transform Algorithm for Neighborhood-Sequence Distances

    Directory of Open Access Journals (Sweden)

    Nicolas Normand

    2014-09-01

    Full Text Available We describe an algorithm that computes a “translated” 2D Neighborhood-Sequence Distance Transform (DT using a look-up table approach. It requires a single raster scan of the input image and produces one line of output for every line of input. The neighborhood sequence is specified either by providing one period of some integer periodic sequence or by providing the rate of appearance of neighborhoods. The full algorithm optionally derives the regular (centered DT from the “translated” DT, providing the result image on-the-fly, with a minimal delay, before the input image is fully processed. Its efficiency can benefit all applications that use neighborhood-sequence distances, particularly when pipelined processing architectures are involved, or when the size of objects in the source image is limited.
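
    The single-raster-scan principle can be illustrated on the simplest case: a “translated” city-block distance transform that uses only causal (already-scanned) neighbors, so each output line depends only on the current and previous input lines. This sketch omits the paper's look-up-table machinery for general neighborhood sequences and the derivation of the centered DT.

```python
def translated_city_block_dt(image):
    """One raster scan computing a 'translated' city-block distance
    transform: for an object pixel, the distance is 1 plus the minimum
    over the causal neighbors (above and to the left), so the result for
    a line is available as soon as that line has been scanned."""
    h, w = len(image), len(image[0])
    big = h + w                           # acts as infinity at the borders
    dt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if image[y][x] == 0:          # background pixel: distance 0
                dt[y][x] = 0
            else:                          # object pixel: 1 + min of causal neighbors
                up = dt[y - 1][x] if y > 0 else big
                left = dt[y][x - 1] if x > 0 else big
                dt[y][x] = 1 + min(up, left)
    return dt

img = [[0, 1, 1, 1],
       [0, 1, 1, 1],
       [0, 0, 1, 1]]
dt = translated_city_block_dt(img)
```

    In the full algorithm, the neighborhood used at each distance value is selected by the neighborhood sequence (via a look-up table), which is what shapes the resulting distance balls.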

  17. Minimization of Linear Functionals Defined on Solutions of Large-Scale Discrete Ill-Posed Problems

    DEFF Research Database (Denmark)

    Elden, Lars; Hansen, Per Christian; Rojas, Marielba

    2003-01-01

    The minimization of linear functionals defined on the solutions of discrete ill-posed problems arises, e.g., in the computation of confidence intervals for these solutions. In 1990, Elden proposed an algorithm for this minimization problem based on a parametric-programming reformulation involving the solution of a sequence of trust-region problems, and using matrix factorizations. In this paper, we describe MLFIP, a large-scale version of this algorithm where a limited-memory trust-region solver is used on the subproblems. We illustrate the use of our algorithm in connection with an inverse heat...

  18. Optimization for a fuel cell/battery/capacity tram with equivalent consumption minimization strategy

    International Nuclear Information System (INIS)

    Zhang, Wenbin; Li, Jianqiu; Xu, Liangfei; Ouyang, Minggao

    2017-01-01

    Highlights: • The hybridization of the fuel cell with the energy storage systems is realized for the tram. • A prototype tram is tested based on an operation mode switching method. • An equivalent consumption minimization strategy is proposed and verified for optimization. - Abstract: This paper describes a hybrid tram powered by a Proton Exchange Membrane (PEM) fuel cell (FC) stack supported by an energy storage system (ESS) composed of a Li-ion battery (LB) pack and an ultra-capacitor (UC) pack. This configuration allows the tram to operate without grid connection. The hybrid tram with its full load is tested at CRRC Qingdao Sifang Co., Ltd. It initially works with an operation mode switching method (OPMS), without energy regeneration or proper power management. Therefore, an equivalent consumption minimization strategy (ECMS) aimed at minimizing the hydrogen consumption is proposed to improve the characteristics of the tram. The results show that the proposed control system enhances drivability and economy, and is effective for application to this hybrid system.
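
    The core of an equivalent consumption minimization strategy is an instantaneous optimization of the power split: battery energy is converted into an equivalent hydrogen cost through an equivalence factor, and the fuel-cell power minimizing the total is chosen at each step. The sketch below uses an illustrative quadratic fuel map and constants; it does not reproduce the paper's actual models.

```python
def ecms_split(p_demand, s=1.5, p_fc_max=150.0, step=1.0):
    """Instantaneous ECMS: pick the fuel-cell power P_fc minimizing the
    equivalent consumption J = m_fc(P_fc) + s * P_batt, where the battery
    supplies the remaining demand and s is the equivalence factor that
    prices battery energy in fuel terms. Units are arbitrary."""
    best_cost, best_p = float("inf"), 0.0
    p = 0.0
    while p <= p_fc_max:
        m_fc = 1.0 * p + 0.005 * p * p   # illustrative convex fuel map of the stack
        p_batt = p_demand - p            # battery covers the remainder of the demand
        cost = m_fc + s * p_batt         # battery energy weighted by s
        if cost < best_cost:
            best_cost, best_p = cost, p
        p += step
    return best_p

p_fc = ecms_split(120.0)    # fuel-cell share of a 120 kW demand
p_batt = 120.0 - p_fc       # battery (and ultra-capacitor) share
```

    With these toy constants the optimum is interior (the stack supplies part of the load and the ESS the rest); in practice s is adapted online, e.g. as a function of battery state of charge, to keep the ESS charge-sustaining.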

  19. Spatio-temporal alignment of pedobarographic image sequences.

    Science.gov (United States)

    Oliveira, Francisco P M; Sousa, Andreia; Santos, Rubim; Tavares, João Manuel R S

    2011-07-01

    This article presents a methodology to align plantar pressure image sequences simultaneously in time and space. The spatial position and orientation of a foot in a sequence are changed to match the foot represented in a second sequence. Simultaneously with the spatial alignment, the temporal scale of the first sequence is transformed with the aim of synchronizing the two input footsteps. Consequently, the spatial correspondence of the foot regions along the sequences as well as the temporal synchronization is automatically attained, making the study easier and more straightforward. In terms of spatial alignment, the methodology can use one of four possible geometric transformation models: rigid, similarity, affine, or projective. In the temporal alignment, a polynomial transformation up to the 4th degree can be adopted in order to model linear and curved time behaviors. Suitable geometric and temporal transformations are found by minimizing the mean squared error (MSE) between the input sequences. The methodology was tested on a set of real image sequences acquired from a common pedobarographic device. When used in experimental cases generated by applying geometric and temporal control transformations, the methodology revealed high accuracy. In addition, the intra-subject alignment tests from real plantar pressure image sequences showed that the curved temporal models produced better MSE results. The methodology advances the alignment of pedobarographic image data, since previous methods can only be applied to static images.
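
    The temporal side of the alignment can be sketched with a first-degree (linear) time warp chosen by grid search to minimize the MSE between two 1-D signals. The paper's method additionally optimizes spatial transforms and polynomial warps up to the 4th degree; the signals and candidate scales below are toy data.

```python
def resample(seq, t):
    """Linearly interpolate a 1-D signal at fractional index t (clamped)."""
    if t <= 0:
        return seq[0]
    if t >= len(seq) - 1:
        return seq[-1]
    i = int(t)
    frac = t - i
    return seq[i] * (1.0 - frac) + seq[i + 1] * frac

def best_time_scale(ref, mov, scales):
    """First-degree temporal alignment: find the scale a that minimizes
    the MSE between ref[k] and mov resampled at time a*k."""
    best_a, best_mse = None, float("inf")
    for a in scales:
        mse = sum((ref[k] - resample(mov, a * k)) ** 2
                  for k in range(len(ref))) / len(ref)
        if mse < best_mse:
            best_a, best_mse = a, mse
    return best_a

ref = [0.0, 1.0, 2.0, 3.0, 4.0]                          # hypothetical pressure curve
mov = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]      # same curve, twice as slow
a = best_time_scale(ref, mov, [0.5, 1.0, 1.5, 2.0])
```

    Replacing the single scale by polynomial coefficients (and adding the geometric transform of each frame) turns this toy search into the joint spatio-temporal optimization described in the abstract.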

  20. Holistic virtual machine scheduling in cloud datacenters towards minimizing total energy

    OpenAIRE

    Li, Xiang; Garraghan, Peter; Jiang, Xiaohong; Wu, Zhaohui; Xu, Jie

    2018-01-01

    Energy consumed by Cloud datacenters has dramatically increased, driven by rapid uptake of applications and services globally provisioned through virtualization. By applying energy-aware virtual machine scheduling, Cloud providers are able to achieve enhanced energy efficiency and reduced operation cost. Energy consumption of datacenters consists of computing energy and cooling energy. However, due to the complexity of energy and thermal modeling of realistic Cloud datacenter operation, tradi...

  1. Phenomenology of anomaly-mediated supersymmetry breaking scenarios with non-minimal flavour violation

    Energy Technology Data Exchange (ETDEWEB)

    Fuks, Benjamin [Strasbourg Univ. (France). Inst. Pluridisciplinaire Hubert Curien; Herrmann, Bjoern [Savoie Univ., Annecy-le-Vieux (France). LAPTh; Klasen, Michael [Muenster Univ. (Germany). Inst. fuer Theoretische Physik 1

    2011-12-15

    In minimal anomaly-mediated supersymmetry breaking models, tachyonic sleptons are avoided by introducing a common scalar mass similar to the one introduced in minimal supergravity. This may lead to non-minimal flavour-violating interactions, e.g., in the squark sector. In this paper, we analyze the viable anomaly-mediated supersymmetry breaking parameter space in the light of the latest limits on low-energy observables and LHC searches, complete our analytical calculations of flavour-violating supersymmetric particle production at hadron colliders with those related to gluino production, and study the phenomenological consequences of non-minimal flavour violation in anomaly-mediated supersymmetry breaking scenarios at the LHC. Related cosmological aspects are also briefly discussed.

  2. A majorization-minimization approach to design of power distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jason K [Los Alamos National Laboratory; Chertkov, Michael [Los Alamos National Laboratory

    2010-01-01

    We consider optimization approaches to design cost-effective electrical networks for power distribution. This involves a trade-off between minimizing the power loss due to resistive heating of the lines and minimizing the construction cost (modeled by a linear cost in the number of lines plus a linear cost on the conductance of each line). We begin with a convex optimization method based on the paper 'Minimizing Effective Resistance of a Graph' [Ghosh, Boyd & Saberi]. However, this does not address the Alternating Current (AC) realm and the combinatorial aspect of adding/removing lines of the network. Hence, we consider a non-convex continuation method that imposes a concave cost of the conductance of each line thereby favoring sparser solutions. By varying a parameter of this penalty we extrapolate from the convex problem (with non-sparse solutions) to the combinatorial problem (with sparse solutions). This is used as a heuristic to find good solutions (local minima) of the non-convex problem. To perform the necessary non-convex optimization steps, we use the majorization-minimization algorithm that performs a sequence of convex optimizations obtained by iteratively linearizing the concave part of the objective. A number of examples are presented which suggest that the overall method is a good heuristic for network design. We also consider how to obtain sparse networks that are still robust against failures of lines and/or generators.
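
    The majorization-minimization step described above, replacing the concave penalty by its tangent to obtain a convex subproblem, can be shown on a scalar toy version of the conductance problem. All constants are illustrative.

```python
def mm_concave_step(b, lam=1.0, eps=1e-3, iters=50):
    """Majorization-minimization for the scalar non-convex problem
        minimize_{x >= 0}  (x - b)**2 + lam * log(eps + x),
    whose concave log penalty favors sparse (zero) solutions, as used to
    drive line conductances to zero in the network-design heuristic.
    Each iteration replaces the concave log by its tangent at the current
    iterate, leaving a convex quadratic with a closed-form minimizer."""
    x = max(b, 0.0)                      # start from the unpenalized solution
    for _ in range(iters):
        slope = lam / (eps + x)          # derivative of the log term at x
        x = max(b - slope / 2.0, 0.0)    # argmin of the convex majorizer
    return x

small = mm_concave_step(0.3)   # weak line: the penalty drives it to zero
large = mm_concave_step(5.0)   # strong line: survives nearly unchanged
```

    In the actual design problem the scalar quadratic is replaced by the convex effective-resistance objective over all line conductances, and the same tangent-linearization of the concave cost yields the sequence of convex optimizations mentioned in the abstract.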

  3. Mixed low-level waste minimization at Los Alamos

    International Nuclear Information System (INIS)

    Starke, T.P.

    1998-01-01

    During the first six months of University of California 98 Fiscal Year (July--December) Los Alamos National Laboratory has achieved a 57% reduction in mixed low-level waste generation. This has been accomplished through a systems approach that identified and minimized the largest MLLW streams. These included surface-contaminated lead, lead-lined gloveboxes, printed circuit boards, and activated fluorescent lamps. Specific waste minimization projects have been initiated to address these streams. In addition, several chemical processing equipment upgrades are being implemented. Use of contaminated lead is planned for several high energy proton beam stop applications and stainless steel encapsulated lead is being evaluated for other radiological control area applications. INEEL is assisting Los Alamos with a complete systems analysis of analytical chemistry derived mixed wastes at the CMR building and with a minimum life-cycle cost standard glovebox design. Funding for waste minimization upgrades has come from several sources: generator programs, waste management, the generator set-aside program, and Defense Programs funding to INEEL

  5. How do providers discuss the results of pediatric exome sequencing with families?

    Science.gov (United States)

    Walser, Sarah A; Werner-Lin, Allison; Mueller, Rebecca; Miller, Victoria A; Biswas, Sawona; Bernhardt, Barbara A

    2017-09-01

    This study provides preliminary data on the process and content of returning results from exome sequencing offered to children through one of the Clinical Sequencing Exploratory Research (CSER) projects. We recorded 25 sessions where providers returned diagnostic and secondary sequencing results to families. Data interpretation utilized inductive thematic analysis. Typically, providers followed a results report and discussed diagnostic findings using technical genomic and sequencing concepts. We identified four provider processes for returning results: teaching genetic concepts; assessing family response; personalizing findings; and strengthening patient-provider relationships. Sessions should reflect family interest in medical management and next steps, and minimize detailed genomic concepts. As the scope and complexity of sequencing increase, the traditional information-laden counseling model requires revision.

  6. Development of Bi-phase sodium-oxygen-hydrogen chemical equilibrium calculation program (BISHOP) using Gibbs free energy minimization method

    International Nuclear Information System (INIS)

    Okano, Yasushi

    1999-08-01

    In order to analyze the reaction heat and compounds produced by sodium combustion, a multiphase chemical equilibrium calculation program for chemical reactions among sodium, oxygen, and hydrogen is developed in this study. The program is named BISHOP, which denotes "Bi-Phase Sodium-Oxygen-Hydrogen Chemical Equilibrium Calculation Program". The Gibbs free energy minimization method is used because of its particular merits: chemical species can easily be added or changed, and many thermochemical reaction systems beyond those at constant temperature and pressure can be treated. Three new methods are developed in this study for solving the multiphase sodium reaction system: constructing the equation system by simplifying the phases, extending the Gibbs free energy minimization method to multiphase systems, and establishing an effective search method for the minimum value. Chemical compounds formed by the combustion of sodium in air are calculated using BISHOP. The calculated temperature and moisture conditions under which sodium oxide and hydroxide form qualitatively agree with experiments. Decomposition of sodium hydride is also calculated by the program. The estimated relationship between decomposition temperature and pressure closely agrees with the well-known experimental equation of Roy and Rodgers. It is concluded that BISHOP can be used to evaluate the combustion and decomposition behaviors of sodium and its compounds. The hydrogen formation conditions in the dump-tank room during a sodium leak event in an FBR are quantitatively evaluated with BISHOP. Keeping the temperature of the dump-tank room low is an effective way to suppress the formation of hydrogen. When the lower flammability limit of 4.1 mol% is chosen as the hydrogen concentration criterion, the formation reaction of sodium hydride from sodium and hydrogen is facilitated below a room temperature of 800 K, and the concentration of hydrogen
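
    A minimal sketch of the Gibbs free energy minimization idea for a single-phase ideal-gas mixture (the species set, the dimensionless standard potentials, and the initial composition are illustrative assumptions; BISHOP's multiphase treatment is far more involved):

```python
import numpy as np
from scipy.optimize import minimize

# Species: H2, O2, H2O.  g0[i] = mu_i^0 / RT, dimensionless standard
# chemical potentials at some fixed T and p (assumed values: H2O favoured).
g0 = np.array([0.0, 0.0, -25.0])
# Element balance rows: atoms of H and O per molecule of each species.
A = np.array([[2, 0, 2],                # H
              [0, 2, 1]])               # O
b = np.array([2.0, 1.0])                # start from 1 mol H2 + 0.5 mol O2

def gibbs(n):
    # Total Gibbs energy / RT of an ideal-gas mixture with mole numbers n.
    n = np.clip(n, 1e-12, None)
    return float(n @ (g0 + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.array([0.5, 0.25, 0.5]),
               bounds=[(1e-10, None)] * 3,
               constraints={'type': 'eq', 'fun': lambda n: A @ n - b},
               method='SLSQP')
n = res.x
print(np.round(n, 4))    # nearly all mass ends up as H2O
```

The element-balance constraint A·n = b is what replaces explicit reaction equations: any stoichiometry consistent with atom conservation is reachable, which is the "easily add and change chemical species" merit the abstract mentions.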

  7. Higher Integrability for Minimizers of the Mumford-Shah Functional

    Science.gov (United States)

    De Philippis, Guido; Figalli, Alessio

    2014-08-01

    We prove higher integrability for the gradient of local minimizers of the Mumford-Shah energy functional, providing a positive answer to a conjecture of De Giorgi (Free discontinuity problems in calculus of variations. Frontiers in pure and applied mathematics, North-Holland, Amsterdam, pp 55-62, 1991).

  8. Minimization and parameter estimation for seminorm regularization models with I-divergence constraints

    International Nuclear Information System (INIS)

    Teuber, T; Steidl, G; Chan, R H

    2013-01-01

    In this paper, we analyze the minimization of seminorms ‖L · ‖ on R n under the constraint of a bounded I-divergence D(b, H · ) for rather general linear operators H and L. The I-divergence is also known as Kullback–Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data but also in the case of multiplicative Gamma noise. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. A central part of this paper consists in proving relations between the parameters of I-divergence constrained and penalized problems. To solve the I-divergence constrained problem, we consider various first-order primal–dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximation problems is an I-divergence constrained least-squares problem which can be solved based on Morozov’s discrepancy principle by a Newton method. We prove that these algorithms produce not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution. Furthermore, we derive a rule for automatically setting the constraint parameter for data corrupted by multiplicative Gamma noise. The performance of the various algorithms is finally demonstrated for different image restoration tasks both for images corrupted by Poisson noise and multiplicative Gamma noise. (paper)
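
    The I-divergence-constrained proximation step mentioned above can be illustrated for the denoising case H = I and L = I: project a vector onto the I-divergence ball {x : D(b, x) ≤ τ} by solving for the Lagrange multiplier. This is a hedged sketch — it uses bisection where the paper uses a Newton method, and the toy data are assumptions:

```python
import numpy as np

def idiv(b, x):
    # Generalized Kullback-Leibler (I-) divergence D(b, x) for positive vectors.
    return float(np.sum(b * np.log(b / x) - b + x))

def x_of_nu(nu, v, b):
    # Positive root of 2x^2 + (nu - 2v)x - nu*b = 0 per component, i.e. the
    # stationarity condition 2(x - v) + nu*(1 - b/x) = 0 of the Lagrangian.
    return ((2 * v - nu) + np.sqrt((2 * v - nu) ** 2 + 8 * nu * b)) / 4

def project(v, b, tau, lo=0.0, hi=1e6, iters=100):
    # min ||x - v||^2  s.t.  D(b, x) <= tau, by bisection on the multiplier nu.
    if idiv(b, v) <= tau:
        return v                        # already feasible
    for _ in range(iters):              # D(b, x(nu)) shrinks as nu grows
        mid = 0.5 * (lo + hi)
        if idiv(b, x_of_nu(mid, v, b)) > tau:
            lo = mid
        else:
            hi = mid
    return x_of_nu(hi, v, b)

b = np.array([1.0, 2.0, 3.0])
v = np.array([3.0, 0.5, 1.0])
x = project(v, b, tau=0.1)
print(np.round(x, 3), round(idiv(b, x), 3))   # constraint active: D ≈ 0.1
```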

  9. One-parameter family of solitons from minimal surfaces

    Indian Academy of Sciences (India)

    solitons arising from a one parameter family of minimal surfaces. The process enables us to generate a new solution of the B–I equation from a given complex solution of a special type (which are abundant). We illustrate this with many examples. We find that the action or the energy of this family of solitons remains invariant ...

  10. Double quantum dot as a minimal thermoelectric generator

    OpenAIRE

    Donsa, S.; Andergassen, S.; Held, K.

    2014-01-01

    Based on numerical renormalization group calculations, we demonstrate that experimentally realized double quantum dots constitute a minimal thermoelectric generator. In the Kondo regime, one quantum dot acts as an n-type and the other one as a p-type thermoelectric device. Properly connected the double quantum dot provides a miniature power supply utilizing the thermal energy of the environment.

  11. Computers and the Environment: Minimizing the Carbon Footprint

    Science.gov (United States)

    Kaestner, Rich

    2009-01-01

    Computers can be good and bad for the environment; one can maximize the good and minimize the bad. When dealing with environmental issues, it's difficult to ignore the computing infrastructure. With an operations carbon footprint equal to the airline industry's, computer energy use is only part of the problem; everyone is also dealing with the use…

  12. Tritium pellet injection sequences for TFTR

    International Nuclear Information System (INIS)

    Houlberg, W.A.; Milora, S.L.; Attenberger, S.E.; Singer, C.E.; Schmidt, G.L.

    1983-01-01

    Tritium pellet injection into neutral deuterium, beam heated deuterium plasmas in the Tokamak Fusion Test Reactor (TFTR) is shown to be an attractive means of (1) minimizing tritium use per tritium discharge and over a sequence of tritium discharges; (2) greatly reducing the tritium load in the walls, limiters, getters, and cryopanels; (3) maintaining or improving instantaneous neutron production (Q); (4) reducing or eliminating deuterium-tritium (D-T) neutron production in non-optimized discharges; and (5) generally adding flexibility to the experimental sequences leading to optimal Q operation. Transport analyses of both compression and full-bore TFTR plasmas are used to support the above observations and to provide the basis for a proposed eight-pellet gas gun injector for the 1986 tritium experiments

  13. Guidelines for Rapid Sequence Induction and Intubation in the Emergency Department

    OpenAIRE

    Pérez Perilla, Patricia; Pontificia Universidad Javeriana-Hospital Universitario San Ignacio; Moreno Carrillo, Atilio; Pontificia Universidad Javeriana-Hospital Universitario San Ignacio; Gempeler Rueda, Fritz E.; Pontificia Universidad Javeriana-Hospital Universitario San Ignacio

    2012-01-01

    Rapid sequence intubation (RSI) is a procedure designed to minimize the time needed to secure the airway by endotracheal tube placement in emergency situations in patients at high risk of aspiration. Given this, the importance of education and training in rapid sequence intubation for the physicians responsible for recovery rooms and emergency services, and for the paramedics responsible for managing field emergencies and disasters, is unquestionable. T...

  14. Stars of bosons with non-minimal energy-momentum tensor

    International Nuclear Information System (INIS)

    Van der Bij, J.J.; Gleiser, M.

    1987-01-01

    We obtain spherically symmetric solutions for scalar fields with a non-minimal coupling ξ|φ|²R to gravity. We find, for zero-node fields of mass m, maximum masses and particle numbers of order M_max ≅ 0.73 ξ^(1/2) M_Planck²/m and N_max ≅ 0.88 ξ^(1/2) M_Planck²/m², respectively, for large positive ξ. For large negative ξ we find M_max ≅ 0.66 |ξ|^(1/2) M_Planck²/m and N_max ≅ 0.72 |ξ|^(1/2) M_Planck²/m². We also calculate the critical mass and particle number for higher radial nodes of the scalar field and find that both quantities grow approximately linearly for large node number n. (orig.)

  15. Wavelengths, classifications, and ionization energies in the isoelectronic sequences from Yb II and Yb III through Bi XV and Bi XVI

    International Nuclear Information System (INIS)

    Kaufman, V.; Sugar, J.

    1976-01-01

    Spectral observations are reported for transitions to the ground term and first excited term of the one-electron configurations in the 4f^14 5p^6 nl isoelectronic sequence from Yb II through Bi XV. Resonance lines are reported for the isoelectronic sequence Yb III through Bi XVI, in which the ground state is 4f^14 5p^6 ¹S₀ and the upper levels are the J = 1 levels of the 4f^13 5p^6 nd, 4f^14 5p^5 nd, and 4f^14 5p^5 ns configurations. The wavelengths fall in the range 70–3700 Å. The spectra were produced by means of sliding and triggered spark discharges and photographed with 10.7 m normal and grazing incidence spectrographs. The data in the Yb III sequence demonstrate the crossing of binding energies of the 4f and 5p shells at W VII. Rydberg series terms were found in a sufficient number of cases to provide extrapolation curves through Bi XV and Bi XVI. These data enabled us to calculate ionization energies for each of these ions with an uncertainty of approximately 1% or better

  16. Gelada vocal sequences follow Menzerath’s linguistic law

    Science.gov (United States)

    Gustison, Morgan L.; Semple, Stuart; Ferrer-i-Cancho, Ramon; Bergman, Thore J.

    2016-01-01

    Identifying universal principles underpinning diverse natural systems is a key goal of the life sciences. A powerful approach in addressing this goal has been to test whether patterns consistent with linguistic laws are found in nonhuman animals. Menzerath’s law is a linguistic law that states that, the larger the construct, the smaller the size of its constituents. Here, to our knowledge, we present the first evidence that Menzerath’s law holds in the vocal communication of a nonhuman species. We show that, in vocal sequences of wild male geladas (Theropithecus gelada), construct size (sequence size in number of calls) is negatively correlated with constituent size (duration of calls). Call duration does not vary significantly with position in the sequence, but call sequence composition does change with sequence size and most call types are abbreviated in larger sequences. We also find that intercall intervals follow the same relationship with sequence size as do calls. Finally, we provide formal mathematical support for the idea that Menzerath’s law reflects compression—the principle of minimizing the expected length of a code. Our findings suggest that a common principle underpins human and gelada vocal communication, highlighting the value of exploring the applicability of linguistic laws in vocal systems outside the realm of language. PMID:27091968
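
    The core statistical claim — construct size (calls per sequence) negatively correlated with constituent size (mean call duration) — can be checked on data. The durations below are hypothetical, purely to illustrate the test:

```python
import numpy as np

# Hypothetical gelada-like call sequences (call durations in seconds).
sequences = [
    [0.62],
    [0.58, 0.55],
    [0.50, 0.47, 0.52],
    [0.45, 0.41, 0.44, 0.40],
    [0.38, 0.36, 0.40, 0.35, 0.37],
]
sizes = np.array([len(s) for s in sequences])          # construct size
mean_dur = np.array([np.mean(s) for s in sequences])   # constituent size
r = np.corrcoef(sizes, mean_dur)[0, 1]
print(round(r, 2))   # strongly negative, consistent with Menzerath's law
```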

  17. Minimal flavour violation in the quark and lepton sector and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Uhlig, S.L.

    2008-01-07

    We aim to explain the matter-antimatter asymmetry of the universe in a framework that generalizes the quark minimal flavour violation hypothesis to the lepton sector. We study the impact of CP violation present at low and high energies and investigate the existence of correlations between leptogenesis and lepton flavour violation. Furthermore, we present an alternative approach to minimal flavour violation in which the suppression of flavour-changing transitions involving quarks and leptons is governed by hierarchical fermion wave functions. (orig.)

  18. Risk management of energy system for identifying optimal power mix with financial-cost minimization and environmental-impact mitigation under uncertainty

    International Nuclear Information System (INIS)

    Nie, S.; Li, Y.P.; Liu, J.; Huang, Charley Z.

    2017-01-01

    An interval-stochastic risk management (ISRM) method is launched to control the variability of the recourse cost and to capture the notion of risk in stochastic programming. The ISRM method can examine various policy scenarios that are associated with economic penalties under uncertainties presented as probability distributions and interval values. An ISRM model is then formulated to identify the optimal power mix for Beijing's energy system. Tradeoffs between risk and cost are evaluated, indicating that any change in the targeted cost and risk level would yield different expected costs. Results reveal that the inherent uncertainty of system components and the risk attitude of decision makers have significant effects on the city's energy-supply and electricity-generation schemes as well as on system cost and probabilistic penalty. Results also disclose that importing electricity as a recourse action to compensate for local shortages would be enforced. Imported electricity would increase with a reduced risk level; under every risk level, more electricity would be imported as demand increases. The findings can facilitate the local authority in identifying desired strategies for the city's energy planning and management in association with financial-cost minimization and environmental-impact mitigation. - Highlights: • Interval-stochastic risk management method is launched to identify optimal power mix. • It is advantageous in capturing the notion of risk in stochastic programming. • Results reveal that risk attitudes can affect optimal power mix and financial cost. • Developing renewable energies would enhance the sustainability of energy management. • Import electricity as an action to compensate the local shortage would be enforced.

  19. Designing small universal k-mer hitting sets for improved analysis of high-throughput sequencing.

    Directory of Open Access Journals (Sweden)

    Yaron Orenstein

    2017-10-01

    With the rapidly increasing volume of deep sequencing data, more efficient algorithms and data structures are needed. Minimizers are a central recent paradigm that has improved various sequence analysis tasks, including hashing for faster read overlap detection, sparse suffix arrays for creating smaller indexes, and Bloom filters for speeding up sequence search. Here, we propose an alternative paradigm that can lead to substantial further improvement in these and other tasks. For integers k and L > k, we say that a set of k-mers is a universal hitting set (UHS) if every possible L-long sequence must contain a k-mer from the set. We develop a heuristic called DOCKS to find a compact UHS, which works in two phases: the first phase is solved optimally, and for the second we propose several efficient heuristics, trading set size for speed and memory. The use of heuristics is motivated by showing the NP-hardness of a closely related problem. We show that DOCKS works well in practice and produces UHSs that are very close to a theoretical lower bound. We present results for various values of k and L, and by applying them to real genomes we show that UHSs indeed improve over minimizers. In particular, DOCKS uses less than 30% of the 10-mers needed to span the human genome compared to minimizers. The software and computed UHSs are freely available at github.com/Shamir-Lab/DOCKS/ and acgt.cs.tau.ac.il/docks/, respectively.
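
    The universal-hitting-set definition can be verified directly by brute force for small k and L (DOCKS itself uses graph-based heuristics rather than enumeration; this check is only illustrative):

```python
from itertools import product

def is_uhs(kmers, k, L, alphabet="ACGT"):
    # A set of k-mers is a UHS for (k, L) if every L-long sequence over the
    # alphabet contains at least one k-mer from the set.
    kmers = set(kmers)
    for seq in product(alphabet, repeat=L):
        s = "".join(seq)
        if not any(s[i:i + k] in kmers for i in range(L - k + 1)):
            return False   # found an L-mer the set misses
    return True

# All 16 2-mers trivially hit every 4-long sequence ...
all_2mers = ["".join(p) for p in product("ACGT", repeat=2)]
print(is_uhs(all_2mers, 2, 4))                               # True
# ... but dropping the homopolymer "AA" leaves "AAAA" unhit.
print(is_uhs([m for m in all_2mers if m != "AA"], 2, 4))     # False
```

Brute force costs |alphabet|^L checks, which is exactly why compact UHSs must be found heuristically for realistic k and L.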

  20. Constraining non-minimally coupled tachyon fields by the Noether symmetry

    International Nuclear Information System (INIS)

    De Souza, Rudinei C; Kremer, Gilberto M

    2009-01-01

    A model for a homogeneous and isotropic Universe whose gravitational sources are a pressureless matter field and a tachyon field non-minimally coupled to the gravitational field is analyzed. The Noether symmetry is used to find expressions for the potential density and for the coupling function, and it is shown that both must be exponential functions of the tachyon field. Two cosmological solutions are investigated: (i) for the early Universe whose only source of gravitational field is a non-minimally coupled tachyon field which behaves as an inflaton and leads to an exponential accelerated expansion and (ii) for the late Universe whose gravitational sources are a pressureless matter field and a non-minimally coupled tachyon field which plays the role of dark energy and is responsible for the decelerated-accelerated transition period.

  1. Investigations on quantum mechanics with minimal length

    International Nuclear Information System (INIS)

    Chargui, Yassine

    2009-01-01

    We consider a modified quantum mechanics where the coordinates and momenta are assumed to satisfy a non-standard commutation relation of the form [X_i, P_j] = iħ(δ_ij(1 + βP²) + β′P_iP_j). Such an algebra results in a generalized uncertainty relation which leads to the existence of a minimal observable length. Moreover, it incorporates UV/IR mixing and a noncommutative position space. We analyse the possible representations in terms of differential operators. The latter are used to study the low-energy effects of the minimal length by considering different quantum systems: the harmonic oscillator, the Klein-Gordon oscillator, the spinless Salpeter Coulomb problem, and the Dirac equation with a linear confining potential. We also discuss whether such effects are observable in precision measurements on a relativistic electron trapped in a strong magnetic field.
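
    The step from the modified algebra to a minimal observable length can be made explicit in the one-dimensional case (taking β′ = 0 and ⟨P⟩ = 0 for simplicity, so that ⟨P²⟩ = (ΔP)²):

```latex
\begin{align}
  [X,P] = i\hbar\bigl(1+\beta P^{2}\bigr)
  \quad\Longrightarrow\quad
  \Delta X\,\Delta P \;\ge\; \frac{\hbar}{2}\bigl(1+\beta\,(\Delta P)^{2}\bigr),
\end{align}
so that
\begin{align}
  \Delta X \;\ge\; \frac{\hbar}{2}\left(\frac{1}{\Delta P}+\beta\,\Delta P\right)
  \;\ge\; \hbar\sqrt{\beta},
\end{align}
```

with the bound saturated at ΔP = 1/√β: no state can be localized more sharply than ΔX_min = ħ√β, which is the minimal observable length the abstract refers to.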

  2. Minimally refined biomass fuel. [carbohydrate-water-alcohol mixture

    Energy Technology Data Exchange (ETDEWEB)

    Pearson, R.K.; Hirschfeld, T.B.

    1981-03-26

    A minimally refined fluid composition, suitable as a fuel mixture and derived from biomass material, is comprised of one or more water-soluble carbohydrates such as sucrose, one or more alcohols having less than four carbons, and water. The carbohydrate provides the fuel source; water solubilizes the carbohydrate; and the alcohol aids in the combustion of the carbohydrate and reduces the viscosity of the carbohydrate/water solution. Because less energy is required to obtain the carbohydrate from the raw biomass than to produce alcohol, an overall energy savings is realized compared to fuels employing alcohol as the primary fuel.

  3. Holographic fluctuations and the principle of minimal complexity

    Energy Technology Data Exchange (ETDEWEB)

    Chemissany, Wissam [Institut für Theoretische Physik, Leibniz Universität Hannover,Appelstr. 2, 30167 Hannover (Germany); Department of Mechanical Engineering, MIT,Cambridge MA 02139 (United States); Osborne, Tobias J. [Institut für Theoretische Physik, Leibniz Universität Hannover,Appelstr. 2, 30167 Hannover (Germany)

    2016-12-14

    We discuss, from a quantum information perspective, recent proposals of Maldacena, Ryu, Takayanagi, van Raamsdonk, Swingle, and Susskind that spacetime is an emergent property of the quantum entanglement of an associated boundary quantum system. We review the idea that the informational principle of minimal complexity determines a dual holographic bulk spacetime from a minimal quantum circuit U preparing a given boundary state from a trivial reference state. We describe how this idea may be extended to determine the relationship between the fluctuations of the bulk holographic geometry and the fluctuations of the boundary low-energy subspace. In this way we obtain, for every quantum system, an Einstein-like equation of motion for what might be interpreted as a bulk gravity theory dual to the boundary system.

  4. Computational analysis of sequence selection mechanisms.

    Science.gov (United States)

    Meyerguz, Leonid; Grasso, Catherine; Kleinberg, Jon; Elber, Ron

    2004-04-01

    Mechanisms leading to gene variations are responsible for the diversity of species and are important components of the theory of evolution. One constraint on gene evolution is that of protein foldability; the three-dimensional shapes of proteins must be thermodynamically stable. We explore the impact of this constraint and calculate properties of foldable sequences using 3660 structures from the Protein Data Bank. We seek a selection function that receives sequences as input, and outputs survival probability based on sequence fitness to structure. We compute the number of sequences that match a particular protein structure with energy lower than the native sequence, the density of the number of sequences, the entropy, and the "selection" temperature. The mechanism of structure selection for sequences longer than 200 amino acids is approximately universal. For shorter sequences, it is not. We speculate on concrete evolutionary mechanisms that show this behavior.
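
    A toy version of the counting computation — a fixed contact structure, a pairwise energy table, and exhaustive enumeration of a two-letter sequence space — might look like the following. All numbers, the contact map, and the HP-style energies are assumptions for illustration, far simpler than the paper's all-atom setup:

```python
from itertools import product

alphabet, n = "HP", 8
contacts = [(0, 5), (1, 4), (2, 7), (3, 6)]   # hypothetical structure
eps = {  # pairwise contact energies (assumed, HP-model-like)
    ("H", "H"): -1.0, ("H", "P"): 0.0,
    ("P", "H"): 0.0, ("P", "P"): 0.3,
}

def energy(seq):
    # Sequence energy = sum of contact-pair interaction terms.
    return sum(eps[(seq[i], seq[j])] for i, j in contacts)

native = "HHHHHPPP"
e_native = energy(native)
below = sum(1 for s in product(alphabet, repeat=n)
            if energy("".join(s)) < e_native)
print(e_native, below, 2 ** n)   # fraction below native gauges how selective
                                 # the structure is for its sequence
```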

  5. Generalised teleparallel quintom dark energy non-minimally coupled with the scalar torsion and a boundary term

    Science.gov (United States)

    Bahamonde, Sebastian; Marciu, Mihai; Rudra, Prabir

    2018-04-01

    Within this work, we propose a new generalised quintom dark energy model in the teleparallel alternative of general relativity theory, by considering a non-minimal coupling between the scalar fields of a quintom model with the scalar torsion component T and the boundary term B. In the teleparallel alternative of general relativity theory, the boundary term represents the divergence of the torsion vector, B = 2∇_μ T^μ, and is related to the Ricci scalar R and the torsion scalar T by the fundamental relation R = -T + B. We have investigated the dynamical properties of the present quintom scenario in the teleparallel alternative of general relativity theory by performing a dynamical system analysis in the case of decomposable exponential potentials. The study analysed the structure of the phase space, revealing the fundamental dynamical effects of the scalar torsion and boundary couplings in the case of a more general quintom scenario. Additionally, a numerical approach to the model is presented to analyse the cosmological evolution of the system.

  6. Separations: The path to waste minimization

    International Nuclear Information System (INIS)

    Bell, J.T.

    1992-01-01

    Waste materials are usually composed of large amounts of innocuous and frequently useful components mixed with lesser amounts of one or more hazardous components. The ultimate path to waste minimization is to separate the lesser quantities of hazardous components from the innocuous components and then recycle the useful components. This vision is so simple that one would expect everyone to manage waste properly. Several parameters interfere with proper waste management and encourage the "sweep it under the rug" or "bury it all" attitudes, both of which delay and complicate proper waste management. The two primary parameters that interfere with proper waste management are these: economics drives a process toward a product without concern for waste minimization, and emergency needs for immediate production of a product usually delay proper waste management. In recent years a third parameter has also interfered with proper waste management: quick relief of waste insults to political and public perceptions promotes the "bury it all" attitude. A fourth parameter can promote better waste management in any scenario that suffers from any or all of the first three: separations technology can minimize wastes when its application is not voided by the influence of the first three parameters. The US Department of Energy's management of nuclear waste has been seriously affected by the above four parameters. This paper includes several points about how the generation and management of DOE wastes have been, and continue to be, affected by these parameters. Particular separations technologies for minimizing the DOE wastes that must be stored for long periods are highlighted.

  7. Annual Waste Minimization Summary Report, Calendar Year 2008

    International Nuclear Information System (INIS)

    2009-01-01

    This report summarizes the waste minimization efforts undertaken by National Security Technologies, LLC (NSTec), for the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office (NNSA/NSO), during calendar year 2008. This report was developed in accordance with the requirements of the Nevada Test Site (NTS) Resource Conservation and Recovery Act (RCRA) Permit (No. NEV HW0021), and as clarified in a letter dated April 21, 1995, from Paul Liebendorfer of the Nevada Division of Environmental Protection to Donald Elle of the U.S. Department of Energy, Nevada Operations Office. The NNSA/NSO Pollution Prevention (P2) Program establishes a process to reduce the volume and toxicity of waste generated by NNSA/NSO activities and ensures that proposed methods of treatment, storage, and/or disposal of waste minimize potential threats to human health and the environment. The following information provides an overview of the P2 Program, major P2 accomplishments during the reporting year, a comparison of the current year waste generation to prior years, and a description of efforts undertaken during the year to reduce the volume and toxicity of waste generated by the NNSA/NSO

  8. Annual Waste Minimization Summary Report, Calendar Year 2009

    International Nuclear Information System (INIS)

    2010-01-01

    This report summarizes the waste minimization efforts undertaken by National Security Technologies, LLC, for the U. S. Department of Energy, National Nuclear Security Administration Nevada Site Office (NNSA/NSO), during calendar year 2009. This report was developed in accordance with the requirements of the Nevada Test Site Resource Conservation and Recovery Act Permit (No. NEV HW0021), and as clarified in a letter dated April 21, 1995, from Paul Liebendorfer of the Nevada Division of Environmental Protection to Donald Elle of the U.S. Department of Energy, Nevada Operations Office. The NNSA/NSO Pollution Prevention (P2) Program establishes a process to reduce the volume and toxicity of waste generated by NNSA/NSO activities and ensures that proposed methods of treatment, storage, and/or disposal of waste minimize potential threats to human health and the environment. The following information provides an overview of the P2 Program, major P2 accomplishments during the reporting year, a comparison of the current year waste generation to prior years, and a description of efforts undertaken during the year to reduce the volume and toxicity of waste generated by NNSA/NSO.

  9. Annual Waste Minimization Summary Report Calendar Year 2007

    International Nuclear Information System (INIS)

    NSTec Environmental Management

    2008-01-01

    This report summarizes the waste minimization efforts undertaken by National Security Technologies, LLC (NSTec), for the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office (NNSA/NSO), during calendar year (CY) 2007. This report was developed in accordance with the requirements of the Nevada Test Site (NTS) Resource Conservation and Recovery Act (RCRA) Permit (number NEV HW0021), and as clarified in a letter dated April 21, 1995, from Paul Liebendorfer of the Nevada Division of Environmental Protection to Donald Elle of the U.S. Department of Energy, Nevada Operations Office. The NNSA/NSO Pollution Prevention (P2) Program establishes a process to reduce the volume and toxicity of waste generated by the NNSA/NSO and ensures that proposed methods of treatment, storage, and/or disposal of waste minimize potential threats to human health and the environment. The following information provides an overview of the P2 Program, major P2 accomplishments during the reporting year, a comparison of the current year waste generation to prior years, and a description of efforts undertaken during the year to reduce the volume and toxicity of waste generated by the NNSA/NSO

  10. Energy-aware design of digital systems

    Energy Technology Data Exchange (ETDEWEB)

    Gruian, F.

    2000-02-01

    Power and energy consumption are important issues in many digital applications, for reasons such as packaging cost and battery life-span. With the development of portable computing and communication, an increasing number of research groups are addressing power and energy related issues at various stages during the design process. Most of the work done in this area focuses on lower abstraction levels, such as gate or transistor level. Ideally, a power and energy-efficient design flow should consider the power and energy issues at every stage in the design process. Therefore, power- and energy-aware methods that are applicable early in the design process are required. Following this trend, the thesis presents two high-level design methods addressing power and energy consumption minimization. The first of the two approaches we describe targets power consumption minimization during behavioral synthesis. This is carried out by minimizing the switching activity, while taking the correlations between signals into account. The second approach performs energy consumption minimization during system-level design, by choosing the most energy-efficient schedule and configuration of resources. Both methods make use of the constraint programming paradigm to model the problems in an elegant manner. The experimental results presented in this thesis show the impact of addressing the power and energy related issues early in the design process.
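    The switching activity minimized by the first method can be made concrete with a small sketch: dynamic power in CMOS scales with the number of bit toggles on a bus, so a schedule that places correlated operand values back to back toggles fewer bits. The function and the operand values below are illustrative, not taken from the thesis.

```python
def toggle_count(trace, width):
    # Total number of bit toggles between consecutive values on a bus:
    # a simple proxy for switching activity (dynamic power ~ alpha*C*V^2*f).
    mask = (1 << width) - 1
    return sum(bin((a ^ b) & mask).count("1")
               for a, b in zip(trace, trace[1:]))

# Two schedules of the same operand values: placing correlated values
# back to back (low) toggles far fewer bus bits than alternating them (high).
low = [0x0F, 0x0E, 0x0F, 0x0E]
high = [0x0F, 0xF0, 0x0F, 0xF0]
print(toggle_count(low, 8), toggle_count(high, 8))  # 3 vs 24
```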

  11. The Biomolecule Sequencer Project: Nanopore Sequencing as a Dual-Use Tool for Crew Health and Astrobiology Investigations

    Science.gov (United States)

    John, K. K.; Botkin, D. S.; Burton, A. S.; Castro-Wallace, S. L.; Chaput, J. D.; Dworkin, J. P.; Lehman, N.; Lupisella, M. L.; Mason, C. E.; Smith, D. J.; hide

    2016-01-01

    . Consequently, nanopore-based sequencers could be made flight-ready with only minimal modifications.

  12. In silico evidence for sequence-dependent nucleosome sliding

    Energy Technology Data Exchange (ETDEWEB)

    Lequieu, Joshua; Schwartz, David C.; de Pablo, Juan J.

    2017-10-18

    Nucleosomes represent the basic building block of chromatin and provide an important mechanism by which cellular processes are controlled. The locations of nucleosomes across the genome are not random but instead depend on both the underlying DNA sequence and the dynamic action of other proteins within the nucleus. These processes are central to cellular function, and the molecular details of the interplay between DNA sequence and nucleosome dynamics remain poorly understood. In this work, we investigate this interplay in detail by relying on a molecular model, which permits development of a comprehensive picture of the underlying free energy surfaces and the corresponding dynamics of nucleosome repositioning. The mechanism of nucleosome repositioning is shown to be strongly linked to DNA sequence and directly related to the binding energy of a given DNA sequence to the histone core. It is also demonstrated that chromatin remodelers can override DNA-sequence preferences by exerting torque, and the histone H4 tail is then identified as a key component by which DNA-sequence, histone modifications, and chromatin remodelers could in fact be coupled.

  13. Equilibrium modeling of gasification: Gibbs free energy minimization approach and its application to spouted bed and spout-fluid bed gasifiers

    International Nuclear Information System (INIS)

    Jarungthammachote, S.; Dutta, A.

    2008-01-01

    Spouted beds have been found in many applications, one of which is gasification. In this paper, the gasification processes of conventional and modified spouted bed gasifiers were considered. The conventional spouted bed is a central jet spouted bed, while the modified spouted beds are the circular split spouted bed and the spout-fluid bed. The Gibbs free energy minimization method was used to predict the composition of the producer gas. The six major components, CO, CO2, CH4, H2O, H2 and N2, were determined in the mixture of the producer gas. The results showed that the carbon conversion in the gasification process plays an important role in the model. A modified model was developed by considering the carbon conversion in the constraint equations and in the energy balance calculation. The results from the modified model showed improvements. The higher heating values (HHV) were also calculated and compared with the ones from experiments. Good agreement between the calculated and experimental values of HHV was observed, especially in the case of the circular split spouted bed and the spout-fluid bed.
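    The Gibbs free energy minimization idea can be sketched on a toy two-species equilibrium A ⇌ B, for which the minimizer has a closed form to check against. This is an illustrative sketch only, not the six-component gasification model of the paper; the chemical potential and temperature values are made up.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def gibbs(x, mu_B, T):
    # Total Gibbs energy of an ideal A <-> B mixture (1 mol total),
    # with standard chemical potential mu°_A = 0 and mole fraction x of B.
    return (1 - x) * R * T * math.log(1 - x) + x * (mu_B + R * T * math.log(x))

def minimize_gibbs(mu_B, T, lo=1e-9, hi=1 - 1e-9, iters=200):
    # Golden-section search; G(x) is convex on (0, 1) for an ideal mixture.
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if gibbs(c, mu_B, T) < gibbs(d, mu_B, T):
            b = d
        else:
            a = c
    return (a + b) / 2

T, mu_B = 1000.0, -2000.0            # illustrative values
x_eq = minimize_gibbs(mu_B, T)
K = math.exp(-mu_B / (R * T))        # analytic equilibrium constant
print(round(x_eq, 4), round(K / (1 + K), 4))  # the two should agree
```

The real gasification problem adds element-balance constraints over six species, but the principle is the same: the equilibrium composition is the one that minimizes total Gibbs energy.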

  14. Minimally invasive orthognathic surgery.

    Science.gov (United States)

    Resnick, Cory M; Kaban, Leonard B; Troulis, Maria J

    2009-02-01

    Minimally invasive surgery is defined as the discipline in which operative procedures are performed in novel ways to diminish the sequelae of standard surgical dissections. The goals of minimally invasive surgery are to reduce tissue trauma and to minimize bleeding, edema, and injury, thereby improving the rate and quality of healing. In orthognathic surgery, there are two minimally invasive techniques that can be used separately or in combination: (1) endoscopic exposure and (2) distraction osteogenesis. This article describes the historical developments of the fields of orthognathic surgery and minimally invasive surgery, as well as the integration of the two disciplines. Indications, techniques, and the most current outcome data for specific minimally invasive orthognathic surgical procedures are presented.

  15. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

  16. Hybridization Capture Using Short PCR Products Enriches Small Genomes by Capturing Flanking Sequences (CapFlank)

    DEFF Research Database (Denmark)

    Tsangaras, Kyriakos; Wales, Nathan; Sicheritz-Pontén, Thomas

    2014-01-01

    , a non-negligible fraction of the resulting sequence reads are not homologous to the bait. We demonstrate that during capture, the bait-hybridized library molecules add additional flanking library sequences iteratively, such that baits limited to targeting relatively short regions (e.g. few hundred...... nucleotides) can result in enrichment across entire mitochondrial and bacterial genomes. Our findings suggest that some of the off-target sequences derived in capture experiments are non-randomly enriched, and that CapFlank will facilitate targeted enrichment of large contiguous sequences with minimal prior...

  17. A step by step selection method for the location and the size of a waste-to-energy facility targeting the maximum output energy and minimization of gate fee.

    Science.gov (United States)

    Kyriakis, Efstathios; Psomopoulos, Constantinos; Kokkotis, Panagiotis; Bourtsalas, Athanasios; Themelis, Nikolaos

    2017-06-23

    This study develops an algorithm that provides a step-by-step method for selecting the location and the size of a waste-to-energy facility, targeting the maximum output energy while also considering a basic obstacle, which in many cases is the gate fee. Various parameters were identified and evaluated in order to formulate the proposed decision-making method in the form of an algorithm. The principal simulation input is the amount of municipal solid waste (MSW) available for incineration, which, along with its net calorific value, is the most important factor for the feasibility of the plant. Moreover, the research focuses both on the parameters that could increase the energy production and on those that affect the R1 energy efficiency factor. The final gate fee is estimated through an economic analysis of the entire project, investigating both the expenses and the revenues expected from the selected site and the outputs of the facility. At this point, a number of commonly used revenue methods were included in the algorithm. The developed algorithm has been validated using three case studies in Greece: Athens, Thessaloniki, and Central Greece, where the cities of Larisa and Volos were selected for the application of the proposed decision-making tool. These case studies were selected based on a previous publication by two of the authors in which these areas were examined. Results reveal that a «solid» methodological approach to selecting the site and the size of a waste-to-energy (WtE) facility is feasible. However, maximization of the energy efficiency factor R1 requires high utilization factors, while minimization of the final gate fee requires a high R1, high metals recovery from the bottom ash, and economic exploitation of recovered raw materials, if any.
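    The R1 energy efficiency factor referred to above is defined in Annex II of the EU Waste Framework Directive (2008/98/EC). A minimal sketch of that calculation follows; the plant numbers are purely illustrative, not figures from the Greek case studies.

```python
def r1_factor(Ep, Ef, Ei, Ew):
    """R1 energy-efficiency factor (EU Waste Framework Directive, Annex II).

    Ep: annual energy produced as heat or electricity (GJ/yr), with
        electricity weighted x2.6 and heat x1.1 before this call.
    Ef: annual energy input from fuels contributing to steam production (GJ/yr)
    Ei: annual imported energy excluding Ew and Ef (GJ/yr)
    Ew: annual energy in the treated waste (net calorific value x tonnage, GJ/yr)
    """
    return (Ep - (Ef + Ei)) / (0.97 * (Ew + Ef))

# Illustrative numbers for a hypothetical plant:
Ep = 800_000.0            # GJ/yr, already weighted
Ef, Ei = 20_000.0, 30_000.0
Ew = 1_100_000.0          # e.g. 100,000 t/yr of MSW at NCV 11 GJ/t
print(round(r1_factor(Ep, Ef, Ei, Ew), 3))
```

A plant qualifies as a recovery ("R1") operation under the directive when this factor reaches the applicable threshold, which is why the abstract ties R1 to high utilization factors.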

  18. Recent developments in the DOE Waste Minimization Pollution Prevention Program

    International Nuclear Information System (INIS)

    Hancock, J.K.

    1993-01-01

    The U.S. Department of Energy (DOE) is involved in a wide variety of research and development, remediation, and production activities at more than 100 sites throughout the United States. The wastes generated cover a diverse spectrum of sanitary, hazardous, and radioactive waste streams, including typical office environments, power generation facilities, laboratories, remediation sites, production facilities, and defense facilities. The DOE's initial waste minimization activities pre-date the Pollution Prevention Act of 1990 and focused on the defense program. Little emphasis was placed on nonproduction activities. In 1991 the Office of Waste Management Operations developed the Waste Minimization Division with the intention of coordinating and expanding the waste minimization pollution prevention approach to the entire complex. The diverse nature of DOE activities has led to several unique problems in addressing the needs of waste minimization and pollution prevention. The first problem is developing a program that addresses the geographical and institutional hurdles that exist; the second is developing a monitoring and reporting mechanism that one can use to assess the overall performance of the program

  19. Hanford Site waste minimization and pollution prevention awareness program plan

    International Nuclear Information System (INIS)

    Place, B.G.

    1998-01-01

    This plan, which is required by US Department of Energy (DOE) Order 5400.1, provides waste minimization and pollution prevention guidance for all Hanford Site contractors. The plan is the primary document in a hierarchical series that includes the Hanford Site Waste Minimization and Pollution Prevention Awareness Program Plan, prime contractor implementation plans, and the Hanford Site Guide for Preparing and Maintaining Generator Group Pollution Prevention Program Documentation (DOE-RL, 1997a) describing programs required by Resource Conservation and Recovery Act of 1976 (RCRA) 3002(b) and 3005(h) (RCRA and EPA, 1994). Items discussed include the pollution prevention policy and regulatory background, organizational structure, the major objectives and goals of Hanford Site's pollution prevention program, and an itemized description of the Hanford Site pollution prevention program. The document also includes US Department of Energy, Richland Operations Office's (RL's) statement of policy on pollution prevention as well as a listing of regulatory drivers that require a pollution prevention program.

  20. Waste minimization applications at a remediation site

    International Nuclear Information System (INIS)

    Allmon, L.A.

    1995-01-01

    The Fernald Environmental Management Project (FEMP), owned by the Department of Energy, was used for the processing of uranium. In 1989 Fernald suspended production of uranium metals and was placed on the National Priorities List (NPL). The site's mission has changed from one of production to environmental restoration. Many groups necessary for producing a product were deemed irrelevant for remediation work, including Waste Minimization. Waste Minimization does not readily appear to be applicable to remediation work. Environmental remediation is designed to correct adverse impacts to the environment from past operations and generates significant amounts of waste requiring management. The premise of pollution prevention is to avoid waste generation, thus remediation is in direct conflict with this premise. Although greater amounts of waste will be generated during environmental remediation, treatment capacities are not always available and disposal is becoming more difficult and costly. This creates the need for pollution prevention and waste minimization. Applying waste minimization principles at a remediation site is an enormous challenge. If the remediation site is also radiologically contaminated, it is an even bigger challenge. Innovative techniques and ideas must be utilized to achieve reductions in the amount of waste that must be managed or dispositioned. At Fernald the waste minimization paradigm was shifted from focusing efforts on source reduction to focusing efforts on recycle/reuse by inverting the EPA waste management hierarchy. A fundamental difference at remediation sites is that source reduction has limited applicability to legacy wastes but can be applied successfully to secondary waste generation. The bulk of measurable waste reduction will be achieved by the recycle/reuse of primary wastes and by segregation and decontamination of secondary wastestreams. Each effort must be measured in terms of being economically and ecologically beneficial.

  1. Pulse sequences and visualization of instruments

    International Nuclear Information System (INIS)

    Merkle, E.M.; Ulm Univ.; Wendt, M.; Chung, Y.C.; Duerk, J.L.; University Hospitals of Cleveland and Case Western Reserve University, OH; Lewin, J.S.

    1998-01-01

    While initially advocated primarily for intrasurgical visualization (e.g., craniotomy), interventional MRI rapidly evolved into roles in image-guided localization for needle-based procedures, minimally invasive neurosurgical procedures, and thermal ablation of cancer. In this context, MRI pulse sequences and scanning methods serve one of four primary roles: (1) speed improvement, (2) device localization, (3) anatomy/lesion differentiation and (4) temperature sensitivity. The first part of this manuscript deals with passive visualization of MR-compatible needles and the effects of field strength, sequence design, and orientation of the needle relative to the static magnetic field of the scanner. Issues and recommendations are given for low-field as well as high-field scanners. The second part contains methods reported to achieve improved acquisition efficiency over conventional phase encoding (wavelets, locally focused imaging, singular value decomposition and keyhole imaging). Finally, the last part of the manuscript reports the current status of thermosensitive sequences and their dependence on spin-lattice relaxation time (T1), water diffusion coefficient (D) and proton chemical shift (δ). (orig.)

  2. Interventional MRI of the breast: minimally invasive therapy

    International Nuclear Information System (INIS)

    Hall-Craggs, M.A.

    2000-01-01

    In recent years a variety of minimally invasive therapies have been applied to the treatment of breast lesions. These therapies include thermal treatments (interstitial laser coagulation, focused ultrasound, radiofrequency and cryotherapy), percutaneous excision, and interstitial radiotherapy. Magnetic resonance has been used in these treatments to visualize lesions before, during and after therapy and to guide interventions. ''Temperature-sensitive'' sequences have shown changes with thermal ablation which broadly correlate with areas of tumour necrosis. Consequently, MR has the potential to monitor treatment at the time of therapy. To date, experience in the treatment of breast cancer has been restricted to small studies. Large controlled studies are required to validate the efficacy and safety of these therapies in malignant disease. (orig.)

  3. Starting-up sequence of the AWEC-60 wind turbine

    International Nuclear Information System (INIS)

    Avia, F.; Cruz, M. de la

    1991-01-01

    One of the most critical phases of wind turbine operation is the start-up sequence and the connection to the grid, because the resulting loads can be several times the loads during operation at rated conditions. For this reason, the control strategy is very important during the start-up sequence in order to minimize the loads on the machine. For this purpose it is necessary to analyze the behaviour of the wind turbine during that sequence under different wind and machine conditions. This report presents graphic information about fifty start-up sequences of the AWEC-60 wind turbine of 60 m diameter and 1200 kW rated power, recorded in April 1991 and covering the whole operation range between cut-in and cut-out wind speed. (Author) 2 refs

  4. IMPORTANCE, Minimal Cut Sets and System Availability from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Lambert, H. W.

    1987-01-01

    1 - Description of problem or function: IMPORTANCE computes various measures of probabilistic importance of basic events and minimal cut sets to a fault tree or reliability network diagram. The minimal cut sets, the failure rates and the fault duration times (i.e., the repair times) of all basic events contained in the minimal cut sets are supplied as input data. The failure and repair distributions are assumed to be exponential. IMPORTANCE, a quantitative evaluation code, then determines the probability of the top event and computes the importance of minimal cut sets and basic events by a numerical ranking. Two measures are computed. The first describes system behavior at one point in time; the second describes sequences of failures that cause the system to fail in time. All measures are computed assuming statistical independence of basic events. In addition, system unavailability and expected number of system failures are computed by the code. 2 - Method of solution: Seven measures of basic event importance and two measures of cut set importance can be computed. Birnbaum's measure of importance (i.e., the partial derivative) and the probability of the top event are computed using the min cut upper bound. If there are no replicated events in the minimal cut sets, then the min cut upper bound is exact. If basic events are replicated in the minimal cut sets, then based on experience the min cut upper bound is accurate if the probability of the top event is less than 0.1. Simpson's rule is used in computing the time-integrated measures of importance. Newton's method for approximating the roots of an equation is employed in the options where the importance measures are computed as a function of the probability of the top event, and a shell sort puts the output in descending order of importance
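    The min cut upper bound and Birnbaum's measure described above can be sketched in a few lines. The cut sets and failure probabilities below are illustrative, and the sketch omits the code's repair-time and time-integrated measures.

```python
def cut_prob(cut, q):
    # Probability that every basic event in a minimal cut set fails.
    p = 1.0
    for e in cut:
        p *= q[e]
    return p

def top_event_prob(cuts, q):
    # Min cut upper bound: P(top) <= 1 - prod_k (1 - P(cut_k)).
    # Exact when no basic event is replicated across cut sets.
    p = 1.0
    for cut in cuts:
        p *= 1.0 - cut_prob(cut, q)
    return 1.0 - p

def birnbaum(cuts, q, event):
    # Birnbaum importance: dP(top)/dq_e, evaluated by pinning q_e to 1 and 0.
    hi = dict(q, **{event: 1.0})
    lo = dict(q, **{event: 0.0})
    return top_event_prob(cuts, hi) - top_event_prob(cuts, lo)

# Two minimal cut sets over three basic events (illustrative numbers).
cuts = [("A", "B"), ("C",)]
q = {"A": 0.1, "B": 0.2, "C": 0.05}
print(top_event_prob(cuts, q))                 # ~0.069
for e in sorted(q):
    print(e, birnbaum(cuts, q, e))
```

Ranking events by the Birnbaum values printed above reproduces, in miniature, the numerical ranking the code produces.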

  5. Minimal Poems Written in 1979 Minimal Poems Written in 1979

    Directory of Open Access Journals (Sweden)

    Sandra Sirangelo Maggio

    2008-04-01

    Full Text Available The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I have seen a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity, in this new literary trend known as Minimalism.

  6. Neutral buoyancy is optimal to minimize the cost of transport in horizontally swimming seals.

    Science.gov (United States)

    Sato, Katsufumi; Aoki, Kagari; Watanabe, Yuuki Y; Miller, Patrick J O

    2013-01-01

    Flying and terrestrial animals should spend energy to move while supporting their weight against gravity. On the other hand, supported by buoyancy, aquatic animals can minimize the energy cost for supporting their body weight and neutral buoyancy has been considered advantageous for aquatic animals. However, some studies suggested that aquatic animals might use non-neutral buoyancy for gliding and thereby save energy cost for locomotion. We manipulated the body density of seals using detachable weights and floats, and compared stroke efforts of horizontally swimming seals under natural conditions using animal-borne recorders. The results indicated that seals had smaller stroke efforts to swim a given speed when they were closer to neutral buoyancy. We conclude that neutral buoyancy is likely the best body density to minimize the cost of transport in horizontal swimming by seals.

  7. Discretized energy minimization in a wave guide with point sources

    Science.gov (United States)

    Propst, G.

    1994-01-01

    An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.
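    The reduction of the optimal control to a sparse linear system can be illustrated generically: a discretized quadratic functional J(u) = uᵀAu/2 − bᵀu is stationary where Au = b, and when A is banded the system solves in linear time. The Thomas algorithm below is a standard sketch of this idea, not the paper's actual discretization.

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    # Thomas algorithm: O(n) direct solve of a tridiagonal system A u = b,
    # the kind of sparse system a discretized quadratic functional yields.
    # sub and sup have length n-1; diag and rhs have length n.
    n = len(diag)
    cp, dp = [0.0] * (n - 1), [0.0] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Stationarity of J(u) = u^T A u / 2 - b^T u gives A u = b; here A is the
# 1-D discrete Laplacian (illustrative, not the wave-guide operator).
u = solve_tridiagonal([-1.0] * 3, [2.0] * 4, [-1.0] * 3, [1.0, 0.0, 0.0, 1.0])
print(u)
```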

  8. Pollution prevention/waste minimization program 1998 fiscal year work plan - WBS 1.11.2.1

    International Nuclear Information System (INIS)

    Howald, S.C.; Merry, D.S.

    1997-09-01

    Pollution Prevention/Waste Minimization (P2/WMin) is the Department of Energy's preferred approach to environmental management. The P2/WMin mission is to eliminate or minimize waste generation, pollutant releases to the environment, and the use of toxic substances, and to conserve resources by implementing cost-effective pollution prevention technologies, practices, and policies.

  9. Energy entanglement relation for quantum energy teleportation

    Energy Technology Data Exchange (ETDEWEB)

    Hotta, Masahiro, E-mail: hotta@tuhep.phys.tohoku.ac.j [Department of Physics, Faculty of Science, Tohoku University, Sendai 980-8578 (Japan)

    2010-07-26

    Protocols of quantum energy teleportation (QET), while retaining causality and local energy conservation, enable the transportation of energy from a subsystem of a many-body quantum system to a distant subsystem by local operations and classical communication through ground-state entanglement. We prove two energy-entanglement inequalities for a minimal QET model. These relations help us to gain a profound understanding of entanglement itself as a physical resource by relating entanglement to energy as an evident physical resource.

  10. A hybrid metaheuristic method to optimize the order of the sequences in continuous-casting

    Directory of Open Access Journals (Sweden)

    Achraf Touil

    2016-06-01

    Full Text Available In this paper, we propose a hybrid metaheuristic algorithm to maximize the production and to minimize the processing time in steel-making and continuous casting (SCC) by optimizing the order of the sequences, where a sequence is a group of jobs with the same chemical characteristics. Based on the work of Bellabdaoui and Teghem (2006) [Bellabdaoui, A., & Teghem, J. (2006). A mixed-integer linear programming model for the continuous casting planning. International Journal of Production Economics, 104(2), 260-270.], a mixed integer linear programming model for scheduling steelmaking continuous casting production is presented to minimize the makespan. The order of the sequences in continuous casting is assumed to be fixed. The main contribution is to analyze an additional way to determine the optimal order of sequences. A hybrid method based on simulated annealing and genetic algorithm restricted by a tabu list (SA-GA-TL) is addressed to obtain the optimal order. After parameter tuning of the proposed algorithm, it is tested on different instances using a .NET application and the commercial software solver Cplex v12.5. These results are compared with those obtained by SA-TL (simulated annealing restricted by a tabu list).
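    A minimal sketch of a simulated-annealing search restricted by a tabu list (the SA-TL part of the hybrid; the genetic-algorithm component is omitted) on a toy sequence-ordering objective might look as follows. The changeover-cost matrix and all parameters are invented for illustration.

```python
import math
import random

def order_cost(order, setup):
    # Toy objective: total changeover cost between consecutive sequences,
    # a stand-in for the makespan of the casting schedule.
    return sum(setup[a][b] for a, b in zip(order, order[1:]))

def sa_tabu(setup, seed=0, T0=10.0, cooling=0.995, steps=5000, tabu_len=5):
    rng = random.Random(seed)
    n = len(setup)
    cur = list(range(n))
    best = cur[:]
    tabu = []                       # recently visited orders, forbidden
    T = T0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]        # swap two sequences
        if tuple(cand) in tabu:
            continue                                # move forbidden by tabu list
        delta = order_cost(cand, setup) - order_cost(cur, setup)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            cur = cand                              # Metropolis acceptance
            tabu.append(tuple(cur))
            tabu = tabu[-tabu_len:]                 # bounded tabu memory
            if order_cost(cur, setup) < order_cost(best, setup):
                best = cur[:]
        T *= cooling
    return best, order_cost(best, setup)

# Small symmetric changeover-cost matrix (illustrative).
setup = [[0, 3, 9, 7],
         [3, 0, 8, 2],
         [9, 8, 0, 4],
         [7, 2, 4, 0]]
order, cost = sa_tabu(setup)
print(order, cost)
```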

  11. Stable structures of Al510–800 clusters and lowest energy sequence of truncated octahedral Al clusters up to 10,000 atoms

    International Nuclear Information System (INIS)

    Wu, Xia; He, Chengdong

    2012-01-01

    Highlights: ► The stable structures of Al510–800 clusters are obtained with the NP-B potential. ► Al510–800 clusters adopt truncated octahedral (TO) growth pattern based on complete TOs at Al405, Al586, and Al711. ► The lowest energy sequence of complete TOs up to the size 10,000 is proposed. -- Abstract: The stable structures of Al510–800 clusters are obtained using the dynamic lattice searching with constructed cores (DLSc) method with the NP-B potential. According to the structural growth rule, octahedral and truncated octahedral (TO) configurations are adopted as the inner cores in the DLSc method. The results show that in the optimized structures two complete TO structures are found at Al586 and Al711. Furthermore, Al510–800 clusters adopt a TO growth pattern on complete TOs at Al405, Al586, and Al711, and the configurations of the surface atoms are investigated. On the other hand, Al clusters with complete TO motifs are studied up to the size 10,000 by the geometrical construction method. The structural characteristics of complete TOs are denoted by the term “family”, and the growth sequence of Al clusters is investigated. The lowest energy sequence of complete TOs is proposed.

  12. Correlates of minimal dating.

    Science.gov (United States)

    Leck, Kira

    2006-10-01

    Researchers have associated minimal dating with numerous factors. The present author tested shyness, introversion, physical attractiveness, performance evaluation, anxiety, social skill, social self-esteem, and loneliness to determine the nature of their relationships with 2 measures of self-reported minimal dating in a sample of 175 college students. For women, shyness, introversion, physical attractiveness, self-rated anxiety, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. For men, physical attractiveness, observer-rated social skill, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. The patterns of relationships were not identical for the 2 indicators of minimal dating, indicating the possibility that minimal dating is not a single construct as researchers previously believed. The present author discussed implications and suggestions for future researchers.

  13. Minimal Super Technicolor

    DEFF Research Database (Denmark)

    Antola, M.; Di Chiara, S.; Sannino, F.

    2011-01-01

    We introduce novel extensions of the Standard Model featuring a supersymmetric technicolor sector (supertechnicolor). As the first minimal conformal supertechnicolor model we consider N=4 Super Yang-Mills which breaks to N=1 via the electroweak interactions. This is a well defined, economical......, between unparticle physics and Minimal Walking Technicolor. We consider also other N =1 extensions of the Minimal Walking Technicolor model. The new models allow all the standard model matter fields to acquire a mass....

  14. Manufacturing of mushroom-shaped structures and its hydrophobic robustness analysis based on energy minimization approach

    Science.gov (United States)

    Wang, Li; Yang, Xiaonan; Wang, Quandai; Yang, Zhiqiang; Duan, Hui; Lu, Bingheng

    2017-07-01

    The construction of stable hydrophobic surfaces has increasingly gained attention owing to its wide range of potential applications. However, these surfaces may become wet and lose their slip effect owing to insufficient hydrophobic stability. Pillars with a mushroom-shaped tip are believed to enhance hydrophobicity stability. This work presents a facile method of manufacturing mushroom-shaped structures, where, compared with the previously used method, the modulation of the cap thickness, cap diameter, and stem height of the structures is more convenient. The effects of the development time on the cap diameter and overhanging angle are investigated and well-defined mushroom-shaped structures are demonstrated. The effect of the microstructure geometry on the contact state of a droplet is predicted by taking an energy minimization approach and is experimentally validated with nonvolatile ultraviolet-curable polymer with a low surface tension by inspecting the profiles of liquid-vapor interface deformation and tracking the trace of the receding contact line after exposure to ultraviolet light. Theoretical and experimental results show that, compared with regular pillar arrays having a vertical sidewall, the mushroom-like structures can effectively enhance hydrophobic stability. The proposed manufacturing method will be useful for fabricating robust hydrophobic surfaces in a cost-effective and convenient manner.

  15. Several Families of Sequences with Low Correlation and Large Linear Span

    Science.gov (United States)

    Zeng, Fanxin; Zhang, Zhenyu

    In DS-CDMA systems and DS-UWB radios, low correlation of spreading sequences can greatly help to minimize multiple access interference (MAI), and large linear span of spreading sequences can reduce their predictability. In this letter, new sequence sets with low correlation and large linear span are proposed. Based on the construction Tr^m_1[Tr^n_m(α^(bt) + γ_i α^(dt))]^r for generating p-ary sequences of period p^n - 1, where n = 2m, d = u·p^m ± v, b = u ± v, γ_i ∈ GF(p^n), and p is an arbitrary prime number, several methods to choose the parameter d are provided. The obtained sequences with family size p^n are of four-valued, five-valued, six-valued or seven-valued correlation, and the maximum nontrivial correlation value is (u+v-1)p^m - 1. Computer simulation shows that the linear span of the new sequences is larger than that of the sequences with Niho-type and Welch-type decimations, and similar to that of [10].
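    The role of low periodic correlation can be made concrete with the simplest classical case, a binary m-sequence, whose periodic autocorrelation is two-valued (N at zero shift, −1 elsewhere). This is a far simpler construction than the p-ary trace sequences of the letter and is included only to illustrate the correlation measure.

```python
def lfsr_mseq(taps, n, state=None):
    # Binary m-sequence from a Fibonacci LFSR with a primitive feedback
    # polynomial; the output has period 2^n - 1.
    if state is None:
        state = [1] * n             # any nonzero seed works
    period = (1 << n) - 1
    out = []
    for _ in range(period):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def periodic_autocorr(seq, tau):
    # Correlation of the +/-1-mapped sequence with its cyclic shift by tau.
    N = len(seq)
    s = [1 - 2 * b for b in seq]
    return sum(s[t] * s[(t + tau) % N] for t in range(N))

seq = lfsr_mseq(taps=[0, 3], n=4)   # x^4 + x + 1, primitive over GF(2)
print([periodic_autocorr(seq, tau) for tau in range(4)])
```

For this period-15 m-sequence the autocorrelation is 15 at shift 0 and −1 at every other shift, the ideal two-valued profile that larger low-correlation families generalize to cross-correlation between family members.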

  16. Probing gravitational non-minimal coupling with dark energy surveys

    Energy Technology Data Exchange (ETDEWEB)

    Geng, Chao-Qiang [Chongqing University of Posts and Telecommunications, Chongqing (China); National Tsing Hua University, Department of Physics, Hsinchu (China); National Center for Theoretical Sciences, Hsinchu (China); Lee, Chung-Chi [National Center for Theoretical Sciences, Hsinchu (China); Wu, Yi-Peng [Academia Sinica, Institute of Physics, Taipei (China)

    2017-03-15

    We investigate observational constraints on a specific one-parameter extension to the minimal quintessence model, where the quintessence field acquires a quadratic coupling to the scalar curvature through a coupling constant ξ. The value of ξ is highly suppressed in typical tracker models if the late-time cosmic acceleration is driven at field values near the Planck scale. We test ξ in a second class of models in which the field value today becomes a free model parameter. We use the combined data from type-Ia supernovae, the cosmic microwave background, baryon acoustic oscillations, the matter power spectrum, and weak lensing measurements, and find a best-fit value ξ = 0.289, with ξ = 0 excluded from the 95% confidence region. The effective gravitational constant G{sub eff} in the presence of a non-zero ξ is constrained to -0.003 < 1 - G{sub eff}/G < 0.033 at the same confidence level on cosmological scales, and it can be narrowed down to 1 - G{sub eff}/G < 2.2 x 10{sup -5} when combined with Solar System tests. (orig.)

  17. Arrhenius Model for Estimating the Respiration Rate of Minimally Processed Broccoli [Model Arrhenius untuk Pendugaan Laju Respirasi Brokoli Terolah Minimal]

    Directory of Open Access Journals (Sweden)

    Nurul Imamah

    2016-04-01

    Full Text Available Minimally processed broccoli is a perishable product because metabolic processes, including respiration, continue during storage. The respiration rate varies with the commodity and the storage temperature. The purposes of this research were to characterize the respiration pattern of minimally processed broccoli during storage, to study the effect of storage temperature on the respiration rate, and to model the relationship between respiration rate and temperature with the Arrhenius equation. Broccoli from the farming organization "Agro Segar" was minimally processed and its respiration rate measured. A closed-system method was used to measure the O2 and CO2 concentrations. The minimally processed broccoli was stored at temperatures of 0oC, 5oC, 10oC, and 15oC. A completely randomized design was used to analyze the respiration rate. The results show that broccoli is a climacteric vegetable, indicated by increasing O2 consumption and CO2 production during the senescence phase. The respiration rate increased with storage temperature. The Arrhenius model describes the relationship between respiration rate and temperature with R2 = 0.947-0.953. The activation energy (Ea) and pre-exponential factor (Ro) obtained from the Arrhenius model can be used to predict the respiration rate of minimally processed broccoli at any storage temperature.
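    The Arrhenius prediction step described above can be sketched as follows; the fitted constants Ea and R0 used in the usage note are illustrative placeholders, not the values obtained in the study.

    ```python
    import math

    GAS_CONSTANT = 8.314  # J / (mol * K)

    def respiration_rate(temp_celsius, r0, ea):
        """Arrhenius model for respiration rate:

            R(T) = R0 * exp(-Ea / (Rg * T)),  with T in kelvin.

        r0 (pre-exponential factor, same units as the rate) and ea
        (activation energy, J/mol) come from fitting measured rates.
        """
        t_kelvin = temp_celsius + 273.15
        return r0 * math.exp(-ea / (GAS_CONSTANT * t_kelvin))
    ```

    With any positive r0 and ea (e.g., an assumed ea of 60 kJ/mol), the predicted rate increases monotonically over the study's storage temperatures 0, 5, 10, and 15 °C, matching the observed trend.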

  18. Defining reference sequences for Nocardia species by similarity and clustering analyses of 16S rRNA gene sequence data.

    Directory of Open Access Journals (Sweden)

    Manal Helal

    Full Text Available BACKGROUND: The intra- and inter-species genetic diversity of bacteria and the absence of 'reference', or the most representative, sequences of individual species present a significant challenge for sequence-based identification. The aims of this study were to determine the utility, and compare the performance, of several clustering and classification algorithms in identifying the species of 364 16S rRNA gene sequences with a defined species in GenBank, and 110 16S rRNA gene sequences with no defined species, all within the genus Nocardia. METHODS: A total of 364 16S rRNA gene sequences of Nocardia species were studied. In addition, 110 16S rRNA gene sequences assigned only to the Nocardia genus level at the time of submission to GenBank were used for machine learning classification experiments. Different clustering algorithms were compared with a novel algorithm, the linear mapping (LM) of the distance matrix. Principal Components Analysis was used for dimensionality reduction and visualization. RESULTS: The LM algorithm achieved the highest performance and classified the set of 364 16S rRNA sequences into 80 clusters, the majority of which (83.52%) corresponded with the original species. The most representative 16S rRNA sequences for individual Nocardia species were identified as 'centroids' in their respective clusters, from which the distances to all other sequences were minimized; the 110 16S rRNA gene sequences with identifications recorded only at the genus level were classified using machine learning methods. Simple kNN machine learning demonstrated the highest performance and classified Nocardia species sequences with an accuracy of 92.7% and a mean frequency of 0.578. CONCLUSION: The identification of centroids of 16S rRNA gene sequence clusters using novel distance matrix clustering enables the identification of the most representative sequences for each individual species of Nocardia and allows the quantitation of inter- and intra-species genetic diversity.
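    The centroid selection described in the results — the sequence whose summed distance to all other members of its cluster is minimal — can be sketched as below. The distance matrix is assumed to be precomputed (e.g., from pairwise 16S rRNA alignment distances); the function name is our own.

    ```python
    def cluster_centroid(dist, members):
        """Index of the cluster member whose summed distance to every other
        member is minimal -- the most representative ('reference') sequence.

        dist: symmetric distance matrix (list of lists or 2-D array)
        members: indices of the sequences belonging to one cluster
        """
        return min(members, key=lambda i: sum(dist[i][j] for j in members))
    ```

    Ties are broken by the order in which members are listed, which is a design choice; any tied member is equally representative under this criterion.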

  19. Progress toward an aberration-corrected low energy electron microscope for DNA sequencing and surface analysis.

    Science.gov (United States)

    Mankos, Marian; Shadman, Khashayar; N'diaye, Alpha T; Schmid, Andreas K; Persson, Henrik H J; Davis, Ronald W

    2012-11-01

    Monochromatic, aberration-corrected, dual-beam low energy electron microscopy (MAD-LEEM) is a novel imaging technique aimed at high resolution imaging of macromolecules, nanoparticles, and surfaces. MAD-LEEM combines three innovative electron-optical concepts in a single tool: a monochromator, a mirror aberration corrector, and dual electron beam illumination. The monochromator reduces the energy spread of the illuminating electron beam, which significantly improves spectroscopic and spatial resolution. The aberration corrector is needed to achieve subnanometer resolution at landing energies of a few hundred electronvolts. The dual flood illumination approach eliminates charging effects generated when a conventional, single-beam LEEM is used to image insulating specimens. The low landing energy of electrons in the range of 0 to a few hundred electronvolts is also critical for avoiding radiation damage, as high energy electrons with kilo-electron-volt kinetic energies cause irreversible damage to many specimens, in particular biological molecules. The performance of the key electron-optical components of MAD-LEEM, the aberration corrector combined with the objective lens and a magnetic beam separator, was simulated. Initial results indicate that an electrostatic electron mirror has negative spherical and chromatic aberration coefficients that can be tuned over a large parameter range. The negative aberrations generated by the electron mirror can be used to compensate the aberrations of the LEEM objective lens for a range of electron energies and provide a path to achieving subnanometer spatial resolution. First experimental results on characterizing DNA molecules immobilized on Au substrates in a LEEM are presented. Images obtained in a spin-polarized LEEM demonstrate that high contrast is achievable at low electron energies in the range of 1-10 eV and show that small changes in landing energy have a strong impact on the achievable contrast. The MAD-LEEM approach

  20. BLEACHING EUCALYPTUS PULPS WITH SHORT SEQUENCES

    Directory of Open Access Journals (Sweden)

    Flaviana Reis Milagres

    2011-03-01

    Full Text Available Eucalyptus spp kraft pulp, due to its high content of hexenuronic acids, is quite easy to bleach. Therefore, investigations have been made attempting to decrease the number of stages in the bleaching process in order to minimize capital costs. This study focused on the evaluation of short ECF (elemental chlorine free) and TCF (totally chlorine free) sequences for bleaching oxygen-delignified Eucalyptus spp kraft pulp to 90% ISO brightness: PMoDP (molybdenum-catalyzed acid peroxide, chlorine dioxide, and hydrogen peroxide), PMoD/P (the same stages without intermediate washing), PMoD(PO) (molybdenum-catalyzed acid peroxide, chlorine dioxide, and pressurized peroxide), D(EPO)DP (chlorine dioxide, oxidative extraction with oxygen and peroxide, chlorine dioxide, and hydrogen peroxide), PMoQ(PO) (molybdenum-catalyzed acid peroxide, DTPA, and pressurized peroxide), and XPMoQ(PO) (enzyme, molybdenum-catalyzed acid peroxide, DTPA, and pressurized peroxide). Uncommon pulp treatments, such as molybdenum-catalyzed acid peroxide (PMo) and xylanase (X) bleaching stages, were used. Among the ECF alternatives, the two-stage PMoD/P sequence proved highly cost-effective without affecting pulp quality in relation to the traditional D(EPO)DP sequence and produced better quality effluent in relation to the reference. However, a four-stage sequence, XPMoQ(PO), was required to achieve full brightness using the TCF technology. This sequence was highly cost-effective although it only produced pulp of acceptable quality.

  1. Freezing and extreme-value statistics in a random energy model with logarithmically correlated potential

    International Nuclear Information System (INIS)

    Fyodorov, Yan V; Bouchaud, Jean-Philippe

    2008-01-01

    We investigate some implications of the freezing scenario proposed by Carpentier and Le Doussal (CLD) for a random energy model (REM) with logarithmically correlated random potential. We introduce a particular (circular) variant of the model, and show that the integer moments of the partition function in the high-temperature phase are given by the well-known Dyson Coulomb gas integrals. The CLD freezing scenario allows one to use those moments for extracting the distribution of the free energy in both high- and low-temperature phases. In particular, it yields the full distribution of the minimal value in the potential sequence. This provides an explicit new class of extreme-value statistics for strongly correlated variables, manifestly different from the standard Gumbel class. (fast track communication)

  2. Freezing and extreme-value statistics in a random energy model with logarithmically correlated potential

    Energy Technology Data Exchange (ETDEWEB)

    Fyodorov, Yan V [School of Mathematical Sciences, University of Nottingham, Nottingham NG72RD (United Kingdom); Bouchaud, Jean-Philippe [Science and Finance, Capital Fund Management 6-8 Bd Haussmann, 75009 Paris (France)

    2008-09-19

    We investigate some implications of the freezing scenario proposed by Carpentier and Le Doussal (CLD) for a random energy model (REM) with logarithmically correlated random potential. We introduce a particular (circular) variant of the model, and show that the integer moments of the partition function in the high-temperature phase are given by the well-known Dyson Coulomb gas integrals. The CLD freezing scenario allows one to use those moments for extracting the distribution of the free energy in both high- and low-temperature phases. In particular, it yields the full distribution of the minimal value in the potential sequence. This provides an explicit new class of extreme-value statistics for strongly correlated variables, manifestly different from the standard Gumbel class. (fast track communication)

  3. Proceedings of the Department of Energy Defense Programs hazardous and mixed waste minimization workshop: Hazardous Waste Remedial Actions Program

    International Nuclear Information System (INIS)

    1988-09-01

    The first workshop on hazardous and mixed waste minimization was held in Las Vegas, Nevada, on July 26-28, 1988. The objective of this workshop was to establish an interchange of waste minimization strategies and successes among DOE headquarters (DOE-HQ) DP, Operations Offices, and contractors. The first day of the workshop began with presentations stressing the importance of establishing a waste minimization program at each site, as required by RCRA and the land ban restrictions, and the resulting decrease in potential liabilities associated with waste disposal. Discussions also centered on pending legislation that would create an Office of Waste Reduction in the Environmental Protection Agency (EPA). The Waste Minimization and Avoidance Study was initiated by DOE as an addition to the long-term productivity study to address the evolving requirements facing RCRA waste management activities at the DP sites, to determine how major operations will be affected by these requirements, and to determine the available strategies and options for waste minimization and avoidance. Waste minimization was defined in this study as source reduction and recycling.

  4. Waste Minimization Policy at the Romanian Nuclear Power Plant

    International Nuclear Information System (INIS)

    Andrei, V.; Daian, I.

    2002-01-01

    The radioactive waste management system at Cernavoda Nuclear Power Plant (NPP) in Romania was designed to maintain acceptable levels of safety for workers and to protect human health and the environment from exposure to unacceptable levels of radiation. In accordance with terminology of the International Atomic Energy Agency (IAEA), this system consists of the ''pretreatment'' of solid and organic liquid radioactive waste, which may include part or all of the following activities: collection, handling, volume reduction (by an in-drum compactor, if appropriate), and storage. Gaseous and aqueous liquid wastes are managed according to the ''dilute and discharge'' strategy. Taking into account the fact that treatment/conditioning and disposal technologies are still not established, waste minimization at the source is a priority environmental management objective, while waste minimization at the disposal stage is presently just a theoretical requirement for technologies to be adopted in the future. The necessary operational and maintenance procedures are in place at Cernavoda to minimize the production and contamination of waste. Administrative and technical measures are established to minimize waste volumes. Thus, an annual environmental target of a maximum 30 m3 of radioactive waste volume arising from operation and maintenance has been established. Within the first five years of operations at Cernavoda NPP, this target has been met. The successful implementation of the waste minimization policy has been accompanied by a cost reduction, while the occupational doses for plant workers have been maintained at as low as reasonably practicable levels. This paper will describe key features of the waste management system along with the actual experience that has been realized with respect to minimizing the waste volumes at the Cernavoda NPP.

  5. Accident sequence quantification with KIRAP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong

    1997-01-01

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data, and finally the accident sequence quantification. In a PSA, accident sequence quantification calculates the core damage frequency and supports importance and uncertainty analyses. Accident sequence quantification requires an understanding of the whole PSA model, because it combines all event tree and fault tree models, and an efficient computer code, because the computation is time consuming. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, the method for using KIRAP's cut set generator, and the method for performing accident sequence quantification with KIRAP. (author). 6 refs.
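    The report itself details KIRAP's procedures; purely as a generic sketch of the quantification step, the snippet below evaluates minimal cut sets with the standard minimal-cut-set upper bound, assuming independent basic events. The function names are our own, not KIRAP's.

    ```python
    from math import prod

    def cutset_probability(basic_event_probs):
        """Probability of one minimal cut set: the product of its
        basic-event probabilities (basic events assumed independent)."""
        return prod(basic_event_probs)

    def sequence_frequency(cut_sets):
        """Minimal-cut-set upper bound for an accident sequence:

            P(seq) <= 1 - prod_k (1 - P(cut set k))

        cut_sets: list of cut sets, each a list of basic-event probabilities.
        """
        complement = 1.0
        for cs in cut_sets:
            complement *= 1.0 - cutset_probability(cs)
        return 1.0 - complement
    ```

    For two single-event cut sets with probabilities 0.1 and 0.2, the bound is 1 - 0.9 × 0.8 = 0.28, slightly below the rare-event sum 0.3.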

  6. Accident sequence quantification with KIRAP

    International Nuclear Information System (INIS)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong.

    1997-01-01

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data, and finally the accident sequence quantification. In a PSA, accident sequence quantification calculates the core damage frequency and supports importance and uncertainty analyses. Accident sequence quantification requires an understanding of the whole PSA model, because it combines all event tree and fault tree models, and an efficient computer code, because the computation is time consuming. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, the method for using KIRAP's cut set generator, and the method for performing accident sequence quantification with KIRAP. (author). 6 refs.

  7. Engineering fuel reloading sequence optimization for in-core shuffling system

    International Nuclear Information System (INIS)

    Jeong, Seo G.; Suh, Kune Y.

    2008-01-01

    Optimizing the nuclear fuel reloading process is central to enhancing the economics of a nuclear power plant (NPP). There are two kinds of reloading method: in-core shuffling and ex-core shuffling. In-core shuffling has an advantage in reloading time compared with ex-core shuffling; it is, however, not easy to adopt at the moment because of the additional facilities required and the regulations involved. In-core shuffling necessitates minimizing the movement of the refueling machine, because the reloading path varies with the reloading sequence. In the past, the reloading process depended on the expert's knowledge and experience, but recent advances in computer technology have facilitated a heuristic approach to nuclear fuel reloading sequence optimization. This work presents a first of its kind study of in-core shuffling, whereas all Korean NPPs have so far adopted the ex-core shuffling method. Several plants have recently applied the in-core shuffling strategy, thereby saving approximately 24 to 48 hours of outage time. The minimization of refueling machine movement is formulated as a traveling salesman problem, which is solved with heuristic algorithms such as the ant colony algorithm and the genetic algorithm. The Systemic Engineering Reload Analysis (SERA) program was written to optimize the shuffling sequence based on these heuristic algorithms. SERA is applied to the Optimized Power Reactor 1000 MWe (OPR1000) on the assumption that the NPP adopts in-core shuffling in the foreseeable future. It is shown that the optimized shuffling sequence results in reduced reloading time. (author)
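    The abstract formulates refueling machine movement as a traveling salesman problem solved with ant colony and genetic algorithms. As a minimal stand-in for those heuristics — not SERA's actual algorithm — the sketch below applies a nearest-neighbor heuristic over assumed 2-D assembly coordinates.

    ```python
    import math

    def nearest_neighbor_tour(coords, start=0):
        """Greedy TSP heuristic: repeatedly move the refueling machine to
        the closest not-yet-visited assembly position.

        coords: list of (x, y) positions; returns the visiting order."""
        unvisited = set(range(len(coords))) - {start}
        tour = [start]
        while unvisited:
            here = tour[-1]
            nxt = min(unvisited,
                      key=lambda i: math.dist(coords[here], coords[i]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    def tour_length(coords, tour):
        """Total travel distance along the tour (open path, no return leg)."""
        return sum(math.dist(coords[a], coords[b])
                   for a, b in zip(tour, tour[1:]))
    ```

    Nearest-neighbor gives a quick baseline; metaheuristics such as ant colony optimization typically improve on it by escaping the locally greedy choices.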

  8. Health Physics and Waste Minimization Best Practices benchmarking study

    International Nuclear Information System (INIS)

    Levin, V.

    1995-01-01

    The Health Physics and Waste Minimization Best Practices project examines the usefulness of benchmarking as a tool for identifying health physics and waste minimization best practices for low-level solid radioactive waste (LLW) in the U.S. Department of Energy (DOE) complex. The goal of the project is to identify best practices from the nuclear power industry that will reduce the amount of LLW going to disposal in a cost-effective manner. An increase in worker efficiency and productivity is a secondary goal. These practices must be adaptable for implementation in the DOE complex. Once best practices are identified, ranked, and funded for implementation, a pilot implementation will be done at the Chemistry and Metallurgy Research (CMR) building at Los Alamos National Laboratory

  9. Parallel motif extraction from very long sequences

    KAUST Repository

    Sahli, Majed

    2013-01-01

    Motifs are frequent patterns used to identify biological functionality in genomic sequences, periodicity in time series, or user trends in web logs. In contrast to a lot of existing work that focuses on collections of many short sequences, modern applications require mining of motifs in one very long sequence (i.e., in the order of several gigabytes). For this case, there exist statistical approaches that are fast but inaccurate; or combinatorial methods that are sound and complete. Unfortunately, existing combinatorial methods are serial and very slow. Consequently, they are limited to very short sequences (i.e., a few megabytes), small alphabets (typically 4 symbols for DNA sequences), and restricted types of motifs. This paper presents ACME, a combinatorial method for extracting motifs from a single very long sequence. ACME arranges the search space in contiguous blocks that take advantage of the cache hierarchy in modern architectures, and achieves almost an order of magnitude performance gain in serial execution. It also decomposes the search space in a smart way that allows scalability to thousands of processors with more than 90% speedup. ACME is the only method that: (i) scales to gigabyte-long sequences; (ii) handles large alphabets; (iii) supports interesting types of motifs with minimal additional cost; and (iv) is optimized for a variety of architectures such as multi-core systems, clusters in the cloud, and supercomputers. ACME reduces the extraction time for an exact-length query from 4 hours to 7 minutes on a typical workstation; handles 3 orders of magnitude longer sequences; and scales up to 16,384 cores on a supercomputer. Copyright is held by the owner/author(s).
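    ACME's contribution is doing exact-length motif extraction cache-efficiently and in parallel; purely to fix ideas, the sketch below shows the naive serial counterpart of an exact-length query: count all substrings of length k in one pass and keep those meeting a support threshold.

    ```python
    from collections import Counter

    def exact_length_motifs(sequence, k, min_support):
        """Naive serial baseline for an exact-length motif query: count
        every substring of length k and keep those occurring at least
        min_support times."""
        counts = Counter(sequence[i:i + k]
                         for i in range(len(sequence) - k + 1))
        return {motif: c for motif, c in counts.items() if c >= min_support}
    ```

    This single pass is linear in the sequence length but, unlike ACME, makes no attempt at cache-conscious search-space layout or multi-core decomposition, which is where the reported speedups come from.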

  10. An Efficiency Improved Active Power Decoupling Circuit with Minimized Implementation Cost

    DEFF Research Database (Denmark)

    Tang, Yi; Blaabjerg, Frede

    2014-01-01

    topology does not require additional passive components, e.g. inductors or film capacitors, for ripple energy storage, because this task can be accomplished by the dc-link capacitors themselves, and therefore its implementation cost can be minimized. Another unique feature of the proposed topology

  11. The simplest non-minimal matter-geometry coupling in the f(R, T) cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Moraes, P.H.R.S. [ITA - Instituto Tecnologico de Aeronautica, Departamento de Fisica, Sao Paulo (Brazil); Sahoo, P.K. [Birla Institute of Technology and Science-Pilani, Department of Mathematics, Hyderabad (India)

    2017-07-15

    f(R, T) gravity is an extended theory of gravity in which the gravitational action contains general terms of both the Ricci scalar R and the trace of the energy-momentum tensor T. In this way, f(R, T) models are capable of describing a non-minimal coupling between geometry (through terms in R) and matter (through terms in T). In this article we construct a cosmological model from the simplest non-minimal matter-geometry coupling within the f(R, T) gravity formalism, by means of an effective energy-momentum tensor, given by the sum of the usual matter energy-momentum tensor and a dark energy contribution, with the latter coming from the matter-geometry coupling terms. We apply the energy conditions to our solutions in order to obtain a range of values for the free parameters of the model which yield a healthy and well-behaved scenario. For some values of the free parameters that satisfy the energy conditions, it is possible to predict a transition from a decelerated period of the expansion of the universe to a period of acceleration (dark energy era). We also propose further applications of this particular case of the f(R, T) formalism in order to check its reliability in fields other than cosmology. (orig.)

  12. The simplest non-minimal matter-geometry coupling in the f(R, T) cosmology

    International Nuclear Information System (INIS)

    Moraes, P.H.R.S.; Sahoo, P.K.

    2017-01-01

    f(R, T) gravity is an extended theory of gravity in which the gravitational action contains general terms of both the Ricci scalar R and the trace of the energy-momentum tensor T. In this way, f(R, T) models are capable of describing a non-minimal coupling between geometry (through terms in R) and matter (through terms in T). In this article we construct a cosmological model from the simplest non-minimal matter-geometry coupling within the f(R, T) gravity formalism, by means of an effective energy-momentum tensor, given by the sum of the usual matter energy-momentum tensor and a dark energy contribution, with the latter coming from the matter-geometry coupling terms. We apply the energy conditions to our solutions in order to obtain a range of values for the free parameters of the model which yield a healthy and well-behaved scenario. For some values of the free parameters that satisfy the energy conditions, it is possible to predict a transition from a decelerated period of the expansion of the universe to a period of acceleration (dark energy era). We also propose further applications of this particular case of the f(R, T) formalism in order to check its reliability in fields other than cosmology. (orig.)

  13. Validation of Minim typing for fast and accurate discrimination of extended-spectrum, beta-lactamase-producing Klebsiella pneumoniae isolates in tertiary care hospital.

    Science.gov (United States)

    Brhelova, Eva; Kocmanova, Iva; Racil, Zdenek; Hanslianova, Marketa; Antonova, Mariya; Mayer, Jiri; Lengerova, Martina

    2016-09-01

    Minim typing is derived from multi-locus sequence typing (MLST). It targets the same genes, but sequencing is replaced by high-resolution melt analysis. Typing can be performed by analysing six loci (6MelT), four loci (4MelT), or four loci plus sequencing of the tonB gene (HybridMelT). The aim of this study was to evaluate Minim typing for discriminating extended-spectrum beta-lactamase-producing Klebsiella pneumoniae (ESBL-KLPN) isolates at our hospital. In total, 380 isolates were analyzed. The obtained alleles were assigned according to both the 6MelT and 4MelT typing schemes. In 97 isolates, the tonB gene was sequenced to enable HybridMelT typing. We found that the presented method is suitable for quickly monitoring ESBL-KLPN isolates; results are obtained in less than 2 hours and at a lower cost than MLST. We identified a local ESBL-KLPN outbreak, and a comparison of colonizing and invasive isolates revealed long-term colonization of patients with the same strain. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Asymptotically safe non-minimal inflation

    Energy Technology Data Exchange (ETDEWEB)

    Tronconi, Alessandro, E-mail: Alessandro.Tronconi@bo.infn.it [Dipartimento di Fisica e Astronomia and INFN, Via Irnerio 46,40126 Bologna (Italy)

    2017-07-01

    We study the constraints imposed by the requirement of Asymptotic Safety on a class of inflationary models with an inflaton field non-minimally coupled to the Ricci scalar. The critical surface in the space of theories is determined by the improved renormalization group flow which takes into account quantum corrections beyond the one loop approximation. The combination of constraints deriving from Planck observations and those from theory puts severe bounds on the values of the parameters of the model and predicts a quite large tensor to scalar ratio. We finally comment on the dependence of the results on the definition of the infrared energy scale which parametrises the running on the critical surface.

  15. A reassessment of phylogenetic relationships within the phaeophyceae based on RUBISCO large subunit and ribosomal DNA sequences

    NARCIS (Netherlands)

    Draisma, S.G A; Prud'homme van Reine, W.F; Stam, W.T.; Olsen, J.L.

    To better assess the current state of phaeophycean phylogeny, we compiled all currently available rbcL, 18S, and 26S rDNA sequences from the EMBL/GenBank database and added 21 new rbcL sequences of our own. We then developed three new alignments designed to maximize taxon sampling while minimizing

  16. Interventional MRI of the breast: minimally invasive therapy

    Energy Technology Data Exchange (ETDEWEB)

    Hall-Craggs, M.A. [MR Unit, Middlesex Hospital, London (United Kingdom)

    2000-01-01

    In recent years a variety of minimally invasive therapies have been applied to the treatment of breast lesions. These therapies include thermal treatments (interstitial laser coagulation, focused ultrasound, radiofrequency and cryotherapy), percutaneous excision, and interstitial radiotherapy. Magnetic resonance has been used in these treatments to visualize lesions before, during and after therapy and to guide interventions. ''Temperature-sensitive'' sequences have shown changes with thermal ablation which broadly correlate with areas of tumour necrosis. Consequently, MR has the potential to monitor treatment at the time of therapy. To date, experience in the treatment of breast cancer has been restricted to small studies. Large controlled studies are required to validate the efficacy and safety of these therapies in malignant disease. (orig.)

  17. The feasibility study of non-invasive fetal trisomy 18 and 21 detection with semiconductor sequencing platform.

    Directory of Open Access Journals (Sweden)

    Young Joo Jeon

    Full Text Available OBJECTIVE: Recent non-invasive prenatal testing (NIPT) technologies are based on next-generation sequencing (NGS). NGS allows rapid and effective clinical diagnoses to be made with two common sequencing systems: the Illumina and Ion Torrent platforms. The majority of NIPT technology is associated with the Illumina platform. We investigated whether fetal trisomy 18 and 21 are sensitively and specifically detectable by a semiconductor sequencer, the Ion Proton. METHODS: From March 2012 to October 2013, we enrolled 155 pregnant women carrying fetuses diagnosed as at high risk of fetal defects at Xiamen Maternal & Child Health Care Hospital (Xiamen, Fujian, China). Adapter-ligated DNA libraries were analyzed on the Ion Proton™ System (Life Technologies, Grand Island, NY, USA) with an average 0.3× sequencing coverage per nucleotide. The average total raw reads per sample was 6.5 million, and the mean rate of uniquely mapped reads was 59.0%. The results of this study were derived from BWA mapping. Z-scores were used for fetal trisomy 18 and 21 detection. RESULTS: Interactive dot diagrams showed the minimal z-score values that discriminate negative from positive cases of fetal trisomy 18 and 21. For fetal trisomy 18, the minimal z-score value of 2.459 showed 100% positive predictive and negative predictive values. The minimal z-score of 2.566 was used to classify negative versus positive cases of fetal trisomy 21. CONCLUSION: These results provide evidence that fetal trisomy 18 and 21 detection can be performed with a semiconductor sequencer. Our data also suggest that a prospective study should be performed with a larger cohort of clinically diverse obstetrics patients.
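    The z-score classification described in the results can be sketched as follows: a sample's per-chromosome read fraction is standardized against a euploid reference set and compared with the reported cutoff. The reference fractions in the usage example are invented for illustration; only the cutoffs (2.459 for trisomy 18, 2.566 for trisomy 21) come from the abstract.

    ```python
    from statistics import mean, stdev

    def chromosome_zscore(sample_fraction, reference_fractions):
        """Standardize a sample's per-chromosome read fraction against a
        set of euploid reference samples."""
        return ((sample_fraction - mean(reference_fractions))
                / stdev(reference_fractions))

    def is_trisomic(sample_fraction, reference_fractions, cutoff):
        """Flag the sample when its z-score reaches the discriminating
        cutoff (2.459 for trisomy 18, 2.566 for trisomy 21 in this study)."""
        return chromosome_zscore(sample_fraction, reference_fractions) >= cutoff
    ```

    An elevated chr21 fraction well above the reference spread yields a large positive z-score and a trisomy call; a fraction at the reference mean yields z ≈ 0 and a negative call.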

  18. SSR_pipeline--computer software for the identification of microsatellite sequences from paired-end Illumina high-throughput DNA sequence data

    Science.gov (United States)

    Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (SSRs; for example, microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains three analysis modules along with a fourth control module that can be used to automate analyses of large volumes of data. The modules are used to (1) identify the subset of paired-end sequences that pass quality standards, (2) align paired-end reads into a single composite DNA sequence, and (3) identify sequences that possess microsatellites conforming to user specified parameters. Each of the three separate analysis modules also can be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc). All modules are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, Windows). The program suite relies on a compiled Python extension module to perform paired-end alignments. Instructions for compiling the extension from source code are provided in the documentation. Users who do not have Python installed on their computers or who do not have the ability to compile software also may choose to download packaged executable files. These files include all Python scripts, a copy of the compiled extension module, and a minimal installation of Python in a single binary executable. See program documentation for more information.
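    Module (3)'s repeat search — microsatellites conforming to user-specified parameters — can be approximated with a single backreference regular expression. The thresholds below are illustrative defaults, not SSR_pipeline's actual parameters, and the function is our own sketch.

    ```python
    import re

    def find_ssrs(seq, min_motif=2, max_motif=6, min_repeats=4):
        """Locate microsatellites: a motif of min_motif..max_motif bases
        tandemly repeated at least min_repeats times.

        Returns a list of (start, motif, repeat_count) tuples."""
        pattern = re.compile(
            r"([ACGT]{%d,%d}?)\1{%d,}"
            % (min_motif, max_motif, min_repeats - 1))
        return [(m.start(), m.group(1), len(m.group(0)) // len(m.group(1)))
                for m in pattern.finditer(seq)]
    ```

    The lazy quantifier on the capture group prefers the shortest repeat unit (so a (AC)n run is reported as AC repeats rather than ACAC repeats), and the backreference count is min_repeats - 1 because the captured occurrence itself is the first repeat.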

  19. Minimizing Mutual Coupling

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed herein are techniques, systems, and methods relating to minimizing mutual coupling between a first antenna and a second antenna.

  20. Long-read sequencing data analysis for yeasts.

    Science.gov (United States)

    Yue, Jia-Xing; Liti, Gianni

    2018-06-01

    Long-read sequencing technologies have become increasingly popular due to their strengths in resolving complex genomic regions. As a leading model organism with small genome size and great biotechnological importance, the budding yeast Saccharomyces cerevisiae has many isolates currently being sequenced with long reads. However, analyzing long-read sequencing data to produce high-quality genome assembly and annotation remains challenging. Here, we present a modular computational framework named long-read sequencing data analysis for yeasts (LRSDAY), the first one-stop solution that streamlines this process. Starting from the raw sequencing reads, LRSDAY can produce chromosome-level genome assembly and comprehensive genome annotation in a highly automated manner with minimal manual intervention, which is not possible using any alternative tool available to date. The annotated genomic features include centromeres, protein-coding genes, tRNAs, transposable elements (TEs), and telomere-associated elements. Although tailored for S. cerevisiae, we designed LRSDAY to be highly modular and customizable, making it adaptable to virtually any eukaryotic organism. When applying LRSDAY to an S. cerevisiae strain, it takes ∼41 h to generate a complete and well-annotated genome from ∼100× Pacific Biosciences (PacBio) data, running the basic workflow with four threads. Basic experience working within the Linux command-line environment is recommended for carrying out the analysis using LRSDAY.

  1. Quantum-Sequencing: Biophysics of quantum tunneling through nucleic acids

    Science.gov (United States)

    Casamada Ribot, Josep; Chatterjee, Anushree; Nagpal, Prashant

    2014-03-01

    Tunneling microscopy and spectroscopy have been used extensively in physical surface science to study quantum tunneling, to measure the electronic local density of states of nanomaterials, and to characterize adsorbed species. Quantum-Sequencing (Q-Seq) is a new method based on tunneling microscopy for electronic sequencing of single molecules of nucleic acids. A major goal of third-generation sequencing technologies is to develop a fast, reliable, enzyme-free, single-molecule sequencing method. Here, we present the unique ``electronic fingerprints'' for all nucleotides on DNA and RNA using Q-Seq, along with their intrinsic biophysical parameters. We have analyzed tunneling spectra for the nucleotides at different pH conditions and extracted the HOMO, LUMO and energy gap for each of them. In addition we report a number of biophysical parameters to further characterize all nucleobases (electron and hole transition voltages and energy barriers). These results highlight the robustness of Q-Seq as a technique for next-generation sequencing.

  2. Self-organization, free energy minimization, and optimal grip on a field of affordances.

    Science.gov (United States)

    Bruineberg, Jelle; Rietveld, Erik

    2014-01-01

    In this paper, we set out to develop a theoretical and conceptual framework for the new field of Radical Embodied Cognitive Neuroscience. This framework should be able to integrate insights from several relevant disciplines: theory on embodied cognition, ecological psychology, phenomenology, dynamical systems theory, and neurodynamics. We suggest that the main task of Radical Embodied Cognitive Neuroscience is to investigate the phenomenon of skilled intentionality from the perspective of the self-organization of the brain-body-environment system, while doing justice to the phenomenology of skilled action. In previous work, we have characterized skilled intentionality as the organism's tendency toward an optimal grip on multiple relevant affordances simultaneously. Affordances are possibilities for action provided by the environment. In the first part of this paper, we introduce the notion of skilled intentionality and the phenomenon of responsiveness to a field of relevant affordances. Second, we use Friston's work on neurodynamics, but embed a very minimal version of his Free Energy Principle in the ecological niche of the animal. Thus amended, this principle is helpful for understanding the embeddedness of neurodynamics within the dynamics of the system "brain-body-landscape of affordances." Next, we show how we can use this adjusted principle to understand the neurodynamics of selective openness to the environment: interacting action-readiness patterns at multiple timescales contribute to the organism's selective openness to relevant affordances. In the final part of the paper, we emphasize the important role of metastable dynamics in both the brain and the brain-body-environment system for adequate affordance-responsiveness. We exemplify our integrative approach by presenting research on the impact of Deep Brain Stimulation on affordance responsiveness of OCD patients.

  3. What drives energy consumers? : Engaging people in a sustainable energy transition

    NARCIS (Netherlands)

    Steg, Linda; Shwom, Rachel; Dietz, Thomas

    Providing clean, safe, reliable, and affordable energy for people everywhere will require converting to an energy system in which the use of fossil fuels is minimal. A sustainable energy transition means substantial changes in technology and the engagement of the engineering community. But it will

  4. Thermodynamic analysis of ethanol/water system in a fuel cell reformer with the Gibbs energy minimization method

    International Nuclear Information System (INIS)

    Lima da Silva, Aline; De Fraga Malfatti, Celia; Heck, Nestor Cesar

    2003-01-01

    The use of fuel cells is a promising technology for the conversion of chemical to electrical energy. Due to environmental concerns related to the reduction of atmospheric pollution and greenhouse gas emissions such as CO2, NOx and hydrocarbons, there has been much research on fuel cells using hydrogen as fuel. Hydrogen gas can be produced by several routes; a promising one is the steam reforming of ethanol. This route may become an important industrial process, especially for sugarcane-producing countries. Ethanol is a renewable energy source and presents several advantages over other sources related to natural availability, storage and handling safety. In order to contribute to the understanding of the steam reforming of ethanol inside the reformer, this work presents a detailed thermodynamic analysis of the ethanol/water system, in the temperature range of 500-1200 K, considering different H2O/ethanol reforming ratios. The equilibrium determinations were done with the help of the Gibbs energy minimization method using the Generalized Reduced Gradient (GRG) algorithm. Based on literature data, the species considered in the calculations were: H2, H2O, CO, CO2, CH4, C2H4, CH3CHO, C2H5OH (gas phase) and Cgr (graphite phase). The thermodynamic conditions for carbon deposition (probably soot) on the catalyst during gas reforming were analyzed, in order to establish temperature ranges and H2O/ethanol ratios where carbon precipitation is not thermodynamically feasible. Experimental results from the literature show that carbon deposition causes catalyst deactivation during reforming. This deactivation is due to encapsulating carbon that covers active phases on a catalyst substrate, e.g. Ni over Al2O3. In the present study, a mathematical relationship between the Lagrange multipliers and the carbon activity (with reference to the graphite phase) was deduced, unveiling the carbon activity in the reformer atmosphere. From this, it is possible to foresee whether soot will form.
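    The Gibbs energy minimization principle used above can be illustrated on a toy ideal-gas isomerization. This is not the paper's GRG implementation, and the chemical potentials below are invented; it only shows that the equilibrium composition falls out of minimizing G.

```python
# Toy illustration of Gibbs energy minimization (not the paper's GRG method):
# for an ideal-gas isomerization A <-> B at fixed T, the equilibrium mole
# fraction x of B minimizes
#   G(x)/RT = (1-x)*mu_A + x*mu_B + (1-x)*ln(1-x) + x*ln(x),
# where mu_A, mu_B are dimensionless standard chemical potentials (made up).
import math

MU_A, MU_B = 0.0, -2.0  # assumed standard chemical potentials, in units of RT

def gibbs(x):
    return (1 - x) * MU_A + x * MU_B + (1 - x) * math.log(1 - x) + x * math.log(x)

# Brute-force minimization over a fine grid of compositions (0 < x < 1).
xs = [i / 10000 for i in range(1, 10000)]
x_eq = min(xs, key=gibbs)

# Analytic check: dG/dx = 0 gives x/(1-x) = exp(mu_A - mu_B).
print(x_eq, math.exp(2) / (1 + math.exp(2)))
```

    A grid search replaces GRG here purely for simplicity; with constrained multicomponent systems such as the reforming mixture, a constrained optimizer and element-balance constraints (with their Lagrange multipliers) are required.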

  5. Neutral Higgs bosons in the standard model and in the minimal ...

    Indian Academy of Sciences (India)

    assumed to be CP invariant. Finally, we discuss an alternative MSSM scenario including. CP violation in the Higgs sector. Keywords. Higgs bosons; standard model; minimal supersymmetric model; searches at LEP. 1. Introduction. One of the challenges in high-energy particle physics is the discovery of Higgs bosons.

  6. Waste minimization/pollution prevention study of high-priority waste streams

    International Nuclear Information System (INIS)

    Ogle, R.B.

    1994-03-01

    Although waste minimization has been practiced by the Metals and Ceramics (M&C) Division in the past, the effort has not been uniform or formalized. To establish the groundwork for continuous improvement, the Division Director initiated a more formalized waste minimization and pollution prevention program. Formalization of the division's pollution prevention efforts in fiscal year (FY) 1993 was initiated by a more concerted effort to determine the status of waste generation from division activities. The goal for this effort was to reduce or minimize the wastes identified as having the greatest impact on human health, the environment, and costs. Two broad categories of division wastes were identified as solid/liquid wastes and those relating to energy use (primarily electricity and steam). This report presents information on the nonradioactive solid and liquid wastes generated by division activities. More specifically, the information presented was generated by teams of M&C staff members empowered by the Division Director to study specific waste streams

  7. Centroid based clustering of high throughput sequencing reads based on n-mer counts.

    Science.gov (United States)

    Solovyov, Alexander; Lipkin, W Ian

    2013-09-08

    Many problems in computational biology require alignment-free sequence comparisons. One of the common tasks involving sequence comparison is sequence clustering. Here we apply methods of alignment-free comparison (in particular, comparison using sequence composition) to the challenge of sequence clustering. We study several centroid-based algorithms for clustering sequences based on word counts. A study of their performance shows that using the k-means algorithm, with or without data whitening, is efficient from the computational point of view. A higher clustering accuracy can be achieved using the soft expectation maximization method, whereby each sequence is attributed to each cluster with a specific probability. We implement an open source tool for alignment-free clustering. It is publicly available from github: https://github.com/luscinius/afcluster. We show the utility of alignment-free sequence clustering for high throughput sequencing analysis despite its limitations. In particular, it allows one to perform assembly with reduced resources and a minimal loss of quality. The major factor affecting the performance of alignment-free read clustering is the length of the read.
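    The centroid-based idea can be sketched as follows. This is a minimal illustration, not the afcluster implementation: deterministic seeding, no whitening, no soft EM, and toy reads.

```python
# Sketch of alignment-free clustering: represent each read by its 2-mer
# frequency vector and run a plain k-means on those vectors.
import itertools

def kmer_profile(seq, k=2):
    kmers = ["".join(p) for p in itertools.product("ACGT", repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    total = max(1, len(seq) - k + 1)
    return [counts[km] / total for km in kmers]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(vectors, k=2, iters=20):
    # Deterministic seeding with the first k vectors; a real implementation
    # would use random restarts (and optionally whitening or soft EM).
    centroids = [list(v) for v in vectors[:k]]
    labels = [0] * len(vectors)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(v, centroids[c]))
                  for v in vectors]
        for c in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

reads = ["ATATATTAAT", "GCGGCCGCGG", "TTATAATATA", "CGCGGCGCCG"]
print(kmeans([kmer_profile(r) for r in reads]))
```

    On these toy reads, the AT-rich and GC-rich compositions are well separated in 2-mer space, so the two groups fall cleanly into different clusters.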

  8. New Approaches and Technologies to Sequence de novo Plant reference Genomes (2013 DOE JGI Genomics of Energy and Environment 8th Annual User Meeting)

    Energy Technology Data Exchange (ETDEWEB)

    Schmutz, Jeremy

    2013-03-01

    Jeremy Schmutz of the HudsonAlpha Institute for Biotechnology on new approaches and technologies to sequence de novo plant reference genomes at the 8th Annual Genomics of Energy and Environment Meeting on March 27, 2013 in Walnut Creek, CA.

  9. Probing gravitational non-minimal coupling with dark energy surveys

    International Nuclear Information System (INIS)

    Geng, Chao-Qiang; Lee, Chung-Chi; Wu, Yi-Peng

    2017-01-01

    We investigate observational constraints on a specific one-parameter extension to the minimal quintessence model, where the quintessence field acquires a quadratic coupling to the scalar curvature through a coupling constant ξ. The value of ξ is highly suppressed in typical tracker models if the late-time cosmic acceleration is driven at some field values near the Planck scale. We test ξ in a second class of models in which the field value today becomes a free model parameter. We use the combined data from type-Ia supernovae, the cosmic microwave background, baryon acoustic oscillations, the matter power spectrum, and weak lensing measurements, and find a best-fit value ξ > 0.289, where ξ = 0 is excluded outside the 95% confidence region. The effective gravitational constant G_eff subject to the hint of a non-zero ξ is constrained to -0.003 < 1 - G_eff/G < 0.033 at the same confidence level on cosmological scales, and it can be narrowed down to 1 - G_eff/G < 2.2 × 10⁻⁵ when combined with Solar System tests. (orig.)

  10. Limit behavior of mass critical Hartree minimization problems with steep potential wells

    Science.gov (United States)

    Guo, Yujin; Luo, Yong; Wang, Zhi-Qiang

    2018-06-01

    We consider minimizers of the following mass critical Hartree minimization problem: e_λ(N) := inf{ E_λ(u) : u ∈ H¹(ℝ^d), ‖u‖₂² = N }, where d ≥ 3, λ > 0, and the Hartree energy functional E_λ(u) is defined by E_λ(u) := ∫_{ℝ^d} |∇u(x)|² dx + λ ∫_{ℝ^d} g(x) u²(x) dx − (1/2) ∫_{ℝ^d} ∫_{ℝ^d} u²(x) u²(y) / |x − y|² dx dy. Here the steep potential g(x) satisfies 0 = g(0) = inf_{ℝ^d} g(x) ≤ g(x) ≤ 1 and 1 − g(x) ∈ L^{d/2}(ℝ^d). We prove that there exists a constant N* > 0, independent of λ and g(x), such that if N ≥ N*, then e_λ(N) does not admit minimizers for any λ > 0; if 0 < N < N*, then there exists a constant λ*(N) > 0 such that e_λ(N) admits minimizers for any λ > λ*(N) and does not admit minimizers for 0 < λ < λ*(N). For any given 0 < N < N*, the limit behavior of positive minimizers for e_λ(N) is also studied as λ → ∞, where the mass concentrates at the bottom of g(x).

  11. 2013 Los Alamos National Laboratory Hazardous Waste Minimization Report

    Energy Technology Data Exchange (ETDEWEB)

    Salzman, Sonja L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); English, Charles J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-08-24

    Waste minimization and pollution prevention are inherent goals within the operating procedures of Los Alamos National Security, LLC (LANS). The US Department of Energy (DOE) and LANS are required to submit an annual hazardous waste minimization report to the New Mexico Environment Department (NMED) in accordance with the Los Alamos National Laboratory (LANL or the Laboratory) Hazardous Waste Facility Permit. The report was prepared pursuant to the requirements of Section 2.9 of the LANL Hazardous Waste Facility Permit. This report describes the hazardous waste minimization program (a component of the overall Waste Minimization/Pollution Prevention [WMin/PP] Program) administered by the Environmental Stewardship Group (ENV-ES). This report also supports the waste minimization and pollution prevention goals of the Environmental Programs Directorate (EP) organizations that are responsible for implementing remediation activities and describes its programs to incorporate waste reduction practices into remediation activities and procedures. LANS was very successful in fiscal year (FY) 2013 (October 1-September 30) in WMin/PP efforts. Staff funded four projects specifically related to reduction of waste with hazardous constituents, and LANS won four national awards for pollution prevention efforts from the National Nuclear Security Administration (NNSA). In FY13, there was no hazardous, mixed transuranic (MTRU), or mixed low-level (MLLW) remediation waste generated at the Laboratory. More hazardous waste, MTRU waste, and MLLW were generated in FY13 than in FY12, and the majority of the increase was related to MTRU processing or lab cleanouts. These accomplishments and an analysis of the waste streams are discussed in much more detail within this report.

  12. MinGenome: An In Silico Top-Down Approach for the Synthesis of Minimized Genomes.

    Science.gov (United States)

    Wang, Lin; Maranas, Costas D

    2018-02-16

    Genome-minimized strains offer advantages as production chassis by reducing transcriptional cost, eliminating competing functions and limiting unwanted regulatory interactions. Existing approaches for identifying stretches of DNA to remove are largely ad hoc, based on information on presumably dispensable regions from experimentally determined nonessential genes and comparative genomics. Here we introduce a versatile genome reduction algorithm, MinGenome, which implements a mixed-integer linear programming (MILP) formulation to identify, in descending order of size, all dispensable contiguous sequences without affecting the organism's growth or other desirable traits. Known essential genes, or genes that cause significant fitness or performance loss, can be flagged and their deletion prohibited. MinGenome also preserves needed transcription factors and promoter regions, ensuring that retained genes will be properly transcribed, while also avoiding the simultaneous deletion of synthetic lethal pairs. The potential benefit of removing even larger contiguous stretches of DNA, if only one or two essential genes (to be reinserted elsewhere) lie within the deleted sequence, is explored. We applied the algorithm to design a minimized E. coli strain and found that we were able to recapitulate the long deletions identified in previous experimental studies and discover alternative combinations of deletions that have not yet been explored in vivo.
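    A greatly simplified, greedy stand-in for the deletion-identification step can be sketched as follows. The real MinGenome solves an MILP with promoter, transcription-factor, and synthetic-lethality constraints; the toy genome below is invented.

```python
# Toy stand-in for MinGenome's core idea: given genes in chromosomal order
# with essentiality flags, report all maximal contiguous runs of dispensable
# genes in descending order of length -- the candidate deletion targets.

def dispensable_stretches(genes):
    """genes: list of (name, essential) pairs in chromosomal order."""
    stretches, run = [], []
    for name, essential in genes:
        if essential:
            if run:
                stretches.append(run)
                run = []
        else:
            run.append(name)
    if run:
        stretches.append(run)
    return sorted(stretches, key=len, reverse=True)

# Hypothetical minimal genome: True marks an essential gene.
genome = [("a", True), ("b", False), ("c", False), ("d", False),
          ("e", True), ("f", False), ("g", True), ("h", False), ("i", False)]
print(dispensable_stretches(genome))
```

    The MILP formulation generalizes this by letting the solver weigh deletions against regulatory constraints, rather than simply taking every nonessential run.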

  13. Appearance of a Minimal Length in $e^+ e^-$ Annihilation

    CERN Document Server

    Dymnikova, Irina; Ulbricht, Jürgen

    2014-01-01

    Experimental data reveal with a 5σ significance the existence of a characteristic minimal length l_e = 1.57 × 10⁻¹⁷ cm at the scale E = 1.253 TeV in the annihilation reaction e⁺e⁻ → γγ(γ). Nonlinear electrodynamics coupled to gravity and satisfying the weak energy condition predicts, for an arbitrary gauge-invariant Lagrangian, the existence of a spinning charged electromagnetic soliton, asymptotically Kerr-Newman for a distant observer, with the gyromagnetic ratio g = 2. Its internal structure includes a rotating equatorial disk of de Sitter vacuum which has the properties of a perfect conductor and ideal diamagnetic, displays superconducting behavior, supplies a particle with the finite positive electromagnetic mass related to breaking of space-time symmetry, and gives some idea about the physical origin of a minimal length in annihilation.

  14. A Bac Library and Paired-PCR Approach to Mapping and Completing the Genome Sequence of Sulfolobus Solfataricus P2

    DEFF Research Database (Denmark)

    She, Qunxin; Confalonieri, F.; Zivanovic, Y.

    2000-01-01

    The original strategy used in the Sulfolobus solfataricus genome project was to sequence non-overlapping, or minimally overlapping, cosmid or lambda inserts without constructing a physical map. However, after only about two thirds of the genome sequence was completed, this approach became counterproductive because there was a high sequence bias in the cosmid and lambda libraries. Therefore, a new approach was devised for linking the sequenced regions which may be generally applicable. BAC libraries were constructed and terminal sequences of the clones were determined and used for both end mapping and PCR...

  15. Legal incentives for minimizing waste

    International Nuclear Information System (INIS)

    Clearwater, S.W.; Scanlon, J.M.

    1991-01-01

    Waste minimization, or pollution prevention, has become an integral component of federal and state environmental regulation. Minimizing waste offers many economic and public relations benefits. In addition, waste minimization efforts can also dramatically reduce potential criminal liability. This paper addresses the legal incentives for minimizing waste under current and proposed environmental laws and regulations

  16. Minimization of energy and surface roughness of the products machined by milling

    Science.gov (United States)

    Belloufi, A.; Abdelkrim, M.; Bouakba, M.; Rezgui, I.

    2017-08-01

    Metal cutting represents a large portion of the manufacturing industries, which makes this process one of the largest consumers of energy. Energy consumption is an indirect source of carbon footprint, since CO2 emissions come from the production of energy. High energy consumption therefore requires large energy production, which leads to high cost and a large amount of CO2 emissions. To date, much research has been done on metal cutting, but the environmental problems of the process are rarely discussed. The right selection of cutting parameters is an effective method to reduce energy consumption because of the direct relationship between energy consumption and cutting parameters in machining processes. Therefore, one of the objectives of this research is to propose an optimization strategy suitable for machining processes (milling) to achieve the optimum cutting conditions based on the criterion of the energy consumed during milling. In this paper the problem of the energy consumed in milling is solved by a chosen optimization method. The optimization is carried out according to the different requirements of the roughing and finishing processes under various technological constraints.
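    The kind of constrained selection of cutting parameters described above can be sketched as a brute-force search. Both the energy model and the roughness model below are invented placeholders, not the paper's models, and the parameter ranges are assumed.

```python
# Hedged sketch of cutting-parameter selection: minimize a hypothetical
# energy-per-part model over cutting speed v (m/min) and feed f (mm/tooth),
# subject to a hypothetical surface-roughness constraint for finishing.
# All model coefficients and limits are made up for illustration.
import itertools

def energy(v, f):
    # Made-up model: cutting energy falls with material removal rate (v*f),
    # while idle/spindle losses grow with speed.
    return 1000 / (v * f) + 0.02 * v

def roughness(f):
    # Made-up roughness model: Ra grows with the square of the feed.
    return 8.0 * f ** 2

RA_MAX = 0.2  # assumed finishing requirement on Ra (micrometres)

candidates = [(v, f)
              for v, f in itertools.product(range(50, 301, 10),
                                            [0.05, 0.1, 0.15, 0.2, 0.25])
              if roughness(f) <= RA_MAX]
best = min(candidates, key=lambda vf: energy(*vf))
print(best)
```

    The roughness constraint rules out the largest feeds, so the optimizer picks the largest feasible feed and the speed that balances cutting energy against speed-dependent losses; roughing would simply relax RA_MAX.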

  17. Pairwise local structural alignment of RNA sequences with sequence similarity less than 40%

    DEFF Research Database (Denmark)

    Havgaard, Jakob Hull; Lyngsø, Rune B.; Stormo, Gary D.

    2005-01-01

    detect two genes with low sequence similarity, where the genes are part of a larger genomic region. Results: Here we present such an approach for pairwise local alignment which is based on FOLDALIGN and the Sankoff algorithm for simultaneous structural alignment of multiple sequences. We include...... the ability to conduct mutual scans of two sequences of arbitrary length while searching for common local structural motifs of some maximum length. This drastically reduces the complexity of the algorithm. The scoring scheme includes structural parameters corresponding to those available for free energy....... The structure prediction performance for a family is typically around 0.7 using Matthews correlation coefficient. In case (2), the algorithm is successful at locating RNA families with an average sensitivity of 0.8 and a positive predictive value of 0.9 using a BLAST-like hit selection scheme. Availability...

  18. Mycoplasmas and their host: emerging and re-emerging minimal pathogens.

    Science.gov (United States)

    Citti, Christine; Blanchard, Alain

    2013-04-01

    Commonly known as mycoplasmas, bacteria of the class Mollicutes include the smallest and simplest life forms capable of self replication outside of a host. Yet, this minimalism hides major human and animal pathogens whose prevalence and occurrence have long been underestimated. Owing to advances in sequencing methods, large data sets have become available for a number of mycoplasma species and strains, providing new diagnostic approaches, typing strategies, and means for comprehensive studies. A broader picture is thus emerging in which mycoplasmas are successful pathogens having evolved a number of mechanisms and strategies for surviving hostile environments and adapting to new niches or hosts. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. MOCUS, Minimal Cut Sets and Minimal Path Sets from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.; Henry, E.B.; Marshall, N.H.

    1976-01-01

    1 - Description of problem or function: From a description of the Boolean failure logic of a system, called a fault tree, and control parameters specifying the minimal cut set length to be obtained, MOCUS determines the system failure modes, or minimal cut sets, and the system success modes, or minimal path sets. 2 - Method of solution: MOCUS uses direct resolution of the fault tree into the cut and path sets. The algorithm used starts with the main failure of interest, the top event, and proceeds to basic independent component failures, called primary events, to resolve the fault tree to obtain the minimal sets. A key point of the algorithm is that an AND gate alone always increases the number of path sets; an OR gate alone always increases the number of cut sets and increases the size of path sets. Other types of logic gates must be described in terms of AND and OR logic gates. 3 - Restrictions on the complexity of the problem: Output from MOCUS can include minimal cut and path sets for up to 20 gates
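    The top-down resolution described in point 2 can be sketched as follows. This is a minimal re-implementation of the idea, not the original MOCUS code.

```python
# Minimal top-down expansion in the spirit of MOCUS: gates map to
# ("AND" | "OR", [inputs]); expansion starts at the top event and resolves
# gates until only primary events remain, then non-minimal cut sets
# (supersets of other cut sets) are discarded.

def minimal_cut_sets(gates, top="TOP"):
    rows = [[top]]
    while any(e in gates for row in rows for e in row):
        new_rows = []
        for row in rows:
            gate = next((e for e in row if e in gates), None)
            if gate is None:
                new_rows.append(row)
                continue
            op, inputs = gates[gate]
            rest = [e for e in row if e != gate]
            if op == "AND":                  # AND: enlarge the row in place
                new_rows.append(rest + inputs)
            else:                            # OR: split the row into several
                new_rows.extend(rest + [i] for i in inputs)
        rows = new_rows
    cut_sets = {frozenset(r) for r in rows}
    return sorted((sorted(c) for c in cut_sets
                   if not any(o < c for o in cut_sets)), key=len)

tree = {"TOP": ("AND", ["G1", "G2"]),
        "G1": ("OR", ["A", "B"]),
        "G2": ("OR", ["A", "C"])}
print(minimal_cut_sets(tree))
```

    For the example tree, expansion yields the candidate sets {A}, {A,B}, {A,C}, {B,C}; removing supersets leaves the minimal cut sets {A} and {B,C}. Path sets would follow by the dual expansion (swapping the AND and OR rules).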

  20. Sequencing, lot sizing and scheduling in job shops: the common cycle approach

    NARCIS (Netherlands)

    Ouenniche, J.; Boctor, F.F.

    1998-01-01

    This paper deals with the multi-product, finite horizon, static demand, sequencing, lot sizing and scheduling problem in a job shop environment where the objective is to minimize the sum of setup and inventory holding costs while satisfying the demand with no backlogging. To solve this problem, we

  1. Is non-minimal inflation eternal?

    International Nuclear Information System (INIS)

    Feng, Chao-Jun; Li, Xin-Zhou

    2010-01-01

    The possibility that the non-minimal coupling inflation could be eternal is investigated. We calculate the quantum fluctuation of the inflaton in a Hubble time and find that it has the same value as that in the minimal case in the slow-roll limit. Armed with this result, we have studied some concrete non-minimal inflationary models including the chaotic inflation and the natural inflation, in which the inflaton is non-minimally coupled to the gravity. We find that the non-minimal coupling inflation could be eternal in some parameter spaces.

  2. Constrained energy minimization applied to apparent reflectance and single-scattering albedo spectra: a comparison

    Science.gov (United States)

    Resmini, Ronald G.; Graver, William R.; Kappus, Mary E.; Anderson, Mark E.

    1996-11-01

    Constrained energy minimization (CEM) has been applied to mapping the quantitative areal distribution of the mineral alunite in an approximately 1.8 km2 area of the Cuprite mining district, Nevada. CEM is a powerful technique for rapid quantitative mineral mapping which requires only the spectrum of the mineral to be mapped; a priori knowledge of background spectral signatures is not required. Our investigation applies CEM to calibrated radiance data converted to apparent reflectance (AR) and to single-scattering albedo (SSA) spectra. The radiance data were acquired by the 210-channel, 0.4 micrometers to 2.5 micrometers airborne Hyperspectral Digital Imagery Collection Experiment sensor. CEM applied to AR spectra assumes linear mixing of the spectra of the materials exposed at the surface. This assumption is likely invalid, as surface materials, which are often mixtures of particulates of different substances, are more properly modeled as intimate mixtures, and thus spectral mixing analyses must take account of nonlinear effects. One technique for approximating nonlinear mixing requires the conversion of AR spectra to SSA spectra. The results of CEM applied to SSA spectra are compared to those of CEM applied to AR spectra. The mineral maps produced with the SSA and AR spectra show similar, though not identical, occurrences of alunite. Alunite is slightly more widespread based on processing with the SSA spectra. Further, fractional abundances derived from the SSA spectra are, in general, higher than those derived from AR spectra. Implications for the interpretation of quantitative mineral mapping with hyperspectral remote sensing data are discussed.
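    The CEM filter itself is compact: w = R⁻¹d / (dᵀ R⁻¹ d), where R is the sample correlation matrix of the scene and d is the target spectrum, so that wᵀd = 1 while the average filter output energy is minimized. A two-band toy version is sketched below (real scenes have hundreds of bands and need a numerically careful inverse; the spectra here are invented).

```python
# Two-band sketch of the CEM filter: w = R^-1 d / (d^T R^-1 d). Only the
# target spectrum d is needed -- no background signatures. Pixel spectra
# below are made-up two-band "reflectances" for illustration.

def cem_filter(pixels, d):
    n = len(pixels)
    # Sample correlation matrix R = (1/n) * sum(x x^T), 2x2 here.
    r11 = sum(x[0] * x[0] for x in pixels) / n
    r12 = sum(x[0] * x[1] for x in pixels) / n
    r22 = sum(x[1] * x[1] for x in pixels) / n
    det = r11 * r22 - r12 * r12
    rinv = [[r22 / det, -r12 / det], [-r12 / det, r11 / det]]
    rd = [rinv[0][0] * d[0] + rinv[0][1] * d[1],
          rinv[1][0] * d[0] + rinv[1][1] * d[1]]
    scale = d[0] * rd[0] + d[1] * rd[1]          # d^T R^-1 d
    return [rd[0] / scale, rd[1] / scale]

pixels = [(0.2, 0.4), (0.25, 0.35), (0.6, 0.1), (0.55, 0.15)]  # toy spectra
target = (0.6, 0.1)                                            # the "alunite" spectrum
w = cem_filter(pixels, target)
scores = [w[0] * x[0] + w[1] * x[1] for x in pixels]
print([round(s, 3) for s in scores])
```

    By construction the filter responds with exactly 1 on a pixel matching the target spectrum, while suppressing the average response over the rest of the scene; the same filter applies unchanged whether the inputs are AR or SSA spectra.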

  3. Self-organization, free energy minimization, and optimal grip on a field of affordances

    Directory of Open Access Journals (Sweden)

    Jelle eBruineberg

    2014-08-01

    Full Text Available In this paper, we set out to develop a theoretical and conceptual framework for the new field of Radical Embodied Cognitive Neuroscience. This framework should be able to integrate insights from several relevant disciplines: theory on embodied cognition, ecological psychology, phenomenology, dynamical systems theory, and neurodynamics. We suggest that the main task of Radical Embodied Cognitive Neuroscience is to investigate the phenomenon of skilled intentionality from the perspective of the self-organization of the brain-body-environment system, while doing justice to the phenomenology of skilled action. In previous work, we have characterized skilled intentionality as the organism’s tendency towards an optimal grip on multiple relevant affordances simultaneously. Affordances are possibilities for action provided by the environment. In the first part of this paper, we introduce the notion of skilled intentionality and the phenomenon of responsiveness to a field of relevant affordances. Second, we use Friston’s work on neurodynamics, but embed a very minimal version of his Free Energy Principle in the ecological niche of the animal. Thus amended, this principle is helpful for understanding the embeddedness of neurodynamics within the dynamics of the brain-body-environment system. Next, we show how we can use this adjusted principle to understand the neurodynamics of selective openness to the environment: interacting action-readiness patterns at multiple timescales contribute to the organism’s selective openness to relevant affordances. In the final part of the paper, we emphasize the important role of metastable dynamics in both the brain and the brain-body-environment system for adequate affordance-responsiveness. We exemplify our integrative approach by presenting research on the impact of Deep Brain Stimulation on affordance responsiveness of OCD patients.

  4. Channel and Timeslot Co-Scheduling with Minimal Channel Switching for Data Aggregation in MWSNs

    Directory of Open Access Journals (Sweden)

    Sanggil Yeoum

    2017-05-01

    Full Text Available Collision-free transmission and efficient data transfer between nodes can be achieved through a set of channels in multichannel wireless sensor networks (MWSNs). While using multiple channels, we have to carefully consider channel interference, channel and time slot (resource) optimization, channel switching delay, and energy consumption. Since sensor nodes operate on low battery power, the energy consumed in channel switching becomes an important challenge. In this paper, we propose channel and time slot scheduling for minimal channel switching in MWSNs, while achieving efficient and collision-free transmission between nodes. The proposed scheme constructs a duty-cycled tree while reducing the amount of channel switching. As a next step, collision-free time slots are assigned to every node based on the minimal data collection delay. The experimental results demonstrate the validity of our scheme: it reduces the amount of channel switching by 17.5%, reduces the energy consumption for channel switching by 28%, and reduces the schedule length by 46%, as compared to existing schemes.

  5. Front end power dissipation minimization and optimal transmission rate for wireless receivers

    NARCIS (Netherlands)

    Heuvel, van den J.H.C.; Wu, Y.; Baltus, P.G.M.; Linnartz, J.P.M.G.; Roermund, van A.H.M.

    2014-01-01

    Most wireless battery-operated devices spend more energy receiving than transmitting. Hence, minimizing the power dissipation in the receiver front end, which, in many cases, is the prominent power consuming part of the receiver, is an important challenge. This paper addresses this challenge by

  6. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-11-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface. The classification of minimal families of curves can be reduced to the classification of minimal families which cover weak Del Pezzo surfaces. We classify the minimal families of weak Del Pezzo surfaces and present a table with the number of minimal families of each weak Del Pezzo surface up to Weyl equivalence. As an application of this classification we generalize some results of Schicho. We classify algebraic surfaces that carry a family of conics. We determine the minimal lexicographic degree for the parametrization of a surface that carries at least 2 minimal families. © 2014 Elsevier B.V.

  7. Hexavalent Chromium Minimization Strategy

    Science.gov (United States)

    2011-05-01

Logistics 4 Initiative - DoD Hexavalent Chromium Minimization, Non-Chrome Primer. Office of the Secretary of Defense, Hexavalent Chromium Minimization Strategy (Report Documentation Page, Form Approved OMB No. 0704-0188; title and subtitle: Hexavalent Chromium Minimization Strategy).

  8. Advanced Design of Dumbbell-shaped Genetic Minimal Vectors Improves Non-coding and Coding RNA Expression.

    Science.gov (United States)

    Jiang, Xiaoou; Yu, Han; Teo, Cui Rong; Tan, Genim Siu Xian; Goh, Sok Chin; Patel, Parasvi; Chua, Yiqiang Kevin; Hameed, Nasirah Banu Sahul; Bertoletti, Antonio; Patzel, Volker

    2016-09-01

    Dumbbell-shaped DNA minimal vectors lacking nontherapeutic genes and bacterial sequences are considered a stable, safe alternative to viral, nonviral, and naked plasmid-based gene-transfer systems. We investigated novel molecular features of dumbbell vectors aiming to reduce vector size and to improve the expression of noncoding or coding RNA. We minimized small hairpin RNA (shRNA) or microRNA (miRNA) expressing dumbbell vectors in size down to 130 bp generating the smallest genetic expression vectors reported. This was achieved by using a minimal H1 promoter with integrated transcriptional terminator transcribing the RNA hairpin structure around the dumbbell loop. Such vectors were generated with high conversion yields using a novel protocol. Minimized shRNA-expressing dumbbells showed accelerated kinetics of delivery and transcription leading to enhanced gene silencing in human tissue culture cells. In primary human T cells, minimized miRNA-expressing dumbbells revealed higher stability and triggered stronger target gene suppression as compared with plasmids and miRNA mimics. Dumbbell-driven gene expression was enhanced up to 56- or 160-fold by implementation of an intron and the SV40 enhancer compared with control dumbbells or plasmids. Advanced dumbbell vectors may represent one option to close the gap between durable expression that is achievable with integrating viral vectors and short-term effects triggered by naked RNA.

  9. Energy Investment Allowance. Energy List 2000

    International Nuclear Information System (INIS)

    2000-01-01

The title regulation (EIA, abbreviated in Dutch) offers entrepreneurs in the Netherlands financial incentives to invest in energy-efficient capital equipment and renewable energy. A minimum of 40% of the investment costs, up to a maximum of 208 million Dutch guilders, can be deducted from fiscal profits, so that less income tax or corporation tax has to be paid for one or more years. This brochure outlines what the EIA means and how it can be used. The Energy List contains brief descriptions of examples of different energy-efficient options that can be applied to qualify for the EIA.

  10. Environmental Restoration Contractor Waste Minimization and Pollution Prevention Plan

    International Nuclear Information System (INIS)

    Lewis, R.A.

    1994-11-01

    The purpose of this plan is to establish the Environmental Restoration Contractor (ERC) Waste Minimization and Pollution Prevention (WMin/P2) Program and outline the activities and schedules that will be employed to reduce the quantity and toxicity of wastes generated as a result of restoration and remediation activities. It is intended to satisfy the US Department of Energy (DOE) and other legal requirements. As such, the Pollution Prevention Awareness program required by DOE Order 5400.1 is included with the Pollution Prevention Program. This plan is also intended to aid projects in meeting and documenting compliance with the various requirements for WMin/P2, and contains the policy, objectives, strategy, and support activities of the WMin/P2 program. The basic elements of the plan are pollution prevention goals, waste assessments of major waste streams, implementation of feasible waste minimization opportunities, and a process for reporting achievements. Various pollution prevention techniques will be implemented with the support of employee training and awareness programs to reduce waste and still meet applicable requirements. Information about the Hanford Site is in the Hanford Site Waste Minimization and Pollution Prevention Awareness Program Plan

  11. Design Mixers to Minimize Effects of Erosion and Corrosion Erosion

    Directory of Open Access Journals (Sweden)

    Julian Fasano

    2012-01-01

Full Text Available A thorough review of the major parameters that affect solid-liquid slurry wear on impellers and techniques for minimizing wear is presented. These major parameters include (i) chemical environment, (ii) hardness of solids, (iii) density of solids, (iv) percent solids, (v) shape of solids, (vi) fluid regime (turbulent, transitional, or laminar), (vii) hardness of the mixer's wetted parts, (viii) hydraulic efficiency of the impeller (kinetic energy dissipation rates near the impeller blades), (ix) impact velocity, and (x) impact frequency. Techniques for minimizing the wear on impellers cover the choice of impeller, size and speed of the impeller, alloy selection, and surface coating or coverings. An example is provided as well as an assessment of the approximate life improvement.

  12. Long-term optimal energy mix planning towards high energy security and low GHG emission

    International Nuclear Information System (INIS)

    Thangavelu, Sundar Raj; Khambadkone, Ashwin M.; Karimi, Iftekhar A.

    2015-01-01

Highlights: • We develop long-term energy planning considering the future uncertain inputs. • We analyze the effect of uncertain inputs on the energy cost and energy security. • Conventional energy mix is prone to cause high energy cost and energy security issues. • Stochastic and optimal energy mix show benefits over conventional energy planning. • Nuclear option consideration reduces the energy cost and carbon emissions. - Abstract: Conventional energy planning focused on energy cost, GHG emission and renewable contribution based on future energy demand, fuel price, etc. Uncertainty in the projected variables such as energy demand, volatile fuel price and evolution of renewable technologies will influence the cost of energy when projected over a period of 15–30 years. Inaccurate projected variables could affect energy security and lead to the risk of high energy cost, high emission and low energy security. Energy security is the ability of generation capacity to meet the future energy demand. In order to minimize the risks, a generic methodology is presented to determine an optimal energy mix for a period of around 15 years. The proposed optimal energy mix is a right combination of energy sources that minimizes the risk caused by future uncertainties related to the energy sources. The proposed methodology uses stochastic optimization to address future uncertainties over a planning horizon and minimize the variations in the desired performance criteria such as energy security and costs. The developed methodology is validated using a case study for a South East Asian region with diverse fuel sources consisting of wind, solar, geothermal, coal, biomass and natural gas, etc. The derived optimal energy mix decision outperformed the conventional energy planning by remaining stable and feasible against 79% of future energy demand scenarios at the expense of a 0–10% increase in the energy cost. Including the nuclear option in the energy mix resulted 26
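The scenario-based idea in the abstract can be sketched in a few lines of Python. Everything below (two sources, three equally likely fuel-price scenarios, the variance penalty, the grid search) is invented for illustration; the paper's actual methodology is full stochastic optimization over many sources and a roughly 15-year horizon.

```python
# Toy scenario-based mix optimization; all numbers are invented.
GAS_COSTS = [50.0, 70.0, 140.0]   # per-MWh cost in 3 equally likely fuel-price scenarios
SOLAR_COST = 95.0                 # capital-dominated, scenario-independent

def mix_costs(share_gas):
    """Per-MWh cost of a gas/solar mix in each scenario."""
    return [share_gas * g + (1.0 - share_gas) * SOLAR_COST for g in GAS_COSTS]

def objective(share_gas, risk_aversion=0.01):
    """Expected cost plus a variance penalty: cheap on average is not
    enough, the mix must also stay stable across scenarios."""
    costs = mix_costs(share_gas)
    mean = sum(costs) / len(costs)
    var = sum((c - mean) ** 2 for c in costs) / len(costs)
    return mean + risk_aversion * var

# Grid search over the gas share in 1% steps.
best = min((g / 100 for g in range(101)), key=objective)
```

The risk penalty pulls the optimum away from the source that is cheapest on average toward a diversified mix, which is the qualitative point of the abstract's stochastic formulation.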

  13. 2016 Los Alamos National Laboratory Hazardous Waste Minimization Report

    Energy Technology Data Exchange (ETDEWEB)

    Salzman, Sonja L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); English, Charles Joe [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-12-02

    Waste minimization and pollution prevention are goals within the operating procedures of Los Alamos National Security, LLC (LANS). The US Department of Energy (DOE), inclusive of the National Nuclear Security Administration (NNSA) and the Office of Environmental Management, and LANS are required to submit an annual hazardous waste minimization report to the New Mexico Environment Department (NMED) in accordance with the Los Alamos National Laboratory (LANL or the Laboratory) Hazardous Waste Facility Permit. The report was prepared pursuant to the requirements of Section 2.9 of the LANL Hazardous Waste Facility Permit. This report describes the hazardous waste minimization program, which is a component of the overall Pollution Prevention (P2) Program, administered by the Environmental Stewardship Group (EPC-ES). This report also supports the waste minimization and P2 goals of the Associate Directorate of Environmental Management (ADEM) organizations that are responsible for implementing remediation activities and describes its programs to incorporate waste reduction practices into remediation activities and procedures. This report includes data for all waste shipped offsite from LANL during fiscal year (FY) 2016 (October 1, 2015 – September 30, 2016). LANS was active during FY2016 in waste minimization and P2 efforts. Multiple projects were funded that specifically related to reduction of hazardous waste. In FY2016, there was no hazardous, mixed-transuranic (MTRU), or mixed low-level (MLLW) remediation waste shipped offsite from the Laboratory. More non-remediation hazardous waste and MLLW was shipped offsite from the Laboratory in FY2016 compared to FY2015. Non-remediation MTRU waste was not shipped offsite during FY2016. These accomplishments and analysis of the waste streams are discussed in much more detail within this report.

  14. Criteria for evaluating alternative uses of energy resources

    Energy Technology Data Exchange (ETDEWEB)

    Hogg, R. J.

    1977-10-15

    Criteria that should be considered in evaluating the alternative use of energy resources are examined, e.g., energy policies must be compatible with overall national objectives; the demands of the energy sector must be sustainable; energy supplies must be reliable; resource depletion rates must be minimized; community interests must be protected; and economic costs must be minimized. Case studies using electricity and natural gas for the application of these criteria are presented.

  15. Super-acceleration from massless, minimally coupled phi^4

    CERN Document Server

    Onemli, V K

    2002-01-01

We derive a simple form for the propagator of a massless, minimally coupled scalar in a locally de Sitter geometry of arbitrary spacetime dimension. We then employ it to compute the fully renormalized stress tensor at one- and two-loop orders for a massless, minimally coupled phi^4 theory which is released in Bunch-Davies vacuum at t=0 in co-moving coordinates. In this system, the uncertainty principle elevates the scalar above the minimum of its potential, resulting in a phase of super-acceleration. With the non-derivative self-interaction the scalar's breaking of de Sitter invariance becomes observable. It is also worth noting that the weak-energy condition is violated on cosmological scales. An interesting subsidiary result is that cancelling overlapping divergences in the stress tensor requires a conformal counterterm which has no effect on purely scalar diagrams.

  16. Portuguese Lexical Clusters and CVC Sequences in Speech Perception and Production.

    Science.gov (United States)

    Cunha, Conceição

    2015-01-01

    This paper investigates similarities between lexical consonant clusters and CVC sequences differing in the presence or absence of a lexical vowel in speech perception and production in two Portuguese varieties. The frequent high vowel deletion in the European variety (EP) and the realization of intervening vocalic elements between lexical clusters in Brazilian Portuguese (BP) may minimize the contrast between lexical clusters and CVC sequences in the two Portuguese varieties. In order to test this hypothesis we present a perception experiment with 72 participants and a physiological analysis of 3-dimensional movement data from 5 EP and 4 BP speakers. The perceptual results confirmed a gradual confusion of lexical clusters and CVC sequences in EP, which corresponded roughly to the gradient consonantal overlap found in production. © 2015 S. Karger AG, Basel.

  17. Fixing Formalin: A Method to Recover Genomic-Scale DNA Sequence Data from Formalin-Fixed Museum Specimens Using High-Throughput Sequencing.

    Directory of Open Access Journals (Sweden)

    Sarah M Hykin

Full Text Available For 150 years or more, specimens were routinely collected and deposited in natural history collections without preserving fresh tissue samples for genetic analysis. In the case of most herpetological specimens (i.e. amphibians and reptiles), attempts to extract and sequence DNA from formalin-fixed, ethanol-preserved specimens-particularly for use in phylogenetic analyses-have been laborious and largely ineffective due to the highly fragmented nature of the DNA. As a result, tens of thousands of specimens in herpetological collections have not been available for sequence-based phylogenetic studies. Massively parallel High-Throughput Sequencing methods and the associated bioinformatics, however, are particularly suited to recovering meaningful genetic markers from severely degraded/fragmented DNA sequences such as DNA damaged by formalin-fixation. In this study, we compared previously published DNA extraction methods on three tissue types subsampled from formalin-fixed specimens of Anolis carolinensis, followed by sequencing. Sufficient quality DNA was recovered from liver tissue, making this technique minimally destructive to museum specimens. Sequencing was only successful for the more recently collected specimen (collected ~30 ybp). We suspect this could be due either to the conditions of preservation and/or the amount of tissue used for extraction purposes. For the successfully sequenced sample, we found a high rate of base misincorporation. After rigorous trimming, we successfully mapped 27.93% of the cleaned reads to the reference genome, were able to reconstruct the complete mitochondrial genome, and recovered an accurate phylogenetic placement for our specimen. We conclude that the amount of DNA available, which can vary depending on specimen age and preservation conditions, will determine if sequencing will be successful.
The technique described here will greatly improve the value of museum collections by making many formalin-fixed specimens

  18. Fixing Formalin: A Method to Recover Genomic-Scale DNA Sequence Data from Formalin-Fixed Museum Specimens Using High-Throughput Sequencing.

    Science.gov (United States)

    Hykin, Sarah M; Bi, Ke; McGuire, Jimmy A

    2015-01-01

For 150 years or more, specimens were routinely collected and deposited in natural history collections without preserving fresh tissue samples for genetic analysis. In the case of most herpetological specimens (i.e. amphibians and reptiles), attempts to extract and sequence DNA from formalin-fixed, ethanol-preserved specimens-particularly for use in phylogenetic analyses-have been laborious and largely ineffective due to the highly fragmented nature of the DNA. As a result, tens of thousands of specimens in herpetological collections have not been available for sequence-based phylogenetic studies. Massively parallel High-Throughput Sequencing methods and the associated bioinformatics, however, are particularly suited to recovering meaningful genetic markers from severely degraded/fragmented DNA sequences such as DNA damaged by formalin-fixation. In this study, we compared previously published DNA extraction methods on three tissue types subsampled from formalin-fixed specimens of Anolis carolinensis, followed by sequencing. Sufficient quality DNA was recovered from liver tissue, making this technique minimally destructive to museum specimens. Sequencing was only successful for the more recently collected specimen (collected ~30 ybp). We suspect this could be due either to the conditions of preservation and/or the amount of tissue used for extraction purposes. For the successfully sequenced sample, we found a high rate of base misincorporation. After rigorous trimming, we successfully mapped 27.93% of the cleaned reads to the reference genome, were able to reconstruct the complete mitochondrial genome, and recovered an accurate phylogenetic placement for our specimen. We conclude that the amount of DNA available, which can vary depending on specimen age and preservation conditions, will determine if sequencing will be successful. The technique described here will greatly improve the value of museum collections by making many formalin-fixed specimens available for

  19. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory

    Science.gov (United States)

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A.

    2016-01-01

    There are several applications in computational biophysics which require the optimization of discrete interacting states; e.g., amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial-time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of “maximum flow-minimum cut” graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein, and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial-time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered. PMID:27089174

  20. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory.

    Science.gov (United States)

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A

    2016-08-25

    There are several applications in computational biophysics that require the optimization of discrete interacting states, for example, amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of "maximum flow-minimum cut" graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.
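To make the optimization problem concrete, here is a small Python sketch of a discrete-state system solved by exhaustive enumeration. The site and pair energies are invented for illustration; the paper's contribution is recovering the same minimizing assignment in polynomial time via a max-flow/min-cut computation on the interaction energy graph, rather than by this exponential search.

```python
from itertools import product

# Hypothetical two-state (e.g. deprotonated/protonated) energies for
# three interacting sites; all numbers are invented for illustration.
SITE_ENERGIES = [[0.0, 1.2], [0.5, 0.0], [0.3, 0.4]]
PAIR_ENERGIES = {
    (0, 1): [[0.0, 0.8], [0.8, 0.0]],  # like states favoured
    (1, 2): [[0.2, 0.0], [0.0, 0.2]],  # unlike states favoured
}

def total_energy(states):
    """Per-site energies plus pairwise interaction energies for one
    assignment of discrete states (a tuple of 0s and 1s)."""
    e = sum(SITE_ENERGIES[i][s] for i, s in enumerate(states))
    for (i, j), table in PAIR_ENERGIES.items():
        e += table[states[i]][states[j]]
    return e

def minimize_by_enumeration():
    """Exhaustive search over all 2^n assignments: the exponential
    cost that the paper's min-cut reformulation avoids."""
    return min(product(range(2), repeat=len(SITE_ENERGIES)), key=total_energy)
```

For this toy system the minimum-energy assignment is (0, 0, 1) with energy 0.9; in the paper's formulation, the same assignment would be read off the minimum cut of a flow network whose edge weights encode these energies.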

  1. Energy house - dream house

    Energy Technology Data Exchange (ETDEWEB)

    1981-07-01

An energy house, a prefabricated house with an extensive minimization of heat losses, is air-conditioned by means of a combined heating system consisting of a hot water cycle and recirculating heating. The energy system is trivalent: wind power, solar energy with heat pumps, and normal oil heating.

  2. Non-minimally coupled tachyon field in teleparallel gravity

    Energy Technology Data Exchange (ETDEWEB)

    Fazlpour, Behnaz [Department of Physics, Babol Branch, Islamic Azad University, Shariati Street, Babol (Iran, Islamic Republic of); Banijamali, Ali, E-mail: b.fazlpour@umz.ac.ir, E-mail: a.banijamali@nit.ac.ir [Department of Basic Sciences, Babol University of Technology, Shariati Street, Babol (Iran, Islamic Republic of)

    2015-04-01

We perform a full investigation on dynamics of a new dark energy model in which the four-derivative of a non-canonical scalar field (tachyon) is non-minimally coupled to the vector torsion. Our analysis is done in the framework of teleparallel equivalent of general relativity which is based on torsion instead of curvature. We show that in our model there exists a late-time scaling attractor (point P_4), corresponding to an accelerating universe with the property that dark energy and dark matter densities are of the same order. Such a point can help to alleviate the cosmological coincidence problem. Existence of this point is the most significant difference between our model and another model in which a canonical scalar field (quintessence) is used instead of tachyon field.

  3. Non-minimally coupled tachyon field in teleparallel gravity

    International Nuclear Information System (INIS)

    Fazlpour, Behnaz; Banijamali, Ali

    2015-01-01

We perform a full investigation on dynamics of a new dark energy model in which the four-derivative of a non-canonical scalar field (tachyon) is non-minimally coupled to the vector torsion. Our analysis is done in the framework of teleparallel equivalent of general relativity which is based on torsion instead of curvature. We show that in our model there exists a late-time scaling attractor (point P_4), corresponding to an accelerating universe with the property that dark energy and dark matter densities are of the same order. Such a point can help to alleviate the cosmological coincidence problem. Existence of this point is the most significant difference between our model and another model in which a canonical scalar field (quintessence) is used instead of tachyon field

  4. Low-energy levels calculation for 193Ir

    International Nuclear Information System (INIS)

    Zahn, Guilherme Soares; Zamboni, Cibele Bugno; Genezini, Frederico Antonio; Mesa-Hormaza, Joel; Cruz, Manoel Tiago Freitas da

    2006-01-01

In this work, a model based on single particle plus pairing residual interaction was used to study the low-lying excited states of the 193Ir nucleus. In this model, the deformation parameters in equilibrium were obtained by minimizing the total energy calculated by the Strutinsky prescription; the macroscopic contribution to the potential was taken from the Liquid Droplet Model, with the shell and pairing corrections used as microscopic contributions. The nuclear shape was described using the Cassinian ovoids as base figures; the single particle energy spectra and wave functions for protons and neutrons were calculated in a deformed Woods-Saxon potential, where the parameters for neutrons were obtained from the literature and the parameters for protons were adjusted in order to describe the main sequence of angular momentum and parity of the band heads, as well as the proton binding energy of 193Ir. The residual pairing interaction was calculated using the BCS prescription with the Lipkin-Nogami approximation. The results obtained for the first three band heads (the 3/2+ ground state, the 1/2+ excited state at E ∼ 73 keV and the 11/2- isomeric state at E ∼ 80 keV) showed a very good agreement, but the model so far greatly overestimated the energy of the next band head, a 7/2- at E ∼ 299 keV. (author)

  5. Development and pilot demonstration program of a waste minimization plan at Argonne National Laboratory

    International Nuclear Information System (INIS)

    Peters, R.W.; Wentz, C.A.; Thuot, J.R.

    1991-01-01

In response to US Department of Energy directives, Argonne National Laboratory (ANL) has developed a waste minimization plan aimed at reducing the amount of wastes at this national research and development laboratory. Activities at ANL are primarily research-oriented and as such affect the amount and type of source reduction that can be achieved at this facility. The objective of ANL's waste minimization program is to cost-effectively reduce all types of wastes, including hazardous, mixed, radioactive, and nonhazardous wastes. The ANL Waste Minimization Plan uses a waste minimization audit as a systematic procedure to determine opportunities to reduce or eliminate waste. To facilitate these audits, a computerized bar-coding procedure is being implemented at ANL to track hazardous wastes from where they are generated to their ultimate disposal. This paper describes the development of the ANL Waste Minimization Plan and a pilot demonstration of how the ANL Plan audited the hazardous waste generated within selected divisions of ANL. It includes quantitative data on the generation and disposal of hazardous waste at ANL and describes potential ways to minimize hazardous wastes. 2 refs., 5 figs., 8 tabs

  6. Protein sequence annotation in the genome era: the annotation concept of SWISS-PROT+TREMBL.

    Science.gov (United States)

    Apweiler, R; Gateau, A; Contrino, S; Martin, M J; Junker, V; O'Donovan, C; Lang, F; Mitaritonna, N; Kappus, S; Bairoch, A

    1997-01-01

SWISS-PROT is a curated protein sequence database which strives to provide a high level of annotation, a minimal level of redundancy, and a high level of integration with other databases. Ongoing genome sequencing projects have dramatically increased the number of protein sequences to be incorporated into SWISS-PROT. Since we do not want to dilute the quality standards of SWISS-PROT by incorporating sequences without proper sequence analysis and annotation, we cannot speed up the incorporation of new incoming data indefinitely. However, as we also want to make the sequences available as fast as possible, we introduced TREMBL (TRanslation of EMBL nucleotide sequence database), a supplement to SWISS-PROT. TREMBL consists of computer-annotated entries in SWISS-PROT format derived from the translation of all coding sequences (CDS) in the EMBL nucleotide sequence database, except for CDS already included in SWISS-PROT. While TREMBL is already of immense value, its computer-generated annotation does not match the quality of SWISS-PROT's. The main difference is in the protein functional information attached to sequences. With this in mind, we are dedicating substantial effort to develop and apply computer methods to enhance the functional information attached to TREMBL entries.

  7. Minimal Gromov-Witten rings

    International Nuclear Information System (INIS)

    Przyjalkowski, V V

    2008-01-01

    We construct an abstract theory of Gromov-Witten invariants of genus 0 for quantum minimal Fano varieties (a minimal class of varieties which is natural from the quantum cohomological viewpoint). Namely, we consider the minimal Gromov-Witten ring: a commutative algebra whose generators and relations are of the form used in the Gromov-Witten theory of Fano varieties (of unspecified dimension). The Gromov-Witten theory of any quantum minimal variety is a homomorphism from this ring to C. We prove an abstract reconstruction theorem which says that this ring is isomorphic to the free commutative ring generated by 'prime two-pointed invariants'. We also find solutions of the differential equation of type DN for a Fano variety of dimension N in terms of the generating series of one-pointed Gromov-Witten invariants

  8. The self-force on a non-minimally coupled static scalar charge outside a Schwarzschild black hole

    International Nuclear Information System (INIS)

    Cho, Demian H J; Tsokaros, Antonios A; Wiseman, Alan G

    2007-01-01

The finite part of the self-force on a static, non-minimally coupled scalar test charge outside a Schwarzschild black hole is zero. This result is determined from the work required to slowly raise or lower the charge through an infinitesimal distance. Unlike similar force calculations for minimally-coupled scalar charges or electric charges, we find that we must account for a flux of field energy that passes through the horizon and changes the mass and area of the black hole when the charge is displaced. This occurs even for an arbitrarily slow displacement of the non-minimally coupled scalar charge. For a positive coupling constant, the area of the hole increases when the charge is lowered and decreases when the charge is raised. The fact that the self-force vanishes for a static, non-minimally coupled scalar charge in Schwarzschild spacetime agrees with a simple prediction of the Quinn-Wald axioms. However, Zel'nikov and Frolov computed a non-vanishing self-force for a non-minimally coupled charge. Our method of calculation closely parallels the derivation of Zel'nikov and Frolov, and we show that their omission of this unusual flux is responsible for their (incorrect) result. When the flux is accounted for, the self-force vanishes. This correction eliminates a potential counter example to the Quinn-Wald axioms. The fact that the area of the black hole changes when the charge is displaced brings up two interesting questions that did not arise in similar calculations for static electric charges and minimally coupled scalar charges. (1) How can we reconcile a decrease in the area of the black hole horizon with the area theorem which concludes that δA_horizon ≥ 0? The key hypothesis of the area theorem is that the stress-energy tensor must satisfy a null-energy condition T_{αβ} l^α l^β ≥ 0 for any null vector l^α. We explicitly show that the stress-energy associated with a non-minimally coupled field does not satisfy this condition, and this violation of

  9. Energy Efficient Multi-Core Processing

    Directory of Open Access Journals (Sweden)

    Charles Leech

    2014-06-01

Full Text Available This paper evaluates the present state of the art of energy-efficient embedded processor design techniques and demonstrates how small, variable-architecture embedded processors may exploit a run-time minimal architectural synthesis technique to achieve greater energy and area efficiency whilst maintaining performance. The picoMIPS architecture is presented, inspired by the MIPS, as an example of a minimal and energy-efficient processor. The picoMIPS is a variable-architecture RISC microprocessor with an application-specific minimised instruction set. Each implementation will contain only the necessary datapath elements in order to maximise area efficiency. Due to the relationship between logic gate count and power consumption, energy efficiency is also maximised in the processor; the system is therefore designed to perform a specific task in the most efficient processor-based form. The principles of the picoMIPS processor are illustrated with an example of the discrete cosine transform (DCT) and inverse DCT (IDCT) algorithms implemented in a multi-core context to demonstrate the concept of minimal architecture synthesis and how it can be used to produce an application-specific, energy-efficient processor.
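As a software reference model for the algorithm pair the abstract mentions (not the paper's hardware implementation), a naive orthonormal DCT-II and its inverse can be written as:

```python
import math

def dct(x):
    """Naive O(N^2) orthonormal DCT-II of a real sequence."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def idct(X):
    """Naive O(N^2) inverse (DCT-III of the orthonormal DCT-II)."""
    N = len(X)
    return [X[0] / math.sqrt(N)
            + sum(math.sqrt(2.0 / N) * X[k]
                  * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                  for k in range(1, N))
            for n in range(N)]
```

Round-tripping a block through dct and then idct reconstructs it to floating-point accuracy, which is the kind of property a fixed-function picoMIPS-style datapath would be verified against.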

  10. Minimal Marking: A Success Story

    Science.gov (United States)

    McNeilly, Anne

    2014-01-01

    The minimal-marking project conducted in Ryerson's School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The "minimal-marking" concept (Haswell, 1983), which requires…

  11. Energy-efficient algorithm for broadcasting in ad hoc wireless sensor networks.

    Science.gov (United States)

    Xiong, Naixue; Huang, Xingbo; Cheng, Hongju; Wan, Zheng

    2013-04-12

    Broadcasting is a common and basic operation used to support various network protocols in wireless networks. Achieving energy-efficient broadcasting is especially important for ad hoc wireless sensor networks because sensors are generally powered by batteries with limited lifetimes. Energy consumption for broadcast operations can be reduced by minimizing the number of relay nodes, based on the observation that data transmission consumes more energy than data reception in the sensor nodes; how to improve the network lifetime is always an interesting issue in sensor network research. The minimum-energy broadcast problem is then equivalent to the problem of finding the minimum Connected Dominating Set (CDS) for a connected graph, which is known to be NP-complete. In this paper, we introduce an Efficient Minimum CDS algorithm (EMCDS) with the help of a proposed ordered sequence list. EMCDS does not concern itself with node energy, and broadcast operations might fail if relay nodes are out of energy. We then propose a Minimum Energy-consumption Broadcast Scheme (MEBS), a modified version of EMCDS, which aims to provide an efficient scheduling scheme with maximized network lifetime. The simulation results show that the proposed EMCDS algorithm finds a smaller CDS compared with related works, and that MEBS can help to increase the network lifetime by efficiently balancing energy among nodes in the networks.
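    The CDS idea can be illustrated with a simple greedy heuristic (a generic sketch, not the EMCDS algorithm itself; the adjacency-dict representation and node names are illustrative):

```python
def greedy_cds(adj):
    """Greedy approximation of a minimum Connected Dominating Set.
    adj: dict mapping node -> set of neighbours (undirected, connected graph).
    Grows the set from a high-degree seed, always adding the frontier node
    that dominates the most still-uncovered nodes, so the set stays connected."""
    start = max(adj, key=lambda v: len(adj[v]))       # highest-degree seed
    cds = {start}
    covered = {start} | adj[start]
    while covered != set(adj):
        # only neighbours of the current CDS keep it connected
        frontier = {v for u in cds for v in adj[u]} - cds
        best = max(frontier, key=lambda v: len((adj[v] | {v}) - covered))
        cds.add(best)
        covered |= adj[best] | {best}
    return cds

# path graph 0-1-2-3-4: the interior nodes form the minimum CDS
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
cds = greedy_cds(adj)
```

    In a broadcast scheme only the CDS members relay; every other node is within one hop of a relay, which is the source of the energy saving the abstract describes.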

  12. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-01-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal

  13. Perspectives of Nanotechnology in Minimally Invasive Therapy of Breast Cancer

    Directory of Open Access Journals (Sweden)

    Yamin Yang

    2013-01-01

    Full Text Available Breast cancer, the most common type of cancer among women in the western world, affects approximately one out of every eight women over their lifetime. In recognition of the high invasiveness of surgical excision and severe side effects of chemical and radiation therapies, increasing efforts are made to seek minimally invasive modalities with fewer side effects. Nanoparticles (<100 nm in size have shown promising capabilities for delivering targeted therapeutic drugs to cancer cells and confining the treatment mainly within tumors. Additionally, some nanoparticles exhibit distinct properties, such as conversion of photonic energy into heat, and these properties enable eradication of cancer cells. In this review, current utilization of nanostructures for cancer therapy, especially in minimally invasive therapy, is summarized with a particular interest in breast cancer.

  14. Minimal residual disease in chronic lymphocytic leukaemia.

    Science.gov (United States)

    García Vela, José Antonio; García Marco, José Antonio

    2018-02-23

    Minimal residual disease (MRD) assessment is an important endpoint in the treatment of chronic lymphocytic leukaemia (CLL). It is highly predictive of prolonged progression-free survival (PFS) and overall survival, and could be considered a surrogate for PFS in the context of chemoimmunotherapy-based treatment. Evaluation of MRD level by flow cytometry or molecular techniques in the era of the new BCR and Bcl-2 targeted inhibitors could identify the most cost-effective and durable treatment sequencing. A therapeutic approach guided by the level of MRD might also determine which patients would benefit from an early stop or consolidation therapy. In this review, we discuss the different MRD methods of analysis, which sources of tumour samples must be analysed, the future role of the detection of circulating tumour DNA, and the potential role of MRD negativity in clinical practice in the modern era of CLL therapy. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.

  15. Renewable energy.

    Science.gov (United States)

    Destouni, Georgia; Frank, Harry

    2010-01-01

    The Energy Committee of the Royal Swedish Academy of Sciences has in a series of projects gathered information and knowledge on renewable energy from various sources, both within and outside the academic world. In this article, we synthesize and summarize some of the main points on renewable energy from the various Energy Committee projects and the Committee's Energy 2050 symposium, regarding energy from water and wind, bioenergy, and solar energy. We further summarize the Energy Committee's scenario estimates of future renewable energy contributions to the global energy system, and other presentations given at the Energy 2050 symposium. In general, international coordination and investment in energy research and development is crucial to enable future reliance on renewable energy sources with minimal fossil fuel use.

  16. IMPROVING THE TRANSMISSION PERFORMANCE BASED ON MINIMIZING ENERGY IN MOBILE ADHOC NETWORKS

    Directory of Open Access Journals (Sweden)

    Gundala Swathi

    2015-06-01

    Full Text Available A mobile ad hoc network is a collection of mobile nodes that allows users to observe a distant environment. Such wireless mobile networks require routing algorithms that are robust yet simple, scalable, energy-efficient and self-organizing. Advances in low-power electronics and low-power radio-frequency design have enabled the development of small, comparatively economical, low-power nodes connected in a wireless mobile network. The methods proposed in this study are energy effectiveness, a dynamic occurrence zone, and multi-hop transmission. Taking into consideration the energy of the transmitting nodes and the distance from the transmitting node to a trusted neighbour node, the link weight, energy utilization and distance are treated as the most important constraints for selecting the best possible path from the Zone Head (ZH) to the neighbour node. Using these constraints, we reduce the number of distribution messages during transmit-node selection in order to decrease the energy utilization of the entire network.
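    A relay choice that weighs link distance against residual node energy, as described above, can be sketched as a simple cost function (a hypothetical illustration; the weights, tuple layout and function name are not taken from the paper):

```python
def pick_relay(candidates, w_dist=0.5, w_energy=0.5):
    """Choose the next-hop relay with the lowest combined cost.
    candidates: list of (node_id, distance, residual_energy) tuples.
    The cost favours short links and high remaining battery; the weighting
    scheme here is purely illustrative."""
    def cost(c):
        _, dist, energy = c
        return w_dist * dist + w_energy / max(energy, 1e-9)
    return min(candidates, key=cost)[0]

# node "b" is both close and well-charged, so it wins
best = pick_relay([("a", 10.0, 0.9), ("b", 3.0, 0.8), ("c", 3.0, 0.2)])
```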

  17. Session: Avoiding, minimizing, and mitigating avian and bat impacts

    Energy Technology Data Exchange (ETDEWEB)

    Thelander, Carl; Kerlinger, Paul

    2004-09-01

    This session at the Wind Energy and Birds/Bats workshop consisted of two presentations followed by a discussion/question answer period. The session addressed a variety of questions related to avoiding, minimizing, and mitigating the avian and bat impacts of wind power development including: what has been learned from operating turbines and mitigating impacts where they are unavoidable, such as at Altamont Pass WRA, and should there be mitigation measures such as habitat creation or land conservation where impacts occur. Other impact minimization and mitigation approaches discussed included: location and siting evaluations; options for construction and operation of wind facilities; turbine lighting; and the physical alignment/orientation. Titles and authors of the presentations were: 'Bird Fatalities in the Altamont Pass Wind Resource Area: A Case Study, Part II' by Carl Thelander and 'Prevention and Mitigation of Avian Impacts at Wind Power Facilities' by Paul Kerlinger.

  18. Waste minimization and pollution prevention awareness plan. Revision 1

    International Nuclear Information System (INIS)

    1994-07-01

    The purpose of this plan is to document Lawrence Livermore National Laboratory (LLNL) projections for present and future waste minimization and pollution prevention. The plan specifies those activities and methods that are or will be used to reduce the quantity and toxicity of wastes generated at the site. It is intended to satisfy Department of Energy (DOE) requirements. This Waste Minimization and Pollution Prevention Awareness Plan provides an overview of projected activities from FY 1994 through FY 1999. The plans are broken into site-wide and problem-specific activities. All directorates at LLNL have had an opportunity to contribute input, estimate budgets, and review the plan. In addition to the above, this plan records LLNL's goals for pollution prevention, regulatory drivers for those activities, assumptions on which the cost estimates are based, analyses of the strengths of the projects, and the barriers to increasing pollution prevention activities

  19. Session: Avoiding, minimizing, and mitigating avian and bat impacts

    International Nuclear Information System (INIS)

    Thelander, Carl; Kerlinger, Paul

    2004-01-01

    This session at the Wind Energy and Birds/Bats workshop consisted of two presentations followed by a discussion/question answer period. The session addressed a variety of questions related to avoiding, minimizing, and mitigating the avian and bat impacts of wind power development including: what has been learned from operating turbines and mitigating impacts where they are unavoidable, such as at Altamont Pass WRA, and should there be mitigation measures such as habitat creation or land conservation where impacts occur. Other impact minimization and mitigation approaches discussed included: location and siting evaluations; options for construction and operation of wind facilities; turbine lighting; and the physical alignment/orientation. Titles and authors of the presentations were: 'Bird Fatalities in the Altamont Pass Wind Resource Area: A Case Study, Part II' by Carl Thelander and 'Prevention and Mitigation of Avian Impacts at Wind Power Facilities' by Paul Kerlinger

  20. Waste minimization assessment procedure

    International Nuclear Information System (INIS)

    Kellythorne, L.L.

    1993-01-01

    Perry Nuclear Power Plant began developing a waste minimization plan early in 1991. In March of 1991 the plan was documented, following a format similar to that described in the EPA Waste Minimization Opportunity Assessment Manual. Initial implementation involved obtaining management's commitment to support a waste minimization effort. The primary assessment goal was to identify all hazardous waste streams and to evaluate those streams for minimization opportunities. As implementation of the plan proceeded, non-hazardous waste streams routinely generated in large volumes were also evaluated for minimization opportunities. The next step included collection of process and facility data which would be useful in helping the facility accomplish its assessment goals. This paper describes the resources that were used, and which were most valuable, in identifying both the hazardous and non-hazardous waste streams that existed on site. For each material identified as a waste stream, additional information regarding the material's use, manufacturer, EPA hazardous waste number and DOT hazard class was also gathered. Each waste stream was then evaluated for potential source reduction, recycling, re-use, re-sale, or burning for heat recovery, with disposal as the last viable alternative

  1. Westinghouse Hanford Company waste minimization actions

    International Nuclear Information System (INIS)

    Greenhalgh, W.O.

    1988-09-01

    Companies that generate hazardous waste materials are now required by national regulations to establish a waste minimization program. Accordingly, in FY88 the Westinghouse Hanford Company formed a waste minimization team organization. The purpose of the team is to assist the company in its efforts to minimize the generation of waste, train personnel on waste minimization techniques, document successful waste minimization effects, track dollar savings realized, and to publicize and administer an employee incentive program. A number of significant actions have been successful, resulting in the savings of materials and dollars. The team itself has been successful in establishing some worthwhile minimization projects. This document briefly describes the waste minimization actions that have been successful to date. 2 refs., 26 figs., 3 tabs

  2. Resolving piping analysis issues to minimize impact on installation activities during refueling outage at nuclear power plants

    International Nuclear Information System (INIS)

    Bhavnani, D.

    1996-01-01

    While it is required to maintain piping code compliance for all phases of installation activities during outages at a nuclear plant, it is equally essential to reduce challenges to the installation personnel regarding how plant modification work should be performed. Plant betterment activities that incorporate proposed design changes are continually implemented during the outages. Supporting analyses are performed to back these activities for operable systems. The goal is to reduce engineering and craft man-hours and minimize outage time. This paper outlines how the plant modification process can be streamlined so that construction teams can carry out tasks that involve safety-related piping. In this manner, installation can proceed with minimal on-the-spot analytical effort and reduced downtime to support the proposed modifications. Examples are provided that permit performance of installation work in any sequence. Piping and hangers, including the branch lines, are prequalified and determined operable. The system is analyzed up front for all possible scenarios. The modification instructions in the work packages are flexible enough to permit any possible installation sequence. The benefit of this approach is substantial: valuable outage time is not extended and on-site analytical work is not required

  3. Isolation and sequence analysis of the Pseudomonas syringae pv. tomato gene encoding a 2,3-diphosphoglycerate-independent phosphoglyceromutase.

    Science.gov (United States)

    Morris, V L; Jackson, D P; Grattan, M; Ainsworth, T; Cuppels, D A

    1995-01-01

    Pseudomonas syringae pv. tomato DC3481, a Tn5-induced mutant of the tomato pathogen DC3000, cannot grow and elicit disease symptoms on tomato seedlings. It also cannot grow on minimal medium containing malate, citrate, or succinate, three of the major organic acids found in tomatoes. We report here that this mutant also cannot use, as a sole carbon and/or energy source, a wide variety of hexoses and intermediates of hexose catabolism. Uptake studies have shown that DC3481 is not deficient in transport. A 3.8-kb EcoRI fragment of DC3000 DNA, which complements the Tn5 mutation, has been cloned and sequenced. The deduced amino acid sequences of two of the three open reading frames (ORFs) present on this fragment, ORF2 and ORF3, had no significant homology with sequences in the GenBank databases. However, the 510-amino-acid sequence of ORF1, the site of the Tn5 insertion, strongly resembled the deduced amino acid sequences of the Bacillus subtilis and Zea mays genes encoding 2,3-diphosphoglycerate (DPG)-independent phosphoglyceromutase (PGM) (52% identity and 72% similarity and 37% identity and 57% similarity, respectively). PGMs not requiring the cofactor DPG are usually found in plants and algae. Enzyme assays confirmed that P. syringae PGM activity required an intact ORF1. Not only is DC3481 the first PGM-deficient pseudomonad mutant to be described, but the P. syringae pgm gene is the first gram-negative bacterial gene identified that appears to code for a DPG-independent PGM. PGM activity appears essential for the growth and pathogenicity of P. syringae pv. tomato on its host plant. PMID:7896694

  4. Isolation and sequence analysis of the Pseudomonas syringae pv. tomato gene encoding a 2,3-diphosphoglycerate-independent phosphoglyceromutase.

    Science.gov (United States)

    Morris, V L; Jackson, D P; Grattan, M; Ainsworth, T; Cuppels, D A

    1995-04-01

    Pseudomonas syringae pv. tomato DC3481, a Tn5-induced mutant of the tomato pathogen DC3000, cannot grow and elicit disease symptoms on tomato seedlings. It also cannot grow on minimal medium containing malate, citrate, or succinate, three of the major organic acids found in tomatoes. We report here that this mutant also cannot use, as a sole carbon and/or energy source, a wide variety of hexoses and intermediates of hexose catabolism. Uptake studies have shown that DC3481 is not deficient in transport. A 3.8-kb EcoRI fragment of DC3000 DNA, which complements the Tn5 mutation, has been cloned and sequenced. The deduced amino acid sequences of two of the three open reading frames (ORFs) present on this fragment, ORF2 and ORF3, had no significant homology with sequences in the GenBank databases. However, the 510-amino-acid sequence of ORF1, the site of the Tn5 insertion, strongly resembled the deduced amino acid sequences of the Bacillus subtilis and Zea mays genes encoding 2,3-diphosphoglycerate (DPG)-independent phosphoglyceromutase (PGM) (52% identity and 72% similarity and 37% identity and 57% similarity, respectively). PGMs not requiring the cofactor DPG are usually found in plants and algae. Enzyme assays confirmed that P. syringae PGM activity required an intact ORF1. Not only is DC3481 the first PGM-deficient pseudomonad mutant to be described, but the P. syringae pgm gene is the first gram-negative bacterial gene identified that appears to code for a DPG-independent PGM. PGM activity appears essential for the growth and pathogenicity of P. syringae pv. tomato on its host plant.

  5. Generalized min-max bound-based MRI pulse sequence design framework for wide-range T1 relaxometry: A case study on the tissue specific imaging sequence.

    Directory of Open Access Journals (Sweden)

    Yang Liu

    Full Text Available This paper proposes a new design strategy for optimizing MRI pulse sequences for T1 relaxometry. The design strategy optimizes the pulse sequence parameters to minimize the maximum variance of unbiased T1 estimates over a range of T1 values using the Cramér-Rao bound. In contrast to prior sequences optimized for a single nominal T1 value, the optimized sequence using our bound-based strategy achieves improved precision and accuracy for a broad range of T1 estimates within a clinically feasible scan time. The optimization combines the downhill simplex method with a simulated annealing process. To show the effectiveness of the proposed strategy, we optimize the tissue specific imaging (TSI sequence. Preliminary Monte Carlo simulations demonstrate that the optimized TSI sequence yields improved precision and accuracy over the popular driven-equilibrium single-pulse observation of T1 (DESPOT1 approach for normal brain tissues (estimated T1 700-2000 ms at 3.0T. The relative mean estimation error (MSE for T1 estimation is less than 1.7% using the optimized TSI sequence, as opposed to less than 7.0% using DESPOT1 for normal brain tissues. The optimized TSI sequence achieves good stability by keeping the MSE under 7.0% over larger T1 values corresponding to different lesion tissues and the cerebrospinal fluid (up to 5000 ms. The T1 estimation accuracy using the new pulse sequence also shows improvement, which is more pronounced in low SNR scenarios.
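    The min-max criterion above can be sketched generically: choose the design parameter that minimizes the worst-case variance bound over a grid of T1 values. The sketch below uses a toy variance function as a stand-in for the actual Cramér-Rao bound, and a bare-bones simulated-annealing search in place of the simplex-plus-annealing optimizer described in the abstract; all names and numbers are illustrative:

```python
import math
import random

def minmax_anneal(var_fn, t1_grid, x0, lo, hi, iters=2000, seed=1):
    """Simulated-annealing search for the scalar design x minimizing the
    worst-case (maximum over the T1 grid) value of var_fn(t1, x)."""
    random.seed(seed)
    worst = lambda x: max(var_fn(t1, x) for t1 in t1_grid)
    x, fx = x0, worst(x0)
    best, fbest = x, fx
    for i in range(iters):
        temp = 1.0 * (1 - i / iters) + 1e-3           # cooling schedule
        cand = min(hi, max(lo, x + random.gauss(0, 0.1 * (hi - lo))))
        fc = worst(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if fc < fx or random.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# toy "variance" that is smallest when the design parameter matches sqrt(T1);
# the min-max optimum balances the extremes of the T1 range
toy_var = lambda t1, x: (x - math.sqrt(t1)) ** 2 + 0.1
x_opt, v_opt = minmax_anneal(toy_var, [0.7, 1.0, 2.0], x0=0.5, lo=0.0, hi=3.0)
```

    The point of the min-max objective, as in the TSI optimization, is that the returned design is not tuned to one nominal T1 but hedges against the worst case across the whole range.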

  6. Loss minimization control and efficiency determination of electric drives in traction applications

    Energy Technology Data Exchange (ETDEWEB)

    Windisch, Thomas; Hofmann, Wilfried [Technische Univ. Dresden (Germany). Lehrstuhl fuer Elektrische Maschinen und Antriebe

    2012-11-01

    High-power electric drives in automotive traction applications consume a large part of the disposable electric energy. For this reason the energy efficiency of the drives is of great importance for range and fuel consumption of the hybrid electric vehicle. The paper describes two possible drives with different electric motors from a control point of view. The electric power losses in the drive system are determined depending on the operating point of the machine. With these loss characteristics the control of the drives is optimized to produce minimal losses. Finally the energy efficiency for a realistic urban bus drive cycle is calculated to compare the two types. (orig.)

  7. Minimal but non-minimal inflation and electroweak symmetry breaking

    Energy Technology Data Exchange (ETDEWEB)

    Marzola, Luca [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia); Institute of Physics, University of Tartu,Ravila 14c, 50411 Tartu (Estonia); Racioppi, Antonio [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia)

    2016-10-07

    We consider the most minimal scale invariant extension of the standard model that allows for successful radiative electroweak symmetry breaking and inflation. The framework involves an extra scalar singlet, that plays the rôle of the inflaton, and is compatible with current experimental bounds owing to the non-minimal coupling of the latter to gravity. This inflationary scenario predicts a very low tensor-to-scalar ratio r ≈ 10⁻³, typical of Higgs-inflation models, but in contrast yields a scalar spectral index n_s ≃ 0.97 which departs from the Starobinsky limit. We briefly discuss the collider phenomenology of the framework.

  8. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Science.gov (United States)

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
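    The two sampling schemes being compared can be sketched directly (a minimal illustration of the generic techniques, not the authors' index implementation; window size, k and the example sequence are arbitrary):

```python
def fixed_sample(seq, k, step):
    """Fixed sampling: keep the k-mer at every step-th position."""
    return {i: seq[i:i + k] for i in range(0, len(seq) - k + 1, step)}

def minimizer_sample(seq, k, w):
    """Minimizer sampling: from each window of w consecutive k-mers,
    keep the lexicographically smallest (ties broken leftmost)."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    picked = {}
    for start in range(len(kmers) - w + 1):
        window = kmers[start:start + w]
        j = min(range(w), key=lambda t: window[t])
        picked[start + j] = window[j]
    return picked

seq = "ACGTACGGTCA"
fixed = fixed_sample(seq, k=3, step=2)       # positions 0, 2, 4, 6, 8
mins = minimizer_sample(seq, k=3, w=3)       # adjacent windows often share a minimizer
```

    The key property driving the paper's comparison is visible even at this scale: fixed sampling picks positions by a rigid stride, while minimizer sampling picks sequence-dependent positions, which is what lets a query be sampled the same way as the database.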

  9. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Directory of Open Access Journals (Sweden)

    Meznah Almutairy

    Full Text Available Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.

  10. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    Science.gov (United States)

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  11. Architecture and energy; Arkitektur og energi

    Energy Technology Data Exchange (ETDEWEB)

    Marsh, R.; Grupe Larsen, V.; Lauring, M.; Christensen, Morten

    2006-07-01

    The aim of this book is to illustrate the interaction between architecture and energy in an overall perspective, starting from the new energy requirements. Architects make many form-related decisions early in the design process, and these have significant consequences for energy consumption. Furthermore, the new energy requirements start from an overall evaluation in which the architectural form is of decisive importance for minimizing energy consumption. The book focuses on four themes: a) daylighting, which plays a decisive part in our health and well-being inside buildings; b) solar heating: passive solar heating has traditionally played an important part in low-energy architecture; c) the building shell: the choice of materials can both increase and decrease a building's energy consumption; and d) technology: modern buildings use a number of energy-demanding installations, so the interaction between technology and energy is examined. (BA)

  12. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail

    2011-10-30

    A Laguerre minimal surface is an immersed surface in ℝ³ being an extremal of the functional ∫(H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces R(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to a graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.

  13. Treatment of Events Representing System Success in Accident Sequences in PSA Models with ET/FT Linking

    International Nuclear Information System (INIS)

    Vrbanic, I.; Spiler, J.; Mikulicic, V.; Simic, Z.

    2002-01-01

    Treatment of events that represent systems' successes in accident sequences is a well-known issue, associated primarily with PSA models that employ the event tree / fault tree (ET/FT) linking technique. Even though theoretically clear, its practical implementation and usage create a number of difficulties for certain PSA models regarding the correctness of results. Strict treatment of success events would require consistent application of De Morgan's laws. However, there are several problems related to this. First, Boolean resolution of the overall model, such as the one representing the occurrence of reactor core damage, becomes a very challenging task if De Morgan's rules are applied consistently at all levels. Even PSA tools of the newest generation have problems performing such a task in a reasonable time frame. The second potential issue is related to the presence of negated basic events in minimal cutsets. If all the basic events that result from strict application of De Morgan's rules are retained in the presentation of minimal cutsets, their readability and interpretability may be impaired severely. It is also worth noting that the concept of a minimal cutset is tied to equipment failures, rather than to successes. For reasons like these, various simplifications are employed in PSA models and tools when it comes to the treatment of success events in the sequences. This paper provides a discussion of the major concerns associated with the treatment of success events in the accident sequences of a typical PSA model. (author)
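    The De Morgan expansion the paper discusses can be sketched for a success event whose failure logic is given as a list of cutsets (a toy illustration with hypothetical event names, showing why negation multiplies the number of terms):

```python
from itertools import product

def negate_cutsets(cutsets):
    """De Morgan negation of a failure expression in disjunctive normal form.
    cutsets: iterable of collections of basic-event names (failure cutsets).
    NOT(OR of ANDs) = AND of ORs of negated literals; expanding back to DNF
    means picking one negated literal from each cutset, then applying the
    absorption law to keep only minimal terms."""
    terms = [frozenset("~" + e for e in combo) for combo in product(*cutsets)]
    minimal = [t for t in terms if not any(o < t for o in terms)]
    return sorted(set(minimal), key=sorted)

# system fails via {A, B} or {C}; its success is (~A or ~B) and ~C
neg = negate_cutsets([["A", "B"], ["C"]])
```

    Even this two-cutset example doubles the term count after negation, which is the combinatorial blow-up and the "negated basic events in minimal cutsets" readability problem described above.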

  14. Effectiveness of dynamic MRI for diagnosing pericicatricial minimal residual breast cancer following excisional biopsy

    International Nuclear Information System (INIS)

    Kawashima, Hiroko; Tawara, Mari; Suzuki, Masayuki; Matsui, Osamu; Kadoya, Masumi

    2001-01-01

    The purpose of this study was to investigate the effectiveness of dynamic MRI for diagnosing pericicatricial minimal residual breast cancer following excisional biopsy. Twenty-six patients who underwent excisional biopsy of a tumor or calcified lesion of the breast underwent gadolinium-enhanced dynamic MRI by the fat-saturated 2D fast spoiled gradient echo (SPGR) sequence (group 1), 24 patients by the spectral IR enhanced 3D fast gradient echo (Efgre3d) sequence (group 2). Pericicatricial residual cancer was confirmed histologically in 29 of the 50 patients. The overall sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of MRI for residual cancer diagnosis was 66, 81, 72, 83 and 63%. A nodular, thick and discontinuous enhanced rim around the scar is indicative of a residual tumor. However, false-positive findings due to granulation or proliferative fibrocystic change remain limitations

  15. Effectiveness of dynamic MRI for diagnosing pericicatricial minimal residual breast cancer following excisional biopsy

    Energy Technology Data Exchange (ETDEWEB)

    Kawashima, Hiroko E-mail: hirokok@med.kanazawa-u.ac.jp; Tawara, Mari; Suzuki, Masayuki; Matsui, Osamu; Kadoya, Masumi

    2001-10-01

    The purpose of this study was to investigate the effectiveness of dynamic MRI for diagnosing pericicatricial minimal residual breast cancer following excisional biopsy. Twenty-six patients who underwent excisional biopsy of a tumor or calcified lesion of the breast underwent gadolinium-enhanced dynamic MRI by the fat-saturated 2D fast spoiled gradient echo (SPGR) sequence (group 1), and 24 patients by the spectral IR enhanced 3D fast gradient echo (Efgre3d) sequence (group 2). Pericicatricial residual cancer was confirmed histologically in 29 of the 50 patients. The overall sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of MRI for residual cancer diagnosis were 66%, 81%, 72%, 83%, and 63%, respectively. A nodular, thick, and discontinuous enhanced rim around the scar is indicative of a residual tumor. However, false-positive findings due to granulation or proliferative fibrocystic change remain limitations.

  16. Minimization of power consumption during charging of superconducting accelerating cavities

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharyya, Anirban Krishna, E-mail: anirban.bhattacharyya@physics.uu.se; Ziemann, Volker; Ruber, Roger; Goryashko, Vitaliy

    2015-11-21

    The radio frequency cavities used to accelerate charged particle beams need to be charged to their nominal voltage, after which the beam can be injected into them. The standard procedure for such cavity filling is to use a step charging profile. However, during the initial stages of such a filling process, a substantial amount of the total energy is wasted in reflection for superconducting cavities because of their extremely narrow bandwidth. The paper presents a novel strategy to charge cavities that reduces total energy reflection. We use variational calculus to obtain an analytical expression for the optimal charging profile. The reflected and required energies and the generator peak power are compared between the charging schemes, and practical aspects (saturation, efficiency, and gain characteristics) of power sources (tetrodes, IOTs, and solid-state power amplifiers) are also considered and analysed. The paper presents a methodology to successfully identify the optimal charging scheme for different power sources to minimize the total energy requirement.

  17. Minimization of power consumption during charging of superconducting accelerating cavities

    International Nuclear Information System (INIS)

    Bhattacharyya, Anirban Krishna; Ziemann, Volker; Ruber, Roger; Goryashko, Vitaliy

    2015-01-01

    The radio frequency cavities used to accelerate charged particle beams need to be charged to their nominal voltage, after which the beam can be injected into them. The standard procedure for such cavity filling is to use a step charging profile. However, during the initial stages of such a filling process, a substantial amount of the total energy is wasted in reflection for superconducting cavities because of their extremely narrow bandwidth. The paper presents a novel strategy to charge cavities that reduces total energy reflection. We use variational calculus to obtain an analytical expression for the optimal charging profile. The reflected and required energies and the generator peak power are compared between the charging schemes, and practical aspects (saturation, efficiency, and gain characteristics) of power sources (tetrodes, IOTs, and solid-state power amplifiers) are also considered and analysed. The paper presents a methodology to successfully identify the optimal charging scheme for different power sources to minimize the total energy requirement.

  18. Quantum Field Theory with a Minimal Length Induced from Noncommutative Space

    International Nuclear Information System (INIS)

    Lin Bing-Sheng; Chen Wei; Heng Tai-Hua

    2014-01-01

    From the inspection of noncommutative quantum mechanics, we obtain an approximate equivalent relation for the energy dependence of the Planck constant in the noncommutative space, which means a minimal length of the space. We find that this relation is reasonable and it can inherit the main properties of the noncommutative space. Based on this relation, we derive the modified Klein-Gordon equation and Dirac equation. We investigate the scalar field and ϕ⁴ model and then quantum electrodynamics in our theory, and derive the corresponding Feynman rules. These results may be considered as reasonable approximations to those of noncommutative quantum field theory. Our theory also shows a connection between the space with a minimal length and the noncommutative space. (physics of elementary particles and fields)

  19. Global Analysis of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J

    2010-01-01

    Many properties of minimal surfaces are of a global nature, and this is already true for the results treated in the first two volumes of the treatise. Part I of the present book can be viewed as an extension of these results. For instance, the first two chapters deal with existence, regularity and uniqueness theorems for minimal surfaces with partially free boundaries. Here one of the main features is the possibility of 'edge-crawling' along free parts of the boundary. The third chapter deals with a priori estimates for minimal surfaces in higher dimensions and for minimizers of singular integ

  20. Minimal Surfaces for Hitchin Representations

    DEFF Research Database (Denmark)

    Li, Qiongling; Dai, Song

    2018-01-01

    In this paper, we investigate the properties of immersed minimal surfaces inside symmetric space associated to a subloci of Hitchin component: $q_n$ and $q_{n-1}$ case. First, we show that the pullback metric of the minimal surface dominates a constant multiple of the hyperbolic metric in the same conformal...... class and has a strong rigidity property. Secondly, we show that the immersed minimal surface is never tangential to any flat inside the symmetric space. As a direct corollary, the pullback metric of the minimal surface is always strictly negatively curved. In the end, we find a fully decoupled system...

  1. A cyclotron isotope production facility designed to maximize production and minimize radiation dose

    International Nuclear Information System (INIS)

    Dickie, W.J.; Stevenson, N.R.; Szlavik, F.F.

    1993-01-01

    Continuing increases in the nuclear medicine industry's requirements for cyclotron isotopes are increasing the demands placed on an aging stock of machines. In addition, with the 1990 ICRP recommendations in place, strict dose limits will be required, and this will affect the way these machines are operated. Recent advances in cyclotron design, combined with lessons learned from two decades of commercial production, mean that new facilities can achieve a substantial charge on target, low personnel dose, and minimal residual activation. An optimal facility would utilize a well-engineered variable-energy/high-current H⁻ cyclotron design, multiple beam extraction, and individual target caves. Materials would be selected to minimize activation and absorb neutrons. Equipment would be designed to minimize maintenance activities performed in high radiation fields. (orig.)

  2. Waste Minimization Improvements Achieved Through Six Sigma Analysis Result In Significant Cost Savings

    International Nuclear Information System (INIS)

    Mousseau, Jeffrey D.; Jansen, John R.; Janke, David H.; Plowman, Catherine M.

    2003-01-01

    Improved waste minimization practices at the Department of Energy's (DOE) Idaho National Engineering and Environmental Laboratory (INEEL) are leading to a 15% reduction in the generation of hazardous and radioactive waste. Bechtel BWXT Idaho, LLC (BBWI), the prime management and operations contractor at the INEEL, applied the Six Sigma improvement process to the INEEL Waste Minimization Program to review existing processes and define opportunities for improvement. Our Six Sigma analysis team, composed of an executive champion, a process owner, a black belt, a yellow belt, and technical and business team members, used this statistics-based process approach to analyze work processes and produced ten recommendations for improvement. Recommendations ranged from waste generator financial accountability for newly generated waste to enhanced employee recognition programs for waste minimization efforts. These improvements have now been implemented to reduce waste generation rates and are producing positive results.

  3. Minimal Webs in Riemannian Manifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen

    2008-01-01

    For a given combinatorial graph $G$ a {\\it geometrization} $(G, g)$ of the graph is obtained by considering each edge of the graph as a $1-$dimensional manifold with an associated metric $g$. In this paper we are concerned with {\\it minimal isometric immersions} of geometrized graphs $(G, g......)$ into Riemannian manifolds $(N^{n}, h)$. Such immersions we call {\\em{minimal webs}}. They admit a natural 'geometric' extension of the intrinsic combinatorial discrete Laplacian. The geometric Laplacian on minimal webs enjoys standard properties such as the maximum principle and the divergence theorems, which...... are of instrumental importance for the applications. We apply these properties to show that minimal webs in ambient Riemannian spaces share several analytic and geometric properties with their smooth (minimal submanifold) counterparts in such spaces. In particular we use appropriate versions of the divergence...

  4. Waste minimization handbook, Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

  5. Waste minimization handbook, Volume 1

    International Nuclear Information System (INIS)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

  6. Energy Efficient Sensor Scheduling with a Mobile Sink Node for the Target Tracking Application

    Directory of Open Access Journals (Sweden)

    Malin Premaratne

    2009-01-01

    Full Text Available Measurement losses adversely affect the performance of target tracking. The sensor network’s life span depends on how efficiently the sensor nodes consume energy. In this paper, we focus on minimizing the total energy consumed by the sensor nodes whilst avoiding measurement losses. Since transmitting data over a long distance consumes a significant amount of energy, a mobile sink node collects the measurements and transmits them to the base station. We assume that the default transmission range of the activated sensor node is limited and it can be increased to maximum range only if the mobile sink node is out-side the default transmission range. Moreover, the active sensor node can be changed after a certain time period. The problem is to select an optimal sensor sequence which minimizes the total energy consumed by the sensor nodes. In this paper, we consider two different problems depend on the mobile sink node’s path. First, we assume that the mobile sink node’s position is known for the entire time horizon and use the dynamic programming technique to solve the problem. Second, the position of the sink node is varied over time according to a known Markov chain, and the problem is solved by stochastic dynamic programming. We also present sub-optimal methods to solve our problem. A numerical example is presented in order to discuss the proposed methods’ performance.
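    The known-path case can be sketched as a small forward dynamic program over "which sensor is active in each period". Everything below (positions, the squared-distance energy model, the switching cost) is a hypothetical illustration of the formulation, not the paper's actual numbers:

```python
# Hypothetical setup: three fixed sensors, a known 4-step sink trajectory.
sensors = [(0.0, 0.0), (4.0, 0.0), (8.0, 0.0)]
sink_path = [(1.0, 1.0), (3.0, 1.0), (5.0, 1.0), (7.0, 1.0)]
SWITCH_COST = 0.5            # energy to hand over to a different sensor

def tx_energy(sensor, sink):
    # Free-space-like model: transmission energy grows with squared distance.
    return (sensor[0] - sink[0]) ** 2 + (sensor[1] - sink[1]) ** 2

S, T = len(sensors), len(sink_path)
# V[s] = minimum energy so far, given sensor s is active in the current period
V = [tx_energy(sensors[s], sink_path[0]) for s in range(S)]
for t in range(1, T):
    V = [min(V[p] + (SWITCH_COST if p != s else 0.0) for p in range(S))
         + tx_energy(sensors[s], sink_path[t]) for s in range(S)]
total_energy = min(V)   # optimal schedule follows the sink, switching twice
```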

  7. GeTe sequences in superlattice phase change memories and their electrical characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Ohyanagi, T., E-mail: ohyanagi@leap.or.jp; Kitamura, M.; Takaura, N. [Low-Power Electronics Association and Projects (LEAP), Onogawa 16-1, Tsukuba, Ibaraki 305-8569 (Japan); Araidai, M. [Department of Computational Science and Engineering, Graduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603 (Japan); Kato, S. [Graduate School of Pure and Applied Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8571 (Japan); Shiraishi, K. [Department of Computational Science and Engineering, Graduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603 (Japan); Graduate School of Pure and Applied Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8571 (Japan)

    2014-06-23

    We studied GeTe structures in superlattice phase change memories (superlattice PCMs) with a [GeTe/Sb{sub 2}Te{sub 3}] stacked structure by X-ray diffraction (XRD) analysis. We examined the electrical characteristics of superlattice PCMs with films deposited at different temperatures. It was found that XRD spectra differed between the films deposited at 200 °C and 240 °C; the differences corresponded to the differences in the GeTe sequences in the films. We applied first-principles calculations to calculate the total energy of three different GeTe sequences. The results showed the Ge-Te-Ge-Te sequence had the lowest total energy of the three and it was found that with this sequence the superlattice PCMs did not run.

  8. Energy and environment in the 21st century : minimizing climate change.

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    Energy demand and economic output are coupled. Both are expected to vastly increase in this century, driven primarily by the economic and population growth of the developing world. If the present reliance on carbon-based fuels as primary energy sources continues, average global temperatures are projected to rise between 3 °C and 6 °C. Limiting climate change will require reduction in greenhouse gas emissions far beyond the Kyoto commitments. Time scales and options, including nuclear, will be reviewed.

  9. Energy use in pig production: an examination of current Iowa systems.

    Science.gov (United States)

    Lammers, P J; Kenealy, M D; Kliebenstein, J B; Harmon, J D; Helmers, M J; Honeyman, M S

    2012-03-01

    This paper compares energy use for different pig production systems in Iowa, a leader in US swine production. Pig production systems include not only the growth and performance of the pigs, but also the supporting infrastructure of pig production. This supporting infrastructure includes swine housing, facility management, feedstuff provision, swine diets, and manure management. Six different facility type × diet formulation × cropping sequence scenarios were modeled and compared. The baseline system examined produces 15,600 pigs annually using confinement facilities and a corn-soybean cropping sequence. Diet formulations for the baseline system were corn-soybean meal diets that included the synthetic AA l-lysine and exogenous phytase. The baseline system represents the majority of current US pork production in the Upper Midwest, where most US swine are produced. This system was found to require 744.6 MJ per 136-kg market pig. An alternative system that uses bedded hoop barns for grow-finish pigs and gestating sows would require 3% less (720.8 MJ) energy per 136-kg market pig. When swine production systems were assessed, diet type and feed ingredient processing were the major influences on energy use, accounting for 61 and 79% of total energy in conventional and hoop barn-based systems, respectively. Improving feed efficiency and better matching the diet formulation with the thermal environment and genetic potential are thus key aspects of reducing energy use by pig production, particularly in a hoop barn-based system. The most energy-intensive aspect of provisioning pig feed is the production of synthetic N for crop production; thus, effectively recycling manure nutrients to cropland is another important avenue for future research. Almost 25% of energy use by a conventional farrow-to-finish pig production system is attributable to operation of the swine buildings. Developing strategies to minimize energy use for heating and ventilation of swine buildings while

  10. A DNA Structure-Based Bionic Wavelet Transform and Its Application to DNA Sequence Analysis

    Directory of Open Access Journals (Sweden)

    Fei Chen

    2003-01-01

    Full Text Available DNA sequence analysis is of great significance for increasing our understanding of genomic functions. An important task facing us is the exploration of hidden structural information stored in the DNA sequence. This paper introduces a DNA structure-based adaptive wavelet transform (WT), the bionic wavelet transform (BWT), for DNA sequence analysis. The symbolic DNA sequence can be separated into four channels of indicator sequences. An adaptive symbol-to-number mapping, determined from the structural feature of the DNA sequence, was introduced into the WT. It can adjust the weight value of each channel to maximise the useful energy distribution of the whole BWT output. The performance of the proposed BWT was examined by analysing synthetic and real DNA sequences. Results show that the BWT performs better than the traditional WT in presenting greater energy distribution. This new BWT method should be useful for the detection of latent structural features in future DNA sequence analysis.
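    The first step described above, splitting the symbolic sequence into four indicator channels, is straightforward to sketch (the per-channel adaptive weighting itself depends on structural features not detailed in the abstract):

```python
def indicator_channels(seq):
    """Split a symbolic DNA sequence into four 0/1 indicator sequences,
    one per base; these are the channels that the BWT then weights."""
    return {base: [1 if ch == base else 0 for ch in seq]
            for base in "ACGT"}

channels = indicator_channels("ACGTTGCA")
# Exactly one channel is 1 at every position of a clean A/C/G/T sequence.
```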

  11. Biomechanical energy harvesting: generating electricity during walking with minimal user effort.

    Science.gov (United States)

    Donelan, J M; Li, Q; Naing, V; Hoffer, J A; Weber, D J; Kuo, A D

    2008-02-08

    We have developed a biomechanical energy harvester that generates electricity during human walking with little extra effort. Unlike conventional human-powered generators that use positive muscle work, our technology assists muscles in performing negative work, analogous to regenerative braking in hybrid cars, where energy normally dissipated during braking drives a generator instead. The energy harvester mounts at the knee and selectively engages power generation at the end of the swing phase, thus assisting deceleration of the joint. Test subjects walking with one device on each leg produced an average of 5 watts of electricity, which is about 10 times that of shoe-mounted devices. The cost of harvesting-the additional metabolic power required to produce 1 watt of electricity-is less than one-eighth of that for conventional human power generation. Producing substantial electricity with little extra effort makes this method well-suited for charging powered prosthetic limbs and other portable medical devices.

  12. Adiabatic density perturbations and matter generation from the minimal supersymmetric standard model.

    Science.gov (United States)

    Enqvist, Kari; Kasuya, Shinta; Mazumdar, Anupam

    2003-03-07

    We propose that the inflaton is coupled to ordinary matter only gravitationally and that it decays into a completely hidden sector. In this scenario both baryonic and dark matter originate from the decay of a flat direction of the minimal supersymmetric standard model, which is shown to generate the desired adiabatic perturbation spectrum via the curvaton mechanism. The requirement that the energy density along the flat direction dominates over the inflaton decay products fixes the flat direction almost uniquely. The present residual energy density in the hidden sector is typically shown to be small.

  13. Heuristics for multiobjective multiple sequence alignment.

    Science.gov (United States)

    Abbasi, Maryam; Paquete, Luís; Pereira, Francisco B

    2016-07-15

    Aligning multiple sequences arises in many tasks in Bioinformatics. However, the alignments produced by the current software packages are highly dependent on the parameter setting, such as the relative importance of opening gaps with respect to the increase of similarity. Choosing only one parameter setting may introduce an undesirable bias into further steps of the analysis and give too simplistic interpretations. In this work, we reformulate multiple sequence alignment from a multiobjective point of view. The goal is to generate several sequence alignments that represent a trade-off between maximizing the substitution score and minimizing the number of indels/gaps in the sum-of-pairs score function. This trade-off gives the practitioner further information about the similarity of the sequences, from which she can analyse and choose the most plausible alignment. We introduce several heuristic approaches, based on local search procedures, that compute a set of sequence alignments which are representative of the trade-off between the two objectives (substitution score and indels). Several algorithm design options are discussed and analysed, with particular emphasis on the influence of the starting alignment and neighborhood search definitions on the overall performance. A perturbation technique is proposed to improve the local search, which provides a wide range of high-quality alignments. The proposed approach is tested experimentally on a wide range of instances. We performed several experiments with sequences obtained from the benchmark database BAliBASE 3.0. To evaluate the quality of the results, we calculate the hypervolume indicator of the set of score vectors returned by the algorithms. The results obtained allow us to identify reasonably good choices of parameters for our approach. Further, we compared our method in terms of correctly aligned pairs ratio and columns correctly aligned ratio with respect to reference alignments.
Experimental results show
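    The core of the bi-objective formulation is a non-dominance filter over candidate alignments scored as (substitution score to maximize, gap count to minimize). A minimal sketch with hypothetical score vectors:

```python
def pareto_front(alignments):
    """Keep the alignments that no other alignment dominates:
    dominance means substitution score at least as high AND gap count
    at least as low, with strict improvement in at least one objective."""
    def dominates(p, q):
        return p[0] >= q[0] and p[1] <= q[1] and p != q
    return [p for p in alignments
            if not any(dominates(q, p) for q in alignments)]

# (substitution score, gap count) for five hypothetical candidate alignments
candidates = [(10, 5), (12, 7), (8, 3), (12, 4), (9, 6)]
front = pareto_front(candidates)   # the trade-off set shown to the user
```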

  14. A plant pathology perspective of fungal genome sequencing.

    Science.gov (United States)

    Aylward, Janneke; Steenkamp, Emma T; Dreyer, Léanne L; Roets, Francois; Wingfield, Brenda D; Wingfield, Michael J

    2017-06-01

    The majority of plant pathogens are fungi and many of these adversely affect food security. This mini-review aims to provide an analysis of the plant pathogenic fungi for which genome sequences are publicly available, to assess their general genome characteristics, and to consider how genomics has impacted plant pathology. A list of sequenced fungal species was assembled, the taxonomy of all species verified, and the potential reason for sequencing each of the species considered. The genomes of 1090 fungal species are currently (October 2016) in the public domain and this number is rapidly rising. Pathogenic species comprised the largest category (35.5 %) and, amongst these, plant pathogens are predominant. Of the 191 plant pathogenic fungal species with available genomes, 61.3 % cause diseases on food crops, more than half of which are staple crops. The genomes of plant pathogens are slightly larger than those of other fungal species sequenced to date and they contain fewer coding sequences in relation to their genome size. Both of these factors can be attributed to the expansion of repeat elements. Sequenced genomes of plant pathogens provide blueprints from which potential virulence factors were identified and from which genes associated with different pathogenic strategies could be predicted. Genome sequences have also made it possible to evaluate the adaptability of pathogen genomes and the genomic regions that experience selection pressures. Some genomic patterns, however, remain poorly understood, and plant pathogen genomes alone are not sufficient to unravel complex pathogen-host interactions. Genomes, therefore, cannot replace experimental studies that can be complex and tedious. Ultimately, the most promising application lies in using fungal plant pathogen genomics to inform disease management and risk assessment strategies. This will ultimately minimize the risks of future disease outbreaks and assist in preparation for emerging pathogen outbreaks.

  15. Ternary-fragmentation-driving potential energies of 252Cf

    Science.gov (United States)

    Karthikraj, C.; Ren, Zhongzhou

    2017-12-01

    Within the framework of a simple macroscopic model, the ternary-fragmentation-driving potential energies of 252Cf are studied. In this work, all possible ternary-fragment combinations of 252Cf are generated by the use of atomic mass evaluation-2016 (AME2016) data and these combinations are minimized by using a two-dimensional minimization approach. This minimization process can be done in two ways: (i) with respect to proton numbers (Z1, Z2, Z3) and (ii) with respect to neutron numbers (N1, N2, N3) of the ternary fragments. In this paper, the driving potential energies for the ternary breakup of 252Cf are presented for both the spherical and deformed as well as the proton-minimized and neutron-minimized ternary fragments. From the proton-minimized spherical ternary fragments, we have obtained different possible ternary configurations with a minimum driving potential, in particular, the experimental expectation of Sn + Ni + Ca ternary fragmentation. However, the neutron-minimized ternary fragments exhibit a driving potential minimum in the true-ternary-fission (TTF) region as well. Further, the Q-value systematics of the neutron-minimized ternary fragments show larger values for the TTF fragments. From this, we have concluded that the TTF region fragments with the least driving potential and high Q-values have a strong possibility in the ternary fragmentation of 252Cf. Further, the role of ground-state deformations (β2, β3, β4, and β6) in the ternary breakup of 252Cf is also studied. The deformed ternary fragmentation, which involves Z3=12-19 fragments, possesses the driving potential minimum due to the larger oblate deformations. We also found that the ground-state deformations, particularly β2, strongly influence the driving potential energies and play a major role in determining the most probable fragment combinations in the ternary breakup of 252Cf.
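    The Q-value part of such a scan reduces to mass-excess bookkeeping: Q = Δ(parent) − Σ Δ(fragments). A sketch for the Sn + Ni + Ca split mentioned above; the mass-excess values (MeV) below are rough illustrative numbers, whereas the study takes them from the AME2016 evaluation:

```python
# Mass excesses in MeV -- approximate, for illustration only (the paper
# uses AME2016 data for all combinations).
mass_excess = {"252Cf": 76.03, "132Sn": -76.45, "72Ni": -54.23,
               "48Ca": -44.22}

def q_value(parent, fragments):
    """Energy released in a split: Q = delta(parent) - sum of fragment deltas."""
    return mass_excess[parent] - sum(mass_excess[f] for f in fragments)

# The split conserves mass and charge numbers: 132+72+48 = 252, 50+28+20 = 98.
q = q_value("252Cf", ["132Sn", "72Ni", "48Ca"])
```

A full scan would evaluate this over every (Z1, Z2, Z3) or (N1, N2, N3) partition and keep the minima of the driving potential.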

  16. Concavity Theorems for Energy Surfaces

    OpenAIRE

    Giraud, B. G.; Karataglidis, S.

    2011-01-01

    Concavity properties prevent the existence of significant landscapes in energy surfaces obtained by strict constrained energy minimizations. The inherent contradiction is due to fluctuations of collective coordinates. A solution to those fluctuations is given.

  17. Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.

    Science.gov (United States)

    Temel, Burcin; Mills, Greg; Metiu, Horia

    2008-03-27

    We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.
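    The minimum error idea can be sketched in a toy NumPy calculation: fix a component of the wave function (a stand-in for matching the asymptotic state) and least-squares minimize the residual of (H − E)ψ over the remaining components. The paper does this minimization with a preconditioned conjugate-gradient search in a Chebyshev basis; the random symmetric matrix and dense solve below are purely illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
H = (M + M.T) / 2            # toy Hermitian "Hamiltonian" (illustrative)
E = 0.5                      # desired scattering energy

A = H - E * np.eye(n)
psi_bc = np.zeros(n)
psi_bc[0] = 1.0              # enforced boundary/asymptotic component

# Minimize ||A (psi_bc + psi_free)||^2 over the free components; a dense
# least-squares solve stands in for the preconditioned CG minimization.
x, *_ = np.linalg.lstsq(A[:, 1:], -A @ psi_bc, rcond=None)
psi = np.concatenate(([1.0], x))
residual = np.linalg.norm(A @ psi)   # the minimized MEM error
```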

  18. ABI Base Recall: Automatic Correction and Ends Trimming of DNA Sequences.

    Science.gov (United States)

    Elyazghi, Zakaria; Yazouli, Loubna El; Sadki, Khalid; Radouani, Fouzia

    2017-12-01

    Automated DNA sequencers produce chromatogram files in ABI format. When viewing chromatograms, some ambiguities appear at various sites along the DNA sequences, because the program implemented in the sequencing machine and used to call bases cannot always precisely determine the right nucleotide, especially when it is represented by either a broad peak or a set of overlapping peaks. In such cases, a letter other than A, C, G, or T is recorded, most commonly N. Thus, DNA sequencing chromatograms need manual examination: checking for mis-calls and truncating the sequence when errors become too frequent. The purpose of this paper is to develop a program allowing the automatic correction of these ambiguities. This application is a Web-based program powered by Shiny and runs under the R platform for easy exploitation. As part of the interface, we added an automatic end-clipping option, alignment against reference sequences, and BLAST. To develop and test our tool, we collected several bacterial DNA sequences from different laboratories within Institut Pasteur du Maroc and performed both manual and automatic correction. A comparison between the two methods was carried out. As a result, we note that our program, ABI Base Recall, accomplishes good correction with high accuracy. Indeed, it increases the rate of identity and coverage and minimizes the number of mismatches and gaps; hence, it provides a solution to sequencing ambiguities and saves biologists' time and labor.
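    End trimming of the kind described above is commonly done with a sliding window from each end, clipping until ambiguity codes become rare. The heuristic below is a generic illustration, not the exact rule implemented in ABI Base Recall:

```python
def trim_ends(seq, window=10, max_ambiguous=2):
    """Clip sequence ends where 'N' calls are too frequent: from each end,
    advance until a window contains at most max_ambiguous Ns (illustrative
    heuristic, not the tool's actual algorithm)."""
    def first_good(s):
        for i in range(len(s) - window + 1):
            if s[i:i + window].count("N") <= max_ambiguous:
                return i
        return len(s)          # no acceptable window: trim everything
    start = first_good(seq)
    end = len(seq) - first_good(seq[::-1])
    return seq[start:end] if start < end else ""

clean = trim_ends("NNNNACGTACGTACGTNNNN", window=4, max_ambiguous=0)
```

Note that isolated interior Ns survive trimming; those are the cases the correction step, rather than the clipping step, has to resolve.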

  19. Robust Automatic Target Recognition via HRRP Sequence Based on Scatterer Matching

    Directory of Open Access Journals (Sweden)

    Yuan Jiang

    2018-02-01

    Full Text Available High resolution range profile (HRRP plays an important role in wideband radar automatic target recognition (ATR. In order to alleviate the sensitivity to clutter and target aspect, employing a sequence of HRRP is a promising approach to enhance the ATR performance. In this paper, a novel HRRP sequence-matching method based on singular value decomposition (SVD is proposed. First, the HRRP sequence is decoupled into the angle space and the range space via SVD, which correspond to the span of the left and the right singular vectors, respectively. Second, atomic norm minimization (ANM is utilized to estimate dominant scatterers in the range space and the Hausdorff distance is employed to measure the scatter similarity between the test and training data. Next, the angle space similarity between the test and training data is evaluated based on the left singular vector correlations. Finally, the range space matching result and the angle space correlation are fused with the singular values as weights. Simulation and outfield experimental results demonstrate that the proposed matching metric is a robust similarity measure for HRRP sequence recognition.
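
    The SVD decoupling and Hausdorff matching steps might be sketched as follows. The toy data are random, and a simple peak pick stands in for the atomic norm minimization used in the paper; names and sizes are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Toy HRRP sequence: rows = pulses (aspect angles), columns = range cells.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 64))        # stand-in for a measured HRRP sequence

# Step 1: SVD decouples the sequence into the angle space (left singular
# vectors U) and the range space (right singular vectors V^T).
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Step 2: treat dominant peaks of a right singular vector as scatterer
# positions (the paper estimates them with atomic norm minimization; a
# simple peak pick is used here as a stand-in).
def scatterers(v, k=5):
    idx = np.argsort(np.abs(v))[-k:]
    return np.sort(idx).astype(float).reshape(-1, 1)   # 2-D points for scipy

# Step 3: the (symmetric) Hausdorff distance measures scatterer-set similarity
# between test and training data.
def hausdorff(a, b):
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

d = hausdorff(scatterers(Vt[0]), scatterers(Vt[0]))    # identical sets
```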

  20. Dynamics of teleparallel dark energy

    International Nuclear Information System (INIS)

    Wei Hao

    2012-01-01

    Recently, Geng et al. proposed to allow a non-minimal coupling between quintessence and gravity in the framework of teleparallel gravity, motivated by the similar one in the framework of General Relativity (GR). They found that this non-minimally coupled quintessence in the framework of teleparallel gravity has a richer structure, and named it “teleparallel dark energy”. In the present work, we note that there might be a deep and unknown connection between teleparallel dark energy and Elko spinor dark energy. Motivated by this observation and the previous results of Elko spinor dark energy, we try to study the dynamics of teleparallel dark energy. We find that there exist only some dark-energy-dominated de Sitter attractors. Unfortunately, no scaling attractor has been found, even when we allow the possible interaction between teleparallel dark energy and matter. However, we note that w at the critical points is in agreement with observations (in particular, the fact that w=−1 independently of ξ is a great advantage).

  1. Constructal entransy dissipation minimization for 'volume-point' heat conduction

    International Nuclear Information System (INIS)

    Chen Lingen; Wei Shuhuan; Sun Fengrui

    2008-01-01

    The 'volume to point' heat conduction problem, which asks how to determine the optimal distribution of high-conductivity material through a given volume such that the heat generated at every point is transferred most effectively to the volume's boundary, has become the focus of attention in the current constructal theory literature. In general, the minimization of the maximum temperature difference in the volume is taken as the optimization objective. Recently, a new physical quantity, entransy, has been identified as a basis for optimizing heat transfer processes, by analogy between heat conduction and electrical conduction. Heat transfer analyses show that the entransy of an object describes its heat transfer ability, just as the electrical energy in a capacitor describes its charge transfer ability. Entransy dissipation occurs during heat transfer processes as a measure of the heat transfer irreversibility, with a dissipation-related thermal resistance. By taking the equivalent thermal resistance (which corresponds to the mean temperature difference), defined based on entransy dissipation and reflecting the average heat conduction effect, as the optimization objective, the 'volume to point' constructal problem is re-analysed and re-optimized in this paper. The constructal shape of the control volume with the best average heat conduction effect is deduced. For the elemental area and the first-order construct assembly, when the thermal current density in the high-conductivity link is linear with the length, the optimized shapes of the assembly based on the minimization of entransy dissipation are the same as those based on minimization of the maximum temperature difference, and the mean temperature difference is 2/3 of the maximum temperature difference. For the second- and higher-order construct assemblies, the thermal current densities in the high-conductivity link are not linear with the length, and the optimized shapes of the assembly based on the

  2. Phonon impedance matching: minimizing interfacial thermal resistance of thin films

    Science.gov (United States)

    Polanco, Carlos; Zhang, Jingjie; Ghosh, Avik

    2014-03-01

    The challenge to minimize interfacial thermal resistance is to allow a broad-band spectrum of phonons, with non-linear dispersion and well defined translational and rotational symmetries, to cross the interface. We explain how to minimize this resistance using a frequency dependent broadening matrix that generalizes the notion of acoustic impedance to the whole phonon spectrum, including symmetries. We show how to "match" two given materials by joining them with a single atomic layer, with a multilayer material, and with a graded superlattice. Atomic layer "matching" requires a layer with a mass close to the arithmetic mean (or spring constant close to the harmonic mean) to favor high frequency phonon transmission. For multilayer "matching," we want a material with a broadening close to the geometric mean to maximize transmission peaks. For graded superlattices, a continuous sequence of geometric means translates to an exponentially varying broadening that generates a wide-band antireflection coating for both the coherent and incoherent limits. Our results are supported by "first principles" calculations of thermal conductance for GaAs/GaxAl1-xAs/AlAs thin films using the Non-Equilibrium Green's Function formalism coupled with Density Functional Perturbation Theory. NSF-CAREER (QMHP 1028883), NSF-IDR (CBET 1134311), XSEDE.
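
    The three "matching" rules stated in the abstract (arithmetic-mean mass, harmonic-mean spring constant, geometric-mean graded broadening) can be written down directly. The helper names are ours; the formulas follow the means named in the abstract.

```python
def single_layer_mass(m1, m2):
    """Mass of a single 'matching' atomic layer between materials with
    atomic masses m1 and m2: close to the arithmetic mean, which favors
    high-frequency phonon transmission."""
    return (m1 + m2) / 2.0

def single_layer_spring(k1, k2):
    """Equivalent rule stated in terms of spring constants: close to the
    harmonic mean of k1 and k2."""
    return 2.0 * k1 * k2 / (k1 + k2)

def graded_broadenings(g1, g2, n):
    """n intermediate broadenings for a graded superlattice between
    broadenings g1 and g2: a chain of geometric means, i.e. an
    exponentially varying profile (wide-band antireflection coating)."""
    r = (g2 / g1) ** (1.0 / (n + 1))
    return [g1 * r**i for i in range(1, n + 1)]
```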

  3. Preparation of highly multiplexed small RNA sequencing libraries.

    Science.gov (United States)

    Persson, Helena; Søkilde, Rolf; Pirona, Anna Chiara; Rovira, Carlos

    2017-08-01

    MicroRNAs (miRNAs) are ~22-nucleotide-long small non-coding RNAs that regulate the expression of protein-coding genes by base pairing to partially complementary target sites, preferentially located in the 3´ untranslated region (UTR) of target mRNAs. The expression and function of miRNAs have been extensively studied in human disease, as well as the possibility of using these molecules as biomarkers for prognostication and treatment guidance. To identify and validate miRNAs as biomarkers, their expression must be screened in large collections of patient samples. Here, we develop a scalable protocol for the rapid and economical preparation of a large number of small RNA sequencing libraries using dual indexing for multiplexing. Combined with the use of off-the-shelf reagents, more samples can be sequenced simultaneously on large-scale sequencing platforms at a considerably lower cost per sample. Sample preparation is simplified by pooling libraries prior to gel purification, which allows for the selection of a narrow size range while minimizing sample variation. A comparison with publicly available data from benchmarking of miRNA analysis platforms showed that this method captures absolute and differential expression as effectively as commercially available alternatives.

  4. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail; Pottmann, Helmut; Grohs, Philipp

    2011-01-01

    A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫(H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces R(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ

  5. Free Energy in Introductory Physics

    Science.gov (United States)

    Prentis, Jeffrey J.; Obsniuk, Michael J.

    2016-01-01

    Energy and entropy are two of the most important concepts in science. For all natural processes where a system exchanges energy with its environment, the energy of the system tends to decrease and the entropy of the system tends to increase. Free energy is the special concept that specifies how to balance the opposing tendencies to minimize energy…
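
    The balance described above is conventionally expressed through the Helmholtz free energy (at constant temperature and volume):

```latex
F = U - TS, \qquad \Delta F = \Delta U - T\,\Delta S \le 0 \quad (T, V \text{ constant})
```

    At constant temperature, a process can proceed spontaneously even if the energy change ΔU is positive, provided the entropy term TΔS is large enough to make ΔF negative; minimizing F is exactly the balance of the two opposing tendencies.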

  6. Genomic Sequence Variation Markup Language (GSVML).

    Science.gov (United States)

    Nakaya, Jun; Kimura, Michio; Hiroi, Kaei; Ido, Keisuke; Yang, Woosung; Tanaka, Hiroshi

    2010-02-01

    With the aim of making good use of internationally accumulated genomic sequence variation data, which is increasing rapidly due to the explosive amount of genomic research at present, the development of an interoperable data exchange format and its international standardization are necessary. Genomic Sequence Variation Markup Language (GSVML) focuses on genomic sequence variation data and human health applications, such as gene-based medicine or pharmacogenomics. We developed GSVML through eight steps, based on use case analysis and domain investigations. By limiting the design scope to human health applications and genomic sequence variation, we attempted to eliminate ambiguity and to ensure practicability. We intended to satisfy the requirements derived from the use case analysis of human-based clinical genomic applications. Based on database investigations, we attempted to minimize the redundancy of the data format while maximizing its coverage. We also attempted to ensure interoperability with other Markup Languages for the exchange of omics data among various omics researchers and facilities. Interoperability with developing clinical standards, such as the Health Level Seven Genotype Information model, was analyzed. We developed the human health-oriented GSVML comprising variation data, direct annotation, and indirect annotation categories; the variation data category is required, while the direct and indirect annotation categories are optional. The annotation categories contain omics and clinical information and have internal relationships. In the design phase, we examined six use cases against three criteria for human health applications and 15 data elements against three criteria for data formats for genomic sequence variation data exchange. The data formats of five international SNP databases and six Markup Languages, and interoperability with the Health Level Seven Genotype Model, were investigated in terms of 317 items. 
GSVML was developed as

  7. Characterization of Request Sequences for List Accessing Problem and New Theoretical Results for MTF Algorithm

    OpenAIRE

    Mohanty, Rakesh; Sharma, Burle; Tripathy, Sasmita

    2011-01-01

    List Accessing Problem is a well studied research problem in the context of linear search. Input to the list accessing problem is an unsorted linear list of distinct elements along with a sequence of requests, where each request is an access operation on an element of the list. A list accessing algorithm reorganizes the list while processing a request sequence on the list in order to minimize the access cost. Move-To-Front algorithm has been proved to be the best performing list accessing onl...
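
    The Move-To-Front rule and its access cost can be sketched in a few lines. This uses the standard full-cost model (accessing the element at position i costs i), which is an assumption here since the abstract does not state the cost model.

```python
def mtf_cost(initial, requests):
    """Total access cost of Move-To-Front on an unsorted list: accessing
    the element at position i (1-indexed) costs i, after which the element
    is moved to the front of the list."""
    lst = list(initial)
    cost = 0
    for r in requests:
        i = lst.index(r)           # 0-indexed position of the request
        cost += i + 1              # full-cost model
        lst.insert(0, lst.pop(i))  # reorganize: move accessed element to front
    return cost
```

    For example, repeated requests to the same element cost the full position once and then 1 per access, which is the locality of reference that makes MTF competitive.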

  8. Living systems do not minimize free energy. Comment on "Answering Schrödinger's question: A free-energy formulation" by Maxwell James Dèsormeau Ramstead et al.

    Science.gov (United States)

    Martyushev, Leonid M.

    2018-03-01

    The paper [1] is certainly very useful and important for understanding living systems (e.g. the brain) as adaptive, self-organizing patterns. There is no need to enumerate all the advantages of the paper; they are obvious. The purpose of my brief comment is to discuss one issue which, as I see it, was not thought through by the authors well enough. As a consequence, their ideas do not find as wide a distribution as they otherwise could. This issue is related to the name selected for the principle forming the basis of their approach: the free-energy principle (FEP). According to sec. 2.1 [1]: "It asserts that all biological systems maintain their integrity by actively reducing the disorder or dispersion (i.e., entropy) of their sensory and physiological states by minimizing their variational free energy." Let us note that the authors suggested different names for the principle in their earlier works (an objective function, a function of the ensemble density encoded by the organism's configuration and the sensory data to which it is exposed, etc.), and explicitly and correctly mentioned that the free energy and entropy considered by them had nothing in common with the quantities employed in physics [2,3]. It is also obvious that the purely information-theoretic approach used by the authors with regard to the problems under study allows many other wordings and interpretations. In spite of this, in their recent papers as well as in the present paper, the authors specifically choose FEP. Apparently, this may be explained by the intent to additionally base their approach on the foundation of statistical thermodynamics and thereby to demonstrate the universality of the described method. However, this is exactly what might cause misunderstandings among physicists and consequently their rejection and ignoring of FEP. 
The physical analogy employed by the authors has the following fundamental inconsistencies: In physics, free energy is used to describe

  9. ATLAS Z Excess in Minimal Supersymmetric Standard Model

    International Nuclear Information System (INIS)

    Lu, Xiaochuan; Terada, Takahiro

    2015-06-01

    Recently the ATLAS collaboration reported a 3 sigma excess in the search for events containing a dilepton pair from a Z boson and large missing transverse energy. Although the excess is not yet sufficiently significant, it is tempting to explain it by a well-motivated model beyond the standard model. In this paper we study the possibility of explaining this excess within the minimal supersymmetric standard model (MSSM). In particular, we focus on MSSM spectra in which the sfermions are heavier than the gauginos and Higgsinos. We show that the excess can be explained by a reasonable MSSM mass spectrum.

  10. High energy KrCl electric discharge laser

    Science.gov (United States)

    Sze, Robert C.; Scott, Peter B.

    1981-01-01

    A high energy KrCl laser for producing coherent radiation at 222 nm. Output energies on the order of 100 mJ per pulse are produced utilizing a discharge excitation source to minimize formation of molecular ions, thereby minimizing absorption of laser radiation by the active medium. Additionally, HCl is used as a halogen donor, which undergoes a harpooning reaction with metastable Kr* to form KrCl.

  11. Size, Shape, and Sequence-Dependent Immunogenicity of RNA Nanoparticles

    Directory of Open Access Journals (Sweden)

    Sijin Guo

    2017-12-01

    Full Text Available RNA molecules have emerged as promising therapeutics. Like all other drugs, the safety profile and immune response are important criteria for drug evaluation. However, the literature on RNA immunogenicity has been controversial. Here, we used the approach of RNA nanotechnology to demonstrate that the immune response of RNA nanoparticles is size, shape, and sequence dependent. RNA triangle, square, pentagon, and tetrahedron with same shape but different sizes, or same size but different shapes were used as models to investigate the immune response. The levels of pro-inflammatory cytokines induced by these RNA nanoarchitectures were assessed in macrophage-like cells and animals. It was found that RNA polygons without extension at the vertexes were immune inert. However, when single-stranded RNA with a specific sequence was extended from the vertexes of RNA polygons, strong immune responses were detected. These immunostimulations are sequence specific, because some other extended sequences induced little or no immune response. Additionally, larger-size RNA square induced stronger cytokine secretion. 3D RNA tetrahedron showed stronger immunostimulation than planar RNA triangle. These results suggest that the immunogenicity of RNA nanoparticles is tunable to produce either a minimal immune response that can serve as safe therapeutic vectors, or a strong immune response for cancer immunotherapy or vaccine adjuvants.

  12. Optimizing Completion Time and Energy Consumption in a Bidirectional Relay Network

    DEFF Research Database (Denmark)

    Liu, Huaping; Sun, Fan; Thai, Chan

    2012-01-01

    While the requirement for minimal energy consumption is obvious, the shortest completion time is relevant when a multi-node network needs to reserve the wireless medium in order to carry out the data exchange among its nodes. The completion time and energy consumption required for multiple flows depend on the current channel realizations, the transmission methods used and, notably, the relation between the data sizes of the different source nodes. In this paper we investigate the shortest completion time and minimal energy consumption in a two-way relay wireless network. The system applies optimal time multiplexing of several known transmission methods, including one-way relaying and wireless network coding (WNC). We show that when the relay applies Amplify-and-Forward (AF), both minimizations are linear optimization problems. On the other hand, when the relay ...
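
    The linear-optimization structure of the completion-time problem can be illustrated with a toy time-sharing model; the rates and data sizes below are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy time-sharing model: each transmission method m, run for t_m seconds,
# delivers rate R[f, m] (bits/s) to flow f. Rates and data sizes are made up.
R = np.array([[2.0, 1.0, 0.0],    # flow 1 rate under each method
              [0.0, 1.5, 2.5]])   # flow 2 rate under each method
D = np.array([4.0, 6.0])          # data sizes to deliver (bits)

# Shortest completion time: minimize sum(t) subject to R @ t >= D, t >= 0.
# linprog uses A_ub @ t <= b_ub, so the delivery constraint is negated.
res = linprog(c=np.ones(3), A_ub=-R, b_ub=-D, bounds=[(0, None)] * 3)
t = res.x                         # optimal time share per method
```

    Minimal energy consumption has the same form with per-method powers in the objective, which is why both minimizations are linear programs.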

  13. Y-12 Plant waste minimization strategy

    International Nuclear Information System (INIS)

    Kane, M.A.

    1987-01-01

    The 1984 Amendments to the Resource Conservation and Recovery Act (RCRA) mandate that waste minimization be a major element of hazardous waste management. In response to this mandate and the increasing costs for waste treatment, storage, and disposal, the Oak Ridge Y-12 Plant developed a waste minimization program to encompass all types of wastes. Thus, waste minimization has become an integral part of the overall waste management program. Unlike traditional approaches, waste minimization focuses on controlling waste at the beginning of production instead of the end. This approach includes: (1) substituting nonhazardous process materials for hazardous ones, (2) recycling or reusing waste effluents, (3) segregating nonhazardous waste from hazardous and radioactive waste, and (4) modifying processes to generate less waste or less toxic waste. An effective waste minimization program must provide the appropriate incentives for generators to reduce their waste and provide the necessary support mechanisms to identify opportunities for waste minimization. This presentation focuses on the Y-12 Plant's strategy to implement a comprehensive waste minimization program. This approach consists of four major program elements: (1) promotional campaign, (2) process evaluation for waste minimization opportunities, (3) waste generation tracking system, and (4) information exchange network. The presentation also examines some of the accomplishments of the program and issues which need to be resolved

  14. Minimal open strings

    International Nuclear Information System (INIS)

    Hosomichi, Kazuo

    2008-01-01

    We study FZZT-branes and open string amplitudes in (p, q) minimal string theory. We focus on the simplest boundary changing operators in two-matrix models, and identify the corresponding operators in worldsheet theory through the comparison of amplitudes. Along the way, we find a novel linear relation among FZZT boundary states in minimal string theory. We also show that the boundary ground ring is realized on physical open string operators in a very simple manner, and discuss its use for perturbative computation of higher open string amplitudes.

  15. Minimal Composite Inflation

    DEFF Research Database (Denmark)

    Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco

    2011-01-01

    We investigate models in which the inflaton emerges as a composite field of a four dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity...

  16. BioWord: A sequence manipulation suite for Microsoft Word

    Directory of Open Access Journals (Sweden)

    Anzaldi Laura J

    2012-06-01

    Full Text Available Abstract Background The ability to manipulate, edit and process DNA and protein sequences has rapidly become a necessary skill for practicing biologists across a wide swath of disciplines. In spite of this, most everyday sequence manipulation tools are distributed across several programs and web servers, sometimes requiring installation and typically involving frequent switching between applications. To address this problem, here we have developed BioWord, a macro-enabled self-installing template for Microsoft Word documents that integrates an extensive suite of DNA and protein sequence manipulation tools. Results BioWord is distributed as a single macro-enabled template that self-installs with a single click. After installation, BioWord will open as a tab in the Office ribbon. Biologists can then easily manipulate DNA and protein sequences using a familiar interface and minimize the need to switch between applications. Beyond simple sequence manipulation, BioWord integrates functionality ranging from dyad search and consensus logos to motif discovery and pair-wise alignment. Written in Visual Basic for Applications (VBA as an open source, object-oriented project, BioWord allows users with varying programming experience to expand and customize the program to better meet their own needs. Conclusions BioWord integrates a powerful set of tools for biological sequence manipulation within a handy, user-friendly tab in a widely used word processing software package. The use of a simple scripting language and an object-oriented scheme facilitates customization by users and provides a very accessible educational platform for introducing students to basic bioinformatics algorithms.

  17. BioWord: A sequence manipulation suite for Microsoft Word

    Science.gov (United States)

    2012-01-01

    Background The ability to manipulate, edit and process DNA and protein sequences has rapidly become a necessary skill for practicing biologists across a wide swath of disciplines. In spite of this, most everyday sequence manipulation tools are distributed across several programs and web servers, sometimes requiring installation and typically involving frequent switching between applications. To address this problem, here we have developed BioWord, a macro-enabled self-installing template for Microsoft Word documents that integrates an extensive suite of DNA and protein sequence manipulation tools. Results BioWord is distributed as a single macro-enabled template that self-installs with a single click. After installation, BioWord will open as a tab in the Office ribbon. Biologists can then easily manipulate DNA and protein sequences using a familiar interface and minimize the need to switch between applications. Beyond simple sequence manipulation, BioWord integrates functionality ranging from dyad search and consensus logos to motif discovery and pair-wise alignment. Written in Visual Basic for Applications (VBA) as an open source, object-oriented project, BioWord allows users with varying programming experience to expand and customize the program to better meet their own needs. Conclusions BioWord integrates a powerful set of tools for biological sequence manipulation within a handy, user-friendly tab in a widely used word processing software package. The use of a simple scripting language and an object-oriented scheme facilitates customization by users and provides a very accessible educational platform for introducing students to basic bioinformatics algorithms. PMID:22676326

  18. BioWord: a sequence manipulation suite for Microsoft Word.

    Science.gov (United States)

    Anzaldi, Laura J; Muñoz-Fernández, Daniel; Erill, Ivan

    2012-06-07

    The ability to manipulate, edit and process DNA and protein sequences has rapidly become a necessary skill for practicing biologists across a wide swath of disciplines. In spite of this, most everyday sequence manipulation tools are distributed across several programs and web servers, sometimes requiring installation and typically involving frequent switching between applications. To address this problem, here we have developed BioWord, a macro-enabled self-installing template for Microsoft Word documents that integrates an extensive suite of DNA and protein sequence manipulation tools. BioWord is distributed as a single macro-enabled template that self-installs with a single click. After installation, BioWord will open as a tab in the Office ribbon. Biologists can then easily manipulate DNA and protein sequences using a familiar interface and minimize the need to switch between applications. Beyond simple sequence manipulation, BioWord integrates functionality ranging from dyad search and consensus logos to motif discovery and pair-wise alignment. Written in Visual Basic for Applications (VBA) as an open source, object-oriented project, BioWord allows users with varying programming experience to expand and customize the program to better meet their own needs. BioWord integrates a powerful set of tools for biological sequence manipulation within a handy, user-friendly tab in a widely used word processing software package. The use of a simple scripting language and an object-oriented scheme facilitates customization by users and provides a very accessible educational platform for introducing students to basic bioinformatics algorithms.

  19. Minimal abdominal incisions

    Directory of Open Access Journals (Sweden)

    João Carlos Magi

    2017-04-01

    Full Text Available Minimally invasive procedures aim to resolve the disease with minimal trauma to the body, resulting in a rapid return to activities and in reductions of infection, complications, costs and pain. Minimally incised laparotomy, sometimes referred to as minilaparotomy, is an example of such minimally invasive procedures. The aim of this study is to demonstrate the feasibility and utility of laparotomy with minimal incision based on the literature and exemplifying with a case. The case in question describes reconstruction of the intestinal transit with the use of this incision. Male, young, HIV-positive patient in a late postoperative of ileotiflectomy, terminal ileostomy and closing of the ascending colon by an acute perforating abdomen, due to ileocolonic tuberculosis. The barium enema showed a proximal stump of the right colon near the ileostomy. The access to the cavity was made through the orifice resulting from the release of the stoma, with a lateral-lateral ileo-colonic anastomosis with a 25 mm circular stapler and manual closure of the ileal stump. These surgeries require their own tactics, such as rigor in the lysis of adhesions, tissue traction, and hemostasis, in addition to requiring surgeon dexterity – but without the need for investments in technology; moreover, the learning curve is reported as being lower than that for videolaparoscopy. Laparotomy with minimal incision should be considered as a valid and viable option in the treatment of surgical conditions.

  20. Phenomenology of R-parity violating minimal supergravity

    International Nuclear Information System (INIS)

    Bernhardt, M.A.

    2008-02-01

    We investigate in detail the low-energy spectrum of the P₆-violating minimal supergravity model using the SOFTSUSY spectrum code. We impose the experimental constraints from the measurement of the anomalous magnetic moment of the muon (g−2)_μ, the b→sγ decay, the branching ratio of B_s→μ⁺μ⁻, as well as the mass bounds from direct searches at colliders, in particular for the Higgs boson and the lightest chargino. We focus on regions where the lightest neutralino is not the lightest supersymmetric particle (LSP). In these regions of parameter space, either the lightest scalar tau or one of the sneutrinos is the LSP. We suggest four benchmark points with typical spectra and novel collider signatures, which we investigate with a parton-level Monte-Carlo simulation. We give an outlook for their detailed phenomenological analysis and simulation by the LHC collaborations, then including detector effects. In addition, we discuss a full Monte-Carlo simulation for single slepton production in association with a single top quark via an LQD-type operator at the hadron colliders LHC and Tevatron. We present these results and show a predicted range of detectability for this process for small couplings in various minimal supergravity models at the LHC. (orig.)

  1. Phenomenology of R-parity violating minimal supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Bernhardt, M.A.

    2008-02-15

    We investigate in detail the low-energy spectrum of the P₆-violating minimal supergravity model using the SOFTSUSY spectrum code. We impose the experimental constraints from the measurement of the anomalous magnetic moment of the muon (g−2)_μ, the b→sγ decay, the branching ratio of B_s→μ⁺μ⁻, as well as the mass bounds from direct searches at colliders, in particular for the Higgs boson and the lightest chargino. We focus on regions where the lightest neutralino is not the lightest supersymmetric particle (LSP). In these regions of parameter space, either the lightest scalar tau or one of the sneutrinos is the LSP. We suggest four benchmark points with typical spectra and novel collider signatures, which we investigate with a parton-level Monte-Carlo simulation. We give an outlook for their detailed phenomenological analysis and simulation by the LHC collaborations, then including detector effects. In addition, we discuss a full Monte-Carlo simulation for single slepton production in association with a single top quark via an LQD-type operator at the hadron colliders LHC and Tevatron. We present these results and show a predicted range of detectability for this process for small couplings in various minimal supergravity models at the LHC. (orig.)

  2. Policies and programs for sustainable energy innovations renewable energy and energy efficiency

    CERN Document Server

    Kim, Jisun; Iskin, Ibrahim; Taha, Rimal; Blommestein, Kevin

    2015-01-01

    This volume features research and case studies across a variety of industries to showcase technological innovations and policy initiatives designed to promote renewable energy and sustainable economic development. The first section focuses on policies for the adoption of renewable energy technologies, the second section covers the evaluation of energy efficiency programs, and the final section provides evaluations of energy technology innovations. Environmental concerns, energy availability, and political pressure have prompted governments to look for alternative energy resources that can minimize the undesirable effects of current energy systems. For example, shifting away from conventional fuel resources and increasing the percentage of electricity generated from renewable resources, such as solar and wind power, is an opportunity to guarantee lower CO2 emissions and to create better economic opportunities for citizens in the long run. The book includes discussions of such timely topics and issues as global...

  3. Improving Energy Efficiency In Thermal Oil Recovery Surface Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Murthy Nadella, Narayana

    2010-09-15

    Thermal oil recovery methods such as Cyclic Steam Stimulation (CSS), Steam Assisted Gravity Drainage (SAGD) and In-situ Combustion are being used for recovering heavy oil and bitumen. These processes expend energy to recover oil. The process design of the surface facilities requires optimization to improve the efficiency of oil recovery by minimizing the energy consumption per barrel of oil produced. Optimization involves minimizing external energy use by heat integration. This paper discusses the unit processes and design methodology considering thermodynamic energy requirements and heat integration methods to improve energy efficiency in the surface facilities. A design case study is presented.
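    The heat integration mentioned above is commonly formulated with pinch analysis. The sketch below shows the problem-table cascade, which computes the minimum external heating and cooling duties for a set of hot and cold process streams; the stream data and the minimum approach temperature are illustrative assumptions, not values from the paper.

    ```python
    # Pinch-analysis problem-table sketch (illustrative stream data, not
    # from the paper): compute minimum hot/cold utility duties.

    DT_MIN = 10.0  # minimum approach temperature, degC (assumed)

    # (supply T, target T, heat-capacity flowrate CP in kW/degC)
    hot_streams = [(250.0, 40.0, 0.15), (200.0, 80.0, 0.25)]
    cold_streams = [(20.0, 180.0, 0.20), (140.0, 230.0, 0.30)]

    def min_utilities(hot, cold, dt_min):
        # Shift hot streams down and cold streams up by dt_min/2.
        shifted = [(ts - dt_min / 2, tt - dt_min / 2, cp, "hot")
                   for ts, tt, cp in hot]
        shifted += [(ts + dt_min / 2, tt + dt_min / 2, cp, "cold")
                    for ts, tt, cp in cold]
        temps = sorted({t for s in shifted for t in s[:2]}, reverse=True)
        # Net heat surplus (hot minus cold) in each temperature interval.
        surpluses = []
        for hi, lo in zip(temps, temps[1:]):
            net_cp = 0.0
            for ts, tt, cp, kind in shifted:
                top, bot = max(ts, tt), min(ts, tt)
                if top >= hi and bot <= lo:  # stream spans the interval
                    net_cp += cp if kind == "hot" else -cp
            surpluses.append(net_cp * (hi - lo))
        # Cascade the surpluses; the largest deficit sets the hot utility.
        cascade, running = [0.0], 0.0
        for s in surpluses:
            running += s
            cascade.append(running)
        q_hot = -min(cascade)           # minimum external heating, kW
        q_cold = cascade[-1] + q_hot    # minimum external cooling, kW
        return q_hot, q_cold

    q_hot, q_cold = min_utilities(hot_streams, cold_streams, DT_MIN)
    print(q_hot, q_cold)
    ```

    For these assumed streams the cascade gives the utility targets directly; any design meeting them recovers the maximum heat internally, which is the sense in which heat integration minimizes external energy use per barrel.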

  4. Principles of light energy management

    Science.gov (United States)

    Davis, N.

    1994-01-01

    Six methods used to minimize excess energy effects associated with lighting systems for plant growth chambers are reviewed in this report. The energy associated with wall transmission and chamber operating equipment and the experimental requirements, such as fresh air and internal equipment, are not considered here. Only the energy associated with providing and removing the energy for lighting is considered.

  5. On Finite Interquark Potential in D=3 Driven by a Minimal Length

    International Nuclear Information System (INIS)

    Gaete, Patricio

    2014-01-01

    We address the effect of a quantum gravity induced minimal length on a physical observable for three-dimensional Yang-Mills. Our calculation is done within stationary perturbation theory. Interestingly enough, we find an ultraviolet finite interaction energy, which contains a regularized logarithmic function and a linear confining potential. This result highlights the role played by the new quantum of length in our discussion

  6. Waste minimization -- Hanford's strategy for sustainability

    Energy Technology Data Exchange (ETDEWEB)

    Merry, D.S.

    1998-01-30

    The Hanford Site cleanup activity is an immense and challenging undertaking, which includes characterization and decommissioning of 149 single-shell storage tanks, treating waste stored in 28 double-shell tanks, safely disposing of over 2,100 metric tons of spent nuclear fuel stored onsite, removing thousands of structures, and dealing with significant solid waste, groundwater, and land restoration issues. The Pollution Prevention/Waste Minimization (P2/WMin) Program supports the Hanford Site mission to safely clean up and manage legacy waste and to develop and deploy science and technology in many ways. One such way is through implementing and documenting over 231 waste reduction projects during the past five years, resulting in over $93 million in cost savings/avoidances. These savings/avoidances allowed other high priority cleanup work to be performed. Another way is by exceeding the Secretary of Energy's waste reduction goals over two years ahead of schedule, thus reducing the amount of waste to be stored, treated and disposed. Six key elements are the foundation for these sustained P2/WMin results.

  7. Free Energy Self-Averaging in Protein-Sized Random Heteropolymers

    International Nuclear Information System (INIS)

    Chuang, Jeffrey; Grosberg, Alexander Yu.; Kardar, Mehran

    2001-01-01

    Current theories of heteropolymers are inherently macroscopic, but are applied to mesoscopic proteins. To compute the free energy over sequences, one assumes self-averaging -- a property established only in the macroscopic limit. By enumerating the states and energies of compact 18, 27, and 36mers on a lattice with an ensemble of random sequences, we test the self-averaging approximation. We find that fluctuations in the free energy between sequences are weak, and that self-averaging is valid at the scale of real proteins. The results validate sequence design methods which exponentially speed up computational design and simplify experimental realizations
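    The self-averaging question above can be illustrated with a toy random-energy calculation (an illustration of the concept only, not the lattice enumeration used in the paper): for each "sequence" draw the energies of its compact conformations at random, compute the free energy, and ask how much the free energy fluctuates from sequence to sequence as the number of conformations grows.

    ```python
    # Toy random-energy illustration of self-averaging: the relative
    # sequence-to-sequence fluctuation of the free energy shrinks as the
    # number of conformations (a proxy for chain length) grows.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 1.0

    def free_energy(n_states):
        """F = -T ln Z for one random sequence with n_states conformations."""
        energies = rng.normal(0.0, np.sqrt(np.log(n_states)), size=n_states)
        m = energies.min()
        # log-sum-exp for numerical stability: F = m - T ln sum exp(-(E-m)/T)
        return m - T * np.log(np.sum(np.exp(-(energies - m) / T)))

    rels = []
    for n_states in (10**2, 10**3, 10**4, 10**5):
        fs = np.array([free_energy(n_states) for _ in range(100)])
        rels.append(fs.std() / abs(fs.mean()))
        print(n_states, rels[-1])  # relative fluctuation shrinks with size
    ```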

  8. Mapping membrane activity in undiscovered peptide sequence space using machine learning.

    Science.gov (United States)

    Lee, Ernest Y; Fulan, Benjamin M; Wong, Gerard C L; Ferguson, Andrew L

    2016-11-29

    There are ∼1,100 known antimicrobial peptides (AMPs), which permeabilize microbial membranes but have diverse sequences. Here, we develop a support vector machine (SVM)-based classifier to investigate ⍺-helical AMPs and the interrelated nature of their functional commonality and sequence homology. The SVM is used to search the undiscovered peptide sequence space and identify Pareto-optimal candidates that simultaneously maximize the distance σ from the SVM hyperplane (thus maximizing "antimicrobialness") and ⍺-helicity, while minimizing mutational distance to known AMPs. By calibrating SVM machine-learning results with killing assays and small-angle X-ray scattering (SAXS), we find that the SVM metric σ correlates not with a peptide's minimum inhibitory concentration (MIC), but rather with its ability to generate negative Gaussian membrane curvature. This surprising result provides a topological basis for the membrane activity common to AMPs. Moreover, we highlight an important distinction between the maximal recognizability of a sequence to a trained AMP classifier (its ability to generate membrane curvature) and its maximal antimicrobial efficacy. As mutational distances from known AMPs increase, we find AMP-like sequences that are increasingly difficult for nature to discover via simple mutation. Using the sequence map as a discovery tool, we find an unexpectedly diverse taxonomy of sequences that are just as membrane-active as known AMPs, but with a broad range of primary functions distinct from AMP functions, including endogenous neuropeptides, viral fusion proteins, topogenic peptides, and amyloids. The SVM classifier is useful as a general detector of membrane activity in peptide sequences.
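    The core scoring idea, ranking a candidate peptide by its signed distance σ from an SVM hyperplane, can be sketched as follows. The toy sequences, labels, and the simple amino-acid-composition features are placeholder assumptions; the published classifier was trained on curated AMP data with physicochemical descriptors.

    ```python
    # Sketch: score peptides by signed distance from a linear SVM
    # hyperplane. Training data and features are illustrative only.
    import numpy as np
    from sklearn.svm import SVC

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def composition(seq):
        """20-dim amino-acid composition vector (a deliberately simple feature)."""
        return np.array([seq.count(a) / len(seq) for a in AA])

    # Toy training set: label 1 = "AMP-like" (cationic), 0 = decoy (anionic/neutral).
    train_seqs = ["KKLLKKLLKK", "KWKLFKKIGA", "GGGGSSGGGG", "DDEEDDEEDD"]
    labels = [1, 1, 0, 0]
    X = np.array([composition(s) for s in train_seqs])

    clf = SVC(kernel="linear").fit(X, labels)

    # sigma: signed decision value, proportional to the distance from the
    # hyperplane; larger means "more AMP-like" under this toy model.
    candidate = "KKLAKKLAKK"
    sigma = clf.decision_function([composition(candidate)])[0]
    print(sigma)
    ```

    A Pareto search as in the abstract would then trade σ off against helicity and mutational distance rather than ranking by σ alone.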

  9. PATACSDB—the database of polyA translational attenuators in coding sequences

    Directory of Open Access Journals (Sweden)

    Malgorzata Habich

    2016-02-01

    Recent additions to the repertoire of gene expression regulatory mechanisms are polyadenylate (polyA) tracks encoding poly-lysine runs in protein sequences. Such tracks stall the translation apparatus and induce frameshifting independently of the effects of the charged nascent poly-lysine sequence on the ribosome exit channel. As such, they substantially influence the stability of mRNA and the amount of protein produced from a given transcript. Single-base changes in these regions are enough to exert a measurable response in both protein and mRNA abundance; this makes each of these sequences a potentially interesting case study for the effects of synonymous mutation, gene dosage balance and natural frameshifting. Here we present PATACSDB, a resource that contains a comprehensive list of polyA tracks from over 250 eukaryotic genomes. Our data are based on the Ensembl genomic database of coding sequences, filtered with the 12A-1 algorithm, which selects polyA tracks with a minimal length of 12 A's, allowing for one mismatched base. The PATACSDB database is accessible at: http://sysbio.ibb.waw.pl/patacsdb. The source code is available at http://github.com/habich/PATACSDB, and it includes the scripts with which the database can be recreated.
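    The 12A-1 selection rule described above (a track of at least 12 bases, at most one of which is not an A) can be sketched as a sliding-window scan. The function name is illustrative, not the actual PATACSDB implementation.

    ```python
    # Sketch of the "12A-1" rule: does a coding sequence contain a window
    # of >= 12 bases with at most one non-A base?

    def has_polya_track(cds, min_len=12, max_mismatches=1):
        """Return True if `cds` contains a window of `min_len` bases
        with at most `max_mismatches` non-A bases."""
        cds = cds.upper()
        if len(cds) < min_len:
            return False
        # Count non-A bases in the first window, then slide.
        mismatches = sum(1 for b in cds[:min_len] if b != "A")
        if mismatches <= max_mismatches:
            return True
        for i in range(min_len, len(cds)):
            mismatches += (cds[i] != "A") - (cds[i - min_len] != "A")
            if mismatches <= max_mismatches:
                return True
        return False

    print(has_polya_track("CCC" + "AAAAAAGAAAAA" + "CCC"))  # one G in 12 -> True
    print(has_polya_track("AAAAAGAAAAGA"))                  # two mismatches -> False
    ```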

  10. Modeling and Simulation of Smart Energy Systems

    DEFF Research Database (Denmark)

    Connolly, David; Lund, Henrik; Mathiesen, Brian Vad

    2015-01-01

    At a global level, it is essential that the world transfers from fossil fuels to renewable energy resources to minimize the implications of climate change, which has been clearly demonstrated by the Intergovernmental Panel on Climate Change (IPCC, 2007a). At a national level, for most countries, the transition to renewable energy will improve energy security of supply, create new jobs, enhance trade, and consequently grow the national economy. However, even with such promising consequences, renewable energy only provided approximately 13% of the world's energy in 2007 (International Energy Agency, 2009a). Results are presented on individual technologies and complete energy system strategies, which outline how it is possible to reach a 100% renewable energy system in the coming decades.

  11. An optimum analysis sequence for environmental gamma-ray spectrometry

    International Nuclear Information System (INIS)

    De la Torre, F.; Rios M, C.; Ruvalcaba A, M. G.; Mireles G, F.; Saucedo A, S.; Davila R, I.; Pinedo, J. L.

    2010-10-01

    This work aims to obtain an optimum analysis sequence for environmental gamma-ray spectrometry by means of Genie 2000 (Canberra). Twenty different analysis sequences were customized using different peak-area percentages and different algorithms for 1) peak finding and 2) peak-area determination, with or without the use of a library (based on evaluated nuclear data) of common gamma-ray emitters in environmental samples. The use of an optimum analysis sequence with certified nuclear information avoids the problems originated by the significant variations in out-of-date nuclear parameters of commercial software libraries. Interference-free gamma-ray energies with absolute emission probabilities greater than 3.75% were included in the customized library. The gamma-ray spectrometry system (based on a Ge Re-3522 Canberra detector) was calibrated both in energy and shape by means of the IAEA-2002 reference spectra for software intercomparison, and the IAEA-2002 reference spectrum was used to test the performance of the analysis sequences. The z-score and reduced χ² criteria were used to determine the optimum analysis sequence. The results show an appreciable variation in the peak-area determinations and their corresponding uncertainties. In particular, the combination of the second-derivative peak-locate algorithm with simple peak-area integration provides the greatest accuracy; lower accuracy comes from the combination of the library-directed peak-locate algorithm and Genie's Gamma-M peak-area determination. (Author)
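    The two figures of merit named above can be sketched as follows: a per-peak z-score against certified reference areas, and a reduced χ² aggregated over all evaluated peaks. The numerical values are illustrative, not the IAEA-2002 data.

    ```python
    # Sketch of z-score and reduced chi-squared for comparing peak areas
    # from one analysis sequence against reference values (toy numbers).
    import numpy as np

    def z_scores(measured, reference, sigma_ref):
        """Per-peak z-score: (measured - reference) / reference uncertainty."""
        return (measured - reference) / sigma_ref

    def reduced_chi2(measured, reference, sigma_meas):
        """Chi-squared per evaluated peak (n is used as the divisor here,
        a simplification of degrees of freedom)."""
        chi2 = np.sum(((measured - reference) / sigma_meas) ** 2)
        return chi2 / len(measured)

    measured = np.array([1020.0, 498.0, 251.0])   # peak areas from one sequence
    reference = np.array([1000.0, 500.0, 250.0])  # certified reference areas
    sigma = np.array([10.0, 5.0, 2.5])            # 1-sigma uncertainties

    print(z_scores(measured, reference, sigma))
    print(reduced_chi2(measured, reference, sigma))
    ```

    A sequence whose z-scores stay within ±2 and whose reduced χ² is close to 1 reproduces the reference areas within their stated uncertainties.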

  12. Minimizing the water and air impacts of unconventional energy extraction

    Science.gov (United States)

    Jackson, R. B.

    2014-12-01

    Unconventional energy generates income and, done well, can reduce air pollution compared to other fossil fuels and even water use compared to fossil fuels and nuclear energy. Alternatively, it could slow the adoption of renewables and, done poorly, release toxic chemicals into water and air. Based on research to date, some primary threats to water resources come from surface spills, wastewater disposal, and drinking-water contamination through poor well integrity. For air resources, an increase in volatile organic compounds and air toxics locally is a potential health threat, but the switch from coal to natural gas for electricity generation will reduce sulfur, nitrogen, mercury, and particulate pollution regionally. Critical needs for future research include data for 1) estimated ultimate recovery (EUR) of unconventional hydrocarbons; 2) the potential for further reductions of water requirements and chemical toxicity; 3) whether unconventional resource development alters the frequency of well-integrity failures; 4) potential contamination of surface and ground waters from drilling and spills; and 5) the consequences of greenhouse gases and air pollution on ecosystems and human health.

  13. Hazardous waste minimization at Oak Ridge National Laboratory during 1987

    International Nuclear Information System (INIS)

    Kendrick, C.M.

    1988-03-01

    Oak Ridge National Laboratory (ORNL) is a multipurpose research and development facility owned and operated by the Department of Energy (DOE) and managed under subcontract by Martin Marietta Energy Systems, Inc. Its primary role is the support of energy technology through applied research and engineering development and scientific research in basic and physical sciences. ORNL also is a valuable resource in the solution of problems of national importance, such as nuclear and chemical waste management. In addition, useful radioactive and stable isotopes which are unavailable from the private sector are produced at ORNL. A formal hazardous waste minimization program for ORNL was launched in mid-1985 in response to the requirements of Section 3002 of the Resource Conservation and Recovery Act (RCRA). The plan for waste minimization has been modified several times and continues to be dynamic. During 1986, a task plan was developed. The six major tasks include: planning and implementation of a laboratory-wide chemical inventory and the subsequent distribution, treatment, storage, and/or disposal (TSD) of unneeded chemicals; establishment and implementation of a system for distributing surplus chemicals to other (internal and external) organizations; training and communication functions necessary to inform and motivate laboratory personnel; evaluation of current procurement and tracking systems for hazardous materials and recommendation and implementation of improvements; systematic review of applicable current and proposed ORNL procedures and ongoing and proposed activities for waste volume and/or toxicity reduction potential; and establishment of criteria by which to measure progress and reporting of significant achievements. Progress is being made toward completing these tasks and is described in this report. 13 refs., 1 fig., 7 tabs

  14. Mitochondrial DNA D-loop sequence variation among 5 maternal lines of the Zemaitukai horse breed

    Directory of Open Access Journals (Sweden)

    E. Gus Cothran

    2005-12-01

    Genetic variation in Zemaitukai horses was investigated using mitochondrial DNA (mtDNA) sequencing. The study was performed on 421 bp of the mtDNA control region, which is known to be more variable than other sections of the mitochondrial genome. Samples from each of the remaining maternal family lines of Zemaitukai horses, and three random samples each for other Lithuanian (Lithuanian Heavy Draught, large-type Zemaitukai) and ten European horse breeds, were sequenced. Five distinct haplotypes were obtained for the five Zemaitukai maternal families, supporting the pedigree data. In the Zemaitukai breed, the minimal difference between two sequence haplotypes was 6 nucleotides and the maximal 11. A total of 20 nucleotide differences relative to the reference sequence were found in Lithuanian horse breeds. Genetic cluster analysis did not show any clear pattern of relationship among breeds of different type.
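    The haplotype-difference counts above rest on a simple pairwise comparison of aligned control-region sequences, which can be sketched as below. The toy sequences are illustrative, not Zemaitukai haplotypes.

    ```python
    # Sketch: count differing positions between two aligned mtDNA sequences.

    def pairwise_differences(seq_a, seq_b):
        """Number of differing positions between two aligned, equal-length
        sequences (i.e. the Hamming distance)."""
        if len(seq_a) != len(seq_b):
            raise ValueError("sequences must be aligned to equal length")
        return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

    hap1 = "ACGTACGTAC"
    hap2 = "ACGAACGTGC"
    print(pairwise_differences(hap1, hap2))  # -> 2
    ```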

  15. Optimization at different loads by minimization of irreversibilities

    International Nuclear Information System (INIS)

    Wong, K.F.V.; Niu, Z.

    1991-01-01

    This paper reports that the irreversibility of the power cycle was chosen as the objective function, as this function can successfully measure both the quality and quantity of energy flow in the cycle. Minimization of the irreversibility ensures that the power cycle will operate more efficiently. One feature of the present work is that the boiler, turbine, condenser and heaters are treated as one system for the purpose of optimization. The optimization model uses nine regression formulae obtained from measured test data. The results show that the model can represent the effect of operational parameters on the plant's first- and second-law efficiencies, and some of the results can be used to guide the optimal operation of the power plant. When the power cycle works at full load, the main steam temperature and pressure should be at their upper limits for minimal irreversibility of the system; if the load is less than 65% of design capacity, the steam temperature and pressure should be decreased for lower irreversibility.
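    The optimization set-up described above can be sketched as a bound-constrained minimization of irreversibility over main steam temperature and pressure. The quadratic surrogate below is an assumed stand-in for the paper's nine regression formulae (which are not reproduced), chosen so that the optimum sits at the upper limits at full load and moves inside the bounds at part load, mimicking the paper's conclusion.

    ```python
    # Sketch: minimize a cycle-irreversibility surrogate over steam T and p.
    # The surrogate and all numbers are illustrative assumptions.
    from scipy.optimize import minimize

    T_BOUNDS = (480.0, 540.0)   # main steam temperature, degC (assumed)
    P_BOUNDS = (12.0, 16.5)     # main steam pressure, MPa (assumed)

    def irreversibility(x, load_fraction):
        """Toy surrogate: irreversibility (MW) vs steam T, p at a given load."""
        T, p = x
        # Unconstrained optimum drifts with load; at full load it lies
        # beyond the upper bounds, so the constrained optimum is at them.
        T_opt = 460.0 + 90.0 * load_fraction
        p_opt = 11.0 + 6.0 * load_fraction
        return 120.0 + 0.002 * (T - T_opt) ** 2 + 0.5 * (p - p_opt) ** 2

    results = {}
    for load in (1.0, 0.6):
        res = minimize(irreversibility, x0=[500.0, 14.0], args=(load,),
                       bounds=[T_BOUNDS, P_BOUNDS])
        results[load] = res.x
        print(load, res.x)
    ```

    At full load the optimizer pins T and p to their upper bounds; at 60% load the minimizer settles at an interior point with lower steam conditions.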

  16. Gauss–Bonnet cosmology with induced gravity and a non-minimally coupled scalar field on the brane

    International Nuclear Information System (INIS)

    Nozari, Kourosh; Fazlpour, Behnaz

    2008-01-01

    We construct a cosmological model with a non-minimally coupled scalar field on the brane, where Gauss–Bonnet and induced gravity effects are taken into account. This model has 5D character at both high and low energy limits but reduces to 4D gravity for intermediate scales. While induced gravity is a manifestation of the IR limit of the model, the Gauss–Bonnet term and non-minimal coupling of the scalar field and induced gravity are essentially related to the UV limit of the scenario. We study the cosmological implications of this scenario focusing on the late time behavior of the solutions. In this setup, non-minimal coupling plays the role of an additional fine-tuning parameter that controls the initial density of the predicted finite density big bang. Also, non-minimal coupling has important implications for the bouncing nature of the solutions

  17. Minimal Flavour Violation and Beyond

    CERN Document Server

    Isidori, Gino

    2012-01-01

    We review the formulation of the Minimal Flavour Violation (MFV) hypothesis in the quark sector, as well as some "variations on a theme" based on smaller flavour symmetry groups and/or less minimal breaking terms. We also review how these hypotheses can be tested in B decays and by means of other flavour-physics observables. The phenomenological consequences of MFV are discussed both in general terms, employing a general effective theory approach, and in the specific context of the Minimal Supersymmetric extension of the SM.

  18. Frameshift mutations in infectious cDNA clones of Citrus tristeza virus: a strategy to minimize the toxicity of viral sequences to Escherichia coli

    International Nuclear Information System (INIS)

    Satyanarayana, Tatineni; Gowda, Siddarame; Ayllon, Maria A.; Dawson, William O.

    2003-01-01

    The advent of reverse genetics revolutionized the study of positive-stranded RNA viruses that were amenable for cloning as cDNAs into high-copy-number plasmids of Escherichia coli. However, some viruses are inherently refractory to cloning in high-copy-number plasmids due to toxicity of viral sequences to E. coli. We report a strategy that is a compromise between infectivity of the RNA transcripts and toxicity to E. coli, effected by introducing frameshift mutations into 'slippery sequences' near the viral 'toxicity sequences' in the viral cDNA. Citrus tristeza virus (CTV) has cDNA sequences that are toxic to E. coli. The original full-length infectious cDNA of CTV and a derivative replicon, CTV-ΔCla, cloned into pUC119, resulted in unusually limited E. coli growth. However, upon sequencing of these cDNAs, an additional uridinylate (U) was found in a stretch of U's between nts 3726 and 3731 that resulted in a change to a reading frame with a stop codon at nt 3734. Yet, in vitro produced RNA transcripts from these clones infected protoplasts, and the resulting progeny virus was repaired. Correction of the frameshift mutation in the CTV cDNA constructs resulted in increased infectivity of in vitro produced RNA transcripts, but also caused a substantial increase of toxicity to E. coli, now requiring 3 days to develop visible colonies. Frameshift mutations created in sequences not suspected to facilitate reading frame shifting and silent mutations introduced into oligo(U) regions resulted in complete loss of infectivity, suggesting that the oligo(U) region facilitated the repair of the frameshift mutation. Additional frameshift mutations introduced into other oligo(U) regions also resulted in transcripts with reduced infectivity similarly to the original clones with the +1 insertion. However, only the frameshift mutations introduced into oligo(U) regions that were near and before the toxicity region improved growth and stability in E. coli. These data demonstrate that ...
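    The effect of the +1 insertion described above can be sketched on a toy sequence: adding one extra base inside a homopolymer run shifts the downstream reading frame so that a premature stop codon appears. The sequence below is illustrative, not the CTV cDNA, and the cDNA analogue of the oligo(U) run is an oligo(T) run.

    ```python
    # Sketch: a +1 insertion in an oligo(T) run creates an in-frame stop
    # codon downstream (toy sequence, not the CTV cDNA).

    STOP_CODONS = {"TAA", "TAG", "TGA"}

    def first_stop(seq, start=0):
        """Index of the first in-frame stop codon from `start`, or None."""
        for i in range(start, len(seq) - 2, 3):
            if seq[i:i + 3] in STOP_CODONS:
                return i
        return None

    wild_type = "ATGGCT" + "TTTTTT" + "AAGGCC"  # ATG GCT TTT TTT AAG GCC: no stop
    assert first_stop(wild_type) is None

    # +1 insertion anywhere inside the T run yields the same mutant string.
    run = wild_type.index("TTTTTT")
    mutant = wild_type[:run] + "T" + wild_type[run:]
    print(first_stop(mutant))  # the shifted frame now reads ... TTT TAA ...
    ```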

  19. Mixed waste and waste minimization: The effect of regulations and waste minimization on the laboratory

    International Nuclear Information System (INIS)

    Dagan, E.B.; Selby, K.B.

    1993-08-01

    The Hanford Site is located in the State of Washington and is subject to state and federal environmental regulations that hamper waste minimization efforts. This paper addresses the negative effect of these regulations on waste minimization and on mixed waste issues related to the Hanford Site, as well as the prospects for the regulations becoming more lenient. In addition to field operations, the Hanford Site is home to the Pacific Northwest Laboratory, which has many ongoing waste minimization activities of particular interest to laboratories.

  20. Mortality-minimizing sandpipers vary stopover behavior dependent on age and geographic proximity to migrating predators

    NARCIS (Netherlands)

    Hope, D.D.; Lank, D.B.; Ydenberg, R.C.

    2014-01-01

    Ecological theory for long-distance avian migration considers time-, energy-, and mortality-minimizing tactics, but predictions about the latter have proven elusive. Migrants must make behavioral decisions that can favor either migratory speed or safety from predators, but often not both. We compare