WorldWideScience

Sample records for optimal cytoplasmatic density

  1. Optimization of Barron density estimates

    Czech Academy of Sciences Publication Activity Database

    Vajda, Igor; van der Meulen, E. C.

    2001-01-01

    Vol. 47, No. 5 (2001), pp. 1867-1883. ISSN 0018-9448. R&D Projects: GA ČR GA102/99/1137. Grant - others: Copernicus (XE) 579. Institutional research plan: AV0Z1075907. Keywords: Barron estimator; chi-square criterion; density estimation. Subject RIV: BD - Theory of Information. Impact factor: 2.077, year: 2001

  2. Optimal stocking densities of snails [ Archachatina marginata ...

    African Journals Online (AJOL)

    Optimal stocking densities of breeding and fattening snails [Archachatina marginata saturalis, A.m.s. (Swainson)] were determined through two experiments (five treatments, four replicates and a randomised complete block design each) between April and December 1998. Experiment 1 had 3, 6, 12, 17 and 22 A.m.s. adult ...

  3. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.; Franek, M.; Schonlieb, C.-B.

    2012-01-01

    … for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations …

  4. Dielectric Screening Meets Optimally Tuned Density Functionals.

    Science.gov (United States)

    Kronik, Leeor; Kümmel, Stephan

    2018-04-17

    A short overview of recent attempts at merging two independently developed methods is presented. These are the optimal tuning of a range-separated hybrid (OT-RSH) functional, developed to provide an accurate first-principles description of the electronic structure and optical properties of gas-phase molecules, and the polarizable continuum model (PCM), developed to provide an approximate but computationally tractable description of a solvent in terms of an effective dielectric medium. After a brief overview of the OT-RSH approach, its combination with the PCM as a potentially accurate yet low-cost approach to the study of molecular assemblies and solids, particularly in the context of photocatalysis and photovoltaics, is discussed. First, solvated molecules are considered, with an emphasis on the challenge of balancing eigenvalue and total energy trends. Then, it is shown that the same merging of methods can also be used to study the electronic and optical properties of molecular solids, with a similar discussion of the pros and cons. Tuning of the effective scalar dielectric constant as one recent approach that mitigates some of the difficulties in merging the two approaches is considered. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Local-scaling density-functional method: Intraorbit and interorbit density optimizations

    International Nuclear Information System (INIS)

    Koga, T.; Yamamoto, Y.; Ludena, E.V.

    1991-01-01

    The recently proposed local-scaling density-functional theory provides us with a practical method for the direct variational determination of the electron density function ρ(r). The structure of "orbits," which ensures the one-to-one correspondence between the electron density ρ(r) and the N-electron wave function Ψ({r_k}), is studied in detail. For the realization of the local-scaling density-functional calculations, procedures for intraorbit and interorbit optimizations of the electron density function are proposed. These procedures are numerically illustrated for the helium atom in its ground state at the beyond-Hartree-Fock level.

  6. Optimal Bandwidth Selection for Kernel Density Functionals Estimation

    Directory of Open Access Journals (Sweden)

    Su Chen

    2015-01-01

    The choice of bandwidth is crucial to kernel density estimation (KDE) and kernel-based regression. Various bandwidth selection methods for KDE and local least-squares regression have been developed in the past decade. It is known that scale and location parameters are proportional to density functionals ∫γ(x)f²(x)dx with an appropriate choice of γ(x), and furthermore that equality-of-scale and location tests can be transformed to comparisons of the density functionals among populations. ∫γ(x)f²(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, optimal bandwidth selection for the KDFE of ∫γ(x)f²(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f²(x)dx are provided: normal-scale bandwidth selection (namely, the “Rule of Thumb”) and direct plug-in bandwidth selection. Simulation studies show that the proposed bandwidth selection methods are superior to existing density estimation bandwidth selection methods in estimating density functionals.
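
    As a rough illustration of the quantities in this abstract (not the authors' selector), the sketch below estimates the density functional ∫f²(x)dx with a Gaussian kernel and uses Silverman's normal-scale "rule of thumb" bandwidth as a stand-in; the sample data and the choice of bandwidth rule are assumptions.

    ```python
    # A minimal sketch: estimating theta = ∫ f(x)^2 dx with a Gaussian kernel,
    # using Silverman's normal-scale ("rule of thumb") bandwidth as a stand-in
    # for the paper's KDFE-specific selector.
    import numpy as np

    def normal_scale_bandwidth(x):
        """Silverman's rule of thumb: h = 1.06 * sigma_hat * n^(-1/5)."""
        n = len(x)
        return 1.06 * np.std(x, ddof=1) * n ** (-1 / 5)

    def kdfe_f_squared(x, h=None):
        """Kernel estimate of ∫ f^2 dx = (1/(n^2 h)) * sum_ij (K*K)((x_i - x_j)/h),
        where K*K is the Gaussian kernel convolved with itself (an N(0, 2) density)."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        if h is None:
            h = normal_scale_bandwidth(x)
        u = (x[:, None] - x[None, :]) / h
        kk = np.exp(-u**2 / 4.0) / (2.0 * np.sqrt(np.pi))  # (K*K)(u) for Gaussian K
        return kk.sum() / (n**2 * h)

    rng = np.random.default_rng(0)
    sample = rng.normal(size=500)
    # True value for a standard normal: ∫ f^2 = 1/(2*sqrt(pi)) ≈ 0.2821
    print(kdfe_f_squared(sample))
    ```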

  7. Neutron density optimal control of A-1 reactor analogue model

    International Nuclear Information System (INIS)

    Grof, V.

    1975-01-01

    Two applications are described of the optimal control of a reactor analog model. Both cases consider the control of neutron density. Control loops containing the on-line controlled process, the reactor of the first Czechoslovak nuclear power plant A-1, are simulated on an analog computer. Two versions of the optimal control algorithm are derived using modern control theory (Pontryagin's maximum principle, the calculus of variations, and Kalman's estimation theory), the minimum time performance index, and the quadratic performance index. The results of the optimal control analysis are compared with the A-1 reactor conventional control. (author)

  8. Length scale and manufacturability in density-based topology optimization

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Wang, Fengwen; Sigmund, Ole

    2016-01-01

    Since its original introduction in structural design, density-based topology optimization has been applied to a number of other fields such as microelectromechanical systems, photonics, acoustics and fluid mechanics. The methodology has been well accepted in industrial design processes where it can provide competitive designs in terms of cost, materials and functionality under a wide set of constraints. However, the optimized topologies are often considered as conceptual due to loosely defined topologies and the need for postprocessing. Subsequent amendments can affect the optimized design …

  9. Maximum length scale in density based topology optimization

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Wang, Fengwen

    2017-01-01

    The focus of this work is on two new techniques for imposing maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of the classical density filtering, which can be viewed as a low-pass filter applied to the design parametrization. The main idea …

  10. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  11. Density-based penalty parameter optimization on C-SVM.

    Science.gov (United States)

    Liu, Yun; Lian, Jie; Bartolacci, Michael R; Zeng, Qing-An

    2014-01-01

    The support vector machine (SVM) is one of the most widely used approaches for data classification and regression. The SVM achieves the largest distance between the positive and negative support vectors, which neglects remote instances far away from the SVM interface. In order to avoid a position change of the SVM interface resulting from system outliers, C-SVM was introduced to decrease the influence of such outliers. Traditional C-SVM uses a uniform penalty parameter C for both positive and negative instances; however, according to their different proportions and data distributions, positive and negative instances should be given different weights for the penalty parameter of the error terms. Therefore, in this paper, we propose density-based penalty parameter optimization of C-SVM. The experimental results indicated that our proposed algorithm has outstanding performance with respect to both precision and recall.
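
    As a hedged illustration of the idea of class-dependent penalties (not the paper's density-based weighting), the sketch below uses scikit-learn's class_weight argument, which scales the penalty C per class; the dataset and the weight values are placeholders.

    ```python
    # A minimal sketch: giving the positive and negative classes different
    # penalty parameters C+ and C- in a soft-margin SVM. The per-class weights
    # here are arbitrary; the paper instead derives them from data density.
    from sklearn.svm import SVC
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=400, weights=[0.8, 0.2], random_state=0)

    # class_weight scales C per class: effective C_k = C * class_weight[k]
    clf = SVC(kernel="rbf", C=1.0, class_weight={0: 1.0, 1: 4.0})
    clf.fit(X, y)
    print(clf.score(X, y))
    ```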

  12. Optimization of power and energy densities in supercapacitors

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, David B. [Sandia National Laboratories, PO Box 969 MS 9291, Livermore, CA 94551 (United States)

    2010-06-01

    Supercapacitors use nanoporous electrodes to store large amounts of charge on their high surface areas, and use the ions in electrolytes to carry charge into the pores. Their high power density makes them a potentially useful complement to batteries. However, ion transport through long, narrow channels still limits power and efficiency in these devices. Proper design can mitigate this. Current collector geometry must also be considered once this is done. Here, De Levie's model for porous electrodes is applied to quantitatively predict device performance and to propose optimal device designs for given specifications. Effects unique to nanoscale pores are considered, including that pores may not have enough salt to fully charge. Supercapacitors are of value for electric vehicles, portable electronics, and power conditioning in electrical grids with distributed renewable sources, and that value will increase as new device fabrication methods are developed and proper design accommodates those improvements. Example design outlines for vehicle applications are proposed and compared. (author)
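
    The abstract applies De Levie's porous-electrode model; the sketch below evaluates the standard transmission-line impedance of a single pore under that model. The parameter values are illustrative assumptions, not taken from the paper.

    ```python
    # A minimal sketch of De Levie's transmission-line model for one pore of a
    # nanoporous electrode. Parameter values below are illustrative placeholders.
    import numpy as np

    def de_levie_impedance(freq_hz, r_per_m, c_per_m, pore_length_m):
        """Z(w) = sqrt(R/(jwC)) * coth(L*sqrt(jwRC)) for a single pore,
        with R, C the ionic resistance and double-layer capacitance per unit length."""
        w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
        gamma = np.sqrt(1j * w * r_per_m * c_per_m)      # propagation constant
        z0 = np.sqrt(r_per_m / (1j * w * c_per_m))       # characteristic impedance
        return z0 / np.tanh(gamma * pore_length_m)       # coth = 1/tanh

    freqs = np.logspace(-2, 4, 7)                        # 0.01 Hz .. 10 kHz
    z = de_levie_impedance(freqs, r_per_m=1e9, c_per_m=1e-7, pore_length_m=1e-6)
    for f, zi in zip(freqs, z):
        print(f"{f:10.2f} Hz  |Z| = {abs(zi):.3e} ohm")
    ```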

  13. Density Changes in the Optimized CSSX Solvent System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, D.D.

    2002-11-25

    Density increases in caustic-side solvent extraction (CSSX) solvent have been observed in separate experimental programs performed by different groups of researchers. Such changes indicate a change in chemical composition. Increased density adversely affects separation of solvent from denser aqueous solutions present in the CSSX process. Identification and control of factors affecting solvent density are essential for design and operation of the centrifugal contactors. The goals of this research were to identify the factors affecting solvent density (composition) and to develop correlations between easily measured solvent properties (density and viscosity) and the chemical composition of the solvent, which will permit real-time determination and adjustment of the solvent composition. In evaporation experiments, virgin solvent was subjected to evaporation under quiescent conditions at 25, 35, and 45 °C with continuously flowing dry air passing over the surface of the solvent. Density and viscosity were measured periodically, and chemical analysis was performed on the solvent samples. Chemical interaction tests were completed to determine if any chemical reaction takes place over extended contact time that changes the composition and/or physical properties. Solvent and simulant, solvent and strip solution, and solvent and wash solution were contacted continuously in agitated flasks. They were periodically sampled and the density measured (viscosity was also measured on some samples) and then submitted to the Chemical Sciences Division of Oak Ridge National Laboratory for analysis by nuclear magnetic resonance (NMR) spectrometry and high-performance liquid chromatography (HPLC) using the virgin solvent as the baseline. Chemical interaction tests showed that solvent densities and viscosities did not change appreciably during contact with simulant, strip, or wash solution. No effects on density and viscosity and no chemical changes in the solvent were noted within …

  14. Geometry optimization of molecules within an LCGTO local-density functional approach

    International Nuclear Information System (INIS)

    Mintmire, J.W.

    1990-01-01

    We describe our implementation of geometry optimization techniques within the linear combination of Gaussian-type orbitals (LCGTO) approach to local-density functional theory. The algorithm for geometry optimization is based on the evaluation of the gradient of the total energy with respect to internal coordinates within the local-density functional scheme. We present optimization results for a range of small molecules which serve as test cases for our approach.

  15. Density based topology optimization of turbulent flow heat transfer systems

    DEFF Research Database (Denmark)

    Dilgen, Sümer Bartug; Dilgen, Cetin Batur; Fuhrman, David R.

    2018-01-01

    The focus of this article is on topology optimization of heat sinks with turbulent forced convection. The goal is to demonstrate the extendibility and the scalability of a previously developed fluid solver to coupled multi-physics and large 3D problems. The gradients of the objective and the con… in the optimization process, while also demonstrating extension of the methodology to include coupling of heat transfer with turbulent flows.

  16. Iso-geometric shape optimization of magnetic density separators

    DEFF Research Database (Denmark)

    Dang Manh, Nguyen; Evgrafov, Anton; Gravesen, Jens

    2014-01-01

    Purpose: The waste recycling industry increasingly relies on magnetic density separators. These devices generate an upward magnetic force in ferro-fluids, allowing the immersed particles to be separated according to their mass density. Recently, a new separator design has been proposed that significantly reduces the required amount of permanent magnet material. The purpose of this paper is to alleviate the undesired end-effects in this design by altering the shape of the ferromagnetic covers of the individual poles. Design/methodology/approach: The paper represents the shape of the ferromagnetic pole …

  17. ICRF array module development and optimization for high power density

    International Nuclear Information System (INIS)

    Ryan, P.M.; Swain, D.W.

    1997-02-01

    This report describes the analysis and optimization of the proposed International Thermonuclear Experimental Reactor (ITER) antenna array for the ion cyclotron range of frequencies (ICRF). The objectives of this effort were to: (1) minimize the applied radiofrequency (rf) voltages occurring in vacuum by proper layout and shaping of components, and limit the component surfaces/volumes where the rf voltage is high; (2) study the effects of magnetic insulation, as applied to the current design; (3) provide electrical characteristics of the antenna for the development and analysis of tuning, arc detection/suppression, and systems for discriminating between arcs and edge-localized modes (ELMs); and (4) maintain a close interface with the mechanical design.

  18. Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts

    Science.gov (United States)

    Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo

    This paper demonstrates the successful printing and optimization of processing parameters of high-strength H13 tool steel by Selective Laser Melting (SLM). A D-optimal Design of Experiments (DOE) approach is used for parameter optimization of laser power, scanning speed and hatch width. With 50 test samples (1 × 1 × 1 cm) we establish parameter windows for these three parameters in relation to part density. The calculated numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model is constructed of the SLM process and validated by comparing the calculated densities retrieved from the model with the experimentally determined densities. With the simulation tool one can explore the effect of different parameters on density before making any printed samples. Establishing a parameter window provides the user with freedom for parameter selection, such as choosing parameters that result in the fastest print speed.
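
    For readers unfamiliar with the DOE step described above, the sketch below fits a quadratic response surface for relative density as a function of laser power, scan speed and hatch width. The data are synthetic placeholders and the model form is an assumption, not the authors' calculated model.

    ```python
    # A minimal sketch: quadratic response-surface fit of relative density vs.
    # (power, speed, hatch), as a DOE analysis might do. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    power = rng.uniform(150, 350, 50)      # W
    speed = rng.uniform(400, 1200, 50)     # mm/s
    hatch = rng.uniform(0.08, 0.14, 50)    # mm
    density = (99.5 - 1e-5 * (power - 280) ** 2
                    - 2e-6 * (speed - 800) ** 2
                    - 300 * (hatch - 0.11) ** 2
                    + rng.normal(0, 0.05, 50))   # synthetic relative density (%)

    # Design matrix with linear, quadratic and interaction terms
    X = np.column_stack([np.ones_like(power), power, speed, hatch,
                         power**2, speed**2, hatch**2,
                         power*speed, power*hatch, speed*hatch])
    coef, *_ = np.linalg.lstsq(X, density, rcond=None)

    # Predicted density at one candidate parameter set
    x = np.array([1, 280, 800, 0.11, 280**2, 800**2, 0.11**2,
                  280*800, 280*0.11, 800*0.11])
    print("predicted relative density (%):", x @ coef)
    ```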

  19. Osmosensation in vasopressin neurons: changing actin density to optimize function.

    Science.gov (United States)

    Prager-Khoutorsky, Masha; Bourque, Charles W

    2010-02-01

    The proportional relation between circulating vasopressin concentration and plasma osmolality is fundamental for body fluid homeostasis. Although changes in the sensitivity of this relation are associated with pathophysiological conditions, central mechanisms modulating osmoregulatory gain are unknown. Here, we review recent data that sheds important light on this process. The cell autonomous osmosensitivity of vasopressin neurons depends on cation channels comprising a variant of the transient receptor potential vanilloid 1 (TRPV1) channel. Hyperosmotic activation is mediated by a mechanical process where sensitivity increases in proportion with actin filament density. Moreover, angiotensin II amplifies osmotic activation by a rapid stimulation of actin polymerization, suggesting that neurotransmitter-induced changes in cytoskeletal organization in osmosensory neurons can mediate central changes in osmoregulatory gain. (c) 2009 Elsevier Ltd. All rights reserved.

  20. Research on Power Factor Correction Boost Inductor Design Optimization – Efficiency vs. Power Density

    DEFF Research Database (Denmark)

    Li, Qingnan; Andersen, Michael A. E.; Thomsen, Ole Cornelius

    2011-01-01

    Nowadays, efficiency and power density are the most important issues in the development of Power Factor Correction (PFC) converters. However, it is a challenge to reach both high efficiency and high power density in a system at the same time. In this paper, taking a Bridgeless PFC (BPFC) as an example, a useful compromise between efficiency and power density of the Boost inductors at 3.2 kW is achieved using an optimized design procedure. The experimental verifications based on the optimized inductor are carried out from 300 W to 3.2 kW at 220 V AC input.

  1. Optimized Design of Spacing in Pulsed Neutron Gamma Density Logging While Drilling

    Directory of Open Access Journals (Sweden)

    ZHANG Feng; HAN Zhong-yue; WU He; HAN Fei

    2016-10-01

    The radioactive source used in traditional density logging has a great impact on the environment, while the pulsed neutron source applied in the logging tool is safer and greener. In our country, pulsed neutron-gamma density logging technology is still under development. Optimizing the parameters of the neutron-gamma density instrument is essential to improve measurement accuracy. This paper mainly studied the effects of spacing on a typical neutron-gamma density logging tool, which included one D-T neutron generator and two gamma scintillation detectors. The optimization of spacing was based on measurement sensitivity and counting statistics. A short spacing of 25 to 35 cm and a long spacing of 60 to 65 cm were selected as the optimal positions for the near and far detectors, respectively. The results can provide theoretical support for the design and manufacture of the instrument.

  2. Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.

    Science.gov (United States)

    Tieng, Quang M; Vegh, Viktor; Brereton, Ian M

    2009-01-01

    An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy for obtaining globally optimal current density maps for the purpose of designing magnets with coaxial cylindrical coils in which the stored energy is minimized within a constrained domain is outlined. The current density maps obtained utilising the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where the adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs of positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.
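
    A minimum-stored-energy current map of this kind can be phrased as an equality-constrained quadratic program; the sketch below shows that generic formulation with placeholder matrices. It is a hedged illustration of the idea, not the authors' algorithm or data.

    ```python
    # A minimal sketch: minimize stored energy (1/2) J^T M J subject to field
    # constraints A J = b, where M is a symmetric positive-definite inductance
    # matrix and A maps loop currents to field values at target points.
    # M, A and b are random placeholders standing in for real EM matrices.
    import numpy as np

    rng = np.random.default_rng(0)
    n_loops, n_targets = 20, 5
    Q = rng.normal(size=(n_loops, n_loops))
    M = Q @ Q.T + n_loops * np.eye(n_loops)      # placeholder SPD "inductance" matrix
    A = rng.normal(size=(n_targets, n_loops))    # placeholder field-constraint matrix
    b = rng.normal(size=n_targets)               # target field values

    # Lagrangian solution: J = M^-1 A^T (A M^-1 A^T)^-1 b
    Minv_At = np.linalg.solve(M, A.T)
    J = Minv_At @ np.linalg.solve(A @ Minv_At, b)

    print("constraint residual:", np.linalg.norm(A @ J - b))
    print("stored energy      :", 0.5 * J @ M @ J)
    ```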

  3. Application of response surface methodology to optimize uranium biological leaching at high pulp density

    International Nuclear Information System (INIS)

    Fatemi, Faezeh; Arabieh, Masoud; Jahani, Samaneh

    2016-01-01

    The aim of the present study was to carry out uranium bioleaching via optimization of the leaching process using response surface methodology (RSM). For this purpose, a native Acidithiobacillus sp. was adapted to different pulp densities, followed by an optimization process carried out at high pulp density. Response surface methodology based on a Box-Behnken design was used to optimize the uranium bioleaching. The effects of six key parameters on the bioleaching efficiency were investigated. The process was modeled with a mathematical equation including not only first- and second-order terms, but also probable interaction effects between each pair of factors. The results showed that the extraction efficiency of uranium dropped from 100% at pulp densities of 2.5, 5, 7.5 and 10% to 68% at 12.5% pulp density. Using RSM, the optimum conditions for uranium bioleaching (12.5% (w/v)) were identified as pH = 1.96, temperature = 30.90 °C, stirring speed = 158 rpm, 15.7% inoculum, FeSO₄·7H₂O concentration of 13.83 g/L and (NH₄)₂SO₄ concentration of 3.22 g/L, which achieved 83% uranium extraction efficiency. The uranium bioleaching experiment using the optimized parameters showed 81% uranium extraction during 15 days. The obtained results reveal that RSM is reliable and appropriate for optimization of the parameters involved in the uranium bioleaching process.

  4. Application of response surface methodology to optimize uranium biological leaching at high pulp density

    Energy Technology Data Exchange (ETDEWEB)

    Fatemi, Faezeh; Arabieh, Masoud; Jahani, Samaneh [NSTRI, Tehran (Iran, Islamic Republic of). Nuclear Fuel Cycle Research School

    2016-08-01

    The aim of the present study was to carry out uranium bioleaching via optimization of the leaching process using response surface methodology (RSM). For this purpose, a native Acidithiobacillus sp. was adapted to different pulp densities, followed by an optimization process carried out at high pulp density. Response surface methodology based on a Box-Behnken design was used to optimize the uranium bioleaching. The effects of six key parameters on the bioleaching efficiency were investigated. The process was modeled with a mathematical equation including not only first- and second-order terms, but also probable interaction effects between each pair of factors. The results showed that the extraction efficiency of uranium dropped from 100% at pulp densities of 2.5, 5, 7.5 and 10% to 68% at 12.5% pulp density. Using RSM, the optimum conditions for uranium bioleaching (12.5% (w/v)) were identified as pH = 1.96, temperature = 30.90 °C, stirring speed = 158 rpm, 15.7% inoculum, FeSO₄·7H₂O concentration of 13.83 g/L and (NH₄)₂SO₄ concentration of 3.22 g/L, which achieved 83% uranium extraction efficiency. The uranium bioleaching experiment using the optimized parameters showed 81% uranium extraction during 15 days. The obtained results reveal that RSM is reliable and appropriate for optimization of the parameters involved in the uranium bioleaching process.

  5. Microscopically based energy density functionals for nuclei using the density matrix expansion. II. Full optimization and validation

    Science.gov (United States)

    Navarro Pérez, R.; Schunck, N.; Dyhdalo, A.; Furnstahl, R. J.; Bogner, S. K.

    2018-05-01

    Background: Energy density functional methods provide a generic framework to compute properties of atomic nuclei starting from models of nuclear potentials and the rules of quantum mechanics. Until now, the overwhelming majority of functionals have been constructed either from empirical nuclear potentials such as the Skyrme or Gogny forces, or from systematic gradient-like expansions in the spirit of the density functional theory for atoms. Purpose: We seek to obtain a usable form of the nuclear energy density functional that is rooted in the modern theory of nuclear forces. We thus consider a functional obtained from the density matrix expansion of local nuclear potentials from chiral effective field theory. We propose a parametrization of this functional carefully calibrated and validated on selected ground-state properties that is suitable for large-scale calculations of nuclear properties. Methods: Our energy functional comprises two main components. The first component is a non-local functional of the density and corresponds to the direct part (Hartree term) of the expectation value of local chiral potentials on a Slater determinant. Contributions to the mean field and the energy of this term are computed by expanding the spatial, finite-range components of the chiral potential onto Gaussian functions. The second component is a local functional of the density and is obtained by applying the density matrix expansion to the exchange part (Fock term) of the expectation value of the local chiral potential. We apply the UNEDF2 optimization protocol to determine the coupling constants of this energy functional. Results: We obtain a set of microscopically constrained functionals for local chiral potentials from leading order up to next-to-next-to-leading order with and without three-body forces and contributions from Δ excitations. These functionals are validated on the calculation of nuclear and neutron matter, nuclear mass tables, single-particle shell structure

  6. Optimized Min-Sum Decoding Algorithm for Low Density Parity Check Codes

    OpenAIRE

    Mohammad Rakibul Islam; Dewan Siam Shafiullah; Muhammad Mostafa Amir Faisal; Imran Rahman

    2011-01-01

    Low Density Parity Check (LDPC) codes approach Shannon-limit performance for the binary field and long code lengths. However, the performance of binary LDPC codes is degraded when the code word length is small. An optimized min-sum algorithm for LDPC codes is proposed in this paper. In this algorithm, unlike other decoding methods, an optimization factor has been introduced in both the check node and the bit node of the min-sum algorithm. The optimization factor is obtained before the decoding program, and the sam...

  7. Genetic search for an optimal power flow solution from a high density cluster

    Energy Technology Data Exchange (ETDEWEB)

    Amarnath, R.V. [Hi-Tech College of Engineering and Technology, Hyderabad (India); Ramana, N.V. [JNTU College of Engineering, Jagityala (India)

    2008-07-01

    This paper proposed a novel method to solve optimal power flow (OPF) problems. The method is based on a genetic algorithm (GA) search from a High Density Cluster (GAHDC). The algorithm of the proposed method includes 3 stages, notably (1) a suboptimal solution is obtained via a conventional analytical method, (2) a high density cluster, which consists of other suboptimal data points from the first stage, is formed using a density-based cluster algorithm, and (3) a genetic algorithm based search is carried out for the exact optimal solution from a low population sized, high density cluster. The final optimal solution thoroughly satisfies the well defined fitness function. A standard IEEE 30-bus test system was considered for the simulation study. Numerical results were presented and compared with the results of other approaches. It was concluded that although there is not much difference in numerical values, the proposed method has the advantage of minimal computational effort and reduced CPU time. As such, the method would be suitable for online applications such as the present Optimal Power Flow problem. 24 refs., 2 tabs., 4 figs.

  8. Topology Optimization Design of 3D Continuum Structure with Reserved Hole Based on Variable Density Method

    Directory of Open Access Journals (Sweden)

    Bai Shiye

    2016-05-01

    An objective function defined by the minimum compliance of topology optimization for a 3D continuum structure was established to search for the optimal material distribution under a predetermined volume restriction. Based on the improved SIMP (solid isotropic microstructures with penalization) model and a new sensitivity filtering technique, the basic iteration equations of the 3D finite element analysis were deduced and solved by the optimization criterion method. All the above procedures were written in the MATLAB programming language, and topology optimization design examples of a 3D continuum structure with a reserved hole were examined repeatedly by observing various indexes, including compliance, maximum displacement, and density index. The influence of mesh, penalty factors, and filter radius on the topology results was analyzed. Computational results showed that the finer or coarser the mesh was, the larger the compliance, maximum displacement, and density index would be. When the filtering radius was larger than 1.0, the topology no longer exhibited the checkerboard problem, suggesting that the presented sensitivity filtering method was valid. The penalty factor should be an integer because iteration steps increased greatly when it was a noninteger. The above modified variable density method can provide technical routes for topology optimization design of more complex 3D continuum structures in the future.
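
    For context on the SIMP and optimality-criterion machinery mentioned above, the sketch below shows the penalized stiffness interpolation and a volume-constrained density update in isolation; the sensitivities are placeholders rather than values from a finite element solve, and the details differ from the authors' MATLAB implementation.

    ```python
    # A minimal sketch of two SIMP ingredients: the penalized stiffness
    # interpolation and an optimality-criterion density update with a volume
    # constraint. Compliance sensitivities dc are placeholders here.
    import numpy as np

    def simp_young_modulus(rho, e0=1.0, emin=1e-9, penal=3.0):
        """E(rho) = Emin + rho^p * (E0 - Emin): intermediate densities are penalized."""
        return emin + rho**penal * (e0 - emin)

    def oc_update(rho, dc, volfrac, move=0.2):
        """Bisection on the Lagrange multiplier so the update meets the volume fraction."""
        l1, l2 = 1e-9, 1e9
        while (l2 - l1) / (l1 + l2) > 1e-4:
            lmid = 0.5 * (l1 + l2)
            rho_new = np.clip(rho * np.sqrt(np.maximum(-dc, 0) / lmid),
                              np.maximum(rho - move, 0.0),
                              np.minimum(rho + move, 1.0))
            if rho_new.mean() > volfrac:
                l1 = lmid
            else:
                l2 = lmid
        return rho_new

    print("E(rho=0.5)/E0 at p=3:", simp_young_modulus(0.5))
    rho = np.full(1000, 0.4)                       # initial uniform density field
    dc = -np.random.default_rng(0).random(1000)    # placeholder sensitivities (<= 0)
    rho = oc_update(rho, dc, volfrac=0.4)
    print("volume fraction after update:", round(rho.mean(), 3))
    ```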

  9. Optimal Base Station Density of Dense Network: From the Viewpoint of Interference and Load.

    Science.gov (United States)

    Feng, Jianyuan; Feng, Zhiyong

    2017-09-11

    Network densification has recently attracted increasing attention due to its ability to improve network capacity by spatial reuse and to relieve congestion by offloading. However, excessive densification and aggressive offloading can also cause degradation of network performance due to problems of interference and load. In this paper, with consideration of load issues, we study the optimal base station density that maximizes the throughput of the network. The expected link rate and the utilization ratio of the contention-based channel are derived as functions of base station density using the Poisson Point Process (PPP) and a Markov chain. They reveal the rules of deployment. Based on these results, we obtain the throughput of the network and indicate the optimal deployment density under different network conditions. Extensive simulations are conducted to validate our analysis and show the substantial performance gain obtained by the proposed deployment scheme. These results can provide guidance for network densification.
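
    A hedged toy version of the trade-off studied here is sketched below: throughput modeled as base-station density times per-link rate times channel utilization, with assumed functional forms and constants standing in for the paper's PPP and Markov-chain derivations.

    ```python
    # A minimal sketch of the density/throughput trade-off: rate falls with
    # interference as density grows, utilization falls as cells empty out.
    # All functional forms and constants are illustrative assumptions.
    import numpy as np

    user_density = 100.0          # users per km^2 (assumed)
    bandwidth = 20e6              # Hz (assumed)

    def network_throughput(bs_density):
        load = user_density / bs_density                 # mean users per BS
        utilization = load / (load + 1.0)                # saturates with load (assumed form)
        sinr = 10.0 / (1.0 + 0.05 * bs_density)          # interference grows with density (assumed)
        link_rate = bandwidth * np.log2(1.0 + sinr)      # Shannon rate per active link
        return bs_density * utilization * link_rate      # bps per km^2

    densities = np.linspace(1, 200, 400)
    best = densities[np.argmax([network_throughput(d) for d in densities])]
    print("throughput-maximizing BS density (per km^2):", round(best, 1))
    ```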

  10. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    Directory of Open Access Journals (Sweden)

    Yongjun Ahn

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city-level planning. The optimal charging station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined for various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related to electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive …

  11. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    Science.gov (United States)

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city-level planning. The optimal charging station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined for various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related to electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric …
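
    As a toy illustration of the density-versus-cost trade-off the ERDEC model formalizes (not the paper's formulation), the sketch below minimizes an assumed total cost combining station cost and user access cost; all constants are placeholders.

    ```python
    # A minimal sketch: total cost per unit area = station cost * density plus a
    # user access cost that shrinks as stations get denser. Assumed cost model.
    import numpy as np

    station_cost = 50000.0        # annualized cost per station (assumed)
    demand = 2000.0               # charging trips per km^2 per year (assumed)
    travel_cost = 5.0             # cost per km of detour to reach a charger (assumed)

    def total_cost(density):
        # Mean distance to the nearest station in a random (Poisson) placement
        # is 1/(2*sqrt(density)).
        access_distance = 0.5 / np.sqrt(density)
        return station_cost * density + demand * travel_cost * access_distance

    densities = np.linspace(0.05, 5.0, 500)          # stations per km^2
    costs = [total_cost(d) for d in densities]
    print("cost-minimizing density (stations/km^2):",
          round(densities[int(np.argmin(costs))], 2))
    ```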

  12. Optimization of human corneal endothelial cell culture: density dependency of successful cultures in vitro.

    Science.gov (United States)

    Peh, Gary S L; Toh, Kah-Peng; Ang, Heng-Pei; Seah, Xin-Yi; George, Benjamin L; Mehta, Jodhbir S

    2013-05-03

    Global shortage of donor corneas greatly restricts the number of corneal transplantations performed yearly. Limited ex vivo expansion of primary human corneal endothelial cells is possible, and considerable clinical interest exists in the development of tissue-engineered constructs using cultivated corneal endothelial cells. The objective of this study was to investigate the density-dependent growth of human corneal endothelial cells isolated from paired donor corneas and to elucidate an optimal seeding density for their extended expansion in vitro whilst maintaining their unique cellular morphology. Established primary human corneal endothelial cells were propagated to the second passage (P2) before they were utilized for this study. Confluent P2 cells were dissociated and seeded at four seeding densities: 2,500 cells per cm² ('LOW'); 5,000 cells per cm² ('MID'); 10,000 cells per cm² ('HIGH'); and 20,000 cells per cm² ('HIGH(×2)'), and subsequently analyzed for their propensity to proliferate. They were also subjected to morphometric analyses comparing cell size, coefficient of variance, and cell circularity when each culture became confluent. At the two lower densities, proliferation rates were higher than those of cells seeded at higher densities, though the difference was not statistically significant. However, corneal endothelial cells seeded at lower densities were significantly larger, more heterogeneous in shape and less circular (more fibroblast-like), and remained hypertrophic after one month in culture. Comparatively, cells seeded at higher densities were significantly more homogeneous, compact and circular at confluence. Potentially, at an optimal seeding density of 10,000 cells per cm², it is possible to obtain between 10 million and 25 million cells at the third passage. More importantly, these expanded human corneal endothelial cells retained their unique cellular morphology. Our results demonstrated a density dependency in the culture of primary human corneal endothelial …

  13. Optimal catalyst curves: Connecting density functional theory calculations with industrial reactor design and catalyst selection

    DEFF Research Database (Denmark)

    Jacobsen, C.J.H.; Dahl, Søren; Boisen, A.

    2002-01-01

    For ammonia synthesis catalysts, a volcano-type relationship has been found experimentally. We demonstrate that by combining density functional theory calculations with a microkinetic model the position of the maximum of the volcano curve is sensitive to the reaction conditions. The catalytic ammonia synthesis activity, to a first approximation, is a function only of the binding energy of nitrogen to the catalyst. Therefore, it is possible to evaluate which nitrogen binding energy is optimal under given reaction conditions. This leads to the concept of optimal catalyst curves, which illustrate the nitrogen binding energies of the optimal catalysts at different temperatures, pressures, and synthesis gas compositions. Using this concept together with the ability to prepare catalysts with desired binding energies it is possible to optimize the ammonia process. In this way a link between first …

  14. Distributed material density and anisotropy for optimized eigenfrequency of 2D continua

    DEFF Research Database (Denmark)

    Pedersen, Pauli; Pedersen, Niels Leergaard

    2015-01-01

    A practical approach to optimize a continuum/structural eigenfrequency is presented, including design of the distribution of material anisotropy. This is often termed free material optimization (FMO). An important aspect is the separation of the overall material distribution from the local design … with respect to material density and, from this, values of the element OC. Each factor of this expression has a physical interpretation. Stated alternatively, the optimization problem of material distribution is converted into a problem of determining a design of uniform OC values. The constitutive matrices are described by non-dimensional matrices with unity norms of trace and Frobenius, and thus this part of the optimized design has no influence on the mass distribution. Gradients of eigenfrequency with respect to the components of these non-dimensional constitutive matrices are therefore simplified …

  15. Optimization of anisotropic photonic density of states for Raman cooling of solids

    Science.gov (United States)

    Chen, Yin-Chung; Ghosh, Indronil; Schleife, André; Carney, P. Scott; Bahl, Gaurav

    2018-04-01

    Optical refrigeration of solids holds tremendous promise for applications in thermal management. It can be achieved through multiple mechanisms including inelastic anti-Stokes Brillouin and Raman scattering. However, engineering of these mechanisms remains relatively unexplored. The major challenge lies in the natural unfavorable imbalance in transition rates for Stokes and anti-Stokes scattering. We consider the influence of anisotropic photonic density of states on Raman scattering and derive expressions for cooling in such photonically anisotropic systems. We demonstrate optimization of the Raman cooling figure of merit considering all possible orientations for the material crystal and two example photonic crystals. We find that the anisotropic description of the photonic density of states and the optimization process is necessary to obtain the best Raman cooling efficiency for systems having lower symmetry. This general result applies to a wide array of other laser cooling methods in the presence of anisotropy.

  16. Optimization of multiply acquired magnetic flux density Bz using ICNE-Multiecho train in MREIT

    International Nuclear Information System (INIS)

    Nam, Hyun Soo; Kwon, Oh In

    2010-01-01

    The aim of magnetic resonance electrical impedance tomography (MREIT) is to visualize the electrical properties, conductivity or current density, of an object by injection of current. Recently, the prolonged data acquisition time of the injected current nonlinear encoding (ICNE) method has proven advantageous for the signal-to-noise ratio (SNR) of the measured magnetic flux density data, Bz, in MREIT. However, the ICNE method results in undesirable side artifacts, such as blurring, chemical shift and phase artifacts, due to the long data acquisition under an inhomogeneous static field. In this paper, we apply the ICNE method to a gradient and spin echo (GRASE) multi-echo train pulse sequence in order to provide multiple k-space lines during a single RF pulse period. We analyze the SNR of the multiple Bz data measured using the proposed ICNE-Multiecho MR pulse sequence. By determining a weighting factor for the Bz data in each of the echoes, an optimized inversion formula for the magnetic flux density data is proposed for the ICNE-Multiecho MR sequence. Using the ICNE-Multiecho method, the quality of the measured magnetic flux density is considerably increased by the injection of a long current through the echo train length and by optimization of the voxel-by-voxel noise level of the Bz value. Agarose-gel phantom experiments have demonstrated fewer artifacts and a better SNR using the ICNE-Multiecho method. Experimenting with the brain of an anesthetized dog, we collected valuable echoes by taking into account the noise level of each echo and determined Bz data using optimized weighting factors for the multiply acquired magnetic flux density data.

  17. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    Science.gov (United States)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
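
    The abstract states only that occupation numbers are expressed as trigonometric functions of occupation angles; one common parametrization consistent with that description (an assumption here, not necessarily the exact SOEO form) is:

    ```latex
    % One common way to keep each occupation number between 0 and 2 via an
    % occupation angle; the exact functional form used by SOEO may differ.
    n_i = 2\sin^2\theta_i, \qquad 0 \le n_i \le 2 \ \text{for any real } \theta_i,
    \qquad \sum_i n_i = N \ \text{(enforced by the built-in Newton--Raphson restriction)}.
    ```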

  18. Ion acceleration in electrostatic collisionless shock: on the optimal density profile for quasi-monoenergetic beams

    Science.gov (United States)

    Boella, E.; Fiúza, F.; Stockem Novo, A.; Fonseca, R.; Silva, L. O.

    2018-03-01

    A numerical study on ion acceleration in electrostatic shock waves is presented, with the aim of determining the best plasma configuration to achieve quasi-monoenergetic ion beams in laser-driven systems. It was recently shown that tailored near-critical density plasmas characterized by a long-scale decreasing rear density profile lead to beams with low energy spread (Fiúza et al 2012 Phys. Rev. Lett. 109 215001). In this work, a detailed parameter scan investigating different plasma scale lengths is carried out. As a result, the optimal plasma spatial scale length that minimizes the energy spread while ensuring a significant reflection of ions by the shock is identified. Furthermore, a new configuration in which the required profile is obtained by coupling micro layers of different densities is proposed. Results show that this new engineered approach is a valid alternative, guaranteeing a low energy spread with a higher level of controllability.

  19. Optimization and transferability of non-electrostatic repulsion in the polarizable density embedding model

    DEFF Research Database (Denmark)

    Hrsak, Dalibor; Olsen, Jógvan Magnus Haugaard; Kongsted, Jacob

    2017-01-01

    Embedding techniques in combination with response theory represent a successful approach to calculate molecular properties and excited states in large molecular systems such as solutions and proteins. Recently, the polarizable embedding model has been extended by introducing explicit electronic densities of the molecules in the nearest environment, resulting in the polarizable density embedding (PDE) model. This improvement provides a better description of the intermolecular interactions at short distances. However, the electronic densities of the environment molecules are calculated in isolation … interaction energies calculated on the basis of full quantum-mechanical calculations. The obtained optimal factors are used in PDE calculations of various ground- and excited-state properties of molecules embedded in solvents described as polarizable environments. © 2017 Wiley Periodicals, Inc.

  20. PERFORMANCE OPTIMIZATION OF LINEAR INDUCTION MOTOR BY EDDY CURRENT AND FLUX DENSITY DISTRIBUTION ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. S. MANNA

    2011-12-01

    The development of electromagnetic devices such as machines, transformers and heating devices confronts engineers with several problems. For the design of an optimized geometry and the prediction of the operational behaviour, accurate knowledge of the dependencies of the field quantities inside the magnetic circuits is necessary. This paper provides an eddy current and core flux density distribution analysis for a linear induction motor. Magnetic flux in the air gap of the Linear Induction Motor (LIM) is reduced by various losses such as end effects, fringing effects, skin effects, etc. The finite element based software package COMSOL Multiphysics (COMSOL Inc., USA) is used to obtain reliable and accurate computational results for optimizing the performance of the Linear Induction Motor (LIM). The geometrical characteristics of the LIM are varied to find the optimal point of thrust and minimum flux leakage during static and dynamic conditions.

  1. Optimization of protection and calibration of the TROXLER moisture-density gages

    International Nuclear Information System (INIS)

    RAKOTONDRAVANONA, J.E.

    2011-01-01

    The purpose of this work is the implementation of the principle of optimization of protection and the calibration of TROXLER moisture-density gages. The main objectives are the application of radiation protection and the feasibility of a calibration laboratory design. The density and moisture calibrations may confirm the calibration of the TROXLER moisture-density gages. The density calibration consists of a set of measurements on three calibration blocks (magnesium, aluminium and magnesium/aluminium) built in the TROXLER; the density uncertainty is ±32 kg·m⁻³. The moisture calibration is carried out on two calibration blocks (magnesium and magnesium/polyethylene); the moisture uncertainty is ±16 kg·m⁻³. The design of the laboratory comes down to dose limitation. The laboratory walls are built mainly of ordinary concrete, a good attenuator of gamma and neutron radiation. For the design, a gamma source term of 25.77 ± 0.20 μSv·h⁻¹ and a neutron source term of 7.88 ± 0.35 μSv·h⁻¹ are used to determine the thickness of the walls. This design makes it possible to attenuate doses and dose rates as much as possible, up to total absorption of the radiation. [fr]

  2. Use of Phytone Peptone to Optimize Growth and Cell Density of Lactobacillus reuteri.

    Science.gov (United States)

    Atilola, Olabiyi A; Gyawali, Rabin; Aljaloud, Sulaiman O; Ibrahim, Salam A

    2015-08-10

    The objective of this study was to determine the use of phytone peptone to optimize the growth and cell density of Lactobacillus reuteri. Four strains of L. reuteri (DSM 20016, SD 2112, CF 2-7F, and MF 2-3) were used in this study. An overnight culture of each individual strain was inoculated into fresh basal media with various protein sources (peptone, tryptone, proteose peptone #3, phytone peptone, tryptic soy broth, yeast extract, and beef extract). Samples were then mixed well and incubated at 37 °C for 15 h. Bacterial growth was monitored by measuring turbidity (optical density at 610 nm) at different time intervals during the incubation period. At the end of incubation, samples were plated on de Man Rogosa Sharpe (MRS) agar to determine the bacterial population. Our results showed that phytone peptone promoted the growth of L. reuteri (p …) … L. reuteri.

  3. Spin density and orbital optimization in open shell systems: A rational and computationally efficient proposal

    Energy Technology Data Exchange (ETDEWEB)

    Giner, Emmanuel, E-mail: gnrmnl@unife.it; Angeli, Celestino, E-mail: anc@unife.it [Dipartimento di Scienze Chimiche e Famaceutiche, Universita di Ferrara, Via Fossato di Mortara 17, I-44121 Ferrara (Italy)

    2016-03-14

    The present work describes a new method to compute accurate spin densities for open shell systems. The proposed approach follows two steps: first, it provides molecular orbitals which correctly take into account the spin delocalization; second, a proper CI treatment allows to account for the spin polarization effect while keeping a restricted formalism and avoiding spin contamination. The main idea of the optimization procedure is based on the orbital relaxation of the various charge transfer determinants responsible for the spin delocalization. The algorithm is tested and compared to other existing methods on a series of organic and inorganic open shell systems. The results reported here show that the new approach (almost black-box) provides accurate spin densities at a reasonable computational cost making it suitable for a systematic study of open shell systems.

  4. Optimized LTE cell planning for multiple user density subareas using meta-heuristic algorithms

    KAUST Repository

    Ghazzai, Hakim

    2014-09-01

    Base station deployment in cellular networks is one of the most fundamental problems in network design. This paper proposes a novel method for the cell planning problem in fourth-generation (4G) LTE cellular networks using meta-heuristic algorithms. In this approach, we aim to satisfy both coverage and cell capacity constraints simultaneously by formulating a practical optimization problem. We start by performing a typical coverage and capacity dimensioning to identify the initial required number of base stations. Afterwards, we implement a Particle Swarm Optimization algorithm or a recently proposed Grey Wolf Optimizer to find the optimal base station locations that satisfy both problem constraints in the area of interest, which can be divided into several subareas with different user densities. Subsequently, an iterative approach is executed to eliminate eventual redundant base stations. We have also performed Monte Carlo simulations to study the performance of the proposed scheme and computed the average number of users in outage. Results show that our proposed approach respects the desired network quality of service in all cases, even for large-scale problems.

  5. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.

    Directory of Open Access Journals (Sweden)

    Xiao-Lin Wu

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for the design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide the location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly spaced and highly informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly spaced SNPs. Imputation accuracy increased with LD chip size, and the imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, the imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of the imputation error rate was propagated to genomic prediction in an Angus …
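
    As a hedged illustration of one ingredient above, the sketch below computes locus-averaged Shannon entropy for candidate SNPs and greedily picks the most informative SNP per equal-length window. This is a simplification of the idea, not the MOLO algorithm; positions and allele frequencies are simulated placeholders.

    ```python
    # A minimal sketch: locus-averaged Shannon entropy (LASE) of biallelic SNPs
    # and a greedy "most informative SNP per window" selection.
    import numpy as np

    def locus_entropy(p):
        """Shannon entropy (bits) of a biallelic SNP with minor allele frequency p."""
        q = 1.0 - p
        with np.errstate(divide="ignore", invalid="ignore"):
            h = -(p * np.log2(p) + q * np.log2(q))
        return np.nan_to_num(h)          # entropy is 0 when p is 0 or 1

    def select_snps(positions, maf, n_select):
        """Pick the highest-entropy SNP inside each of n_select equal windows."""
        entropy = locus_entropy(maf)
        edges = np.linspace(positions.min(), positions.max(), n_select + 1)
        chosen = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            idx = np.where((positions >= lo) & (positions <= hi))[0]
            if idx.size:
                chosen.append(idx[np.argmax(entropy[idx])])
        return np.array(chosen)

    rng = np.random.default_rng(0)
    pos = np.sort(rng.uniform(0, 100e6, 5000))   # candidate SNP positions (bp)
    maf = rng.uniform(0.01, 0.5, 5000)           # minor allele frequencies
    picked = select_snps(pos, maf, n_select=300)
    print(len(picked), "SNPs selected; mean LASE =",
          round(float(locus_entropy(maf[picked]).mean()), 3))
    ```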

  6. Varying plant density and harvest time to optimize cowpea leaf yield and nutrient content

    Science.gov (United States)

    Ohler, T. A.; Nielsen, S. S.; Mitchell, C. A.

    1996-01-01

    Plant density and harvest time were manipulated to optimize vegetative (foliar) productivity of cowpea [Vigna unguiculata (L.) Walp.] canopies for future dietary use in controlled ecological life-support systems as vegetables or salad greens. Productivity was measured as total shoot and edible dry weights (DW), edible yield rate [(EYR) grams edible DW per square meter per day], shoot harvest index [(SHI) grams edible DW per gram total shoot DW], and yield-efficiency rate [(YER) grams edible DW per square meter per day per gram nonedible DW]. Cowpeas were grown in a greenhouse for leaf-only harvest at 14, 28, 42, 56, 84, or 99 plants/m² and were harvested 20, 30, 40, or 50 days after planting (DAP). Shoot and edible dry weights increased as plant density and time to harvest increased. A maximum of 1189 g shoot DW/m² and 594 g edible DW/m² were achieved at an estimated plant density of 85 plants/m² and harvest 50 DAP. EYR also increased as plant density and time to harvest increased. An EYR of 11 g·m⁻²·day⁻¹ was predicted to occur at 86 plants/m² and harvest 50 DAP. SHI and YER were not affected by plant density. However, the highest values of SHI (64%) and YER (1.3 g·m⁻²·day⁻¹·g⁻¹) were attained when cowpeas were harvested 20 DAP. The average fat and ash contents [dry-weight basis (dwb)] of harvested leaves remained constant regardless of harvest time. Average protein content increased from 25% DW at 30 DAP to 45% DW at 50 DAP. Carbohydrate content declined from 50% DW at 30 DAP to 45% DW at 50 DAP. Total dietary fiber content (dwb) of the leaves increased from 19% to 26% as time to harvest increased from 20 to 50 days.

  7. Tracing the Fingerprint of Chemical Bonds within the Electron Densities of Hydrocarbons: A Comparative Analysis of the Optimized and the Promolecule Densities.

    Science.gov (United States)

    Keyvani, Zahra Alimohammadi; Shahbazian, Shant; Zahedi, Mansour

    2016-10-18

    The equivalence of the molecular graphs emerging from the comparative analysis of the optimized and the promolecule electron densities in two hundred and twenty five unsubstituted hydrocarbons was recently demonstrated [Keyvani et al. Chem. Eur. J. 2016, 22, 5003]. Thus, the molecular graph of an optimized molecular electron density is not shaped by the formation of the C-H and C-C bonds. In the present study, to trace the fingerprint of the C-H and C-C bonds in the electron densities of the same set of hydrocarbons, the amount of electron density and its Laplacian at the (3, -1) critical points associated with these bonds are derived from both optimized and promolecule densities, and compared in a newly proposed comparative analysis. The analysis not only conforms to the qualitative picture of the electron density buildup between two atoms upon formation of a bond in between, but also quantifies the resulting accumulation of the electron density at the (3, -1) critical points. The comparative analysis also reveals a unified mode of density accumulation in the case of the 2318 studied C-H bonds, but various modes of density accumulation are observed in the case of the 1509 studied C-C bonds, and they are classified into four groups. The four emerging groups do not always conform to the traditional classification based on bond orders. Furthermore, four C-C bonds described as exotic bonds in previous studies, for example the inverted C-C bond in [1.1.1]propellane, are naturally distinguished by the analysis. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. A model to optimize trap systems used for small mammal (Rodentia, Insectivora) density estimates

    Directory of Open Access Journals (Sweden)

    Damiano Preatoni

    1997-12-01

    Full Text Available Abstract The environment of the upper and lower Padane Plain and the adjoining hills is not very homogeneous. In fact, it is impossible to find biotopes extensive enough to satisfy the criteria required for density estimates of small mammals based on the removal method. This limitation has been partially overcome by adopting a reduced grid of 39 traps whose spacing depends on the studied species. The aim of this work was to verify, and where possible measure, the efficiency of a sampling method based on a "reduced" number of catch points. The efficiency of 18 trapping cycles, carried out from 1991 to 1993, was evaluated as percent bias. For each trapping cycle, 100 computer simulations were performed, thus obtaining a Monte Carlo estimate of the bias in density values. The efficiency of different trap arrangements was then examined by varying the design criteria: the number of traps ranged from 9 to 49, trap spacing varied from 5 to 15 m, and the trapping period lasted from 5 to 9 nights. In this way an optimal grid system was found with respect to both dimensions and duration. The simulation process involved, as a whole, 1511 different grid types and 11347 virtual trapping cycles. Our results indicate that density estimates based on "reduced" grids are affected by an average -16% bias, that is, an underestimate, and that an optimally sized grid should consist of a 6×6 trap square with about 8.7 m spacing, operated for 7 nights.
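
    The Monte Carlo evaluation of percent bias for a given grid design can be sketched as follows. This is not the authors' simulation code: the capture model, nightly capture probability, and effective-area convention are simplifying assumptions chosen only to show how a bias estimate of this kind is obtained.

```python
# Hedged sketch: percent bias of a naive density estimate from a small trap grid,
# estimated over repeated simulated trapping cycles. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(1)
TRUE_DENSITY = 20.0          # animals per hectare (assumed)
GRID_SIDE = 6                # 6 x 6 traps
SPACING = 8.7                # metres between traps
NIGHTS = 7
P_CAPTURE = 0.25             # nightly capture probability (assumed)

def one_cycle():
    # Effective trapping area: the grid square plus a half-spacing boundary strip.
    side_m = (GRID_SIDE - 1) * SPACING + SPACING
    area_ha = side_m ** 2 / 1e4
    n_animals = rng.poisson(TRUE_DENSITY * area_ha)
    # Removal trapping: an animal is counted once it has been caught at least once.
    caught = np.sum(rng.random((n_animals, NIGHTS)) < P_CAPTURE, axis=1) > 0
    return caught.sum() / area_ha            # naive density estimate

estimates = np.array([one_cycle() for _ in range(100)])
bias_pct = 100.0 * (estimates.mean() - TRUE_DENSITY) / TRUE_DENSITY
print(f"average percent bias over 100 simulated cycles: {bias_pct:.1f}%")
```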

  9. Self-Consistent Optimization of Excited States within Density-Functional Tight-Binding.

    Science.gov (United States)

    Kowalczyk, Tim; Le, Khoa; Irle, Stephan

    2016-01-12

    We present an implementation of energies and gradients for the ΔDFTB method, an analogue of Δ-self-consistent-field density functional theory (ΔSCF) within density-functional tight-binding, for the lowest singlet excited state of closed-shell molecules. Benchmarks of ΔDFTB excitation energies, optimized geometries, Stokes shifts, and vibrational frequencies reveal that ΔDFTB provides a qualitatively correct description of changes in molecular geometries and vibrational frequencies due to excited-state relaxation. The accuracy of ΔDFTB Stokes shifts is comparable to that of ΔSCF-DFT, and ΔDFTB performs similarly to ΔSCF with the PBE functional for vertical excitation energies of larger chromophores where the need for efficient excited-state methods is most urgent. We provide some justification for the use of an excited-state reference density in the DFTB expansion of the electronic energy and demonstrate that ΔDFTB preserves many of the properties of its parent ΔSCF approach. This implementation fills an important gap in the extended framework of DFTB, where access to excited states has been limited to the time-dependent linear-response approach, and affords access to rapid exploration of a valuable class of excited-state potential energy surfaces.

  10. Use of Phytone Peptone to Optimize Growth and Cell Density of Lactobacillus reuteri

    Directory of Open Access Journals (Sweden)

    Olabiyi A. Atilola

    2015-08-01

    Full Text Available The objective of this study was to determine the use of phytone peptone to optimize the growth and cell density of Lactobacillus reuteri. Four strains of L. reuteri (DSM 20016, SD 2112, CF 2-7F, and MF 2-3) were used in this study. An overnight culture of individual strains was inoculated into fresh basal media with various protein sources (peptone, tryptone, proteose peptone #3, phytone peptone, tryptic soy broth, yeast extract, and beef extract). Samples were then mixed well and incubated at 37 °C for 15 h. Bacterial growth was monitored by measuring turbidity (optical density 610 nm) at different time intervals during the incubation period. At the end of incubation, samples were plated on de-Man Rogosa Sharpe (MRS) agar to determine the bacterial population. Our results showed that phytone peptone promoted the growth of L. reuteri (p < 0.05) by 1.4 log CFU/mL on average compared to the control samples. Therefore, phytone peptone could be included in laboratory media to enhance growth and increase the cell density of L. reuteri.

  11. Optimized LTE Cell Planning with Varying Spatial and Temporal User Densities

    KAUST Repository

    Ghazzai, Hakim; Yaacoub, Elias; Alouini, Mohamed-Slim; Dawy, Zaher; Abu Dayya, Adnan

    2015-01-01

    Base station deployment in cellular networks is one of the fundamental problems in network design. This paper proposes a novel method for the cell planning problem for the fourth generation (4G) cellular networks using meta-heuristic algorithms. In this approach, we aim to satisfy both cell coverage and capacity constraints simultaneously by formulating an optimization problem that captures practical planning aspects. The starting point of the planning process is defined through a dimensioning exercise that captures both coverage and capacity constraints. Afterwards, we implement a meta-heuristic algorithm based on swarm intelligence (e.g., particle swarm optimization or the recently-proposed grey wolf optimizer) to find suboptimal base station locations that satisfy both problem constraints in the area of interest which can be divided into several subareas with different spatial user densities. Subsequently, an iterative approach is executed to eliminate eventual redundant base stations. We also perform Monte Carlo simulations to study the performance of the proposed scheme and compute the average number of users in outage. Next, the problems of green planning with regards to temporal traffic variation and planning with location constraints due to tight limits on electromagnetic radiations are addressed, using the proposed method. Finally, in our simulation results, we apply our proposed approach for different scenarios with different subareas and user distributions and show that the desired network quality of service targets are always reached even for large-scale problems.

  12. Optimized LTE Cell Planning with Varying Spatial and Temporal User Densities

    KAUST Repository

    Ghazzai, Hakim

    2015-03-09

    Base station deployment in cellular networks is one of the fundamental problems in network design. This paper proposes a novel method for the cell planning problem for the fourth generation (4G) cellular networks using meta-heuristic algorithms. In this approach, we aim to satisfy both cell coverage and capacity constraints simultaneously by formulating an optimization problem that captures practical planning aspects. The starting point of the planning process is defined through a dimensioning exercise that captures both coverage and capacity constraints. Afterwards, we implement a meta-heuristic algorithm based on swarm intelligence (e.g., particle swarm optimization or the recently-proposed grey wolf optimizer) to find suboptimal base station locations that satisfy both problem constraints in the area of interest which can be divided into several subareas with different spatial user densities. Subsequently, an iterative approach is executed to eliminate eventual redundant base stations. We also perform Monte Carlo simulations to study the performance of the proposed scheme and compute the average number of users in outage. Next, the problems of green planning with regards to temporal traffic variation and planning with location constraints due to tight limits on electromagnetic radiations are addressed, using the proposed method. Finally, in our simulation results, we apply our proposed approach for different scenarios with different subareas and user distributions and show that the desired network quality of service targets are always reached even for large-scale problems.

  13. Stochastic optimal control as non-equilibrium statistical mechanics: calculus of variations over density and current

    Science.gov (United States)

    Chernyak, Vladimir Y.; Chertkov, Michael; Bierkens, Joris; Kappen, Hilbert J.

    2014-01-01

    In stochastic optimal control (SOC) one minimizes the average cost-to-go, which consists of the cost-of-control (amount of effort), the cost-of-space (where one wants the system to be) and the target cost (where one wants the system to arrive), for a system participating in forced and controlled Langevin dynamics. We extend the SOC problem by introducing an additional cost-of-dynamics, characterized by a vector potential. We propose a derivation of the generalized gauge-invariant Hamilton-Jacobi-Bellman equation as a variation over density and current, suggest a hydrodynamic interpretation and discuss examples, e.g., ergodic control of a particle-within-a-circle, illustrating non-equilibrium space-time complexity.

  14. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-03-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ∼10⁴ simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
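
    The compression step can be illustrated with a toy version of Gaussian score compression, which maps a long data vector to one summary per parameter, t = ∇μᵀ C⁻¹ (d − μ), evaluated at a fiducial model. The linear model, noise covariance, and fiducial parameters below are assumptions for illustration, not the cosmological likelihoods used in the paper.

```python
# Hedged sketch: score compression of a 500-point data vector to 2 summaries,
# one per parameter, for a toy linear model with known Gaussian noise.
import numpy as np

rng = np.random.default_rng(2)
n_data, theta_fid = 500, np.array([1.0, 0.5])          # two parameters
x = np.linspace(0, 1, n_data)
design = np.vstack([np.ones(n_data), x]).T             # d(mu)/d(theta) for a linear model
cov = 0.1 ** 2 * np.eye(n_data)                        # known noise covariance
cov_inv = np.linalg.inv(cov)

def compress(data):
    """Reduce the n_data-vector to len(theta_fid) summaries."""
    mu_fid = design @ theta_fid
    return design.T @ cov_inv @ (data - mu_fid)

# Simulated observation: the 500-point data vector becomes 2 numbers.
data = design @ np.array([1.1, 0.45]) + 0.1 * rng.standard_normal(n_data)
print(compress(data))   # shape (2,): one compressed summary per parameter
```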

  15. Optimization of stocking density in intensification of mud crab Scylla serrata cultivation in the recirculation system

    Directory of Open Access Journals (Sweden)

    Yuni Puji Hastuti

    2017-07-01

    Full Text Available ABSTRACT This study aimed to determine the optimum stocking density of the mud crab Scylla serrata by applying a different stocking density in each treatment of a recirculation system. The experimental design was a completely randomized design (CRD) with three density treatments, 5 (P1), 10 (P2), and 15 ind/container (P3), each replicated three times. Crabs with an average body weight of 150 g/ind were cultured in plastic boxes (40×30×30 cm) for 60 days and fed trash fish twice a day to satiation. The results showed that treatment P2 gave the best mud crab production performance among all treatments, with a survival rate of 73.33±5.77%, an absolute growth rate of 0.68±0.01 g/ind/day, and a food conversion ratio of 10.11±0.01. Treatment P1 gave the best stress response, as indicated by the lowest glucose level of all treatments (31.91 mg/dL) at the end of the treatment period. Water quality during the study period fluctuated, as affected by the different stocking densities of the treatments. Keywords: mud crab, stocking density, production performance

  16. Optimization of hetero-epitaxial growth for the threading dislocation density reduction of germanium epilayers

    Science.gov (United States)

    Chong, Haining; Wang, Zhewei; Chen, Chaonan; Xu, Zemin; Wu, Ke; Wu, Lan; Xu, Bo; Ye, Hui

    2018-04-01

    In order to suppress dislocation generation, we develop a "three-step growth" method for the heteroepitaxy of low-dislocation-density germanium (Ge) layers on silicon by MBE. The method consists of three successive growth steps: a low-temperature (LT) seed layer, an LT-HT intermediate layer, and a high-temperature (HT) epilayer. The threading dislocation density (TDD) of the epitaxial Ge layers is measured to be as low as 1.4 × 10⁶ cm⁻² after optimizing the growth parameters. Raman spectra show that the internal strain of the heteroepitaxial Ge layers is tensile and homogeneous. During growth of the LT-HT intermediate layer, TDD reduction can be obtained by lowering the temperature ramping rate, while high-rate deposition maintains a smooth surface morphology in the Ge epilayer. A mechanism based on thermodynamics is used to explain the dependence of TDD and surface morphology on the temperature ramping rate and deposition rate. Furthermore, we demonstrate that the Ge layer obtained can provide an excellent platform for III-V materials integrated on Si.

  17. Optimization of cellulose nanocrystal length and surface charge density through phosphoric acid hydrolysis

    Science.gov (United States)

    Vanderfleet, Oriana M.; Osorio, Daniel A.; Cranston, Emily D.

    2017-12-01

    Cellulose nanocrystals (CNCs) are emerging nanomaterials with a large range of potential applications. CNCs are typically produced through acid hydrolysis with sulfuric acid; however, phosphoric acid has the advantage of generating CNCs with higher thermal stability. This paper presents a design of experiments approach to optimize the hydrolysis of CNCs from cotton with phosphoric acid. Hydrolysis time, temperature and acid concentration were varied across nine experiments and a linear least-squares regression analysis was applied to understand the effects of these parameters on CNC properties. In all but one case, rod-shaped nanoparticles with a high degree of crystallinity and thermal stability were produced. A statistical model was generated to predict CNC length, and trends in phosphate content and zeta potential were elucidated. The CNC length could be tuned over a relatively large range (238-475 nm) and the polydispersity could be narrowed most effectively by increasing the hydrolysis temperature and acid concentration. The CNC phosphate content was most affected by hydrolysis temperature and time; however, the charge density and colloidal stability were considered low compared with sulfuric acid hydrolysed CNCs. This study provides insight into weak acid hydrolysis and proposes 'design rules' for CNCs with improved size uniformity and charge density. This article is part of a discussion meeting issue 'New horizons for cellulose nanotechnology'.

  18. Enhanced photocurrent density in graphene/Si based solar cell (GSSC) by optimizing active layer thickness

    Energy Technology Data Exchange (ETDEWEB)

    Rosikhin, Ahmad, E-mail: a.rosikhin86@yahoo.co.id; Hidayat, Aulia Fikri; Syuhada, Ibnu; Winata, Toto, E-mail: toto@fi.itb.ac.id [Department of physics, physics of electronic materials research division Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jl. Ganesha 10, Bandung 40132, Jawa Barat – Indonesia (Indonesia)

    2015-12-29

    The thickness dependence of the photocurrent density in the active layer of a graphene/Si based solar cell has been investigated in an analytical simulation study. This report is a preliminary comparison of experimental and analytical investigations of graphene/Si based solar cells. A graphene sheet was interfaced with a Si thin film to form a heterojunction solar cell, which was treated as a device model for photocurrent generation. The generated current can be enhanced by optimizing the active layer thickness and by introducing a metal oxide supporting layer to shift photon absorption. Two device models, with and without TiO₂, were considered, in which the silicon thickness was varied from 20 to 100 nm. All models were examined and compared with each other to obtain an optimum value. The calculations show that the generated current is almost linear with thickness, but a saturation condition exists beyond which no further enhancement is achieved. Furthermore, the TiO₂ layer effectively increases photon absorption but reduces device stability, and the maximum current fluctuates appreciably. This may be caused by disturbance of exciton diffusion and by the resistivity of the individual layers. Finally, controlling the active layer thickness is useful for estimating the optimization required in developing future solar cell devices.

  19. Enhanced photocurrent density in graphene/Si based solar cell (GSSC) by optimizing active layer thickness

    International Nuclear Information System (INIS)

    Rosikhin, Ahmad; Hidayat, Aulia Fikri; Syuhada, Ibnu; Winata, Toto

    2015-01-01

    The thickness dependence of the photocurrent density in the active layer of a graphene/Si based solar cell has been investigated in an analytical simulation study. This report is a preliminary comparison of experimental and analytical investigations of graphene/Si based solar cells. A graphene sheet was interfaced with a Si thin film to form a heterojunction solar cell, which was treated as a device model for photocurrent generation. The generated current can be enhanced by optimizing the active layer thickness and by introducing a metal oxide supporting layer to shift photon absorption. Two device models, with and without TiO₂, were considered, in which the silicon thickness was varied from 20 to 100 nm. All models were examined and compared with each other to obtain an optimum value. The calculations show that the generated current is almost linear with thickness, but a saturation condition exists beyond which no further enhancement is achieved. Furthermore, the TiO₂ layer effectively increases photon absorption but reduces device stability, and the maximum current fluctuates appreciably. This may be caused by disturbance of exciton diffusion and by the resistivity of the individual layers. Finally, controlling the active layer thickness is useful for estimating the optimization required in developing future solar cell devices.

  20. Optimization of the sintering atmosphere for high-density hydroxyapatite–carbon nanotube composites

    Science.gov (United States)

    White, Ashley A.; Kinloch, Ian A.; Windle, Alan H.; Best, Serena M.

    2010-01-01

    Hydroxyapatite–carbon nanotube (HA–CNT) composites have the potential for improved mechanical properties over HA for use in bone graft applications. Finding an appropriate sintering atmosphere for this composite presents a dilemma, as HA requires water in the sintering atmosphere to remain phase pure and well hydroxylated, yet CNTs oxidize at the high temperatures required for sintering. The purpose of this study was to optimize the atmosphere for sintering these composites. While the reaction between carbon and water to form carbon monoxide and hydrogen at high temperatures (known as the ‘water–gas reaction’) would seem to present a problem for sintering these composites, Le Chatelier's principle suggests this reaction can be suppressed by increasing the concentration of carbon monoxide and hydrogen relative to the concentration of carbon and water, so as to retain the CNTs and keep the HA's structure intact. Eight sintering atmospheres were investigated, including standard atmospheres (such as air and wet Ar), as well as atmospheres based on the water–gas reaction. It was found that sintering in an atmosphere of carbon monoxide and hydrogen, with a small amount of water added, resulted in an optimal combination of phase purity, hydroxylation, CNT retention and density. PMID:20573629

  1. Trophic State and Toxic Cyanobacteria Density in Optimization Modeling of Multi-Reservoir Water Resource Systems

    Directory of Open Access Journals (Sweden)

    Andrea Sulis

    2014-04-01

    Full Text Available The definition of a synthetic index for classifying the quality of water bodies is a key aspect in integrated planning and management of water resource systems. In previous works [1,2], a water system optimization modeling approach that requires a single quality index for stored water in reservoirs has been applied to a complex multi-reservoir system. Considering the same modeling field, this paper presents an improved quality index estimated both on the basis of the overall trophic state of the water body and on the basis of the density values of the most potentially toxic Cyanobacteria. The implementation of the index into the optimization model makes it possible to reproduce the conditions limiting water use due to excessive nutrient enrichment in the water body and to the health hazard linked to toxic blooms. The analysis of an extended limnological database (1996–2012) in four reservoirs of the Flumendosa-Campidano system (Sardinia, Italy) provides useful insights into the strengths and limitations of the proposed synthetic index.

  2. Trophic state and toxic cyanobacteria density in optimization modeling of multi-reservoir water resource systems.

    Science.gov (United States)

    Sulis, Andrea; Buscarinu, Paola; Soru, Oriana; Sechi, Giovanni M

    2014-04-22

    The definition of a synthetic index for classifying the quality of water bodies is a key aspect in integrated planning and management of water resource systems. In previous works [1,2], a water system optimization modeling approach that requires a single quality index for stored water in reservoirs has been applied to a complex multi-reservoir system. Considering the same modeling field, this paper presents an improved quality index estimated both on the basis of the overall trophic state of the water body and on the basis of the density values of the most potentially toxic Cyanobacteria. The implementation of the index into the optimization model makes it possible to reproduce the conditions limiting water use due to excessive nutrient enrichment in the water body and to the health hazard linked to toxic blooms. The analysis of an extended limnological database (1996-2012) in four reservoirs of the Flumendosa-Campidano system (Sardinia, Italy) provides useful insights into the strengths and limitations of the proposed synthetic index.

  3. Optimization of fusion power density in the two-energy-component tokamak reactor

    International Nuclear Information System (INIS)

    Jassby, D.L.

    1974-10-01

    The optimal plasma conditions for maximizing the fusion power density P_f in a beam-driven D-T tokamak reactor (TCT) are considered. Given T_e = T_i and fixed total plasma pressure, there is an optimal n_e·τ_E for maximizing P_f, viz. n_e·τ_E = 4 × 10¹² to 2 × 10¹³ cm⁻³·s for T_e = 3-15 keV and 200-keV D beams. The corresponding Γ̄ ≡ (beam pressure)/(bulk-plasma pressure) is 0.96 to 0.70. P_f,max increases as T_e is reduced and can be an order of magnitude larger than the maximum P_f of a thermal reactor of the same beta, at any temperature. A lower practical limit to T_e may be set by requiring a minimum beam power multiplication Q_b. For the purpose of fissile breeding, the minimum Q_b ≈ 0.6, requiring T_e ≥ 3 keV if Z = 1. The optimal operating conditions of a TCT for obtaining P_f,max are considerably different from those for enhancing Q_b. Maximizing P_f requires restricting both T_e and n_e·τ_E, maintaining a bulk plasma markedly enriched in tritium, and spoiling confinement of fusion alphas. Considerable impurity content can be tolerated without seriously degrading P_f,max, and high-Z impurity radiation may be useful for regulating τ_E. (auth)

  4. Energy Efficient Pico Cell Range Expansion and Density Joint Optimization for Heterogeneous Networks with eICIC

    Directory of Open Access Journals (Sweden)

    Yanzan Sun

    2018-03-01

    Full Text Available Heterogeneous networks, constituted by conventional macro cells and overlaying pico cells, have been deemed a promising paradigm to support the deluge of data traffic with higher spectral efficiency and Energy Efficiency (EE). In order to deploy pico cells in reality, the density of Pico Base Stations (PBSs) and the pico Cell Range Expansion (CRE) are two important factors for the network spectral efficiency as well as EE improvement. However, associated with the range and density evolution, the inter-tier interference within the heterogeneous architecture will be challenging, and the time domain Enhanced Inter-cell Interference Coordination (eICIC) technique becomes necessary. Aiming to improve the network EE, the above factors are jointly considered in this paper. More specifically, we first derive the closed-form expression of the network EE as a function of the density of PBSs and pico CRE bias based on stochastic geometry theory, followed by a linear search algorithm to optimize the pico CRE bias and PBS density, respectively. Moreover, in order to realize the pico CRE bias and PBS density joint optimization, a heuristic algorithm is proposed to achieve the network EE maximization. Numerical simulations show that our proposed pico CRE bias and PBS density joint optimization algorithm can improve the network EE significantly with low computational complexity.
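
    The joint optimization over pico CRE bias and PBS density can be pictured as a search over a two-dimensional energy-efficiency surface, as in the sketch below. The closed-form EE expression here is a placeholder with a single interior maximum, not the stochastic-geometry formula derived in the paper, and the brute-force grid search merely stands in for the linear-search and heuristic procedures described above.

```python
# Hedged sketch: evaluate an assumed EE(bias, density) surface on a grid of
# pico CRE bias and PBS density values and keep the best pair.
import numpy as np

def ee_model(cre_bias_db, pbs_density):
    """Placeholder EE surface with a single interior maximum (assumption only)."""
    return (np.log1p(pbs_density * 50.0) * np.exp(-((cre_bias_db - 6.0) / 4.0) ** 2)
            / (1.0 + 0.8 * pbs_density))

biases = np.linspace(0.0, 12.0, 49)          # candidate CRE biases in dB
densities = np.linspace(0.01, 1.0, 100)      # candidate PBS densities (assumed units)

best = max((ee_model(b, d), b, d) for b in biases for d in densities)
print(f"best EE={best[0]:.3f} at CRE bias={best[1]:.1f} dB, density={best[2]:.2f}")
```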

  5. Optimization Parameters and Some Electronic Properties of AlSb Diamondoids: A Density Function Theory Study

    Directory of Open Access Journals (Sweden)

    Hayder M. Abduljalil

    2018-05-01

    Full Text Available Density functional theory at the LSDA/3-21G level is used to investigate optimized structural parameters (angles and bond lengths) and some electronic properties (cohesive energy, energy gap, and lattice constant) of AlSb nano-diamondoids of different sizes (linear, ring, diamantane, and tetramantane). The results of the present work show that the angles of the AlSbH nanomolecules, lying in the range 96.21-126.05°, are close to the standard diamond angle of 109.47°. The cohesive energy of the studied molecules decreases with increasing size, while the energy gap decreases gradually from 5.2 to 2.1 eV as the number of atoms increases; a similar trend holds for the lattice constant. It is finally shown that molecular size has a direct effect on the electronic properties of the studied material, so that it can be tailored to different applications according to the intended purpose.

  6. Optimization of superconductor--normal-metal--superconductor Josephson junctions for high critical-current density

    International Nuclear Information System (INIS)

    Golub, A.; Horovitz, B.

    1994-01-01

    The application of superconducting Bi₂Sr₂CaCu₂O₈ and YBa₂Cu₃O₇ wires or tapes to electronic devices requires the optimization of the transport properties in Ohmic contacts between the superconductor and the normal metal in the circuit. This paper presents results of tunneling theory in superconductor--normal-metal--superconductor (SNS) junctions, in both pure and dirty limits. We derive expressions for the critical-current density as a function of the normal-metal resistivity in the dirty limit or of the ratio of Fermi velocities and effective masses in the clean limit. In the latter case the critical current increases when the ratio γ of the Fermi velocity in the superconductor to that of the weak link becomes much less than 1 and it also has a local maximum if γ is close to 1. This local maximum is more pronounced if the ratio of effective masses is large. For temperatures well below the critical temperature of the superconductors the model with abrupt pair potential on the SN interfaces is considered and its applicability near the critical temperature is examined

  7. Systematic development and optimization of chemically defined medium supporting high cell density growth of Bacillus coagulans.

    Science.gov (United States)

    Chen, Yu; Dong, Fengqing; Wang, Yonghong

    2016-09-01

    With determined components and experimental reproducibility, the chemically defined medium (CDM) and the minimal chemically defined medium (MCDM) are used in many metabolism and regulation studies. This research aimed to develop a chemically defined medium supporting high-cell-density growth of Bacillus coagulans, which is a promising producer of lactic acid and other bio-chemicals. In this study, a systematic methodology combining experimental techniques with flux balance analysis (FBA) was proposed to design and simplify a CDM. The single-omission technique and single-addition technique were employed to determine the essential and stimulatory compounds, before the optimization of their concentrations by a statistical method. In addition, to improve growth rationally, in silico omission and addition were performed by FBA based on a medium-size metabolic model constructed for B. coagulans 36D1. Thus, CDMs were developed that support considerable biomass production for at least five B. coagulans strains, including the two model strains B. coagulans 36D1 and ATCC 7050.
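
    The single-omission screening step lends itself to a very small sketch: each medium component is dropped in turn and flagged as essential when growth collapses relative to the full medium. The growth-measurement callback, the threshold, and the toy components below are placeholders for the cultivation (or FBA) experiments described above.

```python
# Hedged sketch of single-omission screening of medium components.
def single_omission_screen(components, measure_growth, threshold=0.2):
    """Return (essential, dispensable) component lists."""
    reference = measure_growth(set(components))
    essential, dispensable = [], []
    for c in components:
        reduced = set(components) - {c}
        # A component is essential if growth without it falls below a set
        # fraction of the full-medium reference.
        if measure_growth(reduced) < threshold * reference:
            essential.append(c)
        else:
            dispensable.append(c)
    return essential, dispensable

# Toy usage with a fake growth model in which glucose and three amino acids
# are required (purely illustrative; a real call would run an OD600 assay).
required = {"glucose", "cysteine", "methionine", "tryptophan"}
fake_growth = lambda medium: 1.0 if required <= medium else 0.05
full_medium = list(required | {"alanine", "glycine", "biotin", "niacin"})
print(single_omission_screen(full_medium, fake_growth))
```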

  8. Optimal city size and population density for the 21st century.

    Science.gov (United States)

    Speare A; White, M J

    1990-10-01

    The thesis that large-scale urban areas result in greater efficiency, reduced costs, and a better quality of life is reexamined. The environmental and social costs are measured for different scales of settlement. The desirability and perceived problems of a particular place are examined in relation to size of place, and the consequences of population decline are considered. New York City is described as providing both opportunities in employment, shopping, and cultural activities and a high cost of living, crime, and pollution. The historical development of large cities in the US is described. Immigration has contributed to a greater concentration of population than would otherwise have occurred. The spatial proximity of goods and services argument (agglomeration economies) has changed with advancements in technology such as roads, trucking, and electronic communication. There is no optimal city size. The overall effect of agglomeration can be assessed by determining whether the markets for goods and labor are adequate to maximize well-being and balance the negative and positive aspects of urbanization. The environmental costs of cities increase with size when air quality, water quality, sewage treatment, and hazardous waste disposal are considered. Smaller-scale and lower-density cities have the advantage of a lower concentration of pollutants. Also, mobilization for program support is easier with a homogeneous population. Lower population growth in large cities would contribute to a higher quality of life, since large metropolitan areas have a concentration of immigrants, younger age distributions, and minority groups with higher than average birth rates. The negative consequences of decline can be avoided if the reduction of population in large cities takes place gradually. For example, poorer quality housing can be removed to create open space. Cities should, however, still attract all classes of people with opportunities equally available.

  9. Head and bit patterned media optimization at areal densities of 2.5 Tbit/in² and beyond

    International Nuclear Information System (INIS)

    Bashir, M.A.; Schrefl, T.; Dean, J.; Goncharov, A.; Hrkac, G.; Allwood, D.A.; Suess, D.

    2012-01-01

    Global optimization of the writing head is performed using micromagnetics and surrogate optimization. The shape of the pole tip is optimized for bit patterned, exchange spring recording media. The media characteristics define the effective write field and the threshold values for the head field that acts at islands in the adjacent track. Once the required head field characteristics are defined, the pole tip geometry is optimized in order to achieve a high gradient of the effective write field while keeping the write field at the adjacent track below a given value. We computed the write error rate and the adjacent track erasure for different maximum anisotropy in the multilayer, graded media. The results show a linear trade-off between the error rate and the number of passes before erasure. For optimal head-media combinations we found a bit error rate of 10⁻⁶ with 10⁸ pass lines before erasure at 2.5 Tbit/in². - Research Highlights: → Global optimization of the writing head is performed using micromagnetics and surrogate optimization. → A method is provided to optimize the pole tip shape while maintaining the head field that acts in the adjacent tracks. → Patterned media structures providing an areal density of 2.5 Tbit/in² are discussed as a case study. → Media reliability is studied, taking into account the magnetostatic field interactions from neighbouring islands and adjacent track erasure under the influence of the head field.

  10. Optimized Irregular Low-Density Parity-Check Codes for Multicarrier Modulations over Frequency-Selective Channels

    Directory of Open Access Journals (Sweden)

    Valérian Mannoni

    2004-09-01

    Full Text Available This paper deals with optimized channel coding for OFDM transmissions (COFDM over frequency-selective channels using irregular low-density parity-check (LDPC codes. Firstly, we introduce a new characterization of the LDPC code irregularity called “irregularity profile.” Then, using this parameterization, we derive a new criterion based on the minimization of the transmission bit error probability to design an irregular LDPC code suited to the frequency selectivity of the channel. The optimization of this criterion is done using the Gaussian approximation technique. Simulations illustrate the good performance of our approach for different transmission channels.

  11. Optimal control theory for quantum-classical systems: Ehrenfest molecular dynamics based on time-dependent density-functional theory

    International Nuclear Information System (INIS)

    Castro, A; Gross, E K U

    2014-01-01

    We derive the fundamental equations of an optimal control theory for systems containing both quantum electrons and classical ions. The system is modeled with Ehrenfest dynamics, a non-adiabatic variant of molecular dynamics. The general formulation, that needs the fully correlated many-electron wavefunction, can be simplified by making use of time-dependent density-functional theory. In this case, the optimal control equations require some modifications that we will provide. The abstract general formulation is complemented with the simple example of the H₂⁺ molecule in the presence of a laser field. (paper)

  12. Examples of density, orientation and shape optimal design for stiffness and/or strength with orthotropic materials

    DEFF Research Database (Denmark)

    Pedersen, Pauli

    2004-01-01

    The balance between stiffness and strength design is considered in the present paper. For materials with different levels of orthotropy (including isotropy), we optimize the density distribution as well as the orientational distribution for a short cantilever problem, and discuss the tendencies in design and response (energy distributions and stress directions). For a hole in a biaxial stress field, the shape design of the boundary hole is also incorporated. The resulting tapered density distributions may be difficult to manufacture, for example, in micro-mechanics production. For such problems a penalization approach to obtain "black and white" designs, i.e. uniform material or holes, is often applied in optimal design. A specific example is studied to show the effect of the penalization, but is restricted here to an isotropic material. When the total amount of material is not specified, a conflict

  13. The Karush–Kuhn–Tucker optimality conditions in minimum weight design of elastic rotating disks with variable thickness and density

    Directory of Open Access Journals (Sweden)

    Sanaz Jafari

    2011-10-01

    Full Text Available Rotating discs work mostly at high angular velocity. High speed results in large centrifugal forces in the discs and induces large stresses and deformations. Minimizing the weight of such disks yields various benefits such as low dead weight and lower costs. In order to attain an accurate and reliable analysis, a disk with variable thickness and density is considered. Semi-analytical solutions for the elastic stress distribution in rotating annular disks with uniform and variable thickness and density were obtained under the plane stress assumption by the authors in previous works. The optimum disk profile for minimum weight design is obtained from the Karush–Kuhn–Tucker (KKT) optimality conditions. An inequality constraint equation is used in the optimization to ensure that the maximum von Mises stress remains below the yield strength of the disk material.
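
    A compact numerical illustration of this problem class is given below: ring thicknesses are the design variables, the disk mass is minimized, and a von Mises stress limit enters as an inequality constraint, so the SLSQP solution satisfies the KKT conditions at the optimum. The stress surrogate (cumulative centrifugal load divided by the local cross-section) is a crude placeholder for the paper's semi-analytical elastic solution, and all material and geometry values are assumed.

```python
# Hedged sketch: minimum-weight rotating disk with an inequality stress
# constraint, solved with SLSQP; the stress model is a crude placeholder.
import numpy as np
from scipy.optimize import minimize

rho, omega, sigma_yield = 7800.0, 600.0, 250e6      # kg/m^3, rad/s, Pa (assumed)
r = np.linspace(0.05, 0.25, 20)                     # ring mid-radii (m)
dr = r[1] - r[0]

def mass(h):
    # Total disk mass from the ring thicknesses h.
    return np.sum(rho * h * 2.0 * np.pi * r * dr)

def stress(h):
    # Placeholder: each ring carries the centrifugal load of the material at and
    # outside it, spread over its own cross-section 2*pi*r*h.
    load = np.cumsum((rho * omega**2 * r * h * 2.0 * np.pi * r * dr)[::-1])[::-1]
    return load / (2.0 * np.pi * r * h)

cons = {"type": "ineq", "fun": lambda h: sigma_yield - stress(h)}   # stress <= yield
res = minimize(mass, x0=np.full_like(r, 0.02), method="SLSQP",
               bounds=[(0.002, 0.05)] * r.size, constraints=cons)
print("optimized mass (kg):", round(res.fun, 2))
```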

  14. Optimization of Eisenia fetida stocking density for the bioconversion of rock phosphate enriched cow dung–waste paper mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Unuofin, F.O., E-mail: funmifrank2009@gmail.com; Mnkeni, P.N.S., E-mail: pmnkeni@ufh.ac.za

    2014-11-15

    Highlights: • Vermidegradation of RP-enriched waste mixtures is dependent on E. fetida stocking density. • A stocking density of 12.5 g-worms kg⁻¹ resulted in highly humified vermicomposts. • P release from RP-enriched waste vermicomposts increases with E. fetida stocking density. • RP-enriched waste vermicomposts had no inhibitory effect on seed germination. - Abstract: Vermitechnology is gaining recognition as an environmentally friendly waste management strategy. Its successful implementation requires that key operational parameters such as earthworm stocking density be established for each target waste/waste mixture. One target waste mixture in South Africa is waste paper mixed with cow dung and rock phosphate (RP) for P enrichment. This study sought to establish the optimal Eisenia fetida stocking density for maximum P release and rapid bioconversion of RP-enriched cow dung–paper waste mixtures. E. fetida stocking densities of 0, 7.5, 12.5, 17.5 and 22.5 g-worms kg⁻¹ dry weight of cow dung–waste paper mixtures were evaluated. The stocking density of 12.5 g-worms kg⁻¹ resulted in the highest earthworm growth rate and humification of the RP-enriched waste mixture, as reflected by a C:N ratio of <12 and a humic acid/fulvic acid ratio of >1.9 in the final vermicomposts. A germination test revealed that the resultant vermicompost had no inhibitory effect on the germination of tomato, carrot, and radish. Extractable P increased with stocking density up to 22.5 g-worms kg⁻¹ feedstock, suggesting that for maximum P release from RP-enriched wastes a high stocking density should be considered.

  15. Optimization of Eisenia fetida stocking density for the bioconversion of rock phosphate enriched cow dung–waste paper mixtures

    International Nuclear Information System (INIS)

    Unuofin, F.O.; Mnkeni, P.N.S.

    2014-01-01

    Highlights: • Vermidegradation of RP-enriched waste mixtures is dependent on E. fetida stocking density. • A stocking density of 12.5 g-worms kg⁻¹ resulted in highly humified vermicomposts. • P release from RP-enriched waste vermicomposts increases with E. fetida stocking density. • RP-enriched waste vermicomposts had no inhibitory effect on seed germination. - Abstract: Vermitechnology is gaining recognition as an environmentally friendly waste management strategy. Its successful implementation requires that key operational parameters such as earthworm stocking density be established for each target waste/waste mixture. One target waste mixture in South Africa is waste paper mixed with cow dung and rock phosphate (RP) for P enrichment. This study sought to establish the optimal Eisenia fetida stocking density for maximum P release and rapid bioconversion of RP-enriched cow dung–paper waste mixtures. E. fetida stocking densities of 0, 7.5, 12.5, 17.5 and 22.5 g-worms kg⁻¹ dry weight of cow dung–waste paper mixtures were evaluated. The stocking density of 12.5 g-worms kg⁻¹ resulted in the highest earthworm growth rate and humification of the RP-enriched waste mixture, as reflected by a C:N ratio of <12 and a humic acid/fulvic acid ratio of >1.9 in the final vermicomposts. A germination test revealed that the resultant vermicompost had no inhibitory effect on the germination of tomato, carrot, and radish. Extractable P increased with stocking density up to 22.5 g-worms kg⁻¹ feedstock, suggesting that for maximum P release from RP-enriched wastes a high stocking density should be considered.

  16. Optimization of multiply acquired magnetic flux density B_z using ICNE-Multiecho train in MREIT

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Hyun Soo; Kwon, Oh In [Department of Mathematics, Konkuk University, Seoul (Korea, Republic of)

    2010-05-07

    The aim of magnetic resonance electrical impedance tomography (MREIT) is to visualize the electrical properties, conductivity or current density, of an object by injection of current. Recently, the prolonged data acquisition time of the injected current nonlinear encoding (ICNE) method has proved advantageous for the measurement of magnetic flux density data, B_z, for MREIT in terms of signal-to-noise ratio (SNR). However, the ICNE method results in undesirable side artifacts, such as blurring, chemical shift and phase artifacts, due to the long data acquisition under an inhomogeneous static field. In this paper, we apply the ICNE method to a gradient and spin echo (GRASE) multi-echo train pulse sequence in order to provide multiple k-space lines during a single RF pulse period. We analyze the SNR of the measured multiple B_z data using the proposed ICNE-Multiecho MR pulse sequence. By determining a weighting factor for the B_z data in each of the echoes, an optimized inversion formula for the magnetic flux density data is proposed for the ICNE-Multiecho MR sequence. Using the ICNE-Multiecho method, the quality of the measured magnetic flux density is considerably increased by injecting a long current through the echo train length and by optimizing the voxel-by-voxel noise level of the B_z value. Agarose-gel phantom experiments have demonstrated fewer artifacts and a better SNR using the ICNE-Multiecho method. Experimenting with the brain of an anesthetized dog, we collected valuable echoes by taking into account the noise level of each echo and determined B_z data using optimized weighting factors for the multiply acquired magnetic flux density data.
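
    One standard way to combine multiply acquired, noisy measurements of the same quantity is inverse-variance weighting, which minimizes the variance of the combined estimate; the sketch below illustrates the idea. The per-echo noise levels are toy assumptions, and the paper's weighting factors are instead derived from the actual voxel-by-voxel noise analysis of the ICNE-Multiecho sequence.

```python
# Hedged sketch: combining several noisy echo measurements of one B_z value
# with inverse-variance weights.
import numpy as np

rng = np.random.default_rng(5)
true_bz = 2.0e-8                                     # tesla (arbitrary toy value)
echo_sigma = np.array([1.0, 1.6, 2.5, 4.0]) * 1e-9   # noise level per echo (assumed)
echoes = true_bz + echo_sigma * rng.standard_normal(4)

weights = 1.0 / echo_sigma**2
weights /= weights.sum()
bz_combined = np.sum(weights * echoes)

print("per-echo values :", echoes)
print("combined estimate:", bz_combined)
print("combined std dev :", (1.0 / np.sum(1.0 / echo_sigma**2)) ** 0.5)
```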

  17. Therapeutic treatment plan optimization with probability density-based dose prescription

    International Nuclear Information System (INIS)

    Lian Jun; Cotrutz, Cristian; Xing Lei

    2003-01-01

    The dose optimization in inverse planning is realized under the guidance of an objective function. The prescription doses in a conventional approach are usually rigid values, defining in most instances an ill-conditioned optimization problem. In this work, we propose a more general dose optimization scheme based on a statistical formalism [Xing et al., Med. Phys. 21, 2348-2358 (1999)]. Instead of a rigid dose, the prescription to a structure is specified by a preference function, which describes the user's preference over other doses in case the most desired dose is not attainable. The variation range of the prescription dose and the shape of the preference function are predesigned by the user based on prior clinical experience. Consequently, during the iterative optimization process, the prescription dose is allowed to deviate, with a certain preference level, from the most desired dose. By not restricting the prescription dose to a fixed value, the optimization problem becomes less ill-defined. The conventional inverse planning algorithm represents a special case of the new formalism. An iterative dose optimization algorithm is used to optimize the system. The performance of the proposed technique is systematically studied using a hypothetical C-shaped tumor with an abutting circular critical structure and a prostate case. It is shown that the final dose distribution can be manipulated flexibly by tuning the shape of the preference function and that using a preference function can lead to optimized dose distributions in accordance with the planner's specification. The proposed framework offers an effective mechanism to formalize the planner's priorities over different possible clinical scenarios and incorporate them into dose optimization. The enhanced control over the final plan may greatly facilitate the IMRT treatment planning process
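
    A minimal sketch of the preference-function idea, not the authors' implementation, is given below: the target-dose term of the objective is shaped by a Gaussian preference centred on the most desired dose, so that doses slightly away from it are tolerated according to a prescribed preference level. All numbers and functional forms are illustrative assumptions.

```python
# Hedged sketch: an objective with a preference function instead of a rigid
# prescription dose, plus a one-sided overdose penalty for an organ at risk.
import numpy as np

def preference(dose, d_desired, width):
    """Preference level in (0, 1]; 1 at the most desired dose."""
    return np.exp(-0.5 * ((dose - d_desired) / width) ** 2)

def objective(target_dose, oar_dose, d_rx=70.0, rx_width=3.0, oar_limit=30.0):
    # Target voxels: quadratic deviation weighted by (1 - preference), so doses
    # close to d_rx are tolerated according to the planner's stated preference.
    target_term = np.mean((1.0 - preference(target_dose, d_rx, rx_width))
                          * (target_dose - d_rx) ** 2)
    # Organ-at-risk voxels: penalize only doses above the limit.
    oar_term = np.mean(np.clip(oar_dose - oar_limit, 0.0, None) ** 2)
    return target_term + 0.5 * oar_term

# Toy comparison of two candidate dose distributions on 1000 target voxels.
rng = np.random.default_rng(3)
plan_a = rng.normal(70.0, 1.5, 1000)
plan_b = rng.normal(66.0, 1.5, 1000)
oar = rng.normal(25.0, 4.0, 1000)
print(objective(plan_a, oar) < objective(plan_b, oar))   # True: plan A is preferred
```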

  18. Application of gamma radiation backscattering in determining density and Z_eff of scattering material: Monte Carlo optimization of configuration

    International Nuclear Information System (INIS)

    Cechak, T.

    1982-01-01

    In Gardner's double-evaluation method, one detector should be positioned such that its response is independent of the material density, while the second detector should be positioned so as to maximize changes in response due to density changes. Experimental scanning for the optimal energy is extremely time-consuming. A program based on the Monte Carlo method was therefore written to determine the magnitude of the error made when the computation of gamma radiation backscattering neglects multiply scattered photons, how this error depends on the atomic number of the scattering material, and whether the contribution of individual scattering orders to the spectrum of backscattered photons depends on the positioning of the detector. 42 detectors, 8 types of material and 10 different density values were considered. The computed dependences are given graphically. (M.D.)

  19. Globally-Optimized Local Pseudopotentials for (Orbital-Free) Density Functional Theory Simulations of Liquids and Solids.

    Science.gov (United States)

    Del Rio, Beatriz G; Dieterich, Johannes M; Carter, Emily A

    2017-08-08

    The accuracy of local pseudopotentials (LPSs) is one of two major determinants of the fidelity of orbital-free density functional theory (OFDFT) simulations. We present a global optimization strategy for LPSs that enables OFDFT to reproduce solid and liquid properties obtained from Kohn-Sham DFT. Our optimization strategy can fit arbitrary properties from both solid and liquid phases, so the resulting globally optimized local pseudopotentials (goLPSs) can be used in solid and/or liquid-phase simulations depending on the fitting process. We show three test cases proving that we can (1) improve solid properties compared to our previous bulk-derived local pseudopotential generation scheme; (2) refine predicted liquid and solid properties by adding force matching data; and (3) generate a from-scratch, accurate goLPS from the local channel of a non-local pseudopotential. The proposed scheme therefore serves as a full and improved LPS construction protocol.

  20. Model-based Optimization and Feedback Control of the Current Density Profile Evolution in NSTX-U

    Science.gov (United States)

    Ilhan, Zeki Okan

    Nuclear fusion research is a highly challenging, multidisciplinary field seeking contributions from both plasma physics and multiple engineering areas. As an application of plasma control engineering, this dissertation mainly explores methods to control the current density profile evolution within the National Spherical Torus eXperiment-Upgrade (NSTX-U), a substantial upgrade of the NSTX device located at the Princeton Plasma Physics Laboratory (PPPL), Princeton, NJ. Active control of the toroidal current density profile is among those plasma control milestones that the NSTX-U program must achieve to realize its next-step operational goals, which are characterized by high-performance, long-pulse, MHD-stable plasma operation with neutral beam heating. Therefore, the aim of this work is to develop model-based, feedforward and feedback controllers that can enable time regulation of the current density profile in NSTX-U by actuating the total plasma current, electron density, and the powers of the individual neutral beam injectors. Motivated by the coupled, nonlinear, multivariable, distributed-parameter plasma dynamics, the first step towards control design is the development of a physics-based, control-oriented model for the current profile evolution in NSTX-U in response to non-inductive current drives and heating systems. Numerical simulations of the proposed control-oriented model show qualitative agreement with the high-fidelity physics code TRANSP. The next step is to utilize the proposed control-oriented model to design an open-loop actuator trajectory optimizer. Given a desired operating state, the optimizer produces the actuator trajectories that can steer the plasma to such a state. The objective of the feedforward control design is to provide a more systematic approach to advanced scenario planning in NSTX-U since the development of such scenarios is conventionally carried out experimentally by modifying the tokamak's actuator

  1. A density-based topology optimization methodology for thermoelectric energy conversion problems

    DEFF Research Database (Denmark)

    Lundgaard, Christian; Sigmund, Ole

    2018-01-01

    , temperature dependent material parameters and relevant objective functions. Comprehensive implementation details of the methodology are provided, and seven different design problems are solved and discussed to demonstrate that the approach is well-suited for optimizing TEGs and TECs. The study reveals new...

  2. Studying the varied shapes of gold clusters by an elegant optimization algorithm that hybridizes the density functional tight-binding theory and the density functional theory

    Science.gov (United States)

    Yen, Tsung-Wen; Lim, Thong-Leng; Yoon, Tiem-Leong; Lai, S. K.

    2017-11-01

    We combined a new parametrized density functional tight-binding (DFTB) theory (Fihey et al. 2015) with an unbiased modified basin hopping (MBH) optimization algorithm (Yen and Lai 2015) and applied it to calculate the lowest energy structures of Au clusters. From the calculated topologies and their conformational changes, we find that this DFTB/MBH method is a necessary procedure for a systematic study of the structural development of Au clusters but is somewhat insufficient for a quantitative study. As a result, we propose an extended hybridized algorithm. This improved algorithm proceeds in two steps. In the first step, the DFTB theory is employed to calculate the total energy of the cluster; this step (running the DFTB/MBH optimization for a given number of Monte Carlo steps) is meant to bring the Au cluster efficiently near to the region of the lowest energy minimum, since the cluster as a whole has explicitly, albeit semi-quantitatively, taken into account the interactions of valence electrons with ions. In the second step, the energy-minimum search continues with the energy function calculated by the DFTB theory replaced by one calculated within full density functional theory (DFT). In these subsequent calculations, we couple the DFT energy with the MBH strategy as well and proceed with the DFT/MBH optimization until the lowest energy value is found. We checked that this extended hybridized algorithm successfully predicts the twisted pyramidal structure of the Au₄₀ cluster and also correctly confirms the linear shape of C₈, which our previous DFTB/MBH method failed to do. Perhaps more remarkable is the topological growth of Au_n: it changes from planar (n = 3-11) → oblate-like cage (n = 12-15) → hollow cage (n = 16-18) and finally to a pyramidal-like cage (n = 19, 20). These varied cluster shapes are consistent with those reported in the literature.
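
    The two-step strategy can be sketched schematically with scipy's basin-hopping routine, using two placeholder energy surfaces in place of the DFTB and DFT energies: a longer search on the cheap surface first brings the structure near the low-energy region, and a shorter continuation then refines it on the more accurate surface. The Lennard-Jones-type surrogates, cluster size, and hop counts are assumptions for illustration only.

```python
# Hedged sketch: two-stage basin hopping with a cheap surrogate energy followed
# by a more "accurate" one; neither surrogate is an actual DFTB or DFT energy.
import numpy as np
from scipy.optimize import basinhopping

N_ATOMS = 8
rng = np.random.default_rng(4)

def lj_energy(x, eps=1.0, sigma=1.0):
    pos = x.reshape(N_ATOMS, 3)
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    r = d[np.triu_indices(N_ATOMS, k=1)]
    return np.sum(4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6))

def cheap_energy(x):      # stand-in for the DFTB total energy
    return lj_energy(x, eps=0.9, sigma=1.02)

def accurate_energy(x):   # stand-in for the DFT total energy
    return lj_energy(x, eps=1.0, sigma=1.00)

# Start from a jittered cubic arrangement so no two atoms overlap.
grid = np.array([[i, j, k] for i in range(2) for j in range(2) for k in range(2)], float)
x0 = (1.2 * grid + 0.1 * rng.standard_normal(grid.shape)).ravel()

# Step 1: coarse global search on the cheap surface (more hops).
stage1 = basinhopping(cheap_energy, x0, niter=50, stepsize=0.3,
                      minimizer_kwargs={"method": "L-BFGS-B"})
# Step 2: continue from that structure on the accurate surface (fewer hops).
stage2 = basinhopping(accurate_energy, stage1.x, niter=15, stepsize=0.3,
                      minimizer_kwargs={"method": "L-BFGS-B"})
print("final energy on the accurate surface:", round(stage2.fun, 3))
```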

  3. Conjugate-gradient optimization method for orbital-free density functional calculations.

    Science.gov (United States)

    Jiang, Hong; Yang, Weitao

    2004-08-01

    Orbital-free density functional theory, as an extension of traditional Thomas-Fermi theory, has attracted a lot of interest in the past decade because of developments in both more accurate kinetic energy functionals and highly efficient numerical methodology. In this paper, we develop a conjugate-gradient method for the numerical solution of the spin-dependent extended Thomas-Fermi equation by incorporating techniques previously used in Kohn-Sham calculations. The key ingredients of the method are an approximate line-search scheme and a collective treatment of the two spin densities in the case of the spin-dependent extended Thomas-Fermi problem. Test calculations for a quartic two-dimensional quantum dot system and a three-dimensional sodium cluster Na216 with a local pseudopotential demonstrate that the method is accurate and efficient. (c) 2004 American Institute of Physics.
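
    As an illustration of the optimization skeleton only (not of the paper's kinetic-energy functional or spin treatment), the following Python sketch runs a nonlinear conjugate-gradient loop with an approximate backtracking (Armijo) line search on a toy objective; objective and gradient are hypothetical placeholders.

      import numpy as np

      def objective(x):
          # Hypothetical smooth toy "energy" standing in for the real functional.
          return np.sum(0.5 * x**2 + 0.25 * x**4 - x)

      def gradient(x):
          return x + x**3 - 1.0

      def line_search(x, d, g, f, alpha=1.0, shrink=0.5, c=1e-4):
          """Approximate (backtracking Armijo) line search."""
          while objective(x + alpha * d) > f + c * alpha * np.dot(g, d) and alpha > 1e-12:
              alpha *= shrink
          return alpha

      def nonlinear_cg(x, tol=1e-8, max_iter=500):
          g = gradient(x)
          d = -g
          for _ in range(max_iter):
              alpha = line_search(x, d, g, objective(x))
              x = x + alpha * d
              g_new = gradient(x)
              if np.linalg.norm(g_new) < tol:
                  break
              # Polak-Ribiere coefficient, restarted (set to 0) if it turns negative.
              beta = max(0.0, np.dot(g_new, g_new - g) / np.dot(g, g))
              d = -g_new + beta * d
              g = g_new
          return x

      x_min = nonlinear_cg(np.zeros(10))
      print(np.round(x_min[:3], 6))   # each component converges to the same scalar minimizer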

  4. Classical and modern optimization methods in minimum weight design of elastic rotating disk with variable thickness and density

    International Nuclear Information System (INIS)

    Jafari, S.; Hojjati, M.H.; Fathi, A.

    2012-01-01

    Rotating disks mostly operate at high angular velocity, which results in large centrifugal forces and consequently induces large stresses and deformations. Minimizing the weight of such disks yields benefits such as lower dead weight and lower costs. This paper aims at finding optimal disk profiles for minimum weight design using the Karush-Kuhn-Tucker (KKT) method as a classical optimization method, and simulated annealing (SA) and particle swarm optimization (PSO) as two modern optimization techniques. Semi-analytical solutions for the elastic stress distribution in a rotating annular disk with uniform and variable thickness and density, proposed by the authors in previous works, have been used. The von Mises failure criterion of the optimum disk is used as an inequality constraint to ensure that the rotating disk does not fail. The results show that the minimum weight obtained by all three methods is almost identical. The KKT method gives a profile with slightly less weight (6% less than SA and 1% less than PSO), while the PSO and SA methods are easier to implement and provide more flexibility compared with the KKT method. The effectiveness of the proposed optimization methods is shown. - Highlights: ► Karush-Kuhn-Tucker, simulated annealing and particle swarm methods are used. ► The KKT gives slightly less weight (6% less than SA and 1% less than PSO). ► Implementation of the PSO and SA methods is easier and provides more flexibility. ► The effectiveness of the proposed optimization methods is shown.
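
    A minimal particle swarm optimization (PSO) sketch in Python for a constrained minimum-weight problem of the kind described above. The ring-thickness parameterization, the weight function, the stand-in stress model and the penalty used to enforce the constraint are all simplifying assumptions for illustration; they are not the semi-analytical solutions used in the paper.

      import numpy as np

      rng = np.random.default_rng(6)

      def weight(thickness):
          # Toy objective: total weight proportional to the sum of ring thicknesses.
          return np.sum(thickness)

      def max_von_mises(thickness):
          # Hypothetical stand-in for the stress solution: thinner rings carry higher stress.
          return np.max(1.0 / thickness)

      def penalized_objective(thickness, stress_limit=5.0, penalty=100.0):
          violation = max(0.0, max_von_mises(thickness) - stress_limit)
          return weight(thickness) + penalty * violation**2

      def pso(n_particles=30, n_dim=8, n_iter=300, w=0.7, c1=1.5, c2=1.5,
              lower=0.05, upper=1.0):
          x = rng.uniform(lower, upper, (n_particles, n_dim))
          v = np.zeros_like(x)
          p_best = x.copy()
          p_val = np.array([penalized_objective(p) for p in x])
          g_best = p_best[np.argmin(p_val)].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, n_dim))
              v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
              x = np.clip(x + v, lower, upper)
              vals = np.array([penalized_objective(p) for p in x])
              improved = vals < p_val
              p_best[improved], p_val[improved] = x[improved], vals[improved]
              g_best = p_best[np.argmin(p_val)].copy()
          return g_best, penalized_objective(g_best)

      profile, obj = pso()
      print("optimized ring thicknesses:", np.round(profile, 3))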

  5. Classical and modern optimization methods in minimum weight design of elastic rotating disk with variable thickness and density

    Energy Technology Data Exchange (ETDEWEB)

    Jafari, S. [Faculty of Mechanical Engineering, Babol University of Technology, P.O. Box 484, Babol (Iran, Islamic Republic of); Hojjati, M.H., E-mail: Hojjati@nit.ac.ir [Faculty of Mechanical Engineering, Babol University of Technology, P.O. Box 484, Babol (Iran, Islamic Republic of); Fathi, A. [Faculty of Mechanical Engineering, Babol University of Technology, P.O. Box 484, Babol (Iran, Islamic Republic of)

    2012-04-15

    Rotating disks mostly operate at high angular velocity, which results in large centrifugal forces and consequently induces large stresses and deformations. Minimizing the weight of such disks yields benefits such as lower dead weight and lower costs. This paper aims at finding optimal disk profiles for minimum weight design using the Karush-Kuhn-Tucker (KKT) method as a classical optimization method, and simulated annealing (SA) and particle swarm optimization (PSO) as two modern optimization techniques. Semi-analytical solutions for the elastic stress distribution in a rotating annular disk with uniform and variable thickness and density, proposed by the authors in previous works, have been used. The von Mises failure criterion of the optimum disk is used as an inequality constraint to ensure that the rotating disk does not fail. The results show that the minimum weight obtained by all three methods is almost identical. The KKT method gives a profile with slightly less weight (6% less than SA and 1% less than PSO), while the PSO and SA methods are easier to implement and provide more flexibility compared with the KKT method. The effectiveness of the proposed optimization methods is shown. - Highlights: ► Karush-Kuhn-Tucker, simulated annealing and particle swarm methods are used. ► The KKT gives slightly less weight (6% less than SA and 1% less than PSO). ► Implementation of the PSO and SA methods is easier and provides more flexibility. ► The effectiveness of the proposed optimization methods is shown.

  6. Variational Optimization of the Second-Order Density Matrix Corresponding to a Seniority-Zero Configuration Interaction Wave Function.

    Science.gov (United States)

    Poelmans, Ward; Van Raemdonck, Mario; Verstichel, Brecht; De Baerdemacker, Stijn; Torre, Alicia; Lain, Luis; Massaccesi, Gustavo E; Alcoba, Diego R; Bultinck, Patrick; Van Neck, Dimitri

    2015-09-08

    We perform a direct variational determination of the second-order (two-particle) density matrix corresponding to a many-electron system, under a restricted set of the two-index N-representability P-, Q-, and G-conditions. In addition, we impose a set of necessary constraints that the two-particle density matrix must be derivable from a doubly occupied many-electron wave function, i.e., a singlet wave function whose Slater determinant decomposition only contains determinants in which spatial orbitals are doubly occupied. We rederive the two-index N-representability conditions first found by Weinhold and Wilson and apply them to various benchmark systems (linear hydrogen chains, He, N2, and CN(-)). This work is motivated by the fact that a doubly occupied many-electron wave function captures in many cases the bulk of the static correlation. Compared with the general case, the structure of doubly occupied two-particle density matrices causes the associated semidefinite program to have a very favorable scaling as L^3, where L is the number of spatial orbitals. Since the doubly occupied Hilbert space depends on the choice of the orbitals, variational calculation steps of the two-particle density matrix are interspersed with orbital-optimization steps (based on Jacobi rotations in the space of the spatial orbitals). We also point to the importance of symmetry breaking of the orbitals when performing calculations in a doubly occupied framework.

  7. Density-optimized efficiency for magneto-optical production of a stable molecular Bose-Einstein condensate

    Energy Technology Data Exchange (ETDEWEB)

    Mackie, Matt [Helsinki Institute of Physics, PL 64, FIN-00014 Helsingin yliopisto (Finland); Collin, Anssi [Helsinki Institute of Physics, PL 64, FIN-00014 Helsingin yliopisto (Finland); Suominen, Kalle-Antti [Helsinki Institute of Physics, PL 64, FIN-00014 Helsingin yliopisto (Finland); Javanainen, Juha [Department of Physics, University of Connecticut, Storrs, CT 06269-3046 (United States)

    2003-08-01

    Although photoassociation and the Feshbach resonance are feasible means in principle for creating a molecular Bose-Einstein condensate (MBEC) from an already quantum-degenerate gas of atoms, collision-induced mean-field shifts and irreversible decay place practical constraints on the efficient Raman delivery of stable molecules. Focusing on stimulated Raman adiabatic passage, we propose that the efficiency of both mechanisms for producing a stable MBEC can be improved by treating the density of the initial atom condensate as an optimization parameter.

  8. Orbital-Optimized MP3 and MP2.5 with Density-Fitting and Cholesky Decomposition Approximations.

    Science.gov (United States)

    Bozkaya, Uğur

    2016-03-08

    Efficient implementations of the orbital-optimized MP3 and MP2.5 methods with the density-fitting (DF-OMP3 and DF-OMP2.5) and Cholesky decomposition (CD-OMP3 and CD-OMP2.5) approaches are presented. The DF/CD-OMP3 and DF/CD-OMP2.5 methods are applied to a set of alkanes to compare the computational cost with the conventional orbital-optimized MP3 (OMP3) [Bozkaya J. Chem. Phys. 2011, 135, 224103] and the orbital-optimized MP2.5 (OMP2.5) [Bozkaya and Sherrill J. Chem. Phys. 2014, 141, 204105]. Our results demonstrate that the DF-OMP3 and DF-OMP2.5 methods provide considerably lower computational costs than OMP3 and OMP2.5. Further application results show that the orbital-optimized methods are very helpful for the study of open-shell noncovalent interactions, aromatic bond dissociation energies, and hydrogen transfer reactions. We conclude that the DF-OMP3 and DF-OMP2.5 methods are very promising for molecular systems with challenging electronic structures.

  9. Manipulating Crop Density to Optimize Nitrogen and Water Use: An Application of Precision Agroecology

    Science.gov (United States)

    Brown, T. T.; Huggins, D. R.; Smith, J. L.; Keller, C. K.; Kruger, C.

    2011-12-01

    Rising levels of reactive nitrogen (Nr) in the environment, coupled with an increasing population, position agriculture as a major contributor of food and ecosystem services to the world. The concept of Precision Agroecology (PA) explicitly recognizes the importance of time and place by combining the principles of precision farming with ecology, creating a framework that can lead to improvements in Nr use efficiency. In the Palouse region of the Pacific Northwest, USA, relationships between productivity, N dynamics and cycling, water availability, and environmental impacts result from intricate spatial and temporal variations in soil, ecosystem processes, and socioeconomic factors. Our research goal is to investigate N use efficiency (NUE) in the context of factors that regulate site-specific environmental and economic conditions and to develop the concept of PA for use in sustainable agroecosystems and science-based Nr policy. Nitrogen and plant density field trials with winter wheat (Triticum aestivum L.) were conducted at the Washington State University Cook Agronomy Farm near Pullman, WA under long-term no-tillage management in 2010 and 2011. Treatments were imposed across environmentally heterogeneous field conditions to assess soil, crop and environmental interactions. Microplots with a split N application using 15N-labeled fertilizer were established in 2011 to examine the impact of N timing on uptake of fertilizer and soil N throughout the growing season for two plant density treatments. Preliminary data show that plant density manipulation combined with precision N applications regulated water and N use and resulted in greater wheat yield with less seed and N inputs. These findings indicate that improvements to NUE and agroecosystem sustainability should consider landscape-scale patterns driving productivity (e.g., spatial and temporal dynamics of water availability and N transformations) and would benefit from policy incentives that promote a PA

  10. On-the-Fly Machine Learning of Atomic Potential in Density Functional Theory Structure Optimization

    Science.gov (United States)

    Jacobsen, T. L.; Jørgensen, M. S.; Hammer, B.

    2018-01-01

    Machine learning (ML) is used to derive local stability information for density functional theory calculations of systems in relation to the recently discovered SnO2(110)-(4×1) reconstruction. The ML model is trained on (structure, total energy) relations collected during global minimum energy search runs with an evolutionary algorithm (EA). While being built, the ML model is used to guide the EA, thereby speeding up the overall rate at which the EA succeeds. Inspection of the local atomic potentials emerging from the model further shows chemically intuitive patterns.
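
    The on-the-fly surrogate idea can be sketched in Python as follows: a simple kernel ridge model is refitted on all (descriptor, energy) pairs seen so far and used to pre-screen mutated candidates, so that only the most promising ones reach the expensive evaluation. The descriptor space, true_energy and the mutation scheme are hypothetical placeholders, not the DFT/EA machinery of the cited work.

      import numpy as np

      rng = np.random.default_rng(1)

      def true_energy(x):
          # Hypothetical expensive evaluation standing in for a DFT calculation.
          return np.sum((x - 0.3)**2) + 0.1 * np.sum(np.sin(5.0 * x))

      def fit_krr(X, y, gamma=1.0, lam=1e-4):
          """Gaussian-kernel ridge regression, refitted from scratch on all data seen so far."""
          K = np.exp(-gamma * np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1))
          coef = np.linalg.solve(K + lam * np.eye(len(X)), y)
          def predict(Z):
              Kz = np.exp(-gamma * np.sum((Z[:, None, :] - X[None, :, :])**2, axis=-1))
              return Kz @ coef
          return predict

      dim, pop_size, n_gen = 6, 12, 30
      pop = rng.uniform(-1.0, 1.0, (pop_size, dim))
      X_seen = pop.copy()
      y_seen = np.array([true_energy(x) for x in pop])

      for _ in range(n_gen):
          model = fit_krr(X_seen, y_seen)
          # Many cheap mutated candidates; the surrogate keeps only the most promising,
          # and only those survivors are evaluated with the expensive energy.
          cand = pop[rng.integers(pop_size, size=60)] + rng.normal(scale=0.2, size=(60, dim))
          survivors = cand[np.argsort(model(cand))[:pop_size]]
          e = np.array([true_energy(x) for x in survivors])
          X_seen = np.vstack([X_seen, survivors])
          y_seen = np.concatenate([y_seen, e])
          pop = survivors[np.argsort(e)]          # next generation, best first

      print("best energy found:", y_seen.min())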

  11. Design of a bovine low-density SNP array optimized for imputation.

    Directory of Open Access Journals (Sweden)

    Didier Boichard

    Full Text Available The Illumina BovineLD BeadChip was designed to support imputation to higher density genotypes in dairy and beef breeds by including single-nucleotide polymorphisms (SNPs that had a high minor allele frequency as well as uniform spacing across the genome except at the ends of the chromosome where densities were increased. The chip also includes SNPs on the Y chromosome and mitochondrial DNA loci that are useful for determining subspecies classification and certain paternal and maternal breed lineages. The total number of SNPs was 6,909. Accuracy of imputation to Illumina BovineSNP50 genotypes using the BovineLD chip was over 97% for most dairy and beef populations. The BovineLD imputations were about 3 percentage points more accurate than those from the Illumina GoldenGate Bovine3K BeadChip across multiple populations. The improvement was greatest when neither parent was genotyped. The minor allele frequencies were similar across taurine beef and dairy breeds as was the proportion of SNPs that were polymorphic. The new BovineLD chip should facilitate low-cost genomic selection in taurine beef and dairy cattle.

  12. A Probabilistic Model to Evaluate the Optimal Density of Stations Measuring Snowfall.

    Science.gov (United States)

    Schneebeli, Martin; Laternser, Martin

    2004-05-01

    Daily new snow measurements are very important for avalanche forecasting and tourism. A dense network of manual or automatic stations measuring snowfall is necessary to have spatially reliable data. Snow stations in Switzerland were built at partially subjective locations. A probabilistic model based on the frequency and spatial extent of areas covered by heavy snowfalls was developed to quantify the probability that snowfall events are measured by the stations. Area probability relations were calculated for different thresholds of daily accumulated snowfall. A probabilistic model, including autocorrelation, was used to calculate the optimal spacing of stations based on simulated triangular grids and to compare the capture probability of different networks and snowfall thresholds. The Swiss operational snow-stations network captured snowfall events with high probability, but the distribution of the stations could be optimized. The spatial variability increased with higher thresholds of daily accumulated snowfall, and the capture probability decreased with increasing thresholds. The method can be used for other areas where the area probability relation for threshold values of snow or rain can be calculated.
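
    A toy Monte Carlo illustration of the capture-probability idea: events of a given areal extent are dropped at random over a region, and one checks how often at least one station of a triangular grid falls inside the event footprint. The circular footprints and all numerical values below are assumptions chosen for illustration, not the calibrated area-probability relations of the study.

      import numpy as np

      rng = np.random.default_rng(2)

      def triangular_grid(spacing, size):
          """Station coordinates on a triangular lattice covering a square region."""
          rows = int(size / (spacing * np.sqrt(3) / 2)) + 1
          cols = int(size / spacing) + 1
          pts = []
          for r in range(rows):
              offset = 0.5 * spacing if r % 2 else 0.0
              y = r * spacing * np.sqrt(3) / 2
              pts.extend((c * spacing + offset, y) for c in range(cols))
          return np.array(pts)

      def capture_probability(spacing, event_radius, size=100.0, n_events=20000):
          stations = triangular_grid(spacing, size)
          centers = rng.uniform(0, size, (n_events, 2))
          # An event is "captured" if any station falls within its footprint.
          d2 = np.sum((centers[:, None, :] - stations[None, :, :])**2, axis=-1)
          return np.mean(np.any(d2 <= event_radius**2, axis=1))

      for spacing in (10.0, 20.0, 30.0):     # station spacing in km (assumed)
          p = capture_probability(spacing, event_radius=15.0)
          print(f"spacing {spacing:5.1f} km -> capture probability {p:.3f}")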

  13. Optimization of digital image processing to determine quantum dots' height and density from atomic force microscopy.

    Science.gov (United States)

    Ruiz, J E; Paciornik, S; Pinto, L D; Ptak, F; Pires, M P; Souza, P L

    2018-01-01

    An optimized method of digital image processing to interpret quantum dots' height measurements obtained by atomic force microscopy is presented. The method was developed by combining well-known digital image processing techniques and particle recognition algorithms. The properties of quantum dot structures strongly depend on the dots' height, among other features. Determination of their height is sensitive to small variations in the digital image processing parameters, which can generate misleading results. Comparing the results obtained with two image processing techniques - a conventional method and the new method proposed herein - with the data obtained by determining the height of quantum dots one by one within a fixed area showed that the optimized method leads to more accurate results. Moreover, the log-normal distribution, which is often used to represent natural processes, shows a better fit to the quantum dots' height histogram obtained with the proposed method. Finally, the quantum dots' heights obtained were used to calculate the predicted photoluminescence peak energies, which were compared with the experimental data. Again, a better match was observed when using the proposed method to evaluate the quantum dots' height. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. A generalized model for optimal transport of images including dissipation and density modulation

    KAUST Repository

    Maas, Jan

    2015-11-01

    © EDP Sciences, SMAI 2015. In this paper the optimal transport and the metamorphosis perspectives are combined. For a pair of given input images, geodesic paths in the space of images are defined as minimizers of a resulting path energy. To this end, the underlying Riemannian metric measures the rate of transport cost and the rate of viscous dissipation. Furthermore, the model is capable of dealing with strongly varying image contrast and explicitly allows for sources and sinks in the transport equations, which are incorporated in the metric in the spirit of the metamorphosis approach by Trouvé and Younes. In the non-viscous case with a source term, the existence of geodesic paths is proven in the space of measures. The proposed model is explored over the range from merely optimal transport to strongly dissipative dynamics. For this model a robust and effective variational time discretization of geodesic paths is proposed. This requires minimizing a discrete path energy consisting of a sum of consecutive image matching functionals. These functionals are defined on corresponding pairs of intensity functions and on associated pairwise matching deformations. Existence of time-discrete geodesics is demonstrated. Furthermore, a finite element implementation is proposed and applied to instructive test cases and to real images. In the non-viscous case this is compared with the algorithm proposed by Benamou and Brenier, including a discretization of the source term. Finally, the model is generalized to define discrete weighted barycentres with applications to textures and objects.

  15. Mapping Optimal Charge Density and Length of ROMP-Based PTDMs for siRNA Internalization.

    Science.gov (United States)

    Caffrey, Leah M; deRonde, Brittany M; Minter, Lisa M; Tew, Gregory N

    2016-10-10

    A fundamental understanding of how polymer structure impacts internalization and delivery of biologically relevant cargoes, particularly small interfering ribonucleic acid (siRNA), is of critical importance to the successful design of improved delivery reagents. Herein we report the use of ring-opening metathesis polymerization (ROMP) methods to synthesize two series of guanidinium-rich protein transduction domain mimics (PTDMs): one based on an imide scaffold that contains one guanidinium moiety per repeat unit, and another based on a diester scaffold that contains two guanidinium moieties per repeat unit. By varying both the degree of polymerization and, in effect, the relative number of cationic charges in each PTDM, the performances of the two ROMP backbones for siRNA internalization were evaluated and compared. Internalization of fluorescently labeled siRNA into Jurkat T cells demonstrated that fluorescein isothiocyanate (FITC)-siRNA internalization had a charge content dependence, with PTDMs containing approximately 40 to 60 cationic charges facilitating the most internalization. Despite this charge content dependence, the imide scaffold yielded much lower viabilities in Jurkat T cells than the corresponding diester PTDMs with similar numbers of cationic charges, suggesting that the diester scaffold is preferred for siRNA internalization and delivery applications. These developments will not only improve our understanding of the structural factors necessary for optimal siRNA internalization, but will also guide the future development of optimized PTDMs for siRNA internalization and delivery.

  16. Iron(II) and Iron(III) Spin Crossover: Toward an Optimal Density Functional

    DEFF Research Database (Denmark)

    Siig, Oliver S; Kepp, Kasper P.

    2018-01-01

    Spin crossover (SCO) plays a major role in biochemistry, catalysis, materials, and emerging technologies such as molecular electronics and sensors, and thus accurate prediction and design of SCO systems is of high priority. However, the main tool for this purpose, density functional theory (DFT......), is very sensitive to applied methodology. The most abundant SCO systems are Fe(II) and Fe(III) systems. Even with average good agreement, a functional may be significantly more accurate for Fe(II) or Fe(III) systems, preventing balanced study of SCO candidates of both types. The present work investigates....../precise, inaccurate/imprecise) are observed. More generally, our work illustrates the importance not only of overall accuracy but also of balanced accuracy for systems likely to occur in context....

  17. Optimization of flavanones extraction by modulating differential solvent densities and centrifuge temperatures.

    Science.gov (United States)

    Chebrolu, Kranthi K; Jayaprakasha, G K; Jifon, J; Patil, Bhimanagouda S

    2011-07-15

    Understanding the factors influencing flavanone extraction is critical for proper sample preparation. The present study focused on extraction parameters such as solvent, heat, centrifugal speed, centrifuge temperature, sample-to-solvent ratio, extraction cycles, sonication time, microwave time, and their interactions during sample preparation. Flavanones were analyzed by high performance liquid chromatography (HPLC) and later identified by liquid chromatography-mass spectrometry (LC-MS). The five flavanones were eluted by a binary mobile phase of 0.03% phosphoric acid and acetonitrile in 20 min and detected at 280 nm, then identified by mass spectral analysis. Dimethylsulfoxide (DMSO) and dimethyl formamide (DMF) gave optimum extraction levels of narirutin, naringin, neohesperidin, didymin and poncirin compared with methanol (MeOH), ethanol (EtOH) and acetonitrile (ACN). Centrifuge temperature had a significant effect on flavanone distribution in the extracts. The DMSO and DMF extracts had a homogeneous distribution of flavanones compared with MeOH, EtOH and ACN after centrifugation. Furthermore, ACN showed clear phase separation due to differential densities in the extracts after centrifugation. The number of extraction cycles significantly increased the flavanone levels during extraction. Modulating the sample-to-solvent ratio increased the naringin quantity in the extracts. The current research provides critical information on the role of centrifuge temperature, extraction solvent and their interactions on flavanone distribution in extracts. Published by Elsevier B.V.

  18. High-throughput density functional calculations to optimize properties and interfacial chemistry of piezoelectric materials

    Science.gov (United States)

    Barr, Jordan A.; Lin, Fang-Yin; Ashton, Michael; Hennig, Richard G.; Sinnott, Susan B.

    2018-02-01

    High-throughput density functional theory calculations are conducted to search through 1572 ABO3 compounds to find a potential replacement material for lead zirconate titanate (PZT) that exhibits the same excellent piezoelectric properties as PZT and lacks both its use of the toxic element lead (Pb) and the formation of secondary alloy phases with platinum (Pt) electrodes. The first screening criterion employed a search through the Materials Project database to find A-B combinations that do not form ternary compounds with Pt. The second screening criterion aimed to eliminate potential candidates through first-principles calculations of their electronic structure, in which compounds with a band gap of 0.25 eV or higher were retained. Third, thermodynamic stability calculations were used to compare the candidates in a Pt environment to compounds already calculated to be stable within the Materials Project. Formation energies below or equal to 100 meV/atom were considered to be thermodynamically stable. The fourth screening criterion employed lattice misfit to identify those candidate perovskites that have low misfit with the Pt electrode and high misfit of potential secondary phases that can be formed when Pt alloys with the different A and B components. To aid in the final analysis, dynamic stability calculations were used to determine those perovskites that have dynamic instabilities that favor the ferroelectric distortion. Analysis of the data finds three perovskites warranting further investigation: CsNbO3, RbNbO3, and CsTaO3.
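
    The four-step screening funnel described above can be sketched as a simple Python filter over candidate records. The field names and example entries are hypothetical; the band-gap (≥0.25 eV) and formation-energy (≤100 meV/atom) thresholds are those quoted in the abstract, while the misfit cutoff is an assumed illustration value.

      # Hypothetical candidate records; in practice these would be pulled from a
      # materials database rather than written by hand.
      candidates = [
          {"formula": "CsNbO3", "forms_ternary_with_Pt": False, "band_gap_eV": 2.1,
           "formation_energy_meV_per_atom": 45, "misfit_with_Pt_percent": 1.2},
          {"formula": "XYO3",   "forms_ternary_with_Pt": True,  "band_gap_eV": 0.1,
           "formation_energy_meV_per_atom": 180, "misfit_with_Pt_percent": 6.0},
      ]

      MAX_MISFIT = 3.0   # assumed cutoff (%) for illustration; not stated in the abstract

      def passes_screen(c):
          return (not c["forms_ternary_with_Pt"]                  # 1. no A-B ternary with Pt
                  and c["band_gap_eV"] >= 0.25                    # 2. insulating enough
                  and c["formation_energy_meV_per_atom"] <= 100   # 3. thermodynamically stable
                  and c["misfit_with_Pt_percent"] <= MAX_MISFIT)  # 4. epitaxial compatibility

      survivors = [c["formula"] for c in candidates if passes_screen(c)]
      print(survivors)   # -> ['CsNbO3']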

  19. Development of Optimized Core Design and Analysis Methods for High Power Density BWRs

    Science.gov (United States)

    Shirvan, Koroush

    Increasing the economic competitiveness of nuclear energy is vital to its future. Improving the economics of BWRs is the main goal of this work, which focuses on designing cores with higher power density to reduce the BWR capital cost. Generally, the core power density in BWRs is limited by the thermal critical power of its assemblies, below which heat removal can be accomplished with low fuel and cladding temperatures. The present study investigates both increases in the heat transfer area between the fuel and coolant and changes in operating parameters to achieve higher power levels while meeting the appropriate thermal as well as materials and neutronic constraints. A scoping study is conducted under the constraints of using fuel with cylindrical geometry, traditional materials and enrichments below 5% to enhance its licensability. The reactor vessel diameter is limited to the largest proposed thus far. The BWR with High Power Density (BWR-HD) is found to have a power level of 5000 MWth, equivalent to a 26% uprated ABWR, resulting in 20% cheaper O&M and capital costs. This is achieved by utilizing the same number of assemblies, but with wider 16x16 assemblies and 50% shorter active fuel than that of the ABWR. The fuel rod diameter and pitch are reduced to just over 45% of the ABWR values. Traditional cruciform control rods are used, which restricts the assembly span to less than 1.2 times the current GE14 design due to the limitation on shutdown margin. Thus, it is possible to increase the power density and specific power by 65%, while maintaining the nominal ABWR Minimum Critical Power Ratio (MCPR) margin. The plant systems outside the vessel are assumed to be the same as in the ABWR-II design, utilizing a combination of active and passive safety systems. Safety analyses applied a void reactivity coefficient calculated by SIMULATE-3 for an equilibrium cycle core that showed a 15% less negative coefficient for the BWR-HD compared to the ABWR. The feedwater

  20. Optimization of Region-of-Interest Sampling Strategies for Hepatic MRI Proton Density Fat Fraction Quantification

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.

    2017-01-01

    BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population's mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%; only 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of

  1. Optimization of region-of-interest sampling strategies for hepatic MRI proton density fat fraction quantification.

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B

    2018-04-01

    Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%; only 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance
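
    A small Python sketch of the subset-evaluation idea: for synthetic per-segment PDFF values, every combination of ROIs is compared against the nine-ROI mean via Bland-Altman limits of agreement. The synthetic data and the single agreement metric used here are illustrative assumptions, not the study's statistics (which also used intraclass correlation coefficients).

      import itertools
      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic per-patient PDFF (%) for 9 hepatic segments: a patient-level mean
      # plus segment-to-segment variation (purely illustrative numbers).
      n_patients = 200
      patient_mean = rng.gamma(shape=2.0, scale=5.0, size=n_patients)
      segment_pdff = patient_mean[:, None] + rng.normal(scale=1.0, size=(n_patients, 9))
      nine_roi = segment_pdff.mean(axis=1)

      def limits_of_agreement(subset):
          """Half-width of the Bland-Altman 95% limits of agreement vs the 9-ROI mean."""
          est = segment_pdff[:, list(subset)].mean(axis=1)
          diff = est - nine_roi
          return 1.96 * diff.std(ddof=1)

      for k in (2, 3, 4):
          loas = [limits_of_agreement(c) for c in itertools.combinations(range(9), k)]
          frac_ok = np.mean(np.array(loas) < 1.5)   # fraction with LOA within 1.5 PDFF points
          print(f"{k}-ROI strategies: {frac_ok:.0%} have LOA < 1.5%")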

  2. Optimized effective potential in real time: Problems and prospects in time-dependent density-functional theory

    International Nuclear Information System (INIS)

    Mundt, Michael; Kuemmel, Stephan

    2006-01-01

    The integral equation for the time-dependent optimized effective potential (TDOEP) in time-dependent density-functional theory is transformed into a set of partial differential equations. These equations involve only occupied Kohn-Sham orbitals and the orbital shifts resulting from the difference between the exchange-correlation potential and the orbital-dependent potential. Owing to the success of an analogous scheme in the static case, a scheme that propagates orbitals and orbital shifts in real time is a natural candidate for an exact solution of the TDOEP equation. We investigate the numerical stability of such a scheme. An approximation beyond the Krieger-Li-Iafrate approximation for the time-dependent exchange-correlation potential is analyzed

  3. Re-examining Prostate-specific Antigen (PSA) Density: Defining the Optimal PSA Range and Patients for Using PSA Density to Predict Prostate Cancer Using Extended Template Biopsy.

    Science.gov (United States)

    Jue, Joshua S; Barboza, Marcelo Panizzutti; Prakash, Nachiketh S; Venkatramani, Vivek; Sinha, Varsha R; Pavan, Nicola; Nahar, Bruno; Kanabur, Pratik; Ahdoot, Michael; Dong, Yan; Satyanarayana, Ramgopal; Parekh, Dipen J; Punnen, Sanoj

    2017-07-01

    To compare the predictive accuracy of prostate-specific antigen (PSA) density vs PSA across different PSA ranges and by prior biopsy status in a prospective cohort undergoing prostate biopsy. Men from a prospective trial underwent an extended template biopsy to evaluate for prostate cancer at 26 sites throughout the United States. The area under the receiver operating characteristic curve (AUC) assessed the predictive accuracy of PSA density vs PSA across three PSA ranges (<4, 4-10, and >10 ng/mL). We also investigated the effect of varying the PSA density cutoffs on the detection of cancer and assessed the performance of PSA density vs PSA in men with or without a prior negative biopsy. Among 1290 patients, 585 (45%) and 284 (22%) men had prostate cancer and significant prostate cancer, respectively. PSA density performed better than PSA in detecting any prostate cancer within a PSA of 4-10 ng/mL (AUC: 0.70 vs 0.53) and within a PSA >10 ng/mL (AUC: 0.84 vs 0.65). PSA density was also significantly more predictive than PSA in detecting any prostate cancer in men without a prior negative biopsy (AUC: 0.73 vs 0.67). As PSA increases, PSA density becomes a better marker for predicting prostate cancer compared with PSA alone. Additionally, PSA density performed better than PSA in men with a prior negative biopsy. Copyright © 2017 Elsevier Inc. All rights reserved.
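
    As background, PSA density is serum PSA divided by prostate volume (ng/mL per mL), and an AUC comparison of the two markers can be illustrated with a rank-based (Mann-Whitney) AUC on a tiny hypothetical cohort; the numbers below are invented for illustration and are not the study's data.

      import numpy as np

      def auc(scores, labels):
          """Rank-based AUC, equivalent to the Mann-Whitney U statistic."""
          order = np.argsort(scores)
          ranks = np.empty(len(scores), dtype=float)
          ranks[order] = np.arange(1, len(scores) + 1)
          pos = np.asarray(labels, dtype=bool)
          n_pos, n_neg = pos.sum(), (~pos).sum()
          return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

      # Tiny hypothetical cohort: PSA (ng/mL), prostate volume (mL), biopsy outcome.
      psa    = np.array([5.1, 6.3, 4.4, 8.0, 9.2, 7.5, 5.8, 6.7])
      volume = np.array([60., 35., 55., 30., 40., 70., 65., 28.])
      cancer = np.array([0,   1,   0,   1,   1,   0,   0,   1  ])

      psa_density = psa / volume          # ng/mL per mL of prostate tissue

      print("AUC, PSA alone  :", round(auc(psa, cancer), 2))
      print("AUC, PSA density:", round(auc(psa_density, cancer), 2))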

  4. Application of support vector regression for optimization of vibration flow field of high-density polyethylene melts characterized by small angle light scattering

    Science.gov (United States)

    Xian, Guangming

    2018-03-01

    In this paper, the vibration flow field parameters of polymer melts in a visual slit die are optimized using an intelligent algorithm. Experimental small angle light scattering (SALS) patterns are shown to characterize the processing. In order to capture the scattered light, a polarizer and an analyzer are placed before and after the polymer melt. The results reported in this study are obtained using high-density polyethylene (HDPE) at a rotation speed of 28 rpm. In addition, the support vector regression (SVR) analytical method is introduced for optimizing the parameters of the vibration flow field. This work establishes the general applicability of SVR for predicting the optimal parameters of the vibration flow field.

  5. Correction: General optimization procedure towards the design of a new family of minimal parameter spin-component-scaled double-hybrid density functional theory.

    Science.gov (United States)

    Roch, Loïc M; Baldridge, Kim K

    2018-02-07

    Correction for 'General optimization procedure towards the design of a new family of minimal parameter spin-component-scaled double-hybrid density functional theory' by Loïc M. Roch and Kim K. Baldridge, Phys. Chem. Chem. Phys., 2017, 19, 26191-26200.

  6. Optimized design of a high-power-density PM-assisted synchronous reluctance machine with ferrite magnets for electric vehicles

    Directory of Open Access Journals (Sweden)

    Liu Xiping

    2017-06-01

    Full Text Available This paper proposes a permanent-magnet-assisted synchronous reluctance machine (PMASynRM) using ferrite magnets with the same power density as the rare-earth PM synchronous motors employed in the Toyota Prius 2010. A suitable rotor structure for high torque density and high power density is discussed with respect to the demagnetization of ferrite magnets, mechanical strength and torque ripple. Electromagnetic characteristics including torque, output power, loss and efficiency are calculated by 2-D finite element analysis (FEA). The analysis results show that high power density and high efficiency are obtained for the PMASynRM using ferrite magnets.

  7. Optimization of Methods for Articular Cartilage Surface Tissue Engineering: Cell Density and Transforming Growth Factor Beta Are Critical for Self-Assembly and Lubricin Secretion.

    Science.gov (United States)

    Iwasa, Kenjiro; Reddi, A Hari

    2017-07-01

    Lubricin/superficial zone protein (SZP)/proteoglycan 4 (PRG4) plays an important role in boundary lubrication in articular cartilage. Lubricin is secreted by superficial zone chondrocytes and by synoviocytes of the synovium. The specific objective of this investigation was to optimize methods for tissue engineering of the articular cartilage surface. The aim of this study was to investigate the effect of cell density on the self-assembly of superficial zone chondrocytes, using lubricin secretion as a functional assessment. Superficial zone chondrocytes were cultivated as a monolayer at low, medium, and high densities. Chondrocytes at the three different densities were treated with transforming growth factor beta (TGF-β)1 twice a week or daily, and the accumulated lubricin in the culture medium was analyzed by immunoblots and quantitated by enzyme-linked immunosorbent assay (ELISA). Cell numbers at low and medium densities were increased by TGF-β1, whereas cell numbers in high-density cultures were decreased by twice-a-week treatment with TGF-β1. On the other hand, cell numbers were maintained by daily TGF-β treatment. Immunoblots and quantitation of lubricin by ELISA indicated that TGF-β1 stimulated lubricin secretion by superficial zone chondrocytes at all densities with twice-a-week treatment. It is noteworthy that daily treatment with TGF-β1 increased lubricin secretion much more than twice-a-week treatment. These data demonstrate that daily treatment is optimal for the TGF-β1 response in higher-density monolayer cultures. These findings have implications for the self-assembly of surface zone chondrocytes of articular cartilage for application in tissue engineering of the articular cartilage surface.

  8. Complaint-adaptive power density optimization as a tool for HTP-guided steering in deep hyperthermia treatment of pelvic tumors

    International Nuclear Information System (INIS)

    Canters, R A M; Franckena, M; Zee, J van der; Rhoon, G C van

    2008-01-01

    For efficient clinical use of HTP (hyperthermia treatment planning), optimization methods are needed. In this study, a complaint-adaptive PD (power density) optimization is developed and tested as a tool for HTP-guided steering in deep hyperthermia of pelvic tumors. The PD distribution in patients is predicted using FE models. Two goal functions, Opt1 and Opt2, are applied to optimize PD distributions. Optimization consists of three steps: an initial optimization, adaptive optimization after a first complaint, and increasing the weight of a region after recurring complaints. Opt1 initially considers only the target PD, whereas Opt2 also takes hot spots into account. After patient complaints, though, both limit the PD in the complaint region. Opt1 and Opt2 are evaluated in a phantom test, in patient models and during hyperthermia treatment. The phantom test and a sensitivity study in ten patient models show that HTP-guided steering is most effective in peripheral complaint regions. Clinical evaluation in two groups of five patients shows that the time between complaints is longer using Opt2 (p = 0.007). However, this does not lead to significantly different temperatures (T50 of 40.3 °C for Opt1 versus 40.1 °C for Opt2; p = 0.898). HTP-guided steering is feasible in terms of PD reduction in complaint regions and in time consumption. Opt2 is preferable for future use because of its better complaint reduction and control.
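
    A schematic of the complaint-adaptive weighting idea in Python: the optimizer maximizes the ratio of target power to weighted hot-spot power, and the weight of a region is raised whenever a complaint is reported there, after which the settings are re-optimized. The quadratic power models, the random-search optimizer and the weight-update rule are illustrative assumptions, not the clinical implementation.

      import numpy as np

      rng = np.random.default_rng(5)

      n_regions = 6                     # potential hot-spot regions (illustrative)
      weights = np.ones(n_regions)      # all regions start equally weighted

      def objective(amplitudes, pd_target, pd_regions, w):
          """Ratio of target power density to weighted hot-spot power density."""
          p_target = amplitudes @ pd_target @ amplitudes
          p_hot = sum(w[i] * (amplitudes @ pd_regions[i] @ amplitudes) for i in range(len(w)))
          return p_target / p_hot

      def optimize(pd_target, pd_regions, w, n_iter=2000, step=0.05):
          """Crude random-search optimizer over normalized antenna amplitudes."""
          a = np.ones(pd_target.shape[0]) / np.sqrt(pd_target.shape[0])
          best = objective(a, pd_target, pd_regions, w)
          for _ in range(n_iter):
              trial = a + rng.normal(scale=step, size=a.shape)
              trial /= np.linalg.norm(trial)
              val = objective(trial, pd_target, pd_regions, w)
              if val > best:
                  a, best = trial, val
          return a, best

      # Toy quadratic power-density models for 4 antennas (symmetric PSD matrices).
      def random_psd(n):
          m = rng.normal(size=(n, n))
          return m @ m.T

      pd_target = random_psd(4)
      pd_regions = [random_psd(4) for _ in range(n_regions)]

      a, _ = optimize(pd_target, pd_regions, weights)
      # A complaint in region 2 raises its weight, and the settings are re-optimized
      # so that less power is deposited there.
      weights[2] *= 4.0
      a, _ = optimize(pd_target, pd_regions, weights)
      print("re-optimized amplitudes:", np.round(a, 3))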

  9. Bone marrow-derived cells for cardiovascular cell therapy: an optimized GMP method based on low-density gradient improves cell purity and function.

    Science.gov (United States)

    Radrizzani, Marina; Lo Cicero, Viviana; Soncin, Sabrina; Bolis, Sara; Sürder, Daniel; Torre, Tiziano; Siclari, Francesco; Moccetti, Tiziano; Vassalli, Giuseppe; Turchetto, Lucia

    2014-09-27

    Cardiovascular cell therapy represents a promising field, with several approaches currently being tested. The advanced therapy medicinal product (ATMP) for the ongoing METHOD clinical study ("Bone marrow derived cell therapy in the stable phase of chronic ischemic heart disease") consists of fresh mononuclear cells (MNC) isolated from autologous bone marrow (BM) through density gradient centrifugation on standard Ficoll-Paque. Cells are tested for safety (sterility, endotoxin), identity/potency (cell count, CD45/CD34/CD133, viability) and purity (contaminant granulocytes and platelets). BM-MNC were isolated by density gradient centrifugation on Ficoll-Paque. The following process parameters were optimized throughout the study: gradient medium density; gradient centrifugation speed and duration; washing conditions. A new manufacturing method was set up, based on gradient centrifugation on low-density Ficoll-Paque, followed by two washing steps, the second of which is performed at low speed. It led to significantly higher removal of contaminant granulocytes and platelets, improving product purity; the frequencies of CD34+ cells, CD133+ cells and functional hematopoietic and mesenchymal precursors were significantly increased.

  10. Factors affecting optimal linear endovenous energy density for endovenous laser ablation in incompetent lower limb truncal veins - A review of the clinical evidence.

    Science.gov (United States)

    Cowpland, Christine A; Cleese, Amy L; Whiteley, Mark S

    2017-06-01

    Objectives: To identify the factors that affect the optimal linear endovenous energy density (LEED) required to ablate incompetent truncal veins. Methods: We performed a literature review of clinical studies that reported truncal vein ablation rates and LEED. A PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram documents the search strategy. We analysed 13 clinical papers that fulfilled the criteria for comparing great saphenous vein occlusion, as defined by venous duplex ultrasound, with the LEED used in the treatment. Results: Evidence suggests that the optimal LEED for endovenous laser ablation of the great saphenous vein is >80 J/cm, and that lasers whose wavelengths are preferentially absorbed by water might have a lower optimal LEED. Based on current evidence for shorter wavelength lasers, the optimal LEED appears to lie above 80 J/cm and below 95 J/cm. There is evidence that longer wavelength lasers may be effective at LEEDs of <85 J/cm.

  11. Study of optimal X-ray exposure conditions in consideration of bone mineral density. Relation between bone mineral density and image contrast

    International Nuclear Information System (INIS)

    Kondo, Yuji

    2003-01-01

    Bone mineral density (BMD) increases through infancy and adolescence, reaching a maximum at 20-30 years of age. Thereafter, BMD gradually decreases with age in both sexes. The image contrast of radiographs of bone varies with the change in BMD owing to changes in the X-ray absorption of bone. The image contrast of bone is generally higher in young adults than in older adults. To examine the relation between BMD and image visibility, we carried out the following experiments. We measured the image contrast of radiographs of a lumbar vertebra phantom in which the BMD was equivalent to the average BMD for each developmental period. We then examined image visibility at various levels of image contrast using the Howlett chart. The results indicated that differences in BMD affect the image contrast of radiographs and, consequently, image visibility. It was also found that image visibility in young adults is higher than that in older adults. The findings showed that, in digital radiography of young adults with high BMD, X-ray exposure can be decreased according to the ratio of improvement in image visibility. (author)

  12. Correlation Analysis of Rainstorm Runoff and Density Current in a Canyon-Shaped Source Water Reservoir: Implications for Reservoir Optimal Operation

    Directory of Open Access Journals (Sweden)

    Yang Li

    2018-04-01

    Full Text Available Extreme weather has recently become frequent. Heavy rainfall forms storm runoff, which is usually very turbid and contains a high concentration of organic matter, thereby affecting water quality when it enters reservoirs. The large canyon-shaped Heihe Reservoir is the most important raw water source for the city of Xi'an. During the flood season, storm runoff flows into the reservoir as a density current. We determined the relationship among inflow peak discharge (Q), suspended sediment concentration, inflow water temperature, and undercurrent water density. The relationship between Q and the inflow suspended sediment concentration (CS0) could be described by the equation CS0 = 0.3899 × e^(0.0025Q); that between CS0 and the suspended sediment concentration at the entrance of the main reservoir area S1 (CS1) was CS1 = 0.0346 × e^(0.2335·CS0); and air temperature (Ta) and inflow water temperature (Tw), based on the meteorological data, were related as Tw = 0.7718 × Ta + 1.0979. We then calculated the density of the undercurrent layer. By comparison with the vertical water density distribution at S1 before rainfall, the undercurrent elevation was determined based on the principle of equivalent density inflow. Based on our results, we proposed schemes for optimizing water intake selection and flood discharge during the flood season.
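
    A small Python helper chaining the fitted relations quoted above to estimate the sediment concentration at the main reservoir entrance and the inflow water temperature from a forecast peak discharge and air temperature. The input values are hypothetical and the sketch only illustrates how the regressions combine; it is not an operational tool.

      import math

      def inflow_sediment(Q):
          """CS0 from inflow peak discharge Q, per the fitted relation CS0 = 0.3899*e^(0.0025*Q)."""
          return 0.3899 * math.exp(0.0025 * Q)

      def entrance_sediment(CS0):
          """CS1 at the main reservoir entrance, per CS1 = 0.0346*e^(0.2335*CS0)."""
          return 0.0346 * math.exp(0.2335 * CS0)

      def inflow_water_temperature(Ta):
          """Tw estimated from air temperature Ta, per Tw = 0.7718*Ta + 1.0979."""
          return 0.7718 * Ta + 1.0979

      Q, Ta = 800.0, 22.0      # hypothetical forecast peak discharge and air temperature
      cs0 = inflow_sediment(Q)
      cs1 = entrance_sediment(cs0)
      tw = inflow_water_temperature(Ta)
      print(f"CS0 = {cs0:.2f}, CS1 = {cs1:.3f}, Tw = {tw:.1f} °C")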

  13. Optimizing Power Density and Efficiency of a Double-Halbach Array Permanent-Magnet Ironless Axial-Flux Motor

    Science.gov (United States)

    Duffy, Kirsten P.

    2016-01-01

    NASA Glenn Research Center is investigating hybrid electric and turboelectric propulsion concepts for future aircraft to reduce fuel burn, emissions, and noise. Systems studies show that the weight and efficiency of the electric system components need to be improved for this concept to be feasible. This effort aims to identify design parameters that affect power density and efficiency for a double-Halbach array permanent-magnet ironless axial flux motor configuration. These parameters include both geometrical and higher-order parameters, including pole count, rotor speed, current density, and geometries of the magnets, windings, and air gap.

  14. Optimization of Nitrogen Rate and Planting Density for Improving Yield, Nitrogen Use Efficiency, and Lodging Resistance in Oilseed Rape

    OpenAIRE

    Khan, Shahbaz; Anwar, Sumera; Kuai, Jie; Ullah, Sana; Fahad, Shah; Zhou, Guangsheng

    2017-01-01

    Yield and lodging related traits are essential for improving rapeseed production. The objective of the present study was to investigate the influence of plant density (D) and nitrogen (N) rates on morphological and physiological traits related to yield and lodging in rapeseed. We evaluated Huayouza 9 for two consecutive growing seasons (2014–2016) under three plant densities (LD, 10 plants m−2; MD, 30 plants m−2; HD, 60 plants m−2) and four N rates (0, 60, 120, and 180 kg ha−1). Experiment wa...

  15. General optimization procedure towards the design of a new family of minimal parameter spin-component-scaled double-hybrid density functional theory.

    Science.gov (United States)

    Roch, Loïc M; Baldridge, Kim K

    2017-10-04

    A general optimization procedure towards the development and implementation of a new family of minimal parameter spin-component-scaled double-hybrid (mSD) density functional theory (DFT) is presented. The nature of the proposed exchange-correlation functional establishes a methodology with minimal empiricism. This new family of double-hybrid (DH) density functionals is demonstrated using the PBEPBE functional, illustrating the optimization procedure for the mSD-PBEPBE method, with performance characteristics shown for a set of non-covalent complexes covering a broad regime of weak interactions. With only two parameters, mSD-PBEPBE and its cost-effective counterpart, RI-mSD-PBEPBE, show a mean absolute error of ca. 0.4 kcal mol^-1 averaged over 66 weakly interacting systems. Following a successive 2D-grid refinement for a CBS extrapolation of the coefficients, the optimization procedure can be recommended for the design and implementation of a variety of additional DH methods using any of the plethora of currently available functionals.

  16. Optimizing hill seeding density for high-yielding hybrid rice in a single rice cropping system in South China.

    Directory of Open Access Journals (Sweden)

    Danying Wang

    Full Text Available Mechanical hill direct seeding of hybrid rice could be the way to solve the problems of high seeding rates and uneven plant establishment now faced in direct seeded rice; however, it is not clear what the optimum hill seeding density should be for high-yielding hybrid rice in the single-season rice production system. Experiments were conducted in 2010 and 2011 to determine the effects of hill seeding density (25 cm×15 cm, 25 cm×17 cm, 25 cm×19 cm, 25 cm×21 cm, and 25 cm×23 cm; three to five seeds per hill) on plant growth and grain yield of a hybrid variety, Nei2you6, in two fields with different fertility (soil fertility 1 and 2). In addition, in 2012 and 2013, comparisons among mechanical hill seeding, broadcasting, and transplanting were conducted with three hybrid varieties to evaluate the optimum seeding density. With increases in seeding spacing from 25 cm×15 cm to 25 cm×23 cm, productive tillers per hill increased by 34.2% and 50.0% in soil fertility 1 and 2. Panicles per m2 declined with increases in seeding spacing in soil fertility 1. In soil fertility 2, no difference in panicles per m2 was found at spacings ranging from 25 cm×17 cm to 25 cm×23 cm, while decreases in the area of the top three leaves and in aboveground dry weight per shoot at flowering were observed. Grain yield was maximal at 25 cm×17 cm spacing in both soil fertility fields. Our results suggest that a seeding density of 25 cm×17 cm was suitable for high-yielding hybrid rice. These results were verified through on-farm demonstration experiments, in which mechanical hill-seeded rice at this density had equal or higher grain yield than transplanted rice.

  17. Optimizing Use of Girdled Ash Trees for Management of Low-Density Emerald Ash Borer (Coleoptera: Buprestidae) Populations.

    Science.gov (United States)

    Siegert, Nathan W; McCullough, Deborah G; Poland, Therese M; Heyd, Robert L

    2017-06-01

    Effective survey methods to detect and monitor recently established, low-density infestations of emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), remain a high priority because they provide land managers and property owners with time to implement tactics to slow emerald ash borer population growth and the progression of ash mortality. We evaluated options for using girdled ash (Fraxinus spp.) trees for emerald ash borer detection and management in a low-density infestation in a forested area with abundant green ash (F. pennsylvanica). Across replicated 4-ha plots, we compared detection efficiency of 4 versus 16 evenly distributed girdled ash trees and between clusters of 3 versus 12 girdled trees. We also examined within-tree larval distribution in 208 girdled and nongirdled trees and assessed adult emerald ash borer emergence from detection trees felled 11 mo after girdling and left on site. Overall, current-year larvae were present in 85-97% of girdled trees and 57-72% of nongirdled trees, and larval density was 2-5 times greater on girdled than nongirdled trees. Low-density emerald ash borer infestations were readily detected with four girdled trees per 4-ha, and 3-tree clusters were as effective as 12-tree clusters. Larval densities were greatest 0.5 ± 0.4 m below the base of the canopy in girdled trees and 1.3 ± 0.7 m above the canopy base in nongirdled trees. Relatively few adult emerald ash borer emerged from trees felled 11 mo after girdling and left on site through the following summer, suggesting removal or destruction of girdled ash trees may be unnecessary. This could potentially reduce survey costs, particularly in forested areas with poor accessibility. Published by Oxford University Press on behalf of Entomological Society of America 2017. This work is written by US Government employees and is in the public domain in the US.

  18. Optimization of Nitrogen Rate and Planting Density for Improving Yield, Nitrogen Use Efficiency, and Lodging Resistance in Oilseed Rape

    Directory of Open Access Journals (Sweden)

    Shahbaz Khan

    2017-05-01

    Full Text Available Yield and lodging related traits are essential for improving rapeseed production. The objective of the present study was to investigate the influence of plant density (D) and nitrogen (N) rates on morphological and physiological traits related to yield and lodging in rapeseed. We evaluated Huayouza 9 for two consecutive growing seasons (2014–2016) under three plant densities (LD, 10 plants m−2; MD, 30 plants m−2; HD, 60 plants m−2) and four N rates (0, 60, 120, and 180 kg ha−1). The experiment was laid out in a split plot design using density as the main factor and N as the sub-plot factor, with three replications each. Seed yield was increased by increasing density and N rate, reaching a peak at HD with 180 kg N ha−1. The effect of N rate was consistently positive in increasing the plant height, pod area index, 1,000 seed weight, shoot and root dry weights, and root neck diameter, reaching a peak at 180 kg N ha−1. Plant height was decreased by increasing D, whereas the maximum radiation interception (~80%) and net photosynthetic rate were recorded at MD at the highest N. Lodging resistance and nitrogen use efficiency significantly increased with increasing D from 10 to 30 plants m−2 and N rate up to 120 kg ha−1; further increases of D and N decreased lodging resistance and NUE. Hence, our study implies that a planting density of 30 plants m−2 can improve yield and nitrogen use efficiency, and enhance lodging resistance by improving the crop canopy.

  19. Optimization of Packing Density of M30 Concrete With Steel Slag As Coarse Aggregate Using Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Arivoli M.

    2017-09-01

    Full Text Available Concrete plays a vital role in the design and construction of infrastructure. To meet the global demand for concrete in the future, finding suitable alternatives to natural aggregates is becoming a challenging task. Steel slag is a by-product of the steel making process. The steel slag aggregates were characterized by studying particle size and shape, physical and chemical properties, and mechanical properties as per IS: 2386-1963. The characterization study reveals the better performance of steel slag aggregate over natural coarse aggregate. An M30 grade of concrete is designed in which natural coarse aggregate is completely replaced by steel slag aggregate. The packing density of aggregates affects the characteristics of concrete. The present paper proposes a fuzzy system for concrete mix proportioning that increases the packing density. The proposed fuzzy system has four fuzzy subsystems to arrive at the compressive strength, water-cement ratio, ideal grading curve and free water content for concrete mix proportioning. The results show that the concrete mix proportion given by the fuzzy model agrees with the IS method. The comparison of results shows that, for both the proposed fuzzy system and the IS method, there is a remarkable increase in compressive strength and bulk density with an increasing percentage replacement of steel slag.

  20. Coarse-grained models using local-density potentials optimized with the relative entropy: Application to implicit solvation

    International Nuclear Information System (INIS)

    Sanyal, Tanmoy; Shell, M. Scott

    2016-01-01

    Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
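    A hedged sketch of the functional form such local-density potentials typically take (the symbols below are assumed for illustration; see the paper for the exact parameterization used with the relative entropy) is

    \[
    U_{\mathrm{LD}} = \sum_{i} F\!\big(\bar{\rho}_{i}\big),
    \qquad
    \bar{\rho}_{i} = \sum_{j \neq i} \varphi\big(r_{ij}\big),
    \]

    where \(\varphi(r)\) is a smooth, short-ranged weighting function that counts the neighbors of CG site \(i\) within a cutoff, and \(F\) is an "embedding" function (e.g., a spline) whose parameters are obtained by minimizing the relative entropy between the all-atom ensemble mapped to CG coordinates and the CG model ensemble.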

  1. Optimization of Er-density profile for efficient pumping and high signal gain in Erbium-doped fiber amplifiers

    International Nuclear Information System (INIS)

    Arzi, E.; Hassani, A.; Esmaili Seraji, F.

    2000-01-01

    Recently, the Erbium-Doped Fiber Amplifier has been shown to have great potential in fiber-optic communication. A model is suggested for calculating the Er-density profile, using the propagation and rate equations of a homogeneous two-level laser medium in an Erbium-Doped Fiber Amplifier, such that efficient pumping and high signal gain are achieved for different fiber waveguide structures. The result of this numerical calculation shows that the gain is higher by a factor of 3.5 compared with the gain of the existing Erbium-Doped Fiber Amplifier. This model is applicable to all active waveguides and any other dopant as well
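    For orientation, a generic homogeneous two-level EDFA model of the type referred to above couples rate and propagation equations schematically as follows (notation assumed here; amplified spontaneous emission and excited-state absorption are neglected in this sketch):

    \[
    \frac{dP_{p}}{dz} = -\Gamma_{p}\,\sigma_{ap}\,N_{1}(z)\,P_{p}(z),
    \qquad
    \frac{dP_{s}}{dz} = \Gamma_{s}\big[\sigma_{es}\,N_{2}(z) - \sigma_{as}\,N_{1}(z)\big]\,P_{s}(z),
    \qquad
    N_{1}(z) + N_{2}(z) = N_{\mathrm{Er}}(z),
    \]

    where \(P_{p}\) and \(P_{s}\) are the pump and signal powers, \(\Gamma_{p,s}\) the overlap factors of the pump and signal modes with the doped region, \(\sigma\) the absorption/emission cross sections, and \(N_{\mathrm{Er}}(z)\) the erbium density profile being optimized; the steady-state population \(N_{2}\) follows from balancing pump absorption, stimulated emission, and spontaneous decay.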

  2. Cost-based optimizations of power density and target-blanket modularity for 232Th/233U-based ADEP

    International Nuclear Information System (INIS)

    Krakowski, R.A.

    1995-01-01

    A cost-based parametric systems model is developed for an Accelerator-Driven Energy Production (ADEP) system based on a 232Th/233U fuel cycle and a molten-salt (LiF/BeF2/ThF3) fluid-fuel primary system. Simplified neutron-balance, accelerator, reactor-core, chemical-processing, and balance-of-plant models are combined parametrically with a simplified costing model. The main focus of this model is to examine trade-offs related to fission power density, reactor-core modularity, 233U breeding rate, and fission product transmutation capacity

  3. An optimally weighted estimator of the linear power spectrum disentangling the growth of density perturbations across galaxy surveys

    International Nuclear Information System (INIS)

    Sorini, D.

    2017-01-01

    Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of data. In particular, signals coming from different distances encode information from different epochs. This is known as ''light-cone effect'' and is going to have a higher impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method by Feldman, Kaiser and Peacock (1994) [1], I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking into account the light-cone effect. An analytic expression for the estimator is provided, and that is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters [2]. I show that the estimator presented recovers the fiducial linear power spectrum at present time within 5% accuracy up to k ∼ 0.80 h Mpc−1 and within 10% up to k ∼ 0.94 h Mpc−1, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of the data from future large-scale surveys, like Euclid.
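    For context, the classic FKP weighting that this estimator generalizes assigns a mode (or galaxy) at position \(\mathbf{r}\) the minimum-variance weight

    \[
    w_{\mathrm{FKP}}(\mathbf{r}) = \frac{1}{1 + \bar{n}(\mathbf{r})\,P_{0}},
    \]

    where \(\bar{n}(\mathbf{r})\) is the expected mean number density of the survey and \(P_{0}\) a fiducial power-spectrum amplitude; the estimator discussed above additionally accounts for the epoch-dependent growth of the density perturbations along the light cone.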

  4. Code Optimization, Frozen Glassy Phase and Improved Decoding Algorithms for Low-Density Parity-Check Codes

    International Nuclear Information System (INIS)

    Huang Hai-Ping

    2015-01-01

    The statistical physics properties of low-density parity-check codes for the binary symmetric channel are investigated as a spin glass problem with multi-spin interactions and quenched random fields by the cavity method. By evaluating the entropy function at the Nishimori temperature, we find that irregular constructions with heterogeneous degree distribution of check (bit) nodes have higher decoding thresholds compared to regular counterparts with homogeneous degree distribution. We also show that the instability of the mean-field calculation takes place only after the entropy crisis, suggesting the presence of a frozen glassy phase at low temperatures. When no prior knowledge of channel noise is assumed (searching for the ground state), we find that a reinforced strategy on normal belief propagation will boost the decoding threshold to a higher value than the normal belief propagation. This value is close to the dynamical transition where all local search heuristics fail to identify the true message (codeword or the ferromagnetic state). After the dynamical transition, the number of metastable states with larger energy density (than the ferromagnetic state) becomes exponentially numerous. When the noise level of the transmission channel approaches the static transition point, there starts to exist exponentially numerous codewords sharing the identical ferromagnetic energy. (condensed matter: electronic structure, electrical, magnetic, and optical properties)
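    For reference, the standard sum-product (belief propagation) updates on the Tanner graph, which the reinforced strategy builds on, can be written in log-likelihood-ratio form as (notation assumed here)

    \[
    h_{i \to a} = h_{i}^{\mathrm{ext}} + \sum_{b \in \partial i \setminus a} u_{b \to i},
    \qquad
    \tanh\!\Big(\frac{u_{a \to i}}{2}\Big) = \prod_{j \in \partial a \setminus i} \tanh\!\Big(\frac{h_{j \to a}}{2}\Big),
    \]

    where \(h_{i}^{\mathrm{ext}}\) is the channel log-likelihood ratio of bit \(i\) and \(\partial i\) (\(\partial a\)) denotes the checks (bits) attached to bit \(i\) (check \(a\)); reinforcement adds to \(h_{i}^{\mathrm{ext}}\) a slowly growing external field proportional to the current local marginal, which biases the iteration toward a fixed point and raises the decoding threshold as described above.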

  5. Optimal density assignment to 2D diode array detector for different dose calculation algorithms in patient specific VMAT QA

    International Nuclear Information System (INIS)

    Park, So Yeon; Park, Jong Min; Choi, Chang Heon; Chun, MinSoo; Han, Ji Hye; Cho, Jin Dong; Kim, Jung In

    2017-01-01

    The purpose of this study is to assign an appropriate density to a virtual phantom for a 2D diode array detector with different dose calculation algorithms, to guarantee the accuracy of patient-specific QA. Ten VMAT plans with a 6 MV photon beam and ten VMAT plans with a 15 MV photon beam were selected retrospectively. The computed tomography (CT) images of MapCHECK2 with MapPHAN were acquired to design the virtual phantom images. For all plans, dose distributions were calculated for the virtual phantoms with four different materials by the AAA and AXB algorithms. The four materials were polystyrene, 455 HU, Jursinic phantom, and PVC. Passing rates for several gamma criteria were calculated by comparing the measured dose distribution with the calculated dose distributions of the four materials. For validation of AXB modeling in the clinic, the mean percentages of agreement for dose difference criteria of 1.0% and 2.0% for 6 MV were 97.2%±2.3% and 99.4%±1.1%, respectively, while those for 15 MV were 98.5%±0.85% and 99.8%±0.2%, respectively. In the case of 2%/2 mm, all mean passing rates were more than 96.0% and 97.2% for 6 MV and 15 MV, respectively, regardless of the virtual phantom material and dose calculation algorithm. The passing rates for all criteria slightly increased for AXB as well as AAA when using 455 HU rather than polystyrene. The virtual phantom assigned a value of 455 HU showed high passing rates for all gamma criteria. To guarantee the accuracy of patient-specific VMAT QA, each institution should fine-tune the mass density or HU values of this device

  6. Optimal density assignment to 2D diode array detector for different dose calculation algorithms in patient specific VMAT QA

    Energy Technology Data Exchange (ETDEWEB)

    Park, So Yeon; Park, Jong Min; Choi, Chang Heon; Chun, MinSoo; Han, Ji Hye; Cho, Jin Dong; Kim, Jung In [Dept. of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of)

    2017-03-15

    The purpose of this study is to assign an appropriate density to a virtual phantom for a 2D diode array detector with different dose calculation algorithms, to guarantee the accuracy of patient-specific QA. Ten VMAT plans with a 6 MV photon beam and ten VMAT plans with a 15 MV photon beam were selected retrospectively. The computed tomography (CT) images of MapCHECK2 with MapPHAN were acquired to design the virtual phantom images. For all plans, dose distributions were calculated for the virtual phantoms with four different materials by the AAA and AXB algorithms. The four materials were polystyrene, 455 HU, Jursinic phantom, and PVC. Passing rates for several gamma criteria were calculated by comparing the measured dose distribution with the calculated dose distributions of the four materials. For validation of AXB modeling in the clinic, the mean percentages of agreement for dose difference criteria of 1.0% and 2.0% for 6 MV were 97.2%±2.3% and 99.4%±1.1%, respectively, while those for 15 MV were 98.5%±0.85% and 99.8%±0.2%, respectively. In the case of 2%/2 mm, all mean passing rates were more than 96.0% and 97.2% for 6 MV and 15 MV, respectively, regardless of the virtual phantom material and dose calculation algorithm. The passing rates for all criteria slightly increased for AXB as well as AAA when using 455 HU rather than polystyrene. The virtual phantom assigned a value of 455 HU showed high passing rates for all gamma criteria. To guarantee the accuracy of patient-specific VMAT QA, each institution should fine-tune the mass density or HU values of this device.

  7. Optimal III-nitride HEMTs: from materials and device design to compact model of the 2DEG charge density

    Science.gov (United States)

    Li, Kexin; Rakheja, Shaloo

    2017-02-01

    In this paper, we develop a physically motivated compact model of the charge-voltage (Q-V) characteristics in various III-nitride high-electron mobility transistors (HEMTs) operating under highly non-equilibrium transport conditions, i.e. high drain-source current. By solving the coupled Schrödinger-Poisson equation and incorporating the two-dimensional electrostatics in the channel, we obtain the charge at the top-of-the-barrier for various applied terminal voltages. The Q-V model accounts for cutting off of the negative momenta states from the drain terminal under high drain-source bias and when the transmission in the channel is quasi-ballistic. We specifically focus on AlGaN and AlInN as barrier materials and InGaN and GaN as the channel material in the heterostructure. The Q-V model is verified and calibrated against numerical results using the commercial TCAD simulator Sentaurus from Synopsys for a 20-nm channel length III-nitride HEMT. With 10 fitting parameters, most of which have a physical origin and can easily be obtained from numerical or experimental calibration, the compact Q-V model allows us to study the limits and opportunities of III-nitride technology. We also identify optimal material and geometrical parameters of the device that maximize the carrier concentration in the HEMT channel in order to achieve superior RF performance. Additionally, the compact charge model can be easily integrated in a hierarchical circuit simulator, such as Keysight ADS and CADENCE, to facilitate circuit design and optimization of various technology parameters.

  8. Comprehensive performance analyses and optimization of the irreversible thermodynamic cycle engines (TCE) under maximum power (MP) and maximum power density (MPD) conditions

    International Nuclear Information System (INIS)

    Gonca, Guven; Sahin, Bahri; Ust, Yasin; Parlak, Adnan

    2015-01-01

    This paper presents comprehensive performance analyses and comparisons for air-standard irreversible thermodynamic cycle engines (TCE) based on the power output, power density, thermal efficiency, maximum dimensionless power output (MP), maximum dimensionless power density (MPD) and maximum thermal efficiency (MEF) criteria. Internal irreversibility of the cycles, occurring during the irreversible adiabatic processes, is considered by using isentropic efficiencies of the compression and expansion processes. The performances of the cycles are obtained by using engine design parameters such as isentropic temperature ratio of the compression process, pressure ratio, stroke ratio, cut-off ratio, Miller cycle ratio, exhaust temperature ratio, cycle temperature ratio and cycle pressure ratio. The effects of engine design parameters on the maximum and optimal performances are investigated. - Highlights: • Performance analyses are conducted for irreversible thermodynamic cycle engines. • Comprehensive computations are performed. • Maximum and optimum performances of the engines are shown. • The effects of design parameters on performance and power density are examined. • The results obtained may serve as guidelines for engine designers

  9. Analytic energy gradients for orbital-optimized MP3 and MP2.5 with the density-fitting approximation: An efficient implementation.

    Science.gov (United States)

    Bozkaya, Uğur

    2018-03-15

    Efficient implementations of analytic gradients for the orbital-optimized MP3 and MP2.5 methods and their standard versions with the density-fitting approximation, which are denoted as DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5, are presented. The DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5 methods are applied to a set of alkanes and noncovalent interaction complexes to compare the computational cost with the conventional MP3, MP2.5, OMP3, and OMP2.5. Our results demonstrate that the density-fitted perturbation theory (DF-MP) methods considered substantially reduce the computational cost compared to conventional MP methods. The efficiency of our DF-MP methods arises from the reduced input/output (I/O) time and the acceleration of gradient-related terms, such as computations of particle density and generalized Fock matrices (PDMs and GFM), solution of the Z-vector equation, back-transformations of PDMs and GFM, and evaluation of analytic gradients in the atomic orbital basis. Further, application results show that errors introduced by the DF approach are negligible. The mean absolute error for bond lengths of a molecular set, with the cc-pCVQZ basis set, is 0.0001-0.0002 Å. © 2017 Wiley Periodicals, Inc.

  10. An optimized content-aware image retargeting method: toward expanding the perceived visual field of the high-density retinal prosthesis recipients

    Science.gov (United States)

    Li, Heng; Zeng, Yajie; Lu, Zhuofan; Cao, Xiaofei; Su, Xiaofan; Sui, Xiaohong; Wang, Jing; Chai, Xinyu

    2018-04-01

    Objective. Retinal prosthesis devices have shown great value in restoring some sight for individuals with profoundly impaired vision, but the visual acuity and visual field provided by prostheses greatly limit recipients’ visual experience. In this paper, we employ computer vision approaches to seek to expand the perceptible visual field in patients potentially implanted with a high-density retinal prosthesis while maintaining visual acuity as much as possible. Approach. We propose an optimized content-aware image retargeting method, by introducing salient object detection based on color and intensity-difference contrast, aiming to remap important information of a scene into a small visual field while preserving its original scale as much as possible. It may improve prosthetic recipients’ perceived visual field and aid in performing some visual tasks (e.g. object detection and object recognition). To verify our method, psychophysical experiments on detecting object number and recognizing objects are conducted under simulated prosthetic vision. As controls, we use three other image retargeting techniques: Cropping, Scaling, and seam-assisted shrinkability. Main results. Results show that our method outperforms the other three image retargeting methods in preserving key features and achieves significantly higher recognition accuracy under the condition of a small visual field and low resolution. Significance. The proposed method is beneficial for expanding the perceived visual field of prosthesis recipients and improving their object detection and recognition performance. It suggests that our method may provide an effective option for the image processing module in future high-density retinal implants.
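    As a rough illustration of the salient-object-detection step described above, the Python sketch below computes a toy color/intensity-contrast saliency map and then applies the simplest retargeting option (saliency-guided cropping); the patch size, the contrast measure, and the cropping strategy are assumptions for illustration and are not the authors' pipeline, whose retargeting goes beyond cropping.

      import numpy as np

      def contrast_saliency(img, patch=16):
          """Toy saliency: per-patch color/intensity contrast against the global
          mean color. img is an H x W x 3 float array in [0, 1]."""
          h, w, _ = img.shape
          global_mean = img.reshape(-1, 3).mean(axis=0)
          sal = np.zeros((h // patch, w // patch))
          for i in range(sal.shape[0]):
              for j in range(sal.shape[1]):
                  block = img[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
                  sal[i, j] = np.linalg.norm(block.mean(axis=(0, 1)) - global_mean)
          return sal

      def crop_most_salient(img, out_h, out_w, patch=16):
          """Retarget by cropping the window (of the prosthesis field size) whose
          accumulated saliency is largest; the simplest content-aware option."""
          sal = contrast_saliency(img, patch)
          bh, bw = out_h // patch, out_w // patch
          best, best_ij = -1.0, (0, 0)
          for i in range(sal.shape[0] - bh + 1):
              for j in range(sal.shape[1] - bw + 1):
                  score = sal[i:i+bh, j:j+bw].sum()
                  if score > best:
                      best, best_ij = score, (i, j)
          i, j = best_ij
          return img[i*patch:i*patch+out_h, j*patch:j*patch+out_w]

      # Example: reduce a 256x256 scene to a 128x128 simulated visual field.
      scene = np.random.rand(256, 256, 3)
      small = crop_most_salient(scene, 128, 128)
      print(small.shape)  # (128, 128, 3)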

  11. Improved porous silicon (P-Si) microarray based PSA (prostate specific antigen) immunoassay by optimized surface density of the capture antibody

    Science.gov (United States)

    Lee, SangWook; Kim, Soyoun; Malm, Johan; Jeong, Ok Chan; Lilja, Hans; Laurell, Thomas

    2014-01-01

    Enriching the surface density of immobilized capture antibodies enhances the detection signal of antibody sandwich microarrays. In this study, we improved the detection sensitivity of our previously developed P-Si (porous silicon) antibody microarray by optimizing the concentration of the capture antibody. We investigated immunoassays using a P-Si microarray at three different capture antibody (PSA - prostate specific antigen) concentrations, analyzing the influence of the antibody density on the assay detection sensitivity. The LOD (limit of detection) for PSA was 2.5 ng mL−1, 80 pg mL−1, and 800 fg mL−1 when arraying the PSA antibody H117 at concentrations of 15 µg mL−1, 35 µg mL−1, and 154 µg mL−1, respectively. We further investigated PSA spiked into human female serum in the range of 800 fg mL−1 to 500 ng mL−1. The microarray showed a LOD of 800 fg mL−1 and a dynamic range of 800 fg mL−1 to 80 ng mL−1 in serum-spiked samples. PMID:24016590

  12. Optimization of a high-yield, low-areal-density fusion product source at the National Ignition Facility with applications in nucleosynthesis experiments

    Science.gov (United States)

    Gatu Johnson, M.; Casey, D. T.; Hohenberger, M.; Zylstra, A. B.; Bacher, A.; Brune, C. R.; Bionta, R. M.; Craxton, R. S.; Ellison, C. L.; Farrell, M.; Frenje, J. A.; Garbett, W.; Garcia, E. M.; Grim, G. P.; Hartouni, E.; Hatarik, R.; Herrmann, H. W.; Hohensee, M.; Holunga, D. M.; Hoppe, M.; Jackson, M.; Kabadi, N.; Khan, S. F.; Kilkenny, J. D.; Kohut, T. R.; Lahmann, B.; Le, H. P.; Li, C. K.; Masse, L.; McKenty, P. W.; McNabb, D. P.; Nikroo, A.; Parham, T. G.; Parker, C. E.; Petrasso, R. D.; Pino, J.; Remington, B.; Rice, N. G.; Rinderknecht, H. G.; Rosenberg, M. J.; Sanchez, J.; Sayre, D. B.; Schoff, M. E.; Shuldberg, C. M.; Séguin, F. H.; Sio, H.; Walters, Z. B.; Whitley, H. D.

    2018-05-01

    Polar-direct-drive exploding pushers are used as a high-yield, low-areal-density fusion product source at the National Ignition Facility with applications including diagnostic calibration, nuclear security, backlighting, electron-ion equilibration, and nucleosynthesis-relevant experiments. In this paper, two different paths to improving the performance of this platform are explored: (i) optimizing the laser drive, and (ii) optimizing the target. While the present study is specifically geared towards nucleosynthesis experiments, the results are generally applicable. Example data from T2/3He-gas-filled implosions with trace deuterium are used to show that yield and ion temperature (Tion) from 1.6 mm-outer-diameter thin-glass-shell capsule implosions are improved at a set laser energy by switching from a ramped to a square laser pulse shape, and that increased laser energy further improves yield and Tion, although by factors lower than predicted by 1 D simulations. Using data from D2/3He-gas-filled implosions, yield at a set Tion is experimentally verified to increase with capsule size. Uniform D3He-proton spectra from 3 mm-outer-diameter CH shell implosions demonstrate the utility of this platform for studying charged-particle-producing reactions relevant to stellar nucleosynthesis.

  13. Waist-to-Height Ratio and Triglycerides/High-Density Lipoprotein Cholesterol Were the Optimal Predictors of Metabolic Syndrome in Uighur Men and Women in Xinjiang, China.

    Science.gov (United States)

    Chen, Bang-Dang; Yang, Yi-Ning; Ma, Yi-Tong; Pan, Shuo; He, Chun-Hui; Liu, Fen; Ma, Xiang; Fu, Zhen-Yan; Li, Xiao-Mei; Xie, Xiang; Zheng, Ying-Ying

    2015-06-01

    This study aimed to identify the best single predictor of metabolic syndrome by comparing the predictive ability of various anthropometric and atherogenic parameters among a Uighur population in Xinjiang, northwest China. A total of 4767 Uighur participants were selected from the Cardiovascular Risk Survey (CRS), which was carried out from October, 2007, to March, 2010. Anthropometric data, blood pressure, serum concentrations of total cholesterol (TC), triglycerides (TGs), low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), and fasting glucose were documented. Prevalence of metabolic syndrome and its individual components was confirmed according to International Diabetes Federation (IDF) criteria. The area under the receiver operating characteristic curve (AUC) of each variable for the presence of metabolic syndrome was compared. The sensitivity (Sen), specificity (Spe), distance in the receiver operating characteristic (ROC) curve, and cutoffs of each variable for the presence of metabolic syndrome were calculated. In all, 23.7% of men and 40.1% of women in this Uighur population in Xinjiang had the metabolic syndrome; the prevalence of metabolic syndrome in women was significantly higher than that in men. In men, WHtR had the highest AUC value (AUC=0.838); it was followed by TGs/HDL-C (AUC=0.826), body mass index (BMI) (AUC=0.812), waist-to-hip ratio (WHR) (AUC=0.781), and body adiposity index (BAI) (AUC=0.709). In women, TGs/HDL-C had the highest AUC value (AUC=0.815); it was followed by WHtR (AUC=0.780), WHR (AUC=0.730), BMI (AUC=0.719), and BAI (AUC=0.699). Similarly, among all five anthropometric and atherogenic parameters, WHtR had the shortest ROC distance of 0.32 (Sen=85.40%, Spe=71.6%), and the optimal cutoff for WHtR was 0.55 in men. In women, TGs/HDL-C had the shortest ROC distance of 0.35 (Sen=75.29%, Spe=75.18%), and the optimal cutoff of TGs/HDL-C was 1.22. WHtR was the best predictor of metabolic syndrome in men, and TGs/HDL-C was the best predictor in women.

  14. The reduction of optimal heat treatment temperature and critical current density enhancement of ex situ processed MgB2 tapes using ball milled filling powder

    Science.gov (United States)

    Fujii, Hiroki; Iwanade, Akio; Kawada, Satoshi; Kitaguchi, Hitoshi

    2018-01-01

    The optimal heat treatment temperature (Topt) at which the best performance in the critical current density (Jc) at 4.2 K is obtained is influenced by the quality or reactivity of the filling powder in ex situ processed MgB2 tapes. Using a controlled fabrication process, Topt decreases to 705-735 °C, which is lower than previously reported values by more than 50 °C. The decrease in Topt is effective in suppressing both the decomposition of MgB2, and hence the formation of impurities such as MgB4, and the growth of the crystallite size, which decreases the upper critical field (Hc2). These effects bring about the Jc improvement, and the Jc value at 4.2 K and 10 T reaches 250 A/mm2. The milling process also decreases the critical temperature (Tc) below 30 K. The milled powder is easily contaminated in air, and thus the Jc property of contaminated tapes degrades severely. The contamination can raise Topt by more than 50 °C, which is probably due to the increased sintering temperature required to overcome the contaminated surface layer around the grains, which acts as a barrier.

  15. Reference values assessment in a Mediterranean population for small dense low-density lipoprotein concentration isolated by an optimized precipitation method

    Directory of Open Access Journals (Sweden)

    Fernández-Cidón B

    2017-06-01

    Full Text Available Background: High serum concentrations of small dense low-density lipoprotein cholesterol (sd-LDL-c) particles are associated with risk of cardiovascular disease (CVD). Their clinical application has been hindered as a consequence of the laborious current method used for their quantification. Objective: Optimize a simple and fast precipitation method to isolate sd-LDL particles and establish a reference interval in a Mediterranean population. Materials and methods: Forty-five serum samples were collected, and sd-LDL particles were isolated using a modified heparin-Mg2+ precipitation method. The sd-LDL-c concentration was calculated by subtracting high-density lipoprotein cholesterol (HDL-c) from the total cholesterol measured in the supernatant. This method was compared with the reference method (ultracentrifugation). Reference values were estimated according to the Clinical and Laboratory Standards Institute and the International Federation of Clinical Chemistry and Laboratory Medicine recommendations. The sd-LDL-c concentration was measured in serums from 79 subjects with no lipid metabolism abnormalities. Results: The Passing-Bablok regression equation is y = 1.52 (0.72 to 1.73) + 0.07x (−0.1 to 0.13), demonstrating no statistically significant differences

  16. density functional theory approach

    Indian Academy of Sciences (India)

    YOGESH ERANDE

    2017-07-27

    Jul 27, 2017 ... a key role in all optical switching devices, since their optical properties can be ... optimized in the gas phase using Density Functional Theory (DFT). ... The Mediation of Electrostatic Effects by Solvents, J. Am. Chem. ...

  17. Optimization of hydrostatic pressure at varied sonication conditions--power density, intensity, very low frequency--for isothermal ultrasonic sludge treatment.

    Science.gov (United States)

    Delmas, Henri; Le, Ngoc Tuan; Barthe, Laurie; Julcour-Lebigue, Carine

    2015-07-01

    This work aims at investigating for the first time the key sonication (US) parameters - power density (DUS), intensity (IUS), and frequency (FS), down to the audible range - under varied hydrostatic pressure (Ph) and low-temperature isothermal conditions (to avoid any thermal effect). The selected application was activated sludge disintegration, a major industrial US process. For a rational approach, all comparisons were made at the same specific energy input (ES, US energy per solid weight), which is also the relevant economic criterion. The decoupling of power density and intensity was obtained either by changing the sludge volume or, most often, by changing the probe diameter, all other characteristics being unchanged. Comprehensive results were obtained by varying the hydrostatic pressure at given power density and intensity. In all cases, marked maxima of sludge disintegration appeared at optimum pressures, whose values increased with increasing power intensity and density. Such an optimum was expected due to the opposite effects of increasing hydrostatic pressure: a higher cavitation threshold, hence smaller and fewer bubbles, but higher temperature and pressure at the end of collapse. In addition, the first attempt to lower the US frequency down to the audible range was very successful: under any operating condition (DUS, IUS, Ph, sludge concentration and type), higher sludge disintegration was obtained at 12 kHz than at 20 kHz. The same values of optimum pressure were observed at 12 and 20 kHz. At the same energy consumption, the best conditions - obtained at 12 kHz, maximum power density 720 W/L, and 3.25 bar - provided about 100% improvement with respect to the usual conditions (1 bar, 20 kHz). Important energy savings and equipment size reduction may then be expected. Copyright © 2014 Elsevier B.V. All rights reserved.
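    For reference, the quantities being decoupled in this study are commonly defined as follows (standard definitions in the sonication literature, written here with assumed symbols):

    \[
    D_{\mathrm{US}} = \frac{P_{\mathrm{US}}}{V},
    \qquad
    I_{\mathrm{US}} = \frac{P_{\mathrm{US}}}{A_{\mathrm{probe}}},
    \qquad
    E_{S} = \frac{P_{\mathrm{US}}\,t}{V \cdot \mathrm{TS}},
    \]

    where \(P_{\mathrm{US}}\) is the delivered ultrasonic power, \(V\) the sonicated volume, \(A_{\mathrm{probe}}\) the emitting area of the probe tip, \(t\) the sonication time, and \(\mathrm{TS}\) the total-solids concentration; comparing treatments at fixed \(E_{S}\) (energy per unit of solids) is what makes the density/intensity decoupling described above meaningful.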

  18. Quick-low-density parity check and dynamic threshold voltage optimization in 1X nm triple-level cell NAND flash memory with comprehensive analysis of endurance, retention-time, and temperature variation

    Science.gov (United States)

    Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken

    2016-08-01

    NAND flash memory’s reliability degrades with increasing endurance, retention-time, and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read to accurately estimate a bit-error rate (BER) that includes the effects of temperature, write and erase (W/E) cycles, and retention-time. As a result, an 83% read latency reduction is achieved compared to conventional AEP-LDPC. Also, W/E cycling is extended by 100% compared with a conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO), has two parts: adaptive VRef shift (AVS) and VTH space control (VSC). AVS reduces read errors and latency by adaptively optimizing the reference voltage (VRef) based on temperature, W/E cycles, and retention-time. AVS stores the optimal VRef values in a table in order to enable one cell read. VSC further improves AVS by optimizing the voltage margins between VTH states. DVO reduces BER by 80%.

  19. Implementing optimal thinning strategies

    Science.gov (United States)

    Kurt H. Riitters; J. Douglas Brodie

    1984-01-01

    Optimal thinning regimes for achieving several management objectives were derived from two stand-growth simulators by dynamic programming. Residual mean tree volumes were then plotted against stand density management diagrams. The results supported the use of density management diagrams for comparing, checking, and implementing the results of optimization analyses....

  20. Sodium magnetic resonance imaging. Development of a 3D radial acquisition technique with optimized k-space sampling density and high SNR-efficiency

    International Nuclear Information System (INIS)

    Nagel, Armin Michael

    2009-01-01

    A 3D radial k-space acquisition technique with homogeneous distribution of the sampling density (DA-3D-RAD) is presented. This technique enables short echo times (TE) for 23Na-MRI and provides a high SNR-efficiency. The gradients of the DA-3D-RAD sequence are designed such that the average sampling density in each spherical shell of k-space is constant. The DA-3D-RAD sequence provides 34% more SNR than a conventional 3D radial sequence (3D-RAD) if T2*-decay is neglected. This SNR gain is enhanced if T2*-decay is present, so a 1.5 to 1.8 fold higher SNR is measured in brain tissue with the DA-3D-RAD sequence. Simulations and experimental measurements show that the DA-3D-RAD sequence yields a better resolution in the presence of T2*-decay and fewer image artefacts when B0-inhomogeneities exist. Using the developed sequence, T1-, T2*- and Inversion-Recovery 23Na image contrasts were acquired for several organs and 23Na relaxation times were measured (brain tissue: T1 = 29.0±0.3 ms; T2s* ∼ 4 ms; T2l* ∼ 31 ms; cerebrospinal fluid: T1 = 58.1±0.6 ms; T2* = 55±3 ms (B0 = 3 T)). T1- and T2*-relaxation times of cerebrospinal fluid are independent of the selected magnetic field strength (B0 = 3 T/7 T), whereas the relaxation times of brain tissue increase with field strength. Furthermore, 23Na signals of oedemata were suppressed in patients and thus signals from different tissue compartments were selectively measured. (orig.)

  1. Road density

    Data.gov (United States)

    U.S. Environmental Protection Agency — Road density is generally highly correlated with amount of developed land cover. High road densities usually indicate high levels of ecological disturbance.

  2. Optimization of High Temperature and Pressurized Steam Modified Wood Fibers for High-Density Polyethylene Matrix Composites Using the Orthogonal Design Method

    Directory of Open Access Journals (Sweden)

    Xun Gao

    2016-10-01

    Full Text Available The orthogonal design method was used to determine the optimum conditions for modifying poplar fibers through a high temperature and pressurized steam treatment for the subsequent preparation of wood fiber/high-density polyethylene (HDPE) composites. The extreme difference, variance, and significance analyses were performed to reveal the effect of the modification parameters on the mechanical properties of the prepared composites, and they yielded consistent results. The main findings indicated that the modification temperature most strongly affected the mechanical properties of the prepared composites, followed by the steam pressure. A temperature of 170 °C, a steam pressure of 0.8 MPa, and a processing time of 20 min were determined as the optimum parameters for fiber modification. Compared to the composites prepared from untreated fibers, the tensile, flexural, and impact strength of the composites prepared from modified fibers increased by 20.17%, 18.5%, and 19.3%, respectively. The effect on the properties of the composites was also investigated by scanning electron microscopy and dynamic mechanical analysis. When the temperature, steam pressure, and processing time reached the highest values, the composites exhibited the best mechanical properties, which was also in good agreement with the results of the extreme difference, variance, and significance analyses. Moreover, the crystallinity and thermal stability of the fibers and the storage modulus of the prepared composites improved; however, the holocellulose content and the pH of the wood fibers decreased.
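    To illustrate the extreme-difference (range) analysis used above, the following minimal Python sketch computes per-level mean responses and ranges for a three-factor L9-type orthogonal array; the array layout is the standard L9 assignment, but the response values are made-up placeholders, not the experimental data.

      import numpy as np

      # Hypothetical L9 orthogonal array: 9 runs x 3 factors, levels coded 0..2
      # (factor columns: temperature, steam pressure, processing time).
      L9 = np.array([
          [0, 0, 0], [0, 1, 1], [0, 2, 2],
          [1, 0, 1], [1, 1, 2], [1, 2, 0],
          [2, 0, 2], [2, 1, 0], [2, 2, 1],
      ])
      # Placeholder responses (e.g., tensile strength of the composite, MPa).
      y = np.array([21.0, 23.5, 24.1, 25.0, 27.8, 26.2, 27.1, 29.3, 28.0])

      for f, name in enumerate(["temperature", "steam pressure", "time"]):
          level_means = [y[L9[:, f] == lvl].mean() for lvl in range(3)]
          R = max(level_means) - min(level_means)  # extreme difference (range)
          print(f"{name:15s} level means = {np.round(level_means, 2)}  R = {R:.2f}")
      # The factor with the largest R is ranked as the most influential, and the
      # level with the best mean response is taken as the optimum for that factor.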

  3. Lung density

    DEFF Research Database (Denmark)

    Garnett, E S; Webber, C E; Coates, G

    1977-01-01

    The density of a defined volume of the human lung can be measured in vivo by a new noninvasive technique. A beam of gamma-rays is directed at the lung and, by measuring the scattered gamma-rays, lung density is calculated. The density in the lower lobe of the right lung in normal man during quiet breathing in the sitting position ranged from 0.25 to 0.37 g.cm-3. Subnormal values were found in patients with emphysema. In patients with pulmonary congestion and edema, lung density values ranged from 0.33 to 0.93 g.cm-3. The lung density measurement correlated well with the findings in chest radiographs, but the lung density values were more sensitive indices. This was particularly evident in serial observations of individual patients.

  4. A fully-automated software pipeline for integrating breast density and parenchymal texture analysis for digital mammograms: parameter optimization in a case-control breast cancer risk assessment study

    Science.gov (United States)

    Zheng, Yuanjie; Wang, Yan; Keller, Brad M.; Conant, Emily; Gee, James C.; Kontos, Despina

    2013-02-01

    Estimating a woman's risk of breast cancer is becoming increasingly important in clinical practice. Mammographic density, estimated as the percent of dense (PD) tissue area within the breast, has been shown to be a strong risk factor. Studies also support a relationship between mammographic texture and breast cancer risk. We have developed a fully-automated software pipeline for computerized analysis of digital mammography parenchymal patterns by quantitatively measuring both breast density and texture properties. Our pipeline combines advanced computer algorithms of pattern recognition, computer vision, and machine learning and offers a standardized tool for breast cancer risk assessment studies. Different from many existing methods performing parenchymal texture analysis within specific breast subregions, our pipeline extracts texture descriptors for points on a spatial regular lattice and from a surrounding window of each lattice point, to characterize the local mammographic appearance throughout the whole breast. To demonstrate the utility of our pipeline, and optimize its parameters, we perform a case-control study by retrospectively analyzing a total of 472 digital mammography studies. Specifically, we investigate the window size, which is a lattice-related parameter, and compare the performance of texture features to that of breast PD in classifying case-control status. Our results suggest that different window sizes may be optimal for raw (12.7 mm2) versus vendor post-processed images (6.3 mm2). We also show that the combination of PD and texture features outperforms PD alone. The improvement is significant (p=0.03) when raw images and a window size of 12.7 mm2 are used, having an ROC AUC of 0.66. The combination of PD and our texture features computed from post-processed images with a window size of 6.3 mm2 achieves an ROC AUC of 0.75.

  5. The Density of Sustainable Settlements

    DEFF Research Database (Denmark)

    Lauring, Michael; Silva, Victor; Jensen, Ole B.

    2010-01-01

    This paper is the initial result of a cross-disciplinary attempt to encircle an answer to the question of optimal densities of sustainable settlements. Urban density is an important component in the framework of sustainable development and influences not only the character and design of cities...

  6. Low Bone Density

    Science.gov (United States)

    Low bone density is when your bone density ... people with normal bone density. Detecting Low Bone Density: A bone density test will determine whether you ...

  7. A novel probe density controllable electrochemiluminescence biosensor for ultra-sensitive detection of Hg2+ based on DNA hybridization optimization with gold nanoparticles array patterned self-assembly platform.

    Science.gov (United States)

    Gao, Wenhua; Zhang, An; Chen, Yunsheng; Chen, Zixuan; Chen, Yaowen; Lu, Fushen; Chen, Zhanguang

    2013-11-15

    Biosensors based on DNA hybridization hold great potential for higher sensitivity, as optimal DNA hybridization efficiency can be achieved by controlling the distribution and orientation of probe strands on the transducer surface. In this work, an innovative strategy is reported to tap the sensitivity potential of current electrochemiluminescence (ECL) biosensing systems by dispersedly anchoring the DNA beacons on a gold nanoparticle (GNP) array electrodeposited on the glassy carbon electrode surface, rather than simply sprawling the coil-like strands onto a planar gold surface. The strategy was developed by designing a "signal-on" ECL biosensing switch fabricated on the GNP-nanopatterned electrode surface for enhanced ultra-sensitive detection of Hg(2+). A 57-mer hairpin-DNA labeled with ferrocene as ECL quencher and a 13-mer DNA labeled with Ru(bpy)3(2+) as reporter were hybridized to construct the signal generator in the off-state. A 31-mer thymine (T)-rich capture-DNA was introduced to form T-T mismatches with the loop sequence of the hairpin-DNA in the presence of Hg(2+) and induce the stem-loop to open, whereupon the ECL "signal-on" was triggered. The peak sensitivity, with the lowest detection limit of 0.1 nM, was achieved at the optimal GNP number density, while excessive GNP deposition resulted in sensitivity deterioration for the biosensor. We expect the present strategy could guide the redesign of existing probe-immobilized ECL genosensors toward even higher sensitivity in ultralow-level target detection, such as the identification of genetic diseases and disorders in basic research and clinical application. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Level densities

    International Nuclear Information System (INIS)

    Ignatyuk, A.V.

    1998-01-01

    For any application of the statistical theory of nuclear reactions it is very important to obtain the parameters of the level density description from reliable experimental data. The cumulative numbers of low-lying levels and the average spacings between neutron resonances are usually used as such data. The level density parameters fitted to such data are compiled in the RIPL Starter File for the three models most frequently used in practical calculations: i) For the Gilbert-Cameron model the parameters of the Beijing group, based on a rather recent compilation of the neutron resonance and low-lying level densities and included in the beijing-gc.dat file, are chosen as recommended. As alternative versions the parameters provided by other groups are given in the files jaeri-gc.dat, bombay-gc.dat, and obninsk-gc.dat. Additionally, the iljinov-gc.dat and mengoni-gc.dat files include sets of level density parameters that take into account the damping of shell effects at high energies. ii) For the back-shifted Fermi gas model the beijing-bs.dat file is selected as the recommended one. Alternative parameters of the Obninsk group are given in the obninsk-bs.dat file and those of Bombay in bombay-bs.dat. iii) For the generalized superfluid model the Obninsk group parameters included in the obninsk-bcs.dat file are chosen as the recommended ones and the beijing-bcs.dat file is included as an alternative set of parameters. iv) For the microscopic approach to the level densities the files are: obninsk-micro.for - FORTRAN 77 source for the microscopic statistical level density code developed in Obninsk by Ignatyuk and coworkers; moller-levels.gz - Moeller single-particle level and ground-state deformation data base; moller-levels.for - retrieval code for the Moeller single-particle level scheme. (author)

  9. Self-consistent embedding of density-matrix renormalization group wavefunctions in a density functional environment.

    Science.gov (United States)

    Dresselhaus, Thomas; Neugebauer, Johannes; Knecht, Stefan; Keller, Sebastian; Ma, Yingjin; Reiher, Markus

    2015-01-28

    We present the first implementation of a density matrix renormalization group algorithm embedded in an environment described by density functional theory. The frozen density embedding scheme is used with a freeze-and-thaw strategy for a self-consistent polarization of the orbital-optimized wavefunction and the environmental densities with respect to each other.

  10. Optimization and Optimal Control

    CERN Document Server

    Chinchuluun, Altannar; Enkhbat, Rentsen; Tseveendorj, Ider

    2010-01-01

    During the last four decades there has been a remarkable development in optimization and optimal control. Due to its wide variety of applications, many scientists and researchers have paid attention to fields of optimization and optimal control. A huge number of new theoretical, algorithmic, and computational results have been observed in the last few years. This book gives the latest advances, and due to the rapid development of these fields, there are no other recent publications on the same topics. Key features: Provides a collection of selected contributions giving a state-of-the-art accou

  11. Optimally Stopped Optimization

    Science.gov (United States)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one to two orders of magnitude faster than the HFS solver.

  12. Statistical theory of electron densities

    International Nuclear Information System (INIS)

    Pratt, L.R.; Hoffman, G.G.; Harris, R.A.

    1988-01-01

    An optimized Thomas--Fermi theory is proposed which retains the simplicity of the original theory and is a suitable reference theory for Monte Carlo density functional treatments of condensed materials. The key ingredient of the optimized theory is a neighborhood sampled potential which contains effects of the inhomogeneities in the one-electron potential. In contrast to the traditional Thomas--Fermi approach, the optimized theory predicts a finite electron density in the vicinity of a nucleus. Consideration of the example of an ideal electron gas subject to a central Coulomb field indicates that implementation of the approach is straightforward. The optimized theory is found to fail completely when a classically forbidden region is approached. However, these circumstances are not of primary interest for calculations of interatomic forces. It is shown how the energy functional of the density may be constructed by integration of a generalized Hellmann--Feynman relation. This generalized Hellmann--Feynman relation proves to be equivalent to the variational principle of density functional quantum mechanics, and, therefore, the present density theory can be viewed as a variational consequence of the constructed energy functional
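    For context, the original Thomas-Fermi energy functional that the optimized theory modifies reads, in atomic units (standard form assumed),

    \[
    E_{\mathrm{TF}}[\rho] =
    C_{F}\!\int \rho(\mathbf{r})^{5/3}\,d^{3}r
    + \int v(\mathbf{r})\,\rho(\mathbf{r})\,d^{3}r
    + \frac{1}{2}\iint \frac{\rho(\mathbf{r})\,\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^{3}r\,d^{3}r',
    \qquad
    C_{F} = \frac{3}{10}\big(3\pi^{2}\big)^{2/3},
    \]

    where \(v(\mathbf{r})\) is the external (nuclear) potential; in the optimized theory the purely local dependence on \(v\) is replaced by the neighborhood-sampled potential mentioned above, which is how a finite electron density in the vicinity of a nucleus is obtained.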

  13. Density dependent effective interactions

    International Nuclear Information System (INIS)

    Dortmans, P.J.; Amos, K.

    1994-01-01

    An effective nucleon-nucleon interaction is defined by an optimal fit to selected on- and half-off-of-the-energy-shell t- and g-matrices determined by solutions of the Lippmann-Schwinger and Brueckner-Bethe-Goldstone equations with the Paris nucleon-nucleon interaction as input. As such, it is seen to better reproduce the interaction on which it is based than other commonly used density dependent effective interactions do. The new (medium-modified) effective interaction, when folded with appropriate density matrices, has been used to define proton-12C and proton-16O optical potentials. With them, elastic scattering data are well fitted and the medium effects are identifiable. 23 refs., 8 figs

  14. [Ultrastructural organization of cytoplasmatic membrane of Anaerobacter polyendosporus studied by electron microscopic cryofractography].

    Science.gov (United States)

    Duda, V I; Suzina, N E; Dmitriev, V V

    2001-01-01

    Anaerobacter polyendosporus cells do not have typical mesosomes. However, the analysis of this anaerobic multispore bacterium by electron microscopic cryofractography showed that its cytoplasmic membrane contains specific intramembrane structures in the form of flat lamellar inverted lipid membranes tens of nanometers to several microns in size. It was found that these structures are located in the hydrophobic interior between the outer and inner leaflets of the cytoplasmic membrane and do not contain the intramembrane particles that are commonly present on freeze-fracture replicas. The flat inverted lipid membranes were revealed in bacterial cells cultivated under normal growth conditions, indicating the existence of a complex type of compartmentalization in biological membranes, which manifests itself in the formation of intramembrane compartments having the appearance of vesicles and inverted lipid membranes.

  15. Coil Optimization for HTS Machines

    DEFF Research Database (Denmark)

    Mijatovic, Nenad; Jensen, Bogi Bech; Abrahamsen, Asger Bech

    An optimization approach of HTS coils in HTS synchronous machines (SM) is presented. The optimization is aimed at high power SM suitable for direct driven wind turbines applications. The optimization process was applied to a general radial flux machine with a peak air gap flux density of ~3T...... is suitable for which coil segment is presented. Thus, the performed study gives valuable input for the coil design of HTS machines ensuring optimal usage of HTS tapes....

  16. Filters in topology optimization

    DEFF Research Database (Denmark)

    Bourdin, Blaise

    1999-01-01

    In this article, a modified (``filtered'') version of the minimum compliance topology optimization problem is studied. The direct dependence of the material properties on its pointwise density is replaced by a regularization of the density field using a convolution operator. In this setting it is possible to establish the existence of solutions. Moreover, convergence of an approximation by means of finite elements can be obtained. This is illustrated through some numerical experiments. The ``filtering'' technique is also shown to cope with two important numerical problems in topology optimization...
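    A minimal sketch of such a filtered density field is given below; the linearly decaying (cone) convolution weights and the filter radius are common choices assumed here for illustration.

      import numpy as np

      def density_filter(x, rmin):
          """Filter an nely x nelx element density field x by convolution with
          linearly decaying (cone) weights of radius rmin elements, as in
          filtered topology optimization formulations."""
          nely, nelx = x.shape
          xf = np.zeros_like(x)
          r = int(np.ceil(rmin)) - 1
          for i in range(nelx):
              for j in range(nely):
                  wsum, val = 0.0, 0.0
                  for i2 in range(max(i - r, 0), min(i + r + 1, nelx)):
                      for j2 in range(max(j - r, 0), min(j + r + 1, nely)):
                          w = max(0.0, rmin - np.hypot(i - i2, j - j2))
                          wsum += w
                          val += w * x[j2, i2]
                  xf[j, i] = val / wsum
          return xf

      # Example: smooth a random 20 x 40 density field with a filter radius of 2.5.
      x = np.random.rand(20, 40)
      x_filtered = density_filter(x, rmin=2.5)
      print(x_filtered.shape, float(x_filtered.min()), float(x_filtered.max()))

    In a compliance minimization loop it is the filtered field, rather than the raw design variables, that would enter the material (stiffness) interpolation.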

  17. Slope constrained Topology Optimization

    DEFF Research Database (Denmark)

    Petersson, J.; Sigmund, Ole

    1998-01-01

    The problem of minimum compliance topology optimization of an elastic continuum is considered. A general continuous density-energy relation is assumed, including variable thickness sheet models and artificial power laws. To ensure existence of solutions, the design set is restricted by enforcing...

  18. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
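    A minimal Python sketch of the entropy-based ranking step that distinguishes KECA from kernel PCA is given below; the RBF kernel, its length-scale, and the toy data are assumptions, and the additional OKECA rotation search is omitted.

      import numpy as np

      def rbf_kernel(X, sigma):
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * sigma ** 2))

      def keca_components(X, sigma, n_components):
          """Rank kernel eigenpairs by their contribution to the Renyi entropy
          estimate, lambda_i * (1^T e_i)^2, and return the top projections."""
          K = rbf_kernel(X, sigma)
          lam, E = np.linalg.eigh(K)               # eigenvalues in ascending order
          ones = np.ones(K.shape[0])
          entropy_contrib = lam * (E.T @ ones) ** 2
          order = np.argsort(entropy_contrib)[::-1][:n_components]
          # Projections of the training data onto the selected axes.
          return E[:, order] * np.sqrt(np.clip(lam[order], 0.0, None))

      X = np.random.randn(100, 5)
      Z = keca_components(X, sigma=1.0, n_components=2)
      print(Z.shape)  # (100, 2)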

  19. Landscape encodings enhance optimization.

    Directory of Open Access Journals (Sweden)

    Konstantin Klemm

    Full Text Available Hard combinatorial optimization problems deal with the search for the minimum cost solutions (ground states) of discrete systems under strong constraints. A transformation of state variables may enhance computational tractability. It has been argued that these state encodings are to be chosen invertible to retain the original size of the state space. Here we show how redundant non-invertible encodings enhance optimization by enriching the density of low-energy states. In addition, smooth landscapes may be established on encoded state spaces to guide local search dynamics towards the ground state.

  20. Landscape Encodings Enhance Optimization

    Science.gov (United States)

    Klemm, Konstantin; Mehta, Anita; Stadler, Peter F.

    2012-01-01

    Hard combinatorial optimization problems deal with the search for the minimum cost solutions (ground states) of discrete systems under strong constraints. A transformation of state variables may enhance computational tractability. It has been argued that these state encodings are to be chosen invertible to retain the original size of the state space. Here we show how redundant non-invertible encodings enhance optimization by enriching the density of low-energy states. In addition, smooth landscapes may be established on encoded state spaces to guide local search dynamics towards the ground state. PMID:22496860

  1. Topology Optimization of Thermal Heat Sinks

    DEFF Research Database (Denmark)

    Klaas Haertel, Jan Hendrik; Engelbrecht, Kurt; Lazarov, Boyan Stefanov

    2015-01-01

    In this paper, topology optimization is applied to optimize the cooling performance of thermal heat sinks. The coupled two-dimensional thermofluid model of a heat sink cooled with forced convection and a density-based topology optimization including density filtering and projection are implemented...... in COMSOL Multiphysics. The optimization objective is to minimize the heat sink’s temperature for a prescribed pressure drop and fixed heat generation. To conduct the optimization, COMSOL’s Optimization Module with GCMMA as the optimization method is used. The implementation of this topology optimization...... approach in COMSOL Multiphysics is described in this paper and results for optimized two-dimensional heat sinks are presented. Furthermore, parameter studies regarding the effect of the prescribed pressure drop of the system on Reynolds number and realized heat sink temperature are presented and discussed....

  2. Laboratory Density Functionals

    OpenAIRE

    Giraud, B. G.

    2007-01-01

    We compare several definitions of the density of a self-bound system, such as a nucleus, in relation with its center-of-mass zero-point motion. A trivial deconvolution relates the internal density to the density defined in the laboratory frame. This result is useful for the practical definition of density functionals.

  3. Optimally segmented permanent magnet structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

    We present an optimization approach which can be employed to calculate the globally optimal segmentation of a two-dimensional magnetic system into uniformly magnetized pieces. For each segment the algorithm calculates the optimal shape and the optimal direction of the remanent flux density vector, with respect to a linear objective functional. We illustrate the approach with results for magnet design problems from different areas, such as a permanent magnet electric motor, a beam focusing quadrupole magnet for particle accelerators and a rotary device for magnetic refrigeration.

  4. Exploring the Effect of Increased Energy Density on the Environmental Impacts of Traction Batteries: A Comparison of Energy Optimized Lithium-Ion and Lithium-Sulfur Batteries for Mobility Applications

    Directory of Open Access Journals (Sweden)

    Felipe Cerdas

    2018-01-01

    Full Text Available The quest towards increasing the energy density of traction battery technologies has led to the emergence and diversification of battery materials. The lithium-sulfur battery (LSB) is in this regard a promising battery chemistry due to its specific energy. However, due to its low volumetric energy density, the LSB faces challenges in mobility applications such as electric vehicles but also other transportation modes. To understand the potential environmental implications of LSB batteries, a comparative Life Cycle Assessment (LCA) was performed. For this study, electrodes for both an NMC111 cell with a graphite anode and an LSB cell with a lithium metal foil as anode were manufactured. Data from disassembly experiments performed on a real battery system for a mid-size passenger vehicle were used to build the required life cycle inventory. The energy consumption during the use phase was calculated using a simulative approach. A set of thirteen impact categories was evaluated and characterized with the ReCiPe methodology. The results of the LCA in this study allow identification of the main sources of environmental problems as well as possible strategies to improve the environmental impact of LSB batteries. In this regard, the high requirement for N-Methyl-2-pyrrolidone (NMP) in the processing of the sulfur cathode and the thickness of the lithium foil were identified as the most important drivers. We make recommendations for necessary further research in order to broaden the understanding of the potential environmental implications of the implementation of LSB batteries for mobility applications.

  5. Nonlinear optimization

    CERN Document Server

    Ruszczynski, Andrzej

    2011-01-01

    Optimization is one of the most important areas of modern applied mathematics, with applications in fields from engineering and economics to finance, statistics, management science, and medicine. While many books have addressed its various aspects, Nonlinear Optimization is the first comprehensive treatment that will allow graduate students and researchers to understand its modern ideas, principles, and methods within a reasonable time, but without sacrificing mathematical precision. Andrzej Ruszczynski, a leading expert in the optimization of nonlinear stochastic systems, integrates the theory and the methods of nonlinear optimization in a unified, clear, and mathematically rigorous fashion, with detailed and easy-to-follow proofs illustrated by numerous examples and figures. The book covers convex analysis, the theory of optimality conditions, duality theory, and numerical methods for solving unconstrained and constrained optimization problems. It addresses not only classical material but also modern top...

  6. Densities of carbon foils

    International Nuclear Information System (INIS)

    Stoner, J.O. Jr.

    1991-01-01

    The densities of arc-evaporated carbon target foils have been measured by several methods. The density depends upon the method used to measure it; for the same surface density, values obtained by different measurement techniques may differ by fifty percent or more. The most reliable density measurements are by flotation, yielding a density of 2.01±0.03 g cm⁻³, and interferometric step height with the surface density known from auxiliary measurements, yielding a density of 2.61±0.4 g cm⁻³. The difference between these density values may be due in part to the compressive stresses that carbon films have while still on their substrates, uncertainties in the optical calibration of surface densities of carbon foils, and systematic errors in step-height measurements. Mechanical thickness measurements by micrometer caliper are unreliable due to nonplanarity of these foils. (orig.)

  7. Website Optimization

    CERN Document Server

    King, Andrew

    2008-01-01

    Remember when an optimized website was one that merely didn't take all day to appear? Times have changed. Today, website optimization can spell the difference between enterprise success and failure, and it takes a lot more know-how to achieve success. This book is a comprehensive guide to the tips, techniques, secrets, standards, and methods of website optimization. From increasing site traffic to maximizing leads, from revving up responsiveness to increasing navigability, from prospect retention to closing more sales, the world of 21st century website optimization is explored, exemplified a

  8. Density-density functionals and effective potentials in many-body electronic structure calculations

    International Nuclear Information System (INIS)

    Reboredo, Fernando A.; Kent, Paul R.

    2008-01-01

    We demonstrate the existence of different density-density functionals designed to retain selected properties of the many-body ground state in a non-interacting solution starting from the standard density functional theory ground state. We focus on diffusion quantum Monte Carlo applications that require trial wave functions with optimal Fermion nodes. The theory is extensible and can be used to understand current practices in several electronic structure methods within a generalized density functional framework. The theory justifies and stimulates the search of optimal empirical density functionals and effective potentials for accurate calculations of the properties of real materials, but also cautions on the limits of their applicability. The concepts are tested and validated with a near-analytic model.

  9. Optimality Conditions in Vector Optimization

    CERN Document Server

    Jiménez, Manuel Arana; Lizana, Antonio Rufián

    2011-01-01

    Vector optimization is continuously needed in several science fields, particularly in economy, business, engineering, physics and mathematics. The evolution of these fields depends, in part, on the improvements in vector optimization in mathematical programming. The aim of this Ebook is to present the latest developments in vector optimization. The contributions have been written by some of the most eminent researchers in this field of mathematical programming. The Ebook is considered essential for researchers and students in this field.

  10. Medicinsk Optimering

    DEFF Research Database (Denmark)

    Birkholm, Klavs

    2010-01-01

    A study of the use of medicine to enhance concentration, memory, and emotional tone, followed by ethical considerations and recommendations to the political system.

  11. Structural optimization

    CERN Document Server

    MacBain, Keith M

    2009-01-01

    Intends to supplement the engineer's toolbox of analysis and design methods, making optimization as commonplace as the finite element method in the engineering workplace. This title introduces structural optimization and the methods of nonlinear programming, such as Lagrange multipliers, Kuhn-Tucker conditions, and the calculus of variations.

  12. Future Road Density

    Data.gov (United States)

    U.S. Environmental Protection Agency — Road density is generally highly correlated with amount of developed land cover. High road densities usually indicate high levels of ecological disturbance. More...

  13. Topology Optimization

    DEFF Research Database (Denmark)

    A. Kristensen, Anders Schmidt; Damkilde, Lars

    2007-01-01

    A way to solve the initial design problem, namely finding a form, is so-called topology optimization. The idea is to define a design region and an amount of material. The loads and supports are also defined, and the algorithm finds the optimal material distribution. The objective function dictates the form, and the designer can choose e.g. maximum stiffness, maximum allowable stresses, or maximum lowest eigenfrequency. The result of the topology optimization is a relatively coarse map of the material layout. This design can be transferred to a CAD system and given the necessary geometrical refinements, and then remeshed and reanalysed in order to ensure that the design requirements are met correctly. The output of standard topology optimization seldom has well-defined, sharp contours, leaving the designer with a tedious interpretation, which often results in less optimal structures. In the paper...

  14. Dispositional Optimism

    Science.gov (United States)

    Carver, Charles S.; Scheier, Michael F.

    2014-01-01

    Optimism is a cognitive construct (expectancies regarding future outcomes) that also relates to motivation: optimistic people exert effort, whereas pessimistic people disengage from effort. Study of optimism began largely in health contexts, finding positive associations between optimism and markers of better psychological and physical health. Physical health effects likely occur through differences in both health-promoting behaviors and physiological concomitants of coping. Recently, the scientific study of optimism has extended to the realm of social relations: new evidence indicates that optimists have better social connections, partly because they work harder at them. In this review, we examine the myriad ways this trait can benefit an individual, and our current understanding of the biological basis of optimism. PMID:24630971

  15. Achieving maximum baryon densities

    International Nuclear Information System (INIS)

    Gyulassy, M.

    1984-01-01

    In continuing work on nuclear stopping power in the energy range E_lab ≈ 10 GeV/nucleon, calculations were made of the energy and baryon densities that could be achieved in uranium-uranium collisions. Results are shown. The energy density reached could exceed 2 GeV/fm³ and baryon densities could reach as high as ten times normal nuclear densities

  16. Crowding and Density

    Science.gov (United States)

    Design and Environment, 1972

    1972-01-01

    Three-part report pinpointing problems and uncovering solutions for the dual concepts of density (ratio of people to space) and crowding (psychological response to density). Section one, "A Primer on Crowding," reviews new psychological and social findings; section two, "Density in the Suburbs," shows conflict between status quo and increased…

  17. Optimized constitutive distributions visualized by lamina formulas

    DEFF Research Database (Denmark)

    Pedersen, Pauli; Pedersen, Niels Leergaard

    2017-01-01

    For optimal design, most parameters may be classified in size, shape, and topology, such as simple density variables and parameters for surface description. Density and surface can be rather directly visualized. Extending the design to material design, in the sense of design of distributions of constitutive...

  18. Probability densities and Lévy densities

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler

    For positive Lévy processes (i.e. subordinators) formulae are derived that express the probability density or the distribution function in terms of power series in time t. The applicability of the results to finance and to turbulence is briefly indicated.

  19. Portfolio Optimization

    OpenAIRE

    Issagali, Aizhan; Alshimbayeva, Damira; Zhalgas, Aidana

    2015-01-01

    In this paper, portfolio optimization techniques were used to determine the most favorable investment portfolio. In particular, stocks of three companies, namely Microsoft Corporation, Christian Dior Fashion House, and Chevron Corporation, were evaluated. Using these data, the amounts invested in each asset when a portfolio is chosen on the efficient frontier were calculated. In addition, the minimum-variance portfolio, the tangency portfolio, and the optimal Markowitz portfolio are presented.
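
    The minimum-variance and tangency portfolios mentioned in this record have simple closed-form solutions when short selling is allowed. The sketch below is illustrative only; the return series are random placeholders, not the Microsoft, Christian Dior, or Chevron data used in the paper.

    ```python
    import numpy as np

    # Illustrative daily-return series for three assets (random placeholders).
    rng = np.random.default_rng(0)
    returns = rng.normal(0.0005, 0.01, size=(250, 3))

    cov = np.cov(returns, rowvar=False)   # sample covariance matrix
    ones = np.ones(cov.shape[0])

    # Closed-form minimum-variance weights: w = C^{-1} 1 / (1' C^{-1} 1)
    w_minvar = np.linalg.solve(cov, ones)
    w_minvar /= ones @ w_minvar

    # Tangency (maximum Sharpe ratio) weights for an assumed risk-free rate.
    rf = 0.0001
    excess = returns.mean(axis=0) - rf
    w_tangency = np.linalg.solve(cov, excess)
    w_tangency /= ones @ w_tangency

    print("min-variance weights:", w_minvar)
    print("tangency weights:    ", w_tangency)
    ```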

  20. Acoustic design by topology optimization

    DEFF Research Database (Denmark)

    Dühring, Maria Bayard; Jensen, Jakob Søndergaard; Sigmund, Ole

    2008-01-01

    To bring down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered, and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling...

  1. Why Density Dependent Propulsion?

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    In 2004 Khoury and Weltman produced a density-dependent cosmology theory they call the Chameleon because, by its nature, it is hidden within known physics. The Chameleon theory has implications for dark matter/energy and universe-acceleration properties, which imply a new force mechanism with ties to the far and local density environment. In this paper, the Chameleon Density Model is discussed in terms of propulsion toward new propellant-less engineering methods.

  2. Density limits in Tokamaks

    International Nuclear Information System (INIS)

    Tendler, M.

    1984-06-01

    The energy loss from a tokamak plasma due to neutral hydrogen radiation and recycling is of great importance for the energy balance at the periphery. It is shown that the requirement for thermal equilibrium implies a constraint on the maximum attainable edge density. The relation to other density limits is discussed. The average plasma density is shown to be a strong function of the refuelling deposition profile. (author)

  3. Nuclear Level Densities

    International Nuclear Information System (INIS)

    Grimes, S.M.

    2005-01-01

    Recent research in the area of nuclear level densities is reviewed. The current interest in nuclear astrophysics and in structure of nuclei off of the line of stability has led to the development of radioactive beam facilities with larger machines currently being planned. Nuclear level densities for the systems used to produce the radioactive beams influence substantially the production rates of these beams. The modification of level-density parameters near the drip lines would also affect nucleosynthesis rates and abundances

  4. Measurement of true density

    International Nuclear Information System (INIS)

    Carr-Brion, K.G.; Keen, E.F.

    1982-01-01

    A system for determining the true density of a fluent mixture, such as a liquid slurry containing entrained gas such as air, comprises a restriction in a pipe through which at least part of the mixture is passed. Density-measuring means, such as gamma-ray detectors and a source, measure the apparent density of the mixture before and after its passage through the restriction. Solid-state pressure-measuring devices are arranged to measure the pressure in the mixture before and after its passage through the restriction. Calculating means, such as a programmed microprocessor, determine the true density from these measurements using relationships given in the description. (author)
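
    The abstract does not reproduce the relationships used by the calculating means. As a hypothetical sketch of how two apparent-density readings at two pressures can yield a gas-free density, the following assumes isothermal ideal-gas compression of the entrained gas and a negligible gas mass; the actual relations in the patent description may differ.

    ```python
    def true_density(rho1, rho2, p1, p2):
        """True (gas-free) density of a slurry from two apparent-density readings.

        rho1, rho2 : apparent densities measured at absolute pressures p1, p2.
        Assumes isothermal ideal-gas compression of the entrained gas and a
        negligible gas mass (a plausible reading, not a quote from the patent).
        """
        if p1 == p2:
            raise ValueError("the two readings must be taken at different pressures")
        # Gas volume per unit mass of mixture at pressure p1.
        v_gas_1 = (1.0 / rho1 - 1.0 / rho2) / (1.0 - p1 / p2)
        # Specific volume of the gas-free (solid + liquid) fraction.
        v_solid_liquid = 1.0 / rho1 - v_gas_1
        return 1.0 / v_solid_liquid

    # Example: apparent readings of 1.667 and 1.818 kg/L at 1 bar and 2 bar
    # correspond to a true density of about 2.0 kg/L.
    print(true_density(1.667, 1.818, 1.0, 2.0))
    ```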

  5. Exchange-correlation energies of atoms from efficient density functionals: influence of the electron density

    Science.gov (United States)

    Tao, Jianmin; Ye, Lin-Hui; Duan, Yuhua

    2017-12-01

    The primary goal of Kohn-Sham density functional theory is to evaluate the exchange-correlation contribution to electronic properties. However, the accuracy of a density functional can be affected by the electron density. Here we apply the nonempirical Tao-Mo (TM) semilocal functional to study the influence of the electron density on the exchange and correlation energies of atoms and ions, and compare the results with the commonly used nonempirical semilocal functionals, the local spin-density approximation (LSDA), Perdew-Burke-Ernzerhof (PBE), and Tao-Perdew-Staroverov-Scuseria (TPSS), and the hybrid functional PBE0. We find that the spin-restricted Hartree-Fock density yields exchange and correlation energies in good agreement with the Optimized Effective Potential method, particularly for spherical atoms and ions. However, the errors of these semilocal and hybrid functionals become larger for self-consistent densities. We further find that the quality of the electron density has a greater effect on the exchange-correlation energies of the kinetic-energy-density-dependent meta-GGA functionals TPSS and TM than on those of the LSDA and GGA, and therefore should have a greater influence on the performance of meta-GGA functionals. Finally, we show that the influence of the density quality on PBE0 is slightly reduced compared to that on PBE, due to the exact-exchange mixing.

  6. Radiological optimization

    International Nuclear Information System (INIS)

    Zeevaert, T.

    1998-01-01

    Radiological optimization is one of the basic principles in each radiation-protection system and it is a basic requirement in the safety standards for radiation protection in the European Communities. The objectives of the research, performed in this field at the Belgian Nuclear Research Centre SCK-CEN, are: (1) to implement the ALARA principles in activities with radiological consequences; (2) to develop methodologies for optimization techniques in decision-aiding; (3) to optimize radiological assessment models by validation and intercomparison; (4) to improve methods to assess in real time the radiological hazards in the environment in case of an accident; (5) to develop methods and programmes to assist decision-makers during a nuclear emergency; (6) to support the policy of radioactive waste management authorities in the field of radiation protection; (7) to investigate existing software programmes in the domain of multi criteria analysis. The main achievements for 1997 are given

  7. Optimizing detectability

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    HPLC is useful for trace and ultratrace analyses of a variety of compounds. For most applications, HPLC is useful for determinations in the nanogram-to-microgram range; however, detection limits of a picogram or less have been demonstrated in certain cases. These determinations require state-of-the-art capability; several examples of such determinations are provided in this chapter. As mentioned before, to detect and/or analyze low quantities of a given analyte at submicrogram or ultratrace levels, it is necessary to optimize the whole separation system, including the quantity and type of sample, sample preparation, HPLC equipment, chromatographic conditions (including column), choice of detector, and quantitation techniques. A limited discussion is provided here for optimization based on theoretical considerations, chromatographic conditions, detector selection, and miscellaneous approaches to detectability optimization. 59 refs

  8. Pareto-optimal alloys

    DEFF Research Database (Denmark)

    Bligaard, Thomas; Johannesson, Gisli Holmar; Ruban, Andrei

    2003-01-01

    Large databases that can be used in the search for new materials with specific properties remain an elusive goal in materials science. The problem is complicated by the fact that the optimal material for a given application is usually a compromise between a number of materials properties and the cost. In this letter we present a database consisting of the lattice parameters, bulk moduli, and heats of formation for over 64 000 ordered metallic alloys, which has been established by direct first-principles density-functional-theory calculations. Furthermore, we use a concept from economic theory, the Pareto-optimal set, to determine optimal alloy solutions for the compromise between low compressibility, high stability, and cost.
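
    The Pareto-optimal set used in this record is the subset of candidates that no other candidate matches or beats in every criterion simultaneously. A minimal sketch of how such a set can be extracted from tabulated screening data is given below; the alloy rows are hypothetical placeholders, not entries from the 64 000-alloy database.

    ```python
    import numpy as np

    def pareto_optimal(points):
        """Boolean mask of the Pareto-optimal (non-dominated) rows.

        Every column of `points` is a criterion to be minimized. A row is kept
        if no other row is at least as good in every criterion and strictly
        better in at least one.
        """
        points = np.asarray(points, dtype=float)
        n = len(points)
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            for j in range(n):
                if i != j and np.all(points[j] <= points[i]) and np.any(points[j] < points[i]):
                    keep[i] = False  # row i is dominated by row j
                    break
        return keep

    # Hypothetical screening rows: (compressibility, -stability, cost), all minimized.
    alloys = np.array([
        [1.2, -0.8, 3.0],
        [1.0, -0.9, 5.0],
        [1.5, -0.7, 1.0],
        [1.3, -0.6, 4.0],   # dominated by the first row
    ])
    print(pareto_optimal(alloys))   # -> [ True  True  True False]
    ```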

  9. Unconstrained Optimization

    DEFF Research Database (Denmark)

    Frandsen, P. E.; Jonasson, K.; Nielsen, Hans Bruun

    1999-01-01

    This lecture note is intended for use in the course 04212 Optimization and Data Fitting at the Technical University of Denmark. It covers about 25% of the curriculum. Hopefully, the note may be useful also to interested persons not participating in that course. The aim of the note is to give an introduction to algorithms for unconstrained optimization. We present Conjugate Gradient, Damped Newton, and Quasi-Newton methods together with the relevant theoretical background. The reader is assumed to be familiar with algorithms for solving linear and nonlinear systems of equations, at a level corresponding...

  10. On density forecast evaluation

    NARCIS (Netherlands)

    Diks, C.

    2008-01-01

    Traditionally, probability integral transforms (PITs) have been popular means for evaluating density forecasts. For an ideal density forecast, the PITs should be uniformly distributed on the unit interval and independent. However, this is only a necessary condition, and not a sufficient one, as
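
    As an illustration of the PIT check described in this record (not code from the paper), the sketch below computes u_t = F_t(y_t) for a toy Gaussian forecast and tests the two necessary conditions mentioned: uniformity on the unit interval and independence. The forecast and data-generating distributions are assumptions chosen only for the example.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Toy setup: the data are N(0, 1); the forecaster issues Gaussian densities
    # with the correct mean but a slightly overstated standard deviation.
    y = rng.normal(0.0, 1.0, size=500)
    forecast_mean, forecast_std = 0.0, 1.2

    # Probability integral transforms u_t = F_t(y_t).
    u = stats.norm.cdf(y, loc=forecast_mean, scale=forecast_std)

    # Necessary (but not sufficient) conditions for an ideal density forecast:
    # the u_t should be uniform on [0, 1] and serially independent.
    ks_stat, ks_pvalue = stats.kstest(u, "uniform")
    lag1_corr = np.corrcoef(u[:-1], u[1:])[0, 1]

    print(f"KS test p-value (uniformity): {ks_pvalue:.3f}")
    print(f"lag-1 autocorrelation:        {lag1_corr:.3f}")
    ```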

  11. Learning Grasp Affordance Densities

    DEFF Research Database (Denmark)

    Detry, Renaud; Kraft, Dirk; Kroemer, Oliver

    2011-01-01

    We address the issue of learning and representing object grasp affordance models. We model grasp affordances with continuous probability density functions (grasp densities) which link object-relative grasp poses to their success probability. The underlying function representation is nonparametric and relies on kernel density estimation to provide a continuous model. Grasp densities are learned and refined from exploration, by letting a robot "play" with an object in a sequence of grasp-and-drop actions: the robot uses visual cues to generate a set of grasp hypotheses; it then executes these and records their outcomes. When a satisfactory number of grasp data is available, an importance-sampling algorithm turns these into a grasp density. We evaluate our method in a largely autonomous learning experiment run on three objects of distinct shapes. The experiment shows how learning increases success...
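
    To make the kernel-density-estimation step concrete, the sketch below builds a toy "grasp density" from simulated grasp-and-drop outcomes in two dimensions; real grasp densities live on the 6-D pose space, and the paper's importance-sampling refinement is not reproduced. All data and parameters here are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)

    # Hypothetical grasp-and-drop trials: object-relative 2-D grasp positions
    # and a success flag (success is more likely near one region of the object).
    candidates = rng.uniform(-1.0, 1.0, size=(500, 2))
    distance_sq = np.sum((candidates - [0.3, -0.2]) ** 2, axis=1)
    success = np.exp(-distance_sq / 0.2) > rng.uniform(size=500)

    # Kernel density estimate built from the successful grasps only: a
    # continuous, nonparametric model of the grasp affordance.
    grasp_density = gaussian_kde(candidates[success].T)

    # Evaluate the density at new candidate grasps and sample new grasp poses.
    new_candidates = rng.uniform(-1.0, 1.0, size=(5, 2))
    print(grasp_density(new_candidates.T))
    print(grasp_density.resample(3).T)
    ```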

  12. Optimal transport

    CERN Document Server

    Eckmann, B

    2008-01-01

    At the close of the 1980s, the independent contributions of Yann Brenier, Mike Cullen and John Mather launched a revolution in the venerable field of optimal transport founded by G Monge in the 18th century, which has made breathtaking forays into various other domains of mathematics ever since. The author presents a broad overview of this area.

  13. Topology optimization

    DEFF Research Database (Denmark)

    Bendsøe, Martin P.; Sigmund, Ole

    2007-01-01

    Taking as a starting point a design case for a compliant mechanism (a force inverter), the fundamental elements of topology optimization are described. The basis for the developments is a FEM format for this design problem and emphasis is given to the parameterization of design as a raster image...

  14. On the efficiency of chaos optimization algorithms for global optimization

    International Nuclear Information System (INIS)

    Yang Dixiong; Li Gang; Cheng Gengdong

    2007-01-01

    Chaos optimization algorithms, as a novel method of global optimization, have attracted much attention; they have all been based on the Logistic map. However, we have noticed that the probability density function of the chaotic sequences derived from the Logistic map is a Chebyshev-type one, which may affect the global searching capacity and computational efficiency of chaos optimization algorithms considerably. Considering the statistical properties of the chaotic sequences of the Logistic map and the Kent map, the improved hybrid chaos-BFGS optimization algorithm and the Kent map based hybrid chaos-BFGS algorithm are proposed. Five typical nonlinear functions with multimodal characteristics are tested to compare the performance of five hybrid optimization algorithms: the conventional Logistic map based chaos-BFGS algorithm, the improved Logistic map based chaos-BFGS algorithm, the Kent map based chaos-BFGS algorithm, the Monte Carlo-BFGS algorithm, and the mesh-BFGS algorithm. The computational performance of the five algorithms is compared, and the numerical results make us question the high efficiency of the chaos optimization algorithms claimed in some references. It is concluded that the efficiency of the hybrid optimization algorithms is influenced by the statistical properties of the chaotic/stochastic sequences generated from chaotic/stochastic algorithms, and by the location of the global optimum of the nonlinear functions. In addition, it is inappropriate to advocate the high efficiency of global optimization algorithms based only on several numerical examples of low-dimensional functions
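
    The contrast drawn in this record, between the Chebyshev-type invariant density of Logistic-map iterates and the uniform density of Kent-map iterates, can be reproduced with a few lines of code. The sketch below is illustrative only; the Kent-map parameter a = 0.7 and the seed values are assumptions, not values from the paper.

    ```python
    import numpy as np

    def logistic_sequence(x0, n, mu=4.0):
        """Chaotic sequence from the Logistic map x_{k+1} = mu * x_k * (1 - x_k).

        For mu = 4 the invariant density is Chebyshev-type,
        rho(x) = 1 / (pi * sqrt(x * (1 - x))), strongly peaked near 0 and 1.
        """
        xs = np.empty(n)
        x = x0
        for k in range(n):
            x = mu * x * (1.0 - x)
            xs[k] = x
        return xs

    def kent_sequence(x0, n, a=0.7):
        """Chaotic sequence from the Kent (skew tent) map, whose invariant
        density is uniform on (0, 1); a in (0, 1) is an illustrative choice."""
        xs = np.empty(n)
        x = x0
        for k in range(n):
            x = x / a if x <= a else (1.0 - x) / (1.0 - a)
            xs[k] = x
        return xs

    # Compare empirical histograms of the two sequences.
    log_seq = logistic_sequence(0.1234, 100_000)
    kent_seq = kent_sequence(0.1234, 100_000)
    print(np.histogram(log_seq, bins=10, range=(0, 1))[0])   # U-shaped counts
    print(np.histogram(kent_seq, bins=10, range=(0, 1))[0])  # roughly flat counts
    ```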

  15. Topology optimization of microwave waveguide filters

    DEFF Research Database (Denmark)

    Aage, Niels; Johansen, Villads Egede

    2017-01-01

    We present a density based topology optimization approach for the design of metallic microwave insert filters. A two-phase optimization procedure is proposed in which we, starting from a uniform design, first optimize to obtain a set of spectrally varying resonators, followed by a band gap optimization for the desired filter characteristics. This is illustrated through numerical experiments and comparison to a standard band pass filter design. It is seen that the carefully optimized topologies can sharpen the filter characteristics and improve performance. Furthermore, the obtained designs share little resemblance to standard filter layouts, and hence the proposed design method offers a new design tool in microwave engineering.

  16. Methodology of shell structure reinforcement layout optimization

    Science.gov (United States)

    Szafrański, Tomasz; Małachowski, Jerzy; Damaziak, Krzysztof

    2018-01-01

    This paper presents an optimization process for a reinforced shell diffuser intended for a small wind turbine (rated power of 3 kW). The diffuser structure consists of multiple reinforcements and a metal skin. This kind of structure is suitable for optimization in terms of the selection of reinforcement density, stringer cross-sections, sheet thickness, etc. The optimization approach assumes the reduction of the amount of work to be done between the optimization process and the final product design. The proposed optimization methodology is based on the application of a genetic algorithm to generate the optimal reinforcement layout. The obtained results are the basis for modifying the existing Small Wind Turbine (SWT) design.

  17. Maximization of eigenvalues using topology optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2000-01-01

    ... to localized modes in low density areas. The topology optimization problem is formulated using the SIMP method. Special attention is paid to a numerical method for removing localized eigenmodes in low density areas. The method is applied to numerical examples of maximizing the first eigenfrequency; one example...
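
    The SIMP method named in this record interpolates element stiffness as a power law of the design density. A minimal sketch of the commonly used modified SIMP interpolation is given below; the penalization power p = 3 and the stiffness floor are illustrative defaults, not values from the paper (a small floor is also the usual device that keeps void regions from producing spurious localized eigenmodes).

    ```python
    import numpy as np

    def simp_stiffness(rho, e0=1.0, e_min=1e-9, p=3.0):
        """Modified SIMP interpolation E(rho) = E_min + rho**p * (E0 - E_min).

        rho   : element design densities in [0, 1]
        p     : penalization power (p = 3 is a common, illustrative choice)
        e_min : small stiffness floor that keeps the stiffness matrix regular
                in void regions, where spurious localized eigenmodes tend to appear.
        """
        rho = np.clip(np.asarray(rho, dtype=float), 0.0, 1.0)
        return e_min + rho**p * (e0 - e_min)

    print(simp_stiffness([0.0, 0.1, 0.5, 1.0]))
    ```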

  18. The optimal density of cellular solids in axial tension.

    Science.gov (United States)

    Mihai, L Angela; Alayyash, Khulud; Wyatt, Hayley

    2017-05-01

    For cellular bodies with uniform cell size, wall thickness, and shape, an important question is whether the same volume of material has the same effect when arranged as many small cells or as fewer large cells. To answer this question, for finite element models of periodic structures of Mooney-type material with different structural geometry and subject to large strain deformations, we identify a nonlinear elastic modulus as the ratio between the mean effective stress and the mean effective strain in the solid cell walls, and show that this modulus increases when the thickness of the walls increases, as well as when the number of cells increases while the volume of solid material remains fixed. Since, under the specified conditions, this nonlinear elastic modulus increases also as the corresponding mean stress increases, either the mean modulus or the mean stress can be employed as indicator when the optimum wall thickness or number of cells is sought.

  19. Optimization of plasmid electrotransformation into Escherichia coli ...

    African Journals Online (AJOL)

    In order to improve electroporation, the optical density of the bacteria, the recovery time, and the electrical parameters (field strength and capacitance) were optimized using the Taguchi statistical method. ANOVA of the obtained data indicated that the optimal conditions for electrotransformation of the pET-28a (+) plasmid into Escherichia coli ...

  20. Topology optimization of microwave waveguide filters

    DEFF Research Database (Denmark)

    Aage, Niels; Johansen, Villads Egede

    2017-01-01

    We present a density based topology optimization approach for the design of metallic microwave insert filters. A two-phase optimization procedure is proposed in which we, starting from a uniform design, first optimize to obtain a set of spectrally varying resonators, followed by a band gap optimization for the desired filter characteristics. ... The obtained designs share little resemblance to standard filter layouts, and hence the proposed design method offers a new design tool in microwave engineering.

  1. Current density tensors

    Science.gov (United States)

    Lazzeretti, Paolo

    2018-04-01

    It is shown that nonsymmetric second-rank current density tensors, related to the current densities induced by magnetic fields and nuclear magnetic dipole moments, are fundamental properties of a molecule. Together with magnetizability, nuclear magnetic shielding, and nuclear spin-spin coupling, they completely characterize its response to magnetic perturbations. Gauge invariance, resolution into isotropic, deviatoric, and antisymmetric parts, and contributions of current density tensors to magnetic properties are discussed. The components of the second-rank tensor properties are rationalized via relationships explicitly connecting them to the direction of the induced current density vectors and to the components of the current density tensors. The contribution of the deviatoric part to the average value of magnetizability, nuclear shielding, and nuclear spin-spin coupling, uniquely determined by the antisymmetric part of current density tensors, vanishes identically. The physical meaning of isotropic and anisotropic invariants of current density tensors has been investigated, and the connection between anisotropy magnitude and electron delocalization has been discussed.

  2. A density gradient theory based method for surface tension calculations

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Michelsen, Michael Locht; Kontogeorgis, Georgios

    2016-01-01

    The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and the bulk phases, because it is a theoretically sound, consistent, and computationally affordable approach. Based on the observation that the optimal density path from the geometric-mean density gradient theory passes through the saddle point of the tangent plane distance to the bulk phases, we propose to estimate surface tension with an approximate density path profile that goes through this saddle point. The linear density gradient theory, which assumes linearly distributed densities between the two bulk phases, has also been investigated. Numerical problems do not occur with these density path profiles. These two approximation methods, together with the full density gradient theory, have been used to calculate the surface tension of various...
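
    For context, a standard textbook form of the geometric-mean density gradient theory (hedged here because the paper's own working equations are not reproduced in the abstract) reduces the surface tension of a planar interface to a single integral over density between the bulk phases:

    ```latex
    % Illustrative geometric-mean DGT expression for a single density coordinate:
    % c  : influence parameter
    % a0 : Helmholtz energy density of the homogeneous fluid
    \sigma = \int_{\rho_V}^{\rho_L} \sqrt{\, 2\, c\, \Delta\omega(\rho) \,}\; \mathrm{d}\rho ,
    \qquad
    \Delta\omega(\rho) = a_0(\rho) - \rho\,\mu_{\mathrm{eq}} + p_{\mathrm{eq}} .
    ```

    Here Δω is the tangent plane distance whose saddle point the optimal density path passes through, and μ_eq and p_eq are the equilibrium chemical potential and pressure.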

  3. Extracellular ATP elevates cytoplasmatic free Ca2+ in HeLa cells by the interaction with a 5'-nucleotide receptor

    NARCIS (Netherlands)

    Smit, M J; Leurs, R; Bloemers, S M; Tertoolen, L G; Bast, A; De Laat, S W; Timmerman, H

    1993-01-01

    In the present study we have characterized the effects of ATP and several other nucleotides on the intracellular Ca2+ levels of HeLa cells. Using fura-2 microscopy fluorescence measurements, the ATP-mediated increase in intracellular Ca2+ was shown to consist of a rapid rise which decreased after a

  4. Density and geometry of single component plasmas

    International Nuclear Information System (INIS)

    Speck, A.; Gabrielse, G.; Larochelle, P.; Le Sage, D.; Levitt, B.; Kolthammer, W.S.; McConnell, R.; Wrubel, J.; Grzonka, D.; Oelert, W.; Sefzick, T.; Zhang, Z.; Comeau, D.; George, M.C.; Hessels, E.A.; Storry, C.H.; Weel, M.; Walz, J.

    2007-01-01

    The density and geometry of p¯ and e+ plasmas in realistic trapping potentials are required to understand and optimize antihydrogen (H¯) formation. An aperture method and a quadrupole oscillation frequency method for characterizing such plasmas are compared for the first time, using electrons in a cylindrical Penning trap. Both methods are used in a way that makes it unnecessary to assume that the plasmas are spheroidal, and it is shown that they are not. Good agreement between the two methods illustrates the possibility to accurately determine plasma densities and geometries within non-idealized, realistic trapping potentials

  5. Density and geometry of single component plasmas

    CERN Document Server

    Speck, A; Larochelle, P; Le Sage, D; Levitt, B; Kolthammer, W S; McConnell, R; Wrubel, J; Grzonka, D; Oelert, W; Sefzick, T; Zhang, Z; Comeau, D; George, M C; Hessels, E A; Storry, C H; Weel, M; Walz, J

    2007-01-01

    The density and geometry of p¯ and e+ plasmas in realistic trapping potentials are required to understand and optimize antihydrogen (H¯) formation. An aperture method and a quadrupole oscillation frequency method for characterizing such plasmas are compared for the first time, using electrons in a cylindrical Penning trap. Both methods are used in a way that makes it unnecessary to assume that the plasmas are spheroidal, and it is shown that they are not. Good agreement between the two methods illustrates the possibility to accurately determine plasma densities and geometries within non-idealized, realistic trapping potentials.

  6. Optimization of the packing density of alumina powder distributions using statistical techniques

    Directory of Open Access Journals (Sweden)

    A. P. Silva

    2004-12-01

    ... related statistical techniques (software Statistica), the particle size distribution that maximises the packing density was obtained in both cases and, by comparison with theoretical particle size distributions, the validity of Alfred's theoretical model for perfect spheres was demonstrated. These results clearly show that the harmful effect of the non-spherical shape of real particles can, in fact, be compensated by the optimization of the overall particle size distribution.
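
    Alfred's model mentioned in this record (also known as the Dinger-Funk distribution) prescribes a target cumulative particle-size distribution for maximum packing density. The sketch below evaluates that target curve; the exponent q = 0.37 and the size limits are illustrative assumptions, not the values fitted in the paper.

    ```python
    import numpy as np

    def alfred_cpft(d, d_min, d_max, q=0.37):
        """Alfred (Dinger-Funk) target distribution for dense particle packing.

        Returns the cumulative percent finer than diameter d:
            CPFT = 100 * (d**q - d_min**q) / (d_max**q - d_min**q)
        q ~ 0.37 is the exponent often quoted for maximum packing of spheres;
        it is used here only as an illustrative default.
        """
        d = np.asarray(d, dtype=float)
        return 100.0 * (d**q - d_min**q) / (d_max**q - d_min**q)

    # Target curve for an alumina powder spanning 0.1 to 100 micrometres.
    sizes = np.logspace(-1, 2, 7)
    print(np.round(alfred_cpft(sizes, d_min=0.1, d_max=100.0), 1))
    ```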

  7. Optimal control

    CERN Document Server

    Aschepkov, Leonid T; Kim, Taekyun; Agarwal, Ravi P

    2016-01-01

    This book is based on lectures from a one-year course at the Far Eastern Federal University (Vladivostok, Russia) as well as on workshops on optimal control offered to students at various mathematical departments at the university level. The main themes of the theory of linear and nonlinear systems are considered, including the basic problem of establishing the necessary and sufficient conditions of optimal processes. In the first part of the course, the theory of linear control systems is constructed on the basis of the separation theorem and the concept of a reachability set. The authors prove the closure of a reachability set in the class of piecewise continuous controls, and the problems of controllability, observability, identification, performance and terminal control are also considered. The second part of the course is devoted to nonlinear control systems. Using the method of variations and the Lagrange multipliers rule of nonlinear problems, the authors prove the Pontryagin maximum principle for prob...

  8. Intrinsic-density functionals

    International Nuclear Information System (INIS)

    Engel, J.

    2007-01-01

    The Hohenberg-Kohn theorem and Kohn-Sham procedure are extended to functionals of the localized intrinsic density of a self-bound system such as a nucleus. After defining the intrinsic-density functional, we modify the usual Kohn-Sham procedure slightly to evaluate the mean-field approximation to the functional, and carefully describe the construction of the leading corrections for a system of fermions in one dimension with a spin-degeneracy equal to the number of particles N. Despite the fact that the corrections are complicated and nonlocal, we are able to construct a local Skyrme-like intrinsic-density functional that, while different from the exact functional, shares with it a minimum value equal to the exact ground-state energy at the exact ground-state intrinsic density, to next-to-leading order in 1/N. We briefly discuss implications for real Skyrme functionals

  9. Density functional theory

    International Nuclear Information System (INIS)

    Das, M.P.

    1984-07-01

    The state of the art of the density functional formalism (DFT) is reviewed. The theory is quantum statistical in nature; its simplest version is the well-known Thomas-Fermi theory. The DFT is a powerful formalism in which one can treat the effect of interactions in inhomogeneous systems. After some introductory material, the DFT is outlined from the two basic theorems, and various generalizations of the theorems appropriate to several physical situations are pointed out. Next, various approximations to the density functionals are presented and some practical schemes, discussed; the approximations include an electron gas of almost constant density and an electron gas of slowly varying density. Then applications of DFT in various diverse areas of physics (atomic systems, plasmas, liquids, nuclear matter) are mentioned, and its strengths and weaknesses are pointed out. In conclusion, more recent developments of DFT are indicated

  10. Low Density Supersonic Decelerators

    Data.gov (United States)

    National Aeronautics and Space Administration — The Low-Density Supersonic Decelerator project will demonstrate the use of inflatable structures and advanced parachutes that operate at supersonic speeds to more...

  11. Bone mineral density test

    Science.gov (United States)

    BMD test; Bone density test; Bone densitometry; DEXA scan; DXA; Dual-energy x-ray absorptiometry; p-DEXA; Osteoporosis - BMD ... This scan is the best test to predict your risk of fractures, especially of ...

  12. Density scaling for multiplets

    International Nuclear Information System (INIS)

    Nagy, A

    2011-01-01

    Generalized Kohn-Sham equations are presented for lowest-lying multiplets. The way of treating non-integer particle numbers is coupled with an earlier method of the author. The fundamental quantity of the theory is the subspace density. The Kohn-Sham equations are similar to the conventional Kohn-Sham equations. The difference is that the subspace density is used instead of the density and the Kohn-Sham potential is different for different subspaces. The exchange-correlation functional is studied using density scaling. It is shown that there exists a value of the scaling factor ζ for which the correlation energy disappears. Generalized OPM and Krieger-Li-Iafrate (KLI) methods incorporating correlation are presented. The ζKLI method, being as simple as the original KLI method, is proposed for multiplets.

  13. Fission level densities

    International Nuclear Information System (INIS)

    Maslov, V.M.

    1998-01-01

    Fission level densities (or fissioning nucleus level densities at fission saddle deformations) are required for statistical model calculations of actinide fission cross sections. Back-shifted Fermi-Gas Model, Constant Temperature Model and Generalized Superfluid Model (GSM) are widely used for the description of level densities at stable deformations. These models provide approximately identical level density description at excitations close to the neutron binding energy. It is at low excitation energies that they are discrepant, while this energy region is crucial for fission cross section calculations. A drawback of back-shifted Fermi gas model and traditional constant temperature model approaches is that it is difficult to include in a consistent way pair correlations, collective effects and shell effects. Pair, shell and collective properties of nucleus do not reduce just to the renormalization of level density parameter a, but influence the energy dependence of level densities. These effects turn out to be important because they seem to depend upon deformation of either equilibrium or saddle-point. These effects are easily introduced within GSM approach. Fission barriers are another key ingredients involved in the fission cross section calculations. Fission level density and barrier parameters are strongly interdependent. This is the reason for including fission barrier parameters along with the fission level densities in the Starter File. The recommended file is maslov.dat - fission barrier parameters. Recent version of actinide fission barrier data obtained in Obninsk (obninsk.dat) should only be considered as a guide for selection of initial parameters. These data are included in the Starter File, together with the fission barrier parameters recommended by CNDC (beijing.dat), for completeness. (author)

  14. Density-wave oscillations

    International Nuclear Information System (INIS)

    Belblidia, L.A.; Bratianu, C.

    1979-01-01

    Boiling flow in a steam generator, a water-cooled reactor, and other multiphase processes can be subject to instabilities. It appears that the most predominant instabilities are the so-called density-wave oscillations. They can cause difficulties for three main reasons: they may induce burnout; they may cause mechanical vibrations of components; and they create system control problems. A comprehensive review is presented of experimental and theoretical studies concerning density-wave oscillations. (author)

  15. Density of liquid Ytterbium

    International Nuclear Information System (INIS)

    Stankus, S.V.; Basin, A.S.

    1983-01-01

    Results are presented for measurements of the density of metallic ytterbium in the liquid state and at the liquid-solid phase transition. Based on the numerical data obtained, the coefficient of thermal expansion β of the liquid and the density discontinuity on melting Δρ_m are calculated. The magnitudes of β and Δρ_m for the heavy lanthanides are compared

  16. Negative Ion Density Fronts

    International Nuclear Information System (INIS)

    Igor Kaganovich

    2000-01-01

    Negative ions tend to stratify in electronegative plasmas with hot electrons (electron temperature Te much larger than ion temperature Ti, Te ≫ Ti). The boundary separating a plasma containing negative ions and a plasma without negative ions is usually thin, so that the negative ion density falls rapidly to zero, forming a negative ion density front. We review theoretical, experimental and numerical results giving the spatio-temporal evolution of negative ion density fronts during plasma ignition, the steady state, and extinction (afterglow). During plasma ignition, negative ion fronts are the result of the breaking of smooth plasma density profiles during nonlinear convection. In a steady-state plasma, the fronts are boundary layers with steepening of ion density profiles, also due to nonlinear convection. But during plasma extinction, the ion fronts are of a completely different nature. Negative ions diffuse freely in the plasma core (no convection), whereas the negative ion front propagates towards the chamber walls with a nearly constant velocity. The concept of fronts turns out to be very effective in the analysis of plasma density profile evolution in strongly non-isothermal plasmas

  17. High-Power-Density, High-Energy-Density Fluorinated Graphene for Primary Lithium Batteries

    Directory of Open Access Journals (Sweden)

    Guiming Zhong

    2018-03-01

    Li/CFx is one of the highest-energy-density primary battery systems; however, poor rate capability hinders its practical application in high-power devices. Here we report the preparation of fluorinated graphene (GFx) with superior performance through a direct gas fluorination method. We find that the proportion of so-called "semi-ionic" C-F bonds among all C-F bonds has a more critical impact on the rate performance of the GFx than the sp2 carbon content, morphology, structure, and specific surface area of the material. The rate capability remains excellent until the semi-ionic C-F bond proportion in the GFx decreases. Thus, by optimizing the semi-ionic C-F content in our GFx, we obtain an optimal x of 0.8, with which GF0.8 exhibits a very high energy density of 1,073 Wh kg⁻¹ and an excellent power density of 21,460 W kg⁻¹ at a high current density of 10 A g⁻¹. More importantly, our approach opens a new avenue to obtain fluorinated carbon with high energy densities without compromising high power densities.

  18. Discrete optimization

    CERN Document Server

    Parker, R Gary

    1988-01-01

    This book treats the fundamental issues and algorithmic strategies emerging as the core of the discipline of discrete optimization in a comprehensive and rigorous fashion. Following an introductory chapter on computational complexity, the basic algorithmic results for the two major models of polynomial algorithms are introduced--models using matroids and linear programming. Further chapters treat the major non-polynomial algorithms: branch-and-bound and cutting planes. The text concludes with a chapter on heuristic algorithms.Several appendixes are included which review the fundamental ideas o

  19. Rational Density Functional Selection Using Game Theory.

    Science.gov (United States)

    McAnanama-Brereton, Suzanne; Waller, Mark P

    2018-01-22

    Theoretical chemistry has a paradox of choice due to the availability of a myriad of density functionals and basis sets. Traditionally, a particular density functional is chosen on the basis of the level of user expertise (i.e., subjective experiences). Herein we circumvent the user-centric selection procedure by describing a novel approach for objectively selecting a particular functional for a given application. We achieve this by employing game theory to identify optimal functional/basis set combinations. A three-player (accuracy, complexity, and similarity) game is devised, through which Nash equilibrium solutions can be obtained. This approach has the advantage that results can be systematically improved by enlarging the underlying knowledge base, and the deterministic selection procedure mathematically justifies the density functional and basis set selections.

  20. Elastic reflection waveform inversion with variable density

    KAUST Repository

    Li, Yuanyuan

    2017-08-17

    Elastic full waveform inversion (FWI) provides a better description of the subsurface than that given by the acoustic assumption. However, it suffers from a more serious cycle-skipping problem than the latter. Reflection waveform inversion (RWI) provides a method to build a good background model, which can serve as an initial model for elastic FWI. Therefore, we introduce the concept of RWI for elastic media, and propose elastic RWI with variable density. We apply Born modeling to generate the synthetic reflection data, using optimized perturbations of P- and S-wave velocities and density. The inversion for the perturbations in P- and S-wave velocities and density is similar to elastic least-squares reverse time migration (LSRTM). An incorrect initial model will lead to misfits at the far offsets of reflections, which can thus be utilized to update the background velocity. We optimize the perturbation and background models in a nested approach. Numerical tests on the Marmousi model demonstrate that our method is able to build reasonably good background models for elastic FWI in the absence of low frequencies, and that it can deal with variable density, which is needed in real cases.

  1. CRISS power spectral density

    International Nuclear Information System (INIS)

    Vaeth, W.

    1979-04-01

    The correlation of signal components at different frequencies, such as higher harmonics, cannot be detected by a normal power spectral density measurement, since this technique correlates only components at the same frequency. This paper describes a special method for measuring the correlation of two signal components at different frequencies: the CRISS power spectral density. From this new function in frequency analysis, the correlation of two components can be determined quantitatively, whether they stem from one signal or from two different signals. The principle of the method, suitable for the higher harmonics of a signal as well as for any other frequency combination, is shown for the digital frequency analysis technique. Two examples of CRISS power spectral densities demonstrate the operation of the new method. (orig.)

  2. High density dispersion fuel

    International Nuclear Information System (INIS)

    Hofman, G.L.

    1996-01-01

    A fuel development campaign that results in an aluminum plate-type fuel of unlimited LEU burnup capability with a uranium loading of 9 grams per cm³ of meat should be considered an unqualified success. The current worldwide approved and accepted highest loading is 4.8 g cm⁻³ with U₃Si₂ as fuel. High-density uranium compounds offer no real density advantage over U₃Si₂ and have less desirable fabrication and performance characteristics as well. Of the higher-density compounds, U₃Si has approximately a 30% higher uranium density, but the density of the U₆X compounds would yield the factor 1.5 needed to achieve a 9 g cm⁻³ uranium loading. Unfortunately, irradiation tests proved these peritectic compounds have poor swelling behavior. It is for this reason that the authors are turning to uranium alloys. The reason pure uranium was not seriously considered as a dispersion fuel is mainly due to its high rate of growth and swelling at low temperatures. This problem was solved, at least for relatively low burnup application in non-dispersion fuel elements, with small additions of Si, Fe, and Al. This so-called adjusted uranium has nearly the same density as pure α-uranium, and it seems prudent to reconsider this alloy as a dispersant. Further modifications of uranium metal to achieve higher burnup swelling stability involve stabilization of the cubic γ phase at low temperatures where normally the α phase exists. Several low neutron capture cross section elements such as Zr, Nb, Ti and Mo accomplish this in various degrees. The challenge is to produce a suitable form of fuel powder and develop a plate fabrication procedure, as well as obtain high burnup capability through irradiation testing

  3. Gap and density theorems

    CERN Document Server

    Levinson, N

    1940-01-01

    A typical gap theorem of the type discussed in the book deals with a set of exponential functions {e^{iλ_n x}} on an interval of the real line and explores the conditions under which this set generates the entire L_2 space on this interval. A typical gap theorem deals with functions f on the real line such that many Fourier coefficients of f vanish. The main goal of this book is to investigate relations between density and gap theorems and to study various cases where these theorems hold. The author also shows that density- and gap-type theorems are related to various properties...

  4. Nuclear level density

    International Nuclear Information System (INIS)

    Cardoso Junior, J.L.

    1982-10-01

    Experimental data show that the number of nuclear states increases rapidly with increasing excitation energy. The properties of highly excited nuclei are important for many nuclear reactions, mainly those that proceed via compound-nucleus-type processes. In this case, it is sufficient to know the statistical properties of the nuclear levels. The first of these is the nuclear level density function. Several theoretical models which describe the level density are presented. Statistical-mechanics and quantum-mechanics formalisms as well as semi-empirical results are analysed and discussed. (Author)

  5. Polarizable Density Embedding

    DEFF Research Database (Denmark)

    Olsen, Jógvan Magnus Haugaard; Steinmann, Casper; Ruud, Kenneth

    2015-01-01

    We present a new QM/QM/MM-based model for calculating molecular properties and excited states of solute-solvent systems. We denote this new approach the polarizable density embedding (PDE) model, and it represents an extension of our previously developed polarizable embedding (PE) strategy. The PDE model is a focused computational approach in which a core region of the system studied is represented by a quantum-chemical method, whereas the environment is divided into two other regions: an inner and an outer region. Molecules belonging to the inner region are described by their exact densities...

  6. Holographic magnetisation density waves

    Energy Technology Data Exchange (ETDEWEB)

    Donos, Aristomenis [Centre for Particle Theory and Department of Mathematical Sciences, Durham University,Stockton Road, Durham, DH1 3LE (United Kingdom); Pantelidou, Christiana [Departament de Fisica Quantica i Astrofisica & Institut de Ciencies del Cosmos (ICC),Universitat de Barcelona,Marti i Franques 1, 08028 Barcelona (Spain)

    2016-10-10

    We numerically construct asymptotically AdS black brane solutions of D=4 Einstein theory coupled to a scalar and two U(1) gauge fields. The solutions are holographically dual to d=3 CFTs in a constant external magnetic field along one of the U(1)'s. Below a critical temperature the system's magnetisation density becomes inhomogeneous, leading to the spontaneous formation of current density waves. We find that the transition can be of second order and that the solutions which minimise the free energy locally in the parameter space of solutions have the averaged stress tensor of a perfect fluid.

  7. Condensation energy density in Bi-2212 superconductors

    International Nuclear Information System (INIS)

    Matsushita, Teruo; Kiuchi, Masaru; Haraguchi, Teruhisa; Imada, Takeki; Okamura, Kazunori; Okayasu, Satoru; Uchida, Satoshi; Shimoyama, Jun-ichi; Kishio, Kohji

    2006-01-01

    The relationship between the condensation energy density and the anisotropy parameter, γ_a, has been derived for Bi-2212 superconductors in various anisotropic states by analysing the critical current density due to columnar defects introduced by heavy-ion irradiation. The critical current density depended on the size of the defects, determined by the kind and irradiation energy of the ions. A significantly large critical current density of 17.0 MA cm⁻² was obtained at 5 K and 0.1 T, even for the defect density of a matching field of 1 T, in a specimen irradiated with iodine ions. The dependence of the critical current density on the size of the defects agreed well with the prediction from the summation theory of pinning forces, and the condensation energy density could be obtained consistently from specimens irradiated with different ions. The condensation energy density obtained increased with decreasing γ_a over the entire range of measurement temperature, and reached about 60% of the value for the most three-dimensional Y-123 observed by Civale et al at 5 K. This gives the reason for the very strong pinning in Bi-2212 superconductors at low temperatures. The thermodynamic critical field obtained decreased linearly with increasing temperature and extrapolated to zero at a certain characteristic temperature, T*, lower than the critical temperature, T_c. T*, which seems to be associated with the superconductivity in the block layers, was highest for the optimally doped specimen. This shows that the superconductivity becomes more inhomogeneous as the doped state of a superconductor deviates from the optimum condition

  8. A short numerical study on the optimization methods influence on topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Sigmund, Ole; Stolpe, Mathias

    2017-01-01

    Structural topology optimization problems are commonly defined using continuous design variables combined with material interpolation schemes. One of the challenges for density based topology optimization observed in the review article (Sigmund and Maute, Struct Multidiscip Optim 48(6):1031–1055, 2013) is the slow convergence that is often encountered in practice when an almost solid-and-void design is found. The purpose of this forum article is to present some preliminary observations on how designs evolve during the optimization process for different choices of optimization methods...

  9. A Tryst With Density

    Indian Academy of Sciences (India)

    best known for developing the density functional theory (DFT). This is an extremely ... lem that has become famous in popular culture is that of the planet. Tatooine. Fans of ... the Schrödinger equation (or, if relativistic effects are important, the Dirac .... it supplies a moral justification for one's subsequent endeavours along ...

  10. Density in Liquids.

    Science.gov (United States)

    Nesin, Gert; Barrow, Lloyd H.

    1984-01-01

    Describes a fourth-grade unit on density which introduces a concept useful in the study of chemistry and procedures appropriate to the chemistry laboratory. The hands-on activities, which use simple equipment and household substances, are at the level of thinking Piaget describes as concrete operational. (BC)

  11. Destiny from density

    OpenAIRE

    Seewaldt, Victoria L.

    2012-01-01

    The identification of a signalling protein that regulates the accumulation of fat and connective tissue in breasts may help to explain why high mammographic density is linked to breast-cancer risk and may provide a marker for predicting this risk.

  12. Polarizable Density Embedding

    DEFF Research Database (Denmark)

    Reinholdt, Peter; Kongsted, Jacob; Olsen, Jógvan Magnus Haugaard

    2017-01-01

    We analyze the performance of the polarizable density embedding (PDE) model-a new multiscale computational approach designed for prediction and rationalization of general molecular properties of large and complex systems. We showcase how the PDE model very effectively handles the use of large...

  13. Convex Optimization in R

    Directory of Open Access Journals (Sweden)

    Roger Koenker

    2014-09-01

    Full Text Available Convex optimization now plays an essential role in many facets of statistics. We briefly survey some recent developments and describe some implementations of these methods in R . Applications of linear and quadratic programming are introduced including quantile regression, the Huber M-estimator and various penalized regression methods. Applications to additively separable convex problems subject to linear equality and inequality constraints such as nonparametric density estimation and maximum likelihood estimation of general nonparametric mixture models are described, as are several cone programming problems. We focus throughout primarily on implementations in the R environment that rely on solution methods linked to R, like MOSEK by the package Rmosek. Code is provided in R to illustrate several of these problems. Other applications are available in the R package REBayes, dealing with empirical Bayes estimation of nonparametric mixture models.
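
    As a concrete illustration of the kind of convex program surveyed above, the sketch below casts quantile regression as a linear program. It is written in Python with scipy's linprog rather than in R with Rmosek (an illustrative substitution, not the implementation accompanying the paper); the split of each residual into positive and negative parts is the standard LP reformulation of the check-loss objective.

```python
# Quantile regression cast as a linear program (LP): minimize the check loss
# sum_i rho_tau(y_i - x_i @ beta) by splitting each residual into positive and
# negative parts u_i, v_i >= 0 with y - X @ beta = u - v.
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau=0.5):
    n, p = X.shape
    # Decision variables: [beta (free), u >= 0, v >= 0].
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1.0 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])   # X @ beta + u - v = y
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=200)
print(quantile_regression(X, y, tau=0.5))   # median fit, roughly [1, 2]
```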

  14. [SIAM conference on optimization

    Energy Technology Data Exchange (ETDEWEB)

    1992-05-10

    Abstracts are presented of 63 papers on the following topics: large-scale optimization, interior-point methods, algorithms for optimization, problems in control, network optimization methods, and parallel algorithms for optimization problems.

  15. Optimal census by quorum sensing

    Science.gov (United States)

    Taillefumier, Thibaud

    Bacteria regulate their gene expression in response to changes in local cell density in a process called quorum sensing. To synchronize their gene-expression programs, these bacteria need to glean as much information as possible about local density. Our study is the first to physically model the flow of information in a quorum-sensing microbial community, wherein the internal regulator of the individual's response tracks the external cell density via an endogenously generated shared signal. Combining information theory and Lagrangian optimization, we find that quorum-sensing systems can improve their information capabilities by tuning circuit feedbacks. At the population level, external feedback adjusts the dynamic range of the shared input to individuals' detection channels. At the individual level, internal feedback adjusts the regulator's response time to dynamically balance output noise reduction and signal tracking ability. Our analysis suggests that achieving information benefit via feedback requires dedicated systems to control gene expression noise, such as sRNA-based regulation.

  16. Quantal density functional theory

    CERN Document Server

    Sahni, Viraht

    2016-01-01

    This book deals with quantal density functional theory (QDFT) which is a time-dependent local effective potential theory of the electronic structure of matter. The treated time-independent QDFT constitutes a special case. In the 2nd edition, the theory is extended to include the presence of external magnetostatic fields. The theory is a description of matter based on the ‘quantal Newtonian’ first and second laws which is in terms of “classical” fields that pervade all space, and their quantal sources. The fields, which are explicitly defined, are separately representative of electron correlations due to the Pauli exclusion principle, Coulomb repulsion, correlation-kinetic, correlation-current-density, and correlation-magnetic effects. The book further describes Schrödinger theory from the new physical perspective of fields and quantal sources. It also describes traditional Hohenberg-Kohn-Sham DFT, and explains via QDFT the physics underlying the various energy functionals and functional derivatives o...

  17. Discrete density of states

    International Nuclear Information System (INIS)

    Aydin, Alhun; Sisman, Altug

    2016-01-01

    By considering the quantum-mechanically minimum allowable energy interval, we exactly count number of states (NOS) and introduce discrete density of states (DOS) concept for a particle in a box for various dimensions. Expressions for bounded and unbounded continua are analytically recovered from discrete ones. Even though substantial fluctuations prevail in discrete DOS, they're almost completely flattened out after summation or integration operation. It's seen that relative errors of analytical expressions of bounded/unbounded continua rapidly decrease for high NOS values (weak confinement or high energy conditions), while the proposed analytical expressions based on Weyl's conjecture always preserve their lower error characteristic. - Highlights: • Discrete density of states considering minimum energy difference is proposed. • Analytical DOS and NOS formulas based on Weyl conjecture are given. • Discrete DOS and NOS functions are examined for various dimensions. • Relative errors of analytical formulas are much better than the conventional ones.
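
    A minimal numerical sketch of the counting idea described above, assuming a cubic box with E = E1(nx² + ny² + nz²) in units of E1 = h²/(8mL²) (a standard textbook setting used here for illustration, not taken from the paper): the exact number of states below a given energy is compared with the leading continuum (Weyl-type) estimate, and the relative error indeed shrinks as the energy, and hence the NOS, grows.

```python
# Exact number of states (NOS) of a particle in a cubic box, E = E1*(nx^2+ny^2+nz^2)
# with nx, ny, nz >= 1 and E1 = h^2/(8*m*L^2), compared with the continuum
# (Weyl-type leading-order) estimate pi/6 * (E/E1)^(3/2).
import numpy as np

def exact_nos(e_max):
    n_max = int(np.sqrt(e_max)) + 1
    n = np.arange(1, n_max + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    return int(np.count_nonzero(nx**2 + ny**2 + nz**2 <= e_max))

def continuum_nos(e_max):
    return np.pi / 6.0 * e_max**1.5   # 1/8 of a sphere of radius sqrt(e_max)

for e in (10, 100, 1000, 10000):
    exact, cont = exact_nos(e), continuum_nos(e)
    print(f"E/E1 = {e:6d}:  exact NOS = {exact:7d}  continuum = {cont:10.1f}  "
          f"relative error = {(cont - exact) / exact:+.3f}")
```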

  18. Discrete density of states

    Energy Technology Data Exchange (ETDEWEB)

    Aydin, Alhun; Sisman, Altug, E-mail: sismanal@itu.edu.tr

    2016-03-22

    By considering the quantum-mechanically minimum allowable energy interval, we exactly count number of states (NOS) and introduce discrete density of states (DOS) concept for a particle in a box for various dimensions. Expressions for bounded and unbounded continua are analytically recovered from discrete ones. Even though substantial fluctuations prevail in discrete DOS, they're almost completely flattened out after summation or integration operation. It's seen that relative errors of analytical expressions of bounded/unbounded continua rapidly decrease for high NOS values (weak confinement or high energy conditions), while the proposed analytical expressions based on Weyl's conjecture always preserve their lower error characteristic. - Highlights: • Discrete density of states considering minimum energy difference is proposed. • Analytical DOS and NOS formulas based on Weyl conjecture are given. • Discrete DOS and NOS functions are examined for various dimensions. • Relative errors of analytical formulas are much better than the conventional ones.

  19. Variable Kernel Density Estimation

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1992-01-01

    We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
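
    A short sketch of the second approach mentioned above (varying the window by point of the sample observation), using Abramson's square-root rule for the local bandwidths; the pilot bandwidth and the test data are illustrative assumptions, not choices made in the paper.

```python
# Sample-point adaptive kernel density estimate: each observation x_i gets its
# own bandwidth h_i = h * sqrt(g / f_pilot(x_i)) (Abramson's square-root rule),
# where f_pilot is a fixed-bandwidth pilot estimate and g its geometric mean.
import numpy as np

def kde_fixed(x_grid, data, h):
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

def kde_adaptive(x_grid, data, h):
    pilot = kde_fixed(data, data, h)                      # pilot density at the data
    g = np.exp(np.mean(np.log(pilot)))                    # geometric mean of the pilot
    h_i = h * np.sqrt(g / pilot)                          # local bandwidths
    u = (x_grid[:, None] - data[None, :]) / h_i[None, :]
    k = np.exp(-0.5 * u**2) / (h_i[None, :] * np.sqrt(2.0 * np.pi))
    return k.mean(axis=1)

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 0.3, 300), rng.normal(1.0, 1.0, 700)])
grid = np.linspace(-4.0, 5.0, 400)
dens = kde_adaptive(grid, data, h=0.4)
print(dens.max(), float(dens.sum() * (grid[1] - grid[0])))   # peak height, total mass (~1)
```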

  20. Density oscillations within hadrons

    International Nuclear Information System (INIS)

    Arnold, R.; Barshay, S.

    1976-01-01

    In models of extended hadrons, in which small bits of matter carrying charge and effective mass exist confined within a medium, oscillations in the matter density may occur. A way of investigating this possibility experimentally in high-energy hadron-hadron elastic diffraction scattering is suggested, and the effect is illustrated by examining some existing data which might be relevant to the question [fr

  1. Toward a Redefinition of Density

    Science.gov (United States)

    Rapoport, Amos

    1975-01-01

    This paper suggests that in addition to the recent work indicating that crowding is a subjective phenomenon, an adequate definition of density must also include a subjective component since density is a complex phenomenon in itself. Included is a discussion of both physical density and perceived density. (Author/MA)

  2. Density measures and additive property

    OpenAIRE

    Kunisada, Ryoichi

    2015-01-01

    We deal with finitely additive measures defined on all subsets of natural numbers which extend the asymptotic density (density measures). We consider a class of density measures which are constructed from free ultrafilters on natural numbers and study a certain additivity property of such density measures.
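
    For reference, the asymptotic (natural) density that these finitely additive measures extend is

```latex
d(A) \;=\; \lim_{n \to \infty} \frac{\lvert A \cap \{1, 2, \ldots, n\} \rvert}{n},
```

    a limit that exists only for some subsets A of the natural numbers; a density measure assigns a value to every subset while agreeing with d(A) whenever the limit exists.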

  3. Reproductive sink of sweet corn in response to plant density and hybrid

    Science.gov (United States)

    Improvements in plant density tolerance have played an essential role in grain corn yield gains for ~80 years; however, plant density effects on sweet corn biomass allocation to the ear (the reproductive ‘sink’) is poorly quantified. Moreover, optimal plant densities for modern white-kernel shrunke...

  4. Scaling laws between population and facility densities.

    Science.gov (United States)

    Um, Jaegon; Son, Seung-Woo; Lee, Sung-Ik; Jeong, Hawoong; Kim, Beom Jun

    2009-08-25

    When a new facility like a grocery store, a school, or a fire station is planned, its location should ideally be determined by the necessities of people who live nearby. Empirically, it has been found that there exists a positive correlation between facility and population densities. In the present work, we investigate the ideal relation between the population and the facility densities within the framework of an economic mechanism governing microdynamics. In previous studies based on the global optimization of facility positions in minimizing the overall travel distance between people and facilities, it was shown that the density of facilities D and that of the population ρ should follow a simple power law D ∼ ρ^(2/3). In our empirical analysis, on the other hand, the power-law exponent α in D ∼ ρ^α is not a fixed value but spreads in a broad range depending on facility types. To explain this discrepancy in α, we propose a model based on economic mechanisms that mimic the competitive balance between the profit of the facilities and the social opportunity cost for populations. Through our simple, microscopically driven model, we show that commercial facilities driven by the profit of the facilities have α = 1, whereas public facilities driven by the social opportunity cost have α = 2/3. We simulate this model to find the optimal positions of facilities on a real U.S. map and show that the results are consistent with the empirical data.
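
    The 2/3 exponent for the globally optimized (social-cost) case follows from a short scaling argument of the kind used in such studies; a sketch, with the two-dimensional distance to the nearest facility taken to scale as D^(-1/2):

```latex
% total travel cost with facility density D(x) and population density rho(x),
% minimized subject to a fixed total number of facilities
C \;\propto\; \int \rho\, D^{-1/2}\, dA, \qquad \int D\, dA = N_{\text{facilities}},
\qquad
\frac{\partial}{\partial D}\Bigl(\rho D^{-1/2} + \lambda D\Bigr) = 0
\;\;\Longrightarrow\;\; D \propto \rho^{2/3}.
```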

  5. Topology and boundary shape optimization as an integrated design tool

    Science.gov (United States)

    Bendsoe, Martin Philip; Rodrigues, Helder Carrico

    1990-01-01

    The optimal topology of a two dimensional linear elastic body can be computed by regarding the body as a domain of the plane with a high density of material. Such an optimal topology can then be used as the basis for a shape optimization method that computes the optimal form of the boundary curves of the body. This results in an efficient and reliable design tool, which can be implemented via common FEM mesh generator and CAD type input-output facilities.

  6. Improving Power Density of Free-Piston Stirling Engines

    Science.gov (United States)

    Briggs, Maxwell H.; Prahl, Joseph M.; Loparo, Kenneth A.

    2016-01-01

    Analyses and experiments demonstrate the potential benefits of optimizing piston and displacer motion in a free-piston Stirling Engine. Isothermal analysis shows the theoretical limits of power density improvement due to ideal motion in ideal Stirling engines. More realistic models based on nodal analysis show that ideal piston and displacer waveforms are not optimal, often producing less power than engines that use sinusoidal piston and displacer motion. Constrained optimization using nodal analysis predicts that Stirling engine power density can be increased by as much as 58 percent using optimized higher harmonic piston and displacer motion. An experiment is conducted in which an engine designed for sinusoidal motion is forced to operate with both second and third harmonics, resulting in a piston power increase of as much as 14 percent. Analytical predictions are compared to experimental data and show close agreement with indirect thermodynamic power calculations, but poor agreement with direct electrical power measurements.

  7. Improving Free-Piston Stirling Engine Power Density

    Science.gov (United States)

    Briggs, Maxwell H.

    2016-01-01

    Analyses and experiments demonstrate the potential benefits of optimizing piston and displacer motion in a free piston Stirling Engine. Isothermal analysis shows the theoretical limits of power density improvement due to ideal motion in ideal Stirling engines. More realistic models based on nodal analysis show that ideal piston and displacer waveforms are not optimal, often producing less power than engines that use sinusoidal piston and displacer motion. Constrained optimization using nodal analysis predicts that Stirling engine power density can be increased by as much as 58% using optimized higher harmonic piston and displacer motion. An experiment is conducted in which an engine designed for sinusoidal motion is forced to operate with both second and third harmonics, resulting in a maximum piston power increase of 14%. Analytical predictions are compared to experimental data showing close agreement with indirect thermodynamic power calculations, but poor agreement with direct electrical power measurements.

  8. Future xenon system operational parameter optimization

    International Nuclear Information System (INIS)

    Lowrey, J.D.; Eslinger, P.W.; Miley, H.S.

    2016-01-01

    Any atmospheric monitoring network will have practical limitations in the density of its sampling stations. The classical approach to network optimization has been to have 12 or 24-h integration of air samples at the highest station density possible to improve minimum detectable concentrations. The authors present here considerations on optimizing sampler integration time to make the best use of any network and maximize the likelihood of collecting quality samples at any given location. In particular, this work makes the case that shorter duration sample integration (i.e. <12 h) enhances critical isotopic information and improves the source location capability of a radionuclide network, or even just one station. (author)

  9. Density Distribution Sunflower Plots

    Directory of Open Access Journals (Sweden)

    William D. Dupont

    2003-01-01

    Full Text Available Density distribution sunflower plots are used to display high-density bivariate data. They are useful for data where a conventional scatter plot is difficult to read due to overstriking of the plot symbol. The x-y plane is subdivided into a lattice of regular hexagonal bins of width w specified by the user. The user also specifies the values of l, d, and k that affect the plot as follows. Individual observations are plotted when there are less than l observations per bin as in a conventional scatter plot. Each bin with from l to d observations contains a light sunflower. Other bins contain a dark sunflower. In a light sunflower each petal represents one observation. In a dark sunflower, each petal represents k observations. (A dark sunflower with p petals represents between pk − k/2 and pk + k/2 observations.) The user can control the sizes and colors of the sunflowers. By selecting appropriate colors and sizes for the light and dark sunflowers, plots can be obtained that give both the overall sense of the data density distribution as well as the number of data points in any given region. The use of this graphic is illustrated with data from the Framingham Heart Study. A documented Stata program, called sunflower, is available to draw these graphs. It can be downloaded from the Statistical Software Components archive at http://ideas.repec.org/c/boc/bocode/s430201.html . (Journal of Statistical Software 2003; 8(3): 1–5. Posted at http://www.jstatsoft.org/index.php?vol=8 .

  10. General theory to determine the critical charge density

    International Nuclear Information System (INIS)

    Vila, Floran

    2000-09-01

    In this work we determine theoretically the critical charge density in the system grounded metallic sphere, uniformly charged dielectric plane, in the presence of grounded surfaces, in a more general case. Special attention is paid to the influence of the system geometry in determining the optimal conditions for obtaining the minimum critical charge density. This is a situation frequently encountered in industrial conditions and is important in evaluating the danger of electrostatic discharges. (author)

  11. Air shower density spectrum

    International Nuclear Information System (INIS)

    Porter, M.R.; Foster, J.M.; Hodson, A.L.; Hazen, W.E.; Hendel, A.Z.; Bull, R.M.

    1982-01-01

    Measurements of the differential local density spectrum have been made using a 1 m² discharge chamber mounted in the Leeds discharge chamber array. The results are fitted to a power law of the form h(δ)dδ = kδ^(−ν)dδ, where ν = 2.47 ± 0.04, k = 0.21 s⁻¹ for 7 m⁻² < δ < 200 m⁻², and ν = 2.90 ± 0.22, k = 2.18 s⁻¹ for δ > 200 m⁻². Details of the measurement techniques are given with particular reference to the treatment of closely-spaced discharges. A comparison of these results with previous experiments using different techniques is made

  12. Measurement of loose powder density

    International Nuclear Information System (INIS)

    Akhtar, S.; Ali, A.; Haider, A.; Farooque, M.

    2011-01-01

    Powder metallurgy is a conventional technique for making engineering articles from powders. Main objective is to produce final products with the highest possible uniform density, which depends on the initial loose powder characteristics. Producing, handling, characterizing and compacting materials in loose powder form are part of the manufacturing processes. Density of loose metallic or ceramic powder is an important parameter for die design. Loose powder density is required for calculating the exact mass of powder to fill the die cavity for producing intended green density of the powder compact. To fulfill this requirement of powder metallurgical processing, a loose powder density meter as per ASTM standards is designed and fabricated for measurement of density. The density of free flowing metallic powders can be determined using Hall flow meter funnel and density cup of 25 cm³ volume. Density of metal powders like cobalt, manganese, spherical bronze and pure iron is measured and results are obtained with 99.9% accuracy. (author)

  13. Gluon density in nuclei

    International Nuclear Information System (INIS)

    Ayala, A.L.

    1996-01-01

    In this talk we present our detailed study (theory and numbers) on the shadowing corrections to the gluon structure functions for nuclei. Starting from rather controversial information on the nucleon structure function which is originated by the recent HERA data, we develop the Glauber approach for the gluon density in a nucleus based on Mueller formula and estimate the value of the shadowing corrections in this case. Then we calculate the first corrections to the Glauber approach and show that these corrections are big. Based on this practical observation we suggest the new evolution equation which takes into account the shadowing corrections and solve it. We hope to convince you that the new evolution equation gives a good theoretical tool to treat the shadowing corrections for the gluons density in a nucleus and, therefore, it is able to provide the theoretically reliable initial conditions for the time evolution of the nucleus-nucleus cascade. The initial conditions should be fixed both theoretically and phenomenologically before to attack such complicated problems as the mixture of hard and soft processes in nucleus-nucleus interactions at high energy or the theoretically reliable approach to hadron or/and parton cascades for high energy nucleus-nucleus interaction. 35 refs., 24 figs., 1 tab

  14. Fast clustering using adaptive density peak detection.

    Science.gov (United States)

    Wang, Xiao-Feng; Xu, Yifan

    2017-12-01

    Common limitations of clustering methods include the slow algorithm convergence, the instability of the pre-specification on a number of intrinsic parameters, and the lack of robustness to outliers. A recent clustering approach proposed a fast search algorithm of cluster centers based on their local densities. However, the selection of the key intrinsic parameters in the algorithm was not systematically investigated. It is relatively difficult to estimate the "optimal" parameters since the original definition of the local density in the algorithm is based on a truncated counting measure. In this paper, we propose a clustering procedure with adaptive density peak detection, where the local density is estimated through the nonparametric multivariate kernel estimation. The model parameter is then able to be calculated from the equations with statistical theoretical justification. We also develop an automatic cluster centroid selection method through maximizing an average silhouette index. The advantage and flexibility of the proposed method are demonstrated through simulation studies and the analysis of a few benchmark gene expression data sets. The method only needs to perform in one single step without any iteration and thus is fast and has a great potential to apply on big data analysis. A user-friendly R package ADPclust is developed for public use.
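
    A minimal sketch of the density-peak idea that the procedure above builds on (a local density ρ_i, a distance δ_i to the nearest denser point, and centers chosen where both are large). The Gaussian local density below stands in for the nonparametric kernel estimate, and the number of clusters is fixed by hand rather than chosen by the silhouette criterion used in ADPclust; the bandwidth and test data are illustrative assumptions.

```python
# Minimal density-peak clustering sketch: local density rho_i via a Gaussian
# kernel, delta_i = distance to the nearest point of higher density, centers
# chosen as the points with the largest gamma_i = rho_i * delta_i.
import numpy as np

def density_peak_clusters(X, bandwidth, n_clusters):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)    # pairwise distances
    rho = np.exp(-(d / bandwidth) ** 2).sum(axis=1)               # local densities
    order = np.argsort(-rho)                                      # high density first
    delta = np.full(len(X), d.max())
    nearest_higher = np.full(len(X), -1)
    for rank, i in enumerate(order[1:], start=1):
        higher = order[:rank]                                     # points denser than i
        j = higher[np.argmin(d[i, higher])]
        delta[i], nearest_higher[i] = d[i, j], j
    centers = np.argsort(-(rho * delta))[:n_clusters]             # density peaks
    labels = np.full(len(X), -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                                               # assign in density order
        if labels[i] == -1:
            labels[i] = labels[nearest_higher[i]]
    return labels, centers

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in ([0, 0], [3, 0], [0, 3])])
labels, centers = density_peak_clusters(X, bandwidth=0.5, n_clusters=3)
print(np.bincount(labels))   # roughly 100 points per cluster
```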

  15. New method for initial density reconstruction

    Science.gov (United States)

    Shi, Yanlong; Cautun, Marius; Li, Baojiu

    2018-01-01

    A theoretically interesting and practically important question in cosmology is the reconstruction of the initial density distribution provided a late-time density field. This is a long-standing question with a revived interest recently, especially in the context of optimally extracting the baryonic acoustic oscillation (BAO) signals from observed galaxy distributions. We present a new efficient method to carry out this reconstruction, which is based on numerical solutions to the nonlinear partial differential equation that governs the mapping between the initial Lagrangian and final Eulerian coordinates of particles in evolved density fields. This is motivated by numerical simulations of the quartic Galileon gravity model, which has similar equations that can be solved effectively by multigrid Gauss-Seidel relaxation. The method is based on mass conservation, and does not assume any specific cosmological model. Our test shows that it has a performance comparable to that of state-of-the-art algorithms that were very recently put forward in the literature, with the reconstructed density field over ∼80% (50%) correlated with the initial condition at k ≲ 0.6 h/Mpc (1.0 h/Mpc). With an example, we demonstrate that this method can significantly improve the accuracy of BAO reconstruction.

  16. Stochastic global optimization as a filtering problem

    International Nuclear Information System (INIS)

    Stinis, Panos

    2012-01-01

    We present a reformulation of stochastic global optimization as a filtering problem. The motivation behind this reformulation comes from the fact that for many optimization problems we cannot evaluate exactly the objective function to be optimized. Similarly, we may not be able to evaluate exactly the functions involved in iterative optimization algorithms. For example, we may only have access to noisy measurements of the functions or statistical estimates provided through Monte Carlo sampling. This makes iterative optimization algorithms behave like stochastic maps. Naive global optimization amounts to evolving a collection of realizations of this stochastic map and picking the realization with the best properties. This motivates the use of filtering techniques to allow focusing on realizations that are more promising than others. In particular, we present a filtering reformulation of global optimization in terms of a special case of sequential importance sampling methods called particle filters. The increasing popularity of particle filters is based on the simplicity of their implementation and their flexibility. We utilize the flexibility of particle filters to construct a stochastic global optimization algorithm which can converge to the optimal solution appreciably faster than naive global optimization. Several examples of parametric exponential density estimation are provided to demonstrate the efficiency of the approach.
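
    A minimal sketch of the idea in Python: a population of candidate solutions is pushed through a stochastic map (a noisy local step plus a noisy objective evaluation), weighted by fitness, and resampled so that promising realizations are duplicated. The Gaussian proposal, the exponential weights and the test function are illustrative assumptions, not the algorithm of the paper.

```python
# Particle-filter flavoured global minimization: propagate a population of
# candidate solutions with a noisy local step, weight them by exp(-f/T),
# and resample so that better realizations get duplicated.
import numpy as np

def noisy_objective(x, rng):
    # Rastrigin-like multimodal test function observed with additive noise.
    f = 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    return f + 0.1 * rng.normal()

def pf_minimize(dim=2, n_particles=200, n_iters=100, step=0.3, temp=1.0, seed=3):
    rng = np.random.default_rng(seed)
    particles = rng.uniform(-5, 5, size=(n_particles, dim))
    best_x, best_f = None, np.inf
    for _ in range(n_iters):
        particles = particles + step * rng.normal(size=particles.shape)  # stochastic map
        values = np.array([noisy_objective(p, rng) for p in particles])
        i = int(np.argmin(values))
        if values[i] < best_f:
            best_x, best_f = particles[i].copy(), values[i]
        weights = np.exp(-(values - values.min()) / temp)
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)       # resampling
        particles = particles[idx]
    return best_x, best_f

x_opt, f_opt = pf_minimize()
print(x_opt, f_opt)   # should end up close to the global minimum at the origin
```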

  17. Anomalous evolution of Ar metastable density with electron density in high density Ar discharge

    International Nuclear Information System (INIS)

    Park, Min; Chang, Hong-Young; You, Shin-Jae; Kim, Jung-Hyung; Shin, Yong-Hyeon

    2011-01-01

    Recently, an anomalous evolution of argon metastable density with plasma discharge power (electron density) was reported [A. M. Daltrini, S. A. Moshkalev, T. J. Morgan, R. B. Piejak, and W. G. Graham, Appl. Phys. Lett. 92, 061504 (2008)]. Although the importance of the metastable atom and its density has been widely reported in the literature, the basic physics behind the anomalous evolution of the metastable density has not yet been clearly understood. In this study, we investigated a simple global model to elucidate the underlying physics of the anomalous evolution of argon metastable density with the electron density. On the basis of the proposed simple model, we reproduced the anomalous evolution of the metastable density and disclosed the detailed physics for the anomalous result. Drastic changes of the dominant mechanisms for the population and depopulation processes of Ar metastable atoms with electron density, which take place even in the relatively low electron density regime, are the key to understanding the result.

  18. Grasp Densities for Grasp Refinement in Industrial Bin Picking

    DEFF Research Database (Denmark)

    Hupfauf, Benedikt; Hahn, Heiko; Bodenhagen, Leon

    in terms of object-relative gripper pose, can be learned from empirical experience, and allow the automatic choice of optimal grasps in a given scene context (object pose, workspace constraints, etc.). We will show grasp densities extracted from empirical data in a real industrial bin picking context...... generated in industrial bin-picking for grasp learning. This aim is achieved by using the novel concept of grasp densities (Detry et al., 2010). Grasp densities can describe the full variety of grasps that apply to specific objects using specific grippers. They represent the likelihood of grasp success...

  19. Density-Functional formalism

    International Nuclear Information System (INIS)

    Szasz, L.; Berrios-Pagan, I.; McGinn, G.

    1975-01-01

    A new Density-Functional formula is constructed for atoms. The kinetic energy of the electron is divided into two parts: the kinetic self-energy and the orthogonalization energy. Calculations were made for the total energies of neutral atoms, positive ions and for the He isoelectronic series. For neutral atoms the results match the Hartree-Fock energies within 1% for atoms with N < 36; for N > 36 the results generally match the HF energies within 0.1%. For positive ions the results are fair; for the molecular applications a simplified model is developed in which the kinetic energy consists of the Weizsaecker term plus the Fermi energy reduced by a continuous function. (orig.) [de

  20. Density functional theory

    International Nuclear Information System (INIS)

    Freyss, M.

    2015-01-01

    This chapter gives an introduction to first-principles electronic structure calculations based on the density functional theory (DFT). Electronic structure calculations have a crucial importance in the multi-scale modelling scheme of materials: not only do they enable one to accurately determine physical and chemical properties of materials, they also provide data for the adjustment of parameters (or potentials) in higher-scale methods such as classical molecular dynamics, kinetic Monte Carlo, cluster dynamics, etc. Most of the properties of a solid depend on the behaviour of its electrons, and in order to model or predict them it is necessary to have an accurate method to compute the electronic structure. DFT is based on quantum theory and does not make use of any adjustable or empirical parameter: the only input data are the atomic number of the constituent atoms and some initial structural information. The complicated many-body problem of interacting electrons is replaced by an equivalent single electron problem, in which each electron is moving in an effective potential. DFT has been successfully applied to the determination of structural or dynamical properties (lattice structure, charge density, magnetisation, phonon spectra, etc.) of a wide variety of solids. Its efficiency was acknowledged by the attribution of the Nobel Prize in Chemistry in 1998 to one of its authors, Walter Kohn. A particular attention is given in this chapter to the ability of DFT to model the physical properties of nuclear materials such as actinide compounds. The specificities of the 5f electrons of actinides will be presented, i.e., their more or less high degree of localisation around the nuclei and correlations. The limitations of the DFT to treat the strong 5f correlations are one of the main issues for the DFT modelling of nuclear fuels. Various methods that exist to better treat strongly correlated materials will finally be presented. (author)

  1. Hormonal Determinants of Mammographic Density

    National Research Council Canada - National Science Library

    Simpson, Jennifer K; Modugno, Francemary; Weissfeld, Joel L; Kuller, Lewis; Vogel, Victor; Constantino, Joseph P

    2005-01-01

    .... However, not all women on HRT will experience an increase in breast density. We propose a novel hypothesis to explain in part the individual variability in breast density seen among women on HRT...

  2. Improving experimental phases for strong reflections prior to density modification

    International Nuclear Information System (INIS)

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.; Read, Randy J.

    2013-01-01

    A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
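
    A skeleton of the kind of genetic algorithm described above, optimizing a vector of phase angles for a handful of strong reflections. The fitness function here is a stand-in; in the actual method it would be the skewness of the electron-density map synthesized from the trial phases together with the fixed centroid phases of the remaining reflections. The population size, selection, crossover and mutation settings are illustrative assumptions.

```python
# Skeleton of a genetic algorithm over the phases of a few strong reflections.
# The fitness below is a placeholder (closeness to a stand-in "true" phase set);
# replace it with the skewness of the density map computed from the trial phases
# in a real application.
import numpy as np

rng = np.random.default_rng(4)
N_STRONG = 20                                  # number of strong reflections rephased
TARGET = rng.uniform(0, 2 * np.pi, N_STRONG)   # hypothetical "true" phases

def fitness(phases):
    return np.cos(phases - TARGET).mean()      # placeholder target function

def evolve(pop_size=60, n_gen=200, mut_rate=0.1, mut_width=0.5):
    pop = rng.uniform(0, 2 * np.pi, size=(pop_size, N_STRONG))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(-scores)[: pop_size // 2]]      # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(N_STRONG) < 0.5                    # uniform crossover
            child = np.where(mask, a, b)
            mutate = rng.random(N_STRONG) < mut_rate
            child = np.where(mutate, child + mut_width * rng.normal(size=N_STRONG), child)
            children.append(np.mod(child, 2 * np.pi))
        pop = np.vstack([parents, np.array(children)])
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return best, fitness(best)

best_phases, best_score = evolve()
print(best_score)   # approaches 1.0 as the phases converge toward TARGET
```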

  3. Density limit in ASDEX discharges with peaked density profiles

    International Nuclear Information System (INIS)

    Staebler, A.; Niedermeyer, H.; Loch, R.; Mertens, V.; Mueller, E.R.; Soeldner, F.X.; Wagner, F.

    1989-01-01

    Results concerning the density limit in OH and NI-heated ASDEX discharges with the usually observed broad density profiles have been reported earlier: In ohmic discharges with high q_a (q-cylindrical is used throughout this paper) the Murakami parameter (n_e R/B_t) is a good scaling parameter. At the high densities edge cooling is observed causing the plasma to shrink until an m=2-instability terminates the discharge. When approaching q_a = 2 the density limit is no longer proportional to I_p; a minimum exists in n_e,max(q_a) at q_a ∼ 2.15. With NI-heating the density limit increases less than proportional to the heating power; the behaviour during the pre-disruptive phase is rather similar to the one of OH discharges. There are specific operating regimes on ASDEX leading to discharges with strongly peaked density profiles: the improved ohmic confinement regime, counter neutral injection, and multipellet injection. These regimes are characterized by enhanced energy and particle confinement. The operational limit in density for these discharges is, therefore, of great interest having furthermore in mind that high central densities are favourable in achieving high fusion yields. In addition, further insight into the mechanisms of the density limit observed in tokamaks may be obtained by comparing plasmas with rather different density profiles at their maximum attainable densities. 7 refs., 2 figs

  4. Optimization modeling with spreadsheets

    CERN Document Server

    Baker, Kenneth R

    2015-01-01

    An accessible introduction to optimization analysis using spreadsheets Updated and revised, Optimization Modeling with Spreadsheets, Third Edition emphasizes model building skills in optimization analysis. By emphasizing both spreadsheet modeling and optimization tools in the freely available Microsoft® Office Excel® Solver, the book illustrates how to find solutions to real-world optimization problems without needing additional specialized software. The Third Edition includes many practical applications of optimization models as well as a systematic framework that il

  5. Thermospheric density and satellite drag modeling

    Science.gov (United States)

    Mehta, Piyush Mukesh

    The United States depends heavily on its space infrastructure for a vast number of commercial and military applications. Space Situational Awareness (SSA) and Threat Assessment require maintaining accurate knowledge of the orbits of resident space objects (RSOs) and the associated uncertainties. Atmospheric drag is the largest source of uncertainty for low-perigee RSOs. The uncertainty stems from inaccurate modeling of neutral atmospheric mass density and inaccurate modeling of the interaction between the atmosphere and the RSO. In order to reduce the uncertainty in drag modeling, both atmospheric density and drag coefficient (CD) models need to be improved. Early atmospheric density models were developed from orbital drag data or observations of a few early compact satellites. To simplify calculations, densities derived from orbit data used a fixed CD value of 2.2 measured in a laboratory using clean surfaces. Measurements from pressure gauges obtained in the early 1990s have confirmed the adsorption of atomic oxygen on satellite surfaces. The varying levels of adsorbed oxygen along with the constantly changing atmospheric conditions cause large variations in CD with altitude and along the orbit of the satellite. Therefore, the use of a fixed CD in early development has resulted in large biases in atmospheric density models. A technique for generating corrections to empirical density models using precision orbit ephemerides (POE) as measurements in an optimal orbit determination process was recently developed. The process generates simultaneous corrections to the atmospheric density and ballistic coefficient (BC) by modeling the corrections as statistical exponentially decaying Gauss-Markov processes. The technique has been successfully implemented in generating density corrections using the CHAMP and GRACE satellites. This work examines the effectiveness, specifically the transfer of density models errors into BC estimates, of the technique using the CHAMP and

  6. Topology optimization for nano-photonics

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard; Sigmund, Ole

    2011-01-01

    Topology optimization is a computational tool that can be used for the systematic design of photonic crystals, waveguides, resonators, filters and plasmonics. The method was originally developed for mechanical design problems but has within the last six years been applied to a range of photonics...... applications. Topology optimization may be based on finite element and finite difference type modeling methods in both frequency and time domain. The basic idea is that the material density of each element or grid point is a design variable, hence the geometry is parameterized in a pixel-like fashion....... The optimization problem is efficiently solved using mathematical programming-based optimization methods and analytical gradient calculations. The paper reviews the basic procedures behind topology optimization, a large number of applications ranging from photonic crystal design to surface plasmonic devices...

  7. Smoothing densities under shape constraints

    OpenAIRE

    Davies, Paul Laurie; Meise, Monika

    2009-01-01

    In Davies and Kovac (2004) the taut string method was proposed for calculating a density which is consistent with the data and has the minimum number of peaks. The main disadvantage of the taut string density is that it is piecewise constant. In this paper a procedure is presented which gives a smoother density by minimizing the total variation of a derivative of the density subject to the number, positions and heights of the local extreme values obtained from the taut string density. 2...

  8. Optimum rabbit density over fish ponds to optimise Nile tilapia ...

    African Journals Online (AJOL)

    Although previous studies have suggested that rabbit excreta can be used as high-quality manure for sustaining plankton production due to their gradual nutrient release, integrated rabbit–fish production systems are still not widely used. Between 2006 and 2010 optimal rabbit densities for sustainable integrated rabbit–Nile ...

  9. Positron camera with high-density avalanche chambers

    International Nuclear Information System (INIS)

    Manfrass, D.; Enghardt, W.; Fromm, W.D.; Wohlfarth, D.; Hennig, K.

    1988-01-01

    The results of an extensive investigation of the properties of high-density avalanche chambers (HIDAC) are presented. This study has been performed in order to optimize the layout of HIDAC detectors, since they are intended to be applied as position sensitive detectors for annihilation radiation in a positron emission tomograph being under construction. (author)

  10. Finite Gaussian Mixture Approximations to Analytically Intractable Density Kernels

    DEFF Research Database (Denmark)

    Khorunzhina, Natalia; Richard, Jean-Francois

    The objective of the paper is that of constructing finite Gaussian mixture approximations to analytically intractable density kernels. The proposed method is adaptive in that terms are added one at the time and the mixture is fully re-optimized at each step using a distance measure that approxima...

  11. Device for measuring neutron-flux distribution density

    International Nuclear Information System (INIS)

    Rozenbljum, N.D.; Mitelman, M.G.; Kononovich, A.A.; Kirsanov, V.S.; Zagadkin, V.A.

    1977-01-01

    An arrangement is described for measuring the distribution of neutron flux density over the height of a nuclear reactor core and which may be used for monitoring energy release or for detecting deviations of neutron flux from an optimal level so that subsequent balance can be achieved. It avoids mutual interference of detectors. Full constructional details are given. (UK)

  12. High density hydrogen research

    International Nuclear Information System (INIS)

    Hawke, R.S.

    1977-01-01

    The interest in the properties of very dense hydrogen is prompted by its abundance in Saturn and Jupiter and its importance in laser fusion studies. Furthermore, it has been proposed that the metallic form of hydrogen may be a superconductor at relatively high temperatures and/or exist in a metastable phase at ambient pressure. For ten years or more, laboratories have been developing the techniques to study hydrogen in the megabar region (1 megabar = 100 GPa). Three major approaches to study dense hydrogen experimentally have been used: static presses, shockwave compression, and magnetic compression. Static techniques have crossed the megabar threshold in stiff materials but have not yet been convincingly successful in very compressible hydrogen. Single and double shockwave techniques have improved the precision of the pressure, volume, temperature Equation of State (EOS) of molecular hydrogen (deuterium) up to near 1 Mbar. Multiple shockwave and magnetic techniques have compressed hydrogen to several megabars and densities in the range of the metallic phase. The net result is that hydrogen becomes conducting at a pressure between 2 and 4 megabars. Hence, the possibility of making a significant amount of hydrogen into a metal in a static press remains a formidable challenge. The success of such experiments will hopefully answer the questions about hydrogen's metallic vs. conducting molecular phase, superconductivity, and metastability. 4 figures, 15 references

  13. CBM RICH geometry optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mahmoud, Tariq; Hoehne, Claudia [II. Physikalisches Institut, Giessen Univ. (Germany); Collaboration: CBM-Collaboration

    2016-07-01

    The Compressed Baryonic Matter (CBM) experiment at the future FAIR complex will investigate the phase diagram of strongly interacting matter at high baryon density and moderate temperatures in A+A collisions from 2-11 AGeV (SIS100) beam energy. The main electron identification detector in the CBM experiment will be a RICH detector with a CO₂ gaseous radiator, focusing spherical glass mirrors, and MAPMT photo-detectors being placed on a PMT-plane. The RICH detector is located directly behind the CBM dipole magnet. As the final magnet geometry is now available, some changes in the RICH geometry become necessary. In order to guarantee a magnetic field of 1 mT at maximum in the PMT plane for effective operation of the MAPMTs, two measures have to be taken: The PMT plane is moved outwards of the stray field by tilting the mirrors by 10 degrees and shielding boxes have been designed. In this contribution the results of the geometry optimization procedure are presented.

  14. Variable Bone Density of Scaphoid: Importance of Subchondral Screw Placement.

    Science.gov (United States)

    Swanstrom, Morgan M; Morse, Kyle W; Lipman, Joseph D; Hearns, Krystle A; Carlson, Michelle G

    2018-02-01

    Background  Ideal internal fixation of the scaphoid relies on adequate bone stock for screw purchase; so, knowledge of regional bone density of the scaphoid is crucial. Questions/Purpose  The purpose of this study was to evaluate regional variations in scaphoid bone density. Materials and Methods  Three-dimensional CT models of fractured scaphoids were created and sectioned into proximal/distal segments and then into quadrants (volar/dorsal/radial/ulnar). Concentric shells in the proximal and distal pole were constructed in 2-mm increments moving from exterior to interior. Bone density was measured in Hounsfield units (HU). Results  Bone density of the distal scaphoid (453.2 ± 70.8 HU) was less than the proximal scaphoid (619.8 ± 124.2 HU). There was no difference in bone density between the four quadrants in either pole. In both the poles, the first subchondral shell was the densest. In both the proximal and distal poles, bone density decreased significantly in all three deeper shells. Conclusion  The proximal scaphoid had a greater density than the distal scaphoid. Within the poles, there was no difference in bone density between the quadrants. The subchondral 2-mm shell had the greatest density. Bone density dropped off significantly between the first and second shell in both the proximal and distal scaphoids. Clinical Relevance  In scaphoid fracture ORIF, optimal screw placement engages the subchondral 2-mm shell, especially in the distal pole, which has an overall lower bone density, and the second shell has only two-third the density of the first shell.

  15. Density limit experiments on FTU

    International Nuclear Information System (INIS)

    Pucella, G.; Tudisco, O.; Apicella, M.L.; Apruzzese, G.; Artaserse, G.; Belli, F.; Boncagni, L.; Botrugno, A.; Buratti, P.; Calabrò, G.; Castaldo, C.; Cianfarani, C.; Cocilovo, V.; Dimatteo, L.; Esposito, B.; Frigione, D.; Gabellieri, L.; Giovannozzi, E.; Bin, W.; Granucci, G.

    2013-01-01

    One of the main problems in tokamak fusion devices concerns the capability to operate at a high plasma density, which is observed to be limited by the appearance of catastrophic events causing loss of plasma confinement. The commonly used empirical scaling law for the density limit is the Greenwald limit, predicting that the maximum achievable line-averaged density along a central chord depends only on the average plasma current density. However, the Greenwald density limit has been exceeded in tokamak experiments in the case of peaked density profiles, indicating that the edge density is the real parameter responsible for the density limit. Recently, it has been shown on the Frascati Tokamak Upgrade (FTU) that the Greenwald density limit is exceeded in gas-fuelled discharges with a high value of the edge safety factor. In order to understand this behaviour, dedicated density limit experiments were performed on FTU, in which the high density domain was explored in a wide range of values of plasma current (I_p = 500–900 kA) and toroidal magnetic field (B_T = 4–8 T). These experiments confirm the edge nature of the density limit, as a Greenwald-like scaling holds for the maximum achievable line-averaged density along a peripheral chord passing at r/a ≃ 4/5. On the other hand, the maximum achievable line-averaged density along a central chord does not depend on the average plasma current density and essentially depends on the toroidal magnetic field only. This behaviour is explained in terms of density profile peaking in the high density domain, with a peaking factor at the disruption depending on the edge safety factor. The possibility that the MARFE (multifaceted asymmetric radiation from the edge) phenomenon is the cause of the peaking has been considered, with the MARFE believed to form a channel for the penetration of the neutral particles into deeper layers of the plasma. Finally, the magnetohydrodynamic (MHD) analysis has shown that also the central line
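
    For reference, the Greenwald limit referred to throughout is usually quoted as

```latex
n_{G}\,[10^{20}\,\mathrm{m^{-3}}] \;=\; \frac{I_p\,[\mathrm{MA}]}{\pi a^2\,[\mathrm{m^2}]},
```

    with I_p the plasma current and a the minor radius, i.e. the limit depends only on the average plasma current density, which is exactly the scaling the peripheral-chord measurements above are found to obey.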

  16. Optimal Pollution, Optimal Population, and Sustainability

    OpenAIRE

    Ulla Lehmijoki

    2012-01-01

    This paper develops a long-run consumer optimization model with endogenous pollution and endogenous population. The positive check increases mortality if pollution increases. The optimal path is sustainable if it provides non-decreasing consumption for a non-decreasing population. As usually, optimality and sustainability may conflict; with population endogenous to pollution, this conflict may ultimately lead the human species toward self-imposed extinction. Not even technical progress can wa...

  17. Resolvability of regional density structure

    Science.gov (United States)

    Plonka, A.; Fichtner, A.

    2016-12-01

    Lateral density variations are the source of mass transport in the Earth at all scales, acting as drivers of convective motion. However, the density structure of the Earth remains largely unknown since classic seismic observables and gravity provide only weak constraints with strong trade-offs. Current density models are therefore often based on velocity scaling, making strong assumptions on the origin of structural heterogeneities, which may not necessarily be correct. Our goal is to assess if 3D density structure may be resolvable with emerging full-waveform inversion techniques. We have previously quantified the impact of regional-scale crustal density structure on seismic waveforms with the conclusion that reasonably sized density variations within the crust can leave a strong imprint on both travel times and amplitudes, and, while this can produce significant biases in velocity and Q estimates, the seismic waveform inversion for density may become feasible. In this study we perform principal component analyses of sensitivity kernels for P velocity, S velocity, and density. This is intended to establish the extent to which these kernels are linearly independent, i.e. the extent to which the different parameters may be constrained independently. Since the density imprint we observe is not exclusively linked to travel times and amplitudes of specific phases, we consider waveform differences between complete seismograms. We test the method using a known smooth model of the crust and seismograms with clear Love and Rayleigh waves, showing that - as expected - the first principal kernel maximizes sensitivity to SH and SV velocity structure, respectively, and that the leakage between S velocity, P velocity and density parameter spaces is minimal in the chosen setup. Next, we apply the method to data from 81 events around the Iberian Peninsula, registered in total by 492 stations. The objective is to find a principal kernel which would maximize the sensitivity to density

  18. Applications of combinatorial optimization

    CERN Document Server

    Paschos, Vangelis Th

    2013-01-01

    Combinatorial optimization is a multidisciplinary scientific area, lying at the interface of three major scientific domains: mathematics, theoretical computer science and management. The three volumes of the Combinatorial Optimization series aim to cover a wide range of topics in this area, from fundamental notions and approaches to several classical applications of combinatorial optimization. "Applications of Combinatorial Optimization" presents a number of the most common and well-known applications of combinatorial optimization.

  19. Optimal Fisher Discriminant Ratio for an Arbitrary Spatial Light Modulator

    Science.gov (United States)

    Juday, Richard D.

    1999-01-01

    Optimizing the Fisher ratio is well established in statistical pattern recognition as a means of discriminating between classes. I show how to optimize that ratio for optical correlation intensity by choice of filter on an arbitrary spatial light modulator (SLM). I include the case of additive noise of known power spectral density.
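
    For two classes whose correlation intensities have means μ1, μ2 and variances σ1², σ2², the Fisher discriminant ratio being maximized is commonly written as

```latex
J \;=\; \frac{(\mu_1 - \mu_2)^2}{\sigma_1^2 + \sigma_2^2},
```

    and the design problem described above is to choose, among the filters realizable on the given SLM, the one that maximizes J for the resulting correlation intensities (including the stated additive noise of known power spectral density).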

  20. Automatized Parameterization of DFTB Using Particle Swarm Optimization.

    Science.gov (United States)

    Chou, Chien-Pin; Nishimura, Yoshifumi; Fan, Chin-Chai; Mazur, Grzegorz; Irle, Stephan; Witek, Henryk A

    2016-01-12

    We present a novel density-functional tight-binding (DFTB) parametrization toolkit developed to optimize the parameters of various DFTB models in a fully automatized fashion. The main features of the algorithm, based on the particle swarm optimization technique, are discussed, and a number of initial pilot applications of the developed methodology to molecular and solid systems are presented.
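
    A minimal sketch of the particle swarm optimization technique underlying the toolkit, with a generic test objective standing in for the DFTB parameter-fitting target; the inertia and acceleration coefficients are common textbook defaults, not the settings of the paper.

```python
# Minimal particle swarm optimization: each particle keeps a velocity, its own
# best position, and is attracted toward both that personal best and the
# swarm-wide best position.
import numpy as np

def sphere(x):
    return float(np.sum(x**2))          # stand-in for the DFTB fitting objective

def pso(f, dim=4, n_particles=30, n_iters=200, w=0.72, c1=1.49, c2=1.49, seed=5):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    g = int(np.argmin(pbest_val))
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = int(np.argmin(pbest_val))
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val

x_best, f_best = pso(sphere)
print(x_best, f_best)   # converges toward the origin
```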

  1. Super liquid density target designs

    International Nuclear Information System (INIS)

    Pan, Y.L.; Bailey, D.S.

    1976-01-01

    The success of laser fusion depends on obtaining near isentropic compression of fuel to very high densities and igniting this fuel. To date, the results of laser fusion experiments have been based mainly on the exploding pusher implosion of fusion capsules consisting of thin glass microballoons (wall thickness of less than 1 micron) filled with low density DT gas (initial density of a few mg/cc). Maximum DT densities of a few tenths of g/cc and temperatures of a few keV have been achieved in these experiments. We will discuss the results of LASNEX target design calculations for targets which: (a) can compress fuel to much higher densities using the capabilities of existing Nd-glass systems at LLL; (b) allow experimental measurement of the peak fuel density achieved

  2. High Power Density Motors

    Science.gov (United States)

    Kascak, Daniel J.

    2004-01-01

    With the growing concerns of global warming, the need for pollution-free vehicles is ever increasing. Pollution-free flight is one of NASA's goals for the 21st century. One method of approaching that goal is hydrogen-fueled aircraft that use fuel cells or turbo-generators to develop electric power that can drive electric motors that turn the aircraft's propulsive fans or propellers. Hydrogen fuel would likely be carried as a liquid, stored in tanks at its boiling point of 20.5 K (-422.5 F). Conventional electric motors, however, are far too heavy (for a given horsepower) to use on aircraft. Fortunately the liquid hydrogen fuel can provide essentially free refrigeration that can be used to cool the windings of motors before the hydrogen is used for fuel. Either High Temperature Superconductors (HTS) or high purity metals such as copper or aluminum may be used in the motor windings. Superconductors have essentially zero electrical resistance to steady current. The electrical resistance of high purity aluminum or copper near liquid hydrogen temperature can be 1/100th or less of the room temperature resistance. These conductors could provide higher motor efficiency than normal room-temperature motors achieve. But much more importantly, these conductors can carry ten to a hundred times more current than copper conductors do in normal motors operating at room temperature. This is a consequence of the low electrical resistance and of good heat transfer coefficients in boiling LH2. Thus the conductors can produce higher magnetic field strengths and consequently higher motor torque and power. Designs, analysis and actual cryogenic motor tests show that such cryogenic motors could produce three or more times as much power per unit weight as turbine engines can, whereas conventional motors produce only 1/5 as much power per weight as turbine engines. This summer work has been done with Litz wire to maximize the current density. The current is limited by the amount of heat it

  3. Density functionals from deep learning

    OpenAIRE

    McMahon, Jeffrey M.

    2016-01-01

    Density-functional theory is a formally exact description of a many-body quantum system in terms of its density; in practice, however, approximations to the universal density functional are required. In this work, a model based on deep learning is developed to approximate this functional. Deep learning allows computational models that are capable of naturally discovering intricate structure in large and/or high-dimensional data sets, with multiple levels of abstraction. As no assumptions are ...

  4. Transition densities with electron scattering

    International Nuclear Information System (INIS)

    Heisenberg, J.

    1985-01-01

    This paper reviews ground-state and transition charge densities in nuclei obtained via electron scattering. Using electrons as a spectroscopic tool in nuclear physics, these transition densities can be determined with high precision, even in the nuclear interior. These densities generally call for a microscopic interpretation in terms of contributions from individual nucleons. The results for single-particle transitions confirm the picture of particle-phonon coupling. (Auth.)

  5. Sets with Prescribed Arithmetic Densities

    Czech Academy of Sciences Publication Activity Database

    Luca, F.; Pomerance, C.; Porubský, Štefan

    2008-01-01

    Roč. 3, č. 2 (2008), s. 67-80 ISSN 1336-913X R&D Projects: GA ČR GA201/07/0191 Institutional research plan: CEZ:AV0Z10300504 Keywords : generalized arithmetic density * generalized asymptotic density * generalized logarithmic density * arithmetical semigroup * weighted arithmetic mean * ratio set * R-dense set * Axiom A * delta-regularly varying function Subject RIV: BA - General Mathematics

  6. Histogram Estimators of Bivariate Densities

    National Research Council Canada - National Science Library

    Husemann, Joyce A

    1986-01-01

    One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...

  7. Importing low-density ideas to high-density revitalisation

    DEFF Research Database (Denmark)

    Arnholtz, Jens; Ibsen, Christian Lyhne; Ibsen, Flemming

    2016-01-01

    Why did union officials from a high-union-density country like Denmark choose to import an organising strategy from low-density countries such as the US and the UK? Drawing on in-depth interviews with key union officials and internal documents, the authors of this article argue two key points. Fi...

  8. A Density Functional Theory Study

    KAUST Repository

    Lim, XiaoZhi

    2011-12-11

    Complexes with pincer ligand moieties have garnered much attention in the past few decades. They have been shown to be highly active catalysts in several known transition metal-catalyzed organic reactions as well as some unprecedented organic transformations. At the same time, the use of computational organometallic chemistry to aid in the understanding of the mechanisms in organometallic catalysis for the development of improved catalysts is on the rise. While it was common in earlier studies to reduce computational cost by truncating donor group substituents on complexes such as tert-butyl or isopropyl groups to hydrogen or methyl groups, recent advancements in the processing capabilities of computer clusters and codes have streamlined the time required for calculations. As the full modeling of complexes becomes increasingly popular, a commonly overlooked aspect, especially in the case of complexes bearing isopropyl substituents, is the conformational analysis of complexes. Isopropyl groups generate a different conformer with each 120° rotation (rotamer), and it has been found that each rotamer typically resides in its own potential energy well in density functional theory studies. As a result, it can be challenging to select the most appropriate structure for a theoretical study, as the adjustment of isopropyl substituents from a higher-energy rotamer to the lowest-energy rotamer usually does not occur during structure optimization. In this report, the influence of the arrangement of isopropyl substituents in pincer complexes on calculated complex structure energies as well as a case study on the mechanism of the isomerization of an iPrPCP-Fe complex is covered. It was found that as many as 324 rotamers can be generated for a single complex, as in the case of an iPrPCP-Ni formato complex, with the energy difference between the global minimum and the highest local minimum being as large as 16.5 kcal mol-1. In the isomerization of an iPrPCP-Fe complex, it was found

  9. Global Optimization using Interval Analysis : Interval Optimization for Aerospace Applications

    NARCIS (Netherlands)

    Van Kampen, E.

    2010-01-01

    Optimization is an important element in aerospace related research. It is encountered for example in trajectory optimization problems, such as: satellite formation flying, spacecraft re-entry optimization and airport approach and departure optimization; in control optimization, for example in

  10. Mammography density estimation with automated volumetric breast density measurement

    International Nuclear Information System (INIS)

    Ko, Su Yeon; Kim, Eun Kyung; Kim, Min Jung; Moon, Hee Jung

    2014-01-01

    To compare automated volumetric breast density measurement (VBDM) with radiologists' evaluations based on the Breast Imaging Reporting and Data System (BI-RADS), and to identify the factors associated with technical failure of VBDM. In this study, 1129 women aged 19-82 years who underwent mammography from December 2011 to January 2012 were included. Breast density evaluations by radiologists based on BI-RADS and by VBDM (Volpara Version 1.5.1) were compared. The agreement in interpreting breast density between radiologists and VBDM was determined based on four density grades (D1, D2, D3, and D4) and a binary classification of fatty (D1-2) vs. dense (D3-4) breast using kappa statistics. The association between technical failure of VBDM and patient age, total breast volume, fibroglandular tissue volume, history of partial mastectomy, the frequency of mass > 3 cm, and breast density was analyzed. The agreement between breast density evaluations by radiologists and VBDM was fair (k value = 0.26) when the four density grades (D1/D2/D3/D4) were used and moderate (k value = 0.47) for the binary classification (D1-2/D3-4). Twenty-seven women (2.4%) showed failure of VBDM. Small total breast volume, history of partial mastectomy, and high breast density were significantly associated with technical failure of VBDM (p values from 0.001 to 0.015). There is fair or moderate agreement in breast density evaluation between radiologists and VBDM. Technical failure of VBDM may be related to small total breast volume, a history of partial mastectomy, and high breast density.
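
    The kappa statistics reported above can be reproduced with a short script. The sketch below uses hypothetical paired density grades (not the study's data) and computes an unweighted Cohen's kappa directly.

    import numpy as np

    def cohens_kappa(rater_a, rater_b, labels):
        """Unweighted Cohen's kappa for two paired categorical ratings."""
        n = len(rater_a)
        idx = {lab: i for i, lab in enumerate(labels)}
        conf = np.zeros((len(labels), len(labels)))
        for a, b in zip(rater_a, rater_b):
            conf[idx[a], idx[b]] += 1          # observed agreement matrix
        p_obs = np.trace(conf) / n             # observed agreement
        p_exp = np.sum(conf.sum(axis=0) * conf.sum(axis=1)) / n**2  # chance agreement
        return (p_obs - p_exp) / (1.0 - p_exp)

    # Hypothetical example: radiologist BI-RADS grades vs. VBDM grades (D1-D4)
    radiologist = ["D2", "D3", "D3", "D4", "D1", "D2", "D3", "D2"]
    vbdm        = ["D2", "D4", "D3", "D4", "D2", "D2", "D3", "D3"]
    print(cohens_kappa(radiologist, vbdm, ["D1", "D2", "D3", "D4"]))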

  11. Ligand identification using electron-density map correlations

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Adams, Paul D.; Moriarty, Nigel W.; Cohn, Judith D.

    2007-01-01

    An automated ligand-fitting procedure is applied to (F_o − F_c)exp(iϕ_c) difference density for 200 commonly found ligands from macromolecular structures in the Protein Data Bank to identify ligands from density maps. A procedure for the identification of ligands bound in crystal structures of macromolecules is described. Two characteristics of the density corresponding to a ligand are used in the identification procedure. One is the correlation of the ligand density with each of a set of test ligands after optimization of the fit of that ligand to the density. The other is the correlation of a fingerprint of the density with the fingerprint of model density for each possible ligand. The fingerprints consist of an ordered list of correlations of each of the test ligands with the density. The two characteristics are scored using a Z-score approach in which the correlations are normalized to the mean and standard deviation of correlations found for a variety of mismatched ligand-density pairs, so that the Z scores are related to the probability of observing a particular value of the correlation by chance. The procedure was tested with a set of 200 of the most commonly found ligands in the Protein Data Bank, collectively representing 57% of all ligands in the Protein Data Bank. Using a combination of these two characteristics of ligand density, ranked lists of ligand identifications were made for representative (F_o − F_c)exp(iϕ_c) difference density from entries in the Protein Data Bank. In 48% of the 200 cases, the correct ligand was at the top of the ranked list of ligands. This approach may be useful in identification of unknown ligands in new macromolecular structures as well as in the identification of which ligands in a mixture have bound to a macromolecule.
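
    A minimal sketch of the Z-score ranking idea described above. The correlation values and normalization constants below are hypothetical; in the actual procedure the mean and standard deviation come from a pool of mismatched ligand-density pairs.

    import numpy as np

    def rank_ligands(fit_corr, mismatch_mean, mismatch_std):
        """Convert raw ligand-density correlations into Z scores and rank them.

        fit_corr      : dict of ligand name -> correlation of fitted ligand with density
        mismatch_mean : mean correlation over mismatched ligand-density pairs
        mismatch_std  : standard deviation of those mismatched correlations
        """
        z = {name: (c - mismatch_mean) / mismatch_std for name, c in fit_corr.items()}
        return sorted(z.items(), key=lambda kv: kv[1], reverse=True)

    # Hypothetical correlations for three candidate ligands against one density blob
    scores = rank_ligands({"ATP": 0.82, "GTP": 0.74, "NAD": 0.55},
                          mismatch_mean=0.45, mismatch_std=0.10)
    print(scores)   # highest Z score first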

  12. Mechanical Design Optimization Using Advanced Optimization Techniques

    CERN Document Server

    Rao, R Venkata

    2012-01-01

    Mechanical design includes an optimization process in which designers always consider objectives such as strength, deflection, weight, wear, corrosion, etc. depending on the requirements. However, design optimization for a complete mechanical assembly leads to a complicated objective function with a large number of design variables. It is good practice to apply optimization techniques to individual components or intermediate assemblies rather than to a complete assembly. Analytical or numerical methods for calculating the extreme values of a function may perform well in many practical cases, but may fail in more complex design situations. In real design problems, the number of design parameters can be very large and their influence on the value to be optimized (the goal function) can be very complicated, having nonlinear character. In these complex cases, advanced optimization algorithms offer solutions to the problems, because they find a solution near to the global optimum within reasonable time and computational ...

  13. Oil Reservoir Production Optimization using Optimal Control

    DEFF Research Database (Denmark)

    Völcker, Carsten; Jørgensen, John Bagterp; Stenby, Erling Halfdan

    2011-01-01

    Practical oil reservoir management involves solution of large-scale constrained optimal control problems. In this paper we present a numerical method for solution of large-scale constrained optimal control problems. The method is a single-shooting method that computes the gradients using the adjoint method. The method is demonstrated on an oil reservoir using water flooding and smart well technology. Compared to the uncontrolled case, the optimal operation increases the Net Present Value of the oil field by 10%.

  14. Particle Swarm Optimization Toolbox

    Science.gov (United States)

    Grant, Michael J.

    2010-01-01

    The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwin evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm is based on this combination of traits from parents to hopefully provide an improved solution than either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black-box" to the optimizers in which the only purpose of this function is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be numerical simulations, analytical functions, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry
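
    For readers unfamiliar with the update rule underlying SOPSO-type optimizers, a minimal single-objective particle swarm sketch is given below. The parameter values are generic illustrative choices, not the toolbox's actual implementation.

    import numpy as np

    def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Minimal single-objective particle swarm optimizer (minimization)."""
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        dim = lo.size
        x = lo + (hi - lo) * np.random.rand(n_particles, dim)   # positions
        v = np.zeros_like(x)                                    # velocities
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)].copy()
        for _ in range(iters):
            r1 = np.random.rand(n_particles, dim)
            r2 = np.random.rand(n_particles, dim)
            # classic velocity update: inertia + cognitive + social terms
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)].copy()
        return gbest, pbest_f.min()

    best_x, best_f = pso(lambda p: np.sum(p**2), bounds=([-5, -5], [5, 5]))
    print(best_x, best_f)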

  15. Packing Density Approach for Sustainable Development of Concrete

    Directory of Open Access Journals (Sweden)

    Sudarshan Dattatraya KORE

    2017-12-01

    This paper deals with the details of an optimized mix design for normal-strength concrete using the particle packing density method. The concrete mixes were also designed as per BIS: 10262-2009. Different water-cement ratios were used and kept the same in both design methods. An attempt has been made to obtain a sustainable and cost-effective concrete product by use of the particle packing density method. Parameters such as workability, compressive strength, cost analysis and carbon dioxide emission are discussed. The results of the study showed that the compressive strength of the concrete produced by the packing density method is close to the design compressive strength of the BIS code method. Adopting the packing density method for the design of concrete mixes resulted in an 11% cost saving with a 12% reduction in carbon dioxide emission.
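
    As a rough illustration of the quantity being maximized in such a mix design, the sketch below computes the packing density (and voids content) of a single aggregate from its loose bulk density and specific gravity. The numbers are hypothetical and the relation is only the basic definition, not the full BIS or particle-packing mix-design procedure.

    WATER_DENSITY = 1000.0  # kg/m^3

    def packing_density(bulk_density_kg_m3, specific_gravity):
        """Fraction of total volume actually occupied by solid particles."""
        particle_density = specific_gravity * WATER_DENSITY
        phi = bulk_density_kg_m3 / particle_density
        return phi, 1.0 - phi   # packing density, voids content

    # Hypothetical coarse aggregate: loose bulk density 1650 kg/m^3, specific gravity 2.7
    phi, voids = packing_density(bulk_density_kg_m3=1650.0, specific_gravity=2.7)
    print(f"packing density = {phi:.3f}, voids = {voids:.3f}")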

  16. Density of biogas digestate depending on temperature and composition.

    Science.gov (United States)

    Gerber, Mandy; Schneider, Nico

    2015-09-01

    Density is one of the most important physical properties of biogas digestate to ensure an optimal dimensioning and a precise design of biogas plant components like stirring devices, pumps and heat exchangers. In this study the density of biogas digestates with different compositions was measured using pycnometers at ambient pressure in a temperature range from 293.15 to 313.15K. The biogas digestates were taken from semi-continuous experiments, in which the marine microalga Nannochloropsis salina, corn silage and a mixture of both were used as feedstocks. The results show an increase of density with increasing total solid content and a decrease with increasing temperature. Three equations to calculate the density of biogas digestate were set up depending on temperature as well as on the total solid content, organic composition and elemental composition, respectively. All correlations show a relative deviation below 1% compared to experimental data. Copyright © 2015. Published by Elsevier Ltd.
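
    The correlations themselves are not reproduced in this record, but the type of fit described (density as a function of temperature and total solid content) can be illustrated with a least-squares sketch. The linear form and the data points below are assumptions for illustration, not the authors' equations or measurements.

    import numpy as np

    # Hypothetical measurements: temperature (K), total solids (mass %), density (kg/m^3)
    T   = np.array([293.15, 303.15, 313.15, 293.15, 303.15, 313.15])
    TS  = np.array([4.0,    4.0,    4.0,    8.0,    8.0,    8.0])
    rho = np.array([1013.0, 1009.5, 1006.0, 1022.0, 1018.5, 1015.0])

    # Fit an assumed linear form rho = a + b*TS + c*T by ordinary least squares
    A = np.column_stack([np.ones_like(T), TS, T])
    coef, *_ = np.linalg.lstsq(A, rho, rcond=None)
    a, b, c = coef
    print(f"rho(T, TS) ~ {a:.1f} + {b:.3f}*TS {c:+.3f}*T")
    print("max relative deviation:", np.max(np.abs(A @ coef - rho) / rho))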

  17. Level density of 57Co

    International Nuclear Information System (INIS)

    Mishra, V.; Boukharouba, N.; Brient, C.E.; Grimes, S.M.; Pedroni, R.S.

    1994-01-01

    Levels in 57Co have been studied in the region of resolved levels via the 57Fe(p,n)57Co neutron spectrum measured with resolution ΔE∼5 keV. Seventeen previously unknown levels are located. Level density parameters in the continuum region are deduced from thick-target measurements of the same reaction, and additional level density information is deduced from Ericson fluctuation studies of the reaction 56Fe(p,n)56Co. A set of level density parameters is found which describes the level density of 57Co at energies up to 14 MeV. Efforts to obtain level density information from the 56Fe(d,n)57Co reaction were unsuccessful, but estimates of the fraction of the deuteron absorption cross section corresponding to compound nucleus formation are obtained
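
    Continuum level density parameters of the kind extracted here are commonly expressed through the back-shifted Fermi-gas model. The sketch below evaluates that standard formula with placeholder parameter values (not the values deduced for 57Co in this work).

    import math

    def fermi_gas_level_density(E, a, delta, sigma):
        """Back-shifted Fermi-gas level density (levels per MeV).

        E     : excitation energy (MeV)
        a     : level density parameter (1/MeV)
        delta : back-shift / pairing energy (MeV)
        sigma : spin cut-off parameter
        """
        U = E - delta
        if U <= 0:
            raise ValueError("excitation energy must exceed the back-shift")
        return math.exp(2.0 * math.sqrt(a * U)) / (
            12.0 * math.sqrt(2.0) * sigma * a**0.25 * U**1.25)

    # Placeholder parameters, for illustration only
    print(fermi_gas_level_density(E=8.0, a=6.5, delta=1.0, sigma=3.5))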

  18. The density of cement phases

    International Nuclear Information System (INIS)

    Balonis, M.; Glasser, F.P.

    2009-01-01

    The densities of principal crystalline phases occurring in Portland cement are critically assessed and tabulated, in some cases with addition of new data. A reliable and self-consistent density set for crystalline phases was obtained by calculating densities from crystallographic data and unit cell contents. Independent laboratory work was undertaken to synthesize major AFm and AFt cement phases, determine their unit cell parameters and compare the results with those recorded in the literature. Parameters were refined from powder diffraction patterns using CELREF 2 software. A density value is presented for each phase, showing literature sources, in some cases describing limitations on the data, and the weighting attached to numerical values where an averaging process was used for accepted data. A brief discussion is made of the consequences of the packing of water to density changes in AFm and AFt structures.

  19. Euler's fluid equations: Optimal control vs optimization

    International Nuclear Information System (INIS)

    Holm, Darryl D.

    2009-01-01

    An optimization method used in image-processing (metamorphosis) is found to imply Euler's equations for incompressible flow of an inviscid fluid, without requiring that the Lagrangian particle labels exactly follow the flow lines of the Eulerian velocity vector field. Thus, an optimal control problem and an optimization problem for incompressible ideal fluid flow both yield the same Euler fluid equations, although their Lagrangian parcel dynamics are different. This is a result of the gauge freedom in the definition of the fluid pressure for an incompressible flow, in combination with the symmetry of fluid dynamics under relabeling of their Lagrangian coordinates. Similar ideas are also illustrated for SO(N) rigid body motion.

  20. Load flow optimization and optimal power flow

    CERN Document Server

    Das, J C

    2017-01-01

    This book discusses the major aspects of load flow, optimization, optimal load flow, and culminates in modern heuristic optimization techniques and evolutionary programming. In the deregulated environment, the economic provision of electrical power to consumers requires knowledge of maintaining a certain power quality and load flow. Many case studies and practical examples are included to emphasize real-world applications. The problems at the end of each chapter can be solved by hand calculations without having to use computer software. The appendices are devoted to calculations of line and cable constants, and solutions to the problems are included throughout the book.

  1. Optimal Training Systems STTR

    National Research Council Canada - National Science Library

    Best, Brad; Lovett, Marsha

    2005-01-01

    .... Using an optimal model of task performance subject to human constraints may be a more efficient way to develop models of skilled human performance for use in training, especially since optimal models...

  2. Optimal Alarm Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — An optimal alarm system is simply an optimal level-crossing predictor that can be designed to elicit the fewest false alarms for a fixed detection probability. It...

  3. Optimization under Uncertainty

    KAUST Repository

    Lopez, Rafael H.

    2016-01-01

    in optimization, the so-called reliability-based design. Subsequently, we present the risk optimization approach, which includes the expected costs of failure in the objective function. After that, the basic description of each approach is given, the projects

  4. Numerical simulation of logging-while-drilling density image by Monte-Carlo method

    International Nuclear Information System (INIS)

    Yue Aizhong; He Biao; Zhang Jianmin; Wang Lijuan

    2010-01-01

    A logging-while-drilling density system is studied by the Monte Carlo method. A model of the logging-while-drilling system is built, the tool response and azimuthal density image are obtained, and methods for processing the azimuthal density data are discussed. These results lay a foundation for optimizing the tool, developing new tools and interpreting logs. (authors)

  5. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...
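
    A common way to answer this "how many samples" question is the textbook sample-size relation n = (t * CV / E)^2, based on the coefficient of variation and an allowable relative error of the mean, with the t-value updated iteratively. The sketch below uses that generic relation, not the authors' specific analysis, and the CV and error values are placeholders.

    import math
    from scipy import stats

    def sample_size(cv, rel_error, alpha=0.05, max_iter=50):
        """Samples needed so the mean is within rel_error of truth at 1-alpha confidence.

        cv        : coefficient of variation of the property (e.g. 0.30 for 30%)
        rel_error : allowable relative error of the mean (e.g. 0.10 for 10%)
        """
        n = 30
        for _ in range(max_iter):
            t = stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)
            n_new = math.ceil((t * cv / rel_error) ** 2)
            if abs(n_new - n) <= 1:     # converged (allow a one-sample oscillation)
                return max(n_new, n)
            n = max(n_new, 2)
        return n

    # Hypothetical bulk density with 30% CV, targeting +/-10% error of the mean
    print(sample_size(cv=0.30, rel_error=0.10))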

  6. Pseudolinear functions and optimization

    CERN Document Server

    Mishra, Shashi Kant

    2015-01-01

    Pseudolinear Functions and Optimization is the first book to focus exclusively on pseudolinear functions, a class of generalized convex functions. It discusses the properties, characterizations, and applications of pseudolinear functions in nonlinear optimization problems.The book describes the characterizations of solution sets of various optimization problems. It examines multiobjective pseudolinear, multiobjective fractional pseudolinear, static minmax pseudolinear, and static minmax fractional pseudolinear optimization problems and their results. The authors extend these results to locally

  7. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc

  8. Obesity and Regional Immigrant Density.

    Science.gov (United States)

    Emerson, Scott D; Carbert, Nicole S

    2017-11-24

    Canada has an increasingly large immigrant population. Areas of higher immigrant density may relate to immigrants' health through reduced acculturation to Western foods, greater access to cultural foods, and/or promotion of salubrious values/practices. It is unclear, however, whether an association exists between Canada-wide regional immigrant density and obesity among immigrants. Thus, we examined whether regional immigrant density was related to obesity among immigrants. Adult immigrant respondents (n = 15,595) to a national population-level health survey were merged with region-level immigrant density data. Multi-level logistic regression was used to model the odds of obesity associated with increased immigrant density. The prevalence of obesity among the analytic sample was 16%. Increasing regional immigrant density was associated with lower odds of obesity among minority immigrants and long-term white immigrants. Immigrant density at the region level in Canada may be an important contextual factor to consider when examining obesity among immigrants.

  9. Density dependent hadron field theory

    International Nuclear Information System (INIS)

    Fuchs, C.; Lenske, H.; Wolter, H.H.

    1995-01-01

    A fully covariant approach to a density dependent hadron field theory is presented. The relation between in-medium NN interactions and field-theoretical meson-nucleon vertices is discussed. The medium dependence of nuclear interactions is described by a functional dependence of the meson-nucleon vertices on the baryon field operators. As a consequence, the Euler-Lagrange equations lead to baryon rearrangement self-energies which are not obtained when only a parametric dependence of the vertices on the density is assumed. It is shown that the approach is energy-momentum conserving and thermodynamically consistent. Solutions of the field equations are studied in the mean-field approximation. Descriptions of the medium dependence in terms of the baryon scalar and vector density are investigated. Applications to infinite nuclear matter and finite nuclei are discussed. Density dependent coupling constants obtained from Dirac-Brueckner calculations with the Bonn NN potentials are used. Results from Hartree calculations for energy spectra, binding energies, and charge density distributions of 16 O, 40,48 Ca, and 208 Pb are presented. Comparisons to data strongly support the importance of rearrangement in a relativistic density dependent field theory. Most striking is the simultaneous improvement of charge radii, charge densities, and binding energies. The results indicate the appearance of a new ''Coester line'' in the nuclear matter equation of state

  10. Measuring single-cell density.

    Science.gov (United States)

    Grover, William H; Bryan, Andrea K; Diez-Silva, Monica; Suresh, Subra; Higgins, John M; Manalis, Scott R

    2011-07-05

    We have used a microfluidic mass sensor to measure the density of single living cells. By weighing each cell in two fluids of different densities, our technique measures the single-cell mass, volume, and density of approximately 500 cells per hour with a density precision of 0.001 g mL(-1). We observe that the intrinsic cell-to-cell variation in density is nearly 100-fold smaller than the mass or volume variation. As a result, we can measure changes in cell density indicative of cellular processes that would be otherwise undetectable by mass or volume measurements. Here, we demonstrate this with four examples: identifying Plasmodium falciparum malaria-infected erythrocytes in a culture, distinguishing transfused blood cells from a patient's own blood, identifying irreversibly sickled cells in a sickle cell patient, and identifying leukemia cells in the early stages of responding to a drug treatment. These demonstrations suggest that the ability to measure single-cell density will provide valuable insights into cell state for a wide range of biological processes.
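
    The two-fluid weighing described above reduces to a pair of linear buoyancy equations (mb_i = m - rho_i * V). The sketch below shows that algebra with made-up, roughly erythrocyte-like numbers; it is an illustration of the principle, not the authors' analysis code.

    def cell_mass_volume_density(mb1, rho1, mb2, rho2):
        """Recover mass, volume and density of a cell from its buoyant masses
        mb1, mb2 measured in two fluids of densities rho1, rho2."""
        V = (mb1 - mb2) / (rho2 - rho1)   # subtract the two buoyancy equations
        m = mb1 + rho1 * V                # back-substitute to get the mass
        return m, V, m / V

    # Hypothetical buoyant masses (pg) in fluids of 1.000 and 1.050 g/mL
    m, V, rho = cell_mass_volume_density(mb1=9.0, rho1=1.000, mb2=4.5, rho2=1.050)
    print(f"mass = {m:.1f} pg, volume = {V:.1f} fL, density = {rho:.3f} g/mL")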

  11. Attractor comparisons based on density

    International Nuclear Information System (INIS)

    Carroll, T. L.

    2015-01-01

    Recognizing a chaotic attractor can be seen as a problem in pattern recognition. Some feature vector must be extracted from the attractor and used to compare to other attractors. The field of machine learning has many methods for extracting feature vectors, including clustering methods, decision trees, support vector machines, and many others. In this work, feature vectors are created by representing the attractor as a density in phase space and creating polynomials based on this density. Density is useful in itself because it is a one dimensional function of phase space position, but representing an attractor as a density is also a way to reduce the size of a large data set before analyzing it with graph theory methods, which can be computationally intensive. The density computation in this paper is also fast to execute. In this paper, as a demonstration of the usefulness of density, the density is used directly to construct phase space polynomials for comparing attractors. Comparisons between attractors could be useful for tracking changes in an experiment when the underlying equations are too complicated for vector field modeling
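
    A minimal sketch of the idea of turning an attractor into a phase-space density and then into a small feature vector of density-weighted polynomials. The choice of bins, monomial orders and test signals below is illustrative, not the feature set used in the paper.

    import numpy as np

    def density_features(x, y, bins=20, max_order=3):
        """Histogram a 2-D attractor into a phase-space density, then return
        density-weighted monomial moments <x^i y^j> as a feature vector."""
        H, xedges, yedges = np.histogram2d(x, y, bins=bins)
        xc = 0.5 * (xedges[:-1] + xedges[1:])
        yc = 0.5 * (yedges[:-1] + yedges[1:])
        X, Y = np.meshgrid(xc, yc, indexing="ij")
        w = H / H.sum()                      # normalized phase-space density
        return np.array([(w * X**i * Y**j).sum()
                         for i in range(max_order + 1)
                         for j in range(max_order + 1)])

    # Compare two (hypothetical) attractor reconstructions by correlating features
    t = np.linspace(0, 60, 8000)
    a = density_features(np.sin(t), np.sin(1.3 * t + 0.4))
    b = density_features(np.sin(t) + 0.01 * np.random.randn(t.size),
                         np.sin(1.3 * t + 0.4))
    print(np.corrcoef(a, b)[0, 1])   # close to 1 for similar attractors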

  12. Energy vs. density on paths toward more exact density functionals.

    Science.gov (United States)

    Kepp, Kasper P

    2018-03-14

    Recently, the progression toward more exact density functional theory has been questioned, implying a need for more formal ways to systematically measure progress, i.e. a "path". Here I use the Hohenberg-Kohn theorems and the definition of normality by Burke et al. to define a path toward exactness and "straying" from the "path" by separating errors in ρ and E[ρ]. A consistent path toward exactness involves minimizing both errors. Second, a suitably diverse test set of trial densities ρ' can be used to estimate the significance of errors in ρ without knowing the exact densities, which are often inaccessible. To illustrate this, the systems previously studied by Medvedev et al., the first ionization energies of atoms with Z = 1 to 10, the ionization energy of water, and the bond dissociation energies of five diatomic molecules were investigated using CCSD(T)/aug-cc-pV5Z as benchmark at chemical accuracy. Four functionals of distinct design were used: B3LYP, PBE, M06, and S-VWN. For atomic cations regardless of charge and compactness up to Z = 10, the energy effects of the different ρ are energy-wise insignificant. An interesting oscillating behavior in the density sensitivity is observed vs. Z, explained by orbital occupation effects. Finally, it is shown that even large "normal" problems such as the Co-C bond energy of cobalamins can use simpler (e.g. PBE) trial densities to drastically speed up computation by loss of a few kJ mol-1 in accuracy. The proposed method of using a test set of trial densities to estimate the sensitivity and significance of density errors of functionals may be useful for testing and designing new balanced functionals with more systematic improvement of densities and energies.

  13. Optimization and optimal control in automotive systems

    CERN Document Server

    Kolmanovsky, Ilya; Steinbuch, Maarten; Re, Luigi

    2014-01-01

    This book demonstrates the use of the optimization techniques that are becoming essential to meet the increasing stringency and variety of requirements for automotive systems. It shows the reader how to move away from earlier  approaches, based on some degree of heuristics, to the use of  more and more common systematic methods. Even systematic methods can be developed and applied in a large number of forms so the text collects contributions from across the theory, methods and real-world automotive applications of optimization. Greater fuel economy, significant reductions in permissible emissions, new drivability requirements and the generally increasing complexity of automotive systems are among the criteria that the contributing authors set themselves to meet. In many cases multiple and often conflicting requirements give rise to multi-objective constrained optimization problems which are also considered. Some of these problems fall into the domain of the traditional multi-disciplinary optimization applie...

  14. Experimental effects of herbivore density on above-ground plant biomass in an alpine grassland ecosystem

    OpenAIRE

    Austrheim, Gunnar; Speed, James David Mervyn; Martinsen, Vegard; Mulder, Jan; Mysterud, Atle

    2014-01-01

    Herbivores may increase or decrease aboveground plant productivity depending on factors such as herbivore density and habitat productivity. The grazing optimization hypothesis predicts a peak in plant production at intermediate herbivore densities, but has rarely been tested experimentally in an alpine field setting. In an experimental design with three densities of sheep (high, low, and no sheep), we harvested aboveground plant biomass in alpine grasslands prior to treatment and after five y...

  15. Optimal neutral beam heating scenario for FED

    International Nuclear Information System (INIS)

    Hively, L.M.; Houlberg, W.A.; Attenberger, S.E.

    1981-01-01

    Optimal neutral beam heating scenarios are determined for FED based on a 1-1/2-D transport analysis. Tradeoffs are examined between neutral beam energy, power, and species mix for positive ion systems. A ramped density startup is found to provide the most economical heating. The resulting plasma power requirements are reduced by 10-30% from a constant density startup. For beam energies between 100 and 200 keV, the power needed to heat the plasma does not decrease significantly as beam energy is increased. This is due to reduced ion heating, more power in the fractional energy components, and rising power supply requirements as beam energy increases

  16. Optimization and improvement of Halbach cylinder design

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Bahl, Christian Robert Haffenden; Smith, Anders

    2008-01-01

    possible volume of magnets with a given mean flux density in the cylinder bore. The volume of the cylinder bore could also be significantly increased by only slightly increasing the volume of the magnets, for a fixed mean flux density. Placing additional blocks of magnets on the end faces of the Halbach...... that this parameter was optimal for long Halbach cylinders with small r_ex. Using the previously mentioned additional blocks of magnets can improve the parameter by as much as 15% as well as improve the homogeneity of the field in the cylinder bore. ©2008 American Institute of Physics
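
    For orientation, the mean bore flux density that is traded against magnet volume in such studies follows, for an ideal infinitely long Halbach cylinder, the logarithmic relation B = B_r ln(r_o/r_i). The sketch below evaluates that relation with placeholder remanence and radii; it is not the optimization procedure of the paper.

    import math

    def halbach_bore_field(B_r, r_inner, r_outer):
        """Flux density in the bore of an ideal, infinitely long Halbach cylinder."""
        return B_r * math.log(r_outer / r_inner)

    def magnet_volume(r_inner, r_outer, length):
        """Volume of permanent-magnet material in the cylinder wall."""
        return math.pi * (r_outer**2 - r_inner**2) * length

    # Placeholder NdFeB-like remanence and geometry
    B = halbach_bore_field(B_r=1.2, r_inner=0.02, r_outer=0.05)
    V = magnet_volume(r_inner=0.02, r_outer=0.05, length=0.10)
    print(f"B = {B:.2f} T using {V*1e6:.0f} cm^3 of magnet")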

  17. Density limit in JT-60

    International Nuclear Information System (INIS)

    Kamada, Yutaka; Hosogane, Nobuyuki; Hirayama, Toshio; Tsunematsu, Toshihide

    1990-05-01

    This report studies mainly the density limit for a series of gas- and pellet-fuelled limiter discharges in JT-60. With pellet injection into high-current/low-q (q(a) = 2.3∼2.5) discharges, the Murakami factor reaches up to 10∼13 x 10^19 m^-2 T^-1. The values are about factors of 1.5∼2.0 higher than those for usual gas-fuelled discharges. The pellet-injected discharges have high central density, whereas the electron density in the outer region (a/2 ... P_abs and n_e^2(r=50 cm) x Z_eff(r=50 cm). (author)

  18. Charge density waves in solids

    CERN Document Server

    Gor'kov, LP

    2012-01-01

    The latest addition to this series covers a field which is commonly referred to as charge density wave dynamics. The most thoroughly investigated materials are inorganic linear chain compounds with highly anisotropic electronic properties. The volume opens with an examination of their structural properties and the essential features which allow charge density waves to develop. The behaviour of the charge density waves, where interesting phenomena are observed, is treated both from a theoretical and an experimental standpoint. The role of impurities in statics and dynamics is considered and an

  19. Magnetothermopower in unconventional density waves

    International Nuclear Information System (INIS)

    Dora, B.; Maki, K.; Vanyolos, A.; Virosztek, A.

    2003-10-01

    After a brief introduction on unconventional density waves (i.e. unconventional charge density wave (UCDW) and unconventional spin density wave (USDW)), we discuss the magnetotransport of the low temperature phase (LTP) of α-(BEDT-TTF)2KHg(SCN)4. Recently we have proposed that the low temperature phase in α-(BEDT-TTF)2KHg(SCN)4 should be UCDW. Here we show that UCDW describes very consistently the magnetothermopower of α-(BEDT-TTF)2KHg(SCN)4 observed by Choi et al. (author)

  20. Optimization of the ECT background coil

    International Nuclear Information System (INIS)

    Ballou, J.K.; Luton, J.N.

    1975-01-01

    This study was begun to optimize the Eccentric Coil Test (ECT) background coil. In the course of this work a general optimization code was obtained, tested, and applied to the ECT problem. So far this code has proven to be very satisfactory. The results obtained with this code and earlier codes have illustrated the parametric behavior of such a coil system and that the optimum for this type of system is broad. This study also shows that a background coil with a winding current density of less than 3000 A/cm^2 is not feasible for the ECT models presented in this paper

  1. optBINS: Optimal Binning for histograms

    Science.gov (United States)

    Knuth, Kevin H.

    2018-03-01

    optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
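
    A sketch of the posterior being maximized, following the piecewise-constant-density / multinomial-likelihood construction described above. The log-posterior expression below is the form usually quoted for Knuth's method; treat its exact details as an assumption and consult the optBINS code for the authoritative version.

    import numpy as np
    from scipy.special import gammaln

    def log_posterior_bins(data, m):
        """Relative log-posterior for m equal-width bins (Knuth-style optimal binning)."""
        n = len(data)
        counts, _ = np.histogram(data, bins=m)
        return (n * np.log(m)
                + gammaln(m / 2.0) - m * gammaln(0.5)
                - gammaln(n + m / 2.0)
                + np.sum(gammaln(counts + 0.5)))

    # Pick the bin count that maximizes the posterior for some sample data
    data = np.random.standard_normal(1000)
    m_opt = max(range(1, 101), key=lambda m: log_posterior_bins(data, m))
    print("optimal number of bins:", m_opt)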

  2. Transmission loss optimization in acoustic sandwich panels

    Science.gov (United States)

    Makris, S. E.; Dym, C. L.; MacGregor Smith, J.

    1986-06-01

    Considering the sound transmission loss (TL) of a sandwich panel as the single objective, different optimization techniques are examined and a sophisticated computer program is used to find the optimum TL. Also, for one of the possible case studies, core optimization, closed-form expressions are given relating TL to the core-design variables for different sets of skins. The significance of these functional relationships lies in the fact that the panel designer can bypass the necessity of using a sophisticated software package in order to assess explicitly the dependence of the TL on core thickness and density.

  3. Exact and Optimal Quantum Mechanics/Molecular Mechanics Boundaries.

    Science.gov (United States)

    Sun, Qiming; Chan, Garnet Kin-Lic

    2014-09-09

    Motivated by recent work in density matrix embedding theory, we define exact link orbitals that capture all quantum mechanical (QM) effects across arbitrary quantum mechanics/molecular mechanics (QM/MM) boundaries. Exact link orbitals are rigorously defined from the full QM solution, and their number is equal to the number of orbitals in the primary QM region. Truncating the exact set yields a smaller set of link orbitals optimal with respect to reproducing the primary region density matrix. We use the optimal link orbitals to obtain insight into the limits of QM/MM boundary treatments. We further analyze the popular general hybrid orbital (GHO) QM/MM boundary across a test suite of molecules. We find that GHOs are often good proxies for the most important optimal link orbital, although there is little detailed correlation between the detailed GHO composition and optimal link orbital valence weights. The optimal theory shows that anions and cations cannot be described by a single link orbital. However, expanding to include the second most important optimal link orbital in the boundary recovers an accurate description. The second optimal link orbital takes the chemically intuitive form of a donor or acceptor orbital for charge redistribution, suggesting that optimal link orbitals can be used as interpretative tools for electron transfer. We further find that two optimal link orbitals are also sufficient for boundaries that cut across double bonds. Finally, we suggest how to construct "approximately" optimal link orbitals for practical QM/MM calculations.

  4. FOREWORD: Special issue on density

    Science.gov (United States)

    Fujii, Kenichi

    2004-04-01

    This special issue on density was undertaken to provide readers with an overview of the present state of the density standards for solids, liquids and gases, as well as the technologies developed for measuring density. This issue also includes topics on the refractive index of gases and on techniques used for calibrating hydrometers so that almost all areas concerned with density standards are covered in four review articles and seven original articles, most of which describe current research being conducted at national metrology institutes (NMIs). A review article was invited from the Ruhr-Universität Bochum to highlight research on the magnetic suspension densimeters. In metrology, the determinations of the volume of a weight and the density of air are of primary importance in establishing a mass standard because the effect of the buoyancy force of air acting on the weight must be known accurately to determine the mass of the weight. A density standard has therefore been developed at many NMIs with a close relation to the mass standard. Hydrostatic weighing is widely used to measure the volume of a solid. The most conventional hydrostatic weighing method uses water as a primary density standard for measuring the volume of a solid. A brief history of the determination of the density of water is therefore given in a review article, as well as a recommended value for the density of water with a specified isotopic abundance. The most modern technique for hydrostatic weighing uses a solid density standard instead of water. For this purpose, optical interferometers for measuring the diameters of silicon spheres have been developed to convert the length standard into the volume standard with a small uncertainty. A review article is therefore dedicated to describing the state-of-the-art optical interferometers developed for silicon spheres. Relative combined standard uncertainties of several parts in 10^8 have been achieved today for measuring the volume and density of

  5. Methods of mathematical optimization

    Science.gov (United States)

    Vanderplaats, G. N.

    The fundamental principles of numerical optimization methods are reviewed, with an emphasis on potential engineering applications. The basic optimization process is described; unconstrained and constrained minimization problems are defined; a general approach to the design of optimization software programs is outlined; and drawings and diagrams are shown for examples involving (1) the conceptual design of an aircraft, (2) the aerodynamic optimization of an airfoil, (3) the design of an automotive-engine connecting rod, and (4) the optimization of a 'ski-jump' to assist aircraft in taking off from a very short ship deck.

  6. Optimization theory with applications

    CERN Document Server

    Pierre, Donald A

    1987-01-01

    Optimization principles are of undisputed importance in modern design and system operation. They can be used for many purposes: optimal design of systems, optimal operation of systems, determination of performance limitations of systems, or simply the solution of sets of equations. While most books on optimization are limited to essentially one approach, this volume offers a broad spectrum of approaches, with emphasis on basic techniques from both classical and modern work.After an introductory chapter introducing those system concepts that prevail throughout optimization problems of all typ

  7. Concepts of combinatorial optimization

    CERN Document Server

    Paschos, Vangelis Th

    2014-01-01

    Combinatorial optimization is a multidisciplinary scientific area, lying in the interface of three major scientific domains: mathematics, theoretical computer science and management.  The three volumes of the Combinatorial Optimization series aim to cover a wide range  of topics in this area. These topics also deal with fundamental notions and approaches as with several classical applications of combinatorial optimization.Concepts of Combinatorial Optimization, is divided into three parts:- On the complexity of combinatorial optimization problems, presenting basics about worst-case and randomi

  8. Introduction to Continuous Optimization

    DEFF Research Database (Denmark)

    Andreasson, Niclas; Evgrafov, Anton; Patriksson, Michael

    optimal solutions for continuous optimization models. The main part of the mathematical material therefore concerns the analysis and linear algebra that underlie the workings of convexity and duality, and necessary/sufficient local/global optimality conditions for continuous optimization problems. Natural...... algorithms are then developed from these optimality conditions, and their most important convergence characteristics are analyzed. The book answers many more questions of the form “Why?” and “Why not?” than “How?”. We use only elementary mathematics in the development of the book, yet are rigorous throughout...

  9. Optimization principles and the figure of merit for triboelectric generators.

    Science.gov (United States)

    Peng, Jun; Kang, Stephen Dongmin; Snyder, G Jeffrey

    2017-12-01

    Energy harvesting with triboelectric nanogenerators is a burgeoning field, with a growing portfolio of creative application schemes attracting much interest. Although power generation capabilities and its optimization are one of the most important subjects, a satisfactory elemental model that illustrates the basic principles and sets the optimization guideline remains elusive. We use a simple model to clarify how the energy generation mechanism is electrostatic induction but with a time-varying character that makes the optimal matching for power generation more restrictive. By combining multiple parameters into dimensionless variables, we pinpoint the optimum condition with only two independent parameters, leading to predictions of the maximum limit of power density, which allows us to derive the triboelectric material and device figure of merit. We reveal the importance of optimizing device capacitance, not only load resistance, and minimizing the impact of parasitic capacitance. Optimized capacitances can lead to an overall increase in power density of more than 10 times.

  10. Putting Priors in Mixture Density Mercer Kernels

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
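
    One simple way to see how a mixture-density model induces a valid (symmetric, positive semi-definite) kernel is to take the inner product of posterior membership vectors from a fitted mixture. The sketch below is such a toy construction on synthetic data; it only loosely mirrors the Bayesian-ensemble kernel of the paper.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical data: two noisy clusters in 2-D
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, size=(100, 2)),
                   rng.normal(4, 1, size=(100, 2))])

    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

    def mixture_density_kernel(A, B, model):
        """K(a, b) = sum_k P(k|a) P(k|b): inner product of posterior memberships,
        hence symmetric and positive semi-definite (a Mercer kernel)."""
        return model.predict_proba(A) @ model.predict_proba(B).T

    K = mixture_density_kernel(X, X, gmm)
    print(K.shape, np.allclose(K, K.T))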

  11. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single and multi-objective optimizations; penalty function based methodology; multi-objective based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining lots of popularity nowadays due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...

  12. Interactive Topology Optimization

    DEFF Research Database (Denmark)

    Nobel-Jørgensen, Morten

    Interactivity is the continuous interaction between the user and the application to solve a task. Topology optimization is the optimization of structures in order to improve stiffness or other objectives. The goal of the thesis is to explore how topology optimization can be used in applications, building on theory from human-computer interaction, which is described in Chapter 2, followed by a description of the foundations of topology optimization in Chapter 3. Our applications for topology optimization in 2D and 3D are described in Chapter 4, and a game which trains the human intuition of topology optimization is presented in Chapter 5. Topology optimization can also be used as an interactive modeling tool with local control, which is presented in Chapter 6. Finally, Chapter 7 contains a summary of the findings and concludes the dissertation. Most of the presented applications of the thesis are available

  13. Particle Swarm Optimization

    Science.gov (United States)

    Venter, Gerhard; Sobieszczanski-Sobieski Jaroslaw

    2002-01-01

    The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.

  14. SYNTHESIS, CHARACTERIZATION AND DENSITY FUNCTIONAL ...

    African Journals Online (AJOL)

    Preferred Customer

    We synthesized a number of aniline derivatives containing acyl groups to compare their barriers of rotation around ... KEY WORDS: Monoacyl aniline, Synthesis, Density functional theory, Rotation barrier. INTRODUCTION. Developments in ...

  15. Optimization of MIS/IL solar cells parameters using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, K.A.; Mohamed, E.A.; Alaa, S.H. [Faculty of Engineering, Alexandria Univ. (Egypt); Motaz, M.S. [Institute of Graduate Studies and Research, Alexandria Univ. (Egypt)

    2004-07-01

    This paper presents a genetic algorithm optimization for MIS/IL solar cell parameters including doping concentration N_A, metal work function φ_m, oxide thickness d_ox, mobile charge density N_m, fixed oxide charge density N_ox and the external back bias applied to the inversion grid V. The optimization results are compared with theoretical optimization and show that the genetic algorithm can be used for determining the optimum parameters of the cell. (orig.)

  16. High Density Lipoprotein and it's Dysfunction.

    Science.gov (United States)

    Eren, Esin; Yilmaz, Necat; Aydin, Ozgur

    2012-01-01

    Plasma high-density lipoprotein cholesterol (HDL-C) levels do not predict the functionality and composition of high-density lipoprotein (HDL). Traditionally, keeping levels of low-density lipoprotein cholesterol (LDL-C) down and HDL-C up has been the goal of patients to prevent atherosclerosis that can lead to coronary vascular disease (CVD). People think about the HDL present in their cholesterol test, but not about its functional capability. Up to 65% of cardiovascular deaths cannot be prevented by putative LDL-C lowering agents, which well explains the strong interest in HDL-increasing strategies. However, recent studies have questioned the good in using drugs to increase the level of HDL. While raising HDL is a theoretically attractive target, the optimal approach remains uncertain. The attention has turned to the quality, rather than the quantity, of HDL-C. An alternative to elevations in HDL involves strategies to enhance HDL functionality. The situation poses an opportunity for clinical chemists to take the lead in the development and validation of such biomarkers. The best known function of HDL is the capacity to promote cellular cholesterol efflux from peripheral cells and deliver cholesterol to the liver for excretion, thereby playing a key role in reverse cholesterol transport (RCT). The functions of HDL that have recently attracted attention include anti-inflammatory and anti-oxidant activities. High antioxidant and anti-inflammatory activities of HDL are associated with protection from CVD. This review addresses the current state of knowledge regarding assays of HDL functions and their relationship to CVD. HDL as a therapeutic target is the new frontier with huge potential for positive public health implications.

  17. Vibronic coupling density and related concepts

    International Nuclear Information System (INIS)

    Sato, Tohru; Uejima, Motoyuki; Iwahara, Naoya; Haruta, Naoki; Shizu, Katsuyuki; Tanaka, Kazuyoshi

    2013-01-01

    Vibronic coupling density is derived from a general point of view as a one-electron property density. Related concepts as well as their applications are presented. Linear and nonlinear vibronic coupling density and related concepts, orbital vibronic coupling density, reduced vibronic coupling density, atomic vibronic coupling constant, and effective vibronic coupling density, illustrate the origin of vibronic couplings and enable us to design novel functional molecules or to elucidate chemical reactions. Transition dipole moment density is defined as an example of the one-electron property density. Vibronic coupling density and transition dipole moment density open a way to design light-emitting molecules with high efficiency.

  18. A novel optimization method, Gravitational Search Algorithm (GSA), for PWR core optimization

    International Nuclear Information System (INIS)

    Mahmoudi, S.M.; Aghaie, M.; Bahonar, M.; Poursalehi, N.

    2016-01-01

    Highlights: • The Gravitational Search Algorithm (GSA) is introduced. • The advantage of GSA is verified on Shekel's Foxholes. • Reload optimization in WWER-1000 and WWER-440 cases is performed. • Maximizing K_eff, minimizing PPFs and flattening power density are considered. - Abstract: In-core fuel management optimization (ICFMO) is one of the most challenging concepts of nuclear engineering. In recent decades several meta-heuristic algorithms or computational intelligence methods have been applied to optimize reactor core loading patterns. This paper presents a new method of using the Gravitational Search Algorithm (GSA) for in-core fuel management optimization. The GSA is constructed based on the law of gravity and the notion of mass interactions. It uses the theory of Newtonian physics, and its searcher agents are a collection of masses. In this work, at the first step, the GSA method is compared with other meta-heuristic algorithms on Shekel's Foxholes problem. In the second step, for finding the best core, the GSA algorithm has been performed for three PWR test cases including WWER-1000 and WWER-440 reactors. In these cases, multi-objective optimizations with the following goals are considered: increment of the multiplication factor (K_eff), decrement of the power peaking factor (PPF) and power density flattening. It is notable that for the neutronic calculation, the PARCS (Purdue Advanced Reactor Core Simulator) code is used. The results demonstrate that the GSA algorithm has promising performance and could be proposed for other optimization problems in the nuclear engineering field.
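
    A compact sketch of the gravitational update that gives GSA its name, using generic constants. This mirrors the commonly cited form of the algorithm rather than the exact implementation coupled to the core-loading study.

    import numpy as np

    def gsa(objective, bounds, n_agents=20, iters=200, G0=100.0, alpha=20.0):
        """Minimal Gravitational Search Algorithm (minimization)."""
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        dim = lo.size
        x = lo + (hi - lo) * np.random.rand(n_agents, dim)
        v = np.zeros_like(x)
        eps = 1e-12
        for t in range(iters):
            fit = np.array([objective(p) for p in x])
            best, worst = fit.min(), fit.max()
            m = (fit - worst) / (best - worst + eps)      # raw masses (best agent -> 1)
            M = m / (m.sum() + eps)                       # normalized masses
            G = G0 * np.exp(-alpha * t / iters)           # decaying gravitational constant
            acc = np.zeros_like(x)
            for i in range(n_agents):
                for j in range(n_agents):
                    if i == j:
                        continue
                    R = np.linalg.norm(x[i] - x[j])
                    # acceleration of agent i due to the mass of agent j, randomly weighted
                    acc[i] += np.random.rand() * G * M[j] * (x[j] - x[i]) / (R + eps)
            v = np.random.rand(n_agents, dim) * v + acc
            x = np.clip(x + v, lo, hi)
        fit = np.array([objective(p) for p in x])
        return x[np.argmin(fit)], fit.min()

    best_x, best_f = gsa(lambda p: np.sum(p**2), bounds=([-5, -5], [5, 5]))
    print(best_x, best_f)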

  19. Rydberg energies using excited state density functional theory

    International Nuclear Information System (INIS)

    Cheng, C.-L.; Wu Qin; Van Voorhis, Troy

    2008-01-01

    We utilize excited state density functional theory (eDFT) to study Rydberg states in atoms. We show both analytically and numerically that semilocal functionals can give quite reasonable Rydberg energies from eDFT, even in cases where time dependent density functional theory (TDDFT) fails catastrophically. We trace these findings to the fact that in eDFT the Kohn-Sham potential for each state is computed using the appropriate excited state density. Unlike the ground state potential, which typically falls off exponentially, the sequence of excited state potentials has a component that falls off polynomially with distance, leading to a Rydberg-type series. We also address the rigorous basis of eDFT for these systems. Perdew and Levy have shown using the constrained search formalism that every stationary density corresponds, in principle, to an exact stationary state of the full many-body Hamiltonian. In the present context, this means that the excited state DFT solutions are rigorous as long as they deliver the minimum noninteracting kinetic energy for the given density. We use optimized effective potential techniques to show that, in some cases, the eDFT Rydberg solutions appear to deliver the minimum kinetic energy because the associated density is not pure state v-representable. We thus find that eDFT plays a complementary role to constrained DFT: The former works only if the excited state density is not the ground state of some potential while the latter applies only when the density is a ground state density.

  20. Density Distributions of Cyclotrimethylenetrinitramines (RDX)

    International Nuclear Information System (INIS)

    Hoffman, D M

    2002-01-01

    As part of the US Army Foreign Comparative Testing (FCT) program, the density distributions of six samples of class 1 RDX were measured using the density gradient technique. This technique was used in an attempt to distinguish RDX crystallized by a French manufacturer (designated insensitive RDX, or IRDX) from RDX manufactured at Holston Army Ammunition Plant (HAAP), the current source of RDX for the Department of Defense (DoD). Two samples from different lots of French IRDX had an average density of 1.7958 ± 0.0008 g/cc. The theoretical density of a perfect RDX crystal is 1.806 g/cc, so this corresponds to 99.43% of the theoretical maximum density (TMD). For two HAAP RDX lots the average density was 1.786 ± 0.002 g/cc, only 98.89% TMD. Several other techniques were used for preliminary characterization of one lot of French IRDX and two lots of HAAP RDX. Light scattering, SEM and polarized optical microscopy (POM) showed that the SNPE and Holston RDX had the appropriate particle size distribution for class 1 RDX. High performance liquid chromatography showed quantities of HMX in the HAAP RDX. The French IRDX also showed a 1.1 °C higher melting point compared to HAAP RDX in differential scanning calorimetry (DSC), consistent with no melting point depression due to the HMX contaminant. A second part of the program involved characterization of Holston RDX recrystallized using the French process. After reprocessing, the average density of the Holston RDX was increased to 1.7907 g/cc. Apparently HMX in RDX can act as a nucleating agent in the French RDX recrystallization process. The French IRDX contained no HMX, which is assumed to account for its higher density and narrower density distribution. Reprocessing of RDX from Holston improved the average density compared to the original Holston RDX, but the resulting HIRDX was not as dense as the original French IRDX. Recrystallized Holston IRDX crystals were much larger (3-500 µm or more) than either the original class 1 HAAP RDX or French
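
    The percentages of theoretical maximum density quoted above follow directly from dividing the measured densities by the perfect-crystal value; a quick check (values taken from the record, output rounded to one decimal):

        rho_tmd = 1.806                      # perfect RDX crystal density, g/cc
        for name, rho in [("French IRDX", 1.7958), ("HAAP RDX", 1.786)]:
            print(f"{name}: {100 * rho / rho_tmd:.1f} % TMD")   # -> about 99.4 and 98.9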

  1. High density thermite mixture for shaped charge ordnance disposal

    OpenAIRE

    Tamer Elshenawy; Salah Soliman; Ahmed Hawass

    2017-01-01

    The effect of a thermite mixture based on aluminum and ferric oxide for ammunition neutralization has been studied and tested. Thermochemical calculations have been carried out for different percentages of Al using the Chemical Equilibrium Code to predict the highest-performance thermite mixture for shaped charge ordnance disposal. Densities and enthalpies of different formulations have been calculated and demonstrated. The optimized thermite formulation has been prepared experimentally using col...

  2. Imaginary time density-density correlations for two-dimensional electron gases at high density

    Energy Technology Data Exchange (ETDEWEB)

    Motta, M.; Galli, D. E. [Dipartimento di Fisica, Università degli Studi di Milano, Via Celoria 16, 20133 Milano (Italy); Moroni, S. [IOM-CNR DEMOCRITOS National Simulation Center and SISSA, Via Bonomea 265, 34136 Trieste (Italy); Vitali, E. [Department of Physics, College of William and Mary, Williamsburg, Virginia 23187-8795 (United States)

    2015-10-28

    We evaluate imaginary time density-density correlation functions for two-dimensional homogeneous electron gases of up to 42 particles in the continuum using the phaseless auxiliary field quantum Monte Carlo method. We use periodic boundary conditions and up to 300 plane waves as basis set elements. We show that such methodology, once equipped with suitable numerical stabilization techniques necessary to deal with exponentials, products, and inversions of large matrices, gives access to the calculation of imaginary time correlation functions for medium-sized systems. We discuss the numerical stabilization techniques and the computational complexity of the methodology and we present the limitations related to the size of the systems on a quantitative basis. We perform the inverse Laplace transform of the obtained density-density correlation functions, assessing the ability of the phaseless auxiliary field quantum Monte Carlo method to evaluate dynamical properties of medium-sized homogeneous fermion systems.

  3. Comparison of density estimators. [Estimation of probability density functions

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.; Monahan, J.F.

    1977-09-01

    Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)
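
    As a concrete illustration of one of the families surveyed above, the following is a minimal sketch of the one-dimensional k-nearest-neighbor density estimator, f(x) = k / (2 n d_k(x)), where d_k(x) is the distance from x to its k-th nearest sample; the data and the choice of k are arbitrary assumptions:

        import numpy as np

        def knn_density_1d(x_eval, samples, k=10):
            """k-nearest-neighbor density estimate: f(x) = k / (2 * n * d_k(x))."""
            samples = np.asarray(samples)
            n = len(samples)
            d_k = np.array([np.sort(np.abs(samples - x))[k - 1] for x in np.atleast_1d(x_eval)])
            return k / (2.0 * n * d_k)

        rng = np.random.default_rng(0)
        data = rng.normal(size=1000)                 # samples from a standard normal
        xs = np.linspace(-3, 3, 7)
        print(knn_density_1d(xs, data, k=25))        # rough estimate of the N(0,1) density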

  4. Discharge optimization in JT-60

    International Nuclear Information System (INIS)

    Ninomiya, H.; Hosogane, N.; Kikuchi, M.; Yoshino, R.; Seki, S.; Kurihara, K.; Kimura, T.; Shimada, R.; Matsukawa, M.

    1986-01-01

    For the optimization of the feedback control gains of the plasma control system in JT-60, plasma modelling by regression analysis, matrix transfer function analysis and simulation studies were performed. The experimental results of plasma control are consistent with these estimates, and the usefulness of modelling by regression analysis, matrix transfer function analysis and simulation is experimentally confirmed. It is also shown that regression analysis is useful for developing the sensor algorithm for the plasma shape and the location of the separatrix line in a feedback control system. Some topics of plasma engineering obtained in JT-60 are also presented: the possibility of suppressing the uncontrollability of the plasma density, αI_p control for the plasma position, and volt-second consumption

  5. Molecular Model for HNBR with Tunable Cross-Link Density.

    Science.gov (United States)

    Molinari, N; Khawaja, M; Sutton, A P; Mostofi, A A

    2016-12-15

    We introduce a chemically inspired, all-atom model of hydrogenated nitrile butadiene rubber (HNBR) and assess its performance by computing the mass density and glass-transition temperature as a function of cross-link density in the structure. Our HNBR structures are created by a procedure that mimics the real process used to produce HNBR, that is, saturation of the carbon-carbon double bonds in NBR, either by hydrogenation or by cross-linking. The atomic interactions are described by the all-atom "Optimized Potentials for Liquid Simulations" (OPLS-AA). In this paper, first, we assess the use of OPLS-AA in our models, especially using NBR bulk properties, and second, we evaluate the validity of the proposed model for HNBR by investigating mass density and glass transition as a function of the tunable cross-link density. Experimental densities are reproduced within 3% for both elastomers, and qualitatively correct trends in the glass-transition temperature as a function of monomer composition and cross-link density are obtained.

  6. Workshop on Computational Optimization

    CERN Document Server

    2015-01-01

    Our everyday life is unthinkable without optimization. We try to minimize our effort and to maximize the achieved profit. Many real-world and industrial problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks. This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2013. It presents recent advances in computational optimization. The volume includes important real-life problems like parameter settings for controlling processes in a bioreactor, resource-constrained project scheduling, problems arising in transport services, error correcting codes, optimal system performance and energy consumption, and so on. It shows how to develop algorithms for them based on new metaheuristic methods like evolutionary computation, ant colony optimization, constraint programming and others.

  7. Optimal placement of capacitors

    Directory of Open Access Journals (Sweden)

    N. Gnanasekaran

    2016-06-01

    Optimal size and location of shunt capacitors in the distribution system play a significant role in minimizing the energy loss and the cost of reactive power compensation. This paper presents a new efficient technique to find the optimal size and location of shunt capacitors with the objective of minimizing the cost due to energy loss and reactive power compensation of the distribution system. A new Shark Smell Optimization (SSO) algorithm is proposed to solve the optimal capacitor placement problem while satisfying the operating constraints. The SSO algorithm is a recently developed metaheuristic optimization algorithm conceptualized from the shark’s hunting ability. It uses a momentum-incorporated gradient search and a rotational-movement-based local search for optimization. To demonstrate the applicability of the proposed method, it is tested on IEEE 34-bus and 118-bus radial distribution systems. The simulation results obtained are compared with previous methods reported in the literature and found to be encouraging.
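
    The record names the two ingredients of SSO, a momentum-incorporated gradient search and a rotational local search, without giving details. The sketch below is only a loose, continuous-variable illustration of those two ingredients (finite-difference gradients, made-up step sizes and objective); it is not the authors' discrete capacitor-placement formulation.

        import numpy as np

        def sso_like_minimize(f, x0, n_sharks=10, n_iter=100, eta=0.05, alpha=0.5,
                              n_local=5, local_radius=0.1, seed=0):
            """Loose sketch of momentum-incorporated gradient steps plus a rotational local search."""
            rng = np.random.default_rng(seed)
            dim = len(x0)
            X = x0 + rng.normal(scale=0.5, size=(n_sharks, dim))   # initial shark positions
            V = np.zeros_like(X)

            def grad(x, h=1e-6):                                   # finite-difference gradient
                e = np.eye(dim) * h
                return np.array([(f(x + e[i]) - f(x - e[i])) / (2 * h) for i in range(dim)])

            for _ in range(n_iter):
                for i in range(n_sharks):
                    # momentum-incorporated (negative) gradient step
                    V[i] = -eta * rng.random() * grad(X[i]) + alpha * rng.random() * V[i]
                    forward = X[i] + V[i]
                    # rotational local search: sample candidate points around the forward position
                    candidates = forward + local_radius * rng.normal(size=(n_local, dim))
                    pool = np.vstack([forward[None, :], candidates])
                    X[i] = pool[np.argmin([f(p) for p in pool])]

            best = min(X, key=f)
            return best, f(best)

        # Toy usage on a simple quadratic objective
        best_x, best_f = sso_like_minimize(lambda v: np.sum((v - 1.0)**2), x0=np.zeros(3))
        print(best_x, best_f)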

  8. Optimal guidance law in quantum mechanics

    International Nuclear Information System (INIS)

    Yang, Ciann-Dong; Cheng, Lieh-Lieh

    2013-01-01

    Following de Broglie’s idea of a pilot wave, this paper treats quantum mechanics as a problem of stochastic optimal guidance law design. The guidance scenario considered in the quantum world is that an electron is the flight vehicle to be guided and its accompanying pilot wave is the guidance law to be designed so as to guide the electron to a random target driven by the Wiener process, while minimizing a cost-to-go function. After solving the stochastic optimal guidance problem by differential dynamic programming, we point out that the optimal pilot wave guiding the particle’s motion is just the wavefunction Ψ(t,x), a solution to the Schrödinger equation; meanwhile, the closed-loop guidance system forms a complex state–space dynamics for Ψ(t,x), from which quantum operators emerge naturally. Quantum trajectories under the action of the optimal guidance law are solved and their statistical distribution is shown to coincide with the prediction of the probability density function Ψ*Ψ. -- Highlights: •Treating quantum mechanics as a pursuit-evasion game. •Reveal an interesting analogy between guided flight motion and guided quantum motion. •Solve optimal quantum guidance problem by dynamic programming. •Gives a formal proof of de Broglie–Bohm’s idea of a pilot wave. •The optimal pilot wave is shown to be a wavefunction solved from Schrödinger equation

  9. Optimal guidance law in quantum mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ciann-Dong, E-mail: cdyang@mail.ncku.edu.tw; Cheng, Lieh-Lieh, E-mail: leo8101@hotmail.com

    2013-11-15

    Following de Broglie’s idea of a pilot wave, this paper treats quantum mechanics as a problem of stochastic optimal guidance law design. The guidance scenario considered in the quantum world is that an electron is the flight vehicle to be guided and its accompanying pilot wave is the guidance law to be designed so as to guide the electron to a random target driven by the Wiener process, while minimizing a cost-to-go function. After solving the stochastic optimal guidance problem by differential dynamic programming, we point out that the optimal pilot wave guiding the particle’s motion is just the wavefunction Ψ(t,x), a solution to the Schrödinger equation; meanwhile, the closed-loop guidance system forms a complex state–space dynamics for Ψ(t,x), from which quantum operators emerge naturally. Quantum trajectories under the action of the optimal guidance law are solved and their statistical distribution is shown to coincide with the prediction of the probability density function Ψ*Ψ. -- Highlights: •Treating quantum mechanics as a pursuit-evasion game. •Reveal an interesting analogy between guided flight motion and guided quantum motion. •Solve optimal quantum guidance problem by dynamic programming. •Gives a formal proof of de Broglie–Bohm’s idea of a pilot wave. •The optimal pilot wave is shown to be a wavefunction solved from Schrödinger equation.

  10. Optimization of Inventory

    OpenAIRE

    PROKOPOVÁ, Nikola

    2017-01-01

    The subject of this thesis is the optimization of inventory in a selected organization. Inventory optimization is a very important topic for every organization because it reduces storage costs. At the beginning, inventory theory is presented: the meaning and types of inventory, inventory control, and different methods and models of inventory control. Inventory optimization in an enterprise can be achieved by using models of inventory control. In the second part, the company on which is...

  11. Search engine optimization

    OpenAIRE

    Marolt, Klemen

    2013-01-01

    Search engine optimization techniques, often shortened to “SEO,” should lead to first positions in organic search results. Some optimization techniques do not change over time, yet still form the basis for SEO. However, as the Internet and web design evolves dynamically, new optimization techniques flourish and flop. Thus, we looked at the most important factors that can help to improve positioning in search results. It is important to emphasize that none of the techniques can guarantee high ...

  12. LOGISTICS OPTIMIZATION USING ONTOLOGIES

    OpenAIRE

    Hendi , Hayder; Ahmad , Adeel; Bouneffa , Mourad; Fonlupt , Cyril

    2014-01-01

    Logistics processes involve complex physical flows and the integration of different elements. It is widely observed that uncontrolled processes can degrade the state of logistics. The optimization of logistic processes can support the desired growth and consistent continuity of logistics. In this paper, we present a software framework for logistic process optimization. It primarily defines logistic ontologies and then optimizes them. It intends to assist the design of...

  13. Optimization and approximation

    CERN Document Server

    Pedregal, Pablo

    2017-01-01

    This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.

  14. Dictionary descent in optimization

    OpenAIRE

    Temlyakov, Vladimir

    2015-01-01

    The problem of convex optimization is studied. Usually in convex optimization the minimization is over a d-dimensional domain. Very often the convergence rate of an optimization algorithm depends on the dimension d. The algorithms studied in this paper utilize dictionaries instead of a canonical basis used in the coordinate descent algorithms. We show how this approach allows us to reduce dimensionality of the problem. Also, we investigate which properties of a dictionary are beneficial for t...

  15. POSTDOC : THE HUMAN OPTIMIZATION

    OpenAIRE

    Satish Gajawada

    2013-01-01

    This paper is dedicated to everyone who is interested in Artificial Intelligence. John Henry Holland proposed the Genetic Algorithm in the early 1970s. Ant Colony Optimization was proposed by Marco Dorigo in 1992. Particle Swarm Optimization was introduced by Kennedy and Eberhart in 1995. Storn and Price introduced Differential Evolution in 1996. K.M. Passino introduced the Bacterial Foraging Optimization Algorithm in 2002. In 2003, X.L. Li proposed the Artificial Fish Swarm Algorithm....

  16. Optimization with Extremal Dynamics

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Percus, Allon G.

    2001-01-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard discrete optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. Extremal optimization successively updates extremely undesirable variables of a single suboptimal solution, assigning them new, random values. Large fluctuations ensue, efficiently exploring many local optima. We use extremal optimization to elucidate the phase transition in the 3-coloring problem, and we provide independent confirmation of previously reported extrapolations for the ground-state energy of ±J spin glasses in d=3 and 4
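
    A minimal sketch of the basic extremal optimization loop described above, applied to graph 3-coloring: the vertex with the worst local fitness (most conflicting incident edges) is repeatedly given a new, random color. The toy graph is an assumption, and the published method usually replaces the deterministic worst-variable choice with a rank-based probabilistic selection (tau-EO).

        import random

        def extremal_coloring(edges, n_nodes, n_colors=3, steps=20000, seed=0):
            """Basic extremal optimization: recolor the most undesirable vertex at random."""
            rng = random.Random(seed)
            adj = [[] for _ in range(n_nodes)]
            for u, v in edges:
                adj[u].append(v)
                adj[v].append(u)

            color = [rng.randrange(n_colors) for _ in range(n_nodes)]
            best = list(color)
            best_conflicts = conflicts = sum(color[u] == color[v] for u, v in edges)

            for _ in range(steps):
                if conflicts == 0:
                    break
                # local fitness of each vertex = number of conflicting incident edges
                local = [sum(color[u] == color[w] for w in adj[u]) for u in range(n_nodes)]
                worst = max(range(n_nodes), key=lambda u: local[u])
                color[worst] = rng.randrange(n_colors)        # assign a new, random value
                conflicts = sum(color[u] == color[v] for u, v in edges)
                if conflicts < best_conflicts:
                    best_conflicts, best = conflicts, list(color)

            return best, best_conflicts

        # Toy usage: a 5-cycle is 3-colorable, so the best conflict count should reach 0
        edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
        print(extremal_coloring(edges, n_nodes=5))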

  17. Maintenance optimization after RCM

    International Nuclear Information System (INIS)

    Doyle, E.K.; Lee, C.-G.; Cho, D.

    2005-01-01

    Variant forms of RCM (Reliability Centered Maintenance) have been the maintenance optimization tools of choice in industry for the last 20 years. Several such optimization techniques have been implemented at the Bruce Nuclear Station. Further cost refinement of the Station preventive maintenance strategy, whereby decisions are based on statistical analysis of historical failure data, is now being evaluated. The evaluation includes a requirement to demonstrate that earlier optimization projects have long-term positive impacts. This proved to be a significant challenge. Eventually a methodology was developed using Crow/AMSAA (Army Materials Systems Analysis Activity) plots to justify expenditures on further optimization efforts. (authors)

  18. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-paramet

  19. Optimization under Uncertainty

    KAUST Repository

    Lopez, Rafael H.

    2016-01-06

    The goal of this poster is to present the main approaches to the optimization of engineering systems in the presence of uncertainties. We begin by giving an insight into robust optimization. Next, we detail how to deal with probabilistic constraints in optimization, the so-called reliability-based design. Subsequently, we present the risk optimization approach, which includes the expected costs of failure in the objective function. After the basic description of each approach is given, the projects developed by CORE are presented. Finally, the main current topic of research at CORE is described.
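
    To make the probabilistic constraints mentioned above concrete, here is a minimal Monte Carlo sketch of a reliability check, estimating the failure probability of a design against a random load; the limit-state function, load distribution, candidate designs and target failure probability are arbitrary assumptions, not taken from the poster.

        import numpy as np

        def failure_probability(design, n_samples=200_000, seed=0):
            """Monte Carlo estimate of P[g(X; design) <= 0] for a toy limit state:
            capacity (the design value) minus a random load."""
            rng = np.random.default_rng(seed)
            load = rng.lognormal(mean=0.0, sigma=0.25, size=n_samples)   # random demand
            g = design - load                                            # limit-state function
            return np.mean(g <= 0.0)

        # Reliability-based design: accept the cheapest design whose estimated
        # failure probability stays below a target, e.g. 1e-3.
        for capacity in (1.5, 2.0, 2.5):
            print(capacity, failure_probability(capacity))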

  20. Beam optimization: improving methodology

    International Nuclear Information System (INIS)

    Quinteiro, Guillermo F.

    2004-01-01

    Different optimization techniques commonly used in biology and food technology allow a systematic and complete analysis of response functions. In spite of the great interest in medical and nuclear physics in the problem of optimizing mixed beams, little attention has been given to sophisticated mathematical tools. Indeed, many techniques are perfectly suited to the typical problem of beam optimization. This article is intended as a guide to the use of two methods, namely Response Surface Methodology and Simplex, which are expected to speed up the optimization process and, at the same time, give more insight into the relationships among the dependent variables controlling the response
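
    Of the two methods named above, the Simplex search is available in standard libraries as the Nelder-Mead method; a hedged sketch on a stand-in response function (the quadratic objective and the two-beam-weight parameterization are assumptions, not a dose model):

        from scipy.optimize import minimize

        # Stand-in "response" of a mixed beam as a function of two beam weights;
        # in practice this would be the measured or simulated figure of merit.
        def negative_response(w):
            w1, w2 = w
            # maximize the response = minimize its negative
            return -( -(w1 - 0.3)**2 - 2.0 * (w2 - 0.6)**2 + 1.0 )

        result = minimize(negative_response, x0=[0.5, 0.5], method="Nelder-Mead")
        print(result.x, -result.fun)    # optimal weights and the corresponding response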

  1. Integer and combinatorial optimization

    CERN Document Server

    Nemhauser, George L

    1999-01-01

    Rave reviews for INTEGER AND COMBINATORIAL OPTIMIZATION ""This book provides an excellent introduction and survey of traditional fields of combinatorial optimization . . . It is indeed one of the best and most complete texts on combinatorial optimization . . . available. [And] with more than 700 entries, [it] has quite an exhaustive reference list.""-Optima ""A unifying approach to optimization problems is to formulate them like linear programming problems, while restricting some or all of the variables to the integers. This book is an encyclopedic resource for such f

  2. Optimization : insights and applications

    CERN Document Server

    Brinkhuis, Jan

    2005-01-01

    This self-contained textbook is an informal introduction to optimization through the use of numerous illustrations and applications. The focus is on analytically solving optimization problems with a finite number of continuous variables. In addition, the authors provide introductions to classical and modern numerical methods of optimization and to dynamic optimization. The book's overarching point is that most problems may be solved by the direct application of the theorems of Fermat, Lagrange, and Weierstrass. The authors show how the intuition for each of the theoretical results can be s

  3. Initiating statistical maintenance optimization

    International Nuclear Information System (INIS)

    Doyle, E. Kevin; Tuomi, Vesa; Rowley, Ian

    2007-01-01

    Since the 1980s, maintenance optimization has been centered around various formulations of Reliability Centered Maintenance (RCM). Several such optimization techniques have been implemented at the Bruce Nuclear Station. Further cost refinement of the Station preventive maintenance strategy includes the evaluation of statistical optimization techniques. A review of successful pilot efforts in this direction is provided, as well as initial work with graphical analysis. The present situation regarding data sourcing, the principal impediment to the use of stochastic methods in previous years, is discussed. The use of Crow/AMSAA (Army Materials Systems Analysis Activity) plots is demonstrated from the point of view of justifying expenditures in optimization efforts. (author)
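
    The Crow/AMSAA plot referred to in this and the preceding record is a log-log plot of cumulative failures against cumulative operating time, on which the power-law model N(t) = lambda * t^beta appears as a straight line of slope beta. A minimal sketch with made-up failure times:

        import numpy as np

        # Cumulative operating times at which failures occurred (hypothetical data)
        t = np.array([120.0, 410.0, 980.0, 1700.0, 2600.0, 4100.0, 6200.0, 9000.0])
        n = np.arange(1, len(t) + 1)             # cumulative number of failures

        # Crow/AMSAA: N(t) = lam * t**beta  ->  log N = log lam + beta * log t
        beta, log_lam = np.polyfit(np.log(t), np.log(n), 1)
        lam = np.exp(log_lam)
        print(f"beta = {beta:.2f} (beta < 1 indicates reliability growth), lambda = {lam:.3g}")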

  4. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2008-01-01

    Optimization problems arising in practice involve random model parameters. This book features many illustrations, several examples, and applications to concrete problems from engineering and operations research.

  5. High density data storage principle, technology, and materials

    CERN Document Server

    Zhu, Daoben

    2009-01-01

    The explosive increase in information and the miniaturization of electronic devices demand new recording technologies and materials that combine high density, fast response, long retention time and rewriting capability. As predicted, current silicon-based computer circuits are reaching their physical limits. Further miniaturization of electronic components and increases in data storage density are vital for the next generation of IT equipment such as ultra-high-speed mobile computing, communication devices and sophisticated sensors. This original book presents a comprehensive introduction to the significant research achievements on high-density data storage from the aspects of recording mechanisms, materials and fabrication technologies, which are promising for overcoming the physical limits of current data storage systems. The book serves as a useful guide for the development of optimized materials, technologies and device structures for future information storage, and will lead readers to the fascin...

  6. Magnetic behavior study of samarium nitride using density functional theory

    Science.gov (United States)

    Som, Narayan N.; Mankad, Venu H.; Dabhi, Shweta D.; Patel, Anjali; Jha, Prafulla K.

    2018-02-01

    In this work, state-of-the-art density functional theory is employed to study the structural, electronic and magnetic properties of samarium nitride (SmN). We have performed calculations for both ferromagnetic and antiferromagnetic states in the rock-salt phase. The calculated optimized lattice parameter and magnetic moment agree well with the available experimental and theoretical values. From the energy band diagram and the electronic density of states, we observe half-metallic behaviour in the FM phase of rock-salt SmN, while the AFM I and AFM III phases are metallic. We present and discuss our current understanding of the possible half-metallicity together with the magnetic ordering in SmN. The calculated phonon dispersion curves show the dynamical stability of the considered structures. The phonon density of states and the Eliashberg function have also been analysed to understand the superconductivity in SmN.

  7. Simulation of density measurements in plasma wakefields using photon acceleration

    CERN Document Server

    Kasim, Muhammad Firmansyah; Ceurvorst, Luke; Sadler, James; Burrows, Philip N; Trines, Raoul; Holloway, James; Wing, Matthew; Bingham, Robert; Norreys, Peter

    2015-01-01

    One obstacle in plasma accelerator development is the limitation of techniques to diagnose and measure plasma wakefield parameters. In this paper, we present a novel concept for the density measurement of a plasma wakefield using photon acceleration, supported by extensive particle in cell simulations of a laser pulse that copropagates with a wakefield. The technique can provide the perturbed electron density profile in the laser’s reference frame, averaged over the propagation length, to be accurate within 10%. We discuss the limitations that affect the measurement: small frequency changes, photon trapping, laser displacement, stimulated Raman scattering, and laser beam divergence. By considering these processes, one can determine the optimal parameters of the laser pulse and its propagation length. This new technique allows a characterization of the density perturbation within a plasma wakefield accelerator.

  8. Simulation of density measurements in plasma wakefields using photon acceleration

    Directory of Open Access Journals (Sweden)

    Muhammad Firmansyah Kasim

    2015-03-01

    One obstacle in plasma accelerator development is the limitation of techniques to diagnose and measure plasma wakefield parameters. In this paper, we present a novel concept for the density measurement of a plasma wakefield using photon acceleration, supported by extensive particle in cell simulations of a laser pulse that copropagates with a wakefield. The technique can provide the perturbed electron density profile in the laser’s reference frame, averaged over the propagation length, to be accurate within 10%. We discuss the limitations that affect the measurement: small frequency changes, photon trapping, laser displacement, stimulated Raman scattering, and laser beam divergence. By considering these processes, one can determine the optimal parameters of the laser pulse and its propagation length. This new technique allows a characterization of the density perturbation within a plasma wakefield accelerator.

  9. Methodology to evaluation of the density in radiographic image

    International Nuclear Information System (INIS)

    Louzada, M.J.Q.; Pela, C.A.; Belangero, W.D.; Santos-Pinto, R.

    1998-01-01

    This study was designed to optimize the optical densitometry technique for radiographic images by sectorization of the characteristic curves of the radiographic films. We used 24 radiographs of a stepped aluminium wedge that were taken without rigorous processing control and were developed manually. For each radiograph, the densitometric values of the step images and the corresponding thicknesses were used to generate the particular mathematical expressions that represent its characteristic densitometric curve, which were then used for sectorization. The densitometric values were obtained with a Macbeth TD528 densitometer. The study showed that sectorization improved the representation of the relationship between the optical density of the step images of the wedge and the corresponding thickness, with a mean square error of around 10^-5. This improvement will allow the use of this methodology in quantitative evaluations of bone mass from radiographic images. (author)
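
    The basic step the sectorization supports, inverting a radiograph's characteristic curve to map a measured optical density back to an aluminum-equivalent thickness, can be sketched as below. The step-wedge readings are made-up numbers, and simple monotone interpolation stands in for the per-sector analytical expressions fitted by the authors.

        import numpy as np

        # Step-wedge calibration for ONE radiograph (hypothetical readings):
        step_thickness_mm = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
        optical_density   = np.array([2.10, 1.85, 1.62, 1.41, 1.22, 1.05, 0.91, 0.80])

        def thickness_from_density(od):
            """Invert the characteristic curve: optical density -> Al-equivalent thickness."""
            # np.interp needs an increasing x-axis, so flip the (decreasing) density axis
            return np.interp(od, optical_density[::-1], step_thickness_mm[::-1])

        # Al-equivalent thickness of a region of interest reading an optical density of 1.30
        print(thickness_from_density(1.30))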

  10. Design, Fabrication, and Optimization of Jatropha Sheller

    Directory of Open Access Journals (Sweden)

    Richard P. TING

    2012-07-01

    A study designed, fabricated, and optimized the performance of a jatropha sheller consisting of a mainframe, a rotary cylinder, a stationary cylinder, and a transmission system. The evaluation and optimization considered moisture content, clearance, and roller speed as independent parameters, while the responses comprised recovery, bulk density factor, shelling capacity, energy utilization of the sheller, whole kernel recovery, oil recovery, and energy utilization by the extruder. Moisture content failed to affect the response variables. The clearance affected all response variables except the energy utilization of the extruder. Roller speed affected shelling capacity, whole kernel recovery, and energy utilization of the extruder. Optimization resulted in operating conditions of 9.5% wb moisture content, a clearance of 6 mm, and a roller speed of 750 rpm.

  11. Current interruption by density depression

    International Nuclear Information System (INIS)

    Wagner, J.S.; Tajima, T.; Akasofu, S.I.

    1985-04-01

    Using a one-dimensional electrostatic particle code, we examine processes associated with current interruption in a collisionless plasma when a density depression is present along the current channel. Current interruption due to double layers was suggested by Alfven and Carlqvist (1967) as a cause of solar flares. At a local density depression, plasma instabilities caused by an electron current flow are accentuated, leading to current disruption. Our simulation study encompasses a wide range of the parameters in such a way that under appropriate conditions, both the Alfven and Carlqvist (1967) regime and the Smith and Priest (1972) regime take place. In the latter regime the density depression decays into a stationary structure (''ion-acoustic layer'') which spawns a series of ion-acoustic ''solitons'' and ion phase space holes travelling upstream. A large inductance of the current circuit tends to enhance the plasma instabilities

  12. Sleep spindle density in narcolepsy

    DEFF Research Database (Denmark)

    Christensen, Julie Anja Engelhard; Nikolic, Miki; Hvidtfelt, Mathias

    2017-01-01

    BACKGROUND: Patients with narcolepsy type 1 (NT1) show alterations in sleep stage transitions, rapid-eye-movement (REM) and non-REM sleep due to the loss of hypocretinergic signaling. However, the sleep microstructure has not yet been evaluated in these patients. We aimed to evaluate whether the sleep spindle (SS) density is altered in patients with NT1 compared to controls and patients with narcolepsy type 2 (NT2). METHODS: All-night polysomnographic recordings from 28 NT1 patients, 19 NT2 patients, 20 controls (C) with narcolepsy-like symptoms, but with normal cerebrospinal fluid hypocretin levels and multiple sleep latency tests, and 18 healthy controls (HC) were included. Unspecified, slow, and fast SS were automatically detected, and SS densities were defined as number per minute and were computed across sleep stages and sleep cycles. The between-cycle trends of SS densities in N2

  13. High Energy Density Laboratory Astrophysics

    CERN Document Server

    Lebedev, Sergey V

    2007-01-01

    During the past decade, research teams around the world have developed astrophysics-relevant research utilizing high energy-density facilities such as intense lasers and z-pinches. Every two years, at the International conference on High Energy Density Laboratory Astrophysics, scientists interested in this emerging field discuss the progress in topics covering: - Stellar evolution, stellar envelopes, opacities, radiation transport - Planetary Interiors, high-pressure EOS, dense plasma atomic physics - Supernovae, gamma-ray bursts, exploding systems, strong shocks, turbulent mixing - Supernova remnants, shock processing, radiative shocks - Astrophysical jets, high-Mach-number flows, magnetized radiative jets, magnetic reconnection - Compact object accretion disks, x-ray photoionized plasmas - Ultrastrong fields, particle acceleration, collisionless shocks. These proceedings cover many of the invited and contributed papers presented at the 6th International Conference on High Energy Density Laboratory Astrophys...

  14. Energy vs. density on paths toward exact density functionals

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta

    2018-01-01

    Recently, the progression toward more exact density functional theory has been questioned, implying a need for more formal ways to systematically measure progress, i.e. a “path”. Here I use the Hohenberg-Kohn theorems and the definition of normality by Burke et al. to define a path toward exactness...

  15. Time-dependent density-functional tight-binding method with the third-order expansion of electron density.

    Science.gov (United States)

    Nishimoto, Yoshio

    2015-09-07

    We develop a formalism for the calculation of excitation energies and excited state gradients for the self-consistent-charge density-functional tight-binding method with the third-order contributions of a Taylor series of the density functional theory energy with respect to the fluctuation of electron density (time-dependent density-functional tight-binding (TD-DFTB3)). The formulation of the excitation energy is based on the existing time-dependent density functional theory and the older TD-DFTB2 formulae. The analytical gradient is computed by solving Z-vector equations, and it requires one to calculate the third-order derivative of the total energy with respect to density matrix elements due to the inclusion of the third-order contributions. The comparison of adiabatic excitation energies for selected small and medium-size molecules using the TD-DFTB2 and TD-DFTB3 methods shows that the inclusion of the third-order contributions does not affect excitation energies significantly. A different set of parameters, which are optimized for DFTB3, slightly improves the prediction of adiabatic excitation energies statistically. The application of TD-DFTB for the prediction of absorption and fluorescence energies of cresyl violet demonstrates that TD-DFTB3 reproduced the experimental fluorescence energy quite well.

  16. Density dependence of the nuclear energy-density functional

    Science.gov (United States)

    Papakonstantinou, Panagiota; Park, Tae-Sun; Lim, Yeunhwan; Hyun, Chang Ho

    2018-01-01

    Background: The explicit density dependence in the coupling coefficients entering the nonrelativistic nuclear energy-density functional (EDF) is understood to encode effects of three-nucleon forces and dynamical correlations. The necessity for the density-dependent coupling coefficients to assume the form of a preferably small fractional power of the density ρ is empirical and the power is often chosen arbitrarily. Consequently, precision-oriented parametrizations risk overfitting in the regime of saturation and extrapolations in dilute or dense matter may lose predictive power. Purpose: Beginning with the observation that the Fermi momentum k_F, i.e., the cubic root of the density, is a key variable in the description of Fermi systems, we first wish to examine if a power hierarchy in a k_F expansion can be inferred from the properties of homogeneous matter in a domain of densities, which is relevant for nuclear structure and neutron stars. For subsequent applications we want to determine a functional that is of good quality but not overtrained. Method: For the EDF, we fit systematically polynomial and other functions of ρ^(1/3) to existing microscopic, variational calculations of the energy of symmetric and pure neutron matter (pseudodata) and analyze the behavior of the fits. We select a form and a set of parameters, which we found robust, and examine the parameters' naturalness and the quality of resulting extrapolations. Results: A statistical analysis confirms that low-order terms such as ρ^(1/3) and ρ^(2/3) are the most relevant ones in the nuclear EDF beyond lowest order. It also hints at a different power hierarchy for symmetric vs. pure neutron matter, supporting the need for more than one density-dependent term in nonrelativistic EDFs. The functional we propose easily accommodates known or adopted properties of nuclear matter near saturation. More importantly, upon extrapolation to dilute or asymmetric matter, it reproduces a range of existing microscopic
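
    A minimal sketch of the fitting step described above: least-squares fitting of the energy per nucleon of homogeneous matter as a polynomial in the variable x = ρ^(1/3). The pseudodata below are synthetic numbers shaped only to have a minimum of about -16 MeV near saturation density, not the microscopic calculations used in the paper.

        import numpy as np

        # Synthetic symmetric-matter pseudodata (illustrative only): E/A in MeV vs rho in fm^-3
        rho = np.linspace(0.04, 0.32, 15)
        u = rho / 0.16
        e_per_a = 22.1 * u**(2/3) - 73.2 * u + 35.1 * u**(5/3)

        # Fit E/A as a polynomial in x = rho^(1/3), the Fermi-momentum-like variable
        x = rho ** (1.0 / 3.0)
        coeffs = np.polyfit(x, e_per_a, deg=5)          # powers of rho^(1/3) up to rho^(5/3)
        print("coefficients of x^5 ... x^0:", np.round(coeffs, 2))
        print("max residual (MeV):", np.abs(np.polyval(coeffs, x) - e_per_a).max())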

  17. Optimization and Optimal Control in Automotive Systems

    NARCIS (Netherlands)

    Waschl, H.; Kolmanovsky, I.V.; Steinbuch, M.; Re, del L.

    2014-01-01

    This book demonstrates the use of the optimization techniques that are becoming essential to meet the increasing stringency and variety of requirements for automotive systems. It shows the reader how to move away from earlier approaches, based on some degree of heuristics, to the use of more and

  18. Simulating QCD at finite density

    CERN Document Server

    de Forcrand, Philippe

    2009-01-01

    In this review, I recall the nature and the inevitability of the "sign problem" which plagues attempts to simulate lattice QCD at finite baryon density. I present the main approaches used to circumvent the sign problem at small chemical potential. I sketch how one can predict analytically the severity of the sign problem, as well as the numerically accessible range of baryon densities. I review progress towards the determination of the pseudo-critical temperature T_c(mu), and towards the identification of a possible QCD critical point. Some promising advances with non-standard approaches are reviewed.

  19. Momentum density maps for molecules

    International Nuclear Information System (INIS)

    Cook, J.P.D.; Brion, C.E.

    1982-01-01

    Momentum-space and position-space molecular orbital density functions computed from LCAO-MO-SCF wavefunctions are used to rationalize the shapes of some momentum distributions measured by binary (e,2e) spectroscopy. A set of simple rules is presented which enable one to sketch the momentum density function and the momentum distribution from a knowledge of the position-space wavefunction and the properties and effects of the Fourier Transform and the spherical average. Selected molecular orbitals of H 2 , N 2 and CO 2 are used to illustrate this work

  20. Photoionization and High Density Gas

    Science.gov (United States)

    Kallman, T.; Bautista, M.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We present results of calculations using the XSTAR version 2 computer code. This code is loosely based on the XSTAR v.1 code which has been available for public use for some time. However it represents an improvement and update in several major respects, including atomic data, code structure, user interface, and improved physical description of ionization/excitation. In particular, it now is applicable to high density situations in which significant excited atomic level populations are likely to occur. We describe the computational techniques and assumptions, and present sample runs with particular emphasis on high density situations.

  1. Flashing coupled density wave oscillation

    International Nuclear Information System (INIS)

    Jiang Shengyao; Wu Xinxin; Zhang Youjie

    1997-07-01

    The experiment was performed on the test loop (HRTL-5), which simulates the geometry and system design of the 5 MW reactor. The phenomena and mechanisms of different kinds of two-phase flow instabilities, namely geyser instability, flashing instability and flashing coupled density wave instability, are described. The flashing coupled density wave instability, which has not been well studied, is analyzed using a one-dimensional non-thermo-equilibrium two-phase flow drift model computer code. Calculations are in good agreement with the experimental results. (5 refs., 5 figs., 1 tab.)

  2. High-density multicore fibers

    DEFF Research Database (Denmark)

    Takenaga, K.; Matsuo, S.; Saitoh, K.

    2016-01-01

    High-density single-mode multicore fibers were designed and fabricated. A heterogeneous 30-core fiber realized a low crosstalk of −55 dB. A quasi-single-mode homogeneous 31-core fiber attained the highest core count as a single-mode multicore fiber.

  3. High density operation in pulsator

    International Nuclear Information System (INIS)

    Klueber, O.; Cannici, B.; Engelhardt, W.; Gernhardt, J.; Glock, E.; Karger, F.; Lisitano, G.; Mayer, H.M.; Meisel, D.; Morandi, P.

    1976-03-01

    This report summarizes the results of experiments at high electron densities (>10^14 cm^-3) which have been achieved by pulsed gas inflow during the discharge. At these densities a regime is established which is characterized by β_p > 1, n_i ≈ n_e, T_i ≈ T_e and τ_E ∝ n_e. Thus the toroidal magnetic field contributes considerably to the plasma confinement and the ions constitute almost half of the plasma pressure. Furthermore, the confinement is appreciably improved and the plasma becomes impermeable to hot neutrals. (orig.) [de

  4. On the (non-)optimality of Michell structures

    DEFF Research Database (Denmark)

    Sigmund, Ole; Aage, Niels; Andreassen, Erik

    2016-01-01

    Optimal analytical Michell frame structures have been extensively used as benchmark examples in topology optimization, including truss, frame, homogenization, density and level-set based approaches. However, as we will point out, partly the interpretation of Michell’s structural continua as discrete frame structures is not accurate and partly, it turns out that limiting structural topology to frame-like structures is a rather severe design restriction and results in structures that are quite far from being stiffness optimal. The paper discusses the interpretation of Michell’s theory in the context of numerical topology optimization and compares various topology optimization results obtained with the frame restriction to cases with no design restrictions. For all examples considered, the true stiffness optimal structures are composed of sheets (2D) or closed-walled shell structures (3D...

  5. Creating Great Neighborhoods: Density in Your Community

    Science.gov (United States)

    This report highlights nine community-led efforts to create vibrant neighborhoods through density, discusses the connections between smart growth and density, and introduces design principles to ensure that density becomes a community asset.

  6. Numerical Optimization in Microfluidics

    DEFF Research Database (Denmark)

    Jensen, Kristian Ejlebjærg

    2017-01-01

    Numerical modelling can illuminate the working mechanism and limitations of microfluidic devices. Such insights are useful in their own right, but one can take advantage of numerical modelling in a systematic way using numerical optimization. In this chapter we will discuss when and how numerical optimization is best used.

  7. Optimization of surface maintenance

    International Nuclear Information System (INIS)

    Oeverland, E.

    1990-01-01

    The present conference paper deals with methods of optimizing the surface maintenance of steel offshore installations. The paper aims at identifying important approaches to the problems regarding the long-range planning of an economical and cost-effective maintenance program. The methods of optimization are based on experience gained from the maintenance of installations on the Norwegian continental shelf. 3 figs

  8. Economically optimal thermal insulation

    Energy Technology Data Exchange (ETDEWEB)

    Berber, J.

    1978-10-01

    Exemplary calculations show that exact adherence to the demands of the thermal insulation ordinance does not lead to an optimal solution with regard to economics. This is independent of the mode of financing. Optimal thermal insulation exceeds the values given in the thermal insulation ordinance.

  9. Optimizing Plutonium stock management

    International Nuclear Information System (INIS)

    Niquil, Y.; Guillot, J.

    1997-01-01

    Plutonium from spent fuel reprocessing is reused in new MOX assemblies. Since plutonium isotopic composition deteriorates with time, it is necessary to optimize plutonium stock management over a long period, to guarantee safe procurement, and contribute to a nuclear fuel cycle policy at the lowest cost. This optimization is provided by the prototype software POMAR

  10. Optimal Aging and Death

    DEFF Research Database (Denmark)

    Dalgaard, Carl-Johan Lars; Strulik, Holger

    2010-01-01

    This study introduces physiological aging into a simple model of optimal intertemporal consumption. In this endeavor we draw on the natural science literature on aging. According to the proposed theory, the speed of the aging process and the time of death are endogenously determined by optimal...

  11. Asymptotically Optimal Agents

    OpenAIRE

    Lattimore, Tor; Hutter, Marcus

    2011-01-01

    Artificial general intelligence aims to create agents capable of learning to solve arbitrary interesting problems. We define two versions of asymptotic optimality and prove that no agent can satisfy the strong version while in some cases, depending on discounting, there does exist a non-computable weak asymptotically optimal agent.

  12. Natural selection and optimality

    International Nuclear Information System (INIS)

    Torres, J.L.

    1989-01-01

    It is assumed that Darwin's principle translates into optimal regimes of operation along metabolical pathways in an ecological system. Fitness is then defined in terms of the distance of a given individual's thermodynamic parameters from their optimal values. The method is illustrated testing maximum power as a criterion of merit satisfied in ATP synthesis. (author). 26 refs, 2 figs

  13. Optimization in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Geraldo R.M. da [Sao Paulo Univ., Sao Carlos, SP (Brazil). Escola de Engenharia

    1994-12-31

    This paper partially discusses the advantages and disadvantages of the optimal power flow. It shows some of the difficulties of implementation and proposes solutions. An analysis is made comparing the power flow, BIGPOWER/CESP, and the optimal power flow, FPO/SEL, developed by the author, when applied to the CEPEL-ELETRONORTE and CESP systems. (author) 8 refs., 5 tabs.

  14. Symbiotic Optimization of Behavior

    Science.gov (United States)

    2015-05-01

    Symbiotic Optimization of Behavior. University of Washington, May 2015. Final technical report; approved for public release, distribution unlimited. Contract number FA8750-12-1-0304.

  15. Perceptually optimal color reproduction

    NARCIS (Netherlands)

    Yendrikhovskij, S.N.; Blommaert, F.J.J.; Ridder, de H.; Rogowitz, B.E.; Pappas, T.N.

    1998-01-01

    What requirements do people place on optimal color reproduction of real-life scenes? We suggest that when people look at images containing familiar categories of objects, two primary factors shape their subjective impression of how optimal colors are reproduced: perceived naturalness and perceived

  16. Level densities in nuclear physics

    International Nuclear Information System (INIS)

    Beckerman, M.

    1978-01-01

    In the independent-particle model, nucleons move independently in a central potential. There is a well-defined set of single-particle orbitals, each nucleon occupies one of these orbitals subject to Fermi statistics, and the total energy of the nucleus is equal to the sum of the energies of the individual nucleons. The basic question is the range of validity of this Fermi gas description and, in particular, the roles of the residual interactions and collective modes. A detailed examination of experimental level densities in light-mass systems is given to provide some insight into these questions. Level densities over the first 10 MeV or so in excitation energy, as deduced from neutron and proton resonance data and from spectra of low-lying bound levels, are discussed. To exhibit some of the salient features of these data, comparisons to independent-particle (shell) model calculations are presented. Shell structure is predicted to manifest itself through discontinuities in the single-particle level density at the Fermi energy and through variations in the occupancy of the valence orbitals. These predictions are examined through combinatorial calculations performed with the Grover [Phys. Rev. 157, 832 (1967); 185, 1303 (1969)] odometer method. Before the discussion of the experimental results, statistical mechanical level densities for spherical nuclei are reviewed. After consideration of deformed nuclei, the conclusions resulting from this work are drawn. 7 figures, 3 tables
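
    The combinatorial (odometer-type) counting mentioned above can be illustrated by a brute-force sketch: given a set of single-particle energies, count the N-particle configurations falling into each excitation-energy bin. The small single-particle spectrum below is an arbitrary assumption, and spin degeneracy is ignored to keep the example short.

        from itertools import combinations
        from collections import Counter

        # Toy single-particle spectrum (MeV) and particle number
        sp_energies = [0.0, 1.0, 1.5, 2.2, 3.0, 3.6, 4.1, 5.0]
        n_particles = 4
        bin_width = 1.0

        ground = sum(sorted(sp_energies)[:n_particles])      # lowest possible total energy

        # Count many-body configurations (Slater determinants) per excitation-energy bin
        counts = Counter()
        for occ in combinations(sp_energies, n_particles):
            ex = sum(occ) - ground                            # excitation energy of this configuration
            counts[int(ex // bin_width)] += 1

        for b in sorted(counts):
            print(f"E* in [{b*bin_width:.1f}, {(b+1)*bin_width:.1f}) MeV: "
                  f"{counts[b]} states, level density ~ {counts[b]/bin_width:.1f} / MeV")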

  17. Solar corona electron density distribution

    International Nuclear Information System (INIS)

    Esposito, P.B.; Edenhofer, P.; Lueneburg, E.

    1980-01-01

    Three and one-half months of single-frequency (f_0 = 2.2 × 10^9 Hz) time delay data (earth-to-spacecraft and return signal travel time) were acquired from the Helios 2 spacecraft around the time of its solar occultation (May 16, 1976). Following the determination of the spacecraft trajectory, the excess time delay due to the integrated effect of free electrons along the signal's ray path could be separated and modeled. An average solar corona, equatorial, electron density profile during solar minimum was deduced from time delay measurements acquired within 5-60 solar radii (R_S) of the sun. As a point of reference, at 10 R_S from the sun we find an average electron density of 4500 el cm^-3. However, there appears to be an asymmetry in the electron density as the ray path moved from the west (pre-occultation) to the east (post-occultation) solar limb. This may be related to the fact that during entry into occultation the heliographic latitude of the ray path (at closest approach to the sun) was about 6°, whereas during exit it became -7°. The Helios electron density model is compared with similar models deduced from a variety of different experimental techniques. Within 5-20 R_S of the sun the models separate according to solar minimum or maximum conditions; however, anomalies are evident

  18. Electron densities in planetary nebulae

    International Nuclear Information System (INIS)

    Stanghellini, L.; Kaler, J.B.

    1989-01-01

    Electron densities for 146 planetary nebulae have been obtained by analyzing a large sample of forbidden lines, interpolating theoretical curves obtained from solutions of the five-level atoms using up-to-date collision strengths and transition probabilities. Electron temperatures were derived from forbidden N II and/or forbidden O III lines or were estimated from the He II 4686 A line strengths. The forbidden O II densities are generally lower than those from forbidden Cl III by an average factor of 0.65. For data sets in which forbidden O II and forbidden S II were observed in common, the forbidden O II values drop to 0.84 times those of the forbidden S II, implying that the outermost parts of the nebulae might have elevated densities. The forbidden Cl III and forbidden Ar IV densities show the best correlation, especially where they have been obtained from common data sets. The data give results within 30 percent of one another, assuming homogeneous nebulae. 106 refs

  19. High density matter at RHIC

    Indian Academy of Sciences (India)

    QCD predicts a phase transition between hadronic matter and a quark-gluon plasma at high energy density. The relativistic heavy ion collider (RHIC) at Brookhaven National Laboratory is a new facility dedicated to the experimental study of matter under extreme conditions. Already the first round of experimental results at ...

  20. density-dependent selection revisited

    Indian Academy of Sciences (India)

    Unknown

    is a more useful way of looking at density-dependent selection, and then go on ... these models was that the condition for maintenance of ... In a way, their formulation may be viewed as ... different than competition among species, and typical.

  1. Modern charge-density analysis

    CERN Document Server

    Gatti, Carlo

    2012-01-01

    Focusing on developments from the past 10-15 years, this volume presents an objective overview of the research in charge density analysis. The most promising methodologies are included, in addition to powerful interpretative tools and a survey of important areas of research.

  2. High current density ion source

    International Nuclear Information System (INIS)

    King, H.J.

    1977-01-01

    A high-current-density ion source with high total current is achieved by individually directing the beamlets from an electron bombardment ion source through screen and accelerator electrodes. The openings in these screen and accelerator electrodes are oriented and positioned to direct the individual beamlets substantially toward a focus point. 3 figures, 1 table

  3. The density limit in Tokamaks

    International Nuclear Information System (INIS)

    Alladio, F.

    1985-01-01

    A short summary of the present status of experimental observations, theoretical ideas and understanding of the density limit in tokamaks is presented. It is the result of the discussion that was held on this topic at the 4th European Tokamak Workshop in Copenhagen (December 4th to 6th, 1985). 610 refs

  4. Density estimation from local structure

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2009-11-01

    Mixture Model (GMM) density function of the data and the log-likelihood scores are compared to the scores of a GMM trained with the expectation maximization (EM) algorithm on 5 real-world classification datasets (from the UCI collection). They show...
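
    The EM-trained GMM baseline mentioned in the record can be reproduced in a few lines with scikit-learn; the synthetic two-cluster data and the number of components below are assumptions standing in for a UCI dataset, and the record's own local-structure estimator is not reproduced here.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Synthetic two-cluster data standing in for a real classification dataset
        X = np.vstack([rng.normal(0, 1, size=(300, 2)),
                       rng.normal(4, 0.5, size=(300, 2))])

        gmm = GaussianMixture(n_components=2, random_state=0).fit(X)   # EM training
        print("average log-likelihood:", gmm.score(X))                 # mean log-likelihood per sample
        print("log-density at a point:", gmm.score_samples(np.array([[0.0, 0.0]]))[0])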

  5. Dual model for parton densities

    International Nuclear Information System (INIS)

    El Hassouni, A.; Napoly, O.

    1981-01-01

    We derive power-counting rules for quark densities near x=1 and x=0 from parton interpretations of one-particle inclusive dual amplitudes. Using these rules, we give explicit expressions for quark distributions (including charm) inside hadrons. We can then show the compatibility between fragmentation and recombination descriptions of low-p/sub perpendicular/ processes

  6. Micro Coriolis Gas Density Sensor

    NARCIS (Netherlands)

    Sparreboom, Wouter; Ratering, Gijs; Kruijswijk, Wim; van der Wouden, E.J.; Groenesteijn, Jarno; Lötters, Joost Conrad

    2017-01-01

    In this paper we report on gas density measurements using a micro Coriolis sensor. The technology to fabricate the sensor is based on surface channel technology. The measurement tube is freely suspended and has a wall thickness of only 1 micron. This renders the sensor extremely sensitive to changes

  7. Method of measuring surface density

    International Nuclear Information System (INIS)

    Gregor, J.

    1982-01-01

    A method is described for measuring the surface density or thickness, preferably of coating layers, using radiation emitted by a suitable radionuclide, e.g., 241Am. The radiation impinges on the measured material, e.g., a copper foil; depending on its surface density or thickness, part of the impinging radiation flux is reflected and part penetrates through the material. The radiation which has penetrated through the material excites, in a replaceable adjustable backing, characteristic radiation of an energy close to that of the impinging radiation (within ±30 keV). Part of the characteristic radiation flux spreads back to the detector, penetrating through the material, in which it is partly absorbed depending on the surface density or thickness of the coating layer. The flux of the penetrated characteristic radiation impinging on the face of the detector is thus a function of surface density or thickness. Only the part of the energy spectrum which corresponds to the energy of the characteristic radiation is evaluated. (B.S.)

  8. Information Density and Syntactic Repetition

    Science.gov (United States)

    Temperley, David; Gildea, Daniel

    2015-01-01

    In noun phrase (NP) coordinate constructions (e.g., NP and NP), there is a strong tendency for the syntactic structure of the second conjunct to match that of the first; the second conjunct in such constructions is therefore low in syntactic information. The theory of uniform information density predicts that low-information syntactic…

  9. Bolt Thread Stress Optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2012-01-01

    of threads and therefore indirectly the bolt fatigue life. The root shape is circular, and from shape optimization for minimum stress concentration it is well known that the circular shape is seldom optimal. An axisymmetric Finite Element (FE) formulation is used to analyze the bolted connection, and a study...... is performed to establish the need for contact modeling with regard to finding the correct stress concentration factor. Optimization is performed with a simple parameterization with two design variables. Stress reduction of up to 9% is found in the optimization process, and some similarities are found...... in the optimized designs leading to the proposal of a new standard. The reductions in the stress are achieved by rather simple changes made to the cutting tool....

  10. Overall bolt stress optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2013-01-01

    The state of stress in bolts and nuts with International Organization for Standardization metric thread design is examined and optimized. The assumed failure mode is fatigue, so the applied preload and the load amplitude together with the stress concentrations define the connection strength....... Maximum stress in the bolt is found at the fillet under the head, at the thread start, or at the thread root. To minimize the stress concentration, shape optimization is applied. Nut shape optimization also has a positive effect on the maximum stress. The optimization results show that designing a nut......, which results in a more even distribution of load along the engaged thread, has a limited influence on the maximum stress due to the stress concentration at the first thread root. To further reduce the maximum stress, the transition from bolt shank to the thread must be optimized. Stress reduction...

  11. Workshop on Computational Optimization

    CERN Document Server

    2016-01-01

    This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2014, held in Warsaw, Poland, September 7-10, 2014. The book presents recent advances in computational optimization. The volume includes important real problems such as parameter settings for controlling processes in bioreactors and other systems, resource-constrained project scheduling, infection distribution, molecular distance geometry, quantum computing, real-time management and optimal control, bin packing, medical image processing, localization of abrupt atmospheric contamination sources, and so on. It shows how to develop algorithms for them based on new metaheuristic methods such as evolutionary computation, ant colony optimization, constraint programming and others. This research demonstrates how real-world problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks.

  12. Flight plan optimization

    Science.gov (United States)

    Dharmaseelan, Anoop; Adistambha, Keyne D.

    2015-05-01

    Fuel cost accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning flights on optimized routes. Routes can be optimized by searching for the best connections based on a cost function defined by the airline. The most common algorithm used to optimize route search is Dijkstra's; it produces a static result, and the time taken for the search is relatively long. This paper experiments with a new algorithm for route-search optimization that combines the principles of simulated annealing and genetic algorithms. The experimental route-search results presented are shown to be computationally fast and accurate compared with timings from a genetic algorithm. The new algorithm is well suited to the random routing feature that is highly sought by many regional operators.
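
    For reference, a minimal sketch of the Dijkstra baseline named above; the connection graph and edge costs are invented placeholders rather than airline data, and the simulated-annealing/genetic hybrid itself is not reproduced here.

```python
# Hedged sketch of the Dijkstra baseline mentioned above; the connection graph
# and costs are invented placeholders, not airline data.
import heapq

def dijkstra(graph: dict, start: str, goal: str):
    """Return (total_cost, route) for the cheapest path, or (inf, []) if unreachable."""
    queue = [(0.0, start, [start])]
    best = {start: 0.0}
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        for nxt, edge_cost in graph.get(node, {}).items():
            new_cost = cost + edge_cost
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt, route + [nxt]))
    return float("inf"), []

# Toy connection graph: edge weights stand in for an airline-defined cost function.
connections = {
    "AAA": {"BBB": 120.0, "CCC": 90.0},
    "BBB": {"DDD": 60.0},
    "CCC": {"BBB": 20.0, "DDD": 150.0},
}
print(dijkstra(connections, "AAA", "DDD"))   # -> (170.0, ['AAA', 'CCC', 'BBB', 'DDD'])
```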

  13. System performance optimization

    International Nuclear Information System (INIS)

    Bednarz, R.J.

    1978-01-01

    System Performance Optimization has become an important and difficult field for large scientific computer centres: important because the centres must satisfy increasing user demands at the lowest possible cost, and difficult because it requires a deep understanding of hardware, software and workload. The optimization is a dynamic process depending on changes in the hardware configuration, the current level of the operating system and the user-generated workload. With the increasing complexity of computer systems and software, the field for optimization manoeuvres broadens. The hardware of two manufacturers, IBM and CDC, is discussed. Four IBM and two CDC operating systems are described; the description concentrates on the organization of the operating systems, job scheduling and I/O handling. Performance definitions, workload specification and tools for system stimulation are given. The measurement tools for System Performance Optimization are described, and the results of the measurements and the various methods used for operating system tuning are discussed. (Auth.)

  14. Optimized manufacturable porous materials

    DEFF Research Database (Denmark)

    Andreassen, Erik; Andreasen, Casper Schousboe; Jensen, Jakob Søndergaard

    Topology optimization has been used to design two-dimensional material structures with specific elastic properties, but optimized designs of three-dimensional material structures are more rarely seen, partly because it requires more computational power, and partly because it is a major challenge...... to include manufacturing constraints in the optimization. This work focuses on incorporating manufacturability into the optimization procedure, allowing the resulting material structure to be manufactured directly using rapid manufacturing techniques, such as selective laser melting/sintering (SLM/S). The available manufacturing methods are best suited for porous materials (one constituent and void), but the optimization procedure can easily include more constituents. The elasticity tensor is found from one unit cell using the homogenization method together with a standard finite element (FE) discretization...

  15. Modern optimization with R

    CERN Document Server

    Cortez, Paulo

    2014-01-01

    The goal of this book is to gather in a single document the most relevant concepts related to modern optimization methods, showing how such concepts and methods can be addressed using the open source, multi-platform R tool. Modern optimization methods, also known as metaheuristics, are particularly useful for solving complex problems for which no specialized optimization algorithm has been developed. These methods often yield high quality solutions with a more reasonable use of computational resources (e.g. memory and processing effort). Examples of popular modern methods discussed in this book are: simulated annealing; tabu search; genetic algorithms; differential evolution; and particle swarm optimization. This book is suitable for undergraduate and graduate students in Computer Science, Information Technology, and related areas, as well as data analysts interested in exploring modern optimization methods using R.

  16. Spectral optimization for micro-CT

    International Nuclear Information System (INIS)

    Hupfer, Martin; Nowak, Tristan; Brauweiler, Robert; Eisa, Fabian; Kalender, Willi A.

    2012-01-01

    Purpose: To optimize micro-CT protocols with respect to x-ray spectra and thereby reduce radiation dose at unimpaired image quality. Methods: Simulations were performed to assess image contrast, noise, and radiation dose for different imaging tasks. The figure of merit used to determine the optimal spectrum was the dose-weighted contrast-to-noise ratio (CNRD). Both optimal photon energy and tube voltage were considered. Three different types of filtration were investigated for polychromatic x-ray spectra: 0.5 mm Al, 3.0 mm Al, and 0.2 mm Cu. Phantoms consisted of water cylinders of 20, 32, and 50 mm in diameter with a central insert of 9 mm which was filled with different contrast materials: an iodine-based contrast medium (CM) to mimic contrast-enhanced (CE) imaging, hydroxyapatite to mimic bone structures, and water with reduced density to mimic soft tissue contrast. Validation measurements were conducted on a commercially available micro-CT scanner using phantoms consisting of water-equivalent plastics. Measurements on a mouse cadaver were performed to assess potential artifacts like beam hardening and to further validate simulation results. Results: The optimal photon energy for CE imaging was found at 34 keV. For bone imaging, optimal energies were 17, 20, and 23 keV for the 20, 32, and 50 mm phantom, respectively. For density differences, optimal energies varied between 18 and 50 keV for the 20 and 50 mm phantom, respectively. For the 32 mm phantom and density differences, CNRD was found to be constant within 2.5% for the energy range of 21–60 keV. For polychromatic spectra and CMs, optimal settings were 50 kV with 0.2 mm Cu filtration, allowing for a dose reduction of 58% compared to the optimal setting for 0.5 mm Al filtration. For bone imaging, optimal tube voltages were below 35 kV. For soft tissue imaging, optimal tube settings strongly depended on phantom size. For 20 mm, low voltages were preferred. For 32 mm, CNRD was found to be almost independent

  17. Spectral optimization for micro-CT.

    Science.gov (United States)

    Hupfer, Martin; Nowak, Tristan; Brauweiler, Robert; Eisa, Fabian; Kalender, Willi A

    2012-06-01

    To optimize micro-CT protocols with respect to x-ray spectra and thereby reduce radiation dose at unimpaired image quality. Simulations were performed to assess image contrast, noise, and radiation dose for different imaging tasks. The figure of merit used to determine the optimal spectrum was the dose-weighted contrast-to-noise ratio (CNRD). Both optimal photon energy and tube voltage were considered. Three different types of filtration were investigated for polychromatic x-ray spectra: 0.5 mm Al, 3.0 mm Al, and 0.2 mm Cu. Phantoms consisted of water cylinders of 20, 32, and 50 mm in diameter with a central insert of 9 mm which was filled with different contrast materials: an iodine-based contrast medium (CM) to mimic contrast-enhanced (CE) imaging, hydroxyapatite to mimic bone structures, and water with reduced density to mimic soft tissue contrast. Validation measurements were conducted on a commercially available micro-CT scanner using phantoms consisting of water-equivalent plastics. Measurements on a mouse cadaver were performed to assess potential artifacts like beam hardening and to further validate simulation results. The optimal photon energy for CE imaging was found at 34 keV. For bone imaging, optimal energies were 17, 20, and 23 keV for the 20, 32, and 50 mm phantom, respectively. For density differences, optimal energies varied between 18 and 50 keV for the 20 and 50 mm phantom, respectively. For the 32 mm phantom and density differences, CNRD was found to be constant within 2.5% for the energy range of 21-60 keV. For polychromatic spectra and CMs, optimal settings were 50 kV with 0.2 mm Cu filtration, allowing for a dose reduction of 58% compared to the optimal setting for 0.5 mm Al filtration. For bone imaging, optimal tube voltages were below 35 kV. For soft tissue imaging, optimal tube settings strongly depended on phantom size. For 20 mm, low voltages were preferred. For 32 mm, CNRD was found to be almost independent of tube voltage. For 50 mm
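
    A minimal sketch of how candidate spectra can be ranked by a dose-weighted contrast-to-noise figure of merit; the relation CNRD = CNR/sqrt(dose) and all numbers below are assumptions for illustration, not values from this study.

```python
# Hedged sketch of comparing candidate spectra by a dose-weighted CNR figure of
# merit. The relation CNRD = CNR / sqrt(dose) and all numbers below are assumed
# placeholders, not data from the micro-CT study above.
import math

def cnrd(contrast: float, noise: float, dose_mgy: float) -> float:
    """Dose-weighted contrast-to-noise ratio (assumed definition)."""
    return (contrast / noise) / math.sqrt(dose_mgy)

candidate_spectra = {
    # tube setting: (contrast, noise, dose) -- illustrative numbers only
    "50 kV, 0.2 mm Cu": (320.0, 18.0, 12.0),
    "65 kV, 0.5 mm Al": (280.0, 14.0, 25.0),
}
for name, params in candidate_spectra.items():
    print(f"{name}: CNRD = {cnrd(*params):.2f}")
best = max(candidate_spectra, key=lambda k: cnrd(*candidate_spectra[k]))
print("preferred spectrum:", best)
```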

  18. A procedure for multi-objective optimization of tire design parameters

    OpenAIRE

    Nikola Korunović; Miloš Madić; Miroslav Trajanović; Miroslav Radovanović

    2015-01-01

    The identification of optimal tire design parameters for satisfying different requirements, i.e. tire performance characteristics, plays an essential role in tire design. In order to improve tire performance characteristics, formulation and solving of multi-objective optimization problem must be performed. This paper presents a multi-objective optimization procedure for determination of optimal tire design parameters for simultaneous minimization of strain energy density at two distinctive zo...

  19. Optimal conversion of an atomic to a molecular Bose-Einstein condensate

    International Nuclear Information System (INIS)

    Hornung, Thomas; Gordienko, Sergei; Vivie-Riedle, Regina de; Verhaar, Boudewijn J.

    2002-01-01

    The work in this article extends the optimal control framework of variational calculus to optimize the conversion of a Bose-Einstein condensate of atoms to one of molecules. It represents the derivation of the closed form optimal control equations for a system governed by a nonlinear Schroedinger equation and its successful application. It was necessary to derive a density matrix formulation of the coupled Gross-Pitaevskii equations to optimize STIRAP-like Raman light fields, to overcome dissipation

  20. A systematic optimization for graphene-based supercapacitors

    Science.gov (United States)

    Deuk Lee, Sung; Lee, Han Sung; Kim, Jin Young; Jeong, Jaesik; Kahng, Yung Ho

    2017-08-01

    Increasing the energy-storage density for supercapacitors is critical for their applications. Many researchers have attempted to identify optimal candidate component materials to achieve this goal, but investigations into systematically optimizing their mixing rate for maximizing the performance of each candidate material have been insufficient, which hinders the progress in their technology. In this study, we employ a statistically systematic method to determine the optimum mixing ratio of three components that constitute graphene-based supercapacitor electrodes: reduced graphene oxide (rGO), acetylene black (AB), and polyvinylidene fluoride (PVDF). By using the extreme-vertices design, the optimized proportion is determined to be (rGO : AB : PVDF = 0.95 : 0.00 : 0.05). The corresponding energy-storage density increases by a factor of 2 compared with that of non-optimized electrodes. Electrochemical and microscopic analyses are performed to determine the reason for the performance improvements.

  1. Tapped density optimisation for four agricultural wastes - Part II: Performance analysis and Taguchi-Pareto

    Directory of Open Access Journals (Sweden)

    Ajibade Oluwaseyi Ayodele

    2016-01-01

    Full Text Available In this second part of the discussion on tapped density optimisation for four agricultural wastes (particles of coconut, periwinkle, palm kernel and egg shells), a performance analysis is made on a comparative basis. The paper pioneers a study direction in which optimisation of process variables is pursued using the Taguchi method integrated with the Pareto 80-20 rule. Negative percentage improvements resulted when the optimal tapped density was compared with the average tapped density; however, comparing the optimal tapped density with the peak tapped density yielded positive percentage improvements for all four filler particles. These results validate the effectiveness of the Taguchi method in improving the tapped density properties of the filler particles. Applying the Pareto 80-20 rule to the table of parameters and levels produced revised tables that helped to identify the factor levels of each parameter that are economical to optimality. The Pareto 80-20 rule also produced revised S/N response tables, which were used to identify the S/N ratios relevant to optimality.
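
    A minimal sketch of the larger-the-better Taguchi signal-to-noise ratio behind the S/N response tables mentioned above; the replicate tapped-density values are invented for illustration.

```python
# Hedged sketch of the larger-the-better Taguchi signal-to-noise ratio,
# S/N = -10*log10(mean(1/y^2)), applied to invented replicate tapped densities.
import numpy as np

def sn_larger_is_better(replicates) -> float:
    y = np.asarray(replicates, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical tapped-density replicates (g/cm^3) for two factor-level settings.
trials = {
    "setting A": [0.62, 0.65, 0.63],
    "setting B": [0.70, 0.69, 0.72],
}
for name, reps in trials.items():
    print(name, "S/N =", round(sn_larger_is_better(reps), 3), "dB")
```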

  2. The topology of the Coulomb potential density. A comparison with the electron density, the virial energy density, and the Ehrenfest force density.

    Science.gov (United States)

    Ferreira, Lizé-Mari; Eaby, Alan; Dillen, Jan

    2017-12-15

    The topology of the Coulomb potential density has been studied within the context of the theory of Atoms in Molecules and has been compared with the topologies of the electron density, the virial energy density and the Ehrenfest force density. The Coulomb potential density is found to be mainly structurally homeomorphic with the electron density. The Coulomb potential density reproduces the non-nuclear attractor which is observed experimentally in the molecular graph of the electron density of a Mg dimer, thus, for the first time ever providing an alternative and energetic foundation for the existence of this critical point. © 2017 Wiley Periodicals, Inc.

  3. The optimization design of nuclear measurement teaching equipment

    International Nuclear Information System (INIS)

    Tang Rulong; Qiu Xiaoping

    2008-01-01

    So far, domestic student-oriented experimental nuclear measuring instruments are used only to measure object density, thickness or material level, and the source activity chosen is mostly about 10 mCi. This design proposes an optimization program adapted to the domestic situation. It discusses the radioactive source activity, the structural design of the sealed sources and the choice of the tested material in order to arrive at an optimized design. The program uses a 1 mCi ¹³⁷Cs radioactive source to reduce the radiation dose, and the measurement functionality is improved so that the apparatus can measure density, thickness and material level. (authors)

  4. Optimality criteria for the components of anisotropic constitutive matrix

    DEFF Research Database (Denmark)

    Pedersen, Pauli; Pedersen, Niels Leergaard

    2014-01-01

    densities is equal value for the weighted elastic energy densities, as a natural extension of the optimality criterion for a single load case. The second optimality criterion for the components of a constitutive matrix (of unit norm) is proportionality to corresponding weighted strain components...... with the same proportionality factor $\hat{\lambda}$ for all the components, as shortly specified by $C_{ijkl} = \hat{\lambda} \sum_{n} \eta_{n} (\epsilon_{ij})_{n} (\epsilon_{kl})_{n}$, in traditional notation (n indicates the load case). These simple analytical results should be communicated, in spite...

  5. Regularity of optimal transport maps on multiple products of spheres

    OpenAIRE

    Figalli, Alessio; Kim, Young-Heon; McCann, Robert J.

    2010-01-01

    This article addresses regularity of optimal transport maps for cost="squared distance" on Riemannian manifolds that are products of arbitrarily many round spheres with arbitrary sizes and dimensions. Such manifolds are known to be non-negatively cross-curved [KM2]. Under boundedness and non-vanishing assumptions on the transfered source and target densities we show that optimal maps stay away from the cut-locus (where the cost exhibits singularity), and obtain injectivity and continuity of o...

  6. OPTIMALIZATION OF BLASTING IN »LAKOVIĆI« LIMESTONE QUARRY

    OpenAIRE

    Branko Božić; Karlo Braun

    1992-01-01

    The optimization of exploitation in the »Lakovići« limestone quarry is described. Based on the discontinuities determined in the rock mass and their densities, the best possible working sites have been located in order to obtain the best possible sizes of blasted rock and working-slope stability. The optimal line of least resistance for the quarry has been calculated and verified experimentally. The new blasting parameters have resulted in considerable savings in drilling and explosives (the paper is published in...

  7. 4th Optimization Day

    CERN Document Server

    Eberhard, Andrew; Ralph, Daniel; Glover, Barney M

    1999-01-01

    Although the monograph Progress in Optimization I: Contributions from Australasia grew from the idea of publishing a proceedings of the Fourth Optimization Day, held in July 1997 at the Royal Melbourne Institute of Technology, the focus soon changed to a refereed volume in optimization. The intention is to publish a similar book annually, following each Optimization Day. The idea of having an annual Optimization Day was conceived by Barney Glover; the first of these Optimization Days was held in 1994 at the University of Ballarat. Barney hoped that such a yearly event would bring together the many, but widely dispersed, researchers in Australia who were publishing in optimization and related areas such as control. The first Optimization Day event was followed by similar conferences at The University of New South Wales (1995), The University of Melbourne (1996), the Royal Melbourne Institute of Technology (1997), and The University of Western Australia (1998). The 1999 conference will return to Ballarat ...

  8. Face Value: Towards Robust Estimates of Snow Leopard Densities.

    Directory of Open Access Journals (Sweden)

    Justine S Alexander

    Full Text Available When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur at low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards (Panthera uncia). It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km² study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km². Results of our simulation exercise indicate that our estimates from the spatial capture-recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge of achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality.
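
    A quick check of the trap-effort arithmetic reported above; the naive count-per-area figure in the last line is shown only for contrast, since the published 3.31 per 100 km² comes from the spatial capture-recapture model, which accounts for detectability and effective sampling area.

```python
# Arithmetic check of the figures reported in the abstract above.
captures, trap_days = 76, 2906
print(round(100 * captures / trap_days, 2), "captures per 100 trap-days")  # ~2.62

# Naive individuals-per-area figure, for contrast only: the published density of
# 3.31 per 100 km^2 comes from the spatial capture-recapture model instead.
individuals, study_area_km2 = 20, 480
print(round(100 * individuals / study_area_km2, 2), "individuals per 100 km^2 (naive)")
```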

  9. Analysis of optimum density of forest roads in rural properties

    Directory of Open Access Journals (Sweden)

    Flávio Cipriano de Assis do Carmo

    2013-09-01

    Full Text Available This study analyzed the density of roads on rural properties in the south of Espírito Santo and compared it with the optimal density calculated for forestry companies operating in steep areas. The work was carried out on six small rural properties and was based on the costs of forest roads, wood extraction and the loss of productive area. The technical analysis included a time-and-motion study and productivity; the economic analysis included operational costs, production costs and returns for different productivity scenarios (180 m·ha⁻¹, 220 m·ha⁻¹ and 250 m·ha⁻¹). According to the results, all the properties have road densities well above the optimum, which reflects the lack of criteria in the planning of the forest stands and results in inadequate use of the plantation area. Property 1 had the highest road density (373.92 m·ha⁻¹) and property 5 the lowest (111.56 m·ha⁻¹).

  10. Thermal Analysis of Low Layer Density Multilayer Insulation Test Results

    Science.gov (United States)

    Johnson, Wesley L.

    2011-01-01

    Investigation of the thermal performance of low layer density multilayer insulations is important for designing long-duration space exploration missions involving the storage of cryogenic propellants. Theoretical calculations show an analytical optimal layer density, as widely reported in the literature. However, the appropriate test data by which to evaluate these calculations have been obtained only recently. As part of a recent research project, NASA procured several multilayer insulation test coupons for calorimeter testing. These coupons were configured to allow the layer density to be varied from 0.5 to 2.6 layers/mm. The coupon testing was completed using the cylindrical Cryostat-100 apparatus by the Cryogenics Test Laboratory at Kennedy Space Center. The results show the properties of the insulation as a function of layer density for multiple points. Overlaying these new results with data from the literature reveals a minimum layer density; however, the value is higher than predicted. Additionally, the data show that the transition region between high vacuum and no vacuum depends on the spacing of the reflective layers. Historically this spacing has not been taken into account, as thermal performance was calculated as a function of pressure and temperature only; the recent testing, however, shows that the data depend on the Knudsen number, which takes into account pressure, temperature, and layer spacing. These results aid in the understanding of the performance parameters of MLI and help to complete the body of literature on the topic.
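
    A minimal sketch of the Knudsen-number dependence noted above, assuming a hard-sphere mean free path for the residual gas between reflective layers; the gas properties and example conditions are placeholders, not values from the test program.

```python
# Hedged sketch: Knudsen number Kn = mean free path / layer spacing for the
# residual gas between reflective MLI layers. The hard-sphere formula and the
# example numbers are assumptions chosen to illustrate the pressure/temperature/
# spacing dependence noted above.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(pressure_pa: float, temperature_k: float, molecule_diameter_m: float) -> float:
    """Hard-sphere mean free path of the residual gas."""
    return K_B * temperature_k / (math.sqrt(2.0) * math.pi * molecule_diameter_m**2 * pressure_pa)

def knudsen_number(pressure_pa: float, temperature_k: float, layer_spacing_m: float,
                   molecule_diameter_m: float = 3.7e-10) -> float:  # ~N2 diameter (assumed)
    return mean_free_path(pressure_pa, temperature_k, molecule_diameter_m) / layer_spacing_m

if __name__ == "__main__":
    # Layer spacings corresponding to 0.5 and 2.6 layers/mm.
    for layers_per_mm in (0.5, 2.6):
        spacing = 1e-3 / layers_per_mm
        for p in (1e-3, 1e-1, 10.0):  # Pa, spanning the vacuum-to-no-vacuum transition
            kn = knudsen_number(p, 200.0, spacing)
            print(f"{layers_per_mm} layers/mm, P = {p:g} Pa -> Kn = {kn:.3g}")
```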

  11. Sodium magnetic resonance imaging. Development of a 3D radial acquisition technique with optimized k-space sampling density and high SNR-efficiency; Natrium-Magnetresonanztomographie. Entwicklung einer 3D radialen Messtechnik mit optimierter k-Raum-Abtastdichte und hoher SNR-Effizienz

    Energy Technology Data Exchange (ETDEWEB)

    Nagel, Armin Michael

    2009-04-01

    A 3D radial k-space acquisition technique with homogeneous distribution of the sampling density (DA-3D-RAD) is presented. This technique enables the short echo times (TE < 0.5 ms) that are necessary for ²³Na MRI and provides a high SNR efficiency. The gradients of the DA-3D-RAD sequence are designed such that the average sampling density in each spherical shell of k-space is constant. The DA-3D-RAD sequence provides 34% more SNR than a conventional 3D radial sequence (3D-RAD) if T₂* decay is neglected. This SNR gain is enhanced if T₂* decay is present, so a 1.5- to 1.8-fold higher SNR is measured in brain tissue with the DA-3D-RAD sequence. Simulations and experimental measurements show that the DA-3D-RAD sequence yields a better resolution in the presence of T₂* decay and fewer image artefacts when B₀ inhomogeneities exist. Using the developed sequence, T₁-, T₂*- and inversion-recovery ²³Na image contrasts were acquired for several organs and ²³Na relaxation times were measured (brain tissue: T₁ = 29.0 ± 0.3 ms; T₂s* ≈ 4 ms; T₂l* ≈ 31 ms; cerebrospinal fluid: T₁ = 58.1 ± 0.6 ms; T₂* = 55 ± 3 ms (B₀ = 3 T)). T₁ and T₂* relaxation times of cerebrospinal fluid are independent of the selected magnetic field strength (B₀ = 3 T/7 T), whereas the relaxation times of brain tissue increase with field strength. Furthermore, ²³Na signals from oedemata were suppressed in patients and thus signals from different tissue compartments were selectively measured. (orig.)

  12. Research on connection structure of aluminumbody bus using multi-objective topology optimization

    Science.gov (United States)

    Peng, Q.; Ni, X.; Han, F.; Rhaman, K.; Ulianov, C.; Fang, X.

    2018-01-01

    Connections between aluminum alloy components in aluminum bus bodies often fail, so a new aluminum alloy connection structure is designed based on a multi-objective topology optimization method. The shape of the outer contour of the connection structure is determined with topography optimization, a topology optimization model of the connection is established based on the SIMP density interpolation method, multi-objective topology optimization is carried out, and the design of the connecting piece is improved according to the optimization results. The results show that the mass of the aluminum alloy connector after topology optimization is reduced by 18%, the first six natural frequencies are raised, and the strength and stiffness performance are clearly improved.
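
    A minimal sketch of the SIMP density interpolation named above (not the authors' model); the penalization exponent and moduli are assumed placeholder values.

```python
# Hedged sketch of SIMP (Solid Isotropic Material with Penalization) interpolation.
# The penalization exponent p and the moduli are illustrative assumptions, not
# values taken from the paper above.
import numpy as np

def simp_modulus(rho: np.ndarray, e_solid: float = 70e9, e_min: float = 70e3, p: float = 3.0) -> np.ndarray:
    """Young's modulus assigned to each element as a function of its pseudo-density rho in [0, 1].

    Intermediate densities are penalized (p > 1) so the optimizer is driven
    towards a clear solid/void layout.
    """
    rho = np.clip(rho, 0.0, 1.0)
    return e_min + rho**p * (e_solid - e_min)

# Example: a coarse density field for a hypothetical connector design domain.
densities = np.array([0.1, 0.5, 0.9, 1.0])
print(simp_modulus(densities))
```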

  13. Encyclopedia of optimization

    CERN Document Server

    Pardalos, Panos

    2001-01-01

    Optimization problems are widespread in the mathematical modeling of real world systems and their applications arise in all branches of science, applied science and engineering. The goal of the Encyclopedia of Optimization is to introduce the reader to a complete set of topics in order to show the spectrum of recent research activities and the richness of ideas in the development of theories, algorithms and the applications of optimization. It is directed to a diverse audience of students, scientists, engineers, decision makers and problem solvers in academia, business, industry, and government.

  14. Optimization of Antivirus Software

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available The paper describes the main techniques used in the development of computer antivirus software applications. For this particular category of software, optimum criteria are identified and defined that help determine which solution is better and what the objectives of the optimization process are. From the general viewpoint of software optimization, methods and techniques that are applied at the code development level are presented. Regarding the particularities of antivirus software, the paper analyzes some of the optimization concepts applied to this category of applications

  15. Nonlinear optimal control theory

    CERN Document Server

    Berkovitz, Leonard David

    2012-01-01

    Nonlinear Optimal Control Theory presents a deep, wide-ranging introduction to the mathematical theory of the optimal control of processes governed by ordinary differential equations and certain types of differential equations with memory. Many examples illustrate the mathematical issues that need to be addressed when using optimal control techniques in diverse areas. Drawing on classroom-tested material from Purdue University and North Carolina State University, the book gives a unified account of bounded state problems governed by ordinary, integrodifferential, and delay systems. It also dis

  16. What is unrealistic optimism?

    Science.gov (United States)

    Jefferson, Anneli; Bortolotti, Lisa; Kuzmanovic, Bojana

    2017-04-01

    Here we consider the nature of unrealistic optimism and other related positive illusions. We are interested in whether cognitive states that are unrealistically optimistic are belief states, whether they are false, and whether they are epistemically irrational. We also ask to what extent unrealistically optimistic cognitive states are fixed. Based on the classic and recent empirical literature on unrealistic optimism, we offer some preliminary answers to these questions, thereby laying the foundations for answering further questions about unrealistic optimism, such as whether it has biological, psychological, or epistemic benefits. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Agent-Based Optimization

    CERN Document Server

    Jędrzejowicz, Piotr; Kacprzyk, Janusz

    2013-01-01

    This volume presents a collection of original research works by leading specialists focusing on novel and promising approaches in which the multi-agent system paradigm is used to support, enhance or replace traditional approaches to solving difficult optimization problems. The editors have invited several well-known specialists to present their solutions, tools, and models falling under the common denominator of the agent-based optimization. The book consists of eight chapters covering examples of application of the multi-agent paradigm and respective customized tools to solve  difficult optimization problems arising in different areas such as machine learning, scheduling, transportation and, more generally, distributed and cooperative problem solving.

  18. Optimization of refrigeration machinery

    Energy Technology Data Exchange (ETDEWEB)

    Wall, Goeran [University Coll. of Eskilstuna/Vaesteraas (SE)

    1991-11-01

    This paper reports the application of thermoeconomics to the optimization of a heat pump. The method is suited for application to thermodynamic processes and yields the exergy losses. The marginal cost of an arbitrary variable can also be calculated. The efficiencies of the compressor, condenser, evaporator and electric motor are chosen as the variables to be optimized. Parameters such as the price of electricity and the temperature of the delivered heat may vary between optimizations, and results are presented for different parameter values. The results show that the efficiency of the electric motor is the most important variable. (author).

  19. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    Science.gov (United States)

    Paasche, H.; Tronicke, J.

    2012-04-01

    optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
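
    A minimal sketch of maximin-based leader selection for a multi-objective particle swarm; the maximin form used here is the commonly cited one and the objective values are synthetic, so this illustrates the idea rather than the authors' implementation.

```python
# Hedged sketch of maximin-based leader selection for a multi-objective particle
# swarm. Objective values are synthetic; the formulation follows the commonly
# used maximin fitness f_i = max_{j != i} min_k (F[i, k] - F[j, k]) (assumed,
# not taken verbatim from the abstract above). Negative values indicate
# non-dominated particles.
import numpy as np

def maximin_fitness(objectives: np.ndarray) -> np.ndarray:
    """objectives: (n_particles, n_objectives), all objectives to be minimized."""
    n = objectives.shape[0]
    fitness = np.empty(n)
    for i in range(n):
        diffs = objectives[i] - np.delete(objectives, i, axis=0)  # (n-1, n_objectives)
        fitness[i] = np.max(np.min(diffs, axis=1))
    return fitness

rng = np.random.default_rng(0)
F = rng.random((6, 2))          # six particles, two competing objectives
fit = maximin_fitness(F)
leader = int(np.argmin(fit))    # cheap identification of the currently leading particle
print(fit, leader)
```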

  20. Adaptive treatment-length optimization in spatiobiologically integrated radiotherapy

    Science.gov (United States)

    Ajdari, Ali; Ghate, Archis; Kim, Minsun

    2018-04-01

    Recent theoretical research on spatiobiologically integrated radiotherapy has focused on optimization models that adapt fluence-maps to the evolution of tumor state, for example, cell densities, as observed in quantitative functional images acquired over the treatment course. We propose an optimization model that adapts the length of the treatment course as well as the fluence-maps to such imaged tumor state. Specifically, after observing the tumor cell densities at the beginning of a session, the treatment planner solves a group of convex optimization problems to determine an optimal number of remaining treatment sessions, and a corresponding optimal fluence-map for each of these sessions. The objective is to minimize the total number of tumor cells remaining (TNTCR) at the end of this proposed treatment course, subject to upper limits on the biologically effective dose delivered to the organs-at-risk. This fluence-map is administered in future sessions until the next image is available, and then the number of sessions and the fluence-map are re-optimized based on the latest cell density information. We demonstrate via computer simulations on five head-and-neck test cases that such adaptive treatment-length and fluence-map planning reduces the TNTCR and increases the biological effect on the tumor while employing shorter treatment courses, as compared to only adapting fluence-maps and using a pre-determined treatment course length based on one-size-fits-all guidelines.

  1. Double trouble at high density:

    DEFF Research Database (Denmark)

    Gergs, André; Palmqvist, Annemette; Preuss, Thomas G

    2014-01-01

    Population size is often regulated by negative feedback between population density and individual fitness. At high population densities, animals run into double trouble: they might concurrently suffer from overexploitation of resources and also from negative interference among individuals...... regardless of resource availability, referred to as crowding. Animals are able to adapt to resource shortages by exhibiting a repertoire of life history and physiological plasticities. In addition to resource-related plasticity, crowding might lead to reduced fitness, with consequences for individual life...... history. We explored how different mechanisms behind resource-related plasticity and crowding-related fitness act independently or together, using the water flea Daphnia magna as a case study. For testing hypotheses related to mechanisms of plasticity and crowding stress across different biological levels...

  2. Generalized Expression for Polarization Density

    International Nuclear Information System (INIS)

    Wang, Lu; Hahm, T.S.

    2009-01-01

    A general polarization density which consists of classical and neoclassical parts is systematically derived via modern gyrokinetics and bounce-kinetics by employing a phase-space Lagrangian Lie-transform perturbation method. The origins of polarization density are further elucidated. Extending the work on neoclassical polarization for long wavelength compared to ion banana width [M. N. Rosenbluth and F. L. Hinton, Phys. Rev. Lett. 80, 724 (1998)], an analytical formula for the generalized neoclassical polarization including both finite-banana-width (FBW) and finite-Larmor-radius (FLR) effects for arbitrary radial wavelength in comparison to banana width and gyroradius is derived. In addition to the contribution from trapped particles, the contribution of passing particles to the neoclassical polarization is also explicitly calculated. Our analytic expression agrees very well with the previous numerical results for a wide range of radial wavelength.

  3. Asymptotic density and effective negligibility

    Science.gov (United States)

    Astor, Eric P.

    In this thesis, we join the study of asymptotic computability, a project attempting to capture the idea that an algorithm might work correctly in all but a vanishing fraction of cases. In collaboration with Hirschfeldt and Jockusch, broadening the original investigation of Jockusch and Schupp, we introduce dense computation, the weakest notion of asymptotic computability (requiring only that the correct answer is produced on a set of density 1), and effective dense computation, where every computation halts with either the correct answer or (on a set of density 0) a symbol denoting uncertainty. A few results make more precise the relationship between these notions and work already done with Jockusch and Schupp's original definitions of coarse and generic computability. For all four types of asymptotic computation, including generic computation, we demonstrate that non-trivial upper cones have measure 0, building on recent work of Hirschfeldt, Jockusch, Kuyper, and Schupp in which they establish this for coarse computation. Their result transfers to yield a minimal pair for relative coarse computation; we generalize their method and extract a similar result for relative dense computation (and thus for its corresponding reducibility). However, all of these notions of near-computation treat a set as negligible iff it has asymptotic density 0. Noting that this definition is not computably invariant, this produces some failures of intuition and a break with standard expectations in computability theory. For instance, as shown by Hamkins and Miasnikov, the halting problem is (in some formulations) effectively densely computable, even in polynomial time---yet this result appears fragile, as indicated by Rybalov. In independent work, we respond to this by strengthening the approach of Jockusch and Schupp to avoid such phenomena; specifically, we introduce a new notion of intrinsic asymptotic density, invariant under computable permutation, with rich relations to both

  4. High density energy storage capacitor

    International Nuclear Information System (INIS)

    Whitham, K.; Howland, M.M.; Hutzler, J.R.

    1979-01-01

    The Nova laser system will use 130 MJ of capacitive energy storage and have a peak power capability of 250,000 MW. This capacitor bank is a significant portion of the laser cost and requires a large portion of the physical facilities. In order to reduce the cost and volume required by the bank, the Laser Fusion Program funded contracts with three energy storage capacitor producers: Aerovox, G.E., and Maxwell Laboratories, to develop higher energy density, lower cost energy storage capacitors. This paper describes the designs which resulted from the Aerovox development contract, and specifically addresses the design and initial life testing of a 12.5 kJ, 22 kV capacitor with a density of 4.2 J/in³ and a projected cost in the range of 5 cents per joule

  5. First-principle optimal local pseudopotentials construction via optimized effective potential method

    International Nuclear Information System (INIS)

    Mi, Wenhui; Zhang, Shoutao; Wang, Yanchao; Ma, Yanming; Miao, Maosheng

    2016-01-01

    The local pseudopotential (LPP) is an important component of orbital-free density functional theory, a promising large-scale simulation method that can maintain information on a material's electron state. The LPP is usually extracted from solid-state density functional theory calculations, making it difficult to assess its transferability to cases involving very different chemical environments. Here, we reveal a fundamental relation between the first-principles norm-conserving pseudopotential (NCPP) and the LPP. On the basis of this relationship, we demonstrate that the LPP can be constructed optimally from the NCPP for a large number of elements using the optimized effective potential method. Specifically, our method provides a unified scheme for constructing and assessing the LPP within the framework of first-principles pseudopotentials. Our practice reveals that the existence of a valid LPP with high transferability may strongly depend on the element.

  6. Evaluating lidar point densities for effective estimation of aboveground biomass

    Science.gov (United States)

    Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason M.; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating aboveground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High point density airborne lidar data were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 points/m², corresponding to the point density range of 3DEP for providing national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m². Landsat 8-based aboveground biomass estimates produced larger errors than even the lowest lidar point density of 0.5 point/m², and therefore Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m², our results indicate that even lower density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than those from Landsat observations alone.

  7. Density operators in quantum mechanics

    International Nuclear Information System (INIS)

    Burzynski, A.

    1979-01-01

    A brief discussion and résumé of the density operator formalism as it occurs in modern physics (in quantum optics, quantum statistical physics, and the quantum theory of radiation) is presented. In particular, we emphasize the projection operator method, the application of spectral theorems, and the superoperator formalism in operator Hilbert spaces (of Hilbert-Schmidt type). The paper includes an appendix on direct sums and direct products of spaces and operators, and on problems of reducibility for operator classes using projection operators. (author)

  8. On the kinetic energy density

    International Nuclear Information System (INIS)

    Lombard, R.J.; Mas, D.; Moszkowski, S.A.

    1991-01-01

    We discuss two expressions for the density of kinetic energy which differ by an integration by parts. Using the Wigner transform we show that the arithmetic mean of these two terms is closely analogous to the classical value. Harmonic oscillator wavefunctions are used to illustrate the radial dependence of these expressions. We study the differences they induce through effective mass terms when performing self-consistent calculations. (author)
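
    For reference, a commonly used pair of kinetic-energy densities with exactly this property (standard textbook forms, assumed here rather than quoted from the paper) is:

```latex
% Two kinetic-energy densities that differ by an integration by parts; their
% difference is a total Laplacian, so both integrate to the same kinetic energy.
\tau_{1}(\mathbf{r}) = \frac{\hbar^{2}}{2m}\,\lvert\nabla\psi(\mathbf{r})\rvert^{2},
\qquad
\tau_{2}(\mathbf{r}) = -\frac{\hbar^{2}}{2m}\,\operatorname{Re}\!\left[\psi^{*}(\mathbf{r})\,\nabla^{2}\psi(\mathbf{r})\right],
\qquad
\tau_{1}-\tau_{2} = \frac{\hbar^{2}}{4m}\,\nabla^{2}\rho(\mathbf{r}).
```

    Their arithmetic mean, (τ₁ + τ₂)/2, is the combination compared with the classical value in the abstract.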

  9. Neutronic density perturbation by probes

    International Nuclear Information System (INIS)

    Vigon, M. A.; Diez, L.

    1956-01-01

    The introduction of neutron-absorbing materials into diffusing media produces local disturbances of the neutron density. The disturbance depends especially on the nature and size of the absorber. Approximate equations relating the disturbance to the distance from the absorber have been derived for the case of thin disks. Experimental verification has been carried out in two special cases; in both, the experimental results agree with the values calculated from these equations. (Author)

  10. Orbital functionals in density-matrix- and current-density-functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Helbig, N

    2006-05-15

    Density-Functional Theory (DFT), although widely used and very successful in the calculation of several observables, fails to correctly describe strongly correlated materials. In the first part of this work we therefore introduce reduced-density-matrix-functional theory (RDMFT), which is one possible way to treat electron correlation beyond DFT. Within this theory the one-body reduced density matrix (1-RDM) is used as the basic variable. Our main interest is the calculation of the fundamental gap, which proves very problematic within DFT. In order to calculate the fundamental gap we generalize RDMFT to fractional particle numbers M by describing the system as an ensemble of an N and an N+1 particle system (with N ≤ M ≤ N+1). For each fixed particle number, M, the total energy is minimized with respect to the natural orbitals and their occupation numbers. This leads to the total energy as a function of M. The derivative of this function with respect to the particle number has a discontinuity at integer particle number which is identical to the gap. In addition, we investigate the necessary and sufficient conditions for the 1-RDM of a system with fractional particle number to be N-representable. Numerical results are presented for alkali atoms, small molecules, and periodic systems. Another problem within DFT is the description of non-relativistic many-electron systems in the presence of magnetic fields. It requires the paramagnetic current density and the spin magnetization to be used as basic variables besides the electron density. However, electron-gas-based functionals of current-spin-density-functional theory (CSDFT) exhibit derivative discontinuities as a function of the magnetic field whenever a new Landau level is occupied, which makes them difficult to use in practice. Since the appearance of Landau levels is, intrinsically, an orbital effect it is appealing to use orbital-dependent functionals. We have developed a CSDFT version of the optimized

  11. Evolution strategies for robust optimization

    NARCIS (Netherlands)

    Kruisselbrink, Johannes Willem

    2012-01-01

    Real-world (black-box) optimization problems often involve various types of uncertainties and noise emerging in different parts of the optimization problem. When this is not accounted for, optimization may fail or may yield solutions that are optimal in the classical strict notion of optimality, but

  12. Optimization of photonic crystal cavities

    DEFF Research Database (Denmark)

    Wang, Fengwen; Sigmund, Ole

    2017-01-01

    We present optimization of photonic crystal cavities. The optimization problem is formulated to maximize the Purcell factor of a photonic crystal cavity. Both topology optimization and air-hole-based shape optimization are utilized for the design process. Numerical results demonstrate...... that the Purcell factor of the photonic crystal cavity can be significantly improved through optimization....

  13. Ion density in ionizing beams

    International Nuclear Information System (INIS)

    Knuyt, G.K.; Callebaut, D.K.

    1978-01-01

    The equations defining the ion density in a non-quasineutral plasma (chasma) are derived for a number of particular cases from the general results obtained in paper 1. Explicit calculations are made for a fairly general class of boundaries: all tri-axial ellipsoids, including cylinders with elliptic cross-section and the plane-parallel case. The results are very simple. When the ion production and the beam intensity are constant, the steady-state ion space charge is also constant in space; it varies by less than 10% over the various geometries, it may greatly exceed the beam density at comparatively high pressures (usually still below about 10⁻³ Torr), it is tabulated for a number of interesting cases, and it can be calculated precisely and easily from simple formulae for which approximations are also elaborated. The total potential is U = -ax² - by² - cz², with a, b and c constants that can be calculated immediately from the space-charge density and the geometry; the largest coefficient varies by at most a factor of four over the various geometries and is tabulated for a number of interesting cases. (author)

  14. Density functional theory of nuclei

    International Nuclear Information System (INIS)

    Terasaki, Jun

    2008-01-01

    The density functional theory of nuclei has come to attract the attention of scientists in the field of nuclear structure because the theory is expected to provide reliable numerical data over a wide range of the nuclear chart. This article presents an overview of the theory both for theorists in other fields and for those engaged in nuclear physics experiments. First, the density functional theory widely used for electronic systems (condensed matter, atoms, and molecules) is outlined, starting from the Kohn-Sham equation derived from the variational principle. The theory as used in nuclear physics is then presented, and the Hartree-Fock and Hartree-Fock-Bogolyubov approximations with the Skyrme interaction are explained. Comparisons of calculated and experimental binding energies and ground-state mean-square charge radii of some magic-number nuclei are shown, and the similarities and dissimilarities between the two streams are summarized. Finally, the activities of the recently started international project Universal Nuclear Energy Density Functional (UNEDF), led by US scientists, are reported. The project is programmed for five years; one of its applications is the calculation of neutron capture cross sections of nuclei on the r-process path, which is indispensable for nucleosynthesis research. (S. Funahashi)

  15. Optimal Aerocapture Guidance

    Data.gov (United States)

    National Aeronautics and Space Administration — The main goal of my research is to develop, implement, verify, and validate an optimal numerical predictor-corrector aerocapture guidance algorithm that is...

  16. Optimal primitive reference frames

    International Nuclear Information System (INIS)

    Jennings, David

    2011-01-01

    We consider the smallest possible directional reference frames allowed and determine the best one can ever do in preserving quantum information in various scenarios. We find that for the preservation of a single spin state, two orthogonal spins are optimal primitive reference frames; and in a product state, they do approximately 22% as well as an infinite-sized classical frame. By adding a small amount of entanglement to the reference frame, this can be raised to 2(2/3)^5 ≈ 26%. Under the different criterion of entanglement preservation, a very similar optimal reference frame is found; however, this time it is for spins aligned at an optimal angle of 87 deg. In this case 24% of the negativity is preserved. The classical limit is considered numerically and indicates that, under the criterion of entanglement preservation, 90 deg. is selected out nonmonotonically, with a peak optimal angle of 96.5 deg. for L=3 spins.

  17. Linearly constrained minimax optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; Schjær-Jacobsen, Hans

    1978-01-01

    We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints which requires first order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems...

  18. Optimization in liner shipping

    DEFF Research Database (Denmark)

    Brouer, Berit Dangaard; Karsten, Christian Vad; Pisinger, David

    2017-01-01

    Seaborne trade is the lynchpin in almost every international supply chain, and about 90% of non-bulk cargo worldwide is transported by container. In this survey we give an overview of data-driven optimization problems in liner shipping. Research in liner shipping is motivated by a need for handling...... still more complex decision problems, based on big data sets and going across several organizational entities. Moreover, liner shipping optimization problems are pushing the limits of optimization methods, creating a new breeding ground for advanced modelling and solution methods. Starting from liner...... shipping network design, we consider the problem of container routing and speed optimization. Next, we consider empty container repositioning and stowage planning as well as disruption management. In addition, the problem of bunker purchasing is considered in depth. In each section we give a clear problem...

  19. Stochastic and global optimization

    National Research Council Canada - National Science Library

    Dzemyda, Gintautas; Šaltenis, Vydūnas; Zhilinskas, A; Mockus, Jonas

    2002-01-01

    ... and Effectiveness of Controlled Random Search E. M. T. Hendrix, P. M. Ortigosa and I. García 129 9. Discrete Backtracking Adaptive Search for Global Optimization B. P. Kristinsdottir, Z. B. Zabinsky and...

  20. Topology optimized microbioreactors

    DEFF Research Database (Denmark)

    Schäpper, Daniel; Lencastre Fernandes, Rita; Eliasson Lantz, Anna

    2011-01-01

    This article presents the fusion of two hitherto unrelated fields—microbioreactors and topology optimization. The basis for this study is a rectangular microbioreactor with homogeneously distributed immobilized brewers yeast cells (Saccharomyces cerevisiae) that produce a recombinant protein...

  1. Dynamic stochastic optimization

    CERN Document Server

    Ermoliev, Yuri; Pflug, Georg

    2004-01-01

    Uncertainties and changes are pervasive characteristics of modern systems involving interactions between humans, economics, nature and technology. These systems are often too complex to allow for precise evaluations and, as a result, the lack of proper management (control) may create significant risks. In order to develop robust strategies we need approaches which explicitly deal with uncertainties, risks and changing conditions. One rather general approach is to characterize (explicitly or implicitly) uncertainties by objective or subjective probabilities (measures of confidence or belief). This leads us to stochastic optimization problems which can rarely be solved by using the standard deterministic optimization and optimal control methods. In stochastic optimization the accent is on problems with a large number of decision and random variables, and consequently the focus of attention is directed to efficient solution procedures rather than to (analytical) closed-form solutions. Objective an...

  2. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2005-01-01

    Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.
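
    Among the techniques listed above is stochastic approximation. A minimal hedged sketch of a Robbins-Monro iteration follows; the quadratic loss, the Gaussian noise model, the target value mu and the step-size schedule are illustrative assumptions, not taken from the book.

    ```python
    # Hedged sketch of Robbins-Monro stochastic approximation:
    # minimize F(x) = E[f(x, xi)] from noisy gradient samples only.
    # Here f(x, xi) = 0.5 * (x - xi)^2 with xi ~ N(mu, 1), so the true
    # minimizer is mu.  Step sizes a_k = a0 / k satisfy the classical
    # conditions sum a_k = inf, sum a_k^2 < inf.
    import numpy as np

    rng = np.random.default_rng(0)
    mu = 3.0          # unknown target of the expectation (assumed for the demo)
    x = 0.0           # initial iterate
    a0 = 1.0

    for k in range(1, 10001):
        xi = rng.normal(mu, 1.0)      # random sample of the uncertain parameter
        grad = x - xi                 # unbiased estimate of F'(x) = x - mu
        x -= (a0 / k) * grad          # Robbins-Monro update

    print(f"estimate {x:.3f}, true minimizer {mu}")
    ```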

  3. Cooperative Bacterial Foraging Optimization

    Directory of Open Access Journals (Sweden)

    Hanning Chen

    2009-01-01

    Full Text Available Bacterial Foraging Optimization (BFO) is a novel optimization algorithm based on the social foraging behavior of E. coli bacteria. This paper presents a variation on the original BFO algorithm, namely, the Cooperative Bacterial Foraging Optimization (CBFO), which significantly improves the original BFO in solving complex optimization problems. This significant improvement is achieved by applying two cooperative approaches to the original BFO, namely, the serial heterogeneous cooperation on the implicit space decomposition level and the serial heterogeneous cooperation on the hybrid space decomposition level. The experiments compare the performance of two CBFO variants with the original BFO, the standard PSO and a real-coded GA on four widely used benchmark functions. The new method shows a marked improvement in performance over the original BFO and appears to be comparable with the PSO and GA.

  4. Optimal mixture experiments

    CERN Document Server

    Sinha, B K; Pal, Manisha; Das, P

    2014-01-01

    The book dwells mainly on the optimality aspects of mixture designs. As mixture models are a special case of regression models, a general discussion on regression designs has been presented, which includes topics like continuous designs, de la Garza phenomenon, Loewner order domination, Equivalence theorems for different optimality criteria and standard optimality results for single variable polynomial regression and multivariate linear and quadratic regression models. This is followed by a review of the available literature on estimation of parameters in mixture models. Based on recent research findings, the volume also introduces optimal mixture designs for estimation of optimum mixing proportions in different mixture models, which include Scheffé’s quadratic model, Darroch-Waller model, log- contrast model, mixture-amount models, random coefficient models and multi-response model.  Robust mixture designs and mixture designs in blocks have been also reviewed. Moreover, some applications of mixture desig...

  5. RF Gun Optimization Study

    International Nuclear Information System (INIS)

    Alicia Hofler; Pavel Evtushenko

    2007-01-01

    Injector gun design is an iterative process where the designer optimizes a few nonlinearly interdependent beam parameters to achieve the required beam quality for a particle accelerator. Few tools exist to automate the optimization process and thoroughly explore the parameter space. The challenging beam requirements of new accelerator applications such as light sources and electron cooling devices drive the development of RF and SRF photo injectors. A genetic algorithm (GA) has been successfully used to optimize DC photo injector designs at Cornell University [1] and Jefferson Lab [2]. We propose to apply GA techniques to the design of RF and SRF gun injectors. In this paper, we report on the initial phase of the study where we model and optimize a system that has been benchmarked with beam measurements and simulation
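
    The record reports using a genetic algorithm for injector optimization. The sketch below is a generic real-coded GA on a stand-in objective (a sphere function); in an actual gun study the fitness would come from beam-dynamics simulations, and none of the parameter choices here are from the paper.

    ```python
    # Hedged sketch of a real-coded genetic algorithm.  The objective is a
    # placeholder; a gun design study would evaluate a beam-dynamics figure
    # of merit instead.
    import numpy as np

    rng = np.random.default_rng(1)

    def objective(x):
        return np.sum(x**2)                      # stand-in fitness (lower is better)

    def ga(n_dim=4, pop_size=40, n_gen=100, bounds=(-5.0, 5.0),
           mut_sigma=0.1, elite=2):
        lo, hi = bounds
        pop = rng.uniform(lo, hi, size=(pop_size, n_dim))
        for _ in range(n_gen):
            fit = np.array([objective(ind) for ind in pop])
            pop = pop[np.argsort(fit)]           # sort population best-first
            new_pop = [pop[i].copy() for i in range(elite)]   # elitism
            while len(new_pop) < pop_size:
                i, j = rng.integers(0, pop_size // 2, size=2)  # parents from fitter half
                p1, p2 = pop[i], pop[j]
                alpha = rng.random(n_dim)                      # blend crossover
                child = alpha * p1 + (1 - alpha) * p2
                child += rng.normal(0.0, mut_sigma, n_dim)     # Gaussian mutation
                new_pop.append(np.clip(child, lo, hi))
            pop = np.array(new_pop)
        fit = np.array([objective(ind) for ind in pop])
        return pop[np.argmin(fit)], fit.min()

    best, best_fit = ga()
    print("best candidate:", best, " fitness:", best_fit)
    ```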

  6. Handbook of simulation optimization

    CERN Document Server

    Fu, Michael C

    2014-01-01

    The Handbook of Simulation Optimization presents an overview of the state of the art of simulation optimization, providing a survey of the most well-established approaches for optimizing stochastic simulation models and a sampling of recent research advances in theory and methodology. Leading contributors cover such topics as discrete optimization via simulation, ranking and selection, efficient simulation budget allocation, random search methods, response surface methodology, stochastic gradient estimation, stochastic approximation, sample average approximation, stochastic constraints, variance reduction techniques, model-based stochastic search methods and Markov decision processes. This single volume should serve as a reference for those already in the field and as a means for those new to the field for understanding and applying the main approaches. The intended audience includes researchers, practitioners and graduate students in the business/engineering fields of operations research, management science,...

  7. Dual chiral density wave in quark matter

    International Nuclear Information System (INIS)

    Tatsumi, Toshitaka

    2002-01-01

    We prove, within the Nambu-Jona-Lasinio model, that quark matter is unstable against forming a dual chiral density wave above a critical density. The presence of a dual chiral density wave leads to uniform ferromagnetism in quark matter. A similarity with spin density wave theory in the electron gas and with pion condensation theory is also pointed out. (author)

  8. Density functionals in the laboratory frame

    International Nuclear Information System (INIS)

    Giraud, B. G.

    2008-01-01

    We compare several definitions of the density of a self-bound system, such as a nucleus, in relation with its center-of-mass zero-point motion. A trivial deconvolution relates the internal density to the density defined in the laboratory frame. This result is useful for the practical definition of density functionals

  9. On VC-density over indiscernible sequences

    OpenAIRE

    Guingona, Vincent; Hill, Cameron Donnay

    2011-01-01

    In this paper, we study VC-density over indiscernible sequences (denoted VC_ind-density). We answer an open question in [1], showing that VC_ind-density is always integer valued. We also show that VC_ind-density and dp-rank coincide in the natural way.

  10. Calculation of the level density parameter using semi-classical approach

    International Nuclear Information System (INIS)

    Canbula, B.; Babacan, H.

    2011-01-01

    The level density parameters (the level density parameter a and the energy shift δ) of the back-shifted Fermi gas model have been determined for 1136 nuclei for which a complete level scheme is available. The level density parameter is calculated using the semi-classical single-particle level density, which can be obtained analytically for a spherical harmonic oscillator potential. This method also enables us to analyze the effect of the Coulomb potential on the level density parameter. The dependence of this parameter on energy has also been investigated. The other parameter, δ, is determined by fitting the experimental level scheme and the average resonance spacings for 289 nuclei. Only the level scheme is used in the optimization procedure for the remaining 847 nuclei. Level densities for some nuclei have been calculated using these parameter values. The results obtained have been compared with the experimental level schemes and the resonance spacing data.
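
    For reference, the total level density of the back-shifted Fermi gas model is commonly written in the form below (the record's exact convention may differ); here U = E - δ is the back-shifted excitation energy, a the level density parameter and σ the spin cut-off parameter.

    ```latex
    % Common form of the back-shifted Fermi gas level density (assumed, not
    % quoted from the record):
    \rho(E) \;=\; \frac{\exp\!\left(2\sqrt{a\,(E-\delta)}\right)}
                       {12\sqrt{2}\,\sigma\,a^{1/4}\,(E-\delta)^{5/4}}
    ```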

  11. Optimally Locating MARFORRES Units

    OpenAIRE

    Salmeron, Javier; Dell, Rob

    2015-01-01

    The U.S. Marine Forces Reserve (USMCR, MARFORRES) is conducting realignment studies where discretionary changes may benefit from formal mathematical analysis. This study has developed an optimization tool to guide and/or support Commander, MARFORRES (CMFR) decisions. A prototype of the optimization tool has been tested with data from the units and Reserve Training Centers (RTCs) in the San Francisco, CA and Sacramento, CA areas. Prepared for: MARFORRES, POC:...

  12. Optimization of Bolt Stress

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2013-01-01

    The state of stress in bolts and nuts with the ISO metric thread design is examined and optimized. The assumed failure mode is fatigue, so the applied preload and the load amplitude together with the stress concentrations define the connection strength. Maximum stress in the bolt is found at the fillet...... under the head, at the thread start or at the thread root. To minimize the stress concentration, shape optimization is applied....

  13. Optimization of Antivirus Software

    OpenAIRE

    Catalin BOJA; Adrian VISOIU

    2007-01-01

    The paper describes the main techniques used in the development of computer antivirus software applications. For this particular category of software, optimum criteria are identified and defined that help determine which solution is better and what the objectives of the optimization process are. From the general viewpoint of software optimization, methods and techniques applied at the code development level are presented. Regarding the particularities of antivirus software, the paper analyze...

  14. Bargaining with Optimism

    OpenAIRE

    Yildiz, Muhamet

    2011-01-01

    Excessive optimism is a prominent explanation for bargaining delays. Recent results demonstrate that optimism plays a subtle role in bargaining, and its careful analysis may shed valuable insights into negotiation behavior. This article reviews some of these results, focusing on the following findings. First, when there is a nearby deadline, optimistic players delay the agreement to the last period before the deadline, replicating a broad empirical regularity known as the deadline effect. Sec...

  15. Tax optimization of companies

    OpenAIRE

    Dědinová, Pavla

    2017-01-01

    This diploma thesis deals with tax optimization of companies. The thesis is divided into two main parts - the theoretical and practical part. The introduction of the theoretical part describes the history of taxes, their basic characteristics and the importance of their collection for today's society. Subsequently, the tax system of the Czech Republic with a focus on value added tax and corporation tax is presented. The practical part deals with specific possibilities of optimization of the a...

  16. Optimal Responsible Investment

    DEFF Research Database (Denmark)

    Jessen, Pernille

    The paper studies retail Socially Responsible Investment and portfolio allocation. It extends conventional portfolio theory by allowing for a personal value based investment decision. When preferences for responsibility enter the framework for mean-variance analysis, it yields an optimal...... responsible investment model. An example of index investing illustrates the theory. Results show that it is crucial for the responsible investor to consider portfolio risk, expected return, and responsibility simultaneously in order to obtain an optimal portfolio. The model enables responsible investors...
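
    As a hedged sketch of the idea of adding a responsibility preference to mean-variance analysis (not the paper's model), the snippet below maximizes expected return minus a risk penalty plus a responsibility term over long-only weights; all returns, covariances, responsibility scores and preference parameters are made-up illustrative values.

    ```python
    # Hedged sketch: mean-variance objective extended with a responsibility term.
    import numpy as np
    from scipy.optimize import minimize

    mu = np.array([0.06, 0.08, 0.05])          # expected returns (assumed)
    Sigma = np.array([[0.04, 0.01, 0.00],      # covariance matrix (assumed)
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.03]])
    resp = np.array([0.9, 0.2, 0.6])           # responsibility scores in [0, 1] (assumed)
    lam, gamma = 3.0, 0.02                     # risk aversion and responsibility preference

    def neg_utility(w):
        # negative of: expected return - risk penalty + responsibility bonus
        return -(mu @ w - lam * w @ Sigma @ w + gamma * resp @ w)

    n = len(mu)
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]   # fully invested
    bounds = [(0.0, 1.0)] * n                                   # long-only
    res = minimize(neg_utility, x0=np.full(n, 1.0 / n),
                   bounds=bounds, constraints=cons)
    print("weights:", np.round(res.x, 3))
    ```

    Raising gamma tilts the allocation toward high-responsibility assets, which illustrates the paper's point that risk, return and responsibility have to be traded off jointly rather than sequentially.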

  17. Guided randomness in optimization

    CERN Document Server

    Clerc, Maurice

    2015-01-01

    The performance of an optimization algorithm depends on the GNA it uses. This book focuses on the comparison of optimizers; it defines a stress-outcome approach from which all the classic criteria (median, average, etc.) and other more sophisticated ones can be derived. Source codes used for the examples are also presented; this allows a reflection on the "superfluous chance," succinctly explaining why and how the stochastic aspect of optimization could be avoided in some cases.

  18. Building a Universal Nuclear Energy Density Functional

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, Joe A. [Michigan State Univ., East Lansing, MI (United States); Furnstahl, Dick; Horoi, Mihai; Lusk, Rusty; Nazarewicz, Witek; Ng, Esmond; Thompson, Ian; Vary, James

    2012-12-30

    During the period of Dec. 1 2006 – Jun. 30, 2012, the UNEDF collaboration carried out a comprehensive study of all nuclei, based on the most accurate knowledge of the strong nuclear interaction, the most reliable theoretical approaches, the most advanced algorithms, and extensive computational resources, with a view towards scaling to the petaflop platforms and beyond. The long-term vision initiated with UNEDF is to arrive at a comprehensive, quantitative, and unified description of nuclei and their reactions, grounded in the fundamental interactions between the constituent nucleons. We seek to replace current phenomenological models of nuclear structure and reactions with a well-founded microscopic theory that delivers maximum predictive power with well-quantified uncertainties. Specifically, the mission of this project has been three-fold: First, to find an optimal energy density functional (EDF) using all our knowledge of the nucleonic Hamiltonian and basic nuclear properties; Second, to apply the EDF theory and its extensions to validate the functional using all the available relevant nuclear structure and reaction data; Third, to apply the validated theory to properties of interest that cannot be measured, in particular the properties needed for reaction theory.

  19. High-density oxidized porous silicon

    International Nuclear Information System (INIS)

    Gharbi, Ahmed; Souifi, Abdelkader; Remaki, Boudjemaa; Halimaoui, Aomar; Bensahel, Daniel

    2012-01-01

    We have studied oxidized porous silicon (OPS) properties using Fourier transform infrared (FTIR) spectroscopy and capacitance–voltage C–V measurements. We report the first experimental determination of the optimum porosity allowing the elaboration of high-density OPS insulators. This is an important contribution to the research of thick integrated electrical insulators on porous silicon based on an optimized process ensuring dielectric quality (complete oxidation) and mechanical and chemical reliability (no residual pores or silicon crystallites). Through the measurement of the refractive indexes of the porous silicon (PS) layer before and after oxidation, one can determine the structural composition of the OPS material in silicon, air and silica. We have experimentally demonstrated that a porosity approaching 56% of the as-prepared PS layer is required to ensure a complete oxidation of PS without residual silicon crystallites and with minimum porosity. The effective dielectric constant values of OPS materials determined from capacitance–voltage C–V measurements are discussed and compared with the predictions from the FTIR results. (paper)

  20. Research of mechanism of density lock

    International Nuclear Information System (INIS)

    Wang Shengfei; Yan Changqi; Gu Haifeng

    2010-01-01

    The mechanism of the density lock was analyzed according to its working conditions. The results showed that an undisturbed stratification satisfies the working conditions of the density lock; the fluids on either side of the stratification do not mix even when they are connected to each other; and the density lock can be opened automatically by controlling the pressure balance at the stratification. When a disturbance exists, the stratification may break down and mass is transferred by convection. The stability of the stratification can be enhanced by placing a special structure in the density lock to ensure its normal operation. Finally, the minimum heat loss in the density lock was also analyzed. (authors)

  1. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
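
    The paper's formulation uses expected shortfall and a link to support vector regression; the hedged sketch below illustrates the simpler regularized minimum-variance problem instead, to show the mechanism by which an L2 penalty acts as a diversification pressure. The synthetic return data and the choice of eta values are assumptions.

    ```python
    # Hedged sketch of L2-regularized portfolio selection:
    #     min_w  w' Sigma w + eta * ||w||^2   s.t.  sum(w) = 1,
    # whose solution is w ~ (Sigma + eta I)^{-1} 1, normalized.
    # Few observations relative to assets mimics the regime where
    # estimation noise destabilizes the unregularized solution.
    import numpy as np

    rng = np.random.default_rng(42)
    T, N = 60, 20
    R = rng.normal(0.0, 0.02, (T, N))            # synthetic return history
    Sigma_hat = np.cov(R, rowvar=False)          # noisy sample covariance

    def min_var_weights(Sigma, eta):
        ones = np.ones(Sigma.shape[0])
        w = np.linalg.solve(Sigma + eta * np.eye(Sigma.shape[0]), ones)
        return w / w.sum()

    for eta in (0.0, 1e-3, 1e-2):
        w = min_var_weights(Sigma_hat, eta)
        print(f"eta={eta:g}:  largest |weight| = {np.abs(w).max():.3f}")
    ```

    As eta grows, the largest weight shrinks and the portfolio spreads out, which is the "diversification pressure" interpretation of the regularizer described above.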

  2. 5th Optimization Day

    CERN Document Server

    Mees, Alistair; Fisher, Mike; Jennings, Les

    2000-01-01

    'Optimization Day' (OD) has been a series of annual mini-conferences in Australia since 1994. The purpose of this series of events is to gather researchers in optimization and its related areas from Australia and their collaborators, in order to exchange new developments of optimization theories, methods and their applications. The first four OD mini-conferences were held in The University of Ballarat (1994), The University of New South Wales (1995), The University of Melbourne (1996) and Royal Melbourne Institute of Technology (1997), respectively. They were all on the eastern coast of Australia. The fifth mini-conference Optimization Days was held at the Centre for Applied Dynamics and Optimization (CADO), Department of Mathematics and Statistics, The University of Western Australia, Perth, from 29 to 30 June 1998. This is the first time the OD mini-conference has been held at the western coast of Australia. This fifth OD preceded the International Conference on Optimization: Techniques and Applica...

  3. Learning optimal embedded cascades.

    Science.gov (United States)

    Saberian, Mohammad Javad; Vasconcelos, Nuno

    2012-10-01

    The problem of automatic and optimal design of embedded object detector cascades is considered. Two main challenges are identified: optimization of the cascade configuration and optimization of individual cascade stages, so as to achieve the best tradeoff between classification accuracy and speed, under a detection rate constraint. Two novel boosting algorithms are proposed to address these problems. The first, RCBoost, formulates boosting as a constrained optimization problem which is solved with a barrier penalty method. The constraint is the target detection rate, which is met at all iterations of the boosting process. This enables the design of embedded cascades of known configuration without extensive cross validation or heuristics. The second, ECBoost, searches over cascade configurations to achieve the optimal tradeoff between classification risk and speed. The two algorithms are combined into an overall boosting procedure, RCECBoost, which optimizes both the cascade configuration and its stages under a detection rate constraint, in a fully automated manner. Extensive experiments in face, car, pedestrian, and panda detection show that the resulting detectors achieve an accuracy versus speed tradeoff superior to those of previous methods.

  4. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  5. Optimizing countershading camouflage.

    Science.gov (United States)

    Cuthill, Innes C; Sanghera, N Simon; Penacchio, Olivier; Lovell, Paul George; Ruxton, Graeme D; Harris, Julie M

    2016-11-15

    Countershading, the widespread tendency of animals to be darker on the side that receives strongest illumination, has classically been explained as an adaptation for camouflage: obliterating cues to 3D shape and enhancing background matching. However, there have only been two quantitative tests of whether the patterns observed in different species match the optimal shading to obliterate 3D cues, and no tests of whether optimal countershading actually improves concealment or survival. We use a mathematical model of the light field to predict the optimal countershading for concealment that is specific to the light environment and then test this prediction with correspondingly patterned model "caterpillars" exposed to avian predation in the field. We show that the optimal countershading is strongly illumination-dependent. A relatively sharp transition in surface patterning from dark to light is only optimal under direct solar illumination; if there is diffuse illumination from cloudy skies or shade, the pattern provides no advantage over homogeneous background-matching coloration. Conversely, a smoother gradation between dark and light is optimal under cloudy skies or shade. The demonstration of these illumination-dependent effects of different countershading patterns on predation risk strongly supports the comparative evidence showing that the type of countershading varies with light environment.

  6. Totally optimal decision rules

    KAUST Repository

    Amin, Talha

    2017-11-22

    Optimality of decision rules (patterns) can be measured in many ways. One of these is referred to as length. Length signifies the number of terms in a decision rule and is optimally minimized. Another, coverage represents the width of a rule’s applicability and generality. As such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee for each tuple of values of the function that totally optimal rules exist for each row of the table (as in the case of total Boolean functions where the cardinalities are equal to 2) or, for each row, we can find a tuple of values of the function for which totally optimal rules do not exist for this row.
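
    A hedged, brute-force illustration of the definitions (not the paper's method): for one row of a tiny synthetic decision table, enumerate all valid rules built from that row's attribute conditions, then check whether some rule attains both the minimum length and the maximum coverage.

    ```python
    # Hedged sketch: brute-force check for a totally optimal rule for one row
    # of a small decision table (synthetic data).  A rule is a subset of the
    # row's attribute-value conditions; it is valid if every row matching the
    # conditions has the same decision.  Length = number of conditions,
    # coverage = number of matching rows.
    from itertools import combinations

    # columns: (f1, f2, f3, decision)
    table = [
        (0, 0, 1, "a"),
        (0, 1, 1, "a"),
        (1, 0, 1, "b"),
        (1, 1, 0, "b"),
    ]

    def valid_rules_for_row(table, row_idx):
        row = table[row_idx]
        n_attr = len(row) - 1
        rules = []
        for k in range(n_attr + 1):
            for attrs in combinations(range(n_attr), k):
                matches = [r for r in table if all(r[a] == row[a] for a in attrs)]
                if all(r[-1] == row[-1] for r in matches):      # rule is valid
                    rules.append((attrs, len(attrs), len(matches)))
        return rules

    rules = valid_rules_for_row(table, 0)
    min_len = min(length for _, length, _ in rules)
    max_cov = max(cov for _, _, cov in rules)
    totally_optimal = [r for r in rules if r[1] == min_len and r[2] == max_cov]
    print("totally optimal rules:", totally_optimal or "none exist for this row")
    ```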

  7. Adaptive Bacterial Foraging Optimization

    Directory of Open Access Journals (Sweden)

    Hanning Chen

    2011-01-01

    Full Text Available Bacterial Foraging Optimization (BFO) is a recently developed nature-inspired optimization algorithm, which is based on the foraging behavior of E. coli bacteria. Up to now, BFO has been applied successfully to some engineering problems due to its simplicity and ease of implementation. However, BFO possesses a poor convergence behavior over complex optimization problems as compared to other nature-inspired optimization techniques. This paper first analyzes how the run-length unit parameter of BFO controls the exploration of the whole search space and the exploitation of the promising areas. Then it presents a variation on the original BFO, called the adaptive bacterial foraging optimization (ABFO), employing adaptive foraging strategies to improve the performance of the original BFO. This improvement is achieved by enabling the bacterial foraging algorithm to adjust the run-length unit parameter dynamically during algorithm execution in order to balance the exploration/exploitation tradeoff. The experiments compare the performance of two versions of ABFO with the original BFO, the standard particle swarm optimization (PSO) and a real-coded genetic algorithm (GA) on four widely used benchmark functions. The proposed ABFO shows a marked improvement in performance over the original BFO and appears to be comparable with the PSO and GA.
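
    The sketch below is a heavily simplified, hedged illustration of chemotaxis with an adaptive run-length unit (the step is shrunk geometrically when a bacterium stops improving); it is not the ABFO adaptation rule from the paper, and the sphere objective and all parameters are assumptions.

    ```python
    # Hedged sketch of bacterial-foraging-style chemotaxis with an adaptive
    # run-length unit on a stand-in objective.
    import numpy as np

    rng = np.random.default_rng(7)

    def cost(x):
        return np.sum(x**2)                       # placeholder objective

    def abfo_sketch(n_bact=20, dim=5, n_steps=200, c0=0.5, decay=0.95):
        pop = rng.uniform(-5, 5, (n_bact, dim))
        run_len = np.full(n_bact, c0)             # per-bacterium run-length unit
        best = min(cost(b) for b in pop)
        for _ in range(n_steps):
            for i in range(n_bact):
                old = cost(pop[i])
                direction = rng.normal(size=dim)
                direction /= np.linalg.norm(direction)
                trial = pop[i] + run_len[i] * direction   # tumble and run
                if cost(trial) < old:
                    pop[i] = trial                        # keep swimming this way
                else:
                    run_len[i] *= decay                   # exploit: shrink step size
                best = min(best, cost(pop[i]))
        return best

    print("best cost found:", abfo_sketch())
    ```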

  8. Totally optimal decision rules

    KAUST Repository

    Amin, Talha M.; Moshkov, Mikhail

    2017-01-01

    Optimality of decision rules (patterns) can be measured in many ways. One of these is referred to as length. Length signifies the number of terms in a decision rule and is optimally minimized. Another, coverage represents the width of a rule’s applicability and generality. As such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee for each tuple of values of the function that totally optimal rules exist for each row of the table (as in the case of total Boolean functions where the cardinalities are equal to 2) or, for each row, we can find a tuple of values of the function for which totally optimal rules do not exist for this row.

  9. Optimization and anti-optimization of structures under uncertainty

    National Research Council Canada - National Science Library

    Elishakoff, Isaac; Ohsaki, Makoto

    2010-01-01

    The volume presents a collaboration between internationally recognized experts on anti-optimization and structural optimization, and summarizes various novel ideas, methodologies and results studied over 20 years...

  10. Optimization of inspection and replacement period by using Bayesian statistics

    International Nuclear Information System (INIS)

    Kasai, Masao; Watanabe, Yasushi; Kusakari, Yoshiyuki; Notoya, Junichi

    2006-01-01

    This study describes formulations for optimizing the time interval of inspections and/or replacements of equipment/parts, taking into account the probability density functions (PDF) of the failure rates and of the parameters of the failure distribution functions (FDF). The time intervals optimized with these formulations are evaluated by comparing them with those obtained using only representative values of the failure rates and FDF parameters instead of the full PDFs. The PDFs are obtained with a Bayesian method and the representative values with a likelihood estimation method. However, no significant difference is observed between the two sets of optimized results in our preliminary calculations. (author)
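
    A hedged sketch of the comparison the record describes: optimize a periodic inspection interval either with a representative (point) value of the failure rate or with the full distribution of the rate. The cost-rate model c_I/T + c_d*lambda*T/2 and the Gamma distribution below are illustrative assumptions, not the paper's formulation.

    ```python
    # Hedged sketch: inspection-interval optimization with a point estimate of
    # the failure rate versus averaging the cost over a distribution of the rate.
    import numpy as np
    from scipy.optimize import minimize_scalar

    c_I, c_d = 1.0, 50.0                      # inspection cost, downtime cost rate (assumed)
    rng = np.random.default_rng(3)
    lam_samples = rng.gamma(shape=4.0, scale=0.005, size=20000)   # "posterior" of the rate
    lam_hat = lam_samples.mean()                                  # representative value

    def cost_rate(T, lam):
        # inspection cost per cycle plus approximate expected undetected-failure cost
        return c_I / T + c_d * lam * T / 2.0

    res_point = minimize_scalar(lambda T: cost_rate(T, lam_hat),
                                bounds=(0.1, 100.0), method="bounded")
    res_dist = minimize_scalar(lambda T: cost_rate(T, lam_samples).mean(),
                               bounds=(0.1, 100.0), method="bounded")
    print(f"point-estimate interval:      {res_point.x:.2f}")
    print(f"distribution-based interval:  {res_dist.x:.2f}")
    ```

    With this particular cost model, which is linear in the failure rate, the two intervals coincide; for cost models that are nonlinear in the uncertain parameters the distinction between using the full PDF and a representative value can matter more.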

  11. Handbook of optimization in telecommunications

    CERN Document Server

    Pardalos, Panos M

    2008-01-01

    Covers the field of optimization in telecommunications, and the optimization developments that are frequently applied to telecommunications. This book aims to provide a reference tool for scientists and engineers in telecommunications who depend upon optimization.

  12. Minimal nuclear energy density functional

    Science.gov (United States)

    Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi; Perez, Rodrigo Navarro; Schunck, Nicolas

    2018-04-01

    We present a minimal nuclear energy density functional (NEDF) called "SeaLL1" that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV , two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV respectively, and the charge radii of 345 even-even nuclei with a mean error ɛr=0.022 fm and a standard deviation σr=0.025 fm . SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body (NN ) interactions at the next-to-next-to-next-to leading order (N3LO) level and three-body (NNN ) interactions at the next-to-next-to leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of the homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. We identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.

  13. Foraging optimally for home ranges

    Science.gov (United States)

    Mitchell, Michael S.; Powell, Roger A.

    2012-01-01

    Economic models predict behavior of animals based on the presumption that natural selection has shaped behaviors important to an animal's fitness to maximize benefits over costs. Economic analyses have shown that territories of animals are structured by trade-offs between benefits gained from resources and costs of defending them. Intuitively, home ranges should be similarly structured, but trade-offs are difficult to assess because there are no costs of defense, thus economic models of home-range behavior are rare. We present economic models that predict how home ranges can be efficient with respect to spatially distributed resources, discounted for travel costs, under 2 strategies of optimization, resource maximization and area minimization. We show how constraints such as competitors can influence the structure of home ranges through resource depression, ultimately structuring density of animals within a population and their distribution on a landscape. We present simulations based on these models to show how they can be generally predictive of home-range behavior and the mechanisms that structure the spatial distribution of animals. We also show how contiguous home ranges estimated statistically from location data can be misleading for animals that optimize home ranges on landscapes with patchily distributed resources. We conclude with a summary of how we applied our models to nonterritorial black bears (Ursus americanus) living in the mountains of North Carolina, where we found their home ranges were best predicted by an area-minimization strategy constrained by intraspecific competition within a social hierarchy. Economic models can provide strong inference about home-range behavior and the resources that structure home ranges by offering falsifiable, a priori hypotheses that can be tested with field observations.

  14. Optimal conservation of migratory species.

    Directory of Open Access Journals (Sweden)

    Tara G Martin

    Full Text Available BACKGROUND: Migratory animals comprise a significant portion of biodiversity worldwide with annual investment for their conservation exceeding several billion dollars. Designing effective conservation plans presents enormous challenges. Migratory species are influenced by multiple events across land and sea-regions that are often separated by thousands of kilometres and span international borders. To date, conservation strategies for migratory species fail to take into account how migratory animals are spatially connected between different periods of the annual cycle (i.e. migratory connectivity), bringing into question the utility and efficiency of current conservation efforts. METHODOLOGY/PRINCIPAL FINDINGS: Here, we report the first framework for determining an optimal conservation strategy for a migratory species. Employing a decision theoretic approach using dynamic optimization, we address the problem of how to allocate resources for habitat conservation for a Neotropical-Nearctic migratory bird, the American redstart Setophaga ruticilla, whose winter habitat is under threat. Our first conservation strategy used the acquisition of winter habitat based on land cost, relative bird density, and the rate of habitat loss to maximize the abundance of birds on the wintering grounds. Our second strategy maximized bird abundance across the entire range of the species by adding the constraint of maintaining a minimum percentage of birds within each breeding region in North America using information on migratory connectivity as estimated from stable-hydrogen isotopes in feathers. We show that failure to take into account migratory connectivity may doom some regional populations to extinction, whereas including information on migratory connectivity results in the protection of the species across its entire range. CONCLUSIONS/SIGNIFICANCE: We demonstrate that conservation strategies for migratory animals depend critically upon two factors: knowledge of

  15. Separation of density and viscosity influence on liquid-loaded surface acoustic wave devices

    Science.gov (United States)

    Herrmann, F.; Hahn, D.; Büttgenbach, S.

    1999-05-01

    Love-mode sensors are reported for separate measurement of liquid density and viscosity. They combine the general merits of Love-mode devices, e.g., ease of sensitivity adjustment and robustness, with a highly effective procedure of separate determination of liquid density and viscosity. A model is proposed to describe the frequency response of the devices to liquid loading. Moreover, design rules are given for further optimization and sensitivity enhancement.

  16. Power Link Optimization for a Neurostimulator in Nasal Cavity

    Directory of Open Access Journals (Sweden)

    Seunghyun Lee

    2017-01-01

    Full Text Available This paper examines system optimization for wirelessly powering a small implant embedded in tissue. For a given small receiver in a multilayer tissue model, the transmitter is abstracted as a sheet of tangential current density for which the optimal distribution is analytically found. This proposes a new design methodology for wireless power transfer systems. That is, from the optimal current distribution, the maximum achievable efficiency is derived first. Next, various design parameters are determined to achieve the target efficiency. Based on this design methodology, a centimeter-sized neurostimulator inside the nasal cavity is demonstrated. For this centimeter-sized implant, the optimal distribution resembles that of a coil source and the optimal frequency is around 15 MHz. While the existing solution showed an efficiency of about 0.3 percent, the proposed system could enhance the efficiency fivefold.

  17. Leptin and bone mineral density

    DEFF Research Database (Denmark)

    Morberg, Cathrine M.; Tetens, Inge; Black, Eva

    2003-01-01

    Leptin has been suggested to decrease bone mineral density (BMD). This observational analysis explored the relationship between serum leptin and BMD in 327 nonobese men (controls) (body mass index 26.1 +/- 3.7 kg/m(2), age 49.9 +/- 6.0 yr) and 285 juvenile obese men (body mass index 35.9 +/- 5.9 kg...... males, but it also stresses the fact that the strong covariation between the examined variables is a shortcoming of the cross-sectional design....

  18. Bounded Densities and Their Derivatives

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, V.

    2009-01-01

    This paper describes how one can compute interval-valued statistical measures given limited information about the underlying distribution. The particular focus is on a bounded derivative of a probability density function and its combination with other available statistical evidence for computing ...... quantities of interest. To be able to utilise the evidence about the derivative it is suggested to adapt the ‘conventional’ problem statement to variational calculus and the way to do so is demonstrated. A number of examples are given throughout the paper....

  19. Equilibrium problems for Raney densities

    Science.gov (United States)

    Forrester, Peter J.; Liu, Dang-Zheng; Zinn-Justin, Paul

    2015-07-01

    The Raney numbers are a class of combinatorial numbers generalising the Fuss-Catalan numbers. They are indexed by a pair of positive real numbers (p, r) with p > 1 and 0 0 and similarly use both methods to identify the equilibrium problem for (p, r) = (θ/q + 1, 1/q), θ > 0 and q \\in Z+ . The Wiener-Hopf method is used to extend the latter to parameters (p, r) = (θ/q + 1, m + 1/q) for m a non-negative integer, and also to identify the equilibrium problem for a family of densities with moments given by certain binomial coefficients.

  20. High density fuel storage rack

    International Nuclear Information System (INIS)

    Zezza, L.J.

    1980-01-01

    High storage density for spent nuclear fuel assemblies in a pool is achieved by positioning fuel storage cells made of materials with high thermal neutron absorption in an upright configuration in a rack. The rack holds the cells at the required pitch. Each cell carries an internal fuel assembly support, and most cells are vertically movable in the rack so that they rest on the pool bottom. Pool water circulation through the cells and around the fuel assemblies is permitted by circulation openings at the top and bottom of the cells, above and below the fuel assemblies.