WorldWideScience

Sample records for analytical optimization study

  1. Numerical and Analytical Study of Optimal Low-Thrust Limited-Power Transfers between Close Circular Coplanar Orbits

    Directory of Open Access Journals (Sweden)

    Sandro da Silva Fernandes

    2007-01-01

Full Text Available A numerical and analytical study of optimal low-thrust limited-power trajectories for simple transfer (no rendezvous) between close circular coplanar orbits in an inverse-square force field is presented. The numerical study is carried out by means of an indirect approach to the optimization problem, in which the two-point boundary value problem obtained from the set of necessary conditions describing the optimal solutions is solved through a neighboring-extremal algorithm based on solving the linearized two-point boundary value problem via a Riccati transformation. The analytical study is provided by a linear theory which is expressed in terms of nonsingular elements and is determined through canonical transformation theory. Fuel consumption is taken as the performance criterion, and the analysis is carried out for various radius ratios and transfer durations. The results are compared to those provided by a numerical method based on gradient techniques.
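The indirect approach described above reduces the optimal control problem to a two-point boundary value problem. A minimal sketch of that reduction's numerical core, using SciPy's collocation-based BVP solver on a toy linear problem (not the record's neighboring-extremal/Riccati scheme):

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy two-point BVP: y'' = -y with y(0) = 0, y(pi/2) = 1 (exact solution: sin x).
# Indirect optimal-control methods produce BVPs of this general shape.
def rhs(x, y):
    return np.vstack([y[1], -y[0]])          # state: (y, y')

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])    # boundary conditions at both ends

x = np.linspace(0.0, np.pi / 2, 20)
y0 = np.zeros((2, x.size))                   # flat initial guess
sol = solve_bvp(rhs, bc, x, y0)
print(sol.status)                            # 0 on success
```

The solution can then be evaluated anywhere via `sol.sol(x)`, here matching `sin(x)` to solver tolerance.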

  2. Gradient Optimization for Analytic conTrols - GOAT

    Science.gov (United States)

    Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank

Quantum optimal control has become a necessary step in a number of studies in the quantum realm. Recent experimental advances have shown that superconducting qubits can be controlled with impressive accuracy. However, most standard optimal control algorithms are not designed to deliver such high accuracy. To tackle this issue, a novel quantum optimal control algorithm has been introduced: Gradient Optimization for Analytic conTrols (GOAT). It avoids the piecewise-constant approximation of the control pulse used by standard algorithms, which allows an efficient implementation of very high accuracy optimization. It also includes a novel method to compute the gradient that provides many advantages, e.g. the absence of backpropagation and a natural route to optimizing the robustness of the control pulses. This talk will present the GOAT algorithm and a few applications to transmon systems.
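The key idea, propagating the unitary together with its derivative with respect to an analytic pulse parameter, can be sketched on a one-parameter toy problem (a single qubit with an assumed drive `c(t) = alpha*sin(pi*t)`; illustrative only, not the authors' code):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# One-parameter GOAT-style sketch: propagate U and dU/dalpha jointly, so the
# fidelity gradient is exact and no piecewise-constant pulse is needed.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
U_target = np.array([[0, -1j], [-1j, 0]])      # pi rotation about x

def rhs(t, y, alpha):
    U, dU = y[:4].reshape(2, 2), y[4:].reshape(2, 2)
    H = alpha * np.sin(np.pi * t) * sx / 2     # analytic control ansatz
    dH = np.sin(np.pi * t) * sx / 2            # dH/dalpha
    return np.concatenate([(-1j * H @ U).ravel(),
                           (-1j * (dH @ U + H @ dU)).ravel()])

def neg_fidelity_and_grad(x):
    y0 = np.concatenate([np.eye(2, dtype=complex).ravel(),
                         np.zeros(4, dtype=complex)])
    sol = solve_ivp(rhs, (0.0, 1.0), y0, args=(x[0],), rtol=1e-9, atol=1e-9)
    U, dU = sol.y[:4, -1].reshape(2, 2), sol.y[4:, -1].reshape(2, 2)
    g = np.trace(U_target.conj().T @ U) / 2    # complex overlap
    dg = np.trace(U_target.conj().T @ dU) / 2
    F = abs(g) ** 2                            # gate fidelity
    return 1.0 - F, np.array([-2.0 * (g.conjugate() * dg).real])

res = minimize(neg_fidelity_and_grad, x0=[3.0], jac=True, method='BFGS')
print(round(res.x[0], 3))   # converges to pi^2/2 for this commuting toy Hamiltonian
```

Because the Hamiltonian here commutes with itself at all times, the optimum is known analytically (`alpha = pi^2/2`), which makes the sketch easy to check.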

  3. Parallel Aircraft Trajectory Optimization with Analytic Derivatives

    Science.gov (United States)

    Falck, Robert D.; Gray, Justin S.; Naylor, Bret

    2016-01-01

Trajectory optimization is an integral component of the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.
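The collocation idea, turning dynamics into sparse algebraic "defect" constraints, can be sketched on a toy problem. This uses simple trapezoidal collocation (not the record's Legendre-Gauss-Lobatto scheme or OpenMDAO): a double integrator driven from rest at 0 to rest at 1 in unit time, minimizing control energy, whose exact optimum is 12:

```python
import numpy as np
from scipy.optimize import minimize

# Direct trapezoidal collocation for x'' = u on t in [0, 1],
# boundary conditions (0, 0) -> (1, 0), minimizing integral of u^2 dt.
N = 21
t = np.linspace(0.0, 1.0, N)
h = t[1] - t[0]

def unpack(z):
    return z[:N], z[N:2*N], z[2*N:]          # position, velocity, control

def cost(z):
    _, _, u = unpack(z)
    return h * np.sum((u[1:]**2 + u[:-1]**2) / 2)   # trapezoid rule

def defects(z):
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - h * (v[1:] + v[:-1]) / 2  # dynamics defects
    dv = v[1:] - v[:-1] - h * (u[1:] + u[:-1]) / 2
    return np.concatenate([dx, dv, [x[0], v[0], x[-1] - 1.0, v[-1]]])

z0 = np.zeros(3 * N)
res = minimize(cost, z0, constraints={'type': 'eq', 'fun': defects},
               method='SLSQP', options={'maxiter': 300})
print(round(res.fun, 2))   # close to the analytic optimum of 12
```

Each defect couples only adjacent nodes, which is the sparsity the record exploits for parallelization.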

  4. OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Maja Vrkljan

    2004-12-01

Full Text Available The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. An analytical FAAS method for determining cobalt, chromium, copper, nickel, lead and zinc content in a gabbro sample and the geochemical standard AGV-1 was applied for verification. Dissolution in mixtures of various inorganic acids was tested, as well as the Na2CO3 fusion technique. The results obtained by the different methods were compared, and dissolution in the mixture of HNO3 + HF is recommended as optimal.

  5. Analytical method for optimization of maintenance policy based on available system failure data

    International Nuclear Information System (INIS)

    Coria, V.H.; Maximov, S.; Rivas-Dávalos, F.; Melchor, C.L.; Guardado, J.L.

    2015-01-01

An analytical optimization method for preventive maintenance (PM) policy with minimal repair at failure, periodic maintenance, and replacement is proposed for systems with historical failure time data influenced by a current PM policy. The method includes a new imperfect PM model based on the Weibull distribution and incorporates the current maintenance interval T0 and the optimal maintenance interval T to be found. The Weibull parameters are analytically estimated using maximum likelihood estimation. Based on this model, the optimal number of PM actions and the optimal maintenance interval for minimizing the expected cost over an infinite time horizon are also analytically determined. A number of examples are presented involving different failure time data and current maintenance intervals to analyze how the proposed analytical optimization method for periodic PM policy performs in response to changes in the distribution of the failure data and the current maintenance interval. - Highlights: • An analytical optimization method for preventive maintenance (PM) policy is proposed. • A new imperfect PM model is developed. • The Weibull parameters are analytically estimated using maximum likelihood. • The optimal maintenance interval and number of PM are also analytically determined. • The model is validated by several numerical examples
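The flavor of such analytical PM optimization can be shown on the classic periodic-replacement-with-minimal-repair model (a simpler cousin of the record's imperfect-PM model; all cost and Weibull numbers below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Weibull cumulative hazard H(T) = (T/eta)**beta; expected cost rate
#   C(T) = (c_rep + c_min * H(T)) / T
# Setting dC/dT = 0 gives the closed form
#   T* = eta * (c_rep / (c_min * (beta - 1)))**(1/beta)
beta, eta = 2.5, 1000.0      # hypothetical Weibull shape / scale (hours)
c_rep, c_min = 50.0, 10.0    # hypothetical replacement and minimal-repair costs

T_star = eta * (c_rep / (c_min * (beta - 1.0))) ** (1.0 / beta)

cost = lambda T: (c_rep + c_min * (T / eta) ** beta) / T
num = minimize_scalar(cost, bounds=(1.0, 5000.0), method='bounded')
print(round(T_star, 1), round(num.x, 1))   # analytic and numeric optima agree
```

The numerical check confirms the closed-form interval; the record's model additionally accounts for imperfect PM and the influence of the historical interval T0.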

  6. Analytical study of optimal design and gain parameters of double-slot plasmonic waveguides

    International Nuclear Information System (INIS)

    Handapangoda, Dayan; Rukhlenko, Ivan D; Premaratne, Malin

    2013-01-01

    We theoretically analyze guided modes in optically active and passive double-slot plasmonic waveguides. We show that for one of the two different mode symmetries supported by the waveguide, a most productive guiding condition can be realized by adjusting the thicknesses of the layers to optimal values. We also derive approximate analytic expressions to calculate the optimal geometrical parameters of the waveguide. Interestingly, our analysis shows that the propagation losses associated with the inverse mode symmetry of the double-slot waveguide are comparatively low, regardless of the dimensions of the waveguide. We further show that the propagation losses become the smallest in the limiting case of a single-slot (metal–dielectric–metal (MDM)) waveguide. For both double- and single-slot waveguides, we show that the gain required to overcome the losses can be reduced by choosing a dielectric with a low refractive index. We also derive accurate analytical expressions to readily estimate the critical gain and modal gain of the waveguides. (paper)

  7. Analytical Model-Based Design Optimization of a Transverse Flux Machine

    Energy Technology Data Exchange (ETDEWEB)

    Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2017-02-16

    This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.
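The PSO layer of such a design tool can be sketched with a toy surrogate objective standing in for the MEC torque-density model (the function and bounds below are placeholders, not machine physics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal particle swarm optimization; f stands in for the MEC-based objective
# over normalized design variables (e.g. pole length, magnet length, rotor thickness).
def f(x):
    return np.sum((x - 0.5) ** 2, axis=1)   # toy objective, optimum at x = 0.5

n, dim, iters = 30, 3, 200
x = rng.uniform(0, 1, (n, dim))
v = np.zeros((n, dim))
pbest, pval = x.copy(), f(x)                # personal bests
g = pbest[np.argmin(pval)]                  # global best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, 0, 1)                # keep particles inside the box
    val = f(x)
    better = val < pval
    pbest[better], pval[better] = x[better], val[better]
    g = pbest[np.argmin(pval)]

print(float(f(g[None])[0]))                 # best objective value (near zero)
```

In the record's tool, each objective evaluation is a fast magnetic-equivalent-circuit solve rather than an FEA run, which is what makes the swarm loop affordable.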

  8. Analytical study on the criticality of the stochastic optimal velocity model

    International Nuclear Information System (INIS)

    Kanai, Masahiro; Nishinari, Katsuhiro; Tokihiro, Tetsuji

    2006-01-01

In recent works, we have proposed a stochastic cellular automaton model of traffic flow connecting two exactly solvable stochastic processes, i.e., the asymmetric simple exclusion process and the zero range process, with an additional parameter. It can also be regarded as an extended version of the optimal velocity model, and it shows particularly notable properties. In this paper, we report that when the optimal velocity function is taken to be a step function, the entire flux-density graph (i.e. the fundamental diagram) can be estimated. We first find that the fundamental diagram consists of two line segments resembling an inverse-λ form, and then identify their end-points from the microscopic behaviour of vehicles. Notably, by using a microscopic parameter which indicates a driver's sensitivity to the traffic situation, we give an explicit formula for the critical point at which a traffic jam phase arises. We also compare these analytical results with those of the optimal velocity model and point out the crucial differences between them.

  9. Analytical development and optimization of a graphene–solution interface capacitance model

    Directory of Open Access Journals (Sweden)

    Hediyeh Karimi

    2014-05-01

Full Text Available Graphene, a new carbon material that shows great potential for a range of applications because of its exceptional electronic and mechanical properties, has attracted much attention in recent years. The use of graphene in nanoscale devices plays an important role in achieving more accurate and faster devices. Although there are many experimental studies in this area, there is a lack of analytical models. Our focus is quantum capacitance, one of the important properties of field effect transistors (FETs). The quantum capacitance of electrolyte-gated transistors (EGFETs), along with a relevant equivalent circuit, is derived in terms of Fermi velocity, carrier density, and fundamental physical quantities. The analytical model is compared with experimental data and the mean absolute percentage error (MAPE) is calculated to be 11.82. In order to decrease the error, a new function of E composed of α and β parameters is suggested. In a further attempt, the ant colony optimization (ACO) algorithm is implemented to optimize and develop the analytical model into a more accurate capacitance model. Based on the given results, the accuracy of the optimized model is more than 97%, which is within an acceptable range of accuracy.

  10. Analytic characterization of linear accelerator radiosurgery dose distributions for fast optimization

    International Nuclear Information System (INIS)

    Meeks, S.L.; Buatti, J.M.; Eyster, B.; Kendrick, L.A.

    1999-01-01

Linear accelerator (linac) radiosurgery utilizes non-coplanar arc therapy delivered through circular collimators. Generally, spherically symmetric arc sets are used, resulting in nominally spherical dose distributions. Various treatment planning parameters may be manipulated to provide dose conformation to irregular lesions. Iterative manipulation of these variables can be a difficult and time-consuming task, because (a) understanding the effect of these parameters is complicated and (b) three-dimensional (3D) dose calculations are computationally expensive. This manipulation can be simplified, however, because the prescription isodose surface for all single isocentre distributions can be approximated by conic sections. In this study, the effects of treatment planning parameter manipulation on the dimensions of the treatment isodose surface were determined empirically. These dimensions were then fitted to analytic functions, assuming that the dose distributions were characterized as conic sections. These analytic functions allowed real-time approximation of the 3D isodose surface. Iterative plan optimization, either manual or automated, is achieved more efficiently using this real-time approximation of the dose matrix. Subsequent to iterative plan optimization, the analytic function is related back to the appropriate plan parameters, and the dose distribution is determined using conventional dosimetry calculations. This provides a pseudo-inverse approach to radiosurgery optimization, based solely on geometric considerations. (author)

  11. Analytic semigroups and optimal regularity in parabolic problems

    CERN Document Server

    Lunardi, Alessandra

    2012-01-01

    The book shows how the abstract methods of analytic semigroups and evolution equations in Banach spaces can be fruitfully applied to the study of parabolic problems. Particular attention is paid to optimal regularity results in linear equations. Furthermore, these results are used to study several other problems, especially fully nonlinear ones. Owing to the new unified approach chosen, known theorems are presented from a novel perspective and new results are derived. The book is self-contained. It is addressed to PhD students and researchers interested in abstract evolution equations and in p

  12. Predictive Analytics for Coordinated Optimization in Distribution Systems

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Rui [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-04-13

    This talk will present NREL's work on developing predictive analytics that enables the optimal coordination of all the available resources in distribution systems to achieve the control objectives of system operators. Two projects will be presented. One focuses on developing short-term state forecasting-based optimal voltage regulation in distribution systems; and the other one focuses on actively engaging electricity consumers to benefit distribution system operations.

  13. SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Kenny S K; Lee, Louis K Y [Department of Clinical Oncology, Prince of Wales Hospital, Hong Kong SAR (China); Xing, L [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States); Chan, Anthony T C [Department of Clinical Oncology, Prince of Wales Hospital, Hong Kong SAR (China); State Key Laboratory of Oncology in South China, The Chinese University of Hong Kong, Hong Kong SAR (China)

    2015-06-15

Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least squares method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA Geforce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution, and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converging after 7 cycles of optimization. The computation took only about 10 seconds, and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
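The level-by-level scheme in the abstract, a least-squares fit followed by discarding non-positively weighted beamlets, can be sketched on synthetic data (random stand-ins for beamlet dose vectors, not clinical beamlets):

```python
import numpy as np

rng = np.random.default_rng(1)

# Columns of A play the role of pre-calculated beamlet dose distributions;
# the target is built from a nonnegative combination so a positive-weight
# solution exists.
A = rng.random((200, 60))
w_true = np.maximum(rng.normal(0.0, 1.0, 60), 0.0)
d_target = A @ w_true

active = np.arange(A.shape[1])
for level in range(7):                       # 7 levels, as in the abstract
    w, *_ = np.linalg.lstsq(A[:, active], d_target, rcond=None)
    if (w > 0).all():
        break
    active = active[w > 0]                   # keep positively weighted beamlets

w_final, *_ = np.linalg.lstsq(A[:, active], d_target, rcond=None)
d_est = A[:, active] @ w_final
r = np.corrcoef(d_target, d_est)[0, 1]
print(round(r, 4))                           # correlation of fit vs target
```

Each level is a dense least-squares solve, which is exactly the kind of linear-algebra kernel that maps well onto a GPU.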

  14. Analytic Optimization of Near-Field Optical Chirality Enhancement

    Science.gov (United States)

    2017-01-01

    We present an analytic derivation for the enhancement of local optical chirality in the near field of plasmonic nanostructures by tuning the far-field polarization of external light. We illustrate the results by means of simulations with an achiral and a chiral nanostructure assembly and demonstrate that local optical chirality is significantly enhanced with respect to circular polarization in free space. The optimal external far-field polarizations are different from both circular and linear. Symmetry properties of the nanostructure can be exploited to determine whether the optimal far-field polarization is circular. Furthermore, the optimal far-field polarization depends on the frequency, which results in complex-shaped laser pulses for broadband optimization. PMID:28239617

  15. Optimization of analytical techniques to characterize antibiotics in aquatic systems

    International Nuclear Information System (INIS)

    Al Mokh, S.

    2013-01-01

    Antibiotics are considered as pollutants when they are present in aquatic ecosystems, ultimate receptacles of anthropogenic substances. These compounds are studied as their persistence in the environment or their effects on natural organisms. Numerous efforts have been made worldwide to assess the environmental quality of different water resources for the survival of aquatic species, but also for human consumption and health risk related. Towards goal, the optimization of analytical techniques for these compounds in aquatic systems remains a necessity. Our objective is to develop extraction and detection methods for 12 molecules of aminoglycosides and colistin in sewage treatment plants and hospitals waters. The lack of analytical methods for analysis of these compounds and the deficiency of studies for their detection in water is the reason for their study. Solid Phase Extraction (SPE) in classic mode (offline) or online followed by Liquid Chromatography analysis coupled with Mass Spectrometry (LC/MS/MS) is the most method commonly used for this type of analysis. The parameters are optimized and validated to ensure the best conditions for the environmental analysis. This technique was applied to real samples of wastewater treatment plants in Bordeaux and Lebanon. (author)

  16. Optimal starting conditions for the rendezvous maneuver: Analytical and computational approach

    Science.gov (United States)

    Ciarcia, Marco

by the optimal trajectory. For the guidance trajectory, because the variable thrust direction of the powered subarc is replaced with a constant thrust direction, the optimal control problem degenerates into a mathematical programming problem with a relatively small number of degrees of freedom, more precisely: three for case (i), time-to-rendezvous free, and two for case (ii), time-to-rendezvous given. In particular, we consider the rendezvous between the Space Shuttle (chaser) and the International Space Station (target). Once a given initial distance SS-to-ISS is preselected, the present work supplies not only the best initial conditions for the rendezvous trajectory, but simultaneously the corresponding final conditions for the ascent trajectory. In Part B, an analytical solution of the Clohessy-Wiltshire equations is presented (i) neglecting the change of the spacecraft mass due to fuel consumption and (ii) assuming that the thrust is finite, that is, the trajectory includes powered subarcs flown with maximum thrust and coasting subarcs flown with zero thrust. Then, employing the analytical solution found, we study the rendezvous problem under the assumption that the initial separation coordinates and initial separation velocities are free except for the requirement that the initial chaser-to-target distance is given. The main contribution of Part B is the development of analytical solutions for the powered subarcs, an important extension of the analytical solutions already available for the coasting subarcs. One consequence is that the entire optimal trajectory can be described analytically. Another consequence is that the optimal control problems degenerate into mathematical programming problems. A further consequence is that, vis-a-vis the optimal control formulation, the mathematical programming formulation reduces the CPU time by a factor of order 1000.
Key words: Space trajectories, rendezvous, optimization, guidance, optimal control, calculus of variations.
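The coasting-subarc solution mentioned in Part B has a standard closed form. A minimal sketch of the Clohessy-Wiltshire state propagation (the mean motion value is an assumed ISS-like figure):

```python
import numpy as np

# Closed-form solution of the coasting (zero-thrust) Clohessy-Wiltshire
# equations in the target-centered rotating frame; n is the target's mean motion.
def cw_state(t, n, x0, y0, z0, vx0, vy0, vz0):
    s, c = np.sin(n * t), np.cos(n * t)
    x  = (4 - 3*c)*x0 + (s/n)*vx0 + (2/n)*(1 - c)*vy0
    y  = 6*(s - n*t)*x0 + y0 + (2/n)*(c - 1)*vx0 + (1/n)*(4*s - 3*n*t)*vy0
    z  = z0*c + (vz0/n)*s
    vx = 3*n*s*x0 + c*vx0 + 2*s*vy0
    vy = 6*n*(c - 1)*x0 - 2*s*vx0 + (4*c - 3)*vy0
    vz = -n*z0*s + vz0*c
    return np.array([x, y, z, vx, vy, vz])

n = 0.00113                                   # approximate ISS mean motion, rad/s
state0 = (100.0, -200.0, 30.0, 0.1, 0.0, -0.05)
print(cw_state(0.0, n, *state0))              # t = 0 recovers the initial state
```

Because the propagation is analytic, rendezvous conditions become algebraic equations in the unknown initial state, which is what lets the optimal control problem collapse into mathematical programming.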

  17. An analytical-numerical comprehensive method for optimizing the fringing magnetic field

    International Nuclear Information System (INIS)

    Xiao Meiqin; Mao Naifeng

    1991-01-01

The criterion for optimizing the fringing magnetic field is discussed, and an analytical-numerical comprehensive method for realizing the optimization is introduced. The method consists of two parts: the analytical part calculates the field of the shims, which correct the fringing magnetic field, using the uniform magnetization method; the numerical part carries out the full calculation of the field distribution by solving the equation for the magnetic vector potential A within a region covered by arbitrary triangular meshes, with the aid of the finite difference method and the successive over-relaxation method. On the basis of this method, the optimization of the fringing magnetic field for a large-scale electromagnetic isotope separator is completed.
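The successive over-relaxation step can be illustrated on a scalar toy version of the field problem, Laplace's equation on a small square grid with fixed boundary values (a 2-D stand-in for the vector-potential relaxation, on a regular rather than triangular mesh):

```python
import numpy as np

# SOR solve of Laplace's equation on an N x N grid; one boundary side is held
# at potential 1, the other three at 0 (Dirichlet conditions).
N, omega, sweeps = 30, 1.8, 500
A = np.zeros((N, N))
A[0, :] = 1.0

for _ in range(sweeps):
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            gs = 0.25 * (A[i+1, j] + A[i-1, j] + A[i, j+1] + A[i, j-1])
            A[i, j] += omega * (gs - A[i, j])   # over-relaxed Gauss-Seidel update

print(round(A[N // 2, N // 2], 3))   # interior potential near the center (~0.25 by symmetry)
```

With omega near its optimal value (about 2/(1 + sin(pi/N)) for this grid), SOR converges dramatically faster than plain Gauss-Seidel.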

  18. An analytical optimization method for electric propulsion orbit transfer vehicles

    International Nuclear Information System (INIS)

    Oleson, S.R.

    1993-01-01

    Due to electric propulsion's inherent propellant mass savings over chemical propulsion, electric propulsion orbit transfer vehicles (EPOTVs) are a highly efficient mode of orbit transfer. When selecting an electric propulsion device (ion, MPD, or arcjet) and propellant for a particular mission, it is preferable to use quick, analytical system optimization methods instead of time intensive numerical integration methods. It is also of interest to determine each thruster's optimal operating characteristics for a specific mission. Analytical expressions are derived which determine the optimal specific impulse (Isp) for each type of electric thruster to maximize payload fraction for a desired thrusting time. These expressions take into account the variation of thruster efficiency with specific impulse. Verification of the method is made with representative electric propulsion values on a LEO-to-GEO mission. Application of the method to specific missions is discussed
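The trade the record optimizes, higher Isp saves propellant but demands a heavier powerplant, can be sketched with a Stuhlinger-style payload-fraction model (a simplified stand-in for the record's expressions; all numbers are illustrative assumptions, and efficiency is held constant rather than Isp-dependent):

```python
import numpy as np
from scipy.optimize import minimize_scalar

dv = 4700.0          # representative low-thrust LEO-to-GEO delta-v, m/s
t = 180 * 86400.0    # assumed thrusting time: 180 days, in s
alpha = 0.03         # assumed powerplant specific mass, kg/W
eta = 0.6            # assumed constant thruster efficiency

def neg_payload_fraction(ve):
    mp = 1.0 - np.exp(-dv / ve)            # propellant fraction (rocket equation)
    power = mp * ve**2 / (2.0 * eta * t)   # jet power per unit initial mass, W/kg
    return -(1.0 - mp - alpha * power)     # negative payload fraction

res = minimize_scalar(neg_payload_fraction, bounds=(2e3, 2e5), method='bounded')
print(round(res.x / 9.81))                 # optimal Isp in seconds for these assumptions
```

Pushing Isp past this optimum keeps shrinking the propellant load, but the extra powerplant mass more than cancels the gain; the record's analytical expressions capture the same balance with efficiency varying with Isp.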

  19. Optimization of offshore wind turbine support structures using analytical gradient-based method

    OpenAIRE

    Chew, Kok Hon; Tai, Kang; Ng, E.Y.K.; Muskulus, Michael

    2015-01-01

    Design optimization of the offshore wind turbine support structure is an expensive task; due to the highly-constrained, non-convex and non-linear nature of the design problem. This report presents an analytical gradient-based method to solve this problem in an efficient and effective way. The design sensitivities of the objective and constraint functions are evaluated analytically while the optimization of the structure is performed, subject to sizing, eigenfrequency, extreme load an...

  20. Analytic solution to variance optimization with no short positions

    Science.gov (United States)

    Kondor, Imre; Papp, Gábor; Caccioli, Fabio

    2017-12-01

We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric
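A numerical counterpart of the problem treated analytically above: minimum-variance weights under a no-short-selling constraint, on a sample covariance estimated from synthetic returns (independent, non-identically distributed assets, as in the record; N/T is chosen close to the noisy regime the paper studies):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Sample covariance from T observations of N independent assets with
# heterogeneous volatilities.
N, T = 40, 50
returns = rng.normal(0.0, 1.0, (T, N)) * rng.uniform(0.5, 2.0, N)
C = np.cov(returns, rowvar=False)

# Minimize w' C w subject to sum(w) = 1 and w >= 0 (the short-selling ban).
res = minimize(lambda w: w @ C @ w, np.full(N, 1.0 / N),
               bounds=[(0.0, None)] * N,
               constraints={'type': 'eq', 'fun': lambda w: w.sum() - 1.0})
print(res.success, abs(res.x.sum() - 1.0) < 1e-6)
```

In this regime the constraint typically pins a finite fraction of the weights exactly to zero, the condensation effect whose statistics the replica calculation characterizes.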

  1. The application of analytical methods to the study of Pareto - optimal control systems

    Directory of Open Access Journals (Sweden)

    I. K. Romanova

    2014-01-01

Full Text Available The subject of this article is multicriteria optimization methods and their application to the parametric synthesis of double-circuit control systems when the individual criteria are inconsistent. The basis for solving multicriteria problems is a fundamental principle of multicriteria choice: the Edgeworth-Pareto principle. Obtaining Pareto-optimal variants in the presence of inconsistent individual criteria does not amount to reaching a final decision; the set of such variants is only offered to the designer (the decision maker). An important issue with traditional numerical methods is their computational cost. Examples are methods that probe the parameter space, including those using uniform grids and uniformly distributed sequences; computer approximation of the Pareto boundary is a particularly demanding computational task. The purpose of this work is to develop fairly simple methods of searching for Pareto-optimal solutions for the case where the criteria are given in analytical form. The proposed solution is based on studying the properties of the analytical dependences of the criteria and covers a case not treated in the literature so far, namely the problem topology in which the indifference curves (level lines) do not touch. It is shown that compromise solutions can be identified for such problems. It is proposed to use the angular position of the antigradient to the indifference curves in the parameter space, relative to the coordinate axes. Propositions are formulated on the comonotonicity and contramonotonicity characteristics and on the angular characteristics of the antigradient for determining Pareto-optimal solutions.
The general calculation algorithm is as follows: determine the range of permissible parameter values; investigate the comonotonicity and contramonotonicity properties; construct the level lines (indifference curves); and determine the type of tangency: one-sided (the problem is not strictly multicriteria) or two-sided (the problem relates to the Pareto
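The Edgeworth-Pareto selection the article formalizes analytically can be illustrated with a brute-force numerical baseline: extracting the Pareto-optimal subset of a parameter grid for two analytic criteria (the quadratic criteria below are toy examples, not the article's):

```python
import numpy as np

# Two criteria to minimize, with distinct minimizers at (1, 0) and (0, 1);
# for these quadratics the Pareto set is the segment p1 + p2 = 1.
def f1(p1, p2):
    return (p1 - 1.0) ** 2 + p2 ** 2

def f2(p1, p2):
    return p1 ** 2 + (p2 - 1.0) ** 2

g = np.linspace(0.0, 1.0, 51)
P1, P2 = np.meshgrid(g, g)
params = np.column_stack([P1.ravel(), P2.ravel()])
F = np.column_stack([f1(P1, P2).ravel(), f2(P1, P2).ravel()])

# Point i is Pareto-optimal iff no point is <= in both criteria and < in one.
pareto = np.ones(len(F), dtype=bool)
for i in range(len(F)):
    dominators = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
    if dominators.any():
        pareto[i] = False

print(int(pareto.sum()), "Pareto-optimal grid points")
```

The analytical approach of the article aims to replace exactly this kind of expensive exhaustive dominance check with conditions on the criteria's gradients.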

  2. Optimization of hot water transport and distribution networks by analytical method: OPTAL program

    International Nuclear Information System (INIS)

    Barreau, Alain; Caizergues, Robert; Moret-Bailly, Jean

    1977-06-01

This report presents optimization studies of a hot water transport and distribution network by minimizing its operating cost. Analytical optimization is used: Lagrange's method of undetermined multipliers. The optimum diameter of each pipe is calculated for minimum network operating cost. The characteristics of the computer program used for the calculations, OPTAL, are given in this report. An example network with 52 branches and 27 customers is calculated and described. Results are discussed.
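The kernel of such pipe-sizing optimization can be sketched for a single branch before any network-level Lagrange coupling (the cost model and constants below are illustrative assumptions, not from the report):

```python
from scipy.optimize import minimize_scalar

# Per-pipe annual cost = capital + pumping:
#   C(d) = a*d*L + b*L*Q**3 / d**5        (friction head loss ~ Q^2 / d^5)
# Setting dC/dd = 0 gives the closed form  d* = (5*b*Q**3 / a)**(1/6).
a, b = 120.0, 2.0e4          # assumed capital and pumping cost coefficients
L, Q = 1.0, 0.05             # pipe length (normalized) and flow rate, m^3/s

d_star = (5.0 * b * Q**3 / a) ** (1.0 / 6.0)

cost = lambda d: a * d * L + b * L * Q**3 / d**5
num = minimize_scalar(cost, bounds=(1e-3, 2.0), method='bounded')
print(round(d_star, 4), round(num.x, 4))   # analytic and numeric optima agree
```

In a network, the branch flows are coupled through pressure and demand constraints, which is where the undetermined multipliers of the report's method come in.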

  3. Analytical Tools to Improve Optimization Procedures for Lateral Flow Assays

    Directory of Open Access Journals (Sweden)

    Helen V. Hsieh

    2017-05-01

Full Text Available Immunochromatographic or lateral flow assays (LFAs) are inexpensive, easy-to-use, point-of-care medical diagnostic tests that are found in arenas ranging from a doctor's office in Manhattan to a rural medical clinic in low-resource settings. The simplicity of the LFA itself belies the complex task of optimization required to make the test sensitive, rapid and easy to use. Currently, manufacturers develop LFAs by empirical optimization of material components (e.g., analytical membranes, conjugate pads and sample pads), biological reagents (e.g., antibodies, blocking reagents and buffers) and the design of delivery geometry. In this paper, we review conventional optimization and then focus on the latter, outlining analytical tools, such as dynamic light scattering and optical biosensors, as well as methods, such as microfluidic flow design and mechanistic models. We are applying these tools to find non-obvious optima of lateral flow assays for improved sensitivity, specificity and manufacturing robustness.

  4. Multi-objective analytical model for optimal sizing of stand-alone photovoltaic water pumping systems

    International Nuclear Information System (INIS)

    Olcan, Ceyda

    2015-01-01

Highlights: • An analytical optimal sizing model is proposed for PV water pumping systems. • The objectives are chosen as deficiency of power supply and life-cycle costs. • The crop water requirements are estimated for a citrus tree yard in Antalya. • The optimal tilt angles are calculated for fixed, seasonal and monthly changes. • The sizing results showed the validity of the proposed analytical model. - Abstract: Stand-alone photovoltaic (PV) water pumping systems effectively use solar energy for irrigation purposes in remote areas. However, the random variability and unpredictability of solar energy hinders the penetration of PV implementations and complicates the system design. An optimal sizing of these systems proves to be essential. This paper recommends a techno-economic optimization model to determine optimally the capacity of the components of a PV water pumping system using a water storage tank. The proposed model is developed with respect to reliability and cost indicators, namely the deficiency of power supply probability and life-cycle costs, respectively. The novelty is that the proposed optimization model is analytically defined for the two objectives and is able to find a compromise solution. The sizing of a stand-alone PV water pumping system comprises a detailed analysis of crop water requirements and optimal tilt angles. Besides the necessity of long solar radiation and temperature time series, accurate forecasts of water supply needs have to be determined. The calculation of the optimal tilt angle for yearly, seasonal and monthly frequencies results in higher system efficiency. It is, therefore, suggested to change the tilt angle regularly in order to maximize solar energy output. The proposed optimal sizing model incorporates all these improvements and can accomplish a comprehensive optimization of PV water pumping systems. A case study is conducted considering the irrigation of a citrus tree yard located in Antalya, Turkey.

  5. Analytical optimization of interior PCM for energy storage in a lightweight passive solar room

    International Nuclear Information System (INIS)

    Xiao Wei; Wang Xin; Zhang Yinping

    2009-01-01

    Lightweight envelopes are widely used in modern buildings but they lack sufficient thermal capacity for passive solar utilization. An attractive solution to increase the building thermal capacity is to incorporate phase change material (PCM) into the building envelope. In this paper, a simplified theoretical model is established to optimize an interior PCM for energy storage in a lightweight passive solar room. Analytical equations are presented to calculate the optimal phase change temperature and the total amount of latent heat capacity and to estimate the benefit of the interior PCM for energy storage. Further, as an example, the analytical optimization is applied to the interior PCM panels in a direct-gain room with realistic outdoor climatic conditions of Beijing. The analytical results agree well with the numerical results. The analytical results show that: (1) the optimal phase change temperature depends on the average indoor air temperature and the radiation absorbed by the PCM panels; (2) the interior PCM has little effect on the average indoor air temperature; and (3) the amplitude of the indoor air temperature fluctuation depends on the product of the surface heat transfer coefficient h_in and the area A of the PCM panels in a lightweight passive solar room.
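    Result (1) can be sketched as a simple surface energy balance: if the PCM surface must on average absorb a radiative flux q while exchanging heat with room air through a film coefficient h_in, its melting point should sit above the average air temperature by roughly q/h_in. This functional form and the numbers below are illustrative assumptions, not the paper's exact derivation:

```python
def optimal_pcm_temperature(t_air_avg: float, q_abs_avg: float, h_in: float) -> float:
    # assumed form T_opt = T_air_avg + q_abs_avg / h_in
    # units: deg C, W/m^2, W/(m^2 K)
    return t_air_avg + q_abs_avg / h_in

# e.g. 20 degC average indoor air, 40 W/m^2 absorbed radiation, h_in = 8 W/(m^2 K)
print(optimal_pcm_temperature(20.0, 40.0, 8.0))
```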

  6. Analytical approaches to optimizing system "Semiconductor converter-electric drive complex"

    Science.gov (United States)

    Kormilicin, N. V.; Zhuravlev, A. M.; Khayatov, E. S.

    2018-03-01

    In the electric drives of the machine-building industry, the problem of optimizing the drive in terms of mass-size indicators is acute. The article offers analytical methods that ensure the minimization of the mass of a multiphase semiconductor converter. In multiphase electric drives, the phase current waveform that makes the best use of the active materials of the "semiconductor converter-electric drive complex" differs from the sinusoidal form. It is shown that under certain restrictions on the phase current form, an analytical solution can be obtained. In particular, if one assumes the shape of the phase current to be rectangular, the optimal shape of the control actions will depend on the width of the interpolar gap. In the general case, the proposed algorithm can be used to solve the problem under consideration by numerical methods.

  7. Experimental design and multiple response optimization. Using the desirability function in analytical methods development.

    Science.gov (United States)

    Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C

    2014-06-01

    A review about the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization and modeling with least squares and artificial neural networks are discussed. Most recent analytical applications are presented in the context of analytical methods development, especially in multiple response optimization procedures using the desirability function. Copyright © 2014 Elsevier B.V. All rights reserved.
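    A minimal sketch of the desirability approach the review covers, using the classic Derringer-Suich one-sided desirability and a geometric-mean overall desirability. The response names, limits and values are invented for illustration:

```python
import math

def desirability_larger_is_better(y, low, target, s=1.0):
    # Derringer-Suich one-sided desirability: 0 below `low`, 1 above `target`,
    # a power-law ramp in between (s shapes the ramp)
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** s

def overall_desirability(ds):
    # geometric mean; any individual desirability of 0 vetoes the candidate
    if any(d == 0.0 for d in ds):
        return 0.0
    return math.prod(ds) ** (1.0 / len(ds))

d1 = desirability_larger_is_better(80.0, 50.0, 100.0)  # e.g. resolution response
d2 = desirability_larger_is_better(9.0, 5.0, 10.0)     # e.g. recovery response
print(round(overall_desirability([d1, d2]), 3))
```

    An optimizer (or a grid over the factor space) then searches for the settings that maximize the overall desirability.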

  8. Optimizing multi-pinhole SPECT geometries using an analytical model

    International Nuclear Information System (INIS)

    Rentmeester, M C M; Have, F van der; Beekman, F J

    2007-01-01

    State-of-the-art multi-pinhole SPECT devices allow for sub-mm resolution imaging of radio-molecule distributions in small laboratory animals. The optimization of multi-pinhole and detector geometries using simulations based on ray-tracing or Monte Carlo algorithms is time-consuming, particularly because many system parameters need to be varied. As an efficient alternative we develop a continuous analytical model of a pinhole SPECT system with a stationary detector set-up, which we apply to focused imaging of a mouse. The model assumes that the multi-pinhole collimator and the detector both have the shape of a spherical layer, and uses analytical expressions for effective pinhole diameters, sensitivity and spatial resolution. For fixed fields-of-view, a pinhole-diameter adapting feedback loop allows for the comparison of the system resolution of different systems at equal system sensitivity, and vice versa. The model predicts that (i) for optimal resolution or sensitivity the collimator layer with pinholes should be placed as closely as possible around the animal given a fixed detector layer, (ii) with high-resolution detectors a resolution improvement up to 31% can be achieved compared to optimized systems, (iii) high-resolution detectors can be placed close to the collimator without significant resolution losses, (iv) interestingly, systems with a physical pinhole diameter of 0 mm can have an excellent resolution when high-resolution detectors are used
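    The sensitivity-matched comparison described above can be sketched with the textbook pinhole expressions: on-axis sensitivity g = d²/(16 h²) and a system resolution combining geometric blur with intrinsic detector blur, both referred to object space. The geometry and detector resolutions below are illustrative assumptions, not the paper's system parameters:

```python
import math

def sensitivity(d_eff, h, n_pinholes=1):
    # on-axis sensitivity of a pinhole aperture: g = n * d_e^2 / (16 h^2)
    return n_pinholes * d_eff ** 2 / (16.0 * h ** 2)

def diameter_for_sensitivity(g_target, h, n_pinholes=1):
    # feedback step: invert the sensitivity formula for the pinhole diameter
    return 4.0 * h * math.sqrt(g_target / n_pinholes)

def system_resolution(d_eff, a, b, r_intrinsic):
    # b: source-to-pinhole distance, a: pinhole-to-detector distance (mm)
    geo = d_eff * (a + b) / a          # geometric blur in object space
    intr = r_intrinsic * b / a         # intrinsic blur demagnified to object space
    return math.hypot(geo, intr)

# compare two detectors at equal system sensitivity (illustrative numbers, mm)
b, a, g_target = 30.0, 90.0, 2e-4
d = diameter_for_sensitivity(g_target, b)
res_standard = system_resolution(d, a, b, r_intrinsic=3.2)
res_high = system_resolution(d, a, b, r_intrinsic=0.5)
print(round(d, 2), round(res_standard, 2), round(res_high, 2))
```

    Holding sensitivity fixed while swapping in the higher-resolution detector shows the kind of resolution gain the model quantifies.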

  9. Optimal design of nuclear mechanical dampers with analytical hierarchy process

    International Nuclear Information System (INIS)

    Zou Yuehua; Wen Bo; Xu Hongxiang; Qin Yonglie

    2000-01-01

    An optimal design using the analytic hierarchy process for the nuclear mechanical dampers manufactured at the authors' university is described. By using a fuzzy judgement matrix, consistency was satisfied automatically, without the need for a consistency test. The results obtained by this method have been put into production practice.
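    A minimal sketch of an AHP weighting step with a consistency check (here the standard crisp consistency ratio, rather than the fuzzy-judgement variant the abstract describes). The 3x3 judgement matrix is invented for illustration:

```python
import math

# Saaty's random consistency index for small matrices
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(matrix):
    # geometric-mean approximation of the principal eigenvector
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(matrix, weights):
    n = len(matrix)
    if n < 3:
        return 0.0
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / weights[i] for i in range(n)) / n   # lambda_max estimate
    ci = (lam - n) / (n - 1)
    return ci / RI[n]

A = [[1, 3, 5],
     [1 / 3, 1, 3],
     [1 / 5, 1 / 3, 1]]
w = ahp_weights(A)
print([round(x, 3) for x in w], round(consistency_ratio(A, w), 3))
```

    A consistency ratio below 0.1 is conventionally taken as acceptable; fuzzy AHP replaces the crisp entries with fuzzy numbers so that this check is satisfied by construction.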

  10. Effective factors on optimizing banks’ balance sheet using fuzzy analytical hierarchy process

    Directory of Open Access Journals (Sweden)

    Shoja Rezaei

    2013-11-01

    Full Text Available Every bank seeks methods to optimize its assets and liabilities; thus the main subject is managing the assets and liabilities on the balance sheet, and the main question is which factors enable banks to reach an optimized combination of assets and liabilities at a common level of risk so as to obtain the highest return. This applied case study is dedicated to Refah Bank. The data were collected from the headquarters by a questionnaire, and the weights of the factors affecting the optimization of the bank balance sheet were determined using the fuzzy analytic hierarchy process. The results showed that revenue has the strongest effect on optimization, at 39.5%, followed by the loan-to-deposit ratio, at 0.74%; regarding revenue as a symbol of efficiency in banks, it appears to be the most important factor and goal in the banking industry. Furthermore, banks need to hold some liquidity to respond to customer demand, covering one of the most important risks of banking; using the model and expert views, the importance of this factor in Refah Bank was determined to be 18%.

  11. An analytical method for optimal design of MR valve structures

    International Nuclear Information System (INIS)

    Nguyen, Q H; Choi, S B; Lee, Y S; Han, M S

    2009-01-01

    This paper proposes an analytical methodology for the optimal design of a magnetorheological (MR) valve structure. The MR valve structure is constrained in a specific volume, and the optimization problem identifies the geometric dimensions of the valve structure that maximize the yield stress pressure drop of an MR valve or the yield stress damping force of an MR damper. In this paper, single-coil and two-coil annular MR valve structures are considered. After describing the schematic configuration and operating principle of a typical MR valve and damper, a quasi-static model is derived based on the Bingham model of an MR fluid. The magnetic circuit of the valve and damper is then analyzed by applying Kirchhoff's law and the magnetic flux conservation rule. Based on the quasi-static modeling and magnetic circuit analysis, the optimization problem of the MR valve and damper is built. In order to reduce the computational load, the optimization problem is simplified, and a procedure to obtain the optimal solution of the simplified optimization problem is presented. The optimal solution of the simplified optimization problem of the MR valve structure constrained in a specific volume is then obtained and compared with the solution of the original optimization problem and the optimal solution obtained from the finite element method.
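    The quasi-static Bingham model mentioned above is commonly written as a viscous term plus a field-dependent yield-stress term in a parallel-plate approximation of the annular duct. The sketch below uses that generic form; the coefficient c and all numbers are invented for illustration, not the paper's valve dimensions:

```python
import math

def mr_valve_pressure_drop(eta, q, gap, width, l_total, l_active, tau_y, c=2.5):
    # parallel-plate Bingham approximation of an annular MR valve:
    # viscous (Newtonian) term over the whole duct length plus a
    # yield-stress term over the magnetically active length
    dp_viscous = 12.0 * eta * q * l_total / (width * gap ** 3)
    dp_yield = c * tau_y * l_active / gap
    return dp_viscous + dp_yield

# illustrative SI numbers, not from the paper
eta = 0.092                 # Pa*s, off-state viscosity
q = 4e-5                    # m^3/s, flow rate
gap = 1e-3                  # m, annular gap
width = math.pi * 0.02      # m, circumference at the mean annulus radius
l_total, l_active = 0.02, 0.012   # m
tau_y = 40e3                # Pa, yield stress at magnetic saturation
print(round(mr_valve_pressure_drop(eta, q, gap, width, l_total, l_active, tau_y)))
```

    With these numbers the yield-stress (controllable) term dominates the viscous term, which is the regime a good MR valve design aims for.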

  12. Experimental analytical study on heat pipes

    International Nuclear Information System (INIS)

    Ismail, K.A.R.; Liu, C.Y.; Murcia, N.

    1981-01-01

    An analytical model is developed for optimizing the thickness distribution of the porous material in heat pipes. The method was used to calculate, design and construct heat pipes with internal geometrical changes. Ordinary pipes are also constructed and tested together with the modified ones. The results showed that modified tubes are superior in performance and that the analytical model can predict their performance to within 1.5% precision. (Author) [pt

  13. Optimizing an immersion ESL curriculum using analytic hierarchy process.

    Science.gov (United States)

    Tang, Hui-Wen Vivian

    2011-11-01

    The main purpose of this study is to fill a substantial knowledge gap regarding reaching a uniform group decision in English curriculum design and planning. A comprehensive content-based course criterion model extracted from existing literature and expert opinions was developed. Analytical hierarchy process (AHP) was used to identify the relative importance of course criteria for the purpose of tailoring an optimal one-week immersion English as a second language (ESL) curriculum for elementary school students in a suburban county of Taiwan. The hierarchy model and AHP analysis utilized in the present study will be useful for resolving several important multi-criteria decision-making issues in planning and evaluating ESL programs. This study also offers valuable insights and provides a basis for further research in customizing ESL curriculum models for different student populations with distinct learning needs, goals, and socioeconomic backgrounds. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Analytic model for ultrasound energy receivers and their optimal electric loads II: Experimental validation

    Science.gov (United States)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-10-01

    In this paper, we verify the two optimal electric load concepts based on the zero reflection condition and on the power maximization approach for ultrasound energy receivers. We test a high loss 1-3 composite transducer, and find that the measurements agree very well with the predictions of the analytic model for plate transducers that we have developed previously. Additionally, we also confirm that the power maximization and zero reflection loads are very different when the losses in the receiver are high. Finally, we compare the optimal load predictions by the KLM and the analytic models with frequency dependent attenuation to evaluate the influence of the viscosity.

  15. Analytical design of proportional-integral controllers for the optimal control of first-order processes with operational constraints

    Energy Technology Data Exchange (ETDEWEB)

    Thu, Hien Cao Thi; Lee, Moonyong [Yeungnam University, Gyeongsan (Korea, Republic of)

    2013-12-15

    A novel analytical design method of industrial proportional-integral (PI) controllers was developed for the optimal control of first-order processes with operational constraints. The control objective was to minimize a weighted sum of the controlled variable error and the rate of change in the manipulated variable under the maximum allowable limits in the controlled variable, manipulated variable and the rate of change in the manipulated variable. The constrained optimal servo control problem was converted to an unconstrained optimization to obtain an analytical tuning formula. A practical shortcut procedure for obtaining optimal PI parameters was provided based on graphical analysis of global optimality. The proposed PI controller was found to guarantee global optimum and deal explicitly with the three important operational constraints.
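    The weighted objective described above (controlled-variable error plus rate of change of the manipulated variable, under operational limits) can be evaluated numerically for a candidate PI tuning. The process model, limits and tunings below are assumptions for illustration, not the paper's analytical tuning formula:

```python
def simulate_pi(kp, ki, k=2.0, tau=5.0, dt=0.05, t_end=30.0, sp=1.0,
                u_max=2.0, du_max=0.5, q=1.0, r=0.1):
    # Euler simulation of a first-order process y' = (-y + k*u)/tau under PI
    # control with input magnitude and rate limits; returns the weighted cost
    # J = integral of q*e^2 + r*(du/dt)^2
    y, integ, u_prev, cost = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = sp - y
        integ += e * dt
        u = kp * e + ki * integ
        u = max(-u_max, min(u_max, u))                        # magnitude limit
        du = max(-du_max * dt, min(du_max * dt, u - u_prev))  # rate limit
        u = u_prev + du
        cost += (q * e * e + r * (du / dt) ** 2) * dt
        y += dt * (-y + k * u) / tau
        u_prev = u
    return cost

print(round(simulate_pi(1.0, 0.2), 3))   # a reasonable tuning
print(round(simulate_pi(0.1, 0.01), 3))  # a sluggish tuning, higher cost
```

    Sweeping (kp, ki) over a grid of such evaluations reproduces graphically the kind of constrained optimum the analytical formula locates directly.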

  16. An Analytical Solution for Yaw Maneuver Optimization on the International Space Station and Other Orbiting Space Vehicles

    Science.gov (United States)

    Dobrinskaya, Tatiana

    2015-01-01

    This paper suggests a new method for optimizing yaw maneuvers on the International Space Station (ISS). Yaw rotations are the most common large maneuvers on the ISS, often used for docking and undocking operations as well as for other activities. When maneuver optimization is used, large maneuvers, which were previously performed on thrusters, can be performed either using control moment gyroscopes (CMG) or with significantly reduced thruster firings. Maneuver optimization helps to save expensive propellant and reduce structural loads - an important factor for the ISS service life. In addition, optimized maneuvers reduce contamination of critical elements of the vehicle structure, such as solar arrays. This paper presents an analytical solution for optimizing yaw attitude maneuvers. Equations describing the pitch and roll motion needed to counteract the major torques during a yaw maneuver are obtained, and a yaw rate profile is proposed. The paper also describes the physical basis of the suggested optimization approach. In the obtained optimized case, the torques are significantly reduced. This torque reduction was compared to the existing optimization method, which utilizes a computational solution. It was shown that the attitude profiles and the torque reduction match well for these two methods of optimization. Simulations using the ISS flight software showed similar propellant consumption for both methods. The analytical solution proposed in this paper has major benefits with respect to the computational approach. In contrast to the current computational solution, which can only be calculated on the ground, the analytical solution does not require extensive computational resources and can be implemented in the onboard software, thus making maneuver execution automatic. An automatic maneuver significantly simplifies operations and, if necessary, allows a maneuver to be performed without communication with the ground.
It also reduces the probability of command

  17. The analytical approach to optimization of active region structure of quantum dot laser

    International Nuclear Information System (INIS)

    Korenev, V V; Savelyev, A V; Zhukov, A E; Omelchenko, A V; Maximov, M V

    2014-01-01

    Using the analytical approach introduced in our previous papers, we analyse the possibilities of optimizing the size and structure of the active region of semiconductor quantum dot lasers emitting via ground-state optical transitions. It is shown that there are an optimal QD size dispersion and an optimal number of QD layers in the laser active region which allow one to obtain a lasing spectrum of a given width at minimum injection current. The laser efficiency corresponding to the injection current optimized by the cavity length is practically equal to its maximum value.

  18. The analytical approach to optimization of active region structure of quantum dot laser

    Science.gov (United States)

    Korenev, V. V.; Savelyev, A. V.; Zhukov, A. E.; Omelchenko, A. V.; Maximov, M. V.

    2014-10-01

    Using the analytical approach introduced in our previous papers, we analyse the possibilities of optimizing the size and structure of the active region of semiconductor quantum dot lasers emitting via ground-state optical transitions. It is shown that there are an optimal QD size dispersion and an optimal number of QD layers in the laser active region which allow one to obtain a lasing spectrum of a given width at minimum injection current. The laser efficiency corresponding to the injection current optimized by the cavity length is practically equal to its maximum value.

  19. Optimization of turning process through the analytic flank wear modelling

    Science.gov (United States)

    Del Prete, A.; Franchi, R.; De Lorenzis, D.

    2018-05-01

    In the present work, the approach used for the optimization of the process capabilities for Oil&Gas components machining will be described. These components are machined by turning of stainless steel castings workpieces. For this purpose, a proper Design Of Experiments (DOE) plan has been designed and executed: as output of the experimentation, data about tool wear have been collected. The DOE has been designed starting from the cutting speed and feed values recommended by the tool manufacturer; the depth of cut parameter has been held constant. Wear data have been obtained by means of observation of the tool flank wear under an optical microscope: the data acquisition has been carried out at regular intervals of working time. Through statistical and regression analysis, analytical models of the flank wear and the tool life have been obtained. The optimization approach used is a multi-objective optimization, which minimizes the production time and the number of cutting tools used, under a constraint on a defined flank wear level. The technique used to solve the optimization problem is Multi-Objective Particle Swarm Optimization (MOPS). The optimization results, validated by the execution of a further experimental campaign, highlighted the reliability of the work and confirmed the usability of the optimized process parameters and the potential benefit for the company.
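    The tool-life regression step can be sketched with Taylor's classic tool-life equation V·T^n = C fitted by least squares in log space. The (speed, tool life) pairs below are invented to be consistent with n = 0.25 and C = 400, and are not the paper's experimental data:

```python
import math

def fit_taylor(data):
    # least-squares fit of Taylor's equation V * T^n = C in log space:
    # log V = log C - n * log T, so the regression slope is -n
    xs = [math.log(t) for _, t in data]
    ys = [math.log(v) for v, _ in data]
    m = len(data)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope, math.exp(my - slope * mx)   # (n, C)

# hypothetical (cutting speed m/min, tool life min) pairs
data = [(200, 16.0), (250, 6.55), (300, 3.16)]
n_exp, c_const = fit_taylor(data)
print(round(n_exp, 3), round(c_const, 1))
```

    Once n and C are known, tool life at any candidate speed follows as T = (C/V)^(1/n), which is what the multi-objective search trades against production time.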

  20. Optimized Analytical Method to Determine Gallic and Picric Acids in Pyrotechnic Samples by Using HPLC/UV (Reverse Phase)

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-01-01

    A study on the optimization and development of a chromatographic method for the determination of gallic and picric acids in pyrotechnic samples is presented. To this end, both the analytical conditions for HPLC with diode detection and the extraction step for a selected sample were studied. (Author)

  1. Simplified Analytical Method for Optimized Initial Shape Analysis of Self-Anchored Suspension Bridges and Its Verification

    Directory of Open Access Journals (Sweden)

    Myung-Rag Jung

    2015-01-01

    Full Text Available A simplified analytical method providing accurate unstrained lengths of all structural elements is proposed to find the optimized initial state of self-anchored suspension bridges under dead loads. For this, equilibrium equations of the main girder and the main cable system are derived and solved by evaluating the self-weights of cable members using unstrained cable lengths and iteratively updating both the horizontal tension component and the vertical profile of the main cable. Furthermore, to demonstrate the validity of the simplified analytical method, the unstrained element length method (ULM is applied to suspension bridge models based on the unstressed lengths of both cable and frame members calculated from the analytical method. Through numerical examples, it is demonstrated that the proposed analytical method can indeed provide an optimized initial solution by showing that both the simplified method and the nonlinear FE procedure lead to practically identical initial configurations with only localized small bending moment distributions.

  2. Optimization of analytical and pre-analytical conditions for MALDI-TOF-MS human urine protein profiles.

    Science.gov (United States)

    Calvano, C D; Aresta, A; Iacovone, M; De Benedetto, G E; Zambonin, C G; Battaglia, M; Ditonno, P; Rutigliano, M; Bettocchi, C

    2010-03-11

    Protein analysis in biological fluids, such as urine, by means of mass spectrometry (MS) still suffers from insufficient standardization in protocols for sample collection, storage and preparation. In this work, the influence of these variables on the protein profiling of healthy donors' urine performed by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was studied. A screening of various urine sample pre-treatment procedures and different sample deposition approaches on the MALDI target was performed. The influence of urine sample storage time and temperature on spectral profiles was evaluated by means of principal component analysis (PCA). The whole optimized procedure was eventually applied to the MALDI-TOF-MS analysis of human urine samples taken from prostate cancer patients. The best results in terms of the number and abundance of detected ions in the MS spectra were obtained by using home-made microcolumns packed with hydrophilic-lipophilic balance (HLB) resin as the sample pre-treatment method; this procedure was also less expensive and suitable for high-throughput analyses. Afterwards, the spin coating approach for sample deposition on the MALDI target plate was optimized, obtaining homogeneous and reproducible spots. PCA then indicated that low storage temperatures of acidified and centrifuged samples, together with short handling times, allowed reproducible profiles to be obtained without artifact contributions due to experimental conditions. Finally, interesting differences were found by comparing the MALDI-TOF-MS protein profiles of pooled urine samples of healthy donors and prostate cancer patients. The results showed that analytical and pre-analytical variables are crucial for the success of urine analysis and for obtaining meaningful and reproducible data, even if intra-patient variability is very difficult to avoid. It has been shown how pooled urine samples can be an interesting way to simplify the comparison between

  3. Analytic model for ultrasound energy receivers and their optimal electric loads

    Science.gov (United States)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-08-01

    In this paper, we present an analytic model for thickness resonating plate ultrasound energy receivers, which we have derived from the piezoelectric and the wave equations and, in which we have included dielectric, viscosity and acoustic attenuation losses. Afterwards, we explore the optimal electric load predictions by the zero reflection and power maximization approaches present in the literature with different acoustic boundary conditions, and discuss their limitations. To validate our model, we compared our expressions with the KLM model solved numerically with very good agreement. Finally, we discuss the differences between the zero reflection and power maximization optimal electric loads, which start to differ as losses in the receiver increase.
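    A lumped electrical analogy sketches why the two load concepts diverge for a lossy receiver: with a complex source impedance, the power-maximizing load (the complex conjugate) and the reflection-cancelling load (the source impedance itself) deliver different amounts of power, and the gap grows with the reactive (loss-related) part. The impedance value is invented for illustration; this is a circuit-level stand-in, not the paper's acoustic zero-reflection condition:

```python
def delivered_power(z_source: complex, z_load: complex, v_amp: float = 1.0) -> float:
    # average power dissipated in the load of a series source/load circuit
    i = v_amp / (z_source + z_load)
    return 0.5 * abs(i) ** 2 * z_load.real

zs = 50 + 30j                                  # lossy source: reactive part mimics losses
p_conj = delivered_power(zs, zs.conjugate())   # power-maximizing (conjugate) load
p_zref = delivered_power(zs, zs)               # zero-reflection load (Z_L = Z_s)
print(round(p_conj, 6), round(p_zref, 6))
```

    For a purely real source impedance the two loads coincide, which mirrors the paper's finding that the predictions only separate when receiver losses are high.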

  4. Understanding Business Analytics Success and Impact: A Qualitative Study

    Science.gov (United States)

    Parks, Rachida F.; Thambusamy, Ravi

    2017-01-01

    Business analytics is believed to be a huge boon for organizations since it helps offer timely insights over the competition, helps optimize business processes, and helps generate growth and innovation opportunities. As organizations embark on their business analytics initiatives, many strategic questions, such as how to operationalize business…

  5. Analytical studies on optimization of containment design pressure

    International Nuclear Information System (INIS)

    Haware, S.K.; Ghosh, A.K.; Kushwaha, H.S.

    2005-01-01

    optimizing on the size of BOP in order to optimize the containment design pressure. The results of the optimization studies are presented and discussed in the paper. (authors)

  6. Discrete optimization of isolator locations for vibration isolation systems: An analytical and experimental investigation

    Energy Technology Data Exchange (ETDEWEB)

    Ponslet, E.R.; Eldred, M.S. [Sandia National Labs., Albuquerque, NM (United States). Structural Dynamics Dept.

    1996-05-17

    An analytical and experimental study is conducted to investigate the effect of isolator locations on the effectiveness of vibration isolation systems. The study uses isolators with fixed properties and evaluates potential improvements to the isolation system that can be achieved by optimizing isolator locations. Because the available locations for the isolators are discrete in this application, a Genetic Algorithm (GA) is used as the optimization method. The system is modeled in MATLAB™ and coupled with the GA available in the DAKOTA optimization toolkit under development at Sandia National Laboratories. Design constraints dictated by hardware and experimental limitations are implemented through penalty function techniques. A series of GA runs reveal difficulties in the search on this heavily constrained, multimodal, discrete problem. However, the GA runs provide a variety of optimized designs with predicted performance from 30 to 70 times better than a baseline configuration. An alternate approach is also tested on this problem: it uses continuous optimization, followed by rounding of the solution to neighboring discrete configurations. Results show that this approach leads to either infeasible or poor designs. Finally, a number of optimized designs obtained from the GA searches are tested in the laboratory and compared to the baseline design. These experimental results show a 7 to 46 times improvement in vibration isolation over the baseline configuration.
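    A minimal GA over discrete isolator slots in the spirit of the study can be sketched as follows. The fitness function is a toy stand-in for the MATLAB isolation model (it simply penalizes clustered isolators), and all parameters are invented:

```python
import random

def toy_transmissibility(slots):
    # toy surrogate to minimize: rewards isolators that are spread out
    # (a stand-in for the real structural model, which this sketch omits)
    s = sorted(slots)
    spread = s[-1] - s[0]
    gaps = [b - a for a, b in zip(s, s[1:])]
    return 1.0 / (1 + spread) + sum(1.0 / (1 + g) for g in gaps)

def ga(n_slots=20, n_isolators=4, pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    def individual():
        return tuple(sorted(rng.sample(range(n_slots), n_isolators)))
    pop = [individual() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=toy_transmissibility)
        survivors = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            child = set(rng.sample(sorted(set(a) | set(b)), n_isolators))
            if rng.random() < 0.3:                # mutation: relocate one isolator
                child.discard(rng.choice(sorted(child)))
                while len(child) < n_isolators:
                    child.add(rng.randrange(n_slots))
            children.append(tuple(sorted(child)))
        pop = survivors + children
    best = min(pop, key=toy_transmissibility)
    return best, toy_transmissibility(best)

best, score = ga()
print(best, round(score, 3))
```

    Hardware constraints like the paper's would enter as penalty terms added to the fitness, exactly as the abstract describes.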

  7. An analytical model for the vertical electric field distribution and optimization of high voltage REBULF LDMOS

    International Nuclear Information System (INIS)

    Hu Xia-Rong; Lü Rui

    2014-01-01

    In this paper, an analytical model for the vertical electric field distribution and optimization of a high-voltage reduced bulk field (REBULF) lateral double-diffused metal-oxide-semiconductor (LDMOS) transistor is presented. The dependences of the breakdown voltage on the buried n-layer depth, thickness, and doping concentration are discussed in detail. The REBULF criterion and the optimal vertical electric field distribution condition are derived on the basis of the optimization of the electric field distribution. The breakdown voltage of the REBULF LDMOS transistor is always higher than that of a single reduced surface field (RESURF) LDMOS transistor, and both analytical and numerical results show that it is better to bury a thick n-layer deep in the p-substrate. (interdisciplinary physics and related areas of science and technology)

  8. Application of analytical target cascading method in multidisciplinary design optimization of ship conceptual design

    Directory of Open Access Journals (Sweden)

    WANG Jian

    2017-10-01

    Full Text Available [Objectives] Ship conceptual design requires the coordination of many different disciplines for comprehensive optimization, which presents a complicated system design problem affecting several fields of technology. However, the development of overall ship design has been relatively slow compared with the other disciplines. [Methods] A decomposition and coordination strategy for ship design is presented, and on this basis the analytical target cascading (ATC) method is applied to the multidisciplinary design optimization of the conceptual design phase of ships. A tank ship example covering the five disciplines of buoyancy and stability, rapidity, maneuverability, capacity and economy is established to illustrate the analysis process in the present study. [Results] The results demonstrate the stability, convergence and validity of the ATC method in dealing with the complex coupling effects occurring in ship conceptual design. [Conclusions] The proposed method provides an effective basis for the optimization of ship conceptual design.

  9. The application of the analytic hierarchy process (AHP) in uranium mine mining method of the optimal selection

    International Nuclear Information System (INIS)

    Tan Zhongyin; Kuang Zhengping; Qiu Huiyuan

    2014-01-01

    Analytic hierarchy process, AHP, is a combination of qualitative and quantitative, systematic and hierarchical analysis methods. In this article, the basic decision theory of the analytic hierarchy process is applied, with a project in the north Guangdong region as the research object, and a hierarchical analysis model for selecting the optimal in-situ mining method is established. The results show that the AHP model for mining method selection is reliable, that the optimization results conform with the in-situ mining method actually in use, and that the approach has good practicability. (authors)

  10. Optimization Model for Uncertain Statistics Based on an Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Yongchao Hou

    2014-01-01

    Full Text Available Uncertain statistics is a methodology for collecting and interpreting the expert’s experimental data by uncertainty theory. In order to estimate uncertainty distributions, an optimization model based on analytic hierarchy process (AHP and interpolation method is proposed in this paper. In addition, the principle of least squares method is presented to estimate uncertainty distributions with known functional form. Finally, the effectiveness of this method is illustrated by an example.
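    The least-squares step mentioned above can be sketched for the simplest case: fitting a linear uncertainty distribution Φ(x) = (x - a)/(b - a) to expert data given as (value, belief degree) pairs. Since x = a + (b - a)·α, regressing x on α yields a as the intercept and b - a as the slope. The data points are invented for illustration:

```python
def fit_linear_uncertainty(data):
    # least-squares fit of a linear uncertainty distribution on [a, b]:
    # regress the expert's values x on the belief degrees alpha
    n = len(data)
    ma = sum(a for _, a in data) / n
    mx = sum(x for x, _ in data) / n
    slope = sum((a - ma) * (x - mx) for x, a in data) / \
            sum((a - ma) ** 2 for _, a in data)
    intercept = mx - slope * ma
    return intercept, intercept + slope   # (a, b)

# hypothetical expert data: (value, belief degree that the quantity <= value)
data = [(10, 0.1), (20, 0.3), (30, 0.5), (40, 0.7), (50, 0.9)]
a, b = fit_linear_uncertainty(data)
print(round(a, 2), round(b, 2))
```

    Other known functional forms (zigzag, normal) fit the same way once their parameters enter the inverse distribution linearly or after a transformation.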

  11. Optimizing Hadoop Performance for Big Data Analytics in Smart Grid

    Directory of Open Access Journals (Sweden)

    Mukhtaj Khan

    2017-01-01

    Full Text Available The rapid deployment of Phasor Measurement Units (PMUs in power systems globally is leading to Big Data challenges. New high performance computing techniques are now required to process an ever increasing volume of data from PMUs. To that extent the Hadoop framework, an open source implementation of the MapReduce computing model, is gaining momentum for Big Data analytics in smart grid applications. However, Hadoop has over 190 configuration parameters, which can have a significant impact on the performance of the Hadoop framework. This paper presents an Enhanced Parallel Detrended Fluctuation Analysis (EPDFA algorithm for scalable analytics on massive volumes of PMU data. The novel EPDFA algorithm builds on an enhanced Hadoop platform whose configuration parameters are optimized by Gene Expression Programming. Experimental results show that the EPDFA is 29 times faster than the sequential DFA in processing PMU data and 1.87 times faster than a parallel DFA, which utilizes the default Hadoop configuration settings.
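    A plain sequential DFA can be sketched as follows; this is the textbook algorithm, not the EPDFA or its Hadoop/MapReduce parallelization, and the white-noise input (expected scaling exponent near 0.5) serves only as a self-check:

```python
import math
import random

def dfa(series, scales):
    # detrended fluctuation analysis: cumulative profile, per-window linear
    # detrending, RMS fluctuation F(s), then the slope of log F vs log s
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:
        acc += x - mean
        profile.append(acc)
    logs, logf = [], []
    for s in scales:
        var_sum, count = 0.0, 0
        for w in range(len(profile) // s):
            seg = profile[w * s:(w + 1) * s]
            t = range(s)
            mt, ms = (s - 1) / 2.0, sum(seg) / s
            beta = sum((ti - mt) * (yi - ms) for ti, yi in zip(t, seg)) / \
                   sum((ti - mt) ** 2 for ti in t)
            a0 = ms - beta * mt
            var_sum += sum((yi - (a0 + beta * ti)) ** 2 for ti, yi in zip(t, seg))
            count += s
        logs.append(math.log(s))
        logf.append(0.5 * math.log(var_sum / count))
    mx, my = sum(logs) / len(logs), sum(logf) / len(logf)
    return sum((x - mx) * (y - my) for x, y in zip(logs, logf)) / \
           sum((x - mx) ** 2 for x in logs)

rng = random.Random(0)
white = [rng.gauss(0, 1) for _ in range(4096)]
print(round(dfa(white, [8, 16, 32, 64, 128]), 2))
```

    The parallel variants split the per-window detrending (the inner loop) across workers, which is what makes the Hadoop mapping natural for large PMU records.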

  12. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several of the state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, in order to study this temporal network behavior, we need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool, a visual analytics system, for investigating the temporal behavior and optimizing the communication performance of a supercomputer that uses a Dragonfly network. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively supports visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  13. Service Quality of Online Shopping Platforms: A Case-Based Empirical and Analytical Study

    Directory of Open Access Journals (Sweden)

    Tsan-Ming Choi

    2013-01-01

    Full Text Available Customer service is crucially important for online shopping platforms (OSPs) such as eBay and Taobao. Based on well-established service quality instruments and the scenario of a specific case on Taobao, this paper explores the service quality of an OSP with the aim of revealing customer perceptions of the service quality associated with the provided functions and investigating their impacts on customer loyalty. Through an empirical study, this paper finds that the “fulfillment and responsiveness” function is significantly related to customer loyalty. A further analytical study reveals that the optimal service level on the “fulfillment and responsiveness” function for the risk-averse OSP uniquely exists. Moreover, the analytical results prove that (i) if customer loyalty is more positively correlated to the service level, it will lead to a larger optimal service level, and (ii) the optimal service level is independent of the profit target, the source of uncertainty, and the risk preference of the OSP.

  14. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    International Nuclear Information System (INIS)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García; Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D.; Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz

    2015-01-01

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.

  15. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García [Instituto de Astronomía Teórica y Experimental, CONICET-UNC, Laprida 854, X5000BGR, Córdoba (Argentina); Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D. [Consejo Nacional de Investigaciones Científicas y Técnicas, Rivadavia 1917, C1033AAJ Buenos Aires (Argentina); Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz, E-mail: andresnicolas@oac.uncor.edu [Instituto de Astrofísica, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Santiago (Chile)

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
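The PSO loop used for calibration in records 14 and 15 follows the standard velocity and position update. The sketch below is a generic minimizer with textbook inertia and acceleration coefficients, not the settings used for the SAG model; in the calibration setting the objective would be the negative log-likelihood of the model outputs against the observed constraints.

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box via standard global-best particle swarm."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp the move back into the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, `pso(lambda p: sum(x * x for x in p), [(-5.0, 5.0)] * 2)` converges close to the origin.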

  16. Determination of Optimal Opening Scheme for Electromagnetic Loop Networks Based on Fuzzy Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Yang Li

    2016-01-01

    Full Text Available Studying optimization and decision-making for opening electromagnetic loop networks plays an important role in the planning and operation of power grids. First, the basic principle of the fuzzy analytic hierarchy process (FAHP) is introduced, and then an improved FAHP-based scheme evaluation method is proposed for decoupling electromagnetic loop networks based on a set of indicators reflecting the performance of the candidate schemes. The proposed method combines the advantages of the analytic hierarchy process (AHP) and fuzzy comprehensive evaluation. On the one hand, AHP effectively combines qualitative and quantitative analysis to ensure the rationality of the evaluation model; on the other hand, the judgment matrix and qualitative indicators are expressed with trapezoidal fuzzy numbers to make decision-making more realistic. The effectiveness of the proposed method is validated by the application results on the real power system of Liaoning province of China.
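The trapezoidal-fuzzy-number scoring underlying FAHP can be illustrated with a minimal sketch. Graded-mean integration is one common defuzzification rule (the paper does not necessarily use this exact one), and the scheme names, indicator scores, and weights below are invented for illustration.

```python
def defuzzify(tfn):
    """Graded-mean integration of a trapezoidal fuzzy number (a, b, c, d)."""
    a, b, c, d = tfn
    return (a + 2 * b + 2 * c + d) / 6.0

def rank_schemes(scores, weights):
    """Rank candidate opening schemes by weighted defuzzified indicator scores.

    scores: dict mapping scheme name -> list of trapezoidal fuzzy scores,
    weights: indicator weights (e.g. derived from an AHP judgment matrix).
    """
    ranked = []
    for name, tfns in scores.items():
        total = sum(w * defuzzify(t) for w, t in zip(weights, tfns))
        ranked.append((total, name))
    return [name for _, name in sorted(ranked, reverse=True)]
```

For a symmetric trapezoid such as (0, 1, 2, 3) the defuzzified value is its center, 1.5.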

  17. Variable-Field Analytical Ultracentrifugation: I. Time-Optimized Sedimentation Equilibrium

    Science.gov (United States)

    Ma, Jia; Metrick, Michael; Ghirlando, Rodolfo; Zhao, Huaying; Schuck, Peter

    2015-01-01

    Sedimentation equilibrium (SE) analytical ultracentrifugation (AUC) is a gold standard for the rigorous determination of macromolecular buoyant molar masses and the thermodynamic study of reversible interactions in solution. A significant experimental drawback is the long time required to attain SE, which is usually on the order of days. We have developed a method for time-optimized SE (toSE) with defined time-varying centrifugal fields that allow SE to be attained in a significantly (up to 10-fold) shorter time than is usually required. To achieve this, numerical Lamm equation solutions for sedimentation in time-varying fields are computed based on initial estimates of macromolecular transport properties. A parameterized rotor-speed schedule is optimized with the goal of achieving a minimal time to equilibrium while limiting transient sample preconcentration at the base of the solution column. The resulting rotor-speed schedule may include multiple over- and underspeeding phases, balancing the formation of gradients from strong sedimentation fluxes with periods of high diffusional transport. The computation is carried out in a new software program called TOSE, which also facilitates convenient experimental implementation. Further, we extend AUC data analysis to sedimentation processes in such time-varying centrifugal fields. Due to the initially high centrifugal fields in toSE and the resulting strong migration, it is possible to extract sedimentation coefficient distributions from the early data. This can provide better estimates of the size of macromolecular complexes and report on sample homogeneity early on, which may be used to further refine the prediction of the rotor-speed schedule. In this manner, the toSE experiment can be adapted in real time to the system under study, maximizing both the information content and the time efficiency of SE experiments. PMID:26287634

  18. Analytical Model and Optimized Design of Power Transmitting Coil for Inductively Coupled Endoscope Robot.

    Science.gov (United States)

    Ke, Quan; Luo, Weijie; Yan, Guozheng; Yang, Kai

    2016-04-01

    A wireless power transfer system based on weakly inductive coupling makes it possible to provide the endoscope microrobot (EMR) with unlimited power. To facilitate patients' inspection with the EMR system, the diameter of the transmitting coil is enlarged to 69 cm. Due to the large transmitting range, a high quality factor of the Litz-wire transmitting coil is a necessity to ensure that the magnetic field is generated efficiently. Thus, this paper builds an analytical model of the transmitting coil and then optimizes the parameters of the coil by maximizing the quality factor. The lumped model of the transmitting coil includes three parameters: ac resistance, self-inductance, and stray capacitance. Based on the exact two-dimensional solution, an accurate analytical expression for the ac resistance is derived. Several transmitting coils of different specifications are utilized to verify this analytical expression, which is in good agreement with the measured results except for coils with a large number of strands. The quality factor of transmitting coils can then be well predicted with the available analytical expressions for self-inductance and stray capacitance. Owing to the exact estimation of the quality factor, the appropriate number of turns of the transmitting coil is set to 18-40 within the restrictions imposed by the transmitting circuit and human tissue issues. To supply enough energy for the next generation of the EMR equipped with a Ø9.5×10.1 mm receiving coil, the number of turns of the transmitting coil is optimally set to 28, which can transfer a maximum power of 750 mW with a remarkable delivery efficiency of 3.55%.
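The optimization step, picking the number of turns that maximizes the quality factor Q = ωL/R, can be sketched with a toy lumped model. The constants k_l, k_r, k_p and the operating frequency below are illustrative placeholders, not values derived in the paper: inductance grows as N², dc-like resistance as N, and an N³ term stands in for skin and proximity losses, which is what creates an interior optimum.

```python
import math

def quality_factor(n_turns, freq_hz, k_l=2.4e-7, k_r=8.0e-3, k_p=1.02e-5):
    """Q = omega*L/R with L = k_l*N^2 and R = k_r*N + k_p*N^3.

    k_l, k_r, k_p are placeholder constants for illustration only;
    the cubic resistance term models losses that grow with winding size.
    """
    omega = 2 * math.pi * freq_hz
    inductance = k_l * n_turns ** 2
    resistance = k_r * n_turns + k_p * n_turns ** 3
    return omega * inductance / resistance

# Integer search over the feasible turn range given in the abstract.
best_n = max(range(18, 41), key=lambda n: quality_factor(n, 218e3))
```

With these placeholder constants the integer search lands on 28 turns; with real coil parameters the optimum would be recomputed from the paper's analytical expressions.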

  19. Statistical and optimal learning with applications in business analytics

    Science.gov (United States)

    Han, Bin

    Statistical learning is widely used in business analytics to discover structure or exploit patterns from historical data, and build models that capture relationships between an outcome of interest and a set of variables. Optimal learning on the other hand, solves the operational side of the problem, by iterating between decision making and data acquisition/learning. All too often the two problems go hand-in-hand, which exhibit a feedback loop between statistics and optimization. We apply this statistical/optimal learning concept on a context of fundraising marketing campaign problem arising in many non-profit organizations. Many such organizations use direct-mail marketing to cultivate one-time donors and convert them into recurring contributors. Cultivated donors generate much more revenue than new donors, but also lapse with time, making it important to steadily draw in new cultivations. The direct-mail budget is limited, but better-designed mailings can improve success rates without increasing costs. We first apply statistical learning to analyze the effectiveness of several design approaches used in practice, based on a massive dataset covering 8.6 million direct-mail communications with donors to the American Red Cross during 2009-2011. We find evidence that mailed appeals are more effective when they emphasize disaster preparedness and training efforts over post-disaster cleanup. Including small cards that affirm donors' identity as Red Cross supporters is an effective strategy, while including gift items such as address labels is not. Finally, very recent acquisitions are more likely to respond to appeals that ask them to contribute an amount similar to their most recent donation, but this approach has an adverse effect on donors with a longer history. We show via simulation that a simple design strategy based on these insights has potential to improve success rates from 5.4% to 8.1%. 
Given these findings, however, when a new scenario arises, new data need to

  20. Homogenized blocked arcs for multicriteria optimization of radiotherapy: Analytical and numerical solutions

    International Nuclear Information System (INIS)

    Fenwick, John D.; Pardo-Montero, Juan

    2010-01-01

    Purpose: Homogenized blocked arcs are intuitively appealing as basis functions for multicriteria optimization of rotational radiotherapy. Such arcs avoid an organ-at-risk (OAR), spread dose out well over the rest-of-body (ROB), and deliver homogeneous doses to a planning target volume (PTV) using intensity-modulated fluence profiles, obtainable either from closed-form solutions or iterative numerical calculations. Here, the analytic and iterative arcs are compared. Methods: Dose distributions have been calculated for nondivergent beams, both including and excluding scatter, beam penumbra, and attenuation effects, which are left out of the derivation of the analytic arcs. The most straightforward analytic arc is created by truncating the well-known Brahme, Roos, and Lax (BRL) solution, cutting its uniform dose region down from an annulus to a smaller nonconcave region lying beyond the OAR. However, the truncation leaves behind high-dose hot-spots immediately on either side of the OAR, generated by very high BRL fluence levels just beyond the OAR. These hot-spots can be eliminated using alternative analytical solutions "C" and "L", which, respectively, deliver constant and linearly rising fluences in the gap region between the OAR and PTV (before truncation). Results: Measured in terms of PTV dose homogeneity, ROB dose-spread, and OAR avoidance, C solutions generate better arc dose distributions than L when scatter, penumbra, and attenuation are left out of the dose modeling. Including these factors, L becomes the best analytical solution. However, the iterative approach generates better dose distributions than any of the analytical solutions because it can account for and compensate for penumbra and scatter effects. Using the analytical solutions as starting points for the iterative methodology, dose distributions almost as good as those obtained using the conventional iterative approach can be calculated very rapidly. Conclusions: The iterative methodology is

  1. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    Science.gov (United States)

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
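The paper's contribution is an analytical, optimization-free design procedure; as a numerical sanity check, the same constrained problem can be brute-forced. The sketch below simulates an open-loop unstable first-order process dy/dt = a·y + b·u under PI control and keeps only the (Kp, Ki) pairs that respect limits on the manipulated variable and its rate of change; all process parameters, limits, and grid ranges here are assumptions, not the paper's values.

```python
def simulate_pi(kp, ki, a=0.2, b=1.0, setpoint=1.0, dt=0.01, t_end=20.0):
    """Euler simulation of PI control of an unstable process dy/dt = a*y + b*u
    (a > 0). Returns (IAE, max |u|, max |du/dt|)."""
    y = integ = iae = umax = dumax = 0.0
    u_prev = 0.0
    for k in range(int(t_end / dt)):
        e = setpoint - y
        integ += e * dt
        u = kp * e + ki * integ          # two-term PI law
        umax = max(umax, abs(u))
        if k > 0:
            dumax = max(dumax, abs(u - u_prev) / dt)
        u_prev = u
        y += (a * y + b * u) * dt
        iae += abs(e) * dt
    return iae, umax, dumax

def best_pi(u_limit=5.0, du_limit=50.0):
    """Grid search over (kp, ki), keeping only constraint-feasible pairs."""
    best = None
    for kp in [0.5 * i for i in range(1, 21)]:
        for ki in [0.2 * j for j in range(1, 21)]:
            iae, umax, dumax = simulate_pi(kp, ki)
            if umax <= u_limit and dumax <= du_limit and \
                    (best is None or iae < best[0]):
                best = (iae, kp, ki)
    return best
```

The grid-search winner is only a reference point; the point of the paper is that the analytical relations reach a comparable design without any such search.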

  2. An analytical approach for optimizing the leaf design of a multi-leaf collimator in a linear accelerator

    International Nuclear Information System (INIS)

    Topolnjak, R; Heide, U A van der

    2008-01-01

    In this study, we present an analytical approach for optimizing the leaf design of a multi-leaf collimator (MLC) in a linear accelerator. Because leaf designs vary between vendors, our goal is to characterize and quantify the effects of the different compromises that have to be made between performance parameters. Subsequently, an optimal leaf design is determined for an earlier proposed six-bank MLC, which combines a high-resolution field-shaping ability with a large field size. To this end, a model of the linac is created that includes the following parameters: the source size, the maximum field size, the distance between source and isocenter, and the leaf's design parameters. First, the optimal radius of the leaf tip was found. This optimum was defined by the requirement that the fluence intensity should fall from 80% of the maximum value to 20% in a minimal distance, defining the width of the fluence penumbra. A second requirement was that this penumbra width should be constant when a leaf moves from one side of the field to the other. The geometric, transmission and total penumbra widths (80-20%) were calculated depending on the design parameters. The analytical model is in agreement with Elekta, Varian and Siemens collimator designs. For leaves thinner than 4 cm, the transmission penumbra becomes dominant, and for leaves close to the source the geometric penumbra plays a role. Finally, by choosing leaf thicknesses of 3.5 cm, 4 cm and 5 cm from the lowest to the highest bank, respectively, an optimal leaf design for a six-bank MLC is achieved.
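The geometric component of the penumbra follows from similar triangles through the finite source: a source of width s, collimated by a leaf edge at distance d from the source, projects a penumbra of width s·(D − d)/d at a plane at distance D. A one-line sketch (the distances in mm are illustrative, not taken from the paper):

```python
def geometric_penumbra(source_size_mm, source_to_leaf_mm, source_to_iso_mm=1000.0):
    """Geometric penumbra width at the isocenter plane, by similar triangles."""
    return source_size_mm * (source_to_iso_mm - source_to_leaf_mm) / source_to_leaf_mm
```

Consistent with the text, moving a leaf bank closer to the source (smaller d) widens the geometric penumbra, which is why it "plays a role" for the banks nearest the source.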

  3. Building a Model for Optimization of Informational-Analytical Ensuring of Cost Management of Industrial Enterprise

    Directory of Open Access Journals (Sweden)

    Lisovskyi Ihor V

    2015-09-01

    Full Text Available The article examines the peculiarities of building a model of informational-analytical optimization of cost management. The main sources of information, together with approaches to cost management of industrial enterprises, have been identified. In order to ensure the successful operation of an enterprise under growing manifestations of crisis, the enterprise management system, along with the most important elements necessary for its normal functioning, should be continuously improved. One of these important elements is the costs of the enterprise. Accordingly, for effective cost management, the most appropriate management approaches and tools must be used, based on proper informational-analytical support of all processes. The article proposes an optimization model of informational-analytical ensuring of cost management of industrial enterprises, which will serve as a ground for more informed and economically feasible solutions. A combination of best practices and tools to improve the efficiency of enterprise management has been proposed.

  4. A Novel Analytical Technique for Optimal Allocation of Capacitors in Radial Distribution Systems

    Directory of Open Access Journals (Sweden)

    Sarfaraz Nawaz

    2017-07-01

    Full Text Available In this paper, a novel analytical technique is proposed to determine the optimal size and location of shunt capacitor units in radial distribution systems. An objective function is formulated to reduce real power loss, improve the voltage profile and increase annual cost savings. A new constant, the Loss Sensitivity Constant (LSC), is proposed here. The value of the LSC decides the location and size of candidate buses. The technique is demonstrated on the IEEE 33-bus system at different load levels and on the 130-bus distribution system of Jamawa Ramgarh village, Jaipur city. The obtained results are compared with the latest optimization techniques to show the effectiveness and robustness of the proposed technique.
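The exact definition of the proposed Loss Sensitivity Constant is specific to the paper; a closely related, widely used quantity is the loss sensitivity factor ∂P_loss/∂Q = 2·Q_eff·R/V², which ranks buses by how much reactive compensation at a bus would reduce the losses in the branch feeding it. A sketch with invented bus data:

```python
def loss_sensitivity_ranking(buses):
    """Rank candidate buses for shunt capacitor placement.

    buses: list of (name, r_ohm, q_eff_kvar, v_volts) for the branch
    feeding each bus; returns names sorted by LSF = 2*Q*R/V^2, largest
    (most loss-sensitive) first. Data here are illustrative only.
    """
    scored = [(2.0 * q * r / v ** 2, name) for name, r, q, v in buses]
    return [name for _, name in sorted(scored, reverse=True)]
```

Buses with a high effective reactive load behind a resistive branch, and a depressed voltage, float to the top of the candidate list.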

  5. An analytic approach to optimize tidal turbine fields

    Science.gov (United States)

    Pelz, P.; Metzler, M.

    2013-12-01

    Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources are being developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to generate electrical energy. Since the available power of a hydrokinetic turbine is proportional to the projected cross-sectional area, fields of turbines are installed to scale up shaft power. Each hydrokinetic turbine of a field can be considered as a disk actuator. In [1], the first author derives the optimal operation point for hydropower in an open channel. The present paper concerns a zero-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation and the momentum balance, an analytical approach is taken to calculate the coefficient of performance of hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.
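In the free-stream limit (no channel confinement or bypass blockage), the disk-actuator model reduces to the classical result C_p = 4a(1 − a)², maximized at axial induction factor a = 1/3 with C_p = 16/27 (the Betz limit); the paper's open-channel analysis generalizes this. A quick numerical check:

```python
def power_coefficient(a):
    """C_p = 4a(1-a)^2 for a free-stream actuator disk with induction factor a."""
    return 4.0 * a * (1.0 - a) ** 2

# Scan the physically meaningful range 0 < a < 0.5 for the optimum.
best_a = max((i / 1000.0 for i in range(1, 500)), key=power_coefficient)
```

The scan recovers a ≈ 1/3 and C_p ≈ 0.593; with blockage and a free-surface head drop, the attainable coefficient changes, which is the regime the paper analyzes.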

  6. Developing optimal search strategies for detecting clinically sound prognostic studies in MEDLINE: an analytic survey

    Directory of Open Access Journals (Sweden)

    Haynes R Brian

    2004-06-01

    Full Text Available Abstract Background Clinical end users of MEDLINE have a difficult time retrieving articles that are both scientifically sound and directly relevant to clinical practice. Search filters have been developed to assist end users in increasing the success of their searches. Many filters have been developed for the literature on therapy and reviews, but little has been done in the area of prognosis. The objective of this study is to determine how well various methodologic textwords, Medical Subject Headings, and their Boolean combinations retrieve methodologically sound literature on the prognosis of health disorders in MEDLINE. Methods An analytic survey was conducted, comparing hand searches of journals with retrievals from MEDLINE for candidate search terms and combinations. Six research assistants read all issues of 161 journals for the publishing year 2000. All articles were rated using purpose and quality indicators and categorized into clinically relevant original studies, review articles, general papers, or case reports. The original and review articles were then categorized as 'pass' or 'fail' for methodologic rigor in the areas of prognosis and other clinical topics. Candidate search strategies were developed for prognosis and run in MEDLINE, and the retrievals were compared with the hand-search data. The sensitivity, specificity, precision, and accuracy of the search strategies were calculated. Results 12% of studies classified as prognosis met basic criteria for scientific merit for testing clinical applications. Combinations of terms reached peak sensitivities of 90%. Compared with the best single term, multiple terms increased sensitivity for sound studies by 25.2% (absolute increase), and increased specificity, but by a much smaller amount (1.1%), when sensitivity was maximized. Combining terms to optimize both sensitivity and specificity achieved sensitivities and specificities of approximately 83% for each.
Conclusion Empirically derived
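The four reported filter statistics come straight from the 2×2 table of filter retrieval versus the hand-search gold standard; the counts below are invented for illustration, not taken from the study:

```python
def retrieval_metrics(tp, fp, fn, tn):
    """Performance of a search filter against a hand-search gold standard.

    tp: sound studies retrieved, fp: unsound/irrelevant retrieved,
    fn: sound studies missed, tn: unsound/irrelevant correctly excluded.
    """
    return {
        "sensitivity": tp / (tp + fn),           # fraction of sound studies found
        "specificity": tn / (tn + fp),           # fraction of non-targets excluded
        "precision": tp / (tp + fp),             # fraction of retrievals that are sound
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

Broad term combinations raise sensitivity at the cost of precision, which is exactly the trade-off the survey quantifies.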

  7. A Numerical-Analytical Approach Based on Canonical Transformations for Computing Optimal Low-Thrust Transfers

    Science.gov (United States)

    da Silva Fernandes, S.; das Chagas Carvalho, F.; Bateli Romão, J. V.

    2018-04-01

    A numerical-analytical procedure based on infinitesimal canonical transformations is developed for computing optimal time-fixed low-thrust limited power transfers (no rendezvous) between coplanar orbits with small eccentricities in an inverse-square force field. The optimization problem is formulated as a Mayer problem with a set of non-singular orbital elements as state variables. Second order terms in eccentricity are considered in the development of the maximum Hamiltonian describing the optimal trajectories. The two-point boundary value problem of going from an initial orbit to a final orbit is solved by means of a two-stage Newton-Raphson algorithm which uses an infinitesimal canonical transformation. Numerical results are presented for some transfers between circular orbits with moderate radius ratio, including a preliminary analysis of Earth-Mars and Earth-Venus missions.

  8. Optimization of an instrumental neutron activation analysis method by means of the 2^k experimental design technique aiming at the validation of analytical procedures

    International Nuclear Information System (INIS)

    Petroni, Robson; Moreira, Edson G.

    2013-01-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2^k experimental design was applied to evaluate the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample-to-detector distance. The standard multi-element concentration (comparator standard), mass of the sample and irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted in the validation procedure of INAA methods at the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)
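In a full 2^k design, each main effect and interaction is a signed contrast: average the responses at the +1 level of the corresponding product of factor columns and subtract the average at the −1 level. A generic sketch (the responses below are synthetic, not the INAA data):

```python
def factorial_effects(results):
    """Main and interaction effects from a full 2^k factorial design.

    results maps a tuple of factor levels (each +1 or -1) to the measured
    response; the returned dict maps tuples of factor indices to effects,
    e.g. (0,) is the main effect of factor 0 and (0, 1) an interaction.
    """
    k = len(next(iter(results)))
    effects = {}
    for subset in range(1, 2 ** k):        # every non-empty factor subset
        contrast = 0.0
        for levels, y in results.items():
            sign = 1
            for d in range(k):
                if subset >> d & 1:
                    sign *= levels[d]      # product of the subset's levels
            contrast += sign * y
        key = tuple(d for d in range(k) if subset >> d & 1)
        effects[key] = contrast / (len(results) / 2)
    return effects
```

For a 2² design generated by y = 10 + 3·x1 − 2·x2 + x1·x2, the recovered effects are twice the model coefficients: 6, −4, and 2.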

  9. Analytical insights into optimality and resonance in fish swimming

    Science.gov (United States)

    Kohannim, Saba; Iwasaki, Tetsuya

    2014-01-01

    This paper provides analytical insights into the hypothesis that fish exploit resonance to reduce the mechanical cost of swimming. A simple body–fluid fish model, representing carangiform locomotion, is developed. Steady swimming at various speeds is analysed using optimal gait theory by minimizing bending moment over tail movements and stiffness, and the results are shown to match with data from observed swimming. Our analysis indicates the following: thrust–drag balance leads to the Strouhal number being predetermined based on the drag coefficient and the ratio of wetted body area to cross-sectional area of accelerated fluid. Muscle tension is reduced when undulation frequency matches resonance frequency, which maximizes the ratio of tail-tip velocity to bending moment. Finally, hydrodynamic resonance determines tail-beat frequency, whereas muscle stiffness is actively adjusted, so that overall body–fluid resonance is exploited. PMID:24430125
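Two quantities from this abstract reduce to one-line formulas: the Strouhal number St = f·A/U relating tail-beat frequency, tail-tip excursion, and swimming speed, and, under a simple undamped-oscillator analogy (an assumption for illustration, not the paper's full body-fluid model), the stiffness that places the resonance at a given tail-beat frequency:

```python
import math

def strouhal(freq_hz, amplitude_m, speed_m_s):
    """St = f*A/U, with A the peak-to-peak tail-tip excursion."""
    return freq_hz * amplitude_m / speed_m_s

def stiffness_for_resonance(freq_hz, mass_kg):
    """Stiffness k placing the undamped natural frequency sqrt(k/m)/(2*pi)
    at freq_hz; a spring-mass caricature of actively tuned body stiffness."""
    return mass_kg * (2.0 * math.pi * freq_hz) ** 2
```

Observed cruising fish cluster around St ≈ 0.2-0.4, which is the regime the thrust-drag balance in the paper predicts.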

  10. Autonomic urban traffic optimization using data analytics

    OpenAIRE

    Garriga Porqueras, Albert

    2017-01-01

    This work focuses on a smart mobility use case where real-time data analytics on traffic measures is used to improve mobility in the event of a perturbation causing congestion in a local urban area. The data monitored is analysed in order to identify patterns that are used to properly reconfigure traffic lights. The monitoring and data analytics infrastructure is based on a hierarchical distributed architecture that allows placing data analytics processes such as machine learning close to the...

  11. Analytical methods of optimization

    CERN Document Server

    Lawden, D F

    2006-01-01

    Suitable for advanced undergraduates and graduate students, this text surveys the classical theory of the calculus of variations. It takes the approach most appropriate for applications to problems of optimizing the behavior of engineering systems. Two of these problem areas have strongly influenced this presentation: the design of the control systems and the choice of rocket trajectories to be followed by terrestrial and extraterrestrial vehicles.Topics include static systems, control systems, additional constraints, the Hamilton-Jacobi equation, and the accessory optimization problem. Prereq

  12. A study on an optimal movement model

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton BN1 9QH, UK (United Kingdom); Zhang, Kewei [SMS, Sussex University, Brighton BN1 9QH (United Kingdom); Luo Yousong [Department of Mathematics and Statistics, RMIT University, GOP Box 2476V, Melbourne, Vic 3001 (Australia)

    2003-07-11

    We present an analytical and rigorous study on a TOPS (task optimization in the presence of signal-dependent noise) model with a hold-on or an end-point control. Optimal control signals are rigorously obtained, which enables us to investigate various issues about the model including its trajectories, velocities, control signals, variances and the dependence of these quantities on various model parameters. With the hold-on control, we find that the optimal control can be implemented with an almost 'nil' hold-on period. The optimal control signal is a linear combination of two sub-control signals. One of the sub-control signals is positive and the other is negative. With the end-point control, the end-point variance is dramatically reduced, in comparison with the hold-on control. However, the velocity is not symmetric (bell shape). Finally, we point out that the velocity with a hold-on control takes the bell shape only within a limited parameter region.

  13. A Complete First-Order Analytical Solution for Optimal Low-Thrust Limited-Power Transfers Between Coplanar Orbits with Small Eccentricities

    Science.gov (United States)

    Da Silva Fernandes, Sandro; Das Chagas Carvalho, Francisco; Vilhena de Moraes, Rodolpho

    The purpose of this work is to present a complete first-order analytical solution, which includes short-periodic terms, for the problem of optimal low-thrust limited-power trajectories with large-amplitude transfers (no rendezvous) between coplanar orbits with small eccentricities in a Newtonian central gravity field. The study of these transfers is particularly interesting because the orbits found in practice often have a small eccentricity, and the problem of transferring a vehicle from a low Earth orbit to a high Earth orbit is frequently encountered. Moreover, the analysis has been motivated by the renewed interest in the use of low-thrust propulsion systems in space missions over the last two decades. Several researchers have obtained numerical, and sometimes analytical, solutions for a number of specific initial orbits and specific thrust profiles; averaging methods are also used in such research. First, the optimization problem associated with the space transfer problem is formulated as a Mayer problem of optimal control with Cartesian elements (position and velocity vectors) as state variables. After applying the Pontryagin Maximum Principle, successive Mathieu transformations are performed and suitable sets of orbital elements are introduced. The short-periodic terms are eliminated from the maximum Hamiltonian function through an infinitesimal canonical transformation built by the Hori method, a canonical perturbation method based on Lie series. The new Hamiltonian function, which results from the infinitesimal canonical transformation, describes the extremal trajectories for long-duration maneuvers. Closed-form analytical solutions are obtained for the new canonical system by solving the Hamilton-Jacobi equation through the separation-of-variables technique. By applying the transformation equations of the algorithm of the Hori method, a first-order analytical solution for the problem is obtained in nonsingular orbital elements.
For long duration maneuvers…

  14. Analytical Study of Oxalates Coprecipitation

    Directory of Open Access Journals (Sweden)

    Liana MARTA

    2003-03-01

Full Text Available The paper deals with establishing the oxalates coprecipitation conditions in view of the synthesis of superconducting systems. A systematic analytical study of the oxalates precipitation conditions has been performed for obtaining superconducting materials in the Bi-Sr-Ca-Cu-O system. For this purpose, the formulae of the precipitates' solubility as a function of pH and oxalate excess were established. The possible formation of hydroxo-complexes and soluble oxalato-complexes was taken into account. A BASIC program was used for tracing the precipitation curves. The curves of solubility versus pH for different oxalate excesses have been plotted for the four oxalates, using a logarithmic scale. The optimal conditions for quantitative oxalate coprecipitation have been deduced from the diagrams. The theoretical curves were confirmed by experimental results. From the precursors obtained by this method, the BSCCO superconducting phases were obtained by an appropriate thermal treatment. The formation of the superconducting phases was identified by X-ray diffraction analysis.
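
As an illustration of the kind of calculation behind the precipitation curves (the abstract mentions a BASIC program), the following Python sketch computes the conditional solubility of a divalent metal oxalate as a function of pH and oxalate excess. The oxalic acid pKa values are the usual literature ones; the pKsp and excess values are illustrative, and the hydroxo/oxalato side reactions considered in the paper are ignored here for brevity:

```python
import math

def oxalate_fraction(pH, pKa1=1.25, pKa2=4.27):
    """Fraction of total oxalate present as the free C2O4^2- anion
    at a given pH (side-reaction coefficient for protonation)."""
    h = 10.0 ** (-pH)
    ka1, ka2 = 10.0 ** (-pKa1), 10.0 ** (-pKa2)
    return 1.0 / (1.0 + h / ka2 + h * h / (ka1 * ka2))

def metal_solubility(pH, pKsp, ox_excess):
    """Solubility [mol/L] of a divalent metal oxalate MC2O4 in the
    presence of an oxalate excess (mol/L), ignoring hydroxo- and
    oxalato-complex side reactions for simplicity."""
    ksp = 10.0 ** (-pKsp)
    free_ox = oxalate_fraction(pH) * ox_excess
    return ksp / free_ox

# log-scale solubility curve versus pH, as in the paper's diagrams
curve = [(pH, math.log10(metal_solubility(pH, pKsp=8.6, ox_excess=0.05)))
         for pH in (1, 2, 3, 4, 5, 6)]
```

At low pH the free oxalate fraction collapses and the solubility rises steeply, which is exactly why the diagrams are traced on a logarithmic scale.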

  15. An analytical study of photoacoustic and thermoacoustic generation efficiency towards contrast agent and film design optimization

    Directory of Open Access Journals (Sweden)

    Fei Gao

    2017-09-01

Full Text Available Photoacoustic (PA) and thermoacoustic (TA) effects have been explored in many applications, such as bio-imaging, laser-induced ultrasound generation, and sensitive electromagnetic (EM) wave film sensors. In this paper, we propose a compact analytical PA/TA generation model incorporating EM, thermal and mechanical parameters. From the derived analytical model, both intuitive predictions and quantitative simulations are performed. It shows that, beyond improving EM absorption, many other physical parameters deserve careful consideration when designing contrast agents or film composites; this is then examined in a simulation study. Lastly, several sets of experimental results are presented to prove the feasibility of the proposed analytical model. Overall, the proposed compact model can serve as clear guidance and prediction for improved PA/TA contrast agents and film generator/sensor designs in the domain area.

  16. The Framework of Intervention Engine Based on Learning Analytics

    Science.gov (United States)

    Sahin, Muhittin; Yurdugül, Halil

    2017-01-01

    Learning analytics primarily deals with the optimization of learning environments and the ultimate goal of learning analytics is to improve learning and teaching efficiency. Studies on learning analytics seem to have been made in the form of adaptation engine and intervention engine. Adaptation engine studies are quite widespread, but intervention…

  17. Optimization of instrumental neutron activation analysis method by means of 2{sup k} experimental design technique aiming the validation of analytical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Petroni, Robson; Moreira, Edson G., E-mail: rpetroni@ipen.br, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation with the National Institute of Metrology, Quality and Technology (INMETRO). The 2{sup k} experimental design was applied to evaluate the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample-to-detector distance. The multi-element standard concentration (comparator standard), sample mass and irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)
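
The 2{sup k} design described above can be sketched as follows. The factor names follow the abstract; the response values are hypothetical mass-fraction results used only to show how main effects (mean response at the high level minus mean response at the low level) are computed:

```python
from itertools import product

# Coded levels (-1/+1) for the three varied factors from the study:
# decay time, counting time, sample-to-detector distance.
factors = ["decay_time", "counting_time", "detector_distance"]
design = list(product((-1, +1), repeat=len(factors)))  # 2^3 = 8 runs

def main_effects(design, responses):
    """Main effect of each factor: mean response at the high level
    minus mean response at the low level."""
    effects = {}
    for j, name in enumerate(factors):
        hi = [y for run, y in zip(design, responses) if run[j] == +1]
        lo = [y for run, y in zip(design, responses) if run[j] == -1]
        effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

# hypothetical mass-fraction results for the 8 runs (illustrative only)
responses = [10.2, 10.1, 10.8, 10.7, 10.3, 10.2, 10.9, 10.8]
effects = main_effects(design, responses)
```

With these illustrative responses the counting-time effect dominates, which is the kind of conclusion the z-score and effect analysis in the study is designed to surface.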

  18. Parametric study of a turbocompound diesel engine based on an analytical model

    International Nuclear Information System (INIS)

    Zhao, Rongchao; Zhuge, Weilin; Zhang, Yangjun; Yin, Yong; Zhao, Yanting; Chen, Zhen

    2016-01-01

Turbocompounding is an important technique to recover waste heat from engine exhaust and reduce CO_2 emissions. This paper presents a parametric study of a turbocompound diesel engine based on an analytical model. The analytical model was developed to investigate the influence of system parameters on the engine fuel consumption. The model is based on thermodynamic knowledge and empirical models, and can consider the impact of each parameter independently. The effects of turbine efficiency, back pressure, exhaust temperature, pressure ratio and engine speed on the recovered energy, pumping loss and engine fuel reductions were studied. Results show that turbine efficiency, exhaust temperature and back pressure have a great influence on the fuel reduction and on the optimal power turbine (PT) expansion ratio. However, engine operating speed has little impact on the fuel savings obtained by turbocompounding. The interaction mechanism between the PT recovery power and engine pumping loss is presented in the paper. Due to the nonlinear characteristic of turbine power, there is an optimum value of the PT expansion ratio that achieves the largest power gain. At the end, the fuel-saving potential of a high-performance turbocompound engine and the requirements for it are proposed in the paper. - Highlights: • An analytical model for a turbocompound engine is developed and validated. • A parametric study is performed to obtain the lowest BSFC and optimal expansion ratio. • The influences of each parameter on the fuel-saving potential are presented. • The impact mechanisms of each parameter on the energy tradeoff are disclosed. • It provides an effective tool to guide the preliminary design of turbocompounding.

  19. Data Analytics Based Dual-Optimized Adaptive Model Predictive Control for the Power Plant Boiler

    Directory of Open Access Journals (Sweden)

    Zhenhao Tang

    2017-01-01

Full Text Available To control the furnace temperature of a power plant boiler precisely, a dual-optimized adaptive model predictive control (DoAMPC) method is designed based on data analytics. In the proposed DoAMPC, an accurate predictive model is constructed adaptively by a hybrid algorithm of the least squares support vector machine and the differential evolution method. Then, an optimization problem is constructed based on the predictive model and many constraint conditions. To control the boiler furnace temperature, the differential evolution method is utilized to decide the control variables by solving the optimization problem. The proposed method can adapt to time-varying situations by updating the sample data. The experimental results based on practical data illustrate that the DoAMPC can control the boiler furnace temperature with errors of less than 1.5%, which can meet the requirements of the real production process.
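
The differential evolution step used by the DoAMPC scheme can be illustrated with a minimal rand/1/bin DE loop. The objective below is a stand-in quadratic tracking cost with assumed coefficients and bounds, not the paper's actual boiler model:

```python
import random

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=1):
    """Minimal rand/1/bin differential evolution: mutate with the
    difference of two random population members, crossover, and keep
    the trial only if it improves the cost."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d in range(dim):
                if rng.random() < CR:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                else:
                    v = pop[i][d]
                lo, hi = bounds[d]
                trial.append(min(max(v, lo), hi))  # clamp to bounds
            f = objective(trial)
            if f < cost[i]:
                pop[i], cost[i] = trial, f
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# stand-in objective: drive a predicted furnace temperature to a setpoint
setpoint = 950.0
best_u, best_cost = differential_evolution(
    lambda u: (0.9 * u[0] + 0.1 * u[1] - setpoint) ** 2,
    bounds=[(0.0, 2000.0), (0.0, 2000.0)])
```

In the paper's scheme, the objective would instead be evaluated through the adaptively trained LSSVM predictive model subject to the process constraints.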

  20. Commissioning of the laboratory of Atucha II NPP. Implementation and optimization of analytical techniques, quality aspects

    International Nuclear Information System (INIS)

    Schoenbrod, Betina; Quispe, Benjamin; Cattaneo, Alberto; Rodriguez, Ivanna; Chocron, Mauricio; Farias, Silvia

    2012-09-01

Atucha II NPP is a Pressurized Vessel Heavy Water Reactor (PVHWR) of 740 MWe designed by Siemens-KWU. After some years of delay, this NPP is in an advanced state of construction, with the beginning of commercial operation expected for 2013. Nucleoelectrica Argentina (N.A.S.A.) is the company in charge of the finalization of this project and the future operation of the plant. The Comision Nacional de Energia Atomica (C.N.E.A.) is the R and D nuclear institution in the country that, among many other topics, provides technical support to the stations. The Commissioning Chemistry Division of CNAII is in charge of the commissioning of the demineralized-water plant and the organization of the chemical laboratory. The water plant started operating successfully in July 2010 and is providing the plant with nuclear-grade purity water. Currently, several activities are taking place in the conventional ('cold') laboratory. On one hand, analytical techniques for the future operation of the plant are being tested and optimized. On the other hand, the laboratory is participating in the cleaning and conservation of the different components of the plant, providing technical support and the necessary analyses. To define the analytical techniques for the normal operation of the plant, the parameters to be measured and their ranges were established in the Chemistry Manual. The necessary equipment and reagents were bought. In this work, a summary of the analytical techniques that are being implemented and optimized is presented. Common anions (chloride, sulfate, fluoride, bromide and nitrate) are analyzed by ion chromatography. Cations, mainly sodium, are determined by absorption spectrometry. A UV-Vis spectrometer is used to determine silicates, iron, ammonia, COD, total solids, true color and turbidity. TOC measurements are performed with a TOC analyzer. To optimize the methods, several parameters are evaluated: linearity, detection and quantification limits, precision and

  1. Study and optimization of the spatial resolution for detectors with binary readout

    Energy Technology Data Exchange (ETDEWEB)

    Yonamine, R., E-mail: ryo.yonamine@ulb.ac.be; Maerschalk, T.; Lentdecker, G. De

    2016-09-11

Using simulations and analytical approaches, we have studied the single-hit resolution obtained with a binary readout, which is often proposed for high-granularity detectors to reduce the generated data volume. Our simulations, which consider several parameters (e.g. strip pitch), show that the detector geometry and an electronics parameter of the binary readout chips can be optimized so that a binary readout offers a spatial resolution equivalent to that of an analog readout. To understand the behavior as a function of the simulation parameters, we developed analytical models that reproduce the simulation results with a few parameters. The models can be used to optimize detector designs and operating conditions with regard to the spatial resolution.
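
A useful baseline for the binary-readout resolutions studied here is the textbook result that a binary readout with single-strip clusters measures positions with an RMS error of pitch/sqrt(12), the RMS of a uniform distribution over one pitch (the paper's models refine this with threshold and charge-sharing effects):

```python
import math

def binary_resolution(pitch_um):
    """Textbook single-hit resolution of a binary readout when each
    hit fires a single strip: the RMS of a uniform distribution of
    width `pitch`, i.e. pitch / sqrt(12)."""
    return pitch_um / math.sqrt(12.0)

# e.g. an 80 um strip pitch gives roughly a 23 um binary resolution
res = binary_resolution(80.0)
```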

  2. An Investigation to Manufacturing Analytical Services Composition using the Analytical Target Cascading Method.

    Science.gov (United States)

    Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas

    2017-01-01

    As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive to analytical problems in the manufacturing system design and performance improvement domain because 1) finding a global optimization for the system is a complex problem; and 2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems by independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely-coupled sub-problems, each may be modularly formulated by differing departments and be solved by modular analytical services. The result demonstrates that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular executions while allowing easier management of the problem formulation.

  3. CCS Site Optimization by Applying a Multi-objective Evolutionary Algorithm to Semi-Analytical Leakage Models

    Science.gov (United States)

    Cody, B. M.; Gonzalez-Nicolas, A.; Bau, D. A.

    2011-12-01

    Carbon capture and storage (CCS) has been proposed as a method of reducing global carbon dioxide (CO2) emissions. Although CCS has the potential to greatly retard greenhouse gas loading to the atmosphere while cleaner, more sustainable energy solutions are developed, there is a possibility that sequestered CO2 may leak and intrude into and adversely affect groundwater resources. It has been reported [1] that, while CO2 intrusion typically does not directly threaten underground drinking water resources, it may cause secondary effects, such as the mobilization of hazardous inorganic constituents present in aquifer minerals and changes in pH values. These risks must be fully understood and minimized before CCS project implementation. Combined management of project resources and leakage risk is crucial for the implementation of CCS. In this work, we present a method of: (a) minimizing the total CCS cost, the summation of major project costs with the cost associated with CO2 leakage; and (b) maximizing the mass of injected CO2, for a given proposed sequestration site. Optimization decision variables include the number of CO2 injection wells, injection rates, and injection well locations. The capital and operational costs of injection wells are directly related to injection well depth, location, injection flow rate, and injection duration. The cost of leakage is directly related to the mass of CO2 leaked through weak areas, such as abandoned oil wells, in the cap rock layers overlying the injected formation. Additional constraints on fluid overpressure caused by CO2 injection are imposed to maintain predefined effective stress levels that prevent cap rock fracturing. Here, both mass leakage and fluid overpressure are estimated using two semi-analytical models based upon work by [2,3]. A multi-objective evolutionary algorithm coupled with these semi-analytical leakage flow models is used to determine Pareto-optimal trade-off sets giving minimum total cost vs. 
maximum mass…
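
The Pareto-optimal trade-off set mentioned above is simply the set of non-dominated solutions for the two objectives (minimize total cost, maximize injected CO2 mass). A minimal sketch with hypothetical (cost, injected-mass) candidates:

```python
def pareto_front(points):
    """Non-dominated subset for a two-objective problem:
    minimize total cost, maximize injected CO2 mass.
    A point dominates another if it is no worse in both objectives
    and strictly better in at least one."""
    front = []
    for cost, mass in points:
        dominated = any(c2 <= cost and m2 >= mass and (c2, m2) != (cost, mass)
                        for c2, m2 in points)
        if not dominated:
            front.append((cost, mass))
    return sorted(front)

# hypothetical (total cost in M$, injected CO2 mass in Mt) candidates
candidates = [(10, 1.0), (12, 1.6), (11, 1.2), (15, 1.6), (12, 1.1)]
front = pareto_front(candidates)
```

The evolutionary algorithm in the study evolves injection-well counts, rates and locations so that the surviving population approximates this front.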

  4. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    Directory of Open Access Journals (Sweden)

    Yongjun Ahn

Full Text Available The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city-level planning. The optimal charging-station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive

  5. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    Science.gov (United States)

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station's density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric
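
A stylized version of the density optimization in this model (not the paper's exact cost function) balances a station cost that grows linearly with density against an access cost that falls inversely with it, which gives a closed-form optimum:

```python
import math

def optimal_density(station_cost, access_cost_coeff):
    """Stylized version of the ERDEC idea: total cost per unit area
    f(d) = station_cost * d + access_cost_coeff / d, where d is the
    charging-station density. Setting f'(d) = 0 gives the closed-form
    optimum d* = sqrt(access_cost_coeff / station_cost)."""
    return math.sqrt(access_cost_coeff / station_cost)

def total_cost(d, station_cost, access_cost_coeff):
    return station_cost * d + access_cost_coeff / d

# illustrative coefficients only; the paper's model also folds in
# regional and technological parameters
d_star = optimal_density(station_cost=4.0, access_cost_coeff=9.0)
```

Technological advances (e.g. faster chargers) effectively lower the access-cost coefficient, which shifts the optimal density down, matching the sensitivity analysis described in the abstract.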

  6. Determination of proline in honey: comparison between official methods, optimization and validation of the analytical methodology.

    Science.gov (United States)

    Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe

    2014-05-01

The study compares official spectrophotometric methods for the determination of proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are unnecessary. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance recording is set at 35 min from the removal of the reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L-1, limit of detection 20 mg L-1, limit of quantification 61 mg L-1. The method was applied to 43 unifloral honey samples from the Marche region, Italy. Copyright © 2013 Elsevier Ltd. All rights reserved.
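
Validation figures like the ones quoted above typically follow from the calibration-based estimates LOD = 3.3 * s_blank / slope and LOQ = 10 * s_blank / slope. The standards and blank noise below are hypothetical values chosen only to illustrate the arithmetic:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# hypothetical proline standards (mg/L) vs absorbance readings
conc = [0, 300, 600, 900, 1200, 1500, 1800]
absorb = [0.002, 0.151, 0.302, 0.449, 0.601, 0.752, 0.898]
slope, intercept = linear_fit(conc, absorb)

# common LOD/LOQ estimates from the blank noise and the slope
s_blank = 0.003          # assumed standard deviation of blank readings
lod = 3.3 * s_blank / slope
loq = 10.0 * s_blank / slope
```

With these illustrative numbers the estimates land near 20 and 60 mg L-1, the same order as the validated figures reported in the study.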

  7. A Two-Step Approach for Analytical Optimal Hedging with Two Triggers

    Directory of Open Access Journals (Sweden)

    Tiesong Hu

    2016-02-01

Full Text Available Hedging is widely used to mitigate severe water shortages in the operation of reservoirs during droughts. Rationing is usually instituted with one hedging policy, which is based on only one trigger, i.e., the initial storage level or the current water availability. It may perform poorly in balancing the benefits of a release during the current period versus those of carryover storage during future droughts. This study proposes a novel hedging rule to improve the efficiency of a reservoir operated to supply water, in which, based on two triggers, hedging is initiated with three different hedging sub-rules through a two-step approach. In the first step, the sub-rule is triggered based on the relationship between the initial reservoir storage level and the level of the target rule curve or the firm rule curve at the end of the current period. This step mainly decides whether or not to raise the water level in the current period. Hedging is then triggered under the sub-rule based on current water availability in the second step, in which the trigger implicitly considers both the initial and ending reservoir storage levels in the current period. Moreover, the amount of hedging is analytically derived based on the Karush–Kuhn–Tucker (KKT) conditions. In addition, the hedging parameters are optimized using the improved particle swarm optimization (IPSO) algorithm coupled with a rule-based simulation. A single water-supply reservoir located in Hubei Province in central China is selected as a case study. The operation results show that the proposed rule is reasonable and significantly improves the reservoir operation performance for both long-term and critical periods relative to other operation policies, such as the standard operating policy (SOP) and the most commonly used hedging rules.
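
The contrast between the standard operating policy and a hedging rule can be sketched with a single-trigger toy rule (a much reduced form of the paper's two-trigger, three-sub-rule scheme); the trigger and hedge factor are assumed values:

```python
def sop_release(availability, demand):
    """Standard operating policy: meet demand if possible,
    otherwise release everything available."""
    return min(availability, demand)

def hedged_release(availability, demand, trigger, hedge_factor):
    """Simple one-trigger hedging rule: once current water
    availability drops below a trigger volume, ration deliveries
    to preserve carryover storage for future droughts."""
    if availability >= trigger:
        return min(availability, demand)
    return min(availability, hedge_factor * demand)

# during a shortage, hedging deliberately releases less than SOP
shortage = 80.0
full = sop_release(shortage, demand=100.0)
rationed = hedged_release(shortage, demand=100.0,
                          trigger=120.0, hedge_factor=0.6)
```

The paper's contribution is to derive the rationed amount analytically from the KKT conditions and to tune the trigger-type parameters with IPSO, rather than fixing them by hand as done here.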

  8. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    Science.gov (United States)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that run until the difference between subsequent solutions satisfies a pre-determined termination criterion. The effectiveness of the proposed approach is illustrated by an example, which yields near-optimal results with much shorter solving times than the conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.

  9. Optimal control of quantum dissipative dynamics: Analytic solution for cooling the three-level Λ system

    International Nuclear Information System (INIS)

    Sklarz, Shlomo E.; Tannor, David J.; Khaneja, Navin

    2004-01-01

    We study the problem of optimal control of dissipative quantum dynamics. Although under most circumstances dissipation leads to an increase in entropy (or a decrease in purity) of the system, there is an important class of problems for which dissipation with external control can decrease the entropy (or increase the purity) of the system. An important example is laser cooling. In such systems, there is an interplay of the Hamiltonian part of the dynamics, which is controllable, and the dissipative part of the dynamics, which is uncontrollable. The strategy is to control the Hamiltonian portion of the evolution in such a way that the dissipation causes the purity of the system to increase rather than decrease. The goal of this paper is to find the strategy that leads to maximal purity at the final time. Under the assumption that Hamiltonian control is complete and arbitrarily fast, we provide a general framework by which to calculate optimal cooling strategies. These assumptions lead to a great simplification, in which the control problem can be reformulated in terms of the spectrum of eigenvalues of ρ, rather than ρ itself. By combining this formulation with the Hamilton-Jacobi-Bellman theorem we are able to obtain an equation for the globally optimal cooling strategy in terms of the spectrum of the density matrix. For the three-level Λ system, we provide a complete analytic solution for the optimal cooling strategy. For this system it is found that the optimal strategy does not exploit system coherences and is a 'greedy' strategy, in which the purity is increased maximally at each instant
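
The reformulation in terms of the spectrum of ρ is natural because the purity itself depends only on the eigenvalues: Tr(rho^2) is just the sum of squared eigenvalues. A minimal sketch for a three-level system:

```python
def purity(spectrum):
    """Purity Tr(rho^2) computed from the eigenvalue spectrum of the
    density matrix alone, mirroring the reformulation of the control
    problem in terms of the spectrum rather than rho itself."""
    assert abs(sum(spectrum) - 1.0) < 1e-9, "spectrum must sum to 1"
    return sum(p * p for p in spectrum)

# a maximally mixed three-level state vs a partially cooled one
mixed = purity([1 / 3, 1 / 3, 1 / 3])
cooled = purity([0.8, 0.15, 0.05])
```

The 'greedy' optimal strategy found for the Λ system increases this quantity as fast as possible at every instant, moving the spectrum toward (1, 0, 0).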

  10. Optimality study of a gust alleviation system for light wing-loading STOL aircraft

    Science.gov (United States)

    Komoda, M.

    1976-01-01

    An analytical study was made of an optimal gust alleviation system that employs a vertical gust sensor mounted forward of an aircraft's center of gravity. Frequency domain optimization techniques were employed to synthesize the optimal filters that process the corrective signals to the flaps and elevator actuators. Special attention was given to evaluating the effectiveness of lead time, that is, the time by which relative wind sensor information should lead the actual encounter of the gust. The resulting filter is expressed as an implicit function of the prescribed control cost. A numerical example for a light wing loading STOL aircraft is included in which the optimal trade-off between performance and control cost is systematically studied.

  11. Fabrication of paper-based analytical devices optimized by central composite design.

    Science.gov (United States)

    Hamedpour, Vahid; Leardi, Riccardo; Suzuki, Koji; Citterio, Daniel

    2018-04-30

In this work, an application of a design-of-experiments approach for the optimization of an isoniazid assay on a single-area inkjet-printed paper-based analytical device (PAD) is described. For this purpose, a central composite design was used to evaluate the effect of device geometry and amount of assay reagents on the efficiency of the proposed device. The factors of interest were printed length, width, and sampling volume as factors related to device geometry, and the amounts of the assay reagents polyvinyl alcohol (PVA), NH4OH, and AgNO3. Deposition of the assay reagents was performed with a thermal inkjet printer. The colorimetric assay mechanism of this device is based on the chemical interaction of isoniazid, ammonium hydroxide, and PVA with silver ions to induce the formation of yellow silver nanoparticles (AgNPs). The in situ-formed AgNPs can be easily detected by the naked eye or with a simple flat-bed scanner. Under optimal conditions, the calibration curve was linear in the isoniazid concentration range 0.03-10 mmol L-1 with a relative standard deviation of 3.4% (n = 5 for determination of 1.0 mmol L-1). Finally, the application of the proposed device for isoniazid determination in pharmaceutical preparations produced satisfactory results.
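
For reference, the coded runs of a central composite design can be generated as follows (shown for two factors for brevity; the study varied six):

```python
from itertools import product

def central_composite_design(k, alpha=None, n_center=3):
    """Coded runs of a full central composite design for k factors:
    2^k factorial corners, 2k axial (star) points at +/- alpha, and
    replicated centre points. alpha defaults to the rotatable choice
    (2^k)^(1/4)."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    stars = []
    for j in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[j] = s
            stars.append(pt)
    centers = [[0.0] * k for _ in range(n_center)]
    return corners + stars + centers

# e.g. two geometry factors (printed length, width): 4 + 4 + 3 = 11 runs
runs = central_composite_design(k=2)
```

Each coded level is then mapped to a physical setting (e.g. a printed length in mm or a reagent amount), and a quadratic response surface is fitted to the measured assay responses.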

  12. Analytic Shielding Optimization to Reduce Crew Exposure to Ionizing Radiation Inside Space Vehicles

    Science.gov (United States)

Gaza, Razvan; Cooper, Tim P.; Hanzo, Arthur; Hussein, Hesham; Jarvis, Kandy S.; Kimble, Ryan; Lee, Kerry T.; Patel, Chirag; Reddell, Brandon D.; Stoffle, Nicholas

    2009-01-01

A sustainable lunar architecture provides capabilities for leveraging out-of-service components for alternate uses. Discarded architecture elements may be used to provide ionizing radiation shielding to the crew habitat in case of a Solar Particle Event. The specific location relative to the vehicle where the additional shielding mass is placed, combined with the particularities of the vehicle design, has a large influence on the protection gain. This effect is caused by the exponential-like decrease of radiation exposure with shielding mass thickness, which in turn determines that the most benefit from a given amount of shielding mass is obtained by placing it so that it preferentially augments protection in under-shielded areas of the vehicle exposed to the radiation environment. A novel analytic technique to derive an optimal shielding configuration was developed by Lockheed Martin during Design Analysis Cycle 3 (DAC-3) of the Orion Crew Exploration Vehicle (CEV). [1] Based on a detailed Computer Aided Design (CAD) model of the vehicle, including a specific crew positioning scenario, a set of under-shielded vehicle regions can be identified as candidates for the placement of additional shielding. Analytic tools are available to capture an idealized supplemental shielding distribution in the CAD environment, which in turn is used as a reference for deriving a realistic shielding configuration from available vehicle components. While the analysis referenced in this communication applies particularly to the Orion vehicle, the general method can be applied to a large range of space exploration vehicles, including but not limited to lunar and Mars architecture components. In addition, the method can be immediately applied to the optimization of radiation shielding provided to sensitive electronic components.
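
The placement logic described above can be sketched as a greedy loop: because dose falls roughly exponentially with shielding thickness, each added slab buys the most protection in the currently worst-shielded sector. The sector doses and attenuation per slab below are assumed, illustrative values:

```python
import math

def allocate_shielding(base_dose, n_slabs, mu_x=0.5):
    """Greedy shielding allocation: repeatedly add one slab of mass to
    the sector with the highest current dose, attenuating that sector's
    dose by exp(-mu_x) per slab. mu_x is an assumed attenuation
    coefficient times slab thickness."""
    dose = list(base_dose)
    added = [0] * len(dose)
    for _ in range(n_slabs):
        worst = max(range(len(dose)), key=dose.__getitem__)
        added[worst] += 1
        dose[worst] *= math.exp(-mu_x)
    return dose, added

# four vehicle sectors with uneven baseline doses (illustrative units)
dose, added = allocate_shielding([10.0, 4.0, 2.0, 1.0], n_slabs=5)
```

As expected, the slabs concentrate on the two most exposed sectors, which is the behavior the CAD-based technique exploits when choosing where to stack discarded elements.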

  13. What if Learning Analytics Were Based on Learning Science?

    Science.gov (United States)

    Marzouk, Zahia; Rakovic, Mladen; Liaqat, Amna; Vytasek, Jovita; Samadi, Donya; Stewart-Alonso, Jason; Ram, Ilana; Woloshen, Sonya; Winne, Philip H.; Nesbit, John C.

    2016-01-01

    Learning analytics are often formatted as visualisations developed from traced data collected as students study in online learning environments. Optimal analytics inform and motivate students' decisions about adaptations that improve their learning. We observe that designs for learning often neglect theories and empirical findings in learning…

  14. Portfolio Optimization and Mortgage Choice

    Directory of Open Access Journals (Sweden)

    Maj-Britt Nordfang

    2017-01-01

    Full Text Available This paper studies the optimal mortgage choice of an investor in a simple bond market with a stochastic interest rate and access to term life insurance. The study is based on advances in stochastic control theory, which provides analytical solutions to portfolio problems with a stochastic interest rate. We derive the optimal portfolio of a mortgagor in a simple framework and formulate stylized versions of mortgage products offered in the market today. This allows us to analyze the optimal investment strategy in terms of optimal mortgage choice. We conclude that certain extreme investors optimally choose either a traditional fixed rate mortgage or an adjustable rate mortgage, while investors with moderate risk aversion and income prefer a mix of the two. By matching specific investor characteristics to existing mortgage products, our study provides a better understanding of the complex and yet restricted mortgage choice faced by many household investors. In addition, the simple analytical framework enables a detailed analysis of how changes to market, income and preference parameters affect the optimal mortgage choice.

  15. Optimal Analytical Solution for a Capacitive Wireless Power Transfer System with One Transmitter and Two Receivers

    Directory of Open Access Journals (Sweden)

    Ben Minnaert

    2017-09-01

    Full Text Available Wireless power transfer from one transmitter to multiple receivers through inductive coupling is slowly entering the market. However, for certain applications, capacitive wireless power transfer (CWPT) using electric coupling might be preferable. In this work, we determine closed-form expressions for a CWPT system with one transmitter and two receivers. We determine the optimal solution for two design requirements: (i) maximum power transfer, and (ii) maximum system efficiency. We derive the optimal loads and provide the analytical expressions for the efficiency and power. We show that the optimal load conductances for the maximum power configuration are always larger than for the maximum efficiency configuration. Furthermore, it is demonstrated that if the receivers are coupled, this can be compensated for by introducing susceptances that have the same value for both configurations. Finally, we numerically verify our results. We illustrate the similarities to the inductive wireless power transfer (IWPT) solution and find that the same, but dual, expressions apply.

  16. Optimizing an Immersion ESL Curriculum Using Analytic Hierarchy Process

    Science.gov (United States)

    Tang, Hui-Wen Vivian

    2011-01-01

    The main purpose of this study is to fill a substantial knowledge gap regarding reaching a uniform group decision in English curriculum design and planning. A comprehensive content-based course criterion model extracted from existing literature and expert opinions was developed. The analytical hierarchy process (AHP) was used to identify the relative…

  17. Communication: Analytical optimal pulse shapes obtained with the aid of genetic algorithms: Controlling the photoisomerization yield of retinal

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero, R. D., E-mail: rdguerrerom@unal.edu.co [Department of Physics, Universidad Nacional de Colombia, Bogotá (Colombia); Arango, C. A., E-mail: caarango@icesi.edu.co [Department of Chemical Sciences, Universidad Icesi, Cali (Colombia); Reyes, A., E-mail: areyesv@unal.edu.co [Department of Chemistry, Universidad Nacional de Colombia, Bogotá (Colombia)

    2016-07-21

    We recently proposed a Quantum Optimal Control (QOC) method constrained to build pulses from analytical pulse shapes [R. D. Guerrero et al., J. Chem. Phys. 143(12), 124108 (2015)]. This approach was applied to control the dissociation channel yields of the diatomic molecule KH, considering three potential energy curves and one degree of freedom. In this work, we utilized this methodology to study the strong field control of the cis-trans photoisomerization of 11-cis retinal. This more complex system was modeled with a Hamiltonian comprising two potential energy surfaces and two degrees of freedom. The resulting optimal pulse, made of 6 linearly chirped pulses, was capable of controlling the population of the trans isomer on the ground electronic surface for nearly 200 fs. The simplicity of the pulse generated with our QOC approach offers two clear advantages: a direct analysis of the sequence of events occurring during the driven dynamics, and its reproducibility in the laboratory with current laser technologies.
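As an illustration of building a control field from analytical shapes as described above, the sketch below sums linearly chirped Gaussian pulses. The parameterization and all values are illustrative assumptions, not the six-pulse solution reported in the paper.

```python
import numpy as np

def chirped_pulse_train(t, pulses):
    """Sum of linearly chirped Gaussian pulses. Each pulse is a tuple
    (amplitude, center, width, carrier frequency, chirp rate):
    E(t) = sum_i A_i * exp(-(t-t_i)**2/(2*s_i**2)) * cos(w_i*(t-t_i) + c_i*(t-t_i)**2)."""
    field = np.zeros_like(t, dtype=float)
    for amp, t0, sigma, omega, chirp in pulses:
        tau = t - t0
        field += amp * np.exp(-tau**2 / (2.0 * sigma**2)) * np.cos(omega * tau + chirp * tau**2)
    return field
```

Because each pulse is described by only five parameters, an optimizer works in a low-dimensional analytic parameter space rather than over a free-form waveform, which is the practical appeal of this class of QOC methods.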

  18. Optimization of the Water Volume in the Buckets of Pico Hydro Overshot Waterwheel by Analytical Method

    Science.gov (United States)

    Budiarso; Adanta, Dendy; Warjito; Siswantara, A. I.; Saputra, Pradhana; Dianofitra, Reza

    2018-03-01

    Rapid economic and population growth in Indonesia lead to increased energy consumption, including electricity needs. Pico hydro is considered the right solution because investment and operational costs are fairly low. Additionally, Indonesia has many remote areas with high hydro-energy potential. The overshot waterwheel is one technology that is suitable for remote areas due to its ease of operation and maintenance. This study attempts to optimize the bucket dimensions for the available conditions. In addition, the optimization also has a good impact on the amount of generated power, because all available energy is utilized maximally. An analytical method is used to evaluate the volume of water contained in the buckets of the overshot waterwheel. In general, two stages are performed. First, the volume of water contained in each active bucket is calculated; if the total amount of water contained in the active buckets is less than the available discharge, the width of the wheel is recalculated. Second, the torque of each active bucket is calculated to determine the power output. As a result, the mechanical power generated by the waterwheel is 305 Watts, with an efficiency of 28%.
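The headline numbers above follow from the standard hydro-power relation P = η·ρ·g·Q·H; a minimal sketch is shown below. The discharge and head values in the usage note are illustrative assumptions, as the abstract does not state them.

```python
# Standard hydro-power relation: available power P = rho*g*Q*H,
# shaft power = efficiency * P. Not the paper's bucket-by-bucket torque sum.
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydraulic_power(flow_rate, head):
    """Available hydraulic power (W) for discharge Q (m^3/s) and head H (m)."""
    return RHO * G * flow_rate * head

def mechanical_power(flow_rate, head, efficiency):
    """Shaft power (W) delivered by the wheel at the given efficiency."""
    return efficiency * hydraulic_power(flow_rate, head)
```

At the paper's reported 28% efficiency, an assumed discharge of 0.05 m³/s over a 2 m head would yield about 275 W of shaft power.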

  19. Analytical methodology for optimization of waste management scenarios in nuclear installation decommissioning process - 16148

    International Nuclear Information System (INIS)

    Zachar, Matej; Necas, Vladimir; Daniska, Vladimir; Rehak, Ivan; Vasko, Marek

    2009-01-01

    The nuclear installation decommissioning process is characterized by the production of large amounts of various radioactive and non-radioactive waste that have to be managed, taking into account their physical, chemical, toxic and radiological properties. Waste management is considered one of the key issues within the decommissioning process. During the decommissioning planning period, the scenarios covering possible routes of material release into the environment and radioactive waste disposal should be discussed and evaluated. Unconditional and conditional release to the environment, long-term storage at the nuclear site, near-surface or deep geological disposal, and the relevant material management techniques for achieving the final status should be taken into account in the analysed scenarios. At the level of the final decommissioning plan, it is desirable to have the waste management scenario optimized for the specific local facility conditions, taking into account the national decommissioning background. The analytical methodology for the evaluation of decommissioning waste management scenarios presented in the paper is based on the modelling of material and radioactivity flows, which starts from waste generation activities such as pre-dismantling decontamination, selected methods of dismantling, and waste treatment and conditioning, up to material release or conditioned radioactive waste disposal. The necessary input data for the scenarios, e.g. the nuclear installation inventory database (physical and radiological data), waste processing technology parameters, and material release and waste disposal limits, have to be considered. The analytical methodology principles are implemented in the standardised decommissioning parameter calculation code OMEGA, developed by the DECOM company. In the paper, examples of the methodology's implementation for scenario optimization are presented and discussed. (authors)

  20. Using predictive analytics and big data to optimize pharmaceutical outcomes.

    Science.gov (United States)

    Hernandez, Inmaculada; Zhang, Yuting

    2017-09-15

    The steps involved, the resources needed, and the challenges associated with applying predictive analytics in healthcare are described, with a review of successful applications of predictive analytics in implementing population health management interventions that target medication-related patient outcomes. In healthcare, the term big data typically refers to large quantities of electronic health record, administrative claims, and clinical trial data as well as data collected from smartphone applications, wearable devices, social media, and personal genomics services; predictive analytics refers to innovative methods of analysis developed to overcome challenges associated with big data, including a variety of statistical techniques ranging from predictive modeling to machine learning to data mining. Predictive analytics using big data have been applied successfully in several areas of medication management, such as in the identification of complex patients or those at highest risk for medication noncompliance or adverse effects. Because predictive analytics can be used in predicting different outcomes, they can provide pharmacists with a better understanding of the risks for specific medication-related problems that each patient faces. This information will enable pharmacists to deliver interventions tailored to patients' needs. In order to take full advantage of these benefits, however, clinicians will have to understand the basics of big data and predictive analytics. Predictive analytics that leverage big data will become an indispensable tool for clinicians in mapping interventions and improving patient outcomes. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  1. Approximate analytical solution of diffusion equation with fractional time derivative using optimal homotopy analysis method

    Directory of Open Access Journals (Sweden)

    S. Das

    2013-12-01

    Full Text Available In this article, the optimal homotopy analysis method is used to obtain an approximate analytic solution of the time-fractional diffusion equation with a given initial condition. The fractional derivatives are considered in the Caputo sense. Unlike the usual homotopy analysis method, this method contains at most three convergence-control parameters, which provide faster convergence of the solution. The effects of the parameters on the convergence of the approximate series solution, obtained by minimizing the averaged residual error with proper choices of the parameters, are calculated numerically and presented through graphs and tables for different particular cases.

  2. Novel analytical model for optimizing the pull-in voltage in a flexured MEMS switch incorporating beam perforation effect

    Science.gov (United States)

    Guha, K.; Laskar, N. M.; Gogoi, H. J.; Borah, A. K.; Baishnab, K. L.; Baishya, S.

    2017-11-01

    This paper presents a new method for the design, modelling and optimization of a uniform serpentine-meander-based MEMS shunt capacitive switch with perforation on the upper beam. The new approach is proposed to improve the pull-in voltage performance of a MEMS switch. First, a new analytical model of the pull-in voltage is proposed using the modified Meijs-Fokkema capacitance model, accounting simultaneously for the nonlinear electrostatic force, the fringing field effect due to beam thickness, and the etched holes on the beam, followed by validation against the simulated results of the benchmark full-3D FEM solver CoventorWare over a wide range of structural parameter variations. It shows good agreement with the simulated results. Secondly, an optimization method is presented to determine the optimum configuration of the switch for achieving minimum pull-in voltage, with the proposed analytical model as the objective function. Several high-performance evolutionary optimization algorithms have been utilized to obtain the optimum dimensions with low computational cost and complexity. Upon comparing the applied algorithms with each other, the Dragonfly Algorithm is found to be the most suitable in terms of minimum pull-in voltage and higher convergence speed. Optimized values are validated against the simulated results of CoventorWare, which show very satisfactory agreement, with a small deviation of 0.223 V. In addition, the paper proposes, for the first time, a novel algorithmic approach for the uniform arrangement of square holes in a given beam area of an RF MEMS switch for perforation. The algorithm dynamically accommodates all the square holes within a given beam area such that the maximum space is utilized. This automated arrangement of perforation holes further improves the computational complexity and design accuracy of the complex design of the perforated MEMS switch.
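For context, the classical rigid parallel-plate pull-in formula — the textbook baseline that the paper's modified Meijs-Fokkema model refines with fringing-field and perforation corrections — can be sketched as follows; the parameter values in the usage note are illustrative assumptions.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """Classical pull-in voltage of a rigid parallel-plate electrostatic
    actuator: V_pi = sqrt(8*k*g0**3 / (27*eps0*A)), with spring constant k
    (N/m), initial gap g0 (m) and electrode area A (m^2)."""
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))
```

For an assumed k = 10 N/m, a 2 µm gap and a 100 µm × 100 µm electrode, this gives roughly 16.4 V; lowering the effective spring constant (e.g. with serpentine meanders, as in the paper) is what drives the pull-in voltage down.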

  3. Optimizing the Performance of Data Analytics Frameworks

    NARCIS (Netherlands)

    Ghit, B.I.

    2017-01-01

    Data analytics frameworks enable users to process large datasets while hiding the complexity of scaling out their computations on large clusters of thousands of machines. Such frameworks parallelize the computations, distribute the data, and tolerate server failures by deploying their own runtime

  4. Noble gas encapsulation into carbon nanotubes: Predictions from analytical model and DFT studies

    Energy Technology Data Exchange (ETDEWEB)

    Balasubramani, Sree Ganesh; Singh, Devendra; Swathi, R. S., E-mail: swathi@iisertvm.ac.in [School of Chemistry, Indian Institute of Science Education and Research Thiruvananthapuram (IISER-TVM), Kerala 695016 (India)

    2014-11-14

    The energetics of the interaction of noble gas atoms with carbon nanotubes (CNTs) are investigated using an analytical model and density functional theory calculations. Encapsulation of the noble gas atoms He, Ne, Ar, Kr, and Xe into CNTs of various chiralities is studied in detail using an analytical model developed earlier by Hill and co-workers. The constrained motion of the noble gas atoms along the axes of the CNTs as well as the off-axis motion are discussed. Analyses of the forces, interaction energies, and acceptance and suction energies for encapsulation enable us to predict the optimal CNTs that can encapsulate each of the noble gas atoms. We find that CNTs of radii 2.98-4.20 Å (chiral indices (5,4), (6,4), (9,1), (6,6), and (9,3)) can efficiently encapsulate the He, Ne, Ar, Kr, and Xe atoms, respectively. Endohedral adsorption of all the noble gas atoms is preferred over exohedral adsorption on the various CNTs. The results obtained using the analytical model are subsequently compared with calculations performed with dispersion-including density functional theory at the M06-2X level using a triple-zeta basis set, and good qualitative agreement is found. The analytical model is, however, computationally cheap, as its equations can be numerically programmed and the results obtained in comparatively little time.

  5. Optimal evaluation of infectious medical waste disposal companies using the fuzzy analytic hierarchy process

    International Nuclear Information System (INIS)

    Ho, Chao Chung

    2011-01-01

    Ever since Taiwan's National Health Insurance implemented the diagnosis-related groups payment system in January 2010, hospital income has declined. Therefore, to meet their medical waste disposal needs, hospitals seek suppliers that provide high-quality services at a low cost. The enactment of the Waste Disposal Act in 1974 had facilitated some improvement in the management of waste disposal. However, since the implementation of the National Health Insurance program, the amount of medical waste from disposable medical products has been increasing. Further, of all the hazardous waste types, the amount of infectious medical waste has increased at the fastest rate. This is because of the increase in the number of items considered as infectious waste by the Environmental Protection Administration. The present study used two important findings from previous studies to determine the critical evaluation criteria for selecting infectious medical waste disposal firms. It employed the fuzzy analytic hierarchy process to set the objective weights of the evaluation criteria and select the optimal infectious medical waste disposal firm through calculation and sorting. The aim was to propose a method of evaluation with which medical and health care institutions could objectively and systematically choose appropriate infectious medical waste disposal firms.

  6. Wind Farm Layout Optimization through a Crossover-Elitist Evolutionary Algorithm performed over a High Performing Analytical Wake Model

    Science.gov (United States)

    Kirchner-Bossi, Nicolas; Porté-Agel, Fernando

    2017-04-01

    Wind turbine wakes can significantly disrupt the performance of turbines further downstream in a wind farm, thus seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the overall performance of the project. An accurate description of the wake interactions, combined with a computationally tractable layout optimization strategy, is an efficient resource when addressing the problem. This work presents a novel soft-computing approach to optimize the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine positioning set-up is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian velocity-deficit profile [1], which has been shown to outperform the traditionally employed wake models in different LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Results show, compared to the baseline gridded layout, a wind power output increase between 5.5% and 7.7%. In addition, it is observed that the electric cable length at the facilities is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
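The Gaussian wake model of reference [1] can be sketched as below; the wake growth rate k* is an assumed value, not one taken from the paper.

```python
import math

def gaussian_wake_deficit(x_d, r_d, ct, kstar=0.035):
    """Normalized velocity deficit DU/U_inf at downstream distance x_d = x/d0
    and radial offset r_d = r/d0 (in rotor diameters), following the Gaussian
    wake model of Bastankhah & Porte-Agel (2014); ct is the rotor thrust
    coefficient and kstar the (assumed) wake growth rate."""
    beta = 0.5 * (1.0 + math.sqrt(1.0 - ct)) / math.sqrt(1.0 - ct)
    sigma_d = kstar * x_d + 0.2 * math.sqrt(beta)   # wake width sigma/d0
    amp = 1.0 - math.sqrt(1.0 - ct / (8.0 * sigma_d**2))
    return amp * math.exp(-r_d**2 / (2.0 * sigma_d**2))
```

In a layout optimizer, this deficit is evaluated for every upstream-downstream turbine pair at each candidate layout, so a cheap closed form like this is what makes evolutionary search over thousands of layouts affordable.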

  7. Analytic central path, sensitivity analysis and parametric linear programming

    NARCIS (Netherlands)

    A.G. Holder; J.F. Sturm; S. Zhang (Shuzhong)

    1998-01-01

    In this paper we consider properties of the central path and the analytic center of the optimal face in the context of parametric linear programming. We first show that if the right-hand side vector of a standard linear program is perturbed, then the analytic center of the optimal face

  8. Multicriteria optimization in a fuzzy environment: The fuzzy analytic hierarchy process

    Directory of Open Access Journals (Sweden)

    Gardašević-Filipović Milanka

    2010-01-01

    Full Text Available In this paper, the fuzzy extension of the Analytic Hierarchy Process (AHP) based on fuzzy numbers, and its application in solving a practical problem, are considered. The paper advocates the use of a contradiction test to check the fuzzy user preferences during the fuzzy AHP decision-making process. We also propose that consistency checking and the derivation of priorities from inconsistent fuzzy judgment matrices be included in the process, in order to check whether the fuzzy approach can be applied in the AHP for the problem considered. An aggregation of local priorities obtained at different levels into composite global priorities for the alternatives, based on the weighted-sum method, is also discussed. The contradictory fuzzy judgment matrix is analyzed. Our theoretical considerations have been verified by applying the commercially available Super Decisions program (developed for solving multi-criteria optimization problems using the AHP approach) to a problem previously treated in the literature. The obtained results are compared with those from the literature. Conclusions are given and possibilities for further work in the field are pointed out.
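For reference, the crisp (non-fuzzy) AHP prioritization that the fuzzy extension builds on can be sketched as follows: the priority vector is the principal eigenvector of the pairwise-comparison matrix, and consistency is scored with Saaty's consistency ratio.

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random indices

def ahp_priorities(A, iters=100):
    """Priority vector of a pairwise-comparison matrix A (principal
    eigenvector via power iteration) and Saaty's consistency ratio CR;
    a common rule of thumb accepts matrices with CR < 0.1."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    w = np.ones(n) / n
    for _ in range(iters):
        w = A @ w
        w /= w.sum()                    # normalize so priorities sum to 1
    lam = float(np.mean(A @ w / w))     # principal eigenvalue estimate
    ci = (lam - n) / (n - 1)
    cr = ci / RI[n] if RI[n] > 0 else 0.0
    return w, cr
```

For a perfectly consistent matrix built from known weights (a_ij = w_i/w_j), the routine recovers the weights exactly and CR is zero.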

  9. Transaction fees and optimal rebalancing in the growth-optimal portfolio

    Science.gov (United States)

    Feng, Yu; Medo, Matúš; Zhang, Liang; Zhang, Yi-Cheng

    2011-05-01

    The growth-optimal portfolio optimization strategy pioneered by Kelly is based on constant portfolio rebalancing which makes it sensitive to transaction fees. We examine the effect of fees on an example of a risky asset with a binary return distribution and show that the fees may give rise to an optimal period of portfolio rebalancing. The optimal period is found analytically in the case of lognormal returns. This result is consequently generalized and numerically verified for broad return distributions and returns generated by a GARCH process. Finally we study the case when investment is rebalanced only partially and show that this strategy can improve the investment long-term growth rate more than optimization of the rebalancing period.
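For the binary-return asset described above, the fee-free Kelly fraction has a simple closed form, which the sketch below checks against direct maximization of the expected log growth rate; the fee and rebalancing-period analysis of the paper is not reproduced here.

```python
import math

def kelly_closed_form(p, a, b):
    """Kelly fraction for a gain of a (per unit staked) with probability p
    and a loss of b with probability 1-p: f* = p/b - (1-p)/a."""
    return p / b - (1.0 - p) / a

def kelly_numeric(p, a, b, steps=20000):
    """Grid search for the f maximizing G(f) = p*ln(1+f*a) + (1-p)*ln(1-f*b)."""
    best_f, best_g = 0.0, -float("inf")
    for i in range(steps):
        f = i / steps / b          # keep f < 1/b so the log stays defined
        g = p * math.log(1.0 + f * a) + (1.0 - p) * math.log(1.0 - f * b)
        if g > best_g:
            best_f, best_g = f, g
    return best_f
```

For a symmetric even-money bet won with probability 0.6, both routes give f* = 0.2; introducing a per-rebalance fee flattens G(f) near the optimum, which is why fees can make less frequent rebalancing optimal, as the paper shows.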

  10. Optimal control for Malaria disease through vaccination

    Science.gov (United States)

    Munzir, Said; Nasir, Muhammad; Ramli, Marwan

    2018-01-01

    Malaria is a disease caused by single-celled Plasmodium parasites, with the Anopheles mosquito serving as the carrier. This study examines the optimal control problem for the spread of malaria based on the SIR-type model of Aron and May (1982) and seeks the optimal solution that minimizes the spread of malaria through vaccination. The aim is to investigate optimal control strategies for preventing the spread of malaria by vaccination. The problem is solved using an analytical approach: the Pontryagin Minimum Principle, with symbolic computation in MATLAB, is applied to obtain the optimal control and to analyse the spread of malaria under vaccination control.
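A minimal numerical counterpart of the Pontryagin approach is the forward-backward sweep, sketched below for a generic SI-type model with a vaccination control; the model and all parameter values are illustrative assumptions, not the Aron-May model of the paper.

```python
import numpy as np

# Toy model: S' = -beta*S*I - u*S, I' = beta*S*I - gam*I, with cost
# J = integral(A*I + c*u^2) dt. Pontryagin's principle gives the optimality
# condition u*(t) = clip(lamS*S/(2c), 0, umax), where lamS is the adjoint of S.
beta, gam, A, c, umax = 0.5, 0.2, 1.0, 1.0, 0.9
T, N = 20.0, 400
dt = T / N

def simulate(u):
    """Forward Euler integration of the state equations under a control path u."""
    S, I = np.empty(N + 1), np.empty(N + 1)
    S[0], I[0] = 0.9, 0.1
    for k in range(N):
        S[k + 1] = S[k] + dt * (-beta * S[k] * I[k] - u[k] * S[k])
        I[k + 1] = I[k] + dt * (beta * S[k] * I[k] - gam * I[k])
    return S, I

u = np.zeros(N + 1)
for _ in range(50):                      # sweep until the control settles
    S, I = simulate(u)
    lamS, lamI = np.zeros(N + 1), np.zeros(N + 1)   # transversality: lam(T)=0
    for k in range(N - 1, -1, -1):       # integrate adjoints backward in time
        dlamS = lamS[k+1] * (beta * I[k+1] + u[k+1]) - lamI[k+1] * beta * I[k+1]
        dlamI = -A + lamS[k+1] * beta * S[k+1] - lamI[k+1] * (beta * S[k+1] - gam)
        lamS[k] = lamS[k + 1] - dt * dlamS
        lamI[k] = lamI[k + 1] - dt * dlamI
    u = 0.5 * (u + np.clip(lamS * S / (2.0 * c), 0.0, umax))  # relaxed update
```

The resulting control stays within its bounds and reduces the cumulative infection burden relative to the uncontrolled epidemic; a symbolic route (as in the paper, via MATLAB) derives the same optimality condition in closed form before discretizing.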

  11. A modified analytical model to study the sensing performance of a flexible capacitive tactile sensor array

    International Nuclear Information System (INIS)

    Liang, Guanhao; Wang, Yancheng; Mei, Deqing; Xi, Kailun; Chen, Zichen

    2015-01-01

    This paper presents a modified analytical model to study the sensing performance of a flexible capacitive tactile sensor array, which utilizes solid polydimethylsiloxane (PDMS) film as the dielectric layer. To predict the deformation of the sensing unit and capacitance changes, each sensing unit is simplified into a three-layer plate structure and divided into central, edge and corner regions. The plate structure and the three regions are studied by the general and modified models, respectively. For experimental validation, the capacitive tactile sensor array with 8  ×  8 (= 64) sensing units is fabricated. Experiments are conducted by measuring the capacitance changes versus applied external forces and compared with the general and modified models’ predictions. For the developed tactile sensor array, the sensitivity predicted by the modified analytical model is 1.25%/N, only 0.8% discrepancy from the experimental measurement. Results demonstrate that the modified analytical model can accurately predict the sensing performance of the sensor array and could be utilized for model-based optimal capacitive tactile sensor array design. (paper)

  12. An analytical and numerical study of solar chimney use for room natural ventilation

    Energy Technology Data Exchange (ETDEWEB)

    Bassiouny, Ramadan; Koura, Nader S.A. [Department of Mechanical Power Engineering and Energy, Minia University, Minia 61111 (Egypt)

    2008-07-01

    The solar chimney concept used for improving room natural ventilation was studied analytically and numerically. The study considered geometrical parameters such as chimney inlet size and width, which are believed to have a significant effect on space ventilation. The numerical analysis was intended to predict the flow pattern in the room as well as in the chimney, which would help in optimizing the design parameters. The results were compared with available published experimental and theoretical data. There was an acceptable match in trend between the present analytical results and the published data for the room air changes per hour, ACH. Further, it was noticed that the chimney width has a more significant effect on ACH than the chimney inlet size. The results showed that the average absorber temperature could be correlated to the intensity as T_w = 3.51 I^0.461, with an acceptable range of approximation error. In addition, the average air exit velocity was found to vary with the intensity as v_ex = 0.013 I^0.4. (author)
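The two reported correlations can be implemented directly as functions of the solar intensity I (W/m²); note the abstract does not state the temperature unit, which is presumably °C.

```python
# Direct implementation of the paper's fitted correlations.
def absorber_temperature(intensity):
    """Average absorber temperature from the correlation T_w = 3.51*I**0.461."""
    return 3.51 * intensity ** 0.461

def exit_velocity(intensity):
    """Average air exit velocity (m/s) from the correlation v_ex = 0.013*I**0.4."""
    return 0.013 * intensity ** 0.4
```

At an intensity of 500 W/m², these give an absorber temperature of about 62 and an exit velocity of about 0.16 m/s.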

  13. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    Science.gov (United States)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far away from the mainland; reefs make up more than 95% of the South Sea area, and most reefs are scattered over sensitive disputed areas. Thus, methods for accurately obtaining reef bathymetry urgently need to be developed. Commonly used methods, including sonar, airborne laser and remote-sensing estimation, are limited by the long distances, large areas and sensitive locations involved. Remote-sensing data provide an effective way to estimate bathymetry over large areas without physical contact, via the relationship between spectral information and bathymetry. Aimed at the water conditions of the South Sea of China, our paper develops a bathymetry estimation method that requires no measured water depth. First, the semi-analytical optimization model of the theoretical interpretation models is studied, with a genetic algorithm used to optimize the model. Meanwhile, an OpenMP parallel computing algorithm is introduced to greatly increase the speed of the semi-analytical optimization model. One island of the South Sea of China is selected as our study area, and measured water depths are used to evaluate the accuracy of the bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm gives good results in our study area, and that the accuracy of the estimated bathymetry in the 0-20 m shallow-water area is acceptable. The semi-analytical optimization model based on a genetic algorithm solves the problem of bathymetry estimation without water depth measurements. Overall, our paper provides a new bathymetry estimation method for sensitive reefs far away from the mainland.
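A toy version of the genetic-algorithm inversion can be sketched for a single band; the one-band shallow-water reflectance model and its constants below are illustrative assumptions, not the paper's multi-band Worldview-2 formulation.

```python
import math
import random

random.seed(42)

# Illustrative one-band shallow-water reflectance model:
# R(z) = R_inf + (R_b - R_inf) * exp(-2*K*z), with deep-water reflectance
# R_inf, bottom reflectance R_b and diffuse attenuation K (assumed constants).
R_INF, R_B, K = 0.02, 0.30, 0.12

def reflectance(depth):
    return R_INF + (R_B - R_INF) * math.exp(-2.0 * K * depth)

def invert_depth(r_obs, pop_size=60, gens=80):
    """Minimize (R(z) - r_obs)**2 over z in [0, 20] m with a simple elitist GA."""
    def fitness(z):
        return (reflectance(z) - r_obs) ** 2
    pop = [random.uniform(0.0, 20.0) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 3]            # keep the best third
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)      # arithmetic crossover ...
            child = 0.5 * (a + b) + random.gauss(0.0, 0.3)   # ... plus mutation
            children.append(min(max(child, 0.0), 20.0))
        pop = elite + children
    return min(pop, key=fitness)
```

In the paper's setting, the GA searches the full parameter set of the semi-analytical model across multiple bands simultaneously, and the per-pixel fits are what the OpenMP parallelization accelerates.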

  14. Design and Optimization of AlN based RF MEMS Switches

    Science.gov (United States)

    Hasan Ziko, Mehadi; Koel, Ants

    2018-05-01

    Radio frequency microelectromechanical system (RF MEMS) switch technology might have the potential to replace semiconductor technology in future communication systems, as well as in communication satellites and wireless and mobile phones. This study explores the possibilities of RF MEMS switch design and optimization with an aluminium nitride (AlN) thin film as the piezoelectric actuation material. Achieving low actuation voltage and high contact force with an optimal geometry, using the piezoelectric effect, is the main motivation for this research. Analytical and numerical modelling of a single-beam-type RF MEMS switch is used to analyse the design parameters and optimize them for minimum actuation voltage and high contact force. An analytical model using isotropic AlN material properties is used to obtain the optimal parameters. The optimized device length, width and thickness are 2000 µm, 500 µm and 0.6 µm, respectively, for the single-beam RF MEMS switch. With the optimal geometry, the actuation voltage and contact force obtained by analytical analysis are less than 2 V and about 100 µN, respectively. Additionally, the single-beam RF MEMS switch is optimized and validated by comparing the analytical and finite element modelling (FEM) analyses.

  15. Analytical challenges in sports drug testing.

    Science.gov (United States)

    Thevis, Mario; Krug, Oliver; Geyer, Hans; Walpurgis, Katja; Baume, Norbert; Thomas, Andreas

    2018-03-01

    Analytical chemistry represents a central aspect of doping controls. Routine sports drug testing approaches are primarily designed to address the question of whether a prohibited substance is present in a doping control sample and whether prohibited methods (for example, blood transfusion or sample manipulation) have been conducted by an athlete. As some athletes have availed themselves of the substantial breadth of research and development in the pharmaceutical arena, proactive and preventive measures are required, such as the early implementation of new drug candidates and corresponding metabolites into routine doping control assays, even though these drug candidates are to date not approved for human use. Beyond this, analytical data are also cornerstones of investigations into atypical or adverse analytical findings, where the overall picture provides ample reason for follow-up studies. Such studies have been of a most diverse nature, and tailored approaches have been required to probe hypotheses and scenarios reported by the involved parties concerning the plausibility and consistency of statements and (analytical) facts. In order to outline the variety of challenges that doping control laboratories face besides providing optimal detection capabilities and analytical comprehensiveness, selected case vignettes involving the follow-up of unconventional adverse analytical findings, urine sample manipulation, drug/food contamination issues, and unexpected biotransformation reactions are discussed.

  16. Analytical approach to cross-layer protocol optimization in wireless sensor networks

    Science.gov (United States)

    Hortos, William S.

    2008-04-01

    In the distributed operations of route discovery and maintenance, strong interaction occurs across mobile ad hoc network (MANET) protocol layers. Quality of service (QoS) requirements of multimedia service classes must be satisfied by the cross-layer protocol, along with minimization of the distributed power consumption at nodes and along routes, subject to battery-limited energy constraints. In previous work by the author, cross-layer interactions in the MANET protocol are modeled in terms of a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Determination of the "best" cross-layer design is carried out using the optimal control of martingale representations of the MVPPs. In contrast to the competitive interaction among nodes in a MANET for multimedia services using limited resources, the interaction among the nodes of a wireless sensor network (WSN) is distributed and collaborative, based on the processing of data from a variety of sensors at nodes to satisfy common mission objectives. Sensor data originates at the nodes at the periphery of the WSN, is successively transported to other nodes for aggregation based on information-theoretic measures of correlation, and is ultimately sent as information to one or more destination (decision) nodes. The "multimedia services" in the MANET model are replaced by multiple types of sensors, e.g., audio, seismic, imaging, thermal, etc., at the nodes; the QoS metrics associated with MANETs become those associated with the quality of fused information flow, i.e., throughput, delay, packet error rate, data correlation, etc. Significantly, the essential analytical approach to MANET cross-layer optimization, now based on the MVPPs for discrete random events occurring in the WSN, can be applied to develop the stochastic characteristics and optimality conditions for cross-layer designs of sensor network protocols. Functional dependencies of WSN performance metrics are described in

  17. a cross-sectional analytic study 2014

    African Journals Online (AJOL)

    Assessment of HIV/AIDS comprehensive correct knowledge among Sudanese university: a cross-sectional analytic study 2014. ... There are limited studies on this topic in Sudan. In this study we investigated the Comprehensive correct ...

  18. Analytical optimization of active bandwidth and quality factor for TOCSY experiments in NMR spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Coote, Paul, E-mail: paul-coote@hms.harvard.edu [Harvard Medical School (United States); Bermel, Wolfgang [Bruker BioSpin GmbH (Germany); Wagner, Gerhard; Arthanari, Haribabu, E-mail: hari@hms.harvard.edu [Harvard Medical School (United States)

    2016-09-15

    Active bandwidth and global quality factor are the two main metrics used to quantitatively compare the performance of TOCSY mixing sequences. Active bandwidth refers to the spectral region over which at least 50 % of the magnetization is transferred via a coupling. Global quality factor scores mixing sequences according to the worst-case transfer over a range of possible mixing times and chemical shifts. Both metrics reward high transfer efficiency away from the main diagonal of a two-dimensional spectrum. They can therefore be used to design mixing sequences that will function favorably in experiments. Here, we develop optimization methods tailored to these two metrics, including precise control of off-diagonal cross peak buildup rates. These methods produce square shaped transfer efficiency profiles, directly matching the desirable properties that the metrics are intended to measure. The optimization methods are analytical, rather than numerical. The two resultant shaped pulses have significantly higher active bandwidth and quality factor, respectively, than all other known sequences. They are therefore highly suitable for use in NMR spectroscopy. We include experimental verification of these improved waveforms on small molecule and protein samples.

  19. Optimization of Thermal Aspects of Friction Stir Welding – Initial Studies Using a Space Mapping Technique

    DEFF Research Database (Denmark)

    Larsen, Anders Astrup; Bendsøe, Martin P.; Schmidt, Henrik Nikolaj Blicher

    2007-01-01

    The aim of this paper is to optimize a thermal model of a friction stir welding process. The optimization is performed using a space mapping technique in which an analytical model is used along with the FEM model to be optimized. The results are compared to traditional gradient-based optimization...

  20. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation

    Science.gov (United States)

    Bergeron, Dominic; Tremblay, A.-M. S.

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in a broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ² with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.
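    The role of the entropy weight α described in contribution (2) can be illustrated on a toy inverse problem: lowering α moves the fit from the information-fitting toward the noise-fitting regime, which shows up as a falling χ². A hedged sketch minimizing χ²/2 − αS by exponentiated gradient (the synthetic kernel, data, and optimizer settings are assumptions for illustration, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
nw, nd = 40, 20
w = np.linspace(0.0, 4.0, nw)
tau = np.linspace(0.1, 2.0, nd)
K = np.exp(-np.outer(tau, w))                 # toy Laplace-like kernel
A_true = np.exp(-0.5 * ((w - 1.5) / 0.3) ** 2)
A_true /= A_true.sum()
sigma = 0.05
d = K @ A_true + sigma * rng.standard_normal(nd)
m = np.full(nw, 1.0 / nw)                     # flat default model

def maxent(alpha, steps=20000, lr=5e-4):
    """Minimize chi^2/2 - alpha*S, with S = sum(A - m - A*log(A/m)),
    by exponentiated gradient steps (keeps the spectrum A positive)."""
    A = m.copy()
    for _ in range(steps):
        r = (d - K @ A) / sigma**2
        grad = -K.T @ r + alpha * np.log(A / m)   # gradient of chi^2/2 - alpha*S
        A = np.clip(A * np.exp(-lr * grad), 1e-30, None)  # positivity guard
    chi2 = np.sum((d - K @ A) ** 2) / sigma**2
    return A, chi2

for alpha in (100.0, 1.0, 0.01):
    A, chi2 = maxent(alpha)
    print(alpha, chi2)    # chi^2 generally falls as alpha is lowered
```

A large α pins the spectrum to the default model (information-fitting); a tiny α lets the 40 parameters chase the 20 noisy data points (noise-fitting), which is the crossover the consistency condition on χ²(α) is designed to detect.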

  1. Trends in Process Analytical Technology: Present State in Bioprocessing.

    Science.gov (United States)

    Jenzsch, Marco; Bell, Christian; Buziol, Stefan; Kepert, Felix; Wegele, Harald; Hakemeyer, Christian

    2017-08-04

    Process analytical technology (PAT), the regulatory initiative for incorporating quality in pharmaceutical manufacturing, is an area of intense research and interest. If PAT is effectively applied to bioprocesses, this can increase process understanding and control, and mitigate the risk from substandard drug products to both manufacturer and patient. To optimize the benefits of PAT, the entire PAT framework must be considered and each element of PAT must be carefully selected, including sensor and analytical technology, data analysis techniques, control strategies and algorithms, and process optimization routines. This chapter discusses the current state of PAT in the biopharmaceutical industry, including several case studies demonstrating the degree of maturity of various PAT tools. Graphical Abstract: Hierarchy of QbD components.

  2. Optimizing RDF Data Cubes for Efficient Processing of Analytical Queries

    DEFF Research Database (Denmark)

    Jakobsen, Kim Ahlstrøm; Andersen, Alex B.; Hose, Katja

    2015-01-01

    data warehouses and data cubes. Today, external data sources are essential for analytics and, as the Semantic Web gains popularity, more and more external sources are available in native RDF. With the recent SPARQL 1.1 standard, performing analytical queries over RDF data sources has finally become...

  3. PRODUCT OPTIMIZATION METHOD BASED ON ANALYSIS OF OPTIMAL VALUES OF THEIR CHARACTERISTICS

    Directory of Open Access Journals (Sweden)

    Constantin D. STANESCU

    2016-05-01

    Full Text Available The paper presents an original method of optimizing products based on the analysis of the optimal values of their characteristics. The optimization method comprises a statistical model and an analytical model. With this original method, the optimal product or material can be obtained easily and quickly.

  4. Analytical optimization of digital subtraction mammography with contrast medium using a commercial unit.

    Science.gov (United States)

    Rosado-Méndez, I; Palma, B A; Brandan, M E

    2008-12-01

    Contrast-medium-enhanced digital mammography (CEDM) is an image subtraction technique which might help unmask lesions embedded in very dense breasts. Previous works have established the feasibility of CEDM and the imperative need for radiological optimization. This work presents an extension of a former analytical formalism to predict the contrast-to-noise ratio (CNR) in subtracted mammograms. The goal is to optimize the radiological parameters available in a clinical mammographic unit (x-ray tube anode/filter combination, voltage, and loading) by maximizing CNR and minimizing total mean glandular dose (D(gT)), simulating the experimental application of an iodine-based contrast medium and image subtraction under dual-energy nontemporal and single- or dual-energy temporal modalities. Total breast-entrance air kerma is limited to a fixed 8.76 mGy (1 R, similar to screening studies). Mathematical expressions obtained from the formalism are evaluated using computed mammographic x-ray spectra attenuated by an adipose/glandular breast containing an elongated structure filled with an iodinated solution in various concentrations. A systematic study of contrast, its associated variance, and CNR for different spectral combinations is performed, concluding with the proposal of optimum x-ray spectra. The linearity between contrast in subtracted images and iodine mass thickness is proven, including the determination of iodine visualization limits based on Rose's detection criterion. Finally, total breast-entrance air kerma is distributed between both images in various proportions in order to maximize the figure of merit CNR²/D(gT). Predicted results indicate the advantage of temporal subtraction (either single- or dual-energy modalities) with optimum parameters corresponding to high-voltage, strongly hardened Rh/Rh spectra. For temporal techniques, CNR was found to depend mostly on the energy of the iodinated image, and thus a reduction in D(gT) could be achieved if the spectral energy
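    The final dose-splitting step, maximizing the figure of merit CNR²/D(gT) over the fraction of entrance kerma allotted to each image, can be illustrated with a generic quadrature-noise sketch (all constants below are illustrative assumptions, not the paper's formalism):

```python
import numpy as np

f = np.linspace(0.01, 0.99, 99)      # fraction of total entrance kerma given to image 1
C = 1.0                              # iodine contrast in the subtracted image (arbitrary units)
s1, s2 = 1.0, 1.5                    # per-unit-dose noise variances of the two images (assumed)
d1, d2 = 1.0, 0.8                    # mean glandular dose per unit entrance kerma (assumed)

var_sub = s1 / f + s2 / (1.0 - f)    # subtraction noise: variances add in quadrature
cnr2 = C**2 / var_sub
dose = f * d1 + (1.0 - f) * d2
fom = cnr2 / dose                    # figure of merit CNR^2 / D(gT)

best = f[np.argmax(fom)]
print(best)                          # for these constants the optimum is interior (around 0.4)
```

Because the subtraction noise diverges when either image receives almost no dose, the figure of merit always peaks at an interior split rather than at the extremes.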

  5. New analytical techniques for traffic management on the basis of system studies in logistics

    Directory of Open Access Journals (Sweden)

    Олександр Павлович Кіркін

    2017-07-01

    Full Text Available In today's market conditions, enterprises must constantly maintain their competitiveness. This is achieved by raising customer service standards and using the latest management techniques. In most cases, enterprises adhere to logistic principles to optimize production. Over time, however, the development of logistics resulted in the emergence of its principal subdivisions: transport, storage, etc. Thus, nowadays there are several parallel methodological developments in the field of logistics and in making up logistics chains and systems at different stages of the life cycle of the goods. System research in the field of warehouse logistics showed that the majority of its analytical management models are based on task conflict. Similar problems of managing traffic flows in transport logistics are solved by methods of queueing theory, graph theory, linear programming, differential equations of state, etc. These methods are no more accurate than the methods of warehouse logistics, involve similarly important assumptions and simplifications, require appropriate mathematical training and knowledge in the field of transport, and sometimes lack a visible correlation with economic performance. New analytical techniques for the management of transportation systems based on task conflict will reduce the time and resources needed for optimization and finding solutions. Methods of warehouse logistics can only be used for continuous transport quantities (intensity, speed, performance, capacity, execution of works, etc.). In the static condition, the search for the optimal service intensity can be carried out with warehouse logistics; in the study of an object in dynamics, it is better to use the transport approach. Some problems, such as supplementing warehouse logistics models with elements of the transport task, remain to be solved

  6. Influence of the faces relative arrangement on the optimal reloading station location and analytical determination of its coordinates

    Directory of Open Access Journals (Sweden)

    V.К. Slobodyanyuk

    2017-04-01

    Full Text Available The purpose of this study is to develop a methodology for determining the optimal location of the rock mass run-of-mine (RoM) stock point and to research the influence of the spatial arrangement of faces on this point. The research includes an overview of current studies in which algorithms for the Fermat-Torricelli-Steiner point are used to minimize logistic processes. The methods of mathematical optimization and analytical geometry were applied, and formulae for determining the coordinates of the optimal point for four faces were established using the latter methods. Mining technology with reloading stations is rather common at deep iron ore pits. In most cases, when deciding on the location of the RoM stock, its high-altitude position in the space of the pit is primarily taken into account. However, the location of the reloading station in a layout also has a significant influence on the technical and economic parameters of open-pit mining operations. The traditional approach, which considers the center of gravity as the optimal point for the RoM stock location, does not guarantee minimum haulage. In mathematics, the Fermat-Torricelli point, which provides the minimum total distance to the vertices of a triangle, is known. It is shown that minimum haulage is provided when the RoM stock location and the Fermat-Torricelli point coincide. In terms of open-pit mining operations, the development of a method that determines the optimal RoM stock location for a working area from the known coordinates of distinguished points, on the basis of new weight factors, is of particular practical importance. A two-stage solution to the problem of determining the rational RoM stock location (with minimal transport work) for any number of faces is proposed. Such an optimal RoM stock location reduces the transport work by 10–20 %.
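    For any number of faces, the Fermat-Torricelli point generalizes to the weighted geometric median, which has no closed form in general but is classically computed by Weiszfeld's iteration. A minimal sketch (the coordinates and haulage weights are illustrative, not the paper's data):

```python
import math

def weiszfeld(points, weights=None, iters=200, eps=1e-12):
    """Weighted geometric median (Fermat-Torricelli point) via Weiszfeld's iteration."""
    if weights is None:
        weights = [1.0] * len(points)
    # start from the weighted centroid (the traditional "center of gravity" guess)
    wsum = sum(weights)
    x = sum(w * px for w, (px, _) in zip(weights, points)) / wsum
    y = sum(w * py for w, (_, py) in zip(weights, points)) / wsum
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for w, (px, py) in zip(weights, points):
            d = math.hypot(x - px, y - py)
            if d < eps:          # simple safeguard: iterate landed on a face
                return (px, py)
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        x, y = num_x / den, num_y / den
    return (x, y)

# four faces at the corners of a square, equal haulage weights:
faces = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
print(weiszfeld(faces))  # approximately (2.0, 2.0) by symmetry
```

The weighted variant reflects unequal haulage volumes per face; with a sufficiently dominant weight on one face, the optimal point coincides with that face itself, which is exactly why the centroid can be a poor proxy for the minimum-haulage location.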

  7. A study of optimization problem for amplify-and-forward relaying over weibull fading channels

    KAUST Repository

    Ikki, Salama Said

    2010-09-01

    This paper addresses the power allocation and relay positioning problems in amplify-and-forward cooperative networks operating in Weibull fading environments. We study adaptive power allocation (PA) with fixed relay location, optimal relay location with fixed power allocation, and joint optimization of the PA and relay location under a total transmit power constraint, in order to minimize the outage probability and average error probability at high signal-to-noise ratios (SNR). Analytical results are validated by numerical simulations, and comparisons between the different optimization schemes and their performance are provided. Results show that optimum PA brings only a coding gain, while optimum relay location yields diversity gains in addition to coding gains; joint optimization improves both the diversity and coding gains. Furthermore, results illustrate that the analyzed adaptive algorithms outperform uniform schemes. ©2010 IEEE.

  8. Pre-analytical and post-analytical evaluation in the era of molecular diagnosis of sexually transmitted diseases: cellularity control and internal control

    Directory of Open Access Journals (Sweden)

    Loria Bianchi

    2014-06-01

    Full Text Available Background. The increase in molecular tests performed on DNA extracted from various biological materials should not proceed without adequate standardization of the pre-analytical and post-analytical phases. Materials and Methods. The aim of this study was to evaluate the role of an internal control (IC) in standardizing the pre-analytical phase and the role of a cellularity control (CC) in the suitability evaluation of biological matrices, and their influence on false negative results. 120 cervical swabs (CS) were pre-treated and extracted following 3 different protocols. Extraction performance was evaluated by amplification of: the IC, added to each extraction mix; the human gene HPRT1 (CC) with RT-PCR to quantify sample cellularity; and the L1 region of HPV with SPF10 primers. 135 urine samples, 135 urethral swabs, 553 CS and 332 ThinPrep swabs (TP) were tested for C. trachomatis (CT) and U. parvum (UP) with RT-PCR and for HPV by endpoint-PCR. Samples were also tested for cellularity. Results. The extraction protocol with the highest average cellularity (Ac)/sample showed the lowest number of samples with inhibitors; the highest HPV positivity was achieved by the protocol with the greatest Ac/PCR. CS and TP under 300,000 cells/sample showed a significant decrease in UP (P<0.01) and HPV (P<0.005) positivity. Female urine samples under 40,000 cells/mL were inadequate to detect UP (P<0.05). Conclusions. Our data show that the IC and CC allow optimization of the pre-analytical phase, with an increase in analytical quality. Cellularity/sample allows better evaluation of sample adequacy, crucial to avoid false negative results, while cellularity/PCR allows better optimization of PCR amplification. Further data are required to define the optimal cut-off for result normalization.

  9. Theory of net analyte signal vectors in inverse regression

    DEFF Research Database (Denmark)

    Bro, R.; Andersen, Charlotte Møller

    2003-01-01

    The net analyte signal and the net analyte signal vector are useful measures in building and optimizing multivariate calibration models. In this paper a theory for their use in inverse regression is developed. The theory of net analyte signal was originally derived from classical least squares...

  10. Analytically solvable chaotic oscillator based on a first-order filter

    Energy Technology Data Exchange (ETDEWEB)

    Corron, Ned J.; Cooper, Roy M.; Blakely, Jonathan N. [Charles M. Bowden Laboratory, Aviation and Missile Research, Development and Engineering Center, U.S. Army RDECOM, Redstone Arsenal, Alabama 35898 (United States)

    2016-02-15

    A chaotic hybrid dynamical system is introduced and its analytic solution is derived. The system is described as an unstable first order filter subject to occasional switching of a set point according to a feedback rule. The system qualitatively differs from other recently studied solvable chaotic hybrid systems in that the timing of the switching is regulated by an external clock. The chaotic analytic solution is an optimal waveform for communications in noise when a resistor-capacitor-integrate-and-dump filter is used as a receiver. As such, these results provide evidence in support of a recent conjecture that the optimal communication waveform for any stable infinite-impulse response filter is chaotic.

  11. Multi-Objective Design Optimization of an Over-Constrained Flexure-Based Amplifier

    Directory of Open Access Journals (Sweden)

    Yuan Ni

    2015-07-01

    Full Text Available The design optimization for enhancement of the micro-scale performance of a manipulator, based on analytical models, is investigated in this paper. By utilizing the established uncanonical linear homogeneous equations, the quasi-static analytical model of the micro-manipulator is built, and the theoretical calculation results are tested by FEA simulations. To provide a theoretical basis for a micro-manipulator used in high-precision engineering applications, this paper investigates the modal properties based on the analytical model. Based on the finite element method with multipoint constraint equations, the model is built and the results match the simulation well. The parametric studies that follow show that the influences of the design objectives on one another are complicated. Consequently, multi-objective optimization using the derived analytical models is carried out to find the optimal solutions for the manipulator. In addition, the inner relationships among these design objectives during the optimization process are discussed.

  12. Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO

    Science.gov (United States)

    Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.

    2016-01-01

    A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
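    The validation of analytic derivatives against finite-difference approximations mentioned above can be sketched generically (a toy function stands in for the CEA thermodynamics):

```python
import math

def f(x):
    return math.exp(x) * math.sin(x)

def df_analytic(x):
    # product rule: d/dx [e^x sin x] = e^x (sin x + cos x)
    return math.exp(x) * (math.sin(x) + math.cos(x))

def df_central(x, h=1e-6):
    # second-order central finite difference approximation
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 0.7
print(df_analytic(x), df_central(x))  # agree closely; the analytic value is exact
```

The finite-difference value carries truncation and round-off error and costs extra function evaluations per design variable, which is the accuracy/speed gap that motivates analytic (and adjoint) derivatives in gradient-based optimization.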

  13. Analytical study on model tests of soil-structure interaction

    International Nuclear Information System (INIS)

    Odajima, M.; Suzuki, S.; Akino, K.

    1987-01-01

    Since nuclear power plant (NPP) structures are stiff, heavy and partly embedded, the behavior of those structures during an earthquake depends on the vibrational characteristics of not only the structure but also the soil. Accordingly, seismic response analyses considering the effects of soil-structure interaction (SSI) are extremely important for the seismic design of NPP structures. Many studies have been conducted on analytical techniques concerning SSI, and various analytical models and approaches have been proposed. Based on these studies, SSI analytical codes (computer programs) for NPP structures have been improved at JINS (Japan Institute of Nuclear Safety), one of the departments of NUPEC (Nuclear Power Engineering Test Center) in Japan. These codes are a soil-spring lumped-mass code (SANLUM), a finite element code (SANSSI), and a thin layered element code (SANSOL). In proceeding with the improvement of the analytical codes, in-situ large-scale forced-vibration SSI tests were performed using models simulating light water reactor buildings, and simulation analyses were performed to verify the codes. This paper presents an analytical study to demonstrate the usefulness of the codes.

  14. A decision support system using analytical hierarchy process (AHP) for the optimal environmental reclamation of an open-pit mine

    Science.gov (United States)

    Bascetin, A.

    2007-04-01

    The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning. It also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assess the priorities based on the inputs of several specialists from different functional areas within the mine company. The analytical hierarchy process (AHP) can be very useful in involving several decision makers with different conflicting objectives in order to arrive at a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located in the Seyitomer region of Turkey. The use of the proposed model indicates that it can be applied to improve group decision making in selecting a reclamation method that satisfies optimal specifications. It is also found that the decision process is systematic and that using the proposed model can reduce the time taken to select an optimal method.
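    The AHP step of such a model derives criterion weights as the principal eigenvector of a pairwise-comparison matrix and screens the judgments with a consistency ratio. A minimal sketch (the comparison values and criteria are hypothetical, not the paper's case study):

```python
import numpy as np

# Saaty's random consistency index for matrix sizes n = 1..9
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A):
    """Priority vector (principal eigenvector) and consistency ratio of a pairwise matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                         # normalize priorities to sum to 1
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)         # consistency index
    cr = ci / RI[n]                      # consistency ratio; CR < 0.1 is acceptable
    return w, cr

# hypothetical comparison of three reclamation criteria (e.g., cost, environment, land use)
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print(w, cr)
```

A perfectly consistent matrix (every entry a_ij = w_i/w_j) has λ_max = n and CR = 0; real expert judgments deviate, and CR above about 0.1 signals that the pairwise comparisons should be revisited before the weights are trusted.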

  15. Analytical optimization of demand management strategies across all urban water use sectors

    Science.gov (United States)

    Friedman, Kenneth; Heaney, James P.; Morales, Miguel; Palenchar, John

    2014-07-01

    An effective urban water demand management program can greatly influence both peak and average demand and therefore long-term water supply and infrastructure planning. Although a theoretical framework for evaluating residential indoor demand management has been well established, little has been done to evaluate other water use sectors, such as residential irrigation, in a compatible manner that allows integrating these results into an overall solution. This paper presents a systematic procedure to evaluate the optimal blend of single family residential irrigation demand management strategies to achieve a specified goal, based on performance functions derived from parcel-level tax assessor's data linked to customer-level monthly water billing data. This framework is then generalized to apply to any urban water sector, as exponential functions can be fit to all resulting cumulative water savings functions. Two alternative formulations are presented: maximize net benefits, or minimize total costs subject to satisfying a target water savings. Explicit analytical solutions are presented for both formulations based on appropriate exponential best fits of performance functions. A direct result of this solution is the dual variable, which represents the marginal cost of water saved at a specified target water savings goal. A case study of 16,303 single family irrigators in Gainesville Regional Utilities, utilizing high-quality tax assessor and monthly billing data along with parcel-level GIS data, provides an illustrative example of these techniques. Spatial clustering of targeted homes can easily be performed in GIS to identify priority demand management areas.
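    With exponential best fits, the minimize-cost formulation has the structure described above: at the optimum, every active sector's marginal cost equals the dual variable λ, which can be located by bisection on the savings target. A hedged numeric sketch (the cost-curve coefficients are made up for illustration, not the Gainesville data):

```python
import math

# hypothetical exponential cost curves f_i(s) = a_i * (exp(b_i * s) - 1):
# cost to save s units of water in sector i; coefficients are illustrative
sectors = [(50.0, 0.08), (120.0, 0.03), (30.0, 0.15)]   # (a_i, b_i)

def savings_at(lam):
    """Per-sector savings when each active sector's marginal cost a*b*exp(b*s) equals lam."""
    return [max(0.0, math.log(lam / (a * b)) / b) for a, b in sectors]

def allocate(target, lo=1e-6, hi=1e9):
    """Bisect (in log space) on the dual variable lam until total savings hit the target."""
    for _ in range(200):
        lam = math.sqrt(lo * hi)
        if sum(savings_at(lam)) < target:
            lo = lam
        else:
            hi = lam
    return lam, savings_at(lam)

lam, s = allocate(60.0)
print(lam, s)   # lam is the marginal cost of the last unit of water saved at the target
```

Because total savings is monotone increasing in λ, the bisection always converges, and the returned λ is exactly the dual variable interpretation given in the abstract: the marginal cost of water saved at the specified target.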

  16. HPTA: High-Performance Text Analytics

    OpenAIRE

    Vandierendonck, Hans; Murphy, Karen; Arif, Mahwish; Nikolopoulos, Dimitrios S.

    2017-01-01

    One of the main targets of data analytics is unstructured data, which primarily involves textual data. High-performance processing of textual data is non-trivial. We present the HPTA library for high-performance text analytics. The library helps programmers to map textual data to a dense numeric representation, which can be handled more efficiently. HPTA encapsulates three performance optimizations: (i) efficient memory management for textual data, (ii) parallel computation on associative dat...

  17. Study of the Analytical Conditions for the Determination of Cadmium in Coal Fly Ashes by GFAAS with evaluation of several matrix modifiers

    International Nuclear Information System (INIS)

    Rucandio, M.I.; Petit, M.D.

    1998-01-01

    A new method for the determination of cadmium in coal fly ash samples by Graphite Furnace Atomic Absorption Spectrometry (GFAAS) has been developed. Analytical conditions and different instrumental parameters have been optimized. In a first step, several types of matrix modifiers were tested, and a mixture of 2% NH4H2PO4 with 0.4% Mg(NO3)2 in 0.5 N HNO3 was selected, since it provides the highest sensitivity. In a second step, an optimization of several conditions using the selected modifier was carried out, such as ashing and atomization temperatures, heating rate, etc. The influence of the use of an L'vov platform on the analytical and background signals has been studied, showing a significant decrease in the background signal, while the net absorbance remained similar to that obtained in the absence of the platform. Under the optimal conditions, the direct method with standard samples provides cadmium concentrations consistent with those obtained using the standard addition method. (Author) 18 refs

  18. Many-objective optimization and visual analytics reveal key trade-offs for London's water supply

    Science.gov (United States)

    Matrosov, Evgenii S.; Huskova, Ivana; Kasprzyk, Joseph R.; Harou, Julien J.; Lambert, Chris; Reed, Patrick M.

    2015-12-01

    In this study, we link a water resource management simulator to multi-objective search to reveal the key trade-offs inherent in planning a real-world water resource system. We consider new supplies and demand management (conservation) options while seeking to elucidate the trade-offs between the best portfolios of schemes to satisfy projected water demands. Alternative system designs are evaluated using performance measures that minimize capital and operating costs and energy use while maximizing resilience, engineering and environmental metrics, subject to supply reliability constraints. Our analysis shows many-objective evolutionary optimization coupled with state-of-the art visual analytics can help planners discover more diverse water supply system designs and better understand their inherent trade-offs. The approach is used to explore future water supply options for the Thames water resource system (including London's water supply). New supply options include a new reservoir, water transfers, artificial recharge, wastewater reuse and brackish groundwater desalination. Demand management options include leakage reduction, compulsory metering and seasonal tariffs. The Thames system's Pareto approximate portfolios cluster into distinct groups of water supply options; for example implementing a pipe refurbishment program leads to higher capital costs but greater reliability. This study highlights that traditional least-cost reliability constrained design of water supply systems masks asset combinations whose benefits only become apparent when more planning objectives are considered.

  19. Analytic hierarchy process-based approach for selecting a Pareto-optimal solution of a multi-objective, multi-site supply-chain planning problem

    Science.gov (United States)

    Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi

    2017-07-01

    The current manufacturing environment has changed from traditional single-plant to multi-site supply chain where multiple plants are serving customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer satisfaction demand level is developed. The proposed solution approach yields to a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and the proposed approach are discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.

  20. A characteristic study of CCF modeling techniques and optimization of CCF defense strategies

    International Nuclear Information System (INIS)

    Kim, Min Chull

    2000-02-01

    Common Cause Failures (CCFs) are among the major contributors to risk and core damage frequency (CDF) from operating nuclear power plants (NPPs). Our study on CCF focused on the following aspects: 1) a characteristic study of CCF modeling techniques and 2) development of the optimal CCF defense strategy. Firstly, the characteristics of CCF modeling techniques were studied through a sensitivity study of CCF occurrence probability with respect to system redundancy. The modeling techniques considered in this study include those most widely used worldwide, i.e., the beta factor, MGL, alpha factor, and binomial failure rate models. We found that the MGL and alpha factor models are essentially identical in terms of the CCF probability. Secondly, in the study of CCF defense, the various methods identified in previous studies for defending against CCFs were classified into five categories. Based on these categories, we developed a generic method by which the optimal CCF defense strategy can be selected. The method is not only qualitative but also quantitative in nature: the selection of the optimal strategy among candidates is based on the analytic hierarchy process (AHP). We applied this method to two motor-driven valves for containment sump isolation in the Ulchin 3 and 4 nuclear power plants. The result indicates that the method for developing an optimal CCF defense strategy is effective

  1. On finding the analytic dependencies of the external field potential on the control function when optimizing the beam dynamics

    Science.gov (United States)

    Ovsyannikov, A. D.; Kozynchenko, S. A.; Kozynchenko, V. A.

    2017-12-01

    When developing a particle accelerator for generating high-precision beams, the design of the injection system is important, because it largely determines the output characteristics of the beam. In the present paper we consider injection systems consisting of electrodes with given potentials. The design of such systems requires simulation of the beam dynamics in electrostatic fields. For external field simulation we use a new approach, proposed by A.D. Ovsyannikov, which is based on analytical approximations, or the finite difference method, taking into account the real geometry of the injection system. Software for solving the problems of beam dynamics simulation and optimization in the injection system for non-relativistic beams has been developed. Beam dynamics and electric field simulations of the injection system, using both the analytical approach and the finite difference method, have been carried out, and the results are presented in this paper.
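    The finite difference approach to the external electrostatic field amounts to solving Laplace's equation on a grid with the electrode potentials imposed as Dirichlet boundary values. A minimal 2-D Jacobi-iteration sketch (a toy parallel-plate geometry, not the injector's electrodes):

```python
import numpy as np

def solve_laplace(v, fixed, iters=5000):
    """Jacobi iteration for Laplace's equation; `fixed` marks Dirichlet (electrode) nodes."""
    v = v.copy()
    for _ in range(iters):
        # five-point stencil: each free node becomes the average of its four neighbors
        vn = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                     np.roll(v, 1, 1) + np.roll(v, -1, 1))
        v[~fixed] = vn[~fixed]          # only non-electrode nodes are updated
    return v

n = 21
v = np.zeros((n, n))
fixed = np.zeros((n, n), dtype=bool)
v[:, 0], v[:, -1] = 0.0, 1.0            # grounded electrode at x=0, 1 V electrode at x=L
fixed[:, 0] = fixed[:, -1] = True
fixed[0, :] = fixed[-1, :] = True       # clamp top/bottom rows to the linear ramp
v[0, :] = v[-1, :] = np.linspace(0.0, 1.0, n)

v = solve_laplace(v, fixed)
print(v[n // 2, n // 2])   # approximately 0.5: the potential varies linearly between the plates
```

For this plate geometry the exact solution is linear in x, so the converged grid values can be checked directly; for a real electrode layout the same loop applies with `fixed` and the boundary values set from the electrode geometry (Jacobi is the simplest choice here; SOR or multigrid converge far faster).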

  2. Laser: a Tool for Optimization and Enhancement of Analytical Methods

    Energy Technology Data Exchange (ETDEWEB)

    Preisler, Jan [Iowa State Univ., Ames, IA (United States)

    1997-01-01

    In this work, we use lasers to enhance the possibilities of laser desorption methods and to optimize the coating procedure for capillary electrophoresis (CE). We use several different instrumental arrangements to characterize matrix-assisted laser desorption (MALD) at atmospheric pressure and in vacuum. In imaging mode, a 488-nm argon-ion laser beam is deflected by two acousto-optic deflectors to scan plumes desorbed at atmospheric pressure via absorption. All absorbing species, including neutral molecules, are monitored. Interesting features, e.g. differences between the initial plume and subsequent plumes desorbed from the same spot, or the formation of two plumes from one laser shot, are observed. Total plume absorbance can be correlated with the acoustic signal generated by the desorption event. A model equation for the plume velocity as a function of time is proposed. Alternatively, the use of a static laser beam for observation enables reliable determination of plume velocities even when they are very high. Static scattering detection reveals the negative influence of particle spallation on the MS signal. Ion formation during MALD was monitored using 193-nm light to photodissociate a portion of the insulin ion plume. These results define the optimal conditions for desorbing analytes from matrices, as opposed to achieving a compromise between efficient desorption and efficient ionization as is practiced in mass spectrometry. In the CE experiment, we examined changes in a poly(ethylene oxide) (PEO) coating by continuously monitoring the electroosmotic flow (EOF) in a fused-silica capillary during electrophoresis. An imaging CCD camera was used to follow the motion of a fluorescent neutral marker zone, excited by the 488-nm Ar-ion laser, along the length of the capillary. The PEO coating was shown to reduce the velocity of EOF by more than an order of magnitude compared to a bare capillary at pH 7.0. The coating protocol was important, especially at an intermediate pH of 7.7. 
The increase of p

  3. An analytical study of the Q(s, S) policy applied to the joint replenishment problem

    DEFF Research Database (Denmark)

    Nielsen, Christina; Larsen, Christian

    2005-01-01

    be considered supply chain management problems. The paper uses Markov decision theory to work out an analytical solution procedure to evaluate the costs of a particular Q(s,S) policy, and thereby a method for computing the optimal Q(s,S) policy, under the assumption that demands follow a Poisson Process...

  4. An analytical study of the Q(s,S) policy applied on the joint replenishment problem

    DEFF Research Database (Denmark)

    Nielsen, Christina; Larsen, Christian

    2002-01-01

    be considered supply chain management problems. The paper uses Markov decision theory to work out an analytical solution procedure to evaluate the costs of a particular Q(s,S) policy, and thereby a method to compute the optimal Q(s,S) policy, under the assumption that demands follow a Poisson process...

  5. Analytical Investigation of Beam Deformation Equation using Perturbation, Homotopy Perturbation, Variational Iteration and Optimal Homotopy Asymptotic Methods

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Mowlaee, P.; Barari, Amin

    2011-01-01

    The beam deformation equation has very wide applications in structural engineering. As a differential equation, it has its own problems concerning existence, uniqueness and methods of solution. Often, the original forms of governing differential equations used in engineering problems are simplified, and this process produces noise in the obtained answers. This paper deals with the solution of the second-order differential equation governing beam deformation using four analytical approximate methods, namely the Perturbation Method, the Homotopy Perturbation Method (HPM), the Variational Iteration Method (VIM) and the Optimal Homotopy Asymptotic Method (OHAM). The comparisons of the results reveal that these methods are very effective, convenient and quite accurate for systems of non-linear differential equations.

  6. Analytical study in 1D nuclear waste migration

    International Nuclear Information System (INIS)

    Perez Guerrero, Jesus S.; Heilbron Filho, Paulo L.; Romani, Zrinka V.

    1999-01-01

    The simulation of nuclear waste migration phenomena is governed mainly by a diffusive-convective equation that includes the effects of hydrodynamic dispersion (mechanical dispersion and molecular diffusion), radioactive decay and chemical interaction. For some special problems (depending on the boundary conditions, and when the domain is considered infinite or semi-infinite) an analytical solution may be obtained using classical analytical methods such as the Laplace Transform or separation of variables. The hybrid Generalized Integral Transform Technique (GITT) is a powerful tool that can be applied to solve diffusive-convective linear problems to obtain formal analytical solutions. The aim of this work is to illustrate that the GITT may be used to obtain a formal analytical solution for the study of migration of radioactive waste in saturated porous media. A test case considering the 241Am radionuclide is presented. (author)

  7. Optimization study on inductive-resistive circuit for broadband piezoelectric energy harvesters

    Directory of Open Access Journals (Sweden)

    Ting Tan

    2017-03-01

    Full Text Available The performance of a cantilever-beam piezoelectric energy harvester is usually analyzed with a pure resistive circuit. The optimal performance of such a vibration-based energy harvesting system is limited by the narrow bandwidth around its modified natural frequency. For broadband piezoelectric energy harvesting, series and parallel inductive-resistive circuits are introduced. The electromechanically coupled distributed-parameter models for such systems under harmonic base excitations are decoupled with the modified natural frequency and electrical damping to account for the coupling effect. Analytical solutions of the harvested power and tip displacement for the electromechanically decoupled model are confirmed with numerical solutions of the coupled model. The optimal performance of piezoelectric energy harvesting with inductive-resistive circuits is revealed theoretically to be a constant maximal power at any excitation frequency. This is achieved by matching the modified natural frequency with the excitation frequency and equating the electrical damping to the mechanical damping. The inductance and load resistance should be simultaneously tuned to their optimal values, which may not be applicable for very high electromechanical coupling systems when the excitation frequency is higher than their natural frequencies. With identical optimal performance, the series inductive-resistive circuit is recommended for relatively small load resistance, while the parallel inductive-resistive circuit is suggested for relatively large load resistance. This study provides a simplified optimization method for broadband piezoelectric energy harvesters with inductive-resistive circuits.
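The optimum stated in this record (electrical damping equal to mechanical damping when the excitation sits at the natural frequency) can be checked numerically on the classical lumped-parameter base-excited harvester model. Note the Williams-Yates form below is an assumption for illustration, not the paper's distributed-parameter model, and the parameter values are invented:

```python
import numpy as np

# Hedged sketch: average harvested power of a 1-DOF base-excited harvester
# at resonance, P = m*zeta_e*omega^3*Y^2 / (4*(zeta_m + zeta_e)^2).
# Maximizing over the electrical damping ratio zeta_e recovers zeta_e = zeta_m.
m, omega, Y, zeta_m = 1e-3, 2 * np.pi * 50.0, 1e-3, 0.02   # kg, rad/s, m, -

def power_at_resonance(zeta_e):
    """Average harvested power when excitation frequency equals natural frequency."""
    return m * zeta_e * omega**3 * Y**2 / (4.0 * (zeta_m + zeta_e)**2)

zeta_e = np.linspace(1e-4, 0.2, 20000)
P = power_at_resonance(zeta_e)
zeta_opt = zeta_e[np.argmax(P)]
print(zeta_opt)  # numerically close to zeta_m = 0.02
```

Differentiating the expression confirms the grid result: dP/dzeta_e vanishes exactly at zeta_e = zeta_m.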

  8. Optimization study on inductive-resistive circuit for broadband piezoelectric energy harvesters

    Science.gov (United States)

    Tan, Ting; Yan, Zhimiao

    2017-03-01

    The performance of a cantilever-beam piezoelectric energy harvester is usually analyzed with a pure resistive circuit. The optimal performance of such a vibration-based energy harvesting system is limited by the narrow bandwidth around its modified natural frequency. For broadband piezoelectric energy harvesting, series and parallel inductive-resistive circuits are introduced. The electromechanically coupled distributed-parameter models for such systems under harmonic base excitations are decoupled with the modified natural frequency and electrical damping to account for the coupling effect. Analytical solutions of the harvested power and tip displacement for the electromechanically decoupled model are confirmed with numerical solutions of the coupled model. The optimal performance of piezoelectric energy harvesting with inductive-resistive circuits is revealed theoretically to be a constant maximal power at any excitation frequency. This is achieved by matching the modified natural frequency with the excitation frequency and equating the electrical damping to the mechanical damping. The inductance and load resistance should be simultaneously tuned to their optimal values, which may not be applicable for very high electromechanical coupling systems when the excitation frequency is higher than their natural frequencies. With identical optimal performance, the series inductive-resistive circuit is recommended for relatively small load resistance, while the parallel inductive-resistive circuit is suggested for relatively large load resistance. This study provides a simplified optimization method for broadband piezoelectric energy harvesters with inductive-resistive circuits.

  9. Transaction fees and optimal rebalancing in the growth-optimal portfolio

    OpenAIRE

    Yu Feng; Matus Medo; Liang Zhang; Yi-Cheng Zhang

    2010-01-01

    The growth-optimal portfolio strategy pioneered by Kelly is based on constant portfolio rebalancing, which makes it sensitive to transaction fees. We examine the effect of fees on an example of a risky asset with a binary return distribution and show that the fees may give rise to an optimal period of portfolio rebalancing. The optimal period is found analytically in the case of lognormal returns. This result is subsequently generalized and numerically verified for broad return di...
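The growth-optimal criterion underlying this record (choose the invested fraction that maximizes expected log-growth) can be sketched for the fee-free binary-return case; the paper's fee analysis and rebalancing-period result are not reproduced here, and all numbers are illustrative:

```python
import numpy as np

# Hedged sketch of the fee-free Kelly criterion for a binary-return asset:
# win probability p with gross multipliers (1 + f*gain) / (1 - f*loss).
p, gain, loss = 0.6, 1.0, 1.0   # illustrative bet parameters

def expected_log_growth(f):
    """Expected log-growth per period when investing fraction f."""
    return p * np.log(1 + f * gain) + (1 - p) * np.log(1 - f * loss)

f_grid = np.linspace(0.0, 0.99, 100000)
f_star = f_grid[np.argmax(expected_log_growth(f_grid))]
# Analytic Kelly fraction for this symmetric even-odds bet: f* = p - (1-p) = 0.2
print(f_star)
```

Transaction fees penalize every rebalancing trade, which is what gives rise to the optimal rebalancing period studied in the paper.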

  10. Application of X-ray fluorescence analytical techniques in phytoremediation and plant biology studies

    International Nuclear Information System (INIS)

    Necemer, Marijan; Kump, Peter; Scancar, Janez; Jacimovic, Radojko; Simcic, Jurij; Pelicon, Primoz; Budnar, Milos; Jeran, Zvonka; Pongrac, Paula; Regvar, Marjana; Vogel-Mikus, Katarina

    2008-01-01

    Phytoremediation is an emerging technology that employs the use of higher plants for the clean-up of contaminated environments. Progress in the field is, however, handicapped by limited knowledge of the biological processes involved in plant metal uptake, translocation, tolerance and plant-microbe-soil interactions; a better understanding of the basic biological mechanisms involved in plant/microbe/soil/contaminant interactions would therefore allow further optimization of phytoremediation technologies. In view of the needs of global environmental protection, it is important that in phytoremediation and plant biology studies the analytical procedures for elemental determination in plant tissues and soil be fast and cheap, with simple sample preparation, and of adequate accuracy and reproducibility. The aim of this study was therefore to present the main characteristics, sample preparation protocols and applications of X-ray fluorescence-based analytical techniques (energy dispersive X-ray fluorescence spectrometry-EDXRF, total reflection X-ray fluorescence spectrometry-TXRF and micro-proton induced X-ray emission-micro-PIXE). Element concentrations in plant leaves from metal-polluted and non-polluted sites, as well as standard reference materials, were analyzed by the mentioned techniques, and additionally by instrumental neutron activation analysis (INAA) and atomic absorption spectrometry (AAS). The results were compared and critically evaluated in order to assess the performance and capability of X-ray fluorescence-based techniques in phytoremediation and plant biology studies. EDXRF is recommended as suitable for the analysis of large numbers of samples, because it is multi-elemental, requires only simple preparation of sample material, and is analytically comparable to the most frequently used instrumental chemical techniques. 
TXRF is comparable to FAAS in sample preparation, but relative to AAS it is fast, sensitive and

  11. Analytical and Numerical Studies of Sloshing in Tanks

    Energy Technology Data Exchange (ETDEWEB)

    Solaas, F

    1996-12-31

    For oil cargo ship tanks and liquid natural gas carriers, the dimensions of the tanks are often such that the highest resonant sloshing periods and the ship motions are in the same period range, which may cause violent resonant sloshing of the liquid. In this doctoral thesis, linear and non-linear analytical potential theory solutions of the sloshing problem are studied for a two-dimensional rectangular tank and a vertical circular cylindrical tank, using perturbation technique for the non-linear case. The tank is forced to oscillate harmonically with small amplitudes of sway with frequency in the vicinity of the lowest natural frequency of the fluid inside the tank. The method is extended to other tank shapes using a combined analytical and numerical method. A boundary element numerical method is used to determine the eigenfunctions and eigenvalues of the problem. These are used in the non-linear analytical free surface conditions, and the velocity potential and free surface elevation for each boundary value problem in the perturbation scheme are determined by the boundary element method. Both the analytical method and the combined analytical and numerical method are restricted to tanks with vertical walls in the free surface. The suitability of a commercial programme, FLOW-3D, to estimate sloshing is studied. It solves the Navier-Stokes equations by the finite difference method. The free surface as function of time is traced using the fractional volume of fluid method. 59 refs., 54 figs., 37 tabs.

  12. Analytical and Numerical Studies of Sloshing in Tanks

    Energy Technology Data Exchange (ETDEWEB)

    Solaas, F.

    1995-12-31

    For oil cargo ship tanks and liquid natural gas carriers, the dimensions of the tanks are often such that the highest resonant sloshing periods and the ship motions are in the same period range, which may cause violent resonant sloshing of the liquid. In this doctoral thesis, linear and non-linear analytical potential theory solutions of the sloshing problem are studied for a two-dimensional rectangular tank and a vertical circular cylindrical tank, using perturbation technique for the non-linear case. The tank is forced to oscillate harmonically with small amplitudes of sway with frequency in the vicinity of the lowest natural frequency of the fluid inside the tank. The method is extended to other tank shapes using a combined analytical and numerical method. A boundary element numerical method is used to determine the eigenfunctions and eigenvalues of the problem. These are used in the non-linear analytical free surface conditions, and the velocity potential and free surface elevation for each boundary value problem in the perturbation scheme are determined by the boundary element method. Both the analytical method and the combined analytical and numerical method are restricted to tanks with vertical walls in the free surface. The suitability of a commercial programme, FLOW-3D, to estimate sloshing is studied. It solves the Navier-Stokes equations by the finite difference method. The free surface as function of time is traced using the fractional volume of fluid method. 59 refs., 54 figs., 37 tabs.

  13. Perhitungan Value at Risk Pada Portfolio Optimal: Studi Perbandingan Saham Syariah dan Saham Konvensional

    Directory of Open Access Journals (Sweden)

    Sri Astuti Heryanti

    2017-05-01

    Full Text Available The aim of this study was to obtain empirical evidence about the difference in risk level when investing in Islamic versus conventional stocks, using Value at Risk (VaR). The objects of research include stocks consistently listed in the Jakarta Islamic Index and LQ45. The analytical method used in this research is quantitative analysis consisting of the construction of the optimal portfolio by the Markowitz method, calculation of VaR, and testing the differences with the independent-sample t-test. This study indicated that the VaR of every stock can be reduced by diversification through the construction of an optimal portfolio. Based on the independent-sample t-test, there is no significant difference between the VaR of Islamic stocks and that of conventional stocks.
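The kind of figure compared in this record can be sketched as a parametric (variance-covariance) one-day VaR for a small portfolio. All inputs below are illustrative assumptions, not the JII/LQ45 data:

```python
import numpy as np

# Hedged sketch of parametric VaR: loss not exceeded with 95% confidence,
# under a normal-returns assumption. All numbers are invented for illustration.
mu = np.array([0.0004, 0.0006])          # daily mean returns of two assets
cov = np.array([[0.00010, 0.00004],
                [0.00004, 0.00020]])     # daily return covariance matrix
w = np.array([0.5, 0.5])                 # portfolio weights
value = 1_000_000.0                      # portfolio value
z = 1.645                                # 95% one-sided normal quantile

mu_p = w @ mu                            # portfolio mean return
sigma_p = np.sqrt(w @ cov @ w)           # portfolio volatility
VaR = value * (z * sigma_p - mu_p)
print(VaR)
```

Because the covariance term enters under the square root, the portfolio VaR is below the weighted sum of single-asset VaRs whenever correlation is imperfect, which is the diversification effect the study exploits.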

  14. Learning Analytics: Potential for Enhancing School Library Programs

    Science.gov (United States)

    Boulden, Danielle Cadieux

    2015-01-01

    Learning analytics has been defined as the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs. The potential use of data and learning analytics in educational contexts has caught the attention of educators and…

  15. Approximate analytical relationships for linear optimal aeroelastic flight control laws

    Science.gov (United States)

    Kassem, Ayman Hamdy

    1998-09-01

    This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.
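The flavor of such closed-form relationships between cost-function weights and closed-loop eigenvalues can be seen in the scalar LQR case, where the pole location is analytic in the weight ratio. This toy example is illustrative only, not the dissertation's aeroelastic model:

```python
import numpy as np

# Hedged sketch: for a scalar plant xdot = a*x + b*u with cost
# J = integral(q*x^2 + r*u^2), the closed-loop LQR eigenvalue has the exact
# analytic form -sqrt(a^2 + b^2*q/r), making the effect of the weight ratio
# q/r on the pole explicit. All numbers are illustrative.
a, b, q, r = 1.0, 2.0, 3.0, 0.5

# Solve the scalar algebraic Riccati equation 2*a*P - (b^2/r)*P^2 + q = 0
# for its positive root.
P = r * (a + np.sqrt(a**2 + b**2 * q / r)) / b**2
K = b * P / r                    # optimal state-feedback gain, u = -K*x
eig_cl = a - b * K               # closed-loop eigenvalue

# Matches the closed-form expression term by term.
assert np.isclose(eig_cl, -np.sqrt(a**2 + b**2 * q / r))
print(eig_cl)
```

Raising q relative to r drives the pole further into the left half-plane, the same qualitative mechanism the dissertation quantifies for the short-period and aeroelastic modes.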

  16. A New Method to Study Analytic Inequalities

    Directory of Open Access Journals (Sweden)

    Xiao-Ming Zhang

    2010-01-01

    Full Text Available We present a new method to study analytic inequalities involving n variables. Regarding its applications, we proved some well-known inequalities and improved Carleman's inequality.

  17. Multiobjective Optimization in Combinatorial Wind Farms System Integration and Resistive SFCL Using Analytical Hierarchy Process

    DEFF Research Database (Denmark)

    Moghadasi, Amirhasan; Sarwat, Arif; Guerrero, Josep M.

    2016-01-01

    This paper presents a positive approach for low voltage ride-through (LVRT) improvement of a permanent magnet synchronous generator (PMSG) based large wind power plant (WPP) of 50 MW. The proposed method utilizes the conventional current control strategy to provide the reactive power requirement and retain the active power production during and after the fault for grid code compliance. Besides that, a resistive superconducting fault current limiter (RSFCL) as an additional self-healing support is applied outside the WPP to further increase the rated active power of the installation... on the extreme load reduction is effectively demonstrated. A large WPP has a complicated structure using several components, and the inclusion of the RSFCL makes this layout more problematic for optimal performance of the system. Hence, the most widely used decision-making technique, based on the analytic hierarchy...

  18. A Comprehensive Optimization Strategy for Real-time Spatial Feature Sharing and Visual Analytics in Cyberinfrastructure

    Science.gov (United States)

    Li, W.; Shao, H.

    2017-12-01

    For geospatial cyberinfrastructure enabled web services, the ability to rapidly transmit and share spatial data over the Internet plays a critical role in meeting the demands of real-time change detection, response and decision-making. Especially for vector datasets, which serve as irreplaceable and concrete material in data-driven geospatial applications, their rich geometry and property information facilitates the development of interactive, efficient and intelligent data analysis and visualization applications. However, big-data issues with vector datasets have hindered their wide adoption in web services. In this research, we propose a comprehensive optimization strategy to enhance the performance of vector data transmission and processing. This strategy combines: 1) pre- and on-the-fly generalization, which automatically determines the proper simplification level through the introduction of an appropriate distance tolerance (ADT) to meet various visualization requirements, while speeding up simplification; 2) a progressive attribute transmission method to reduce data size and therefore the service response time; 3) compressed data transmission and dynamic adoption of a compression method to maximize service efficiency under different computing and network environments. A cyberinfrastructure web portal was developed for implementing the proposed technologies. After applying our optimization strategies, substantial performance enhancement was achieved. We expect this work to widen the use of web services providing vector data to support real-time spatial feature sharing, visual analytics and decision-making.
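Tolerance-driven generalization of the kind described in point 1) is classically built on line simplification such as Douglas-Peucker. A minimal sketch follows; the polyline and tolerance are illustrative, and the paper's ADT selection logic is not reproduced:

```python
import math

# Hedged sketch of Douglas-Peucker line simplification: keep the point
# farthest from the chord; if it exceeds the distance tolerance, recurse
# on both halves, otherwise collapse the run to its endpoints.
def _perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, tol):
    if len(points) < 3:
        return points
    dmax, idx = max((_perp_dist(p, points[0], points[-1]), i)
                    for i, p in enumerate(points[1:-1], 1))
    if dmax <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right   # drop the duplicated split point

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, 1.0))
```

A larger tolerance collapses more vertices, which is exactly the lever an ADT-style scheme tunes per zoom level.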

  19. NC CATCH: Advancing Public Health Analytics.

    Science.gov (United States)

    Studnicki, James; Fisher, John W; Eichelberger, Christopher; Bridger, Colleen; Angelon-Gaetz, Kim; Nelson, Debi

    2010-01-01

    The North Carolina Comprehensive Assessment for Tracking Community Health (NC CATCH) is a Web-based analytical system deployed to local public health units and their community partners. The system has the following characteristics: flexible, powerful online analytic processing (OLAP) interface; multiple sources of multidimensional, event-level data fully conformed to common definitions in a data warehouse structure; enabled utilization of available decision support software tools; analytic capabilities distributed and optimized locally with centralized technical infrastructure; two levels of access differentiated by the user (anonymous versus registered) and by the analytical flexibility (Community Profile versus Design Phase); and, an emphasis on user training and feedback. The ability of local public health units to engage in outcomes-based performance measurement will be influenced by continuing access to event-level data, developments in evidence-based practice for improving population health, and the application of information technology-based analytic tools and methods.

  20. Analytical modeling and numerical optimization of the biosurfactants production in solid-state fermentation by Aspergillus fumigatus - doi: 10.4025/actascitechnol.v36i1.17818

    Directory of Open Access Journals (Sweden)

    Gabriel Castiglioni

    2014-01-01

    Full Text Available This is an experimental, analytical and numerical study to optimize biosurfactant production in solid-state fermentation of a medium containing rice straw and minced rice bran inoculated with Aspergillus fumigatus. The goal of this work was to analytically model biosurfactant production in solid-state fermentation in a fixed-bed column bioreactor. The Least-Squares Method was used to fit the experimental emulsification activity values to a quadratic semi-empirical model. The control variables were the nutritional conditions, the fermentation time and the aeration. The mathematical model is validated against experimental results and then used to predict the maximum emulsification activity for different nutritional conditions and aerations. Based on the semi-empirical model, the maximum emulsification activity with no additional hydrocarbon sources was 8.16 UE·g-1 for 112 hours. When diesel oil was used, the predicted maximum emulsification activity was 8.10 UE·g-1 for 108 hours.

  1. A finite-buffer queue with a single vacation policy: An analytical study with evolutionary positioning

    Directory of Open Access Journals (Sweden)

    Woźniak Marcin

    2014-12-01

    Full Text Available In this paper, application of an evolutionary strategy to positioning a GI/M/1/N-type finite-buffer queueing system with exhaustive service and a single vacation policy is presented. The examined object is modeled by a conditional joint transform of the first busy period, the first idle time and the number of packets completely served during the first busy period. A mathematical model is defined recursively by means of input distributions. In the paper, an analytical study and numerical experiments are presented. A cost optimization problem is solved using an evolutionary strategy for a class of queueing systems described by exponential and Erlang distributions.
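A minimal illustration of the evolutionary-strategy idea applied in this record is a (1+1)-ES that accepts non-worsening mutations. The quadratic cost below is a toy stand-in, not the paper's queueing cost model:

```python
import random

# Hedged sketch of a (1+1) evolutionary strategy: mutate the current
# candidate with Gaussian noise and keep the offspring if it is no worse.
def cost(theta):
    """Toy stand-in cost with its minimum at theta = 3.7."""
    return (theta - 3.7)**2 + 1.0

random.seed(0)                      # deterministic run for reproducibility
theta, sigma = 0.0, 1.0             # start point and fixed mutation strength
for _ in range(2000):
    cand = theta + random.gauss(0.0, sigma)
    if cost(cand) <= cost(theta):   # selection: accept if no worse
        theta = cand
print(theta)
```

Practical variants adapt sigma online (e.g. the 1/5 success rule); the fixed-sigma loop here is the bare skeleton of the positioning procedure described above.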

  2. Optimal operation of batch membrane processes

    CERN Document Server

    Paulen, Radoslav

    2016-01-01

    This study concentrates on a general optimization of a particular class of membrane separation processes: those involving batch diafiltration. Existing practices are explained and operational improvements based on optimal control theory are suggested. The first part of the book introduces the theory of membrane processes, optimal control and dynamic optimization. Separation problems are defined and mathematical models of batch membrane processes derived. The control theory focuses on problems of dynamic optimization from a chemical-engineering point of view. Analytical and numerical methods that can be exploited to treat problems of optimal control for membrane processes are described. The second part of the text builds on this theoretical basis to establish solutions for membrane models of increasing complexity. Each chapter starts with a derivation of optimal operation and continues with case studies exemplifying various aspects of the control problems under consideration. The authors work their way from th...

  3. Review of Factor Analytic Studies Examining Symptoms of Autism Spectrum Disorders

    Science.gov (United States)

    Shuster, Jill; Perry, Adrienne; Bebko, James; Toplak, Maggie E.

    2014-01-01

    Factor analytic studies have been conducted to examine the inter-relationships and degree of overlap among symptoms in Autism Spectrum Disorder (ASD). This paper reviewed 36 factor analytic studies that have examined ASD symptoms, using 13 different instruments. Studies were grouped into three categories: Studies with all DSM-IV symptoms, studies…

  4. Optimization of analytical parameters for inferring relationships among Escherichia coli isolates from repetitive-element PCR by maximizing correspondence with multilocus sequence typing data.

    Science.gov (United States)

    Goldberg, Tony L; Gillespie, Thomas R; Singer, Randall S

    2006-09-01

    Repetitive-element PCR (rep-PCR) is a method for genotyping bacteria based on the selective amplification of repetitive genetic elements dispersed throughout bacterial chromosomes. The method has great potential for large-scale epidemiological studies because of its speed and simplicity; however, objective guidelines for inferring relationships among bacterial isolates from rep-PCR data are lacking. We used multilocus sequence typing (MLST) as a "gold standard" to optimize the analytical parameters for inferring relationships among Escherichia coli isolates from rep-PCR data. We chose 12 isolates from a large database to represent a wide range of pairwise genetic distances, based on the initial evaluation of their rep-PCR fingerprints. We conducted MLST with these same isolates and systematically varied the analytical parameters to maximize the correspondence between the relationships inferred from rep-PCR and those inferred from MLST. Methods that compared the shapes of densitometric profiles ("curve-based" methods) yielded consistently higher correspondence values between data types than did methods that calculated indices of similarity based on shared and different bands (maximum correspondences of 84.5% and 80.3%, respectively). Curve-based methods were also markedly more robust in accommodating variations in user-specified analytical parameter values than were "band-sharing coefficient" methods, and they enhanced the reproducibility of rep-PCR. Phylogenetic analyses of rep-PCR data yielded trees with high topological correspondence to trees based on MLST and high statistical support for major clades. These results indicate that rep-PCR yields accurate information for inferring relationships among E. coli isolates and that accuracy can be enhanced with the use of analytical methods that consider the shapes of densitometric profiles.
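The contrast between the two families of similarity measures compared above can be sketched on synthetic densitometric profiles: a curve-based measure (Pearson correlation of whole lane profiles) versus a band-sharing (Dice) coefficient on called bands. The Gaussian "lanes" below are illustrative, not real rep-PCR data:

```python
import numpy as np

# Hedged sketch: two synthetic lanes with identical band positions but
# different band intensities, compared by both measure families.
x = np.linspace(0, 1, 500)

def profile(centers, widths, heights):
    """Synthetic densitometric lane: a sum of Gaussian 'bands'."""
    return sum(h * np.exp(-((x - c) / s)**2)
               for c, s, h in zip(centers, widths, heights))

lane_a = profile([0.2, 0.5, 0.8], [0.02] * 3, [1.0, 0.8, 0.6])
lane_b = profile([0.2, 0.5, 0.8], [0.02] * 3, [0.9, 0.9, 0.5])

# Curve-based similarity: Pearson correlation of the full profiles.
curve_sim = np.corrcoef(lane_a, lane_b)[0, 1]

# Band-based similarity: Dice coefficient 2*n_shared / (n_a + n_b)
# on the called band positions.
bands_a, bands_b = {0.2, 0.5, 0.8}, {0.2, 0.5, 0.8}
dice = 2 * len(bands_a & bands_b) / (len(bands_a) + len(bands_b))
print(curve_sim, dice)
```

The curve-based score uses the full profile shape, which is why it tolerates band-calling errors and intensity drift better than band-sharing coefficients, in line with the study's findings.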

  5. System of Systems Analytic Workbench - 2017

    Science.gov (United States)

    2017-08-31

    Genetic Algorithm and Particle Swarm Optimization with Type-2 Fuzzy Sets for Generating Systems of Systems Architectures. Procedia Computer Science... The application effort involves modeling an existing messaging network to perform real-time situational awareness. The Analytical Workbench's

  6. Brackish groundwater membrane system design for sustainable irrigation: Optimal configuration selection using analytic hierarchy process and multi-dimension scaling

    Directory of Open Access Journals (Sweden)

    Beni eLew

    2014-12-01

    Full Text Available The recent high demand for reuse of salty water for irrigation has prompted membrane producers to assess new potential technologies for the removal of undesirable physical, chemical and biological contaminants. This paper studies the assembly options using the analytic hierarchy process (AHP) model and multi-dimension scaling (MDS) techniques. A specialized form of MDS (the CoPlot software) enables presentation of the AHP outcomes in a two-dimensional space, so the optimal model can be visualized clearly. Eight membranes of four types were selected: (i) nanofiltration, low rejection and high flux (ESNA1-LF-LD, 86% rejection, 10,500 gpd); (ii) nanofiltration, medium rejection and medium flux (ESNA1-LF2-LD, 91% rejection, 8,200 gpd); (iii) reverse osmosis, high rejection and high flux (CPA5-MAX, 99.7% rejection, 12,000 gpd); and (iv) reverse osmosis, medium rejection and extremely high flux (ESPA4-MAX, 99.2% rejection, 13,200 gpd). The results indicate that: (i) a nanofiltration membrane (high flux and low rejection) can produce water for irrigation with valuable levels of nutrient ions and a reduction in the sodium absorption ratio (SAR), minimizing soil salinity; this is an attractive option for agricultural irrigation and is the optimal solution; and (ii) implementing the MDS approach with reference to the variables is consequently useful to characterize membrane system design.
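The sodium absorption ratio mentioned above is conventionally computed (as the sodium adsorption ratio) from ion concentrations in meq/L. A small sketch with illustrative feed and permeate values, not the paper's measurements:

```python
import math

# Hedged sketch of the standard SAR formula: SAR = Na / sqrt((Ca + Mg) / 2),
# with all concentrations in meq/L. The water compositions are invented.
def sar(na_meq, ca_meq, mg_meq):
    return na_meq / math.sqrt((ca_meq + mg_meq) / 2.0)

feed = sar(na_meq=20.0, ca_meq=4.0, mg_meq=4.0)       # brackish feed water
permeate = sar(na_meq=6.0, ca_meq=2.0, mg_meq=2.0)    # after the membrane step
print(feed, permeate)
```

With these illustrative compositions the membrane step lowers SAR, the effect on soil salinity that makes the nanofiltration option attractive in the study.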

  7. ATLAS Analytics and Machine Learning Platforms

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Legger, Federica; Gardner, Robert

    2018-01-01

    In 2015 ATLAS Distributed Computing started to migrate its monitoring systems away from Oracle DB and decided to adopt new big data platforms that are open source, horizontally scalable, and offer the flexibility of NoSQL systems. Three years later, the full software stack is in place, the system is considered in production and operating at near maximum capacity (in terms of storage capacity and tightly coupled analysis capability). The new model provides several tools for fast and easy to deploy monitoring and accounting. The main advantages are: ample ways to do complex analytics studies (using technologies such as java, pig, spark, python, jupyter), flexibility in reorganization of data flows, near real time and inline processing. The analytics studies improve our understanding of different computing systems and their interplay, thus enabling whole-system debugging and optimization. In addition, the platform provides services to alarm or warn on anomalous conditions, and several services closing feedback l...

  8. Human Resource Predictive Analytics HRPA For HR Management In Organizations

    Directory of Open Access Journals (Sweden)

    Sujeet N. Mishra

    2015-08-01

    Full Text Available Human resource predictive analytics (HRPA) is an evolving application field of analytics for HRM purposes. The purpose of HRM is measuring employee performance and engagement, studying workforce collaboration patterns, analyzing employee churn and turnover, and modelling employee lifetime value. The motive of applying HRPA is to optimize performance and produce a better return on investment for organizations through decision making based on data collection, HR metrics, and predictive models. The paper is divided into three sections to understand the emergence of HR predictive analytics for HRM. Firstly, the paper introduces the concept of HRPA. Secondly, the paper discusses three aspects of HRPA: (a) need, (b) approach and application, and (c) impact. Lastly, the paper draws conclusions on HRPA.
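As a toy illustration of the predictive side of HRPA, a churn-risk score can be expressed as a logistic model over a few HR features. The features and coefficients below are invented for illustration; a real model would be fit to historical HR data:

```python
import math

def churn_probability(tenure_years, engagement_score, overtime_hours):
    """Toy logistic model of employee churn risk.

    The coefficients are made up for illustration only; in practice they
    would be estimated from an organization's historical HR records.
    """
    z = 1.5 - 0.4 * tenure_years - 0.8 * engagement_score + 0.05 * overtime_hours
    return 1.0 / (1.0 + math.exp(-z))

# A long-tenured, engaged employee scores a lower churn risk than a
# new, disengaged one working heavy overtime.
low_risk = churn_probability(tenure_years=8, engagement_score=0.9, overtime_hours=2)
high_risk = churn_probability(tenure_years=1, engagement_score=0.2, overtime_hours=20)
```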

  9. Microwave magnetoelectric fields: An analytical study of topological characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Joffe, R., E-mail: ioffr1@gmail.com [Microwave Magnetic Laboratory, Department of Electrical and Computer Engineering, Ben Gurion University of the Negev, Beer Sheva (Israel); Department of Electrical and Electronics Engineering, Shamoon College of Engineering, Beer Sheva (Israel); Shavit, R.; Kamenetskii, E.O. [Microwave Magnetic Laboratory, Department of Electrical and Computer Engineering, Ben Gurion University of the Negev, Beer Sheva (Israel)

    2015-10-15

    The near fields originating from a small quasi-two-dimensional ferrite disk with magnetic-dipolar-mode (MDM) oscillations are the fields with broken dual (electric-magnetic) symmetry. Numerical studies show that such fields – called the magnetoelectric (ME) fields – are distinguished by the power-flow vortices and helicity parameters (E.O. Kamenetskii, R. Joffe, R. Shavit, Phys. Rev. E 87 (2013) 023201). These numerical studies can well explain recent experimental results with MDM ferrite disks. In the present paper, we obtain analytically the topological characteristics of the ME-field modes. For this purpose, we used a method of successive approximations. In the second approximation we take into account the influence of the edge regions of an open ferrite disk, which are excluded in the first-approximation solving of the magnetostatic (MS) spectral problem. Based on the analytical method, we obtain a “pure” structure of the electric and magnetic fields outside the MDM ferrite disk. The analytical studies can display some fundamental features that are non-observable in the numerical results. While in numerical investigations one cannot separate the ME fields from the external electromagnetic (EM) radiation, the present theoretical analysis allows us to clearly distinguish the eigen topological structure of the ME fields. Importantly, this ME-field structure gives evidence for certain phenomena that can be related to the Tellegen and bianisotropic coupling effects. We discuss the question of whether the MDM ferrite disk can exhibit properties of the cross magnetoelectric polarizabilities. - Highlights: • We obtain analytically topological characteristics of the ME-field modes. • We take into account the influence of the edge regions of an open ferrite disk. • We obtain a “pure” structure of the electromagnetic fields outside the ferrite disk. • Analytical studies show features that are non-observable in the numerical results. • ME-field gives evidence for

  10. Microwave magnetoelectric fields: An analytical study of topological characteristics

    International Nuclear Information System (INIS)

    Joffe, R.; Shavit, R.; Kamenetskii, E.O.

    2015-01-01

    The near fields originating from a small quasi-two-dimensional ferrite disk with magnetic-dipolar-mode (MDM) oscillations are the fields with broken dual (electric-magnetic) symmetry. Numerical studies show that such fields – called the magnetoelectric (ME) fields – are distinguished by the power-flow vortices and helicity parameters (E.O. Kamenetskii, R. Joffe, R. Shavit, Phys. Rev. E 87 (2013) 023201). These numerical studies can well explain recent experimental results with MDM ferrite disks. In the present paper, we obtain analytically the topological characteristics of the ME-field modes. For this purpose, we used a method of successive approximations. In the second approximation we take into account the influence of the edge regions of an open ferrite disk, which are excluded in the first-approximation solving of the magnetostatic (MS) spectral problem. Based on the analytical method, we obtain a “pure” structure of the electric and magnetic fields outside the MDM ferrite disk. The analytical studies can display some fundamental features that are non-observable in the numerical results. While in numerical investigations one cannot separate the ME fields from the external electromagnetic (EM) radiation, the present theoretical analysis allows us to clearly distinguish the eigen topological structure of the ME fields. Importantly, this ME-field structure gives evidence for certain phenomena that can be related to the Tellegen and bianisotropic coupling effects. We discuss the question of whether the MDM ferrite disk can exhibit properties of the cross magnetoelectric polarizabilities. - Highlights: • We obtain analytically topological characteristics of the ME-field modes. • We take into account the influence of the edge regions of an open ferrite disk. • We obtain a “pure” structure of the electromagnetic fields outside the ferrite disk. • Analytical studies show features that are non-observable in the numerical results. • ME-field gives evidence for

  11. Analytic and numerical studies of Scyllac equilibrium

    International Nuclear Information System (INIS)

    Barnes, D.C.; Brackbill, J.U.; Dagazian, R.Y.; Freidberg, J.P.; Schneider, W.; Betancourt, O.; Garabedian, P.

    1976-01-01

    The results of both numerical and analytic studies of Scyllac equilibria are presented. Analytic expansions are used to derive equilibrium equations appropriate to noncircular cross sections, and to compute the stellarator fields which produce toroidal force balance. Numerical algorithms are used to solve both the equilibrium equations and the full system of dynamical equations in three dimensions. Numerical equilibria are found for both l = 1,0 and l = 1,2 systems. It is found that the stellarator fields which produce equilibria in the l = 1,0 system are larger for diffuse than for sharp-boundary plasma profiles, and that the stability of the equilibria depends strongly on the harmonic content of the stellarator fields

  12. Micro-focused ultrasonic solid-liquid extraction (μFUSLE) combined with HPLC and fluorescence detection for PAHs determination in sediments: optimization and linking with the analytical minimalism concept.

    Science.gov (United States)

    Capelo, J L; Galesio, M M; Felisberto, G M; Vaz, C; Pessoa, J Costa

    2005-06-15

    Analytical minimalism is a concept that deals with the optimization of all stages of an analytical procedure so that it becomes less time, cost, sample, reagent and energy consuming. The guidelines provided in the USEPA extraction method 3550B recommend the use of focused ultrasound (FU), i.e., probe sonication, for the solid-liquid extraction of polycyclic aromatic hydrocarbons (PAHs), but ignore the principle of analytical minimalism. The problems related to dead sonication zones, often present when high volumes are sonicated with a probe, are also not addressed. In this work, we demonstrate that successful extraction and quantification of PAHs from sediments can be done with low sample mass (0.125 g), low reagent volume (4 ml), short sonication time (3 min) and low sonication amplitude (40%). Two variables are particularly taken into account for total extraction: (i) the design of the extraction vessel and (ii) the solvent used to carry out the extraction. PAH recoveries (EPA priority list) ranged between 77 and 101%, accounting for more than 95% for most of the PAHs studied here, as compared with the values obtained after Soxhlet extraction. Taking into account the results reported in this work, we recommend a revision of the EPA guidelines for PAH extraction from solid matrices with focused ultrasound, so that these match the analytical minimalism concept.

  13. Formulating analytic expressions for atomic collision cross sections

    International Nuclear Information System (INIS)

    Tabata, Tatsuo; Kubo, Hirotaka; Sataka, Masao

    2003-08-01

    Methods to formulate analytic expressions for atomic collision cross sections as a function of projectile energy are described, drawing on more than 20 years of data compilation experience. Topics considered are the choice of appropriate functional forms for the expressions and the optimization of adjustable parameters. To make extrapolation possible, the functions used should have forms with reasonable asymptotic behavior. In this respect, modified Green-McNeal formulas have been found useful for various atomic collision cross sections. For ionization processes, a modified Lotz formula has often given a good fit. The ALESQ code for least-squares fits has been convenient for optimizing adjustable parameters in analytic expressions. (author)
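Because a Lotz-type expression is linear in its overall scale parameter, its least-squares fit reduces to a one-column linear problem. A minimal sketch with synthetic, noiseless data (the threshold energy and scale are illustrative, and this is not the ALESQ code itself):

```python
import numpy as np

# Synthetic ionization cross sections generated from a simple Lotz-type
# form sigma(E) = a * ln(E/I) / (E * I); 'a' and 'I' are illustrative.
I = 13.6          # threshold energy (eV)
a_true = 4.0e-14  # arbitrary scale factor
E = np.linspace(20.0, 500.0, 50)
sigma = a_true * np.log(E / I) / (E * I)

# The model is linear in 'a', so a single-parameter least-squares fit
# suffices: sigma = a * phi(E) with phi(E) = ln(E/I) / (E * I).
phi = np.log(E / I) / (E * I)
a_fit, *_ = np.linalg.lstsq(phi[:, None], sigma, rcond=None)
```

Nonlinear parameters (e.g. the threshold itself) would instead require an iterative least-squares routine.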

  14. Cellular Scanning Strategy for Selective Laser Melting: Capturing Thermal Trends with a Low-Fidelity, Pseudo-Analytical Model

    Directory of Open Access Journals (Sweden)

    Sankhya Mohanty

    2014-01-01

    Full Text Available Simulations of additive manufacturing processes are known to be computationally expensive. The resulting long runtimes prohibit their application in secondary analyses requiring several complete simulations, such as optimization studies and sensitivity analyses. In this paper, a low-fidelity pseudo-analytical model is introduced to enable such secondary analyses. The model was able to mimic a finite element model and capture the thermal trends associated with the process. The model has been validated and subsequently applied in a small optimization case study. The pseudo-analytical modelling technique is established as a fast tool for primary modelling investigations.
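The flavor of such a low-fidelity thermal model can be sketched with a 1D explicit finite-difference conduction step. The material values and heat-spike magnitude below are illustrative only, not the paper's model:

```python
# Minimal 1D explicit finite-difference heat conduction sketch, in the
# spirit of a low-fidelity thermal trend model; values are illustrative.
alpha = 1e-5         # thermal diffusivity (m^2/s)
dx, dt = 1e-3, 1e-2  # grid spacing (m), time step (s); dt < dx**2 / (2 * alpha)
n = 51
T = [20.0] * n       # initial temperature (deg C)
T[n // 2] = 1500.0   # localized heat spike, mimicking a laser spot

r = alpha * dt / dx**2  # stability number, must stay below 0.5
for _ in range(200):
    # Interior update; boundary nodes held at their previous values.
    T = [T[0]] + [T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
                  for i in range(1, n - 1)] + [T[-1]]

peak = max(T)  # peak decays as heat diffuses away from the spot
```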

  15. Seamless Digital Environment – Data Analytics Use Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-08-01

    Multiple research efforts in the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program study the need for, and design of, an underlying architecture to support the increased amount and use of data in the nuclear power plant. More specifically, the four LWRS research efforts (Digital Architecture for an Automated Plant, Automated Work Packages, Computer-Based Procedures for Field Workers, and Online Monitoring) have all identified the need for a digital architecture and, more importantly, for a Seamless Digital Environment (SDE). An SDE provides a means to access multiple applications, gather the data points needed, conduct the analysis requested, and present the result to the user with minimal or no effort by the user. During the 2016 annual Nuclear Information Technology Strategic Leadership (NITSL) group meeting, the nuclear utilities identified the need for research focused on data analytics. The effort was to develop and evaluate use cases for data mining and analytics, employing information from plant sensors and databases to develop improved business analytics. The goal of the study is to research potential approaches to building an analytics solution for equipment reliability on a small scale, focusing on either a single piece of equipment or a single system. The analytics solution will likely consist of a data integration layer, a predictive and machine learning layer, and a user interface layer that displays the output of the analysis in a straightforward, easy-to-consume manner. This report describes the use case study initiated by NITSL and conducted in a collaboration between Idaho National Laboratory, Arizona Public Service – Palo Verde Nuclear Generating Station, and NextAxiom Inc.

  16. Modeling and energy efficiency optimization of belt conveyors

    International Nuclear Information System (INIS)

    Zhang, Shirong; Xia, Xiaohua

    2011-01-01

    Highlights: → We take an optimization approach to improve the operation efficiency of belt conveyors. → An analytical energy model, originating from ISO 5048, is proposed. → Off-line and on-line parameter estimation schemes are then investigated. → In a case study, six optimization problems are formulated, with solutions in simulation. - Abstract: The improvement of the energy efficiency of belt conveyor systems can be achieved at the equipment and operation levels. Specifically, variable speed control, an equipment-level intervention, is recommended to improve the operation efficiency of belt conveyors. However, current implementations mostly focus on lower-level control loops without operational considerations at the system level. This paper takes a model-based optimization approach to improve the efficiency of belt conveyors at the operational level. An analytical energy model, originating from ISO 5048, is first proposed, which lumps all the parameters into four coefficients. Subsequently, off-line and on-line parameter estimation schemes are applied to identify the new energy model. Simulation results are presented for the estimates of the four coefficients. Finally, optimization is performed to achieve the best operation efficiency of belt conveyors under various constraints. Six optimization problems for a typical belt conveyor system are formulated, with solutions in simulation for a case study.
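A linear-in-parameters energy model lets the four coefficients be identified by ordinary least squares from measured power. The sketch below uses a generic stand-in model (not the paper's exact ISO 5048 coefficient grouping) with synthetic noisy data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative four-coefficient, linear-in-parameters power model
# P(V, T) = c1*V + c2*T + c3*T**2/V + c4, with V the belt speed (m/s)
# and T the feed rate (t/h). This is a stand-in, not the paper's model.
c_true = np.array([50.0, 8.0, 0.3, 1200.0])
V = rng.uniform(2.0, 5.0, 40)
T = rng.uniform(200.0, 1500.0, 40)
Phi = np.column_stack([V, T, T**2 / V, np.ones_like(V)])
P = Phi @ c_true + rng.normal(0.0, 5.0, 40)  # noisy power measurements (kW)

# Off-line least-squares estimation of the four lumped coefficients.
c_est, *_ = np.linalg.lstsq(Phi, P, rcond=None)
```

An on-line scheme would update the same estimate recursively as new (V, T, P) samples arrive.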

  17. Interlaboratory analytical performance studies; a way to estimate measurement uncertainty

    Directory of Open Access Journals (Sweden)

    Elżbieta Łysiak-Pastuszak

    2004-09-01

    Full Text Available Comparability of data collected within collaborative programmes became the key challenge of analytical chemistry in the 1990s, including monitoring of the marine environment. To obtain relevant and reliable data, the analytical process has to proceed under a well-established Quality Assurance (QA system with external analytical proficiency tests as an inherent component. A programme called Quality Assurance in Marine Monitoring in Europe (QUASIMEME was established in 1993 and evolved over the years as the major provider of QA proficiency tests for nutrients, trace metals and chlorinated organic compounds in marine environment studies. The article presents an evaluation of results obtained in QUASIMEME Laboratory Performance Studies by the monitoring laboratory of the Institute of Meteorology and Water Management (Gdynia, Poland in exercises on nutrient determination in seawater. The measurement uncertainty estimated from routine internal quality control measurements and from results of analytical performance exercises is also presented in the paper.
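Proficiency-test results of the kind described above are commonly summarized by z-scores against the assigned value and a target standard deviation. A minimal sketch (the nutrient values are hypothetical):

```python
def z_score(reported, assigned, sigma_p):
    """Proficiency z-score: |z| <= 2 satisfactory, 2 < |z| < 3 questionable,
    |z| >= 3 unsatisfactory (the usual interlaboratory convention)."""
    return (reported - assigned) / sigma_p

# Hypothetical nitrate determination: assigned value 5.0 umol/l with a
# target standard deviation of 0.25 umol/l (values are illustrative).
z = z_score(reported=5.4, assigned=5.0, sigma_p=0.25)
```

The target standard deviation sigma_p also feeds directly into the laboratory's standard-uncertainty estimate.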

  18. An equivalent method for optimization of particle tuned mass damper based on experimental parametric study

    Science.gov (United States)

    Lu, Zheng; Chen, Xiaoyi; Zhou, Ying

    2018-04-01

    A particle tuned mass damper (PTMD) is a creative combination of a widely used tuned mass damper (TMD) and an efficient particle damper (PD) in the vibration control area. The performance of a one-storey steel frame with an attached PTMD is investigated through free vibration and shaking table tests. The influence of some key parameters (filling ratio of particles, auxiliary mass ratio, and particle density) on the vibration control effect is investigated, and it is shown that the attenuation level depends significantly on the filling ratio of particles. Based on the experimental parametric study, some guidelines for the optimization of the PTMD that mainly consider the filling ratio are proposed. Furthermore, an approximate analytical solution based on the concept of an equivalent single-particle damper is proposed, which shows satisfactory agreement between the simulation and experimental results. This simplified method is then used for the preliminary optimal design of a PTMD system, and a case study of a PTMD system attached to a five-storey steel structure following this optimization process is presented.
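For the preliminary-design step, classic TMD tuning rules give a feel for what such an optimization produces. The Den Hartog optimum below is a standard textbook result for a TMD on an undamped primary structure, used here only as an analogue of the paper's equivalent single-particle damper method:

```python
import math

def den_hartog_tuning(mass_ratio):
    """Den Hartog optimum tuning for a TMD on an undamped primary
    structure: optimal frequency ratio and damper damping ratio."""
    f_opt = 1.0 / (1.0 + mass_ratio)  # damper/structure frequency ratio
    zeta_opt = math.sqrt(3.0 * mass_ratio / (8.0 * (1.0 + mass_ratio) ** 3))
    return f_opt, zeta_opt

# A 5% auxiliary mass ratio, a common choice in building applications.
f_opt, zeta_opt = den_hartog_tuning(mass_ratio=0.05)
```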

  19. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    Science.gov (United States)

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for the parallelization and acceleration of computationally and time intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
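An Amdahl-style model captures the reported behavior that speedup approaches the GPU count only when compute dominates memory operations and transfers. This is a generic stand-in for the paper's analytical model, with illustrative fractions:

```python
def predicted_speedup(n_gpus, compute_fraction):
    """Amdahl-style estimate: only the compute fraction parallelizes
    across GPUs; memory operations and transfers stay serial. A generic
    stand-in for the paper's analytical performance model."""
    serial = 1.0 - compute_fraction
    return 1.0 / (serial + compute_fraction / n_gpus)

# With 14 GPUs, a 95%-compute workload falls well short of the ideal
# limit, while a 99.9%-compute workload approaches it.
s95 = predicted_speedup(14, 0.95)
s999 = predicted_speedup(14, 0.999)
```

The same model can drive workload distribution: assign each server a share of work proportional to its predicted throughput.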

  20. Nonlinear Dynamic Analysis and Optimization of Closed-Form Planetary Gear System

    Directory of Open Access Journals (Sweden)

    Qilin Huang

    2013-01-01

    Full Text Available A nonlinear, purely rotational dynamic model of a multistage closed-form planetary gear set formed by two simple planetary stages is proposed in this study. The model includes time-varying mesh stiffness, excitation fluctuation, and gear backlash nonlinearities. The nonlinear differential equations of motion are solved numerically using a variable step-size Runge-Kutta method. In order to obtain a functional expression of the optimization objective, the nonlinear differential equations of motion are solved analytically using the harmonic balance method (HBM). Based on the analytical solution of the dynamic equations, an optimization mathematical model is established which aims at minimizing the vibration displacement of the low-speed carrier and the total mass of the gear transmission system. The optimization toolbox in MATLAB is adopted to obtain the optimal solution. A case is studied to demonstrate the effectiveness of the dynamic model and the optimization method. The results show that the dynamic properties of the closed-form planetary gear transmission system are improved and the total mass of the gear set is decreased significantly.

  1. Analytical optimal pulse shapes obtained with the aid of genetic algorithms

    International Nuclear Information System (INIS)

    Guerrero, Rubén D.; Arango, Carlos A.; Reyes, Andrés

    2015-01-01

    We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions
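A toy sketch of this approach: represent the pulse as a linear combination of a few fixed chirped basis functions and let a genetic algorithm search the combination coefficients. The fitness below matches an arbitrary target waveform for the sake of a runnable example; in the paper it would come from simulated quantum dynamics of the molecule:

```python
import math
import random

random.seed(1)

# Four linearly chirped cosine basis pulses on a unit time grid.
ts = [i / 100.0 for i in range(100)]
basis = [[math.cos(2 * math.pi * (3 + c) * t * t) for t in ts] for c in range(4)]

# Arbitrary target waveform standing in for the physical objective.
target = [0.7 * b0 - 0.2 * b2 for b0, b2 in zip(basis[0], basis[2])]

def fitness(coeffs):
    """Negative squared error between the combined pulse and the target."""
    pulse = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(len(ts))]
    return -sum((p - g) ** 2 for p, g in zip(pulse, target))

pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(30)]
init_best = max(fitness(ind) for ind in pop)

for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # elitism: best survive unchanged
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)    # uniform crossover + mutation
        children.append([random.choice(pair) + random.gauss(0, 0.05)
                         for pair in zip(a, b)])
    pop = parents + children

best = max(pop, key=fitness)
```

Logging each generation's population, as the paper does, is what enables the subsequent principal component analysis.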

  2. Analytical optimal pulse shapes obtained with the aid of genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co [Department of Physics, Universidad Nacional de Colombia, Bogota (Colombia); Arango, Carlos A. [Department of Chemical Sciences, Universidad Icesi, Cali (Colombia); Reyes, Andrés [Department of Chemistry, Universidad Nacional de Colombia, Bogota (Colombia)

    2015-09-28

    We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.

  3. Holistic versus Analytic Evaluation of EFL Writing: A Case Study

    Science.gov (United States)

    Ghalib, Thikra K.; Al-Hattami, Abdulghani A.

    2015-01-01

    This paper investigates the performance of holistic and analytic scoring rubrics in the context of EFL writing. Specifically, the paper compares EFL students' scores on a writing task using holistic and analytic scoring rubrics. The data for the study was collected from 30 participants attending an English undergraduate program in a Yemeni…

  4. Mechanical Design Optimization Using Advanced Optimization Techniques

    CERN Document Server

    Rao, R Venkata

    2012-01-01

    Mechanical design includes an optimization process in which designers always consider objectives such as strength, deflection, weight, wear, corrosion, etc., depending on the requirements. However, design optimization for a complete mechanical assembly leads to a complicated objective function with a large number of design variables. It is good practice to apply optimization techniques to individual components or intermediate assemblies rather than to a complete assembly. Analytical or numerical methods for calculating the extreme values of a function may perform well in many practical cases, but may fail in more complex design situations. In real design problems, the number of design parameters can be very large and their influence on the value to be optimized (the goal function) can be very complicated, having a nonlinear character. In these complex cases, advanced optimization algorithms offer solutions to the problems, because they find a solution near to the global optimum within reasonable time and computational ...

  5. Optimization study on structural analyses for the J-PARC mercury target vessel

    Science.gov (United States)

    Guan, Wenhai; Wakai, Eiichi; Naoe, Takashi; Kogawa, Hiroyuki; Wakui, Takashi; Haga, Katsuhiro; Takada, Hiroshi; Futakawa, Masatoshi

    2018-06-01

    The mercury target vessel of the spallation neutron source at the Japan Proton Accelerator Research Complex (J-PARC) is used for various materials science studies, and work is underway to achieve stable operation at 1 MW. This is very important for enhancing the structural integrity and durability of the target vessel, which is being developed for 1 MW operation. In the present study, to reduce thermal stress and relax stress concentrations more effectively in the existing target vessel in J-PARC, an optimization approach called the Taguchi method (TM) is applied to thermo-mechanical analysis. The ribs and their related parameters, as well as the thickness of the mercury vessel and shrouds, were selected as important design parameters for this investigation. According to the analytical results for the 18 model types designed using the TM, the optimal design was determined. It is characterized by discrete ribs and a thicker vessel wall than the current design. The maximum thermal stresses in the mercury vessel and the outer shroud were reduced by 14% and 15%, respectively. Furthermore, it was indicated that variations in rib width, left/right rib intervals, and shroud thickness could influence the maximum thermal stress performance. It is therefore concluded that the TM was useful for optimizing the structure of the target vessel and reducing the thermal stress in a small number of calculation cases.
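A Taguchi analysis ranks design variants by a signal-to-noise ratio; for stress minimization the smaller-the-better form applies. The peak-stress numbers below are illustrative only, not results from the paper:

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio in dB; here the
    response would be the peak thermal stress of a design variant."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical peak-stress results (MPa) for two design variants across
# repeated analysis cases; numbers are illustrative.
variant_a = [118.0, 122.0, 120.0]
variant_b = [101.0, 99.0, 104.0]

# The higher S/N ratio marks the more robust, lower-stress design.
sn_a = sn_smaller_is_better(variant_a)
sn_b = sn_smaller_is_better(variant_b)
```

In a full study, such S/N ratios are averaged per factor level across an orthogonal array (e.g. L18) to rank each parameter's influence.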

  6. Optimized Analytical Method to Determine Gallic and Picric Acids in Pyrotechnic Samples by Using HPLC/UV (Reverse Phase); Optimizacion del Metodo Analitico mediante HPLC/UV Operando en Fase Inversa para la Determinacion de Acido Galico y Acido Picrico en Muestras de Origen Pirotecnico

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-10-01

    A study on the optimization and development of a chromatographic method for the determination of gallic and picric acids in pyrotechnic samples is presented. To achieve this, both the analytical conditions for HPLC with diode detection and the extraction step for a selected sample were studied. (Author)

  7. Archetypes of Supply Chain Analytics Initiatives—An Exploratory Study

    Directory of Open Access Journals (Sweden)

    Tino T. Herden

    2018-05-01

    Full Text Available While Big Data and Analytics are arguably rising stars of competitive advantage, their application is often presented and investigated as an overall approach. A plethora of methods and technologies combined with a variety of objectives creates a barrier for managers deciding how to act, while researchers investigating the impact of Analytics often neglect this complexity when generalizing their results. Based on a cluster analysis applied to 46 case studies of Supply Chain Analytics (SCA), we propose 6 archetypes of initiatives in SCA to provide orientation for managers as a means to overcome barriers and build competitive advantage. Further, the derived archetypes present a distinction of SCA for researchers seeking to investigate the effects of SCA on organizational performance.
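The archetype-extraction step is essentially a cluster analysis over coded case-study features; the centers of the resulting clusters are the archetypes. A toy k-means sketch (the two features and the data are invented purely for illustration):

```python
import random

random.seed(4)

# Hypothetical 2D feature vectors for case studies (e.g. scores for
# 'method sophistication' and 'objective focus'); two synthetic groups.
data = ([(random.gauss(1, 0.2), random.gauss(1, 0.2)) for _ in range(15)]
        + [(random.gauss(4, 0.2), random.gauss(2, 0.2)) for _ in range(15)])

def kmeans(points, k, iters=20):
    """Plain k-means; initialized deterministically from two far-apart points."""
    centers = [points[0], points[-1]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            groups[i].append(p)
        centers = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                   if g else centers[i] for i, g in enumerate(groups)]
    return centers

centers = kmeans(data, k=2)  # each center is one "archetype"
```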

  8. Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Choudhary, Alok [Northwestern Univ., Evanston, IL (United States); Samatova, Nagiza [North Carolina State Univ., Raleigh, NC (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Liao, Wei-keng [Northwestern Univ., Evanston, IL (United States)

    2015-03-19

    This project developed a generic and optimized set of core data analytics functions. These functions organically consolidate a broad constellation of high performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures comprised of a mix of CPUs, GPUs, and SSDs. Furthermore, the power-aware trend drives the advances in our performance-energy tradeoff analysis framework, which enables our data analysis kernel algorithms and software to be parameterized so that users can choose the right power-performance optimizations.

  9. Analytical structural optimization and experimental verifications for traveling wave generation in self-assembling swimming smart boxes

    International Nuclear Information System (INIS)

    Bani-Hani, M A; Karami, M A

    2015-01-01

    This paper presents the vibration analysis and structural optimization of a swimming-morphing structure. The swimming of the structure is achieved by using piezoelectric patches to generate traveling waves. The third mode shape of the structure in the longitudinal direction resembles the body waveform of a swimming eel. After swimming to its destination, the morphing structure changes shape from an open box to a cube using shape memory alloys (SMAs). The SMAs used for the configuration change of the box robot cannot be used for swimming, since they fail to operate at high frequencies. The piezoelectric patches are actuated at the third natural frequency of the structure. We optimize the thickness of the panels and the stiffness of the springs at the joints to generate swimming waveforms that most closely resemble the body waveform of an eel. The traveling wave is generated using two sets of piezoelectric patches bonded to the first and last segments of the beams in the longitudinal direction. Excitation of the piezoelectrics results in coupled system dynamics equations that can be translated into the generation of waves. Theoretical analysis based on a distributed parameter model is conducted in this paper. A scalar measure of the traveling-to-standing wave ratio is introduced using a 2-dimensional Fourier transform (2D-FFT) of the body deformation waveform. An optimization algorithm based on tuning the flexural transverse wave is established to obtain a higher traveling-to-standing wave ratio. The results are then compared to common methods in the literature for the assessment of standing-to-traveling wave ratios. The analytical models are verified by the close agreement between the traveling waves predicted by the model and those measured in the experiments. (paper)
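A 2D-FFT-based traveling-to-standing measure of the kind described can be sketched by comparing the energies of counter-propagating spectral peaks in the space-time deformation field. The wave fields below are synthetic examples, not the paper's measured data, and this is only one plausible form of such a scalar measure:

```python
import numpy as np

# Synthetic space-time deformation fields on a 64x64 (t, x) grid.
x = np.linspace(0, 1, 64, endpoint=False)
t = np.linspace(0, 1, 64, endpoint=False)
X, T = np.meshgrid(x, t)   # axis 0: time, axis 1: space
k, w = 3, 5                # spatial and temporal cycles over the window

def travel_ratio(u):
    """Ratio of counter-propagating spectral peak energies.

    A pure traveling wave concentrates energy in two conjugate quadrants
    of the (omega, k) plane; a standing wave spreads it over all four."""
    U = np.abs(np.fft.fft2(u)) ** 2
    pos = U[w, k] + U[-w, -k]   # one propagation direction
    neg = U[w, -k] + U[-w, k]   # the opposite direction
    return max(pos, neg) / (min(pos, neg) + 1e-12)

traveling = np.cos(2 * np.pi * (k * X - w * T))
standing = np.cos(2 * np.pi * k * X) * np.cos(2 * np.pi * w * T)
```

The traveling field yields a very large ratio, the standing field a ratio near 1.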

  10. Optimization of prophylaxis for hemophilia A.

    Directory of Open Access Journals (Sweden)

    Robert D Herbert

    Full Text Available Prophylactic injections of factor VIII reduce the incidence of bleeds and slow the development of joint damage in people with hemophilia. The aim of this study was to identify optimal person-specific prophylaxis regimens for children with hemophilia A. Analytic and numerical methods were used to identify prophylaxis regimens which maximize the time for which plasma factor VIII concentrations exceed a threshold, maximize the lowest plasma factor VIII concentrations, and minimize the risk of bleeds. It was demonstrated analytically that, for any injection schedule, the regimen that maximizes the lowest factor VIII concentration involves sharing doses between injections so that all of the trough concentrations in a prophylaxis cycle are equal. Numerical methods were used to identify optimal prophylaxis schedules and explore the trade-offs between the efficacy and acceptability of different prophylaxis regimens. The prophylaxis regimen which minimizes the risk of bleeds depends on the person's pattern of physical activity and may differ greatly from prophylaxis regimens that optimize pharmacokinetic parameters. Prophylaxis regimens which minimize the risk of bleeds also differ from prophylaxis regimens that are typically prescribed. Predictions about which regimen is optimal are sensitive to estimates of the effects of factor VIII concentration and physical activity on the risk of bleeds. The methods described here can be used to identify optimal, person-specific prophylaxis regimens for children with hemophilia A.
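The equal-trough result has a simple closed form under a single-compartment, first-order decay model with negligible carryover between injections: doses shared in proportion to exp(k * interval) equalize all troughs. All parameter values below are illustrative, not clinical recommendations:

```python
import math

# Single-compartment, first-order decay; carryover between injections is
# neglected for this sketch. All values are illustrative.
half_life_h = 12.0
k = math.log(2) / half_life_h            # elimination rate constant (1/h)

intervals_h = [48.0, 48.0, 72.0]         # Mon/Wed/Fri schedule: hours to next dose
total_dose = 3000.0                      # total weekly dose (IU)

# Equal-trough sharing: dose_i proportional to exp(k * interval_i).
weights = [math.exp(k * tau) for tau in intervals_h]
doses = [total_dose * wgt / sum(weights) for wgt in weights]

# Trough (per unit volume of distribution) just before each next injection;
# by construction, all three troughs are equal.
troughs = [d * math.exp(-k * tau) for d, tau in zip(doses, intervals_h)]
```

Note how the longest inter-dose gap (before Monday) receives the largest dose share.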

  11. Energy Optimal Path Planning: Integrating Coastal Ocean Modelling with Optimal Control

    Science.gov (United States)

    Subramani, D. N.; Haley, P. J., Jr.; Lermusiaux, P. F. J.

    2016-02-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. To set up the energy optimization, the relative vehicle speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations that govern their stochastic time-optimal reachability fronts are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. The accuracy and efficiency of the DO level-set equations for solving the governing stochastic level-set reachability fronts are quantitatively assessed, including comparisons with independent semi-analytical solutions. Energy-optimal missions are studied in wind-driven barotropic quasi-geostrophic double-gyre circulations, and in realistic data-assimilative re-analyses of multiscale coastal ocean flows. The latter re-analyses are obtained from multi-resolution 2-way nested primitive-equation simulations of tidal-to-mesoscale dynamics in the Middle Atlantic Bight and Shelfbreak Front region. The effects of tidal currents, strong wind events, coastal jets, and shelfbreak fronts on the energy-optimal paths are illustrated and quantified. Results showcase the opportunities for longer-duration missions that intelligently utilize the ocean environment to save energy, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.
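The core energy-time trade-off can be illustrated far more simply than with DO level-set equations: on a grid with a known current, a time-optimal path for a fixed vehicle speed can be found with Dijkstra's algorithm, and the energy of the mission scales roughly with drag power (speed cubed) times duration. This sketch is not the paper's method; the uniform current, grid, and cubic power law are all assumptions.

```python
import heapq
import math

CURRENT = (0.3, 0.0)   # assumed uniform eastward current (m/s)
N = 20                 # grid is N x N with unit spacing

def min_arrival_time(speed, start=(0, 0), goal=(19, 0)):
    """Dijkstra over grid edges; traversal time = distance / ground speed,
    with ground speed = vehicle speed + current projected on the edge."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        t, (x, y) = heapq.heappop(pq)
        if (x, y) == goal:
            return t
        if t > dist.get((x, y), math.inf):
            continue                      # stale queue entry
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if not (0 <= nx < N and 0 <= ny < N):
                continue
            v_ground = speed + CURRENT[0] * dx + CURRENT[1] * dy
            if v_ground <= 0:
                continue                  # edge impassable against the flow
            nt = t + 1.0 / v_ground
            if nt < dist.get((nx, ny), math.inf):
                dist[(nx, ny)] = nt
                heapq.heappush(pq, (nt, (nx, ny)))
    return math.inf

def energy(speed, t):
    """Energy ~ drag power x time, with power proportional to speed cubed."""
    return speed ** 3 * t

t_fast = min_arrival_time(1.0)   # faster arrival, more energy
t_slow = min_arrival_time(0.5)   # later arrival, far less energy
```

With the flow aiding the vehicle, halving the through-water speed raises the arrival time by ~60% but cuts the energy by roughly a factor of five, which is the kind of trade-off the energy-time joint distribution in the abstract captures.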

  12. Concurrence of big data analytics and healthcare: A systematic review.

    Science.gov (United States)

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of the literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges in its adoption, and to identify strategies to overcome those challenges. A systematic search of articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. Articles on Big Data analytics in healthcare published in the English language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals, etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique in healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds application in clinical decision support, optimization of clinical operations and reduction of the cost of care; (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare. 
This review study unveils that there is a paucity of information on evidence of real-world use of

  13. Experimental verification and analytical calculation of unbalanced magnetic force in permanent magnet machines

    Directory of Open Access Journals (Sweden)

    Kyung-Hun Shin

    2017-05-01

    Full Text Available In this study, an exact analytical solution based on Fourier analysis is proposed to compute the unbalanced magnetic force (UMF) in a permanent magnet machine. The magnetic field solutions are obtained by using a magnetic vector potential and by selecting the appropriate boundary conditions. Based on these field solutions, the force characteristics are also determined analytically. All analytical results were extensively validated against nonlinear two-dimensional finite element analysis and experimental results. Using the proposed method, we investigated the influence of machine parameters on the UMF. The proposed method should therefore be very useful for UMF analysis in the initial design and optimization of PM machines.

  14. An Optimization Algorithm for the Design of an Irregularly-Shaped Bridge Based on the Orthogonal Test and Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Hanbing Liu

    2016-11-01

    Full Text Available Irregularly-shaped bridges are usually adopted to connect the main bridge and ramps in urban overpasses, which are under significant flexion-torsion coupling effects and in complicated stress states. In irregularly-shaped bridge design, parameters such as ramp radius, bifurcation diaphragm stiffness, box girder height, and supporting condition affect structural performance in different manners. In this paper, the influence of various parameters on three indices, including maximum stress, the stress variation coefficient, and the fundamental frequency of torsional vibration, is investigated and analyzed based on the orthogonal test method. Through orthogonal analysis, the major influence parameters and corresponding optimal values for these indices are obtained. Combined with the analytic hierarchy process (AHP), the hierarchical structure model of the multi-indices orthogonal test is established and a comprehensive weight analysis method is proposed to reflect the parameter influence on the overall mechanical properties of an irregularly-shaped bridge. The influence order and optimal values of parameters for overall mechanical properties are determined based on the weights of factors and levels calculated by the comprehensive weight analysis method. The results indicate that the comprehensive weight analysis method is superior to the overall balance method, which verifies the effectiveness and accuracy of comprehensive weight analysis in the parameter optimization of the multi-indices orthogonal test for an irregularly-shaped bridge. The optimal parameters obtained in this paper can provide reference and guidance for parameter control in irregularly-shaped bridge design.
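The AHP weighting step used above can be sketched in a few lines: weights are the principal eigenvector of a pairwise comparison matrix, and Saaty's consistency ratio checks that the judgments are coherent. The 3x3 comparison matrix below is illustrative, not the bridge study's actual data.

```python
def ahp_weights(A, iters=200):
    """Principal-eigenvector weights of a pairwise comparison matrix A
    via power iteration, plus Saaty's consistency ratio (CR)."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n   # lambda_max estimate
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random index
    ci = (lam - n) / (n - 1)
    cr = ci / RI[n] if RI[n] else 0.0
    return w, cr

# Example: three design criteria compared on Saaty's 1-9 scale
# (illustrative numbers only).
A = [[1, 3, 5],
     [1 / 3, 1, 3],
     [1 / 5, 1 / 3, 1]]
w, cr = ahp_weights(A)
```

A CR below 0.1 is the conventional threshold for accepting the comparison matrix; above it, the pairwise judgments should be revised.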

  15. Analytical study on U/G coal mine CPT and inferences

    Energy Technology Data Exchange (ETDEWEB)

    Dey, N.C.; Mukhopadhyay, S. [Bengal Engineering College, Howrath (India). Dept. of Mining and Geology

    1999-08-01

    The analytical aspects of underground CPT (coal mine cost per tonne), which varies from mine to mine due to the different weightages of various contributing factors, are described. The CPT is dictated not only by increasing wages but also by the availability of man-hours and the accountability of machine utilization. An optimal blend of labour-intensive and machine-intensive methods, involving the least investment and operating cost, is a challenge for the coal industry. Technology upgradation and implementation, higher skill and morale, excellence in planning and monitoring, optimization of capacity utilization, and better consumer acceptability of coal will consistently improve the financial health of the coal mining sector. Other factors which will help improve the financial health of coal mining industries are: (1) cost propaganda, such as safety week celebrations; (2) cost consciousness at all levels; (3) noticeboards showing the cost of man-hours and machine-hours; (4) no idle time for men as well as machines; (5) care to increase the life of machines; (6) scope for target amendment within a year; (7) prior to introducing costly machines, due weightage to be given to coal grade, mine life and geo-mining conditions; and (8) awards to the most economic mines and penalties for those rated below the BEP (break-even point). 2 refs., 3 figs.

  16. Analytical and numerical studies of creation probabilities of hierarchical trees

    Directory of Open Access Journals (Sweden)

    S.S. Borysov

    2011-03-01

    Full Text Available We consider the creation conditions of diverse hierarchical trees both analytically and numerically. A connection between the probabilities to create hierarchical levels and the probability to associate these levels into a united structure is studied. We argue that a consistent probabilistic picture requires the use of deformed algebra. Our consideration is based on the study of the main types of hierarchical trees, among which both regular and degenerate ones are studied analytically, while the creation probabilities of Fibonacci, scale-free and arbitrary trees are determined numerically.

  17. Computing the optimal path in stochastic dynamical systems

    International Nuclear Information System (INIS)

    Bauver, Martha; Forgoston, Eric; Billings, Lora

    2016-01-01

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.
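The abstract's FTLE-plus-Newton machinery is beyond a short sketch, but the object being minimized can be illustrated for the simplest case it mentions: a one-dimensional overdamped bistable system with drift f(x) = x - x^3. For such gradient systems the optimal escape path is known analytically to follow the time-reversed flow, and its Freidlin-Wentzell action equals twice the potential barrier (0.5 here). The endpoints, step size, and straight-line comparison path below are assumptions for illustration.

```python
import math

def f(x):
    """Deterministic drift of the bistable system."""
    return x - x ** 3

def action(xs, dt):
    """Discretized Freidlin-Wentzell action S = 1/2 * integral (xdot - f(x))^2 dt."""
    s = 0.0
    for i in range(len(xs) - 1):
        v = (xs[i + 1] - xs[i]) / dt
        xm = 0.5 * (xs[i] + xs[i + 1])
        s += 0.5 * (v - f(xm)) ** 2 * dt
    return s

# Candidate optimal path: integrate the time-reversed flow xdot = -f(x)
# from near the stable state x = -1 toward the saddle at x = 0.
dt, x = 0.01, -0.99
path = [x]
while x < -0.01:
    x += dt * (-f(x))
    path.append(x)

# Naive comparison path: straight line over the same duration.
n = len(path)
line = [-0.99 + (path[-1] + 0.99) * i / (n - 1) for i in range(n)]

s_opt = action(path, dt)    # should approach 2 * (potential barrier) = 0.5
s_line = action(line, dt)   # strictly larger: the line is not optimal
```

The reversed-flow path's action lands near the theoretical value 0.5, while the straight line, despite connecting the same endpoints in the same time, costs measurably more; a numerical optimal-path method like the one in the abstract must recover the former, not the latter.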

  18. An analytical study of double bend achromat lattice

    Energy Technology Data Exchange (ETDEWEB)

    Fakhri, Ali Akbar, E-mail: fakhri@rrcat.gov.in; Kant, Pradeep; Singh, Gurnam; Ghodke, A. D. [Raja Ramanna Centre for Advanced Technology, Indore 452 013 (India)

    2015-03-15

    In a double bend achromat, the Chasman-Green (CG) lattice represents the basic structure for low emittance synchrotron radiation sources. In the basic structure of the CG lattice, a single focussing quadrupole (QF) magnet is used to form an achromat. In this paper, this CG lattice is discussed and an analytical relation is presented, showing the limitation of the basic CG lattice in providing the theoretical minimum beam emittance under the achromatic condition. To satisfy the theoretical minimum beam emittance parameters, achromats having two-, three-, and four-quadrupole structures are presented. In these structures, different arrangements of QF and defocusing quadrupole (QD) magnets are used. An analytical approach assuming quadrupoles as thin lenses has been followed for studying these structures. A study of the Indus-2 lattice, in which a QF-QD-QF configuration has been adopted in the achromat part, is also presented.
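The thin-lens analysis mentioned above can be made concrete with 3x3 horizontal transfer matrices in (x, x', delta). This toy cell (thin dipole, drift, thin QF, drift, thin dipole) is an assumption for illustration, not the paper's lattice; for it, the achromatic closure condition works out to a quadrupole focal length of f = L/2, where L is the drift length, so the dispersion generated by the first bend is cancelled at the exit of the second.

```python
def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def drift(L):
    """(x, x', delta) transfer matrix of a field-free drift of length L."""
    return [[1, L, 0], [0, 1, 0], [0, 0, 1]]

def thin_bend(theta):
    """Thin dipole: kicks the dispersion slope by the bend angle theta."""
    return [[1, 0, 0], [0, 1, theta], [0, 0, 1]]

def thin_quad(f):
    """Thin focusing quadrupole of focal length f."""
    return [[1, 0, 0], [-1.0 / f, 1, 0], [0, 0, 1]]

L, theta = 2.0, 0.1
f = L / 2   # closure condition for this toy thin-lens cell

# Compose the cell: bend, drift, QF, drift, bend (left-multiplying in order).
M = thin_bend(theta)
for elem in (drift(L), thin_quad(f), drift(L), thin_bend(theta)):
    M = mmul(elem, M)

# The third column of M is the dispersion (eta, eta') at the cell exit.
eta, etap = M[0][2], M[1][2]
```

Propagating the unit off-momentum vector (0, 0, 1) through the cell returns zero dispersion and zero dispersion slope, i.e. the cell is achromatic; detuning f away from L/2 leaks dispersion out of the achromat.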

  19. An analytical study of double bend achromat lattice.

    Science.gov (United States)

    Fakhri, Ali Akbar; Kant, Pradeep; Singh, Gurnam; Ghodke, A D

    2015-03-01

    In a double bend achromat, the Chasman-Green (CG) lattice represents the basic structure for low emittance synchrotron radiation sources. In the basic structure of the CG lattice, a single focussing quadrupole (QF) magnet is used to form an achromat. In this paper, this CG lattice is discussed and an analytical relation is presented, showing the limitation of the basic CG lattice in providing the theoretical minimum beam emittance under the achromatic condition. To satisfy the theoretical minimum beam emittance parameters, achromats having two-, three-, and four-quadrupole structures are presented. In these structures, different arrangements of QF and defocusing quadrupole (QD) magnets are used. An analytical approach assuming quadrupoles as thin lenses has been followed for studying these structures. A study of the Indus-2 lattice, in which a QF-QD-QF configuration has been adopted in the achromat part, is also presented.

  20. An analytical study of double bend achromat lattice

    International Nuclear Information System (INIS)

    Fakhri, Ali Akbar; Kant, Pradeep; Singh, Gurnam; Ghodke, A. D.

    2015-01-01

    In a double bend achromat, the Chasman-Green (CG) lattice represents the basic structure for low emittance synchrotron radiation sources. In the basic structure of the CG lattice, a single focussing quadrupole (QF) magnet is used to form an achromat. In this paper, this CG lattice is discussed and an analytical relation is presented, showing the limitation of the basic CG lattice in providing the theoretical minimum beam emittance under the achromatic condition. To satisfy the theoretical minimum beam emittance parameters, achromats having two-, three-, and four-quadrupole structures are presented. In these structures, different arrangements of QF and defocusing quadrupole (QD) magnets are used. An analytical approach assuming quadrupoles as thin lenses has been followed for studying these structures. A study of the Indus-2 lattice, in which a QF-QD-QF configuration has been adopted in the achromat part, is also presented.

  1. Analytic theory of alternate multilayer gratings operating in single-order regime.

    Science.gov (United States)

    Yang, Xiaowei; Kozhevnikov, Igor V; Huang, Qiushi; Wang, Hongchang; Hand, Matthew; Sawhney, Kawal; Wang, Zhanshan

    2017-07-10

    Using the coupled wave approach (CWA), we introduce the analytical theory for an alternate multilayer grating (AMG) operating in the single-order regime, in which only one diffraction order is excited. Differing from previous studies analogizing AMG to crystals, we conclude that a symmetrical structure, i.e., equal thickness of the two multilayer materials, is not the optimal design for an AMG and may result in a significant reduction in diffraction efficiency. The peculiarities of AMG compared with other multilayer gratings are analyzed. The influence of the multilayer structure materials on diffraction efficiency is considered. The validity conditions of the analytical theory are also discussed.

  2. Analytical method comparisons for the accurate determination of PCBs in sediments

    Energy Technology Data Exchange (ETDEWEB)

    Numata, M.; Yarita, T.; Aoyagi, Y.; Yamazaki, M.; Takatsu, A. [National Metrology Institute of Japan, Tsukuba (Japan)

    2004-09-15

    operating temperature and/or the low viscosity of extraction media. To evaluate these extraction techniques as analytical methods for the certification, the effects of the extraction conditions on the determination of some chlorinated biphenyl (CB) congeners in sediment samples have been investigated in this study. The analytical results obtained by the optimized extraction techniques have been used to determine certified values of the CB congeners in reference materials that we have planned to develop.

  3. Retail video analytics: an overview and survey

    Science.gov (United States)

    Connell, Jonathan; Fan, Quanfu; Gabbur, Prasad; Haas, Norman; Pankanti, Sharath; Trinh, Hoang

    2013-03-01

    Today retail video analytics has gone beyond the traditional domain of security and loss prevention by providing retailers insightful business intelligence such as store traffic statistics and queue data. Such information allows for enhanced customer experience, optimized store performance, reduced operational costs, and ultimately higher profitability. This paper gives an overview of various camera-based applications in retail as well as the state-of-the-art computer vision techniques behind them. It also presents some of the promising technical directions for exploration in retail video analytics.

  4. Optimization : insights and applications

    CERN Document Server

    Brinkhuis, Jan

    2005-01-01

    This self-contained textbook is an informal introduction to optimization through the use of numerous illustrations and applications. The focus is on analytically solving optimization problems with a finite number of continuous variables. In addition, the authors provide introductions to classical and modern numerical methods of optimization and to dynamic optimization. The book's overarching point is that most problems may be solved by the direct application of the theorems of Fermat, Lagrange, and Weierstrass. The authors show how the intuition for each of the theoretical results can be s
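The book's central point, that many problems yield to a direct application of the classical theorems, can be shown on a one-line example: maximize f(x, y) = xy subject to x + y = 1. The Lagrange condition gives x = y = 1/2, which a brute-force scan of the constraint confirms (the example and grid are ours, not the book's).

```python
def f(x, y):
    return x * y

# Lagrange condition: grad f = lam * grad g with g(x, y) = x + y - 1,
# i.e. (y, x) = (lam, lam), hence x = y = 1/2 and f = 1/4.
x_star = y_star = 0.5

# Numerical check: scan points on the constraint x + y = 1.
f_max, t_max = max((f(i / 1000, 1 - i / 1000), i / 1000)
                   for i in range(1001))
```

Weierstrass guarantees the maximum exists (continuous f on a compact segment of the constraint), Lagrange locates it, and the scan agrees.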

  5. Fast UPLC/PDA determination of squalene in Sicilian P.D.O. pistachio from Bronte: Optimization of oil extraction method and analytical characterization.

    Science.gov (United States)

    Salvo, Andrea; La Torre, Giovanna Loredana; Di Stefano, Vita; Capocchiano, Valentina; Mangano, Valentina; Saija, Emanuele; Pellizzeri, Vito; Casale, Katia Erminia; Dugo, Giacomo

    2017-04-15

    A fast reversed-phase UPLC method was developed for squalene determination in Sicilian pistachio samples entered in the European register of P.D.O. products. In the present study the SPE procedure was optimized for squalene extraction prior to the UPLC/PDA analysis. The precision of the full analytical procedure was satisfactory, and the mean recoveries were 92.8±0.3% and 96.6±0.1% for the 25 and 50 mg L-1 addition levels, respectively. The selected chromatographic conditions allowed a very fast squalene determination; in fact, it was well separated in ∼0.54 min with good resolution. Squalene was detected in all the pistachio samples analyzed, and the levels ranged from 55.45 to 226.34 mg kg-1. Comparing our results with those of other studies, it emerges that squalene contents in P.D.O. Sicilian pistachio samples were generally higher than those measured for samples of different geographic origins.

  6. Optimal Tax Depreciation under a Progressive Tax System

    NARCIS (Netherlands)

    Wielhouwer, J.L.; De Waegenaere, A.M.B.; Kort, P.M.

    2000-01-01

    The focus of this paper is on the effect of a progressive tax system on optimal tax depreciation. By using dynamic optimization we show that an optimal strategy exists, and we provide an analytical expression for the optimal depreciation charges. Depreciation charges initially decrease over time,

  7. RF Gun Optimization Study

    International Nuclear Information System (INIS)

    Alicia Hofler; Pavel Evtushenko

    2007-01-01

    Injector gun design is an iterative process where the designer optimizes a few nonlinearly interdependent beam parameters to achieve the required beam quality for a particle accelerator. Few tools exist to automate the optimization process and thoroughly explore the parameter space. The challenging beam requirements of new accelerator applications such as light sources and electron cooling devices drive the development of RF and SRF photo injectors. A genetic algorithm (GA) has been successfully used to optimize DC photo injector designs at Cornell University [1] and Jefferson Lab [2]. We propose to apply GA techniques to the design of RF and SRF gun injectors. In this paper, we report on the initial phase of the study, where we model and optimize a system that has been benchmarked with beam measurements and simulation.
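A minimal real-coded GA of the kind referenced above can be sketched as follows. The objective is a hypothetical stand-in (distance from an assumed optimal pair of gun settings), and the operator choices (tournament selection, blend crossover, Gaussian mutation, elitism) are generic, not the particular GA used at Cornell or Jefferson Lab.

```python
import random

random.seed(1)

def fitness(p):
    """Toy stand-in for an injector figure of merit, to be minimized:
    distance from an assumed optimal (gun phase, solenoid strength) pair."""
    x, y = p
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

BOUNDS = (-5.0, 5.0)

def clip(v):
    return max(BOUNDS[0], min(BOUNDS[1], v))

def tournament(pop):
    """Pick the best of three random individuals."""
    return min(random.sample(pop, 3), key=fitness)

pop = [(random.uniform(*BOUNDS), random.uniform(*BOUNDS)) for _ in range(40)]
for gen in range(60):
    elite = min(pop, key=fitness)
    children = [elite]                       # elitism: keep the best as-is
    while len(children) < len(pop):
        a, b = tournament(pop), tournament(pop)
        w = random.random()                  # blend crossover + mutation
        child = tuple(clip(w * ai + (1 - w) * bi + random.gauss(0, 0.1))
                      for ai, bi in zip(a, b))
        children.append(child)
    pop = children

best = min(pop, key=fitness)
```

In practice each fitness evaluation would be a beam dynamics simulation rather than a closed-form function, which is precisely why thorough, automated exploration of the parameter space matters.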

  8. Analytical studies related to Indian PHWR containment system performance

    International Nuclear Information System (INIS)

    Haware, S.K.; Markandeya, S.G.; Ghosh, A.K.; Kushwaha, H.S.; Venkat Raj, V.

    1998-01-01

    Build-up of pressure in a multi-compartment containment after a postulated accident, the growth, transportation and removal of aerosols in the containment are complex processes of vital importance in deciding the source term. The release of hydrogen and its combustion increases the overpressure. In order to analyze these complex processes and to enable proper estimation of the source term, well tested analytical tools are necessary. This paper gives a detailed account of the analytical tools developed/adapted for PSA level 2 studies. (author)

  9. Accelerating SPARQL Queries and Analytics on RDF Data

    KAUST Repository

    Al-Harbi, Razen

    2016-11-09

    The complexity of SPARQL queries and RDF applications poses great challenges on distributed RDF management systems. SPARQL workloads are dynamic and consist of queries with variable complexities. Hence, systems that use static partitioning suffer from communication overhead for workloads that generate excessive communication. Concurrently, RDF applications are becoming more sophisticated, mandating analytical operations that extend beyond SPARQL queries. Being primarily designed and optimized to execute SPARQL queries, which lack procedural capabilities, existing systems are not suitable for rich RDF analytics. This dissertation tackles the problem of accelerating SPARQL queries and RDF analytics on distributed shared-nothing RDF systems. First, a distributed RDF engine, coined AdPart, is introduced. AdPart uses lightweight hash partitioning for sharding triples using their subject values, rendering its startup overhead very low. The locality-aware query optimizer of AdPart takes full advantage of the partitioning to (i) support the fully parallel processing of join patterns on subjects and (ii) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. By exploiting hash-based locality, AdPart achieves better or comparable performance to systems that employ sophisticated partitioning schemes. To cope with workload dynamism, AdPart is extended to dynamically adapt to workload changes. AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent patterns among workers. Consequently, the communication cost for future queries is drastically reduced or even eliminated. Experiments with synthetic and real data verify that AdPart starts faster than all existing systems and gracefully adapts to the query load. 
Finally, to support and accelerate rich RDF analytical tasks, a vertex-centric RDF analytics framework is
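The subject-hash sharding idea behind AdPart can be illustrated in miniature. This is a local sketch under assumed data, not AdPart's implementation: triples are assigned to workers by a stable hash of the subject, so any star join on a shared subject can be evaluated shard-by-shard with no data exchange.

```python
import hashlib
from collections import defaultdict

def worker_of(subject, n_workers):
    """Stable hash partitioning on the subject value."""
    h = int(hashlib.md5(subject.encode()).hexdigest(), 16)
    return h % n_workers

triples = [
    ("alice", "knows", "bob"),
    ("alice", "worksAt", "acme"),
    ("bob", "knows", "carol"),
    ("carol", "worksAt", "acme"),
]

N = 3
shards = defaultdict(list)
for s, p, o in triples:
    shards[worker_of(s, N)].append((s, p, o))

# A subject-subject star join (?s knows ?x . ?s worksAt ?y) is fully local:
# all triples sharing a subject live on one worker, so each worker joins
# its own shard independently and no intermediate results are shipped.
local_join = [
    (s1, o1, o2)
    for shard in shards.values()
    for (s1, p1, o1) in shard
    for (s2, p2, o2) in shard
    if s1 == s2 and p1 == "knows" and p2 == "worksAt"
]
```

The union of the per-shard joins equals the join over the whole triple set, which is what makes subject-hash partitioning attractive despite its near-zero startup cost.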

  10. Analysis and modeling of safety parameters in the selection of optimal routes for emergency evacuation after the earthquake (Case study: 13 Aban neighborhood of Tehran

    Directory of Open Access Journals (Sweden)

    Sajad Ganjehi

    2013-08-01

    Full Text Available Introduction: Earthquakes are imminent threats to urban areas of Iran, especially Tehran. They can cause extensive destruction and lead to heavy casualties. One of the most important aspects of disaster management after an earthquake is the rapid transfer of casualties to emergency shelters. To expedite the emergency evacuation process, optimal safe paths should be identified. To examine the safety of road networks and to determine the optimal route in the pre-earthquake phase, a series of parameters should be taken into account. Methods: In this study, we employed a multi-criteria decision-making approach to determine and evaluate the effective safety parameters for the selection of optimal routes in emergency evacuation after an earthquake. Results: The relationship between the parameters was analyzed and the effect of each parameter was listed. A process model was described and a case study was implemented in the 13th Aban neighborhood (Tehran's 20th municipal district). Then, an optimal path to safe places in an emergency evacuation after an earthquake in the 13th Aban neighborhood was selected. Conclusion: The analytic hierarchy process (AHP), as the main model, was employed, and each parameter of the model was described. The capabilities of GIS software, such as layer coverage, were also used. Keywords: Earthquake, emergency evacuation, Analytic Hierarchy Process (AHP), crisis management, optimization, 13th Aban neighborhood of Tehran

  11. Trends in PDE constrained optimization

    CERN Document Server

    Benner, Peter; Engell, Sebastian; Griewank, Andreas; Harbrecht, Helmut; Hinze, Michael; Rannacher, Rolf; Ulbrich, Stefan

    2014-01-01

    Optimization problems subject to constraints governed by partial differential equations (PDEs) are among the most challenging problems in the context of industrial, economical and medical applications. Almost the entire range of problems in this field of research was studied and further explored as part of the Deutsche Forschungsgemeinschaft (DFG) priority program 1253 on “Optimization with Partial Differential Equations” from 2006 to 2013. The investigations were motivated by the fascinating potential applications and challenging mathematical problems that arise in the field of PDE constrained optimization. New analytic and algorithmic paradigms have been developed, implemented and validated in the context of real-world applications. In this special volume, contributions from more than fifteen German universities combine the results of this interdisciplinary program with a focus on applied mathematics.   The book is divided into five sections on “Constrained Optimization, Identification and Control”...

  12. Analytical study of doubly excited ridge states

    International Nuclear Information System (INIS)

    Wong, H.Y.

    1988-01-01

    Two different non-separable problems are explored and analyzed. Non-perturbative methods need to be used to handle them, as the competing forces involved in these problems are equally strong and do not yield to a perturbative analysis. The first one is the study of doubly excited ridge states of atoms, in which two electrons are comparably excited. An analytical wavefunction for such states is introduced and is used to solve the two-electron Hamiltonian variationally in the pair coordinates called hyperspherical coordinates. The correlation between the electrons is built analytically into the structure of the wavefunction. Sequences of ridge states out to very high excitation are computed and are organized as Rydberg series converging to the double ionization limit. Numerical results for such states in He and H - are compared with other theoretical calculations where available. The second problem is the analysis of the photodetachment of negative ions in an electric field via the frame transformation theory. The presence of the electric field requires a transformation from spherical to cylindrical symmetry for the outgoing photoelectron. This gives an oscillatory modulating factor as the effect of the electric field on cross-sections. All of this work is derived analytically in a general form applicable to the photodetachment of any negative ion. The expressions are applied to H - and S - for illustration

  13. Big Data Analytics with Datalog Queries on Spark.

    Science.gov (United States)

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2016-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.
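The recursion support that BigDatalog compiles for Spark rests on semi-naive Datalog evaluation, which can be shown in miniature without Spark at all (this local sketch is ours, not BigDatalog's code): each round joins only the freshly derived delta against the base relation, so nothing is re-derived.

```python
def transitive_closure(edges):
    """Semi-naive evaluation of the Datalog program
       tc(X, Y) <- edge(X, Y).
       tc(X, Z) <- tc(X, Y), edge(Y, Z).
    Only the newly derived delta is joined in each round."""
    tc = set(edges)
    delta = set(edges)
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - tc      # keep only genuinely new facts
        tc |= delta
    return tc

chain = [(i, i + 1) for i in range(5)]   # 0 -> 1 -> ... -> 5
tc = transitive_closure(chain)
```

On a distributed engine the same fixed-point loop becomes iterated joins over partitioned datasets, which is exactly where the compilation and optimization techniques mentioned in the abstract pay off.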

  14. Solid Rocket Motor Design Using Hybrid Optimization

    Directory of Open Access Journals (Sweden)

    Kevin Albarado

    2012-01-01

    Full Text Available A particle swarm/pattern search hybrid optimizer was used to drive a solid rocket motor modeling code to an optimal solution. The solid motor code models tapered motor geometries using analytical burn back methods by slicing the grain into thin sections along the axial direction. Grains with circular perforated stars, wagon wheels, and dog bones can be considered, and multiple tapered sections can be constructed. The hybrid approach to optimization is capable of exploring large areas of the solution space through particle swarming, but is also able to climb “hills” of optimality through gradient-based pattern searching. A preliminary method for designing tapered internal geometry as well as tapered outer mold-line geometry is presented. A total of four optimization cases were performed. The first two case studies examine designing motors to match a given regressive-progressive-regressive burn profile. The third case study examines designing a neutrally burning right circular perforated grain (utilizing inner and external geometry tapering). The final case study examines designing a linearly regressive burn profile for right circular perforated (tapered) grains.
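The hybrid strategy above (global exploration by particle swarm, local polishing by pattern search) can be sketched on a standard multimodal test function. The 2-D Rastrigin objective, PSO coefficients, and compass-search step schedule here are illustrative assumptions, not the paper's motor code or tuning.

```python
import math
import random

random.seed(2)

def f(p):
    """Multimodal test objective (2-D Rastrigin), global minimum 0 at origin."""
    return sum(x * x - 10 * math.cos(2 * math.pi * x) + 10 for x in p)

DIM, LO, HI = 2, -5.12, 5.12

def pso(n=30, iters=150):
    """Global phase: standard inertia-weight particle swarm."""
    pos = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(n)]
    vel = [[0.0] * DIM for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(DIM):
                vel[i][d] = (0.72 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(HI, max(LO, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

def pattern_search(p, step=0.1, tol=1e-6):
    """Local phase: compass search, polling +/- step along each axis and
    shrinking the step when no poll improves."""
    p = p[:]
    while step > tol:
        improved = False
        for d in range(DIM):
            for s in (step, -step):
                q = p[:]
                q[d] += s
                if f(q) < f(p):
                    p, improved = q, True
        if not improved:
            step *= 0.5
    return p

best = pattern_search(pso())
```

The swarm locates a promising basin among Rastrigin's many local minima; the derivative-free pattern search then descends that basin to high precision, mirroring the explore-then-climb behavior described in the abstract.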

  15. A study on the optimal replacement periods of digital control computer's components of Wolsung nuclear power plant unit 1

    International Nuclear Information System (INIS)

    Mok, Jin Il; Seong, Poong Hyun

    1993-01-01

    Due to failures of the instrument and control devices of nuclear power plants caused by aging, nuclear power plants occasionally trip. Even a trip of a single nuclear power plant (NPP) causes a considerable economic loss and deteriorates public acceptance of nuclear power plants. Therefore, the replacement of the instrument and control devices with proper consideration of the aging effect is necessary in order to prevent inadvertent trips. In this paper we investigated the optimal replacement periods of the control computer's components of Wolsung nuclear power plant Unit 1. We first derived mathematical models of the optimal replacement periods for the digital control computer's components of Wolsung NPP Unit 1 and calculated the optimal replacement periods analytically. We compared these periods with the replacement periods currently used at Wolsung NPP Unit 1. The periods used at Wolsung are not based on mathematical analysis, but on empirical knowledge. As a consequence, the optimal replacement periods obtained analytically and those used in the field show only a small difference. (Author)
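A mathematical model of the kind described above can be sketched with the classical age replacement policy: replace a component at age T or at failure, whichever comes first, and choose T to minimize the long-run cost per unit time. The Weibull parameters and cost ratio below are illustrative assumptions, not Wolsung data.

```python
import math

C_PREVENTIVE = 1.0     # cost of a planned replacement (assumed)
C_FAILURE = 10.0       # cost of an in-service failure plus forced trip (assumed)
BETA, ETA = 2.5, 8.0   # Weibull wear-out shape and scale (assumed, years)

def failure_cdf(t):
    return 1.0 - math.exp(-((t / ETA) ** BETA))

def cost_rate(T, n=2000):
    """Long-run cost per unit time of replacing at age T:
    (cp * R(T) + cf * F(T)) / expected cycle length, where the expected
    cycle length is the integral of the survival function over [0, T]."""
    dt = T / n
    mean_cycle = sum((1.0 - failure_cdf((i + 0.5) * dt)) * dt for i in range(n))
    F = failure_cdf(T)
    return (C_PREVENTIVE * (1.0 - F) + C_FAILURE * F) / mean_cycle

# Grid search for the optimal replacement period.
grid = [0.5 + 0.05 * i for i in range(200)]   # 0.5 .. ~10.5 years
T_star = min(grid, key=cost_rate)
```

Because the hazard rate increases with age (beta > 1) and failures cost much more than planned replacements, a finite optimal period exists, well short of the characteristic life; with a constant hazard rate, no finite T would be optimal.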

  16. Optimal analytic method for the nonlinear Hasegawa-Mima equation

    Science.gov (United States)

    Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle

    2014-05-01

    The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10^-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.

  17. MEMS resonant load cells for micro-mechanical test frames: feasibility study and optimal design

    Science.gov (United States)

    Torrents, A.; Azgin, K.; Godfrey, S. W.; Topalli, E. S.; Akin, T.; Valdevit, L.

    2010-12-01

    This paper presents the design, optimization and manufacturing of a novel micro-fabricated load cell based on a double-ended tuning fork. The device geometry and operating voltages are optimized for maximum force resolution and range, subject to a number of manufacturing and electromechanical constraints. All optimizations are enabled by analytical modeling (verified by selected finite elements analyses) coupled with an efficient C++ code based on the particle swarm optimization algorithm. This assessment indicates that force resolutions of ~0.5-10 nN are feasible in vacuum (~1-50 mTorr), with force ranges as large as 1 N. Importantly, the optimal design for vacuum operation is independent of the desired range, ensuring versatility. Experimental verifications on a sub-optimal device fabricated using silicon-on-glass technology demonstrate a resolution of ~23 nN at a vacuum level of ~50 mTorr. The device demonstrated in this article will be integrated in a hybrid micro-mechanical test frame for unprecedented combinations of force resolution and range, displacement resolution and range, optical (or SEM) access to the sample, versatility and cost.

  18. An analytical method to determine the optimal size of a photovoltaic plant

    Energy Technology Data Exchange (ETDEWEB)

    Barra, L; Catalanotti, S; Fontana, F; Lavorante, F

    1984-01-01

    In this paper, a simplified method for the optimal sizing of a photovoltaic system is presented. The results have been obtained for Italian meteorological data, but the methodology can be applied to any geographical area. The system studied is composed of a photovoltaic array, power tracker, battery storage, inverter and load. Computer simulation was used to obtain the performance of this system for many values of field area, battery storage, solar flux and load, while keeping the efficiencies constant. A simple fit was used to derive a formula relating the system variables to the performance. Finally, formulae for the optimal values of the field area and the battery storage are given.
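
    The paper derives its sizing formulae from fitted simulation results; a minimal stand-in for that workflow, with an invented daily energy-balance model and invented cost coefficients, might look like this (note the toy simplification that all energy is routed through the battery):

    ```python
    def served_fraction(area_m2, battery_kwh, flux, load, eff=0.12):
        """Daily energy balance of array + battery; returns fraction of load served.
        `flux` is daily insolation (kWh/m2) and `load` daily demand (kWh);
        efficiencies are held constant, as in the paper's simulations."""
        soc = battery_kwh / 2.0                    # start half charged
        served = 0.0
        for f, d in zip(flux, load):
            soc = min(soc + area_m2 * f * eff, battery_kwh)  # charge, clip at capacity
            delivered = min(soc, d)                # all energy passes through the battery
            soc -= delivered
            served += delivered
        return served / sum(load)

    def optimal_size(flux, load, target=0.95, area_cost=1.0, batt_cost=0.5):
        """Brute-force search for the cheapest (area, battery) pair meeting the
        target service fraction; cost coefficients are invented placeholders."""
        best = None
        for area in range(1, 51):
            for batt in range(0, 51):
                if served_fraction(area, batt, flux, load) >= target:
                    cost = area * area_cost + batt * batt_cost
                    if best is None or cost < best[0]:
                        best = (cost, area, batt)
        return best

    # A month of idealized constant sun and constant load (purely illustrative):
    best = optimal_size([5.0] * 30, [2.0] * 30)
    ```

    The paper's contribution is precisely to replace such brute-force sweeps with closed-form formulae fitted to the simulated performance surface.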

  19. Optimal contracts for wind power producers in electricity markets

    KAUST Repository

    Bitar, E.

    2010-12-01

    This paper is focused on optimal contracts for an independent wind power producer in conventional electricity markets. Starting with a simple model of the uncertainty in the production of power from a wind turbine farm and a model for the electric energy market, we derive analytical expressions for optimal contract size and corresponding expected optimal profit. We also address problems involving overproduction penalties, cost of reserves, and utility of additional sensor information. We obtain analytical expressions for marginal profits from investing in local generation and energy storage. ©2010 IEEE.
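
    In simplified settings of this kind, the analytical expression for the optimal contract size is a critical-fractile (newsvendor) quantile of the production distribution. The sketch below assumes a deliberately stripped-down market, price p per contracted unit delivered and penalty s per unit of shortfall; the actual model in the paper has more structure.

    ```python
    def optimal_contract(production_samples, price, shortfall_penalty):
        """Expected-profit-maximizing contract size C for the profit
        p*min(w, C) - s*(C - w)^+ : the p/(p+s) quantile of production w.
        Empirical quantile from samples; a sketch, not the paper's full model."""
        fractile = price / (price + shortfall_penalty)
        xs = sorted(production_samples)
        k = min(len(xs) - 1, int(fractile * len(xs)))
        return xs[k]

    # Illustrative: uniformly spread production samples and a penalty three
    # times the energy price -> commit to the 25th-percentile output.
    C_star = optimal_contract(list(range(101)), price=1.0, shortfall_penalty=3.0)
    ```

    Differentiating E[p·min(w, C) − s·(C − w)⁺] in C gives p(1 − F(C)) − sF(C) = 0, hence F(C*) = p/(p+s): a harsher shortfall penalty pushes the optimal commitment toward lower quantiles of the wind distribution.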

  20. Analytical fuzzy approach to biological data analysis

    Directory of Open Access Journals (Sweden)

    Weiping Zhang

    2017-03-01

    Full Text Available The assessment of the physiological state of an individual requires an objective evaluation of biological data while taking into account both measurement noise and uncertainties arising from individual factors. We suggest representing multi-dimensional medical data by means of an optimal fuzzy membership function. A carefully designed data model is introduced in a completely deterministic framework where uncertain variables are characterized by fuzzy membership functions. The study derives the analytical expressions of fuzzy membership functions on variables of the multivariate data model by maximizing the over-uncertainties-averaged log-membership values of data samples around an initial guess. The analytical solution lends itself to a practical modeling algorithm facilitating the data classification. Experiments performed on the heartbeat interval data of 20 subjects verified that the proposed method is a competitive alternative to commonly used pattern recognition and machine learning algorithms.

  1. Optimal Control and Optimization of Stochastic Supply Chain Systems

    CERN Document Server

    Song, Dong-Ping

    2013-01-01

    Optimal Control and Optimization of Stochastic Supply Chain Systems examines its subject in the context of the presence of a variety of uncertainties. Numerous examples with intuitive illustrations and tables are provided, to demonstrate the structural characteristics of the optimal control policies in various stochastic supply chains and to show how to make use of these characteristics to construct easy-to-operate sub-optimal policies. In Part I, a general introduction to stochastic supply chain systems is provided. Analytical models for various stochastic supply chain systems are formulated and analysed in Part II. In Part III the structural knowledge of the optimal control policies obtained in Part II is utilized to construct easy-to-operate sub-optimal control policies for various stochastic supply chain systems accordingly. Finally, Part IV discusses the optimisation of threshold-type control policies and their robustness. A key feature of the book is its tying together of ...

  2. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 2. Case study

    Science.gov (United States)

    Graham, Wendy D.; Neff, Christina R.

    1994-05-01

    The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1 is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated, when the effects of random recharge are neglected.

  3. A Web-Based Geovisual Analytical System for Climate Studies

    Directory of Open Access Journals (Sweden)

    Zhenlong Li

    2012-12-01

    Full Text Available Climate studies involve petabytes of spatiotemporal datasets that are produced and archived at distributed computing resources. Scientists need an intuitive and convenient tool to explore the distributed spatiotemporal data. Geovisual analytical tools have the potential to provide such an intuitive and convenient method for scientists to access climate data, discover the relationships between various climate parameters, and communicate the results across different research communities. However, implementing a geovisual analytical tool for complex climate data in a distributed environment poses several challenges. This paper reports our research and development of a web-based geovisual analytical system to support the analysis of climate data generated by a climate model. Using the ModelE developed by the NASA Goddard Institute for Space Studies (GISS) as an example, we demonstrate that the system is able to (1) manage large-volume datasets over the Internet; (2) visualize 2D/3D/4D spatiotemporal data; (3) broker various spatiotemporal statistical analyses for climate research; and (4) support interactive data analysis and knowledge discovery. This research also provides an example for managing, disseminating, and analyzing Big Data in the 21st century.

  4. Optimization and development of analytical methods for the determination of new brominated flame retardants and polybrominated diphenyl ethers in sediments and suspended particulate matter

    Energy Technology Data Exchange (ETDEWEB)

    Lopez, P. [VU University Amsterdam, Institute for Environmental Studies (IVM), Amsterdam (Netherlands); Institute for Reference Materials and Measurements, European Commission, Joint Research Centre, Retieseweg 111, 2440, Geel (Belgium); Brandsma, S.A.; Leonards, P.E.G.; Boer, J. de [VU University Amsterdam, Institute for Environmental Studies (IVM), Amsterdam (Netherlands)

    2011-05-15

    With more stringent legislation on brominated flame retardants, it is expected that increasing amounts of substitutes would replace polybrominated diphenylethers (PBDEs). Therefore, the development and optimization of analytical methodologies that allow their identification and quantification are of paramount relevance. This work describes the optimization of an analytical procedure to determine pentabromochlorocyclohexane, tetrabromo-o-chlorotoluene, 2,3,5,6-tetrabromo-p-xylene, tetrabromophthalic anhydride, 2,3,4,5,6-pentabromotoluene, tris(2,3-dibromopropyl)phosphate, decabromodiphenylethane and 1,2-bis(2,4,6-tribromophenoxy)ethane together with PBDEs in sediments and in suspended particulate matter. This method comprises a pressurized liquid extraction followed by three cleanup steps (gel permeation chromatography and solid phase extraction on Oasis™ HLB and on silica cartridges). Gas chromatography-mass spectrometry, using electron capture negative chemical ionization, is used for the final analysis. The proposed method provides recoveries >85%. The method was applied to sediment and suspended particulate matter samples from different locations in the Western Scheldt estuary (the Netherlands). To the best of our knowledge, this is the first time that the occurrence of the additive flame retardants 2,3,5,6-tetrabromo-p-xylene, 3,4,5,6-tetrabromo-o-chlorotoluene and 2,3,4,5,6-pentabromochlorocyclohexane is reported in the literature. The concentrations of these new flame retardants ranged from 0.05 to 0.30 μg/kg dry weight. (orig.)

  5. Optimal fuel inventory strategies

    International Nuclear Information System (INIS)

    Caspary, P.J.; Hollibaugh, J.B.; Licklider, P.L.; Patel, K.P.

    1990-01-01

    In an effort to maintain their competitive edge, most utilities are reevaluating many of their conventional practices and policies to further minimize customer revenue requirements without sacrificing system reliability. Over the past several years, Illinois Power has been rethinking its traditional fuel inventory strategies, recognizing that coal supplies are competitive and plentiful and that carrying charges on inventory are expensive. To help the Company achieve one of its strategic corporate goals, an optimal fuel inventory study was performed for its five major coal-fired generating stations. The purpose of this paper is to briefly describe Illinois Power's system and past practices concerning coal inventories, highlight the analytical process behind the optimal fuel inventory study, and discuss some of the recent experiences affecting coal deliveries and economic dispatch

  6. Study progression in application of process analytical technologies on film coating

    Directory of Open Access Journals (Sweden)

    Tingting Peng

    2015-06-01

    Full Text Available Film coating is an important unit operation in the production of solid dosage forms; monitoring this process therefore helps to identify problems in time and improve the quality of coated products. Traditional methods for monitoring this process include measuring coating weight gain and performing disintegration and dissolution tests. However, these methods not only destroy the samples but also consume time and energy. Recently, process analytical technologies (PAT) have been applied to film coating, especially novel spectroscopic and imaging technologies, which have the potential to track the progress of film coating in real time and optimize production efficiency. This article gives an overview of the application of such technologies to film coating, with the goal of providing a reference for further research.

  7. Multimedia Pivot Tables for Multimedia Analytics on Image Collections

    OpenAIRE

    Worring, M.; Koelma, D.; Zahálka, J.

    2016-01-01

    We propose a multimedia analytics solution for getting insight into image collections by extending the powerful analytic capabilities of pivot tables, found in the ubiquitous spreadsheets, to multimedia. We formalize the concept of multimedia pivot tables and give design rules and methods for the multimodal summarization, structuring, and browsing of the collection based on these tables, all optimized to support an analyst in getting structural and conclusive insights. Our proposed solution p...

  8. Optimization of fracture length in gas/condensate reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Mohan, J.; Sharma, M.M.; Pope, G.A. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Texas Univ., Austin, TX (United States)

    2006-07-01

    A common practice that improves the productivity of gas-condensate reservoirs is hydraulic fracturing. Two important variables that determine the effectiveness of hydraulic fractures are fracture length and fracture conductivity. Although there are no simple guidelines for the optimization of fracture length and the factors that affect it, it is preferable to have an optimum fracture length for a given proppant volume in order to maximize productivity. An optimization study was presented in which fracture length was estimated at wells where productivity was maximized. An analytical expression that takes into account non-Darcy flow and condensate banking was derived. This paper also reviewed the hydraulic fracturing process and discussed previous simulation studies that investigated the effects of well spacing and fracture length on well productivity in low permeability gas reservoirs. The compositional simulation study and results and discussion were also presented. The analytical expression for optimum fracture length, analytical expression with condensate dropout, and equations for the optimum fracture length with non-Darcy flow in the fracture were included in an appendix. The Computer Modeling Group's GEM simulator, an equation-of-state compositional simulator, was used in this study. It was concluded that for cases with non-Darcy flow, the optimum fracture lengths are lower than those obtained with Darcy flow. 18 refs., 5 tabs., 22 figs., 1 appendix.

  9. Optimization of DNA Sensor Model Based Nanostructured Graphene Using Particle Swarm Optimization Technique

    Directory of Open Access Journals (Sweden)

    Hediyeh Karimi

    2013-01-01

    Full Text Available It has been predicted that graphene nanomaterials will be among the candidate materials for post-silicon electronics due to their astonishing properties, such as high carrier mobility, thermal conductivity, and biocompatibility. Graphene is a zero-gap semimetal with a demonstrated ability to serve as an excellent candidate for DNA sensing. Graphene-based DNA sensors have been used to detect DNA adsorption and to examine the DNA concentration in an analyte solution. In particular, there is an essential need to develop cost-effective DNA sensors, given their suitability for the diagnosis of genetic or pathogenic diseases. In this paper, the particle swarm optimization technique is employed to optimize the analytical model of a graphene-based DNA sensor used for electrical detection of DNA molecules. The results are reported for 5 different concentrations, covering a range from 0.01 nM to 500 nM. The comparison of the optimized model with the experimental data shows an accuracy of more than 95%, which verifies that the optimized model is reliable for use in any application of the graphene-based DNA sensor.
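
    A minimal particle swarm optimizer in the spirit of the abstract can be written in a few dozen lines. The sensor model itself is not reproduced here, so the example below fits a hypothetical two-parameter response curve to synthetic, noise-free data instead; all names and parameter values are illustrative.

    ```python
    import random

    def pso(loss, dim, n=30, iters=300, lo=-10.0, hi=10.0,
            w=0.7, c1=1.5, c2=1.5, seed=1):
        """Minimal global-best particle swarm optimizer (a generic sketch,
        not the variant used in the paper)."""
        rng = random.Random(seed)
        xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
        vs = [[0.0] * dim for _ in range(n)]
        pb = [x[:] for x in xs]                          # personal bests
        pb_val = [loss(x) for x in xs]
        g_val = min(pb_val)                              # global best
        g = pb[pb_val.index(g_val)][:]
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vs[i][d] = (w * vs[i][d]
                                + c1 * r1 * (pb[i][d] - xs[i][d])
                                + c2 * r2 * (g[d] - xs[i][d]))
                    xs[i][d] += vs[i][d]
                val = loss(xs[i])
                if val < pb_val[i]:
                    pb[i], pb_val[i] = xs[i][:], val
                    if val < g_val:
                        g, g_val = xs[i][:], val
        return g, g_val

    # Hypothetical example: recover (a, b) of a linear response y = a*x + b
    # from synthetic observations by minimizing the squared residual.
    data = [(x, 2.0 * x + 1.0) for x in range(10)]
    fit, err = pso(lambda p: sum((p[0] * x + p[1] - y) ** 2 for x, y in data), dim=2)
    ```

    Swapping the toy loss for the residual between a device model and measured sensor data gives the optimization loop the abstract describes.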

  10. Case Study : Visual Analytics in Software Product Assessments

    NARCIS (Netherlands)

    Telea, Alexandru; Voinea, Lucian; Lanza, M; Storey, M; Muller, H

    2009-01-01

    We present how a combination of static source code analysis, repository analysis, and visualization techniques has been used to effectively gain and communicate insight into the development and project management problems of a large industrial code base. This study is an example of how visual analytics

  11. CENTRAL PLATEAU REMEDIATION OPTIMIZATION STUDY

    Energy Technology Data Exchange (ETDEWEB)

    BERGMAN, T. B.; STEFANSKI, L. D.; SEELEY, P. N.; ZINSLI, L. C.; CUSACK, L. J.

    2012-09-19

    The Central Plateau remediation optimization study was conducted to develop an optimal sequence of remediation activities implementing the CERCLA decision on the Central Plateau. The study defines a sequence of activities that results in an effective use of resources from a strategic perspective when considering equipment procurement and staging, workforce mobilization/demobilization, workforce leveling, workforce skill mix, and other remediation/disposition project execution parameters.

  12. Optimal Homogenization of Perfusion Flows in Microfluidic Bio-Reactors: A Numerical Study

    DEFF Research Database (Denmark)

    Okkels, Fridolin; Dufva, Martin; Bruus, Henrik

    2011-01-01

    In recent years, the interest in small-scale bio-reactors has increased dramatically. To ensure homogeneous conditions within the complete area of perfused microfluidic bio-reactors, we develop a general design of a continuously fed bio-reactor with uniform perfusion flow. This is achieved by introducing a specific type of perfusion inlet to the reaction area. The geometry of these inlets is found using the methods of topology optimization and shape optimization. The results are compared with two different analytic models, from which a general parametric description of the design is obtained and tested numerically. Such a parametric description will generally be beneficial for the design of a broad range of microfluidic bio-reactors used for, e.g., cell culturing and analysis and in feeding bio-arrays.

  13. In vitro placental model optimization for nanoparticle transport studies

    Directory of Open Access Journals (Sweden)

    Cartwright L

    2012-01-01

    Full Text Available Laura Cartwright1, Marie Sønnegaard Poulsen2, Hanne Mørck Nielsen3, Giulio Pojana4, Lisbeth E Knudsen2, Margaret Saunders1, Erik Rytting2,5; 1Bristol Initiative for Research of Child Health (BIRCH), Biophysics Research Unit, St Michael's Hospital, UH Bristol NHS Foundation Trust, Bristol, UK; 2University of Copenhagen, Faculty of Health Sciences, Department of Public Health; 3University of Copenhagen, Faculty of Pharmaceutical Sciences, Department of Pharmaceutics and Analytical Chemistry, Copenhagen, Denmark; 4Department of Environmental Sciences, Informatics and Statistics, University Ca' Foscari Venice, Venice, Italy; 5Department of Obstetrics and Gynecology, University of Texas Medical Branch, Galveston, Texas, USA. Background: Advances in biomedical nanotechnology raise hopes in patient populations but may also raise questions regarding biodistribution and biocompatibility, especially during pregnancy. Special consideration must be given to the placenta as a biological barrier because a pregnant woman's exposure to nanoparticles could have significant effects on the fetus developing in the womb. Therefore, the purpose of this study is to optimize an in vitro model for characterizing the transport of nanoparticles across human placental trophoblast cells. Methods: The growth of BeWo (clone b30) human placental choriocarcinoma cells for nanoparticle transport studies was characterized in terms of optimized Transwell® insert type and pore size, the investigation of barrier properties by transmission electron microscopy, tight junction staining, transepithelial electrical resistance, and fluorescein sodium transport. Following the determination of nontoxic concentrations of fluorescent polystyrene nanoparticles, the cellular uptake and transport of 50 nm and 100 nm diameter particles was measured using the in vitro BeWo cell model. Results: Particle size measurements, fluorescence readings, and confocal microscopy indicated both cellular uptake of

  14. Optimization of rotational radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Tulovsky, Vladimir; Ringor, Michael; Papiez, Lech

    1995-01-01

    Purpose: Rotational therapy treatment planning for rotationally symmetric geometry of tumor and healthy tissue provides an important example of testing various approaches to optimizing dose distributions for therapeutic x-ray irradiations. In this article, dose distribution optimization is formulated as a variational problem. This problem is solved analytically and numerically. Methods and Materials: The classical Lagrange method is used to derive equations and inequalities that give necessary conditions for minimizing the mean-square deviation between the ideal dose distribution and the achievable dose distribution. The solution of the resulting integral equation with Cauchy kernel is used to derive analytical formulas for the minimizing irradiation intensity function. Results: The solutions are evaluated numerically and the graphs of the minimizing intensity functions and the corresponding dose distributions are presented. Conclusions: The optimal solutions obtained using the mean-square criterion lead to significant underdosage in some areas of the tumor volume. Possible solutions to this shortcoming are investigated and medically more appropriate criteria for optimization are proposed for future investigations

  15. DESIGN OPTIMIZATION AND EXPERIMENTAL STUDY ON THE BLOWER FOR FLUFFS COLLECTION SYSTEM

    Directory of Open Access Journals (Sweden)

    C. N. JAYAPRAGASAN

    2017-05-01

    Full Text Available Centrifugal fans play an important role in the fluff collection systems of industrial cleaners. It has therefore become necessary to study the parameters that influence blower performance. The parameters chosen for optimization are the fan outer diameter, the number of blades and the fan blade angle. Taguchi's orthogonal array method helps to find the optimum number of cases, and the modelling was carried out using SOLIDWORKS. ICEM CFD was used for meshing the blowers, which were analysed using FLUENT. In this study, analytical results are compared with experimental values. ANOVA is used to find the percentage contribution of each parameter to the output. Using Minitab software, the optimum combination is identified. The result shows that the optimum combination is a 190 mm outer diameter, an 80° blade angle and 8 blades.
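
    The percentage contributions reported from ANOVA can be computed directly from the factor-level sums of squares. The sketch below uses a made-up two-factor, two-level example; the level labels echo the abstract's parameters, but the response values are synthetic.

    ```python
    def percent_contribution(factors, response):
        """Taguchi-style ANOVA percent contribution of each factor:
        SS_factor / SS_total, where SS_factor sums n_level * (level mean -
        grand mean)^2 over the levels of that factor. `factors[i]` lists the
        level of factor i in each experimental run."""
        grand = sum(response) / len(response)
        ss_total = sum((y - grand) ** 2 for y in response)
        out = []
        for col in factors:
            ss = 0.0
            for lv in set(col):
                ys = [y for l, y in zip(col, response) if l == lv]
                mean = sum(ys) / len(ys)
                ss += len(ys) * (mean - grand) ** 2
            out.append(100.0 * ss / ss_total)
        return out

    # Hypothetical L4 design: two 2-level factors, synthetic response values.
    diameter = [170, 170, 190, 190]   # mm (labels only illustrative)
    blades   = [6, 8, 6, 8]
    flow     = [10.0, 11.0, 14.0, 15.0]
    contrib = percent_contribution([diameter, blades], flow)
    ```

    In this perfectly additive toy data the diameter dominates (about 94%) and the contributions sum to 100; with real data the remainder is attributed to interactions and error.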

  16. Analytical stiffness matrices with Green-Lagrange strain measure

    DEFF Research Database (Denmark)

    Pedersen, Pauli

    2005-01-01

    Separating the dependence on material and stress/strain state from the dependence on initial geometry, we obtain analytical secant and tangent stiffness matrices. For the case of a linear displacement triangle with uniform thickness and uniform constitutive behaviour, closed-form results are listed for a solution based on the Green-Lagrange strain measure. The approach is especially useful in design optimization, because analytical sensitivity analysis can then be performed. The case of a three-node triangular ring element for axisymmetric analysis involves small modifications and extension to four-node...

  17. Solar sail time-optimal interplanetary transfer trajectory design

    International Nuclear Information System (INIS)

    Gong Shengpin; Gao Yunfeng; Li Junfeng

    2011-01-01

    The fuel consumption associated with some interplanetary transfer trajectories using chemical propulsion is not affordable. A solar sail is a method of propulsion that does not consume fuel. Transfer time is one of the most pressing problems of solar sail transfer trajectory design. This paper investigates the time-optimal interplanetary transfer trajectories to a circular orbit of given inclination and radius. The optimal control law is derived from the maximum principle. An indirect method is used to solve the optimal control problem by selecting values for the initial adjoint variables, which are normalized within a unit sphere. The conditions for the existence of the time-optimal transfer are dependent on the lightness number of the sail and the inclination and radius of the target orbit. A numerical method is used to obtain the boundary values for the time-optimal transfer trajectories. For the cases where no time-optimal transfer trajectories exist, first-order necessary conditions of the optimal control are proposed to obtain feasible solutions. The results show that the transfer time decreases as the minimum distance from the Sun decreases during the transfer duration. For a solar sail with a small lightness number, the transfer time may be evaluated analytically for a three-phase transfer trajectory. The analytical results are compared with previous results and the associated numerical results. The transfer time of the numerical result here is smaller than the transfer time from previous results and is larger than the analytical result.

  18. Optimization of Enzymatic Hydrolysis of Waste Bread before Fermentation

    OpenAIRE

    Hudečková, Helena; Šupinová, Petra; Ing. Mgr. Libor Babák, Ph.D., MBA

    2017-01-01

    Finding optimal hydrolysis conditions is important for increasing the yield of saccharides, and a higher saccharide yield improves the effectiveness of the subsequent fermentation. In this study, the optimal conditions (pH and temperature) for amylolytic enzymes were sought. Waste bread was used as the raw material. Two analytical methods were used: the efficiency and progress of hydrolysis were analysed spectrophotometrically by the Somogyi-Nelson method. Final yields of glucose were...

  19. Investigations of phosphate coatings of galvanized steel sheets by a surface-analytical multi-method approach

    International Nuclear Information System (INIS)

    Bubert, H.; Garten, R.; Klockenkaemper, R.; Puderbach, H.

    1983-01-01

    Corrosion protective coatings on galvanized steel sheets have been studied by a combination of SEM, EDX, AES, ISS and SIMS. Analytical statements concerning such rough, poly-crystalline and contaminated surfaces of technical samples are quite difficult to obtain. The use of a surface-analytical multi-method approach overcomes the intrinsic limitations of the individual methods applied, thus resulting in a consistent picture of these technical surfaces. Such results can be used to examine technical faults and to optimize the technical process. (Author)

  20. Query optimization for graph analytics on linked data using SPARQL

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Seokyong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Sangkeun [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lim, Seung -Hwan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sukumar, Sreenivas R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vatsavai, Ranga Raju [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-07-01

    Triplestores that support query languages such as SPARQL are emerging as the preferred and scalable solution to represent data and meta-data as massive heterogeneous graphs using Semantic Web standards. With increasing adoption, the desire to conduct graph-theoretic mining and exploratory analysis has also increased. Addressing that desire, this paper presents a solution that is the marriage of Graph Theory and the Semantic Web. We present software that can analyze Linked Data using graph operations such as counting triangles, finding eccentricity, testing connectedness, and computing PageRank directly on triple stores via the SPARQL interface. We describe the process of optimizing performance of the SPARQL-based implementation of such popular graph algorithms by reducing the space-overhead, simplifying iterative complexity and removing redundant computations by understanding query plans. Our optimized approach shows significant performance gains on triplestores hosted on stand-alone workstations as well as hardware-optimized scalable supercomputers such as the Cray XMT.

  1. The Optimal Operation Criteria for a Gas Turbine Cogeneration System

    Directory of Open Access Journals (Sweden)

    Atsushi Akisawa

    2009-04-01

    Full Text Available The study demonstrated the optimal operation criteria of a gas turbine cogeneration system based on the analytical solution of a linear programming model. The optimal operation criteria gave the combination of equipment to supply electricity and steam at the minimum energy cost, using the energy prices and the performance of the equipment. A comparison with a detailed optimization result for an existing cogeneration plant showed that the optimal operation criteria successfully provided a direction for system operation under the condition where the electric power output of the gas turbine was less than its capacity
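
    The flavor of such operation criteria can be illustrated by comparing the marginal costs of the two extreme operating modes of a simple cogeneration plant. All efficiencies and prices below are invented, and a real linear program, like the one in the study, would also cover intermediate loadings and capacity limits.

    ```python
    def cheapest_mode(elec_demand, steam_demand, price_elec, price_gas,
                      gt_elec_eff=0.30, gt_steam_eff=0.45, boiler_eff=0.90):
        """Compare two extreme operating modes (hypothetical efficiencies):
          'gt'  -- run the gas turbine to meet steam demand, buy any
                   residual electricity from the grid
          'buy' -- buy all electricity, raise steam in a gas boiler
        Returns (mode, cost); energy in kWh, prices per kWh."""
        # Mode 'buy': boiler fuel for steam plus grid electricity.
        cost_buy = steam_demand / boiler_eff * price_gas + elec_demand * price_elec
        # Mode 'gt': fuel sized by steam demand; co-produced electricity
        # offsets grid purchases.
        fuel = steam_demand / gt_steam_eff
        elec_made = fuel * gt_elec_eff
        cost_gt = (fuel * price_gas
                   + max(0.0, elec_demand - elec_made) * price_elec)
        return ('gt', cost_gt) if cost_gt < cost_buy else ('buy', cost_buy)
    ```

    The decision flips with the electricity-to-gas price ratio, which is exactly the kind of threshold criterion the analytical LP solution makes explicit.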

  2. A new optimized self-firing mos-thyristor device

    Energy Technology Data Exchange (ETDEWEB)

    Breil, M.; Sanchez, J.L.; Austin, P.; Laur, J.P.

    1998-12-01

    In this paper, a new integrated self-firing, controlled turn-off MOS-thyristor structure is investigated. An analytical model describing the turn-off operation and parasitic latch-up has been developed, making it possible to highlight and optimize the physical and geometrical parameters acting upon the main electrical characteristics. The analytical model is validated by 2D simulations using PISCES. The technological fabrication process is optimized by 2D simulations using SUPREM IV. Electrical characterization results of fabricated test structures are presented. (authors) 6 refs.

  3. AN ANALYTICAL STUDY OF SWITCHING TRACTION MOTORS

    Directory of Open Access Journals (Sweden)

    V. M. Bezruchenko

    2010-03-01

    Full Text Available An analytical study of commutation in the traction motors of electric locomotives is conducted. The obtained curves of current variation in the commutated sections are found to correspond to the theory of average rectilinear commutation. By means of the proposed method it is possible, at the design stage of traction motors, to forecast the quality of commutation and to correct it in good time.

  4. Multivariate optimization of an analytical method for the analysis of dog and cat foods by ICP OES.

    Science.gov (United States)

    da Costa, Silvânio Silvério Lopes; Pereira, Ana Cristina Lima; Passos, Elisangela Andrade; Alves, José do Patrocínio Hora; Garcia, Carlos Alexandre Borges; Araujo, Rennan Geovanny Oliveira

    2013-04-15

    Experimental design methodology was used to optimize an analytical method for determination of the mineral element composition (Al, Ca, Cd, Cr, Cu, Ba, Fe, K, Mg, Mn, P, S, Sr and Zn) of dog and cat foods. Two-level full factorial design was applied to define the optimal proportions of the reagents used for microwave-assisted sample digestion (2.0 mol L(-1) HNO3 and 6% m/v H2O2). A three-level factorial design for two variables was used to optimize the operational conditions of the inductively coupled plasma optical emission spectrometer, employed for analysis of the extracts. A radiofrequency power of 1.2 kW and a nebulizer argon flow of 1.0 L min(-1) were selected. The limits of quantification (LOQ) were between 0.03 μg g(-1) (Cr, 267.716 nm) and 87 μg g(-1) (Ca, 373.690 nm). The trueness of the optimized method was evaluated by analysis of five certified reference materials (CRMs): wheat flour (NIST 1567a), bovine liver (NIST 1577), peach leaves (NIST 1547), oyster tissue (NIST 1566b), and fish protein (DORM-3). The recovery values obtained for the CRMs were between 80 ± 4% (Cr) and 117 ± 5% (Cd), with relative standard deviations (RSDs) better than 5%, demonstrating that the proposed method offered good trueness and precision. Ten samples of pet food (five each of cat and dog food) were acquired at supermarkets in Aracaju city (Sergipe State, Brazil). Concentrations in the dog food ranged between 7.1 mg kg(-1) (Ba) and 2.7 g kg(-1) (Ca), while for cat food the values were between 3.7 mg kg(-1) (Ba) and 3.0 g kg(-1) (Ca). The concentrations of Ca, K, Mg, P, Cu, Fe, Mn, and Zn in the food were compared with the guidelines of the United States' Association of American Feed Control Officials (AAFCO) and the Brazilian Ministry of Agriculture, Livestock, and Food Supply (Ministério da Agricultura, Pecuária e Abastecimento-MAPA). Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Analytical study of synchronization in spin-transfer-driven magnetization dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bonin, Roberto [Politecnico di Torino - sede di Verres, via Luigi Barone 8, I-11029 Verres (Italy); Bertotti, Giorgio; Bortolotti, Paolo [Istituto Nazionale di Ricerca Metrologica, Strada delle Cacce 91, I-10135 Torino (Italy); Serpico, Claudio [Dipartimento di Ingegneria Elettrica, Universita di Napoli 'Federico II', via Claudio 21, I-80125 Napoli (Italy); D'Aquino, Massimiliano [Dipartimento per le Tecnologie, Universita di Napoli 'Parthenope', via Medina 40, I-80133 Napoli (Italy); Mayergoyz, Isaak D, E-mail: p.bortolotti@inrim.i [Electrical and Computer Engineering Department and UMIACS, University of Maryland, College Park MD 20742 (United States)

    2010-01-01

    An analytical study of the synchronization effects in spin-transfer-driven nanomagnets subjected to either microwave magnetic fields or microwave electrical currents is discussed. Appropriate stability diagrams are constructed and the conditions under which the current-induced magnetization precession is synchronized by the microwave external excitation are derived and discussed. Analytical predictions are given for the existence of phase-locking effects in current-induced magnetization precessions and for the occurrence of hysteresis in phase-locking as a function of the spin-polarized current.

  6. Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China.

    Science.gov (United States)

    Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li'an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling

    2016-03-01

    A nationwide multicenter study was conducted in China to explore sources of variation of reference values and establish reference intervals for 28 common biochemical analytes, as a part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area in China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. Need for partition of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box-Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. The ANOVA showed significant sex-related and age-related differences for 12 analytes each. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in the results of 9 analytes for men and 6 for women. Reference intervals of 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established by use of the latest methodology. Reference intervals of 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas the others can be used as assay system-specific reference intervals in China.
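The final parametric step (transform toward normality, take mean ± 1.96 SD, back-transform) can be sketched as follows. The data are synthetic and right-skewed, the Box-Cox exponent is fixed rather than fitted, and no latent abnormal values exclusion is applied, so this illustrates only the interval computation under stated assumptions.

```python
import math
import random
import statistics

random.seed(1)
# Synthetic right-skewed analyte values (e.g. a liver enzyme in U/L); these
# are NOT data from the study.
values = [math.exp(random.gauss(3.0, 0.35)) for _ in range(300)]

LAM = 0.5  # illustrative Box-Cox exponent (the study fits a modified form)

def boxcox(x):
    return math.log(x) if LAM == 0 else (x ** LAM - 1) / LAM

def inv_boxcox(y):
    return math.exp(y) if LAM == 0 else (LAM * y + 1) ** (1 / LAM)

t = [boxcox(v) for v in values]
mu, sd = statistics.fmean(t), statistics.stdev(t)
# Central 95% interval on the near-Gaussian transformed scale, back-transformed.
lower, upper = inv_boxcox(mu - 1.96 * sd), inv_boxcox(mu + 1.96 * sd)
print(round(lower, 1), round(upper, 1))
```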

  7. Automatic differentiation for gradient-based optimization of radiatively heated microelectronics manufacturing equipment

    Energy Technology Data Exchange (ETDEWEB)

    Moen, C.D.; Spence, P.A.; Meza, J.C.; Plantenga, T.D.

    1996-12-31

    Automatic differentiation is applied to the optimal design of microelectronic manufacturing equipment. The performance of nonlinear, least-squares optimization methods is compared between numerical and analytical gradient approaches. The optimization calculations are performed by running large finite-element codes in an object-oriented optimization environment. The Adifor automatic differentiation tool is used to generate analytic derivatives for the finite-element codes. The performance results support previous observations that automatic differentiation becomes beneficial as the number of optimization parameters increases. The increase in speed relative to numerical differences has a limited value; results are reported for two different analysis codes.
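The contrast between analytic and numerical gradients can be illustrated with a toy forward-mode automatic differentiation sketch (operator overloading in Python, not Adifor's Fortran source transformation): the AD derivative is exact to machine precision, while the forward difference carries truncation error.

```python
class Dual:
    """Toy forward-mode AD number carrying (value, derivative) together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def f(x):  # stand-in for one residual of a least-squares objective
    return 3 * x * x + 2 * x + 1

exact = f(Dual(2.0, 1.0)).dot        # AD: exact f'(2) in a single pass
h = 1e-6
numeric = (f(2.0 + h) - f(2.0)) / h  # forward difference: truncation error
print(exact, numeric)                # 14.0 vs roughly 14.000003
```

For one parameter the difference is negligible; the record's point is that with many parameters the repeated function evaluations of finite differencing become the bottleneck that AD avoids.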

  8. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2005-01-01

    Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.
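A minimal instance of the stochastic approximation methods listed above is the Robbins-Monro iteration with diminishing steps; the quadratic objective and noise model below are illustrative.

```python
import random

random.seed(0)
# Robbins-Monro stochastic approximation: minimize E[(x - theta)^2] when only
# noisy gradient samples are available. THETA is the unknown optimum
# (hypothetical); the diminishing steps satisfy sum a_k = inf, sum a_k^2 < inf.
THETA = 4.0

def noisy_gradient(x):
    sample = THETA + random.gauss(0.0, 1.0)  # one random-parameter realization
    return 2.0 * (x - sample)                # stochastic gradient of the loss

x = 0.0
for k in range(1, 5001):
    x -= (1.0 / k) * noisy_gradient(x)       # step sizes a_k = 1/k
print(round(x, 2))  # converges toward THETA = 4.0
```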

  9. A Coflow-based Co-optimization Framework for High-performance Data Analytics

    NARCIS (Netherlands)

    Cheng, Long; Wang, Ying; Pei, Yulong; Epema, D.H.J.

    2017-01-01

    Efficient execution of distributed database operators such as joining and aggregating is critical for the performance of big data analytics. With the increase of the compute speedup of modern CPUs, reducing the network communication time of these operators in large systems is becoming

  10. A coflow-based co-optimization framework for high-performance data analytics

    NARCIS (Netherlands)

    Cheng, L.; Wang, Y.; Pei, Y.; Epema, D.H.J.

    2017-01-01

    Efficient execution of distributed database operators such as joining and aggregating is critical for the performance of big data analytics. With the increase of the compute speedup of modern CPUs, reducing the network communication time of these operators in large systems is becoming increasingly

  11. Optimally segmented magnetic structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bahl, Christian; Bjørk, Rasmus

    We present a semi-analytical algorithm for magnet design problems, which calculates the optimal way to subdivide a given design region into uniformly magnetized segments. The availability of powerful rare-earth magnetic materials such as Nd-Fe-B has broadened the range of applications of permanent magnets[1][2]. However, the powerful rare-earth magnets are generally expensive, so both the scientific and industrial communities have devoted a lot of effort into developing suitable design methods. Even so, many magnet optimization algorithms either are based on heuristic approaches[3]... is not available. We will illustrate the results for magnet design problems from different areas, such as electric motors/generators (as the example in the picture), beam focusing for particle accelerators and magnetic refrigeration devices.

  12. Workshop on Analytical Methods in Statistics

    CERN Document Server

    Jurečková, Jana; Maciak, Matúš; Pešta, Michal

    2017-01-01

    This volume collects authoritative contributions on analytical methods and mathematical statistics. The methods presented include resampling techniques; the minimization of divergence; estimation theory and regression, eventually under shape or other constraints or long memory; and iterative approximations when the optimal solution is difficult to achieve. It also investigates probability distributions with respect to their stability, heavy-tailedness, Fisher information and other aspects, both asymptotically and non-asymptotically. The book not only presents the latest mathematical and statistical methods and their extensions, but also offers solutions to real-world problems including option pricing. The selected, peer-reviewed contributions were originally presented at the workshop on Analytical Methods in Statistics, AMISTAT 2015, held in Prague, Czech Republic, November 10-13, 2015.

  13. Low energy ion beam systems for surface analytical and structural studies

    International Nuclear Information System (INIS)

    Nelson, G.C.

    1980-01-01

    This paper reviews the use of low energy ion beam systems for surface analytical and structural studies. Areas where analytical methods which utilize ion beams can provide a unique insight into materials problems are discussed. The design criteria of ion beam systems for performing materials studies are described, and the systems now being used by a number of laboratories are reviewed. Finally, several specific problems are described where the solution was provided, at least in part, by low energy ion analysis techniques.

  14. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left-hand side, right-hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications in refinery production planning and batch process scheduling problems are presented. PMID:21935263
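For the interval (box) uncertainty set, the robust counterpart has a particularly simple closed form, which can be sketched and checked by brute force; the coefficients below are illustrative, not from the paper's case studies.

```python
from itertools import product

# Interval ("box") uncertainty: each constraint coefficient a_j may take any
# value in [a_j - d_j, a_j + d_j]. The robust counterpart of
#   sum_j a_j x_j <= b   is   sum_j a_j x_j + sum_j d_j |x_j| <= b.
a = [2.0, -1.0, 0.5]    # nominal coefficients (illustrative)
d = [0.2, 0.1, 0.05]    # interval half-widths (illustrative)
b = 5.0

def robust_lhs(x):
    nominal = sum(ai * xi for ai, xi in zip(a, x))
    protection = sum(di * abs(xi) for di, xi in zip(d, x))
    return nominal + protection

x = [1.0, 2.0, 4.0]

# The protection term really is the worst case over the box: check it against
# brute-force enumeration of all corner realizations of the coefficients.
worst = max(sum(ci * xi for ci, xi in zip(corner, x))
            for corner in product(*[(ai - di, ai + di) for ai, di in zip(a, d)]))
assert abs(worst - robust_lhs(x)) < 1e-12

print(robust_lhs(x), robust_lhs(x) <= b)  # 2.6 True
```

A point satisfying the robust counterpart is feasible for every coefficient realization in the box, which is exactly the guarantee the paper's formulations provide for richer uncertainty sets.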

  15. The analytical and numerical study of the fluorination of uranium dioxide particles

    International Nuclear Information System (INIS)

    Sazhin, S.S.

    1997-01-01

    A detailed analytical study of the equations describing the fluorination of UO2 particles is presented for some limiting cases assuming that the mass flowrate of these particles is so small that they do not affect the state of the gas. The analytical solutions obtained can be used for approximate estimates of the effect of fluorination on particle diameter and temperature but their major application, however, is probably in the verification of self-consistent numerical solutions. Computational results are presented and discussed for a self-consistent problem in which both the effects of gas on particles and particles on gas are accounted for. It has been shown that in the limiting cases for which analytical solutions have been obtained, the coincidence between numerical and analytical results is almost exact. This can be considered as a verification of both the analytical and numerical solutions. (orig.)

  16. SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics.

    Science.gov (United States)

    Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis

    2015-09-01

    Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most "useful" or "interesting". The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics.
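The deviation-based utility idea can be sketched as a distance between the aggregate view computed on the studied subset and the same view on the whole dataset. KL divergence is used here as one plausible metric choice, and all data are illustrative rather than SeeDB's.

```python
import math

# Deviation-based utility in the spirit of SeeDB: a candidate visualization is
# an aggregate view (here, share of a measure per category); its utility is the
# deviation of the view on the studied subset from the view on the whole data.
whole = {"north": 120, "south": 80, "east": 100, "west": 100}
subset = {"north": 10, "south": 45, "east": 25, "west": 20}

def normalize(counts):
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def utility(subset_counts, whole_counts):
    """KL divergence of the subset view from the whole-data view."""
    p, q = normalize(subset_counts), normalize(whole_counts)
    return sum(p[k] * math.log(p[k] / q[k]) for k in p)

print(round(utility(subset, whole), 3))  # larger = more 'interesting' view
```

A view whose distribution on the subset matches the whole dataset scores zero and would be pruned; strongly deviating views are the ones recommended.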

  17. Optimal operation of hybrid-SITs under a SBO accident

    International Nuclear Information System (INIS)

    Jeon, In Seop; Heo, Sun; Kang, Hyun Gook

    2016-01-01

    Highlights: • Operation strategy of hybrid-SIT (H-SIT) in station blackout (SBO) is developed. • There are five main factors which have to be carefully treated in the development of the operation strategy. • The optimal value of each main factor is investigated analytically and then through thermal-hydraulic analysis using a computer code. • The optimum operation strategy is suggested based on the optimal values of the main factors. - Abstract: A hybrid safety injection tank (H-SIT) is designed to enhance the capability of pressurized water reactors against high-pressure accidents which might be caused by the combined accidents accompanied by station blackout (SBO), and is suggested as a useful alternative to electricity-driven motor injection pumps. The main purpose of the H-SIT is to provide coolant to the core so that core safety can be maintained for a longer period. As H-SITs have a limited inventory, their efficient use in cooling down the core is paramount to maximize the available time for long-term cooling component restoration. Therefore, an optimum operation strategy must be developed to support the operators for the most efficient H-SIT use. In this study, the main factors which have to be carefully treated in the development of an operation strategy are first identified. The optimal value of each main factor is then investigated analytically, which provides a basis for the global optimum points. Based on these analytical optimum points, a thermal-hydraulic analysis using the MARS code is performed to obtain more accurate values and to verify the results of the analytical study. The available time for long-term cooling component restoration is also estimated. Finally, an integrated optimum operation strategy for H-SITs in SBO is suggested.

  18. Orbital-optimized coupled-electron pair theory and its analytic gradients: Accurate equilibrium geometries, harmonic vibrational frequencies, and hydrogen transfer reactions

    Science.gov (United States)

    Bozkaya, Uğur; Sherrill, C. David

    2013-08-01

    Orbital-optimized coupled-electron pair theory [or simply "optimized CEPA(0)," OCEPA(0), for short] and its analytic energy gradients are presented. For variational optimization of the molecular orbitals for the OCEPA(0) method, a Lagrangian-based approach is used along with an orbital direct inversion of the iterative subspace algorithm. The cost of the method is comparable to that of CCSD [O(N6) scaling] for energy computations. However, for analytic gradient computations the OCEPA(0) method is only half as expensive as CCSD since there is no need to solve the λ2-amplitude equation for OCEPA(0). The performance of the OCEPA(0) method is compared with that of the canonical MP2, CEPA(0), CCSD, and CCSD(T) methods, for equilibrium geometries, harmonic vibrational frequencies, and hydrogen transfer reactions between radicals. For bond lengths of both closed and open-shell molecules, the OCEPA(0) method improves upon CEPA(0) and CCSD by 25%-43% and 38%-53%, respectively, with Dunning's cc-pCVQZ basis set. Especially for the open-shell test set, the performance of OCEPA(0) is comparable with that of CCSD(T) (ΔR is 0.0003 Å on average). For harmonic vibrational frequencies of closed-shell molecules, the OCEPA(0) method again outperforms CEPA(0) and CCSD by 33%-79% and 53%-79%, respectively. For harmonic vibrational frequencies of open-shell molecules, the mean absolute error (MAE) of the OCEPA(0) method (39 cm-1) is fortuitously even better than that of CCSD(T) (50 cm-1), while the MAEs of CEPA(0) (184 cm-1) and CCSD (84 cm-1) are considerably higher. For complete basis set estimates of hydrogen transfer reaction energies, the OCEPA(0) method again exhibits a substantially better performance than CEPA(0), providing a mean absolute error of 0.7 kcal mol-1, which is more than 6 times lower than that of CEPA(0) (4.6 kcal mol-1), and comparing to MP2 (7.7 kcal mol-1) there is a more than 10-fold reduction in errors. Whereas the MAE for the CCSD method is only 0.1 kcal

  19. ANALYTICAL CALCULATION OF THE BASIC ELECTROMAGNETIC LOSSES OF THE ENERGY OF THE FREQUENCY-REGULATED ASYNCHRONOUS ENGINE IN POSITIONING

    Directory of Open Access Journals (Sweden)

    V. O. Volkov

    2018-02-01

    Full Text Available Purpose. To obtain analytical dependencies for calculating the main electromagnetic energy losses of a frequency-controlled induction motor in positioning modes with small displacements, for various types (linear, parabolic and quasi-optimal) of its velocity variation. Methodology. Similarity methods, differential and integral calculus, analytical interpolation, mathematical analysis. Findings. Analytical dependencies are obtained for calculating the current electromagnetic power losses and the main electromagnetic energy losses of a frequency-controlled asynchronous motor in positioning modes with small displacements for the various types (linear, parabolic and quasi-optimal) of its velocity variation. A universal analytical dependence is obtained for calculating the optimal acceleration and deceleration times of a frequency-controlled asynchronous motor positioning with small displacements, corresponding to minimization of the main electromagnetic energy losses of this motor for the various velocity trajectories considered (linear, parabolic and quasi-optimal). A comparative quantitative assessment is made of the changes in the optimal values of the main electromagnetic energy losses of the frequency-controlled asynchronous motor, and in the corresponding maximum speed and optimal acceleration and deceleration times, as a function of the prescribed small displacements for the various motor velocity trajectories considered. Originality. For the first time, analytical dependencies for calculating the main electromagnetic energy losses of a frequency-controlled asynchronous motor are obtained for positioning with small displacements as a function of the prescribed displacement of the motor shaft and of its acceleration and deceleration times for the specified displacements. For the first time, dependences are obtained for a quantitative estimate of the minimum fundamental electromagnetic

  20. Optimization of fractionated radiotherapy of tumors

    International Nuclear Information System (INIS)

    Ivanov, V.K.

    1984-01-01

    Based on modern concepts of clinical radiobiology and mathematical methods of system theory, a model of radiation therapy for tumors is developed. To obtain optimal fractionation conditions, the principle of gradual optimization is used. An optimal therapeutic method permits minimizing the survival of a tumor cell population while keeping lesions of the intact tissue localized. An analytical study is carried out for the simplest variant of the model. With the help of a SORT program unit, the conditions for gradual optimization of radiotherapy are ascertained. (author)
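The flavor of such fractionation optimization can be sketched with the textbook linear-quadratic (LQ) model; this is a generic illustration, not the paper's model or its SORT program. The sketch chooses the dose per fraction that minimizes tumor cell survival while exhausting a fixed normal-tissue dose budget; all parameter values are hypothetical, and the fraction number is allowed to be non-integer for simplicity.

```python
import math

# Textbook LQ sketch: survival after n fractions of dose d is
# exp(-n*(alpha*d + beta*d^2)); the schedule must respect a normal-tissue
# biologically effective dose (BED) budget. All parameters are hypothetical.
ALPHA_T, BETA_T = 0.35, 0.035  # tumor LQ parameters (1/Gy, 1/Gy^2)
AB_NORMAL = 3.0                # normal-tissue alpha/beta ratio (Gy)
BED_BUDGET = 100.0             # allowed normal-tissue BED (Gy)

def schedules():
    for tenth in range(10, 81):                      # d = 1.0 .. 8.0 Gy/fraction
        d = tenth / 10
        n = BED_BUDGET / (d * (1 + d / AB_NORMAL))   # fractions using full budget
        survival = math.exp(-n * (ALPHA_T * d + BETA_T * d * d))
        yield d, n, survival

best_d, best_n, best_s = min(schedules(), key=lambda t: t[2])
print(best_d, round(best_n, 1))  # low dose per fraction wins for these parameters
```

With a tumor alpha/beta ratio (10 Gy here) larger than the normal-tissue ratio (3 Gy), the search favors the smallest dose per fraction, the classical rationale for fractionation.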

  1. Prioritizing the countries for BOT nuclear power project using Analytic Hierarchy Process

    International Nuclear Information System (INIS)

    Choi, Sun Woo; Roh, Myung Sub

    2013-01-01

    This paper proposes factors influencing the success of BOT nuclear power projects and a method for weighting them using the Analytic Hierarchy Process (AHP), in order to find the optimal country in which a developer intends to develop. In summary, this analytic method enables the developer to select and focus on the country with the most favorable circumstances, enhancing the efficiency of project promotion by minimizing opportunity cost. It also enables the developer to quantify qualitative factors, diversifying the project success strategy and policy for the targeted country. Although the performance of this study is limited by time constraints, small sampling, and the confidentiality of materials, the analytic model can still be improved more systematically through further study with more data. Developing a Build-Own(or Operate)-Transfer (BOT) nuclear power project, which carries large capital over the long term, requires well-made initial decisions that guard against risks arising from unexpected situations in the targeted countries. Moreover, since a nuclear power project is in most cases practically implemented through government-to-government cooperation, the key concern for such a project naturally focuses on the country situation rather than project viability at the planning stage. In this regard, the targeted countries must be evaluated before becoming involved in the project, which requires comprehensive and proper decision making over complex judgment factors and efficient integration of experts' opinions. Therefore, prioritizing countries and evaluating their feasibility to identify the optimal project region is a very meaningful study
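The AHP weighting step can be sketched as extracting the principal eigenvector of a pairwise comparison matrix; the three criteria and all judgment values below are hypothetical, not the paper's.

```python
# AHP sketch: priority weights as the principal eigenvector of a pairwise
# comparison matrix (Saaty's 1-9 scale). Criteria and judgments are
# hypothetical placeholders for the paper's country-evaluation factors.
criteria = ("political stability", "financing environment", "power demand")
M = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]

def priority_vector(m, iters=100):
    """Principal eigenvector by power iteration, normalized to sum to 1."""
    n = len(m)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [wi / total for wi in w]
    return w

w = priority_vector(M)
for name, weight in zip(criteria, w):
    print(f"{name}: {weight:.3f}")
```

Scoring each candidate country on the criteria and combining the scores with these weights yields the country ranking that the AHP produces.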

  2. Light distribution in diffractive multifocal optics and its optimization.

    Science.gov (United States)

    Portney, Valdemar

    2011-11-01

    To expand a geometrical model of diffraction efficiency and its interpretation to the multifocal optic and to introduce formulas for analysis of far and near light distribution and their application to multifocal intraocular lenses (IOLs) and to diffraction efficiency optimization. Medical device consulting firm, Newport Coast, California, USA. Experimental study. Application of a geometrical model to the kinoform (single focus diffractive optical element) was expanded to a multifocal optic to produce analytical definitions of light split between far and near images and light loss to other diffraction orders. The geometrical model gave a simple interpretation of light split in a diffractive multifocal IOL. An analytical definition of light split between far, near, and light loss was introduced as curve fitting formulas. Several examples of application to common multifocal diffractive IOLs were developed; for example, to light-split change with wavelength. The analytical definition of diffraction efficiency may assist in optimization of multifocal diffractive optics that minimize light loss. Formulas for analysis of light split between different foci of multifocal diffractive IOLs are useful in interpreting diffraction efficiency dependence on physical characteristics, such as blaze heights of the diffractive grooves and wavelength of light, as well as for optimizing multifocal diffractive optics. Disclosure is found in the footnotes. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
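The light split described here can be reproduced, in scalar diffraction theory, with the standard kinoform order-efficiency relation eta_m = sinc^2(alpha - m), where alpha is the blaze phase depth in design wavelengths. This is a textbook relation used for illustration, not necessarily the author's exact geometrical model.

```python
import math

def sinc2(x):
    """Normalized sinc squared: (sin(pi x) / (pi x))^2, with sinc2(0) = 1."""
    return 1.0 if x == 0 else (math.sin(math.pi * x) / (math.pi * x)) ** 2

def efficiency(alpha, m):
    """Scalar-theory efficiency of diffraction order m for blaze phase depth
    alpha (in units of the design wavelength): eta_m = sinc^2(alpha - m)."""
    return sinc2(alpha - m)

alpha = 0.5  # half-wave blaze: the classic symmetric bifocal split
far, near = efficiency(alpha, 0), efficiency(alpha, 1)
loss = 1.0 - far - near  # light sent to all other diffraction orders
print(round(far, 3), round(near, 3), round(loss, 3))  # 0.405 0.405 0.189
```

At alpha = 0.5 each focus receives 4/pi^2, about 40.5% of the light, with about 19% lost to higher orders; varying alpha with wavelength is one way to see the light-split change the abstract mentions.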

  3. Semi Active Control of Civil Structures, Analytical and Numerical Studies

    Science.gov (United States)

    Kerboua, M.; Benguediab, M.; Megnounif, A.; Benrahou, K. H.; Kaoulala, F.

    A numerical example of the parallel R-L piezoelectric vibration shunt control simulated with MATLAB® is presented. An analytical study of the resistor-inductor (R-L) passive piezoelectric vibration shunt control of a cantilever beam was undertaken. The modal and strain analyses were performed by varying the material properties and geometric configurations of the piezoelectric transducer in relation to the structure in order to maximize the mechanical strain produced in the piezoelectric transducer.

  4. DCODE: A Distributed Column-Oriented Database Engine for Big Data Analytics

    OpenAIRE

    Liu, Yanchen; Cao, Fang; Mortazavi, Masood; Chen, Mengmeng; Yan, Ning; Ku, Chi; Adnaik, Aniket; Morgan, Stephen; Shi, Guangyu; Wang, Yuhu; Fang, Fan

    2015-01-01

    Part 10: Big Data and Text Mining; International audience; We propose a novel Distributed Column-Oriented Database Engine (DCODE) for efficient analytic query processing that combines advantages of both column storage and parallel processing. In DCODE, we enhance an existing open-source columnar database engine by adding the capability for handling queries over a cluster. Specifically, we studied parallel query execution and optimization techniques such as horizontal partitioning, exchange op...

  5. Web Analytics 2.0 The Art of Online Accountability and Science of Customer Centricity

    CERN Document Server

    Kaushik, Avinash

    2009-01-01

    Adeptly address today's business challenges with this powerful new book from web analytics thought leader Avinash Kaushik. Web Analytics 2.0 presents a new framework that will permanently change how you think about analytics. It provides specific recommendations for creating an actionable strategy, applying analytical techniques correctly, solving challenges such as measuring social media and multichannel campaigns, achieving optimal success by leveraging experimentation, and employing tactics for truly listening to your customers. The book will help your organization become more data driven w

  6. Born analytical or adopted over time? a study investigating if new analytical tools can ensure the survival of market oriented startups.

    OpenAIRE

    Skogen, Hege Janson; De la Cruz, Kai

    2017-01-01

    Masteroppgave(MSc) in Master of Science in Strategic Marketing Management - Handelshøyskolen BI, 2017 This study investigates whether the prevalence of technological advances within quantitative analytics moderates the effect market orientation has on firm performance, and if startups can take advantage of the potential opportunities to ensure their own survival. For this purpose, the authors review previous literature in marketing orientation, startups, marketing analytics, an...

  7. Dispersant testing : a study on analytical test procedures

    International Nuclear Information System (INIS)

    Fingas, M.F.; Fieldhouse, B.; Wang, Z.; Environment Canada, Ottawa, ON

    2004-01-01

    Crude oil is a complex mixture of hydrocarbons, ranging from small, volatile compounds to very large, non-volatile compounds. Analysis of the dispersed oil is crucial. This paper describes Environment Canada's ongoing studies on various traits of dispersants. In particular, it describes small studies related to dispersant effectiveness and methods to improve analytical procedures. The study also re-evaluated the analytical procedure for the Swirling Flask Test, which is now part of the ASTM standard procedure. There are new and improved methods for analyzing oil-in-water using gas chromatography (GC). The methods could be further enhanced by integrating the entire chromatogram rather than just peaks. This would result in a decrease in maximum variation from 5 per cent to about 2 per cent. For oil-dispersant studies, the surfactant-dispersed oil hydrocarbons consist of two parts: GC-resolved hydrocarbons and GC-unresolved hydrocarbons. This study also tested a second feature of the Swirling Flask Test in which the side spout was tested and compared with a new vessel with a septum port instead of a side spout. This decreased the variability as well as the energy and mixing in the vessel. Rather than being a variation of the Swirling Flask Test, it was suggested that a spoutless vessel might be considered as a completely separate test. 7 refs., 2 tabs., 4 figs

  8. Optimization of multi-layered metallic shield

    International Nuclear Information System (INIS)

    Ben-Dor, G.; Dubinsky, A.; Elperin, T.

    2011-01-01

    Research highlights: → We investigated the problem of optimization of a multi-layered metallic shield. → The maximum ballistic limit velocity is a criterion of optimization. → The sequence of materials and the thicknesses of layers in the shield are varied. → The general problem is reduced to the problem of Geometric Programming. → Analytical solutions are obtained for two- and three-layered shields. - Abstract: We investigate the problem of optimization of multi-layered metallic shield whereby the goal is to determine the sequence of materials and the thicknesses of the layers that provide the maximum ballistic limit velocity of the shield. Optimization is performed under the following constraints: fixed areal density of the shield, the upper bound on the total thickness of the shield and the bounds on the thicknesses of the plates manufactured from every material. The problem is reduced to the problem of Geometric Programming which can be solved numerically using known methods. For the most interesting in practice cases of two-layered and three-layered shields the solution is obtained in the explicit analytical form.
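The structure of the optimization (maximize the ballistic limit under a fixed areal density and a total-thickness bound) can be sketched for two layers with an assumed power-law ballistic-limit model; the model and all numbers are hypothetical, and a grid search stands in for the paper's geometric-programming solution.

```python
# Hypothetical two-layer sketch: fixed areal density rho1*h1 + rho2*h2 = A,
# total thickness h1 + h2 <= H, and an ASSUMED concave ballistic-limit model
# v = c1*sqrt(h1) + c2*sqrt(h2) (not the paper's model).
RHO1, RHO2 = 7.8, 2.7   # layer densities, e.g. steel and aluminium (g/cm^3)
A, H = 15.0, 4.0        # areal density budget (g/cm^2) and thickness bound (cm)
C1, C2 = 1.0, 1.4       # illustrative material coefficients

best = None
for i in range(1001):
    h1 = i * (A / RHO1) / 1000          # front-layer thickness, 0 .. A/rho1
    h2 = (A - RHO1 * h1) / RHO2         # rest of the mass budget as layer 2
    if h1 + h2 > H:
        continue                         # violates the total-thickness bound
    v = C1 * h1 ** 0.5 + C2 * h2 ** 0.5
    if best is None or v > best[0]:
        best = (v, h1, h2)

v_bl, h1, h2 = best
print(round(h1, 3), round(h2, 3), round(v_bl, 3))
```

Because the feasible set is defined by monomial-style constraints and the objective is a posynomial-like expression, the exact problem maps onto geometric programming, which is what the paper exploits to obtain closed-form solutions.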

  9. Spatio-temporal data analytics for wind energy integration

    CERN Document Server

    Yang, Lei; Zhang, Junshan

    2014-01-01

    This SpringerBrief presents spatio-temporal data analytics for wind energy integration using stochastic modeling and optimization methods. It explores techniques for efficiently integrating renewable energy generation into bulk power grids. The operational challenges of wind, and its variability are carefully examined. A spatio-temporal analysis approach enables the authors to develop Markov-chain-based short-term forecasts of wind farm power generation. To deal with the wind ramp dynamics, a support vector machine enhanced Markov model is introduced. The stochastic optimization of economic di
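The Markov-chain short-term forecasting idea can be sketched with a synthetic discretized power series: estimate the transition matrix by counting, then read off the next-step state distribution. The state series below is synthetic, not real wind farm data.

```python
import random

random.seed(2)
# Markov-chain sketch of short-term wind power forecasting: discretize output
# into states, estimate the transition matrix, forecast the next state.
states = 3  # e.g. low / medium / high output

# Synthetic persistent state sequence (wind output tends to stay where it is).
seq = [0]
for _ in range(5000):
    seq.append(seq[-1] if random.random() < 0.8 else random.randrange(states))

# Count-based maximum-likelihood estimate of the transition matrix.
counts = [[0] * states for _ in range(states)]
for s, t in zip(seq, seq[1:]):
    counts[s][t] += 1
P = [[c / sum(row) for c in row] for row in counts]

forecast = P[seq[-1]]  # one-step-ahead distribution given the current state
print([round(p, 2) for p in forecast])
```

The estimated matrix is strongly diagonal, reflecting the persistence built into the synthetic series; ramp-aware refinements such as the support-vector-machine-enhanced model mentioned above condition the transition probabilities on additional features.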

  10. Quantum approximate optimization algorithm for MaxCut: A fermionic view

    Science.gov (United States)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2018-02-01

    Farhi et al. recently proposed a class of quantum algorithms, the quantum approximate optimization algorithm (QAOA), for approximately solving combinatorial optimization problems (E. Farhi et al., arXiv:1411.4028; arXiv:1412.6062; arXiv:1602.07674). A level-p QAOA circuit consists of p steps; in each step a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm, which are to be optimized classically for the best performance. As p increases, parameter optimization becomes inefficient due to the curse of dimensionality. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here we analytically and numerically study parameter setting for the QAOA applied to MaxCut. For the level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MaxCut, the "ring of disagrees," or the one-dimensional antiferromagnetic ring, we provide an analysis for an arbitrarily high level. Using a fermionic representation, the evolution of the system under the QAOA translates into quantum control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of the QAOA for any p. It also greatly simplifies the numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional submanifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
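The classical parameter search for level-1 QAOA on the ring of disagrees can be reproduced with a small pure-Python statevector simulation; the grid search below recovers the known (2p+1)/(2p+2) = 3/4-per-edge value for p = 1 on a 6-node ring.

```python
import cmath
import math

# Level-1 QAOA on the 6-node "ring of disagrees" (MaxCut on a cycle), with a
# classical grid search over the two parameters (gamma, beta). Illustrative
# statevector simulation, not the paper's code.
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]
dim = 2 ** n
cuts = [sum(((z >> u) ^ (z >> v)) & 1 for u, v in edges) for z in range(dim)]

def qaoa_expectation(gamma, beta):
    # Start in |+...+>, apply the diagonal phase e^{-i*gamma*C}, then the
    # mixer e^{-i*beta*X} on every qubit, and return <C>.
    psi = [cmath.exp(-1j * gamma * cuts[z]) / math.sqrt(dim) for z in range(dim)]
    c, s = math.cos(beta), -1j * math.sin(beta)
    for q in range(n):
        new = list(psi)
        for z in range(dim):
            if not (z >> q) & 1:
                z1 = z | (1 << q)
                new[z] = c * psi[z] + s * psi[z1]
                new[z1] = c * psi[z1] + s * psi[z]
        psi = new
    return sum(abs(a) ** 2 * cuts[z] for z, a in enumerate(psi))

steps = 40  # symmetries of the expectation let both angles stay in [0, pi)
best, best_angles = max(
    (qaoa_expectation(g * math.pi / steps, b * math.pi / steps), (g, b))
    for g in range(steps) for b in range(steps))
print(round(best, 3))  # close to 4.5, i.e. 3/4 of the 6 edges at p = 1
```

Even this brute-force search shows the benign landscape the paper describes: a coarse grid already lands on the global optimum.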

  11. Volume-constrained optimization of magnetorheological and electrorheological valves and dampers

    Science.gov (United States)

    Rosenfeld, Nicholas C.; Wereley, Norman M.

    2004-12-01

This paper presents a case study of magnetorheological (MR) and electrorheological (ER) valve design within a constrained cylindrical volume. The primary purpose of this study is to establish general design guidelines for volume-constrained MR valves. Additionally, this study compares the performance of volume-constrained MR valves against similarly constrained ER valves. Starting from basic design guidelines for an MR valve, a method for constructing candidate volume-constrained valve geometries is presented. A magnetic FEM program is then used to evaluate the magnetic properties of the candidate valves. An optimized MR valve is chosen by evaluating non-dimensional parameters describing the candidate valves' damping performance. A derivation of the non-dimensional damping coefficient for valves with both active and passive volumes is presented to allow comparison of valves with differing proportions of active and passive volumes. The performance of the optimized MR valve is then compared to that of a geometrically similar ER valve using both analytical and numerical techniques. An analytical equation relating the damping performances of geometrically similar MR and ER valves as a function of fluid yield stresses and relative active fluid volume is derived, and numerical calculations of each valve's damping performance are provided to validate the analytical result.

  12. An analytical/numerical correlation study of the multiple concentric cylinder model for the thermoplastic response of metal matrix composites

    Science.gov (United States)

    Pindera, Marek-Jerzy; Salzar, Robert S.; Williams, Todd O.

    1993-01-01

    The utility of a recently developed analytical micromechanics model for the response of metal matrix composites under thermal loading is illustrated by comparison with the results generated using the finite-element approach. The model is based on the concentric cylinder assemblage consisting of an arbitrary number of elastic or elastoplastic sublayers with isotropic or orthotropic, temperature-dependent properties. The elastoplastic boundary-value problem of an arbitrarily layered concentric cylinder is solved using the local/global stiffness matrix formulation (originally developed for elastic layered media) and Mendelson's iterative technique of successive elastic solutions. These features of the model facilitate efficient investigation of the effects of various microstructural details, such as functionally graded architectures of interfacial layers, on the evolution of residual stresses during cool down. The available closed-form expressions for the field variables can readily be incorporated into an optimization algorithm in order to efficiently identify optimal configurations of graded interfaces for given applications. Comparison of residual stress distributions after cool down generated using finite-element analysis and the present micromechanics model for four composite systems with substantially different temperature-dependent elastic, plastic, and thermal properties illustrates the efficacy of the developed analytical scheme.

  13. Optimizing detectability

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    HPLC is useful for trace and ultratrace analyses of a variety of compounds. For most applications, HPLC is useful for determinations in the nanogram-to-microgram range; however, detection limits of a picogram or less have been demonstrated in certain cases. These determinations require state-of-the-art capability; several examples of such determinations are provided in this chapter. As mentioned before, to detect and/or analyze low quantities of a given analyte at submicrogram or ultratrace levels, it is necessary to optimize the whole separation system, including the quantity and type of sample, sample preparation, HPLC equipment, chromatographic conditions (including column), choice of detector, and quantitation techniques. A limited discussion is provided here for optimization based on theoretical considerations, chromatographic conditions, detector selection, and miscellaneous approaches to detectability optimization. 59 refs

  14. Optimizing Bus Frequencies under Uncertain Demand: Case Study of the Transit Network in a Developing City

    Directory of Open Access Journals (Sweden)

    Zhengfeng Huang

    2013-01-01

Full Text Available Various factors can make predicting bus passenger demand uncertain. In this study, a bilevel programming model for optimizing bus frequencies based on uncertain bus passenger demand is formulated. Two terms constitute the upper-level objective. The first is the transit network cost, consisting of the passengers' expected travel time and operating costs; the second is the transit network robustness performance, indicated by the variance in passenger travel time. The second term reflects the risk aversion of the decision maker, and it allows even highly uncertain demand to be met by bus operation at the optimal transit frequencies. With the transit links' proportional flow eigenvalues (mean and covariance) obtained from the lower-level model, the upper-level objective is formulated by the analytical method. In the lower-level model, these two eigenvalues are calculated by analyzing the propagation of mean transit trips and their variation in the optimal-strategy transit assignment process. The genetic algorithm (GA) used to solve the model is tested on an example network. Finally, the model is applied to determining optimal bus frequencies in the city of Liupanshui, China. The total cost of the transit system in Liupanshui can be reduced by about 6% via this method.
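
    A toy version of the upper-level search can illustrate how a GA trades expected cost against a variance (risk) term. The demand figures, cost coefficients, and wait-time model below are invented for illustration and are not from the Liupanshui case study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for the bilevel model: choose bus frequencies f_i (buses/hour)
    # on three lines. All numbers below are hypothetical.
    demand_mean = np.array([600.0, 400.0, 250.0])   # passengers/hour
    demand_var = np.array([90.0, 60.0, 40.0]) ** 2
    op_cost = 25.0        # operating cost per bus trip
    risk_weight = 0.5     # decision maker's aversion to travel-time variance
    fleet_cap = 40        # total buses/hour available

    def cost(f):
        """Upper-level objective: expected wait + operating cost + risk term."""
        wait = demand_mean * (0.5 / f)                 # mean wait ~ half the headway
        var = demand_var * (0.5 / f) ** 2              # wait-time variance (toy)
        penalty = 1e6 * max(0.0, f.sum() - fleet_cap)  # fleet-size constraint
        return wait.sum() + op_cost * f.sum() + risk_weight * var.sum() + penalty

    # Minimal genetic algorithm: elitism, blend crossover, Gaussian mutation.
    pop = rng.uniform(2, 15, size=(40, 3))
    for gen in range(200):
        fit = np.array([cost(ind) for ind in pop])
        elite = pop[np.argsort(fit)[:10]]              # keep the 10 best
        children = []
        while len(children) < len(pop) - len(elite):
            a, b = elite[rng.integers(10, size=2)]
            child = np.clip((a + b) / 2 + rng.normal(0, 0.5, 3), 1.0, 20.0)
            children.append(child)
        pop = np.vstack([elite, children])

    best = pop[np.argmin([cost(ind) for ind in pop])]
    ```

    With elitism the best-found objective is monotonically non-increasing, mirroring how the paper's GA converges on the upper-level objective while the lower-level model supplies the flow statistics.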

  15. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    Science.gov (United States)

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," for finding global optimal solutions of nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms on a set of small-dimension benchmark optimization problems and on 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
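
    The PSO stage of such a methodology can be sketched in a few lines. The following minimal swarm (standard inertia/cognitive/social weights; the benchmark choice and all settings are illustrative, not the consensus-based variant of the paper) locates a near-global minimum of the multimodal Rastrigin function; in the full methodology, Trust-Tech and a local solver would then refine and systematically escape the local solutions found.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def rastrigin(x):
        """Standard multimodal benchmark with many local optima (min 0 at 0)."""
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    dim, n_particles = 2, 30
    x = rng.uniform(-5.12, 5.12, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([rastrigin(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()   # global best position
    g_val = pbest_val.min()
    init_best = g_val

    for it in range(300):
        r1, r2 = rng.random((2, n_particles, dim))
        # Standard update with inertia w = 0.72 and c1 = c2 = 1.49.
        v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (g - x)
        x = np.clip(x + v, -5.12, 5.12)
        vals = np.array([rastrigin(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if pbest_val.min() < g_val:
            g_val = pbest_val.min()
            g = pbest[np.argmin(pbest_val)].copy()
    ```

    The swarm's global best is non-increasing by construction; the Trust-Tech stage described in the abstract is what turns such a single high-quality solution into a systematically explored set of local optima.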

  16. Analytical model for nonlinear piezoelectric energy harvesting devices

    International Nuclear Information System (INIS)

    Neiss, S; Goldschmidtboeing, F; M Kroener; Woias, P

    2014-01-01

In this work we propose analytical expressions for the jump-up and jump-down points of a nonlinear piezoelectric energy harvester. In addition, analytical expressions for the maximum power output at optimal resistive load and the 3 dB bandwidth are derived. So far, only numerical models have been used to describe the physics of a piezoelectric energy harvester. However, this approach is not suitable for quickly evaluating different geometrical designs or piezoelectric materials in the harvester design process. In addition, the analytical expressions could be used to predict the jump frequencies of a harvester during operation. In combination with a tuning mechanism, this would allow the design of an efficient control algorithm to ensure that the harvester is always working on the oscillator's high-energy attractor. (paper)
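
    The jump phenomenon can be reproduced with a generic hardening Duffing oscillator under harmonic balance, a common reduced model for such harvesters (the equation form and every parameter below are assumptions, not the constitutive model of the paper). Frequencies where the amplitude equation has three positive roots are bistable; the edges of that band approximate the jump-up and jump-down points.

    ```python
    import numpy as np

    # Harmonic-balance amplitude equation of a hardening Duffing oscillator:
    #   [(w0^2 - w^2) a + (3/4) alpha a^3]^2 + (c w a)^2 = F^2.
    # Three positive amplitude roots <=> bistable (hysteretic) response.
    # All parameter values are assumed for illustration.
    w0, c, alpha, F = 1.0, 0.1, 0.2, 0.3

    def amplitude_roots(w):
        """Positive real amplitude roots a at drive frequency w (u = a^2)."""
        k = w0**2 - w**2
        poly = [(9 / 16) * alpha**2, 1.5 * alpha * k, k**2 + (c * w) ** 2, -F**2]
        u = np.roots(poly)
        u = u[np.abs(u.imag) < 1e-9].real
        return np.sqrt(u[u > 1e-12])

    freqs = np.linspace(0.5, 3.0, 2000)
    bistable = [w for w in freqs if len(amplitude_roots(w)) == 3]
    jump_up, jump_down = min(bistable), max(bistable)  # edges of hysteresis band
    ```

    For a hardening nonlinearity the bistable band sits above the linear resonance, which is why the jump frequencies shift upward with drive level; the paper's closed-form expressions make this dependence explicit without a frequency sweep.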

  17. Eco-analytical Methodology in Environmental Problems Monitoring

    Science.gov (United States)

    Agienko, M. I.; Bondareva, E. P.; Chistyakova, G. V.; Zhironkina, O. V.; Kalinina, O. I.

    2017-01-01

Among the problems common to all mankind, whose solutions influence the prospects of civilization, the monitoring of the ecological situation occupies a very important place. Solving this problem requires a specific methodology based on eco-analytical comprehension of global issues. Eco-analytical methodology should help in searching for the optimum balance between environmental problems and accelerating scientific and technical progress. The fact that governments, corporations, scientists and nations focus on the production and consumption of material goods causes great damage to the environment. As a result, the activity of environmentalists develops quite spontaneously, as a complement to productive activities. Therefore, the challenge that environmental problems pose for science is the formation of eco-analytical reasoning and the monitoring of global problems common to the whole of humanity. The aim is thus to find the optimal trajectory of industrial development that prevents irreversible damage to the biosphere that could stop the progress of civilization.

  18. Visualized and Interacted Life: Personal Analytics and Engagements with Data Doubles

    Directory of Open Access Journals (Sweden)

    Minna Ruckenstein

    2014-02-01

    Full Text Available A field of personal analytics has emerged around self-monitoring practices, which includes the visualization and interpretation of the data produced. This paper explores personal analytics from the perspective of self-optimization, arguing that the ways in which people confront and engage with visualized personal data are as significant as the technology itself. The paper leans on the concept of the “data double”: the conversion of human bodies and minds into data flows that can be figuratively reassembled for the purposes of personal reflection and interaction. Based on an empirical study focusing on heart-rate variability measurement, the discussion underlines that a distanced theorizing of personal analytics is not sufficient if one wants to capture affective encounters between humans and their data doubles. Research outcomes suggest that these explanations can produce permanence and stability while also profoundly changing ways in which people reflect on themselves, on others and on their daily lives.

  19. Topology Optimization for Transient Wave Propagation Problems

    DEFF Research Database (Denmark)

    Matzen, René

The study of elastic and optical waves together with intensive material research has revolutionized everyday as well as cutting-edge technology in very tangible ways within the last century. It is therefore important to continue the investigative work towards improving existing technology and innovating new technology, by designing new materials and their layout. The thesis presents a general framework for applying topology optimization in the design of material layouts for transient wave propagation problems. In contrast to the high level of modeling in the frequency domain, time domain topology optimization is still in its infancy. A generic optimization problem is formulated with an objective function that can be field, velocity, and acceleration dependent, and that can also accommodate the dependency of filtered signals essential in signal shape optimization [P3]. The analytical design gradients...

  20. Optimal tax depreciation under a progressive tax system

    OpenAIRE

    Wielhouwer, J.L.; De Waegenaere, A.M.B.; Kort, P.M.

    2002-01-01

    The focus of this paper is on the effect of a progressive tax system on optimal tax depreciation. By using dynamic optimization we show that an optimal strategy exists, and we provide an analytical expression for the optimal depreciation charges. Depreciation charges initially decrease over time, and after a number of periods the firm enters a steady state where depreciation is constant and equal to replacement investments. This way, the optimal solution trades off the benefits of accelerated...

  1. Hazardous Waste Landfill Siting using GIS Technique and Analytical Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Ozeair Abessi

    2010-07-01

Full Text Available Disposal of the large amounts of hazardous waste generated in power plants has always received communities' and authorities' attention. In this paper, using a site screening method and the Analytical Hierarchy Process (AHP), a sophisticated approach for siting hazardous waste landfills in large areas is presented. This approach demonstrates how evaluation criteria such as physical, socio-economical, technical, and environmental factors, together with their regulatory sub-criteria, can be introduced into an overlay technique to screen a limited number of appropriate zones in the area. Then, in order to find the optimal site amongst the primarily screened sites, a Multiple Criteria Decision Making (MCDM) method for the hierarchy computations of the process is recommended. The introduced method enables an accurate siting procedure for environmental planning of landfills in an area. In this study the approach was utilized for the disposal of hazardous wastes of the Shahid Rajaee thermal power plant located in Qazvin province, in the west central part of Iran. As a result, 10 suitable zones were first screened in the area; then, using the analytical hierarchy process, a site near the power plant was chosen as the optimal site for landfilling of the hazardous wastes in Qazvin province.
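
    The hierarchy computations reduce to eigenvector algebra. The sketch below (with an invented pairwise comparison matrix for three of the criteria named above) computes the AHP priority vector as the principal eigenvector and checks Saaty's consistency ratio, which should stay below 0.1 for acceptable judgments.

    ```python
    import numpy as np

    # Minimal AHP sketch: priority weights for three hypothetical siting
    # criteria (environmental, socio-economical, technical) from a pairwise
    # comparison matrix. The judgments below are invented for illustration.
    A = np.array([[1.0,   3.0, 5.0],
                  [1 / 3, 1.0, 2.0],
                  [1 / 5, 1 / 2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                      # priority vector (sums to 1)

    n = A.shape[0]
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)      # consistency index
    cr = ci / 0.58                    # CR = CI / RI, with Saaty's RI = 0.58 for n = 3
    ```

    In a full siting study the same computation is repeated for each sub-criterion matrix and the resulting weights are aggregated down the hierarchy to score the screened candidate zones.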

  2. Seamless Digital Environment - Plan for Data Analytics Use Case Study

    International Nuclear Information System (INIS)

    Oxstrand, Johanna Helene; Bly, Aaron Douglas

    2016-01-01

The U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program initiated research into what is needed in order to provide a roadmap or model for nuclear power plants to reference when building an architecture that can support the growing data supply and demand flowing through their networks. The Digital Architecture project's published report, Digital Architecture Planning Model (Oxstrand et al., 2016), discusses things to consider when building an architecture to support the increasing needs and demands of data throughout the plant. Once the plant is able to support the data demands, it still needs to be able to provide the data in an easy, quick and reliable manner. A common method is to create a "one stop shop" application that users can go to for all the data they need, which leads to the need to create a Seamless Digital Environment (SDE) integrating all the "siloed" data. An SDE is the desired perception that should be presented to users by gathering the data from any data source (e.g., legacy applications and work management systems) without effort by the user. The goal for FY16 was to complete a feasibility study for data mining and analytics employing information from computer-based procedure enabled technologies for use in developing improved business analytics. The research team collaborated with multiple organizations to identify use cases or scenarios which could be beneficial to investigate in a feasibility study. Many interesting potential use cases were identified throughout the FY16 activity. Unfortunately, due to factors outside the research team's control, none of the studies were initiated this year. However, the insights gained and the relationships built with both PVNGS and NextAxiom will be valuable when moving forward with future research. During the 2016 annual Nuclear Information Technology Strategic Leadership (NITSL) group meeting it was identified that it would be very beneficial to the industry to

  3. Analytical incorporation of fractionation effects in probabilistic treatment planning for intensity-modulated proton therapy.

    Science.gov (United States)

    Wahl, Niklas; Hennig, Philipp; Wieser, Hans-Peter; Bangert, Mark

    2018-04-01

We show that it is possible to explicitly incorporate fractionation effects into closed-form probabilistic treatment plan analysis and optimization for intensity-modulated proton therapy with analytical probabilistic modeling (APM). We study the impact of different fractionation schemes on the dosimetric uncertainty induced by random and systematic sources of range and setup uncertainty for treatment plans that were optimized with and without consideration of the number of treatment fractions. The APM framework is capable of handling arbitrarily correlated uncertainty models, including systematic and random errors, in the context of fractionation. On this basis, we construct an analytical dose variance computation pipeline that explicitly considers the number of treatment fractions for uncertainty quantification and minimization during treatment planning. We evaluate the variance computation model in comparison to random sampling of 100 treatments for conventional and probabilistic treatment plans under different fractionation schemes (1, 5, 30 fractions) for an intracranial, a paraspinal and a prostate case. The impact of neglecting the fractionation scheme during treatment planning is investigated by applying treatment plans that were generated with probabilistic optimization for 1 fraction in a higher number of fractions and comparing them to the probabilistic plans optimized under explicit consideration of the number of fractions. APM enables the construction of an analytical variance computation model for dose uncertainty considering fractionation at negligible computational overhead. It is computationally feasible (a) to simultaneously perform a robustness analysis for all possible fraction numbers and (b) to perform a probabilistic treatment plan optimization for a specific fraction number. The incorporation of fractionation assumptions for robustness analysis exposes a trade-off between dose and uncertainty, i.e., the dose in the organs at risk is increased for a
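
    The core scaling behind fractionated uncertainty can be illustrated with a toy Monte Carlo (a simplification with a dose response linear in the setup shift, not the APM pipeline itself): over n fractions a systematic error accumulates coherently, so the cumulative-dose spread grows like n, whereas a per-fraction random error partially averages out and grows only like sqrt(n).

    ```python
    import numpy as np

    # Toy illustration of why the fraction number matters for dose uncertainty.
    # Systematic (per-treatment) setup errors shift every fraction identically;
    # random (per-fraction) errors are redrawn each fraction. With a dose
    # response linear in the shift (a simplifying assumption), the ratio of the
    # cumulative-dose standard deviations is sqrt(n_frac).
    rng = np.random.default_rng(42)
    sigma, n_frac, n_treat = 1.0, 30, 200000

    sys_err = rng.normal(0, sigma, n_treat)             # one draw per treatment
    d_sys = n_frac * sys_err                            # same shift each fraction
    rand_err = rng.normal(0, sigma, (n_treat, n_frac))  # fresh draw per fraction
    d_rand = rand_err.sum(axis=1)

    ratio = d_sys.std() / d_rand.std()   # ~ sqrt(30) ~ 5.48
    ```

    This is why the abstract's robustness analysis depends on the fractionation scheme: random errors are partially "washed out" over 30 fractions but not over 1, while systematic errors are not mitigated at all.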

  4. Improvements in Off Design Aeroengine Performance Prediction Using Analytic Compressor Map Interpolation

    Science.gov (United States)

    Mist'e, Gianluigi Alberto; Benini, Ernesto

    2012-06-01

Compressor map interpolation is usually performed through the introduction of auxiliary coordinates (β). In this paper, a new analytical bivariate β function definition to be used in compressor map interpolation is studied. The function has user-defined parameters that must be adjusted to properly fit a single map. The analytical nature of β allows for rapid calculation of the interpolation error estimate, which can be used as a quantitative measure of interpolation accuracy and also as a valid tool to compare traditional β function interpolation with new approaches (artificial neural networks, genetic algorithms, etc.). The quality of the method is analyzed by comparing its error output to that of a well-known state-of-the-art methodology. This comparison is carried out for two different types of compressor and, in both cases, the error output using the method presented in this paper is found to be consistently lower. Moreover, an optimization routine able to locally minimize the interpolation error by shape variation of the β function is implemented. Further optimization introducing other important criteria is discussed.
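
    The kind of quantitative error measure used above can be mimicked on a synthetic map: sample a smooth, analytically known pressure-ratio surface on a coarse (speed, β) grid, re-interpolate it, and take the maximum deviation on a fine grid as the error estimate. The map function and grids below are invented stand-ins, and a generic bicubic spline replaces the paper's β-function interpolation.

    ```python
    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    # Hypothetical stand-in for a compressor map: an analytic surface PR(N, beta)
    # sampled coarsely, re-interpolated, and compared on a fine grid. The max
    # deviation plays the role of the interpolation-error estimate.
    def pr_map(n, beta):
        n, beta = np.meshgrid(n, beta, indexing="ij")
        return 1.0 + 3.0 * n**2 * (0.6 + 0.4 * np.tanh(2.0 * (beta - 0.5)))

    n_coarse = np.linspace(0.5, 1.1, 7)    # corrected-speed lines
    b_coarse = np.linspace(0.0, 1.0, 9)    # auxiliary beta coordinate
    spline = RectBivariateSpline(n_coarse, b_coarse, pr_map(n_coarse, b_coarse))

    n_fine = np.linspace(0.5, 1.1, 61)
    b_fine = np.linspace(0.0, 1.0, 81)
    err = np.max(np.abs(spline(n_fine, b_fine) - pr_map(n_fine, b_fine)))
    ```

    In the paper's setting the same max-deviation measure is what the shape-variation routine minimizes; here it simply quantifies how well a generic interpolant recovers the known surface between the sampled speed lines.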

  5. Analytic energy gradients for orbital-optimized MP3 and MP2.5 with the density-fitting approximation: An efficient implementation.

    Science.gov (United States)

    Bozkaya, Uğur

    2018-03-15

Efficient implementations of analytic gradients for the orbital-optimized MP3 and MP2.5 methods and their standard versions with the density-fitting approximation, denoted DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5, are presented. The DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5 methods are applied to a set of alkanes and noncovalent interaction complexes to compare their computational cost with that of the conventional MP3, MP2.5, OMP3, and OMP2.5 methods. Our results demonstrate that the density-fitted perturbation theory (DF-MP) methods considered substantially reduce the computational cost compared to conventional MP methods. The efficiency of our DF-MP methods arises from the reduced input/output (I/O) time and the acceleration of gradient-related terms, such as computation of particle density and generalized Fock matrices (PDMs and GFM), solution of the Z-vector equation, back-transformation of PDMs and GFM, and evaluation of analytic gradients in the atomic orbital basis. Further, application results show that errors introduced by the DF approach are negligible. The mean absolute error for bond lengths of a molecular set, with the cc-pCVQZ basis set, is 0.0001-0.0002 Å. © 2017 Wiley Periodicals, Inc.

  6. Modeling and design optimization of adhesion between surfaces at the microscale.

    Energy Technology Data Exchange (ETDEWEB)

    Sylves, Kevin T. (University of Colorado, Boulder, CO)

    2008-08-01

    This research applies design optimization techniques to structures in adhesive contact where the dominant adhesive mechanism is the van der Waals force. Interface finite elements are developed for domains discretized by beam elements, quadrilateral elements or triangular shell elements. Example analysis problems comparing finite element results to analytical solutions are presented. These examples are then optimized, where the objective is matching a force-displacement relationship and the optimization variables are the interface element energy of adhesion or the width of beam elements in the structure. Several parameter studies are conducted and discussed.

  7. Analytical Study on Thermal and Mechanical Design of Printed Circuit Heat Exchanger

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Su-Jong [Idaho National Lab. (INL), Idaho Falls, ID (United States); Sabharwall, Piyush [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kim, Eung-Soo [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2013-09-01

The analytical methodologies for the thermal design, mechanical design and cost estimation of printed circuit heat exchangers are presented in this study. Three flow arrangements, parallel flow, countercurrent flow and crossflow, are taken into account. For each flow arrangement, the analytical solution for the temperature profile of the heat exchanger is introduced. The sizes and costs of printed circuit heat exchangers for advanced small modular reactors, which employ various coolants such as sodium, molten salts, helium, and water, are also presented.
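
    For the countercurrent arrangement, the analytical solution is the standard effectiveness-NTU result; the sketch below applies it to a hypothetical channel pair (all capacity rates, conductance, and temperatures are illustrative numbers, not the study's reactor data) and exhibits the outlet temperature cross that only counterflow permits.

    ```python
    import numpy as np

    # Counterflow effectiveness-NTU relations applied to a PCHE-like channel
    # pair. All fluid properties and geometry below are assumed values.
    T_h_in, T_c_in = 550.0, 300.0        # inlet temperatures [C]
    C_h, C_c = 1.0e4, 1.5e4              # capacity rates m_dot * cp [W/K]
    UA = 2.0e4                           # overall conductance [W/K]

    C_min, C_max = min(C_h, C_c), max(C_h, C_c)
    Cr = C_min / C_max
    NTU = UA / C_min
    eff = (1 - np.exp(-NTU * (1 - Cr))) / (1 - Cr * np.exp(-NTU * (1 - Cr)))

    q = eff * C_min * (T_h_in - T_c_in)  # actual duty [W]
    T_h_out = T_h_in - q / C_h
    T_c_out = T_c_in + q / C_c
    ```

    With these numbers the cold outlet ends up hotter than the hot outlet, which is physically admissible only in the countercurrent arrangement and is one reason counterflow cores dominate compact heat exchanger sizing studies.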

  8. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    Science.gov (United States)

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

Predictive analytics in the big data era is taking on an ever increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as in documenting the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.

  9. Mastering Search Analytics Measuring SEO, SEM and Site Search

    CERN Document Server

    Chaters, Brent

    2011-01-01

Many companies still approach Search Engine Optimization (SEO) and paid search as separate initiatives. This in-depth guide shows you how to use these programs as part of a comprehensive strategy, not just to improve your site's search rankings, but to attract the right people and increase your conversion rate. Learn how to measure, test, analyze, and interpret all of your search data with a wide array of analytic tools. Gain the knowledge you need to determine the strategy's return on investment. Ideal for search specialists, webmasters, and search marketing managers, Mastering Search Analyt

  10. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Directory of Open Access Journals (Sweden)

    Samir Saoudi

    2008-07-01

Full Text Available The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
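
    The dependence of the optimal bandwidth on J(f) can be made concrete. Assuming a Gaussian kernel, the AMISE-optimal bandwidth is h = [R(K) / (mu2(K)^2 J(f) n)]^(1/5); substituting the value of J(f) for a reference normal density recovers the familiar normal-reference rule often used to initialize plug-in iterations. The sketch below applies this to a synthetic sample (not the genetic data of the paper).

    ```python
    import numpy as np

    # Normal-reference plug-in bandwidth for a Gaussian-kernel KDE:
    #   h = [R(K) / (mu2(K)^2 * J(f) * n)]^(1/5),  J(f) = integral of f''^2,
    # with J(f) = 3 / (8 sqrt(pi) sigma^5) for a reference normal density,
    # which reduces to h = (4/3)^(1/5) * sigma * n^(-1/5).
    rng = np.random.default_rng(7)
    x = rng.normal(0.0, 1.0, 500)
    n, sigma = x.size, x.std(ddof=1)

    r_k = 1 / (2 * np.sqrt(np.pi))             # R(K) for the Gaussian kernel
    j_f = 3 / (8 * np.sqrt(np.pi) * sigma**5)  # normal-reference J(f)
    h = (r_k / (j_f * n)) ** 0.2               # mu2(K) = 1 for the Gaussian kernel

    # Kernel density estimate on a grid; it should integrate to ~1.
    grid = np.linspace(-4, 4, 801)
    kde = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2).sum(axis=1)
    kde /= n * h * np.sqrt(2 * np.pi)
    area = kde.sum() * (grid[1] - grid[0])
    ```

    A full plug-in iteration would replace the normal-reference J(f) with an estimate from the data itself; the analytical approximation proposed in the paper is a shortcut for exactly that step.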

  11. Oracle Exalytics: Engineered for Speed-of-Thought Analytics

    Directory of Open Access Journals (Sweden)

    Gabriela GLIGOR

    2011-12-01

Full Text Available One of the biggest product announcements at 2011's Oracle OpenWorld user conference was Oracle Exalytics In-Memory Machine, the latest addition to the "Exa"-branded suite of Oracle-Sun engineered software-hardware systems. Analytics is all about gaining insights from the data for better decision making. However, the vision of delivering fast, interactive, insightful analytics has remained elusive for most organizations. Most enterprise IT organizations continue to struggle to deliver actionable analytics due to time-sensitive, sprawling requirements and ever-tightening budgets. The issue is further exacerbated by the fact that most enterprise analytics solutions require dealing with a number of hardware, software, storage and networking vendors, and precious resources are wasted integrating the hardware and software components to deliver a complete analytical solution. Oracle Exalytics Business Intelligence Machine is the world's first engineered system specifically designed to deliver high-performance analysis, modeling and planning. Built using industry-standard hardware, market-leading business intelligence software and in-memory database technology, Oracle Exalytics is an optimized system that delivers answers to all your business questions with unmatched speed, intelligence, simplicity and manageability.

  12. A semi-analytical study on helical springs made of shape memory polymer

    International Nuclear Information System (INIS)

    Baghani, M; Naghdabadi, R; Arghavani, J

    2012-01-01

    In this paper, the responses of shape memory polymer (SMP) helical springs under axial force are studied both analytically and numerically. In the analytical solution, we first derive the response of a cylindrical tube under torsional loadings. This solution can be used for helical springs in which both the curvature and pitch effects are negligible. This is the case for helical springs with large ratios of the mean coil radius to the cross sectional radius (spring index) and also small pitch angles. Making use of this solution simplifies the analysis of the helical springs to that of the torsion of a straight bar with circular cross section. The 3D phenomenological constitutive model recently proposed for SMPs is also reduced to the 1D shear case. Thus, an analytical solution for the torsional response of SMP tubes in a full cycle of stress-free strain recovery is derived. In addition, the curvature effect is added to the formulation and the SMP helical spring is analyzed using the exact solution presented for torsion of curved SMP tubes. In this modified solution, the effect of the direct shear force is also considered. In the numerical analysis, the 3D constitutive equations are implemented in a finite element program and a full cycle of stress-free strain recovery of an SMP (extension or compression) helical spring is simulated. Analytical and numerical results are compared and it is shown that the analytical solution gives accurate stress distributions in the cross section of the helical SMP spring besides the global load–deflection response. Some case studies are presented to show the validity of the presented analytical method. (paper)
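
    The reduction used above, treating a close-coiled helical spring as a straight circular bar in torsion, has a familiar elastic baseline that the SMP analysis generalizes. The sketch below evaluates it for an invented polymer-like spring (all numbers are assumptions, not the SMP material of the study); the Wahl factor adds the curvature and direct-shear corrections discussed in the modified solution.

    ```python
    import numpy as np

    # Elastic baseline for a close-coiled helical spring treated as a straight
    # circular bar in torsion. All values below are illustrative assumptions.
    G = 1.0e9        # shear modulus [Pa] (polymer-like)
    d = 4e-3         # wire diameter [m]
    R = 20e-3        # mean coil radius [m] -> spring index C = 2R/d = 10
    n_coils = 10
    F = 5.0          # axial load [N]

    C = 2 * R / d
    k = G * d**4 / (64 * R**3 * n_coils)          # spring rate [N/m]
    delta = F / k                                  # axial deflection [m]
    tau_torsion = 16 * F * R / (np.pi * d**3)      # uncorrected shear stress [Pa]
    wahl = (4 * C - 1) / (4 * C - 4) + 0.615 / C   # curvature + direct-shear factor
    tau_max = wahl * tau_torsion
    ```

    For a large spring index and small pitch angle the Wahl correction is modest (about 14% here), which is the regime in which the paper's straight-bar torsion solution is accurate; the SMP constitutive model then replaces the constant G with the phase-dependent shear response.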

  13. Analytical Evaluation to Determine Selected PAHs in a Contaminated Soil With Type II Fuel

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.; Sevillano Castano, M. L.; Garcia Frutos, F. J.

    2010-01-01

A study on the optimization of an ultrasonic extraction method for the determination of selected PAHs in soil contaminated by type II fuel, using HPLC with a fluorescence detector, is presented. The main objective was to optimize the analytical procedure, minimizing the volume of solvent and the analysis time and avoiding possible losses by evaporation. This work was carried out as part of a project that investigated a remediation process for agricultural land affected by an accidental spillage of fuel (Plan Nacional I + D + i, CTM2007-64 537). The paper is structured as follows: optimization of wavelengths in the chromatographic conditions to improve resolution in the analysis of fuel samples; optimization of the main parameters affecting the extraction process by sonication; and comparison of results with those obtained by accelerated solvent extraction. (Author) 3 refs

  14. Big data analytics in support of virtual network topology adaptability

    OpenAIRE

    Gifre Renom, Lluís; Contreras, Luis Miguel; Lopez Alvarez, Victor; Velasco Esteban, Luis Domingo

    2016-01-01

ABNO's OAM Handler is extended with big data analytics capabilities to anticipate traffic changes in volume and direction. Predicted traffic is used as input for VNT re-optimization. Experimental assessment is realized on UPC's SYNERGY testbed.

  15. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    International Nuclear Information System (INIS)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles

    2014-01-01

Highlights: • We optimized sample preparation for MALDI TOF analysis of poly(styrene-co-pentafluorostyrene) copolymers. • The influence of matrix choice was investigated. • The influence of the matrix/analyte ratio was examined. • The influence of the analyte/salt ratio (for the Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, the matrix:analyte ratio, and the salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as the matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for the styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that the individual styrene chain length distributions are more sensitive to a change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  17. An analytical study on groundwater flow in drainage basins with horizontal wells

    Science.gov (United States)

    Wang, Jun-Zhi; Jiang, Xiao-Wei; Wan, Li; Wang, Xu-Sheng; Li, Hailong

    2014-06-01

    Analytical studies on release/capture zones are often limited to a uniform background groundwater flow. In fact, for basin-scale problems, the undulating water table would lead to the development of hierarchically nested flow systems, which are more complex than a uniform flow. Under the premise that the water table is a replica of undulating topography and hardly influenced by wells, an analytical solution of hydraulic head is derived for a two-dimensional cross section of a drainage basin with horizontal injection/pumping wells. Based on the analytical solution, distributions of hydraulic head, stagnation points and flow systems (including release/capture zones) are explored. The superposition of injection/pumping wells onto the background flow field leads to the development of new internal stagnation points and new flow systems (including release/capture zones). Generally speaking, the existence of n injection/pumping wells would result in up to n new internal stagnation points and up to 2n new flow systems (including release/capture zones). The analytical study presented, which integrates traditional well hydraulics with the theory of regional groundwater flow, is useful in understanding basin-scale groundwater flow influenced by human activities.
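The superposition idea described in this record can be sketched numerically. The following is a hedged, simplified 2-D illustration (uniform background gradient plus logarithmic well potentials; all coefficients are invented), not the paper's basin-scale analytical solution:

```python
import math

def head(x, y, wells, background_gradient=0.01):
    """Hydraulic head from a uniform background flow plus wells, by
    superposition (toy 2-D sketch with logarithmic well potentials and
    invented coefficients, not the paper's basin solution)."""
    h = -background_gradient * x          # uniform background flow in +x
    for xw, yw, strength in wells:        # strength > 0: pumping well
        r = math.hypot(x - xw, y - yw)
        h += strength * math.log(max(r, 1e-6))  # head falls toward the well
    return h

wells = [(0.0, 0.0, 0.05)]               # one pumping well at the origin
h_upstream = head(-100.0, 0.0, wells)
h_downstream = head(100.0, 0.0, wells)
```

Contouring `head` over a grid would expose the stagnation points and capture zones the abstract discusses, as the well term locally overwhelms the background gradient.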

  18. Karlsruhe international conference on analytical chemistry in nuclear technology

    International Nuclear Information System (INIS)

    1985-01-01

    This volume presents 218 abstracts of contributions by researchers working in the analytical chemistry field of nuclear technology. The majority of the papers deal with analysis with respect to process control in fuel reprocessing plants, fission and corrosion product characterization throughout the fuel cycle as well as studies of the chemical composition of radioactive wastes. Great interest is taken in the development and optimization of methods and instrumentation especially for in-line process control. About 3/4 of the papers have been entered into the data base separately. (RB)

  19. Continuous Analytical Performances Monitoring at the On-Site Laboratory through Proficiency, Inter-Laboratory Testing and Inter-Comparison Analytical Methods

    International Nuclear Information System (INIS)

    Duhamel, G.; Decaillon, J.-G.; Dashdondog, S.; Kim, C.-K.; Toervenyi, A.; Hara, S.; Kato, S.; Kawaguchi, T.; Matsuzawa, K.

    2015-01-01

    Since 2008, as one measure to strengthen its quality management system, the On-Site Laboratory for nuclear safeguards at the Rokkasho Reprocessing Plant has increased its participation in domestic and international proficiency and inter-laboratory testing, both to determine analytical method accuracy, precision and robustness and to support method development and improvement. This paper provides a description of the testing and its scheduling. It presents the way the testing was optimized to cover most of the analytical methods at the OSL. The paper presents the methodology used for the evaluation of the obtained results based on analysis of variance (ANOVA). Results are discussed with respect to random, systematic and long-term systematic error. (author)

  20. Educational Optimism among Parents: A Pilot Study

    Science.gov (United States)

    Räty, Hannu; Kasanen, Kati

    2016-01-01

    This study explored parents' (N = 351) educational optimism in terms of their trust in the possibilities of school to develop children's intelligence. It was found that educational optimism could be depicted as a bipolar factor with optimism and pessimism on the opposing ends of the same dimension. Optimistic parents indicated more satisfaction…

  1. Optimal policies of non-cross-resistant chemotherapy on Goldie and Coldman's cancer model.

    Science.gov (United States)

    Chen, Jeng-Huei; Kuo, Ya-Hui; Luh, Hsing Paul

    2013-10-01

    Mathematical models can be used to study the effects of chemotherapy on tumor cells. In particular, in 1979, Goldie and Coldman proposed the first mathematical model to relate the drug sensitivity of tumors to their mutation rates. Many scientists have since referred to this pioneering work because of its simplicity and elegance. Its original idea has also been extended and further investigated in massive follow-up studies of cancer modeling and optimal treatment. Goldie and Coldman, together with Guaduskas, later used their model to explain why an alternating non-cross-resistant chemotherapy is optimal with a simulation approach. Subsequently in 1983, Goldie and Coldman proposed an extended stochastic based model and provided a rigorous mathematical proof to their earlier simulation work when the extended model is approximated by its quasi-approximation. However, Goldie and Coldman's analytic study of optimal treatments focused mainly on a process with symmetrical parameter settings, and presented few theoretical results for asymmetrical settings. In this paper, we recast and restate Goldie, Coldman, and Guaduskas' model as a multi-stage optimization problem. Under an asymmetrical assumption, the conditions under which a treatment policy can be optimal are derived. The proposed framework enables us to consider some optimal policies on the model analytically. In addition, Goldie, Coldman and Guaduskas' work with symmetrical settings can be treated as a special case of our framework. Based on the derived conditions, this study provides an alternative proof to Goldie and Coldman's work. In addition to the theoretical derivation, numerical results are included to justify the correctness of our work. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Optimal control of HIV/AIDS dynamic: Education and treatment

    Science.gov (United States)

    Sule, Amiru; Abdullah, Farah Aini

    2014-07-01

    A mathematical model which describes the transmission dynamics of HIV/AIDS is developed. The optimal control representing education and treatment for this model is explored. The existence of an optimal control is established analytically by the use of optimal control theory. Numerical simulations suggest that education and treatment for the infected have a positive impact on HIV/AIDS control.

  3. Streamflow variability and optimal capacity of run-of-river hydropower plants

    Science.gov (United States)

    Basso, S.; Botter, G.

    2012-10-01

    The identification of the capacity of a run-of-river plant which allows for the optimal utilization of the available water resources is a challenging task, mainly because of the inherent temporal variability of river flows. This paper proposes an analytical framework to describe the energy production and the economic profitability of small run-of-river power plants on the basis of the underlying streamflow regime. We provide analytical expressions for the capacity which maximizes the produced energy as a function of the underlying flow duration curve and minimum environmental flow requirements downstream of the plant intake. Similar analytical expressions are derived for the capacity which maximizes the economic return derived from construction and operation of a new plant. The analytical approach is applied to a mini-hydro plant recently proposed in a small Alpine catchment in northeastern Italy, evidencing the potential of the method as a flexible and simple design tool for practical applications. The analytical model provides useful insight into the major hydrologic and economic controls (e.g., streamflow variability, energy price, costs) on the optimal plant capacity and helps in identifying policy strategies to reduce the current gap between the economic and energy optimizations of run-of-river plants.
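The trade-off described in this record, capping the usable flow at plant capacity subject to a minimum environmental flow, then weighing energy revenue against capacity cost, can be sketched numerically. All hydrologic and economic figures below are illustrative assumptions, not values from the Alpine case study:

```python
import random

RHO_G = 1000.0 * 9.81   # water density * gravity: W per (m^3/s) per m of head

def annual_energy_mwh(capacity, flows, mef, efficiency=0.85, head=50.0):
    """Yearly energy (MWh): usable flow is streamflow minus the minimum
    environmental flow (mef), capped at the plant capacity."""
    used = [min(max(q - mef, 0.0), capacity) for q in flows]
    mean_power_w = efficiency * RHO_G * head * sum(used) / len(used)
    return mean_power_w * 8760.0 / 1e6

# Invented daily streamflows (m^3/s) standing in for a flow duration curve
random.seed(0)
flows = [random.lognormvariate(1.0, 0.6) for _ in range(365)]

def profit(capacity, price_eur_mwh=60.0, annual_cost_per_unit=100000.0):
    """Net return: energy revenue minus an (assumed) annualized capacity cost."""
    return (price_eur_mwh * annual_energy_mwh(capacity, flows, mef=0.5)
            - annual_cost_per_unit * capacity)

capacities = [0.5 + 0.15 * k for k in range(51)]   # candidate sizes, m^3/s
best = max(capacities, key=profit)                 # economically optimal capacity
```

Energy is nondecreasing in capacity, so the energy optimum is always the largest feasible plant; adding a capacity cost is what produces the interior economic optimum the paper contrasts it with.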

  4. Dynamic stochastic optimization

    CERN Document Server

    Ermoliev, Yuri; Pflug, Georg

    2004-01-01

    Uncertainties and changes are pervasive characteristics of modern systems involving interactions between humans, economics, nature and technology. These systems are often too complex to allow for precise evaluations and, as a result, the lack of proper management (control) may create significant risks. In order to develop robust strategies we need approaches which explicitly deal with uncertainties, risks and changing conditions. One rather general approach is to characterize (explicitly or implicitly) uncertainties by objective or subjective probabilities (measures of confidence or belief). This leads us to stochastic optimization problems which can rarely be solved by using the standard deterministic optimization and optimal control methods. In stochastic optimization the accent is on problems with a large number of decision and random variables, and consequently the focus of attention is directed to efficient solution procedures rather than to (analytical) closed-form solutions. Objective an...

  5. Optimization of IC/HPLC as a rapid analytical tool for characterization of total impurities in UO2

    International Nuclear Information System (INIS)

    Kelkar, A.G.; Kapoor, Y.S.; Mahanty, B.N.; Fulzele, A.K.; Mallik, G.K.

    2007-01-01

    Use of ion chromatography in the determination of metallic and non-metallic impurities has been studied and observed to be very satisfactory. In the present paper, the total analysis time was monitored in all these experiments and compared with that of conventional analytical techniques. (author)

  6. On the analytic and numeric optimisation of airplane trajectories under real atmospheric conditions

    Science.gov (United States)

    Gonzalo, J.; Domínguez, D.; López, D.

    2014-12-01

    From the beginning of the aviation era, economic constraints have forced operators to continuously improve the planning of their flights. The revenue is proportional to the cost per flight and the airspace occupancy. Many methods, the first dating from the middle of the last century, have explored analytical, numerical and artificial intelligence resources to reach the optimal flight planning. In parallel, advances in meteorology and communications allow an almost real-time knowledge of the atmospheric conditions and a reliable, error-bounded forecast for the near future. Thus, apart from weather risks to be avoided, airplanes can dynamically adapt their trajectories to minimise their costs. International regulators are aware of these capabilities, so it is reasonable to envisage changes that allow this dynamic planning negotiation to soon become operational. Moreover, current unmanned airplanes, very popular and often small, suffer the impact of winds and other weather conditions in the form of dramatic changes in their performance. The present paper reviews analytic and numeric solutions for typical trajectory planning problems. Analytic methods are those trying to solve the problem using the Pontryagin principle, where influence parameters are added to state variables to form a split-condition differential equation problem. The system can be solved numerically (indirect optimisation) or using parameterised functions (direct optimisation). On the other hand, numerical methods are based on Bellman's dynamic programming (or Dijkstra algorithms), which exploit the fact that two optimal trajectories can be concatenated to form a new optimal one if the joint point is demonstrated to belong to the final optimal solution. There are no a priori conditions determining the best method. Traditionally, analytic methods have been employed more for continuous problems and numeric methods for discrete ones. In the current problem, airplane behaviour is defined by continuous equations, while wind fields are given in a
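The Bellman/Dijkstra approach mentioned in this record can be sketched on a discretized wind field. This is an illustrative toy (unit grid cells, wind acting only on east-west legs, made-up numbers), not the paper's formulation:

```python
import heapq

def dijkstra_route(grid_w, grid_h, airspeed, wind, start, goal):
    """Shortest-time route on a unit grid where a wind field wind(x, y)
    speeds up eastward legs and slows westward ones (toy discretization
    of the Bellman/Dijkstra approach, not the paper's formulation)."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        t, (x, y) = heapq.heappop(pq)
        if (x, y) == goal:
            return t
        if t > dist.get((x, y), float("inf")):
            continue  # stale queue entry
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if not (0 <= nx < grid_w and 0 <= ny < grid_h):
                continue
            # Wind alters ground speed only on east/west legs in this toy
            w = wind(nx, ny)
            ground_speed = airspeed + (w if dx == 1 else -w if dx == -1 else 0.0)
            if ground_speed <= 0:
                continue  # cannot make headway against this wind
            nt = t + 1.0 / ground_speed  # time = cell size / ground speed
            if nt < dist.get((nx, ny), float("inf")):
                dist[(nx, ny)] = nt
                heapq.heappush(pq, (nt, (nx, ny)))
    return float("inf")

# A strong tailwind corridor at y == 1 makes a detour faster than the
# direct row: up (0.2) + five east legs at speed 20 (0.25) + down (0.2)
wind = lambda x, y: 15.0 if y == 1 else 0.0
t_best = dijkstra_route(6, 3, airspeed=5.0, wind=wind, start=(0, 0), goal=(5, 0))
```

The concatenation property the abstract cites is exactly what justifies Dijkstra's greedy settling of nodes here.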

  7. Finite Gaussian Mixture Approximations to Analytically Intractable Density Kernels

    DEFF Research Database (Denmark)

    Khorunzhina, Natalia; Richard, Jean-Francois

    The objective of the paper is that of constructing finite Gaussian mixture approximations to analytically intractable density kernels. The proposed method is adaptive in that terms are added one at a time and the mixture is fully re-optimized at each step using a distance measure that approxima...

  8. Optimization of mobile analysis of radionuclides

    International Nuclear Information System (INIS)

    Labaska, M.

    2016-01-01

    This thesis is focused on the optimization of separation and determination of radionuclides for use in mobile or field analysis. The methods described are part of the procedures of a mobile radiometric laboratory being developed for the Slovak Armed Forces. Their main principle is the separation of analytes by high performance liquid chromatography, using both reversed-phase and ion exchange chromatography. Chromatography columns such as Dionex IonPack® CS5A, Dionex IonPack® CS3 and Hypersil® BDS C18 have been used. For detection of stable nuclides, conductivity detection and UV/VIS detection have been employed. Separation of alkali and alkaline earth metals, transition metals and lanthanides has been optimized. The combination of chromatographic separation and flow scintillation analysis has also been studied. The radioactive isotopes ⁵⁵Fe, ²¹⁰Pb, ⁶⁰Co, ⁸⁵Sr and ¹³⁴Cs were chosen as analytes for nuclear detection techniques. Utilization of well-type and planar NaI(Tl) detectors has been investigated together with cloud point extraction. For micelle-mediated extraction, two possible ligands have been studied: 8-hydroxyquinoline and ammonium pyrrolidinedithiocarbamate. Recoveries of cloud point extraction were in the range of 80 to 90%. The thesis also addresses the application of liquid scintillation analysis with cloud point extraction of analytes. A radioactive standard containing ⁵⁵Fe, ²¹⁰Pb, ⁶⁰Co, ⁸⁵Sr and ¹³⁴Cs was separated by liquid chromatography; fractions of the individual isotopes were collected, extracted by cloud point extraction and measured by liquid scintillation analysis. Finally, cloud point extraction coupled with ICP-MS has been studied. (author)

  9. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Science.gov (United States)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. The two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
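A minimal sketch of the plug-in idea for a Gaussian-kernel estimator. Here the roughness of the pdf's second derivative is approximated with a Gaussian reference (which reduces to Silverman's rule of thumb); the paper's faster analytical approximation is more elaborate:

```python
import math
import statistics

def plugin_bandwidth(data):
    """AMISE-style bandwidth h = (R(K) / (mu2(K)^2 * R(f'') * n))^(1/5) for a
    Gaussian kernel. R(f''), the roughness of the pdf's second derivative,
    is replaced by a Gaussian reference here (giving Silverman's rule); a
    full plug-in method would estimate it from the data instead."""
    n = len(data)
    sigma = statistics.stdev(data)
    r_k = 1.0 / (2.0 * math.sqrt(math.pi))                # roughness of the kernel
    r_f2 = 3.0 / (8.0 * math.sqrt(math.pi) * sigma ** 5)  # Gaussian-reference R(f'')
    return (r_k / (r_f2 * n)) ** 0.2                      # mu2 = 1 for Gaussian K

def kde(x, data, h):
    """Gaussian kernel density estimate at point x."""
    norm = len(data) * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / norm

data = [float(i) for i in range(10)]
h = plugin_bandwidth(data)
```

With the Gaussian reference the expression collapses algebraically to h = (4/(3n))^(1/5) * sigma, which is how the formula can be sanity-checked.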

  10. Experimental and analytical study of high velocity impact on Kevlar/Epoxy composite plates

    Science.gov (United States)

    Sikarwar, Rahul S.; Velmurugan, Raman; Madhu, Velmuri

    2012-12-01

    In the present study, impact behavior of Kevlar/Epoxy composite plates has been carried out experimentally by considering different thicknesses and lay-up sequences and compared with analytical results. The effect of thickness, lay-up sequence on energy absorbing capacity has been studied for high velocity impact. Four lay-up sequences and four thickness values have been considered. Initial velocities and residual velocities are measured experimentally to calculate the energy absorbing capacity of laminates. Residual velocity of projectile and energy absorbed by laminates are calculated analytically. The results obtained from analytical study are found to be in good agreement with experimental results. It is observed from the study that 0/90 lay-up sequence is most effective for impact resistance. Delamination area is maximum on the back side of the plate for all thickness values and lay-up sequences. The delamination area on the back is maximum for 0/90/45/-45 laminates compared to other lay-up sequences.
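The energy bookkeeping used in such ballistic studies reduces to the projectile's kinetic-energy loss between the measured initial and residual velocities. A hedged sketch with invented projectile numbers (not the paper's test data):

```python
import math

def energy_absorbed(mass_kg, v_initial, v_residual):
    """Energy absorbed by the laminate, taken as the projectile's
    kinetic-energy loss (J)."""
    return 0.5 * mass_kg * (v_initial ** 2 - v_residual ** 2)

def residual_velocity(mass_kg, v_initial, e_absorbed):
    """Residual velocity implied by an absorbed-energy estimate
    (0.0 if the projectile is stopped)."""
    v_sq = v_initial ** 2 - 2.0 * e_absorbed / mass_kg
    return math.sqrt(v_sq) if v_sq > 0.0 else 0.0

# Invented numbers: an 8 g projectile slowed from 400 m/s to 250 m/s
e_abs = energy_absorbed(0.008, 400.0, 250.0)
```

The two functions are inverses of each other, which is how an analytical residual-velocity model can be compared against measured energy absorption.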

  11. Analytical investigation of the thermal optimization of biogas plants

    International Nuclear Information System (INIS)

    Knauer, Thomas; Scholwin, Frank; Nelles, Michael

    2015-01-01

    The economic efficiency of biogas plants is harder to achieve under recent legal regulations than under the bonus tariff systems of previous EEG amendments. There are different options to enhance efficiency, often linked with further investments. Direct technical innovations with fast economic yields need exact evaluation of the limiting conditions. This article studies the heat sector of agricultural biogas plants. Scarcely considered so far, the improvement of on-site thermal energy consumption in particular promises a large optimisation potential. The data basis comprises feeding protocols and temperature measurements of input substrates, biogas, environment etc., as well as documentation of on-site thermal consumption over 10 years. Analysis of the first measurement results and preliminary balances shows that maintaining the biogas process temperature consumes the most thermal energy and therefore has the greatest potential for improvement. Passive and active insulation of feed systems and heat recovery from secondary fermenter liquids are identified as first optimization measures. Depending on the amount and temperature rise of input substrates, saving potentials of more than a hundred megawatt hours per year were calculated.

  12. PLS2 regression as a tool for selection of optimal analytical modality

    DEFF Research Database (Denmark)

    Madsen, Michael; Esbensen, Kim

    Intelligent use of modern process analysers allows process technicians and engineers to look deep into the dynamic behaviour of production systems. This opens up a plurality of new possibilities with respect to process optimisation. Oftentimes, several instruments representing different technologies and price classes are able to decipher relevant process information simultaneously. The question then is: how to choose between available technologies without compromising the quality and usability of the data. We apply PLS2 modelling to quantify the relative merits of competing, or complementing, analytical modalities. We here present results from a feasibility study, where Fourier Transform Near InfraRed (FT-NIR), Fourier Transform Mid InfraRed (FT-MIR), and Raman laser spectroscopy were applied on the same set of samples obtained from a pilot-scale beer brewing process. Quantitative PLS1 models

  13. Analytical study on holographic superfluid in AdS soliton background

    International Nuclear Information System (INIS)

    Lai, Chuyu; Pan, Qiyuan; Jing, Jiliang; Wang, Yongjiu

    2016-01-01

    We analytically study the holographic superfluid phase transition in the AdS soliton background by using the variational method for the Sturm–Liouville eigenvalue problem. By investigating the holographic s-wave and p-wave superfluid models in the probe limit, we observe that the spatial component of the gauge field will hinder the phase transition. Moreover, we note that, different from the AdS black hole spacetime, in the AdS soliton background the holographic superfluid phase transition is always second order, and the critical exponent of the system takes the mean-field value in both s-wave and p-wave models. Our analytical results are found to be in good agreement with the numerical findings.

  14. Examining the Use of a Visual Analytics System for Sensemaking Tasks: Case Studies with Domain Experts.

    Science.gov (United States)

    Kang, Youn-Ah; Stasko, J

    2012-12-01

    While the formal evaluation of systems in visual analytics is still relatively uncommon, particularly rare are case studies of prolonged system use by domain analysts working with their own data. Conducting case studies can be challenging, but it can be a particularly effective way to examine whether visual analytics systems are truly helping expert users to accomplish their goals. We studied the use of a visual analytics system for sensemaking tasks on documents by six analysts from a variety of domains. We describe their application of the system along with the benefits, issues, and problems that we uncovered. Findings from the studies identify features that visual analytics systems should emphasize as well as missing capabilities that should be addressed. These findings inform design implications for future systems.

  15. Analytical solution of a stochastic model of risk spreading with global coupling

    Science.gov (United States)

    Morita, Satoru; Yoshimura, Jin

    2013-11-01

    We study a stochastic matrix model to understand the mechanics of risk spreading (or bet hedging) by dispersion. Up to now, this model has been mostly dealt with numerically, except for the well-mixed case. Here, we present an analytical result that shows that optimal dispersion leads to Zipf's law. Moreover, we found that the arithmetic ensemble average of the total growth rate converges to the geometric one, because the sample size is finite.
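A toy two-patch version of the bet-hedging mechanism in this record can illustrate why the geometric (long-run) growth rate, rather than the arithmetic average, is what dispersal optimizes. This simplified sketch is not the paper's stochastic matrix model:

```python
import math
import random

def long_run_growth(p_good, r_good, r_bad, dispersal, years=20000, seed=1):
    """Average log growth rate of a lineage splitting its offspring between
    two patches; each year one patch is good and the other bad. A toy
    two-patch illustration of bet hedging, not the paper's matrix model."""
    random.seed(seed)
    total = 0.0
    for _ in range(years):
        good_here = random.random() < p_good
        r_home = r_good if good_here else r_bad
        r_away = r_bad if good_here else r_good
        growth = (1.0 - dispersal) * r_home + dispersal * r_away
        total += math.log(growth)
    return total / years

# Geometric-mean fitness rises with dispersal in this symmetric setting
rates = {d: long_run_growth(0.5, 2.0, 0.2, d) for d in (0.0, 0.25, 0.5)}
```

At dispersal 0.5 the yearly growth is deterministic here, so the arithmetic and geometric averages coincide, echoing the convergence the abstract mentions.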

  16. Topology optimization for nano-photonics

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard; Sigmund, Ole

    2011-01-01

    Topology optimization is a computational tool that can be used for the systematic design of photonic crystals, waveguides, resonators, filters and plasmonics. The method was originally developed for mechanical design problems but has within the last six years been applied to a range of photonics applications. Topology optimization may be based on finite element and finite difference type modeling methods in both frequency and time domain. The basic idea is that the material density of each element or grid point is a design variable, hence the geometry is parameterized in a pixel-like fashion. The optimization problem is efficiently solved using mathematical programming-based optimization methods and analytical gradient calculations. The paper reviews the basic procedures behind topology optimization, a large number of applications ranging from photonic crystal design to surface plasmonic devices...

  17. Optimization of power system operation

    CERN Document Server

    Zhu, Jizhong

    2015-01-01

    This book applies the latest applications of new technologies to power system operation and analysis, including new and important areas that are not covered in the previous edition. Optimization of Power System Operation covers both traditional and modern technologies, including power flow analysis, steady-state security region analysis, security constrained economic dispatch, multi-area system economic dispatch, unit commitment, optimal power flow, smart grid operation, optimal load shed, optimal reconfiguration of distribution network, power system uncertainty analysis, power system sensitivity analysis, analytic hierarchical process, neural network, fuzzy theory, genetic algorithm, evolutionary programming, and particle swarm optimization, among others. New topics such as the wheeling model, multi-area wheeling, and the total transfer capability computation in multiple areas are also addressed. The new edition of this book continues to provide engineers and academics with a complete picture of the optimization of techn...

  18. Library improvement through data analytics

    CERN Document Server

    Farmer, Lesley S J

    2017-01-01

    This book shows how to act on and make sense of data in libraries. Using a range of techniques, tools and methodologies, it explains how data can be used to help inform decision making at every level. Sound data analytics is the foundation for making an evidence-based case for libraries, in addition to guiding myriad organizational decisions, from optimizing operations for efficiency to responding to community needs. Designed to be useful for beginners as well as those with a background in data, this book introduces the basics of a six-point framework that can be applied to a variety of library settings for effective system-based, data-driven management. Library Improvement Through Data Analytics includes: the basics of statistical concepts; recommended data sources for various library functions and processes, and guidance for using census, university, or government data in analysis; techniques for cleaning data; matching data to appropriate data analysis methods; how to make descriptive statistics m...

  19. Case study: IBM Watson Analytics cloud platform as Analytics-as-a-Service system for heart failure early detection

    OpenAIRE

    Guidi, Gabriele; Miniati, Roberto; Mazzola, Matteo; Iadanza, Ernesto

    2016-01-01

    In recent years, progress in technology and the increasing availability of fast connections have produced a migration of functionalities in Information Technology services, from static servers to distributed technologies. This article describes the main tools available on the market to perform Analytics as a Service (AaaS) using a cloud platform. It also describes a use case of IBM Watson Analytics, a cloud system for data analytics, applied to the following research scope: detect...

  20. Environment-friendly analytic of mineral oil hydrocarbons. Final report

    International Nuclear Information System (INIS)

    Lipinski, J.; Dethlefs, F.

    1998-01-01

    An analytical method has been developed for the quantitative analysis of petroleum residues in soils. The measuring method is gas chromatography. Details on the optimization of the method, the calibration and the accuracy are presented. Qualitative analysis of petroleum fractions is only possible for gasoline and higher-boiling fractions. (SR) [de

  1. Identifying and prioritizing indicators and effective solutions to optimize the use of wood in the construction of classical furniture by using AHP (Case study of Qom)

    Directory of Open Access Journals (Sweden)

    Mohammad Ghofrani

    2017-02-01

    Full Text Available The aim of this study was to identify and prioritize the indicators, and to provide effective solutions, for optimizing the use of wood in the construction of classical furniture using the analytic hierarchy process (case study in Qom). For this purpose, drawing on the studies and results of other researchers and on interviews with experts, the factors affecting the optimization of wood consumption were divided into 4 main categories and 23 sub-indicators. The importance of the sub-indicators was determined by AHP after obtaining feedback from furniture producers. The results show that surface design and human resources are of great importance. In addition, among the 23 sub-indicators affecting the optimization of wood use in classical furniture construction, ergonomics, style, skill training and inlay work, with weight values of 0.247, 0.181, 0.124 and 0.087 respectively, are of paramount importance, and solutions based on employing a specialist workforce were identified as a priority.
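The AHP weighting step behind figures like 0.247 can be sketched as a principal-eigenvector computation on a reciprocal pairwise comparison matrix. The matrix below is invented for illustration, not the study's expert data:

```python
def ahp_priorities(matrix, iterations=100):
    """Priority weights of a reciprocal pairwise comparison matrix via power
    iteration toward the principal eigenvector (plain-Python AHP sketch)."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]          # renormalize to sum to 1
    return w

# Invented 3-criterion matrix on Saaty's 1-9 scale (not the study's data):
# entry [i][j] is how much more important criterion i is than criterion j
m = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 1.0 / 2.0, 1.0]]
weights = ahp_priorities(m)
```

A full AHP study would also check each matrix's consistency ratio before accepting the weights.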

  2. Balanced and optimal bianisotropic particles: maximizing power extracted from electromagnetic fields

    International Nuclear Information System (INIS)

    Ra'di, Younes; Tretyakov, Sergei A

    2013-01-01

    Here we introduce the concept of ‘optimal particles’ for strong interactions with electromagnetic fields. We assume that a particle occupies a given electrically small volume in space and study the required optimal relations between the particle polarizabilities. In these optimal particles, the inclusion shape and material are chosen so that the particles extract the maximum possible power from given incident fields. It appears that for different excitation scenarios the optimal particles are bianisotropic chiral, omega, moving and Tellegen particles. The optimal dimensions of resonant canonical chiral and omega particles are found analytically. Such optimal particles have extreme properties in scattering (e.g., zero backscattering or invisibility). Planar arrays of optimal particles possess extreme properties in reflection and transmission (e.g. total absorption or magnetic-wall response), and volumetric composites of optimal particles realize, for example, such extreme materials as the chiral nihility medium. (paper)

  3. Replica Analysis for Portfolio Optimization with Single-Factor Model

    Science.gov (United States)

    Shinzato, Takashi

    2017-06-01

    In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model and compare the findings obtained from our proposed methods with correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.

  4. A study on optimization of photoneutron shielding in a medical accelerator room by using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Kim, Yong Nam; Jeong, Kyoungkeun; Kim, Joo Young; Lee, Chang Geol; Seong, Jinsil; Choi, Sang Hyun; Kim, Chan Hyeong

    2008-01-01

    Medical linear accelerators operating above 10 MV require door shielding for neutrons in addition to photons. A criterion for the choice of an optimal lamination configuration of BPE (borated polyethylene) and lead is not clear. Moreover, the optimal configuration cannot be determined by the conventional method using an analytical formula and simple measurement. This study performs Monte Carlo simulation of the radiation field in a commercial LINAC room with a 15 MV X-ray source. Considering two lamination configurations, 'lead-BPE' and 'lead-BPE-lead', dose equivalents are calculated using the MCNPX code and compared with each other. The obtained results show that there is no significant difference in neutron shielding between the two configurations, whereas lead-BPE-lead is more effective for photon shielding. It is also noted that the absolute values of neutron doses are much greater than those of photon doses, outside as well as inside the door, by three orders of magnitude. In conclusion, the lead-BPE lamination is suggested as the optimal configuration from the viewpoint of simplicity in fabrication and handling, even though it has no significant difference from lead-BPE-lead in terms of total dose equivalent. (author)

  5. Ultrasonic Generation and Optimization for EMAT

    International Nuclear Information System (INIS)

    Jian, X.; Dixon, Steve; Edwards, Rachel S.

    2005-01-01

    A model for transient ultrasonic wave generation by EMATs in non-magnetic metals is presented. It combines currently available analytical solutions with FEM to calculate the ultrasonic bulk and Rayleigh waves generated by the EMAT. Analytical solutions are used because they can be calculated quickly in a standard mathematical computer package. Calculations agree well with experimental measurements. The model can be used to optimize EMAT design, and has explained some of the results from our previously published measurements.

  6. Optimally Fortifying Logic Reliability through Criticality Ranking

    Directory of Open Access Journals (Sweden)

    Yu Bai

    2015-02-01

    With CMOS technology aggressively scaling towards the 22-nm node, modern FPGA devices face tremendous aging-induced reliability challenges due to bias temperature instability (BTI) and hot carrier injection (HCI). This paper presents a novel anti-aging technique at the logic level that is both scalable and applicable for VLSI digital circuits implemented with FPGA devices. The key idea is to prolong the lifetime of FPGA-mapped designs by strategically elevating the VDD values of some LUTs based on their modular criticality values. Although the idea of scaling VDD in order to improve either energy efficiency or circuit reliability has been explored extensively, our study distinguishes itself by approaching this challenge through an analytical procedure, therefore being able to maximize the overall reliability of the target FPGA design by rigorously modeling the BTI-induced device reliability and optimally solving the VDD assignment problem. Specifically, we first develop a systematic framework to analytically model the reliability of an FPGA LUT (look-up table), which consists of both RAM memory bits and associated switching circuit. We also, for the first time, establish the relationship between signal transition density and a LUT’s reliability in an analytical way. This key observation further motivates us to define the modular criticality as the product of signal transition density and the logic observability of each LUT. Finally, we analytically prove, for the first time, that the optimal way to improve the overall reliability of a whole FPGA device is to fortify individual LUTs according to their modular criticality. To the best of our knowledge, this work is the first to draw such a conclusion.
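The ranking idea in the abstract can be sketched in a few lines: compute modular criticality as the product of signal transition density and logic observability, sort the LUTs, and elevate VDD for the most critical ones under a budget. The LUT names, numbers, budget, and voltage levels below are invented; the paper's actual reliability model is far more detailed.

```python
# Hedged sketch of criticality-ranked VDD assignment. All data are hypothetical.

luts = {                      # name: (transition_density, observability)
    "lut_a": (0.30, 0.90),
    "lut_b": (0.10, 0.40),
    "lut_c": (0.25, 0.20),
    "lut_d": (0.05, 0.95),
}

# Modular criticality = transition density x logic observability (per abstract)
criticality = {name: td * obs for name, (td, obs) in luts.items()}

budget = 2                    # number of LUTs whose VDD we may elevate
ranked = sorted(criticality, key=criticality.get, reverse=True)
elevated = set(ranked[:budget])

# Assign an elevated supply voltage to the most critical LUTs (values invented)
vdd = {name: (1.0 if name in elevated else 0.9) for name in luts}
```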

  7. Group-analytic training groups for psychology students: A qualitative study

    DEFF Research Database (Denmark)

    Nathan, Vibeke Torpe; Poulsen, Stig

    2004-01-01

    This article presents results from an interview study of psychology students' experiences from group-analytic groups conducted at the University of Copenhagen. The primary foci are the significance of differences in the motivation and personal aims of individual participants for participation in the group, the impact of the composition of participants on the group process, and the professional learning through the group experience. In general the interviews show a marked satisfaction with the group participation. In particular, learning about the importance of group boundaries...

  8. Experimental/analytical approaches to modeling, calibrating and optimizing shaking table dynamics for structural dynamic applications

    Science.gov (United States)

    Trombetti, Tomaso

    This thesis presents an Experimental/Analytical approach to modeling and calibrating shaking tables for structural dynamic applications. This approach was successfully applied to the shaking table recently built in the structural laboratory of the Civil Engineering Department at Rice University. This shaking table is capable of reproducing model earthquake ground motions with a peak acceleration of 6 g's, a peak velocity of 40 inches per second, and a peak displacement of 3 inches, for a maximum payload of 1500 pounds. It has a frequency bandwidth of approximately 70 Hz and is designed to test structural specimens up to 1/5 scale. The rail/table system is mounted on a reaction mass of about 70,000 pounds consisting of three 12 ft x 12 ft x 1 ft reinforced concrete slabs, post-tensioned together and connected to the strong laboratory floor. The slip table is driven by a hydraulic actuator governed by a 407 MTS controller which employs a proportional-integral-derivative-feedforward-differential pressure algorithm to control the actuator displacement. Feedback signals are provided by two LVDT's (monitoring the slip table relative displacement and the servovalve main stage spool position) and by one differential pressure transducer (monitoring the actuator force). The dynamic actuator-foundation-specimen system is modeled and analyzed by combining linear control theory and linear structural dynamics. The analytical model developed accounts for the effects of actuator oil compressibility, oil leakage in the actuator, time delay in the response of the servovalve spool to a given electrical signal, foundation flexibility, and dynamic characteristics of multi-degree-of-freedom specimens. In order to study the actual dynamic behavior of the shaking table, the transfer function between target and actual table accelerations was identified using experimental results and spectral estimation techniques.
The power spectral density of the system input and the cross power spectral
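The identification step described above can be sketched at a single frequency: estimate the transfer function between a commanded (target) signal and the measured table response from discrete Fourier coefficients. A real implementation would use Welch-averaged power and cross spectra over the full bandwidth; the gain of 0.8 and the 30-degree phase lag below are invented test values, not properties of the Rice table.

```python
# Minimal sketch: single-frequency transfer-function estimate H = S_xy / S_xx.
# The "actual" response is synthesized with an invented gain and phase lag.
import cmath
import math

fs = 1000.0                       # sample rate, Hz
f0 = 5.0                          # test frequency, Hz (integer cycles in 1 s)
N = 1000                          # one second of data
t = [k / fs for k in range(N)]

target = [math.sin(2 * math.pi * f0 * tk) for tk in t]
# Pretend the table responds with gain 0.8 and a 30-degree phase lag.
actual = [0.8 * math.sin(2 * math.pi * f0 * tk - math.pi / 6) for tk in t]

def fourier_coeff(x, f):
    """Discrete Fourier coefficient of x at frequency f."""
    return sum(xk * cmath.exp(-2j * math.pi * f * tk) for xk, tk in zip(x, t))

X = fourier_coeff(target, f0)
Y = fourier_coeff(actual, f0)
# H(f0) = cross-spectrum / input power spectrum at f0
H = (Y * X.conjugate()) / (X * X.conjugate())

gain = abs(H)                         # recovers the synthesized gain
phase_deg = math.degrees(cmath.phase(H))  # recovers the synthesized phase lag
```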

  9. Optimal design of damping layers in SMA/GFRP laminated hybrid composites

    Science.gov (United States)

    Haghdoust, P.; Cinquemani, S.; Lo Conte, A.; Lecis, N.

    2017-10-01

    This work describes the optimization of the shape profiles of shape memory alloy (SMA) sheets in hybrid layered composite structures, i.e. slender beams or thin plates, designed for the passive attenuation of flexural vibrations. The paper starts with a description of the material and architecture of the investigated hybrid layered composite. An analytical method for evaluating the energy dissipation inside a vibrating cantilever beam is developed. The analytical solution is then followed by a shape-profile optimization of the inserts, using a genetic algorithm to minimize the SMA material usage while maintaining a target level of structural damping. The delamination problem at the SMA/glass fiber reinforced polymer interface is discussed. Finally, the proposed methodology is applied to study the hybridization of a layered wind turbine blade structure with SMA material, in order to increase its passive damping.

  10. Big data analytics for the virtual network topology reconfiguration use case

    OpenAIRE

    Gifre Renom, Lluís; Morales Alcaide, Fernando; Velasco Esteban, Luis Domingo; Ruiz Ramírez, Marc

    2016-01-01

    ABNO's OAM Handler is extended with big data analytics capabilities to anticipate traffic changes in volume and direction. Predicted traffic is used to trigger virtual network topology re-optimization. When the virtual topology needs to be reconfigured, predicted and current traffic matrices are used to find the optimal topology. A heuristic algorithm to adapt current virtual topology to meet both actual demands and expected traffic matrix is proposed. Experimental assessment is carried ou...

  11. Seamless Digital Environment – Plan for Data Analytics Use Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna Helene [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bly, Aaron Douglas [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    The U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program initiated research into what is needed to provide a roadmap or model for nuclear power plants to reference when building an architecture that can support the growing data supply and demand flowing through their networks. The Digital Architecture project's report, Digital Architecture Planning Model (Oxstrand et al., 2016), discusses considerations when building an architecture to support the increasing data needs and demands throughout the plant. Once the plant is able to support the data demands, it still needs to provide the data in an easy, quick, and reliable manner. A common approach is to create a “one stop shop” application that a user can go to for all the data they need. This leads to the need for a Seamless Digital Environment (SDE) to integrate all the “siloed” data. An SDE is the desired perception presented to users by gathering the data from any data source (e.g., legacy applications and work management systems) without effort by the user. The goal for FY16 was to complete a feasibility study on data mining and analytics employing information from computer-based-procedure-enabled technologies in order to develop improved business analytics. The research team collaborated with multiple organizations to identify use cases or scenarios that could be beneficial to investigate in a feasibility study. Many interesting potential use cases were identified throughout the FY16 activity. Unfortunately, due to factors outside the research team’s control, none of the studies was initiated this year. However, the insights gained and the relationships built with both PVNGS and NextAxiom will be valuable in future research. During the 2016 annual Nuclear Information Technology Strategic Leadership (NITSL) group meeting it was identified that it would be very beneficial to the industry to

  12. Phase Transitions in Combinatorial Optimization Problems Basics, Algorithms and Statistical Mechanics

    CERN Document Server

    Hartmann, Alexander K

    2005-01-01

    A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and spread by word-of-mouth, providing all the necessary

  13. Optimal conductive constructal configurations with “parallel design”

    International Nuclear Information System (INIS)

    Eslami, M.

    2016-01-01

    Highlights: • A new parallel design is proposed for conductive cooling of heat generating rectangles. • The geometric features are optimized analytically. • The internal structure morphs as a function of the available conductive material. • Thermal performance is superior to that of previously published, numerically optimized designs. - Abstract: Today, conductive volume-to-point cooling of heat generating bodies is under investigation as an alternative method for thermal management of electronic chipsets with high power density. In this paper, a new simple geometry called the “parallel design” is proposed for effective conductive cooling of rectangular heat generating bodies. This configuration tries to minimize the thermal resistance associated with the temperature drop inside the heat generating volume. The geometric features of the design are all optimized analytically and expressed with simple explicit equations. It is proved that the optimal number of parallel links is equal to the thermal conductivity ratio multiplied by the porosity (or the volume ratio). With the universal aspect ratio of H/L = 2, the total thermal resistance of the present parallel design is lower than that of the recently proposed networks of various shapes optimized with the help of numerical simulations, especially when more conducting material is available.
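The stated closed-form result lends itself to a tiny worked example: the optimal number of parallel links equals the thermal conductivity ratio multiplied by the porosity (the volume fraction of conducting material). The input values below are invented, not taken from the paper.

```python
# Worked example of the abstract's stated relation. Inputs are hypothetical.

k_ratio = 300.0    # conductivity of inserts / conductivity of heat-generating body
porosity = 0.1     # fraction of the volume occupied by conducting material

n_optimal = k_ratio * porosity          # continuous optimum from the relation
n_links = max(1, round(n_optimal))      # practical (integer) number of links
```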

  14. Analytic method study of point-reactor kinetic equation when cold start-up

    International Nuclear Information System (INIS)

    Zhang Fan; Chen Wenzhen; Gui Xuewen

    2008-01-01

    The reactor cold start-up is a process of inserting reactivity by lifting control rods discontinuously. Inserting too much reactivity will cause a short reactor period and may lead to an overpressure accident in the primary loop. It is therefore very important to understand the rule of neutron density variation and to find out the relationships among the control-rod lifting speed and the duration and speed of the neutron density response; this also helps operators avoid a start-up accident. This paper starts with the one-group delayed neutron point-reactor kinetics equations and provides their analytic solution when reactivity is introduced by lifting control rods discontinuously. The analytic expression is validated by comparison with practical data, and it is shown that the analytic solution agrees well with the numerical solution. Using this analytic solution, the relationships between the neutron density response and the control-rod lifting speed and duration are also studied. By comparing the results with those under step-inserted reactivity, useful conclusions are drawn.
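The one-group point-kinetics equations the paper solves analytically can be cross-checked numerically. The sketch below integrates them with RK4 for a step reactivity insertion (the paper treats discontinuous rod lifting); the parameter values are invented but typical, and the expected behavior is the prompt jump to roughly beta/(beta - rho) followed by a slow rise on the stable period.

```python
# One-group point kinetics: dn/dt = (rho - beta)/Lam * n + lam * c,
#                           dc/dt = beta/Lam * n - lam * c.
# Plain RK4 integration with hypothetical, but typical, parameters.
beta = 0.0065      # delayed neutron fraction
lam = 0.08         # delayed-group decay constant, 1/s
Lam = 1.0e-4       # neutron generation time, s
rho = 0.001        # step reactivity (rho < beta: no prompt criticality)

def deriv(n, c):
    dn = (rho - beta) / Lam * n + lam * c
    dc = beta / Lam * n - lam * c
    return dn, dc

n, c = 1.0, beta / (lam * Lam)   # equilibrium precursor level at n = 1
dt, t_end = 1.0e-4, 1.0
for _ in range(int(t_end / dt)):
    k1n, k1c = deriv(n, c)
    k2n, k2c = deriv(n + 0.5 * dt * k1n, c + 0.5 * dt * k1c)
    k3n, k3c = deriv(n + 0.5 * dt * k2n, c + 0.5 * dt * k2c)
    k4n, k4c = deriv(n + dt * k3n, c + dt * k3c)
    n += dt / 6 * (k1n + 2 * k2n + 2 * k3n + k4n)
    c += dt / 6 * (k1c + 2 * k2c + 2 * k3c + k4c)
# Prompt jump estimate: n -> beta / (beta - rho) ~ 1.18, then a slow rise.
```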

  15. System for optimizing activation measurements

    International Nuclear Information System (INIS)

    Antonov, V.A.

    1993-01-01

    Optimization procedures make it possible to perform committed activation investigations, reduce the number of experiments, make them less laborious, and increase their productivity. Separate mathematical functions have been investigated for given optimization conditions, but these enable numerical optimal parameter values to be established only in the particular cases of specific techniques and mathematical computer programs. The known mathematical models take insufficient account of the variety and complexity of real nuclide mixtures, the influence of background radiation, and the wide diversity of activation measurement conditions, while numerical methods for solving the optimization problem fail to reveal the laws governing the variations of the activation parameters and their functional interdependences. An optimization method was previously proposed that was mainly used to estimate the time intervals for activation measurements of a mononuclide or of a binary or ternary nuclide mixture. By forming a mathematical model of the activation processes, however, it becomes possible to extend the number of nuclides in the mixture and to take account of the influence of background radiation and the diversity of the measurement alternatives. The analytical expressions and nomograms obtained can be used to determine the number of measurements, their minimum errors, their sensitivities when estimating the quantity of the tracer nuclide, the permissible quantity of interfering nuclides, the permissible background radiation intensity, and the flux of activating radiation. In the work described herein these investigations are generalized to include spectrally resolved detection of the activation effect in the presence of the tracer and the interfering nuclides. The analytical expressions are combined into a system from which the optimal activation parameters can be found under different given conditions.

  16. Reliability Based Optimization of Structural Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    1987-01-01

    The optimization problem to design structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisfies given requirements or such that the system reliability satisfies a given requirement. For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability-based optimization problem sequentially using quasi-analytical derivatives. Finally...

  17. An optimal generic model for multi-parameters and big data optimizing: a laboratory experimental study

    Science.gov (United States)

    Utama, D. N.; Ani, N.; Iqbal, M. M.

    2018-03-01

    Optimization is a process for finding the parameter or parameters that deliver an optimal value of an objective function. The search for an optimal generic optimization model is a computer science problem that has been pursued by numerous researchers. A generic model is one that can be operated to solve a variety of optimization problems. Using an object-oriented method, such a generic model was constructed. Two optimization methods, simulated annealing and hill climbing, were used in constructing the model and then compared to find the more optimal one. The results showed that both methods gave the same objective-function value, with the hill-climbing-based model consuming the shorter running time.
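The two methods the study compares can be sketched side by side on a toy 1-D objective. The objective, step sizes, cooling schedule, and bounds below are invented for illustration and are not the study's benchmark problems.

```python
# Minimal, self-contained sketch of hill climbing vs. simulated annealing on a
# hypothetical objective with its minimum at x = 2.
import math
import random

def f(x):
    return (x - 2.0) ** 2          # toy objective: minimum at x = 2

def hill_climb(x, step=1.0, tol=1e-6):
    while step > tol:
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            step /= 2.0            # no improving neighbour: refine the step
    return x

def simulated_annealing(x, t0=1.0, cooling=0.995, iters=2000, seed=0):
    rng = random.Random(seed)
    best, t = x, t0
    for _ in range(iters):
        cand = x + rng.gauss(0.0, 0.5)
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand               # accept downhill moves, sometimes uphill
        if f(x) < f(best):
            best = x               # track the best point seen so far
        t *= cooling
    return best

x_hc = hill_climb(0.0)
x_sa = simulated_annealing(0.0)
```

On this convex toy problem both methods reach comparable objective values, mirroring the study's finding; the deterministic hill climber does so with far fewer function evaluations.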

  18. New Techniques for Optimal Treatment Planning for LINAC-based Stereotactic Radiosurgery

    International Nuclear Information System (INIS)

    Suh, Tae Suk

    1992-01-01

    Since LINAC-based stereotactic radiosurgery uses multiple noncoplanar arcs, three-dimensional dose evaluation and many beam parameters, a lengthy computation time is required to optimize even the simplest case by trial and error. The basic approach presented in this paper is to show promising methods using an experimental optimization and an analytic optimization. The purpose of this paper is not to describe the methods in detail, but to briefly introduce research in progress or planned for the near future. More detailed descriptions will appear in forthcoming papers. The experimental optimization is based on two approaches. One is shaping the target volumes through the use of multiple isocenters determined from dose experience and testing. The other is conformal therapy using a beam's eye view technique and field shaping. The analytic approach is to adapt computer-aided design optimization to find optimal irradiation parameters automatically.

  19. Boron doped diamond sensor for sensitive determination of metronidazole: Mechanistic and analytical study by cyclic voltammetry and square wave voltammetry

    International Nuclear Information System (INIS)

    Ammar, Hafedh Belhadj; Brahim, Mabrouk Ben; Abdelhédi, Ridha; Samet, Youssef

    2016-01-01

    The performance of a boron-doped diamond (BDD) electrode for the detection of metronidazole (MTZ), the most important drug of the 5-nitroimidazole group, was demonstrated using cyclic voltammetry (CV) and square wave voltammetry (SWV). A comparative study of the electrochemical response of BDD, glassy carbon and silver electrodes was carried out. The process is pH-dependent. In neutral and alkaline media, one irreversible reduction peak, related to hydroxylamine derivative formation and involving a total of four electrons, was registered. In acidic medium, a prepeak appears, probably related to the adsorption affinity of hydroxylamine at the electrode surface. The BDD electrode showed a more sensitive and reproducible analytical response than the other electrodes. The highest reduction peak current was registered at pH 11. Under optimal conditions, a linear analytical curve was obtained for the MTZ concentration in the range of 0.2–4.2 μmol L−1, with a detection limit of 0.065 μmol L−1. - Highlights: • SWV for the determination of MTZ • Boron-doped diamond as a new electrochemical sensor • Simple and rapid detection of MTZ • Efficiency of BDD for sensitive determination of MTZ

  20. Analytical study of dissipative solitary waves

    Energy Technology Data Exchange (ETDEWEB)

    Dini, Fatemeh [Department of Physics, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of); Emamzadeh, Mehdi Molaie [Department of Physics, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of); Khorasani, Sina [School of Electrical Engineering, Sharif University of Technology, PO Box 11365-363, Tehran (Iran, Islamic Republic of); Bobin, Jean Louis [Universite Pierre et Marie Curie, Paris (France); Amrollahi, Reza [Department of Physics, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of); Sodagar, Majid [School of Electrical Engineering, Sharif University of Technology, PO Box 11365-363, Tehran (Iran, Islamic Republic of); Khoshnegar, Milad [School of Electrical Engineering, Sharif University of Technology, PO Box 11365-363, Tehran (Iran, Islamic Republic of)

    2008-02-15

    In this paper, the analytical solution to a new class of nonlinear solitons is presented with cubic nonlinearity, subject to a dissipation term arising as a result of a first-order derivative with respect to time, in the weakly nonlinear regime. Exact solutions are found using the combination of the perturbation and Green's function methods up to the third order. We present an example and discuss the asymptotic behavior of the Green's function. The dissipative solitary equation is also studied in the phase space in the non-dissipative and dissipative forms. Bounded and unbounded solutions of this equation are characterized, yielding an energy conservation law for non-dissipative waves. Applications of the model include weakly nonlinear solutions of terahertz Josephson plasma waves in layered superconductors and ablative Rayleigh-Taylor instability.

  1. Permanent Magnet Flux-Switching Machine, Optimal Design and Performance Analysis

    Directory of Open Access Journals (Sweden)

    Liviu Emilian Somesan

    2013-01-01

    In this paper an analytical sizing-design procedure for a typical permanent magnet flux-switching machine (PMFSM) with 12 stator poles and 10 rotor poles is presented. An optimal design, based on the Hooke-Jeeves method with maximum torque density as the objective function, is performed. The results were validated via two-dimensional finite element analysis (2D-FEA) applied to the optimized structure. The influence of the permanent magnet (PM) dimensions and type, and of the rotor pole shape, on the machine performance was also studied via 2D-FEA.
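The Hooke-Jeeves pattern search used for the optimal design can be sketched in a few lines. The toy quadratic objective below is invented; the paper's actual objective is the machine's torque density evaluated from the analytical sizing model.

```python
# Minimal Hooke-Jeeves pattern search: exploratory coordinate moves around a
# base point, a pattern (accelerated) move when they improve, step halving
# otherwise. The objective here is a hypothetical 2-D quadratic.

def explore(f, x, step):
    """Try +/- step along each coordinate, keeping any improvement."""
    x = list(x)
    fx = f(x)
    for i in range(len(x)):
        for d in (step, -step):
            trial = list(x)
            trial[i] += d
            ft = f(trial)
            if ft < fx:
                x, fx = trial, ft
                break
    return x, fx

def hooke_jeeves(f, x0, step=1.0, tol=1e-8):
    base = list(x0)
    fbase = f(base)
    while step > tol:
        x, fx = explore(f, base, step)
        if fx < fbase:
            # Pattern move: jump in the direction base -> x, then re-explore.
            pattern = [2 * xi - bi for xi, bi in zip(x, base)]
            xp, fp = explore(f, pattern, step)
            base, fbase = (xp, fp) if fp < fx else (x, fx)
        else:
            step /= 2.0            # no improvement: refine the mesh
    return base, fbase

best, fbest = hooke_jeeves(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                           [0.0, 0.0])
```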

  2. Meta-Analytical Studies in Transport Economics. Methodology and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Brons, M.R.E.

    2006-05-18

    Vast increases in the external costs of transport in the late twentieth century have caused national and international governmental bodies to worry about the sustainability of their transport systems. In this thesis we use meta-analysis as a research method to study various topics in transport economics that are relevant for sustainable transport policymaking. Meta-analysis is a research methodology based on the quantitative summarisation of a body of previously documented empirical evidence. In several fields of economics, meta-analysis has become a well-accepted research tool. Despite the appeal of the meta-analytical approach, there are methodological difficulties that need to be acknowledged. We study a specific methodological problem which is common in meta-analysis in economics, viz., within-study dependence caused by multiple sampling techniques. By means of Monte Carlo analysis we investigate the effect of such dependence on the performance of various multivariate estimators. In the applied part of the thesis we use and develop meta-analytical techniques to study the empirical variation in indicators of the price sensitivity of demand for aviation transport, the price sensitivity of demand for gasoline, the efficiency of urban public transport and the valuation of the external costs of noise from rail transport. We focus on the estimation of mean values for these indicators and on the identification of the impact of conditioning factors.
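The quantitative summarisation at the core of meta-analysis can be sketched as a fixed-effect, inverse-variance pooling of primary-study estimates (for example, price elasticities). The estimates and standard errors below are invented; the thesis additionally models conditioning factors and within-study dependence, which this sketch omits.

```python
# Fixed-effect inverse-variance meta-analysis sketch. Data are hypothetical.
import math

estimates = [0.5, 0.7, 0.6]        # effect sizes reported by primary studies
std_errs = [0.10, 0.20, 0.15]      # their standard errors

weights = [1.0 / se ** 2 for se in std_errs]          # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
pooled_se = 1.0 / math.sqrt(sum(weights))             # SE of the pooled estimate
```

The pooled estimate is pulled toward the most precise study, and its standard error is smaller than that of any single study.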

  3. Analytical Diagnostics of Non-Optimal Use of Pesticides and Health Hazards for Vegetable Pickers

    International Nuclear Information System (INIS)

    Zafar, M.; Mehmood, T.; Baig, I. A.; Saboor, A.; Sadiq, S.; Mahmood, K.

    2016-01-01

    Pesticides are meant to control pests in the field, and up to a certain optimal level of use a typical pesticide enhances the yield of crops and vegetables. Beyond that level, however, amplified use of pesticides results in contamination of the environment (water, soil, and air) and increases the health cost borne by vegetable pickers. The purpose of this study is to estimate the excessive use of pesticides and the economic cost of health hazards for vegetable pickers in district Vehari. Data from 90 respondents were collected and analyzed. The most common health problems identified during the survey were headache, eye irritation, skin infection, cough and shortness of breath. Health cost consists of costs related to precautionary measures, medication, traveling, the opportunity cost of attending persons and productivity loss. The mean health cost of vegetable pickers in the study area was about Rs. 385 per picker per year. A health cost model was used to measure the health cost of vegetable pickers. The regression results showed that pesticides were being applied non-optimally in the study area, i.e., the number of pesticide applications for vegetables (7-31) was substantially higher than the recommended dose. The health cost function was significantly different from zero as indicated by the F-statistic (32.18), and the R² shows that about 70 percent of the variation in health cost is explained by medication accompanied by productivity loss (Rs. 223), precautionary measures (Rs. 134), attending-person cost (Rs. 14) and traveling expenditures (Rs. 16). Hence, strict legislation is required to restrict the availability of hazardous pesticides and to keep vegetable pickers aware of the optimal use of pesticides through appropriate extension services. (author)

  4. Aerothermodynamic shape optimization of hypersonic blunt bodies

    Science.gov (United States)

    Eyi, Sinan; Yumuşak, Mine

    2015-07-01

    The aim of this study is to develop a reliable and efficient design tool that can be used in hypersonic flows. The flow analysis is based on the axisymmetric Euler/Navier-Stokes and finite-rate chemical reaction equations. The equations are coupled simultaneously and solved implicitly using Newton's method. The Jacobian matrix is evaluated analytically. A gradient-based numerical optimization is used. The adjoint method is utilized for sensitivity calculations. The objective of the design is to generate a hypersonic blunt geometry that produces the minimum drag with low aerodynamic heating. Bezier curves are used for geometry parameterization. The performance of the design optimization method is demonstrated for different hypersonic flow conditions.
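The solution strategy described above, Newton's method with an analytically evaluated Jacobian, can be shown stand-alone on a toy 2-equation system (the paper couples Euler/Navier-Stokes and chemistry equations instead; the system below is invented).

```python
# Newton's method with an analytic Jacobian on a hypothetical 2x2 system:
# x^2 + y^2 = 4 and x = y, whose positive root is x = y = sqrt(2).
import math

def residual(x, y):
    return [x * x + y * y - 4.0, x - y]       # F(x, y)

def jacobian(x, y):
    return [[2.0 * x, 2.0 * y], [1.0, -1.0]]  # analytic dF/d(x, y)

x, y = 1.0, 1.0
for _ in range(20):
    f1, f2 = residual(x, y)
    (a, b), (c, d) = jacobian(x, y)
    det = a * d - b * c
    # Solve J * delta = -F by Cramer's rule
    dx = (-f1 * d + b * f2) / det
    dy = (-a * f2 + c * f1) / det
    x, y = x + dx, y + dy

root = math.sqrt(2.0)   # expected solution for both unknowns
```

The analytic Jacobian gives quadratic convergence: a handful of iterations drives the residual to machine precision, which is the payoff the paper exploits at much larger scale.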

  5. Determination of serum albumin, analytical challenges: a French multicenter study.

    Science.gov (United States)

    Rossary, Adrien; Blondé-Cynober, Françoise; Bastard, Jean-Philippe; Beauvieux, Marie-Christine; Beyne, Pascale; Drai, Jocelyne; Lombard, Christine; Anglard, Ingrid; Aussel, Christian; Claeyssens, Sophie; Vasson, Marie-Paule

    2017-06-01

    Among the biological markers of morbidity and mortality, albumin holds a key place in the range of criteria used by the High Authority for Health (HAS) for the assessment of malnutrition and the coding of the information system medicalization program (PMSI). Although the principles of the quantification methods have not changed in recent years, the dispersion of external quality evaluation (EEQ) data shows that standardization using the certified reference material (CRM) 470 is not optimal. The aim of this multicenter study involving 7 sites, conducted by a working group of the French Society of Clinical Biology (SFBC), was to assess whether albuminemia values depend on the analytical system used. The albumin from plasma (n=30) and serum (n=8) pools was quantified by 5 different methods [bromocresol green (VBC) and bromocresol purple (PBC) colorimetry, immunoturbidimetry (IT), immunonephelometry (IN) and capillary electrophoresis (CE)] using 12 analyzers. Bland and Altman's test evaluated the difference between the results obtained by the different methods. For example, a difference as high as 13 g/L was observed for the same sample between methods. VBC overestimated albumin across the range of values tested compared to PBC, and albumin values also differed between the immunoprecipitation methods (IT vs IN), inducing a difference of performance. Thus, albumin results are related to the technique/analyzer tandem used. This variability is usually not taken into account by the clinician. Clinicians and biologists therefore have to be aware of, and to check depending on the method used, the albumin thresholds identified as risk factors for complications related to malnutrition and for PMSI coding.

  6. Diffractive variable beam splitter: optimal design.

    Science.gov (United States)

    Borghi, R; Cincotti, G; Santarsiero, M

    2000-01-01

    The analytical expression of the phase profile of the optimum diffractive beam splitter with an arbitrary power ratio between the two output beams is derived. The phase function is obtained by an analytical optimization procedure such that the diffraction efficiency of the resulting optical element is the highest for an actual device. Comparisons are presented with the efficiency of a diffractive beam splitter specified by a sawtooth phase function and with the pertinent theoretical upper bound for this type of element.

  7. GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2011-01-01

    Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.
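The nonlinear-parameter search can be sketched with a basic particle swarm optimizer minimizing a stand-in objective (a 2-D sphere function playing the role of the image χ²). The swarm size, coefficients, and bounds below are invented and much smaller than a realistic lens-modeling run.

```python
# Basic PSO sketch on a hypothetical objective. Not the paper's advanced PSO.
import random

def objective(p):
    return p[0] ** 2 + p[1] ** 2      # stand-in for the image chi-squared

rng = random.Random(1)
dim, n_particles, iters = 2, 20, 200
w, c1, c2 = 0.7, 1.5, 1.5             # inertia and acceleration coefficients

pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]                     # per-particle best positions
gbest = min(pbest, key=objective)[:]            # global best position
initial_best = objective(gbest)

for _ in range(iters):
    for i in range(n_particles):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                         + c2 * rng.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]
```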

  8. Optimal diversification of the securities portfolio

    Directory of Open Access Journals (Sweden)

    Валентина Михайловна Андриенко

    2016-09-01

    Full Text Available The article deals with problems of the theory and methods of forming an optimal portfolio in financial markets. An analytical review of the methods in their historical development is given. Recommendations on the use of particular methods, depending on specific conditions, are formulated. Both classical and alternative methods are considered. The main attention is paid to the analysis of an investment portfolio of derivative securities in the B/S-market model.
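Among the classical methods such a review covers, the Markowitz mean-variance portfolio has a closed-form solution via Lagrange multipliers. A minimal sketch with invented return and covariance numbers (not market data):

```python
import numpy as np

# Minimum-variance portfolio with a target return (classical Markowitz model),
# solved in closed form from the first-order conditions.
mu = np.array([0.08, 0.12, 0.10])            # expected asset returns
Sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.12, 0.01],
                  [0.04, 0.01, 0.09]])       # return covariance matrix
target = 0.10                                 # required portfolio return

ones = np.ones(3)
Si = np.linalg.inv(Sigma)
a, b, c = ones @ Si @ ones, ones @ Si @ mu, mu @ Si @ mu
# Lagrange multipliers for the constraints w'mu = target and w'1 = 1:
lam = (a * target - b) / (a * c - b**2)
gam = (c - b * target) / (a * c - b**2)
w = Si @ (lam * mu + gam * ones)              # optimal weights

print("weights:", w, "sum:", w.sum(), "return:", w @ mu)
```

By construction the weights sum to one and hit the target return exactly; short positions (negative weights) are allowed in this unconstrained classical form.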

  9. Evaluation of analytical performance based on partial order methodology.

    Science.gov (United States)

    Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin

    2015-01-01

    Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze analytical performance appropriately. Here partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially to simplify the relative comparison of objects on the basis of their data profile (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present study, data as provided by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example. However, the presented scheme is likewise applicable for comparison of analytical methods or simply as a tool for optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data provided, in order to evaluate the analytical performance taking into account all indicators simultaneously and thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which simultaneously are used for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently, (2) a "distance" to the Reference laboratory and (3) a classification due to the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
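The core of the partial-order idea can be sketched directly: a laboratory dominates another only if it is at least as good on every indicator and strictly better on one; otherwise the two are incomparable, a distinction a single linear score hides. The indicator values below are made up for illustration, not taken from any proficiency test.

```python
# Indicators (all "smaller is better" here): |bias from reference|, std dev, |skewness|.
labs = {
    "Lab A": (0.1, 0.5, 0.2),
    "Lab B": (0.3, 0.4, 0.1),
    "Lab C": (0.4, 0.9, 0.6),
    "Lab D": (0.2, 0.6, 0.3),
}

def dominates(p, q):
    """True if profile p is at least as good as q everywhere, strictly better somewhere."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

names = list(labs)
relations = [(i, j) for i in names for j in names
             if i != j and dominates(labs[i], labs[j])]
incomparable = [(i, j) for k, i in enumerate(names) for j in names[k + 1:]
                if not dominates(labs[i], labs[j]) and not dominates(labs[j], labs[i])]
print("dominance:", relations)
print("incomparable pairs:", incomparable)
```

The dominance pairs define the Hasse diagram of the laboratories; incomparable pairs are exactly the cases where trading off one indicator against another would require an (arbitrary) weighting.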

  10. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.

    Science.gov (United States)

    Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita

    2016-10-11

    We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. The objective of this study was to report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix.

  11. Site study plan for geochemical analytical requirements and methodologies: Revision 1

    International Nuclear Information System (INIS)

    1987-12-01

    This site study plan documents the analytical methodologies and procedures that will be used to analyze geochemically the rock and fluid samples collected during Site Characterization. Information relating to the quality aspects of these analyses is also provided, where available. Most of the proposed analytical procedures have been used previously on the program and are sufficiently sensitive to yield high-quality analyses. In a few cases improvements in analytical methodology (e.g., greater sensitivity, fewer interferences) are desired. Suggested improvements to these methodologies are discussed. In most cases these method-development activities have already been initiated. The primary source of rock and fluid samples for geochemical analysis during Site Characterization will be the drilling program, as described in various SRP Site Study Plans. The Salt Repository Project (SRP) Networks specify the schedule under which the program will operate. Drilling will not begin until after site ground water baseline conditions have been established. The Technical Field Services Contractor (TFSC) is responsible for conducting the field program of drilling and testing. Samples and data will be handled and reported in accordance with established SRP procedures. A quality assurance program will be utilized to assure that activities affecting quality are performed correctly and that the appropriate documentation is maintained. 28 refs., 9 figs., 14 tabs

  12. Discrete-continuous analysis of optimal equipment replacement

    OpenAIRE

    YATSENKO, Yuri; HRITONENKO, Natali

    2008-01-01

    In Operations Research, the equipment replacement process is usually modeled in discrete time. The optimal replacement strategies are found from discrete (or integer) programming problems, well known for their analytic and computational complexity. An alternative approach is represented by continuous-time vintage capital models that explicitly involve the equipment lifetime and are described by nonlinear integral equations. Then the optimal replacement is determined via the opt...

  13. Bifurcations in the optimal elastic foundation for a buckling column

    International Nuclear Information System (INIS)

    Rayneau-Kirkhope, Daniel; Farr, Robert; Ding, K.; Mao, Yong

    2010-01-01

    We investigate the buckling under compression of a slender beam with a distributed lateral elastic support, for which there is an associated cost. For a given cost, we study the optimal choice of support to protect against Euler buckling. We show that with only weak lateral support, the optimum distribution is a delta-function at the centre of the beam. When more support is allowed, we find numerically that the optimal distribution undergoes a series of bifurcations. We obtain analytical expressions for the buckling load around the first bifurcation point and corresponding expansions for the optimal position of support. Our theoretical predictions, including the critical exponent of the bifurcation, are confirmed by computer simulations.

  14. Bifurcations in the optimal elastic foundation for a buckling column

    Energy Technology Data Exchange (ETDEWEB)

    Rayneau-Kirkhope, Daniel, E-mail: ppxdr@nottingham.ac.u [School of Physics and Astronomy, University of Nottingham, Nottingham, NG7 2RD (United Kingdom); Farr, Robert [Unilever R and D, Olivier van Noortlaan 120, AT3133, Vlaardingen (Netherlands); London Institute for Mathematical Sciences, 22 South Audley Street, Mayfair, London (United Kingdom); Ding, K. [Department of Physics, Fudan University, Shanghai, 200433 (China); Mao, Yong [School of Physics and Astronomy, University of Nottingham, Nottingham, NG7 2RD (United Kingdom)

    2010-12-01

    We investigate the buckling under compression of a slender beam with a distributed lateral elastic support, for which there is an associated cost. For a given cost, we study the optimal choice of support to protect against Euler buckling. We show that with only weak lateral support, the optimum distribution is a delta-function at the centre of the beam. When more support is allowed, we find numerically that the optimal distribution undergoes a series of bifurcations. We obtain analytical expressions for the buckling load around the first bifurcation point and corresponding expansions for the optimal position of support. Our theoretical predictions, including the critical exponent of the bifurcation, are confirmed by computer simulations.
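The delta-function and bifurcation results above are specific to the paper, but the underlying eigenproblem is easy to set up numerically: discretize EI w'''' + P w'' + k(x) w = 0 for a pinned-pinned beam and read off the smallest generalized eigenvalue as the buckling load. The finite-difference sketch below (nondimensional, EI = L = 1, no foundation) is a sanity check against the Euler load, not the authors' method.

```python
import numpy as np

# Buckling load from the discretized eigenproblem (EI*D4 + K) w = P * (-D2) w.
# With k(x) = 0 this recovers the Euler load pi^2 * EI / L^2.
n = 200                                   # interior grid points
h = 1.0 / (n + 1)
D2 = (np.diag(-2.0 * np.ones(n)) +
      np.diag(np.ones(n - 1), 1) +
      np.diag(np.ones(n - 1), -1)) / h**2
D4 = D2 @ D2                              # squaring D2 builds in pinned-pinned ends
k = np.zeros(n)                           # foundation stiffness profile k(x) (none here)
A = D4 + np.diag(k)                       # EI = 1
# Generalized eigenproblem: the smallest eigenvalue is the buckling load.
P = np.linalg.eigvals(np.linalg.solve(-D2, A)).real
print(f"lowest buckling load: {P.min():.4f}  (Euler: {np.pi**2:.4f})")
```

Replacing `k` with a nonzero profile (e.g. support concentrated near the mid-span) raises `P.min()`, which is exactly the trade-off the paper optimizes under a cost constraint.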

  15. MonetDB/DataCell: Online Analytics in a Streaming Column-Store

    NARCIS (Netherlands)

    E. Liarou (Erietta); S. Idreos (Stratos); S. Manegold (Stefan); M.L. Kersten (Martin)

    2012-01-01

    In DataCell, we design streaming functionalities in a modern relational database kernel which targets big data analytics. This includes exploitation of both its storage/execution engine and its optimizer infrastructure. We investigate the opportunities and challenges that arise with

  16. MonetDB/DataCell: online analytics in a streaming column-store

    NARCIS (Netherlands)

    Liarou, E.; Idreos, S.; Manegold, S.; Kersten, M.

    2012-01-01

    In DataCell, we design streaming functionalities in a modern relational database kernel which targets big data analytics. This includes exploitation of both its storage/execution engine and its optimizer infrastructure. We investigate the opportunities and challenges that arise with such a direction

  17. Optimisation (sampling strategies and analytical procedures) for site specific environment monitoring at the areas of uranium production legacy sites in Ukraine - 59045

    International Nuclear Information System (INIS)

    Voitsekhovych, Oleg V.; Lavrova, Tatiana V.; Kostezh, Alexander B.

    2012-01-01

    There are many sites in the world where the environment is still affected by contamination related to past uranium production. The authors' experience shows that a lack of site characterization data and incomplete or unreliable environment monitoring studies can significantly limit the quality of safety assessment procedures and the priority-action analyses needed for remediation planning. During recent decades the analytical laboratories of many enterprises, currently responsible for establishing site-specific environment monitoring programs, have significantly improved their technical sampling and analytical capacities. However, lack of experience in optimal site-specific sampling strategy planning, and insufficient experience in applying the required analytical techniques, such as modern alpha-beta radiometers, gamma and alpha spectrometry and liquid-scintillation analytical methods for the determination of U-Th series radionuclides in the environment, does not allow these laboratories to develop and conduct efficient monitoring programs as a basis for further safety assessment in decision-making procedures. This paper presents some conclusions gained from the experience of establishing monitoring programs in Ukraine and proposes practical steps to optimize the sampling strategy planning and analytical procedures to be applied in areas requiring safety assessment and justification for potential remediation and safe management. (authors)

  18. Virtual network topology reconfiguration based on big data analytics for traffic prediction

    OpenAIRE

    Morales Alcaide, Fernando; Ruiz Ramírez, Marc; Velasco Esteban, Luis Domingo

    2016-01-01

    Big data analytics is applied for IP traffic prediction. When the virtual topology needs to be reconfigured, predicted and current traffic matrices are used to find the optimal topology. Exhaustive simulation results reveal large benefits. Peer Reviewed

  19. Advanced commercial Tokamak optimization studies

    International Nuclear Information System (INIS)

    Whitley, R.H.; Berwald, D.H.; Gordon, J.D.

    1985-01-01

    Our recent studies have concentrated on developing optimal high beta (bean-shaped plasma) commercial tokamak configurations using TRW's Tokamak Reactor Systems Code (TRSC) with special emphasis on lower net electric power reactors that are more easily deployable. A wide range of issues were investigated in the search for the most economic configuration: fusion power, reactor size, wall load, magnet type, inboard blanket and shield thickness, plasma aspect ratio, and operational β value. The costs and configurations of both steady-state and pulsed reactors were also investigated. Optimal small and large reactor concepts were developed and compared by studying the cost of electricity from single units and from multiplexed units. Multiplexed units appear to have advantages because they share some plant equipment and have lower initial capital investment as compared to larger single units

  20. Analytical and biological studies of kanji and extracts of its ingredient, daucus carota L

    International Nuclear Information System (INIS)

    Latif, A.; Hussain, K.; Bukhari, N.; Karim, S.; Hussain, A.; Khurshid, F.

    2013-01-01

    A fermented beverage, Kanji, prepared from roots of Daucus carota L. subsp. sativus (Hoffm.) Arcang. var. vavilovii Mazk. (Apiaceae), despite its long history of use, has not been investigated in analytical studies or for biological activities. Therefore, the present study aimed to investigate different types of Kanji samples and various types of extracts/fractions of the root of the plant in a number of analytical studies and in vitro antioxidant assays. The Kanji sample with the better analytical and biological profile, Lab-made Kanji, was further investigated in preliminary clinical studies. The analytical studies indicated that Lab-made Kanji had comparatively higher contents of phytochemicals than the commercial Kanji samples and the different types of extracts and fractions (P < 0.05). All the Kanji samples and the aqueous and ethanol extracts of fresh roots exhibited comparable antioxidant activities in the DPPH assay (52.20 - 54.19%), higher than that of the methanol extract (48.78%) of dried roots. The antiradical powers (1/EC50) of Lab-made Kanji and the aqueous extract were found to be higher than those of the ethanol and methanol extracts. In the beta-carotene linoleate assay, the Kanji samples showed higher activity than the methanol extract, comparable to that of vitamin E and butylated hydroxyanisole (BHA) (P < 0.05). A preliminary clinical evaluation indicated that Kanji has no harmful effect on blood components, liver function or serum lipid profile. The results of the present study indicate that Kanji is an effective antioxidant beverage. (author)

  1. Application of nuclear analytical methods to heavy metal pollution studies of estuaries

    International Nuclear Information System (INIS)

    Anders, B.; Junge, W.; Knoth, J.; Michaelis, W.; Pepelnik, R.; Schwenke, H.

    1984-01-01

    Important objectives of heavy metal pollution studies of estuaries are the understanding of the transport phenomena in these complex ecosystems and the discovery of the pollution history and the geochemical background. Such studies require high precision and accuracy of the analytical methods. Moreover, pronounced spatial heterogeneities and temporal variabilities that are typical for estuaries necessitate the analysis of a great number of samples if relevant results are to be obtained. Both requirements can economically be fulfilled by a proper combination of analytical methods. Applications of energy-dispersive X-ray fluorescence analysis with total reflection of the exciting beam at the sample support and of neutron activation analysis with both thermal and fast neutrons are reported in the light of pollution studies performed in the Lower Elbe River. (orig.)

  2. Optimal simulation of a perfect entangler

    International Nuclear Information System (INIS)

    Yu Nengkun; Duan Runyao; Ying Mingsheng

    2010-01-01

    A 2 x 2 unitary operation is called a perfect entangler if it can generate a maximally entangled state from some unentangled input. We study the following question: How many runs of a given two-qubit entangling unitary operation are required to simulate some perfect entangler with one-qubit unitary operations as free resources? We completely solve this problem by presenting an analytical formula for the optimal number of runs of the entangling operation. Our result reveals an entanglement strength of two-qubit unitary operations.
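The definition above is easy to verify numerically for the textbook case: CNOT is a perfect entangler because it maps the product state |+⟩|0⟩ to a Bell state, whose Schmidt coefficients are both 1/√2. This is a minimal check of the definition, not the paper's formula for the optimal number of runs.

```python
import numpy as np

# CNOT in the computational basis {|00>, |01>, |10>, |11>}.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
zero = np.array([1.0, 0.0])
psi = CNOT @ np.kron(plus, zero)          # output state on two qubits

# Maximal entanglement shows up as equal Schmidt coefficients 1/sqrt(2).
schmidt = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
print("Schmidt coefficients:", schmidt)
```

A non-entangling gate (e.g. the identity) applied to the same input would instead give Schmidt coefficients (1, 0).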

  3. Geometry Optimization Approaches of Inductively Coupled Printed Spiral Coils for Remote Powering of Implantable Biomedical Sensors

    Directory of Open Access Journals (Sweden)

    Sondos Mehri

    2016-01-01

    Full Text Available Electronic biomedical implantable sensors need power to perform. Among the main reported approaches, the inductive link is the most commonly used method for remote powering of such devices. Power efficiency is the most important characteristic to consider when designing inductive links to transfer energy to implantable biomedical sensors. The maximum power efficiency is obtained for maximum coupling and quality factors of the coils and is generally limited because the coupling between the inductors is usually very small. This paper deals with geometry optimization of inductively coupled printed spiral coils for powering a given implantable sensor system. For this aim, Iterative Procedure (IP) and Genetic Algorithm (GA) analytic-based optimization approaches are proposed. Both of these approaches implement simple mathematical models that approximate the coil parameters and the link efficiency values. Using numerical simulations based on the Finite Element Method (FEM) and with experimental validation, the proposed analytic approaches are shown to give accurate performance results in comparison with a reference design case. The analytical GA and IP optimization methods are also compared to a purely numerical optimization approach based on the Finite Element Method (GA-FEM). Numerical and experimental validations confirmed the accuracy and the effectiveness of the analytical optimization approaches to design the optimal coil geometries for the best efficiency values.
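A common analytical figure of merit for such links is the optimal-load efficiency in terms of the coupling coefficient k and the coil quality factors Q1, Q2 — a standard two-coil result, not necessarily the exact model used in the paper, and the numbers below are illustrative:

```python
import numpy as np

def link_efficiency(k, Q1, Q2):
    """Optimal-load efficiency of a two-coil inductive link."""
    fom = k**2 * Q1 * Q2                       # figure of merit k^2 * Q1 * Q2
    return fom / (1.0 + np.sqrt(1.0 + fom))**2

# Efficiency grows with coupling: weakly coupled implant coils pay dearly.
for k in (0.01, 0.05, 0.2):
    print(f"k = {k:4.2f}  ->  efficiency = {link_efficiency(k, Q1=100, Q2=40):.3f}")
```

This is why the geometry optimization targets k and Q simultaneously: at implant-realistic couplings of a few percent, small gains in k²Q1Q2 translate into large efficiency gains.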

  4. Analytical characterization and optimization in the determination of trihalomethanes on drinking water by purge and trap coupled to a gas chromatography

    International Nuclear Information System (INIS)

    Costa Junior, Nelson Vicente da

    2010-01-01

    This work presents an analytical methodology developed and optimized to determine trihalomethane (THM) levels in drinking water samples using purge and trap coupled to gas chromatography (GC-PT). THMs are byproducts of water chlorination; under Brazilian law these compounds are limited to a maximum of 100 μg/L, since they are suspected human carcinogens based on studies in laboratory animals. The purge-and-trap technique efficiently extracts these compounds from water, and the gas chromatograph separates the THMs. The GC uses a low-polarity column and an electron capture detector, which is selective and highly sensitive for these compounds. The methodology was validated in terms of linearity, selectivity, accuracy, precision, quantification limit, detection limit and robustness. The detection limit was less than 0.5 μg/L. The accuracy and precision were adequate for trace-level analysis. The drinking water samples were collected in the city of Suzano-SP, which belongs to the 'Alto do Tiete' basin; the Tiete river lies in this region, with predominant vegetation. The THM found in drinking water at the highest concentrations was chloroform, with values ranging from 15.9 to 111.0 μg/L. (author)
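The validation quantities listed above follow standard formulas; as an illustration (with invented calibration data, not the paper's), the detection and quantification limits can be estimated from the residual scatter of a calibration line via LOD = 3.3·s/slope and LOQ = 10·s/slope:

```python
import numpy as np

# Made-up calibration standards and detector responses for a single THM.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])       # ug/L standards
resp = np.array([0.52, 1.03, 2.1, 5.2, 10.4, 20.9])     # detector response

slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
s = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))     # residual std dev

lod = 3.3 * s / slope       # limit of detection
loq = 10.0 * s / slope      # limit of quantification
print(f"slope = {slope:.3f}, LOD = {lod:.2f} ug/L, LOQ = {loq:.2f} ug/L")
```

With real data the same computation is repeated per analyte; the paper's reported detection limit below 0.5 μg/L would come out of exactly this kind of calculation.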

  5. A process insight repository supporting process optimization

    OpenAIRE

    Vetlugin, Andrey

    2012-01-01

    Existing solutions for analysis and optimization of manufacturing processes, such as online analytical processing or statistical calculations, have shortcomings that limit continuous process improvement. In particular, they lack means of storing and integrating the results of analysis. As a result, valuable information that could be used for process optimization is used only once and then discarded. The goal of the Advanced Manufacturing Analytics (AdMA) research project is to design an integrate...

  6. Optimization of a coaxial electron cyclotron resonance plasma thruster with an analytical model

    Energy Technology Data Exchange (ETDEWEB)

    Cannat, F., E-mail: felix.cannat@onera.fr, E-mail: felix.cannat@gmail.com; Lafleur, T. [Physics and Instrumentation Department, Onera -The French Aerospace Lab, Palaiseau, Cedex 91123 (France); Laboratoire de Physique des Plasmas, CNRS, Sorbonne Universites, UPMC Univ Paris 06, Univ Paris-Sud, Ecole Polytechnique, 91128 Palaiseau (France); Jarrige, J.; Elias, P.-Q.; Packan, D. [Physics and Instrumentation Department, Onera -The French Aerospace Lab, Palaiseau, Cedex 91123 (France); Chabert, P. [Laboratoire de Physique des Plasmas, CNRS, Sorbonne Universites, UPMC Univ Paris 06, Univ Paris-Sud, Ecole Polytechnique, 91128 Palaiseau (France)

    2015-05-15

    A new cathodeless plasma thruster currently under development at Onera is presented and characterized experimentally and analytically. The coaxial thruster consists of a microwave antenna immersed in a magnetic field, which allows electron heating via cyclotron resonance. The magnetic field diverges at the thruster exit and forms a nozzle that accelerates the quasi-neutral plasma to generate a thrust. Different thruster configurations are tested, and in particular, the influence of the source diameter on the thruster performance is investigated. At microwave powers of about 30 W and a xenon flow rate of 0.1 mg/s (1 SCCM), a mass utilization of 60% and a thrust of 1 mN are estimated based on angular electrostatic probe measurements performed downstream of the thruster in the exhaust plume. Results are found to be in fair agreement with a recent analytical helicon thruster model that has been adapted for the coaxial geometry used here.
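The reported operating point implies some back-of-the-envelope performance figures: effective exhaust velocity follows from thrust over total mass flow, and specific impulse from dividing by g0 (the paper itself reports thrust and mass utilization, not Isp).

```python
# Figures implied by the reported ~1 mN thrust at 0.1 mg/s of xenon.
thrust = 1e-3          # N
mdot = 0.1e-6          # kg/s
g0 = 9.80665           # m/s^2

v_eff = thrust / mdot          # effective exhaust velocity, m/s
isp = v_eff / g0               # specific impulse, s
print(f"v_eff = {v_eff / 1000:.1f} km/s, Isp = {isp:.0f} s")
```

An effective exhaust velocity of 10 km/s (Isp ≈ 1020 s) is in the range typical of electric propulsion, consistent with the magnetic-nozzle acceleration mechanism described.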

  7. Industrial and environmental applications of nuclear analytical techniques. Report of a workshop

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-11-01

    The IAEA has a programme on the utilisation of nuclear analytical techniques (NATs), in particular for industrial and environmental applications. A major purpose is to help the developing Member States apply their analytical capabilities optimally for socio-economic progress and development. A large number of institutions in Europe, Africa, Latin America and Asia have established X ray fluorescence (XRF) and gamma ray measurement techniques, and facilities for neutron activation analysis (NAA) have been initiated in institutions in these regions. Moreover, there is a growing interest among many institutes in applying more advanced analytical techniques, such as particle induced X ray emission (PIXE) and microanalytical techniques based on X ray emission induced by conventional sources or synchrotron radiation, to the analysis of environmental and biological materials and industrial products. In order to define new areas of application of NATs and to extend the range of these techniques, a number of initiatives have recently been taken. These include a workshop on industrial and environmental applications of nuclear analytical techniques, organized by the IAEA in Vienna, 7-11 September 1998. The main objectives of the workshop were as follows: (1) to review recent applications of NATs in industrial and environmental studies; (2) to identify emerging trends in methodologies and applications of NATs; (3) to demonstrate analytical capabilities of selected NATs. The following topics were reviewed during the workshop: (1) XRF and accelerator based analytical techniques; (2) portable XRF systems and their applications in industry, mineral prospecting and processing; (3) portable gamma ray spectrometers; and (4) NAA and its applications in industry and environmental studies. Micro-XRF and micro-PIXE methods and their applications in the above fields were also discussed, including aspects of synchrotron radiation induced X ray emission.

  8. Industrial and environmental applications of nuclear analytical techniques. Report of a workshop

    International Nuclear Information System (INIS)

    1999-11-01

    The IAEA has a programme on the utilisation of nuclear analytical techniques (NATs), in particular for industrial and environmental applications. A major purpose is to help the developing Member States apply their analytical capabilities optimally for socio-economic progress and development. A large number of institutions in Europe, Africa, Latin America and Asia have established X ray fluorescence (XRF) and gamma ray measurement techniques, and facilities for neutron activation analysis (NAA) have been initiated in institutions in these regions. Moreover, there is a growing interest among many institutes in applying more advanced analytical techniques, such as particle induced X ray emission (PIXE) and microanalytical techniques based on X ray emission induced by conventional sources or synchrotron radiation, to the analysis of environmental and biological materials and industrial products. In order to define new areas of application of NATs and to extend the range of these techniques, a number of initiatives have recently been taken. These include a workshop on industrial and environmental applications of nuclear analytical techniques, organized by the IAEA in Vienna, 7-11 September 1998. The main objectives of the workshop were as follows: (1) to review recent applications of NATs in industrial and environmental studies; (2) to identify emerging trends in methodologies and applications of NATs; (3) to demonstrate analytical capabilities of selected NATs. The following topics were reviewed during the workshop: (1) XRF and accelerator based analytical techniques; (2) portable XRF systems and their applications in industry, mineral prospecting and processing; (3) portable gamma ray spectrometers; and (4) NAA and its applications in industry and environmental studies. Micro-XRF and micro-PIXE methods and their applications in the above fields were also discussed, including aspects of synchrotron radiation induced X ray emission.

  9. A simple stationary semi-analytical wake model

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.

    We present an idealized simple, but fast, semi-analytical algorithm for computation of stationary wind farm wind fields with a possible potential within a multi-fidelity strategy for wind farm topology optimization. Basically, the model considers wakes as linear perturbations on the ambient non-uniform mean wind field, although the modelling of the individual stationary wake flow fields includes non-linear terms. The simulation of the individual wake contributions is based on an analytical solution of the thin shear layer approximation of the NS equations. The wake flow fields are assumed … With each of these approaches, a parabolic system is described, which is initiated by first considering the most upwind located turbines and subsequently successively solved in the downstream direction. Algorithms for the resulting wind farm flow fields are proposed, and it is shown that in the limit …
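The upwind-to-downwind marching idea can be sketched with a crude stand-in wake model: sort the turbines by upwind position, superpose analytical wake deficits linearly on the ambient wind, and evaluate each rotor's inflow from the machines upstream of it. The Jensen-type top-hat deficit and all numbers below are illustrative substitutes for the thin-shear-layer solution used in the paper.

```python
import numpy as np

U_inf, D, k_wake, Ct = 8.0, 80.0, 0.05, 0.8   # ambient wind, rotor diameter,
                                               # wake growth rate, thrust coeff

def deficit(x_down, r_off):
    """Fractional velocity deficit at distance x_down behind a rotor, offset r_off."""
    if x_down <= 0:
        return 0.0
    rw = D / 2 + k_wake * x_down               # wake radius grows linearly
    if abs(r_off) > rw:
        return 0.0                             # outside the top-hat wake
    return (1 - np.sqrt(1 - Ct)) * (D / (D + 2 * k_wake * x_down))**2

turbines = [(0.0, 0.0), (400.0, 0.0), (800.0, 0.0)]   # (x, y): an aligned row
turbines.sort()                                        # most upwind first
U = []
for i, (xi, yi) in enumerate(turbines):
    # Linear superposition of deficits from all upwind machines.
    d = sum(deficit(xi - xj, yi - yj) for xj, yj in turbines[:i])
    U.append(U_inf * (1 - d))
print("rotor wind speeds:", [f"{u:.2f}" for u in U])
```

Because each rotor only depends on machines upstream, a single sweep resolves the whole farm, which is what makes the parabolic formulation fast enough for topology optimization loops.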

  10. The Skinny on Big Data in Education: Learning Analytics Simplified

    Science.gov (United States)

    Reyes, Jacqueleen A.

    2015-01-01

    This paper examines the current state of learning analytics (LA), its stakeholders and the benefits and challenges these stakeholders face. LA is a field of research that involves the gathering, analyzing and reporting of data related to learners and their environments with the purpose of optimizing the learning experience. Stakeholders in LA are…

  11. Optimal Power Flow Pursuit

    Energy Technology Data Exchange (ETDEWEB)

    Dall' Anese, Emiliano; Simonetto, Andrea

    2018-03-01

    This paper considers distribution networks featuring inverter-interfaced distributed energy resources, and develops distributed feedback controllers that continuously drive the inverter output powers to solutions of AC optimal power flow (OPF) problems. Particularly, the controllers update the power setpoints based on voltage measurements as well as given (time-varying) OPF targets, and entail elementary operations implementable on low-cost microcontrollers that accompany the power-electronics interfaces of gateways and inverters. The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. Convergence and OPF-target tracking capabilities of the controllers are analytically established. Overall, the proposed method bypasses traditional hierarchical setups where feedback control and optimization operate at distinct time scales, and enables real-time optimization of distribution systems.
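The feedback-optimization idea — one cheap update per measurement instead of a full OPF solve per time step — can be caricatured with a scalar projected-gradient controller tracking a drifting optimum. The quadratic cost, sinusoidal target and gain below are stand-ins, not the paper's AC model.

```python
import numpy as np

alpha = 0.3                      # controller gain (step size)
p = 0.0                          # inverter setpoint
track_err = []
for t in range(200):
    target = np.sin(0.05 * t)             # time-varying optimum p*(t)
    grad = 2.0 * (p - target)             # gradient of (p - target)^2, "measured"
    p = np.clip(p - alpha * grad, -1, 1)  # projected step (capability limits)
    track_err.append(abs(p - target))
print(f"final tracking error: {track_err[-1]:.3f}")
```

The controller never solves the optimization to convergence, yet after a short transient the setpoint stays within a bounded distance of the moving optimum — the scalar analogue of the tracking guarantees established in the paper.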

  12. A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance

    Science.gov (United States)

    Woolley, Ryan C.

    2014-01-01

    The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
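The staging trade at the heart of such a model can be sketched with the rocket equation: allocate the total delta-v between two stages and pick the split that minimizes lift-off mass. The Isp values, structural fractions, payload and delta-v below are illustrative assumptions, not the paper's numbers.

```python
import numpy as np

g0, payload = 9.80665, 30.0          # m/s^2, kg of returned payload
isp = (290.0, 300.0)                 # stage specific impulses, s
eps = (0.12, 0.15)                   # structural mass fractions
dv_total = 5600.0                    # m/s to orbit, including loss estimates

def liftoff_mass(dv1):
    """Total initial mass for a given first-stage delta-v allocation."""
    m = payload
    # Build the stack from the top down: upper stage first, then first stage.
    for dv, i, e in [(dv_total - dv1, isp[1], eps[1]), (dv1, isp[0], eps[0])]:
        r = np.exp(dv / (g0 * i))            # mass ratio from the rocket equation
        if r * e >= 1:                        # stage cannot achieve this delta-v
            return np.inf
        m = m * r * (1 - e) / (1 - r * e)    # add stage propellant + structure
    return m

splits = np.linspace(500, dv_total - 500, 200)
masses = np.array([liftoff_mass(dv) for dv in splits])
best = splits[np.argmin(masses)]
print(f"best first-stage dv: {best:.0f} m/s, lift-off mass: {masses.min():.0f} kg")
```

The minimum sits at an interior split: lopsided allocations force one stage to carry an exponentially growing propellant load, which is exactly the balance the paper's optimized staging captures.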

  13. Social Learning Analytics in Higher Education. An experience at the Primary Education stage

    Directory of Open Access Journals (Sweden)

    Jose Javier Díaz-Lázaro

    2017-07-01

    Full Text Available The concept of Learning Analytics, as we understand it today, is relatively new, but the practice of evaluating user behavior is not. For years, technological development, along with other educational factors, has encouraged, developed and facilitated this practice as a way of providing a personalized, quality experience to students. The main goal of this study, carried out in the Primary Education Degree of the University of Murcia, was to research, from the perspective of Social Learning Analytics, how students learn and collaborate in online environments, specifically through their use of social media. With the idea of improving and optimizing future teaching experiences, a pilot study was conducted using a weblog, Twitter and Facebook to work with different topics of the subject. The method used in this research was participant observation, and the analysis performed was both quantitative, based mainly on the data gathered from the learning analytics, and qualitative (analyzing the content of students' comments). Results show that there was greater interaction on Facebook than on the weblogs, where students interacted to deal with aspects related to the learning process and the topic of the subject. This exchange of information grew during the development of the experience. In addition, the learning analytics show that there is a relationship between group members and their interaction and behavior in the networks.

  14. Analytic modeling of the feedback stabilization of resistive wall modes

    International Nuclear Information System (INIS)

    Pustovitov, Vladimir D.

    2003-01-01

    Feedback suppression of resistive wall modes (RWM) is studied analytically using a model based on a standard cylindrical approximation. Considered here are the optimal choice of the input signal for the feedback, effects related to the geometry of the feedback active coils, and RWM suppression in a configuration with an ITER-like double wall. The widespread opinion that feedback with poloidal sensors is better than feedback with radial sensors is discussed. It is shown that for an ideal feedback system the best input signal would be a combination of radial and poloidal perturbations measured inside the vessel. (author)

  15. An analytic solution for the enrichment of uranium hexafluoride in long countercurrent centrifuges

    International Nuclear Information System (INIS)

    Raetz, E.

    1977-01-01

    The paper describes an analytic solution for the enrichment and the separative power of long countercurrent centrifuges. Equations for optimal operating parameters, such as the feed rate and the feed input height, are derived and solved. (orig.) [de

  16. An analytical method for defining the pump's power optimum of a water-to-water heat pump heating system using COP

    Directory of Open Access Journals (Sweden)

    Nyers Jozsef

    2017-01-01

    Full Text Available This paper analyzes the energy efficiency of the heat pump and of the complete heat pump heating system. Essentially, the maxima of the coefficient of performance of the heat pump and of the heat pump heating system are investigated and determined by applying a new analytical optimization procedure. The analyzed physical system consists of the water-to-water heat pump and the circulation and well pumps. In the analytical optimization procedure the "first derivative equal to zero" method is applied. The objective function is the coefficient of performance of the heat pump and of the heat pump heating system. Using the analytical optimization procedure and the objective function, the local and total energy optimum conditions are defined with respect to the mass flow rates of hot and cold water, i.e. the power of the circulation or well pump.
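The "first derivative equal to zero" step can be illustrated numerically. The COP expression below is a stand-in toy model (delivered heat improving with flow, pump power growing with the cube of flow), not the paper's objective function; bisection then finds the flow rate where the hand-derived derivative changes sign.

```python
def cop(m):
    """Illustrative COP model (assumed, not the paper's): delivered heat
    improves with water mass flow m, while pump power grows like m**3."""
    return 10.0 * (1.0 - 0.2 / m) / (2.5 + 0.05 * m**3)

def dcop_numerator(m):
    """Hand-derived numerator of dCOP/dm for the model above.
    The denominator of the quotient rule is strictly positive,
    so the sign of this expression is the sign of dCOP/dm."""
    return 5.0 / m**2 + 0.4 * m - 1.5 * m**2

def solve_optimum(lo=1.0, hi=2.0, tol=1e-10):
    """Bisection on dCOP/dm = 0: the 'first derivative equal to zero' step."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dcop_numerator(mid) > 0.0:   # COP still rising: move lower bound up
            lo = mid
        else:                           # COP falling: move upper bound down
            hi = mid
    return 0.5 * (lo + hi)

m_opt = solve_optimum()                 # flow rate maximizing the toy COP
```

In the paper this step is carried out analytically for the full heat pump model; the sketch only shows the shape of the optimality condition.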

  17. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    Full Text Available In this paper, the behavioral construct of suitability is used to develop a multicriteria decision-making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process is used to model the suitability considerations with a view to obtaining a suitability performance score for each asset. A fuzzy multiple-criteria decision-making method is used to obtain the financial quality score of each asset based upon the investor's ratings on the financial criteria. Two optimization models are developed for optimal asset allocation that consider financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India, to demonstrate the effectiveness of the proposed methodology.
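The AHP step of such a framework, turning pairwise suitability judgments into priority weights, can be sketched as follows. The 3x3 comparison matrix is an assumed example, and the row geometric mean is used as the standard approximation to the principal eigenvector.

```python
import math

def ahp_weights(M):
    """Priority weights from a pairwise comparison matrix via the
    row geometric-mean approximation to the principal eigenvector."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(M, w, ri={3: 0.58, 4: 0.90, 5: 1.12}):
    """Saaty consistency ratio; CR < 0.1 is conventionally acceptable."""
    n = len(M)
    # lambda_max estimate: average of (M w)_i / w_i over all rows
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / ri[n]

# Illustrative 3x3 suitability comparison (assumed judgments on Saaty's scale)
M = [[1,     3,     5],
     [1 / 3, 1,     3],
     [1 / 5, 1 / 3, 1]]
w = ahp_weights(M)
```

The resulting weights can then be combined with financial quality scores inside the optimization models.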

  18. On the (non-)optimality of Michell structures

    DEFF Research Database (Denmark)

    Sigmund, Ole; Aage, Niels; Andreassen, Erik

    2016-01-01

    Optimal analytical Michell frame structures have been extensively used as benchmark examples in topology optimization, including truss, frame, homogenization, density and level-set based approaches. However, as we will point out, partly the interpretation of Michell's structural continua as discrete frame structures is not accurate and partly, it turns out that limiting structural topology to frame-like structures is a rather severe design restriction and results in structures that are quite far from being stiffness optimal. The paper discusses the interpretation of Michell's theory in the context of numerical topology optimization and compares various topology optimization results obtained with the frame restriction to cases with no design restrictions. For all examples considered, the true stiffness optimal structures are composed of sheets (2D) or closed-walled shell structures (3D).

  19. Baseline configuration for GNSS attitude determination with an analytical least-squares solution

    International Nuclear Information System (INIS)

    Chang, Guobin; Wang, Qianxin; Xu, Tianhe

    2016-01-01

    GNSS attitude determination using carrier phase measurements from 4 antennas is studied, on the condition that the integer ambiguities have already been resolved. The solution to the nonlinear least-squares problem is usually obtained iteratively; however, an analytical solution can exist for specific baseline configurations. The main aim of this work is to design this class of configurations. Both single- and double-difference measurements are treated, corresponding to dedicated and non-dedicated receivers, respectively. More realistic error models are employed, in which the correlations between different measurements are given full consideration. The desired configurations are worked out. The configurations are rotation and scale equivariant and can be applied to both dedicated and non-dedicated receivers. For these configurations, the analytical and optimal solution for the attitude is also given, together with its error variance–covariance matrix. (paper)

  20. Rethinking Visual Analytics for Streaming Data Applications

    Energy Technology Data Exchange (ETDEWEB)

    Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris

    2017-01-01

    In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition are a necessary component of the analysis. This intuition is essential to building trust in the analytical products, as well as to supporting the translation of evidence into actionable insight. In addition, each of these examples highlights the need for scalable analysis: in each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead of dividing the task among a large number of analysts makes simple parallelism intractable. Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources and to streamline the sensemaking process on massive data.

  1. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
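The GCV criterion used here to pick the regularization parameter can be illustrated on a toy ill-posed problem. Written in the SVD basis the influence matrix is diagonal, so the score reduces to a few lines; the singular values and "noise" below are assumed purely for illustration and are unrelated to the SLACS data.

```python
def gcv_score(lam, s, y):
    """GCV for Tikhonov regularization, written in the SVD basis, where the
    influence matrix is diagonal with filter factors a_i = s_i^2/(s_i^2+lam)."""
    a = [si * si / (si * si + lam) for si in s]
    resid = sum(((1.0 - ai) * yi) ** 2 for ai, yi in zip(a, y))
    dof = sum(1.0 - ai for ai in a)        # trace(I - A): residual degrees of freedom
    return resid / dof**2

def tikhonov_solution(lam, s, y):
    """Regularized coefficients x_i = s_i * y_i / (s_i^2 + lam)."""
    return [si * yi / (si * si + lam) for si, yi in zip(s, y)]

# Toy ill-posed problem: decaying singular values, data = s_i*x_true_i + fixed "noise"
s = [1.0, 0.5, 0.25, 0.05, 0.01]
x_true = [1.0] * 5
noise = [0.01, -0.02, 0.015, -0.01, 0.02]
y = [si * xi + ni for si, xi, ni in zip(s, x_true, noise)]

# Pick the regularization parameter that minimizes the GCV score on a log grid
grid = [10.0 ** (k / 2.0) for k in range(-12, 1)]
lam_best = min(grid, key=lambda lam: gcv_score(lam, s, y))
```

The same idea carries over to the pixelated source plane: minimizing GCV simultaneously sets the regularization strength and an effective number of source degrees of freedom.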

  2. Process Parameters Optimization of 14nm MOSFET Using 2-D Analytical Modelling

    Directory of Open Access Journals (Sweden)

    Noor Faizah Z.A.

    2016-01-01

    Full Text Available This paper presents the modeling and optimization of a 14nm gate length CMOS transistor, down-scaled from a previous 32nm gate length design. High-k/metal-gate materials were used in this research, with Hafnium Dioxide (HfO2) as the dielectric and Tungsten Silicide (WSi2) and Titanium Silicide (TiSi2) as the metal gates for NMOS and PMOS, respectively. The devices were fabricated virtually using the ATHENA module and their performance was characterized via the ATLAS module, both in the Virtual Wafer Fabrication (VWF) environment of the Silvaco TCAD tools. The devices were then optimized against process parameter variability using the L9 Taguchi method. Four process parameters, with two noise factors at different values, were used to analyze the factor effects. The results show that the optimal values for both transistors are well within the ITRS 2013 prediction, where VTH and IOFF are 0.236737 V and 6.995705 nA/um for the NMOS device and 0.248635 V and 5.26 nA/um for the PMOS device, respectively.
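The Taguchi analysis behind such an optimization reduces to computing a signal-to-noise ratio per run (over the noise repeats) and averaging it per factor level. A minimal sketch for a smaller-the-better response such as IOFF, with assumed leakage values rather than the paper's data:

```python
import math

def sn_smaller_the_better(ys):
    """Taguchi signal-to-noise ratio (dB) for a smaller-the-better
    response such as off-state leakage; larger S/N is better."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def mean_sn_by_level(levels, sns):
    """Average S/N per factor level; the preferred level maximizes the S/N."""
    acc = {}
    for lv, sn in zip(levels, sns):
        acc.setdefault(lv, []).append(sn)
    return {lv: sum(v) / len(v) for lv, v in acc.items()}

# Hypothetical leakage (nA/um) for 4 runs, two noise repeats each (assumed data)
runs = [[7.0, 7.4], [5.2, 5.6], [6.9, 7.3], [5.1, 5.4]]
factor_level = [1, 2, 1, 2]          # level of one varied process parameter per run
sns = [sn_smaller_the_better(r) for r in runs]
effect = mean_sn_by_level(factor_level, sns)
```

In a full L9 study the same per-level averaging is repeated for each of the four process parameters, and the level combination with the highest S/N is selected.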

  3. Optimization of the scheme for natural ecology planning of urban rivers based on ANP (analytic network process) model.

    Science.gov (United States)

    Zhang, Yichuan; Wang, Jiangping

    2015-07-01

    Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for the successful construction of urban rivers; therefore, relevant studies on the optimization of schemes for the natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were included as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for the natural ecology planning of urban rivers can be made by the ANP method, which can be used to provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for optimizing schemes for urban green-space planning and design.

  4. Probing Entanglement in Adiabatic Quantum Optimization with Trapped Ions

    Directory of Open Access Journals (Sweden)

    Philipp eHauke

    2015-04-01

    Full Text Available Adiabatic quantum optimization has been proposed as a route to solve NP-complete problems, with a possible quantum speedup compared to classical algorithms. However, the precise role of quantum effects, such as entanglement, in these optimization protocols is still unclear. We propose a setup of cold trapped ions that allows one to quantitatively characterize, in a controlled experiment, the interplay of entanglement, decoherence, and non-adiabaticity in adiabatic quantum optimization. We show that, in this way, a broad class of NP-complete problems becomes accessible for quantum simulations, including the knapsack problem, number partitioning, and instances of the max-cut problem. Moreover, a general theoretical study reveals correlations of the success probability with entanglement at the end of the protocol. From exact numerical simulations for small systems and linear ramps, however, we find no substantial correlations with the entanglement during the optimization. For the final state, we derive analytically a universal upper bound for the success probability as a function of entanglement, which can be measured in experiment. The proposed trapped-ion setups and the presented study of entanglement address pertinent questions of adiabatic quantum optimization, which may be of general interest across experimental platforms.

  5. Optimal processing of reversible quantum channels

    Energy Technology Data Exchange (ETDEWEB)

    Bisio, Alessandro, E-mail: alessandro.bisio@unipv.it [QUIT Group, Dipartimento di Fisica, INFN Sezione di Pavia, via Bassi 6, 27100 Pavia (Italy); D' Ariano, Giacomo Mauro; Perinotti, Paolo [QUIT Group, Dipartimento di Fisica, INFN Sezione di Pavia, via Bassi 6, 27100 Pavia (Italy); Sedlák, Michal [Department of Optics, Palacký University, 17. Listopadu 1192/12, CZ-771 46 Olomouc (Czech Republic); Institute of Physics, Slovak Academy of Sciences, Dúbravská Cesta 9, 845 11 Bratislava (Slovakia)

    2014-05-01

    We consider the general problem of the optimal transformation of N uses of (possibly different) unitary channels into a single use of another unitary channel in any finite dimension. We show how the optimal transformation can be fully parallelized, consisting of a preprocessing channel followed by a parallel action of all the N unitaries and a final postprocessing channel. Our techniques allow us to achieve an exponential reduction in the number of free parameters of the optimization problem, making it amenable to an efficient numerical treatment. Finally, we apply our general results to find the analytical solution for special cases of interest, such as the cloning of qubit phase gates.

  6. Boron doped diamond sensor for sensitive determination of metronidazole: Mechanistic and analytical study by cyclic voltammetry and square wave voltammetry

    Energy Technology Data Exchange (ETDEWEB)

    Ammar, Hafedh Belhadj, E-mail: hbelhadjammar@yahoo.fr; Brahim, Mabrouk Ben; Abdelhédi, Ridha; Samet, Youssef

    2016-02-01

    The performance of a boron-doped diamond (BDD) electrode for the detection of metronidazole (MTZ), the most important drug of the 5-nitroimidazole group, was proven using cyclic voltammetry (CV) and square wave voltammetry (SWV). A comparative study of the electrochemical response of BDD, glassy carbon and silver electrodes was carried out. The process is pH-dependent. In neutral and alkaline media, one irreversible reduction peak related to formation of the hydroxylamine derivative was registered, involving a total of four electrons. In acidic medium, a prepeak appears, probably related to the adsorption affinity of hydroxylamine at the electrode surface. The BDD electrode showed a more sensitive and reproducible analytical response compared with the other electrodes. The highest reduction peak current was registered at pH 11. Under optimal conditions, a linear analytical curve was obtained for MTZ concentrations in the range of 0.2–4.2 μmol L{sup −1}, with a detection limit of 0.065 μmol L{sup −1}. - Highlights: • SWV for the determination of MTZ • Boron-doped diamond as a new electrochemical sensor • Simple and rapid detection of MTZ • Efficiency of BDD for sensitive determination of MTZ.
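The calibration arithmetic behind such a linear analytical curve is a least-squares fit plus the conventional 3·sigma/slope detection limit. The currents and blank standard deviation below are invented for illustration, not the measured BDD data.

```python
def linfit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return slope, my - slope * mx

def detection_limit(slope, sd_blank, k=3.0):
    """LOD = k * sd(blank signal) / slope, with the conventional k = 3."""
    return k * sd_blank / slope

# Hypothetical SWV calibration: peak current (uA) vs MTZ concentration (umol/L)
conc = [0.2, 1.0, 2.0, 3.0, 4.2]
current = [2.5 * c + 0.10 for c in conc]     # idealized, noise-free line
slope, intercept = linfit(conc, current)
lod = detection_limit(slope, sd_blank=0.05)  # assumed blank noise level
```

With real voltammograms the same fit also yields the residual standard deviation, which can replace the blank standard deviation in the LOD estimate.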

  7. Developing a business analytics methodology: a case study in the foodbank sector

    OpenAIRE

    Hindle, Giles; Vidgen, Richard

    2017-01-01

    The current research seeks to address the following question: how can organizations align their business analytics development projects with their business goals? To pursue this research agenda we adopt an action research framework to develop and apply a business analytics methodology (BAM). The four-stage BAM (problem situation structuring, business model mapping, analytics leverage analysis, and analytics implementation) is not a prescription. Rather, it provides a logical structure and log...

  8. Optimal time-domain technique for pulse width modulation in power electronics

    Directory of Open Access Journals (Sweden)

    I. Mayergoyz

    2018-05-01

    Full Text Available An optimal time-domain technique for pulse width modulation is presented. It is based on exact and explicit analytical solutions for inverter circuits, obtained for any sequence of rectangular input voltage pulses. Two optimality criteria are discussed and illustrated by numerical examples.
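For context, the baseline that such optimal techniques improve upon, regular-sampled sine-triangle PWM, can be sketched in a few lines. The modulation index and frequencies are arbitrary example values, not taken from the paper.

```python
import math

def pwm_duties(m_index, f_mod, f_carrier):
    """Regular-sampled sinusoidal PWM: duty ratio of each carrier period
    over one fundamental period of the modulating sine wave.
    The reference is sampled at the midpoint of each carrier period."""
    n = round(f_carrier / f_mod)    # carrier periods per fundamental period
    t_c = 1.0 / f_carrier
    return [0.5 * (1.0 + m_index * math.sin(2.0 * math.pi * f_mod * (k + 0.5) * t_c))
            for k in range(n)]

# Example: 50 Hz fundamental, 1 kHz carrier, modulation index 0.8
duties = pwm_duties(m_index=0.8, f_mod=50.0, f_carrier=1000.0)
```

Each duty ratio fixes the switching instants within its carrier period; the paper's optimal time-domain approach instead chooses the switching instants directly from the exact circuit solutions.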

  9. Evaluation of sample preparation methods and optimization of nickel determination in vegetable tissues

    Directory of Open Access Journals (Sweden)

    Rodrigo Fernando dos Santos Salazar

    2011-02-01

    Full Text Available Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated through food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for the preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption Spectrometry (ETAAS) in vegetable samples; and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. For the analytical determinations by ETAAS and FAAS, the results were validated by analyte addition and recovery tests. The most viable method tested for quantification of this element was HClO4-HNO3 wet digestion. All samples but the carrot tissue collected in Lorena contained Ni levels above those permitted by the Brazilian Ministry of Health. The most disturbing results, requiring more detailed studies, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.

  10. PROGRESSIVE DATA ANALYTICS IN HEALTH INFORMATICS USING AMAZON ELASTIC MAPREDUCE (EMR

    Directory of Open Access Journals (Sweden)

    J S Shyam Mohan

    2016-04-01

    Full Text Available Identifying, diagnosing and treating cancer involves a thorough investigation based on the collection of big data from many different sources, which supports effective and quick decision making. Similarly, data analytics is used to find remedial actions for newly emerging diseases from data spread across multiple warehouses. Analytics can be performed on collected or available data from various data clusters that contain pieces of data. We provide an effective framework for decision making using Amazon EMR. Through various experiments done on different biological datasets, we reveal the advantages of the proposed model and present numerical results. These results indicate that the proposed framework can efficiently perform analytics over any biological dataset and obtain results in optimal time, thereby maintaining the quality of the result.

  11. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    Science.gov (United States)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large-aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and suggestions for collaborative areas. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments, including finite-element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and processes, and optical mode generation.

  12. Studies on the spectral interference of gadolinium on different analytes in inductively coupled plasma atomic emission spectroscopy

    International Nuclear Information System (INIS)

    Sengupta, Arijit; Thulasidas, S.K.; Natarajan, V.; Airan, Yougant

    2015-01-01

    Due to their multi-electronic nature, rare earth elements are prone to exhibit spectral interference in ICP-AES, which leads to erroneous determination of analytes in the presence of such a matrix. This interference is very significant when the analytes are to be determined at trace level in the presence of emission-rich matrix elements. An attempt was made to understand the spectral interference of Gd on 29 common analytes, namely Ag, Al, B, Ba, Bi, Ca, Cd, Ce, Co, Cr, Cu, Dy, Fe, Ga, Gd, In, La, Li, Lu, Mg, Mn, Na, Nd, Ni, Pb, Pr, Sr, Tl and Zn, using ICP-AES with a Charge Coupled Device (CCD) detector. The present study includes the identification of suitable interference-free analytical lines for these analytes, the evaluation of a correction factor for each analytical line, and the determination of the tolerance levels of these analytical lines, along with an ICP-AES based methodology for the simultaneous determination of Gd. Based on the spectral interference study, an ICP-AES based method was developed for the determination of these analytes at trace level in the presence of a Gd matrix without chemical separation. Further, the developed methodology was validated using synthetic samples prepared from commercially available reference material solutions of the individual elements; the results were found to be satisfactory. The method was also compared with other existing techniques.

  13. Bayesian emulation for optimization in multi-step portfolio decisions

    OpenAIRE

    Irie, Kaoru; West, Mike

    2016-01-01

    We discuss the Bayesian emulation approach to computational solution of multi-step portfolio studies in financial time series. "Bayesian emulation for decisions" involves mapping the technical structure of a decision analysis problem to that of Bayesian inference in a purely synthetic "emulating" statistical model. This provides access to standard posterior analytic, simulation and optimization methods that yield indirect solutions of the decision problem. We develop this in time series portf...

  14. Analytical mechanics

    CERN Document Server

    Lemos, Nivaldo A

    2018-01-01

    Analytical mechanics is the foundation of many areas of theoretical physics, including quantum theory and statistical mechanics, and has wide-ranging applications in engineering and celestial mechanics. This introduction to the basic principles and methods of analytical mechanics covers Lagrangian and Hamiltonian dynamics, rigid bodies, small oscillations, canonical transformations and Hamilton–Jacobi theory. This fully up-to-date textbook includes detailed mathematical appendices and addresses a number of advanced topics, some of them of a geometric or topological character. These include Bertrand's theorem, the proof that action is least, spontaneous symmetry breakdown, constrained Hamiltonian systems, non-integrability criteria, KAM theory, classical field theory, Lyapunov functions, geometric phases and Poisson manifolds. Providing worked examples, end-of-chapter problems, and discussion of ongoing research in the field, it is suitable for advanced undergraduate students and graduate students studying analytical mechanics.

  15. Parameter Optimization for Feature and Hit Generation in a General Unknown Screening Method-Proof of Concept Study Using a Design of Experiment Approach for a High Resolution Mass Spectrometry Procedure after Data Independent Acquisition.

    Science.gov (United States)

    Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas

    2018-03-06

    High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because the untargeted acquisition is often followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. Optimizing the parameters that regulate the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are classified as crucial or noncrucial; in a second step, the crucial parameters are optimized. The aim of this study was to reduce the number of hits without missing analytes. The parameter settings obtained from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of the workload (from 12.3 to 1.1% and from 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (from 68.2 to 86.4% and from 68.8 to 88.1%, respectively). This proof of concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
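The screening step of a DoE approach, deciding which software parameters are crucial, can be sketched with a two-level full factorial design and main effects. The three parameters and the synthetic "hits" model below are assumptions for illustration, not the study's actual settings.

```python
from itertools import product

def full_factorial(k):
    """All 2^k runs of a two-level screening design, coded -1/+1."""
    return list(product((-1, 1), repeat=k))

def main_effect(design, responses, j):
    """Main effect of factor j: mean response at level +1 minus mean at -1."""
    hi = [r for run, r in zip(design, responses) if run[j] == 1]
    lo = [r for run, r in zip(design, responses) if run[j] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Hypothetical screening of 3 feature-detection parameters; the "response"
# is the number of generated hits (a synthetic model, not software output)
design = full_factorial(3)
responses = [100 + 30 * a - 2 * b + 0 * c for a, b, c in design]
effects = [main_effect(design, responses, j) for j in range(3)]
crucial = [j for j, e in enumerate(effects) if abs(e) > 10.0]   # screening cutoff
```

Only the parameters flagged as crucial would then be carried into the second, finer optimization step.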

  16. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2017-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 17-19, 2016. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, health, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  17. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2015-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 13-15, 2014. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, healthcare, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  18. Nuclear analytical techniques for nanotoxicology studies

    International Nuclear Information System (INIS)

    Zhang, Z.Y.; Zhao, Y.L.; Chai, Z.F.

    2011-01-01

    With the rapid development of nanotechnology and its applications, a wide variety of nanomaterials are now used in commodities, pharmaceutics, cosmetics, biomedical products, and industry. The potential interactions of nanomaterials with living systems and the environment have attracted increasing attention from the public, as well as from manufacturers of nanomaterial-based products, academic researchers and policymakers. It is important to consider the environmental, health and safety aspects at an early stage of nanomaterial development and application in order to more effectively identify and manage potential human and environmental health impacts from nanomaterial exposure. This will require research in a range of areas, including detection and characterization, environmental fate and transport, ecotoxicology and toxicology. Nuclear analytical techniques (NATs) can play an important role in such studies due to their intrinsic merits, such as high sensitivity, good accuracy, high spatial resolution, the ability to distinguish endogenous from exogenous sources of materials, and the capacity for in situ and in vivo analysis. In this paper, the applications of NATs in nanotoxicological and nano-ecotoxicological studies are outlined, and some recent results obtained in our laboratory are reported. (orig.)

  19. Horizontal Parallel Pipe Ground Heat Exchanger : Analytical Conception and Experimental Study

    International Nuclear Information System (INIS)

    Naili, Nabiha; Jemli, Ramzi; Farhat, Abdel Hamid; Ben Nasrallah, Sassi

    2009-01-01

    Due to the limited amount of natural resources exploited for heating, and in order to reduce environmental impact, people should strive to use renewable energy resources. Ambient low-grade energy may be upgraded by a ground heat exchanger (GHE), which exploits the ground's thermal inertia for building heating and cooling. In this study, an analytical performance analysis and experiments on a horizontal ground heat exchanger have been performed. The analytical study, which relates to the dimensioning of the heat exchanger, shows that the heat exchanger characteristics are very important for determining the heat extracted from the ground. The experimental results were obtained during the period of 30 November to 10 December 2007, in the heating season of the greenhouses. Measurements show that the ground temperature below a certain depth remains relatively constant. To exploit the heat capacity of the ground effectively, a horizontal heat exchanger system was constructed and tested at the Center of Research and Technology of Energy in Tunisia

  20. Acid-base chemistry of white wine: analytical characterisation and chemical modelling.

    Science.gov (United States)

    Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe

    2012-01-01

    A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of its ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). Coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the most relevant substances responsible for the acid-base equilibria of wine. The analytical concentrations of the carboxylic acids and of other acid-base active substances were used as input, with the total acidity, for the chemical modelling step of the study, based on the simultaneous treatment of the overlapping protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to ensure a thermodynamic level for the study. Validation of the optimized chemical model is achieved by way of conductometric measurements and using a synthetic "wine" especially adapted for testing.

  1. Acid-Base Chemistry of White Wine: Analytical Characterisation and Chemical Modelling

    Directory of Open Access Journals (Sweden)

    Enrico Prenesti

    2012-01-01

    Full Text Available A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of its ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). By coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the substances most responsible for the acid-base equilibria of wine. The analytical concentrations of the carboxylic acids and of other acid-base active substances were used as input, together with the total acidity, for the chemical modelling step of the study, based on the simultaneous treatment of the overlapping protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to ensure a sound thermodynamic basis for the study. Validation of the optimized chemical model is achieved by means of conductometric measurements and a synthetic “wine” especially adapted for testing.

  2. Acid-Base Chemistry of White Wine: Analytical Characterisation and Chemical Modelling

    Science.gov (United States)

    Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe

    2012-01-01

    A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of its ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). By coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the substances most responsible for the acid-base equilibria of wine. The analytical concentrations of the carboxylic acids and of other acid-base active substances were used as input, together with the total acidity, for the chemical modelling step of the study, based on the simultaneous treatment of the overlapping protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to ensure a sound thermodynamic basis for the study. Validation of the optimized chemical model is achieved by means of conductometric measurements and a synthetic “wine” especially adapted for testing. PMID:22566762

  3. The analytic impact of a reduced centrifugation step on chemistry and immunochemistry assays: an evaluation of the Modular Pre-Analytics.

    Science.gov (United States)

    Koenders, Mieke M J F; van Hurne, Marco E J F; Glasmacher-Van Zijl, Monique; van der Linde, Geesje; Westerhuis, Bert W J J M

    2012-09-01

    The COBAS 6000 system can be complemented with a Modular Pre-Analytics (MPA) unit, an integrated laboratory automation system that streamlines the preanalytical phase. For optimal throughput, the MPA centrifuges blood collection tubes for 5 min at 1885 × g, a centrifugation time that is not in concordance with the World Health Organization guidelines, which suggest centrifugation for 10-15 min at 2000-3000 × g. In this study, the analytical outcome of 50 serum and 50 plasma samples centrifuged for 5 or 10 min at 1885 × g was investigated. The study included routine chemistry and immunochemistry assays on the COBAS 6000 and the Minicap capillary electrophoresis. Deming-fit and Bland-Altman plots of the 5-min and 10-min centrifugation steps indicated a significant correlation in serum samples. The lipaemia index in plasma samples centrifuged for 5 min displayed a statistically significant variation when compared with the 10-min centrifugation. Preanalytical centrifugation can be successfully down-scaled to a duration of 5 min for most routine chemistry and immunochemistry assays in serum and plasma samples. To prevent inaccurate results in plasma samples with an increased lipaemia index from being reported, the laboratory information system was programmed to withhold results above certain lipaemia indices. The presented data support the use of a 5-min centrifugation step to improve turnaround times, thereby meeting one of the desires of the requesting clinicians.
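    The 5-min vs. 10-min comparison described above can be sketched with a Bland-Altman computation. The paired values below are hypothetical, purely to illustrate the bias and limits-of-agreement arithmetic; they are not the study's data.

```python
import numpy as np

# Hypothetical paired analyte results (same samples, 5-min vs 10-min spin).
five_min = np.array([4.2, 5.1, 3.8, 6.0, 4.9])
ten_min = np.array([4.3, 5.0, 3.9, 6.1, 4.8])

# Bland-Altman statistics: mean bias and 95% limits of agreement.
diff = five_min - ten_min
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(f"bias={bias:.3f}, limits of agreement=({loa[0]:.3f}, {loa[1]:.3f})")
```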

  4. Optimization of radiation protection in gamma radiography facilities

    International Nuclear Information System (INIS)

    Antonio Filho, Joao

    1999-01-01

    To determine optimized dose limits for workers, a study of the optimization of radiation protection was undertaken in gamma radiography facilities using the multi-attribute utility analysis technique. A total of 217 protection options, distributed over 34 irradiation scenarios for three facility types (fixed open, mobile, and closed (bunker)), were analyzed. In the determination of the optimized dose limit, the following attributes were considered: costs of the protection barriers, costs attributed to the biological detriment for different values of alpha (the reference monetary value of the unit collective dose), size of the isolation area, constraints on annual individual equivalent dose limits, and collective dose. The variables studied in the evaluation included: effective workload, type and activity of the radiation sources (192Ir and 60Co), source-operator distance related to the lengths of the command cable and the guide tube, and type and thickness of the materials used in the protection barriers (concrete, barite, ceramic, lead, steel alloy and tungsten). The optimal analytic solutions obtained in the optimization process, which resulted in the indication of the optimized dose limit, were determined by means of a sensitivity analysis and by direct and logical evaluations. Thus, independently of the value of the monetary coefficient attributed to the biological detriment, of the annual interest rate applied to the protection cost, and of the type of installation studied, it was concluded that the primary limit of annual equivalent dose for workers (now 50 mSv) can easily be reduced to an optimized annual dose limit of 5 mSv. (author)

  5. Enzyme Biosensors for Biomedical Applications: Strategies for Safeguarding Analytical Performances in Biological Fluids

    Science.gov (United States)

    Rocchitta, Gaia; Spanu, Angela; Babudieri, Sergio; Latte, Gavinella; Madeddu, Giordano; Galleri, Grazia; Nuvoli, Susanna; Bagella, Paola; Demartis, Maria Ilaria; Fiore, Vito; Manetti, Roberto; Serra, Pier Andrea

    2016-01-01

    Enzyme-based biosensors rely on biological recognition: to operate, the enzymes must be available to catalyze a specific biochemical reaction and be stable under the normal operating conditions of the biosensor. The design of biosensors is based on knowledge about the target analyte, as well as the complexity of the matrix in which the analyte has to be quantified. This article reviews the problems resulting from the interaction of enzyme-based amperometric biosensors with complex biological matrices containing the target analyte(s). One of the most challenging disadvantages of amperometric enzyme-based biosensor detection is signal reduction from fouling agents and interference from chemicals present in the sample matrix. This article, therefore, investigates the principles of operation of enzymatic biosensors, their analytical performance over time and the strategies used to optimize their performance. Moreover, the composition of biological fluids as a function of their interaction with biosensing is presented. PMID:27249001

  6. Space engineering modeling and optimization with case studies

    CERN Document Server

    Pintér, János

    2016-01-01

    This book presents a selection of advanced case studies that cover a substantial range of issues and real-world challenges and applications in space engineering. Vital mathematical modeling, optimization methodologies and numerical solution aspects of each application case study are presented in detail, with discussions of a range of advanced model development and solution techniques and tools. Space engineering challenges are discussed in the following contexts: •Advanced Space Vehicle Design •Computation of Optimal Low Thrust Transfers •Indirect Optimization of Spacecraft Trajectories •Resource-Constrained Scheduling •Packing Problems in Space •Design of Complex Interplanetary Trajectories •Satellite Constellation Image Acquisition •Re-entry Test Vehicle Configuration Selection •Collision Risk Assessment on Perturbed Orbits •Optimal Robust Design of Hybrid Rocket Engines •Nonlinear Regression Analysis in Space Engineering •Regression-Based Sensitivity Analysis and Robust Design ...

  7. Optimization of hospital ward resources with patient relocation using Markov chain modeling

    DEFF Research Database (Denmark)

    Andersen, Anders Reenberg; Nielsen, Bo Friis; Reinhardt, Line Blander

    2017-01-01

    available to the hospital. Patient flow is modeled using a homogeneous continuous-time Markov chain, and optimization is conducted using a local search heuristic. Our model accounts for patient relocation, which has not been done analytically in the literature with similar scope. The study objective is to ensure...... are distributed. Furthermore, our heuristic is found to derive the optimal solution efficiently. Applying our model to the hospital case, we found that relocation of daily arrivals can be reduced by 11.7% by re-distributing beds that are already available to the hospital....
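    The homogeneous continuous-time Markov chain mentioned above can be illustrated with a toy occupancy model. The ward size and the arrival/discharge rates below are hypothetical; the sketch only shows how a stationary bed-occupancy distribution is obtained from a generator matrix.

```python
import numpy as np

# Hypothetical toy ward: occupancy 0..3 beds, birth-death CTMC.
# Generator Q: admissions at rate lam (blocked when full),
# discharges at rate mu per occupied bed.
lam, mu, beds = 2.0, 1.0, 3
n = beds + 1
Q = np.zeros((n, n))
for i in range(n):
    if i < beds:
        Q[i, i + 1] = lam          # admission
    if i > 0:
        Q[i, i - 1] = i * mu       # discharge
    Q[i, i] = -Q[i].sum()

# Stationary distribution: solve pi Q = 0 with pi summing to 1.
A = np.vstack([Q.T, np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("stationary occupancy distribution:", pi.round(4))
```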

  8. N-Springs pump and treat system optimization study

    International Nuclear Information System (INIS)

    1997-03-01

    This letter report describes and presents the results of a system optimization study conducted to evaluate the N-Springs pump and treat system. The N-Springs pump and treat system is designed to remove strontium-90 (90Sr) found in the groundwater in the 100-NR-2 Operable Unit near the Columbia River. The goal of the system optimization study was to assess and quantify which conditions and operating parameters could be employed to enhance the operating and cost effectiveness of the recently upgraded pump and treat system. This report provides the results of the system optimization study, reports the cost effectiveness of operating the pump and treat system in various operating modes and at various 90Sr removal goals, and provides recommendations for operating the pump and treat system.

  9. A study of optical design and optimization of laser optics

    Science.gov (United States)

    Tsai, C.-M.; Fang, Yi-Chin

    2013-09-01

    This paper proposes a study of the optical design of laser beam shaping optics with an aspheric surface, applying a genetic algorithm (GA) to find the optimal results. For an Nd:YAG 355 nm waveband laser flat-top optical system, this study employed the LightTools LDS (least damped square) method and the GA artificial intelligence optimization method to determine the optimal aspheric coefficients and obtain the optimal solution. This study applied aspheric lenses together with the GA to flatten collimated laser beams and achieve the best results.
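    The genetic-algorithm step described above can be sketched generically. The merit function below is a hypothetical stand-in for the ray-traced flat-top uniformity metric (its known optimum is at coefficient 0.5); the selection/crossover/mutation loop is the part that mirrors the paper's optimization approach.

```python
import random

# Toy merit function standing in for a ray-trace spot-uniformity metric;
# the true optimum of this stand-in is at coefficient a = 0.5.
def merit(a):
    return (a - 0.5) ** 2

random.seed(1)
pop = [random.uniform(-2, 2) for _ in range(20)]
for _ in range(60):
    pop.sort(key=merit)
    parents = pop[:10]                  # selection: keep the fittest half
    children = []
    for _ in range(10):
        p1, p2 = random.sample(parents, 2)
        child = 0.5 * (p1 + p2)         # crossover: blend two parents
        child += random.gauss(0, 0.05)  # mutation: small Gaussian kick
        children.append(child)
    pop = parents + children
best = min(pop, key=merit)
print(f"best coefficient ~ {best:.3f}")
```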

  10. Post-analytical stability of 23 common chemistry and immunochemistry analytes in incurred samples

    DEFF Research Database (Denmark)

    Nielsen, Betina Klint; Frederiksen, Tina; Friis-Hansen, Lennart

    2017-01-01

    BACKGROUND: Storage of blood samples after centrifugation, decapping and initial sampling allows ordering of additional blood tests. The pre-analytical stability of biochemistry and immunochemistry analytes has been studied in detail, but little is known about the post-analytical stability...... in incurred samples. METHODS: We examined the stability of 23 routine analytes on the Dimension Vista® (Siemens Healthineers, Denmark): 42-60 routine samples in lithium-heparin gel tubes (Vacutainer, BD, USA) were centrifuged at 3000×g for 10min. Immediately after centrifugation, initial concentration...... of analytes were measured in duplicate (t=0). The tubes were stored decapped at room temperature and re-analyzed after 2, 4, 6, 8 and 10h in singletons. The concentrations from reanalysis were normalized to the initial concentration (t=0). Internal acceptance criteria for bias and total error were used...
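    The normalization step described above (reanalysis results expressed as a percentage of the t=0 concentration) reduces to a few lines. The numbers and the 3% acceptance criterion below are hypothetical illustrations, not the study's actual data or criteria.

```python
import numpy as np

# Hypothetical duplicate baseline and a 10 h reanalysis result for one analyte.
t0_duplicates = np.array([140.0, 142.0])   # measured in duplicate at t = 0
t10 = 139.0                                # singleton re-measurement at 10 h

baseline = t0_duplicates.mean()
normalized = 100.0 * t10 / baseline        # % of initial concentration
bias_pct = normalized - 100.0
print(f"normalized={normalized:.2f}% (bias {bias_pct:+.2f}%)")

# Example acceptance criterion (hypothetical): |bias| within 3%.
acceptable = abs(bias_pct) <= 3.0
```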

  11. Energy-optimal electrical excitation of nerve fibers.

    Science.gov (United States)

    Jezernik, Saso; Morari, Manfred

    2005-04-01

    We derive, based on an analytical nerve membrane model and optimal control theory of dynamical systems, an energy-optimal stimulation current waveform for electrical excitation of nerve fibers. Optimal stimulation waveforms for nonleaky and leaky membranes are calculated. The case with a leaky membrane is a realistic case. Finally, we compare the waveforms and energies necessary for excitation of a leaky membrane in the case where the stimulation waveform is a square-wave current pulse, and in the case of energy-optimal stimulation. The optimal stimulation waveform is an exponentially rising waveform and necessitates considerably less energy to excite the nerve than a square-wave pulse (especially true for larger pulse durations). The described theoretical results can lead to drastically increased battery lifetime and/or decreased energy transmission requirements for implanted biomedical systems.
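    The energy comparison above can be reproduced numerically for a first-order (leaky) membrane model. The constants below (R = tau = threshold = 1, pulse duration 3·tau) are illustrative, not taken from the paper; the sketch only shows that an exponentially rising pulse reaching the same end-of-pulse voltage costs less energy than a square pulse.

```python
import numpy as np

# Leaky membrane sketch: tau * dV/dt = -V + R*I(t), V(0) = 0; excitation
# is modeled as the voltage reaching threshold at the end of the pulse.
R, tau, Vth, T = 1.0, 1.0, 1.0, 3.0
t = np.linspace(0.0, T, 20001)

def integ(y):
    # trapezoidal rule on the fixed time grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def v_end(I):
    # membrane voltage at t = T via the convolution solution
    return (R / tau) * integ(np.exp(-(T - t) / tau) * I)

def energy(I):
    # stimulation energy is proportional to the integral of I^2
    return integ(I ** 2)

sq = np.ones_like(t)
sq *= Vth / v_end(sq)   # square pulse scaled to just reach threshold
ex = np.exp(t / tau)
ex *= Vth / v_end(ex)   # exponentially rising pulse, same threshold

print(f"square-pulse energy {energy(sq):.3f}, exponential {energy(ex):.3f}")
```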

  12. Impact of two-stage turbocharging architectures on pumping losses of automotive engines based on an analytical model

    International Nuclear Information System (INIS)

    Galindo, J.; Serrano, J.R.; Climent, H.; Varnier, O.

    2010-01-01

    The present work presents an analytical study of the performance of two-stage turbocharging configurations. The aim of this work is to understand the influence of the different two-stage-architecture parameters, to optimize the use of the exhaust manifold gas energy, and to aid the decision-making process. An analytical model giving the relationship between the global compression ratio and the global expansion ratio is developed as a function of basic engine and turbocharging system parameters. Having an analytical solution, the influences of different variables, such as the expansion ratio split between the HP and LP turbines, intercooler efficiency, turbocharger efficiencies, cooling fluid temperature and exhaust temperature, are studied independently. Engine simulations with the proposed analytical model have been performed to analyze the influence of these different parameters on brake thermal efficiency and pumping mean effective pressure. The results obtained show the overall performance of the two-stage system over the whole operating range and characterize the optimum control of the elements for each operating condition. The model was also used to compare single-stage and two-stage architecture performance for the same engine operating conditions. The benefits and limits of each type of system, in terms of breathing capabilities and brake thermal efficiency, are presented and analyzed.

  13. Performance Optimization of Irreversible Air Heat Pumps Considering Size Effect

    Science.gov (United States)

    Bi, Yuehong; Chen, Lingen; Ding, Zemin; Sun, Fengrui

    2018-06-01

    Considering the size of an irreversible air heat pump (AHP), heating load density (HLD) is taken as the thermodynamic optimization objective using finite-time thermodynamics. Based on a model of an irreversible AHP with infinite reservoir thermal-capacitance rate, the expression for the HLD of the AHP is put forward. The HLD optimization processes are studied analytically and numerically and consist of two aspects: (1) choosing the pressure ratio; (2) distributing the heat-exchanger inventory. The heat reservoir temperatures, the heat transfer performance of the heat exchangers, and the irreversibility during the compression and expansion processes are important factors influencing the performance of an irreversible AHP; they are characterized by the temperature ratio, the heat exchanger inventory, and the isentropic efficiencies, respectively. The impacts of these parameters on the maximum HLD are thoroughly studied. The research results show that HLD optimization can make the AHP system smaller and improve its compactness.

  14. BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.

    Science.gov (United States)

    White, B J; Amrine, D E; Larson, R L

    2018-04-14

    Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
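    The partition-train-compare workflow described above can be sketched with scikit-learn on synthetic data; the dataset, the two classifiers and all parameters below are illustrative choices, not those of the symposium paper.

```python
# Sketch of the predictive-analytic workflow: partition the data, train
# several classification algorithms, then compare accuracy on naive data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Partition: hold out naive data to estimate real-world accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
scores = {name: accuracy_score(y_test, m.fit(X_train, y_train).predict(X_test))
          for name, m in models.items()}
print(scores)
```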

  15. An analytical study on the thermal stress of mass concrete

    International Nuclear Information System (INIS)

    Yoshida, H.; Sawada, T.; Yamazaki, M.; Miyashita, T.; Morikawa, H.; Hayami, Y.; Shibata, K.

    1983-01-01

    The thermal stress in mass concrete occurs as a result of the heat of hydration of the cement. Sometimes, excessive stresses cause cracking or other tensile failure in the concrete. It is therefore becoming necessary in the design and construction of mass concrete to predict the thermal stress. Thermal stress analysis of mass concrete requires taking into account the dependence of the elastic modulus on the age of the concrete as well as the stress relaxation by the creep effect. Studies of those phenomena and the analytical methods have been reported previously. This paper presents the analytical method and discusses its reliability through the application of the method to an actual structure, measuring the temperatures and the thermal stresses. The method is a time-dependent thermal stress analysis based on the finite element method, which takes account of the creep effect, the aging of the concrete and the effect of temperature variation in time. (orig./HP)

  16. Analytical study on the self-healing property of Bessel beam

    Science.gov (United States)

    Chu, X.

    2012-10-01

    With the help of the Babinet principle, an analytical expression for the self-healing of a Bessel beam is derived, using a Gaussian absorption function to describe the obstacle. Based on this analytical expression, the self-healing properties of the Bessel beam are studied. It is shown that a Bessel beam has the ability to reconstruct its beam shape after being disturbed by an obstacle. However, during the self-healing process, not only the intensity of the beam behind the obstacle but also the other parts of the beam are affected by the obstruction. Meanwhile, a highlight spot, whose intensity is larger than that without the obstacle, will appear, and the size and strength of the highlight spot are determined by the size of the obstacle. From the change of the Poynting vector and the Babinet principle, physical interpretations are given for the self-healing ability, the effects of the obstruction on the other parts of the beam, and the appearance of the highlight spot.

  17. A European multicenter study on the analytical performance of the VERIS HBV assay.

    Science.gov (United States)

    Braun, Patrick; Delgado, Rafael; Drago, Monica; Fanti, Diana; Fleury, Hervé; Izopet, Jacques; Lombardi, Alessandra; Mancon, Alessandro; Marcos, Maria Angeles; Sauné, Karine; O Shea, Siobhan; Pérez-Rivilla, Alfredo; Ramble, John; Trimoulet, Pascale; Vila, Jordi; Whittaker, Duncan; Artus, Alain; Rhodes, Daniel

    Hepatitis B viral load monitoring is an essential part of managing patients with chronic Hepatitis B infection. Beckman Coulter has developed the VERIS HBV Assay for use on the fully automated Beckman Coulter DxN VERIS Molecular Diagnostics System. OBJECTIVES: To evaluate the analytical performance of the VERIS HBV Assay at multiple European virology laboratories. Precision, analytical sensitivity, negative sample performance, linearity and performance with the major HBV genotypes/subtypes of the VERIS HBV Assay were evaluated. Precision showed an SD of 0.15 log10 IU/mL or less for each level tested. Analytical sensitivity determined by probit analysis was between 6.8 and 8.0 IU/mL. Clinical specificity on 90 unique patient samples was 100.0%. Performance with 754 negative samples demonstrated 100.0% not-detected results, and a carryover study showed no cross contamination. Linearity using clinical samples was shown from 1.23 to 8.23 log10 IU/mL, and the assay detected and showed linearity with the major HBV genotypes/subtypes. The VERIS HBV Assay demonstrated analytical performance comparable to other currently marketed assays for HBV DNA monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Analytical and experimental study of the acoustics and the flow field characteristics of cavitating self-resonating water jets

    Energy Technology Data Exchange (ETDEWEB)

    Chahine, G.L.; Genoux, P.F.; Johnson, V.E. Jr.; Frederick, G.S.

    1984-09-01

    Waterjet nozzles (STRATOJETS) have been developed which achieve passive structuring of cavitating submerged jets into discrete ring vortices, and which possess cavitation inception numbers six times higher than obtained with conventional cavitating jet nozzles. In this study we developed analytical and numerical techniques and conducted experimental work to gain an understanding of the basic phenomena involved. The achievements are: (1) a thorough analysis of the acoustic dynamics of the feed pipe to the nozzle; (2) a theory for bubble ring growth and collapse; (3) a numerical model for jet simulation; (4) experimental observation and analysis of candidate second-generation low-sigma STRATOJETS. From this study we conclude that intensification of bubble ring collapse and design of highly resonant feed tubes can lead to improved drilling rates. The models described here are excellent tools to analyze the various parameters needed for STRATOJET optimization. Further analysis is needed to introduce such important factors as viscosity, nozzle-jet interaction, and ring-target interaction, and to develop the jet simulation model to describe the important fine details of the flow field at the nozzle exit.

  19. Decision support for environmental management of industrial non-hazardous secondary materials: New analytical methods combined with simulation and optimization modeling.

    Science.gov (United States)

    Little, Keith W; Koralegedara, Nadeesha H; Northeim, Coleen M; Al-Abed, Souhail R

    2017-07-01

    Non-hazardous solid materials from industrial processes, once regarded as waste and disposed of in landfills, offer numerous environmental and economic advantages when put to beneficial uses (BUs). Proper management of these industrial non-hazardous secondary materials (INSM) requires estimates of their probable environmental impacts across disposal as well as BU options. The U.S. Environmental Protection Agency (EPA) has recently approved new analytical methods (EPA Methods 1313-1316) to assess the leachability of constituents of potential concern in these materials. These new methods are more realistic for many disposal and BU options than historical methods, such as the toxicity characteristic leaching protocol. Experimental data from these new methods are used to parameterize a chemical fate and transport (F&T) model to simulate long-term environmental releases from flue gas desulfurization gypsum (FGDG) when disposed of in an industrial landfill or beneficially used as an agricultural soil amendment. The F&T model is also coupled with optimization algorithms, the Beneficial Use Decision Support System (BUDSS), under development by EPA to enhance INSM management. Published by Elsevier Ltd.

  20. Optimization of wet digestion procedure of blood and tissue for selenium determination by means of 75Se tracer

    International Nuclear Information System (INIS)

    Holynska, B.; Lipinska, K.

    1977-01-01

    A selenium-75 tracer has been used to optimize the analytical procedure for selenium determination in blood and tissue. A wet digestion procedure and the reduction of selenium to its elemental form, with tellurium as coprecipitant, have been tested. The recovery of selenium obtained with the optimized analytical procedure amounts to 95%, and the precision is 4.2%. (author)

  1. Analytic functionals on the sphere

    CERN Document Server

    Morimoto, Mitsuo

    1998-01-01

    This book treats spherical harmonic expansion of real analytic functions and hyperfunctions on the sphere. Because a one-dimensional sphere is a circle, the simplest example of the theory is that of Fourier series of periodic functions. The author first introduces a system of complex neighborhoods of the sphere by means of the Lie norm. He then studies holomorphic functions and analytic functionals on the complex sphere. In the one-dimensional case, this corresponds to the study of holomorphic functions and analytic functionals on the annular set in the complex plane, relying on the Laurent series expansion. In this volume, it is shown that the same idea still works in a higher-dimensional sphere. The Fourier-Borel transformation of analytic functionals on the sphere is also examined; the eigenfunction of the Laplacian can be studied in this way.

  2. Continuous and Discrete-Time Optimal Controls for an Isolated Signalized Intersection

    Directory of Open Access Journals (Sweden)

    Jiyuan Tan

    2017-01-01

    Full Text Available A classical control problem for an isolated oversaturated intersection is revisited with a focus on the optimal control policy to minimize total delay. The difference and connection between existing continuous-time planning models and recently proposed discrete-time planning models are studied. A gradient descent algorithm is proposed to convert the optimal control plan of the continuous-time model to the plan of the discrete-time model in many cases. Analytic proof and numerical tests for the algorithm are also presented. The findings shed light on the links between two kinds of models.

  3. Cost Optimization of Product Families using Analytic Cost Models

    DEFF Research Database (Denmark)

    Brunø, Thomas Ditlev; Nielsen, Peter

    2012-01-01

    This paper presents a new method for analysing the cost structure of a mass customized product family. The method uses linear regression and backwards selection to reduce the complexity of a data set describing a number of historical product configurations and incurred costs. By reducing the data...... set, the configuration variables which best describe the variation in product costs are identified. The method is tested using data from a Danish manufacturing company and the results indicate that the method is able to identify the most critical configuration variables. The method can be applied...... in product family redesign projects focusing on cost reduction to identify which modules contribute the most to cost variation and should thus be optimized....
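    The regression-plus-backwards-selection idea above can be sketched with ordinary least squares on synthetic data. The data, the stopping rule (a 1.5x error-increase threshold) and the variable count below are hypothetical; the loop only illustrates how variables that explain little of the cost variation are discarded.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "configuration" data: 5 candidate variables, but cost depends
# only on the first two (all coefficients illustrative).
X = rng.normal(size=(200, 5))
cost = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

def sse(cols):
    # residual sum of squares of an OLS fit using the given columns
    A = np.column_stack([np.ones(len(X))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, cost, rcond=None)
    resid = cost - A @ beta
    return float(resid @ resid)

cols = list(range(5))
# Backwards selection: repeatedly drop the variable whose removal
# increases the residual error the least, while the increase stays small.
while len(cols) > 1:
    trials = {c: sse([k for k in cols if k != c]) for c in cols}
    drop = min(trials, key=trials.get)
    if trials[drop] > 1.5 * sse(cols):   # stop when any drop hurts the fit
        break
    cols.remove(drop)
print("retained configuration variables:", cols)
```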

  4. Analytical methods applied to the study of lattice gauge and spin theories

    International Nuclear Information System (INIS)

    Moreo, Adriana.

    1985-01-01

    A study of the interactions between quarks and gluons is presented. Certain difficulties of quantum chromodynamics in explaining the behaviour of quarks gave origin to the technique of lattice gauge theories. First, the phase diagrams of the discrete space-time theories are studied. The analysis of the phase diagrams is made by numerical and analytical methods. The following items were investigated and studied: a) a variational technique was proposed to obtain very accurate values for the ground and first excited state energies of the analyzed theory; b) a mean-field-like approximation for lattice spin models in the link formulation, which is a generalization of the mean-plaquette technique, was developed; c) a new method to study lattice gauge theories at finite temperature was proposed. For the first time, a non-abelian model was studied with analytical methods; d) an abelian lattice gauge theory with fermionic matter at the strong coupling limit was analyzed. Interesting results applicable to non-abelian gauge theories were obtained. (M.E.L.) [es

  5. Pole-shape optimization of permanent-magnet linear synchronous motor for reduction of thrust ripple

    Energy Technology Data Exchange (ETDEWEB)

    Tavana, Nariman Roshandel, E-mail: nroshandel@ee.iust.ac.i [Department of Electrical Engineering, Iran University of Science and Technology, Narmak, Tehran 16846-13114 (Iran, Islamic Republic of); Shoulaie, Abbas, E-mail: shoulaie@iust.ac.i [Department of Electrical Engineering, Iran University of Science and Technology, Narmak, Tehran 16846-13114 (Iran, Islamic Republic of)

    2011-01-15

    In this paper, we have used the magnet arc shaping technique in order to improve the performance of a permanent-magnet linear synchronous motor (PMLSM). First, a detailed analytical model based on Maxwell's equations is presented for the analysis and design of a PMLSM with arc-shaped magnetic poles (ASMPs). Then the accuracy of the presented method is verified by the finite-element method. Very close agreement between the analytical and finite-element results shows the effectiveness of the proposed method. Finally, a magnet shape design is carried out based on the analytical method to enhance the motor's developed thrust. Pertinent evaluations of the optimal design performance demonstrate that the shape optimization leads to a design with extra-low thrust ripple.

  6. Pole-shape optimization of permanent-magnet linear synchronous motor for reduction of thrust ripple

    International Nuclear Information System (INIS)

    Tavana, Nariman Roshandel; Shoulaie, Abbas

    2011-01-01

    In this paper, we have used the magnet arc shaping technique in order to improve the performance of a permanent-magnet linear synchronous motor (PMLSM). First, a detailed analytical model based on Maxwell's equations is presented for the analysis and design of a PMLSM with arc-shaped magnetic poles (ASMPs). Then the accuracy of the presented method is verified by the finite-element method. Very close agreement between the analytical and finite-element results shows the effectiveness of the proposed method. Finally, a magnet shape design is carried out based on the analytical method to enhance the motor's developed thrust. Pertinent evaluations of the optimal design performance demonstrate that the shape optimization leads to a design with extra-low thrust ripple.

  7. Optimization of a gas turbine cogeneration plant

    International Nuclear Information System (INIS)

    Wallin, J.; Wessman, M.

    1991-11-01

    This work describes an analytical method of optimizing a cogeneration plant with a gas turbine as prime mover. The method is based on an analytical function that describes the total costs of the heat production, characterized by the heat load duration curve. The total costs consist of the variable and fixed costs of the gas turbine and the other heating plants. The parameters of interest in the optimization are the heat output produced by the gas turbine and the utilization time of the gas turbine. With today's prices for electricity, fuel and heating, as well as maintenance, personnel and investment costs, extremely good conditions are needed to make the gas turbine profitable: either the electricity price must rise by about 33%, or the ratio of electricity to fuel prices must increase to approximately 2.5. High investment subsidies for gas turbines could make a gas turbine profitable even with today's electricity and fuel prices. Besides being a good aid when designing cogeneration plants with a gas turbine as prime mover, the method makes it possible to optimize the annual operating time of a given gas turbine when the operating conditions change. 6 refs
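    The optimization against the heat load duration curve described above can be sketched numerically: choose the gas-turbine heat capacity that minimizes fixed plus variable production costs. The duration curve, prices and capital cost below are entirely hypothetical.

```python
import numpy as np

# Hypothetical heat load duration curve: load (MW) sorted by duration (h).
hours = np.arange(8760)
load = 100.0 * np.exp(-hours / 3000.0)

def total_cost(gt_capacity):
    # Turbine covers the base of the duration curve, peak boilers the rest.
    gt_heat = np.minimum(load, gt_capacity).sum()   # MWh from the turbine
    boiler_heat = load.sum() - gt_heat              # MWh from peak boilers
    fixed = 120_000.0 * gt_capacity                 # annualized capital cost
    return fixed + 15.0 * gt_heat + 40.0 * boiler_heat

caps = np.linspace(0.0, 100.0, 201)
best = caps[np.argmin([total_cost(c) for c in caps])]
print(f"cost-optimal turbine heat capacity ~ {best:.1f} MW")
```

With these stand-in prices the optimum sits where the marginal capital cost of extra capacity equals the fuel-cost saving over the hours that capacity is actually used, i.e. at roughly 4800 full-load hours on this curve.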

  8. Sub-Riemannian geometry and optimal transport

    CERN Document Server

    Rifford, Ludovic

    2014-01-01

The book provides an introduction to sub-Riemannian geometry and optimal transport and presents some of the recent progress in these two fields. The text is completely self-contained: the linear discussion, containing all the proofs of the stated results, leads the reader step by step from the notion of a distribution at the very beginning to the existence of optimal transport maps for Lipschitz sub-Riemannian structures. The combination of geometry, presented from an analytic point of view, with optimal transport makes the book interesting for a very large community. This set of notes grew out of a series of lectures given by the author during a CIMPA school in Beirut, Lebanon.

  9. Evaluation of the analytic performance of laboratories: inter-laboratorial study of the spectroscopy of atomic absorption

    International Nuclear Information System (INIS)

    Wong Wong, S. M.

    1996-01-01

The author conducted an inter-laboratory study with the participation of 18 national laboratories equipped with atomic absorption spectrophotometers, to evaluate the methods of analysis of lead, sodium, potassium, calcium, magnesium, zinc, copper, manganese, and iron at the mg/l level. The samples, distributed to the laboratories in four rounds, were prepared from primary standards with deionized and distilled water. The study evaluated their homogeneity and stability and verified their concentration using inductively coupled plasma (ICP) emission spectrometry as the reference method. The analytical performance characteristics were obtained by applying the ASTM E 691 standard, and the analytical performance was evaluated with the harmonized protocol of the International Union of Pure and Applied Chemistry (IUPAC). The study found that 29% of the laboratories had a satisfactory analytical performance, 9% a questionable performance and 62% an unsatisfactory performance, according to the IUPAC protocol. The results for the performance characteristics of the methods show that there is no intercomparability between the laboratories, which is attributed to the different analysis methodologies. (S. Grainger)

  10. An analytical approach to activating demand elasticity with a demand response mechanism

    International Nuclear Information System (INIS)

    Clastres, Cedric; Khalfallah, Haikel

    2015-01-01

The aim of this work is to demonstrate analytically the conditions under which activating the elasticity of consumer demand could benefit social welfare. We have developed an analytical equilibrium model to quantify the effect of deploying demand response on social welfare and energy trade. The novelty of this research is that it demonstrates the existence of an optimal area for the price signal in which demand response enhances social welfare. This optimal area is negatively correlated with the degree of competitiveness of generation technologies and the market size of the system. In particular, the value of un-served energy or energy reduction which producers could lose under such a demand response scheme limits its effectiveness. This constraint is even greater if energy trade between countries is limited. Finally, we have demonstrated that there is scope for more aggressive demand response when only the impact on consumer surplus is considered. (authors)

  11. Business Analytics and Performance Management: A Small Data Example Combining TD-ABC and BSC for Simulation and Optimization

    DEFF Research Database (Denmark)

    Nielsen, Steen

The purpose of this paper is twofold: first, it discusses the potential of combining performance management with the concept and methodology of business analytics. The inspiration for this stems from the intensified discussion and use of business analytics and performance in organizations by both...

  12. Design optimization for active twist rotor blades

    Science.gov (United States)

    Mok, Ji Won

This dissertation introduces the process of optimizing active twist rotor blades in the presence of embedded anisotropic piezo-composite actuators. Optimum design of active twist blades is a complex task, since it involves a rich design space with tightly coupled design variables. The study presents the development of an optimization framework for active helicopter rotor blade cross-sectional design. This optimization framework allows for exploring a rich and highly nonlinear design space in order to optimize the active twist rotor blades. Different analytical components are combined in the framework: cross-sectional analysis (UM/VABS), an automated mesh generator, a beam solver (DYMORE), a three-dimensional local strain recovery module, and a gradient-based optimizer within MATLAB. Through the mathematical optimization problem, the static twist actuation performance of a blade is maximized while satisfying a series of blade constraints. These constraints are associated with the locations of the center of gravity and elastic axis, the blade mass per unit span, the fundamental rotating blade frequencies, and the blade strength based on local three-dimensional strain fields under worst loading conditions. Through pre-processing, limitations of the proposed process have been studied; when limitations were detected, resolution strategies were proposed. These include mesh overlapping, element distortion, trailing edge tab modeling, electrode modeling and foam implementation in the mesh generator, and the initial point sensitivity of the current optimization scheme. Examples demonstrate the effectiveness of this process. Optimization studies were performed on the NASA/Army/MIT ATR blade case. Even though that design was built and showed a significant impact on vibration reduction, the proposed optimization process indicated that the design could be improved significantly.
The second example, based on a model scale of the AH-64D Apache blade, emphasized the capability of this framework to

  13. Case Study: IBM Watson Analytics Cloud Platform as Analytics-as-a-Service System for Heart Failure Early Detection

    Directory of Open Access Journals (Sweden)

    Gabriele Guidi

    2016-07-01

Full Text Available In recent years, progress in technology and the increasing availability of fast connections have produced a migration of functionalities in information technology services, from static servers to distributed technologies. This article describes the main tools available on the market to perform Analytics as a Service (AaaS) using a cloud platform. A use case of IBM Watson Analytics, a cloud system for data analytics, is also described, applied to the following research scope: detecting the presence or absence of heart failure disease using nothing more than the electrocardiographic signal, in particular through the analysis of heart rate variability. The obtained results are comparable with those from the literature in terms of accuracy and predictive power. Advantages and drawbacks of cloud versus static approaches are discussed in the final sections.

  14. Proposal optimization in nuclear accident emergency decision based on IAHP

    International Nuclear Information System (INIS)

    Xin Jing

    2007-01-01

On the basis of a multi-layer structure for nuclear accident emergency decision-making, several decision objectives are analyzed synthetically, and an optimization model of decision proposals for nuclear accident emergencies based on the interval analytic hierarchy process (IAHP) is proposed in this paper. The model quantifies the comparisons among several emergency decision proposals and selects the optimum proposal, which solves the uncertain and fuzzy problem of decisions based on experts' subjective judgment in nuclear accident emergency decision-making. A case study shows that the optimization result is much more reasonable, objective and reliable than subjective judgment, and it can serve as a decision reference in nuclear accident emergencies. (authors)
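The interval AHP used in the paper extends the classic analytic hierarchy process. As background, here is a minimal crisp-AHP sketch (geometric-mean prioritization with a consistency check, not the interval variant the author uses):

```python
from math import prod

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix, using the
    row geometric-mean approximation of the principal eigenvector."""
    n = len(pairwise)
    gmeans = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

def consistency_index(pairwise, weights):
    """Saaty consistency index CI = (lambda_max - n) / (n - 1); CI ~ 0 means the
    expert judgments are mutually consistent."""
    n = len(pairwise)
    lam = sum(sum(a * w for a, w in zip(row, weights)) / weights[i]
              for i, row in enumerate(pairwise)) / n
    return (lam - n) / (n - 1)
```

For a perfectly consistent matrix such as [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]] the weights come out proportional to 4:2:1 and the consistency index is zero.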

  15. Application of nuclear analytical methods to heavy metal pollution studies of estuaries

    International Nuclear Information System (INIS)

    Anders, B.; Junge, W.; Knoth, J.; Michaelis, W.; Pepelnik, R.; Schwenke, H.

    1984-01-01

Important objectives of heavy metal pollution studies of estuaries are the understanding of the transport phenomena in these complex ecosystems and the discovery of the pollution history and the geochemical background. Such studies require high precision and accuracy of the analytical methods. Moreover, pronounced spatial heterogeneities and temporal variabilities that are typical for estuaries necessitate the analysis of a great number of samples if relevant results are to be obtained. Both requirements can economically be fulfilled by a proper combination of analytical methods. Applications of energy-dispersive X-ray fluorescence analysis with total reflection of the exciting beam at the sample support and of neutron activation analysis with both thermal and fast neutrons are reported in the light of pollution studies performed in the Lower Elbe River. Profiles are presented for the total heavy metal content determined from particulate matter and sediment. They include V, Mn, Fe, Ni, Cu, Zn, As, Pb, and Cd. 16 references, 10 figures, 1 table

  16. Algorithms for optimal sequencing of dynamic multileaf collimators

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, Srijit [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Sahni, Sartaj [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Palta, Jatinder [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Ranka, Sanjay [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States)

    2004-01-07

Dynamic multileaf collimator (DMLC) intensity modulated radiation therapy (IMRT) is used to deliver intensity modulated beams using a multileaf collimator (MLC), with the leaves in motion. DMLC-IMRT requires the conversion of a radiation intensity map into a leaf sequence file that controls the movement of the MLC while the beam is on. It is imperative that the intensity map delivered using the leaf sequence file be as close as possible to the intensity map generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. Optimization of the leaf-sequencing algorithm has been the subject of several recent investigations. In this work, we present a systematic study of the optimization of leaf-sequencing algorithms for dynamic multileaf collimator beam delivery and provide rigorous mathematical proofs of optimized leaf sequence settings in terms of monitor unit (MU) efficiency under the most common leaf movement constraints, including the leaf interdigitation constraint. Our analysis shows that leaf sequencing based on unidirectional movement of the MLC leaves is as MU efficient as bi-directional movement of the MLC leaves.
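The unidirectional result quoted above has a simple constructive form. The sketch below (our illustration, not the authors' code) builds the minimum-MU unidirectional leaf schedule for a 1D sampled profile, ignoring leaf-speed limits and interdigitation:

```python
def unidirectional_sweep(intensity):
    """Minimum-MU unidirectional DMLC schedule for a 1D intensity profile
    sampled at uniform positions (values in MU).  Returns the cumulative MU at
    which the leading (uncovering) and trailing (covering) leaves pass each
    sample position; trailing - leading reproduces the profile exactly."""
    lead, trail = [0.0], [float(intensity[0])]
    for prev, cur in zip(intensity, intensity[1:]):
        diff = cur - prev
        if diff >= 0:                  # profile rises: hold the leading leaf
            lead.append(lead[-1])
            trail.append(trail[-1] + diff)
        else:                          # profile falls: hold the trailing leaf
            lead.append(lead[-1] - diff)
            trail.append(trail[-1])
    return lead, trail
```

Both passage times are nondecreasing along the sweep direction, and the total MU equals the first sample plus the sum of the positive increments of the profile, which is the classic minimum-MU bound for unidirectional delivery.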

  17. Algorithms for optimal sequencing of dynamic multileaf collimators

    International Nuclear Information System (INIS)

    Kamath, Srijit; Sahni, Sartaj; Palta, Jatinder; Ranka, Sanjay

    2004-01-01

Dynamic multileaf collimator (DMLC) intensity modulated radiation therapy (IMRT) is used to deliver intensity modulated beams using a multileaf collimator (MLC), with the leaves in motion. DMLC-IMRT requires the conversion of a radiation intensity map into a leaf sequence file that controls the movement of the MLC while the beam is on. It is imperative that the intensity map delivered using the leaf sequence file be as close as possible to the intensity map generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. Optimization of the leaf-sequencing algorithm has been the subject of several recent investigations. In this work, we present a systematic study of the optimization of leaf-sequencing algorithms for dynamic multileaf collimator beam delivery and provide rigorous mathematical proofs of optimized leaf sequence settings in terms of monitor unit (MU) efficiency under the most common leaf movement constraints, including the leaf interdigitation constraint. Our analysis shows that leaf sequencing based on unidirectional movement of the MLC leaves is as MU efficient as bi-directional movement of the MLC leaves.

  18. Fuzzy optimization in hydrodynamic analysis of groundwater control systems: Case study of the pumping station "Bezdan 1", Serbia

    Directory of Open Access Journals (Sweden)

    Bajić Dragoljub

    2014-01-01

Full Text Available A groundwater control system was designed to lower the water table and allow the pumping station “Bezdan 1” to be built. Based on a hydrodynamic analysis that suggested three alternative solutions, multicriteria optimization was applied to select the best alternative. The fuzzy analytic hierarchy process method was used, based on triangular fuzzy numbers. An assessment of the various factors that influenced the selection, together with fuzzy optimization calculations, yielded the “weights” of the alternatives, and the best alternative was selected for groundwater control at the site of the pumping station “Bezdan 1”. [Project of the Ministry of Science of the Republic of Serbia, nos. OI-176022, TR-33039 and III-43004]
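A minimal sketch of fuzzy-AHP weighting with triangular fuzzy numbers (TFNs), following Buckley's geometric-mean approach — a common variant of the method named above; the paper's exact operators may differ:

```python
from math import prod

def fuzzy_weights(matrix):
    """matrix[i][j] is a TFN (l, m, u) comparing alternative i to j.
    Returns crisp, normalized weights via fuzzy geometric means and
    centroid defuzzification."""
    n = len(matrix)
    # fuzzy geometric mean of each row, component-wise
    gms = [tuple(prod(tfn[k] for tfn in row) ** (1.0 / n) for k in range(3))
           for row in matrix]
    tl = sum(g[0] for g in gms)
    tm = sum(g[1] for g in gms)
    tu = sum(g[2] for g in gms)
    # fuzzy division by the total: note the reversed (u, m, l) order
    fuzzy_w = [(g[0] / tu, g[1] / tm, g[2] / tl) for g in gms]
    crisp = [(l + m + u) / 3.0 for l, m, u in fuzzy_w]
    s = sum(crisp)
    return [c / s for c in crisp]
```

With degenerate TFNs (l = m = u) the procedure collapses to the crisp geometric-mean AHP, which is a convenient sanity check.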

  19. Analytical and Experimental Study of Residual Stresses in CFRP

    Directory of Open Access Journals (Sweden)

    Chia-Chin Chiang

    2013-01-01

Full Text Available Fiber Bragg grating sensors (FBGs) have been utilized in various engineering and photoelectric fields because of their good environmental tolerance. In this research, the residual stresses of carbon fiber reinforced polymer composites (CFRP) were studied using both experimental and analytical approaches. FBGs were embedded in the middle layers of the CFRP to study the formation of residual stress during the curing process. Finite element analysis was performed using the ABAQUS software to simulate the CFRP curing process. Both the experimental and simulation results showed that the residual stress appears during the cooling process and that the residual stresses can be released when the CFRP is machined to a different shape.

  20. Optimization of solar assisted heat pump systems via a simple analytic approach

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J W

    1980-01-01

    An analytic method for calculating the optimum operating temperature of the collector/storage subsystem in a solar assisted heat pump is presented. A tradeoff exists between rising heat pump coefficient of performance and falling collector efficiency as this temperature is increased, resulting in an optimum temperature whose value increases with increasing efficiency of the auxiliary energy source. Electric resistance is shown to be a poor backup to such systems. A number of options for thermally coupling the system to the ground are analyzed and compared.
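The trade-off described above, rising heat-pump COP versus falling collector efficiency as the collector/storage temperature increases, can be illustrated with a toy model; every coefficient below is a hypothetical assumption, not taken from the paper:

```python
def purchased_cost(t, c_elec=1.0, c_area=0.1, g=0.8,
                   eta0=0.75, loss=0.005, cop0=2.0, cop_slope=0.04):
    """Cost per unit of delivered heat: heat-pump electricity plus an amortized
    collector-area term sized to feed the evaporator.  Linear COP and collector
    efficiency models; all coefficients are illustrative assumptions."""
    cop = cop0 + cop_slope * t           # COP rises with storage temperature t
    eta = eta0 - loss * t / g            # collector efficiency falls with t
    if eta <= 0.0:
        return float("inf")
    area_term = (1.0 - 1.0 / cop) / (eta * g)   # collector area per unit heat
    return c_elec / cop + c_area * area_term

def optimal_storage_temperature():
    """Scan for the interior optimum of the trade-off (degrees C)."""
    return min(range(0, 91), key=purchased_cost)
```

With these assumed coefficients the cost curve falls, bottoms out at an intermediate storage temperature, and rises again, reproducing the qualitative optimum the abstract describes.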

  1. Surveillance test interval optimization

    International Nuclear Information System (INIS)

    Cepin, M.; Mavko, B.

    1995-01-01

Technical specifications have been developed on the basis of deterministic analyses, engineering judgment, and expert opinion. This paper introduces our risk-based approach to surveillance test interval (STI) optimization. The approach consists of three main levels. The first level is the component level, which provides a rough estimate of the optimal STI that can be obtained analytically by differentiating the expression for mean unavailability. The second and third levels give more representative results: they take into account the results of probabilistic risk assessment (PRA), calculated by a personal computer (PC) based code, and are based on system unavailability at the system level and on core damage frequency at the plant level
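At the first (component) level, the classic single-component standby model makes the analytic optimum explicit: mean unavailability is approximately λT/2 + t_r/T, minimized at T* = sqrt(2 t_r/λ). A sketch with illustrative numbers (not the paper's plant data):

```python
from math import sqrt

def mean_unavailability(t_interval, lam=1e-4, t_repair=4.0):
    """First-order mean unavailability of a periodically tested standby
    component: random-failure term lam*T/2 plus test/repair downtime t_r/T.
    lam is the standby failure rate (1/h), t_repair the per-test outage (h)."""
    return lam * t_interval / 2.0 + t_repair / t_interval

def optimal_sti(lam=1e-4, t_repair=4.0):
    """Setting d/dT (lam*T/2 + t_r/T) = 0 gives T* = sqrt(2*t_r/lam)."""
    return sqrt(2.0 * t_repair / lam)
```

For λ = 1e-4 per hour and 4 hours of test downtime this gives T* ≈ 283 h; the unavailability curve is convex, so any deviation from T* increases the mean unavailability.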

  2. Notes on analytical study of holographic superconductors with Lifshitz scaling in external magnetic field

    International Nuclear Information System (INIS)

    Zhao, Zixu; Pan, Qiyuan; Jing, Jiliang

    2014-01-01

We employ the matching method to analytically investigate holographic superconductors with Lifshitz scaling in an external magnetic field. We discuss systematically the conditions restricting the matching method and find that this analytic method is not always adequate for exploring the effect of an external magnetic field on holographic superconductors unless the matching point is chosen in an appropriate range and the dynamical exponent z satisfies the relation z=d−1 or z=d−2. From the analytic treatment, we observe that Lifshitz scaling hinders the formation of the condensate, which backs up the numerical results. Moreover, we study the effect of Lifshitz scaling on the upper critical magnetic field and reproduce the well-known relation obtained from Ginzburg–Landau theory

  3. Geospatial Analytics in Retail Site Selection and Sales Prediction.

    Science.gov (United States)

    Ting, Choo-Yee; Ho, Chiung Ching; Yee, Hui Jia; Matsah, Wan Razali

    2018-03-01

Studies have shown that certain features from geography, demography, trade area, and environment can play a vital role in retail site selection, largely due to the impact they assert on retail performance. Although the relevant features could be elicited by domain experts, determining the optimal feature set can be an intractable and labor-intensive exercise. The challenges center around (1) how to determine the features that are important to a particular retail business and (2) how to estimate retail sales performance at a new location. The challenges become apparent when the features vary across time. In this light, this study proposed a nonintervening approach employing feature selection algorithms followed by sales prediction through similarity-based methods. The prediction results were validated by domain experts. In this study, data sets from different sources were transformed and aggregated before an analysis-ready analytics data set could be obtained. The data sets included data about feature location, population count, property type, education status, and monthly sales from 96 branches of a telecommunication company in Malaysia. The findings suggest that (1) optimal retail performance can only be achieved through fulfillment of specific location features together with the surrounding trade area characteristics and (2) similarity-based methods can provide a solution to retail sales prediction.
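The similarity-based prediction step can be sketched as an inverse-distance-weighted nearest-neighbour estimate; this is a minimal illustration with hypothetical feature vectors, not the study's actual feature set or model:

```python
from math import sqrt

def euclidean(a, b):
    """Distance between two (already normalized) site feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_sales(candidate, branches, k=3):
    """branches: list of (feature_vector, monthly_sales) for known outlets.
    Predict sales at a candidate site as the inverse-distance-weighted average
    of its k most similar branches."""
    ranked = sorted(branches, key=lambda br: euclidean(candidate, br[0]))[:k]
    weights = [1.0 / (euclidean(candidate, feats) + 1e-9) for feats, _ in ranked]
    total = sum(weights)
    return sum(w * sales for w, (_, sales) in zip(weights, ranked)) / total
```

In practice the feature vectors would first be filtered by a feature selection algorithm and scaled, so that no single raw attribute (e.g. population count) dominates the distance.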

  4. Structural Analysis of Composite Laminates using Analytical and Numerical Techniques

    Directory of Open Access Journals (Sweden)

    Sanghi Divya

    2016-01-01

Full Text Available A laminated composite material consists of different layers of matrix and fibres. Its properties can vary greatly with each layer's (or ply's) orientation, material properties and the number of layers itself. The present paper focuses on a novel approach of incorporating an analytical method to arrive at a preliminary ply layup order of a composite laminate, which acts as feeder data for the further detailed analysis done with FEA tools. The equations used in our MATLAB code are based on analytical study and give results that are remarkably close, with a high degree of probability, to the final optimized layup found through extensive FEA analysis. This reduces computing time significantly and saves considerable FEA processing, yielding efficient results quickly. The output of our method also provides the user with the conditions that predict the successive failure sequence of the composite plies, an option that is not even available in popular FEM tools. The predicted results are further verified by testing the laminates in the laboratory, and the results are found to be in good agreement.
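A feeder calculation of the kind described, computing ply-level stiffnesses and assembling the laminate's in-plane stiffness matrix under classical laminate theory, can be sketched as follows (material constants in the usage note are hypothetical; the authors' MATLAB code is not reproduced here):

```python
from math import cos, sin, radians

def q_reduced(e1, e2, g12, v12):
    """Reduced (plane-stress) stiffnesses Q11, Q22, Q12, Q66 of one
    orthotropic ply from its engineering constants."""
    v21 = v12 * e2 / e1
    d = 1.0 - v12 * v21
    return e1 / d, e2 / d, v12 * e2 / d, g12

def a_matrix(angles_deg, t_ply, e1, e2, g12, v12):
    """In-plane laminate stiffness matrix A (3x3) for plies of equal thickness
    t_ply stacked at the given angles."""
    q11, q22, q12, q66 = q_reduced(e1, e2, g12, v12)
    A = [[0.0] * 3 for _ in range(3)]
    for ang in angles_deg:
        c, s = cos(radians(ang)), sin(radians(ang))
        # standard transformed reduced stiffnesses of classical laminate theory
        qb = {
            (0, 0): q11*c**4 + 2*(q12 + 2*q66)*c**2*s**2 + q22*s**4,
            (1, 1): q11*s**4 + 2*(q12 + 2*q66)*c**2*s**2 + q22*c**4,
            (0, 1): (q11 + q22 - 4*q66)*c**2*s**2 + q12*(c**4 + s**4),
            (2, 2): (q11 + q22 - 2*q12 - 2*q66)*c**2*s**2 + q66*(c**4 + s**4),
            (0, 2): (q11 - q12 - 2*q66)*c**3*s - (q22 - q12 - 2*q66)*c*s**3,
            (1, 2): (q11 - q12 - 2*q66)*c*s**3 - (q22 - q12 - 2*q66)*c**3*s,
        }
        for (i, j), val in qb.items():
            A[i][j] += val * t_ply
    for i in range(3):
        for j in range(i):
            A[i][j] = A[j][i]    # A is symmetric
    return A
```

For a single 0° ply A11 reduces to Q11 times the ply thickness, and a balanced ±45° pair has zero shear-extension coupling (A16 = A26 = 0), both of which serve as quick checks on an implementation like this.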

  5. Incorporating technology-based learning tools into teaching and learning of optimization problems

    Science.gov (United States)

    Yang, Irene

    2014-07-01

The traditional approach to teaching optimization problems in calculus emphasizes teaching students an analytical approach through a series of procedural steps. However, optimization normally involves solving real-life problems, and most students fail to translate the problems into mathematical models and have difficulty visualizing the underlying concepts. As educators, it is essential to embed technology in suitable content areas to engage students in the construction of meaningful learning by creating a technology-based learning environment. This paper presents applications of a technology-based learning tool in designing optimization learning activities, with illustrative examples, and addresses the challenges in the implementation of technology in the teaching and learning of optimization. The suggested activities allow educators the flexibility to modify their teaching strategies and apply technology to accommodate different levels of study for the topic of optimization. Hence, this provides great potential for a wide range of learners to enhance their understanding of the concept of optimization.

  6. Criteria for analysis and optimization of longitudinal fins with convective tip

    International Nuclear Information System (INIS)

    Gomes, E.S.

    1983-01-01

The problem of heat transfer in longitudinal fins with the main geometries used in convective heat transfer equipment is analyzed. The energy equation is solved analytically for several fin geometries, with a one-dimensional formulation, through the use of the convective heat transfer coefficient. The fin optimization problem is approached analytically, yielding the parameters which allow the maximum heat transfer for each particular amount of material invested in the fin. The insulated-tip model suggests the use of fins, and their optimization, for any fin Biot number. The convective-tip model allows us to determine when it is advantageous or disadvantageous to use fins, and when fin optimization is possible, according to the value of the Biot number and of a convection parameter at the fin tip. (Author) [pt
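For a straight fin of constant cross section, the insulated-tip and convective-tip solutions referred to above have standard closed forms; a sketch (the parameter values in the usage note are illustrative, not the paper's cases):

```python
from math import sqrt, sinh, cosh, tanh

def fin_heat_rate(h, k, length, perim, area, theta_b, tip="convective"):
    """Heat dissipated by a straight longitudinal fin of constant cross
    section.  h: convection coefficient, k: fin conductivity, perim/area:
    cross-section perimeter and area, theta_b: base-to-ambient temperature
    excess.  tip selects the insulated-tip or convective-tip model."""
    m = sqrt(h * perim / (k * area))
    big_m = sqrt(h * perim * k * area) * theta_b
    if tip == "insulated":
        return big_m * tanh(m * length)
    r = h / (m * k)                      # convective-tip parameter
    return big_m * (sinh(m * length) + r * cosh(m * length)) / \
                   (cosh(m * length) + r * sinh(m * length))
```

For a thin plate fin (small tip Biot number) the two models nearly coincide, with the convective tip dissipating slightly more heat; as the tip convection parameter grows, the gap widens, which is the regime distinction the abstract points to.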

  7. Can neutral analytes be concentrated by transient isotachophoresis in micellar electrokinetic chromatography and how much?

    Science.gov (United States)

    Matczuk, Magdalena; Foteeva, Lidia S; Jarosz, Maciej; Galanski, Markus; Keppler, Bernhard K; Hirokawa, Takeshi; Timerbaev, Andrei R

    2014-06-06

Transient isotachophoresis (tITP) is a versatile sample preconcentration technique that uses ITP to focus electrically charged analytes at the initial stage of CE analysis. However, according to the ruling principle of tITP, uncharged analytes are beyond its capacity when separated and detected by micellar electrokinetic chromatography (MEKC). On the other hand, when it is the charged micelles that undergo tITP focusing, one can anticipate a concentration effect resulting from the formation of a transient micellar stack at the moving sample/background electrolyte (BGE) boundary, which increasingly accumulates the analytes. This work expands the enrichment potential of tITP for MEKC by demonstrating the quantitative analysis of uncharged metal-based drugs from highly saline samples, introducing anionic surfactants and buffer (terminating) co-ions of different mobility and concentration into the BGE solution to optimize performance. Metallodrugs of assorted lipophilicity were chosen so as to explore whether their varying affinity toward micelles plays a role. In addition to altering the sample and BGE composition, the detection capability was optimized by fine-tuning operational variables such as sample volume, separation voltage and pressure, etc. The results of the optimization trials shed light on the mechanism of micellar tITP and enable effective determination of the selected drugs in human urine, with practical limits of detection using a conventional UV detector. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Modelling a flows in supply chain with analytical models: Case of a chemical industry

    Science.gov (United States)

    Benhida, Khalid; Azougagh, Yassine; Elfezazi, Said

    2016-02-01

This study addresses the modelling of logistics flows in a supply chain composed of production sites and a logistics platform. The contribution of this research is to develop an analytical model (an integrated linear programming model), based on a case study of a real company operating in the phosphate field, that considers the various constraints in this supply chain in order to resolve planning problems for better decision-making. The objective of this model is to determine the optimal quantities of the different products to route to and from the various entities in the supply chain studied.

  9. Numerical optimization using flow equations

    Science.gov (United States)

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.

  10. Nuclear steam generator tube to tubesheet joint optimization

    International Nuclear Information System (INIS)

    McGregor, Rod

    1999-01-01

Industry-wide problems with stress corrosion cracking in the nuclear steam generator tube-to-tubesheet joint have led to costly repairs, plugging, and replacement of entire vessels. To improve corrosion resistance, new and replacement steam generator developments typically employ the hydraulic tube expansion process (full depth) to minimize tensile residual stresses and cold work at the critical transition zone between the expanded and unexpanded tube. These variables have undergone detailed study using specialized X-ray diffraction and analytical techniques. Responding to increased demands from nuclear steam generator operators and manufacturers to credit the leak-tightness and strength contributions of the hydraulic expansion, various experimental tasks with complementary analytical modelling were applied to improve understanding and control of tube-to-hole contact pressure. With careful consideration of residual stress impact, design for strength/leak-tightness optimization addresses: experimentally determined minimum contact pressure levels necessary to preclude incipient leakage into the tube/hole interface; the degradation of contact pressure at surrounding expansions caused by the sequential expansion process; and the transient and permanent contact pressure variation associated with tubesheet hole dilation during steam generator operation. An experimental/analytical simulation has been developed to reproduce cyclic steam generator operating strains on the tubesheet and expanded joint. Leak tightness and pullout tests were performed during and following simulated steam generator operating transients. The overall development has provided a comprehensive understanding of the fabrication and in-service mechanics of hydraulically expanded joints. Based on this, the hydraulic expansion process can be optimized with respect to the critical residual stress/cold work and the strength/leakage barrier criteria. (author)

  11. Structural Design Optimization of Doubly-Fed Induction Generators Using GeneratorSE

    Energy Technology Data Exchange (ETDEWEB)

    Sethuraman, Latha [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Fingersh, Lee J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Arthurs, Claire [Georgia Institute of Technology

    2017-11-13

    A wind turbine with a larger rotor swept area can generate more electricity, however, this increases costs disproportionately for manufacturing, transportation, and installation. This poster presents analytical models for optimizing doubly-fed induction generators (DFIGs), with the objective of reducing the costs and mass of wind turbine drivetrains. The structural design for the induction machine includes models for the casing, stator, rotor, and high-speed shaft developed within the DFIG module in the National Renewable Energy Laboratory's wind turbine sizing tool, GeneratorSE. The mechanical integrity of the machine is verified by examining stresses, structural deflections, and modal properties. The optimization results are then validated using finite element analysis (FEA). The results suggest that our analytical model correlates with the FEA in some areas, such as radial deflection, differing by less than 20 percent. But the analytical model requires further development for axial deflections, torsional deflections, and stress calculations.

  12. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    Science.gov (United States)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use

  13. Measuring myokines with cardiovascular functions: pre-analytical variables affecting the analytical output.

    Science.gov (United States)

    Lombardi, Giovanni; Sansoni, Veronica; Banfi, Giuseppe

    2017-08-01

In the last few years, a growing number of molecules have been associated with an endocrine function of the skeletal muscle. Circulating myokine levels, in turn, have been associated with several pathophysiological conditions, including cardiovascular ones. However, data from different studies are often not completely comparable or are even discordant. This is due, at least in part, to the whole set of conditions related to the preparation of the patient prior to blood sampling, the blood sampling procedure, and sample processing and/or storage. This entire process constitutes the pre-analytical phase. The importance of the pre-analytical phase is often underestimated; yet in routine diagnostics, 70% of errors occur in this phase. Moreover, errors made during the pre-analytical phase carry over into the analytical phase and affect the final output. In research, for example, when samples are collected over a long time and by different laboratories, a standardized procedure for sample collection and correct sample storage are essential. In this review, we discuss the pre-analytical variables potentially affecting the measurement of myokines with cardiovascular functions.

  14. Web Analytics

    Science.gov (United States)

    EPA’s Web Analytics Program collects, analyzes, and provides reports on traffic, quality assurance, and customer satisfaction metrics for EPA’s website. The program uses a variety of analytics tools, including Google Analytics and CrazyEgg.

  15. A functional-analytic method for the study of difference equations

    Directory of Open Access Journals (Sweden)

    Siafarikas Panayiotis D

    2004-01-01

    Full Text Available We give a generalization of a recently developed functional-analytic method for studying linear and nonlinear, ordinary and partial, difference equations in the ℓp spaces, p∈ℕ. The method is illustrated by two examples: a nonlinear ordinary difference equation known as the Putnam equation, and a linear partial difference equation in three variables describing the discrete Newton law of cooling in three dimensions.

  16. Optimum shape design of incompressible hyperelastic structures with analytical sensitivity analysis

    International Nuclear Information System (INIS)

    Jarraya, A.; Wali, M.; Dammark, F.

    2014-01-01

    This paper is focused on the structural shape optimization of incompressible hyperelastic structures. An analytical sensitivity is developed for rubber-like materials. The whole shape optimization process is carried out by coupling a closed geometric shape in R², with boundaries defined by B-spline curves, with exact sensitivity analysis and a mathematical programming method (SQP: sequential quadratic programming). The design variables are the control-point coordinates. The objective is to minimize the von Mises stress, subject to the constraint that the total material volume of the structure remains constant. In order to validate the exact Jacobian method, the sensitivity calculation is performed both numerically, by an efficient finite difference scheme, and by the exact Jacobian method. Numerical optimization examples are presented for elastic and hyperelastic materials using the proposed method.

  17. Directed transport by surface chemical potential gradients for enhancing analyte collection in nanoscale sensors.

    Science.gov (United States)

    Sitt, Amit; Hess, Henry

    2015-05-13

    Nanoscale detectors hold great promise for single molecule detection and the analysis of small volumes of dilute samples. However, the probability of an analyte reaching the nanosensor in a dilute solution is extremely low due to the sensor's small size. Here, we examine the use of a chemical potential gradient along a surface to accelerate analyte capture by nanoscale sensors. Utilizing a simple model for transport induced by surface binding energy gradients, we study the effect of the gradient on the efficiency of collecting nanoparticles and single and double stranded DNA. The results indicate that chemical potential gradients along a surface can lead to an acceleration of analyte capture by several orders of magnitude compared to direct collection from the solution. The improvement in collection is limited to a relatively narrow window of gradient slopes, and its extent strongly depends on the size of the gradient patch. Our model allows the optimization of gradient layouts and sheds light on the fundamental characteristics of chemical potential gradient induced transport.
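
    The acceleration described above can be illustrated with a toy one-dimensional walk: a surface binding-energy gradient that biases each step toward the sensor sharply reduces the mean capture time relative to unbiased surface diffusion. This is a minimal sketch with invented parameters, not the authors' transport model:

```python
import random

def steps_to_capture(n_sites, bias, rng):
    """First-passage time of a walker starting at site n_sites,
    stepping toward the sensor at site 0 with probability `bias`
    (bias = 0.5 is unbiased diffusion; a reflecting wall sits at n_sites)."""
    pos, steps = n_sites, 0
    while pos > 0:
        pos += -1 if rng.random() < bias else 1
        pos = min(pos, n_sites)   # reflecting outer boundary
        steps += 1
    return steps

rng = random.Random(1)
diffusive = sum(steps_to_capture(20, 0.5, rng) for _ in range(200)) / 200
directed  = sum(steps_to_capture(20, 0.7, rng) for _ in range(200)) / 200
```

    With a modest bias the mean capture time drops from the diffusive ~N² scaling toward the ballistic ~N scaling, which is the qualitative effect the gradient exploits.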

  18. A semi-analytical study of positive corona discharge in wire–plane electrode configuration

    International Nuclear Information System (INIS)

    Yanallah, K; Pontiga, F; Chen, J H

    2013-01-01

    Wire-to-plane positive corona discharge in air has been studied using an analytical model of two species (electrons and positive ions). The spatial distributions of electric field and charged species are obtained by integrating Gauss's law and the continuity equations of species along the Laplacian field lines. The experimental values of corona current intensity and applied voltage, together with Warburg's law, have been used to formulate the boundary condition for the electron density on the corona wire. To test the accuracy of the model, the approximate electric field distribution has been compared with the exact numerical solution obtained from a finite element analysis. A parametrical study of wire-to-plane corona discharge has then been undertaken using the approximate semi-analytical solutions. Thus, the spatial distributions of electric field and charged particles have been computed for different values of the gas pressure, wire radius and electrode separation. Also, the two dimensional distribution of ozone density has been obtained using a simplified plasma chemistry model. The approximate semi-analytical solutions can be evaluated in a negligible computational time, yet provide precise estimates of corona discharge variables. (paper)

  19. A semi-analytical study of positive corona discharge in wire-plane electrode configuration

    Science.gov (United States)

    Yanallah, K.; Pontiga, F.; Chen, J. H.

    2013-08-01

    Wire-to-plane positive corona discharge in air has been studied using an analytical model of two species (electrons and positive ions). The spatial distributions of electric field and charged species are obtained by integrating Gauss's law and the continuity equations of species along the Laplacian field lines. The experimental values of corona current intensity and applied voltage, together with Warburg's law, have been used to formulate the boundary condition for the electron density on the corona wire. To test the accuracy of the model, the approximate electric field distribution has been compared with the exact numerical solution obtained from a finite element analysis. A parametrical study of wire-to-plane corona discharge has then been undertaken using the approximate semi-analytical solutions. Thus, the spatial distributions of electric field and charged particles have been computed for different values of the gas pressure, wire radius and electrode separation. Also, the two dimensional distribution of ozone density has been obtained using a simplified plasma chemistry model. The approximate semi-analytical solutions can be evaluated in a negligible computational time, yet provide precise estimates of corona discharge variables.

  20. Analytic solution of magnetic induction distribution of ideal hollow spherical field sources

    Science.gov (United States)

    Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min

    2017-12-01

    The Halbach-type hollow spherical permanent magnet arrays (HSPMA) are compact, energy-efficient field sources capable of producing multi-tesla fields in the cavity of the array, and they have attracted intense interest in many practical applications. Here, we present analytical solutions for the magnetic induction of the ideal HSPMA in all of space: outside the array, within its cavity, and in the interior of the magnet. We obtain the solutions using the concept of magnetic charge to solve Poisson's and Laplace's equations for the HSPMA. Using these analytical field expressions inside the material, a scalar demagnetization function is defined to approximately indicate the regions of magnetization reversal, partial demagnetization, and inverse magnetic saturation. The analytical field solution provides deeper insight into the nature of the HSPMA and offers guidance in designing an optimized one.

  1. Optimally frugal foraging

    Science.gov (United States)

    Bénichou, O.; Bhat, U.; Krapivsky, P. L.; Redner, S.

    2018-02-01

    We introduce the frugal foraging model in which a forager performs a discrete-time random walk on a lattice in which each site initially contains S food units. The forager metabolizes one unit of food at each step and starves to death when it last ate S steps in the past. Whenever the forager eats, it consumes all food at its current site and this site remains empty forever (no food replenishment). The crucial property of the forager is that it is frugal and eats only when encountering food within at most k steps of starvation. We compute the average lifetime analytically as a function of the frugality threshold and show that there exists an optimal strategy, namely, an optimal frugality threshold k* that maximizes the forager lifetime.
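
    A direct Monte Carlo version of this model is easy to write down. The sketch below, with an assumed food capacity S = 10 and a small number of trials, estimates the mean lifetime for a few frugality thresholds k:

```python
import random

def forager_lifetime(S, k, rng, max_steps=100000):
    """One trajectory of a frugal forager on a 1-D lattice.
    Every site starts with food; an eaten site stays empty forever.
    The forager eats only when within k steps of starvation,
    i.e. when hunger >= S - k, and starves when hunger reaches S."""
    eaten = {}                   # site -> True once its food is consumed
    pos, hunger = 0, 0           # hunger = steps since last meal
    for step in range(1, max_steps + 1):
        pos += rng.choice((-1, 1))
        if not eaten.get(pos, False) and hunger >= S - k:
            eaten[pos] = True    # frugal rule satisfied: eat here
            hunger = 0
        else:
            hunger += 1
        if hunger >= S:
            return step          # starved
    return max_steps

rng = random.Random(7)
S = 10
avg = {k: sum(forager_lifetime(S, k, rng) for _ in range(200)) / 200
       for k in (1, 3, 6, 10)}
```

    Sweeping k in `avg` reproduces the qualitative picture of the paper: lifetimes depend nonmonotonically on the frugality threshold, so an intermediate k* can beat the greedy strategy k = S (though a small simulation like this only hints at, rather than proves, the optimum).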

  2. Analytic network process (ANP) approach for product mix planning in railway industry

    Directory of Open Access Journals (Sweden)

    Hadi Pazoki Toroudi

    2016-08-01

    Full Text Available Given the competitive environment in the global market in recent years, organizations need to plan for increased profitability and optimized performance. Planning an appropriate product mix plays an essential role in the success of most production units. This paper applies the analytic network process (ANP) approach to product mix planning for a parts supplier in Iran. The proposed method uses four criteria: cost of production, sales figures, supply of raw materials, and quality of products. In addition, the study proposes different sets of products as alternatives for production planning. The preliminary results indicate that the proposed approach could increase productivity significantly.

  3. Big data optimization recent developments and challenges

    CERN Document Server

    2016-01-01

    The main objective of this book is to provide the necessary background to work with big data by introducing novel optimization algorithms and codes capable of working in the big data setting, as well as applications in big data optimization, for interested academics and practitioners, and to benefit society, industry, academia, and government. Presenting applications in a variety of industries, this book will be useful for researchers aiming to analyze large-scale data. Several optimization algorithms for big data, including convergent parallel algorithms, the limited memory bundle algorithm, the diagonal bundle method, network analytics, and many more, are explored in this book.

  4. Improving acute kidney injury diagnostics using predictive analytics.

    Science.gov (United States)

    Basu, Rajit K; Gist, Katja; Wheeler, Derek S

    2015-12-01

    Acute kidney injury (AKI) is a multifactorial syndrome affecting an alarming proportion of hospitalized patients. Although early recognition may expedite management, the ability to identify patients at risk and those suffering real-time injury is inconsistent. This review summarizes recent reports describing advancements in the area of AKI epidemiology, specifically focusing on risk scoring and predictive analytics. In the critical care population, the primary factors limiting prediction models are an inability to properly account for patient heterogeneity and underperforming metrics used to assess kidney function. Severity-of-illness scores demonstrate limited AKI predictive performance. Recent evidence suggests traditional methods for detecting AKI may be leveraged and ultimately replaced by newer, more sophisticated analytical tools capable of prediction and identification: risk stratification, novel AKI biomarkers, and clinical information systems. Additionally, the utility of novel biomarkers may be optimized through targeting using patient context, which may provide more granular information about the injury phenotype. Finally, manipulation of the electronic health record allows for real-time recognition of injury. Integrating a high-functioning clinical information system with risk stratification methodology and novel biomarkers yields a predictive analytic model for AKI diagnostics.

  5. Analytical Study of the Effect of the System Geometry on Photon Sensitivity and Depth of Interaction of Positron Emission Mammography

    Directory of Open Access Journals (Sweden)

    Pablo Aguiar

    2012-01-01

    Full Text Available Positron emission mammography (PEM) cameras are novel dedicated PET systems optimized to image the breast. For these cameras it is essential to achieve an optimum trade-off between sensitivity and spatial resolution, and therefore the main challenge for the novel cameras is to improve sensitivity without degrading spatial resolution. We carry out an analytical study of the effect of different detector geometries on the photon sensitivity and on the angle of incidence of the detected photons, which is related to the depth-of-interaction (DOI) effect and therefore to the intrinsic spatial resolution. To this end, dual-head detectors were compared with box and various polygon-detector configurations. Our results showed higher sensitivity and uniformity for box and polygon-detector configurations than for dual-head cameras. Thus, the optimal configuration in terms of sensitivity is a PEM scanner based on a polygon of twelve (dodecagon) or more detectors. We have shown that this configuration is clearly superior to dual-head detectors and slightly better than box, octagon, and hexagon detectors. Nevertheless, DOI effects are larger for this configuration than for dual-head and box scanners, and therefore an accurate compensation for this effect is required.

  6. Analytical free energy gradient for the molecular Ornstein-Zernike self-consistent-field method

    Directory of Open Access Journals (Sweden)

    N.Yoshida

    2007-09-01

    Full Text Available An analytical free energy gradient for the molecular Ornstein-Zernike self-consistent-field (MOZ-SCF) method is presented. MOZ-SCF theory is one of the theories for treating solvent effects on the solute electronic structure in solution [Yoshida N. et al., J. Chem. Phys., 2000, 113, 4974]. The molecular geometries of water, formaldehyde, acetonitrile, and acetone in water are optimized with the analytical energy gradient formula. The results are compared with those from the polarizable continuum model (PCM), the reference interaction site model (RISM-SCF), and the three-dimensional (3D) RISM-SCF.

  7. Design optimization of piezoresistive cantilevers for force sensing in air and water

    Science.gov (United States)

    Doll, Joseph C.; Park, Sung-Jin; Pruitt, Beth L.

    2009-01-01

    Piezoresistive cantilevers fabricated from doped silicon or metal films are commonly used for force, topography, and chemical sensing at the micro- and macroscales. Proper design is required to optimize the achievable resolution by maximizing sensitivity while simultaneously minimizing the integrated noise over the bandwidth of interest. Existing analytical design methods are insufficient for modeling complex dopant profiles, design constraints, and nonlinear phenomena such as damping in fluid. Here we present an optimization method based on an analytical piezoresistive cantilever model. We use an existing iterative optimizer to minimize a performance goal, such as the minimum detectable force. The design tool is available as open source software. Optimal cantilever design and performance are found to depend strongly on the measurement bandwidth and the constraints applied. We discuss results for silicon piezoresistors fabricated by epitaxy and diffusion, but the method can be applied to any dopant profile or material that can be modeled in a similar fashion, or extended to other microelectromechanical systems. PMID:19865512
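
    The kind of trade-off such an optimizer resolves can be illustrated with a toy objective: a minimum-detectable-force proxy that diverges at small thickness (falling sensitivity) and grows at large thickness (rising noise), minimized here by a stdlib-only golden-section search. The functional form and coefficients are invented for illustration and are not the paper's cantilever model:

```python
def golden_min(f, lo, hi, tol=1e-8):
    """Golden-section search for the minimum of a unimodal function on [lo, hi]."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

# toy minimum-detectable-force proxy in thickness t:
# sensitivity term ~ 2/t^2 versus noise term ~ 3t
mdf = lambda t: 2.0 / t ** 2 + 3.0 * t
t_opt = golden_min(mdf, 0.1, 10.0)   # analytic optimum: (4/3)**(1/3)
```

    Setting the derivative of the toy objective to zero gives t³ = 4/3, so the search can be checked against the closed form.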

  8. Physical optimization of afterloading techniques

    International Nuclear Information System (INIS)

    Anderson, L.L.

    1985-01-01

    Physical optimization in brachytherapy refers to the process of determining the radioactive-source configuration which yields a desired dose distribution. In manually afterloaded intracavitary therapy for cervix cancer, discrete source strengths are selected iteratively to minimize the sum of squares of differences between trial and target doses. For remote afterloading with a stepping-source device, optimized (continuously variable) dwell times are obtained, either iteratively or analytically, to give least squares approximations to dose at an arbitrary number of points; in vaginal irradiation for endometrial cancer, the objective has included dose uniformity at applicator surface points in addition to a tapered contour of target dose at depth. For template-guided interstitial implants, seed placement at rectangular-grid mesh points may be least squares optimized within target volumes defined by computerized tomography; effective optimization is possible only for (uniform) seed strength high enough that the desired average peripheral dose is achieved with a significant fraction of empty seed locations. (orig.) [de
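
    The dwell-time step described above can be illustrated as an ordinary least-squares problem: the dose at each prescription point is linear in the dwell times, so optimal times follow from the normal equations, with negative times clipped to zero. The two-position, three-point numbers below are made up for illustration:

```python
def solve_dwell_times(A, target):
    """Least-squares dwell times t for dose(point i) = sum_j A[i][j] * t[j],
    solved here for two dwell positions via the 2x2 normal equations
    (A^T A) t = A^T d; negative times are clipped to zero."""
    m = [[sum(A[i][r] * A[i][c] for i in range(len(A))) for c in range(2)]
         for r in range(2)]
    rhs = [sum(A[i][r] * target[i] for i in range(len(A))) for r in range(2)]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    t0 = (rhs[0] * m[1][1] - rhs[1] * m[0][1]) / det
    t1 = (m[0][0] * rhs[1] - m[1][0] * rhs[0]) / det
    return [max(0.0, t0), max(0.0, t1)]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # dose per unit dwell time (toy)
times = solve_dwell_times(A, [2.0, 3.0, 5.0])
```

    Clinical optimizers add many more dwell positions and constraints, but the core "least squares approximation to dose at an arbitrary number of points" is exactly this linear algebra.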

  9. Multiobjective Optimization Involving Quadratic Functions

    Directory of Open Access Journals (Sweden)

    Oscar Brito Augusto

    2014-01-01

    Full Text Available Multiobjective optimization is nowadays a watchword in engineering projects. Although the idea involved is simple, implementing a procedure to solve a general problem is not an easy task. Evolutionary algorithms are widespread as a satisfactory technique for finding a candidate set for the solution; usually they supply a discrete picture of the Pareto front even when this front is continuous. In this paper we propose three methods for solving unconstrained multiobjective optimization problems involving quadratic functions. In the first, for biobjective optimization defined in the two-dimensional space, a continuous Pareto set is found analytically. In the second, applicable to multiobjective optimization, a condition test is proposed to check whether a point in the decision space is Pareto optimal or not; and in the third, with functions defined in n-dimensional space, a direct noniterative algorithm is proposed to find the Pareto set. Simple problems highlight the suitability of the proposed methods.
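
    The "condition test" idea can be made concrete for the simplest quadratic case, f1 = |x−p|² and f2 = |x−q|² in the plane: a point is Pareto optimal exactly when some convex combination of the gradients vanishes, i.e. when it lies on the segment joining the two minimizers. This is an illustrative special case, not the paper's general algorithm:

```python
def is_pareto_point(x, p, q, tol=1e-9):
    """Pareto test for f1 = |x-p|^2, f2 = |x-q|^2 with p != q.
    Stationarity w*grad(f1) + (1-w)*grad(f2) = 0 for some w in [0,1]
    holds exactly when x lies on the segment between p and q."""
    d = [qi - pi for pi, qi in zip(p, q)]
    r = [xi - pi for pi, xi in zip(p, x)]
    cross = r[0] * d[1] - r[1] * d[0]      # zero iff x is colinear with p, q
    dd = d[0] * d[0] + d[1] * d[1]
    s = (r[0] * d[0] + r[1] * d[1]) / dd   # position along the segment
    return abs(cross) < tol and -tol <= s <= 1 + tol
```

    For p = (0, 0) and q = (2, 0), the midpoint (1, 0) passes the test while points off the segment fail it, matching the continuous Pareto set found analytically in the biobjective case.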

  10. Time-optimal thermalization of single-mode Gaussian states

    Science.gov (United States)

    Carlini, Alberto; Mari, Andrea; Giovannetti, Vittorio

    2014-11-01

    We consider the problem of time-optimal control of a continuous bosonic quantum system subject to the action of a Markovian dissipation. In particular, we consider the case of a one-mode Gaussian quantum system prepared in an arbitrary initial state which relaxes to the steady state due to the action of the dissipative channel. We assume that the unitary part of the dynamics is represented by Gaussian operations which preserve the Gaussian nature of the quantum state, i.e., arbitrary phase rotations, bounded squeezing, and unlimited displacements. In the ideal ansatz of unconstrained quantum control (i.e., when the unitary phase rotations, squeezing, and displacements of the mode can be performed instantaneously), we study how the control can be optimized to speed up the relaxation towards the fixed point of the dynamics, and we analytically derive the optimal relaxation time. Our model has potentially interesting applications to the control of modes of electromagnetic radiation and of trapped levitated nanospheres.

  11. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single degree of- freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  12. Analytical study of zirconium and hafnium α-hydroxy carboxylates

    International Nuclear Information System (INIS)

    Terra, V.R.

    1991-01-01

    The analytical study of zirconium and hafnium α-hydroxy carboxylates was described. For this purpose dl-mandelic, dl-p-bromo mandelic, dl-2-naphthyl glycolic, and benzilic acids were prepared. These were used in conjunction with glycolic, dl-lactic, dl-2-hydroxy isovaleric, dl-2-hydroxy hexanoic, and dl-2-hydroxy dodecanoic acids in order to synthesize the zirconium(IV) and hafnium(IV) tetrakis(α-hydroxy carboxylates). The compounds were characterized by melting point determination, infrared spectroscopy, thermogravimetric analysis, calcination to oxides and X-ray diffractometry by the powder method. (C.G.C)

  13. Analytic scattering kernels for neutron thermalization studies

    International Nuclear Information System (INIS)

    Sears, V.F.

    1990-01-01

    Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H₂/D₂ mixtures. The total scattering cross sections are also calculated and compared with available experimental results

  14. Setting analytical performance specifications based on outcome studies - is it possible?

    NARCIS (Netherlands)

    Horvath, Andrea Rita; Bossuyt, Patrick M. M.; Sandberg, Sverre; John, Andrew St; Monaghan, Phillip J.; Verhagen-Kamerbeek, Wilma D. J.; Lennartz, Lieselotte; Cobbaert, Christa M.; Ebert, Christoph; Lord, Sarah J.

    2015-01-01

    The 1st Strategic Conference of the European Federation of Clinical Chemistry and Laboratory Medicine proposed a simplified hierarchy for setting analytical performance specifications (APS). The top two levels of the 1999 Stockholm hierarchy, i.e., evaluation of the effect of analytical performance

  15. A study of optimization techniques in HDR brachytherapy for the prostate

    Science.gov (United States)

    Pokharel, Ghana Shyam

    Several studies carried out thus far favor dose escalation to the prostate gland for better local control of the disease, but the optimal way of delivering higher doses of radiation therapy to the prostate without harming neighboring critical structures is still debated. In this study, we propose that real-time high-dose-rate (HDR) brachytherapy with highly efficient and effective optimization could be an alternative means of precisely delivering such higher doses. This delivery approach eliminates critical issues, such as treatment setup uncertainties and target localization, found in external beam radiation therapy. Likewise, dosimetry in HDR brachytherapy is not influenced by organ edema or potential source migration, as in permanent interstitial implants. Moreover, recent reports of radiobiological parameters further strengthen the argument for using hypofractionated HDR brachytherapy in the management of prostate cancer. Firstly, we studied the essential features and requirements of a real-time HDR brachytherapy treatment planning system. Automated catheter reconstruction with fast editing tools, a fast yet accurate dose engine, and robust, fast optimization and evaluation engines are among the essential requirements for such procedures. Moreover, in most of the cases we performed, treatment plan optimization took a significant fraction of the overall procedure time, so making treatment plan optimization automatic or semi-automatic, with sufficient speed and accuracy, was the goal of the remaining part of the project. Secondly, we studied the role of the optimization function and constraints in the overall quality of the optimized plan. We have studied a gradient-based deterministic algorithm with dose-volume histogram (DVH) and more conventional variance-based objective functions for optimization. In this optimization strategy, the relative weight of a particular objective in the aggregate objective function signifies its importance with respect to the other objectives

  16. Separation of very hydrophobic analytes by micellar electrokinetic chromatography IV. Modeling of the effective electrophoretic mobility from carbon number equivalents and octanol-water partition coefficients.

    Science.gov (United States)

    Huhn, Carolin; Pyell, Ute

    2008-07-11

    It is investigated whether those relationships derived within an optimization scheme developed previously to optimize separations in micellar electrokinetic chromatography can be used to model effective electrophoretic mobilities of analytes strongly differing in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.

  17. Optimal control of a waste water cleaning plant

    Directory of Open Access Journals (Sweden)

    Ellina V. Grigorieva

    2010-09-01

    Full Text Available In this work, a model of a waste water treatment plant is investigated. The model is described by a nonlinear system of two differential equations with one bounded control. An optimal control problem of minimizing concentration of the polluted water at the terminal time T is stated and solved analytically with the use of the Pontryagin Maximum Principle. Dependence of the optimal solution on the initial conditions is established. Computer simulations of a model of an industrial waste water treatment plant show the advantage of using our optimal strategy. Possible applications are discussed.
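
    The paper's two-state model and its Pontryagin-based solution are not reproduced here, but the flavor of the problem can be sketched with a toy one-state plant, dx/dt = −u(t)·x + r, with bounded treatment effort u ∈ [0, 1] and a constant pollutant inflow r; for minimizing terminal concentration, running at the bound u ≡ 1 is (unsurprisingly) best. All dynamics and parameters below are invented for illustration:

```python
def terminal_concentration(u, x0=5.0, T=10.0, n=1000):
    """Euler integration of a toy cleaning model dx/dt = -u(t)*x + r,
    where u(t) in [0, 1] is the treatment effort and r a constant inflow.
    Returns the pollutant concentration at the terminal time T."""
    r, dt = 0.2, T / n
    x = x0
    for i in range(n):
        x += dt * (-u(i * dt) * x + r)
    return x

full = terminal_concentration(lambda t: 1.0)   # maximal cleaning effort
half = terminal_concentration(lambda t: 0.5)   # half effort, for comparison
```

    In the paper's richer model the Maximum Principle determines where the optimal control sits on its bounds and where it switches, which this single-state toy cannot exhibit.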

  18. Core optimization studies at JEN-Spain

    International Nuclear Information System (INIS)

    Gomez Alonso, M.

    1983-01-01

    The JEN-1 is a 3-MW reactor which uses flat-plate fuel elements. It was originally fueled with 20%-enriched uranium but more recently with 90%-enriched fuel. It now appears that it will have to be converted back to using 20%- enriched fuel. Progress is presently being made in fuel fabrication. Plates with meat thicknesses of up to 1.5 mm have been fabricated. Plates are being tested with 40 wt % uranium in the fuel meat. Progress is also being made in reactor design in collaboration with atomic energy commissions of other countries for swimming pool reactors being designed or under construction in Chile, Ecuador, and Spain itself. The design studies address core optimization, safety analysis report updating, irradiation facilities, etc. Core optimization is specifically addressed in this paper. A common swimming-pool-type reactor such as the JEN-1 served as an example. The philosophy adopted in this study is not to try to match the high enrichment core, but rather to treat the design as new and try to optimize it using simplified neutronic/thermal hydraulic/economic models. This philosophy appears to be somewhat original. As many as possible of the fuel parameters are constrained to remain constant

  19. The Role of Nanoparticle Design in Determining Analytical Performance of Lateral Flow Immunoassays.

    Science.gov (United States)

    Zhan, Li; Guo, Shuang-Zhuang; Song, Fayi; Gong, Yan; Xu, Feng; Boulware, David R; McAlpine, Michael C; Chan, Warren C W; Bischof, John C

    2017-12-13

    Rapid, simple, and cost-effective diagnostics are needed to improve healthcare at the point of care (POC). However, the most widely used POC diagnostic, the lateral flow immunoassay (LFA), is ∼1000 times less sensitive and has a smaller analytical range than laboratory tests, requiring a confirmatory test to establish truly negative results. Here, a rational and systematic strategy is used to design the LFA contrast label (i.e., gold nanoparticles) to improve the analytical sensitivity, analytical detection range, and antigen quantification of LFAs. Specifically, we discovered that the size (30, 60, or 100 nm) of the gold nanoparticles is a main contributor to the LFA analytical performance through both the degree of receptor interaction and the ultimate visual or thermal contrast signals. Using the optimal LFA design, we demonstrated the ability to improve the analytical sensitivity by 256-fold and expand the analytical detection range from 3 log₁₀ to 6 log₁₀ for diagnosing patients with inflammatory conditions by measuring C-reactive protein. This work demonstrates that, with appropriate design of the contrast label, a simple and commonly used diagnostic technology can compete with more expensive state-of-the-art laboratory tests.

  20. Experimental and analytical study on removal of strontium from cultivated soil

    International Nuclear Information System (INIS)

    Fukutani, Satoshi; Takahashi, Tomoyuki

    2003-01-01

    An experimental and analytical study was performed to estimate the removal of strontium from cultivated soil. Continuous batch tests were carried out, and a slowly desorbing or immobile fraction was shown to exist. A two-component model, which considers easily desorbing and slowly desorbing fractions, was constructed, and it explained the continuous batch test results well. (author)

  1. Pre-analytical and analytical aspects affecting clinical reliability of plasma glucose results.

    Science.gov (United States)

    Pasqualetti, Sara; Braga, Federica; Panteghini, Mauro

    2017-07-01

    The measurement of plasma glucose (PG) plays a central role in recognizing disturbances in carbohydrate metabolism, with established decision limits that are globally accepted. This requires that PG results are reliable and unequivocally valid no matter where they are obtained. To control the pre-analytical variability of PG and prevent in vitro glycolysis, the use of citrate as rapidly effective glycolysis inhibitor has been proposed. However, the commercial availability of several tubes with studies showing different performance has created confusion among users. Moreover, and more importantly, studies have shown that tubes promptly inhibiting glycolysis give PG results that are significantly higher than tubes containing sodium fluoride only, used in the majority of studies generating the current PG cut-points, with a different clinical classification of subjects. From the analytical point of view, to be equivalent among different measuring systems, PG results should be traceable to a recognized higher-order reference via the implementation of an unbroken metrological hierarchy. In doing this, it is important that manufacturers of measuring systems consider the uncertainty accumulated through the different steps of the selected traceability chain. In particular, PG results should fulfil analytical performance specifications defined to fit the intended clinical application. Since PG has tight homeostatic control, its biological variability may be used to define these limits. Alternatively, given the central diagnostic role of the analyte, an outcome model showing the impact of analytical performance of test on clinical classifications of subjects can be used. Using these specifications, performance assessment studies employing commutable control materials with values assigned by reference procedure have shown that the quality of PG measurements is often far from desirable and that problems are exacerbated using point-of-care devices. 

  2. 1-D DC Resistivity Modeling and Interpretation in Anisotropic Media Using Particle Swarm Optimization

    Science.gov (United States)

    Pekşen, Ertan; Yas, Türker; Kıyak, Alper

    2014-09-01

    We examine the one-dimensional direct current method in an anisotropic earth formation. We derive an analytic expression for a simple, two-layered anisotropic earth model. We also consider the response of a horizontally layered anisotropic earth computed with the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the parameters of a layered anisotropic earth model, such as the horizontal and vertical resistivities and the thickness. Particle swarm optimization is a naturally inspired meta-heuristic algorithm. The proposed method recovers the model parameters quite successfully on both synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity in the values of the model parameters. For this reason, the results should be checked by a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation at each iteration, and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
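    As a rough illustration of the inversion strategy this abstract describes, the sketch below runs a generic particle swarm optimizer against a stand-in forward model. The toy `forward` function and its parameters are assumptions for demonstration only, not the real 1-D DC anisotropic response (which requires a digital-filter evaluation):

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: minimize `objective` within box `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))   # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest = x.copy()                                      # personal best positions
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()                # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy inversion: recover hypothetical layer parameters (rho1, rho2, h1) from
# synthetic data produced by a stand-in forward model.
spacings = np.logspace(0, 2, 20)
def forward(p):
    rho1, rho2, h1 = p
    return rho1 + (rho2 - rho1) * (1.0 - np.exp(-spacings / h1))
data = forward(np.array([50.0, 200.0, 10.0]))
misfit = lambda p: np.sum((forward(p) - data) ** 2)

best, best_f = pso(misfit, bounds=[(1, 500), (1, 500), (1, 50)])
print(np.round(best, 2))
```

As the abstract notes, a low data misfit alone does not resolve parameter ambiguity under noise; repeated runs with different seeds give the spread needed for the statistical checks mentioned above.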

  3. Model Risk in Portfolio Optimization

    Directory of Open Access Journals (Sweden)

    David Stefanovits

    2014-08-01

    Full Text Available We consider a one-period portfolio optimization problem under model uncertainty. For this purpose, we introduce a measure of model risk. We derive analytical results for this measure of model risk in the mean-variance problem assuming we have observations drawn from a normal variance mixture model. This model allows for heavy tails, tail dependence and leptokurtosis of marginals. The results show that mean-variance optimization is seriously compromised by model uncertainty, in particular, for non-Gaussian data and small sample sizes. To mitigate these shortcomings, we propose a method to adjust the sample covariance matrix in order to reduce model risk.
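    The weight instability the authors attribute to model risk can be seen with the textbook unconstrained mean-variance optimum w* = (1/γ)Σ⁻¹μ. The assets, parameter values, and sample size below are hypothetical, and the adjustment method proposed in the paper is not reproduced here:

```python
import numpy as np

# Hypothetical "true" 3-asset model (values are illustrative only).
mu = np.array([0.05, 0.07, 0.06])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])

def mv_weights(mu, Sigma, gamma=5.0):
    """Unconstrained mean-variance optimum: w* = (1/gamma) Sigma^{-1} mu."""
    return np.linalg.solve(Sigma, mu) / gamma

w_true = mv_weights(mu, Sigma)

# Plug-in estimate from a small sample: model risk appears as weight instability.
rng = np.random.default_rng(1)
X = rng.multivariate_normal(mu, Sigma, size=60)   # e.g. five years of monthly data
w_hat = mv_weights(X.mean(axis=0), np.cov(X, rowvar=False))
print(np.round(w_true, 3), np.round(w_hat, 3))
```

With only 60 observations the plug-in weights deviate noticeably from the true optimum; heavier-tailed data, as in the normal variance mixture model the paper studies, makes the deviation worse.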

  4. Cryogenic parallel, single phase flows: an analytical approach

    Science.gov (United States)

    Eichhorn, R.

    2017-02-01

    Managing the cryogenic flows inside a state-of-the-art accelerator cryomodule has become a demanding endeavour: in order to build highly efficient modules, all heat transfers are usually intercepted at various temperatures. For a multi-cavity module operated at 1.8 K, this requires intercepts at 4 K and at 80 K at different locations, with sometimes strongly varying heat loads, which for simplicity are operated in parallel. This contribution describes an analytical approach based on optimization theories.

  5. Optimal Control of Connected and Automated Vehicles at Roundabouts

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Liuhui [University of Delaware; Malikopoulos, Andreas [ORNL; Rios-Torres, Jackeline [ORNL

    2018-01-01

    Connectivity and automation in vehicles provide the most intriguing opportunity for enabling users to better monitor transportation network conditions and make better operating decisions to improve safety and reduce pollution, energy consumption, and travel delays. This study investigates the implications of optimally coordinating vehicles, wirelessly connected to each other and to the infrastructure, at roundabouts to achieve smooth traffic flow without stop-and-go driving. We apply an optimization framework and an analytical solution that allow optimal coordination of vehicles for merging in such a traffic scenario. The effectiveness of the proposed approach is validated through simulation, which shows that coordination of vehicles can reduce total travel time by 3-49% and fuel consumption by 2-27%, depending on the traffic level. In addition, network throughput is improved by up to 25% due to the elimination of stop-and-go driving behavior.

  6. Analytic solution of field distribution and demagnetization function of ideal hollow cylindrical field source

    Science.gov (United States)

    Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min

    2017-09-01

    The Halbach-type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source, which has attracted intense interest for many practical applications. Here, using the complex variable integration method based on the Biot-Savart law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in the entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in HCPMAs by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.

  7. Optimal cure cycle design of a resin-fiber composite laminate

    Science.gov (United States)

    Hou, Jean W.; Sheen, Jeenson

    1987-01-01

    A unified computer-aided design method was studied for cure cycle design that incorporates an optimal design technique with an analytical model of the composite cure process. Preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion-reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first-order differential equations, which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived using the direct differentiation method and are also solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle, subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized. Various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.

  8. An Analytical Study of Prostate-Specific Antigen Dynamics.

    Science.gov (United States)

    Esteban, Ernesto P; Deliz, Giovanni; Rivera-Rodriguez, Jaileen; Laureano, Stephanie M

    2016-01-01

    The purpose of this research is to carry out a quantitative study of prostate-specific antigen (PSA) dynamics for patients with prostatic diseases, such as benign prostatic hyperplasia (BPH) and localized prostate cancer (LPC). The proposed PSA mathematical model was implemented using clinical data of 218 Japanese patients with histologically proven BPH and 147 Japanese patients with LPC (stages T2a and T2b). For prostatic diseases (BPH and LPC) a nonlinear equation was obtained and solved in closed form to predict PSA progression with patients' age. The general solution describes PSA dynamics for patients with both LPC and BPH. Particular solutions allow studying PSA dynamics for patients with BPH or LPC alone. Analytical solutions have been obtained in closed form to develop nomograms for a better understanding of PSA dynamics in patients with BPH and LPC. This study may be useful to improve the diagnosis and prognosis of prostatic diseases.

  9. Sensitivity analysis and optimization algorithms for 3D forging process design

    International Nuclear Information System (INIS)

    Do, T.T.; Fourment, L.; Laroussi, M.

    2004-01-01

    This paper presents several approaches to preform shape optimization in 3D forging. The process simulation is carried out using the FORGE3® finite element software, and the optimization problem concerns the shape of the initial axisymmetric preform. Several objective functions are considered, such as the forging energy, the forging force, or a surface defect criterion. Both deterministic and stochastic optimization algorithms are tested for 3D applications. The deterministic approach uses sensitivity analysis, which provides the gradient of the objective function; it is obtained by the adjoint-state method and semi-analytical differentiation. The study of stochastic approaches aims at comparing genetic algorithms and evolution strategies. Numerical results show the feasibility of such approaches, i.e., achieving satisfactory solutions within a limited number of 3D simulations (fewer than fifty). For a more industrial problem, the forging of a gear, encouraging optimization results are obtained.

  10. Particle Swarm Optimization Toolbox

    Science.gov (United States)

    Grant, Michael J.

    2010-01-01

    The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on the SOPSO and MOPSO; a GA was included mainly for comparison purposes, as the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single- and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns, searching the trade space for the optimal solution or the optimal trade-off among competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space, and these offspring inherit traits from both parents. The algorithm relies on this combination of parental traits to yield solutions better than either of the original parents. As the algorithm progresses, individuals holding these optimal traits emerge as the optimal solutions. Due to the generic design of all the optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers: its only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since its specific details are of no concern to the optimizer. These algorithms were originally developed to support entry
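    A minimal real-coded GA in the spirit described (a population that mates to form offspring inheriting parental traits, with the objective treated as a black box) might look as follows. The selection, crossover, and mutation choices here are illustrative assumptions, not the toolbox's actual MATLAB implementation:

```python
import numpy as np

def ga_minimize(objective, bounds, pop_size=40, n_gen=100, mut_sigma=0.1, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation. The objective is treated as a black box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, len(bounds)))
    for _ in range(n_gen):
        fit = np.array([objective(ind) for ind in pop])
        # Tournament selection: the better of two random individuals survives.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
        # Blend crossover between consecutive parents: offspring mix traits.
        alpha = rng.random(pop.shape)
        children = alpha * parents + (1.0 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation, scaled to the box size and clipped to the bounds.
        children += rng.normal(0.0, mut_sigma, pop.shape) * (hi - lo)
        pop = np.clip(children, lo, hi)
    fit = np.array([objective(ind) for ind in pop])
    return pop[fit.argmin()], fit.min()

# Black-box objective: a simple quadratic bowl with optimum at (0.3, 0.3).
best, best_f = ga_minimize(lambda x: np.sum((x - 0.3) ** 2), bounds=[(-1, 1)] * 2)
print(np.round(best, 2))
```

Because the optimizer only ever calls `objective(x)`, the same driver works whether the function wraps an analytical expression or a full numerical simulation, which is exactly the decoupling the toolbox description emphasizes.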

  11. Probabilistic Cloning of Three Real States with Optimal Success Probabilities

    Science.gov (United States)

    Rui, Pin-shu

    2017-06-01

    We investigate the probabilistic quantum cloning (PQC) of three real states with an average probability distribution. To get the analytic forms of the optimal success probabilities, we assume that the three states have only two pairwise inner products. Based on the optimal success probabilities, we derive the explicit form of the 1→2 PQC for cloning three real states. The unitary operation needed in the PQC process is worked out as well. The optimal success probabilities are also generalized to the M→N PQC case.

  12. Optimal river monitoring network using optimal partition analysis: a case study of Hun River, Northeast China.

    Science.gov (United States)

    Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao

    2018-01-09

    River monitoring networks play an important role in water environmental management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis, and Euclidean distance, and the Hun River was taken as a validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month from January 2009 to December 2010. The results showed that the main monitoring items in the surface water of the Hun River were ammonia nitrogen (NH₄⁺-N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, saving 57% of the cost. In addition, there were no significant differences between the non-optimized and optimized monitoring networks, and the optimized network could correctly represent the original one. The duplicate setting degree of monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. Therefore, the optimization method was shown to be feasible, efficient, and economical.
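    The redundancy-pruning idea, dropping sites whose indicator profiles lie within a small Euclidean distance of a site already kept, can be sketched as below. The site profiles and threshold are hypothetical stand-ins, not the Hun River data or the paper's full optimal-partition procedure:

```python
import numpy as np

# Hypothetical mean levels of three indicators (NH4+-N, COD, BOD) at 7 sites.
profiles = np.array([
    [2.1, 18.0, 6.2],
    [2.0, 17.5, 6.0],   # near-duplicate of site 0
    [0.4,  9.0, 2.1],
    [0.5,  9.5, 2.3],   # near-duplicate of site 2
    [3.8, 25.0, 9.0],
    [3.7, 24.5, 8.8],   # near-duplicate of site 4
    [2.2, 18.5, 6.1],   # near-duplicate of site 0
])

def prune_redundant(profiles, threshold=1.5):
    """Greedily keep a site only if it is farther than `threshold`
    (Euclidean distance) from every site already kept."""
    kept = []
    for i, p in enumerate(profiles):
        if all(np.linalg.norm(p - profiles[k]) > threshold for k in kept):
            kept.append(i)
    return kept

print(prune_redundant(profiles))  # keeps sites 0, 2, 4: seven reduced to three
```

On this toy data the greedy pass keeps three representative sites, mirroring the seven-to-three reduction reported in the abstract; the choice of distance threshold is the tuning knob such a scheme depends on.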

  13. Development of a Suite of Analytical Tools for Energy and Water Infrastructure Knowledge Discovery

    Science.gov (United States)

    Morton, A.; Piburn, J.; Stewart, R.; Chandola, V.

    2017-12-01

    Energy and water generation and delivery systems are inherently interconnected. With demand for energy growing, the energy sector is experiencing increasing competition for water. With increasing population and changing environmental, socioeconomic, and demographic scenarios, new technology and investment decisions must be made for optimized and sustainable energy-water resource management. This also requires novel scientific insights into the complex interdependencies of energy-water infrastructures across multiple space and time scales. To address this need, we have developed a suite of analytical tools to support an integrated, data-driven modeling, analysis, and visualization capability for understanding, designing, and developing efficient local and regional practices related to the energy-water nexus. This work reviews the analytical capabilities available, along with a series of case studies designed to demonstrate the potential of these tools for illuminating energy-water nexus solutions and supporting strategic (federal) policy decisions.

  14. Standardless quantification by parameter optimization in electron probe microanalysis

    International Nuclear Information System (INIS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-01-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08, and 0.35 in 66% of the cases for POEMA, GENESIS, and DTSA, respectively. - Highlights: ► A method for standardless quantification in EPMA is presented. ► It gives better results than the commercial software GENESIS Spectrum. ► It gives better results than the software DTSA. ► It allows the determination of the conductive coating thickness. ► It gives an estimation of the concentration uncertainties.
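    The core idea, minimizing the quadratic difference between an experimental spectrum and an analytical model by optimizing the model's parameters, can be illustrated with a toy one-peak fit. The Gaussian-plus-background model and the grid-search-plus-linear-solve strategy below are simplifying assumptions, not the POEMA algorithm:

```python
import numpy as np

# Synthetic "spectrum": one Gaussian peak on a flat background, plus noise.
rng = np.random.default_rng(2)
E = np.linspace(0.0, 10.0, 500)                       # energy axis
true = 100.0 * np.exp(-0.5 * ((E - 4.2) / 0.15) ** 2) + 5.0
spectrum = true + rng.normal(0.0, 1.0, E.size)

def fit_peak(E, y, centers, widths):
    """Minimize the quadratic difference between the spectrum and an
    analytic model (Gaussian + constant background): grid-search the
    nonlinear parameters (center, width), solve linearly for the rest."""
    best = None
    for c in centers:
        for w in widths:
            g = np.exp(-0.5 * ((E - c) / w) ** 2)
            A = np.column_stack([g, np.ones_like(E)])  # [peak, background]
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = np.sum((A @ coef - y) ** 2)
            if best is None or resid < best[0]:
                best = (resid, c, w, coef[0], coef[1])
    return best  # (residual, center, width, amplitude, background)

resid, c, w, amp, bg = fit_peak(E, spectrum,
                                centers=np.arange(3.5, 5.0, 0.05),
                                widths=np.arange(0.05, 0.4, 0.05))
print(round(c, 2), round(w, 2))
```

A full EPMA spectrum involves many peaks, a physically motivated bremsstrahlung background, and detector effects, so the real optimization is far richer, but the quadratic-misfit objective is the same in spirit.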

  15. Application of tracer gas studies in the optimal design of soil vapor extraction systems

    International Nuclear Information System (INIS)

    Marley, M.C.; Cody, R.J.; Polonsky, J.D.; Woodward, D.D.; Buterbaugh, G.J.

    1992-01-01

    In the design of an optimal, cost-effective vapor extraction system (VES) for the remediation of volatile organic compounds (VOCs), it is necessary to account for heterogeneities in the vadose zone. In some cases, such as relatively homogeneous sands, heterogeneities can be neglected, as induced air flow through the subsurface can be considered uniform. However, the subsurface conditions encountered at many sites (soil/bedrock interfaces, fractured bedrock) will result in preferential subsurface air-flow pathways during VES operation. Analytical and numerical compressible fluid flow models, calibrated and verified from parameter evaluation tests, can be utilized to determine vadose zone permeability tensors in heterogeneous stratifications and to project optimal, full-scale VES performance. Model-derived estimations of the effect of uniform and/or preferential air-flow pathways on subsurface induced air-flow velocities can be enhanced and confirmed using tracer gas studies. A vadose zone tracer gas study entails the injection of an easily detected, preferably inert, gas into differing locations within the vadose zone at distances away from the VES extraction well, which is then monitored for detection of the gas. This is an effective field methodology to qualify and quantify the subsurface air-flow pathways. It is imperative to gain an understanding of the dynamics of air flow in the soils and lithologies of each individual site, and to design quick and effective methodologies for characterizing the subsurface in order to streamline remediation costs and system operations. This paper focuses on the use of compressible fluid flow models and tracer gas studies to enhance the design of vapor extraction systems.

  16. Analytic Hypoellipticity and the Treves Conjecture

    Directory of Open Access Journals (Sweden)

    Marco Mughetti

    2016-12-01

    Full Text Available We are concerned with the problem of analytic hypoellipticity; precisely, we focus on the real-analytic regularity of the solutions of sums of squares with real-analytic coefficients. Treves' conjecture states that an operator of this type is analytic hypoelliptic if and only if all the strata in the Poisson-Treves stratification are symplectic. We discuss a model operator, P (first introduced and studied in [3]), having a single symplectic stratum, and prove that it is not analytic hypoelliptic. This yields a counterexample to the sufficiency part of Treves' conjecture; the necessity part is still an open problem.

  17. The physics of an optimal basketball free throw

    OpenAIRE

    Barzykina, Irina

    2017-01-01

    A physical model is developed which suggests a pathway to determining the optimal release conditions for a basketball free throw. The theoretical framework is supported by Monte Carlo simulations and a series of free throws performed and analysed at Southbank International School. The model defines a smile-shaped success region in angle-velocity space where a free throw will score. A formula for the minimum throwing angle is derived analytically. The optimal throwing conditions are determined nu...
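    The flavour of such an angle-velocity analysis can be sketched with the standard drag-free projectile result: the launch angle minimizing the required speed to reach a target at elevation angle φ is θ = 45° + φ/2. This is a textbook formula under assumed free-throw geometry, not necessarily the paper's own minimum-angle expression:

```python
import math

def min_speed_angle(L, H):
    """Launch angle minimizing the speed required to pass through a point
    a horizontal distance L away and a height H above the release point
    (standard drag-free result: theta = pi/4 + phi/2, phi = atan(H/L))."""
    return 0.25 * math.pi + 0.5 * math.atan2(H, L)

def required_speed(theta, L, H, g=9.81):
    """Speed needed at angle theta to pass through (L, H); drag neglected."""
    return math.sqrt(g * L ** 2 / (2 * math.cos(theta) ** 2 * (L * math.tan(theta) - H)))

# Assumed free-throw geometry: 4.191 m horizontally from release to hoop
# centre, release at 2.0 m, rim at 3.048 m.
L, H = 4.191, 3.048 - 2.0
theta = min_speed_angle(L, H)
print(round(math.degrees(theta), 1), round(required_speed(theta, L, H), 2))
```

Scanning `required_speed` over a range of angles, and adding the rim-clearance constraint, is what carves out the smile-shaped success region in angle-velocity space that the abstract describes.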

  18. Optimal Investment-Consumption Strategy under Inflation in a Markovian Regime-Switching Market

    Directory of Open Access Journals (Sweden)

    Huiling Wu

    2016-01-01

    Full Text Available This paper studies an investment-consumption problem under inflation. The consumption price level, the prices of the available assets, and the coefficient of the power utility are assumed to be sensitive to the states of the underlying economy, modulated by a continuous-time Markov chain. The definition of admissible strategies and the verification theory corresponding to this stochastic control problem are presented. The analytical expression of the optimal investment strategy is derived. The existence, boundedness, and feasibility of the optimal consumption are proven. Finally, we analyze in detail, both mathematically and numerically, how the risk aversion, the correlation coefficient between the inflation and the stock price, the inflation parameters, and the coefficient of utility affect the optimal investment and consumption strategy.

  19. Augmented Lagrangian Method For Discretized Optimal Control ...

    African Journals Online (AJOL)

    In this paper, we are concerned with a one-dimensional, time-invariant optimal control problem whose objective function is quadratic and whose dynamical system is a differential equation with an initial condition. Since most real-life problems are nonlinear and their analytical solutions are not readily available, we resolve to ...
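    For a quadratic objective with a linear (discretized) equality constraint, each augmented Lagrangian iteration reduces to a linear solve followed by a multiplier update. The sketch below illustrates this on a hypothetical equality-constrained QP, not the paper's specific discretized control problem:

```python
import numpy as np

def augmented_lagrangian_qp(Q, c, A, b, rho=10.0, n_outer=50):
    """Augmented Lagrangian method for: min 0.5 x'Qx + c'x  s.t.  Ax = b.
    The augmented Lagrangian is quadratic in x, so each inner minimization
    is a linear solve; the multipliers are then updated with the residual."""
    lam = np.zeros(b.shape)
    H = Q + rho * A.T @ A                     # Hessian of the augmented Lagrangian
    for _ in range(n_outer):
        x = np.linalg.solve(H, -(c + A.T @ lam) + rho * A.T @ b)
        lam = lam + rho * (A @ x - b)         # multiplier (dual) update
    return x, lam

# Tiny example: minimize x1^2 + x2^2 subject to x1 + x2 = 1.
Q = 2.0 * np.eye(2)                           # 0.5 x'Qx = x1^2 + x2^2
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = augmented_lagrangian_qp(Q, c, A, b)
print(np.round(x, 4))                         # optimum is (0.5, 0.5)
```

In a discretized optimal control setting, x would stack the state and control values at the grid points, and A x = b would encode the discretized dynamics and initial condition; nonlinearity of the dynamics then makes the inner minimization iterative rather than a single solve.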

  20. Phase Transitions in Combinatorial Optimization Problems: Basics, Algorithms and Statistical Mechanics

    Science.gov (United States)

    Hartmann, Alexander K.; Weigt, Martin

    2005-10-01

    A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and spread by word-of-mouth, providing all the necessary basics in required detail. Throughout, the algorithms are shown with examples and calculations, while the proofs are given in a way suitable for graduate students, post-docs, and researchers. Ideal for newcomers to this young, multidisciplinary field.