WorldWideScience

Sample records for optimized field sampling

  1. Optimal sampling schemes for vegetation and geological field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2012-07-01

    Full Text Available The presentation, made to the Wits Statistics Department, covered common classification methods used in the field of remote sensing and the use of remote sensing to design optimal sampling schemes for field visits, with applications in vegetation...

  2. Using remotely-sensed data for optimal field sampling

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-09-01

    Full Text Available Statistics is the science pertaining to the collection, summary, analysis, interpretation and presentation of data. It is often impractical... The key questions in such studies are: where to sample, what to sample and how many samples to obtain. Conventional sampling techniques are not always suitable in environmental studies, and scientists have explored the use of remotely-sensed data as ancillary information to aid...

  3. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available The presentation covers case studies of optimal sampling schemes: optimized field sampling representing the overall distribution of a particular mineral, and deriving optimal exploration target zones. Continuum removal for vegetation [13, 27, 46]: the convex hull transform is a method... of normalizing spectra [16, 41]. The convex hull technique is analogous to fitting a rubber band over a spectrum to form a continuum. Figure 5 shows the concept of the convex hull transform. The difference between the hull and the original spectrum...
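
    To make the "rubber band" analogy concrete, the following minimal Python sketch (illustrative only, not code from the presentation) computes the upper convex hull of a reflectance spectrum with the monotone-chain algorithm and divides the spectrum by the interpolated hull; the wavelength grid and absorption feature are hypothetical.

        import numpy as np

        def continuum_removed(wavelengths, reflectance):
            """Divide a spectrum by its upper convex hull (the 'rubber band')."""
            hull = []
            for p in zip(wavelengths, reflectance):
                while len(hull) >= 2:
                    (x1, y1), (x2, y2) = hull[-2], hull[-1]
                    # Pop the last hull point if it lies below the chord to p.
                    if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                        hull.pop()
                    else:
                        break
                hull.append(p)
            hx, hy = zip(*hull)
            continuum = np.interp(wavelengths, hx, hy)  # piecewise-linear hull
            return reflectance / continuum  # equals 1.0 where spectrum touches hull

        # Hypothetical spectrum with one absorption feature near 2200 nm.
        wl = np.linspace(2000.0, 2400.0, 200)
        refl = 0.6 - 1e-4 * (wl - 2000.0) - 0.15 * np.exp(-((wl - 2200.0) / 30.0) ** 2)
        cr = continuum_removed(wl, refl)
        print(1.0 - cr.min())  # band depth after continuum removal

    The band depth of the continuum-removed spectrum is the quantity typically carried into subsequent feature-matching steps such as SFF.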

  4. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  5. Field Sampling from a Segmented Image

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-06-01

    Full Text Available This paper presents a statistical method for deriving the optimal prospective field sampling scheme on a remote sensing image to represent different categories in the field. The iterated conditional modes algorithm (ICM) is used for segmentation...

  6. Visual Sample Plan (VSP) - FIELDS Integration

    Energy Technology Data Exchange (ETDEWEB)

    Pulsipher, Brent A.; Wilson, John E.; Gilbert, Richard O.; Hassig, Nancy L.; Carlson, Deborah K.; Bing-Canar, John; Cooper, Brian; Roth, Chuck

    2003-04-19

    Two software packages, VSP 2.1 and FIELDS 3.5, are being used by environmental scientists to plan the number and type of samples required to meet project objectives, display those samples on maps, query a database of past sample results, produce spatial models of the data, and analyze the data in order to arrive at defensible decisions. VSP 2.0 is an interactive tool to calculate optimal sample size and optimal sample location based on user goals, risk tolerance, and variability in the environment and in lab methods. FIELDS 3.0 is a set of tools to explore the sample results in a variety of ways to make defensible decisions with quantified levels of risk and uncertainty. However, FIELDS 3.0 has only a small sample design module. VSP 2.0, on the other hand, has over 20 sampling goals, allowing the user to input site-specific assumptions such as non-normality of sample results and separate field and laboratory measurement variability, make two-sample comparisons, perform confidence interval estimation, use sequential search sampling methods, and much more. Over 1,000 copies of VSP are in use today. FIELDS is used in nine of the ten U.S. EPA regions, by state regulatory agencies, and most recently in several other countries. Both software packages have been peer-reviewed, enjoy broad usage, and have been accepted by regulatory agencies as well as site project managers as key tools to help collect data and make environmental cleanup decisions. Recently, the two software packages were integrated, allowing the user to take advantage of the many design options of VSP and the analysis and modeling options of FIELDS. The transition between the two is simple for the user – VSP can be called from within FIELDS, automatically passing a map to VSP and automatically retrieving sample locations and design information when the user returns to FIELDS. This paper will describe the integration, give a demonstration of the integrated package, and give users download...
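
    As a concrete example of the kind of calculation such sampling-design tools perform, the sketch below implements the textbook one-sample sample-size formula (illustrative of VSP's sampling goals, not VSP's actual code); the inputs sigma, delta, alpha and beta are hypothetical user choices for variability, detectable difference, false-positive rate and false-negative rate.

        from math import ceil
        from scipy.stats import norm

        def sample_size_one_sample_mean(sigma, delta, alpha=0.05, beta=0.10):
            """Samples needed to detect a mean shift of delta with false-positive
            rate alpha and false-negative rate beta (normal approximation)."""
            z_a = norm.ppf(1.0 - alpha)   # one-sided comparison to a threshold
            z_b = norm.ppf(1.0 - beta)
            n = ((z_a + z_b) * sigma / delta) ** 2
            return ceil(n + 0.5 * z_a ** 2)  # common small-sample correction

        # Hypothetical cleanup-verification scenario: sd = 2.0 ppm, detect 1.0 ppm.
        print(sample_size_one_sample_mean(sigma=2.0, delta=1.0))  # -> 36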

  7. Sampling soils for 137Cs using various field-sampling volumes

    International Nuclear Information System (INIS)

    Nyhan, J.W.; Schofield, T.G.; White, G.C.; Trujillo, G.

    1981-10-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500-, and 12,500-cm3 field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils ranged from 0.38 to 0.57. Spatial variance components of 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2 to 4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.

  8. Sampling Criterion for EMC Near Field Measurements

    DEFF Research Database (Denmark)

    Franek, Ondrej; Sørensen, Morten; Ebert, Hans

    2012-01-01

    An alternative, quasi-empirical sampling criterion for EMC near field measurements intended for close coupling investigations is proposed. The criterion is based on maximum error caused by sub-optimal sampling of near fields in the vicinity of an elementary dipole, which is suggested as a worst-case representative of a signal trace on a typical printed circuit board. It has been found that the sampling density derived in this way is in fact very similar to that given by the antenna near field sampling theorem, if an error less than 1 dB is required. The principal advantage of the proposed formulation is its...

  9. An evaluation of soil sampling for 137Cs using various field-sampling volumes.

    Science.gov (United States)

    Nyhan, J W; White, G C; Schofield, T G; Trujillo, G

    1983-05-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500- and 12,500-cm3 field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils were observed from 0.38 to 0.57. Spatial variance components of 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2-4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.
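
    The optimization reported here (and in the near-duplicate record 7 above) is the classical allocation problem for nested sampling. The sketch below, with hypothetical variance components and unit costs, grid-searches the allocation of field samples, aliquots and counts that minimizes the variance of the mean within a fixed budget; when the spatial component dominates, it reproduces the qualitative finding above: count each aliquot once and assay only a few aliquots.

        def best_allocation(s2_spatial, s2_aliquot, s2_count,
                            c_sample, c_aliquot, c_count, budget):
            """Minimize Var(mean) = s2_spatial/n + s2_aliquot/(n*m) + s2_count/(n*m*r)
            over n field samples, m aliquots per sample, r counts per aliquot,
            subject to n * (c_sample + m*c_aliquot + m*r*c_count) <= budget."""
            best = None
            for m in range(1, 31):           # aliquots per field sample
                for r in range(1, 5):        # gamma counts per aliquot
                    unit_cost = c_sample + m * c_aliquot + m * r * c_count
                    n = int(budget // unit_cost)   # affordable field samples
                    if n < 2:
                        continue
                    var = (s2_spatial / n + s2_aliquot / (n * m)
                           + s2_count / (n * m * r))
                    if best is None or var < best[0]:
                        best = (var, n, m, r)
            return best

        # Hypothetical components with spatial variance dominating, as reported.
        print(best_allocation(s2_spatial=1.0, s2_aliquot=0.1, s2_count=0.05,
                              c_sample=5.0, c_aliquot=1.0, c_count=0.5,
                              budget=200.0))  # -> lowest-variance (var, n, m, r)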

  10. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    Science.gov (United States)

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The...
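
    A minimal sketch of spatial simulated annealing under the MMSD criterion is shown below (illustrative Python, not the MSANOS implementation): one sampling location is jittered at a time, moves are accepted by the Metropolis rule, and the temperature follows a geometric cooling law. The unit-square field, grid resolution and annealing parameters are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        def mmsd(points, grid):
            """Mean of the shortest distances from grid cells to nearest sample."""
            d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2)
            return d.min(axis=1).mean()

        def anneal(grid, n_samples, n_iter=5000, t0=1.0, cooling=0.999):
            pts = grid[rng.choice(len(grid), n_samples, replace=False)].astype(float)
            f, t = mmsd(pts, grid), t0
            for _ in range(n_iter):
                cand = pts.copy()
                i = rng.integers(n_samples)
                cand[i] = np.clip(cand[i] + rng.normal(scale=0.05, size=2), 0, 1)
                fc = mmsd(cand, grid)
                if fc < f or rng.random() < np.exp((f - fc) / t):  # Metropolis rule
                    pts, f = cand, fc
                t *= cooling               # geometric cooling law
            return pts, f

        grid = np.array([[x, y] for x in np.linspace(0, 1, 25)
                                for y in np.linspace(0, 1, 25)])  # unit 'field'
        pts, f = anneal(grid, n_samples=15)
        print(round(f, 4))  # spreading quality of the optimized scheme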

  11. The optimal sampling of outsourcing product

    International Nuclear Information System (INIS)

    Yang Chao; Pei Jiacheng

    2014-01-01

    In order to improve quality and cost, c = 0 sampling plans have been introduced for the inspection of outsourced product. According to the current quality level (p = 0.4%), we determined the optimal sampling plan: Ac = 0; for N ≤ 3000, n = 55; for 3001 ≤ N ≤ 10,000, n = 86; for N ≥ 10,001, n = 108. Through analysis of the OC curve, we came to the conclusion that when N ≤ 3000, the protective ability of the optimal sampling plan for product quality is stronger than that of the current plan. For the same consumer's risk, the product quality under the optimal sampling plan is superior to that under the current plan. (authors)
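
    The OC-curve comparison described above is easy to reproduce: for an attributes plan with sample size n and acceptance number c, the binomial model gives the probability of accepting a lot with fraction defective p. The sketch below evaluates the quoted plan sizes at the stated quality level (standard arithmetic, not the authors' code).

        from math import comb

        def p_accept(p, n, c=0):
            """Acceptance probability of an (n, c) attributes sampling plan
            under the binomial model; for c = 0 this is just (1 - p)**n."""
            return sum(comb(n, d) * p**d * (1.0 - p)**(n - d) for d in range(c + 1))

        for n in (55, 86, 108):                     # plan sizes from the abstract
            print(n, round(p_accept(0.004, n), 3))  # current quality level p = 0.4%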

  12. β-NMR sample optimization

    CERN Document Server

    Zakoucka, Eva

    2013-01-01

    During my summer student programme I was working on sample optimization for a new β-NMR project at the ISOLDE facility. The β-NMR technique is well-established in solid-state physics and has just recently been introduced for applications in biochemistry and the life sciences. The β-NMR collaboration will be applying to the INTC committee in September for beam time for three nuclei: Cu, Zn and Mg. Sample optimization for Mg was already performed last year during the summer student programme, so sample optimization for Cu and Zn had to be completed as well for the project proposal. My part in the project was to perform thorough literature research on techniques for studying Cu and Zn complexes in native conditions, to search for relevant binding candidates for Cu and Zn applicable to β-NMR, and eventually to evaluate selected binding candidates using UV-VIS spectrometry.

  13. Magnetorheological measurements with consideration for the internal magnetic field in samples

    Energy Technology Data Exchange (ETDEWEB)

    Kordonski, W; Gorodkin, S [QED Technologies International, 1040 University Ave., Rochester, NY 14607 (United States)], E-mail: kordonski@qedmrf.com

    2009-02-01

    The magnetically induced yield stress in a sample of a suspension of magnetic particles is associated with the formation of a field-oriented structure, the strength of which depends on the degree of particle magnetization. This factor is largely defined by the actual magnetic field strength in the sample. At the same time, it is common practice to present and analyze magnetorheological characteristics as a function of the applied magnetic field. Uncertainty about the influence function in magnetorheology hampers interpretation of data obtained with different measurement configurations. It was shown in this paper that the rheological response of a magnetorheological fluid to the applied magnetic field is defined by the sample's actual (internal) magnetic field intensity, which, in turn, depends on sample geometry and field orientation, all other factors being equal. Utilization of the sample's actual field as the influence function in magnetorheology allows proper interpretation of data obtained with different measuring system configurations. Optimization of the actual internal field is a promising approach in the design of energy-efficient magnetorheological devices.
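
    The internal-field correction described here amounts to solving H_int = H_app - N*M(H_int) self-consistently, where N is the demagnetizing factor set by sample geometry and field orientation. The sketch below uses a hypothetical Fröhlich-Kennelly magnetization law in place of a measured M(H) curve; all values are illustrative.

        def internal_field(h_applied, demag_factor, m_sat=4.0e5, chi0=5.0,
                           tol=1e-6, max_iter=200):
            """Solve H_int = H_app - N * M(H_int) by damped fixed-point iteration.
            M(H) is a hypothetical Frohlich-Kennelly law; a measured curve for
            the actual MR fluid would be used in practice. SI units (A/m)."""
            def magnetization(h):
                return chi0 * m_sat * h / (m_sat + chi0 * abs(h))
            h = h_applied
            for _ in range(max_iter):
                h_new = h_applied - demag_factor * magnetization(h)
                if abs(h_new - h) <= tol * max(abs(h_applied), 1.0):
                    return h_new
                h = 0.5 * (h + h_new)   # damping stabilizes the iteration
            return h

        # Same applied field, two geometries: N ~ 1/3 (sphere) vs N ~ 0 (long rod
        # parallel to the field) - the internal fields differ substantially.
        print(internal_field(1.0e5, 1.0 / 3.0), internal_field(1.0e5, 0.0))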

  14. Optimization of well field management

    DEFF Research Database (Denmark)

    Hansen, Annette Kirstine

    Groundwater is a limited but important resource for fresh water supply. Different conflicting objectives are important when operating a well field. This study investigates how the management of a well field can be improved with respect to different objectives simultaneously. A framework for optimizing well field management using multi-objective optimization is developed. The optimization uses the Strength Pareto Evolutionary Algorithm 2 (SPEA2) to find the Pareto front between the conflicting objectives. The Pareto front is a set of non-inferior optimal points and provides an important tool for the decision-makers. The optimization framework is tested on two case studies. Both abstract around 20,000 cubic meters of water per day, but are otherwise rather different. The first case study concerns the management of the Hardhof waterworks, Switzerland, where artificial infiltration of river water...

  15. Serum Dried Samples to Detect Dengue Antibodies: A Field Study

    Directory of Open Access Journals (Sweden)

    Angelica Maldonado-Rodríguez

    2017-01-01

    Full Text Available Background. Dried blood and serum samples are useful resources for detecting antiviral antibodies. The conditions for elution of the sample need to be optimized for each disease. Dengue is a widespread disease in Mexico which requires continuous surveillance. In this study, we standardized and validated a protocol for the specific detection of dengue antibodies from dried serum spots (DSSs). Methods. Paired serum and DSS samples from 66 suspected cases of dengue were collected in a clinic in Veracruz, Mexico. Samples were sent to our laboratory, where the conditions for optimal elution of DSSs were established. The presence of anti-dengue antibodies was determined in the paired samples. Results. DSS elution conditions were standardized as follows: 1 h at 4°C in 200 µl of DNase-, RNase-, and protease-free PBS (1x). The optimal volume of DSS eluate to be used in the IgG assay was 40 µl. Sensitivity of 94%, specificity of 93.3%, and kappa concordance of 0.87 were obtained when comparing the anti-dengue reactivity between DSSs and serum samples. Conclusion. DSS samples are useful for detecting anti-dengue IgG antibodies in the field.

  16. Sampling optimization for printer characterization by direct search.

    Science.gov (United States)

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.

  17. Minimal BRDF Sampling for Two-Shot Near-Field Reflectance Acquisition

    DEFF Research Database (Denmark)

    Xu, Zexiang; Nielsen, Jannik Boll; Yu, Jiyang

    2016-01-01

    We develop a method to acquire the BRDF of a homogeneous flat sample from only two images, taken by a near-field perspective camera, and lit by a directional light source. Our method uses the MERL BRDF database to determine the optimal set of light-view pairs for data-driven reflectance acquisition...

  18. Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP

    Science.gov (United States)

    Roshani, E.; Berg, A. A.; Lindsay, J.

    2013-12-01

    The Soil Moisture Active Passive satellite (SMAP) is scheduled for launch in October 2014. Global efforts are underway to establish soil moisture monitoring networks for both the pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6 week period for soil moisture and several other parameters, simultaneous to remotely sensed images of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement to provide the most efficient representation of the studied area. In this analysis a method for optimization of sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (the non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from the kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, which is a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: A) sampling sites should be accessible to the crew on the ground, B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value, and finally C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum constraint. The first constraint is implemented in the proposed model to keep the approach practical. The second and third constraints are considered to guarantee that the collected samples from each soil texture category...

  19. Optimal field splitting for large intensity-modulated fields

    International Nuclear Information System (INIS)

    Kamath, Srijit; Sahni, Sartaj; Ranka, Sanjay; Li, Jonathan; Palta, Jatinder

    2004-01-01

    The multileaf travel range limitations on some linear accelerators require the splitting of a large intensity-modulated field into two or more adjacent abutting intensity-modulated subfields. The abutting subfields are then delivered as separate treatment fields. This workaround not only increases the treatment delivery time but it also increases the total monitor units (MU) delivered to the patient for a given prescribed dose. It is imperative that the cumulative intensity map of the subfields is exactly the same as the intensity map of the large field generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. In this work, we describe field splitting algorithms that split a large intensity-modulated field into two or more intensity-modulated subfields with and without feathering, with optimal MU efficiency while satisfying the hardware constraints. Compared to a field splitting technique (without feathering) used in a commercial planning system, our field splitting algorithm (without feathering) shows a decrease in total MU of up to 26% on clinical cases and up to 63% on synthetic cases
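
    The core idea can be sketched compactly (a simplified illustration, not the authors' full algorithm, which also handles feathering and other hardware constraints): for unidirectional delivery, the minimum MU of an intensity map is the maximum over leaf rows of the summed positive left-to-right increments, so an optimal two-field split is found by trying every split column the leaf-travel limit allows. The 8 x 30 map and the 20-column width limit below are hypothetical.

        import numpy as np

        def mu_lower_bound(intensity):
            """Minimum MU for unidirectional delivery of one intensity map:
            max over rows of the sum of positive left-to-right increments."""
            padded = np.concatenate([np.zeros((intensity.shape[0], 1)), intensity],
                                    axis=1)
            inc = np.diff(padded, axis=1)
            return np.maximum(inc, 0).sum(axis=1).max()

        def best_split(intensity, max_width):
            """Try every allowed split column; keep the one with least total MU."""
            cols = intensity.shape[1]
            best = None
            for s in range(cols - max_width, max_width + 1):  # both halves fit
                mu = (mu_lower_bound(intensity[:, :s])
                      + mu_lower_bound(intensity[:, s:]))
                if best is None or mu < best[0]:
                    best = (mu, s)
            return best

        rng = np.random.default_rng(1)
        imap = rng.integers(0, 10, size=(8, 30))  # hypothetical intensity map
        print(best_split(imap, max_width=20))     # (total MU, split column)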

  20. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large scale sample surveys: the biological sample survey of commercial landings (BSCL), the experimental fishing sample survey (EFSS), and the commercial landings and effort sample survey (CLES).

  1. Neuro-genetic system for optimization of GMI samples sensitivity.

    Science.gov (United States)

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes sample sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Analytic Optimization of Near-Field Optical Chirality Enhancement

    Science.gov (United States)

    2017-01-01

    We present an analytic derivation for the enhancement of local optical chirality in the near field of plasmonic nanostructures by tuning the far-field polarization of external light. We illustrate the results by means of simulations with an achiral and a chiral nanostructure assembly and demonstrate that local optical chirality is significantly enhanced with respect to circular polarization in free space. The optimal external far-field polarizations are different from both circular and linear. Symmetry properties of the nanostructure can be exploited to determine whether the optimal far-field polarization is circular. Furthermore, the optimal far-field polarization depends on the frequency, which results in complex-shaped laser pulses for broadband optimization.

  3. Optimal sampling schemes applied in geology

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-05-01

    Full Text Available Presentation outline: introduction to hyperspectral remote sensing; objective of Study 1; study area; data used; methodology; results; background and research question for Study 2; study area and data; methodology; results; conclusions. (Debba, CSIR: Optimal Sampling Schemes applied in Geology, UP 2010.)

  4. Feasible sampling plan for Bemisia tabaci control decision-making in watermelon fields.

    Science.gov (United States)

    Lima, Carlos Ho; Sarmento, Renato A; Pereira, Poliana S; Galdino, Tarcísio Vs; Santos, Fábio A; Silva, Joedna; Picanço, Marcelo C

    2017-11-01

    The silverleaf whitefly Bemisia tabaci is one of the most important pests of watermelon fields worldwide. Conventional sampling plans are the starting point for the generation of decision-making systems of integrated pest management programs. The aim of this study was to determine a conventional sampling plan for B. tabaci in watermelon fields. The optimal leaf for B. tabaci adult sampling was the 6th most apical leaf. Direct counting was the best pest sampling technique. Crop pest densities fitted the negative binomial distribution and had a common aggregation parameter (K common). The sampling plan consisted of evaluating 103 samples per plot. This sampling plan took 56 min to conduct, cost US$ 2.22 per sampling, and had a 10% maximum evaluation error. The sampling plan determined in this study can be adopted by farmers because it enables the adequate evaluation of B. tabaci populations in watermelon fields (10% maximum evaluation error) and is a low-cost (US$ 2.22 per sampling), fast (56 min per sampling) and feasible (because it may be used in a standardized way throughout the crop cycle) technique. © 2017 Society of Chemical Industry.
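
    The sample-size arithmetic behind such enumerative plans is standard: for negative binomial counts with mean m and common aggregation parameter k, the number of samples giving a fixed relative error D of the mean is n = (1/D^2)(1/m + 1/k). The sketch below uses hypothetical densities and k; the paper's 103 samples follow from its own fitted values.

        from math import ceil

        def neg_binomial_sample_size(mean_density, k_common, rel_error=0.10):
            """Samples needed so SE(mean)/mean = rel_error when counts are
            negative binomial (Var = m + m**2/k): n = (1/D^2)(1/m + 1/k)."""
            return ceil((1.0 / mean_density + 1.0 / k_common) / rel_error**2)

        # Hypothetical whitefly densities (adults/leaf) and aggregation parameter.
        for m in (0.5, 1.0, 3.0):
            print(m, neg_binomial_sample_size(m, k_common=1.2))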

  5. Choice of sample size for high transport critical current density in a granular superconductor: percolation versus self-field effects

    International Nuclear Information System (INIS)

    Mulet, R.; Diaz, O.; Altshuler, E.

    1997-01-01

    The percolative character of the current paths and self-field effects were considered to estimate optimal sample dimensions for the transport current of a granular superconductor by means of a Monte Carlo algorithm and critical-state model calculations. We showed that, under certain conditions, self-field effects are negligible and the Jc dependence on sample dimensions is determined by the percolative character of the current. Optimal dimensions are demonstrated to be a function of the fraction of superconducting phase in the sample. (author)

  6. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as in the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available samples is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values or from being stuck in a local optimum, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
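
    The design criterion underneath such methods can be illustrated without the evolutionary machinery: choose the time points that maximize the determinant of the Fisher information matrix built from parameter sensitivities (D-optimality), which minimizes the volume of the parameter confidence ellipsoid. In the sketch below, an exhaustive search on a hypothetical exponential-decay model stands in for the paper's quantum-inspired evolutionary algorithm.

        from itertools import combinations
        import numpy as np

        A, k, sigma = 2.0, 0.5, 0.1   # hypothetical model y = A*exp(-k*t) + noise

        def fisher_det(times):
            """det of the Fisher information for (A, k) at the given times."""
            t = np.asarray(times, dtype=float)
            dy_dA = np.exp(-k * t)                # sensitivity to A
            dy_dk = -A * t * np.exp(-k * t)       # sensitivity to k
            J = np.column_stack([dy_dA, dy_dk])   # sensitivity (Jacobian) matrix
            return np.linalg.det(J.T @ J / sigma**2)

        candidates = np.linspace(0.2, 10.0, 50)   # feasible measurement times
        best = max(combinations(candidates, 3), key=fisher_det)
        print([round(float(t), 2) for t in best]) # D-optimal 3-point design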

  7. Penumbra modifier for optimal electron field combination

    International Nuclear Information System (INIS)

    El-Sherbini, N.; Hejazy, M.; Khalil, W.

    2008-01-01

    Treatment with megavoltage electron beams is ideal for irradiating shallow-seated tumors because of their limited range in tissue. However, the treatment of extended areas with electrons requires the use of two or more adjacent fields, and dose variations may arise at the junction of the fields. These dose variations come from the presence of large bulges in the low-value isodose curves created by electron beam divergence and lateral scattering in tissue. Overlapping of these bulges creates a high-dose region at depth, while constriction of the isodose curves near the surface may produce ... depending critically on the field separation. To overcome this problem, several authors have proposed techniques for matching electron beam edges in such a way as to make the overlap region as uniform as possible. The simplest approach to the problem is to optimize the skin gap between the two adjacent electron field edges. The increased lateral scatter of low-energy electrons and the machine-specific characteristics of an electron beam penumbra make the determination of an optimized skin gap somewhat complicated. Optimization is achieved by a complete set of trial and error measurements. The main limitation to the usefulness of the optimized skin gap technique is the strong sensitivity of the dose distribution in the field junction region to small deviations in field separation or in the angulation of the incident electron beams, making it strongly dependent on positioning. The present study was done at electron beam energies of 6, 8, and 15 MeV. The method depends on the abutment of different field areas using a beam edge modifier (penumbra generator) made of Cerrobend. The objectives of this study are to present a systematic study of the modified electron field for better understanding of the behavior and physical characteristics of the penumbra generator, to investigate the feasibility of using this technique for large electron fields, and to obtain a quantitative...

  8. Optimization of the GBMV2 implicit solvent force field for accurate simulation of protein conformational equilibria.

    Science.gov (United States)

    Lee, Kuo Hao; Chen, Jianhan

    2017-06-15

    Accurate treatment of solvent environment is critical for reliable simulations of protein conformational equilibria. Implicit treatment of solvation, such as using the generalized Born (GB) class of models arguably provides an optimal balance between computational efficiency and physical accuracy. Yet, GB models are frequently plagued by a tendency to generate overly compact structures. The physical origins of this drawback are relatively well understood, and the key to a balanced implicit solvent protein force field is careful optimization of physical parameters to achieve a sufficient level of cancellation of errors. The latter has been hampered by the difficulty of generating converged conformational ensembles of non-trivial model proteins using the popular replica exchange sampling technique. Here, we leverage improved sampling efficiency of a newly developed multi-scale enhanced sampling technique to re-optimize the generalized-Born with molecular volume (GBMV2) implicit solvent model with the CHARMM36 protein force field. Recursive optimization of key GBMV2 parameters (such as input radii) and protein torsion profiles (via the CMAP torsion cross terms) has led to a more balanced GBMV2 protein force field that recapitulates the structures and stabilities of both helical and β-hairpin model peptides. Importantly, this force field appears to be free of the over-compaction bias, and can generate structural ensembles of several intrinsically disordered proteins of various lengths that seem highly consistent with available experimental data. © 2017 Wiley Periodicals, Inc.

  9. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This...

  10. Variable-Field Analytical Ultracentrifugation: I. Time-Optimized Sedimentation Equilibrium

    Science.gov (United States)

    Ma, Jia; Metrick, Michael; Ghirlando, Rodolfo; Zhao, Huaying; Schuck, Peter

    2015-01-01

    Sedimentation equilibrium (SE) analytical ultracentrifugation (AUC) is a gold standard for the rigorous determination of macromolecular buoyant molar masses and the thermodynamic study of reversible interactions in solution. A significant experimental drawback is the long time required to attain SE, which is usually on the order of days. We have developed a method for time-optimized SE (toSE) with defined time-varying centrifugal fields that allow SE to be attained in a significantly (up to 10-fold) shorter time than is usually required. To achieve this, numerical Lamm equation solutions for sedimentation in time-varying fields are computed based on initial estimates of macromolecular transport properties. A parameterized rotor-speed schedule is optimized with the goal of achieving a minimal time to equilibrium while limiting transient sample preconcentration at the base of the solution column. The resulting rotor-speed schedule may include multiple over- and underspeeding phases, balancing the formation of gradients from strong sedimentation fluxes with periods of high diffusional transport. The computation is carried out in a new software program called TOSE, which also facilitates convenient experimental implementation. Further, we extend AUC data analysis to sedimentation processes in such time-varying centrifugal fields. Due to the initially high centrifugal fields in toSE and the resulting strong migration, it is possible to extract sedimentation coefficient distributions from the early data. This can provide better estimates of the size of macromolecular complexes and report on sample homogeneity early on, which may be used to further refine the prediction of the rotor-speed schedule. In this manner, the toSE experiment can be adapted in real time to the system under study, maximizing both the information content and the time efficiency of SE experiments.

  11. Field sampling for monitoring, migration and defining the areal extent of chemical contamination

    International Nuclear Information System (INIS)

    Thomas, J.M.; Skalski, J.R.; Eberhardt, L.L.; Simmons, M.A.

    1984-01-01

    As part of two studies funded by the U.S. Nuclear Regulatory Commission and the USEPA, the authors have investigated field sampling strategies and compositing as a means of detecting spills or migration at commercial low-level radioactive and chemical waste disposal sites, as well as bioassays for detecting contamination at chemical waste sites. Compositing (pooling samples) for detection is discussed first, followed by the development of a statistical test to determine whether any component of a composite exceeds a prescribed maximum acceptable level. Subsequently, the authors explore the question of optimal field sampling designs and present the features of a microcomputer program designed to show the difficulties in constructing efficient field designs and using compositing schemes. Finally, they propose the use of bioassays as an adjunct or replacement for chemical analysis as a means of detecting and defining the areal extent of chemical migration.

  12. Low NOx combustion and SCR flow field optimization in a low volatile coal fired boiler.

    Science.gov (United States)

    Liu, Xing; Tan, Houzhang; Wang, Yibin; Yang, Fuxin; Mikulčić, Hrvoje; Vujanović, Milan; Duić, Neven

    2018-08-15

    Low NOx burner redesign and deep air staging have been carried out to improve the poor ignition and reduce the NOx emissions in a low volatile coal fired 330 MWe boiler. Residual swirling flow in the tangentially-fired furnace caused flue gas velocity deviations at the furnace exit, leading to flow field unevenness in the SCR (selective catalytic reduction) system and poor denitrification efficiency. Numerical simulations of the velocity field in the SCR system were carried out to determine the optimal flow deflector arrangement to improve the flow field uniformity of the SCR system. A full-scale experiment was performed to investigate the effect of the low NOx combustion and SCR flow field optimization. Compared with the results before the optimization, the NOx emissions at the furnace exit decreased from 550-650 mg/Nm³ to 330-430 mg/Nm³. The sample standard deviation of the NOx emissions at the outlet section of the SCR decreased from 34.8 mg/Nm³ to 7.8 mg/Nm³. The consumption of liquid ammonia was reduced from 150-200 kg/h to 100-150 kg/h after the optimization. Copyright © 2018. Published by Elsevier Ltd.

  13. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    Full Text Available The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  14. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  15. Method optimization for non-equilibrium solid phase microextraction sampling of HAPs for GC/MS analysis

    Science.gov (United States)

    Zawadowicz, M. A.; Del Negro, L. A.

    2010-12-01

    Hazardous air pollutants (HAPs) are usually present in the atmosphere at the pptv level, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that is often prohibitive for primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of the ambient gaseous matrix, which is a cost-effective technique for selective VOC extraction that is accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give the highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.

  16. Optimization Models for Petroleum Field Exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Jonsbraaten, Tore Wiig

    1998-12-31

    This thesis presents and discusses various models for optimal development of a petroleum field. The objective of these optimization models is to maximize, under many uncertain parameters, the project's expected net present value. First, an overview of petroleum field optimization is given from the point of view of operations research. Reservoir equations for a simple reservoir system are derived, discretized and included in optimization models. Linear programming models for optimizing production decisions are discussed and extended to mixed integer programming models where decisions concerning platform, wells and production strategy are optimized. Then, optimal development decisions under uncertain oil prices are discussed. The uncertain oil price is estimated by a finite set of price scenarios with associated probabilities. The problem is one of stochastic mixed integer programming, and the solution approach is to use a scenario and policy aggregation technique developed by Rockafellar and Wets, although this technique was developed for continuous variables. Stochastic optimization problems with focus on problems with decision-dependent information discoveries are also discussed. A class of 'manageable' problems is identified and an implicit enumeration algorithm for finding the optimal decision policy is proposed. Problems involving uncertain reservoir properties but with a known initial probability distribution over possible reservoir realizations are discussed. Finally, a section on Nash equilibrium and bargaining in an oil reservoir management game discusses the pool problem arising when two lease owners have access to the same underlying oil reservoir. Because the oil tends to migrate, both lease owners have an incentive to drain oil from the competitor's part of the reservoir. The discussion is based on a numerical example. 107 refs., 31 figs., 14 tabs.

  17. A demonstration of magnetic field optimization in LHD

    Energy Technology Data Exchange (ETDEWEB)

    Murakami, S.; Yamada, H. [National Inst. for Fusion Science, Toki, Gifu (Japan); Wakasa, A. [Hokkaido Univ., Graduate School of Engineering, Sapporo, Hokkaido (JP)] [and others]

    2002-11-01

    An optimized configuration of the neoclassical transport and the energetic particle confinement to a level typical of so-called 'advanced stellarators' is found by shifting the magnetic axis position in LHD. Electron heat transport and NBI beam ion distribution are investigated in low-collisionality LHD plasma in order to study the magnetic field optimization effect on the thermal plasma transport and the energetic particle confinement. A higher electron temperature is obtained in the optimized configuration, and the transport analysis suggests a considerable effect of neoclassical transport on the electron heat transport assuming the ion-root level of radial electric field. Also a higher energetic ion distribution of NBI beam ions is observed showing the improvement of the energetic particle confinement. These obtained results support a future reactor design by magnetic field optimization in a non-axisymmetric configuration. (author)

  18. A demonstration of magnetic field optimization in LHD

    Energy Technology Data Exchange (ETDEWEB)

    Murakami, S.; Yamada, H. [National Inst. for Fusion Science, Toki, Gifu (Japan); Wakasa, A. [Hokkaido Univ., Graduate School of Engineering, Sapporo, Hokkaido (JP)] [and others]

    2002-10-01

    An optimized configuration of the neoclassical transport and the energetic particle confinement to a level typical of so-called 'advanced stellarators' is found by shifting the magnetic axis position in LHD. Electron heat transport and NBI beam ion distribution are investigated in low-collisionality LHD plasma in order to study the magnetic field optimization effect on the thermal plasma transport and the energetic particle confinement. A higher electron temperature is obtained in the optimized configuration, and the transport analysis suggests a considerable effect of neoclassical transport on the electron heat transport assuming the ion-root level of radial electric field. Also a higher energetic ion distribution of NBI beam ions is observed showing the improvement of the energetic particle confinement. These obtained results support a future reactor design by magnetic field optimization in a non-axisymmetric configuration. (author)

  19. Optimization of lift gas allocation in a gas lifted oil field as non-linear optimization problem

    Directory of Open Access Journals (Sweden)

    Roshan Sharma

    2012-01-01

    Full Text Available Proper allocation and distribution of lift gas is necessary for maximizing total oil production from a field with gas lifted oil wells. When the supply of lift gas is limited, the total available gas should be optimally distributed among the oil wells of the field such that the total production of oil from the field is maximized. This paper describes a non-linear optimization problem with constraints associated with the optimal distribution of the lift gas. A non-linear objective function is developed using a simple dynamic model of the oil field, where the decision variables represent the lift gas flow rate set points of each oil well of the field. The lift gas optimization problem is solved using the 'fmincon' solver found in MATLAB. As an alternative, and for verification, a hill climbing method is utilized for solving the optimization problem. Using both of these methods, it has been shown that after optimization the total oil production is increased by about 4%. For multiple oil wells sharing lift gas from a common source, a cascade control strategy along with a nonlinear steady state optimizer behaves as a self-optimizing control structure when the total supply of lift gas is assumed to be the only input disturbance present in the process. Simulation results show that repeated optimization performed after the first optimization under the presence of the input disturbance has no effect on the total oil production.
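
    This allocation problem can be sketched in a few lines of Python using scipy.optimize.minimize in place of MATLAB's fmincon; the concave well-response curves and the gas availability below are hypothetical stand-ins for the paper's dynamic oil-field model.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical gas-lift performance curves: oil rate of well i is a
        # concave, increasing function of its lift-gas rate q_i (scaled units).
        a = np.array([50.0, 40.0, 65.0])   # response scale of each well
        b = np.array([0.8, 0.5, 1.1])      # curvature of each well

        def total_oil(q):
            return float(np.sum(a * np.log1p(b * q)))

        q_total = 6.0                      # total lift gas available
        n_wells = len(a)
        res = minimize(lambda q: -total_oil(q),        # maximize total oil
                       x0=np.full(n_wells, q_total / n_wells),  # even split
                       bounds=[(0.0, q_total)] * n_wells,
                       constraints=[{"type": "ineq",   # sum(q) <= q_total
                                     "fun": lambda q: q_total - q.sum()}])
        print(res.x.round(3), round(-res.fun, 2))      # allocation, total oil

    Because each response curve is concave, the optimum equalizes the marginal oil gain per unit of lift gas across wells, which is why an even split is rarely optimal when the wells differ.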

  20. Sampling Design of Soil Physical Properties in a Conilon Coffee Field

    Directory of Open Access Journals (Sweden)

    Eduardo Oliveira de Jesus Santos

    Full Text Available ABSTRACT Establishing the number of samples required to determine values of soil physical properties ultimately results in optimization of labor and allows better representation of such attributes. The objective of this study was to analyze the spatial variability of soil physical properties in a Conilon coffee field and propose a soil sampling method better attuned to conditions of the management system. The experiment was performed in a Conilon coffee field in Espírito Santo state, Brazil, under a 3.0 × 2.0 × 1.0 m (4,000 plants ha-1) double spacing design. An irregular grid, with dimensions of 107 × 95.7 m and 65 sampling points, was set up. Soil samples were collected from the 0.00-0.20 m depth at each sampling point. Data were analyzed using descriptive statistics and geostatistical methods. Using statistical parameters, the adequate number of samples for analyzing the attributes under study was established, ranging from 1 to 11 sampling points. With the exception of particle density, all soil physical properties showed a spatial dependence structure best fitted by the spherical model. Establishing the number of samples and the spatial variability of soil physical properties may be useful in developing sampling strategies that minimize costs for farmers within a tolerable and predictable level of error.
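
    One common way to arrive at such sample numbers (the paper's exact procedure may differ) is the classical formula n = (t * CV / E)^2: the points needed to estimate a mean within plus or minus E percent at a chosen confidence, given the coefficient of variation CV observed in a pilot survey. A sketch with hypothetical CVs:

        from math import ceil
        from scipy.stats import t as t_dist

        def n_required(cv_percent, error_percent, confidence=0.95, n_pilot=65):
            """Classical n = (t * CV / E)^2 sample-number formula, with t taken
            from the pilot survey's degrees of freedom (65 points, as above)."""
            t_val = t_dist.ppf(1.0 - (1.0 - confidence) / 2.0, df=n_pilot - 1)
            return ceil((t_val * cv_percent / error_percent) ** 2)

        # Hypothetical CVs spanning weakly to strongly variable soil properties.
        for cv in (5.0, 10.0, 20.0):
            print(cv, n_required(cv, error_percent=10.0))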

  1. Time-optimal path planning in uncertain flow fields using ensemble method

    KAUST Repository

    Wang, Tong

    2016-01-06

    An ensemble-based approach is developed to conduct time-optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where a set of deterministic predictions is used to model and quantify uncertainty in the predictions. In the operational setting, much about the dynamics, topography and forcing of the ocean environment is uncertain, and as a result a single path produced by a model simulation has limited utility. To overcome this limitation, we rely on a finite-size ensemble of deterministic forecasts to quantify the impact of variability in the dynamics. The uncertainty of the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the optimal path by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy, and to develop insight into extensions dealing with regional or general circulation models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently to develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.

  2. Optimal sampling in damage detection of flexural beams by continuous wavelet transform

    International Nuclear Information System (INIS)

    Basu, B; Broderick, B M; Montanari, L; Spagnoli, A

    2015-01-01

    Modern measurement techniques are improving in their capability to capture spatial displacement fields occurring in deformed structures with high precision and in a quasi-continuous manner. This in turn has made the use of vibration-based damage identification methods more effective and reliable for real applications. However, practical measurement and data processing issues still present barriers to the application of these methods in identifying several types of structural damage. This paper deals with spatial Continuous Wavelet Transform (CWT) damage identification methods in beam structures, with the aim of addressing the following key questions: (i) can the cost of damage detection be reduced by down-sampling? (ii) what is the minimum number of sampling intervals required for optimal damage detection? The first three free vibration modes of a cantilever and a simply supported beam with an edge open crack are numerically simulated. A thorough parametric study is carried out by taking into account the key parameters governing the problem, including level of noise, crack depth and location, and mechanical and geometrical parameters of the beam. The results are employed to assess the optimal number of sampling intervals for effective damage detection. (paper)

  3. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed through optimal network segmentation and the modularity index, using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.

  4. Optimal updating magnitude in adaptive flat-distribution sampling.

    Science.gov (United States)

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
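
    A minimal Wang-Landau-style sketch of the schedules discussed above, assuming a toy one-dimensional potential: the updating magnitude f is halved at flat histograms early on, then pinned to the inverse-time value N/t once the two schedules cross:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy potential of mean force on a periodic 1D grid of N bins.
    N = 50
    U = 5.0 * np.cos(2 * np.pi * np.arange(N) / N)
    bias = np.zeros(N)                  # adaptive bias potential
    hist = np.zeros(N)
    x, f = 0, 1.0                       # current bin, updating magnitude

    for t in range(1, 200_001):
        y = (x + rng.choice((-1, 1))) % N               # local trial move
        dE = (U[y] + bias[y]) - (U[x] + bias[x])
        if dE <= 0 or rng.random() < np.exp(-dE):       # Metropolis acceptance
            x = y
        bias[x] += f                                    # single-bin update
        hist[x] += 1
        if f > N / t:                   # early WL stage: halve f at flatness
            if hist.min() > 0.8 * hist.mean():
                f, hist[:] = f / 2, 0.0
        else:
            f = N / t                   # late stage: inverse-time schedule

    U_est = -bias                       # bias converges to -U up to a constant
    U_est -= U_est.min()
    print("max |PMF error|:", round(np.abs(U_est - (U - U.min())).max(), 3))
    ```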

  5. Optimal design of sampling and mapping schemes in the radiometric exploration of Chipilapa, El Salvador (Geo-statistics)

    International Nuclear Information System (INIS)

    Balcazar G, M.; Flores R, J.H.

    1992-01-01

    As part of the radiometric surface exploration carried out in the geothermal field of Chipilapa, El Salvador, geo-statistical parameters were considered, starting from the variogram calculated from the field data. The maximum correlation distance of the radon samples in the different observation directions (N-S, E-W, NW-SE, NE-SW) was 121 m, which defines the monitoring grid for future prospecting in the same area. From this, an optimization (minimum cost) of the spacing of the field samples was derived by means of geo-statistical techniques, without losing detection of the anomaly. (Author)
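
    A sketch of the underlying geo-statistical computation, with synthetic 'radon' data drawn from an exponential covariance model whose practical range is set to the paper's 121 m; the empirical semivariogram then shows where spatial correlation dies out:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic radon field: 200 sample locations on a 500 m x 500 m area with
    # an exponential covariance whose practical range is 121 m.
    xy = rng.uniform(0, 500, size=(200, 2))
    true_range = 121.0
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    cov = np.exp(-3.0 * dist / true_range)
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(xy)))
    z = L @ rng.standard_normal(len(xy))

    # Empirical semivariogram: gamma(h) = 0.5 * mean[(z_i - z_j)^2] per lag bin.
    iu = np.triu_indices(len(xy), k=1)
    h, gamma = dist[iu], 0.5 * (z[iu[0]] - z[iu[1]]) ** 2
    for lo in range(0, 275, 25):
        m = (h >= lo) & (h < lo + 25)
        print(f"lag {lo:3d}-{lo + 25:3d} m: gamma = {gamma[m].mean():.2f}")
    # gamma levels off at the sill (~1) near the 121 m range; grid spacings at
    # or below that distance retain the correlation needed to map the anomaly.
    ```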

  6. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL), using alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either by linear programming (for the l1-norm) or with closed-form solutions (for the lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality on the test data, we introduce neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  7. Optimal relaxed causal sampler using sampled-data system theory

    NARCIS (Netherlands)

    Shekhawat, Hanumant; Meinsma, Gjerrit

    This paper studies the design of an optimal relaxed causal sampler using sampled data system theory. A lifted frequency domain approach is used to obtain the existence conditions and the optimal sampler. A state space formulation of the results is also provided. The resulting optimal relaxed causal

  8. Sample Adaptive Offset Optimization in HEVC

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2014-11-01

    As the next generation of video coding standard, High Efficiency Video Coding (HEVC) adopted many useful tools to improve coding efficiency. Sample Adaptive Offset (SAO) is a technique to reduce sample distortion by providing offsets to pixels in the in-loop filter. In SAO, the pixels in a Largest Coding Unit (LCU) are classified into several categories, and categories and offsets are then assigned based on Rate-Distortion Optimization (RDO) of the reconstructed pixels in the LCU. All pixels in an LCU undergo the same SAO process; however, the transform and inverse transform make the distortion of pixels at Transform Unit (TU) edges larger than the distortion inside the TU, even after deblocking filtering (DF) and SAO. The SAO categories can also be refined, since the standard classification is not appropriate in many cases. This paper proposes a TU edge offset mode and a category refinement for SAO in HEVC. Experimental results show that the two optimizations achieve −0.13 and −0.2 BD-rate gains, respectively, compared with the SAO in HEVC. The proposed algorithm using both optimizations achieves a −0.23 BD-rate gain compared with the SAO in HEVC (a 47% increase), with nearly no increase in coding time.

  9. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available, and a few that have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use for solving optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and its ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
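
    spsann itself is an R package, but the core loop is small; a minimal Python sketch of spatial simulated annealing under the MSSD criterion (toy unit-square domain, schedule parameters invented) looks like this:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Candidate evaluation grid and an initial random pattern of 15 points.
    g = np.linspace(0.0, 1.0, 40)
    grid = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
    pts = rng.uniform(0, 1, (15, 2))

    def mssd(p):
        """Mean squared shortest distance from grid nodes to the pattern."""
        d2 = ((grid[:, None, :] - p[None, :, :]) ** 2).sum(-1)
        return d2.min(axis=1).mean()

    n_iter, energy, T = 5000, mssd(pts), 1e-3
    for it in range(n_iter):
        cand = pts.copy()
        i = rng.integers(len(pts))                     # perturb one point
        reach = 0.25 * (1 - it / n_iter) + 0.01        # shrinks linearly
        cand[i] = np.clip(cand[i] + rng.uniform(-reach, reach, 2), 0, 1)
        e = mssd(cand)
        if e < energy or rng.random() < np.exp((energy - e) / T):
            pts, energy = cand, e                      # accept (maybe worse) state
        T *= 0.999                                     # exponential cooling
    print(f"final MSSD: {energy:.5f}")
    ```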

  10. On Optimal, Minimal BRDF Sampling for Reflectance Acquisition

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Jensen, Henrik Wann; Ramamoorthi, Ravi

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) is critical for rendering, and accurate material representation requires data-driven reflectance models. However, isotropic BRDFs are 3D functions, and measuring the reflectance of a flat sample can require a million incident and outgoing direction pairs, making the use of measured BRDFs impractical. In this paper, we address the problem of reconstructing a measured BRDF from a limited number of samples. We present a novel mapping of the BRDF space, allowing for extraction of descriptive principal components from measured databases, such as the MERL BRDF database. We optimize for the best sampling directions, and explicitly provide the optimal set of incident and outgoing directions in the Rusinkiewicz parameterization for n = {1, 2, 5, 10, 20} samples. Based on the principal components, we describe a method for accurately reconstructing BRDF…

  11. Generating the optimal magnetic field for magnetic refrigeration

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Insinga, Andrea Roberto; Smith, Anders

    2016-01-01

    In a magnetic refrigeration device the magnet is the single most expensive component, and therefore it is crucially important to ensure that as effective a magnetic field as possible is generated using the least amount of permanent magnet material. Here we present a method for calculating the optimal remanence distribution for any desired magnetic field. The method is based on the reciprocity theorem, which through the use of virtual magnets can be used to calculate the optimal remanence distribution. Furthermore, we present a method for segmenting a given magnet design that always results in the optimal segmentation, for any number of segments specified. These two methods are used to determine the optimal magnet design of a 12-piece, two-pole concentric cylindrical magnet for use in a continuously rotating magnetic refrigeration device.

  12. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low-complexity video acquisition via sub-Nyquist-rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block sparsity, and block-level RDO is performed by modelling the block reconstruction peak signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
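
    A loose illustration of the block-level step under one plausible reading: the (bit-depth, PSNR) training pairs are made up, and the rate side is simplified to a linear bits-per-block penalty added as a Lagrangian term before setting the derivative of the quadratic model to zero:

    ```python
    import numpy as np

    # Made-up training pairs (bit-depth, block PSNR in dB) for one block class.
    train_d = np.array([2.0, 3, 4, 5, 6, 7, 8])
    train_psnr = np.array([24.1, 28.9, 32.6, 35.4, 37.3, 38.5, 39.1])

    # Quadratic PSNR model: PSNR(d) ~ a d^2 + b d + c, with a < 0.
    a, b, c = np.polyfit(train_d, train_psnr, 2)

    n_samples = 64       # CS samples in the block
    lam = 0.05           # rate penalty [dB per transmitted bit]
    # Stationarity of J(d) = PSNR(d) - lam * n_samples * d:
    #   2 a d + b - lam * n_samples = 0
    d_opt = (lam * n_samples - b) / (2 * a)
    print(f"fit a={a:.3f}, b={b:.3f}; optimal bit-depth ~ {d_opt:.2f} "
          f"-> use {int(np.clip(round(d_opt), 1, 12))} bits")
    ```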

  13. Optimizing sampling approaches along ecological gradients

    DEFF Research Database (Denmark)

    Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel

    2016-01-01

    1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying patterns in a statistically solid and reproducible manner, given the normal restrictions in labour, time and money. However, a technical guideline about an adequate sampling design to maximize prediction success under restricted resources is lacking. This study aims at developing such a solid and reproducible guideline for sampling along gradients in all fields of ecology and science in general. 2. We conducted simulations with artificial data for five common response types known in ecology, each represented by a simple function (no response, linear, exponential, symmetric unimodal and asymmetric…

  14. Optimization of 3D Field Design

    Science.gov (United States)

    Logan, Nikolas; Zhu, Caoxiang

    2017-10-01

    Recent progress in 3D tokamak modeling is now leveraged to create a conceptual design of new external 3D field coils for the DIII-D tokamak. Using the IPEC dominant mode as a target spectrum, the Finding Optimized Coils Using Space-curves (FOCUS) code optimizes the currents and 3D geometry of multiple coils to maximize the total set's resonant coupling. The optimized coils are individually distorted in space, creating toroidal "arrays" containing a variety of shapes that often wrap around a significant poloidal extent of the machine. The generalized perturbed equilibrium code (GPEC) is used to determine optimally efficient spectra for driving total, core, and edge neoclassical toroidal viscosity (NTV) torque, and these too provide targets for the optimization of 3D coil designs. These conceptual designs represent a fundamentally new approach to 3D coil design for tokamaks targeting desired plasma physics phenomena. Optimized coil sets based on plasma response theory will be relevant to designs for future reactors or on any active machine. External coils, in particular, must be optimized for reliable and efficient fusion reactor designs. Work supported by the US Department of Energy under DE-AC02-09CH11466.

  15. Scanning SQUID microscope with an in-situ magnetization/demagnetization field for geological samples

    Science.gov (United States)

    Du, Junwei; Liu, Xiaohong; Qin, Huafeng; Wei, Zhao; Kong, Xiangyang; Liu, Qingsong; Song, Tao

    2018-04-01

    Magnetic properties of rocks are crucial for paleo-, rock- and environmental magnetism, and for magnetic material sciences. Conventional rock magnetometers deal with the bulk properties of samples, whereas a scanning microscope can map the distribution of remanent magnetization. In this study, a new scanning microscope based on a low-temperature DC superconducting quantum interference device (SQUID), equipped with an in-situ magnetization/demagnetization device, was developed. To combine an instrument as sensitive as a SQUID with high magnetizing/demagnetizing fields, the pick-up coil, the magnetization/demagnetization coils and the measurement mode of the system were optimized. The new microscope has a field sensitivity of 250 pT/√Hz at a coil-to-sample spacing of ∼350 μm, and high magnetization (0-1 T)/demagnetization (0-300 mT, 400 Hz) capabilities. With this microscope, isothermal remanent magnetization (IRM) acquisition and the corresponding alternating field (AF) demagnetization curves can be obtained for each point without transferring samples between different procedures, which could result in position deviation, waste of time, and other interferences. The newly designed SQUID microscope can thus be used to investigate the rock magnetic properties of samples at a micro-area scale, and has great potential to be an efficient tool in paleomagnetism, rock magnetism, and magnetic material studies.

  16. Fringing field optimization of hemispherical deflector analyzers using BEM and FDM

    Energy Technology Data Exchange (ETDEWEB)

    Sise, Omer, E-mail: omersise@aku.edu.t [Department of Physics, Science and Arts Faculty, Afyon Kocatepe University, 03200 Afyonkarahisar (Turkey); Ulu, Melike; Dogan, Mevlut [Department of Physics, Science and Arts Faculty, Afyon Kocatepe University, 03200 Afyonkarahisar (Turkey); Martinez, Genoveva [Department Fisica Aplicada III, Fac. de Fisica, UCM 28040-Madrid (Spain); Zouros, Theo J.M. [Department of Physics, University of Crete, P.O. Box 2208, 71003 Heraklion, Crete (Greece); TANDEM Accelerator Laboratory, Institute of Nuclear Physics, NCSR 'Demokritos', 153.10 Aghia Paraskevi, Athens (Greece)

    2010-02-15

    In this paper we present numerical modeling results for fringing field optimization of hemispherical deflector analyzers (HDAs), simulated using boundary-element and finite-difference numerical methods. Optimization of the fringing field aberrations of HDAs is performed by using a biased optical axis and an optimized entry position offset (paracentric) from the center position used in conventional HDAs. The described optimization achieves first-order focusing thus also further improving the energy resolution of HDAs.

  17. An analytical-numerical comprehensive method for optimizing the fringing magnetic field

    International Nuclear Information System (INIS)

    Xiao Meiqin; Mao Naifeng

    1991-01-01

    The criterion for optimizing the fringing magnetic field is discussed, and an analytical-numerical comprehensive method for realizing the optimization is introduced. The method consists of two parts: the analytical part calculates the field of the shims, which correct the fringing magnetic field, by using a uniform magnetization method; the numerical part carries out the full calculation of the field distribution by solving the equation for the magnetic vector potential A within the region covered by arbitrary triangular meshes, with the aid of the finite difference method and the successive over-relaxation method. On the basis of this method, the optimization of the fringing magnetic field for a large-scale electromagnetic isotope separator is completed.

  18. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and it can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on the error-similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps for a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  19. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
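
    A toy sketch of the CGS loop, assuming a Bernoulli naive-Bayes classifier from scikit-learn as a stand-in for the report's Bayesian network classifier and an invented stand-in objective over binary design variables:

    ```python
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    rng = np.random.default_rng(0)
    n_var = 20

    def expensive_objective(x):          # stand-in for a costly simulation
        return float(np.sum(x * np.arange(1, n_var + 1)) + 5.0 * x[:5].sum())

    # Seed the archive with random binary designs (minimization problem).
    X = rng.integers(0, 2, (40, n_var))
    y = np.array([expensive_objective(x) for x in X])
    evals = len(X)

    for gen in range(20):
        good = (y <= np.percentile(y, 30)).astype(int)  # label top 30% "promising"
        clf = BernoulliNB().fit(X, good)
        pool = rng.integers(0, 2, (200, n_var))         # cheap candidate pool
        p_good = clf.predict_proba(pool)[:, 1]
        chosen = pool[np.argsort(p_good)[-10:]]         # evaluate only the best 10
        y_new = np.array([expensive_objective(x) for x in chosen])
        X, y, evals = np.vstack([X, chosen]), np.concatenate([y, y_new]), evals + 10
    print(f"best objective {y.min():.0f} after {evals} expensive evaluations")
    ```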

  20. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. With the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extrema points of the metamodel and the minimum points of a density function. Increasingly accurate metamodels are thereby constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
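
    A simplified one-dimensional sketch of sequential RBF metamodeling, assuming SciPy's RBFInterpolator, the Forrester test function, and only the metamodel-extremum points (the density-function points of the paper are omitted):

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(1)
    f = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)   # Forrester function

    X = rng.uniform(0, 1, (5, 1))                  # initial sample set
    y = f(X[:, 0])
    dense = np.linspace(0, 1, 401)[:, None]        # where the metamodel is probed

    for it in range(10):
        model = RBFInterpolator(X, y, kernel='thin_plate_spline')
        x_new = float(dense[np.argmin(model(dense)), 0])   # metamodel minimizer
        if np.abs(X - x_new).min() < 1e-3:                 # avoid coincident points
            x_new = float(rng.uniform(0, 1))
        X = np.vstack([X, [[x_new]]])
        y = np.append(y, f(x_new))

    print(f"best sampled value {y.min():.3f} at x = {X[np.argmin(y), 0]:.3f} "
          "(true minimum is about -6.02 near x = 0.757)")
    ```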

  1. Optimism is universal: exploring the presence and benefits of optimism in a representative sample of the world.

    Science.gov (United States)

    Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D

    2013-10-01

    Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations. © 2012 Wiley Periodicals, Inc.

  2. The collection and field chemical analysis of water samples

    International Nuclear Information System (INIS)

    Korte, N.E.; Ealey, D.T.; Hollenbach, M.H.

    1984-01-01

    A successful water sampling program requires a clear understanding of appropriate measurement and sampling procedures in order to obtain reliable field data and representative samples. It is imperative that the personnel involved have a thorough knowledge of the limitations of the techniques being used. Though this seems self-evident, many sampling and field-chemical-analysis programs are still not properly conducted. Recognizing these problems, the Department of Energy contracted with Bendix Field Engineering Corporation through the Technical Measurements Center to develop and select procedures for water sampling and field chemical analysis at waste sites. The fundamental causes of poor field programs are addressed in this paper, largely through discussion of specific field-measurement techniques and their limitations. Recommendations for improvement, including quality-assurance measures, are also presented

  3. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  4. ROAM: A Radial-Basis-Function Optimization Approximation Method for Diagnosing the Three-Dimensional Coronal Magnetic Field

    International Nuclear Information System (INIS)

    Dalmasse, Kevin; Nychka, Douglas W.; Gibson, Sarah E.; Fan, Yuhong; Flyer, Natasha

    2016-01-01

    The Coronal Multichannel Polarimeter (CoMP) routinely performs coronal polarimetric measurements using the Fe XIII 10747 and 10798 lines, which are sensitive to the coronal magnetic field. However, inverting such polarimetric measurements into magnetic field data is a difficult task because the corona is optically thin at these wavelengths and the observed signal is therefore the integrated emission of all the plasma along the line of sight. To overcome this difficulty, we take on a new approach that combines a parameterized 3D magnetic field model with forward modeling of the polarization signal. For that purpose, we develop a new, fast and efficient, optimization method for model-data fitting: the Radial-basis-functions Optimization Approximation Method (ROAM). Model-data fitting is achieved by optimizing a user-specified log-likelihood function that quantifies the differences between the observed polarization signal and its synthetic/predicted analog. Speed and efficiency are obtained by combining sparse evaluation of the magnetic model with radial-basis-function (RBF) decomposition of the log-likelihood function. The RBF decomposition provides an analytical expression for the log-likelihood function that is used to inexpensively estimate the set of parameter values optimizing it. We test and validate ROAM on a synthetic test bed of a coronal magnetic flux rope and show that it performs well with a significantly sparse sample of the parameter space. We conclude that our optimization method is well-suited for fast and efficient model-data fitting and can be exploited for converting coronal polarimetric measurements, such as the ones provided by CoMP, into coronal magnetic field data.

  5. Focusing light through dynamical samples using fast continuous wavefront optimization.

    Science.gov (United States)

    Blochet, B; Bourdieu, L; Gigan, S

    2017-12-01

    We describe a fast continuous optimization wavefront shaping system able to focus light through dynamic scattering media. A micro-electro-mechanical-system-based spatial light modulator, a fast photodetector, and field-programmable gate array electronics are combined to implement a continuous optimization of a wavefront with a single-mode optimization rate of 4.1 kHz. The system performance is demonstrated by focusing light through colloidal solutions of TiO2 particles in glycerol with tunable temporal stability.

  6. Optimization study on the magnetic field of superconducting Halbach Array magnet

    Science.gov (United States)

    Shen, Boyang; Geng, Jianzhao; Li, Chao; Zhang, Xiuchang; Fu, Lin; Zhang, Heng; Ma, Jun; Coombs, T. A.

    2017-07-01

    This paper presents an optimization of the strength and homogeneity of the magnetic field from a superconducting Halbach array magnet. A conventional Halbach array uses a special arrangement of permanent magnets which can generate a homogeneous magnetic field. A superconducting Halbach array instead utilizes a High Temperature Superconductor (HTS) to construct an electromagnet operating below its critical temperature, which performs equivalently to the permanent-magnet-based Halbach array. The simulations of the superconducting Halbach array were carried out using the H-formulation, based on a B-dependent critical current density and a bulk approximation, within the FEM platform COMSOL Multiphysics. The optimization focused on the location of the coils, as well as on their geometry and number, on the premise of maintaining the total amount of superconductor. Results show that a Halbach-array-based superconducting magnet is able to generate a magnetic field with an intensity over 1 Tesla and improved homogeneity using proper optimization methods. A mathematical relation between these optimization parameters and the intensity and homogeneity of the magnetic field was developed.

  7. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The proposed method is shown to be more efficient because, according to the comparison results, it requires only a small number of sample points.

  8. Balanced and optimal bianisotropic particles: maximizing power extracted from electromagnetic fields

    International Nuclear Information System (INIS)

    Ra'di, Younes; Tretyakov, Sergei A

    2013-01-01

    Here we introduce the concept of ‘optimal particles’ for strong interactions with electromagnetic fields. We assume that a particle occupies a given electrically small volume in space and study the required optimal relations between the particle polarizabilities. In these optimal particles, the inclusion shape and material are chosen so that the particles extract the maximum possible power from given incident fields. It appears that for different excitation scenarios the optimal particles are bianisotropic chiral, omega, moving and Tellegen particles. The optimal dimensions of resonant canonical chiral and omega particles are found analytically. Such optimal particles have extreme properties in scattering (e.g., zero backscattering or invisibility). Planar arrays of optimal particles possess extreme properties in reflection and transmission (e.g. total absorption or magnetic-wall response), and volumetric composites of optimal particles realize, for example, such extreme materials as the chiral nihility medium. (paper)

  9. Handbook of simulation optimization

    CERN Document Server

    Fu, Michael C

    2014-01-01

    The Handbook of Simulation Optimization presents an overview of the state of the art of simulation optimization, providing a survey of the most well-established approaches for optimizing stochastic simulation models and a sampling of recent research advances in theory and methodology. Leading contributors cover such topics as discrete optimization via simulation, ranking and selection, efficient simulation budget allocation, random search methods, response surface methodology, stochastic gradient estimation, stochastic approximation, sample average approximation, stochastic constraints, variance reduction techniques, model-based stochastic search methods and Markov decision processes. This single volume should serve as a reference for those already in the field and as a means for those new to the field for understanding and applying the main approaches. The intended audience includes researchers, practitioners and graduate students in the business/engineering fields of operations research, management science,...

  10. Field-amplified sample stacking-sweeping of vitamins B determination in capillary electrophoresis.

    Science.gov (United States)

    Dziomba, Szymon; Kowalski, Piotr; Bączek, Tomasz

    2012-12-07

    A capillary electrophoretic method for the determination of five water-soluble B vitamins, along with baclofen as an internal standard, has been developed and assessed in terms of precision, accuracy, sensitivity, freedom from interference, linearity, and detection and quantification limits. An on-line preconcentration technique, namely field-amplified sample stacking (FASS)-sweeping, was employed to obtain a more sensitive analysis. The separation conditions obtained after the optimization procedure were as follows: background electrolyte (BGE), 10 mM NaH2PO4, 80 mM SDS (pH 7.25); sample matrix (SM), 10 mM NaH2PO4 (pH 4.60); uncoated fused-silica capillary (50 μm i.d. × 67 cm length); UV spectrophotometric detection at 200 nm; injection times 10 s and 30 s at 3.45 kPa; applied voltage 22 kV; temperature 22 °C. The validation parameters, namely precision, accuracy and linearity, were satisfactory. Under the optimized conditions, the method was also successfully applied to the determination of B vitamins in bacterial growth medium and in commercially available Ilex paraguariensis leaves. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Optimized preparation of urine samples for two-dimensional electrophoresis and initial application to patient samples

    DEFF Research Database (Denmark)

    Lafitte, Daniel; Dussol, Bertrand; Andersen, Søren

    2002-01-01

    OBJECTIVE: We optimized the preparation of urine samples to obtain a comprehensive map of the urinary proteins of healthy subjects, and then compared this map with the ones obtained from patient samples to show that the pattern was specific to their kidney disease. DESIGN AND METHODS: The urinary

  12. Optimization of sampling for the determination of the mean Radium-226 concentration in surface soil

    International Nuclear Information System (INIS)

    Williams, L.R.; Leggett, R.W.; Espegren, M.L.; Little, C.A.

    1987-08-01

    This report describes a field experiment that identifies an optimal method for determining compliance with the US Environmental Protection Agency's Ra-226 guidelines for soil. The primary goals were to establish practical levels of accuracy and precision in estimating the mean Ra-226 concentration of surface soil in a small contaminated region; to obtain empirical information on composite vs. individual soil sampling and on random vs. uniformly spaced sampling; and to examine the practicality of using gamma measurements to predict the average surface radium concentration and to estimate the number of soil samples required to obtain a given level of accuracy and precision. Numerous soil samples were collected on each of six sites known to be contaminated with uranium mill tailings. Three types of samples were collected on each site: 10-composite samples, 20-composite samples, and individual or post-hole samples. Ten-composite sampling is the method of choice because it yields a given level of accuracy and precision for the least cost. Gamma measurements can be used to reduce surface soil sampling on some sites. 2 refs., 5 figs., 7 tabs
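
    A back-of-the-envelope simulation of why compositing wins at a fixed analysis budget, assuming a hypothetical lognormal Ra-226 field (all numbers invented; the report's conclusion rests on its own field data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical skewed Ra-226 surface field over a contaminated region.
    site = rng.lognormal(mean=1.0, sigma=0.8, size=10_000)

    def sem(n_analyses, aliquots_per_composite, reps=2000):
        """Std. error of the estimated site mean for a given design."""
        means = [np.mean([rng.choice(site, aliquots_per_composite).mean()
                          for _ in range(n_analyses)]) for _ in range(reps)]
        return np.std(means)

    print("30 individual samples    :", round(sem(30, 1), 3))
    print("10 ten-fold composites   :", round(sem(10, 10), 3))
    print(" 5 twenty-fold composites:", round(sem(5, 20), 3))
    # Compositing averages field variability before the lab: 10 analyses of
    # 10-composites rival 100 individual analyses, which is the cost argument.
    ```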

  13. Topology optimization of nanoparticles for localized electromagnetic field enhancement

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk; Vester-Petersen, Joakim; Madsen, Søren Peder

    2017-01-01

    We consider the design of individual and periodic arrangements of metal or semiconductor nanoparticles for localized electromagnetic field enhancement, utilizing a topology optimization based numerical framework as the design tool. We aim at maximizing a function of the electromagnetic field…

  14. Field emission from optimized structure of carbon nanotube field emitter array

    International Nuclear Information System (INIS)

    Chouhan, V.; Noguchi, T.; Kato, S.

    2016-01-01

    The authors report a detailed study of the emission properties of field emitter arrays (FEAs) of micro-circular emitters of multiwall carbon nanotubes (CNTs). The FEAs were fabricated on patterned substrates prepared with an array of circular titanium (Ti) islands on titanium-nitride-coated tantalum substrates. CNTs were rooted into these Ti islands to prepare an array of circular emitters. The circular emitters were prepared with different diameters and pitches in order to optimize their structure for acquiring a high emission current. The pitch was varied from 0 to 600 μm while the diameter of the circular emitters was kept constant at 50 μm in order to optimize the pitch. For diameter optimization, the diameter was changed from 50 to 200 μm while keeping a constant edge-to-edge distance of 150 μm between the circular emitters. The FEA with a diameter of 50 μm and a pitch of 120 μm was found to be the best, achieving an emission current of 47 mA corresponding to an effective current density of 30.5 A/cm² at 7 V/μm. The excellent emission current was attributed to the good quality of the CNT rooting into the substrate and to the optimized FEA structure, which provided a high electric field over the whole 50 μm circular emitter and the best combination of strong edge effect and CNT coverage. The experimental results were confirmed with computer simulation.

  15. Field emission from optimized structure of carbon nanotube field emitter array

    Energy Technology Data Exchange (ETDEWEB)

    Chouhan, V., E-mail: vchouhan@post.kek.jp, E-mail: vijaychouhan84@gmail.com [School of High Energy Accelerator, The Graduate University for Advanced Studies, Tsukuba 305-0801 (Japan); Noguchi, T. [High Energy Accelerator Research Organization (KEK), Tsukuba 305-0801 (Japan); Kato, S. [School of High Energy Accelerator, The Graduate University for Advanced Studies, Tsukuba 305-0801 (Japan); High Energy Accelerator Research Organization (KEK), Tsukuba 305-0801 (Japan)

    2016-04-07

    The authors report a detailed study of the emission properties of field emitter arrays (FEAs) of micro-circular emitters of multiwall carbon nanotubes (CNTs). The FEAs were fabricated on patterned substrates prepared with an array of circular titanium (Ti) islands on titanium-nitride-coated tantalum substrates. CNTs were rooted into these Ti islands to prepare an array of circular emitters. The circular emitters were prepared with different diameters and pitches in order to optimize their structure for acquiring a high emission current. The pitch was varied from 0 to 600 μm while the diameter of the circular emitters was kept constant at 50 μm in order to optimize the pitch. For diameter optimization, the diameter was changed from 50 to 200 μm while keeping a constant edge-to-edge distance of 150 μm between the circular emitters. The FEA with a diameter of 50 μm and a pitch of 120 μm was found to be the best, achieving an emission current of 47 mA corresponding to an effective current density of 30.5 A/cm² at 7 V/μm. The excellent emission current was attributed to the good quality of the CNT rooting into the substrate and to the optimized FEA structure, which provided a high electric field over the whole 50 μm circular emitter and the best combination of strong edge effect and CNT coverage. The experimental results were confirmed with computer simulation.

  16. Optimal control of quantum systems: Origins of inherent robustness to control field fluctuations

    International Nuclear Information System (INIS)

    Rabitz, Herschel

    2002-01-01

    The impact of control field fluctuations on the optimal manipulation of quantum dynamics phenomena is investigated. The quantum system is driven by an optimal control field, with the physical focus on the evolving expectation value of an observable operator. A relationship is shown to exist between the system dynamics and the control field fluctuations, wherein the process of seeking optimal performance assures an inherent degree of system robustness to such fluctuations. The presence of significant field fluctuations breaks down the evolution of the observable expectation value into a sequence of partially coherent robust steps. Robustness occurs because the optimization process reduces sensitivity to noise-driven quantum system fluctuations by taking advantage of the observable expectation value being bilinear in the evolution operator and its adjoint. The consequences of this inherent robustness are discussed in the light of recent experiments and numerical simulations on the optimal control of quantum phenomena. The analysis in this paper bodes well for the future success of closed-loop quantum optimal control experiments, even in the presence of reasonable levels of field fluctuations

  17. Topology optimization based design of unilateral NMR for generating a remote homogeneous field.

    Science.gov (United States)

    Wang, Qi; Gao, Renjing; Liu, Shutian

    2017-06-01

    This paper presents a topology optimization based design method for the design of unilateral nuclear magnetic resonance (NMR) devices, with which a remote homogeneous field can be obtained. The topology optimization is realized by seeking the optimal layout of ferromagnetic materials within a given design domain. The design objective is defined as generating a sensitive magnetic field with optimal homogeneity and maximal field strength within a required region of interest (ROI). The sensitivity of the objective function with respect to the design variables is derived, and the method for solving the optimization problem is presented. A design example is provided to illustrate the utility of the design method, specifically the ability to improve the quality of the magnetic field over the required ROI by determining the optimal structural topology for the ferromagnetic poles. In both simulations and experiments, the sensitive region of the magnetic field is about two times larger than that of the reference design, validating the feasibility of the design method. Copyright © 2017. Published by Elsevier Inc.

  18. Optimal experiment design in a filtering context with application to sampled network data

    OpenAIRE

    Singhal, Harsh; Michailidis, George

    2010-01-01

    We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...

  19. Optimization of protein samples for NMR using thermal shift assays

    International Nuclear Information System (INIS)

    Kozak, Sandra; Lercher, Lukas; Karanth, Megha N.; Meijers, Rob; Carlomagno, Teresa; Boivin, Stephane

    2016-01-01

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  20. Optimization of protein samples for NMR using thermal shift assays

    Energy Technology Data Exchange (ETDEWEB)

    Kozak, Sandra [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Lercher, Lukas; Karanth, Megha N. [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Meijers, Rob [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Carlomagno, Teresa, E-mail: teresa.carlomagno@oci.uni-hannover.de [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Boivin, Stephane, E-mail: sboivin77@hotmail.com, E-mail: s.boivin@embl-hamburg.de [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany)

    2016-04-15

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  1. Improvement of Low-Frequency Sound Field Obtained by an Optimized Boundary

    Institute of Scientific and Technical Information of China (English)

    JING Lu; ZHU Xiao-tian

    2006-01-01

    An approach based on finite element analysis was introduced to improve a low-frequency sound field. The optimized scatterers on the wall redistribute the modes of the room and provide effective diffusion of the sound field. The frequency response, eigenfrequencies, spatial distribution and transient response were calculated. Experimental data were obtained with a 1:5 scale setup. The results show that the optimized treatment has a positive effect on the sound field and the improvement is obvious.

  2. Monte Carlo importance sampling optimization for system reliability applications

    International Nuclear Information System (INIS)

    Campioni, Luca; Vestrucci, Paolo

    2004-01-01

    This paper focuses on the reliability analysis of multicomponent systems by the importance sampling technique, and, in particular, it tackles the optimization aspect. A methodology based on the minimization of the variance at the component level is proposed for the class of systems consisting of independent components. The claim is that, by means of such a methodology, the optimal biasing can be achieved without resorting to the typical trial-and-error approach.
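
    A minimal sketch of component-level biasing, assuming a toy 2-out-of-3 system with independent components; the biased probability q is chosen by hand here, whereas the paper derives it from variance minimization:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    p, q, n = 1e-3, 0.3, 100_000     # true failure prob., biased prob., samples

    def system_fails(F):             # 2-out-of-3 system
        return F.sum(axis=1) >= 2

    # Plain Monte Carlo: the rare event is hardly ever seen.
    est_mc = system_fails(rng.random((n, 3)) < p).mean()

    # Importance sampling: bias component failures to q, reweight by the
    # likelihood ratio so the estimator stays unbiased.
    F = rng.random((n, 3)) < q
    w = np.prod(np.where(F, p / q, (1 - p) / (1 - q)), axis=1)
    z = system_fails(F) * w
    est_is, se_is = z.mean(), z.std() / np.sqrt(n)

    exact = 3 * p**2 * (1 - p) + p**3
    print(f"exact {exact:.3e}   plain MC {est_mc:.3e}   IS {est_is:.3e} "
          f"(std. err. {se_is:.1e})")
    ```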

  3. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  4. Remedial investigation sampling and analysis plan for J-Field, Aberdeen Proving Ground, Maryland. Volume 1: Field Sampling Plan

    Energy Technology Data Exchange (ETDEWEB)

    Benioff, P.; Biang, R.; Dolak, D.; Dunn, C.; Martino, L.; Patton, T.; Wang, Y.; Yuen, C.

    1995-03-01

    The Environmental Management Division (EMD) of Aberdeen Proving Ground (APG), Maryland, is conducting a remedial investigation and feasibility study (RI/FS) of the J-Field area at APG pursuant to the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), as amended. J-Field is within the Edgewood Area of APG in Harford County, Maryland (Figure 1.1). Since World War II, activities in the Edgewood Area have included the development, manufacture, testing, and destruction of chemical agents and munitions. These materials were destroyed at J-Field by open burning and open detonation (OB/OD). Considerable archival information about J-Field exists as a result of efforts by APG staff to characterize the hazards associated with the site. Contamination of J-Field was first detected during an environmental survey of the Edgewood Area conducted in 1977 and 1978 by the US Army Toxic and Hazardous Materials Agency (USATHAMA) (predecessor to the US Army Environmental Center [AEC]). As part of a subsequent USATHAMA environmental survey, 11 wells were installed and sampled at J-Field. Contamination at J-Field was also detected during a munitions disposal survey conducted by Princeton Aqua Science in 1983. The Princeton Aqua Science investigation involved the installation and sampling of nine wells and the collection and analysis of surficial and deep composite soil samples. In 1986, a Resource Conservation and Recovery Act (RCRA) permit (MD3-21-002-1355) requiring a basewide RCRA Facility Assessment (RFA) and a hydrogeologic assessment of J-Field was issued by the US Environmental Protection Agency (EPA). In 1987, the US Geological Survey (USGS) began a two-phased hydrogeologic assessment in which data were collected to model groundwater flow at J-Field. Soil gas investigations were conducted, several well clusters were installed, a groundwater flow model was developed, and groundwater and surface water monitoring programs were established that continue today.

  5. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip-harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip-harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300 m transects, with clip-harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip-harvest plots were co-located 4 m from the corresponding LAI transects, and had dimensions of 0.1 m by 2 m. We conducted regression analyses

  6. Optimized design of micromachined electric field mills to maximize electrostatic field sensitivity

    OpenAIRE

    Zhou, Yu; Shafai, Cyrus

    2016-01-01

    This paper describes the design optimization of a micromachined electric field mill, in relation to maximizing its output signal. The cases studied are for a perforated electrically grounded shutter vibrating laterally over sensing electrodes. It is shown that when modeling the output signal of the sensor, the differential charge on the sense electrodes when exposed to vs. visibly shielded from the incident electric field must be considered. Parametric studies of device dimensions show that t...

  7. Image-optimized Coronal Magnetic Field Models

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M., E-mail: shaela.i.jones-mecholsky@nasa.gov, E-mail: shaela.i.jonesmecholsky@nasa.gov [NASA Goddard Space Flight Center, Code 670, Greenbelt, MD 20771 (United States)

    2017-08-01

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.

  8. Image-Optimized Coronal Magnetic Field Models

    Science.gov (United States)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.

    2017-01-01

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work we presented early tests of the method which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and the effect on the outcome of the optimization of errors in localization of constraints. We find that substantial improvement in the model field can be achieved with this type of constraints, even when magnetic features in the images are located outside of the image plane.

  9. Near-field acoustic holography using sparse regularization and compressive sampling principles.

    Science.gov (United States)

    Chardon, Gilles; Daudet, Laurent; Peillot, Antoine; Ollivier, François; Bertin, Nancy; Gribonval, Rémi

    2012-09-01

    Regularization of the inverse problem is a complex issue when using near-field acoustic holography (NAH) techniques to identify the vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, alternative regularization schemes can be developed based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e., the possibility to approximate it as a weighted sum of few elementary basis functions. In particular, these techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of compressive sampling; under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly less measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques over standard Tikhonov regularization.

  10. Optimal sampling plan for clean development mechanism energy efficiency lighting projects

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2013-01-01

    Highlights: • A metering cost minimisation model is built to assist the sampling plan for CDM projects. • The model minimises the total metering cost by the determination of optimal sample size. • The required 90/10 criterion sampling accuracy is maintained. • The proposed metering cost minimisation model is applicable to other CDM projects as well. - Abstract: Clean development mechanism (CDM) project developers are always interested in achieving required measurement accuracies with the least metering cost. In this paper, a metering cost minimisation model is proposed for the sampling plan of a specific CDM energy efficiency lighting project. The problem arises from the particular CDM sampling requirement of 90% confidence and 10% precision for small-scale CDM energy efficiency projects, which is known as the 90/10 criterion. The 90/10 criterion can be met through solving the metering cost minimisation problem. All the lights in the project are classified into different groups according to uncertainties of the lighting energy consumption, which are characterised by their statistical coefficient of variance (CV). Samples from each group are randomly selected to install power meters. These meters include less expensive ones with less functionality and more expensive ones with greater functionality. The metering cost minimisation model will minimise the total metering cost through the determination of the optimal sample size in each group. The 90/10 criterion is formulated as constraints to the metering cost objective. The optimal solution to the minimisation problem will therefore minimise the metering cost whilst meeting the 90/10 criterion, and this is verified by a case study. Relationships between the optimal metering cost and the population sizes of the groups, CV values and the meter equipment cost are further explored in three simulations. The metering cost minimisation model proposed for lighting systems is applicable to other CDM projects as well.
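    A minimal sketch of the group-wise arithmetic behind such a plan, assuming the standard normal-approximation sample-size formula with finite-population correction; the group populations, CV values and meter costs below are invented, and the paper's distinction between cheaper and more capable meters is not modeled:

    ```python
    import math

    Z90 = 1.645       # z-value for 90% confidence (the "90" in the 90/10 criterion)
    PRECISION = 0.10  # 10% relative precision (the "10")

    def required_sample_size(cv, population):
        """Smallest n meeting the 90/10 criterion for one lighting group,
        with finite-population correction."""
        n0 = (Z90 * cv / PRECISION) ** 2
        return math.ceil(n0 / (1.0 + n0 / population))

    # Hypothetical groups: (population size, coefficient of variance, meter unit cost)
    groups = [(5000, 0.5, 120.0), (2000, 0.3, 60.0), (800, 0.8, 120.0)]

    total_cost = 0.0
    for pop, cv, meter_cost in groups:
        n = required_sample_size(cv, pop)
        total_cost += n * meter_cost
        print(f"population={pop:5d}  CV={cv:.1f}  sample size={n:4d}")
    print(f"minimum metering cost: {total_cost:.2f}")
    ```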

  11. Comparison of dechlorination rates for field DNAPL vs synthetic samples: effect of sample matrix

    Science.gov (United States)

    O'Carroll, D. M.; Sakulchaicharoen, N.; Herrera, J. E.

    2015-12-01

    Nanometals have received significant attention in recent years due to their ability to rapidly destroy numerous priority source zone contaminants in controlled laboratory studies. This has led to great optimism surrounding nanometal particle injection for in situ remediation. Reported dechlorination rates vary widely among different investigators. These differences have been ascribed to differences in the iron types (granular, micro, or nano-sized iron), matrix solution chemistry and the morphology of the nZVI surface. Among these, the effects of solution chemistry on rates of reductive dechlorination of various chlorinated compounds have been investigated in several short-term laboratory studies. Variables investigated include the effect of anions or groundwater solutes such as SO4-2, Cl-, NO3-, pH, natural organic matter (NOM), surfactant, and humic acid on the dechlorination reaction of various chlorinated compounds such as TCE, carbon tetrachloride (CT), and chloroform (CF). These studies have normally centered on the assessment of nZVI reactivity toward dechlorination of an isolated individual contaminant spiked into a groundwater sample under ideal conditions, with limited work conducted using real field samples. In this work, the DNAPL used for the dechlorination study was obtained from a contaminated site. This approach was selected to adequately simulate a condition where the nZVI suspension was in direct contact with DNAPL and to isolate the dechlorination activity shown by the nZVI from groundwater matrix effects. An ideal system, a "synthetic DNAPL" composed of a mixture of chlorinated compounds mimicking the composition of the actual DNAPL, was also dechlorinated to evaluate the DNAPL "matrix effect" on nZVI dechlorination activity. This approach allowed us to evaluate the effect of the presence of different types of organic compounds (volatile fatty acids and humic acids) found in the actual DNAPL on nZVI dechlorination activity. This presentation will

  12. Trapped field measurements on MgB₂ bulk samples

    Energy Technology Data Exchange (ETDEWEB)

    Koblischka, Michael; Karwoth, Thomas; Zeng, XianLin; Hartmann, Uwe [Institute of Experimental Physics, Saarland University, P. O. Box 151150, D-66041 Saarbruecken (Germany); Berger, Kevin; Douine, Bruno [University of Lorraine, GREEN, 54506 Vandoeuvre-les-Nancy (France)

    2016-07-01

    Trapped field measurements were performed on bulk, polycrystalline MgB₂ samples from different sources, with an emphasis on developing applications such as superconducting permanent magnets ('supermagnets') and electric motors. We describe the setup for the trapped field measurements and the experimental procedure (field cooling, zero-field cooling, field sweep rates). The trapped field measurements were conducted using a cryocooling system to cool the bulk samples to the desired temperatures, and a low-loss cryostat equipped with a room-temperature bore and a maximum field of ±5 T was employed to provide the external magnetic field. The superconducting coil of this cryostat is operated using a bidirectional power supply. Various sweep rates of the external magnetic field ranging between 1 mT/s and 40 mT/s were used to generate the applied field. The measurements were performed with one sample and with two samples stacked together. A maximum trapped field of 7 T was recorded. We discuss the results obtained and the problems arising due to flux jumping, which is often seen for MgB₂ samples cooled to temperatures below 10 K.

  13. Optimal orientation field to manufacture magnetostrictive composites with high magnetostrictive performance

    International Nuclear Information System (INIS)

    Dong Xufeng; Ou Jinping; Guan Xinchun; Qi Min

    2010-01-01

    Magnetostrictive properties are related to the orientation field applied during the preparation of giant magnetostrictive composites. To understand the dependence of the optimal orientation field on particle volume fraction, composites with 20%, 30% and 50% particles by volume were fabricated by distributing Terfenol-D particles in an unsaturated polyester resin under various orientation fields. Their magnetostrictive properties were tested without pre-stress at room temperature. The results indicate that as the particle volume fraction increases, the optimal orientation field increases. The main reason for this phenomenon is that the packing density of composites with a higher particle volume fraction is larger than that of composites with a lower particle content.

  14. Optimal orientation field to manufacture magnetostrictive composites with high magnetostrictive performance

    Energy Technology Data Exchange (ETDEWEB)

    Dong Xufeng, E-mail: dongxf@dlut.edu.c [School of Materials Science and Engineering, Dalian University of Technology, Dalian, Liaoning 116024 (China); Ou Jinping [School of Civil and Hydraulic Engineering, Dalian University of Technology, Dalian, Liaoning 116024 (China); School of Civil Engineering, Harbin Institute of Technology, Harbin, Heilongjiang 150090 (China); Guan Xinchun [School of Civil Engineering, Harbin Institute of Technology, Harbin, Heilongjiang 150090 (China); Qi Min [School of Materials Science and Engineering, Dalian University of Technology, Dalian, Liaoning 116024 (China)

    2010-11-15

    Magnetostrictive properties are related to the orientation field applied during the preparation of giant magnetostrictive composites. To understand the dependence of the optimal orientation field on particle volume fraction, composites with 20%, 30% and 50% particles by volume were fabricated by distributing Terfenol-D particles in an unsaturated polyester resin under various orientation fields. Their magnetostrictive properties were tested without pre-stress at room temperature. The results indicate that as the particle volume fraction increases, the optimal orientation field increases. The main reason for this phenomenon is that the packing density of composites with a higher particle volume fraction is larger than that of composites with a lower particle content.

  15. An optimization of a GPU-based parallel wind field module

    International Nuclear Information System (INIS)

    Pinheiro, André L.S.; Shirru, Roberto

    2017-01-01

    Atmospheric radionuclide dispersion systems (ARDS) are important tools to predict the impact of radioactive releases from Nuclear Power Plants and to guide the evacuation of people from affected areas. Four modules comprise an ARDS: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The slowest is the Wind Field Module, which was previously parallelized using the CUDA C language. The purpose of this work is to show the speedup gained by optimizing the already parallel code of the GPU-based Wind Field module, based on the WEST model (Extrapolated from Stability and Terrain). Due to the parallelization done in the wind field module, it was observed that some CUDA processors became idle, thus contributing to a reduction in speedup. This work proposes a way of allocating these idle CUDA processors in order to increase the speedup. An acceleration of about 4 times can be seen in the comparative case study between the regular CUDA code and the optimized CUDA code. These results are quite motivating and point out that even after a parallelization of code, a parallel code optimization should be taken into account. (author)

  16. An optimization of a GPU-based parallel wind field module

    Energy Technology Data Exchange (ETDEWEB)

    Pinheiro, André L.S.; Shirru, Roberto [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Pereira, Cláudio M.N.A., E-mail: apinheiro99@gmail.com, E-mail: schirru@lmp.ufrj.br, E-mail: cmnap@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    Atmospheric radionuclide dispersion systems (ARDS) are important tools to predict the impact of radioactive releases from Nuclear Power Plants and to guide the evacuation of people from affected areas. Four modules comprise an ARDS: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The slowest is the Wind Field Module, which was previously parallelized using the CUDA C language. The purpose of this work is to show the speedup gained by optimizing the already parallel code of the GPU-based Wind Field module, based on the WEST model (Extrapolated from Stability and Terrain). Due to the parallelization done in the wind field module, it was observed that some CUDA processors became idle, thus contributing to a reduction in speedup. This work proposes a way of allocating these idle CUDA processors in order to increase the speedup. An acceleration of about 4 times can be seen in the comparative case study between the regular CUDA code and the optimized CUDA code. These results are quite motivating and point out that even after a parallelization of code, a parallel code optimization should be taken into account. (author)

  17. Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.

    Science.gov (United States)

    Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh

    2012-02-28

    Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.

  18. OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Maja Vrkljan

    2004-12-01

    Full Text Available The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. The analytical FAAS method for determining cobalt, chromium, copper, nickel, lead and zinc content in a gabbro sample and the geochemical standard AGV-1 was applied for verification. Dissolution in mixtures of various inorganic acids was tested, as well as the Na2CO3 fusion technique. The results obtained by the different methods were compared, and dissolution in the mixture of HNO3 + HF is recommended as optimal.

  19. Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.

    Science.gov (United States)

    Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie

    2018-05-04

    Particle swarm optimization is a powerful metaheuristic population-based global optimization algorithm. However, when applied to non-separable objective functions its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in the search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant particle swarm optimization algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates a superior performance across several nonlinear, multimodal benchmark functions compared to the rotation-invariant Particle Swarm Optimization (PSO) algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field is carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also presents a better performance compared to a Genetic Algorithm optimization method in the optimization of ReaxFF-lg correction model parameters. The computational framework is implemented in a standalone C++ code that allows a straightforward development of ReaxFF reactive force fields.
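    A compact sketch of the enhancement described here, not the authors' implementation: basic PSO, with the random coefficients drawn per particle rather than per dimension (one simple route to rotation invariance), plus isotropic Gaussian mutation applied to a random subset of particles each iteration. The Rastrigin benchmark stands in for the force-field objective, and all coefficient values are conventional defaults:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rastrigin(x):  # classic multimodal benchmark, global minimum 0 at the origin
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    dim, n_particles, iters = 10, 40, 2000
    w, c1, c2 = 0.72, 1.49, 1.49          # standard PSO coefficients
    mutation_rate, sigma = 0.1, 0.5       # isotropic Gaussian mutation (assumed values)

    x = rng.uniform(-5.12, 5.12, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([rastrigin(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        # one random scalar per particle (not per dimension) keeps moves rotation-invariant
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        # isotropic Gaussian mutation on a random subset of particles
        mask = rng.random(n_particles) < mutation_rate
        x[mask] += rng.normal(0.0, sigma, (mask.sum(), dim))
        f = np.array([rastrigin(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()

    print("best value found:", pbest_f.min())
    ```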

  20. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single degree of- freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  1. Spin imaging in solids using synchronously rotating field gradients and samples

    International Nuclear Information System (INIS)

    Wind, R.A.; Yannoni, C.S.

    1983-01-01

    A method for spin-imaging in solids using nuclear magnetic resonance (NMR) spectroscopy is described. With this method, the spin density distribution of a two- or three-dimensional object such as a solid can be constructed, resulting in an image of the sample. The method lends itself to computer control to map out an image of the object. This spin-imaging method involves the steps of placing a solid sample in the rf coil field and the external magnetic field of an NMR spectrometer. A magnetic field gradient is superimposed across the sample, resulting in a varying DC field that has different values over different parts of the sample. As a result, nuclei in different parts of the sample have different resonant NMR frequencies. The sample is rotated about an axis which makes an angle of 54.7 degrees (the 'magic angle') with the static external magnetic field. The magnetic field gradient, which has a spatial distribution related to the sample spinning axis, is then rotated synchronously with the sample. Data are then collected while performing a solid-state NMR line-narrowing procedure. The next step is to change the phase relation between the sample rotation and the field gradient rotation. The data are again collected as before while the sample and field gradient are synchronously rotated. The phase relation is changed a number of times and data collected each time. The spin image of the solid sample is then reconstructed from the collected data.

  2. Site-specific waste management instruction for the field sampling organization

    International Nuclear Information System (INIS)

    Bryant, D.L.

    1997-01-01

    The Site-Specific Waste Management Instruction (SSWMI) provides guidance for the management of waste generated from field-sampling activities performed by the Environmental Restoration Contractor (ERC) Sampling Organization that are not managed as part of a project SSWMI. Generally, the waste is unused preserved groundwater trip blanks, used and expired calibration solutions, and other similar waste that cannot be returned to an ERC project for disposal. The specific waste streams addressed by this SSWMI are identified in Section 2.0. This SSWMI was prepared in accordance with BHI-EE-02, Environmental Requirements. Waste generated from field sample collection activities should be returned to the project and managed in accordance with the applicable project-specific SSWMI whenever possible. However, returning all field sample collection and associated waste to a project for disposal may not always be practical or cost effective. Therefore, the ERC field sampling organization must manage and arrange to dispose of the waste using the Bechtel Hanford, Inc. (BHI) Field Support Waste Management (FSWM) services. This SSWMI addresses those waste streams that are the responsibility of the field sampling organization to manage and make arrangements for disposal.

  3. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  4. Multiple constant multiplication optimizations for field programmable gate arrays

    CERN Document Server

    Kumm, Martin

    2016-01-01

    This work covers field programmable gate array (FPGA)-specific optimizations of circuits computing the multiplication of a variable by several constants, commonly denoted as multiple constant multiplication (MCM). These optimizations focus on low resource usage but high performance. They comprise the use of fast carry-chains in adder-based constant multiplications including ternary (3-input) adders as well as the integration of look-up table-based constant multipliers and embedded multipliers to get the optimal mapping to modern FPGAs. The proposed methods can be used for the efficient implementation of digital filters, discrete transforms and many other circuits in the domain of digital signal processing, communication and image processing. Contents: Heuristic and ILP-Based Optimal Solutions for the Pipelined Multiple Constant Multiplication Problem; Methods to Integrate Embedded Multipliers, LUT-Based Constant Multipliers and Ternary (3-Input) Adders; An Optimized Multiple Constant Multiplication Architecture ...

  5. Resolution optimization with irregularly sampled Fourier data

    International Nuclear Information System (INIS)

    Ferrara, Matthew; Parker, Jason T; Cheney, Margaret

    2013-01-01

    Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications. (paper)
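    The second (Frobenius-error) approach can be illustrated with a 1-D toy analogue: choose data weights so that the weighted backprojection point spread function approximates a discrete delta, via Tikhonov-regularized least squares. The sample locations, grid and regularization value below are assumptions, not the paper's setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Irregular 1-D Fourier sample locations (toy stand-in for SAR/MRI k-space)
    k = np.sort(rng.uniform(-20.0, 20.0, 80))
    x = np.linspace(-1.0, 1.0, 201)           # image grid on which the PSF is evaluated

    # A maps data weights to the PSF on the grid: PSF(x) = sum_m w_m exp(i k_m x)
    A = np.exp(1j * np.outer(x, k))
    delta = np.zeros(len(x)); delta[len(x) // 2] = 1.0   # ideal PSF: discrete delta

    lam = 1.0                                 # Tikhonov parameter (assumed, noise-dependent)
    w = np.linalg.solve(A.conj().T @ A + lam * np.eye(len(k)), A.conj().T @ delta)

    psf_opt = np.abs(A @ w)
    psf_uni = np.abs(A @ np.full(len(k), 1.0 / len(k)))
    print("sidelobe level, uniform :", psf_uni[np.abs(x) > 0.2].max() / psf_uni.max())
    print("sidelobe level, weighted:", psf_opt[np.abs(x) > 0.2].max() / psf_opt.max())
    ```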

  6. Ad-Hoc vs. Standardized and Optimized Arthropod Diversity Sampling

    Directory of Open Access Journals (Sweden)

    Pedro Cardoso

    2009-09-01

    Full Text Available The use of standardized and optimized protocols has recently been advocated for different arthropod taxa instead of ad-hoc sampling or sampling with protocols defined on a case-by-case basis. We present a comparison of both sampling approaches applied to spiders in a natural area of Portugal. Tests were made of their efficiency, over-collection of common species, singleton proportions, species abundance distributions, average specimen size, average taxonomic distinctness and behavior of richness estimators. The standardized protocol revealed three main advantages: (1) higher efficiency; (2) more reliable estimation of true richness; and (3) meaningful comparisons between undersampled areas.

  7. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2, without significantly degrading the dose quality
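    A rough sketch of the sampling idea, assuming hypothetical influence-matrix and boundary data (k-means stands in for whatever clustering the authors used; all shapes and rates are invented):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)

    n_voxels, n_beamlets = 5000, 120
    D = rng.random((n_voxels, n_beamlets))          # toy influence (dose-deposition) matrix
    is_boundary = rng.random(n_voxels) < 0.15       # stand-in for organ-boundary voxels

    sampling_rate = 0.10                            # keep ~10% of interior voxels
    interior = np.flatnonzero(~is_boundary)

    n_clusters = 50
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(D[interior])

    keep = list(np.flatnonzero(is_boundary))        # boundary voxels are always retained
    for c in range(n_clusters):
        members = interior[labels == c]
        if len(members) == 0:
            continue
        n_pick = max(1, int(round(sampling_rate * len(members))))
        keep.extend(rng.choice(members, size=n_pick, replace=False))

    print(f"voxels kept for optimization: {len(keep)} of {n_voxels}")
    ```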

  8. Optimization of Pockels electric field in transverse modulated optical voltage sensor

    Science.gov (United States)

    Huang, Yifan; Xu, Qifeng; Chen, Kun-Long; Zhou, Jie

    2018-05-01

    This paper investigates the possibilities of optimizing the Pockels electric field in a transverse modulated optical voltage sensor with a spherical electrode structure. The simulations show that due to the edge effect and the electric field concentrations and distortions, the electric field distributions in the crystal are non-uniform. In this case, a tiny variation in the light path leads to an integral error of more than 0.5%. Moreover, a 2D model cannot effectively represent the edge effect, so a 3D model is employed to optimize the electric field distributions. Furthermore, a new method to attach a quartz crystal to the electro-optic crystal along the electric field direction is proposed to improve the non-uniformity of the electric field. The integral error is reduced therefore from 0.5% to 0.015% and less. The proposed method is simple, practical and effective, and it has been validated by numerical simulations and experimental tests.

  9. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for physical and chemical property sets and investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize the kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimation variance at these locations. The addition of optimal samples for specific regions increased the accuracy by up to 2% for chemical and 1% for physical properties. The use of a sample grid and a medium-scale variogram as prior information for the conception of additional sampling schemes was very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.
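    The core computation, placing a handful of additional points so as to minimize the mean kriging variance, can be sketched with a simulated annealing loop. This is a toy stand-in for the extended SSA method: the exponential covariance, its parameters, the grids and the cooling schedule are all assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    sill, corr_range = 1.0, 15.0                   # exponential covariance parameters (assumed)

    def cov(h):
        return sill * np.exp(-h / corr_range)

    def mean_kriging_variance(pts, grid):
        """Average ordinary-kriging variance over the prediction grid."""
        n = len(pts)
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        K = np.zeros((n + 1, n + 1))
        K[:n, :n] = cov(d) + 1e-9 * np.eye(n)      # tiny nugget for numerical stability
        K[:n, n] = 1.0; K[n, :n] = 1.0             # Lagrange row/column (unbiasedness)
        out = 0.0
        for g in grid:
            c0 = cov(np.linalg.norm(pts - g, axis=1))
            sol = np.linalg.solve(K, np.append(c0, 1.0))
            out += sill - sol[:n] @ c0 - sol[n]    # ordinary-kriging variance at g
        return out / len(grid)

    base = np.array([(i, j) for i in range(0, 100, 25) for j in range(0, 100, 25)], float)
    extra = rng.uniform(0, 100, (5, 2))            # 5 additional samples to be placed
    grid = np.array([(i, j) for i in range(5, 100, 10) for j in range(5, 100, 10)], float)

    obj, T = mean_kriging_variance(np.vstack([base, extra]), grid), 0.01
    for _ in range(1000):
        cand = extra.copy()
        k = rng.integers(len(extra))
        cand[k] = np.clip(cand[k] + rng.normal(0, 5, 2), 0, 100)   # perturb one point
        new = mean_kriging_variance(np.vstack([base, cand]), grid)
        if new < obj or rng.random() < np.exp((obj - new) / T):
            extra, obj = cand, new
        T *= 0.999                                  # geometric cooling schedule
    print("optimized mean kriging variance:", obj)
    ```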

  10. Generalized field-splitting algorithms for optimal IMRT delivery efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, Srijit [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Sahni, Sartaj [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Li, Jonathan [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Ranka, Sanjay [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Palta, Jatinder [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States)

    2007-09-21

    Intensity-modulated radiation therapy (IMRT) uses radiation beams of varying intensities to deliver varying doses of radiation to different areas of the tissue. The use of IMRT has allowed the delivery of higher doses of radiation to the tumor and lower doses to the surrounding healthy tissue. It is not uncommon for head and neck tumors, for example, to have large treatment widths that are not deliverable using a single field. In such cases, the intensity matrix generated by the optimizer needs to be split into two or three matrices, each of which may be delivered using a single field. Existing field-splitting algorithms use a pre-specified arbitrary split line or region where the intensity matrix is split along a column, i.e., all rows of the matrix are split along the same column (with or without the overlapping of split fields, i.e., feathering). If three fields result, then the two splits are along the same two columns for all rows. In this paper we study the problem of splitting a large field into two or three subfields with the field width as the only constraint, allowing for an arbitrary overlap of the split fields, so that the total MU efficiency of delivering the split fields is maximized. Proof of optimality is provided for the proposed algorithm. An average decrease of 18.8% is found in the total MUs when compared to the split generated by a commercial treatment planning system, and one of 10% when compared to the split generated by our previously published algorithm. For more information on this article, see medicalphysicsweb.org.

  11. Optimal CCD readout by digital correlated double sampling

    Science.gov (United States)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
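    The white-noise part of the argument is easy to reproduce numerically: averaging N oversampled values on each of the reset and video levels before differencing reduces the white-noise contribution by a factor of sqrt(2/N). The sketch below assumes a plain boxcar digital filter and ignores 1/f noise, which is where the real modeling difficulty lies; all numbers are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    signal_e, read_noise = 200.0, 5.0     # electrons; white-noise component (assumed values)
    n_pixels, n_over = 20000, 32          # ADC samples taken per pixel half (reset/video)

    # Per pixel: average N oversampled ADC values on the reset level and on the
    # video (signal) level, then difference the two averages (the digital CDS).
    reset = rng.normal(0.0, read_noise, (n_pixels, n_over))
    video = rng.normal(signal_e, read_noise, (n_pixels, n_over))
    estimate = video.mean(axis=1) - reset.mean(axis=1)

    print("single-sample CDS noise :", np.std(video[:, 0] - reset[:, 0]))
    print(f"DCDS noise (N={n_over})     :", np.std(estimate))
    print("white-noise prediction  :", read_noise * np.sqrt(2 / n_over))
    ```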

  12. Recent developments on field gas extraction and sample preparation methods for radiokrypton dating of groundwater

    Science.gov (United States)

    Yokochi, Reika

    2016-09-01

    Current and foreseen population growth will lead to increased demand for freshwater, large quantities of which are stored as groundwater. The ventilation age is crucial to the assessment of groundwater resources, complementing the hydrological model approach based on hydrogeological parameters. Ultra-trace radioactive isotopes of Kr (81Kr and 85Kr) possess the ideal physical and chemical properties for groundwater dating. The recent advent of atom trap trace analysis (ATTA) has enabled determination of ultra-trace noble gas radioisotope abundances using 5-10 μL of pure Kr. Anticipated developments will enable ATTA to analyze radiokrypton isotope abundances at high sample throughput, which necessitates simple and efficient sample preparation techniques that are adaptable to various sample chemistries. Recent developments of field gas extraction devices and a simple and rapid Kr separation method at the University of Chicago are presented herein. Two field gas extraction devices optimized for different sampling conditions were recently designed and constructed, aiming at operational simplicity and portability. A newly developed Kr purification system enriches Kr by flowing a sample gas through a moderately cooled (138 K) activated charcoal column, followed by a gentle fractionating desorption. This simple process uses a single adsorbent and separates 99% of the bulk atmospheric gases from Kr without significant loss. The subsequent two stages of gas chromatographic separation and a hot Ti sponge getter further purify the Kr-enriched gas. Abundant CH4 necessitates multiple passages through one of the gas chromatographic separation columns. The presented Kr separation system has a demonstrated capability of extracting Kr with > 90% yield and 99% purity within 75 min from 1.2 to 26.8 L STP of atmospheric air with various concentrations of CH4. The apparatuses have successfully been deployed for sampling in the field and purification of groundwater samples.

  13. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
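    The flavor of such a normative benchmark can be conveyed with a Monte Carlo sketch: estimate the expected payoff of choosing the option with the higher sample mean after n draws per option, net of an assumed per-draw cost, and scan n for the maximum. The payoff distributions and cost are invented, and this is far simpler than the paper's decision-theoretic treatment:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Two hypothetical payoff distributions (means/SDs are illustrative only)
    mu, sd = np.array([1.0, 1.2]), np.array([1.0, 1.0])
    cost_per_draw = 0.005                 # assumed opportunity cost of one more sample

    def expected_net_payoff(n, reps=20000):
        a = rng.normal(mu[0], sd[0], (reps, n)).mean(axis=1)
        b = rng.normal(mu[1], sd[1], (reps, n)).mean(axis=1)
        chosen_mean = np.where(a > b, mu[0], mu[1])   # true mean of the chosen option
        return chosen_mean.mean() - cost_per_draw * 2 * n

    for n in (1, 2, 5, 10, 20, 40, 80):
        print(f"n = {n:3d}  expected net payoff = {expected_net_payoff(n):.4f}")
    ```

    The gain from extra sampling saturates while the cost grows linearly, so the scan exhibits an interior optimum, the kind of benchmark against which human sample sizes can be compared.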

  14. Magnetostatic modes in ferromagnetic samples with inhomogeneous internal fields

    Science.gov (United States)

    Arias, Rodrigo

    2015-03-01

    Magnetostatic modes in ferromagnetic samples are very well characterized and understood in samples with uniform internal magnetic fields. More recently, interest has shifted to the study of magnetization modes in ferromagnetic samples with inhomogeneous internal fields. The present work shows that, under the magnetostatic approximation and for samples of arbitrary shape and/or arbitrary inhomogeneous internal magnetic fields, the modes can be classified as elliptic or hyperbolic, and their associated frequency spectrum can be delimited. This results from the analysis of the character of the second-order partial differential equation for the magnetostatic potential under these general conditions. In general, a sample with an inhomogeneous internal field and at a given frequency may have regions of elliptic and hyperbolic character separated by a boundary. In the elliptic regions the magnetostatic modes have a smooth monotonic character (generally decaying from the surfaces, a "tunneling" behavior), and in hyperbolic regions an oscillatory wave-like character. A simple local criterion distinguishes hyperbolic from elliptic regions: the sign of a susceptibility parameter. This study shows that one may control magnetostatic modes to some extent via external fields or geometry. R.E.A. acknowledges Financiamiento Basal para Centros Cientificos y Tecnologicos de Excelencia under Project No. FB 0807 (Chile), Grant No. ICM P10-061-F by Fondo de Innovacion para la Competitividad-MINECON, and Proyecto Fondecyt 1130192.

  15. Optimization of heliostat field layout in solar central receiver systems on annual basis using differential evolution algorithm

    International Nuclear Information System (INIS)

    Atif, Maimoon; Al-Sulaiman, Fahad A.

    2015-01-01

    Highlights: • A differential evolution optimization model was developed to optimize the heliostat field. • Five optical parameters were considered in the optimization of the optical efficiency. • Optimizations using insolation-weighted and un-weighted annual efficiency are developed. • The daily averaged annual optical efficiency was calculated to be 0.5023, while the monthly was 0.5025. • The insolation-weighted daily averaged annual efficiency was 0.5634. - Abstract: Optimization of a heliostat field is an essential task in making a solar central receiver system effective, because major optical losses are associated with the heliostat field. In this study, a mathematical model was developed to optimize the heliostat field on an annual basis using differential evolution, which is an evolutionary algorithm. The heliostat field layout optimization is based on the calculation of five optical performance parameters: the mirror (heliostat) reflectivity, the cosine factor, the atmospheric attenuation factor, the shadowing and blocking factor, and the intercept factor. The model calculates all the aforementioned performance parameters at every stage of the optimization, until the best heliostat field layout based on annual performance is obtained. Two different approaches were undertaken to optimize the heliostat field layout: one optimizing the insolation-weighted annual efficiency and the other optimizing the un-weighted annual efficiency. Moreover, an alternate approach was proposed to optimize the heliostat field efficiently, in which the number of computational time steps is considerably reduced. The daily averaged annual optical efficiency was calculated to be 0.5023, compared to a monthly averaged annual optical efficiency of 0.5025. Moreover, the insolation-weighted daily averaged annual efficiency of the heliostat field was 0.5634 for Dhahran, Saudi Arabia. The code developed can be used for any other
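    As a much-simplified illustration of the approach (not the authors' model), the sketch below uses SciPy's differential evolution to place a few heliostats so as to maximize a cosine-factor proxy for annual efficiency, with a crude spacing penalty standing in for shadowing and blocking; the sun vector, tower height and all other numbers are assumptions:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    n_hel, tower_h, d_min = 8, 50.0, 8.0
    sun = np.array([0.0, -0.5, 0.8]); sun /= np.linalg.norm(sun)   # assumed sun direction

    def neg_field_efficiency(flat):
        pos = flat.reshape(n_hel, 2)                 # heliostat (x, y) positions
        to_recv = np.c_[-pos, np.full(n_hel, tower_h)]
        to_recv /= np.linalg.norm(to_recv, axis=1, keepdims=True)
        # cosine factor: cos(theta), where 2*theta is the sun-to-receiver angle
        cos_theta = np.sqrt(np.clip((1.0 + to_recv @ sun) / 2.0, 0.0, 1.0))
        # crude stand-in for shadowing/blocking: penalize heliostats packed too closely
        d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        penalty = 0.1 * np.sum(np.maximum(0.0, d_min - d.min(axis=1)))
        return -(cos_theta.mean() - penalty)

    bounds = [(-60.0, 60.0)] * (2 * n_hel)
    res = differential_evolution(neg_field_efficiency, bounds, seed=0, maxiter=300)
    print("cosine-factor proxy for field efficiency:", -res.fun)
    ```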

  16. First principles molecular dynamics without self-consistent field optimization

    International Nuclear Information System (INIS)

    Souvatzis, Petros; Niklasson, Anders M. N.

    2014-01-01

    We present a first principles molecular dynamics approach that is based on time-reversible extended Lagrangian Born-Oppenheimer molecular dynamics [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] in the limit of vanishing self-consistent field optimization. The optimization-free dynamics keeps the computational cost to a minimum and typically provides molecular trajectories that closely follow the exact Born-Oppenheimer potential energy surface. Only one single diagonalization and Hamiltonian (or Fockian) construction are required in each integration time step. The proposed dynamics is derived for a general free-energy potential surface valid at finite electronic temperatures within hybrid density functional theory. Even in the event of irregular functional behavior that may cause a dynamical instability, the optimization-free limit represents a natural starting guess for force calculations that may require a more elaborate iterative electronic ground state optimization. Our optimization-free dynamics thus represents a flexible theoretical framework for a broad and general class of ab initio molecular dynamics simulations

  17. Optimization of Transverse Oscillating Fields for Vector Velocity Estimation with Convex Arrays

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2013-01-01

    A method for making Vector Flow Images using the transverse oscillation (TO) approach on a convex array is presented. The paper presents optimization schemes for TO fields for convex probes and evaluates their performance using Field II simulations and measurements using the SARUS experimental...... from 90 to 45 degrees in steps of 15 degrees. The optimization routine changes the lateral oscillation period lx to yield the best possible estimates based on the energy ratio between positive and negative spatial frequencies in the ultrasound field. The basic equation for lx gives 1.14 mm at 40 mm...

  18. Bias Magnetic Field of Stack Giant Magnetostrictive Actuator: Design, Analysis, and Optimization

    Directory of Open Access Journals (Sweden)

    Zhaoshu Yang

    2016-01-01

    Full Text Available Many novel applications using giant magnetostrictive actuators (GMAs) require their actuators to output bidirectional strokes large enough to drive the load. In these cases, a sophisticated method to form such a sufficient bias field with minimum power and bulk consumption should be considered in the principal stage of GMA design. This paper concerns the methodology of bias field design for a specific GMA with stacked PMs and GMMs (SGMA): both loop and field models for its bias field are established; the optimization method for a given SGMA structure is outlined; a prototype is fabricated to verify the theory. Simulation and test results indicate that the bias field can be exerted more easily using the SGMA structure, and that the modeling and optimization methodology for SGMA is valid in practical design.

  19. Comparison of leach results from field and laboratory prepared samples

    International Nuclear Information System (INIS)

    Oblath, S.B.; Langton, C.A.

    1985-01-01

    The leach behavior of saltstone prepared in the laboratory agrees well with that from samples mixed in the field using the Littleford mixer. Leach rates of nitrates and cesium from the current reference formulation saltstone were compared. The laboratory samples were prepared using simulated salt solution; those in the field used Tank 50 decontaminated supernate. For both nitrate and cesium, the field and laboratory samples showed nearly identical leach rates for the first 30 to 50 days. For the remaining period of the test, the field samples showed higher leach rates with the maximum difference being less than a factor of three. Ruthenium and antimony were present in the Tank 50 supernate in known amounts. Antimony-125 was observed in the leachate and a fractional leach rate was calculated to be at least a factor of ten less than that of 137Cs. No 106Ru was observed in the leachate, and the release rate was not calculated. However, based on the detection limits for the analysis, the ruthenium leach rate must also be at least a factor of ten less than cesium. These data are the first measurements of the leach rates of Ru and Sb from saltstone. The nitrate leach rates for these samples were 5 × 10⁻⁵ grams of nitrate per square cm per day after 100 days for the laboratory samples and after 200 days for the field samples. These values are consistent with the previously measured leach rates for reference formulation saltstone. The relative standard deviation in the leach rate is about 15% for the field samples, which all were produced from one batch of saltstone, and about 35% for the laboratory samples, which came from different batches. These are the first recorded estimates of the error in leach rates for saltstone.

  20. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
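    The two-point design rule quoted above translates directly into code. The sketch below assumes a toy one-compartment plasma profile and an invented LOQ; it picks the earliest time with concentration at or above the LOQ, then the time on the declining limb with the same plasma concentration:

    ```python
    import numpy as np

    # One-compartment plasma profile with first-order absorption (parameters assumed)
    ka, ke, A = 1.5, 0.2, 10.0
    def c_plasma(t):
        return A * (np.exp(-ke * t) - np.exp(-ka * t))

    LOQ = 0.5                      # limit of quantification of the BAL assay (assumed)
    t = np.linspace(0.01, 48.0, 4800)
    c = c_plasma(t)

    # Early sample: as early as possible with concentration at or above the LOQ
    t_early = t[np.argmax(c >= LOQ)]

    # Late sample: point on the declining limb with the same plasma concentration
    t_peak = t[np.argmax(c)]
    decline = t > t_peak
    t_late = t[decline][np.argmin(np.abs(c[decline] - c_plasma(t_early)))]

    print(f"early BAL sample: t = {t_early:.2f} h  (C = {c_plasma(t_early):.2f})")
    print(f"late  BAL sample: t = {t_late:.2f} h  (C = {c_plasma(t_late):.2f})")
    ```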

  1. Topology Optimized Nanostrips for Electric Field Enhancements

    DEFF Research Database (Denmark)

    Vester-Petersen, Joakim; Christiansen, Rasmus E.; Julsgaard, Brian

    This work addresses efficiency improvements of solar cells by manipulating the spectrum of sunlight to better match the range of efficient current generation. The intrinsic transmission losses in crystalline silicon can effectively be reduced using photon upconversion in erbium ions, in which low-energy photons are converted to higher-energy photons able to bridge the band gap energy and contribute to the energy generation. The upconversion process in erbium is inefficient under natural solar irradiation, and without any electric field enhancement of the incident light the process is negligible for photovoltaic applications. However, the probability of upconversion can be increased by focusing the incident light onto the erbium ions using optimized metal nanostructures[1, 2, 3]. The aim of this work is to increase the photon upconversion yield by optimizing the design of metallic...

  2. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of optimization. However, earlier studies have drawbacks because there are three phases in the optimization loop, as well as empirical parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of problems with constraints without any empirical parameters. The criterion is able to select points located in a feasible region with high model uncertainty as well as points along the boundary of the constraint at the lowest objective value. The mean squared error determines which criterion is more dominant among the infill sampling criterion and the boundary sampling criterion. Also, the method guarantees the accuracy of the surrogate model because the sample points are not located within extremely small regions as in super-EGO. The performance of the proposed method, such as the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
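    One plausible reading of the united criterion can be sketched with Gaussian-process surrogates: score candidates either by model uncertainty inside the predicted feasible region (infill) or by closeness to the constraint boundary at low predicted objective (boundary), and let the larger mean model error decide which score governs. The toy objective, constraint, kernels and switching rule below are assumptions, not the authors' formulation:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(6)

    f = lambda x: (x - 0.6) ** 2             # toy objective
    g = lambda x: 0.4 - x                     # toy constraint, feasible where g(x) <= 0

    X = rng.uniform(0, 1, 8).reshape(-1, 1)   # initial design
    gp_f = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-8).fit(X, f(X).ravel())
    gp_g = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-8).fit(X, g(X).ravel())

    cand = np.linspace(0, 1, 401).reshape(-1, 1)
    mu_f, sd_f = gp_f.predict(cand, return_std=True)
    mu_g, sd_g = gp_g.predict(cand, return_std=True)

    # Infill criterion: high model uncertainty inside the (predicted) feasible region
    infill = np.where(mu_g <= 0, sd_f, 0.0)
    # Boundary criterion: near the constraint boundary, at low predicted objective
    boundary = np.exp(-np.abs(mu_g) / (sd_g + 1e-9)) * (mu_f.max() - mu_f)

    # Let the larger mean model error decide which criterion dominates
    score = infill if sd_f.mean() > sd_g.mean() else boundary
    x_next = cand[np.argmax(score)]
    print("next sample point:", x_next)
    ```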

  3. Optimization of the two-sample rank Neyman-Pearson detector

    Science.gov (United States)

    Akimov, P. S.; Barashkov, V. M.

    1984-10-01

    The development of optimal algorithms concerned with rank considerations in the case of finite sample sizes involves considerable mathematical difficulties. The present investigation provides results related to the design and analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ..., xn in the experimental communications channel. The rank of an observation is calculated on the basis of relations between x and the variable y, representing interference. Attention is given to conditions in the absence of a signal, the probability of detection of an arriving signal, details regarding the utilization of the Neyman-Pearson criterion, the scheme of an optimal rank, multichannel, incoherent detector, and an analysis of the detector.

  4. Design of Field Experiments for Adaptive Sampling of the Ocean with Autonomous Vehicles

    Science.gov (United States)

    Zheng, H.; Ooi, B. H.; Cho, W.; Dao, M. H.; Tkalich, P.; Patrikalakis, N. M.

    2010-05-01

    Due to the highly non-linear and dynamical nature of oceanic phenomena, the predictive capability of various ocean models depends on the availability of operational data. A practical method to improve the accuracy of the ocean forecast is to use a data assimilation methodology to combine in-situ measured and remotely acquired data with numerical forecast models of the physical environment. Autonomous surface and underwater vehicles with various sensors are economic and efficient tools for exploring and sampling the ocean for data assimilation; however there is an energy limitation to such vehicles, and thus effective resource allocation for adaptive sampling is required to optimize the efficiency of exploration. In this paper, we use physical oceanography forecasts of the coastal zone of Singapore for the design of a set of field experiments to acquire useful data for model calibration and data assimilation. The design process of our experiments relied on the oceanography forecast including the current speed, its gradient, and vorticity in a given region of interest for which permits for field experiments could be obtained and for time intervals that correspond to strong tidal currents. Based on these maps, resources available to our experimental team, including Autonomous Surface Craft (ASC) are allocated so as to capture the oceanic features that result from jets and vortices behind bluff bodies (e.g., islands) in the tidal current. Results are summarized from this resource allocation process and field experiments conducted in January 2009.

  5. Supersonic acoustic intensity with statistically optimized near-field acoustic holography

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Jacobsen, Finn

    2011-01-01

    The concept of supersonic acoustic intensity was introduced some years ago for estimating the fraction of the flow of energy radiated by a source that propagates to the far field. It differs from the usual (active) intensity by excluding the near-field energy resulting from evanescent waves...... to the information provided by the near-field acoustic holography technique. This study proposes a version of the supersonic acoustic intensity applied to statistically optimized near-field acoustic holography (SONAH). The theory, numerical results and an experimental study are presented. The possibility of using...

  6. Optimized design of micromachined electric field mills to maximize electrostatic field sensitivity

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2016-07-01

    Full Text Available This paper describes the design optimization of a micromachined electric field mill, in relation to maximizing its output signal. The cases studied are for a perforated electrically grounded shutter vibrating laterally over sensing electrodes. It is shown that when modeling the output signal of the sensor, the differential charge on the sense electrodes when exposed to vs. visibly shielded from the incident electric field must be considered. Parametric studies of device dimensions show that the shutter thickness and its spacing from the underlying electrodes should be minimized as these parameters very strongly affect the MEFM signal. Exploration of the shutter perforation size and sense electrode width indicate that the best MEFM design is one where shutter perforation widths are a few times larger than the sense electrode widths. Keywords: MEFM, Finite element method, Electric field measurement, MEMS, Micromachining

  7. Sample cell for in-field X-ray diffraction experiments

    Directory of Open Access Journals (Sweden)

    Viktor Höglin

    2015-01-01

    Full Text Available A sample cell making it possible to perform synchrotron radiation X-ray powder diffraction experiments in a magnetic field of 0.35 T has been constructed. The device is an add-on to an existing sample cell and contains a strong permanent magnet of NdFeB type. Experiments have shown that the setup works satisfactorily, making it possible to perform in-field measurements.

  8. Plasma treatment of bulk niobium surface for superconducting rf cavities: Optimization of the experimental conditions on flat samples

    Directory of Open Access Journals (Sweden)

    M. Rašković

    2010-11-01

    Full Text Available Accelerator performance, in particular the average accelerating field and the cavity quality factor, depends on the physical and chemical characteristics of the superconducting radio-frequency (SRF) cavity surface. Plasma-based surface modification provides an excellent opportunity to eliminate nonsuperconductive pollutants in the penetration-depth region and to remove the mechanically damaged surface layer, which improves the surface roughness. Here we show that plasma treatment of bulk niobium (Nb) presents an alternative surface preparation method to the commonly used buffered chemical polishing and electropolishing methods. We have optimized the experimental conditions in the microwave glow discharge system and studied their influence on the Nb removal rate on flat samples. We have achieved an etching rate of 1.7 μm/min using only 3% chlorine in the reactive mixture. Combining a fast etching step with a moderate one, we have improved the surface roughness without exposing the sample surface to the environment. We intend to apply the optimized experimental conditions to the preparation of single-cell cavities, pursuing the improvement of their rf performance.

  9. Generalized filtering of laser fields in optimal control theory: application to symmetry filtering of quantum gate operations

    International Nuclear Information System (INIS)

    Schroeder, Markus; Brown, Alex

    2009-01-01

    We present a modified version of a previously published algorithm (Gollub et al 2008 Phys. Rev. Lett. 101 073002) for obtaining an optimized laser field with more general restrictions on the search space of the optimal field. The modification leads to enforcement of the constraints on the optimal field while maintaining good convergence behaviour in most cases. We demonstrate the general applicability of the algorithm by imposing constraints on the temporal symmetry of the optimal fields. The temporal symmetry is used to reduce the number of transitions that have to be optimized for quantum gate operations that involve inversion (NOT gate) or partial inversion (Hadamard gate) of the qubits in a three-dimensional model of ammonia.

  10. Communication: Multiple atomistic force fields in a single enhanced sampling simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hoang Viet, Man [Department of Physics, North Carolina State University, Raleigh, North Carolina 27695-8202 (United States); Derreumaux, Philippe, E-mail: philippe.derreumaux@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS, Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Institut Universitaire de France, 103 Bvd Saint-Germain, 75005 Paris (France); Nguyen, Phuong H., E-mail: phuong.nguyen@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS, Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France)

    2015-07-14

    The main concerns of biomolecular dynamics simulations are the convergence of the conformational sampling and the dependence of the results on the force fields. While the first issue can be addressed by employing enhanced sampling techniques such as simulated tempering or replica exchange molecular dynamics, repeating these simulations with different force fields is very time consuming. Here, we propose an automatic method that includes different force fields in a single advanced sampling simulation. Conformational sampling using three all-atom force fields is enhanced by simulated tempering; by formulating the weight parameters of the simulated tempering method in terms of the energy fluctuations, the system is able to perform a random walk in both temperature and force-field space. The method is first demonstrated on a 1D system and then validated by the folding of the 10-residue chignolin peptide in explicit water.
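
    The random walk in the joint (temperature, force-field) space can be illustrated with a toy Metropolis scheme. The sketch below uses three harmonic "force fields" and analytic free energies for the tempering weights; in the paper the weights are instead formulated in terms of the measured energy fluctuations, so everything here is an illustrative assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    betas = 1.0 / np.linspace(1.0, 2.0, 8)    # inverse-temperature ladder
    springs = np.array([1.0, 1.5, 2.0])       # three toy "force fields"

    def energy(x, k):
        return 0.5 * springs[k] * x * x

    # Tempering weights chosen as -ln Z of each (T, force-field) state so that
    # all states are visited roughly uniformly (constant terms dropped).
    weights = np.array([[0.5 * np.log(b * s) for s in springs] for b in betas])

    x, t, k = 0.0, 0, 0
    visits = np.zeros((len(betas), len(springs)))
    for step in range(20000):
        xn = x + rng.normal(0.0, 0.5)         # ordinary coordinate move
        if np.log(rng.random()) < -betas[t] * (energy(xn, k) - energy(x, k)):
            x = xn
        tn = np.clip(t + rng.choice([-1, 1]), 0, len(betas) - 1)
        kn = rng.integers(len(springs))       # propose any force field
        dlog = (-betas[tn] * energy(x, kn) + weights[tn, kn]) \
             - (-betas[t] * energy(x, k) + weights[t, k])
        if np.log(rng.random()) < dlog:       # simulated-tempering acceptance
            t, k = tn, kn
        visits[t, k] += 1
    print(visits / visits.sum())              # should be roughly uniform
    ```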

  11. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    Science.gov (United States)

    Harte, Philip T.

    2017-01-01

    A common assumption with low-flow groundwater sampling is that formation water is captured once inflow from the high hydraulic conductivity part of the screened formation has had time to travel vertically in the well to the pump intake. Therefore, the length of time needed for adequate purging prior to sample collection (called optimal purge duration) is controlled by the in-well, vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time–volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that the computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of the arrival time of formation water, as has been postulated, then in-well, vertical flow may be an important factor at wells where low-flow sampling is the sampling method of choice.

  12. Optimal design of an automotive magnetorheological brake considering geometric dimensions and zero-field friction heat

    International Nuclear Information System (INIS)

    Nguyen, Q H; Choi, S B

    2010-01-01

    This paper presents an optimal design of a magnetorheological (MR) brake for a middle-sized passenger car which can replace a conventional hydraulic disc-type brake. In the optimization, the required braking torque, the temperature due to zero-field friction of the MR fluid, the mass of the brake system and all significant geometric dimensions are considered. After describing the configuration, the braking torque of the proposed MR brake is derived on the basis of the field-dependent Bingham and Herschel–Bulkley rheological models of the MR fluid. The optimal design of the MR brake is then analyzed taking into account available space, mass, braking torque and the steady heat generated by the zero-field friction torque of the MR brake. An optimization procedure based on finite element analysis integrated with an optimization tool is proposed to obtain the optimal geometric dimensions of the MR brake. Based on the proposed procedure, optimal solutions of single and multiple disc-type MR brakes featuring different types of MR fluid are achieved. From the results, the most effective MR brake for the middle-sized passenger car is identified and some discussion of the performance improvement of the optimized MR brake is presented.

  13. An Optimal Electric Dipole Antenna Model and Its Field Propagation

    Directory of Open Access Journals (Sweden)

    Yidong Xu

    2016-01-01

    Full Text Available An optimal electric dipole antenna model is presented and analyzed, based on the hemispherical grounding equivalent model and the superposition principle. The paper also presents a full-wave electromagnetic simulation of the electromagnetic field propagation in a layered conducting medium excited by horizontal electric dipole antennas. The optimum frequency for field transmission at different depths is derived and verified by experimental results, in comparison with previously reported simulations of a digital wireless Through-The-Earth communication system. The experimental results demonstrate that the dipole antenna grounding impedance and the output power can be efficiently reduced by using the optimal electric dipole antenna model and operating at the optimum frequency, at vertical transmission depths of up to 300 m beneath the surface of the earth.

  14. Search for life on Mars in surface samples: Lessons from the 1999 Marsokhod rover field experiment

    Science.gov (United States)

    Newsom, Horton E.; Bishop, J.L.; Cockell, C.; Roush, T.L.; Johnson, J. R.

    2001-01-01

    The Marsokhod 1999 field experiment in the Mojave Desert included a simulation of a rover-based sample selection mission. As part of this mission, a test was made of strategies and analytical techniques for identifying past or present life in environments expected to be present on Mars. A combination of visual clues from high-resolution images and the detection of an important biomolecule (chlorophyll) with visible/near-infrared (NIR) spectroscopy led to the successful identification of a rock with evidence of cryptoendolithic organisms. The sample was identified in high-resolution images (3 times the resolution of the Imager for Mars Pathfinder camera) on the basis of a green tinge and textural information suggesting the presence of a thin, partially missing exfoliating layer revealing the organisms. The presence of chlorophyll bands in similar samples was observed in visible/NIR spectra of samples in the field and later confirmed in the laboratory using the same spectrometer. Raman spectroscopy in the laboratory, simulating a remote measurement technique, also detected evidence of carotenoids in samples from the same area. Laboratory analysis confirmed that the subsurface layer of the rock is inhabited by a community of coccoid Chroococcidiopsis cyanobacteria. The identification of minerals in the field, including carbonates and serpentine, that are associated with aqueous processes was also demonstrated using the visible/NIR spectrometer. Other lessons learned that are applicable to future rover missions include the benefits of web-based programs for target selection and for daily mission planning and the need for involvement of the science team in optimizing image compression schemes based on the retention of visual signature characteristics. Copyright 2000 by the American Geophysical Union.

  15. Portable field water sample filtration unit

    International Nuclear Information System (INIS)

    Hebert, A.J.; Young, G.G.

    1977-01-01

    A lightweight, backpackable, field-tested filtration unit is described. The unit is easily cleaned without cross contamination at the part-per-billion level and allows rapid filtration of boiling-hot and sometimes muddy water. The filtration results in samples that are free of bacteria and particulates and that resist algae growth even after storage for months. 3 figures

  16. Topology Optimization of a High-Temperature Superconducting Field Winding of a Synchronous Machine

    DEFF Research Database (Denmark)

    Pozzi, Matias; Mijatovic, Nenad; Jensen, Bogi Bech

    2013-01-01

    This paper presents topology optimization (TO) of the high-temperature superconductor (HTS) field winding of an HTS synchronous machine. The TO problem is defined in order to find the minimum HTS material usage for a given HTS synchronous machine design. Optimization is performed using a modified genetic algorithm with local optimization search based on on/off sensitivity analysis. The results show an optimal HTS coil distribution, achieving compact designs with a maximum of approximately 22% of the available space for the field winding occupied with HTS tape. In addition, this paper describes potential HTS savings which could be achieved using multiple power supplies for the excitation of the machine. Using the TO approach combined with two excitation currents, an additional HTS saving of 9.1% can be achieved.

  17. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in the life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method, derived from the item count technique, that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum-variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. The theoretical considerations are integrated with a number of simulation studies, based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
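
    As a concrete illustration of optimal allocation under stratified sampling, the classical Neyman rule allocates the sample to strata in proportion to N_h * S_h (stratum size times stratum standard deviation), which minimizes the variance of the estimated mean for a fixed total sample size. The sketch below is a generic textbook computation with made-up stratum figures, not the IST-specific allocation derived in the article.

    ```python
    import numpy as np

    def neyman_allocation(n_total, stratum_sizes, stratum_sds):
        """Optimal (Neyman) allocation: n_h proportional to N_h * S_h."""
        N_h = np.asarray(stratum_sizes, float)
        S_h = np.asarray(stratum_sds, float)
        share = N_h * S_h / (N_h * S_h).sum()
        # Rounded; adjust by +/-1 if the total must be matched exactly.
        return np.rint(n_total * share).astype(int)

    # Three strata (e.g. faculties in a student survey); sizes and guessed
    # standard deviations of the sensitive variable are hypothetical.
    print(neyman_allocation(400, [5000, 3000, 2000], [4.0, 9.0, 15.0]))
    ```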

  18. An effective parameter optimization technique for vibration flow field characterization of PP melts via LS-SVM combined with SALS in an electromagnetism dynamic extruder

    Science.gov (United States)

    Xian, Guangming

    2018-03-01

    A method for predicting the optimal vibration-field parameters by least squares support vector machine (LS-SVM) is presented in this paper. One convenient and commonly used technique for characterizing the vibration flow field of polymer melts is small angle light scattering (SALS) in a visualized slit die of an electromagnetism dynamic extruder. The optimal values of the vibration frequency, the vibration amplitude, and the maximum light-intensity projection area can be obtained by using LS-SVM for prediction. To illustrate this method and show its validity, polypropylene (PP) is used as the flowing material, and fifteen samples are tested at a screw rotation speed of 36 rpm. This paper first describes the SALS apparatus used to perform the experiments, then gives the theoretical basis of this new method, and finally details the experimental results for parameter prediction of the vibration flow field. It is demonstrated that it is possible to use the SALS method and obtain detailed information on the optimal parameters of the vibration flow field of PP melts by LS-SVM.
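
    A minimal LS-SVM regression sketch: training reduces to solving one linear system in the dual variables, which is what makes the method convenient for predicting optimal vibration-field parameters from a small number of SALS measurements. The RBF kernel, the hyperparameters, and the toy (frequency, amplitude) data are assumptions for illustration.

    ```python
    import numpy as np

    def lssvm_train(X, y, gamma=10.0, sigma=1.0):
        """Solve the LS-SVM dual system [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2.0 * sigma**2))          # RBF kernel matrix
        n = len(y)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        return sol[0], sol[1:]                      # bias b, coefficients a

    def lssvm_predict(X, a, b, x_new, sigma=1.0):
        k = np.exp(-((X - x_new) ** 2).sum(-1) / (2.0 * sigma**2))
        return k @ a + b

    # Hypothetical samples: (vibration frequency [Hz], amplitude [mm]) ->
    # maximum light-intensity projection area (arbitrary units).
    X = np.array([[5.0, 0.10], [10.0, 0.20], [15.0, 0.30], [20.0, 0.40]])
    y = np.array([1.2, 2.3, 3.1, 3.0])
    b, a = lssvm_train(X, y)
    print(lssvm_predict(X, a, b, np.array([12.0, 0.25])))
    ```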

  19. Phase-Field Relaxation of Topology Optimization with Local Stress Constraints

    DEFF Research Database (Denmark)

    Stainko, Roman; Burger, Martin

    2006-01-01

    We introduce a new relaxation scheme for structural topology optimization problems with local stress constraints, based on a phase-field method. In the basic formulation we have a PDE-constrained optimization problem, where the finite element and design analysis are solved simultaneously...... inequality constraints. We discretize the problem by finite elements and solve the arising finite-dimensional programming problems by a primal-dual interior point method. Numerical experiments for problems with local stress constraints based on different criteria indicate the success and robustness......

  20. Penumbra modifier for optimal electron fields combination

    International Nuclear Information System (INIS)

    ElSherbini, N.; Hejazy, M.A.; Khalil, W.

    2003-01-01

    Abutment of two or more electron fields to irradiate extended areas may lead to significant dose inhomogeneities in the junction region. This study describes the geometric and dosimetric characteristics of a device developed to modify the penumbra of an electron beam and thereby improve dose uniformity in the overlap region when fields are abutted. The device is a Lipowitz-metal block placed on top of the insertion plate of the electron applicator and positioned to stop part of the electron beam on one side of the field abutment. The air-scattered electrons beyond the block increase the penumbra width from about 1.4 to 2.7-4.3 cm at SSD 100 cm; the modified penumbra is broad and almost linear at all depths for the 6, 8, and 15 MeV electron beams used. Film dosimetry was used to obtain profiles and iso-dose distributions of single modified beams and matched fields of 6, 10, and 15 MeV. A Wellhofer dosimetry system was used to obtain the beam profiles and iso-dose distributions of single modified beams needed for the CADPLAN treatment planning system, which was used to optimize and compare the skin gaps and to quantify the dose uniformity in the junction region for both modified and non-modified beams. Results are presented for various field configurations. Without the penumbra modifier, a lateral setup error of 2-3 mm may introduce dose variations of 20% or more in the junction region. Similar setup errors cause less than 5% dose variation when the penumbra modifier is used to match the fields.

  1. The optimization of the electrostatic field inside the ZEUS forward drift chambers: Calculations and measurements

    International Nuclear Information System (INIS)

    Dobberstein, M.P.; Krawczyk, F.; Schaefer-Jotter, M.

    1990-11-01

    The electrostatic field inside small drift cells generally shows edge effects which are not negligible. These are usually corrected by field-shaping wires or strips. The operating voltages of the field-shaping electrodes have to be adjusted to maximize the field homogeneity. We present the underlying ideas of such an optimization procedure for the cells of the ZEUS forward drift chambers. Using the finite difference code PROFI, the optimization can be performed automatically by multiple solutions of the Poisson equation. An experimental verification of the optimal voltages was carried out by measuring the gas amplifications at the six sense wires. Modifications of the drift-cell geometry were necessary for calibration measurements with a laser beam. This caused additional distortions of the electrostatic field. Their influence was calculated using the MAFIA code, which allows open boundary conditions to be included. (orig.)

  2. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. A comparison between these two models was then carried out. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge exactly, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining a sampling configuration and mapping the spatial distribution of soil organic matter with low cost and high efficiency.
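
    The core of such a spatial-configuration optimization can be sketched as a Metropolis-style simulated annealing loop over candidate locations (random points here, standing in for sites along the road network), minimizing a simple coverage criterion. The criterion, cooling schedule, and data are illustrative assumptions, not the paper's exact objective.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def coverage_cost(sample, coords):
        """Mean distance from every candidate location to its nearest sampled
        location; smaller values mean better spatial coverage."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, sample, :], axis=-1)
        return d.min(axis=1).mean()

    def anneal(coords, n_samples, n_iter=5000, T=1.0, cooling=0.999):
        sample = rng.choice(len(coords), n_samples, replace=False)
        cost = coverage_cost(sample, coords)
        for _ in range(n_iter):
            trial = sample.copy()
            trial[rng.integers(n_samples)] = rng.integers(len(coords))
            if len(np.unique(trial)) < n_samples:
                continue                              # reject duplicate sites
            c = coverage_cost(trial, coords)
            if c < cost or rng.random() < np.exp((cost - c) / T):
                sample, cost = trial, c               # Metropolis acceptance
            T *= cooling
        return sample, cost

    coords = rng.random((500, 2))   # candidate sites, e.g. along the road network
    print(anneal(coords, n_samples=13)[1])
    ```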

  3. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of hydrological simulation at large scale and high precision has elaborated the spatial descriptions and hydrological behaviors. Meanwhile, this trend has been accompanied by increases in model complexity and in the number of parameters, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms, which rely on iterative evolution, show better convergence speed and optimum-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain the parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
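
    In GLUE, each sampled parameter set is classified as behavioural or non-behavioural against a likelihood threshold, and the behavioural sets are weighted to form predictive bounds; the paper's contribution is to generate those parameter sets with GA/DE/SCE-type optimizers instead of plain Monte Carlo. A minimal sketch of the weighting step, using Nash-Sutcliffe efficiency as the informal likelihood (an assumption):

    ```python
    import numpy as np

    def glue_weights(simulated, observed, threshold=0.5):
        """simulated: (n_parameter_sets, n_timesteps) model outputs.
        Returns the behavioural mask and normalized likelihood weights."""
        ss_res = ((simulated - observed) ** 2).sum(axis=1)
        ss_tot = ((observed - observed.mean()) ** 2).sum()
        nse = 1.0 - ss_res / ss_tot              # Nash-Sutcliffe efficiency
        behavioural = nse > threshold            # GLUE behavioural threshold
        w = np.where(behavioural, nse, 0.0)
        return behavioural, w / w.sum()

    rng = np.random.default_rng(5)
    obs = np.sin(np.linspace(0, 6, 50)) + 1.5
    sims = obs + rng.normal(0, 0.3, size=(200, 50))   # stand-in for model runs
    mask, w = glue_weights(sims, obs)
    print(mask.sum(), "behavioural sets")
    ```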

  4. Long-Term Ecological Monitoring Field Sampling Plan for 2007

    International Nuclear Information System (INIS)

    T. Haney; R. VanHorn

    2007-01-01

    This field sampling plan describes the field investigations planned for the Long-Term Ecological Monitoring Project at the Idaho National Laboratory Site in 2007. This plan and the Quality Assurance Project Plan for Waste Area Groups 1, 2, 3, 4, 5, 6, 7, 10, and Removal Actions constitute the sampling and analysis plan supporting long-term ecological monitoring sampling in 2007. The data collected under this plan will become part of the long-term ecological monitoring data set that is being collected annually. The data will be used to determine the requirements for the subsequent long-term ecological monitoring. This plan guides the 2007 investigations, including sampling, quality assurance, quality control, analytical procedures, and data management. As such, this plan will help to ensure that the resulting monitoring data will be scientifically valid, defensible, and of known and acceptable quality.

  5. Long-Term Ecological Monitoring Field Sampling Plan for 2007

    Energy Technology Data Exchange (ETDEWEB)

    T. Haney; R. VanHorn

    2007-07-31

    This field sampling plan describes the field investigations planned for the Long-Term Ecological Monitoring Project at the Idaho National Laboratory Site in 2007. This plan and the Quality Assurance Project Plan for Waste Area Groups 1, 2, 3, 4, 5, 6, 7, 10, and Removal Actions constitute the sampling and analysis plan supporting long-term ecological monitoring sampling in 2007. The data collected under this plan will become part of the long-term ecological monitoring data set that is being collected annually. The data will be used to determine the requirements for the subsequent long-term ecological monitoring. This plan guides the 2007 investigations, including sampling, quality assurance, quality control, analytical procedures, and data management. As such, this plan will help to ensure that the resulting monitoring data will be scientifically valid, defensible, and of known and acceptable quality.

  6. Optimized Field Sampling and Monitoring of Airborne Hazardous Transport Plumes; A Geostatistical Simulation Approach

    International Nuclear Information System (INIS)

    Chen, DI-WEN

    2001-01-01

    Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often are days post-incident and rely on hazardous manned air-vehicle measurements. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry lightweight detectors and weather instrumentation to measure the conditions during and after plume release. This requires a cooperative, computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives in an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information, collected in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images, based on existing "hard" experimental data and "soft" preliminary transport modeling results for the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information

  7. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameters s and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE), or parallel tempering, is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter-choice problem can be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  8. Simultaneous beam sampling and aperture shape optimization for SPORT.

    Science.gov (United States)

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck and a prostate case
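
    The column-generation component can be illustrated on a toy problem: stations are added greedily, one at a time, whichever most reduces the plan error after reweighting, until the improvement saturates. Here a least-squares reweighting stands in for the subgradient and pattern-search refinements, and the random "station dose templates" are purely illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    target = np.ones(20)               # idealized dose prescription
    stations = rng.random((50, 20))    # candidate station dose templates

    def plan_error(idx, w):
        return np.linalg.norm(stations[idx].T @ w - target)

    selected, prev_err = [], np.inf
    while True:
        best = None
        for c in set(range(len(stations))) - set(selected):
            idx = selected + [c]
            # Reweight all selected stations (stand-in for the subgradient step).
            w, *_ = np.linalg.lstsq(stations[idx].T, target, rcond=None)
            err = plan_error(idx, w)
            if best is None or err < best[0]:
                best = (err, c)
        if best is None or best[0] > prev_err - 1e-4:
            break                      # plan quality improvement saturated
        prev_err = best[0]
        selected.append(best[1])       # add the most beneficial station
    print(selected, prev_err)
    ```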

  9. Simultaneous beam sampling and aperture shape optimization for SPORT

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Ye, Yinyu [Department of Management Science and Engineering, Stanford University, Stanford, California 94305 (United States)

    2015-02-15

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  10. Simultaneous beam sampling and aperture shape optimization for SPORT

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu

    2015-01-01

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  11. An optimized field coverage planning approach for navigation of agricultural robots in fields involving obstacle areas

    DEFF Research Database (Denmark)

    Hameed, Ibahim; Bochtis, D.; Sørensen, C.A.

    2013-01-01

    Technological advances combined with the demand for cost efficiency and environmental considerations lead farmers to review their practices towards the adoption of new managerial approaches, including enhanced automation. The application of field robots is one of the most promising advances among...... in-field obstacle areas, the generation of headland paths for the field and each obstacle area, the implementation of a genetic algorithm to optimize the sequence in which the field robot vehicle visits the blocks (a sketch of this sequencing step follows below), and the algorithmic generation of the task sequences derived from the farmer practices...... This approach has proven that it is possible to capture the practices of farmers and embed these practices in an algorithmic description, providing a complete field-area coverage plan in a form prepared for execution by the navigation system of a field robot.
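
    The block-sequencing step is essentially a travelling-salesman-type problem, which a genetic algorithm can attack with order-preserving crossover and swap mutation. The sketch below is a generic GA of that form over hypothetical block entry points; it is not the paper's exact operator set or cost function (which accounts for headland paths and machine kinematics).

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def tour_length(order, pts):
        p = pts[order]
        return np.linalg.norm(np.diff(p, axis=0), axis=1).sum()

    def ordered_crossover(a, b):
        i, j = sorted(rng.choice(len(a), 2, replace=False))
        child = [-1] * len(a)
        child[i:j] = list(a[i:j])                 # keep a slice of parent a
        rest = [g for g in b if g not in child]   # fill remainder in b's order
        for t in range(len(a)):
            if child[t] == -1:
                child[t] = rest.pop(0)
        return np.array(child)

    def ga_sequence(pts, pop=60, gens=300, pmut=0.2):
        P = [rng.permutation(len(pts)) for _ in range(pop)]
        for _ in range(gens):
            P.sort(key=lambda o: tour_length(o, pts))   # elitist selection
            children = []
            while len(children) < pop // 2:
                a, b = P[rng.integers(pop // 2)], P[rng.integers(pop // 2)]
                c = ordered_crossover(a, b)
                if rng.random() < pmut:                 # swap mutation
                    i, j = rng.choice(len(c), 2, replace=False)
                    c[i], c[j] = c[j], c[i]
                children.append(c)
            P = P[: pop // 2] + children
        return min(P, key=lambda o: tour_length(o, pts))

    blocks = rng.random((9, 2))   # hypothetical entry points of the field blocks
    print(ga_sequence(blocks))    # best visiting order found
    ```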

  12. Exponentially Biased Ground-State Sampling of Quantum Annealing Machines with Transverse-Field Driving Hamiltonians.

    Science.gov (United States)

    Mandrà, Salvatore; Zhu, Zheng; Katzgraber, Helmut G

    2017-02-17

    We study the performance of the D-Wave 2X quantum annealing machine on systems with well-controlled ground-state degeneracy. While obtaining the ground state of a spin-glass benchmark instance represents a difficult task, the gold standard for any optimization algorithm or machine is to sample all solutions that minimize the Hamiltonian with more or less equal probability. Our results show that while naive transverse-field quantum annealing on the D-Wave 2X device can find the ground-state energy of the problems, it is not well suited to identifying all degenerate ground-state configurations associated with a particular instance. Even worse, some states are exponentially suppressed, in agreement with previous studies on toy model problems [New J. Phys. 11, 073021 (2009)]. These results suggest that more complex driving Hamiltonians are needed in future quantum annealing machines to ensure a fair sampling of the ground-state manifold.

  13. Optimization Design of Bipolar Plate Flow Field in PEM Stack

    Science.gov (United States)

    Wen, Ming; He, Kanghao; Li, Peilong; Yang, Lei; Deng, Li; Jiang, Fei; Yao, Yong

    2017-12-01

    A new design of the bipolar plate flow field in a proton exchange membrane (PEM) stack was presented to achieve high-performance transfer of the two-phase flow. Two different flow fields were studied using numerical simulations and their performance was presented. The hydrodynamic properties, including the pressure drop between inlet and outlet and the Reynolds number, of the two types were compared based on the Navier-Stokes equations. Computer-aided optimization software was used in the design of experiments for the preferable flow field. The design of experiments (DOE) for the favored concept was carried out to study the hydrodynamic properties when changing the design parameters of the bipolar plate.

  14. Synthetical optimization of hydraulic radius and acoustic field for thermoacoustic cooler

    International Nuclear Information System (INIS)

    Kang Huifang; Li Qing; Zhou Gang

    2009-01-01

    It is well known that the acoustic field and the hydraulic radius of the regenerator play key roles in thermoacoustic processes. The optimization of the hydraulic radius strongly depends on the acoustic field in the regenerator. This paper investigates the joint optimization of the hydraulic radius and the acoustic field, the latter characterized by the ratio of the traveling-wave component to the standing-wave component. We discuss the heat flux, cooling power, temperature gradient and coefficient of performance of a thermoacoustic cooler for different combinations of hydraulic radius and acoustic field. The calculation results show that, in the cooler's regenerator, the acoustic wave transfers heat towards the pressure antinodes in a pure standing wave, while in a pure traveling wave heat is transferred in the direction opposite to wave propagation. The best working condition for the regenerator appears in the traveling-wave phase region of a like-standing wave, where the heat transferred by the traveling-wave component and by the standing-wave component flows in the same direction. Furthermore, a small hydraulic radius is not a good choice for an acoustic field with an excessively high traveling-wave ratio; a small hydraulic radius is only needed in the traveling-wave phase region of a like-standing wave.

  15. Optimization of a large integrated area development of gas fields offshore Sarawak, Malaysia

    International Nuclear Information System (INIS)

    Inyang, S.E.; Tak, A.N.H.; Costello, G.

    1995-01-01

    Optimizations of field development plans are routine in the industry. The size, schedule and nature of the upstream gas supply project to the second Malaysia LNG (MLNG Dua) plant in Bintulu, Sarawak made the need for extensive optimization critical to realizing a robust and cost-effective development scheme, and makes the work of more general interest. The project comprises the upstream development of 11 offshore fields for gas supply to the MLNG Dua plant at an initial plateau production of 7.8 million tons per year of LNG. The gas fields span a large geographical area in medium water depths (up to 440 ft) and contain gas reserves of distinctly variable gas quality. This paper describes the project optimization efforts aimed at ensuring an upstream gas supply system effectiveness of over 99% throughout the project life while maintaining high safety and environmental standards and achieving an economic development in an era of low hydrocarbon prices. Fifty percent of the first of the three phases of this gas supply project has already been completed, and the first gas from these fields is scheduled to be available by the end of 1995

  16. Second harmonic sound field after insertion of a biological tissue sample

    Science.gov (United States)

    Zhang, Dong; Gong, Xiu-Fen; Zhang, Bo

    2002-01-01

    The second harmonic sound field after insertion of a biological tissue sample is investigated by theory and experiment. The sample is inserted perpendicular to the sound axis, and its acoustical properties differ from those of the surrounding medium (distilled water). By using the superposition of Gaussian beams and the KZK equation in the quasilinear and parabolic approximations, the second harmonic field after insertion of the sample can be derived analytically and expressed as a linear combination of the self- and cross-interactions of the Gaussian beams. Egg white, egg yolk, porcine liver, and porcine fat are used as samples and inserted into the sound field radiated from a 2 MHz uniformly excited focusing source. Axial normalized sound pressure curves of the second harmonic wave before and after inserting the sample are measured and compared with theoretical results calculated with 10 terms of the Gaussian beam expansion.

  17. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Full Text Available Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which also depends on the robot positioning in Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process, modeled in terms of its components: product, process and resource. A sampling-based motion problem is configured automatically and solved with the Transition-based Rapidly-exploring Random Tree (T-RRT) algorithm to compute an optimal motion. The approach is simulated in CAM software for a machining path, revealing its functionality and outlining future potential for optimal motion generation in robotic machining processes.

  18. An analytical model for the vertical electric field distribution and optimization of high voltage REBULF LDMOS

    International Nuclear Information System (INIS)

    Hu Xia-Rong; Lü Rui

    2014-01-01

    In this paper, an analytical model for the vertical electric field distribution and optimization of a high-voltage reduced bulk field (REBULF) lateral double-diffused metal-oxide-semiconductor (LDMOS) transistor is presented. The dependences of the breakdown voltage on the buried n-layer depth, thickness, and doping concentration are discussed in detail. The REBULF criterion and the optimal vertical electric field distribution condition are derived on the basis of the optimization of the electric field distribution. The breakdown voltage of the REBULF LDMOS transistor is always higher than that of a single reduced surface field (RESURF) LDMOS transistor, and both analytical and numerical results show that it is better to bury a thick n-layer deep in the p-substrate. (interdisciplinary physics and related areas of science and technology)

  19. Pareto-Optimization of HTS CICC for High-Current Applications in Self-Field

    Directory of Open Access Journals (Sweden)

    Giordano Tomassetti

    2018-01-01

    Full Text Available The ENEA superconductivity laboratory developed a novel design for Cable-in-Conduit Conductors (CICCs) comprised of stacks of 2nd-generation REBCO coated conductors. In its original version, the cable was made up of 150 HTS tapes distributed in five slots, twisted along an aluminum core. In this work, taking advantage of a 2D finite element model able to estimate the cable's current distribution in the cross-section, a multiobjective optimization procedure was implemented. The aim of the optimization was to simultaneously maximize both the engineering current density and the total current flowing inside the tapes when operating in self-field, by varying the cross-section layout. Since the optimization process involved both integer and real geometrical variables, the choice of an evolutionary search algorithm was strictly necessary. The use of an evolutionary algorithm in the frame of a multiple-objective optimization made a nonstandard, fast-converging optimization algorithm the natural numerical approach to the problem. By means of this algorithm, the Pareto frontiers for the different configurations were calculated, providing a powerful tool for the designer to achieve the desired preliminary operating conditions in terms of engineering current density and/or total current, depending on the specific application field, that is, power transmission cables and bus bar systems.
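
    For reference, extracting the non-dominated (Pareto-optimal) designs from a set of evaluated cross-section layouts takes only a few lines; the two objectives here, engineering current density and total current, follow the paper, but the numbers are made up.

    ```python
    import numpy as np

    def pareto_front(points):
        """Return the non-dominated subset when every objective is maximized."""
        pts = np.asarray(points, float)
        keep = [i for i, p in enumerate(pts)
                if not np.any(np.all(pts >= p, axis=1) & np.any(pts > p, axis=1))]
        return pts[keep]

    # (engineering current density [A/mm^2], total current [kA]) per layout;
    # values are hypothetical.
    designs = np.array([[120, 8.5], [150, 7.0], [100, 9.0], [140, 7.2], [130, 6.0]])
    print(pareto_front(designs))   # the last design is dominated and dropped
    ```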

  20. Sugar as an optimal carbon source for the enhanced performance of MgB2 superconductors at high magnetic fields

    Science.gov (United States)

    Shcherbakova, O. V.; Pan, A. V.; Wang, J. L.; Shcherbakov, A. V.; Dou, S. X.; Wexler, D.; Babić, E.; Jerčinović, M.; Husnjak, O.

    2008-01-01

    In this paper we report the results of an extended study of the effect of sugar doping on the structural and electromagnetic properties of MgB2 superconductors. High values of the upper critical field (Bc2) of 36 T and the irreversibility field (Birr) of 27 T have been estimated at a temperature of 5 K in a bulk MgB2 sample with the addition of 10 wt% of sugar. The critical current density (Jc(Ba)) of sugar-doped samples has been significantly improved in the high-field region. The transport Jc has reached as high as 10⁸ A m⁻² at 10 T and 5 K for an Fe-sheathed sugar-doped MgB2 wire. The analysis of the pinning mechanism in the samples investigated indicated that dominant vortex pinning occurs on surface-type pinning defects, such as grain boundaries, dislocations, stacking faults etc, for both pure and doped MgB2. In sugar-doped samples, pinning is governed by numerous crystal lattice defects, which appear in MgB2 grains as a result of crystal lattice distortion caused by carbon substitution for boron and by nano-inclusions. The drastically improved superconducting properties of sugar-doped samples are also attributed to the highly homogeneous distribution and enhanced reactivity of this dopant with the host Mg and B powders. The results of this work suggest that sugar is an optimal source of carbon for doping the MgB2 superconductor, especially for applications at high magnetic fields.

  1. RANKED SET SAMPLING FOR ECOLOGICAL RESEARCH: ACCOUNTING FOR THE TOTAL COSTS OF SAMPLING

    Science.gov (United States)

    Researchers aim to design environmental studies that optimize precision and allow for generalization of results, while keeping the costs of associated field and laboratory work at a reasonable level. Ranked set sampling is one method to potentially increase precision and reduce ...

  2. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    Science.gov (United States)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug-user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug-user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  3. Field Sample Preparation Method Development for Isotope Ratio Mass Spectrometry

    International Nuclear Information System (INIS)

    Leibman, C.; Weisbrod, K.; Yoshida, T.

    2015-01-01

    Non-proliferation and International Security (NA-241) established a working group of researchers from Los Alamos National Laboratory (LANL), Pacific Northwest National Laboratory (PNNL) and Savannah River National Laboratory (SRNL) to evaluate the utilization of in-field mass spectrometry for safeguards applications. A survey of commercial off-the-shelf (COTS) mass spectrometers (MS) revealed that no instrumentation existed capable of meeting all the potential safeguards requirements for performance, portability, and ease of use. Additionally, fieldable instruments are unlikely to meet the International Target Values (ITVs) for accuracy and precision for isotope ratio measurements achieved with laboratory methods. The major gaps identified for in-field actinide isotope-ratio analysis were in the areas of: 1. sample preparation and/or sample introduction, 2. size reduction of mass analyzers and ionization sources, 3. system automation, and 4. decreased system cost. Development work in areas 2 through 4, enumerated above, continues in the private and public sectors. LANL is focusing on developing sample preparation/sample introduction methods for use with the different sample types anticipated for safeguards applications. Addressing sample handling and sample preparation methods for MS analysis will enable the use of new MS instrumentation as it becomes commercially available. As one example, we have developed a rapid sample preparation method for dissolution of uranium and plutonium oxides using ammonium bifluoride (ABF). ABF is a significantly safer and faster alternative to digestion with boiling combinations of highly concentrated mineral acids. Actinides digested with ABF yield fluorides, which can then be analyzed directly or chemically converted and separated using established column chromatography techniques as needed prior to isotope analysis. The reagent volumes and the sample processing steps associated with ABF sample digestion lend themselves to automation and field

  4. Optimized molecular dynamics force fields applied to the helix-coil transition of polypeptides.

    Science.gov (United States)

    Best, Robert B; Hummer, Gerhard

    2009-07-02

    Obtaining the correct balance of secondary structure propensities is a central priority in protein force-field development. Given that current force fields differ significantly in their alpha-helical propensities, a correction to match experimental results would be highly desirable. We have determined simple backbone energy corrections for two force fields to reproduce the fraction of helix measured in short peptides at 300 K. As validation, we show that the optimized force fields produce results in excellent agreement with nuclear magnetic resonance experiments for folded proteins and short peptides not used in the optimization. However, despite the agreement at ambient conditions, the dependence of the helix content on temperature is too weak, a problem shared with other force fields. A fit of the Lifson-Roig helix-coil theory shows that both the enthalpy and entropy of helix formation are too small: the helix extension parameter w agrees well with experiment, but its entropic and enthalpic components are both only about half the respective experimental estimates. Our structural and thermodynamic analyses point toward the physical origins of these shortcomings in current force fields, and suggest ways to address them in future force-field development.
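
    The helix fraction that enters such a fit can be computed exactly from the Lifson-Roig transfer matrix. A sketch follows, with illustrative (not fitted) values of the helix weights w and v and coil-blocked chain ends:

    ```python
    import numpy as np

    def lifson_roig_Z(N, w, v):
        """Partition function of an N-residue chain with coil flanking residues.
        Pair states are ordered (hh, hc, ch, cc); each matrix multiplication
        assigns the Lifson-Roig weight (w, v or 1) of one residue."""
        M = np.array([[w, v, 0, 0],
                      [0, 0, 1, 1],
                      [v, v, 0, 0],
                      [0, 0, 1, 1]], dtype=float)
        u = np.array([0.0, 0.0, 1.0, 1.0])   # residue 0 fixed coil
        e = np.array([0.0, 1.0, 0.0, 1.0])   # residue N+1 fixed coil
        return u @ np.linalg.matrix_power(M, N) @ e

    def fraction_helix(N, w, v, h=1e-6):
        """<n_h>/N = (w dlnZ/dw + v dlnZ/dv)/N via central finite differences."""
        dldw = (np.log(lifson_roig_Z(N, w + h, v)) -
                np.log(lifson_roig_Z(N, w - h, v))) / (2 * h)
        dldv = (np.log(lifson_roig_Z(N, w, v + h)) -
                np.log(lifson_roig_Z(N, w, v - h))) / (2 * h)
        return (w * dldw + v * dldv) / N

    print(fraction_helix(N=15, w=1.3, v=0.05))   # helicity of a 15-residue peptide
    ```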

  5. Optimized IMAC-IMAC protocol for phosphopeptide recovery from complex biological samples

    DEFF Research Database (Denmark)

    Ye, Juanying; Zhang, Xumin; Young, Clifford

    2010-01-01

    using Fe(III)-NTA IMAC resin, and it proved to be highly selective in the phosphopeptide enrichment of a highly diluted standard sample (1:1000) prior to MALDI MS analysis. We also observed that a higher iron purity led to an increased IMAC enrichment efficiency. The optimized method was then adapted to phosphoproteome analyses of cell lysates of high protein complexity. From either 20 μg of mouse sample or 50 μg of Drosophila melanogaster sample, more than 1000 phosphorylation sites were identified in each study using IMAC-IMAC and LC-MS/MS. We demonstrate efficient separation of multiply phosphorylated...... characterization of phosphoproteins in functional phosphoproteomics research projects....

  6. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed-matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure-cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  7. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractant for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when

  8. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.

    Science.gov (United States)

    Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with the urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed strong linearity over a range of concentrations from 10^6 to 10^0 leptospires/mL, with matrix-dependent lower limits of detection. The optimized method for quantification of pathogenic Leptospira in environmental waters (river, pond and sewage) consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden.
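
    As a rough illustration of the quantification step, the sketch below fits a lipL32 standard curve (Cq versus log10 concentration) and back-calculates leptospires per mL of the original water sample, correcting for the 40 mL concentration step. The Cq values, eluate volume and function names are hypothetical, not taken from the paper.

```python
import numpy as np

def fit_standard_curve(log10_conc, cq):
    """Fit Cq = slope * log10(conc) + intercept by least squares."""
    slope, intercept = np.polyfit(log10_conc, cq, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0  # amplification efficiency
    return slope, intercept, efficiency

def quantify(cq_sample, slope, intercept, water_volume_ml=40.0, eluate_volume_ml=0.1):
    """Convert a sample Cq to leptospires/mL of the original water sample."""
    conc_eluate = 10 ** ((cq_sample - intercept) / slope)  # copies/mL in DNA eluate
    # correct for the concentration step: 40 mL water -> 0.1 mL eluate
    return conc_eluate * eluate_volume_ml / water_volume_ml

# standards from 10^6 down to 10^0 leptospires/mL (serial 10-fold dilutions)
log10_conc = np.arange(6, -1, -1)
cq = np.array([18.1, 21.5, 24.9, 28.2, 31.6, 35.0, 38.3])  # illustrative values
slope, intercept, eff = fit_standard_curve(log10_conc, cq)
print(f"slope={slope:.2f}, efficiency={eff:.2%}")
print(f"sample at Cq=30 -> {quantify(30.0, slope, intercept):.1f} leptospires/mL")
```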

  9. An optimized intermolecular force field for hydrogen-bonded organic molecular crystals using atomic multipole electrostatics

    International Nuclear Information System (INIS)

    Pyzer-Knapp, Edward O.; Thompson, Hugh P. G.; Day, Graeme M.

    2016-01-01

    An empirically parameterized intermolecular force field is developed for crystal structure modelling and prediction. The model is optimized for use with an atomic multipole description of electrostatic interactions. We present a re-parameterization of a popular intermolecular force field for describing intermolecular interactions in the organic solid state. Specifically, we optimize the performance of the exp-6 force field when used in conjunction with atomic multipole electrostatics. We also parameterize force fields that are optimized for use with multipoles derived from polarized molecular electron densities, to account for induction effects in molecular crystals. Parameterization is performed against a set of 186 experimentally determined, low-temperature crystal structures and 53 measured sublimation enthalpies of hydrogen-bonding organic molecules. The resulting force fields are tested on a validation set of 129 crystal structures and show improved reproduction of the structures and lattice energies of a range of organic molecular crystals compared with the original force field with atomic partial charge electrostatics. Unit-cell dimensions of the validation set are typically reproduced to within 3% with the re-parameterized force fields. Lattice energies, which were all included during parameterization, are systematically underestimated when compared with measured sublimation enthalpies, with mean absolute errors between 7.4 and 9.0%.

  10. The optimal use of contrast agents at high field MRI

    International Nuclear Information System (INIS)

    Trattnig, Siegfried; Pinker, Kathia; Ba-Ssalamah, Ahmed; Noebauer-Huhmann, Iris-Melanie

    2006-01-01

    The intravenous administration of a standard dose of conventional gadolinium-based contrast agents produces higher contrast between tumor and normal brain at 3.0 Tesla (T) than at 1.5 T, which allows the dose to be halved at 3.0 T while producing contrast similar to a full dose at 1.5 T. The assessment of cumulative triple-dose 3.0 T images gave the best results in the detection of brain metastases compared to other sequences. The contrast agent dose for dynamic susceptibility-weighted contrast-enhanced perfusion MR imaging at 3.0 T can be reduced to 0.1 mmol, compared to 0.2 mmol at 1.5 T, due to the increased susceptibility effects at higher magnetic field strengths. Contrast agent application makes susceptibility-weighted imaging (SWI) at 3.0 T clinically attractive, with an increase in spatial resolution within the same scan time. Whereas a double dose of conventional gadolinium-based contrast agents was optimal in SWI with respect to sensitivity and image quality, a standard dose of gadobenate dimeglumine, which has a two-fold higher T1-relaxivity in blood, produced the same effect. For MR arthrography, the optimized concentrations of gadolinium-based contrast agents are similar at 3.0 and 1.5 T. In summary, high field MRI requires optimization of the contrast agent dose for different clinical applications. (orig.)

  11. Sampling atmospheric pesticides with SPME: Laboratory developments and field study

    International Nuclear Information System (INIS)

    Wang Junxia; Tuduri, Ludovic; Mercury, Maud; Millet, Maurice; Briand, Olivier; Montury, Michel

    2009-01-01

    To estimate the atmospheric exposure of greenhouse workers to pesticides, solid phase microextraction (SPME) was used under non-equilibrium conditions. Using Fick's law of diffusion, pesticide concentrations in the greenhouse can be calculated from pre-determined sampling rates (SRs). The SRs of two SPME modes were therefore determined and compared in the lab and in the field. The SRs for six pesticides in the lab were 20.4-48.3 mL min^-1 for the exposed fiber and 0.166-0.929 mL min^-1 for the retracted fiber. In field sampling, two pesticides, dichlorvos and cyprodinil, were detected with exposed SPME. The SR for dichlorvos in the field (32.4 mL min^-1) was consistent with that in the lab (34.5 mL min^-1). Trends in temporal concentration and inhalation exposure were also obtained. - SPME proved to be a powerful and simple tool for determining atmospheric pesticide concentrations
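
    For the diffusion-based sampling approach described above, the time-weighted average concentration follows directly from the sampling rate: C = n / (SR · t), where n is the mass extracted by the fiber. The snippet below is a minimal sketch of that arithmetic; the extracted mass and exposure time are invented for illustration.

```python
def twa_concentration(mass_extracted_ng, sampling_rate_ml_per_min, exposure_min):
    """Time-weighted average air concentration, C = n / (SR * t).

    Returns ng/mL of air; multiply by 1e6 to express per m^3.
    """
    air_volume_ml = sampling_rate_ml_per_min * exposure_min
    return mass_extracted_ng / air_volume_ml

# illustrative: 15 ng dichlorvos on an exposed fiber, SR = 32.4 mL/min, 30 min
c = twa_concentration(15.0, 32.4, 30.0)
print(f"C = {c:.4f} ng/mL = {c * 1e6:.0f} ng/m^3")
```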

  12. Optimality Conditions in Vector Optimization

    CERN Document Server

    Jiménez, Manuel Arana; Lizana, Antonio Rufián

    2011-01-01

    Vector optimization is continuously needed in several scientific fields, particularly in economics, business, engineering, physics and mathematics. The evolution of these fields depends, in part, on improvements in vector optimization in mathematical programming. The aim of this Ebook is to present the latest developments in vector optimization. The contributions have been written by some of the most eminent researchers in this field of mathematical programming, and the Ebook is essential reading for researchers and students in this field.

  13. Time optimization of 90Sr measurements: Sequential measurement of multiple samples during ingrowth of 90Y

    International Nuclear Information System (INIS)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-01-01

    The aim of this paper is to contribute to more rapid determination of a series of samples containing 90Sr by making the Cherenkov measurement of the daughter nuclide 90Y more time efficient. There are many instances when optimization of the measurement method is favorable, such as situations requiring rapid results for urgent decisions or, on the other hand, maximizing the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work focuses on the measurement of 90Y during ingrowth, after an initial chemical separation of strontium, assuming that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time is reduced compared to using the same measurement time for all samples. It was found that, by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm. - Highlights: • The approach is roughly a factor of three more efficient than an un-optimized method. • The optimization gives more efficient use of instrument time. • The efficiency increase ranges from a factor of three to 10, for 10 to 40 samples.
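
    The scheduling idea can be sketched numerically: the longer a separated sample has waited, the larger the 90Y ingrowth factor 1 - exp(-λt), and the shorter the counting time needed to reach a fixed MDA (here via Currie's formula). The counting efficiency and sample volume below are assumed values for illustration, not parameters from the paper.

```python
import math

HALF_LIFE_Y90_H = 64.0          # 90Y half-life in hours (approx.)
LAMBDA_Y90 = math.log(2) / HALF_LIFE_Y90_H

def mda_bq_per_l(t_count_min, t_ingrowth_h, bkg_cpm=0.8, eff=0.4, vol_l=0.1):
    """Currie MDA for Cherenkov counting of 90Y during ingrowth.

    f = 1 - exp(-lambda * t_ingrowth) is the 90Y ingrowth factor after
    strontium separation; eff and vol_l are assumed counting efficiency
    and sample volume.
    """
    f = 1.0 - math.exp(-LAMBDA_Y90 * t_ingrowth_h)
    ld_counts = 2.71 + 4.65 * math.sqrt(bkg_cpm * t_count_min)
    rate_cps = ld_counts / (t_count_min * 60.0)
    return rate_cps / (eff * f * vol_l)

def counting_time_for_mda(target_bq_per_l, t_ingrowth_h, t_max_min=2000):
    """Smallest counting time (minutes) that reaches the target MDA."""
    for t in range(1, t_max_min):
        if mda_bq_per_l(t, t_ingrowth_h) <= target_bq_per_l:
            return t
    return None

# later samples in the queue have waited longer, so 90Y ingrowth is larger
# and the required counting time shrinks
for i, wait_h in enumerate([4, 8, 16, 32], start=1):
    print(f"sample {i}: ingrowth {wait_h:>2} h -> "
          f"{counting_time_for_mda(1.0, wait_h)} min to reach 1 Bq/L")
```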

  14. An automated analysis workflow for optimization of force-field parameters using neutron scattering data

    Energy Technology Data Exchange (ETDEWEB)

    Lynch, Vickie E.; Borreguero, Jose M. [Neutron Data Analysis & Visualization Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Bhowmik, Debsindhu [Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Ganesh, Panchapakesan; Sumpter, Bobby G. [Center for Nanophase Material Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Proffen, Thomas E. [Neutron Data Analysis & Visualization Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Goswami, Monojoy, E-mail: goswamim@ornl.gov [Center for Nanophase Material Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States)

    2017-07-01

    Graphical abstract: - Highlights: • An automated workflow to optimize force-field parameters. • The workflow was used to optimize force-field parameters for a system containing nanodiamond and tRNA. • The mechanism relies on molecular dynamics simulation and neutron scattering experimental data. • The workflow can be generalized to other experimental and simulation techniques. - Abstract: Large-scale simulations and data analysis are often required to explain neutron scattering experiments and to establish a connection between the fundamental physics at the nanoscale and the data probed by neutrons. However, to perform simulations at experimental conditions it is critical to use correct force-field (FF) parameters, which are unfortunately not available for most complex experimental systems. In this work, we have developed a workflow optimization technique to provide optimized FF parameters by comparing molecular dynamics (MD) to neutron scattering data. We describe the workflow in detail using an example system consisting of tRNA and hydrophilic nanodiamonds in a deuterated water (D2O) environment. Quasi-elastic neutron scattering (QENS) data show a faster motion of the tRNA in the presence of nanodiamond than without it. To compare the QENS and MD results quantitatively, a proper choice of FF parameters is necessary. We use an efficient workflow to optimize the FF parameters between the hydrophilic nanodiamond and water by comparing to the QENS data. Our results show that accurate FF parameters can be obtained with this technique. The workflow can be generalized to other types of neutron data for FF optimization, such as vibrational spectroscopy and spin echo.

  15. Predictive simulations and optimization of nanowire field-effect PSA sensors including screening

    KAUST Repository

    Baumgartner, Stefan; Heitzinger, Clemens; Vacic, Aleksandar; Reed, Mark A

    2013-01-01

    We apply our self-consistent PDE model for the electrical response of field-effect sensors to the 3D simulation of nanowire PSA (prostate-specific antigen) sensors. The charge concentration in the biofunctionalized boundary layer at the semiconductor-electrolyte interface is calculated using the propka algorithm, and the screening of the biomolecules by the free ions in the liquid is modeled by a sensitivity factor. This comprehensive approach yields excellent agreement with experimental current-voltage characteristics without any fitting parameters. Having verified the numerical model in this manner, we study the sensitivity of nanowire PSA sensors by changing device parameters, making it possible to optimize the devices and revealing the attributes of the optimal field-effect sensor. © 2013 IOP Publishing Ltd.

  16. Predictive simulations and optimization of nanowire field-effect PSA sensors including screening

    KAUST Repository

    Baumgartner, Stefan

    2013-05-03

    We apply our self-consistent PDE model for the electrical response of field-effect sensors to the 3D simulation of nanowire PSA (prostate-specific antigen) sensors. The charge concentration in the biofunctionalized boundary layer at the semiconductor-electrolyte interface is calculated using the propka algorithm, and the screening of the biomolecules by the free ions in the liquid is modeled by a sensitivity factor. This comprehensive approach yields excellent agreement with experimental current-voltage characteristics without any fitting parameters. Having verified the numerical model in this manner, we study the sensitivity of nanowire PSA sensors by changing device parameters, making it possible to optimize the devices and revealing the attributes of the optimal field-effect sensor. © 2013 IOP Publishing Ltd.

  17. A Two-Mode Mean-Field Optimal Switching Problem for the Full Balance Sheet

    Directory of Open Access Journals (Sweden)

    Boualem Djehiche

    2014-01-01

    We consider a two-mode optimal switching problem of mean-field type, which can be described by a system of Snell envelopes where the obstacles are interconnected and nonlinear. The main result of the paper is a proof of existence of a continuous minimal solution to the system of Snell envelopes, together with a full characterization of the optimal switching strategy.

  18. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals receive a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments show that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.

  19. Optimization of multi-channel neutron focusing guides for extreme sample environments

    International Nuclear Information System (INIS)

    Di Julio, D D; Lelièvre-Berna, E; Andersen, K H; Bentley, P M; Courtois, P

    2014-01-01

    In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
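
    As a sketch of how such a population-based search couples to the ray-tracing figure of merit, the snippet below runs a plain differential-evolution loop over a toy guide parameterization. In practice the objective would call a Monte Carlo ray-tracing engine (e.g. a McStas model of flux at the sample); here a smooth stand-in function and all bounds are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def flux_at_sample(params):
    """Stub for a Monte Carlo ray-tracing run returning neutron flux at
    the sample for a guide geometry; a smooth toy function stands in."""
    entry_w, exit_w, m_coating = params
    return -((entry_w - 6.0)**2 + (exit_w - 1.5)**2 + (m_coating - 4.0)**2)

def differential_evolution(bounds, pop_size=20, n_gen=50, F=0.8, CR=0.9):
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    fit = np.array([flux_at_sample(p) for p in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            # mutate from three distinct other population members
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(len(bounds)) < CR
            trial = np.where(cross, mutant, pop[i])
            f_trial = flux_at_sample(trial)
            if f_trial > fit[i]:          # maximize flux
                pop[i], fit[i] = trial, f_trial
    best = np.argmax(fit)
    return pop[best], fit[best]

best_geom, best_flux = differential_evolution([(2, 10), (0.5, 3), (1, 6)])
print("best geometry (entry width, exit width, m):", best_geom)
```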

  20. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely, and subsequently reweighting the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...
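
    The reweighting step mentioned above is standard: a sample generated under a non-Boltzmann weight is mapped back to the Boltzmann ensemble with correction weight exp(-βE) divided by its generation weight. The sketch below illustrates this for samples drawn flat in energy; the toy energies and observable are invented for illustration.

```python
import numpy as np

def boltzmann_reweight(energies, sampled_logw, beta, observable):
    """Reweight samples from a multicanonical (flat-histogram) run back
    to the Boltzmann ensemble at inverse temperature beta.

    sampled_logw[i] is the log of the (non-Boltzmann) weight the sample
    was generated with; the correction weight is exp(-beta*E - logw).
    """
    logw = -beta * np.asarray(energies) - np.asarray(sampled_logw)
    logw -= logw.max()                      # numerical stability
    w = np.exp(logw)
    w /= w.sum()
    return float(np.dot(w, observable))

# toy usage: estimate <E> at beta=1 from a sample drawn flat in E
E = np.random.default_rng(1).uniform(0.0, 5.0, 10000)  # flat sampling in E
logw_flat = np.zeros_like(E)                           # flat => constant weight
print("<E>_beta=1 ~", boltzmann_reweight(E, logw_flat, 1.0, E))
```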

  1. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, M; Li, R; Xing, L [Stanford UniversitySchool of Medicine, Stanford, CA (United States); Ye, Y [Stanford Univ, Management Science and Engineering, Stanford, Ca (United States); Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, and gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an algorithm that simultaneously optimizes the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, a gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the apertures and updating the beam angles along the gradient. The algorithm continues with a pattern search to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique was applied to a series of patient cases and significantly improved plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose were reduced by 10%, 7%, 24% and 12%, respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves plan quality.

  2. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    International Nuclear Information System (INIS)

    Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S

    2014-01-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, and gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an algorithm that simultaneously optimizes the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, a gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the apertures and updating the beam angles along the gradient. The algorithm continues with a pattern search to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique was applied to a series of patient cases and significantly improved plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose were reduced by 10%, 7%, 24% and 12%, respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves plan quality.

  3. Optimization of s-Polarization Sensitivity in Apertureless Near-Field Optical Microscopy

    Directory of Open Access Journals (Sweden)

    Yuika Saito

    2012-01-01

    It is a general belief in apertureless near-field microscopy that the so-called p-polarization configuration, where the incident light is polarized parallel to the axis of the probe, is advantageous over its counterpart, the s-polarization configuration, where the incident light is polarized perpendicular to the probe axis. While this is true for most samples under common near-field experimental conditions, there are samples which respond better to the s-polarization configuration due to their orientations; indeed, several reports have discussed such samples. This leads to an important requirement: the near-field experimental setup should be equipped with proper sensitivity for measurements in the s-polarization configuration. This requires not only creation of effective s-polarized illumination at the near-field probe, but also proper enhancement of s-polarized light by the probe. In this paper, we have examined the s-polarization enhancement sensitivity of near-field probes by measuring and evaluating near-field Rayleigh scattering images constructed with a variety of probes. We found that the s-polarization enhancement sensitivity depends strongly on the sharpness of the probe apex. We discuss the most effective probe sharpness by considering the balance between enhancement and spatial resolution, both of which are essential requirements of apertureless near-field microscopy.

  4. NORM in soil and sludge samples in Dukhan oil field, Qatar State

    Energy Technology Data Exchange (ETDEWEB)

    Al-Kinani, A.T.; Hushari, M.; Al-Sulaiti, Huda; Alsadig, I.A., E-mail: mmhushari@moe.gov.qa [Radiation and Chemical Protection Department, Ministry of Environment, Doha (Qatar)

    2015-07-01

    The main objective of this work is to measure the activity concentrations of naturally occurring radioactive materials (NORM) produced as by-products of oil production. The analysis of NORM provides information useful for radiation protection guidelines. Recently, NORM became subject to regulation issued by the relevant legal authority of Qatar State. Twenty-five soil samples from the Dukhan onshore oil field and 10 sludge samples from two offshore fields in Qatar State were collected. High-resolution, low-level gamma-ray spectrometry was used to measure the gamma emitters of NORM. The activity concentrations of natural radionuclides in 22 samples from the Dukhan oil field were within average worldwide values. Only three soil samples had an activity concentration of Ra-226 above 185 Bq/kg, the exemption level for NORM in the Qatari regulation. The natural radionuclide activity concentrations of the 10 sludge samples from the offshore oil fields were greater than 1100 Bq/kg, the exemption value for NORM set by the Qatari regulation, so the sludge needs special treatment. The average hazard indices (H_ex, D and Ra_eq) for the 22 samples were below the world permissible values, meaning that human exposure to such material does not pose any radiation risk. The average hazard indices for the 3 remaining soil samples and the sludge samples are higher than the published maximum permissible values; thus, human exposure to such material poses a radiation risk. (author)
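
    The hazard indices quoted in this record follow standard UNSCEAR-based definitions, which are straightforward to compute from the measured activity concentrations. The sketch below implements the usual radium equivalent activity, external hazard index and absorbed dose rate formulas; the sample activities are invented for illustration.

```python
def radium_equivalent(a_ra, a_th, a_k):
    """Ra_eq (Bq/kg) from 226Ra, 232Th and 40K activity concentrations."""
    return a_ra + 1.43 * a_th + 0.077 * a_k

def external_hazard_index(a_ra, a_th, a_k):
    """H_ex; values <= 1 correspond to Ra_eq <= 370 Bq/kg."""
    return a_ra / 370.0 + a_th / 259.0 + a_k / 4810.0

def absorbed_dose_rate(a_ra, a_th, a_k):
    """Outdoor absorbed gamma dose rate in air, D (nGy/h), UNSCEAR coefficients."""
    return 0.462 * a_ra + 0.604 * a_th + 0.0417 * a_k

# illustrative soil sample (activity concentrations in Bq/kg)
a_ra, a_th, a_k = 25.0, 12.0, 300.0
print(f"Ra_eq = {radium_equivalent(a_ra, a_th, a_k):.1f} Bq/kg")
print(f"H_ex  = {external_hazard_index(a_ra, a_th, a_k):.2f}")
print(f"D     = {absorbed_dose_rate(a_ra, a_th, a_k):.1f} nGy/h")
```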

  5. NORM in soil and sludge samples in Dukhan oil field, Qatar State

    International Nuclear Information System (INIS)

    Al-Kinani, A.T.; Hushari, M.; Al-Sulaiti, Huda; Alsadig, I.A.

    2015-01-01

    The main objective of this work is to measure the activity concentrations of naturally occurring radioactive materials (NORM) produced as by-products of oil production. The analysis of NORM provides information useful for radiation protection guidelines. Recently, NORM became subject to regulation issued by the relevant legal authority of Qatar State. Twenty-five soil samples from the Dukhan onshore oil field and 10 sludge samples from two offshore fields in Qatar State were collected. High-resolution, low-level gamma-ray spectrometry was used to measure the gamma emitters of NORM. The activity concentrations of natural radionuclides in 22 samples from the Dukhan oil field were within average worldwide values. Only three soil samples had an activity concentration of Ra-226 above 185 Bq/kg, the exemption level for NORM in the Qatari regulation. The natural radionuclide activity concentrations of the 10 sludge samples from the offshore oil fields were greater than 1100 Bq/kg, the exemption value for NORM set by the Qatari regulation, so the sludge needs special treatment. The average hazard indices (H_ex, D and Ra_eq) for the 22 samples were below the world permissible values, meaning that human exposure to such material does not pose any radiation risk. The average hazard indices for the 3 remaining soil samples and the sludge samples are higher than the published maximum permissible values; thus, human exposure to such material poses a radiation risk. (author)

  6. Combination of micelle collapse and field-amplified sample stacking in capillary electrophoresis for determination of trimethoprim and sulfamethoxazole in animal-originated foodstuffs.

    Science.gov (United States)

    Liu, Lihong; Wan, Qian; Xu, Xiaoying; Duan, Shunshan; Yang, Chunli

    2017-03-15

    An on-line preconcentration method combining micelle to solvent stacking (MSS) with field-amplified sample stacking (FASS) was employed for the analysis of trimethoprim (TMP) and sulfamethoxazole (SMZ) by capillary zone electrophoresis (CZE). The optimized experimental conditions were as follows: (1) sample matrix, 10.0 mM SDS-5% (v/v) methanol; (2) trapping solution (TS), 35 mM H3PO4-60% acetonitrile (CH3CN); (3) running buffer, 30 mM Na2HPO4 (pH 7.3); (4) sample solution volume, 168 nL, and TS volume, 168 nL; and (5) 9 kV voltage with UV detection at 214 nm. Under the optimized conditions, the limits of detection (LODs) for SMZ and TMP were 7.7 and 8.5 ng/mL, improvements of 301- and 329-fold, respectively, compared to a typical injection. The contents of TMP and SMZ in animal-originated foodstuffs such as dairy products, eggs and honey were also analyzed. Recoveries of 80-104% were obtained, with relative standard deviations of 0.5-5.4%. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, these methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information is used to select lower-energy conformations as representative replica structures, which facilitates convergence over the available conformational space, including near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of min-min and max-min scenario Pareto optimal solutions, with lower-energy conformations used as structure templates in the REMC sampling method. These methods were validated through a thorough analysis of a benchmark data set containing 16 test cases. An in-depth comparison between the MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods illustrates the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
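
    The core operation such a scheme needs is extraction of the Pareto front from a set of per-conformation objective vectors. A minimal sketch (not the authors' implementation), with invented score pairs:

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points when minimizing every objective.

    points: (n, m) array of m objective values per conformation
    (e.g. interface energy and total score)."""
    pts = np.asarray(points)
    front = []
    for i in range(len(pts)):
        # i is dominated if some j is <= in all objectives and < in one
        dominated = np.any(
            np.all(pts <= pts[i], axis=1) & np.any(pts < pts[i], axis=1))
        if not dominated:
            front.append(i)
    return front

scores = np.array([[-8.1, -310.0],
                   [-7.5, -325.0],
                   [-9.0, -300.0],
                   [-6.0, -290.0]])   # illustrative (energy1, energy2) pairs
print("Pareto-optimal conformations:", pareto_front(scores))
```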

  8. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments to optimize sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min^-1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  9. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    International Nuclear Information System (INIS)

    Oliveira, Karina B. de; Oliveira, Bras H. de

    2013-01-01

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments to optimize sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min^-1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  10. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size of single- and two-arm clinical trials in the general case of a primary endpoint with a distribution of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with Bernoulli and Poisson responses, showing that the asymptotic approximations can be reasonable even for relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
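
    The square-root scaling can be reproduced with a toy decision-theoretic model (not the paper's utility function): a normally distributed treatment-effect prior, adoption when the observed mean effect is positive, and gain proportional to the effect for each of the remaining N - n patients. All distributions and parameter values below are invented for illustration.

```python
import numpy as np

def expected_utility(n, N, tau=0.3, sigma=1.0):
    """Toy model: treatment effect delta ~ N(0, tau^2); after an n-patient
    trial the treatment is adopted when the observed mean effect is
    positive, and each of the remaining N - n patients then gains delta.
    E[delta * Phi(delta*sqrt(n)/sigma)] has the closed form below
    (Stein's lemma)."""
    a = np.sqrt(n) / sigma
    gain_per_patient = tau**2 * a / (np.sqrt(2 * np.pi) * np.sqrt(1 + (a * tau)**2))
    return (N - n) * gain_per_patient

for N in [100, 1000, 10_000, 100_000]:
    ns = np.arange(1, N)
    n_opt = ns[np.argmax(expected_utility(ns, N))]
    print(f"N={N:>6}: optimal n = {n_opt:>4}, n/sqrt(N) = {n_opt/np.sqrt(N):.2f}")
```

    The printed ratio n/sqrt(N) settles to an approximately constant value as N grows, consistent with the O(N^(1/2)) result.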

  11. A novel patch-field design using an optimized grid filter for passively scattered proton beams

    International Nuclear Information System (INIS)

    Li Yupeng; Zhang Xiaodong; Dong Lei; Mohan, Radhe

    2007-01-01

    For tumors with highly complex shapes, a 'patching' strategy is often used in passively scattered proton therapy to match the sharp distal edge of the spread-out Bragg peak (SOBP) of the patch field to the lateral penumbra of the through field at the 50% dose level. The difference in dose gradients between the distal edge and the lateral penumbra can cause hot and cold spots at the junction. In this note, we describe an algorithm developed to optimize the range compensator design to yield a more uniform dose distribution at the junction. The algorithm is based on the fact that the distal fall-off of the SOBP can be tailored using a grid filter placed perpendicular to the beam's path. The filter is optimized so that the distal fall-off of the patch field complements the lateral penumbra fall-off of the through field. In addition to optimizing the fall-off, the optimization process implicitly accounts for the limitations of conventional compensator design algorithms, which use simple ray tracing to determine the compensator shape and ignore scatter. The compensated dose distribution may therefore differ substantially from the intended dose distribution, especially when complex heterogeneities are encountered, such as those in the head and neck. In such cases, an adaptive optimization strategy can be used to optimize the grid filter locally, taking tissue heterogeneities into account. The grid filter thus obtained is superimposed on the original range compensator so that the composite compensator leads to a more uniform dose distribution at the patch junction. An L-shaped head and neck tumor was used to demonstrate the validity of the proposed algorithm. A robustness analysis focusing on range uncertainty effects is also carried out. (note)

  12. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

    A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used for anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. From the DVHs, quantities such as the conformity index (COIN) and COIN integrals are derived. This is achieved by using partially uniformly distributed sampling points, with a density in each region obtained from a survey of the gradients or the variance of the dose distribution in that region. The shape of the sampling regions is adapted to the patient anatomy and to the shape and size of the implant. Applying this method requires a single preprocessing step which takes only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points.
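
    The density choice described above is essentially Neyman allocation: regions where the dose varies strongly receive more sampling points. A minimal sketch with invented region volumes and dose standard deviations; each point carries a weight so the pooled DVH stays unbiased.

```python
import numpy as np

def neyman_allocation(volumes, dose_stds, n_total):
    """Sampling points per region: n_h proportional to V_h * sigma_h."""
    w = np.asarray(volumes, float) * np.asarray(dose_stds, float)
    return np.maximum(1, np.round(n_total * w / w.sum())).astype(int)

def cumulative_dvh(doses, weights, dose_bins):
    """Fraction of (weighted) volume receiving at least each bin dose."""
    doses, weights = np.asarray(doses), np.asarray(weights)
    return np.array([weights[doses >= d].sum() for d in dose_bins]) / weights.sum()

# three regions: PTV (uniform high dose), margin (steep gradient), outer body
volumes   = [50.0, 150.0, 2000.0]   # cm^3
dose_stds = [1.0, 15.0, 3.0]        # Gy, from a coarse pre-survey of gradients
n_h = neyman_allocation(volumes, dose_stds, n_total=2000)
print("points per region:", n_h)    # the steep-gradient margin is sampled densest

# each sampled point carries weight V_h / n_h so the pooled DVH is unbiased
rng = np.random.default_rng(0)
doses = np.concatenate([rng.normal(m, s, n)
                        for m, s, n in zip([60, 40, 5], dose_stds, n_h)])
weights = np.concatenate([np.full(n, v / n) for v, n in zip(volumes, n_h)])
print("V(>=30 Gy) fraction:", cumulative_dvh(doses, weights, [30.0])[0].round(3))
```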

  13. Experimental study on the luminous radiation associated to the field emission of samples submitted to high RF fields

    International Nuclear Information System (INIS)

    Maissa, S.; Junquera, T.; Fouaidy, M.; Le Goff, A.; Luong, M.; Tan, J.; Bonin, B.; Safa, H.

    1996-01-01

    The accelerating gradient of RF cavities is limited by the strong field emission (FE) of electrons stemming from the metallic walls. Previous experiments evidenced luminous radiation associated with the electron emission of cathodes subjected to intense DC electric fields; these observations prompted the proposal of new theoretical models of the field emission phenomenon. This experimental study extends the previous DC work to the RF case. A special copper RF cavity has been developed, equipped with an optical window and a removable sample, and designed to measure both the electron current and the luminous radiation emitted by the sample when subjected to the maximum RF electric field. The optical apparatus attached to the cavity permits characterization of the radiation in terms of intensity, glow duration and spectral distribution. The results for different niobium and copper samples, whose tops were either scratched or intentionally contaminated with metallic or dielectric particles, are summarized. (author)

  14. SU-F-T-387: A Novel Optimization Technique for Field in Field (FIF) Chestwall Radiation Therapy Using a Single Plan to Improve Delivery Safety and Treatment Planning Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Tabibian, A; Kim, A; Rose, J; Alvelo, M; Perel, C; Laiken, K; Sheth, N [Bayonne Medical Center, Bayonne, New Jersey (United States)

    2016-06-15

    Purpose: A novel optimization technique was developed for field-in-field (FIF) chestwall radiotherapy using bolus every other day. Its dosimetry was compared to the currently used optimization. Methods: The five prior patients treated at our clinic to the chestwall and supraclavicular nodes with a mono-isocentric four-field arrangement were selected for this study. The prescription was 5040 cGy in 28 fractions, with 5 mm bolus every other day on the tangent fields, 6 and/or 10 MV x-rays, and multileaf collimation. Novel to this technique, the tangent FIF segments were forward-planned and optimized based on the composite bolus and non-bolus dose distribution simultaneously. The prescription was split into 14 fractions each for the bolus and non-bolus tangents. The same segments and monitor units were used for the bolus and non-bolus treatments. The plan was optimized until the desired coverage was achieved, 105% hotspots were minimized, and the maximum dose was less than 108%. Each tangential field had fewer than 5 segments. Comparison plans were generated using FIF optimization with the same dosimetric goals but using only the non-bolus calculation for FIF optimization; the non-bolus fields were then copied and bolus was applied, with the same segments and monitor units used for the bolus and non-bolus segments. Results: The prescription coverage of the chestwall, as defined by RTOG guidelines, averaged 51.8% for the plans optimized for bolus and non-bolus treatments simultaneously (SB) and 43.8% for the plans optimized for the non-bolus treatments only (NB). Chestwall coverage by 90% of the prescription averaged 80.4% for SB and 79.6% for NB plans. The volume receiving 105% of the prescription was 1.9% for SB and 0.8% for NB plans on average. Conclusion: Simultaneously optimizing for bolus and non-bolus treatments noticeably improves prescription coverage of the chestwall while maintaining similar hotspots and 90% prescription coverage compared to optimizing only for non-bolus treatments.

  15. SU-F-T-387: A Novel Optimization Technique for Field in Field (FIF) Chestwall Radiation Therapy Using a Single Plan to Improve Delivery Safety and Treatment Planning Efficiency

    International Nuclear Information System (INIS)

    Tabibian, A; Kim, A; Rose, J; Alvelo, M; Perel, C; Laiken, K; Sheth, N

    2016-01-01

    Purpose: A novel optimization technique was developed for field-in-field (FIF) chestwall radiotherapy using bolus every other day. Its dosimetry was compared to the currently used optimization. Methods: The five prior patients treated at our clinic to the chestwall and supraclavicular nodes with a mono-isocentric four-field arrangement were selected for this study. The prescription was 5040 cGy in 28 fractions, with 5 mm bolus every other day on the tangent fields, 6 and/or 10 MV x-rays, and multileaf collimation. Novel to this technique, the tangent FIF segments were forward-planned and optimized based on the composite bolus and non-bolus dose distribution simultaneously. The prescription was split into 14 fractions each for the bolus and non-bolus tangents. The same segments and monitor units were used for the bolus and non-bolus treatments. The plan was optimized until the desired coverage was achieved, 105% hotspots were minimized, and the maximum dose was less than 108%. Each tangential field had fewer than 5 segments. Comparison plans were generated using FIF optimization with the same dosimetric goals but using only the non-bolus calculation for FIF optimization; the non-bolus fields were then copied and bolus was applied, with the same segments and monitor units used for the bolus and non-bolus segments. Results: The prescription coverage of the chestwall, as defined by RTOG guidelines, averaged 51.8% for the plans optimized for bolus and non-bolus treatments simultaneously (SB) and 43.8% for the plans optimized for the non-bolus treatments only (NB). Chestwall coverage by 90% of the prescription averaged 80.4% for SB and 79.6% for NB plans. The volume receiving 105% of the prescription was 1.9% for SB and 0.8% for NB plans on average. Conclusion: Simultaneously optimizing for bolus and non-bolus treatments noticeably improves prescription coverage of the chestwall while maintaining similar hotspots and 90% prescription coverage compared to optimizing only for non-bolus treatments.

  16. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    Science.gov (United States)

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d') throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributes fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum d' by at least 17.8%, but also yielded higher d' over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed d'. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
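
    The alternating maxi-min scheme can be sketched as two nested searches: optimize the fluence basis coefficients at fixed regularization, then the regularization at fixed fluence, each time scoring the minimum detectability over all locations. The surrogate detectability function, the search strategy, and every parameter below are invented stand-ins, not the authors' system model.

```python
import numpy as np

rng = np.random.default_rng(0)

def detectability(fluence_coeffs, log_beta, n_locations=50):
    """Stub for the location-dependent detectability index d' computed
    from a system model; a smooth toy surrogate is used here."""
    x = np.linspace(0, 1, n_locations)
    fluence = np.maximum(1e-3, np.polynomial.polynomial.polyval(x, fluence_coeffs))
    return np.sqrt(fluence) / (1.0 + np.abs(log_beta - 2.0 * x))

def maximin_alternation(n_coeffs=4, iters=20, candidates=200):
    coeffs = np.ones(n_coeffs)
    log_beta = 0.0
    for _ in range(iters):
        # step 1: search over fluence coefficients at fixed regularization
        trial = coeffs + 0.1 * rng.standard_normal((candidates, n_coeffs))
        scores = [detectability(t, log_beta).min() for t in trial]
        coeffs = trial[int(np.argmax(scores))]
        # step 2: line search over regularization strength at fixed fluence
        betas = np.linspace(-2, 4, 61)
        log_beta = betas[int(np.argmax(
            [detectability(coeffs, b).min() for b in betas]))]
    return coeffs, log_beta, detectability(coeffs, log_beta).min()

coeffs, log_beta, dmin = maximin_alternation()
print(f"min d' = {dmin:.3f} at log10(beta) = {log_beta:.2f}")
```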

  17. Adapting crop management practices to climate change: Modeling optimal solutions at the field scale

    NARCIS (Netherlands)

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.; Walter, A.

    2013-01-01

    Climate change will alter the environmental conditions for crop growth and require adjustments in management practices at the field scale. In this paper, we analyze the impacts of two different climate change scenarios on optimal field management practices in winter wheat and grain maize production.

  18. Experimental study on the luminous radiation associated to the field emission of samples submitted to high RF fields

    International Nuclear Information System (INIS)

    Maissa, S.; Junquera, T.; Fouaidy, M.; Le Goff, A.; Luong, M.; Tan, J.; Bonin, B.; Safa, H.

    1996-01-01

    Nowadays the accelerating gradient of RF cavities is limited by the strong field emission (FE) of electrons stemming from the metallic walls. Previous experiments evidenced luminous radiation associated with electron emission on cathodes subjected to intense DC electric fields; these observations led the authors to propose new theoretical models of the field emission phenomenon. The experimental study presented here extends this previous DC work to the RF case. A special copper RF cavity has been developed, equipped with an optical window and a removable sample, and designed to measure both the electron current and the luminous radiation emitted by the sample when subjected to the maximum RF electric field. The optical apparatus attached to the cavity permits characterization of the radiation in terms of intensity, glow duration and spectral distribution. The results for different niobium and copper samples, whose tops were either scratched or intentionally contaminated with metallic or dielectric particles, are summarized. (author)

  19. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    Science.gov (United States)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated by a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications, including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost

  20. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Roč. 19, č. 30 (2012), s. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  1. An analytic approach to optimize tidal turbine fields

    Science.gov (United States)

    Pelz, P.; Metzler, M.

    2013-12-01

    Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources are being developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to generate electrical energy. Since the available power for hydrokinetic turbines is proportional to the projected cross-section area, fields of turbines are installed to scale up shaft power. Each hydrokinetic turbine in a field can be considered a disk actuator. In [1], the first author derives the optimal operation point for hydropower in an open channel. The present paper concerns a 0-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation and the momentum balance, an analytical approach is taken to calculate the coefficient of performance for hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.
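
    For orientation, the unbounded-flow limit of such an actuator-disk analysis is classical momentum theory, where the power coefficient is C_P = 4a(1-a)^2 with axial induction factor a, maximized at the Betz limit 16/27; channel blockage and bypass flow modify (and can raise) this bound. A minimal sketch of the unbounded case:

```python
import numpy as np

def power_coefficient(a):
    """Unbounded actuator-disk momentum theory: C_P = 4a(1-a)^2,
    where a is the axial induction factor."""
    return 4.0 * a * (1.0 - a) ** 2

a = np.linspace(0.0, 0.5, 501)
cp = power_coefficient(a)
i = np.argmax(cp)
print(f"optimal induction a = {a[i]:.3f}, "
      f"C_P = {cp[i]:.4f} (Betz limit 16/27 = {16/27:.4f})")
```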

  2. Optimization and Optimal Control

    CERN Document Server

    Chinchuluun, Altannar; Enkhbat, Rentsen; Tseveendorj, Ider

    2010-01-01

    During the last four decades there has been remarkable development in optimization and optimal control. Due to their wide variety of applications, many scientists and researchers have paid attention to the fields of optimization and optimal control, and a huge number of new theoretical, algorithmic, and computational results have appeared in the last few years. This book gives the latest advances; due to the rapid development of these fields, there are no other recent publications on the same topics. Key features: Provides a collection of selected contributions giving a state-of-the-art account

  3. Evaluation of sample preparation methods and optimization of nickel determination in vegetable tissues

    Directory of Open Access Journals (Sweden)

    Rodrigo Fernando dos Santos Salazar

    2011-02-01

    Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated through food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for the preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption (ETAAS) in vegetable samples; and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. For both analytical techniques, ETAAS and FAAS, the results were validated by analyte addition and recovery tests. The most viable method tested for quantification of this element was HClO4-HNO3 wet digestion. All samples except the carrot tissue collected in Lorena contained Ni levels above those permitted by the Brazilian Ministry of Health. The most disturbing results, requiring more detailed studies, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.

  4. Field desorption and field ion surface studies of samples exposed to the plasmas of PLT and ISX

    International Nuclear Information System (INIS)

    Kellogg, G.L.; Panitz, J.A.

    1978-01-01

    Modifications to the surface of field-ion specimens exposed to plasma discharges in PLT and ISX determined by Imaging Probe, Field Ion Microscope, and Transmission Electron Microscope analysis have in the past shown several consistent features. Surface films consisting primarily of limiter material with trapped plasma and impurity species have been found to reside on samples with direct line of sight exposure to the plasma during the discharges. Control specimens placed in the tokamak, but shielded from the plasma, on the other hand, remained free of deposits. When exposed to only high power plasma discharges, samples placed at the wall position in PLT and ISX have survived the exposures with no evidence of damage or implantation. In this paper we describe the results of a recent exposure in PLT in which for the first time samples of stainless steel were included for High-Field Surface Analysis. Tokamak operating conditions, including stainless-steel limiters, titanium gettering between discharges, and the occurrence of a disruption, also distinguished this exposure from those carried out previously. Surprisingly, even with stainless-steel limiters, carbon films were found to be deposited on the samples at a rate

  5. Steering Electromagnetic Fields in MRI: Investigating Radiofrequency Field Interactions with Endogenous and External Dielectric Materials for Improved Coil Performance at High Field

    Science.gov (United States)

    Vaidya, Manushka

    inferior regions of the brain, where the specific coil's imaging efficiency was inherently poor. Results showed a gain in SNR, while the maximum local and head SAR values remained below the prescribed limits. We showed that increasing coil performance with HPM could improve detection of functional MR activation during a motor-based task for whole-brain fMRI. Finally, to gain an intuitive understanding of how HPM improves coil performance, we investigated how HPM separately affects signal and noise sensitivity to improve SNR. For this purpose, we employed a theoretical model based on dyadic Green's functions to compare the characteristics of current patterns, i.e., the optimal spatial distribution of coil conductors, that would either maximize SNR (ideal current patterns), maximize signal reception (signal-only optimal current patterns), or minimize sample noise (dark mode current patterns). Our results demonstrated that the presence of a lossless HPM changed the relative balance of signal-only optimal and dark mode current patterns. For a given relative permittivity, increasing the thickness of the HPM altered the magnitude of the currents required to optimize signal sensitivity at the voxel of interest and decreased the net electric field in the sample, which is associated, via reciprocity, with the noise received from the sample. Our results also suggested that signal-only current patterns could be used to identify HPM configurations that lead to high SNR gain for RF coil arrays. We anticipate that physical insights from this work could be utilized to build the next generation of high-performing RF coils integrated with HPM.

  6. Investigation on the optimal magnetic field of a cusp electron gun for a W-band gyro-TWA

    Science.gov (United States)

    Zhang, Liang; He, Wenlong; Donaldson, Craig R.; Cross, Adrian W.

    2018-05-01

    High efficiency and broadband operation of a gyrotron traveling wave amplifier (gyro-TWA) require a high-quality electron beam with low velocity spreads. The beam velocity spreads are mainly due to differences in the electric and magnetic fields that the electrons experience in the electron gun. This paper investigates the possibility of decoupling the design of the electron gun geometry from that of the magnet system while still achieving optimal results, through a case study of designing a cusp electron gun for a W-band gyro-TWA. A global multiple-objective optimization routine was used to optimize the electron gun geometry for different predefined magnetic field profiles individually. The results were compared, and the properties of the required magnetic field profile are summarized.

  7. Optimal Magnetic Field Shielding Method by Metallic Sheets in Wireless Power Transfer System

    Directory of Open Access Journals (Sweden)

    Feng Wen

    2016-09-01

    Full Text Available To meet the regulations established to limit human exposure to time-varying electromagnetic fields (EMFs), such as the International Commission on Non-Ionizing Radiation Protection (ICNIRP) guidelines, thin metallic sheets are often used to shield magnetic field leakage in high-power applications of wireless power transfer (WPT) systems based on magnetic field coupling. However, metals in the vicinity of the WPT coils decrease the self and mutual inductances and increase the effective series resistance; as a result, the electrical performance of the system, including transmission power and efficiency, is affected. With the research objective of further investigating shielding effectiveness in relation to system performance, the optimal magnetic field shielding method using metallic sheets in magnetic-field-coupled WPT is examined in this paper. Circuit and 3D Finite Element Analysis (FEA) models are combined to predict the magnetic field distribution and the electrical performance. Simulation and experimental results show that the method is very effective: the largest possible coupling coefficient of the WPT coils is obtained within the allowable range and then reduced to the value nearest to, but no smaller than, the critical coupling coefficient via geometrically unbroken metallic sheets. The optimal magnetic field shielding method, which considers system efficiency, transmission power, transmission distance, and system size, is also achieved using the analytic hierarchy process (AHP). The results can benefit WPT by helping to achieve efficient energy transfer and safe use in metal-shielded equipment.
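
    The AHP step mentioned above turns the four competing criteria (efficiency, transmission power, transmission distance, system size) into a single weight vector via pairwise comparisons. A minimal sketch of that weighting computation is shown below; the comparison values are illustrative assumptions, not judgements taken from the paper.

```python
# Minimal AHP sketch: criterion weights from the principal eigenvector of a
# pairwise-comparison matrix. All comparison values are illustrative.
import numpy as np

criteria = ["efficiency", "power", "distance", "size"]
A = np.array([[1.0, 3.0, 5.0, 7.0],   # e.g., efficiency judged 3x as important as power
              [1/3, 1.0, 3.0, 5.0],
              [1/5, 1/3, 1.0, 3.0],
              [1/7, 1/5, 1/3, 1.0]])

vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
w = vecs[:, k].real
w = w / w.sum()                        # normalize (also fixes eigenvector sign)
print(dict(zip(criteria, np.round(w, 3))))

# Consistency ratio (random index 0.9 for a 4x4 matrix); CR < 0.1 is acceptable
ci = (vals.real[k] - len(A)) / (len(A) - 1)
print("consistency ratio:", round(ci / 0.9, 3))
```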

  8. Moving your laboratories to the field – Advantages and limitations of the use of field portable instruments in environmental sample analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gałuszka, Agnieszka, E-mail: Agnieszka.Galuszka@ujk.edu.pl [Geochemistry and the Environment Division, Institute of Chemistry, Jan Kochanowski University, 15G Świętokrzyska St., 25-406 Kielce (Poland); Migaszewski, Zdzisław M. [Geochemistry and the Environment Division, Institute of Chemistry, Jan Kochanowski University, 15G Świętokrzyska St., 25-406 Kielce (Poland); Namieśnik, Jacek [Department of Analytical Chemistry, Chemical Faculty, Gdańsk University of Technology (GUT), 11/12 G. Narutowicz St., 80-233 Gdańsk (Poland)

    2015-07-15

    The recent rapid progress in the technology of field portable instruments has increased their applications in environmental sample analysis. These instruments offer the possibility of cost-effective, non-destructive, real-time, direct, on-site measurements of a wide range of both inorganic and organic analytes in gaseous, liquid and solid samples. Some of them do not require the use of reagents and do not produce any analytical waste. All these features contribute to the greenness of field portable techniques. Several stationary analytical instruments have portable versions. The most popular ones include: gas chromatographs with different detectors (mass spectrometer (MS), flame ionization detector, photoionization detector), ultraviolet–visible and near-infrared spectrophotometers, X-ray fluorescence spectrometers, ion mobility spectrometers, electronic noses and electronic tongues. The use of portable instruments in environmental sample analysis enables on-site screening and a subsequent selection of samples for routine laboratory analyses. They are also very useful in situations that require an emergency response and for process monitoring applications. However, quantification of results is still problematic in many cases. The other disadvantages include: higher detection limits and lower sensitivity than those obtained under laboratory conditions, a strong influence of environmental factors on instrument performance, and a high possibility of sample contamination in the field. This paper reviews recent applications of field portable instruments in environmental sample analysis and discusses their analytical capabilities. - Highlights: • Field portable instruments are widely used in environmental sample analysis. • Field portable instruments are indispensable for analysis in emergency response. • Miniaturization of field portable instruments reduces resource consumption. • In situ analysis is in agreement with green analytical chemistry

  9. Moving your laboratories to the field – Advantages and limitations of the use of field portable instruments in environmental sample analysis

    International Nuclear Information System (INIS)

    Gałuszka, Agnieszka; Migaszewski, Zdzisław M.; Namieśnik, Jacek

    2015-01-01

    The recent rapid progress in the technology of field portable instruments has increased their applications in environmental sample analysis. These instruments offer the possibility of cost-effective, non-destructive, real-time, direct, on-site measurements of a wide range of both inorganic and organic analytes in gaseous, liquid and solid samples. Some of them do not require the use of reagents and do not produce any analytical waste. All these features contribute to the greenness of field portable techniques. Several stationary analytical instruments have portable versions. The most popular ones include: gas chromatographs with different detectors (mass spectrometer (MS), flame ionization detector, photoionization detector), ultraviolet–visible and near-infrared spectrophotometers, X-ray fluorescence spectrometers, ion mobility spectrometers, electronic noses and electronic tongues. The use of portable instruments in environmental sample analysis enables on-site screening and a subsequent selection of samples for routine laboratory analyses. They are also very useful in situations that require an emergency response and for process monitoring applications. However, quantification of results is still problematic in many cases. The other disadvantages include: higher detection limits and lower sensitivity than those obtained under laboratory conditions, a strong influence of environmental factors on instrument performance, and a high possibility of sample contamination in the field. This paper reviews recent applications of field portable instruments in environmental sample analysis and discusses their analytical capabilities. - Highlights: • Field portable instruments are widely used in environmental sample analysis. • Field portable instruments are indispensable for analysis in emergency response. • Miniaturization of field portable instruments reduces resource consumption. • In situ analysis is in agreement with green analytical chemistry

  10. Field sampling of residual aviation gasoline in sandy soil

    International Nuclear Information System (INIS)

    Ostendorf, D.W.; Hinlein, E.S.; Yuefeng, Xie; Leach, L.E.

    1991-01-01

    Two complementary field sampling methods for the determination of residual aviation gasoline content in the contaminated capillary fringe of a fine, uniform, sandy soil were investigated. The first method featured field extrusion of core barrels into pint-size Mason jars, while the second consisted of laboratory partitioning of intact stainless steel core sleeves. Soil samples removed from the Mason jars (in the field) and sleeve segments (in the laboratory) were subjected to methylene chloride extraction and gas chromatographic analysis to compare their aviation gasoline content. The barrel extrusion sampling method yielded a vertical profile with 0.10m resolution over an essentially continuous 5.0m interval from the ground surface to the water table. The sleeve segment alternative yielded a more resolved 0.03m vertical profile over a shorter 0.8m interval through the capillary fringe. The two methods delivered precise estimates of the vertically integrated mass of aviation gasoline at a given horizontal location, and a consistent view of the vertical profile as well. In the latter regard, a 0.2m thick lens of maximum contamination was found in the center of the capillary fringe, where moisture filled all voids smaller than the mean pore size. The maximum peak was resolved by the core sleeve data, but was partially obscured by the barrel extrusion observations, so that replicate barrels or a half-pint Mason jar size should be considered for data supporting vertical transport analyses in the absence of sleeve partitions

  11. Mitigation of Power frequency Magnetic Fields. Using Scale Invariant and Shape Optimization Methods

    Energy Technology Data Exchange (ETDEWEB)

    Salinas, Ener; Yueqiang Liu; Daalder, Jaap; Cruz, Pedro; Antunez de Souza, Paulo Roberto Jr; Atalaya, Juan Carlos; Paula Marciano, Fabianna de; Eskinasy, Alexandre

    2006-10-15

    The present report describes the development and application of two novel methods for implementing magnetic field mitigation techniques at power frequencies. The first method makes use of scaling rules for electromagnetic quantities, while the second applies a 2D shape optimization algorithm based on gradient methods. Before this project, the first method had already been successfully applied (by some of the authors of this report) to electromagnetic designs involving purely conductive materials (e.g., copper, aluminium), which implied a linear formulation. Here we went beyond this approach and developed a formulation involving ferromagnetic (i.e., non-linear) materials. Surprisingly, we obtained good equivalent replacements for test transformers by varying the input current. Although the validity of this equivalence is constrained to regions not too close to the source, the results can still be considered useful, as most field mitigation techniques are developed precisely for reducing the magnetic field in regions relatively far from the sources. The shape optimization method was applied in this project to calculate the optimal geometry of a purely conductive plate to mitigate the magnetic field originating from underground cables. The objective function was a weighted combination of the magnetic energy in the region of interest and the heat dissipated in the shielding material. To our surprise, shapes of complex structure, difficult to interpret (and probably even harder to anticipate), resulted from the applied process. However, practical implementation (using approximations of these shapes) gave excellent experimental mitigation factors.

  12. Field-amplified sample stacking capillary electrophoresis with electrochemiluminescence applied to the determination of illicit drugs on banknotes.

    Science.gov (United States)

    Xu, Yuanhong; Gao, Ying; Wei, Hui; Du, Yan; Wang, Erkang

    2006-05-19

    Capillary electrophoresis (CE) with a Ru(bpy)3(2+) electrochemiluminescence (ECL) detection system was established for the determination of banknote contamination with controlled drugs, and a high-efficiency on-column field-amplified sample stacking (FASS) technique was optimized to increase the ECL intensity. The method was illustrated using heroin and cocaine, two typical and widespread illicit drugs. The highest sample stacking was obtained when 0.01 mM acetic acid was chosen for sample dissolution, with electrokinetic injection for 6 s at 17 kV. Under the optimized conditions (ECL detection at 1.2 V, separation voltage 10.0 kV, 20 mM phosphate-acetate (pH 7.2) as running buffer, 5 mM Ru(bpy)3(2+) with 50 mM phosphate-acetate (pH 7.2) in the detection cell), the standard curves were linear in the range of 7.50x10(-8) to 1.00x10(-5) M for heroin and 2.50x10(-7) to 1.00x10(-4) M for cocaine, and detection limits of 50 nM for heroin and 60 nM for cocaine were achieved (S/N = 3). Relative standard deviations of the ECL intensity and the migration time were 3.50 and 0.51% for heroin and 4.44 and 0.12% for cocaine, respectively. The developed method was successfully applied to the determination of heroin and cocaine on contaminated banknotes without any damage to the paper currency. Baseline resolution of heroin and cocaine was achieved within 6 min.

  13. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 2. Case study

    Science.gov (United States)

    Graham, Wendy D.; Neff, Christina R.

    1994-05-01

    The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1, is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity, and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated when the effects of random recharge are neglected.
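
    To make the conditioning step concrete, the sketch below computes a bare-bones simple-kriging estimate from nearby measurements under an assumed exponential covariance. The real method of Part 1 jointly conditions recharge, transmissivity, and head through the linearized flow equations, which this toy omits; all parameter values are illustrative.

```python
# Simple-kriging sketch: estimate log-transmissivity residuals at target
# points from nearby measurements, assuming a zero-mean field with an
# exponential covariance. Coordinates and parameters are illustrative.
import numpy as np

def simple_krige(x_target, x_obs, y_obs, sill=1.0, corr_len=5.0, nugget=1e-8):
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / corr_len)
    K = cov(x_obs, x_obs) + nugget * np.eye(len(x_obs))   # obs-obs covariance
    k = cov(x_target, x_obs)                              # target-obs covariance
    return k @ np.linalg.solve(K, y_obs)                  # best linear estimate

x_obs = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0]])   # measurement locations (km)
y_obs = np.array([-0.2, 0.5, 0.1])                        # log-T residuals
print(simple_krige(np.array([[1.5, 1.5]]), x_obs, y_obs))
```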

  14. Sampling of high amounts of bioaerosols using a high-volume electrostatic field sampler

    DEFF Research Database (Denmark)

    Madsen, A. M.; Sharma, Anoop Kumar

    2008-01-01

    For studies of the biological effects of bioaerosols, large samples are necessary. To be able to sample enough material and to cover the variations in aerosol content during and between working days, a long sampling time is necessary. Recently, a high-volume transportable electrostatic field...... and 315 mg dust (net recovery of the lyophilized dust) was sampled during a period of 7 days, respectively. The sampling rates of the electrostatic field samplers were between 1.34 and 1.96 mg dust per hour, the value for the Gravikon was between 0.083 and 0.108 mg dust per hour and the values for the GSP...... samplers were between 0.0031 and 0.032 mg dust per hour. The standard deviations of replica samplings and the following microbial analysis using the electrostatic field sampler and GSP samplers were at the same levels. The exposure to dust in the straw storage was 7.7 mg m(-3) when measured...

  15. Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock

    DEFF Research Database (Denmark)

    Kirkeby, Carsten; Stockmarr, Anders; Bødker, Rene

    2013-01-01

    BACKGROUND: Estimating the abundance of Culicoides using light traps is influenced by a large variation in abundance in time and place. This study investigates the optimal trapping strategy to estimate the abundance or presence/absence of Culicoides on a field with grazing animals. We used 45 light...... absence of vectors on the field. The variation in the estimated abundance decreased steeply when using up to six traps, and was less pronounced when using more traps, although no clear cutoff was found. CONCLUSIONS: Despite spatial clustering in vector abundance, we found no effect of increasing...... monitoring programmes on fields with grazing animals....

  16. The Proteome of Ulcerative Colitis in Colon Biopsies from Adults - Optimized Sample Preparation and Comparison with Healthy Controls.

    Science.gov (United States)

    Schniers, Armin; Anderssen, Endre; Fenton, Christopher Graham; Goll, Rasmus; Pasing, Yvonne; Paulssen, Ruth Hracky; Florholmen, Jon; Hansen, Terkel

    2017-12-01

    The purpose of the study was to optimize the sample preparation and then use the improved preparation to identify proteome differences between inflamed ulcerative colitis tissue from untreated adults and healthy controls. To optimize the sample preparation, we studied the effect of adding different detergents to a urea-containing lysis buffer for a Lys-C/trypsin tandem digestion. With the optimized method, we prepared clinical samples from six ulcerative colitis patients and six healthy controls and analysed them by LC-MS/MS. We examined the acquired data to identify differences between the states. We improved protein extraction and the number of protein identifications by utilizing a buffer containing urea and sodium deoxycholate. Comparing ulcerative colitis and healthy tissue, we found 168 of 2366 identified proteins differentially abundant. Inflammatory proteins are more abundant in ulcerative colitis; proteins related to anion transport and mucus production are less abundant. A high proportion of S100 proteins is differentially abundant, notably with both up-regulated and down-regulated members. The optimized sample preparation method will improve future proteomic studies on colon mucosa. The observed protein abundance changes and their enrichment in various groups improve our understanding of ulcerative colitis at the protein level. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. A simplified field protocol for genetic sampling of birds using buccal swabs

    Science.gov (United States)

    Vilstrup, Julia T.; Mullins, Thomas D.; Miller, Mark P.; McDearman, Will; Walters, Jeffrey R.; Haig, Susan M.

    2018-01-01

    DNA sampling is an essential prerequisite for conducting population genetic studies. For many years, blood sampling has been the preferred method for obtaining DNA from birds because of their nucleated red blood cells. Nonetheless, use of buccal swabs has been gaining favor because they are less invasive yet still yield adequate amounts of DNA for amplifying mitochondrial and nuclear markers; however, buccal swab protocols often include steps (e.g., extended air-drying and storage under frozen conditions) not easily adapted to field settings. Furthermore, commercial extraction kits and swabs for buccal sampling can be expensive for large population studies. We therefore developed an efficient, cost-effective, and field-friendly protocol for sampling wild birds after comparing DNA yield among 3 inexpensive buccal swab types (2 with foam tips and 1 with a cotton tip). Extraction and amplification success was high (100% and 97.2%, respectively) using inexpensive generic swabs. We found that foam-tipped swabs provided higher DNA yields than cotton-tipped swabs. We further determined that omitting a drying step and storing swabs in Longmire buffer increased efficiency in the field while still yielding sufficient amounts of DNA for detailed population genetic studies using mitochondrial and nuclear markers. This new field protocol allows time- and cost-effective DNA sampling of juveniles or small-bodied birds for which drawing blood may cause excessive stress to birds and technicians alike.

  18. On the optimal sampling of bandpass measurement signals through data acquisition systems

    International Nuclear Information System (INIS)

    Angrisani, L; Vadursi, M

    2008-01-01

    Data acquisition systems (DAS) play a fundamental role in many modern measurement solutions. One of the parameters characterizing a DAS is its maximum sample rate, which imposes constraints on the signals that can be digitized alias-free. Bandpass sampling theory singles out separated ranges of admissible sample rates, which can be significantly lower than the carrier frequency. But how does one choose the most convenient sample rate for the purpose at hand? The paper proposes a method for the automatic selection of the optimal sample rate in measurement applications involving bandpass signals; the effects of sample clock instability and limited resolution are also taken into account. The method allows the user to choose the location of spectral replicas of the sampled signal in terms of normalized frequency and the minimum guard band between replicas, thus introducing a feature that no DAS currently available on the market seems to offer. A number of experimental tests on bandpass digitally modulated signals were carried out to assess the agreement of the obtained central frequency with the expected one.
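
    For readers unfamiliar with the constraint this record builds on, the sketch below enumerates the separated ranges of alias-free sample rates for a band occupying [f_L, f_H] under the classical bandpass sampling criterion. The paper's selection logic (clock instability, resolution, guard bands) is not reproduced here.

```python
# Classical bandpass sampling: a rate fs is alias-free if, for some integer n,
# 2*f_high/n <= fs <= 2*f_low/(n-1). This enumerates the admissible ranges.
def admissible_rates(f_low, f_high):
    ranges, n = [], 1
    while True:
        fs_min = 2.0 * f_high / n
        fs_max = float("inf") if n == 1 else 2.0 * f_low / (n - 1)
        if fs_min > fs_max:
            break                      # higher n yields no valid range
        ranges.append((fs_min, fs_max))
        n += 1
    return ranges

# Example: a 10 MHz-wide band centred at 70 MHz can be sampled far below 140 MS/s
for lo, hi in admissible_rates(65e6, 75e6):
    hi_txt = "inf" if hi == float("inf") else f"{hi/1e6:.2f}"
    print(f"{lo/1e6:.2f} to {hi_txt} MHz")
```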

  19. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA), based on the idea of model population analysis (MPA), is proposed for variable selection. Unlike most existing optimization methods for variable selection, VISSA statistically evaluates the performance of the variable space at each step of the optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks at each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied by most existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available at: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
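
    A rough sketch of the WBMS loop is given below: variables enter sub-models with per-variable inclusion weights, the best-performing fraction of sub-models re-defines the weights, and the space shrinks as weights drift toward 0 or 1. The plain least-squares cross-validation used for evaluation is a stand-in for the PLS models used in the paper.

```python
# Sketch of weighted binary matrix sampling (WBMS) in the spirit of VISSA.
# The evaluation function is ordinary least squares with k-fold CV, a
# simplified stand-in for the paper's PLS modelling of NIR spectra.
import numpy as np

def cv_error(Xs, y, k=5):
    n = len(y); idx = np.arange(n); sse = 0.0
    for f in range(k):
        te = idx[f::k]; tr = np.setdiff1d(idx, te)
        coef, *_ = np.linalg.lstsq(Xs[tr], y[tr], rcond=None)
        sse += float(((Xs[te] @ coef - y[te]) ** 2).sum())
    return sse / n

def vissa_like(X, y, n_sub=200, n_iter=10, top_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    w = np.full(X.shape[1], 0.5)                 # initial inclusion weights
    for _ in range(n_iter):
        B = rng.random((n_sub, X.shape[1])) < w  # weighted binary sampling matrix
        errs = np.array([cv_error(X[:, b], y) if b.any() else np.inf for b in B])
        best = errs.argsort()[: max(1, int(top_frac * n_sub))]
        w = B[best].mean(axis=0)                 # frequency in the best sub-models
    return np.flatnonzero(w > 0.5)               # surviving variables

X = np.random.default_rng(1).normal(size=(60, 20))
y = X[:, 3] - 2 * X[:, 7] + 0.1 * np.random.default_rng(2).normal(size=60)
print(vissa_like(X, y))                          # should favour variables 3 and 7
```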

  20. Virtual solar field - An opportunity to optimize transient processes in line-focus CSP power plants

    Science.gov (United States)

    Noureldin, Kareem; Hirsch, Tobias; Pitz-Paal, Robert

    2017-06-01

    Optimizing solar field operation and control is a key factor in improving the competitiveness of line-focus solar thermal power plants. However, the risks of assessing new and innovative control strategies on operational power plants hinder such optimization and result in more conservative control schemes. In this paper, we describe applications of the Virtual Solar Field (VSF), an in-house transient simulation tool for whole solar fields developed at the German Aerospace Center (DLR). The tool offers a virtual platform for simulating real solar fields, coupling the thermal and hydraulic conditions of the field with high computational efficiency. Using the tool, developers and operators can probe their control strategies and assess the potential benefits while avoiding the high risks and costs. In this paper, we study the benefits gained from controlling the loop valves and from using direct normal irradiance (DNI) maps and forecasts for field control. Loop valve control is interesting for many solar field operators since it provides a high degree of flexibility in controlling the solar field by regulating the flow rate in each loop. This improves the reaction to transient conditions, such as passing clouds and field start-up in the morning. Nevertheless, due to the large number of loops and the sensitivity of the field control to the valve settings, this process needs to be automated, and the effect of changing the setting of each valve on the control of the whole field needs to be taken into account. We used VSF to implement simple control algorithms for the loop valves and to study the benefits that could be gained from active loop valve control during transient conditions. Secondly, we study how short-term, highly spatially resolved DNI forecasts provided by cloud cameras could improve the plant energy yield. Both cases show an improvement in plant efficiency and outlet temperature stability. This paves the way for further

  1. Optimization of a novel large field of view distortion phantom for MR-only treatment planning.

    Science.gov (United States)

    Price, Ryan G; Knight, Robert A; Hwang, Ken-Pin; Bayram, Ersin; Nejad-Davarani, Siamak P; Glide-Hurst, Carri K

    2017-07-01

    MR-only treatment planning requires images of high geometric fidelity, particularly for large fields of view (FOV). However, the availability of large-FOV distortion phantoms with analysis software is currently limited. This work sought to optimize a modular distortion phantom to accommodate multiple bore configurations and implement distortion characterization in a widely implementable solution. To determine candidate materials, 1.0 T MR and CT images were acquired of twelve urethane foam samples of various densities and strengths. Samples were precision-machined to accommodate 6 mm diameter paintballs used as landmarks. Final material candidates were selected by balancing strength, machinability, weight, and cost. Bore sizes and minimum aperture widths resulting from couch position were tabulated from the literature (14 systems, 5 vendors). Bore geometry and couch position were simulated using MATLAB to generate machine-specific models to optimize the phantom build. Previously developed software for distortion characterization was modified for several magnet geometries (1.0 T, 1.5 T, 3.0 T), compared against previously published 1.0 T results, and integrated into the 3D Slicer application platform. All foam samples provided sufficient MR image contrast with paintball landmarks. Urethane foam (compressive strength ~1000 psi, density ~20 lb/ft³) was selected for its accurate machinability and weight characteristics. For smaller bores, a phantom version with the following parameters was used: 15 foam plates, 55 × 55 × 37.5 cm³ (L × W × H), 5,082 landmarks, and weight ~30 kg. To accommodate bores wider than 70 cm, an extended build used 20 plates spanning 55 × 55 × 50 cm³ with 7,497 landmarks and weight ~44 kg. Distortion characterization software was implemented as an external module in 3D Slicer's plugin framework, and results agreed with the literature. The design and implementation of a modular, extendable distortion phantom was optimized for several bore
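
    As a small illustration of the bore-and-couch geometry modelling described above, the following sketch computes the tallest rectangular build of a given width that still clears a cylindrical bore when the couch surface sits below the isocentre. Dimensions are illustrative; the actual MATLAB models include vendor-specific aperture details.

```python
# Fit check for a rectangular phantom in a cylindrical bore: the top corners
# (+/- w/2, h - y0) and bottom corners (+/- w/2, -y0) must lie inside radius R,
# where y0 is the couch surface's drop below the isocentre. Illustrative only.
import math

def max_phantom_height(bore_radius_cm, couch_drop_cm, width_cm):
    half_w = width_cm / 2.0
    if half_w >= bore_radius_cm:
        return 0.0                                   # too wide to fit at all
    if half_w**2 + couch_drop_cm**2 > bore_radius_cm**2:
        return 0.0                                   # base corners already outside
    top_y = math.sqrt(bore_radius_cm**2 - half_w**2) # highest y the corners allow
    return max(0.0, top_y + couch_drop_cm)

# 70 cm bore with the couch 10 cm below isocentre, 55 cm-wide plate stack
print(round(max_phantom_height(35.0, 10.0, 55.0), 1), "cm")
```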

  2. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    Science.gov (United States)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, MODIS remote sensing data, with their low cost, high timeliness, and moderate-to-low spatial resolution, were first used in the North China Plain (NCP) study region to carry out mixed-pixel spectral decomposition and extract a useful regionalized indicator parameter (the fraction of winter wheat planting area in each pixel) to serve as the regionalized indicator variable (RIV) for spatial sampling. The RIV values were then analyzed spatially, and the spatial structure characteristics (i.e., spatial correlation and variation) of the NCP were obtained; these were further processed to yield scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, building on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes, together with their optimization and optimal selection, were developed as a scientific basis for improving and optimizing existing spatial sampling schemes for large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, the optimal local spatial prediction and gridded system of extrapolation results implement an adaptive reporting pattern for spatial sampling in accordance with the reporting units, in order to satisfy the actual needs of sampling surveys.
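
    A minimal sketch of the mixed-pixel decomposition that produces the per-pixel winter-wheat fraction is given below, using non-negative least squares with a soft sum-to-one constraint. The endmember spectra are invented for illustration; the paper's actual decomposition procedure may differ.

```python
# Linear mixture sketch: estimate per-pixel class fractions (the regionalized
# indicator variable) from known endmember spectra via non-negative least
# squares with a heavily weighted sum-to-one row. Endmember values are invented.
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, weight=10.0):
    """pixel: (bands,); endmembers: (bands, n_classes) -> fractions (n_classes,)."""
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)           # soft sum-to-one constraint
    f, _ = nnls(A, b)
    return f / f.sum()

E = np.array([[0.05, 0.30],                # red reflectance: wheat, bare soil
              [0.45, 0.25],                # near-infrared
              [0.30, 0.35]])               # shortwave infrared
pixel = np.array([0.15, 0.37, 0.32])       # observed mixed-pixel spectrum
print(unmix(pixel, E))                     # approximate wheat / soil fractions
```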

  3. Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples

    Directory of Open Access Journals (Sweden)

    Hyunok Oh

    2003-05-01

    Full Text Available In multimedia and graphics applications, data samples of non-primitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model, with a given execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and by 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirements.
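
    The separation of global data buffers from local pointer buffers can be pictured with the toy model below: each dataflow arc queues only indices into a shared slot pool, so a large frame's storage is reused across firings. The free-list reuse policy is a simplification assumed here, not the paper's synthesis algorithm.

```python
# Toy model of the proposed buffer split: a global pool holds live data
# samples; each dataflow arc keeps a local FIFO of pointers (indices) into
# the pool. Reuse is a simple free list standing in for the real analysis.
from collections import deque

class GlobalPool:
    def __init__(self):
        self.slots, self.free = [], deque()
    def alloc(self, sample):
        if self.free:                      # reuse a dead slot if available
            i = self.free.popleft()
            self.slots[i] = sample
        else:
            i = len(self.slots)
            self.slots.append(sample)
        return i
    def release(self, i):
        self.free.append(i)

class Arc:
    """Local pointer buffer for one SDF edge: stores indices, not samples."""
    def __init__(self, pool):
        self.pool, self.fifo = pool, deque()
    def produce(self, sample):
        self.fifo.append(self.pool.alloc(sample))
    def consume(self):
        i = self.fifo.popleft()
        sample = self.pool.slots[i]
        self.pool.release(i)
        return sample

pool, arc = GlobalPool(), Arc(pool)
for frame in range(100):                   # 100 producer/consumer firings
    arc.produce(bytearray(1 << 16))        # 64 KiB "image" token
    arc.consume()
print("slots ever allocated:", len(pool.slots))   # 1: storage shared across firings
```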

  4. Determination of total concentration of chemically labeled metabolites as a means of metabolome sample normalization and sample loading optimization in mass spectrometry-based metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2012-12-18

    For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting material from each sample when comparing the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach for determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting material for MS analysis, as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at a detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with (12)C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure that an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with (13)C-dansylation. Equal amounts of the (12)C
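
    The volume-adjustment step of this workflow reduces to a one-line computation once LC-UV has provided each sample's total labeled-metabolite concentration: the volume drawn from each sample is inversely proportional to that concentration. The helper below is hypothetical and the concentrations are invented.

```python
# Normalization sketch: take a volume from each labeled sample inversely
# proportional to its LC-UV total concentration, so every sample contributes
# an equal labeled-metabolite amount to the pooled standard. Values invented.
def pooling_volumes_uL(total_conc_mM, amount_nmol):
    # 1 mM == 1 nmol/uL, so volume(uL) = amount(nmol) / conc(mM)
    return {sample: amount_nmol / c for sample, c in total_conc_mM.items()}

concs = {"urine_01": 2.1, "urine_02": 3.4, "urine_03": 1.2}   # mM, from LC-UV
print(pooling_volumes_uL(concs, amount_nmol=50.0))
```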

  5. Optimizing the multicycle subrotational internal cooling of diatomic molecules

    Science.gov (United States)

    Aroch, A.; Kallush, S.; Kosloff, R.

    2018-05-01

    Subrotational cooling of the AlH+ ion to the millikelvin regime, using optimally shaped pulses, is computed. The coherent electromagnetic fields induce purity-conserving transformations and do not change the sample temperature. A decrease in sample temperature, manifested by an increase of purity, is achieved by the complementary uncontrolled spontaneous emission, which changes the entropy of the system. We employ optimal control theory to find a pulse that steers the system into a population configuration that results in cooling upon multicycle excitation-emission steps. The obtained optimal transformation was shown to be capable of cooling molecular ions to the sub-kelvin regime.

  6. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    OpenAIRE

    Ahmet Demir; Utku kose

    2017-01-01

    In fields that require finding the most appropriate value, optimization has become a vital approach for obtaining effective solutions. With the use of optimization techniques, many different fields in modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an...

  7. Topology optimized and 3D printed polymer-bonded permanent magnets for a predefined external field

    Science.gov (United States)

    Huber, C.; Abert, C.; Bruckner, F.; Pfaff, C.; Kriwet, J.; Groenefeld, M.; Teliban, I.; Vogler, C.; Suess, D.

    2017-08-01

    Topology optimization offers great opportunities to design permanent magnetic systems that have specific external field characteristics. Additive manufacturing of polymer-bonded magnets with an end-user 3D printer can be used to manufacture permanent magnets with structures that had been difficult or impossible to manufacture previously. This work combines these two powerful methods to design and manufacture permanent magnetic systems with specific properties. The topology optimization framework is simple, fast, and accurate. It can also be used for the reverse engineering of permanent magnets in order to find the topology from field measurements. Furthermore, a magnetic system that generates a linear external field above the magnet is presented. With a volume constraint, the amount of magnetic material can be minimized without losing performance. Simulations and measurements of the printed systems show very good agreement.

  8. High Fidelity Multi-Objective Design Optimization of a Downscaled Cusped Field Thruster

    Directory of Open Access Journals (Sweden)

    Thomas Fahey

    2017-11-01

    Full Text Available The Cusped Field Thruster (CFT) concept has demonstrated significantly improved performance over the Hall Effect Thruster and the Gridded Ion Thruster; however, little is understood about the complexities of the interactions and interdependencies of the geometrical, magnetic, and ion beam properties of the thruster. This study applies an advanced design methodology, combining a modified power distribution calculation and evolutionary algorithms assisted by surrogate modeling, to a multi-objective design optimization for the performance optimization and characterization of the CFT. Optimization is performed over five design parameters (i.e., anode voltage, anode current, mass flow rate, and the magnet radii), simultaneously aiming to maximize three objectives: thrust, efficiency, and specific impulse. Statistical methods based on global sensitivity analysis are employed to assess the optimization results in conjunction with surrogate models to identify key design factors with respect to the three design objectives and additional performance measures. The research indicates that the anode current and the outer magnet radius have the greatest effect on the performance parameters. An optimal value for the anode current is determined, and a trend towards maximizing anode potential and mass flow rate is observed.

  9. Optimization of the solvent-based dissolution method to sample volatile organic compound vapors for compound-specific isotope analysis.

    Science.gov (United States)

    Bouchard, Daniel; Wanner, Philipp; Luo, Hong; McLoughlin, Patrick W; Henderson, James K; Pirkle, Robert J; Hunkeler, Daniel

    2017-10-20

    The solvent-based dissolution method used to sample gas-phase volatile organic compounds (VOCs) for compound-specific isotope analysis (CSIA) was optimized to lower the method detection limits for TCE and benzene. The sampling methodology previously evaluated by [1] consists of pulling air through a solvent to dissolve and accumulate the gaseous VOCs. After the sampling process, the solvent can be treated in the same way as groundwater samples for routine CSIA, by diluting an aliquot of the solvent into water to reach the required concentration of the targeted contaminant. Among the solvents tested, tetraethylene glycol dimethyl ether (TGDE) showed the best aptitude for the method. TGDE has a great affinity for TCE and benzene, efficiently dissolving the compounds during their passage through the solvent. The method detection limit for TCE (5±1 μg/m³) and benzene (1.7±0.5 μg/m³) is lower when using TGDE than with methanol, which was used previously (385 μg/m³ for TCE and 130 μg/m³ for benzene) [2]. The method detection limit refers to the minimal gas-phase concentration in ambient air required to load sufficient VOC mass into TGDE to perform δ¹³C analysis. Due to a different analytical procedure, the method detection limit associated with δ³⁷Cl analysis was found to be 156±6 μg/m³ for TCE. Furthermore, the experimental results validated the relationship between the gas-phase TCE and the progressive accumulation of dissolved TCE in the solvent during sampling. Accordingly, based on the air-solvent partitioning coefficient, the sampling methodology (e.g., sampling rate, sampling duration, amount of solvent) and the final TCE concentration in the solvent, the concentration of TCE in the gas phase prevailing during the sampling event can be determined. Moreover, the possibility of analysing the TCE concentration in the solvent after sampling (or other targeted VOCs) allows the field deployment of the sampling

  10. SPEXTRA: Optimal extraction code for long-slit spectra in crowded fields

    Science.gov (United States)

    Sarkisyan, A. N.; Vinokurov, A. S.; Solovieva, Yu. N.; Sholukhova, O. N.; Kostenkov, A. E.; Fabrika, S. N.

    2017-10-01

    We present a code for the optimal extraction of long-slit 2D spectra in crowded stellar fields. Its main advantage over existing spectrum extraction codes is a graphical user interface (GUI) and a convenient system for visualizing the data and extraction parameters. The package is designed to study stars in crowded fields of nearby galaxies and star clusters in galaxies. Apart from extracting the spectra of several closely located or superimposed stars, it allows the spectra of objects to be extracted with subtraction of superimposed nebulae of different shapes and different degrees of ionization. The package can also be used to study single stars in the case of a strong background. The current version implements optimal extraction of 2D spectra with an aperture and a Gaussian point spread function (PSF). In the future, the package will be supplemented with the possibility of building a PSF based on a Moffat function. We present the details of the GUI, illustrate the main features of the package, and show results of extraction of several interesting spectra of objects from different telescopes.

  11. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    Science.gov (United States)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region; then run a random distance in a random direction to a new hit point; repeat until the desired number of alternatives is generated. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null-space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because the search at each iteration is confined to the hit line; the algorithm can move in one
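
    A bare-bones version of the hit-and-run loop over a near-optimal region is sketched below on a toy problem. The run-length backtracking is a crude stand-in for the orthogonal-basis transformation and slice sampling described in the abstract; the toy region is invented.

```python
# Hit-and-run sketch over a near-optimal region {x : f(x) <= (1+tol)*f_opt,
# g(x) <= 0}: pick a random direction, run a random distance, shrink the run
# until the new point is feasible, repeat.
import numpy as np

def hit_and_run(x0, in_region, n_samples, max_run=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = np.asarray(x0, float), []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)                 # random direction on the sphere
        t = rng.uniform(-max_run, max_run)     # random signed run length
        for _ in range(60):                    # backtrack until feasible
            if in_region(x + t * d):
                x = x + t * d
                break
            t *= 0.5
        samples.append(x.copy())
    return np.array(samples)

# Toy problem: min ||x||^2 subject to ||x||^2 >= 1 has f_opt = 1; with
# tol = 0.25 the near-optimal region is the ring 1 <= ||x||^2 <= 1.25.
in_ring = lambda x: 1.0 <= float(x @ x) <= 1.25
pts = hit_and_run([1.05, 0.0], in_ring, n_samples=2000)
print(pts.shape, pts.mean(axis=0))
```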

  12. Astronaut Neil Armstrong studies rock samples during geological field trip

    Science.gov (United States)

    1969-01-01

    Astronaut Neil Armstrong, commander of the Apollo 11 lunar landing mission, studies rock samples during a geological field trip to the Quitman Mountains area near the Fort Quitman ruins in far west Texas.

  13. Uniform field loop-gap resonator and rectangular TEU02 for aqueous sample EPR at 94 GHz

    Science.gov (United States)

    Sidabras, Jason W.; Sarna, Tadeusz; Mett, Richard R.; Hyde, James S.

    2017-09-01

    In this work we present the design and implementation of two uniform-field resonators: a seven-loop-six-gap loop-gap resonator (LGR) and a rectangular TEU02 cavity resonator. Each resonator has uniform-field-producing end-sections. These resonators have been designed for electron paramagnetic resonance (EPR) of aqueous samples at 94 GHz. The LGR geometry employs low-loss Rexolite end-sections to improve the field homogeneity over a 3 mm sample region of interest from a near-cosine distribution to 90% uniform. The LGR was designed to accommodate large degassable polytetrafluoroethylene (PTFE) tubes (0.81 mm O.D.; 0.25 mm I.D.) for aqueous samples. Additionally, field modulation slots are designed for uniform 100 kHz field modulation incident at the sample. Experiments using a point sample of lithium phthalocyanine (LiPC) were performed to measure the uniformity of both the microwave magnetic field and the 100 kHz field modulation, and to confirm simulations. The rectangular TEU02 cavity resonator employs over-sized end-sections with sample shielding to provide an 87% uniform field for a 0.1 × 2 × 6 mm³ sample geometry. An evanescent slotted window was designed for light access to irradiate 90% of the sample volume. A novel dual-slot iris was used to minimize microwave magnetic field perturbations and maintain cross-sectional uniformity. Practical EPR experiments with light-irradiated rose bengal (4,5,6,7-tetrachloro-2′,4′,5′,7′-tetraiodofluorescein) were performed in the TEU02 cavity. These geometries provide practical designs for uniform-field resonators and continue resonator advancements towards quantitative EPR spectroscopy.

  14. Operable Unit 3-13, Group 3, Other Surface Soils (Phase II) Field Sampling Plan

    Energy Technology Data Exchange (ETDEWEB)

    G. L. Schwendiman

    2006-07-27

    This Field Sampling Plan describes the Operable Unit 3-13, Group 3, Other Surface Soils, Phase II remediation field sampling activities to be performed at the Idaho Nuclear Technology and Engineering Center located within the Idaho National Laboratory Site. Sampling activities described in this plan support characterization sampling of new sites, real-time soil spectroscopy during excavation, and confirmation sampling that verifies that the remedial action objectives and remediation goals presented in the Final Record of Decision for Idaho Nuclear Technology and Engineering Center, Operable Unit 3-13 have been met.

  15. Optimizing mesoscopic two-band superconductors for observation of fractional vortex states

    Energy Technology Data Exchange (ETDEWEB)

    Piña, Juan C. [Departamento de Física, Universidade Federal de Pernambuco, Cidade Universitária, 50670-901 Recife, PE (Brazil); Núcleo de Tecnologia, CAA, Universidade Federal de Pernambuco, 55002-970 Caruaru, PE (Brazil); Souza Silva, Clécio C. de, E-mail: clecio@df.ufpe [Departamento de Física, Universidade Federal de Pernambuco, Cidade Universitária, 50670-901 Recife, PE (Brazil); Milošević, Milorad V. [Departamento de Física, Universidade Federal do Ceará, 60455-900 Fortaleza, Ceará (Brazil); Departement Fysica, Universiteit Antwerpen, Groenenborgerlaan 171, B-2020 Antwerpen (Belgium)

    2014-08-15

    Highlights: • Observation of fractional vortices in two-band superconductors over a broad size range. • There is a minimal sample size for observing each particular fractional state. • The optimal value for stability of each fractional state is determined. • A suitable magnetic dot enhances stability even further. - Abstract: Using the two-component Ginzburg–Landau model, we investigate the effect of sample size and of the magnitude and homogeneity of the external magnetic field on the stability of fractional vortex states in a mesoscopic two-band superconducting disk. We found that each fractional state has a preferred sample size for which the range of applied field in which the state is stable is pronouncedly large. Conversely, there exists an optimal magnitude of applied field for which a large range of sample radii will support the considered fractional state. Finally, we show that the stability of fractional states can be enhanced even further by magnetic nanostructuring of the sample, i.e., by suitably chosen geometrical parameters and magnetic moment of a ferromagnetic dot placed on top of the superconducting disk.

  16. Development and optimization of the determination of pharmaceuticals in water samples by SPE and HPLC with diode-array detection.

    Science.gov (United States)

    Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra

    2013-09-01

    This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE, and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges for extracting the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, the elution solvent, and the sorbent mass were optimized. In addition to optimization of the SPE procedure, the optimal HPLC column was selected from columns with different stationary phases from different manufacturers. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases except ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good intra- and interday precision with RSD below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    Science.gov (United States)

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and an optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal (D-optimal) sampling strategy was explored with the PopED software. Individual area-under-the-curve estimates were generated by Bayesian estimation using the full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The D-optimal sampling times (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area-under-the-curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98). The proposed sampling strategy promises to achieve the target area under the curve as part of precision dosing.

  18. Field sampling scheme optimization using simulated annealing

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-10-01

    Full Text Available : silica (quartz, chalcedony, and opal) → alunite → kaolinite → illite → smectite → chlorite. Associated with this mineral alteration are high-sulphidation gold deposits and low-sulphidation base metal deposits. Gold mineralization is located... of vuggy (porous) quartz, opal, and gray and black chalcedony veins. Vuggy (porous) quartz is formed from extreme leaching of the host rock. It hosts high-sulphidation gold mineralization and is evidence for a hypogene event. Alteration...
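
    Although the abstract above is fragmentary, the method named in the title is standard enough to sketch: simulated annealing over candidate sample sites, swapping one site at a time and accepting uphill moves with Boltzmann probability. The representativity objective below is a placeholder, not the report's actual mineral-prospectivity criterion.

```python
# Simulated-annealing sketch for field sampling scheme optimization: choose k
# sites from candidates to minimize a representativity cost. The cost here
# (match a target mean value) is a placeholder objective.
import math, random

def anneal_sites(n_candidates, k, cost, n_iter=5000, t0=1.0, alpha=0.999, seed=0):
    rnd = random.Random(seed)
    current = rnd.sample(range(n_candidates), k)
    best, t = list(current), t0
    for _ in range(n_iter):
        prop = list(current)
        prop[rnd.randrange(k)] = rnd.choice(
            [c for c in range(n_candidates) if c not in prop])   # swap one site
        delta = cost(prop) - cost(current)
        if delta < 0 or rnd.random() < math.exp(-delta / max(t, 1e-12)):
            current = prop
            if cost(current) < cost(best):
                best = list(current)
        t *= alpha                                               # cool down
    return best

rnd = random.Random(42)
values = [rnd.random() for _ in range(200)]                      # fake site attribute
cost = lambda idx: abs(sum(values[i] for i in idx) / len(idx) - 0.5)
print(anneal_sites(len(values), 10, cost))
```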

  19. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    Science.gov (United States)

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are designed, which should prove fruitful to both endeavors. This change simply consists of using real-time neuroimaging to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models, which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data, we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience, as well as some remaining challenges.
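
    The core loop of such an active protocol can be caricatured in a few lines: maintain posteriors over candidate models and present, on each trial, the stimulus where the model predictions diverge most. The response models below are invented stand-ins for ASAP's generative models of electrophysiological data.

```python
# Toy active-sampling loop: two candidate models predict response probability
# as a function of the stimulus; each trial presents the stimulus where the
# models disagree most, then updates the model posteriors. Models are invented.
import numpy as np

rng = np.random.default_rng(3)
stimuli = np.linspace(0.0, 1.0, 11)
predict = [lambda s: 0.2 + 0.6 * s,        # model 0 (simulated as the true one)
           lambda s: 0.5 + 0.0 * s]        # model 1: stimulus-independent
log_post = np.zeros(2)                     # log posterior over models (flat prior)

for trial in range(40):
    gaps = [abs(predict[0](s) - predict[1](s)) for s in stimuli]
    s = stimuli[int(np.argmax(gaps))]      # most disambiguating stimulus (greedy)
    y = rng.random() < predict[0](s)       # simulate the subject under model 0
    for m in (0, 1):
        p = predict[m](s)
        log_post[m] += np.log(p if y else 1.0 - p)

post = np.exp(log_post - log_post.max())
print("model posteriors:", post / post.sum())
```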

  20. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of a Technical Justification, a document assembling all the evidence to assure that the NDT system in question is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of a qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of NDT reliability is necessary; a POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of the optimal sample size for deriving a POD curve, so that adequate guidance could be given to practitioners of inspection reliability. Manufacturing test pieces with cracks that are representative of real defects found in nuclear power plants (NPPs) can be very expensive. Thus there is a tendency to reduce sample sizes and, in turn, reduce the conservatism associated with the derived POD curve. Not much guidance on the correct sample size can be found in the published literature, where qualitative statements are often given with no further justification. The aim of this paper is to summarise the findings of this work. (author)
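
    As a concrete illustration of the statistical models such reviews cover, the sketch below fits a hit/miss POD curve, logistic in log flaw size, by maximum likelihood and reads off a90, the flaw size detected with 90% probability. The inspection data are fabricated for illustration.

```python
# Maximum-likelihood fit of a hit/miss POD curve with POD(a) logistic in
# log flaw size, one of the standard model families. Data are fabricated.
import numpy as np
from scipy.optimize import minimize

def fit_pod(sizes_mm, hits):
    x = np.log(sizes_mm)
    def neg_log_lik(params):
        b0, b1 = params
        pod = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        pod = np.clip(pod, 1e-9, 1 - 1e-9)     # guard the log terms
        return -np.sum(hits * np.log(pod) + (1 - hits) * np.log(1 - pod))
    return minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead").x

sizes = np.array([0.5, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
hits  = np.array([0,   0,   1,   0,   1,   1,   1,   1])
b0, b1 = fit_pod(sizes, hits)
a90 = np.exp((np.log(9.0) - b0) / b1)          # flaw size with POD = 0.90
print(f"a90 = {a90:.2f} mm")
```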

  1. Field Exploration and Life Detection Sampling for Planetary Analogue Research (FELDSPAR)

    Science.gov (United States)

    Gentry, D.; Stockton, A. M.; Amador, E. S.; Cable, M. L.; Cantrell, T.; Chaudry, N.; Cullen, T.; Duca, Z. A.; Jacobsen, M. B.; Kirby, J.; McCaig, H. C.; Murukesan, G.; Rennie, V.; Rader, E.; Schwieterman, E. W.; Stevens, A. H.; Sutton, S. A.; Tan, G.; Yin, C.; Cullen, D.; Geppert, W.

    2017-12-01

    Extraterrestrial studies are typically conducted on mg samples from cm-scale features, while landing sites are selected based on m- to km-scale features. It is therefore critical to understand the spatial distribution of organic molecules over scales from the cm to the km, particularly in geological features that appear homogeneous at m to km scales. This is addressed by FELDSPAR, a NASA-funded project that conducts field operations analogous to Mars sample return in its science, operations, and technology [1]. Here, we present recent findings from 2016 and 2017 campaigns to multiple Martian analogue sites in Iceland. Icelandic volcanic regions are Mars analogues due to desiccation, low nutrient availability, and temperature extremes [2], and are relatively young and isolated from anthropogenic contamination [3]. Operationally, many Icelandic analogue sites are remote enough that field expeditions must address several sampling constraints that are also faced by robotic exploration [1, 2]. Four field sites were evaluated in this study. The Fimmvörðuháls lava field was formed by a basaltic effusive eruption associated with the 2010 Eyjafjallajökull eruption. Mælifellssandur is a recently deglaciated plain to the north of the Myrdalsjökull glacier. Holuhraun is a basaltic spatter and cinder cone formed by 2014 fissure eruptions just north of the Vatnajökull glacier. Dyngjusandur is a plain kept barren by repeated aeolian mechanical weathering. Samples were collected in nested triangular grids from the 10 cm to the 1 km scale. We obtained overhead imagery at 1 m to 200 m elevation to create digital elevation models. In-field reflectance spectroscopy was obtained with an ASD spectrometer and chemical composition was measured by a Bruker handheld XRF. All sites chosen were homogeneous in apparent color, morphology, moisture, grain size, and reflectance spectra at all scales greater than 10 cm. Field lab ATP assays were conducted to monitor microbial habitation, and home
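
    The nested triangular grids mentioned above can be sketched as follows; the triangle orientation and the decade scale steps from 10 cm to 1 km are illustrative assumptions.

    ```python
    # Sketch: sample coordinates for nested equilateral triangles around one centre.
    import numpy as np

    def nested_triangles(center, scales_m):
        """Return sample coordinates: the centre plus 3 vertices per scale."""
        pts = [np.asarray(center, float)]
        angles = np.deg2rad([90, 210, 330])          # equilateral triangle vertices
        for s in scales_m:
            r = s / np.sqrt(3.0)                     # circumradius for edge length s
            for a in angles:
                pts.append(center + r * np.array([np.cos(a), np.sin(a)]))
        return np.array(pts)

    scales = [0.1, 1, 10, 100, 1000]                 # 10 cm ... 1 km
    grid = nested_triangles(np.zeros(2), scales)
    print(grid.shape)                                # (16, 2): centre + 3 x 5 vertices
    ```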

  2. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  3. Optimal Value of Series Capacitors for Uniform Field Distribution in Transmission Line MRI Coils

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy

    2016-01-01

    Transmission lines are often used as coils in high field magnetic resonance imaging (MRI). Due to the distributed nature of transmission lines, coils based on them produce an inhomogeneous field. This work investigates the application of series capacitors to improve field homogeneity along the coil. The equations for optimal values of evenly distributed capacitors are derived and expressed in terms of the implemented transmission line parameters. The achieved magnetic field homogeneity is estimated under a quasistatic approximation and compared to the regular transmission line resonator. Finally, a more practical case of a microstrip line coil with two series capacitors is considered.

  4. INTERACTION OF IMPULSE ELECTROMAGNETIC FIELDS WITH SURFACES OF METAL SAMPLES

    Directory of Open Access Journals (Sweden)

    V. V. Pavliouchenko

    2006-01-01

    Full Text Available The paper reports measurements of the maximum tangential component of magnetic intensity, Hτm, taken on the surface of metal samples as a function of the rise time of a single half-sine current pulse in a linear current-carrying wire. The measurements were made to determine how this component depends on the thickness of aluminium samples. Time-resolution ranges for the electric and magnetic properties and for continuity defects along the sample depth were established. Empirical formulae for the dependence of Hτm on sample thickness were derived, and their relation to the effective depth of magnetic field penetration into the metal was found.

  5. Magnetic field models and their application in optimal magnetic divertor design

    Energy Technology Data Exchange (ETDEWEB)

    Blommaert, M.; Reiter, D. [Institute of Energy and Climate Research (IEK-4), FZ Juelich GmbH, Juelich (Germany); Baelmans, M. [KU Leuven, Department of Mechanical Engineering, Leuven (Belgium); Heumann, H. [TEAM CASTOR, INRIA Sophia Antipolis (France); Marandet, Y.; Bufferand, H. [Aix-Marseille Universite, CNRS, PIIM, Marseille (France); Gauger, N.R. [TU Kaiserslautern, Chair for Scientific Computing, Kaiserslautern (Germany)

    2016-08-15

    In recent automated design studies, optimal design methods were introduced to successfully reduce the often excessive heat loads that threaten the divertor target surface. To this end, divertor coils were controlled to improve the magnetic configuration. The divertor performance was then evaluated using a plasma edge transport code and a 'vacuum approach' for magnetic field perturbations. Recent integration of a free boundary equilibrium (FBE) solver makes it possible to assess the validity of the vacuum approach. It is found that the absence of plasma response currents significantly limits the accuracy of the vacuum approach. Therefore, the optimal magnetic divertor design procedure is extended to incorporate full FBE solutions. The novel procedure is applied to obtain first results for the new WEST (Tungsten Environment in Steady-state Tokamak) divertor currently under construction in the Tore Supra tokamak at CEA (Commissariat a l'Energie Atomique, France). The sensitivities and the related divertor optimization paths are strongly affected by the extension of the magnetic model. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  6. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, which is referred to as the 'failure probability function' (FPF). It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology.
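
    The core FPF idea admits a compact sketch: one Monte Carlo reliability run at a reference design is reused, and its samples are reweighted to express the failure probability as an explicit function of the design variable. The Gaussian input and linear limit state below are illustrative assumptions.

    ```python
    # Sketch: failure probability function via importance-sampling reweighting
    # of a single Monte Carlo run at a reference design.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    mu0, sigma = 3.0, 1.0                       # reference design (input mean)
    x = rng.normal(mu0, sigma, 200_000)         # one reliability analysis
    fails = x > 5.0                             # limit state g(x) = 5 - x < 0

    def pf(mu):
        """FPF: weighted sum of the *same* samples for a new design mu."""
        w = norm.pdf(x, mu, sigma) / norm.pdf(x, mu0, sigma)
        return np.mean(fails * w)

    for mu in [2.5, 3.0, 3.5]:
        print(f"mu={mu}: Pf≈{pf(mu):.2e}  (exact {norm.sf(5.0, mu, sigma):.2e})")
    ```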

  7. 4D Lung Reconstruction with Phase Optimization

    DEFF Research Database (Denmark)

    Lyksborg, Mark; Paulsen, Rasmus; Brink, Carsten

    2009-01-01

    This paper investigates and demonstrates a 4D lung CT reconstruction/registration method which results in a complete volumetric model of the lung that deforms according to a respiratory motion field. The motion field is estimated iteratively between all available slice samples and a reference volume which is updated on the fly. The method has two parts; the second part aims to correct wrong phase information by employing another iterative optimizer. This two-part iterative optimization allows for complete reconstruction at any phase, and it will be demonstrated that it is better than using an optimization which does not correct for phase errors. Knowing how the lung and any tumors located within it deform is relevant in planning the treatment of lung cancer.

  8. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    Science.gov (United States)

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
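
    The ratio-reweighting idea can be sketched with a simulated field in which cross-pollination tracks an auxiliary gene-flow model output; the field model and sample sizes are illustrative assumptions, not the paper's data.

    ```python
    # Sketch: random sampling vs. random sampling + ratio reweighting with an
    # auxiliary variable whose field-wide mean is known (the gene-flow model output).
    import numpy as np

    rng = np.random.default_rng(3)
    n_loc = 2000
    aux = rng.gamma(0.5, 2.0, n_loc)                  # gene-flow model output per location
    true_rate = 0.002 * aux / aux.mean()              # transgene rate tracks the model
    field_mean = true_rate.mean()

    def estimate(n):
        idx = rng.choice(n_loc, n, replace=False)
        y = rng.binomial(500, true_rate[idx]) / 500   # observed rate, 500 grains/location
        plain = y.mean()
        ratio = y.mean() / aux[idx].mean() * aux.mean()   # ratio reweighting
        return plain, ratio

    reps = np.array([estimate(30) for _ in range(2000)])
    for name, col in zip(["random", "ratio"], reps.T):
        print(f"{name:7s} rmse = {np.sqrt(np.mean((col - field_mean) ** 2)):.2e}")
    ```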

  9. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    Directory of Open Access Journals (Sweden)

    Arnaud Chapon

    Full Text Available Searching for low-energy β contamination in industrial environments requires Liquid Scintillation Counting. This indirect measurement method demands careful control of every step from sampling to the measurement itself. In this paper, we therefore focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters and the definition of energy windows by maximization of a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters; nevertheless, except for a few PerkinElmer-specific parameters, most of the results presented here can be extended to other counters. Keywords: Liquid Scintillation Counting (LSC), PerkinElmer, Tri-Carb, Smear, Swipe
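
    The window optimization step can be sketched as a scan over candidate energy windows that keeps the one maximizing the standard LSC figure of merit FOM = E²/B (E = counting efficiency in percent, B = background rate in the window). The synthetic spectra below are illustrative assumptions, not Tri-Carb data.

    ```python
    # Sketch: brute-force search for the energy window maximizing FOM = E^2 / B.
    import numpy as np

    channels = np.arange(0, 200)
    source = np.exp(-0.5 * ((channels - 40) / 15.0) ** 2)        # beta-like spectrum
    source /= source.sum()
    background = np.full(200, 1.0 / 200)                          # flat background shape
    bkg_cpm = 10.0                                                # total background rate

    best = None
    for lo in range(0, 150, 5):
        for hi in range(lo + 10, 200, 5):
            eff = 100.0 * source[lo:hi].sum()                     # efficiency in %
            b = bkg_cpm * background[lo:hi].sum()                 # background cpm in window
            fom = eff ** 2 / b
            if best is None or fom > best[0]:
                best = (fom, lo, hi)

    fom, lo, hi = best
    print(f"optimal window: channels {lo}-{hi}, FOM = {fom:.0f}")
    ```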

  10. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external or internal coefficient has a negative influence on the sampling level, that the changing rate of the potential market has no significant influence, and that repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, which yields a two-stage method to estimate the impact of the relevant parameters when their values are known only inaccurately and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
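
    The central trade-off can be sketched with a discrete Bass-type diffusion in which free samples seed the adopter pool and the sampling level is chosen to maximize profit; the coefficients, horizon, and profit form are illustrative assumptions, not the paper's calibrated model.

    ```python
    # Sketch: free samples seed a discrete Bass-type diffusion (external coefficient
    # p, internal coefficient q); scan the sampling level for the best profit.
    import numpy as np

    def profit(s, p=0.01, q=0.3, m=1.0, T=12, margin=100.0, unit_cost=8.0):
        """s = fraction of the potential market m given free samples at t = 0."""
        adopters = s * m
        for _ in range(T):
            adopters += (p + q * adopters / m) * (m - adopters)
        paying = adopters - s * m                    # sampled customers did not pay
        return margin * paying - unit_cost * s * m

    levels = np.linspace(0.0, 0.3, 61)
    best = levels[int(np.argmax([profit(s) for s in levels]))]
    print(f"profit-maximizing sampling level ≈ {best:.3f}")
    ```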

  11. Field sampling for monitoring migration and defining the areal extent of chemical contamination

    International Nuclear Information System (INIS)

    Thomas, J.M.; Skalski, J.R.; Eberhardt, L.L.; Simmons, M.A.

    1984-11-01

    Initial research on compositing, field designs, and site mapping oriented toward detecting spills and migration at commercial low-level radioactive or chemical waste sites is summarized. Results indicate that the significance test developed to detect samples containing high levels of contamination when they are mixed with several other samples below detectable limits (composites) will be highly effective with large sample sizes when contaminant levels frequently or greatly exceed a maximum acceptable level. These conditions of frequent and high contaminant levels are most likely to occur in regions of a commercial waste site where the priors (previous knowledge) about a spill or migration are highest. Conversely, initial investigations of Bayes sampling strategies suggest that field sampling efforts should be inversely proportional to the priors (expressed as probabilities) for the occurrence of contamination.
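
    The compositing trade-off described above can be sketched directly: mixing k field samples dilutes a single contaminated sample by 1/k, so the usable composite size is bounded by the ratio of the contamination level to the measurement detection limit. The numbers are illustrative assumptions.

    ```python
    # Sketch: largest composite size that still flags one "hot" sample,
    # given a hypothetical detection limit of 1.0 (same units as the level).
    import numpy as np

    def max_composite_size(hot_level, detection_limit):
        """Largest k for which one hot aliquot remains detectable in a composite."""
        return int(np.floor(hot_level / detection_limit))

    for hot in [5.0, 20.0, 100.0]:
        print(f"hot level {hot:5.1f} -> composite up to {max_composite_size(hot, 1.0)} samples")
    ```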

  12. Design of the Precipitation Process for Ni-Al Alloys with Optimal Mechanical Properties: A Phase-Field Study

    Science.gov (United States)

    Ta, Na; Zhang, Lijun; Du, Yong

    2014-04-01

    An attempt was made in the present work to design the heat treatment schedule for binary Ni-Al alloys with optimal mechanical properties. A series of quantitative three-dimensional (3-D) phase-field simulations of microstructure evolution in Ni-Al alloys during the precipitation process were first performed using the MICRESS (MICRostructure Evolution Simulation Software) package, developed in the formalism of the multi-phase field model. The coupling to CALPHAD (CALculation of PHAse Diagram) thermodynamic and atomic mobility databases was realized via the TQ interface. Moreover, temperature-dependent lattice misfits and elastic constants were utilized for the simulations. The effect of alloy composition and aging temperature on microstructure evolution was extensively studied with the aid of statistical analysis. After that, an evaluation function was proposed for identifying the optimal heat treatment schedule, choosing the phase fraction, grain size, and shape factor of the γ' precipitate as the evaluation indicators. Based on 50 groups of phase-field-simulated and experimental microstructure information, as well as the proposed evaluation function, the optimal alloy composition, aging temperature, and aging time for a binary Ni-Al alloy with optimal mechanical properties were finally chosen. The successful application to the present Ni-Al alloys indicates that it is possible to design the optimal alloy composition and heat treatment for other binary and even multicomponent alloys based on the evaluation function and sufficient microstructure information. Additionally, the combination of the present method and key experiments can definitely accelerate material design and improve its efficiency and accuracy.
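
    The evaluation-function step can be sketched as a weighted score over normalized indicators (γ' phase fraction, grain size, shape factor) used to rank simulated microstructures; the weights, targets, and candidate values below are illustrative assumptions, not those of the paper.

    ```python
    # Sketch: rank candidate (composition, aging T, aging time) schedules by a
    # weighted evaluation function over three hypothetical microstructure indicators.
    import numpy as np

    # values: (gamma' phase fraction, mean grain size in nm, shape factor, 1 = ideal)
    candidates = {
        ("Ni-13Al", 1073, 4.0): (0.42, 35.0, 0.93),
        ("Ni-13Al", 1123, 2.0): (0.38, 60.0, 0.88),
        ("Ni-14Al", 1073, 4.0): (0.47, 42.0, 0.90),
    }

    def score(frac, size, shape, w=(0.4, 0.3, 0.3), target_size=40.0):
        s_size = np.exp(-abs(size - target_size) / target_size)  # closeness to target
        return w[0] * frac + w[1] * s_size + w[2] * shape

    best = max(candidates, key=lambda k: score(*candidates[k]))
    print("best (alloy, T[K], hours):", best)
    ```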

  13. ROXIE: Routine for the optimization of magnet X-sections, inverse field calculation and coil end design. Proceedings

    International Nuclear Information System (INIS)

    Russenschuck, S.

    1999-01-01

    The Large Hadron Collider (LHC) will provide proton-proton collisions with a center-of-mass energy of 14 TeV which requires high field superconducting magnets to guide the counter-rotating beams in the existing LEP tunnel with a circumference of about 27 km. The LHC magnet system consists of 1232 superconducting dipoles and 386 main quadrupoles together with about 20 different types of magnets for insertions and correction. The design and optimization of these magnets is dominated by the requirement of an extremely uniform field which is mainly defined by the layout of the superconducting coils. The program package ROXIE (Routine for the Optimization of magnet X-sections, Inverse field calculation and coil End design) has been developed for the design and optimization of the coil geometries in two and three dimensions. Recently it has been extended in a collaboration with the University of Graz, Austria, to the calculation of saturation induced effects using a reduced vector-potential FEM formulation. With the University of Stuttgart, Germany, a collaboration exists for the application of the BEM-FEM coupling method for the 2D and 3D field calculation. ROXIE now also features a TCL-TK user interface. The growing number of ROXIE users inside and outside CERN gave rise to the idea of organizing the 'First International ROXIE Users Meeting and Workshop' at CERN, March 16-18, 1998, which brought together about 50 researchers in the field. This report contains the contributions to the workshop, describes the features of the program and the mathematical optimization techniques applied, and gives examples of the recent design work carried out. It also gives the theoretical background for the field computation methods and serves as a handbook for the installation and application of the program. (orig.)

  14. Transforming data into decisions to optimize the recovery of the Saih Rawl Field in Oman

    Energy Technology Data Exchange (ETDEWEB)

    Dozier, G C [Society of Petroleum Engineers, Dubai (United Arab Emirates); [Schlumberger Oilfield Services, Dubai (United Arab Emirates); Giacon, P [Society of Petroleum Engineers, Dubai (United Arab Emirates); [Petroleum Development of Oman (Oman)

    2006-07-01

    The Saih Rawl field of Oman has been producing for more than 5 years from the Barik and Miqrat Formations. Well productivity depends greatly on the effectiveness of hydraulic fracturing and other operating practices. Productivity is further complicated by the changing mechanical and reservoir properties related to depletion and intralayer communication. In this study, a systematic approach was used by a team of operators and service companies to optimize well production within a one-year time period. The approach involved a dynamic integration of historical data, new information technologies and engineering diagnostics to identify the key parameters that influence productivity and to optimize performance according to current analyses. In particular, historical pressure trends by unit were incorporated with theoretical assumptions validated by indirect field evidence. Onsite decision-making resulted in effective placement of fracture treatments. The approach has produced some of the highest producing wells in the field's history. It was concluded that optimizing and maximizing well productivity requires multidisciplinary inputs managed through a structured workflow that covers not only the classical simulation design inputs but the entire process from design to execution, with particular emphasis on cleanup practices and induced fluid damage. 6 refs., 2 tabs., 25 figs.

  15. BRAIN Journal - Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    OpenAIRE

    Ahmet Demir; Utku Kose

    2016-01-01

    ABSTRACT In fields that require finding the most appropriate value, optimization has become a vital approach for producing effective solutions. With the use of optimization techniques, many different fields in modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Sc...

  16. Study on poloidal field coil optimization and equilibrium control of ITER

    International Nuclear Information System (INIS)

    Shinya, Kichiro; Sugihara, Masayoshi; Nishio, Satoshi

    1989-03-01

    The purpose of this report is to present the general features of the poloidal field coil optimization for the ITER plasma, a flexibility analysis for various plasma options, and some other aspects of the equilibrium control which are required for understanding plasma operation in more detail. The double null divertor plasma was selected as the main object of the optimization. The single null divertor plasma was assumed to be an alternative, because a single null divertor plasma can be operated within the total stored energy and ampere-turns of the double null divertor plasma, if it is shaped appropriately. The plasma parameters used in the present analysis are mainly those employed in the preliminary study by the Basic Device Engineering group of the ITER design team. Most of the optimization study, however, utilizes the parameters proposed for discussion by the Japan team before starting joint design work at Garching. The plasma shape, and the solenoid coil shape and size, which maximize the available flux swing with reasonable amounts of stored energy and ampere-turns, are discussed. The location and minimum number of the poloidal field coils with adequate shaping controllability were also discussed for various plasma options. Other aspects of the equilibrium control, such as separatrix swing, moving null point operation during plasma heating and the possible range of li, were evaluated, and a guideline for the engineering design was proposed. Finally, the fusion power output was estimated for different pressure profiles and combinations of the average density and temperature, and the magnetic quantities of the scrape-off region were calculated to be available for future divertor analysis. (author)

  17. ROXIE the Routine for the Optimization of Magnet X-sections, Inverse Field Computation and Coil End Design

    CERN Document Server

    Russenschuck, Stephan

    1999-01-01

    The ROXIE software program package has been developed for the design of the superconducting magnets for the LHC at CERN. The software is used as an approach towards the integrated design of superconducting magnets including feature-based coil geometry creation, conceptual design using genetic algorithms, optimization of the coil and iron cross-sections using a reduced vector-potential formulation, 3-D coil end geometry and field optimization using deterministic vector-optimization techniques, tolerance analysis, production of drawings by means of a DXF interface, end-spacer design with interfaces to CAD-CAM for the CNC machining of these pieces, and the tracing of manufacturing errors using field quality measurements. This paper gives an overview of the methods applied in the ROXIE program. (9 refs).

  18. [Matrix effect and application of field-amplified sample injection in the analysis of four tetracyclines in waters by capillary electrophoresis].

    Science.gov (United States)

    2014-08-01

    The capabilities of two chromatographic techniques, capillary electrophoresis (CE) and high performance liquid chromatography (HPLC), were compared for the analysis of four tetracyclines (tetracycline, chlorotetracycline, oxytetracycline and doxycycline). The pH and the concentration of the background electrolyte (BGE) were optimized for the analysis of the standard mixture sample; meanwhile, the effects of separation voltage and water matrix (pH value and hardness) were investigated. In hydrodynamic injection (HDI) mode, good quantitative linearity and baseline separation within 9.0 min were obtained for the four tetracyclines under the optimal conditions; the analytical time was about half that of HPLC. The limits of detection (LODs) were in the range of 0.28-0.62 mg/L, and the relative standard deviations (RSDs) (n = 6) of migration time and peak area were 0.42%-0.56% and 2.24%-2.95%, respectively. The recoveries obtained for spiked tap water and fishpond water were in the ranges of 96.3%-107.2% and 87.1%-105.2%, respectively. In addition, the stacking method, field-amplified sample injection (FASI), was employed to improve the sensitivity, and the LOD was lowered to the range of 17.8-35.5 μg/L. With FASI stacking, the RSDs (n = 6) of migration time and peak area were 0.85%-0.95% and 1.69%-3.43%, respectively. Due to the advantages of simple sample pretreatment and fast speed, CE is promising for the analysis of these antibiotics in environmental water.

  19. Pre-Mission Input Requirements to Enable Successful Sample Collection by a Remote Field/EVA Team

    Science.gov (United States)

    Cohen, B. A.; Young, K. E.; Lim, D. S.

    2015-01-01

    This paper is intended to evaluate the sample collection process with respect to sample characterization and decision making. In some cases, it may be sufficient to know whether a given outcrop or hand sample is the same as or different from previous sampling localities or samples. In other cases, it may be important to have more in-depth characterization of the sample, such as basic composition, mineralogy, and petrology, in order to effectively identify the best sample. Contextual field observations, in situ/handheld analysis, and backroom evaluation may all play a role in understanding field lithologies and their importance for return. For example, whether a rock is a breccia or a clast-laden impact melt may be difficult to determine from a single sample, but becomes clear as exploration of a field site puts it into context. FINESSE (Field Investigations to Enable Solar System Science and Exploration) is a new team activity focused on a science and exploration field-based research program aimed at generating strategic knowledge in preparation for the human and robotic exploration of the Moon, near-Earth asteroids (NEAs) and Phobos and Deimos. We used the FINESSE field excursion to the West Clearwater Lake Impact structure (WCIS) as an opportunity to test factors related to sampling decisions. In contrast to other technology-driven NASA analog studies, the FINESSE WCIS activity is science-focused, and moreover, is sampling-focused, with the explicit intent to return the best samples for geochronology studies in the laboratory. This specific objective effectively reduces the number of variables in the goals of the field test and enables a more controlled investigation of the role of the crewmember in selecting samples. We formulated one hypothesis to test: that providing details regarding the analytical fate of the samples (e.g. geochronology, XRF/XRD, etc.) to the crew prior to their traverse will result in samples that are more likely to meet specific analytical

  1. Optimized extraction of polysaccharides from corn silk by pulsed electric field and response surface quadratic design.

    Science.gov (United States)

    Zhao, Wenzhu; Yu, Zhipeng; Liu, Jingbo; Yu, Yiding; Yin, Yongguang; Lin, Songyi; Chen, Feng

    2011-09-01

    Corn silk is a traditional Chinese herbal medicine, which has been widely used for the treatment of some diseases. In this study the effects of a pulsed electric field on the extraction of polysaccharides from corn silk were investigated. Polysaccharides in corn silk were extracted by pulsed electric field, and the extraction was optimized by response surface methodology (RSM) based on a Box-Behnken design (BBD). Three independent variables, namely electric field intensity (kV cm−1), ratio of liquid to raw material and pulse duration (µs), were investigated. The experimental data were fitted to a second-order polynomial equation and also profiled into the corresponding 3-D contour plots. Optimal extraction conditions were as follows: electric field intensity 30 kV cm−1, ratio of liquid to raw material 50, and pulse duration 6 µs. Under these conditions, the experimental yield of extracted polysaccharides was 7.31% ± 0.15%, matching well with the predicted value. The results showed that a pulsed electric field could be applied to extract value-added products from food and/or agricultural matrices. Copyright © 2011 Society of Chemical Industry.
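
    The RSM step can be sketched by fitting a second-order polynomial in the three coded factors and locating the maximum inside the design region; the synthetic response data below stand in for actual BBD runs and are illustrative assumptions.

    ```python
    # Sketch: second-order response surface fit and in-region optimum search.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    X = rng.uniform(-1, 1, (15, 3))                  # coded factor levels (15 runs)
    true = lambda x: 7.3 - 1.2 * (x[..., 0] - 0.8) ** 2 \
                         - 0.9 * (x[..., 1] - 0.5) ** 2 \
                         - 0.7 * (x[..., 2] - 0.3) ** 2
    y = true(X) + rng.normal(0, 0.05, 15)            # measured yields (%)

    def design_matrix(X):
        x1, x2, x3 = X.T
        return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                x1 * x2, x1 * x3, x2 * x3,
                                x1 ** 2, x2 ** 2, x3 ** 2])

    beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
    pred = lambda x: (design_matrix(x[None, :]) @ beta)[0]
    res = minimize(lambda x: -pred(x), np.zeros(3), bounds=[(-1, 1)] * 3)
    print("coded optimum:", np.round(res.x, 2), " predicted yield:", round(-res.fun, 2))
    ```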

  2. Optimization of magnetic field system for glass spherical tokamak GLAST-III

    International Nuclear Information System (INIS)

    Ahmad, Zahoor; Ahmad, S; Naveed, M A; Deeba, F; Javeed, M Aqib; Batool, S; Hussain, S; Vorobyov, G M

    2017-01-01

    GLAST-III (Glass Spherical Tokamak) is a spherical tokamak with aspect ratio A = 2. Its magnetic field system was mapped with a Hall probe to optimize the GLAST-III tokamak for plasma initiation. The magnetic field from the toroidal coils shows a 1/R dependence, which is typical of spherical tokamaks. The toroidal field (TF) coils can produce an 875 Gauss field, an essential requirement for electron cyclotron resonance assisted discharge. The central solenoid (CS) of GLAST-III is an air-core solenoid and requires compensation coils to reduce unnecessary magnetic flux inside the vessel region. The vertical component of the magnetic field from the CS in the vacuum vessel region is reduced to 1.15 Gauss kA−1 with the help of a differential loop. The CS of GLAST can produce a flux change of up to 68 mVs. Theoretical and experimental results are compared for the current waveform of the TF coils using a combination of fast and slow capacitor banks. The magnetic field produced by the poloidal field (PF) coils is also compared with theoretically predicted values. The calculated results are found to be in good agreement with the experimental measurements, validating the magnetic field measurements. A tokamak discharge with 2 kA plasma current and 1 ms pulse length was successfully produced using different sets of coils. (paper)

  3. Remotely detected high-field MRI of porous samples

    Science.gov (United States)

    Seeley, Juliette A.; Han, Song-I.; Pines, Alexander

    2004-04-01

    Remote detection of NMR is a novel technique in which an NMR-active sensor surveys an environment of interest and retains memory of that environment to be recovered at a later time in a different location. The NMR or MRI information about the sensor nucleus is encoded and stored as spin polarization at the first location and subsequently moved to a different physical location for optimized detection. A dedicated probe incorporating two separate radio frequency (RF) circuits was built for this purpose. The encoding solenoid coil was large enough to fit around the bulky sample matrix, while the smaller detection solenoid coil had not only a higher quality factor, but also an enhanced filling factor since the coil volume comprised purely the sensor nuclei. We obtained two-dimensional (2D) void space images of two model porous samples with resolution less than 1.4 mm². The remotely reconstructed images demonstrate the ability to determine fine structure with image quality superior to their directly detected counterparts and show the great potential of NMR remote detection for imaging applications that suffer from low sensitivity due to low concentrations and filling factor.

  4. Astronauts Armstrong and Aldrin study rock samples during field trip

    Science.gov (United States)

    1969-01-01

    Astronaut Neil Armstrong, commander of the Apollo 11 lunar landing mission, and Astronaut Edwin Aldrin, Lunar module pilot for Apollo 11, study rock samples during a geological field trip to the Quitman Mountains area near the Fort Quitman ruins in far west Texas.

  5. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Tijssen, Rob H.N. [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Senneville, Baudouin D. de [Imaging Division, University Medical Center Utrecht, Utrecht (Netherlands); L' Institut de Mathématiques de Bordeaux, Unité Mixte de Recherche 5251, Centre National de la Recherche Scientifique/University of Bordeaux, Bordeaux (France); Heerkens, Hanne D.; Vulpen, Marco van; Lagendijk, Jan J.W.; Berg, Cornelis A.T. van den [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands)

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  6. Constrained optimization for position calibration of an NMR field camera.

    Science.gov (United States)

    Chang, Paul; Nassirpour, Sahar; Eschelbach, Martin; Scheffler, Klaus; Henning, Anke

    2018-07-01

    Knowledge of the positions of field probes in an NMR field camera is necessary for monitoring the B0 field. The typical method of estimating these positions is to switch gradients of known strengths and calculate the positions from the phases of the FIDs. We investigated improving the accuracy of the probe position estimates and analyzed the effect of inaccurate estimates on field monitoring. The field probe positions were estimated by 1) assuming ideal gradient fields, 2) using measured gradient fields (including nonlinearities), and 3) using measured gradient fields with relative position constraints. The fields measured with the NMR field camera were compared to fields acquired using a dual-echo gradient recalled echo B0 mapping sequence. Comparisons were done for shim fields from second- to fourth-order shim terms. The position estimation was most accurate when relative position constraints were used in conjunction with measured (nonlinear) gradient fields. The effect of more accurate position estimates was seen when compared to fields measured using a B0 mapping sequence (up to 10%-15% more accurate for some shim fields). The models acquired from the field camera are sensitive to noise due to the low number of spatial sample points. Position estimation of field probes in an NMR camera can be improved using relative position constraints and nonlinear gradient fields. Magn Reson Med 80:380-390, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
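
    The basic position estimate can be sketched as a small linear solve: switching three known gradients makes each probe's frequency shift proportional to G·r. A full calibration as in the paper would replace the ideal linear fields with measured (nonlinear) field maps and add relative-position constraints; the gradient strengths and noise level below are illustrative assumptions.

    ```python
    # Sketch: probe position from gradient-induced frequency shifts under
    # ideal (linear) gradient fields.
    import numpy as np

    GAMMA_BAR = 42.577e6                     # 1H gyromagnetic ratio (Hz/T)
    G = np.diag([10e-3, 10e-3, 10e-3])       # applied gradients (T/m), rows = directions

    r_true = np.array([0.012, -0.034, 0.051])                  # probe position (m)
    rng = np.random.default_rng(6)
    df = GAMMA_BAR * G @ r_true + rng.normal(0, 5.0, 3)        # measured shifts (Hz)

    r_est = np.linalg.solve(GAMMA_BAR * G, df)                 # invert the linear model
    print("estimated position (mm):", np.round(1e3 * r_est, 2))
    ```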

  7. The Mechanical Design Optimization of a High Field HTS Solenoid

    Energy Technology Data Exchange (ETDEWEB)

    Lalitha, SL; Gupta, RC

    2015-06-01

    This paper describes the conceptual design optimization of a large aperture, high field (24 T at 4 K) solenoid for a 1.7 MJ superconducting magnetic energy storage device. The magnet is designed to be built entirely of second generation (2G) high temperature superconductor tape with excellent electrical and mechanical properties at cryogenic temperatures. The critical parameters that govern the magnet performance are examined in detail through a multiphysics approach using ANSYS software. The analysis results formed the basis for the performance specification as well as the construction of the magnet.

  8. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    Science.gov (United States)

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore, the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to 3 major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold in that: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shorten acquisition time and improve the quality of FP reconstructions. It may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
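
    A non-uniform Fourier sampling pattern of the kind described can be sketched as concentric LED rings with geometrically growing radii, densest at low spatial frequencies where most biological signal energy sits; the ring counts and radii are illustrative assumptions, not the paper's exact layout.

    ```python
    # Sketch: LED positions denser near the optical axis (low spatial frequency).
    import numpy as np

    def nonuniform_led_pattern(n_rings=5, leds_first_ring=6, growth=1.6):
        pts = [(0.0, 0.0)]                         # central (brightfield) LED
        r, n = 1.0, leds_first_ring
        for _ in range(n_rings):
            ang = 2 * np.pi * np.arange(n) / n
            pts += [(r * np.cos(a), r * np.sin(a)) for a in ang]
            r *= growth                            # radii grow geometrically
            n += 4                                 # a few more LEDs per outer ring
        return np.array(pts)

    pattern = nonuniform_led_pattern()
    print(f"{len(pattern)} LEDs instead of a full uniform grid")
    ```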

  9. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, the extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction as in many existing studies, the results indicated that antibiotics with low pKa values were extracted more efficiently under acidic conditions, whereas antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  10. Design, analysis, and interpretation of field quality-control data for water-sampling projects

    Science.gov (United States)

    Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.

    2015-01-01

    The process of obtaining and analyzing water samples from the environment includes a number of steps that can affect the reported result. The equipment used to collect and filter samples, the bottles used for specific subsamples, any added preservatives, sample storage in the field, and shipment to the laboratory have the potential to affect how accurately samples represent the environment from which they were collected. During the early 1990s, the U.S. Geological Survey implemented policies to include the routine collection of quality-control samples in order to evaluate these effects and to ensure that water-quality data were adequately representing environmental conditions. Since that time, the U.S. Geological Survey Office of Water Quality has provided training in how to design effective field quality-control sampling programs and how to evaluate the resultant quality-control data. This report documents that training material and provides a reference for methods used to analyze quality-control data.

  11. Multiobjective optimization model of intersection signal timing considering emissions based on field data: A case study of Beijing.

    Science.gov (United States)

    Kou, Weibin; Chen, Xumei; Yu, Lei; Gong, Huibo

    2018-04-18

    Most existing signal timing models aim to minimize the total delay and stops at intersections, without considering environmental factors. This paper analyzes the trade-off between vehicle emissions and traffic efficiency on the basis of field data. First, considering the different operating modes of cruising, acceleration, deceleration, and idling, field emission and Global Positioning System (GPS) data were collected to estimate emission rates for heavy-duty and light-duty vehicles. Second, a multiobjective signal timing optimization model was established based on a genetic algorithm to minimize delay, stops, and emissions. Finally, a case study was conducted in Beijing. Nine scenarios were designed considering different weights of emissions and traffic efficiency. The results, compared with those using the Highway Capacity Manual (HCM) 2010, show that signal timing optimized by the model proposed in this paper can decrease vehicle delays and emissions more significantly. The optimization model can be applied in different cities, which provides support for eco-signal design and development.
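
    The weighted-sum trade-off can be sketched for a two-phase signal using Webster's uniform-delay term and a simple stop-based emission proxy, with a stock global optimizer standing in for the paper's genetic algorithm; the flows, weights, and cycle parameters are illustrative assumptions.

    ```python
    # Sketch: pick the green split minimizing w1*delay + w2*emission_proxy.
    import numpy as np
    from scipy.optimize import differential_evolution

    C = 90.0                                  # cycle length (s), 10 s total lost time
    flows = np.array([0.35, 0.25])            # flow ratios y_i = q_i / s_i per phase

    def cost(x, w_delay=1.0, w_emis=20.0):
        g = np.array([x[0], 1.0 - x[0]]) * (C - 10.0)       # effective green times
        lam = g / C
        # Webster uniform-delay term per phase, weighted by its flow ratio
        delay = np.sum(flows * 0.5 * C * (1 - lam) ** 2 / (1 - flows))
        stops = np.sum(flows * (1 - lam))     # stopping vehicles drive excess emissions
        return w_delay * delay + w_emis * stops

    res = differential_evolution(cost, bounds=[(0.3, 0.7)], seed=7)
    print(f"optimal green share for phase 1: {res.x[0]:.2f}")
    ```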

  12. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    Full Text Available In fields that require finding the most appropriate value, optimization has become a vital approach for producing effective solutions. With the use of optimization techniques, many different fields in modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an important role in providing software-related techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the employment of classical optimization solutions and Artificial Intelligence solutions, enabling readers to form an idea about the potential of intelligent optimization techniques. To this end, two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus 11th Edition, and the obtained results have been compared with classical optimization solutions.

  13. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
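
    The two initial designs compared above differ only in where the point falls within each one-per-stratum interval, as the following sketch shows.

    ```python
    # Sketch: random LHS (jittered within strata) vs. midpoint LHS (stratum centres).
    import numpy as np

    def lhs(n, dim, midpoint=False, rng=None):
        rng = rng or np.random.default_rng()
        u = 0.5 * np.ones((n, dim)) if midpoint else rng.random((n, dim))
        samples = np.empty((n, dim))
        for j in range(dim):
            perm = rng.permutation(n)                # random stratum order per dimension
            samples[:, j] = (perm + u[:, j]) / n     # one point per 1/n stratum
        return samples

    rng = np.random.default_rng(8)
    print(lhs(5, 2, midpoint=True, rng=rng))   # stratum centres: 0.1, 0.3, ...
    print(lhs(5, 2, midpoint=False, rng=rng))  # jittered within strata
    ```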

  14. Multiple fields may offer better esophagus sparing without increased probability of lung toxicity in optimized IMRT of lung tumors

    International Nuclear Information System (INIS)

    Chapet, Olivier; Fraass, Benedick A.; Haken, Randall K. ten

    2006-01-01

    Purpose: To evaluate whether increasing numbers of intensity-modulated radiation therapy (IMRT) fields enhance lung-tumor dose without additional predicted toxicity for difficult planning geometries. Methods and Materials: Data from 8 previous three dimensional conformal radiation therapy (3D-CRT) patients with tumors located in various regions of each lung, but with planning target volumes (PTVs) overlapping part of the esophagus, were used as input. Four optimized-beamlet IMRT plans (1 plan that used the 3D-CRT beam arrangement and 3 plans with 3, 5, or 7 axial, but predominantly one-sided, fields) were compared. For IMRT, the equivalent uniform dose (EUD) in the whole PTV was optimized simultaneously with that in a reduced PTV exclusive of the esophagus. Normal-tissue complication probability-based costlets were used for the esophagus, heart, and lung. Results: Overall, IMRT plans (optimized by use of EUD to judiciously allow relaxed PTV dose homogeneity) result in better minimum PTV isodose surface coverage and better average EUD values than does conformal planning; dose generally increases with the number of fields. Even 7-field plans do not significantly alter normal-lung mean-dose values or lung volumes that receive more than 13, 20, or 30 Gy. Conclusion: Optimized many-field IMRT plans can lead to escalated lung-tumor dose in the special case of esophagus overlapping PTV, without unacceptable alteration in the dose distribution to normal lung

  15. Optimization of sampling parameters for standardized exhaled breath sampling.

    Science.gov (United States)

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

    The lack of standardization of breath sampling is a major contributing factor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. Increasing the sample volume improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between the whole and lower airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes the contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are a 500 ml sample volume

  16. Optimism and self-esteem are related to sleep. Results from a large community-based sample.

    Science.gov (United States)

    Lemola, Sakari; Räikkönen, Katri; Gomez, Veronica; Allemand, Mathias

    2013-12-01

    There is evidence that positive personality characteristics, such as optimism and self-esteem, are important for health. Less is known about possible determinants of positive personality characteristics. To test the relationship of optimism and self-esteem with insomnia symptoms and sleep duration, sleep parameters, optimism, and self-esteem were assessed by self-report in a community-based sample of 1,805 adults aged between 30 and 84 years in the USA. Moderation of the relation between sleep and positive characteristics by gender and age, as well as potential confounding of the association by depressive disorder, was tested. Individuals with insomnia symptoms scored lower on optimism and self-esteem largely independent of age and sex, controlling for symptoms of depression and sleep duration. Short sleep duration was related to lower optimism and self-esteem when compared to individuals sleeping 7-8 h, controlling for depressive symptoms. Long sleep duration (>9 h) was also related to low optimism and self-esteem independent of age and sex. Good and sufficient sleep is associated with positive personality characteristics. This relationship is independent of the association between poor sleep and depression.

  17. Optimal sample to tracer ratio for isotope dilution mass spectrometry: the polyisotopic case

    International Nuclear Information System (INIS)

    Laszlo, G.; Ridder, P. de; Goldman, A.; Cappis, J.; Bievre, P. de

    1991-01-01

    The Isotope Dilution Mass Spectrometry (IDMS) measurement technique provides a means for determining the unknown amount of various isotopes of an element in a sample solution of known mass. The sample solution is mixed with an auxiliary solution, or tracer, containing a known amount of the same element having the same isotopes but with different relative abundances (isotopic composition), and the induced change in the isotopic composition is measured by isotope mass spectrometry. The technique involves the measurement of the abundance ratio of each isotope to a common reference isotope in the sample solution, in the tracer solution and in the blend of the sample and tracer solutions. These isotope ratio measurements, the known amount of element in the tracer and the known mass of the sample solution are used to calculate the unknown amount of one isotope in the sample solution, and subsequently the unknown amount of the element. The purpose of this paper is to examine the optimization of the ratio of the estimated unknown amount of element in the sample solution to the known amount of element in the tracer solution, in order to minimize the relative uncertainty in the determination of the unknown amount of element.
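
    The optimization target can be sketched via the error magnification factor |d ln Nx / d ln Rb| that propagates blend-ratio measurement error into the unknown amount; its minimum falls at Rb = sqrt(Rx·Ry), the geometric mean of the sample and tracer isotope ratios. The ratio values below are illustrative assumptions.

    ```python
    # Sketch: error magnification of the IDMS blend ratio Rb, for a two-isotope
    # case with sample ratio Rx and tracer ratio Ry (Rx < Rb < Ry).
    import numpy as np

    Rx, Ry = 0.05, 20.0                       # sample and tracer isotope ratios

    def magnification(Rb):
        """|d ln Nx / d ln Rb| for Nx proportional to (Ry - Rb) / (Rb - Rx)."""
        return Rb * (Ry - Rx) / ((Ry - Rb) * (Rb - Rx))

    Rb = np.linspace(0.1, 15.0, 2000)
    best = Rb[np.argmin(magnification(Rb))]
    print(f"optimal blend ratio ≈ {best:.2f}  (geometric mean = {np.sqrt(Rx * Ry):.2f})")
    ```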

  18. On a mean field game optimal control approach modeling fast exit scenarios in human crowds

    KAUST Repository

    Burger, Martin; Di Francesco, Marco; Markowich, Peter A.; Wolfram, Marie Therese

    2013-01-01

    The understanding of fast exit and evacuation situations in crowd motion research has received a lot of scientific interest in recent decades. Security issues in larger facilities, like shopping malls, sports centers, or festivals, necessitate a better understanding of the major driving forces in crowd dynamics. In this paper we present an optimal control approach modeling fast exit scenarios in pedestrian crowds. The model is formulated in the framework of mean field games and based on a parabolic optimal control problem. We consider the case of a large human crowd trying to exit a room as fast as possible. The motion of every pedestrian is determined by minimizing a cost functional which depends on his/her position and velocity, the overall density of people, and the time to exit. This microscopic setup leads, in a mean-field formulation, to a nonlinear macroscopic optimal control problem, which raises challenging questions for the analysis and numerical simulations. We discuss different aspects of the mathematical modeling and illustrate them with various computational results. ©2013 IEEE.
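
    The verbal description above corresponds to a cost functional of the generic mean-field-game form; the following is a template for such exit problems, and the exact functional used in the paper may differ:

        J(x,u) = \int_0^{T_{\mathrm{exit}}} \left( \tfrac{1}{2}\,|u(t)|^2 + f\big(\rho(x(t),t)\big) \right) \mathrm{d}t,

    where u is the pedestrian's control (velocity), \rho the crowd density along the trajectory x(t), and T_{\mathrm{exit}} the time at which the exit is reached; each agent minimizes J subject to \dot{x} = u, while \rho evolves according to the associated continuity equation.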

  19. Preservation of RNA and DNA from mammal samples under field conditions.

    Science.gov (United States)

    Camacho-Sanchez, Miguel; Burraco, Pablo; Gomez-Mestre, Ivan; Leonard, Jennifer A

    2013-07-01

    Ecological and conservation genetics require sampling of organisms in the wild. Appropriate preservation of the collected samples, usually by cryostorage, is key to the quality of the genetic data obtained. Nevertheless, cryopreservation in the field to ensure RNA and DNA stability is not always possible. We compared several nucleic acid preservation solutions appropriate for field sampling and tested them on rat (Rattus rattus) blood, ear and tail tip, liver, brain and muscle. We compared the efficacy of a nucleic acid preservation (NAP) buffer for DNA preservation against 95% ethanol and Longmire buffer, and for RNA preservation against RNAlater (Qiagen) and Longmire buffer, under simulated field conditions. For DNA, the NAP buffer was slightly better than cryopreservation or 95% ethanol, but high molecular weight DNA was preserved in all conditions. The NAP buffer preserved RNA as well as RNAlater. Liver yielded the best RNA and DNA quantity and quality; thus, liver should be the tissue preferentially collected from euthanized animals. We also show that DNA persists in nonpreserved muscle tissue for at least 1 week at ambient temperature, although degradation is noticeable in a matter of hours. When cryopreservation is not possible, the NAP buffer is an economical alternative for RNA preservation at ambient temperature for at least 2 months and DNA preservation for at least 10 months. © 2013 John Wiley & Sons Ltd.

  1. Quantum dynamics manipulation using optimal control theory in the presence of laser field noise

    Science.gov (United States)

    Kumar, Praveen; Malinovskaya, Svetlana A.

    2010-08-01

    We discuss recent advances in optimal control theory (OCT) related to the investigation of the impact of control field noise on the controllability of quantum dynamics. Two numerical methods, the gradient method and the iteration method, receive particular attention. We analyze the problem of designing noisy control fields to maximize the vibrational transition probability in diatomic quantum systems, e.g. the HF and OH molecules. White noise is used as an additive random variable in the amplitude of the control field. It is demonstrated that convergence is faster in the presence of noise and that the population transfer is increased by 0.04% for noise amplitudes that are small compared to the field amplitude.
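
    As a minimal illustration of the gradient method with amplitude noise (a generic two-level sketch with assumed units and parameters, not the HF/OH models of the paper), one can propagate a qubit-like system under a piecewise-constant control field, perturb the field with white noise, and ascend a finite-difference gradient of the transition probability:

        import numpy as np

        rng = np.random.default_rng(5)
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)
        H0 = 0.5 * sz                  # level splitting (hbar = 1, arbitrary units)
        n_t, dt = 100, 0.05

        def transfer_prob(E):
            """Population transferred from state |0> to |1> under field E(t)."""
            psi = np.array([1.0, 0.0], dtype=complex)
            for k in range(n_t):
                psi = psi - 1j * dt * ((H0 + E[k] * sx) @ psi)  # first-order step
                psi /= np.linalg.norm(psi)                      # crude renormalization
            return abs(psi[1])**2

        E, eps, lr, noise_amp = 0.1 * np.ones(n_t), 1e-4, 0.5, 0.01
        for _ in range(100):
            E_noisy = E + noise_amp * rng.standard_normal(n_t)  # additive white noise
            grad = np.array([(transfer_prob(np.where(np.arange(n_t) == k,
                                                     E_noisy + eps, E_noisy))
                              - transfer_prob(E_noisy)) / eps for k in range(n_t)])
            E += lr * grad                                      # gradient ascent
        print("final transfer probability:", transfer_prob(E))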

  2. Sampling general N-body interactions with auxiliary fields

    Science.gov (United States)

    Körber, C.; Berkowitz, E.; Luu, T.

    2017-09-01

    We present a general auxiliary field transformation which generates effective interactions containing all possible N-body contact terms. The strength of the induced terms can be described analytically in terms of general coefficients associated with the transformation and is thus controllable. This transformation provides a novel way of sampling 3- and 4-body (and higher) contact interactions non-perturbatively in lattice quantum Monte Carlo simulations. As a proof of principle, we show that our method reproduces the exact solution for a two-site quantum mechanical problem.
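
    For context, the standard two-body auxiliary-field (Hubbard-Stratonovich) transformation that this work generalizes is the textbook identity (not a formula quoted from the paper), valid for A >= 0:

        e^{\frac{1}{2} A \hat{n}^2} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \mathrm{d}\phi \; e^{-\frac{1}{2}\phi^2 + \sqrt{A}\,\phi\,\hat{n}},

    which trades a quadratic (two-body) interaction for a one-body operator coupled to a fluctuating field \phi; the paper constructs transformations whose expansion induces all possible N-body contact terms with analytically controllable coefficients.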

  3. Optimization of temperature field of tobacco heat shrink machine

    Science.gov (United States)

    Yang, Xudong; Yang, Hai; Sun, Dong; Xu, Mingyang

    2018-06-01

    In an existing tobacco heat shrink machine, the film does not shrink tightly and the temperature is uneven, resulting in poor surface quality of the shrunk film. To solve this problem, the temperature field is simulated and optimized using the k-epsilon turbulence model and the MRF model in Fluent. The simulation results show that installing a mesh screen at the suction inlet of the centrifugal fan increases the suction resistance of the fan and reduces the eddy-current intensity caused by its high-speed rotation, making the internal temperature field of the heat shrink machine more uniform.

  4. Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation

    International Nuclear Information System (INIS)

    Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A.; Bouquerel, Hélène

    2016-01-01

    Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air-to-liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved by scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L⁻¹ and 10% for 10 mBq L⁻¹. While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L⁻¹, a conservative experimental estimate is rather 5 mBq L⁻¹, corresponding to 0.14 fg g⁻¹. The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported. - Highlights: • Radium-226 concentration measured with optimized accumulation in a container. • Radon-222 in air measured precisely with scintillation flasks and long counting times. • Method tested by repetition tests, dilution experiments, and successful blind tests. • Estimated conservative detection limit without pre-concentration is 5 mBq L⁻¹. • Method is portable and cost-effective.
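
    The emanation approach rests on simple ingrowth arithmetic: in a closed container, radon-222 grows toward secular equilibrium with radium-226 as A_Rn(t) = A_Ra(1 - exp(-lambda*t)). A minimal sketch of this calculation (illustrative activity value, not the authors' calibration):

        import numpy as np

        HALF_LIFE_RN222 = 3.8235           # days
        lam = np.log(2) / HALF_LIFE_RN222  # decay constant (1/day)

        def rn222_activity(a_ra226, t_days):
            """Rn-222 activity (mBq) grown in from Ra-226 after t_days of
            accumulation in a closed container, assuming no initial radon."""
            return a_ra226 * (1.0 - np.exp(-lam * t_days))

        a_ra = 100.0  # mBq of Ra-226 in the sample (illustrative)
        for t in (7, 14, 21, 28):
            frac = rn222_activity(a_ra, t) / a_ra
            print(f"{t:2d} d accumulation: {100*frac:5.1f}% of equilibrium")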

  5. Isolation and identification of phytase-producing strains from soil samples and optimization of production parameters

    Directory of Open Access Journals (Sweden)

    Masoud Mohammadi

    2017-09-01

    Discussion and conclusion: A Penicillium sp. isolated from a soil sample near Qazvin was able to produce highly active phytase under optimized environmental conditions, and could be a suitable candidate for commercial production of phytase as a supplement in the poultry feed industry.

  6. Guidance for establishment and implementation of field sample management programs in support of EM environmental sampling and analysis activities

    International Nuclear Information System (INIS)

    1994-01-01

    The role of the National Sample Management Program (NSMP) proposed by the Department of Energy's Office of Environmental Management (EM) is to be a resource for EM programs and for local Field Sample Management Programs (FSMPs). It will be a source of information on sample analysis and data collection within the DOE complex. The purpose of this document is to establish the suggested scope of the FSMP activities to be performed under each Operations Office, list the drivers under which the program will operate, define terms and list references. This guidance will apply only to EM sampling and analysis activities associated with project planning, contracting, laboratory selection, sample collection, sample transportation, laboratory analysis and data management

  7. Intact preservation of environmental samples by freezing under an alternating magnetic field.

    Science.gov (United States)

    Morono, Yuki; Terada, Takeshi; Yamamoto, Yuhji; Xiao, Nan; Hirose, Takehiro; Sugeno, Masaya; Ohwada, Norio; Inagaki, Fumio

    2015-04-01

    The study of environmental samples requires a preservation system that stabilizes the sample structure, including cells and biomolecules. To address this fundamental issue, we tested the cell alive system (CAS)-freezing technique for subseafloor sediment core samples. In the CAS-freezing technique, an alternating magnetic field is applied during the freezing process to produce vibration of water molecules and achieve a stable, super-cooled liquid phase. Upon further cooling, the temperature decreases further, achieving uniform freezing of the sample with minimal ice crystal formation. In this study, samples were preserved using the CAS and conventional freezing techniques at 4, -20, -80 and -196 (liquid nitrogen) °C. After 6 months of storage, microbial cell counts in conventionally frozen samples decreased significantly (down to 10.7% of the initial count), whereas the decrease with CAS-freezing was minimal. When Escherichia coli cells were tested under the same freezing conditions and stored for 2.5 months, CAS-frozen E. coli cells showed higher viability than those frozen under the other conditions. In addition, the alternating magnetic field does not affect the direction of remanent magnetization in sediment core samples, although slight partial demagnetization in intensity due to freezing was observed. Consequently, our data indicate that the CAS technique is highly useful for the preservation of environmental samples. © 2014 Society for Applied Microbiology and John Wiley & Sons Ltd.

  8. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2017-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 17-19, 2016. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, health, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  9. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2015-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 13-15, 2014. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, healthcare, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  10. Effect of sample shape on nonlinear magnetization dynamics under an external magnetic field

    International Nuclear Information System (INIS)

    Vagin, Dmitry V.; Polyakov, Oleg P.

    2008-01-01

    The effect of sample shape on the nonlinear collective dynamics of magnetic moments in the presence of oscillating and constant external magnetic fields is studied using the Landau-Lifshitz-Gilbert (LLG) approach. The uniformly magnetized sample is considered to be an ellipsoidal, axially symmetric particle described by demagnetization factors, with a uniaxial crystallographic anisotropy whose easy axis forms an angle with the applied field direction. We investigate how changes in particle shape affect its nonlinear magnetization dynamics. To provide a systematic study, all results are presented in the form of bifurcation diagrams for all relevant dynamical regimes of the considered system. In this paper, we show that the sample's (particle's) shape and its orientation with respect to the external field (the system configuration) determine the character of the magnetization dynamics: deterministic behavior versus the appearance of chaotic states. A simple change in the system's configuration or in the shapes of its parts can transfer it from a chaotic to a periodic or even static regime and back. Moreover, the effect of magnetization-precession stall and of alignment of the magnetic moments parallel or antiparallel to the external oscillating field is revealed, and a way to control such 'polarized' states is found. Our results suggest that varying the particle's shape and the field geometry may provide a useful way of controlling magnetization dynamics in complex magnetic systems.
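
    The dynamical core of such studies is the LLG equation for a single macrospin with a shape (demagnetizing) field, a tilted uniaxial anisotropy field, and a constant-plus-oscillating applied field. The macrospin integrator below is a minimal sketch with assumed parameter values, not a reproduction of the paper's model:

        import numpy as np

        gamma, alpha, Ms = 1.76e11, 0.02, 8.0e5   # rad/(s*T), damping, A/m
        mu0 = 4e-7 * np.pi
        N = np.array([0.1, 0.1, 0.8])   # demag factors (axially symmetric ellipsoid)
        Ku = 1.0e4                      # uniaxial anisotropy constant (J/m^3)
        e_k = np.array([np.sin(0.3), 0.0, np.cos(0.3)])  # tilted easy axis
        H0, h_ac, f_ac = 0.02/mu0, 0.005/mu0, 2.0e9     # fields (A/m), frequency (Hz)

        def h_eff(m, t):
            h_ext = np.array([0.0, 0.0, H0 + h_ac*np.sin(2*np.pi*f_ac*t)])
            h_dem = -N * m * Ms                           # shape-dependent demag field
            h_ani = (2*Ku/(mu0*Ms)) * np.dot(m, e_k) * e_k
            return h_ext + h_dem + h_ani

        def llg_rhs(m, t):
            h = mu0 * h_eff(m, t)                         # convert to Tesla
            mxh = np.cross(m, h)
            return -gamma/(1 + alpha**2) * (mxh + alpha*np.cross(m, mxh))

        m, dt = np.array([0.0, 0.1, 0.995]), 1e-13        # initial direction, step (s)
        m /= np.linalg.norm(m)
        for i in range(200000):                           # Euler step + renormalization
            m += dt * llg_rhs(m, i*dt)
            m /= np.linalg.norm(m)
        print("final magnetization direction:", m)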

  11. A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization

    Science.gov (United States)

    Liu, Shuang; Hu, Xiangyun; Liu, Tianyou

    2014-07-01

    Simulating the foraging behavior of natural ants, the ant colony optimization (ACO) algorithm performs excellently on combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO has seldom been used to invert gravity and magnetic data. On the basis of a continuous, multi-dimensional objective function for potential field data inversion, we present the node-partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants tour the nodes directionally according to transition probabilities. We update the pheromone trails using a Gaussian mapping between the objective function value and the quantity of pheromone. The algorithm can analyze the search results in real time, improving the convergence rate and the precision of the inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary-representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
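
    The node-partition idea can be sketched in a few lines: each continuous model variable is discretized into nodes, ants pick nodes with pheromone-weighted probabilities, and pheromone deposits follow a Gaussian mapping of the misfit. This toy version, with a quadratic misfit standing in for a gravity/magnetic forward model, is illustrative only and not the authors' implementation:

        import numpy as np

        rng = np.random.default_rng(0)

        def objective(x):            # misfit to minimize (illustrative 2-variable model)
            return (x[0] - 3.0)**2 + (x[1] + 1.5)**2

        n_vars, n_nodes, n_ants, n_iter, rho = 2, 50, 20, 100, 0.1
        lo, hi = np.array([-10.0, -10.0]), np.array([10.0, 10.0])
        nodes = np.linspace(0, 1, n_nodes)        # node partition of each variable
        tau = np.ones((n_vars, n_nodes))          # pheromone on each node

        for _ in range(n_iter):
            probs = tau / tau.sum(axis=1, keepdims=True)   # transition probabilities
            for _ in range(n_ants):
                idx = [rng.choice(n_nodes, p=probs[v]) for v in range(n_vars)]
                x = lo + (hi - lo) * nodes[idx]
                mis = objective(x)
                deposit = np.exp(-mis**2 / 2.0)   # Gaussian misfit-to-pheromone mapping
                for v in range(n_vars):
                    tau[v, idx[v]] += deposit
            tau *= (1.0 - rho)                    # pheromone evaporation

        best = lo + (hi - lo) * nodes[np.argmax(tau, axis=1)]
        print("recovered model variables:", best)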

  12. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    Science.gov (United States)

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
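
    The statistical backbone is the standard variance of the time average of a correlated series and its uncorrelated limit (textbook results, not formulas quoted from the paper):

        \mathrm{Var}(\bar{u}) \approx \frac{2\,T_I\,\sigma_u^2}{T} \quad (T \gg T_I), \qquad \mathrm{Var}(\bar{u}) = \frac{\sigma_u^2}{N} \ \ \text{($N$ uncorrelated samples)},

    where \sigma_u^2 is the variance of the sampled velocity, T_I the integral time scale of the sampled flow field, and T the exposure time; since the paper finds T_I to be on the order of the sampling interval, the uncorrelated form applies.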

  13. Optimal protocols and optimal transport in stochastic thermodynamics.

    Science.gov (United States)

    Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo

    2011-06-24

    Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question is naturally posed how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.
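
    In standard notation, the optimal protocols referred to are characterized by the inviscid Burgers equation, with the density advected by the Burgers velocity field:

        \partial_t v + (v\cdot\nabla)v = 0, \qquad \partial_t \rho + \nabla\cdot(\rho v) = 0,

    so minimizing the expected heat released or work done in finite time reduces to a deterministic optimal-transport problem solved by these two equations.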

  14. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    Science.gov (United States)

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  15. Optimized sample preparation for two-dimensional gel electrophoresis of soluble proteins from chicken bursa of Fabricius

    Directory of Open Access Journals (Sweden)

    Zheng Xiaojuan

    2009-10-01

    Full Text Available Abstract Background Two-dimensional gel electrophoresis (2-DE is a powerful method to study protein expression and function in living organisms and diseases. This technique, however, has not been applied to avian bursa of Fabricius (BF, a central immune organ. Here, optimized 2-DE sample preparation methodologies were constructed for the chicken BF tissue. Using the optimized protocol, we performed further 2-DE analysis on a soluble protein extract from the BF of chickens infected with virulent avibirnavirus. To demonstrate the quality of the extracted proteins, several differentially expressed protein spots selected were cut from 2-DE gels and identified by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS. Results An extraction buffer containing 7 M urea, 2 M thiourea, 2% (w/v 3-[(3-cholamidopropyl-dimethylammonio]-1-propanesulfonate (CHAPS, 50 mM dithiothreitol (DTT, 0.2% Bio-Lyte 3/10, 1 mM phenylmethylsulfonyl fluoride (PMSF, 20 U/ml Deoxyribonuclease I (DNase I, and 0.25 mg/ml Ribonuclease A (RNase A, combined with sonication and vortex, yielded the best 2-DE data. Relative to non-frozen immobilized pH gradient (IPG strips, frozen IPG strips did not result in significant changes in the 2-DE patterns after isoelectric focusing (IEF. When the optimized protocol was used to analyze the spleen and thymus, as well as avibirnavirus-infected bursa, high quality 2-DE protein expression profiles were obtained. 2-DE maps of BF of chickens infected with virulent avibirnavirus were visibly different and many differentially expressed proteins were found. Conclusion These results showed that method C, in concert extraction buffer IV, was the most favorable for preparing samples for IEF and subsequent protein separation and yielded the best quality 2-DE patterns. The optimized protocol is a useful sample preparation method for comparative proteomics analysis of chicken BF tissues.

  16. Design optimization of a robust sleeve antenna for hepatic microwave ablation

    International Nuclear Information System (INIS)

    Prakash, Punit; Webster, John G; Deng Geng; Converse, Mark C; Mahvi, David M; Ferris, Michael C

    2008-01-01

    We describe the application of a Bayesian variable-number sample-path (VNSP) optimization algorithm to yield a robust design for a floating sleeve antenna for hepatic microwave ablation. Finite element models are used to generate the electromagnetic (EM) field and thermal distribution in liver for a given design. Dielectric properties of the tissue are assumed to vary within ±10% of the average properties to simulate variation among individuals. The Bayesian VNSP algorithm yields an optimal design that is a 14.3% improvement over the original design and is more robust in terms of lesion size, shape and efficiency. Moreover, the Bayesian VNSP algorithm finds an optimal solution while saving 68.2% of the simulation evaluations compared to the standard sample-path optimization method.

  17. Optimization of confinement in a toroidal plasma subject to strong radial electric fields

    International Nuclear Information System (INIS)

    Roth, J.R.

    1977-01-01

    A preliminary report on the identification and optimization of the independent variables that affect the ion density and confinement time in a bumpy torus plasma is presented. The independent variables include the polarity, position, and number of the midplane electrode rings, the method of gas injection, and the polarity and strength of a weak vertical magnetic field. Some characteristic data taken under conditions in which most of the independent variables were optimized are presented. The highest value of the electron number density on the plasma axis is 3.2 x 10^12 cm^-3, the highest ion heating efficiency is 47 percent, and the longest particle containment time is 2.0 milliseconds.

  18. Optimized Analytical Method to Determine Gallic and Picric Acids in Pyrotechnic Samples by Using HPLC/UV (Reverse Phase)

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-01-01

    A study on the optimization and development of a chromatographic method for the determination of gallic and picric acids in pyrotechnic samples is presented. To achieve this, both the analytical conditions for HPLC with diode detection and the extraction step for a selected sample were studied. (Author)

  19. Least Squares Magnetic-Field Optimization for Portable Nuclear Magnetic Resonance Magnet Design

    International Nuclear Information System (INIS)

    Paulsen, Jeffrey L; Franck, John; Demas, Vasiliki; Bouchard, Louis-S.

    2008-01-01

    Single-sided and mobile nuclear magnetic resonance (NMR) sensors have the advantages of portability, low cost, and low power consumption compared to conventional high-field NMR and magnetic resonance imaging (MRI) systems. We present fast, flexible, and easy-to-implement target field algorithms for mobile NMR and MRI magnet design. The optimization finds a global optimum in a cost function that minimizes the error in the target magnetic field in the sense of least squares. When the technique is tested on a ring array of permanent-magnet elements, the solution matches the classical dipole Halbach solution. For a single-sided handheld NMR sensor, the algorithm yields a 640 G field homogeneous to 16,100 ppm across a 1.9 cc volume located 1.5 cm above the top of the magnets and homogeneous to 32,200 ppm over a 7.6 cc volume. This regime is adequate for MRI applications. We demonstrate that the homogeneous region can be continuously moved away from the sensor by rotating magnet rod elements, opening the way for NMR sensors with adjustable 'sensitive volumes'
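
    The least-squares core of such target-field design is easy to sketch: build a forward matrix mapping element strengths to the field at target points, and solve for the strengths that best match the desired field. The geometry below (a ring of z-oriented point dipoles and a small target volume) is illustrative only, not the sensor described in the paper:

        import numpy as np

        mu0 = 4e-7 * np.pi

        # Candidate magnet elements on a ring in the z = 0 plane, moments along z.
        n_mag = 16
        phi = np.linspace(0, 2*np.pi, n_mag, endpoint=False)
        r_mag = np.stack([0.05*np.cos(phi), 0.05*np.sin(phi), np.zeros(n_mag)], axis=1)

        # Target points: a small volume 1.5 cm above the magnet plane.
        g = np.linspace(-0.005, 0.005, 5)
        X, Y, Z = np.meshgrid(g, g, 0.015 + g)
        r_tgt = np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)

        def bz_unit_dipole(r_t, r_m):
            """Bz at r_t from a unit dipole (1 A*m^2, along z) located at r_m."""
            d = r_t - r_m
            r = np.linalg.norm(d)
            return mu0/(4*np.pi) * (3*d[2]**2 / r**5 - 1.0/r**3)

        # Forward matrix: A[i, j] = Bz at target i per unit moment of element j.
        A = np.array([[bz_unit_dipole(rt, rm) for rm in r_mag] for rt in r_tgt])
        b = np.full(len(r_tgt), 0.064)            # target field: 640 G = 0.064 T

        m, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares element moments
        resid = A @ m - b
        print("peak field error (ppm):", 1e6*np.abs(resid).max()/0.064)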

  20. Optimizing headspace sampling temperature and time for analysis of volatile oxidation products in fish oil

    DEFF Research Database (Denmark)

    Rørbæk, Karen; Jensen, Benny

    1997-01-01

    Headspace gas chromatography (HS-GC), based on adsorption to Tenax GR(R), thermal desorption and GC, has been used for analysis of volatiles in fish oil. To optimize sampling conditions, the effect of heating the fish oil at various temperatures and times was evaluated from anisidine values (AV)...

  1. Topology optimization of the permanent magnet type MRI considering the magnetic field homogeneity

    International Nuclear Information System (INIS)

    Lee, Junghoon; Yoo, Jeonghoon

    2010-01-01

    This study suggests a concept design for the permanent magnet (PM) type magnetic resonance imaging (MRI) device based on the topology optimization method. The pulse currents in the gradient coils of the MRI device introduce eddy currents in the ferromagnetic material, which may worsen the imaging quality. To equalize the magnetic flux in the PM type MRI device for good imaging, the eddy current effect in the ferromagnetic material must be reduced. This study applies a topology optimization scheme to equalize the magnetic flux in the measuring domain of the PM type MRI device, exploiting the fact that the magnetic flux can be calculated directly by a commercial finite element analysis package. The density method is adopted for topology optimization, and the sensitivity of the objective function is computed according to the density change of each finite element in the design domain. As a result, optimal shapes of the pole of the PM type MRI device can be obtained. The commercial package ANSYS is used for analyzing the magnetic field problem and obtaining the resultant magnetic flux.

  2. A New Method for Determining Optimal Regularization Parameter in Near-Field Acoustic Holography

    Directory of Open Access Journals (Sweden)

    Yue Xiao

    2018-01-01

    Tikhonov regularization is effective in stabilizing the reconstruction process of near-field acoustic holography (NAH) based on the equivalent source method (ESM), and the selection of the optimal regularization parameter is a key problem that determines the regularization effect. In this work, a new method for determining the optimal regularization parameter is proposed. The transfer matrix relating the source strengths of the equivalent sources to the measured pressures on the hologram surface is augmented by adding a fictitious point source with zero strength. The minimization of the norm of this fictitious point source strength serves as the criterion for choosing the optimal regularization parameter, since the reconstructed value should tend to zero. The original inverse problem of calculating the source strengths is thus converted into a univariate optimization problem, which is solved by a one-dimensional search technique. Two numerical simulations, with a point-driven simply supported plate and with a pulsating sphere, are investigated to validate the performance of the proposed method by comparison with the L-curve method. The results demonstrate that the proposed method can determine the regularization parameter correctly and effectively for reconstruction in NAH.
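
    A numerical sketch of the criterion (with a random complex matrix standing in for the equivalent-source transfer matrix, so purely illustrative): append a fictitious source column with true strength zero, compute the Tikhonov solution as a function of the regularization parameter, and search in one dimension for the parameter that minimizes the reconstructed fictitious strength:

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(1)

        n_mics, n_src = 64, 32
        G = rng.standard_normal((n_mics, n_src)) + 1j*rng.standard_normal((n_mics, n_src))
        q_true = rng.standard_normal(n_src)
        p = G @ q_true + 0.05 * rng.standard_normal(n_mics)  # noisy hologram pressures

        g_fict = rng.standard_normal(n_mics) + 1j*rng.standard_normal(n_mics)
        G_aug = np.column_stack([G, g_fict])  # fictitious point source (true strength 0)

        def fict_strength(log_lam):
            lam = 10.0**log_lam
            GH = G_aug.conj().T
            q = np.linalg.solve(GH @ G_aug + lam*np.eye(n_src + 1), GH @ p)
            return np.abs(q[-1])              # reconstructed fictitious source strength

        res = minimize_scalar(fict_strength, bounds=(-8, 2), method="bounded")
        print("optimal regularization parameter:", 10.0**res.x)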

  3. Application of support vector regression for optimization of vibration flow field of high-density polyethylene melts characterized by small angle light scattering

    Science.gov (United States)

    Xian, Guangming

    2018-03-01

    In this paper, the vibration flow field parameters of polymer melts in a visual slit die are optimized using an intelligent algorithm. Experimental small angle light scattering (SALS) patterns are shown to characterize the processing process. In order to capture the scattered light, a polarizer and an analyzer are placed before and after the polymer melts. The results reported in this study are obtained using high-density polyethylene (HDPE) at a rotation speed of 28 rpm. In addition, the support vector regression (SVR) analytical method is introduced for optimizing the parameters of the vibration flow field. This work establishes the general applicability of SVR for predicting the optimal parameters of vibration flow fields.
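
    The essence of the SVR step can be sketched with scikit-learn: fit a regression surface from vibration parameters to a quality index derived from the SALS patterns, then search the fitted surface for the predicted optimum. The data, parameter names and ranges below are hypothetical:

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(2)

        # Hypothetical training data: vibration parameters (frequency Hz, amplitude mm)
        # versus a scalar quality index extracted from SALS patterns.
        X = rng.uniform([5.0, 0.1], [50.0, 2.0], size=(60, 2))
        y = -((X[:, 0] - 25)**2)/200 - ((X[:, 1] - 1.0)**2) + rng.normal(0, 0.05, 60)

        model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

        # Predict over a grid and report the parameters with the best predicted quality.
        f = np.linspace(5, 50, 100); a = np.linspace(0.1, 2.0, 100)
        F, Amp = np.meshgrid(f, a)
        grid = np.column_stack([F.ravel(), Amp.ravel()])
        pred = model.predict(grid)
        print("predicted optimum (frequency, amplitude):", grid[np.argmax(pred)])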

  4. Adaptive Metropolis Sampling with Product Distributions

    Science.gov (United States)

    Wolpert, David H.; Lee, Chiu Fan

    2005-01-01

    The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution pi(x). It works by repeatedly sampling a separate proposal distribution T(x,x') to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the {x(t') : t' < t} to estimate the product distribution that has the least Kullback-Leibler distance to pi. That estimate is the information-theoretically optimal mean-field approximation to pi. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
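
    A minimal sketch of the idea: an independence sampler whose product-Gaussian proposal is periodically refitted from the walk's history. The strict validity caveats of adaptive MCMC are ignored here, and the target is an illustrative correlated Gaussian rather than anything from the paper:

        import numpy as np

        rng = np.random.default_rng(3)

        def log_pi(x):                 # target: correlated 2-D Gaussian (illustrative)
            cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
            return -0.5 * x @ cov_inv @ x

        dim, n_steps, adapt_every = 2, 20000, 500
        mu, sig = np.zeros(dim), np.ones(dim)  # product (independent-Gaussian) proposal
        x = np.zeros(dim)
        lp_x = log_pi(x)
        chain = []

        def log_T(z):                          # log density of the product proposal
            return -0.5*np.sum(((z - mu)/sig)**2) - np.sum(np.log(sig))

        for t in range(1, n_steps + 1):
            x_new = mu + sig * rng.standard_normal(dim)      # independence proposal
            lp_new = log_pi(x_new)
            # MH acceptance for an independence sampler: pi(x')T(x) / (pi(x)T(x'))
            if np.log(rng.uniform()) < lp_new + log_T(x) - lp_x - log_T(x_new):
                x, lp_x = x_new, lp_new
            chain.append(x)
            if t % adapt_every == 0:                         # refit product proposal
                hist = np.array(chain)
                mu, sig = hist.mean(axis=0), hist.std(axis=0) + 1e-3

        print("sample mean:", np.array(chain[n_steps//2:]).mean(axis=0))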

  5. Optimization of Photospheric Electric Field Estimates for Accurate Retrieval of Total Magnetic Energy Injection

    Science.gov (United States)

    Lumme, E.; Pomoell, J.; Kilpua, E. K. J.

    2017-12-01

    Estimates of the photospheric magnetic, electric, and plasma velocity fields are essential for studying the dynamics of the solar atmosphere, for example through the derivative quantities of Poynting and relative helicity flux and using the fields to obtain the lower boundary condition for data-driven coronal simulations. In this paper we study the performance of a data processing and electric field inversion approach that requires only high-resolution and high-cadence line-of-sight or vector magnetograms, which we obtain from the Helioseismic and Magnetic Imager (HMI) onboard Solar Dynamics Observatory (SDO). The approach does not require any photospheric velocity estimates, and the lacking velocity information is compensated for using ad hoc assumptions. We show that the free parameters of these assumptions can be optimized to reproduce the time evolution of the total magnetic energy injection through the photosphere in NOAA AR 11158, when compared to recent state-of-the-art estimates for this active region. However, we find that the relative magnetic helicity injection is reproduced poorly, reaching at best a modest underestimation. We also discuss the effect of some of the data processing details on the results, including the masking of the noise-dominated pixels and the tracking method of the active region, neither of which has received much attention in the literature so far. In most cases the effect of these details is small, but when the optimization of the free parameters of the ad hoc assumptions is considered, a consistent use of the noise mask is required. The results found in this paper imply that the data processing and electric field inversion approach that uses only the photospheric magnetic field information offers a flexible and straightforward way to obtain photospheric magnetic and electric field estimates suitable for practical applications such as coronal modeling studies.

  6. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Energy Technology Data Exchange (ETDEWEB)

    Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it [Sapienza Università di Roma, Dipartimento di Ingegneria Civile, Edile e Ambientale (Italy); Alfonso, L. [Hydroinformatics Chair Group, UNESCO-IHE, Delft (Netherlands); Di Baldassarre, G. [Department of Earth Sciences, Program for Air, Water and Landscape Sciences, Uppsala University (Sweden)

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  7. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    International Nuclear Information System (INIS)

    Ridolfi, E.; Napolitano, F.; Alfonso, L.; Di Baldassarre, G.

    2016-01-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  8. Constrained Optimization and Optimal Control for Partial Differential Equations

    CERN Document Server

    Leugering, Günter; Griewank, Andreas

    2012-01-01

    This special volume focuses on optimization and control of processes governed by partial differential equations. The contributors are mostly participants of the DFG priority program 1253: Optimization with PDE-constraints, which has been active since 2006. The book is organized in sections which cover almost the entire spectrum of modern research in this emerging field. Indeed, even though the field of optimal control and optimization for PDE-constrained problems has undergone a dramatic increase of interest during the last four decades, a full theory for nonlinear problems is still lacking. The cont

  9. What SCADA systems can offer to optimize field operations

    International Nuclear Information System (INIS)

    McLean, D.J.

    1997-01-01

    A new technology developed by Kenomic Controls Ltd. of Calgary was designed to solve some of the problems associated with producing gas wells with high gas-to-liquids ratios. The rationale and the system architecture of the SCADA (Supervisory Control and Data Acquisition) system were described. The most common application of SCADA is the Electronic Flow Measurement (EFM) installation using a solar or thermo-electric generator as a power source for the local electronics. Benefits that the SCADA system can provide to producing fields, such as increased revenue, decreased operating costs, and reduced fixed and working capital requirements, were outlined, along with planning and implementation strategies for SCADA. A case history of a gas well production optimization system in the Pierceland area of northwest Saskatchewan was provided as an illustrative example. 9 figs

  10. Universal field matching in craniospinal irradiation by a background-dose gradient-optimized method.

    Science.gov (United States)

    Traneus, Erik; Bizzocchi, Nicola; Fellin, Francesco; Rombi, Barbara; Farace, Paolo

    2018-01-01

    Gradient-optimized methods are superseding traditional feathering methods for planning field junctions in craniospinal irradiation. In this note, a new gradient-optimized technique based on the use of a background dose is described. Treatment planning was performed with RayStation (RaySearch Laboratories, Stockholm, Sweden) on the CT scans of a pediatric patient. Both proton (pencil beam scanning) and photon (volumetric modulated arc therapy) treatments were planned with three isocenters. An 'in silico' ideal background dose was created first to cover the upper-spinal target and to produce a perfect dose gradient along the upper and lower junction regions. Using it as background, the cranial and lower-spinal beams were planned by inverse optimization to obtain dose coverage of their relevant targets and of the junction volumes. Finally, the upper-spinal beam was inversely planned after removal of the background dose and with the previously optimized beams switched on. In both proton and photon plans, the optimized cranial and lower-spinal beams produced a perfect linear gradient in the junction regions, complementary to that produced by the optimized upper-spinal beam. The final dose distributions showed homogeneous coverage of the targets. This simple technique yields high-quality gradients in the junction region, works universally for photons as well as protons, and is applicable in any TPS that can manage a background dose. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  11. Optimization of magnetic field-assisted ultrasonication for the disintegration of waste activated sludge using Box-Behnken design with response surface methodology.

    Science.gov (United States)

    Guan, Su; Deng, Feng; Huang, Si-Qi; Liu, Shu-Yang; Ai, Le-Xian; She, Pu-Ying

    2017-09-01

    This study investigated for the first time the feasibility of using a magnetic field for sludge disintegration. A disintegration degree (DD) of approximately 41.01% was reached after 30 min at 180 mT magnetic field intensity with magnetic field treatment alone. Protein and polysaccharide contents significantly increased. This test was optimized using a Box-Behnken design (BBD) with response surface methodology (RSM) to fit a multiple-regression equation for the DD. The maximum DD was 43.75%, and the protein and polysaccharide contents increased to 56.71 and 119.44 mg/L, respectively, when the magnetic field strength was 119.69 mT, the reaction time was 30.49 min, and the pH was 9.82 in the optimization experiment. We then analyzed the effects of ultrasound alone. We are the first to combine a magnetic field with ultrasound to disintegrate waste activated sludge (WAS). The optimum effect of ultrasound alone was obtained at a frequency of 45 kHz, with a DD of about 58.09%. By contrast, a DD of 62.62% was reached in the combined magnetic field and ultrasound treatment. This combined test was also optimized using BBD with RSM to fit a multiple-regression equation for the DD. The maximum DD of 64.59% was achieved when the magnetic field intensity was 197.87 mT, the ultrasonic frequency was 42.28 kHz, the reaction time was 33.96 min, and the pH was 8.90. These results were consistent with those of particle size and electron microscopy analyses. This research proved that a magnetic field can effectively disintegrate WAS and can be combined with other physical techniques such as ultrasound for optimal results. Copyright © 2017 Elsevier B.V. All rights reserved.
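
    The BBD/RSM step amounts to fitting a full quadratic model by least squares and solving for its stationary point. The sketch below uses a hypothetical three-factor coded design and invented response values, purely to show the arithmetic:

        import numpy as np

        # Hypothetical Box-Behnken-style design for three factors (coded -1, 0, +1)
        # and measured disintegration degrees; the values are illustrative only.
        X = np.array([[-1,-1,0],[1,-1,0],[-1,1,0],[1,1,0],[-1,0,-1],[1,0,-1],
                      [-1,0,1],[1,0,1],[0,-1,-1],[0,1,-1],[0,-1,1],[0,1,1],
                      [0,0,0],[0,0,0],[0,0,0]], dtype=float)
        y = np.array([48., 55., 52., 60., 47., 56., 50., 59., 49., 54., 51., 58.,
                      62., 63., 62.])

        # Full quadratic model: 1, x_i, x_i^2, x_i*x_j
        def design(X):
            x1, x2, x3 = X.T
            return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                    x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])

        beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)

        # Stationary point: solve grad = 0 for the fitted quadratic surface.
        b = beta[1:4]
        B = np.array([[2*beta[4], beta[7],   beta[8]],
                      [beta[7],   2*beta[5], beta[9]],
                      [beta[8],   beta[9],   2*beta[6]]])
        x_opt = np.linalg.solve(B, -b)
        print("stationary point (coded units):", x_opt)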

  12. Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2015-01-01

    Roč. 52, č. 2 (2015), s. 419-440 ISSN 0021-9002 Grant - others:GA AV ČR(CZ) 171396 Institutional support: RVO:67985556 Keywords : Dominated Convergence theorem for the expected average criterion * Discrepancy function * Kolmogorov inequality * Innovations * Strong sample-path optimality Subject RIV: BC - Control Systems Theory Impact factor: 0.665, year: 2015 http://library.utia.cas.cz/separaty/2015/E/sladky-0449029.pdf

  13. Optimization of a radiochemistry method for plutonium determination in biological samples

    International Nuclear Information System (INIS)

    Cerchetti, Maria L.; Arguelles, Maria G.

    2005-01-01

    Plutonium has been widely used for civilian and military activities. Nevertheless, the methods for monitoring occupational exposure have not evolved at the same pace, and such monitoring remains one of the major challenges in radiological protection practice. Due to the low acceptable incorporation limit, the usual determination is based on indirect methods in urine samples. Our main objective was to optimize a technique used to monitor internal contamination of workers exposed to plutonium isotopes. Different parameters were modified and their influence on the three steps of the method was evaluated; those which gave the highest yield and feasibility were selected. The method involves: 1) sample concentration (coprecipitation); 2) plutonium purification; and 3) source preparation by electrodeposition. In the coprecipitation phase, changes in temperature and carrier concentration were evaluated. In the ion-exchange separation, changes in the type of resin, the elution solution of hydroxylamine (concentration and volume), the column length, and column recycling were evaluated. Finally, in the electrodeposition phase, we modified the electrolytic solution, pH and time. Measurements were made by liquid scintillation counting and alpha spectrometry (PIPS). We obtained the following yields: 88% for coprecipitation (at 60 °C with 2 ml of CaHPO4), 71% for ion exchange (AG 1x8 Cl⁻ 100-200 mesh resin, hydroxylamine 0.1 N in HCl 0.2 N as eluent, column length between 4.5 and 8 cm), and 93% for electrodeposition (H2SO4-NH4OH, 100 minutes and pH from 2 to 2.8). The expanded uncertainty was 30% (95% confidence level), the decision threshold (Lc) was 0.102 Bq/L, and the minimum detectable activity was 0.218 Bq/L of urine. We obtained an optimized method to screen workers exposed to plutonium. (author)

  14. Well Field Management Using Multi-Objective Optimization

    DEFF Research Database (Denmark)

    Hansen, Annette Kirstine; Hendricks Franssen, H. J.; Bauer-Gottwein, Peter

    2013-01-01

    with infiltration basins, injection wells and abstraction wells. The two management objectives are to minimize the amount of water needed for infiltration and to minimize the risk of getting contaminated water into the drinking water wells. The management is subject to a daily demand fulfilment constraint. Two...... different optimization methods are tested. Constant scheduling where decision variables are held constant during the time of optimization, and sequential scheduling where the optimization is performed stepwise for daily time steps. The latter is developed to work in a real-time situation. Case study...

  15. Genus-Specific Primers for Study of Fusarium Communities in Field Samples

    Science.gov (United States)

    Edel-Hermann, Véronique; Gautheron, Nadine; Durling, Mikael Brandström; Kolseth, Anna-Karin; Steinberg, Christian; Persson, Paula; Friberg, Hanna

    2015-01-01

    Fusarium is a large and diverse genus of fungi of great agricultural and economic importance, containing many plant pathogens and mycotoxin producers. To date, high-throughput sequencing of Fusarium communities has been limited by the lack of genus-specific primers targeting regions with high discriminatory power at the species level. In the present study, we evaluated two Fusarium-specific primer pairs targeting translation elongation factor 1 (TEF1). We also present the new primer pair Fa+7/Ra+6. Mock Fusarium communities reflecting phylogenetic diversity were used to evaluate the accuracy of the primers in reflecting the relative abundance of the species. TEF1 amplicons were subjected to 454 high-throughput sequencing to characterize Fusarium communities. Field samples from soil and wheat kernels were included to test the method on more-complex material. For kernel samples, a single PCR was sufficient, while for soil samples, nested PCR was necessary. The newly developed primer pairs Fa+7/Ra+6 and Fa/Ra accurately reflected Fusarium species composition in mock DNA communities. In field samples, 47 Fusarium operational taxonomic units were identified, with the highest Fusarium diversity in soil. The Fusarium community in soil was dominated by members of the Fusarium incarnatum-Fusarium equiseti species complex, contradicting findings in previous studies. The method was successfully applied to analyze Fusarium communities in soil and plant material and can facilitate further studies of Fusarium ecology. PMID:26519387

  16. Temperature and flow fields in samples heated in monoellipsoidal mirror furnaces

    Science.gov (United States)

    Rivas, D.; Haya, R.

    The temperature field in samples heated in monoellipsoidal mirror furnaces will be analyzed. The radiation heat exchange between the sample and the mirror is formulated analytically, taking into account multiple reflections at the mirror. It will be shown that the effect of these multiple reflections on the heating process is quite important and, as a consequence, the effect of the mirror reflectance on the temperature field is quite strong. The conduction-radiation model will be used to simulate the heating process in the floating-zone technique under microgravity conditions; important parameters like the Marangoni number (which drives the thermocapillary flow in the melt) and the temperature gradient at the melt-crystal interface will be estimated. The model will be validated by comparison with experimental data. The case of samples mounted in a wall-free configuration (as in the MAXUS-4 programme) will also be considered. Application to the case of compound samples (graphite-silicon-graphite) will be made; the melting of the silicon part and the surface temperature distribution in the melt will be analyzed. Of special interest is the temperature difference between the two graphite rods that hold the silicon part, since it drives the thermocapillary flow in the melt. This thermocapillary flow will be studied after coupling the previous model with the convective effects. The possibility of counterbalancing this flow by controlled vibration of the graphite rods will be studied as well. Numerical results show that suppressing the thermocapillary flow can be accomplished quite effectively.

  17. SpecOp: Optimal Extraction Software for Integral Field Unit Spectrographs

    Science.gov (United States)

    McCarron, Adam; Ciardullo, Robin; Eracleous, Michael

    2018-01-01

    The Hobby-Eberly Telescope’s new low resolution integral field spectrographs, LRS2-B and LRS2-R, each cover a 12”x6” area on the sky with 280 fibers and generate spectra with resolutions between R=1100 and R=1900. To extract 1-D spectra from the instrument’s 3D data cubes, a program is needed that is flexible enough to work for a wide variety of targets, including continuum point sources, emission line sources, and compact sources embedded in complex backgrounds. We therefore introduce SpecOp, a user-friendly python program for optimally extracting spectra from integral-field unit spectrographs. As input, SpecOp takes a sky-subtracted data cube consisting of images at each wavelength increment set by the instrument’s spectral resolution, and an error file for each count measurement. All of these files are generated by the current LRS2 reduction pipeline. The program then collapses the cube in the image plane using the optimal extraction algorithm detailed by Keith Horne (1986). The various user-selected options include the fraction of the total signal enclosed in a contour-defined region, the wavelength range to analyze, and the precision of the spatial profile calculation. SpecOp can output the weighted counts and errors at each wavelength in various table formats using python’s astropy package. We outline the algorithm used for extraction and explain how the software can be used to easily obtain high-quality 1-D spectra. We demonstrate the utility of the program by applying it to spectra of a variety of quasars and AGNs. In some of these targets, we extract the spectrum of a nuclear point source that is superposed on a spatially extended galaxy.
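
    The Horne (1986) weighting that such optimal extraction uses can be written compactly: for each wavelength slice, flux = sum_i P_i D_i / V_i divided by sum_i P_i^2 / V_i, with variance 1 / sum_i P_i^2 / V_i for a normalized spatial profile P. The sketch below is a generic implementation of that estimator, not the actual SpecOp code:

        import numpy as np

        def optimal_extract(cube, var, profile):
            """Horne (1986)-style optimal extraction of a 1-D spectrum from an
            IFU data cube.

            cube:    (n_wave, ny, nx) sky-subtracted data
            var:     (n_wave, ny, nx) variance of each count measurement
            profile: (ny, nx) spatial profile of the source (normalized here)
            """
            P = profile / profile.sum()
            num = (P * cube / var).sum(axis=(1, 2))   # sum_i P_i D_i / V_i
            den = (P**2 / var).sum(axis=(1, 2))       # sum_i P_i^2 / V_i
            flux = num / den
            flux_var = 1.0 / den                      # variance of the estimate
            return flux, flux_var

        # Illustrative call with random data standing in for a real LRS2 cube:
        rng = np.random.default_rng(4)
        cube = rng.normal(1.0, 0.1, (100, 12, 6))
        var = np.full_like(cube, 0.01)
        prof = np.exp(-((np.arange(12)[:, None]-6)**2
                        + (np.arange(6)[None, :]-3)**2)/4.0)
        flux, fvar = optimal_extract(cube, var, prof)
        print(flux[:3], fvar[:3])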

  18. Global analysis of the temperature and flow fields in samples heated in multizone resistance furnaces

    Science.gov (United States)

    Pérez-Grande, I.; Rivas, D.; de Pablo, V.

    The temperature field in samples heated in multizone resistance furnaces will be analyzed using a global model in which the temperature fields in the sample, the furnace and the insulation are coupled; the input thermal data are the electric power supplied to the heaters. The radiation heat exchange between the sample and the furnace is formulated analytically, taking into account specular reflections at the sample; for the solid sample the reflectance is both diffuse and specular, while for the melt it is mostly specular. This behavior is modeled through the exchange view factors, which depend on whether the sample is solid or liquid and are therefore not known a priori. The effect of this specular behavior on the temperature field will be analyzed by comparison with the case of diffuse samples. A parameter of great importance is the thermal conductivity of the insulation material; it will be shown that the temperature field depends strongly on it. A careful characterization of the insulation is therefore necessary; here it is done with the aid of experimental results, which also serve to validate the model. The heating process in the floating-zone technique under microgravity conditions will be simulated; parameters like the Marangoni number and the temperature gradient at the melt-crystal interface will be estimated. Application to the case of compound samples (graphite-silicon-graphite) will be made; the temperature distribution in the silicon part will be studied, especially the temperature difference between the two graphite rods that hold the silicon, since it drives the thermocapillary flow in the melt. This flow will be studied after coupling the previous model with the convective effects. The possibility of suppressing this flow by controlled vibration of the graphite rods will also be analyzed. Numerical results show that the thermocapillary flow can indeed be counterbalanced quite effectively.

  19. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Kirsi Harju

    2015-11-01

    Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods for mussel samples were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards into toxic mussel extracts. The results were within a z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD).

  20. Flow field optimization for proton exchange membrane fuel cells with varying channel heights and widths

    International Nuclear Information System (INIS)

    Wang Xiaodong; Huang Yuxian; Cheng, C.-H.; Jang, J.-Y.; Lee, D.-J.; Yan, W.-M.; Su Ay

    2009-01-01

    The optimal cathode flow field design of a single-serpentine proton exchange membrane fuel cell is obtained by adopting a combined optimization procedure comprising a simplified conjugate-gradient method (SCGM) and a completely three-dimensional, two-phase, non-isothermal fuel cell model. The cell output power density P_cell is the objective function to be maximized, with channel heights H1-H5 and channel widths W2-W5 as search variables. The optimal design has tapered channels 1, 3 and 4 and diverging channels 2 and 5, producing a 22.51% improvement over the basic design, in which all heights and widths are set to 1 mm. The reduced heights of channels 2-4 significantly enhance sub-rib convection, which effectively transports oxygen to, and liquid water out of, the diffusion layer. The final diverging channel prevents significant leakage of fuel to the outlet via sub-rib convection from channel 4. A near-optimal design that is easily manufactured, without a large loss in cell performance, is also discussed.

  1. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    Latest technologies like the Internet, corporate intranets, data warehouses, ERPs, satellites, digital sensors, embedded systems and mobile networks are all generating such massive amounts of data that it is getting very difficult to analyze and understand them, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution: using a fraction of the computing resources, sampling can often provide the same level of accuracy. The sampling process requires much care because many factors are involved in the determination of the correct sample size. The approach proposed in this paper tries to find a solution to this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the sufficient sample size, which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
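
    The abstract does not reproduce the statistical formula itself. As a rough, hedged illustration of the idea, the sketch below computes a Cochran-style sufficient sample size with a finite-population correction; the function name and the default confidence/margin parameters are illustrative assumptions, not taken from the paper.

```python
import math

def sufficient_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran-style sample size with finite-population correction.

    z      -- z-score for the desired confidence level (1.96 ~ 95%)
    margin -- tolerated margin of error
    p      -- assumed proportion (0.5 maximizes the required size)
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

# A dataset of 10 million records can often be represented by ~385 rows
# at the 95%-confidence / 5%-margin level.
print(sufficient_sample_size(10_000_000))  # -> 385
```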

  2. Relationships between depressive symptoms and perceived social support, self-esteem, & optimism in a sample of rural adolescents.

    Science.gov (United States)

    Weber, Scott; Puskar, Kathryn Rose; Ren, Dianxu

    2010-09-01

    Stress, developmental changes and social adjustment problems can be significant in rural teens. Screening for psychosocial problems by teachers and other school personnel is infrequent but can be a useful health promotion strategy. We used a cross-sectional survey descriptive design to examine the inter-relationships between depressive symptoms and perceived social support, self-esteem, and optimism in a sample of rural school-based adolescents. Depressive symptoms were negatively correlated with peer social support, family social support, self-esteem, and optimism. Findings underscore the importance for teachers and other school staff to provide health education. Results can be used as the basis for education to improve optimism, self-esteem, social supports and, thus, depression symptoms of teens.

  3. Optimization of electret ionization chambers for dosimetry in mixed neutron-gamma fields

    International Nuclear Information System (INIS)

    Doerschel, B.; Pretzsch, G.

    1984-01-01

    The properties of combination dosemeters consisting of two air-filled electret ionization chambers in mixed neutron-gamma fields have been investigated. The first chamber, with polyethylene walls, is sensitive to neutrons and gamma rays; the second, with Teflon walls, is sensitive to gamma rays only. The properties of the dosemeters are determined by the resulting errors and the measuring range. As both properties depend on the dimensions of the electret ionization chambers, the dimensions have been taken into account in the optimization. The results show that with these dosemeters the effective dose equivalent in mixed neutron-gamma fields can be determined nearly independently of the spectra. The lower detection limit is less than 1 mSv and the maximum uncertainty of the dose measurements is about 12%. (author)

  4. Optimization of a method based on micro-matrix solid-phase dispersion (micro-MSPD) for the determination of PCBs in mussel samples

    Directory of Open Access Journals (Sweden)

    Nieves Carro

    2017-03-01

    Full Text Available This paper reports the development and optimization of micro-matrix solid-phase dispersion (micro-MSPD) of nine polychlorinated biphenyls (PCBs) in mussel samples (Mytilus galloprovincialis) by using a two-level factorial design. Four variables (amount of sample, anhydrous sodium sulphate, Florisil and solvent volume) were considered as factors in the optimization process. The results suggested that only the interaction between the amount of anhydrous sodium sulphate and the solvent volume was statistically significant for the overall recovery of a trichlorinated compound, CB 28. Generally, most of the considered species exhibited similar behaviour: the sample and Florisil amounts had a positive effect on PCB extraction, whereas the solvent volume and sulphate amount had a negative effect. The analytical determination and confirmation of PCBs were carried out by GC-ECD and GC-MS/MS, respectively. The method was validated, having satisfactory precision and accuracy with RSD values below 6% and recoveries between 81 and 116% for all congeners. The optimized method was applied to the extraction of real mussel samples from two Galician Rías.

  5. A longitudinal field multiple sampling ionization chamber for RIBLL2

    International Nuclear Information System (INIS)

    Tang Shuwen; Ma Peng; Lu Chengui; Duan Limin; Sun Zhiyu; Yang Herun; Zhang Jinxia; Hu Zhengguo; Xu Shanhu

    2012-01-01

    A longitudinal field MUltiple Sampling Ionization Chamber (MUSIC), which makes multiple measurements of the energy loss of very high energy heavy ions at RIBLL2, has been constructed and tested with a three-constituent α source (239Pu: 3.435 MeV, 241Am: 3.913 MeV, 244Cm: 4.356 MeV). The voltage plateau curve has been plotted, and −500 V is determined to be a proper working voltage. The energy resolution of a sampling unit is 271.4 keV FWHM when 3.435 MeV of energy is deposited. A Geant4 Monte Carlo simulation indicates that the detector can provide unique particle identification for ions with Z ≥ 4. (authors)

  6. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    International Nuclear Information System (INIS)

    Carpy, R; Picker, G; Amann, B; Ranebo, H; Vincent-Bonnieu, S; Minster, O; Winter, J; Dettmann, J; Castiglione, L; Höhler, R; Langevin, D

    2011-01-01

    End of 2009 and early 2010 a sealed cell for foam generation and observation was designed and manufactured at Astrium Friedrichshafen facilities. With the use of this cell, different sample compositions of 'wet foams' have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the ISS Fluid Science Laboratory. The sample cell supports multiple observation methods, such as Diffusing-Wave and Diffuse Transmission Spectrometry, Time Resolved Correlation Spectroscopy and microscope observation; all of these methods are applied in the cell with a relatively small experiment volume. These units will be on-orbit replaceable sets that will allow the processing of multiple sample compositions (in the range of >40).

  7. Optimal laser heating of plasmas confined in strong solenoidal magnetic fields

    International Nuclear Information System (INIS)

    Vitela, J.; Akcasu, A.Z.

    1987-01-01

    Optimal Control Theory is used to analyze the laser-heating of plasmas confined in strong solenoidal magnetic fields. Heating strategies that minimize a linear combination of heating time and total energy spent by the laser system are found. A numerical example is used to illustrate the theory. Results of this example show that by an appropriate modulation of the laser intensity, significant savings in the laser energy are possible with only slight increases in the heating time. However, results may depend strongly on the initial state of the plasma and on the final ion temperature. (orig.)

  8. The optimally sampled galaxy-wide stellar initial mass function. Observational tests and the publicly available GalIMF code

    Science.gov (United States)

    Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel

    2017-11-01

    Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling, as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. In particular, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF also results in a relation between the SFR of a galaxy and the mass of its most massive star such that, if there were no binaries, galaxies with SFR … For the first time, we show optimally sampled galaxy-wide IMFs (OSGIMF) that mimic the IGIMF with an additional serrated feature. Finally, a Python module, GalIMF, is provided, allowing the calculation of the IGIMF and OSGIMF as functions of the galaxy-wide SFR and metallicity. A copy of the Python code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126
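
    The abstract does not state the optimal-sampling equations; the following LaTeX sketch records the standard Kroupa-type formulation that GalIMF-style implementations are generally based on. All symbols (ξ, m_max, M_ecl) are assumed notation, not quotations from the paper.

```latex
% Sketch (assumed notation): in optimal sampling, the most massive star
% m_max of a cluster with stellar mass M_ecl is fixed deterministically by
\begin{align}
  1 &= \int_{m_{\max}}^{m_{\mathrm{up}}} \xi(m)\,\mathrm{d}m ,
  &
  M_{\mathrm{ecl}} &= \int_{m_{\mathrm{low}}}^{m_{\max}} m\,\xi(m)\,\mathrm{d}m ,
\end{align}
% and the remaining stellar masses follow by placing exactly one star in
% each successive mass interval:
\begin{equation}
  1 = \int_{m_{i+1}}^{m_{i}} \xi(m)\,\mathrm{d}m .
\end{equation}
```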

  9. Field implementation of geological steering techniques optimizes drilling in highly-deviated and horizontal wells

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, C. E.; Illfelder, H. M. J.; Pineda, G.

    1998-12-31

    Field implementation of an integrated wellsite geological steering service is described. The service provides timely, useful feedback from real-time logging-while-drilling (LWD) measurements for making immediate course corrections. Interactive multi-dimensional displays of both the geological and petrophysical properties of the formation being penetrated by the wellbore are a prominent feature of the service; the optimization of the drilling is the result of the visualization afforded by the displays. The paper reviews forward modelling techniques, provides a detailed explanation of the principles underlying this new application, and illustrates the application by examples from the field. 5 refs., 1 tab., 8 figs.

  10. Triangular Geometrized Sampling Heuristics for Fast Optimal Motion Planning

    Directory of Open Access Journals (Sweden)

    Ahmed Hussain Qureshi

    2015-02-01

    Full Text Available Rapidly-exploring Random Tree (RRT)-based algorithms have become increasingly popular due to their lower computational complexity as compared with other path planning algorithms. The recently presented RRT* motion planning algorithm improves upon the original RRT algorithm by providing optimal path solutions. While RRT determines an initial collision-free path fairly quickly, RRT* guarantees almost-certain convergence to an optimal, obstacle-free path from the start to the goal points for any given geometrical environment. However, the main limitations of RRT* include its slow processing rate and high memory consumption, due to the large number of iterations required for calculating the optimal path. In order to overcome these limitations, we present another improvement, i.e., the Triangular Geometrized-RRT* (TG-RRT*) algorithm, which utilizes triangular geometrical methods to improve the performance of the RRT* algorithm in terms of the processing time and a decreased number of iterations required for an optimal path solution. Simulations comparing the performance of the improved TG-RRT* with RRT* are presented to demonstrate the overall improvement in performance and optimal path detection.

  11. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hojin; Li Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing Lei [Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States) and Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Ehwa University, Seoul 158-710 (Korea, Republic of); Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States); Department of Statistics, Stanford University, Stanford, California 94305-4065 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5304 (United States)

    2012-07-15

    Purpose: A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the

  12. Accuracy and Effort of Interpolation and Sampling: Can GIS Help Lower Field Costs?

    Directory of Open Access Journals (Sweden)

    Greg Simpson

    2014-12-01

    Full Text Available Sedimentation is a problem for all reservoirs in the Black Hills of South Dakota. Before working on sediment removal, a survey of the extent and distribution of the sediment is needed. Two sample lakes were used to determine which of three interpolation methods gave the most accurate volume results. A secondary goal was to see if fewer samples could be taken while still providing similar results; smaller samples would mean less field time and thus lower costs. Subsamples of 50%, 33% and 25% were taken from the total samples and evaluated for the lowest Root Mean Squared Error values. Throughout the trials, the larger sample sizes generally showed better accuracy than the smaller samples. Graphing the sediment volume estimates of the full, 50%, 33% and 25% samples showed little improvement beyond a sample of approximately 40%-50%, judging by the asymptotes of the separate curves. When we used smaller subsamples the predicted sediment volumes were normally greater than the full-sample volumes. It is suggested that when planning future sediment surveys, workers gather data approximately every 5.21 meters. These sample sizes can be cut in half and still retain relative accuracy if time savings are needed. Volume estimates may suffer slightly with these reduced sample sizes, but the savings in field work can be of benefit. Results from these surveys are used in prioritization of available funds for reclamation efforts.
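
    As a hedged illustration of the subsampling-versus-RMSE comparison described above (not the authors' GIS workflow), the following sketch interpolates synthetic "bathymetry" from random subsamples and scores each fraction against a held-out set; the data and the choice of linear interpolation are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Hypothetical bathymetry: (x, y) sample locations and sediment depths.
pts = rng.uniform(0, 100, size=(500, 2))
depth = np.sin(pts[:, 0] / 15) + 0.01 * pts[:, 1] + rng.normal(0, 0.05, 500)

# Hold out a fixed evaluation set; subsample only the remaining pool.
test, pool = np.arange(100), np.arange(100, 500)

def rmse_for_fraction(frac):
    """RMSE of linear interpolation built from a fraction of the samples."""
    train = rng.choice(pool, size=int(len(pool) * frac), replace=False)
    pred = griddata(pts[train], depth[train], pts[test], method="linear")
    ok = ~np.isnan(pred)            # drop points outside the convex hull
    return np.sqrt(np.mean((pred[ok] - depth[test][ok]) ** 2))

for frac in (1.0, 0.50, 0.33, 0.25):
    print(f"{frac:4.2f}: RMSE = {rmse_for_fraction(frac):.3f}")
```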

  13. Path integral methods for primordial density perturbations - sampling of constrained Gaussian random fields

    International Nuclear Information System (INIS)

    Bertschinger, E.

    1987-01-01

    Path integrals may be used to describe the statistical properties of a random field such as the primordial density perturbation field. In this framework the probability distribution is given for a Gaussian random field subjected to constraints such as the presence of a protovoid or supercluster at a specific location in the initial conditions. An algorithm has been constructed for generating samples of a constrained Gaussian random field on a lattice using Monte Carlo techniques. The method makes possible a systematic study of the density field around peaks or other constrained regions in the biased galaxy formation scenario, and it is effective for generating initial conditions for N-body simulations with rare objects in the computational volume. 21 references
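
    The paper's Monte Carlo path-integral algorithm is not reproduced here, but the conditioning it performs on a lattice is closely related to the standard linear-constraint correction for Gaussian random fields (the Hoffman-Ribak construction). A minimal sketch, with an illustrative covariance and a single "peak" constraint, might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D lattice with a squared-exponential covariance; a stand-in for a
# primordial power spectrum, purely illustrative.
x = np.linspace(0, 10, 200)
C = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.5 ** 2)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))

# Constraint H f = c: impose a 3-sigma "peak" at lattice site 100.
H = np.zeros((1, len(x))); H[0, 100] = 1.0
c = np.array([3.0])

f = L @ rng.standard_normal(len(x))        # unconstrained realization
K = C @ H.T @ np.linalg.inv(H @ C @ H.T)   # gain matrix
f_c = f + (K @ (c - H @ f)).ravel()        # constrained realization

assert np.isclose(f_c[100], 3.0)           # constraint satisfied exactly
```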

  14. Electric field computation and measurements in the electroporation of inhomogeneous samples

    Science.gov (United States)

    Bernardis, Alessia; Bullo, Marco; Campana, Luca Giovanni; Di Barba, Paolo; Dughiero, Fabrizio; Forzan, Michele; Mognaschi, Maria Evelina; Sgarbossa, Paolo; Sieni, Elisabetta

    2017-12-01

    In clinical treatments of a class of tumors, e.g. skin tumors, the drug uptake of tumor tissue is enhanced by means of a pulsed electric field, which permeabilizes the cell membranes. This technique, called electroporation, exploits the conductivity of the tissues; however, the tumor tissue can contain inhomogeneous areas that cause a non-uniform distribution of current. In this paper, the authors propose a field model to predict the effect of tissue inhomogeneity, which can affect the current density distribution. In particular, finite-element simulations considering a non-linear conductivity-field relationship are developed. Measurements on a set of samples subject to controlled inhomogeneity make it possible to assess the numerical model, in view of identifying the equivalent resistance between pairs of electrodes.

  15. Subsurface Noble Gas Sampling Manual

    Energy Technology Data Exchange (ETDEWEB)

    Carrigan, C. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sun, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-18

    The intent of this document is to provide information about best available approaches for performing subsurface soil gas sampling during an On Site Inspection or OSI. This information is based on field sampling experiments, computer simulations and data from the NA-22 Noble Gas Signature Experiment Test Bed at the Nevada Nuclear Security Site (NNSS). The approaches should optimize the gas concentration from the subsurface cavity or chimney regime while simultaneously minimizing the potential for atmospheric radioxenon and near-surface Argon-37 contamination. Where possible, we quantitatively assess differences in sampling practices for the same sets of environmental conditions. We recognize that all sampling scenarios cannot be addressed. However, if this document helps to inform the intuition of the reader about addressing the challenges resulting from the inevitable deviations from the scenario assumed here, it will have achieved its goal.

  16. AMORE-HX: a multidimensional optimization of radial enhanced NMR-sampled hydrogen exchange

    International Nuclear Information System (INIS)

    Gledhill, John M.; Walters, Benjamin T.; Wand, A. Joshua

    2009-01-01

    The Cartesian-sampled three-dimensional HNCO experiment is inherently limited in time resolution and sensitivity for the real-time measurement of protein hydrogen exchange. This is largely overcome by use of the radial HNCO experiment, which employs optimized sampling angles. The significant practical limitation presented by three-dimensional data, namely the large data storage and processing requirements, is largely overcome by taking advantage of the inherent capability of the 2D-FT to process selective frequency space without artifact or limitation. Decomposition of angle spectra into positive and negative ridge components provides increased resolution and allows statistical averaging of intensity and therefore increased precision. Strategies for averaging ridge cross sections within and between angle spectra are developed to allow further statistical approaches for increasing the precision of measured hydrogen occupancy. Intensity artifacts potentially introduced by over-pulsing are effectively eliminated by use of the BEST approach.

  17. Optimally Stopped Optimization

    Science.gov (United States)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one to two orders of magnitude faster than the HFS solver.
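
    A simplified, hedged reading of the expected-cost idea: for an i.i.d. solver whose calls cost c each, the threshold policy "stop at the first value ≤ θ" has expected cost c/p(θ) + E[v | v ≤ θ]. The sketch below minimizes this over θ on synthetic benchmark data; the value distribution and cost are assumptions, and the paper's actual formulation is more general.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical benchmark data: objective values returned by a randomized
# solver on repeated, independent calls (lower is better).
values = rng.lognormal(mean=0.0, sigma=0.7, size=5000)
cost_per_call = 0.05     # illustrative cost charged per solver call

def expected_total_cost(theta):
    """E[cost] of: keep calling until a value <= theta appears."""
    accept = values <= theta
    p = accept.mean()
    if p == 0:
        return np.inf
    # Geometric number of calls with success prob. p, plus accepted value.
    return cost_per_call / p + values[accept].mean()

thetas = np.quantile(values, np.linspace(0.01, 1.0, 100))
costs = [expected_total_cost(t) for t in thetas]
best = thetas[int(np.argmin(costs))]
print(f"optimal threshold ~ {best:.3f}, expected cost ~ {min(costs):.3f}")
```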

  18. Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction

    Science.gov (United States)

    2016-01-01

    Near-field microwave imaging with a multistatic radar array is addressed; a fully general multistatic reconstruction is computationally extreme, so Fast Fourier Transform (FFT) imaging with a multistatic-to-monostatic correction, valid over a finite imaging domain, is used to construct images efficiently, reconstructing a human-sized scene in 0.048-0.101 s. Index terms: microwave imaging, multistatic radar, FFT.

  19. Near-optimal quantum circuit for Grover's unstructured search using a transverse field

    Science.gov (United States)

    Jiang, Zhang; Rieffel, Eleanor G.; Wang, Zhihui

    2017-06-01

    Inspired by a class of algorithms proposed by Farhi et al. (arXiv:1411.4028), namely, the quantum approximate optimization algorithm (QAOA), we present a circuit-based quantum algorithm to search for a needle in a haystack, obtaining the same quadratic speedup achieved by Grover's original algorithm. In our algorithm, the problem Hamiltonian (oracle) and a transverse field are applied alternately to the system in a periodic manner. We introduce a technique, based on spin-coherent states, to analyze the composite unitary in a single period. This composite unitary drives a closed transition between two states that have high degrees of overlap with the initial state and the target state, respectively. The transition rate in our algorithm is of order Θ(1/√N), and the overlaps are of order Θ(1), yielding a nearly optimal query complexity of T ≃ (π/(2√2))√N. Our algorithm is a QAOA circuit that demonstrates a quantum advantage with a large number of iterations that is not derived from Trotterization of an adiabatic quantum optimization (AQO) algorithm. It also suggests that the analysis required to understand QAOA circuits involves a very different process from estimating the energy gap of a Hamiltonian in AQO.

  20. Optimal usage of computing grid network in the fields of nuclear fusion computing task

    International Nuclear Information System (INIS)

    Tenev, D.

    2006-01-01

    Nowadays nuclear power is becoming a main source of energy. To make its usage more efficient, scientists have created complicated simulation models which require powerful computers. Grid computing is the answer to the need for powerful and accessible computing resources. The article examines and estimates the optimal configuration of the grid environment for complicated nuclear fusion computing tasks. (author)

  1. Optimization of a near-field thermophotovoltaic system operating at low temperature and large vacuum gap

    Science.gov (United States)

    Lim, Mikyung; Song, Jaeman; Kim, Jihoon; Lee, Seung S.; Lee, Ikjin; Lee, Bong Jae

    2018-05-01

    The present work achieves a strong enhancement in the performance of a near-field thermophotovoltaic (TPV) system operating at low temperature and large vacuum-gap width by introducing a hyperbolic-metamaterial (HMM) emitter, multilayered graphene, and an Au backside reflector. Design variables for the HMM emitter and the multilayered-graphene-covered TPV cell are optimized for maximizing the power output of the near-field TPV system with a genetic algorithm. The near-field TPV system with the optimized configuration yields a 24.2-fold enhancement in power output compared with a system with a bulk emitter and a bare TPV cell. Through analysis of the radiative heat transfer together with surface-plasmon-polariton (SPP) dispersion curves, it is found that the coupling of SPPs generated from both the HMM emitter and the multilayered-graphene-covered TPV cell plays a key role in the substantial increase in heat transfer, even at a 200-nm vacuum gap. Further, the backside reflector at the bottom of the TPV cell significantly increases not only the conversion efficiency but also the power output, by generating additional polariton modes which can be readily coupled with the existing SPPs of the HMM emitter and the multilayered-graphene-covered TPV cell.

  2. Optimal sampling plan for clean development mechanism lighting projects with lamp population decay

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2014-01-01

    Highlights: • A metering cost minimisation model incorporating lamp population decay is built to optimise the sampling plans of CDM lighting projects. • The model minimises the total metering cost and optimises the annual sample size during the crediting period. • The required 90/10-criterion sampling accuracy is satisfied for each CDM monitoring report. Abstract: This paper proposes a metering cost minimisation model that minimises metering cost under the constraint of the sampling accuracy requirement for clean development mechanism (CDM) energy efficiency (EE) lighting projects. Small-scale (SSC) CDM EE lighting projects usually expect a crediting period of 10 years, over which the lighting population decays. The SSC CDM sampling guideline requires that the monitored key parameters for quantifying the carbon emission reduction satisfy a sampling accuracy of 90% confidence and 10% precision, known as the 90/10 criterion. For the existing registered CDM lighting projects, sample sizes are decided either by professional judgment or by rule of thumb, without any optimisation. Lighting samples are randomly selected and their energy consumption is monitored continuously by power meters. In this study, the sample-size determination problem is formulated as a metering cost minimisation model incorporating the linear lighting decay model given by the CDM guideline AMS-II.J. The 90/10 criterion is formulated as a set of constraints to the metering cost minimisation problem. Optimal solutions to the problem minimise the metering cost whilst satisfying the 90/10 criterion for each reporting period. The proposed metering cost minimisation model is applicable to other CDM lighting projects with different population decay characteristics as well.
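
    As a hedged sketch of the 90/10 arithmetic (not the paper's joint optimisation, which trades metering cost off across the whole crediting period), the following computes a per-year sample size for a linearly decaying lamp population; the population, decay rate, CV and meter cost are all illustrative assumptions.

```python
import math

# 90/10 criterion (90% confidence, 10% precision); CV = 0.5 is a common default.
Z90, PRECISION, CV = 1.645, 0.10, 0.5
N0 = 100_000          # lamps distributed at project start (assumed)
ANNUAL_DECAY = 0.08   # linear decay rate, AMS-II.J-style model (assumed)
METER_COST = 150.0    # cost of metering one lamp for a year (assumed)

def required_sample(population):
    """Smallest sample meeting 90/10, with finite-population correction."""
    n_inf = (Z90 * CV / PRECISION) ** 2
    return math.ceil(n_inf / (1 + (n_inf - 1) / population))

total = 0.0
for year in range(1, 11):                            # 10-year crediting period
    alive = int(N0 * max(0.0, 1 - ANNUAL_DECAY * year))
    n = required_sample(alive)
    total += n * METER_COST
    print(f"year {year:2d}: population {alive:6d}, sample size {n}")
print(f"naive (per-year) metering cost: {total:.0f}")
```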

  3. Strategies for discovery and optimization of thermoelectric materials: Role of real objects and local fields

    Science.gov (United States)

    Zhu, Hao; Xiao, Chong

    2018-06-01

    Thermoelectric materials provide a renewable and eco-friendly solution to mitigate energy shortages and to reduce environmental pollution via direct heat-to-electricity conversion. Discovery of the novel thermoelectric materials and optimization of the state-of-the-art material systems lie at the core of the thermoelectric society, the basic concept behind these being comprehension and manipulation of the physical principles and transport properties regarding thermoelectric materials. In this mini-review, certain examples for designing high-performance bulk thermoelectric materials are presented from the perspectives of both real objects and local fields. The highlights of this topic involve the Rashba effect, Peierls distortion, local magnetic field, and local stress field, which cover several aspects in the field of thermoelectric research. We conclude with an overview of future developments in thermoelectricity.

  4. Topology optimized permanent magnet systems

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Bahl, Christian; Insinga, Andrea Roberto

    2017-01-01

    Topology optimization of permanent magnet systems consisting of permanent magnets, high permeability iron and air is presented. An implementation of topology optimization for magnetostatics is discussed and three examples are considered. The Halbach cylinder is topology optimized with iron and an increase of 15% in magnetic efficiency is shown. A topology optimized structure to concentrate a homogeneous field is shown to increase the magnitude of the field by 111%. Finally, a permanent magnet with alternating high and low field regions is topology optimized and a Λcool figure of merit of 0.472 is reached, which is an increase of 100% compared to a previous optimized design.

  5. An optimized target-field method for MRI transverse biplanar gradient coil design

    International Nuclear Information System (INIS)

    Zhang, Rui; Xu, Jing; Huang, Kefu; Zhang, Jue; Fang, Jing; Fu, Youyi; Li, Yangjing

    2011-01-01

    Gradient coils are essential components of magnetic resonance imaging (MRI) systems. In this paper, we present an optimized target-field method for designing a transverse biplanar gradient coil with high linearity, low inductance and small resistance, which can well satisfy the requirements of permanent-magnet MRI systems. In this new method, the current density is expressed by trigonometric basis functions with unknown coefficients in polar coordinates. Following the standard procedure, we construct an objective function comprising the total squared error of the magnetic field at all target-field points plus penalty terms associated with the stored magnetic energy and the dissipated power. By adjusting the two penalty factors and minimizing the objective function, the appropriate coefficients of the current density are determined. Applying the stream-function method to the current density, the specific winding patterns on the planes can be obtained. A novel biplanar gradient coil has been designed using this method to operate in a permanent-magnet MRI system. In order to verify the validity of the proposed approach, the gradient magnetic field generated by the resulting current density has been calculated via the Biot-Savart law. The results demonstrate the effectiveness and advantages of the proposed method.
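
    The abstract describes the objective but not its formula; a plausible LaTeX sketch of such a penalized target-field objective (notation assumed, not quoted from the paper) is:

```latex
% Squared field error at the N target points plus penalties on the stored
% magnetic energy W and the dissipated power P (notation illustrative):
\begin{equation}
  F(\{c_k\}) = \sum_{i=1}^{N}
    \left[ B_z(\mathbf{r}_i;\{c_k\}) - B_z^{\mathrm{target}}(\mathbf{r}_i) \right]^2
    + \alpha\, W(\{c_k\}) + \beta\, P(\{c_k\}) ,
\end{equation}
% where {c_k} are the coefficients of the trigonometric current-density
% basis and alpha, beta are the two adjustable penalty factors.
```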

  6. Topology optimized permanent magnet systems

    Science.gov (United States)

    Bjørk, R.; Bahl, C. R. H.; Insinga, A. R.

    2017-09-01

    Topology optimization of permanent magnet systems consisting of permanent magnets, high permeability iron and air is presented. An implementation of topology optimization for magnetostatics is discussed and three examples are considered. The Halbach cylinder is topology optimized with iron and an increase of 15% in magnetic efficiency is shown. A topology optimized structure to concentrate a homogeneous field is shown to increase the magnitude of the field by 111%. Finally, a permanent magnet with alternating high and low field regions is topology optimized and a Λcool figure of merit of 0.472 is reached, which is an increase of 100% compared to a previous optimized design.

  7. Hybrid optimal design of the eco-hydrological wireless sensor network in the middle reach of the Heihe River Basin, China.

    Science.gov (United States)

    Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao

    2014-10-14

    The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions is developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for variogram modeling to represent the variability, another for improving the spatial prediction used to evaluate remote sensing products. The reasonableness of the optimized EHWSN is validated in terms of representativeness, variogram modeling and spatial accuracy, using 15 types of simulated fields generated by unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from the samples have less than 3% mean error relative to the true variograms. Fields are then predicted at multiple scales. As the scale increases, the estimated fields show higher similarity to the simulated fields at block sizes exceeding 240 m. The validations prove that this hybrid sampling method is effective for both objectives when the characteristics of the optimized variables are unknown.

  8. Optimization of sample preparation variables for wedelolactone from Eclipta alba using Box-Behnken experimental design followed by HPLC identification.

    Science.gov (United States)

    Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S

    2013-07-01

    Coumestan wedelolactone is an important phytocomponent from Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study is to develop and optimize supercritical carbon dioxide assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. Response surface methodology was employed to optimize the supercritical carbon dioxide sample preparation, investigating the quantitative effects of the preparation parameters, viz. operating pressure, temperature, modifier concentration and time, on the yield of wedelolactone using a Box-Behnken design. The wedelolactone content was determined using a validated HPLC method. The experimental data were fitted to a second-order polynomial equation using multiple regression analysis and analyzed with appropriate statistical methods. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44%; and extraction time, 60 min. The optimum extraction conditions gave a wedelolactone yield of 15.37 ± 0.63 mg/100 g of E. alba (L.) Hassk, in good agreement with the predicted values. Temperature and modifier concentration showed significant effects on the wedelolactone yield. Supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet-assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
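
    As a hedged illustration of the response-surface step (not the authors' software or data), the sketch below builds the full quadratic design matrix for four coded factors and fits it by least squares; the design rows and yields are fabricated placeholders, and a real four-factor Box-Behnken design would supply around 27 runs rather than this truncated set.

```python
import numpy as np
from itertools import combinations

# Hypothetical Box-Behnken results: coded settings (-1, 0, +1) for pressure,
# temperature, modifier % and time, plus the measured wedelolactone yield.
X = np.array([[-1, -1, 0, 0], [1, -1, 0, 0], [-1, 1, 0, 0], [1, 1, 0, 0],
              [0, 0, -1, -1], [0, 0, 1, -1], [0, 0, -1, 1], [0, 0, 1, 1],
              [0, 0, 0, 0]], dtype=float)   # truncated design, for brevity
y = np.array([11.2, 12.8, 13.9, 14.1, 10.5, 13.0, 12.2, 15.4, 15.1])

def quadratic_design_matrix(X):
    """Columns: 1, x_i, x_i^2, and all pairwise interactions x_i * x_j."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# Least-squares fit of the second-order response surface (minimum-norm
# solution here, since the truncated design is underdetermined).
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print(beta)
```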

  9. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    Science.gov (United States)

    Carpy, R.; Picker, G.; Amann, B.; Ranebo, H.; Vincent-Bonnieu, S.; Minster, O.; Winter, J.; Dettmann, J.; Castiglione, L.; Höhler, R.; Langevin, D.

    2011-12-01

    End of 2009 and early 2010 a sealed cell for foam generation and observation was designed and manufactured at Astrium Friedrichshafen facilities. With the use of this cell, different sample compositions of "wet foams" have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the ISS Fluid Science Laboratory. The sample cell supports multiple observation methods such as Diffusing-Wave and Diffuse Transmission Spectrometry, Time Resolved Correlation Spectroscopy [1] and microscope observation; all of these methods are applied in the cell with a relatively small experiment volume, and the on-orbit replaceable sample units allow the processing of multiple sample compositions (in the range of >40).

  10. Toward high-resolution NMR spectroscopy of microscopic liquid samples

    Energy Technology Data Exchange (ETDEWEB)

    Butler, Mark C.; Mehta, Hardeep S.; Chen, Ying; Reardon, Patrick N.; Renslow, Ryan S.; Khbeis, Michael; Irish, Duane; Mueller, Karl T.

    2017-01-01

    A longstanding limitation of high-resolution NMR spectroscopy is the requirement for samples to have macroscopic dimensions. Commercial probes, for example, are designed for volumes of at least 5 μL, in spite of decades of work directed toward the goal of miniaturization. Progress in miniaturizing inductive detectors has been limited by a perceived need to meet two technical requirements: (1) minimal separation between the sample and the detector, which is essential for sensitivity, and (2) near-perfect magnetic-field homogeneity at the sample, which is typically needed for spectral resolution. The first of these requirements is real, but the second can be relaxed, as we demonstrate here. By using pulse sequences that yield high-resolution spectra in an inhomogeneous field, we eliminate the need for near-perfect field homogeneity and the accompanying requirement for susceptibility matching of microfabricated detector components. With this requirement removed, typical imperfections in microfabricated components can be tolerated, and detector dimensions can be matched to those of the sample, even for samples of volume ≪ 5 μL. Pulse sequences that are robust to field inhomogeneity thus enable small-volume detection with optimal sensitivity. We illustrate the potential of this approach to miniaturization by presenting spectra acquired with a flat-wire detector that can easily be scaled to subnanoliter volumes. In particular, we report high-resolution NMR spectroscopy of an alanine sample of volume 500 pL.

  11. Intermolecular Force Field Parameters Optimization for Computer Simulations of CH4 in ZIF-8

    Directory of Open Access Journals (Sweden)

    Phannika Kanthima

    2016-01-01

    Full Text Available The differential evolution (DE) algorithm is applied to obtain optimized intermolecular interaction parameters between CH4 and 2-methylimidazolate ([C4N2H5]−) using quantum binding energies of CH4-[C4N2H5]− complexes. The initial parameters and their upper/lower bounds are obtained from the general AMBER force field. The DE-optimized and the AMBER parameters are then used in molecular dynamics (MD) simulations of CH4 molecules in the framework of ZIF-8. The results show that the DE parameters represent the quantum interaction energies better than the AMBER parameters. The dynamical and structural behaviors obtained from MD simulations with the two sets of parameters also differ notably.
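
    SciPy's differential_evolution implements the same class of optimizer; a minimal sketch of fitting interaction parameters to quantum binding energies (with a toy two-parameter model and made-up reference energies standing in for the paper's force-field evaluation) could look like:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical QM reference data: ab initio binding energies for a set of
# CH4-[C4N2H5]- complex geometries (values illustrative).
qm_energies = np.array([-3.1, -2.4, -1.0, -0.2, 0.9])

def model_energies(params):
    """Force-field energies for the same geometries.

    In the paper this would evaluate Lennard-Jones/charge terms for each
    complex; here a toy two-parameter LJ model stands in for that step.
    """
    eps, sigma = params
    r = np.linspace(3.2, 5.0, len(qm_energies))   # placeholder separations
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def objective(params):
    """Sum of squared deviations from the quantum binding energies."""
    return np.sum((model_energies(params) - qm_energies) ** 2)

# Bounds taken around initial force-field values, as in the paper's setup
# (the numbers here are illustrative).
result = differential_evolution(objective, bounds=[(0.05, 5.0), (2.5, 4.5)],
                                seed=0, tol=1e-8)
print(result.x, result.fun)
```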

  12. Frequency locking of a field-widened Michelson interferometer based on optimal multi-harmonics heterodyning.

    Science.gov (United States)

    Cheng, Zhongtao; Liu, Dong; Zhou, Yudi; Yang, Yongying; Luo, Jing; Zhang, Yupeng; Shen, Yibing; Liu, Chong; Bai, Jian; Wang, Kaiwei; Su, Lin; Yang, Liming

    2016-09-01

    A general resonant frequency locking scheme for a field-widened Michelson interferometer (FWMI), which is intended as a spectral discriminator in a high-spectral-resolution lidar, is proposed based on optimal multi-harmonics heterodyning. By transferring the energy of a reference laser to multi-harmonics of different orders generated by optimal electro-optic phase modulation, the heterodyne signal of these multi-harmonics through the FWMI can reveal the resonant frequency drift of the interferometer very sensitively within a large frequency range. This approach can overcome the locking difficulty induced by the low finesse of the FWMI, thus contributing to excellent locking accuracy and lock acquisition range without any constraint on the interferometer itself. The theoretical and experimental results are presented to verify the performance of this scheme.

  13. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    International Nuclear Information System (INIS)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles

    2014-01-01

    Graphical abstract: -- Highlights: • We optimized sample preparation for MALDI TOF analysis of poly(styrene-co-pentafluorostyrene) copolymers. • The influence of matrix choice was investigated. • The influence of the matrix/analyte ratio was examined. • The influence of the analyte/salt ratio (for the Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of poly(styrene-co-pentafluorostyrene) copolymers, synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions of the styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that the individual styrene chain length distributions are more sensitive to a change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  14. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    Energy Technology Data Exchange (ETDEWEB)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles, E-mail: cwilkins@uark.edu

    2014-01-15

    Graphical abstract: -- Highlights: • We optimized sample preparation for MALDI TOF analysis of poly(styrene-co-pentafluorostyrene) copolymers. • The influence of matrix choice was investigated. • The influence of the matrix/analyte ratio was examined. • The influence of the analyte/salt ratio (for the Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of poly(styrene-co-pentafluorostyrene) copolymers, synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions of the styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that the individual styrene chain length distributions are more sensitive to a change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  15. Ratio methods for cost-effective field sampling of commercial radioactive low-level wastes

    International Nuclear Information System (INIS)

    Eberhardt, L.L.; Simmons, M.A.; Thomas, J.M.

    1985-07-01

    In many field studies to determine the quantities of radioactivity at commercial low-level radioactive waste sites, preliminary appraisals are made with field radiation detectors or other relatively inaccurate devices. More accurate determinations are subsequently made with procedures requiring chemical separations or other expensive analyses. The costs of these laboratory determinations are often large, so that adequate sampling may not be achieved under budget limitations. In this report, we propose double sampling as a way to combine the expensive and inexpensive approaches and substantially reduce overall costs. The underlying theory was developed for human and agricultural surveys, and is partially based on assumptions that are not appropriate for commercial low-level waste sites. Consequently, extensive computer simulations were conducted to determine whether the results can be applied in circumstances of importance to the Nuclear Regulatory Commission. This report gives the simulation details and concludes that the principal equations are appropriate for most studies at commercial low-level waste sites. A few points require further research using actual commercial low-level radioactive waste site data. The final section of the report provides some guidance (via an example) for the field use of double sampling. Details of the simulation programs are available from the authors. Major findings are listed in the Executive Summary. 9 refs., 9 figs., 30 tabs
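
    The double-sampling (two-phase) ratio estimator the report evaluates can be sketched in a few lines: a large, cheap phase-1 survey of an auxiliary variable x (e.g., field-detector readings) is combined with expensive phase-2 lab values y measured on a subsample. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Phase 1: a large, cheap survey (e.g., field radiation-detector readings).
n1 = 400
x1 = rng.gamma(shape=2.0, scale=10.0, size=n1)

# Phase 2: expensive lab analyses on a small subsample of those points.
n2 = 40
idx = rng.choice(n1, size=n2, replace=False)
x2 = x1[idx]
y2 = 0.8 * x2 + rng.normal(0, 2.0, size=n2)   # lab values, correlated with x

# Double-sampling ratio estimator of the mean of y:
r_hat = y2.mean() / x2.mean()                 # ratio from the subsample
y_bar_ds = r_hat * x1.mean()                  # scaled by the cheap-survey mean

print(f"subsample mean      : {y2.mean():.2f}")
print(f"double-sampling est.: {y_bar_ds:.2f}")
```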

  16. Field manual for geohydrological sampling as applied to the radioactive waste disposal program

    International Nuclear Information System (INIS)

    Levin, M.

    1983-08-01

    This report serves as a manual for geohydrological sampling as practised by NUCOR's Geology Department. It discusses systematically all aspects concerned with sampling and stresses those where negligence has caused complications in the past. Accurate, neat and systematic procedures are emphasised. The report is intended as a reference work for the field technician. Analytical data on water samples provide an indication of the geohydrological processes taking place during the interaction between groundwater and the enclosing aquifers. It is possible to identify water bodies, using some of a multitude of parameters such as major ions, trace elements and isotopes which may give clues as to the origin, directions of flow and age of groundwater bodies. The South African Radioactive Waste Project also requires this information for determining the direction of migration of the radionuclides in the environment in the event of a spillage. The sampling procedures required for water, and in particular groundwater, must be applied in such a manner that the natural variation of dissolved species is not disturbed to any significant degree. With this in mind, the operator has to exercise meticulous care during initial preparation, collection, storing, preserving and handling of the water samples. This report is a field manual and describes the procedures adopted for the Radwaste Project geohydrological investigations in the Northwest Cape

  17. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot, if the average cyst count per examined plot exceeds 75 cysts per 100 g of soil. Goodness of fit of the data to probability distributions, tested with the χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with fewer than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots, to ensure more reliable estimation of the population density of cyst nematodes.

  18. Optimal sampling theory and population modelling - Application to determination of the influence of the microgravity environment on drug distribution and elimination

    Science.gov (United States)

    Drusano, George L.

    1991-01-01

    The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.

  19. Electro-Optic Sampling of Transient Electric Fields from Charged Particle Beams

    Energy Technology Data Exchange (ETDEWEB)

    Fitch, Michael James [Rochester U.

    2000-01-01

    The passage of a relativistic charged particle beam bunch through a structure is accompanied by transient electromagnetic fields. By causality, these fields must be behind the bunch, and are called "wakefields." The wakefields act back on the beam, and cause instabilities such as the beam break-up instability and the head-tail instability, which limit the luminosity of linear colliders. The wakefields are particularly important for short bunches with high charge. A great deal of effort is devoted to analytical and numerical calculations of wakefields and wakefield effects. Experimental numbers are needed. In this thesis, we present measurements of the transient electric fields induced by a short high-charge electron bunch passing through a 6-way vacuum cross. These measurements are performed in the time domain using electro-optic sampling with a time resolution of approximately 5 picoseconds. With different orientations of the electro-optic crystal, we have measured different vector components of the electric field. The Fourier transform of the time-domain data yields the product of the beam impedance with the excitation spectrum of the bunch. Since the bunch length is known from streak camera measurements, the k loss factor is directly obtained. There is reasonably good agreement between the experimental k loss factor and calculations from the code MAFIA. To our knowledge, this is the first direct measurement of the k loss factor for bunch lengths shorter than one millimeter (rms). We also present results of magnetic bunch compression (using a dipole chicane) of a high-charge photoinjector beam for two different UV laser pulse lengths on the photocathode. At best compression, a 13.87 nC bunch was compressed to 0.66 mm (2.19 ps) rms, or a peak current of 3 kA. Other results from the photoinjector are given, and the laser system for photocathode excitation and electro-optic sampling is described.

  20. Computation within the auxiliary field approach

    International Nuclear Information System (INIS)

    Baeurle, S.A.

    2003-01-01

    Recently, the classical auxiliary-field methodology has been developed as a new simulation technique for performing calculations within the framework of classical statistical mechanics. Since the approach suffers from a sign problem, a judicious choice of the sampling algorithm, allowing fast statistical convergence and efficient generation of field configurations, is of fundamental importance for a successful simulation. In this paper we focus on the computational aspects of this simulation methodology. We introduce two different types of algorithms: the single-move auxiliary-field Metropolis Monte Carlo algorithm, and two new classes of force-based algorithms which enable multiple-move propagation. In addition, to further optimize the sampling, we describe a preconditioning scheme which permits each field degree of freedom to be treated individually with regard to its evolution through the auxiliary-field configuration space. Finally, we demonstrate the validity and assess the competitiveness of these algorithms on a representative practical example. We believe that they may also provide an interesting possibility for enhancing the computational efficiency of other auxiliary-field methodologies.

  1. Optimal sampling period of the digital control system for the nuclear power plant steam generator water level control

    International Nuclear Information System (INIS)

    Hur, Woo Sung; Seong, Poong Hyun

    1995-01-01

    A great effort has been made to improve nuclear plant control systems by use of digital technologies, and a long-term schedule for the control system upgrade has been prepared with an aim to implementation in next-generation nuclear plants. In the case of a digital control system, it is important to decide the sampling period for analysis and design of the system, because the performance and the stability of a digital control system depend on the value of its sampling period. There is, however, currently no systematic method used universally for determining the sampling period of a digital control system. A traditional way to select the sampling frequency is to use 20 to 30 times the bandwidth of an analog control system with the same system configuration and parameters as the digital one. In this paper, a new method to select the sampling period is suggested which takes into account the performance as well as the stability of the digital control system. Using Irving's steam generator model, the optimal sampling period of an assumed digital control system for steam generator level control is estimated and then verified in the digital control simulation system for the Kori-2 nuclear power plant steam generator level control. Consequently, we conclude that the optimal sampling period of the digital control system for Kori-2 steam generator level control is 1 second for all power ranges. 7 figs., 3 tabs., 8 refs. (Author)
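
    The rule of thumb quoted above is easy to make concrete; with an assumed closed-loop bandwidth (illustrative, not a Kori-2 plant parameter), sampling at 20-30 times the bandwidth brackets a period on the order of one second:

```python
# Rule of thumb from the discussion above: sample at 20-30 times the
# closed-loop bandwidth of the equivalent analog control system.
bandwidth_hz = 0.02   # assumed illustrative bandwidth, not a plant value

for factor in (20, 30):
    fs = factor * bandwidth_hz
    print(f"{factor}x bandwidth: fs = {fs:.2f} Hz, T = {1 / fs:.2f} s")
```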

  2. Sleep and optimism: A longitudinal study of bidirectional causal relationship and its mediating and moderating variables in a Chinese student sample.

    Science.gov (United States)

    Lau, Esther Yuet Ying; Hui, C Harry; Lam, Jasmine; Cheung, Shu-Fai

    2017-01-01

    While both sleep and optimism have been found to be predictive of well-being, few studies have examined their relationship with each other. Neither do we know much about the mediators and moderators of the relationship. This study investigated (1) the causal relationship between sleep quality and optimism in a college student sample, (2) the role of symptoms of depression, anxiety, and stress as mediators, and (3) how circadian preference might moderate the relationship. Internet survey data were collected from 1,684 full-time university students (67.6% female, mean age = 20.9 years, SD = 2.66) at three time-points, spanning about 19 months. Measures included the Attributional Style Questionnaire, the Pittsburgh Sleep Quality Index, the Composite Scale of Morningness, and the Depression Anxiety Stress Scale-21. Moderate correlations were found among sleep quality, depressive mood, stress symptoms, anxiety symptoms, and optimism. Cross-lagged analyses showed a bidirectional effect between optimism and sleep quality. Moreover, path analyses demonstrated that anxiety and stress symptoms partially mediated the influence of optimism on sleep quality, while depressive mood partially mediated the influence of sleep quality on optimism. In support of our hypothesis, sleep quality affects mood symptoms and optimism differently for different circadian preferences. Poor sleep results in depressive mood, and thus pessimism, in non-morning persons only. In contrast, the aggregated (direct and indirect) effects of optimism on sleep quality were invariant of circadian preference. Taken together, people who are pessimistic generally have more anxious mood and stress symptoms, which adversely affect sleep, while morningness seems to have a specific protective effect countering the potential damage poor sleep has on optimism. In conclusion, optimism and sleep quality were both cause and effect of each other. Depressive mood partially explained the effect of sleep quality on optimism.

  3. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    International Nuclear Information System (INIS)

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

    We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors and transfers data via Ethernet. This technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remotely controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG-sensor-based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control for data acquisition rate optimization. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased by several times without affecting the system's operating performance in either local data acquisition or remote process control.
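
    The abstract does not give the adaptation law, so purely as an illustration of the coordination idea, the sketch below (Python) throttles the local acquisition rate from the occupancy of the transmit buffer; every threshold and limit is an assumption.

```python
def adapt_rate(rate_hz, buffer_fill, lo=0.25, hi=0.75,
               factor=1.25, min_hz=10.0, max_hz=1000.0):
    """Illustrative self-adaptive sampling rule: speed up while the
    transmit buffer is mostly empty, back off when it fills up.
    `buffer_fill` is the occupancy fraction in [0, 1]."""
    if buffer_fill < lo:
        rate_hz = min(rate_hz * factor, max_hz)   # headroom available
    elif buffer_fill > hi:
        rate_hz = max(rate_hz / factor, min_hz)   # congestion building
    return rate_hz
```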

  4. Optimizing Solute-Solute Interactions in the GLYCAM06 and CHARMM36 Carbohydrate Force Fields Using Osmotic Pressure Measurements.

    Science.gov (United States)

    Lay, Wesley K; Miller, Mark S; Elcock, Adrian H

    2016-04-12

    GLYCAM06 and CHARMM36 are successful force fields for modeling carbohydrates. To correct recently identified deficiencies with both force fields, we adjusted intersolute nonbonded parameters to reproduce the experimental osmotic coefficient of glucose at 1 M. The modified parameters improve behavior of glucose and sucrose up to 4 M and improve modeling of a dextran 55-mer. While the modified parameters may not be applicable to all carbohydrates, they highlight the use of osmotic simulations to optimize force fields.
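
    An osmotic-coefficient target of the kind used above can be computed by comparing the measured (or simulated) osmotic pressure with the ideal van 't Hoff value. A minimal sketch (Python); the 25.6 bar reading is a hypothetical number, not data from the paper.

```python
def osmotic_coefficient(pi_measured_bar, conc_molar, temp_k=298.15):
    """Osmotic coefficient = measured osmotic pressure divided by the
    ideal van 't Hoff pressure pi_ideal = c * R * T (dilute baseline)."""
    R = 0.083145  # L*bar/(mol*K)
    return pi_measured_bar / (conc_molar * R * temp_k)

# A 1 M solution at 298.15 K has pi_ideal ~ 24.8 bar, so a measured
# 25.6 bar (hypothetical) gives phi ~ 1.03
print(round(osmotic_coefficient(25.6, 1.0), 3))
```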

  5. The concentration of Cs, Sr and other elements in water samples collected in a paddy field

    International Nuclear Information System (INIS)

    Ban-nai, Tadaaki; Hisamatsu, Shun'ichi; Yanai-Kudo, Masumi; Hasegawa, Hidenao; Torikai, Yuji

    2000-01-01

    To investigate elemental concentrations in soil water in a paddy field, samples of the soil water were collected with porous Teflon resin tubes buried in the field. The soil water collections were made at various depths (2.5, 12.5, 25 and 35 cm from the surface) in the paddy field, located in Rokkasho, Aomori, once every two weeks during the rice cultivation period, from May to October in 1998. The paddy field was irrigated from May 7th to July 20th, dried from July 20th to August 5th, then again irrigated until September 16th. Drastic changes in the concentrations of the alkaline earth metal elements, Fe and Mn in the soil water samples were seen at the beginning and end of the midsummer drainage. The concentrations of Cs, Fe, Mn and NH4 in the soil water samples showed a variation pattern similar to that of the alkaline earth metal elements in the waterlogged period. The change of redox potential was considered a possible cause of the concentration variation for these substances. (author)

  6. An Evaluation of Plotless Sampling Using Vegetation Simulations and Field Data from a Mangrove Forest.

    Directory of Open Access Journals (Sweden)

    Renske Hijbeek

    Full Text Available In vegetation science and forest management, tree density is often used as a variable. To determine the value of this variable, reliable field methods are necessary. When vegetation is sparse or not easily accessible, the use of sample plots is not feasible in the field. Therefore, plotless methods, like the Point Centred Quarter Method, are often used as an alternative. In this study we investigate the accuracy of different plotless sampling methods. To this end, tree densities of a mangrove forest were determined and compared with estimates provided by several plotless methods. None of these methods proved accurate across all field sites with mean underestimations up to 97% and mean overestimations up to 53% in the field. Applying the methods to different vegetation patterns shows that when random spatial distributions were used the true density was included within the 95% confidence limits of all the plotless methods tested. It was also found that, besides aggregation and regularity, density trends often found in mangroves contribute to the unreliability. This outcome raises questions about the use of plotless sampling in forest monitoring and management, as well as for estimates of density-based carbon sequestration. We give recommendations to minimize errors in vegetation surveys and recommendations for further in-depth research.
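
    The Point Centred Quarter Method evaluated above estimates density from the distance to the nearest tree in each of four quadrants around every sample point. The sketch below (Python) implements one common estimator (Pollard's); the specific variants compared in the paper may differ.

```python
import math

def pcqm_density(distances):
    """Point Centred Quarter Method density estimate (stems per unit
    area). `distances` holds one 4-tuple per sample point: the distance
    to the nearest tree in each quadrant. Uses Pollard's estimator
    4(4n - 1) / (pi * sum of squared distances)."""
    n = len(distances)
    ss = sum(r * r for quartet in distances for r in quartet)
    return 4 * (4 * n - 1) / (math.pi * ss)

# Two sample points with quadrant distances in metres (toy data)
print(pcqm_density([(1.2, 0.8, 2.0, 1.5), (0.9, 1.1, 1.7, 2.2)]))
```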

  7. MO-AB-BRA-08: Rapid Treatment Field Uniformity Optimization for Total Skin Electron Beam Therapy Using Cherenkov Imaging

    International Nuclear Information System (INIS)

    Andreozzi, J; Zhang, R; Glaser, A; Pogue, B; Jarvis, L; Williams, B; Gladstone, D

    2015-01-01

    Purpose: To evaluate treatment field heterogeneity resulting from gantry angle choice in total skin electron beam therapy (TSEBT) following a modified Stanford dual-field technique, and determine a relationship between source to surface distance (SSD) and optimized gantry angle spread. Methods: Cherenkov imaging was used to image 62 treatment fields on a sheet of 1.2m x 2.2m x 1.2cm polyethylene following standard TSEBT setup at our institution (6 MeV, 888 MU/min, no spoiler, SSD=441cm), where gantry angles spanned from 239.5° to 300.5° at 1° increments. Average Cherenkov intensity and coefficient of variation in the region of interest were compared for the set of composite Cherenkov images created by summing all unique combinations of angle pairs to simulate dual-field treatment. The angle pair which produced the lowest coefficient of variation was further studied using an ionization chamber. The experiment was repeated at SSD=300cm, and SSD=370.5cm. Cherenkov imaging was also implemented during TSEBT of three patients. Results: The most uniform treatment region from a symmetric angle spread was achieved using gantry angles +/−17.5° about the horizontal axis at SSD=441cm, +/−18.5° at SSD=370.5cm, and +/−19.5° at SSD=300cm. Ionization chamber measurements comparing the original treatment spread (+/−14.5°) and the optimized angle pair (+/−17.5°) at SSD=441cm showed no significant deviation (r=0.999) in percent depth dose curves, and chamber measurements from nine locations within the field showed an improvement in dose uniformity from 24.41% to 9.75%. Ionization chamber measurements correlated strongly (r=0.981) with Cherenkov intensity measured concurrently on the flat Plastic Water phantom. Patient images and TLD results also showed modest uniformity improvements. Conclusion: A decreasing linear relationship between optimal angle spread and SSD was observed. Cherenkov imaging offers a new method of rapidly analyzing and optimizing TSEBT setup
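
    The pair-selection step described above, summing single-field images over every gantry-angle combination and ranking the composites by coefficient of variation in the region of interest, is straightforward to reproduce. A sketch (Python) with an assumed dictionary-of-arrays interface for the Cherenkov images:

```python
import itertools
import numpy as np

def best_angle_pair(images, angles):
    """Return (min CV, (angle_a, angle_b)) over all angle pairs, where
    each composite image simulates a dual-field treatment and CV is the
    coefficient of variation in the region of interest.
    `images` maps angle (deg) -> 2-D ROI array (assumed interface)."""
    best = None
    for a, b in itertools.combinations(angles, 2):
        comp = images[a] + images[b]
        cv = comp.std() / comp.mean()
        if best is None or cv < best[0]:
            best = (cv, (a, b))
    return best
```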

  8. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    International Nuclear Information System (INIS)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-01-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real-time. However, to use existing WIM sensors effectively, vehicles must exit the traffic stream and slow down to match the sensors' capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experiences of operation engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition of high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of the vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors. (paper)

  9. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    Science.gov (United States)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-06-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real-time. However, to use existing WIM sensors effectively, vehicles must exit the traffic stream and slow down to match the sensors' capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experiences of operation engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition of high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of the vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.
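
    The core scaling, sample rate as a function of vehicle speed, follows from requiring a fixed spatial spacing between successive samples along the pavement. The sketch below (Python) illustrates that relationship with an assumed 1 cm resolution target; the paper derives its own requirement rather than this particular number.

```python
def required_sample_rate(speed_kmh, spatial_res_m=0.01):
    """Minimum sample rate (Hz) so that successive samples fall no more
    than `spatial_res_m` apart along the pavement at the given speed.
    The 1 cm default is an illustrative assumption."""
    return (speed_kmh / 3.6) / spatial_res_m

print(required_sample_rate(100.0))  # 100 km/h -> about 2778 Hz
```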

  10. SU-E-T-262: Planning for Proton Pencil Beam Scanning (PBS): Applications of Gradient Optimization for Field Matching

    Energy Technology Data Exchange (ETDEWEB)

    Lin, H; Kirk, M; Zhai, H; Ding, X; Liu, H; Hill-Kayser, C; Lustig, R; Tochner, Z; Deville, C; Vapiwala, N; McDonough, J; Both, S [University Pennsylvania, Philadelphia, PA (United States)

    2014-06-01

    Purpose: To propose the gradient optimization (GO) approach in planning for matching proton PBS fields and present two commonly used applications in our institution. Methods: GO is employed for PBS field matching in scenarios where the size of the target exceeds the field size limit of the beam delivery system, or where matching is required for beams from different angles, either to improve the sparing of important organs or to pass through a short and simple beam path. An overlap is designed between adjacent fields, and in the overlapped junction the dose is optimized such that it gradually decreases in one field, with the decrease compensated by an increase from the other field. Clinical applications of this approach to craniospinal irradiation (CSI) and whole-pelvis treatment were presented. A mathematical model was developed to study the relationships between dose errors, setup errors and junction lengths. Results: Uniform and conformal dose coverage of the entire target volumes was achieved for both applications using the GO approach. For CSI, the gradient matching (6.7 cm junction) between fields overcame the complexity of planning associated with feathering match lines. A slow dose gradient in the junction area significantly reduced the sensitivity of the treatment to setup errors. For the whole pelvis, gradient matching (4 cm junction) between posterior fields for the superior target and bilateral fields for the inferior target provided dose sparing to organs such as bowel, bladder and rectum. For a setup error of 3 mm in the longitudinal direction from one field, the mathematical model predicted dose errors of 10%, 6% and 4.3% for junction lengths of 3, 5 and 7 cm. Conclusion: This GO approach improves the quality of PBS treatment plans with matching fields while maintaining the safety of treatment delivery relative to potential misalignments.

  11. SU-E-T-262: Planning for Proton Pencil Beam Scanning (PBS): Applications of Gradient Optimization for Field Matching

    International Nuclear Information System (INIS)

    Lin, H; Kirk, M; Zhai, H; Ding, X; Liu, H; Hill-Kayser, C; Lustig, R; Tochner, Z; Deville, C; Vapiwala, N; McDonough, J; Both, S

    2014-01-01

    Purpose: To propose the gradient optimization (GO) approach in planning for matching proton PBS fields and present two commonly used applications in our institution. Methods: GO is employed for PBS field matching in scenarios where the size of the target exceeds the field size limit of the beam delivery system, or where matching is required for beams from different angles, either to improve the sparing of important organs or to pass through a short and simple beam path. An overlap is designed between adjacent fields, and in the overlapped junction the dose is optimized such that it gradually decreases in one field, with the decrease compensated by an increase from the other field. Clinical applications of this approach to craniospinal irradiation (CSI) and whole-pelvis treatment were presented. A mathematical model was developed to study the relationships between dose errors, setup errors and junction lengths. Results: Uniform and conformal dose coverage of the entire target volumes was achieved for both applications using the GO approach. For CSI, the gradient matching (6.7 cm junction) between fields overcame the complexity of planning associated with feathering match lines. A slow dose gradient in the junction area significantly reduced the sensitivity of the treatment to setup errors. For the whole pelvis, gradient matching (4 cm junction) between posterior fields for the superior target and bilateral fields for the inferior target provided dose sparing to organs such as bowel, bladder and rectum. For a setup error of 3 mm in the longitudinal direction from one field, the mathematical model predicted dose errors of 10%, 6% and 4.3% for junction lengths of 3, 5 and 7 cm. Conclusion: This GO approach improves the quality of PBS treatment plans with matching fields while maintaining the safety of treatment delivery relative to potential misalignments.
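
    The quoted dose errors are consistent with a simple linear-ramp junction model in which a longitudinal shift d of one field produces a maximum dose error of roughly d divided by the junction length. The sketch below (Python) reproduces the reported numbers under that reading; it is an interpretation of the abstract, not the authors' published model.

```python
def junction_dose_error(setup_shift_mm, junction_cm):
    """Maximum dose error (%) for linearly ramped matched fields when
    one field shifts longitudinally: shift / junction length."""
    return (setup_shift_mm / 10.0) / junction_cm * 100.0

for junction in (3, 5, 7):  # cm, with the 3 mm setup error from above
    print(junction, "cm:", round(junction_dose_error(3.0, junction), 1), "%")
# -> 10.0 %, 6.0 % and 4.3 %, matching the reported predictions
```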

  12. From Field to the Web: Management and Publication of Geoscience Samples in CSIRO Mineral Resources

    Science.gov (United States)

    Devaraju, A.; Klump, J. F.; Tey, V.; Fraser, R.; Reid, N.; Brown, A.; Golodoniuc, P.

    2016-12-01

    Inaccessible samples are an obstacle to the reproducibility of research and may cause waste of time and resources through duplication of sample collection and management. Within the Commonwealth Scientific and Industrial Research Organisation (CSIRO) Mineral Resources there are various research communities who collect or generate physical samples as part of their field studies and analytical processes. Materials can be varied and could be rock, soil, plant materials, water, and even synthetic materials. Given the wide range of applications in CSIRO, each researcher or project may follow their own method of collecting, curating and documenting samples. In many cases samples and their documentation are often only available to the sample collector. For example, the Australian Resources Research Centre stores rock samples and research collections dating as far back as the 1970s. Collecting these samples again would be prohibitively expensive and in some cases impossible because the site has been mined out. These samples would not be easily discoverable by others without an online sample catalog. We identify some of the organizational and technical challenges to provide unambiguous and systematic access to geoscience samples, and present their solutions (e.g., workflow, persistent identifier and tools). We present the workflow starting from field sampling to sample publication on the Web, and describe how the International Geo Sample Number (IGSN) can be applied to identify samples along the process. In our test case geoscientific samples are collected as part of the Capricorn Distal Footprints project, a collaboration project between the CSIRO, the Geological Survey of Western Australia, academic institutions and industry partners. We conclude by summarizing the values of our solutions in terms of sample management and publication.

  13. Optimizing detection of noble gas emission at a former UNE site: sample strategy, collection, and analysis

    Science.gov (United States)

    Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.

    2013-12-01

    Underground nuclear tests may be first detected by seismic or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that, in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used in an OSI scenario, on-site sampling conditions, required sampling volumes and establishment of background concentrations of noble gases require development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion site located in welded volcanic tuff. A mixture of SF6, Xe-127 and Ar-37 was metered into 4400 m³ of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. The effectiveness of the various sampling approaches and the results of the tracer gas measurements will be presented.

  14. Neutron activation analysis for the optimal sampling and extraction of extractable organohalogens in human hair

    International Nuclear Information System (INIS)

    Zhang, H.; Chai, Z.F.; Sun, H.B.; Xu, H.F.

    2005-01-01

    Many persistent organohalogen compounds such as DDTs and polychlorinated biphenyls have caused serious environmental pollution problems that now involve all life. Neutron activation analysis (NAA) is a very convenient method for halogen analysis and is also the only method currently available for simultaneously determining organic chlorine, bromine and iodine in one extract. Human hair is a convenient material for evaluating the burden of such compounds in the human body and can be easily collected from people over wide ranges of age, sex, residential area, eating habits and working environment. To effectively extract organohalogen compounds from human hair, in the present work the optimal Soxhlet extraction times of extractable organohalogens (EOX) and extractable persistent organohalogens (EPOX) from hair of different lengths were studied by NAA. The results indicated that the optimal Soxhlet extraction time of EOX and EPOX from human hair was 8-11 h, and the highest EOX and EPOX contents were observed in the hair powder extract. The concentrations of both EOX and EPOX in the different hair sections were in the order hair powder ≥ 2 mm > 5 mm, which indicates that milling hair samples into powder, or cutting them into very short sections, matters not only for sample homogeneity but also for the best extraction efficiency.

  15. SOLVE: a nonlinear least-squares code and its application to the optimal placement of torsatron vertical field coils

    International Nuclear Information System (INIS)

    Aspinall, J.

    1982-01-01

    A computational method was developed which alleviates the need for lengthy parametric scans as part of a design process. The method makes use of a least-squares algorithm to find the optimal value of a parameter vector, where "optimal" is defined in terms of a utility function prescribed by the user. The placement of the vertical field coils of a torsatron is such a nonlinear problem.
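
    In modern terms the same pattern, minimizing user-defined residuals over a parameter vector, takes a few lines with an off-the-shelf solver. The sketch below (Python/SciPy) uses a toy residual function in place of the torsatron field model, which the thesis computes from the coil geometry.

```python
import numpy as np
from scipy.optimize import least_squares

def field_error(p, x):
    # Toy residuals: mismatch between a quadratic model with
    # parameters p and a desired profile at the evaluation points x
    return p[0] * x ** 2 + p[1] * x + p[2] - np.sin(x)

x = np.linspace(0.0, 1.0, 20)
fit = least_squares(field_error, x0=np.zeros(3), args=(x,))
print(fit.x)  # parameter vector minimizing the summed squared residuals
```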

  16. Plasma grid design for optimized filter field configuration for the NBI test facility ELISE

    International Nuclear Information System (INIS)

    Nocentini, R.; Gutser, R.; Heinemann, B.; Froeschle, M.; Riedl, R.

    2009-01-01

    Maintenance-free RF sources for negative hydrogen ions with moderate extraction areas (100-200 cm²) have been successfully developed in recent years at IPP Garching in the test facilities BATMAN and MANITU. A facility with a larger extraction area (1000 cm²), ELISE, is being designed with a 'half-size' ITER-like extraction system, pulsed ion acceleration up to 60 kV for 10 s and plasma generation up to 1 h. Due to the large size of the source, the magnetic filter field (FF) cannot be produced solely by permanent magnets. Therefore, an additional magnetic field produced by a current flowing through the plasma grid (PG current) is required. The filter field homogeneity and the interaction with the electron suppression magnetic field were studied in detail by the finite element method (FEM) during the ELISE design phase. Significant improvements regarding the field homogeneity have been introduced compared to the ITER reference design. Also, for the same PG current, a 50% higher field in front of the grid has been achieved by optimizing the plasma grid geometry. Hollow spaces have been introduced in the plasma grid for a more homogeneous PG current distribution; they also allow the insertion of permanent magnets in the plasma grid.

  17. Cohesive phase-field fracture and a PDE constrained optimization approach to fracture inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Tupek, Michael R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-06-30

    In recent years there has been a proliferation of modeling techniques for forward predictions of crack propagation in brittle materials, including: phase-field/gradient damage models, peridynamics, cohesive-zone models, and G/XFEM enrichment techniques. However, progress on the corresponding inverse problems has been relatively lacking. Taking advantage of key features of existing modeling approaches, we propose a parabolic regularization of Barenblatt cohesive models which borrows extensively from previous phase-field and gradient damage formulations. An efficient explicit time integration strategy for this type of nonlocal fracture model is then proposed and justified. In addition, we present a C++ computational framework for computing input parameter sensitivities efficiently for explicit dynamic problems using the adjoint method. This capability allows for solving inverse problems involving crack propagation to answer interesting engineering questions such as: 1) what is the optimal design topology and material placement for a heterogeneous structure to maximize fracture resistance, 2) what loads must have been applied to a structure for it to have failed in an observed way, 3) what are the existing cracks in a structure given various experimental observations, etc. In this work, we focus on the first of these engineering questions and demonstrate a capability to automatically and efficiently compute optimal designs intended to minimize crack propagation in structures.

  18. Application of optimal control theory to laser heating of a plasma in a solenoidal magnetic field

    International Nuclear Information System (INIS)

    Neal, R.D.

    1975-01-01

    Laser heating of a plasma column confined by a solenoidal magnetic field is studied via modern optimal control techniques. A two-temperature, constant pressure model is used for the plasma so that the temperature and density are functions of time and location along the plasma column. They are assumed to be uniform in the radial direction so that refraction of the laser beam does not occur. The laser intensity used as input to the column at one end is taken as the control variable and plasma losses are neglected. The localized behavior of the plasma heating dynamics is first studied and conventional optimal control theory applied. The distributed parameter optimal control problem is next considered with minimum time to reach a specified final ion temperature criterion as the objective. Since the laser intensity can only be directly controlled at the input end of the plasma column, a boundary control situation results. The problem is unique in that the control is the boundary value of one of the state variables. The necessary conditions are developed and the problem solved numerically for typical plasma parameters. The problem of maximizing the space-time integral of neutron production rate in the plasma is considered for a constant distributed control problem where the laser intensity is assumed fixed at maximum and the external magnetic field is taken as a control variable

  19. Modification and Application of a Leaf Blower-vac for Field Sampling of Arthropods.

    Science.gov (United States)

    Zou, Yi; van Telgen, Mario D; Chen, Junhui; Xiao, Haijun; de Kraker, Joop; Bianchi, Felix J J A; van der Werf, Wopke

    2016-08-10

    Rice fields host a large diversity of arthropods, but investigating their population dynamics and interactions is challenging. Here we describe the modification and application of a leaf blower-vac for suction sampling of arthropod populations in rice. When used in combination with an enclosure, application of this sampling device provides absolute estimates of the populations of arthropods as numbers per standardized sampling area. The sampling efficiency depends critically on the sampling duration. In a mature rice crop, a two-minute sampling in an enclosure of 0.13 m² yields more than 90% of the arthropod population. The device also allows sampling of arthropods dwelling on the water surface or the soil in rice paddies, but it is not suitable for sampling fast-flying insects, such as predatory Odonata or larger hymenopterous parasitoids. The modified blower-vac is simple to construct, and cheaper and easier to handle than traditional suction sampling devices, such as the D-vac. The low cost makes the modified blower-vac also accessible to researchers in developing countries.

  20. Multiple response optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry with sample injection as detergent emulsion

    International Nuclear Information System (INIS)

    Brum, Daniel M.; Lima, Claudio F.; Robaina, Nicolle F.; Fonseca, Teresa Cristina O.; Cassella, Ricardo J.

    2011-01-01

    The present paper reports the optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS) employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized in relation to the experimental conditions for the emulsion formation and taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as a dependent variable. The three factors related to the optimization process were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in the aqueous solution used to emulsify the sample and the volume of solution. At optimum conditions, it was possible to obtain satisfactory results with an emulsion formed by mixing 4 mL of the samples with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for 250 min, at least, and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure and recovery rates, in the range of 88-105%; 94-118% and 95-120%, were verified for Cu, Fe and Pb, respectively.

  1. Multiple response optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry with sample injection as detergent emulsion

    Energy Technology Data Exchange (ETDEWEB)

    Brum, Daniel M.; Lima, Claudio F. [Departamento de Quimica, Universidade Federal de Vicosa, A. Peter Henry Rolfs s/n, Vicosa/MG, 36570-000 (Brazil); Robaina, Nicolle F. [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil); Fonseca, Teresa Cristina O. [Petrobras, Cenpes/PDEDS/QM, Av. Horacio Macedo 950, Ilha do Fundao, Rio de Janeiro/RJ, 21941-915 (Brazil); Cassella, Ricardo J., E-mail: cassella@vm.uff.br [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil)

    2011-05-15

    The present paper reports the optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS) employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized in relation to the experimental conditions for the emulsion formation and taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as a dependent variable. The three factors related to the optimization process were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in the aqueous solution used to emulsify the sample and the volume of solution. At optimum conditions, it was possible to obtain satisfactory results with an emulsion formed by mixing 4 mL of the samples with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for 250 min, at least, and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure and recovery rates, in the range of 88-105%; 94-118% and 95-120%, were verified for Cu, Fe and Pb, respectively.
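
    One standard way to fold several analyte responses into a single "global response" of the kind used as the dependent variable above is a desirability-style scaled geometric mean. The sketch below (Python) shows that pattern; the paper's exact combination rule may differ.

```python
import numpy as np

def global_response(responses):
    """Rows = experiments, columns = analytes (e.g. Cu, Fe, Pb signals).
    Scale each column to [0, 1], then take the geometric mean per row,
    so an experiment scores well only if all analytes respond well."""
    r = np.asarray(responses, dtype=float)
    span = np.ptp(r, axis=0)
    span[span == 0] = 1.0                      # guard constant columns
    scaled = (r - r.min(axis=0)) / span
    return scaled.prod(axis=1) ** (1.0 / r.shape[1])

print(global_response([[0.10, 0.20, 0.05],
                       [0.30, 0.25, 0.08],
                       [0.20, 0.40, 0.12]]))
```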

  2. Development of near-field laser ablation inductively coupled plasma mass spectrometry for sub-micrometric analysis of solid samples

    International Nuclear Information System (INIS)

    Jabbour, Chirelle

    2016-01-01

    A near-field laser ablation method was developed for the chemical analysis of solid samples at the sub-micrometric scale. This analytical technique combines a nanosecond Nd:YAG laser, an atomic force microscope (AFM), and an inductively coupled plasma mass spectrometer (ICPMS). In order to improve the spatial resolution of the laser ablation process, the near-field enhancement effect was exploited by illuminating, with the laser beam, the apex of the AFM conductive sharp tip maintained a few nanometers (5 to 30 nm) above the sample surface. The interaction between the illuminated tip and the sample surface locally enhances the incident laser energy and leads to the ablation process. By applying this technique to conducting gold and tantalum samples, and a semiconducting silicon sample, a lateral resolution of 100 nm and depths of a few nanometers were demonstrated. Two home-made numerical codes enabled the study of two phenomena occurring around the tip: the enhancement of the laser electric field by the tip effect, and the laser-induced heating at the sample surface. The influence of the main operating parameters on these two phenomena, amplification and heating, was studied. An experimental multi-parametric study was carried out in order to understand the effect of different experimental parameters (laser fluence, laser wavelength, number of laser pulses, tip-to-sample distance, sample and tip nature) on the near-field laser ablation efficiency, crater dimensions and amount of ablated material. (author)

  3. Rigorous force field optimization principles based on statistical distance minimization

    Energy Technology Data Exchange (ETDEWEB)

    Vlcek, Lukas, E-mail: vlcekl1@ornl.gov [Chemical Sciences Division, Geochemistry & Interfacial Sciences Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6110 (United States); Joint Institute for Computational Sciences, University of Tennessee, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6173 (United States); Chialvo, Ariel A. [Chemical Sciences Division, Geochemistry & Interfacial Sciences Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6110 (United States)

    2015-10-14

    We use the concept of statistical distance to define a measure of distinguishability between a pair of statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the model’s static measurable properties to those of the target. We exploit this feature to define a rigorous basis for the development of accurate and robust effective molecular force fields that are inherently compatible with coarse-grained experimental data. The new model optimization principles and their efficient implementation are illustrated through selected examples, whose outcome demonstrates the higher robustness and predictive accuracy of the approach compared to other currently used methods, such as force matching and relative entropy minimization. We also discuss relations between the newly developed principles and established thermodynamic concepts, which include the Gibbs-Bogoliubov inequality and the thermodynamic length.

  4. Toxicity evaluation of natural samples from the vicinity of rice fields using two trophic levels.

    Science.gov (United States)

    Marques, Catarina R; Pereira, Ruth; Gonçalves, Fernando

    2011-09-01

    An ecotoxicological screening of environmental samples collected in the vicinity of rice fields combined physical and chemical measurements with chronic bioassays on two freshwater trophic levels (microalgae: Pseudokirchneriella subcapitata and Chlorella vulgaris; daphnids: Daphnia longispina and Daphnia magna). Water and sediment/soil elutriate samples were obtained from three sites: (1) a canal reach crossing a protected wetland upstream, (2) a canal reach surrounded by rice fields and (3) a rice paddy. The sampling was performed before and during the rice culture. During the rice cropping, the quality of the whole system decreased compared to the situation before that period (e.g. nutrient overload, the presence of pesticides in elutriates from sites L2 and L3). This was reinforced by a significant inhibition of the growth of both microalgae, especially under elutriate exposure. In contrast, the life-history traits of daphnids were significantly stimulated with increasing concentrations of water and elutriates, for both sampling periods.

  5. Incorporating covariance estimation uncertainty in spatial sampling design for prediction with trans-Gaussian random fields

    Directory of Open Access Journals (Sweden)

    Gunter Spöck

    2015-05-01

    Full Text Available Recently, Spöck and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed into an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spöck and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data are transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.
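
    The Box-Cox step that underlies trans-Gaussian kriging is simple to demonstrate. A minimal sketch (Python/SciPy) on made-up rainfall values, with the exponent chosen by maximum likelihood:

```python
import numpy as np
from scipy import stats

# Skewed, strictly positive toy "rainfall" data (not the Upper Austria set)
rain = np.array([12.0, 30.5, 7.2, 55.1, 20.3, 3.8])

# Box-Cox transform toward Gaussianity; lam is the ML estimate of the
# exponent in y = (x**lam - 1) / lam
transformed, lam = stats.boxcox(rain)
print(lam, transformed)
```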

  6. Systematic optimization of exterior measurement locations for the determination of interior magnetic field vector components in inaccessible regions

    Energy Technology Data Exchange (ETDEWEB)

    Nouri, N.; Plaster, B.

    2014-12-11

    An experiment may face the challenge of real-time determination of the magnetic field vector components present within some interior region of the experimental apparatus over which it is impossible to directly measure the field components during the operation of the experiment. As a solution to this problem, we propose a general concept which provides for a unique determination of the field components within such an interior region solely from exterior measurements at fixed discrete locations. The method is general and does not require the field to possess any type of symmetry. We describe our systematic approach for optimizing the locations of these exterior measurements which maximizes their sensitivity to successive terms in a multipole expansion of the field.

  7. Model of geophysical fields representation in problems of complex correlation-extreme navigation

    Directory of Open Access Journals (Sweden)

    Volodymyr KHARCHENKO

    2015-09-01

    Full Text Available A model of the optimal representation of spatial data for the task of complex correlation-extreme navigation is developed based on the criterion of minimum deviation between the correlation functions of the original and resulting fields. Calculations are presented for the one-dimensional case using an approximation of the correlation function by a Fourier series. It is shown that, in the presence of different geophysical map data fields, they can be represented by a single template with optimal sampling without distorting the form of the correlation functions.

  8. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    Science.gov (United States)

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, and successfully translates these natural phenomena into the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
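
    The classical one-shot OCBA split referenced above is easy to state: each non-best design receives budget in proportion to (sigma/delta)^2, where delta is its gap to the best observed mean, and the best design receives sigma_b times the root-sum-square of the others' allocations over their sigmas. A sketch (Python); the paper's PSO-specific sequential procedure goes beyond this single split.

```python
import numpy as np

def ocba_allocation(means, stds, budget):
    """One-shot OCBA budget split for a maximization problem."""
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    b = int(means.argmax())                    # current best design
    mask = np.arange(means.size) != b
    share = np.zeros_like(means)
    share[mask] = (stds[mask] / (means[b] - means[mask])) ** 2
    share[b] = stds[b] * np.sqrt(((share[mask] / stds[mask]) ** 2).sum())
    return np.rint(budget * share / share.sum()).astype(int)

print(ocba_allocation([1.0, 1.2, 1.5], [0.3, 0.3, 0.3], 100))
# -> about [15, 41, 44]: most samples go to the best design and its
#    closest competitor, the hardest pair to separate
```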

  9. Efficient 3D porous microstructure reconstruction via Gaussian random field and hybrid optimization.

    Science.gov (United States)

    Jiang, Z; Chen, W; Burkhart, C

    2013-11-01

    Obtaining an accurate three-dimensional (3D) structure of a porous microstructure is important for assessing the material properties based on finite element analysis. Whereas directly obtaining 3D images of the microstructure is impractical under many circumstances, two sets of methods have been developed in the literature to generate (reconstruct) a 3D microstructure from its 2D images: one characterizes the microstructure based on certain statistical descriptors, typically the two-point correlation function and the cluster correlation function, and then performs an optimization process to build a 3D structure that matches those statistical descriptors; the other models the microstructure using stochastic models like a Gaussian random field and generates a 3D structure directly from the function. The former obtains a relatively accurate 3D microstructure, but computationally the optimization process can be very intensive, especially for problems with large image size; the latter generates a 3D microstructure quickly but sacrifices accuracy due to issues in numerical implementations. A hybrid optimization approach of modelling the 3D porous microstructure of random isotropic two-phase materials is proposed in this paper, which combines the two sets of methods and hence maintains the accuracy of the correlation-based method with improved efficiency. The proposed technique is verified for 3D reconstructions based on silica polymer composite images with different volume fractions. A comparison of the reconstructed microstructures and the optimization histories for both the original correlation-based method and our hybrid approach demonstrates the improved efficiency of the approach. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
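
    The descriptor at the heart of the correlation-based branch, the two-point correlation function, can be computed for a periodic binary image with a single FFT round trip. A minimal sketch (Python) on a toy random medium:

```python
import numpy as np

def two_point_correlation(img):
    """Two-point autocorrelation S2 of a binary 2-D microstructure via
    FFT (periodic boundaries assumed): the probability that two points
    at a given separation both fall in the solid phase."""
    f = np.fft.fftn(img.astype(float))
    s2 = np.fft.ifftn(f * np.conj(f)).real / img.size
    return np.fft.fftshift(s2)   # zero separation at the array centre

phase = (np.random.rand(64, 64) < 0.3).astype(int)  # toy 30% phase
print(two_point_correlation(phase)[32, 32])  # ~ the volume fraction
```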

  10. The origin of the coercivity reduction of Nd–Fe–B sintered magnet annealed below an optimal temperature

    International Nuclear Information System (INIS)

    Akiya, T.; Sasaki, T.T.; Ohkubo, T.; Une, Y.; Sagawa, M.; Kato, H.; Hono, K.

    2013-01-01

    In order to understand the origin of the coercivity reduction in a sintered Nd–Fe–B magnet that is annealed below the optimal annealing temperature, we performed focused ion beam/scanning electron microscopy tomography of post-sinter annealed magnets. A number of grain boundary cracks were observed between Nd2Fe14B grains and Nd-rich phases in the sample annealed below the optimal temperature. We deduced the micromagnetic parameters α and Neff by fitting the temperature dependence of the coercivity. While α was constant regardless of the annealing conditions, Neff increased in the sample annealed below the optimal temperature with the reduced coercivity. This indicates that the reduction of the coercivity is due to the local stray field at the cracks. - Highlights: • We performed FIB/SEM tomography of post-sinter annealed magnets. • A number of grain boundary cracks were observed in the low-coercivity sample. • Parameters α and Neff were deduced from the temperature dependence of coercivity. • While α was constant, Neff increased in the low-coercivity sample. • The reduction of the coercivity is due to the local stray field at the cracks.

  11. A simple optimized microwave digestion method for multielement monitoring in mussel samples

    International Nuclear Information System (INIS)

    Saavedra, Y.; Gonzalez, A.; Fernandez, P.; Blanco, J.

    2004-01-01

    With the aim of obtaining a set of common decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using as factors the sample weight, digestion time and acid addition. It was found that the optimal conditions were 0.5 g of freeze-dried and triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. It was possible to carry out the atomic absorption determinations using calibrations with aqueous standards, with matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels, and has been found to perform well.

  12. Bayesian prediction and adaptive sampling algorithms for mobile sensor networks online environmental field reconstruction in space and time

    CERN Document Server

    Xu, Yunfei; Dass, Sarat; Maiti, Tapabrata

    2016-01-01

    This brief introduces a class of problems and models for the prediction of the scalar field of interest from noisy observations collected by mobile sensor networks. It also introduces the problem of optimal coordination of robotic sensors to maximize the prediction quality subject to communication and mobility constraints, either in a centralized or distributed manner. To solve such problems, fully Bayesian approaches are adopted, allowing various sources of uncertainty to be integrated into an inferential framework effectively capturing all aspects of variability involved. The fully Bayesian approach also allows the most appropriate values for additional model parameters to be selected automatically by data, and the optimal inference and prediction for the underlying scalar field to be achieved. In particular, spatio-temporal Gaussian process regression is formulated for robotic sensors to fuse multifactorial effects of observations, measurement noise, and prior distributions for obtaining the predictive distribution.

  13. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 1. Theory

    Science.gov (United States)

    Graham, Wendy D.; Tankersley, Claude D.

    1994-05-01

    Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.
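
    In its simplest linear form, the Bayesian conditioning step described above is the familiar Gaussian (cokriging-type) update; the sketch below (Python) spells it out with illustrative matrix names, whereas the paper obtains the required covariances and cross-covariances in closed form via Fourier techniques.

```python
import numpy as np

def condition_field(mu_t, mu_h, C_tt, C_th, C_hh, h_obs, nugget=1e-8):
    """Posterior mean and covariance of a field (e.g. transmissivity)
    given head observations, from the joint Gaussian prior.
    C_th is the field-to-head cross-covariance; names are illustrative."""
    gain = C_th @ np.linalg.inv(C_hh + nugget * np.eye(len(h_obs)))
    mu_post = mu_t + gain @ (h_obs - mu_h)
    C_post = C_tt - gain @ C_th.T
    return mu_post, C_post
```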

  14. Testing the sensitivity of pumpage to increases in surficial aquifer system heads in the Cypress Creek well-field area, West-Central Florida : an optimization technique

    Science.gov (United States)

    Yobbi, Dann K.

    2002-01-01

    Tampa Bay depends on ground water for most of the water supply. Numerous wetlands and lakes in Pasco County have been impacted by the high demand for ground water. Central Pasco County, particularly the area within the Cypress Creek well field, has been greatly affected. Probable causes for the decline in surface-water levels are well-field pumpage and a decade-long drought. Efforts are underway to increase surface-water levels by developing alternative sources of water supply, thus reducing the quantity of well-field pumpage. Numerical ground-water flow simulations coupled with an optimization routine were used in a series of simulations to test the sensitivity of optimal pumpage to desired increases in surficial aquifer system heads in the Cypress Creek well field. The ground-water system was simulated using the central northern Tampa Bay ground-water flow model. Pumping solutions for 1987 equilibrium conditions and for a transient 6-month timeframe were determined for five test cases, each reflecting a range of desired target recovery heads at different head control sites in the surficial aquifer system. Results are presented in the form of curves relating average head recovery to total optimal pumpage. Pumping solutions are sensitive to the location of head control sites formulated in the optimization problem and as expected, total optimal pumpage decreased when desired target head increased. The distribution of optimal pumpage for individual production wells also was significantly affected by the location of head control sites. A pumping advantage was gained for test-case formulations where hydraulic heads were maximized in cells near the production wells, in cells within the steady-state pumping center cone of depression, and in cells within the area of the well field where confining-unit leakance is the highest. More water was pumped and the ratio of head recovery per unit decrease in optimal pumpage was more than double for test cases where hydraulic heads

  15. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    Directory of Open Access Journals (Sweden)

    D. Ramyachitra

    2015-09-01

    Full Text Available Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  16. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    Science.gov (United States)

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  17. Modification and application of a leaf blower-vac for field sampling of arthropods

    NARCIS (Netherlands)

    Zou, Yi; Telgen, van Mario D.; Chen, Junhui; Xiao, Haijun; Kraker, de Joop; Bianchi, Felix J.J.A.; Werf, van der Wopke

    2016-01-01

    Rice fields host a large diversity of arthropods, but investigating their population dynamics and interactions is challenging. Here we describe the modification and application of a leaf blower-vac for suction sampling of arthropod populations in rice. When used in combination with an enclosure, application of this sampling device provides absolute estimates of the populations of arthropods as numbers per standardized sampling area.

  18. Optimized cryo-focused ion beam sample preparation aimed at in situ structural studies of membrane proteins.

    Science.gov (United States)

    Schaffer, Miroslava; Mahamid, Julia; Engel, Benjamin D; Laugks, Tim; Baumeister, Wolfgang; Plitzko, Jürgen M

    2017-02-01

    While cryo-electron tomography (cryo-ET) can reveal biological structures in their native state within the cellular environment, it requires the production of high-quality frozen-hydrated sections that are thinner than 300 nm. Sample requirements are even more stringent for the visualization of membrane-bound protein complexes within dense cellular regions. Focused ion beam (FIB) sample preparation for transmission electron microscopy (TEM) is a well-established technique in material science, but there are only a few examples of biological samples exhibiting sufficient quality for high-resolution in situ investigation by cryo-ET. In this work, we present a comprehensive description of a cryo-sample preparation workflow incorporating additional conductive-coating procedures. These coating steps eliminate the adverse effects of sample charging on imaging with the Volta phase plate, allowing data acquisition with improved contrast. We discuss optimized FIB milling strategies adapted from material science and each critical step required to produce homogeneously thin, non-charging FIB lamellas that make large areas of unperturbed HeLa and Chlamydomonas cells accessible for cryo-ET at molecular resolution. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Phase-only shaped laser pulses in optimal control theory: Application to indirect photofragmentation dynamics in the weak-field limit

    DEFF Research Database (Denmark)

    Shu, Chuan-Cun; Henriksen, Niels E.

    2012-01-01

    We implement phase-only shaped laser pulses within quantum optimal control theory for laser-molecule interaction. This approach is applied to the indirect photofragmentation dynamics of NaI in the weak-field limit. It is shown that optimized phase-modulated pulses with a fixed frequency distribution can substantially modify transient dissociation probabilities as well as the momentum distribution associated with the relative motion of Na and I. © 2012 American Institute of Physics.

  20. Transmission characteristics and optimal diagnostic samples to detect an FMDV infection in vaccinated and non-vaccinated sheep

    NARCIS (Netherlands)

    Eble, P.L.; Orsel, K.; Kluitenberg-van Hemert, F.; Dekker, A.

    2015-01-01

    We wanted to quantify transmission of FMDV Asia-1 in sheep and to evaluate which samples would be optimal for detection of an FMDV infection in sheep. For this, we used 6 groups of 4 non-vaccinated and 6 groups of 4 vaccinated sheep. In each group 2 sheep were inoculated and contact exposed to 2

  1. Optimizing fracture and completion design in the Westerose field

    Energy Technology Data Exchange (ETDEWEB)

    Dunn-Norman, S. [Missouri Univ., Rolla, MO (United States); Griffiths, E.; Barnhart, W. [Pan-Canadian Petroleum Ltd., Calgary, AB (Canada); Aunger, D.; Kenny, L.; Halvaci, M.

    1998-12-31

    An experimental study was conducted to determine the feasibility of developing additional gas reserves in the tight sands located between the main bar trends in the Westerose gas field, located 75 km south of Edmonton, Alberta. As part of the study, fracturing and completion alternatives in the Glauconitic 'bar' and 'interbar' sands were analyzed and compared. Optimal fracture designs for vertical wells were determined for each type of sand. Vertical well performance was compared to stimulated and unstimulated horizontal wells drilled either parallel or perpendicular to the minimum in-situ stress. Results indicated that in-situ permeabilities in the interbar sands were lower than anticipated. It was also shown that over the permeability ranges studied, predicted rates matched actual rates for both vertical fractured and multifractured horizontal wells, suggesting that analytical models can be used to assess anticipated well performance. A further conclusion was that by simulating a wide variety of permeability ranges, well orientations, anisotropies and fracture orientations, appropriate completion options can be determined. 12 refs., 7 tabs., 4 figs.

  2. Logic-based methods for optimization combining optimization and constraint satisfaction

    CERN Document Server

    Hooker, John

    2011-01-01

    A pioneering look at the fundamental role of logic in optimization and constraint satisfaction While recent efforts to combine optimization and constraint satisfaction have received considerable attention, little has been said about using logic in optimization as the key to unifying the two fields. Logic-Based Methods for Optimization develops for the first time a comprehensive conceptual framework for integrating optimization and constraint satisfaction, then goes a step further and shows how extending logical inference to optimization allows for more powerful as well as flexible

  3. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
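
    The ℓ1-minimization step at the heart of this record can be sketched compactly. The toy below builds an orthonormal (probabilists') Hermite basis, draws Monte Carlo samples from the natural Gaussian distribution, and recovers a sparse coefficient vector from fewer samples than basis functions. Scikit-learn's Lasso is used here as an ℓ1-regularized surrogate for the basis-pursuit problem, and the coherence-optimal MCMC sampler of the paper is not reproduced.

      import numpy as np
      from math import factorial
      from numpy.polynomial.hermite_e import hermeval
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(1)
      order, n_samp = 25, 20                    # more basis functions than samples
      x = rng.standard_normal(n_samp)           # natural sampling for Hermite PC

      # Orthonormal Hermite design matrix: column k is He_k(x)/sqrt(k!).
      Psi = np.column_stack([hermeval(x, np.eye(order + 1)[k]) / np.sqrt(factorial(k))
                             for k in range(order + 1)])

      c_true = np.zeros(order + 1)
      c_true[[0, 3, 7]] = [1.0, 0.5, -0.8]      # sparse PC coefficients
      u = Psi @ c_true                          # noiseless model evaluations

      c_hat = Lasso(alpha=1e-3, fit_intercept=False, max_iter=100000).fit(Psi, u).coef_
      print(np.round(c_hat, 2))                 # nonzeros recovered at indices 0, 3, 7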

  4. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  5. Field data analysis of active chlorine-containing stormwater samples.

    Science.gov (United States)

    Zhang, Qianyi; Gaafar, Mohamed; Yang, Rong-Cai; Ding, Chen; Davies, Evan G R; Bolton, James R; Liu, Yang

    2018-01-15

    Many municipalities in Canada and all over the world use chloramination for drinking water secondary disinfection to avoid DBP formation from conventional chlorination. However, the long-lasting monochloramine (NH2Cl) disinfectant can pose a significant risk to aquatic life through its introduction into municipal storm sewer systems and thus fresh water sources by residential, commercial, and industrial water uses. To establish general total active chlorine (TAC) concentrations in discharges from storm sewers, the TAC concentration was measured in stormwater samples in Edmonton, Alberta, Canada, during the summers of 2015 and 2016 under both dry and wet weather conditions. The field-sampling results showed TAC concentration variations from 0.02 to 0.77 mg/L in summer 2015, which exceeds the discharge effluent limit of 0.02 mg/L. As compared to 2015, the TAC concentrations were significantly lower during the summer 2016 (0-0.24 mg/L), for which it is believed that the higher precipitation during summer 2016 reduced outdoor tap water uses. Since many other cities also use chloramines as disinfectants for drinking water disinfection, the TAC analysis from Edmonton may prove useful for other regions as well. Other physicochemical and biological characteristics of stormwater and storm sewer biofilm samples were also analyzed, and no significant difference was found during these two years. Higher density of AOB and NOB detected in the storm sewer biofilm of residential areas - as compared with other areas - generally correlated to high concentrations of ammonium and nitrite in this region in both of the two years, and they may have contributed to the TAC decay in the storm sewers. The NH2Cl decay laboratory experiments illustrate that dissolved organic carbon (DOC) concentration is the dominant factor in determining the NH2Cl decay rate in stormwater samples. The high DOC concentrations detected from a downstream industrial sampling location may contribute to a

  6. Enhanced conformational sampling using enveloping distribution sampling.

    Science.gov (United States)

    Lin, Zhixiong; van Gunsteren, Wilfred F

    2013-10-14

    Lessening the problem of insufficient conformational sampling in biomolecular simulations remains a major challenge in computational biochemistry. In this article, an application of the method of enveloping distribution sampling (EDS) is proposed that addresses this challenge and its sampling efficiency is demonstrated in simulations of a hexa-β-peptide whose conformational equilibrium encompasses two different helical folds, i.e., a right-handed 2.7(10∕12)-helix and a left-handed 3(14)-helix, separated by a high energy barrier. Standard MD simulations of this peptide using the GROMOS 53A6 force field did not reach convergence of the free enthalpy difference between the two helices even after 500 ns of simulation time. The use of soft-core non-bonded interactions in the centre of the peptide did enhance the number of transitions between the helices, but at the same time led to neglect of relevant helical configurations. In the simulations of a two-state EDS reference Hamiltonian that envelops both the physical peptide and the soft-core peptide, sampling of the conformational space of the physical peptide ensures that physically relevant conformations can be visited, and sampling of the conformational space of the soft-core peptide helps to enhance the transitions between the two helices. The EDS simulations sampled many more transitions between the two helices and showed much faster convergence of the relative free enthalpy of the two helices compared with the standard MD simulations with only a slightly larger computational effort to determine optimized EDS parameters. Combined with various methods to smoothen the potential energy surface, the proposed EDS application will be a powerful technique to enhance the sampling efficiency in biomolecular simulations.
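
    The two-state reference Hamiltonian described above has a simple closed form. Assuming the standard EDS expression V_R = -(1/(beta*s)) ln[exp(-beta*s*(V_A - E_A)) + exp(-beta*s*(V_B - E_B))], a numpy sketch follows; the smoothness parameter s and the energy offsets are illustrative placeholders, not the optimized values from the paper.

      import numpy as np

      def eds_reference(V_A, V_B, E_A=0.0, E_B=0.0, beta=1.0 / 2.494, s=0.05):
          """Two-state EDS reference potential enveloping end states A and B.

          V_A, V_B : potential energies of the end states (kJ/mol)
          E_A, E_B : energy offsets aligning the two states
          beta     : 1/kT in mol/kJ (about 300 K here)
          s        : smoothness parameter; s < 1 lowers barriers between states
          """
          bs = beta * s
          # logaddexp keeps the sum numerically stable for large energies.
          return -np.logaddexp(-bs * (V_A - E_A), -bs * (V_B - E_B)) / bs

      # Example: two harmonic wells; the reference potential envelops both minima.
      xs = np.linspace(-2.0, 2.0, 401)
      V_A, V_B = 50 * (xs + 1) ** 2, 50 * (xs - 1) ** 2
      print(eds_reference(V_A, V_B).min())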

  7. A Counterexample on Sample-Path Optimality in Stable Markov Decision Chains with the Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2014-01-01

    Roč. 163, č. 2 (2014), s. 674-684 ISSN 0022-3239 Grant - others:PSF Organization(US) 012/300/02; CONACYT (México) and ASCR (Czech Republic)(MX) 171396 Institutional support: RVO:67985556 Keywords : Strong sample-path optimality * Lyapunov function condition * Stationary policy * Expected average reward criterion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.509, year: 2014 http://library.utia.cas.cz/separaty/2014/E/sladky-0432661.pdf

  8. A contemporary decennial global Landsat sample of changing agricultural field sizes

    Science.gov (United States)

    White, Emma; Roy, David

    2014-05-01

    Agriculture has caused significant human induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by

  9. Oil Reservoir Production Optimization using Optimal Control

    DEFF Research Database (Denmark)

    Völcker, Carsten; Jørgensen, John Bagterp; Stenby, Erling Halfdan

    2011-01-01

    Practical oil reservoir management involves solution of large-scale constrained optimal control problems. In this paper we present a numerical method for solution of large-scale constrained optimal control problems. The method is a single-shooting method that computes the gradients using the adjoint method. The method is demonstrated on an oil reservoir using water flooding and smart well technology. Compared to the uncontrolled case, the optimal operation increases the Net Present Value of the oil field by 10%.
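
    The single-shooting idea can be illustrated with a deliberately tiny example: simulate the reservoir forward under a sequence of controls, accumulate a discounted cash flow, and hand the resulting NPV to an optimizer. Everything below (the one-tank dynamics, prices and rates) is invented for illustration, and scipy's finite-difference gradients stand in for the adjoint method of the paper.

      import numpy as np
      from scipy.optimize import minimize

      dt, T = 30.0, 24                 # period length (days), number of control periods
      p_oil, c_inj = 60.0, 8.0         # $/bbl produced oil, $/bbl injected water
      r = 0.10 / 365.0                 # daily discount rate

      def neg_npv(u):
          """Single shooting: simulate forward, return negative NPV of the controls."""
          m = 1.0e6                                    # recoverable oil in place (bbl)
          total = 0.0
          for k, uk in enumerate(u):
              q = min(1e-3 * m * (0.5 + uk), m / dt)   # production rises with injection
              m -= q * dt
              cash = (p_oil * q - c_inj * 500.0 * uk) * dt
              total += cash * np.exp(-r * dt * (k + 1))
          return -total

      res = minimize(neg_npv, 0.5 * np.ones(T), bounds=[(0.0, 1.0)] * T)
      print("NPV:", -res.fun, "controls:", res.x.round(2))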

  10. Field Optimization for short Period Undulators

    CERN Document Server

    Peiffer, P; Rossmanith, R; Schoerling, D

    2011-01-01

    Undulators dedicated to low energy electron beams, like Laser Wakefield Accelerators, require very short period lengths to achieve X-ray emission. However, at such short period lengths (λU ≈ 5 mm) it becomes difficult to reach magnetic field amplitudes that lead to a K parameter of >1, which is generally desired. Room temperature permanent magnets and even superconductive undulators using Nb-Ti as conductor material have proven insufficient to achieve the desired field amplitudes. The superconductor Nb3Sn has the theoretical potential to achieve the desired fields. However, up to now it is limited by several technological challenges to much lower field values than theoretically predicted. An alternative idea for higher fields is to manufacture the poles of the undulator body from holmium instead of iron, or to use Nb-Ti wires with a higher superconductor/copper ratio. The advantages and challenges of the different options are compared in this contribution.

  11. Micrometer-scale magnetic imaging of geological samples using a quantum diamond microscope

    Science.gov (United States)

    Glenn, D. R.; Fu, R. R.; Kehayias, P.; Le Sage, D.; Lima, E. A.; Weiss, B. P.; Walsworth, R. L.

    2017-08-01

    Remanent magnetization in geological samples may record the past intensity and direction of planetary magnetic fields. Traditionally, this magnetization is analyzed through measurements of the net magnetic moment of bulk millimeter to centimeter sized samples. However, geological samples are often mineralogically and texturally heterogeneous at submillimeter scales, with only a fraction of the ferromagnetic grains carrying the remanent magnetization of interest. Therefore, characterizing this magnetization in such cases requires a technique capable of imaging magnetic fields at fine spatial scales and with high sensitivity. To address this challenge, we developed a new instrument, based on nitrogen-vacancy centers in diamond, which enables direct imaging of magnetic fields due to both remanent and induced magnetization, as well as optical imaging, of room-temperature geological samples with spatial resolution approaching the optical diffraction limit. We describe the operating principles of this device, which we call the quantum diamond microscope (QDM), and report its optimized image-area-normalized magnetic field sensitivity (20 µT·µm/Hz^(1/2)), spatial resolution (5 µm), and field of view (4 mm), as well as trade-offs between these parameters. We also perform an absolute magnetic field calibration for the device in different modes of operation, including three-axis (vector) and single-axis (projective) magnetic field imaging. Finally, we use the QDM to obtain magnetic images of several terrestrial and meteoritic rock samples, demonstrating its ability to resolve spatially distinct populations of ferromagnetic carriers.

  12. Optimization of well field operation: case study of Søndersø Waterworks, Denmark

    DEFF Research Database (Denmark)

    Hansen, Annette Kirstine; Madsen, Henrik; Bauer-Gottwein, Peter

    2013-01-01

    An integrated hydrological well field model (WELLNES) that predicts the water level and energy consumption in the production wells of a waterworks is used to optimize the management of a waterworks with the speed of the pumps as decision variables. The two-objective optimization problem of minimizing the risk of contamination from a nearby contaminated site and minimizing the energy consumption of the waterworks is solved by genetic algorithms. In comparison with historical values, significant improvements in both objectives can be obtained. If the existing on/off pumps are changed to new variable-speed pumps, it is possible to save 42% of the specific energy consumption and at the same time improve the risk objective function. The payback period of investing in new variable-speed pumps is only 3.1 years, due to the large savings in electricity. The case study illustrates the efficiency of the optimization approach.
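
    A genetic algorithm for this kind of two-objective pump-scheduling problem can be sketched as below; the energy and risk functions, pump count and GA settings are invented stand-ins for the WELLNES model, and the two objectives are combined with a fixed weight rather than a true Pareto-front method.

      import numpy as np

      rng = np.random.default_rng(0)
      n_pumps, pop_size, n_gen = 5, 40, 100

      def energy(s):    # toy model: specific energy grows with pump speed
          return np.sum(s ** 2, axis=-1)

      def risk(s):      # toy model: too little total pumping raises contamination risk
          return np.maximum(0.0, 2.5 - np.sum(s, axis=-1)) ** 2

      def fitness(pop, w=0.5):   # fixed-weight scalarization of the two objectives
          return -(w * energy(pop) + (1.0 - w) * risk(pop))

      pop = rng.uniform(0.0, 1.0, (pop_size, n_pumps))     # pump speeds in [0, 1]
      for _ in range(n_gen):
          parents = pop[np.argsort(fitness(pop))[-pop_size // 2:]]   # keep better half
          a = parents[rng.integers(len(parents), size=pop_size)]
          b = parents[rng.integers(len(parents), size=pop_size)]
          pop = 0.5 * (a + b)                              # arithmetic crossover
          pop = np.clip(pop + rng.normal(0.0, 0.05, pop.shape), 0.0, 1.0)  # mutation

      best = pop[np.argmax(fitness(pop))]
      print(best.round(2), "energy:", energy(best).round(3), "risk:", risk(best).round(3))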

  13. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner-radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is also described. 7 pictures

  14. Population pharmacokinetic analysis of clopidogrel in healthy Jordanian subjects with emphasis on optimal sampling strategy.

    Science.gov (United States)

    Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A

    2013-05-01

    Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or to a lesser extent an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM® to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse phase high performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in Cmax (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model; AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.
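
    The structural model can be reproduced in miniature: a two-compartment system with first-order absorption and an absorption lag, evaluated at the seven OSS times reported above. The rate constants, volume and dose below are invented for illustration, and the enterohepatic-recycling component of the final model is omitted for brevity.

      import numpy as np
      from scipy.integrate import solve_ivp

      ka, tlag = 1.2, 0.3              # absorption rate (1/h) and lag time (h)
      k10, k12, k21 = 0.5, 0.3, 0.2    # elimination and distribution rates (1/h)
      V, dose = 50.0, 75.0             # central volume (L), oral dose (mg)

      def rhs(t, y):
          """y = [gut, central, peripheral] amounts; absorption starts after tlag."""
          a_gut, a_c, a_p = y
          absorb = ka * a_gut if t >= tlag else 0.0
          return [-absorb,
                  absorb - (k10 + k12) * a_c + k21 * a_p,
                  k12 * a_c - k21 * a_p]

      oss_times = [0.25, 1, 2, 4, 5, 6, 12]     # the seven OSS sampling times (h)
      sol = solve_ivp(rhs, (0.0, 12.0), [dose, 0.0, 0.0],
                      t_eval=oss_times, max_step=0.05)
      conc = sol.y[1] / V                        # plasma concentration (mg/L)
      print(dict(zip(oss_times, conc.round(3))))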

  15. Offset Risk Minimization for Open-loop Optimal Control of Oil Reservoirs

    DEFF Research Database (Denmark)

    Capolei, Andrea; Christiansen, Lasse Hjuler; Jørgensen, J. B.

    2017-01-01

    Simulation studies of oil field water flooding have demonstrated a significant potential of optimal control technology to improve industrial practices. However, real-life applications are challenged by unknown geological factors that make reservoir models highly uncertain. To minimize the associated financial risks, the oil literature has used ensemble-based methods to manipulate the net present value (NPV) distribution by optimizing sample estimated risk measures. In general, such methods successfully reduce overall risk. However, as this paper demonstrates, ensemble-based control strategies ... practices. The results suggest that it may be more relevant to consider the NPV offset distribution than the NPV distribution when minimizing risk in production optimization.

  16. Research on the Flow Field and Structure Optimization in Cyclone Separator with Downward Exhaust Gas

    Directory of Open Access Journals (Sweden)

    Wang Weiwei

    2017-01-01

    Full Text Available A numerical analysis of the turbulent, strongly swirling flow field of a cyclone separator with downward exhaust gas, and of its performance, is described. ANSYS 14.0 simulations based on the DPM model are used in the investigation. A new geometrical design has been optimized to achieve minimum pressure drop and maximum separation efficiency. A comparison of numerical simulations confirms the superior performance of the new design compared to the conventional design. The influence of structural parameters such as the length of the guide pipe, the shape of the guide and the inlet shape on separation performance was analyzed in this research. The results provide a useful reference for cyclone separator design and performance optimization.

  17. Sequential ensemble-based optimal design for parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
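
    The core loop of such a design method is easy to sketch: for each candidate measurement, perform an EnKF update of the prior ensemble and score the design by how much it shrinks the ensemble. The sketch below uses the reduction in total posterior variance as a crude surrogate for the entropy-based metrics of the paper, and the linear forward model, noise level and hypothetical datum are invented.

      import numpy as np

      rng = np.random.default_rng(2)
      n_ens, n_par = 200, 3
      prior = rng.normal(0.0, 1.0, (n_ens, n_par))     # prior parameter ensemble

      sigma = 0.3                      # observation noise standard deviation
      candidates = np.eye(n_par)       # toy designs: observe one parameter each

      def posterior_spread(prior, h, d_obs):
          """Total posterior variance after assimilating the scalar datum d = h@theta."""
          y_ens = prior @ h + rng.normal(0.0, sigma, n_ens)   # perturbed predictions
          C = np.cov(prior.T, ddof=1)
          K = C @ h / (h @ C @ h + sigma ** 2)                # Kalman gain (vector)
          post = prior + np.outer(d_obs - y_ens, K)
          return np.trace(np.cov(post.T, ddof=1))

      scores = [posterior_spread(prior, h, d_obs=0.5) for h in candidates]
      print("best candidate:", int(np.argmin(scores)), np.round(scores, 3))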

  18. Optimization of Measurements on Dynamically Sensitive Structures Using a Reliability Approach

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    Design of a measuring program devoted to parameter identification of structural dynamic systems described by random fields is considered. The design problem is formulated as an optimization problem to minimize the total expected costs due to failure and the costs of the measuring program. Design variables are the numbers of measuring points, the locations of these points and the required number of sample records. An example with a simply supported, vibrating plane beam is considered and tentative results are presented.

  19. Optimization of Measurements on Dynamically Sensitive Structures Using a Reliability Approach

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    1990-01-01

    Design of a measuring program devoted to parameter identification of structural dynamic systems described by random fields is considered. The design problem is formulated as an optimization problem to minimize the total expected costs due to failure and the costs of the measuring program. Design variables are the numbers of measuring points, the locations of these points and the required number of sample records. An example with a simply supported, vibrating plane beam is considered and tentative results are presented.

  20. Incorporating prior knowledge into beam orientation optimization in IMRT

    International Nuclear Information System (INIS)

    Pugachev, Andrei M.S.; Lei Xing

    2002-01-01

    to guide the search for the optimal beam configuration. The BEVD-guided sampling improved both optimization speed and convergence of the calculation. A comparison of several five-field IMRT treatment plans obtained with and without BEVD guidance indicated that the computational efficiency was increased by a factor of ∼10. Conclusion: Incorporation of BEVD information allows for development of a more robust tool for beam orientation optimization in IMRT planning. It enables us to more effectively use the angular degree of freedom in IMRT without paying the excessive computing overhead and brings us one step closer to the goal of automated selection of beam orientations in a clinical environment

  1. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    Science.gov (United States)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data is often invisible for linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically

  2. Demonstration and Optimization of BNFL's Pulsed Jet Mixing and RFD Sampling Systems Using NCAW Simulant

    International Nuclear Information System (INIS)

    Bontha, J.R.; Golcar, G.R.; Hannigan, N.

    2000-01-01

    The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and the RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes it very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft ID and 15-ft height dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: Demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loading; Minimize/optimize air usage by changing sequencing of the Pulsed Jet mixers or by altering cycle times; and Demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%.

  3. Optimized pre-thinning procedures of ion-beam thinning for TEM sample preparation by magnetorheological polishing.

    Science.gov (United States)

    Luo, Hu; Yin, Shaohui; Zhang, Guanhua; Liu, Chunhui; Tang, Qingchun; Guo, Meijian

    2017-10-01

    Ion-beam thinning is a well-established sample preparation technique for transmission electron microscopy (TEM), but tedious procedures and labor-consuming pre-thinning can seriously reduce its efficiency. In this work, we present a simple pre-thinning technique that uses magnetorheological (MR) polishing to replace manual lapping and dimpling, and demonstrate the successful preparation of electron-transparent single crystal silicon samples after MR polishing and single-sided ion milling. Dimples pre-thinned to less than 30 microns and with little mechanical surface damage were repeatedly produced under optimized MR polishing conditions. Samples pre-thinned by both MR polishing and the traditional technique were ion-beam thinned from the rear side until perforation, and then observed by optical microscopy and TEM. The results show that the specimen pre-thinned by the MR technique was free from dimpling-related defects, which were still present in the sample pre-thinned by the conventional technique. High-resolution TEM images could be acquired after MR polishing and single-sided ion thinning. MR polishing promises to be an adaptable and efficient method for pre-thinning in the preparation of TEM specimens, especially for brittle ceramics. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Study on Design Optimization of Centrifugal Compressors Considering Efficiency and Weight

    International Nuclear Information System (INIS)

    Lee, Younghwan; Kang, Shinhyoung; Ha, Kyunggu

    2015-01-01

    Various centrifugal compressors are currently used extensively in industrial fields, where the design requirements are increasingly complicated. This makes it more difficult to determine the optimal design point of a centrifugal compressor. Traditionally, efficiency is the most important factor in optimization. In this study, the weight of the compressor was also considered. The aim of this study was to present design tendencies considering both stage efficiency and weight. In addition, this study suggests the possibility of selecting compressor design objectives at an early design stage based on the optimization results. Only a vaneless diffuser was considered in this case. The Kriging method was used with sample points from 1D design program data. The optimal points were determined in a surrogate design space.

  5. Study on Design Optimization of Centrifugal Compressors Considering Efficiency and Weight

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Younghwan; Kang, Shinhyoung [Seoul National University, Seoul (Korea, Republic of); Ha, Kyunggu [Hyundai Motor Group, Ulsan (Korea, Republic of)

    2015-04-15

    Various centrifugal compressors are currently used extensively in industrial fields, where the design requirements are increasingly complicated. This makes it more difficult to determine the optimal design point of a centrifugal compressor. Traditionally, efficiency is the most important factor in optimization. In this study, the weight of the compressor was also considered. The aim of this study was to present design tendencies considering both stage efficiency and weight. In addition, this study suggests the possibility of selecting compressor design objectives at an early design stage based on the optimization results. Only a vaneless diffuser was considered in this case. The Kriging method was used with sample points from 1D design program data. The optimal points were determined in a surrogate design space.
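
    The Kriging step described above can be sketched with scikit-learn's Gaussian-process ("Kriging") regressor; the two design variables and the combined efficiency/weight objective below are invented placeholders for the 1D design-program data.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(3)

      def objective(x):   # toy trade-off: reward an efficiency peak, penalize weight
          eff = np.exp(-((x[:, 0] - 0.6) ** 2 + (x[:, 1] - 0.4) ** 2) / 0.05)
          weight = 0.5 * x[:, 0] + 0.8 * x[:, 1]
          return eff - 0.3 * weight

      X = rng.uniform(0.0, 1.0, (30, 2))          # sample points in the design space
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, objective(X))

      # Search the fitted surrogate on a grid instead of the expensive model.
      grid = np.stack(np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101)),
                      axis=-1).reshape(-1, 2)
      print("surrogate optimum near", grid[np.argmax(gp.predict(grid))])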

  6. Exploring structural variability in X-ray crystallographic models using protein local optimization by torsion-angle sampling

    International Nuclear Information System (INIS)

    Knight, Jennifer L.; Zhou, Zhiyong; Gallicchio, Emilio; Himmel, Daniel M.; Friesner, Richard A.; Arnold, Eddy; Levy, Ronald M.

    2008-01-01

    Torsion-angle sampling, as implemented in the Protein Local Optimization Program (PLOP), is used to generate multiple structurally variable single-conformer models which are in good agreement with X-ray data. An ensemble-refinement approach to differentiate between positional uncertainty and conformational heterogeneity is proposed. Modeling structural variability is critical for understanding protein function and for modeling reliable targets for in silico docking experiments. Because of the time-intensive nature of manual X-ray crystallographic refinement, automated refinement methods that thoroughly explore conformational space are essential for the systematic construction of structurally variable models. Using five proteins spanning resolutions of 1.0–2.8 Å, it is demonstrated how torsion-angle sampling of backbone and side-chain libraries with filtering against both the chemical energy, using a modern effective potential, and the electron density, coupled with minimization of a reciprocal-space X-ray target function, can generate multiple structurally variable models which fit the X-ray data well. Torsion-angle sampling as implemented in the Protein Local Optimization Program (PLOP) has been used in this work. Models with the lowest Rfree values are obtained when electrostatic and implicit solvation terms are included in the effective potential. HIV-1 protease, calmodulin and SUMO-conjugating enzyme illustrate how variability in the ensemble of structures captures structural variability that is observed across multiple crystal structures and is linked to functional flexibility at hinge regions and binding interfaces. An ensemble-refinement procedure is proposed to differentiate between variability that is a consequence of physical conformational heterogeneity and that which reflects uncertainty in the atomic coordinates

  7. Advanced sampling techniques for hand-held FT-IR instrumentation

    Science.gov (United States)

    Arnó, Josep; Frunzi, Michael; Weber, Chris; Levy, Dustin

    2013-05-01

    FT-IR spectroscopy is the technology of choice to identify solid and liquid phase unknown samples. The challenging ConOps in emergency response and military field applications require a significant redesign of the stationary FT-IR bench-top instruments typically used in laboratories. Specifically, field portable units require high levels of resistance against mechanical shock and chemical attack, ease of use in restrictive gear, extreme reliability, quick and easy interpretation of results, and reduced size. In the last 20 years, FT-IR instruments have been re-engineered to fit in small suitcases for field portable use and recently further miniaturized for handheld operation. This article introduces the HazMatID™ Elite, a FT-IR instrument designed to balance the portability advantages of a handheld device with the performance challenges associated with miniaturization. In this paper, special focus will be given to the HazMatID Elite's sampling interfaces optimized to collect and interrogate different types of samples: accumulated material using the on-board ATR press, dispersed powders using the ClearSampler™ tool, and the touch-to-sample sensor for direct liquid sampling. The application of the novel sample swipe accessory (ClearSampler) to collect material from surfaces will be discussed in some detail. The accessory was tested and evaluated for the detection of explosive residues before and after detonation. Experimental results derived from these investigations will be described in an effort to outline the advantages of this technology over existing sampling methods.

  8. Optimal Design and Related Areas in Optimization and Statistics

    CERN Document Server

    Pronzato, Luc

    2009-01-01

    This edited volume, dedicated to Henry P. Wynn, reflects his broad range of research interests, focusing in particular on the applications of optimal design theory in optimization and statistics. It covers algorithms for constructing optimal experimental designs, general gradient-type algorithms for convex optimization, majorization and stochastic ordering, algebraic statistics, Bayesian networks and nonlinear regression. Written by leading specialists in the field, each chapter contains a survey of the existing literature along with substantial new material. This work will appeal to both the

  9. An optimized groundwater extraction system for the toxic burning pits area of J-Field, Aberdeen Proving Ground, Maryland

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, J.J.; Johnson, R.L.; Patton, T.L.; Martino, L.E.

    1996-06-01

    Testing and disposal of chemical warfare agents, munitions, and industrial chemicals at the J-Field area of the Aberdeen Proving Ground (APG) have resulted in contamination of soil and groundwater. The discharge of contaminated groundwater to on-site marshes and adjacent estuaries poses a potential risk to ecological receptors. The Toxic Burning Pits (TBP) area is of special concern because of its disposal history. This report describes a groundwater modeling study conducted at J-Field that focused on the TBP area. The goal of this modeling effort was optimization of the groundwater extraction system at the TBP area by applying linear programming techniques. Initially, the flow field in the J-Field vicinity was characterized with a three-dimensional model that uses existing data and several numerical techniques. A user-specified border was set near the marsh and used as a constraint boundary in two modeled remediation scenarios: containment of the groundwater and containment of groundwater with an impermeable cap installed over the TBP area. In both cases, the objective was to extract the minimum amount of water necessary while satisfying the constraints. The smallest number of wells necessary was then determined for each case. This optimization approach provided two benefits: cost savings, in that the water to be treated and the well installation costs were minimized, and minimization of remediation impacts on the ecology of the marsh.
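
    The optimization described above is a classic hydraulic-control linear program: minimize total pumping subject to meeting drawdown targets at constraint points along the user-specified border. A sketch with scipy follows, assuming a precomputed linear response matrix; the matrix entries and targets are invented for illustration.

      import numpy as np
      from scipy.optimize import linprog

      # A[i, j]: drawdown at constraint point i per unit pumping rate at well j (toy).
      A = np.array([[0.8, 0.3, 0.1],
                    [0.4, 0.7, 0.2],
                    [0.1, 0.4, 0.9]])
      h_req = np.array([1.0, 1.2, 0.9])   # drawdown needed to contain the plume

      # Minimize total extraction q1+q2+q3 subject to A @ q >= h_req and q >= 0.
      res = linprog(c=np.ones(3), A_ub=-A, b_ub=-h_req, bounds=[(0, None)] * 3)
      print("rates:", res.x.round(3), "total:", round(res.fun, 3))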

  10. Different elution modes and field programming in gravitational field-flow fractionation. III. Field programming by flow-rate gradient generated by a programmable pump.

    Science.gov (United States)

    Plocková, J; Chmelík, J

    2001-05-25

    Gravitational field-flow fractionation (GFFF) utilizes the Earth's gravitational field as an external force that causes the settlement of particles towards the channel accumulation wall. Hydrodynamic lift forces oppose this action by elevating particles away from the channel accumulation wall. These two counteracting forces enable modulation of the resulting force field acting on particles in GFFF. In this work, force-field programming based on modulating the magnitude of hydrodynamic lift forces was implemented via changes of flow-rate, which was accomplished by a programmable pump. Several flow-rate gradients (step gradients, linear gradients, parabolic, and combined gradients) were tested and evaluated as tools for optimization of the separation of a silica gel particle mixture. The influence of increasing amount of sample injected on the peak resolution under flow-rate gradient conditions was also investigated. This is the first time that flow-rate gradients have been implemented for programming of the resulting force field acting on particles in GFFF.

  11. Investigation and optimization of the magnetic field configuration in high-power impulse magnetron sputtering

    International Nuclear Information System (INIS)

    Yu, He; Meng, Liang; Szott, Matthew M; Meister, Jack T; Cho, Tae S; Ruzic, David N

    2013-01-01

    An effort to optimize the magnetic field configuration specifically for high-power impulse magnetron sputtering (HiPIMS) was made. Magnetic field configurations with different field strengths, race track widths and race track patterns were designed using COMSOL. Their influence on HiPIMS plasma properties was investigated using a 36 cm diameter copper target. The I–V discharge characteristics were measured. The temporal evolution of electron temperature (Te) and density (ne) was studied employing a triple Langmuir probe, which was also scanned in the whole discharge region to characterize the plasma distribution and transport. Based on the studies, a closed path for electrons to drift along was still essential in HiPIMS in order to efficiently confine electrons and achieve a high pulse current. Very dense plasmas (10^19–10^20 m^-3) were generated in front of the race tracks during the pulse, and expanded downstream afterwards. As the magnetic field strength increased from 200 to 800 G, the expansion became faster and less isotropic, i.e. more directional toward the substrate. The electric potential distribution accounted for these effects. Varied race track widths and patterns altered the plasma distribution from the target to the substrate. A spiral-shaped magnetic field design was able to produce superior plasma uniformity on the substrate in addition to improved target utilization. (paper)

  12. On-Line Organic Solvent Field Enhanced Sample Injection in Capillary Zone Electrophoresis for Analysis of Quetiapine in Beagle Dog Plasma

    Directory of Open Access Journals (Sweden)

    Yuqing Cao

    2016-01-01

    Full Text Available A rapid and sensitive capillary zone electrophoresis (CZE) method with field enhanced sample injection (FESI) was developed and validated for the determination of quetiapine fumarate in beagle dog plasma, with sample pretreatment by LLE in a 96-well deep-format plate. The optimum separation was carried out in an uncoated 31.2 cm × 75 μm fused-silica capillary with an applied voltage of 13 kV. The electrophoretic analysis was performed in 50 mM phosphate at pH 2.5. The detection wavelength was 210 nm. Under these optimized conditions, FESI with acetonitrile enhanced the sensitivity for quetiapine by about 40- to 50-fold in total. The method was suitably validated with respect to stability, specificity, linearity, lower limit of quantitation, accuracy, precision and extraction recovery. Using mirtazapine as an internal standard (100 ng/mL), the response of quetiapine was linear over the range of 1–1000 ng/mL. The lower limit of quantification was 1 ng/mL. The intra- and inter-day precisions for the assay were within 4.8% and 12.7%, respectively. The method represents the first application of FESI-CZE to the analysis of quetiapine fumarate in beagle dog plasma after oral administration.

  13. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve cap emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  14. Imaging samples larger than the field of view: the SLS experience

    Science.gov (United States)

    Vogiatzis Oikonomidis, Ioannis; Lovric, Goran; Cremona, Tiziana P.; Arcadu, Filippo; Patera, Alessandra; Schittny, Johannes C.; Stampanoni, Marco

    2017-06-01

    Volumetric datasets with micrometer spatial and sub-second temporal resolutions are nowadays routinely acquired using synchrotron X-ray tomographic microscopy (SRXTM). Although SRXTM technology allows the examination of multiple samples with short scan times, many specimens are larger than the field-of-view (FOV) provided by the detector. The extension of the FOV in the direction perpendicular to the rotation axis remains non-trivial. We present a method that can efficiently increase the FOV merging volumetric datasets obtained by region-of-interest tomographies in different 3D positions of the sample with a minimal amount of artefacts and with the ability to handle large amounts of data. The method has been successfully applied for the three-dimensional imaging of a small number of mouse lung acini of intact animals, where pixel sizes down to the micrometer range and short exposure times are required.

  15. Application and optimization of electric field-assisted ultrasonication for disintegration of waste activated sludge using response surface methodology with a Box-Behnken design.

    Science.gov (United States)

    Jung, Kyung-Won; Hwang, Min-Jin; Cha, Min-Jung; Ahn, Kyu-Hong

    2015-01-01

    In the present study, an electric field is applied in order to disintegrate waste activated sludge (WAS). As a preliminary step, feasibility tests are conducted using different applied voltages of 10-100 V for 60 min. As the applied voltage increases, the disintegration degrees (DD) are gradually enhanced, and thereby the soluble N, P, and carbohydrate concentrations increase simultaneously due to the WAS decomposition. Subsequently, an optimization process is conducted using a response surface methodology with a Box-Behnken design (BBD). The total solid concentration, applied voltage, and reaction time are selected as independent variables, while the DD is selected as the response variable. The overall results demonstrate that the BBD experimental design can be used effectively in the optimization of the electric field treatment of WAS. In the confirmation test, a DD of 10.26±0.14% is recorded, which corresponds to 99.1% of the predicted response value under the statistically optimized conditions. Finally, the statistical optimization of the combined treatment (electric field + ultrasonication) demonstrated that, even though each method achieves only limited disintegration of WAS when applied individually, a high DD of 47.28±0.20% was recorded where the TS concentration was 6780 mg/l, the ultrasonication power was 8.0 W, the applied voltage was 68.4 V, and the reaction time was 44 min. E-SEM images clearly revealed that the application of the electric field is a significant alternative method for the combined treatment of WAS. This study was the first attempt to increase disintegration using the electric field for a combined treatment with ultrasonication. Copyright © 2014 Elsevier B.V. All rights reserved.
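
    The response-surface step can be illustrated end to end: generate the coded Box-Behnken runs for three factors, fit a full quadratic model by least squares, and solve for the stationary point. The DD responses below are synthetic, not the paper's data.

      import numpy as np
      from itertools import combinations

      # Box-Behnken design for 3 factors: edge midpoints of the cube plus center runs.
      runs = []
      for i, j in combinations(range(3), 2):
          for a in (-1, 1):
              for b in (-1, 1):
                  r = [0, 0, 0]
                  r[i], r[j] = a, b
                  runs.append(r)
      X = np.array(runs + [[0, 0, 0]] * 3, float)

      def quad_terms(X):   # columns: 1, x1..x3, x1^2..x3^2, x1*x2, x1*x3, x2*x3
          cross = np.column_stack([X[:, i] * X[:, j]
                                   for i, j in combinations(range(3), 2)])
          return np.column_stack([np.ones(len(X)), X, X ** 2, cross])

      # Synthetic DD response with a maximum inside the design region.
      y = 10 - 2 * X[:, 0] ** 2 - X[:, 1] ** 2 - X[:, 2] ** 2 + X[:, 0] + 0.5 * X[:, 1]

      beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)
      b, B = beta[1:4], np.diag(beta[4:7])
      for k, (i, j) in enumerate(combinations(range(3), 2)):
          B[i, j] = B[j, i] = beta[7 + k] / 2.0
      print("stationary point (coded units):", np.linalg.solve(2 * B, -b).round(3))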

  16. [Sampling optimization for tropical invertebrates: an example using dung beetles (Coleoptera: Scarabaeinae) in Venezuela].

    Science.gov (United States)

    Ferrer-Paris, José Rafael; Sánchez-Mercado, Ada; Rodríguez, Jon Paul

    2013-03-01

    The development of efficient sampling protocols is an essential prerequisite to evaluate and identify priority conservation areas. There are few protocols for fauna inventory and monitoring at wide geographical scales in the tropics, where the complexity of communities and high biodiversity levels make the implementation of efficient protocols more difficult. We propose here a simple strategy to optimize the capture of dung beetles, applied to sampling with baited traps and generalizable to other sampling methods. We analyzed data from eight transects sampled between 2006 and 2008 with the aim of developing a uniform sampling design that allows confident estimation of species richness, abundance and composition at wide geographical scales. We examined four characteristics of any sampling design that affect the effectiveness of the sampling effort: the number of traps, sampling duration, type and proportion of bait, and spatial arrangement of the traps along transects. We used species accumulation curves, rank-abundance plots, indicator species analysis, and multivariate correlograms. We captured 40 337 individuals (115 species/morphospecies of 23 genera). Most species were attracted by both dung and carrion, but two thirds had greater relative abundance in traps baited with human dung. Different aspects of the sampling design influenced each diversity attribute in different ways. To obtain reliable richness estimates, the number of traps was the most important aspect. Accurate abundance estimates were obtained when the sampling period was increased, while the spatial arrangement of traps was determinant to capture the species composition pattern. An optimum sampling strategy for accurate estimates of richness, abundance and diversity should: (1) set 50-70 traps to maximize the number of species detected, (2) get samples during 48-72 hours and set trap groups along the transect to reliably estimate species abundance, (3) set traps in groups of at least 10 traps to

  17. Optimization of L-shaped tunneling field-effect transistor for ambipolar current suppression and Analog/RF performance enhancement

    Science.gov (United States)

    Li, Cong; Zhao, Xiaolong; Zhuang, Yiqi; Yan, Zhirui; Guo, Jiaming; Han, Ru

    2018-03-01

    The L-shaped tunneling field-effect transistor (LTFET) has a larger tunnel area than the planar TFET, which leads to an enhanced on-current ION. However, the LTFET suffers from severe ambipolar behavior, which needs to be further optimized for low power and high-frequency applications. In this paper, both hetero-gate-dielectric (HGD) and lightly doped drain (LDD) structures are introduced into the LTFET for suppression of ambipolarity and improvement of the analog/RF performance of the LTFET. Current-voltage characteristics, the variation of energy band diagrams, the distribution of band-to-band tunneling (BTBT) generation and the distribution of the electric field are analyzed for our proposed HGD-LDD-LTFET. In addition, the effect of the LDD on the ambipolar behavior of the LTFET is investigated, and the length and doping concentration of the LDD are optimized for better suppression of the ambipolar current. Finally, the analog/RF performance of the HGD-LDD-LTFET is studied in terms of gate-source capacitance, gate-drain capacitance, cut-off frequency, and gain-bandwidth product. TCAD simulation results show that the HGD-LDD-LTFET not only drastically suppresses the ambipolar current but also improves analog/RF performance compared with the conventional LTFET.

  18. Sterile Reverse Osmosis Water Combined with Friction Are Optimal for Channel and Lever Cavity Sample Collection of Flexible Duodenoscopes

    Directory of Open Access Journals (Sweden)

    Michelle J. Alfa

    2017-11-01

    Full Text Available Introduction: A simulated-use buildup biofilm (BBF) model was used to assess various extraction fluids and friction methods to determine the optimal sample collection method for polytetrafluorethylene channels. In addition, simulated-use testing was performed for the channel and lever cavity of duodenoscopes. Materials and methods: BBF was formed in polytetrafluorethylene channels using Enterococcus faecalis, Escherichia coli, and Pseudomonas aeruginosa. Sterile reverse osmosis (RO) water, and phosphate-buffered saline with and without Tween80, as well as two neutralizing broths (Letheen and Dey–Engley), were each assessed with and without friction. Neutralizer was added immediately after sample collection and samples were concentrated using centrifugation. Simulated-use testing was done using TJF-Q180V and JF-140F Olympus duodenoscopes. Results: Despite variability in the bacterial CFU in the BBF model, none of the extraction fluids tested were significantly better than RO. Borescope examination showed far less residual material when friction was part of the extraction protocol. The RO flush-brush-flush (FBF) extraction provided significantly better recovery of E. coli (p = 0.02) from duodenoscope lever cavities compared to the CDC flush method. Discussion and conclusion: We recommend RO with friction for FBF extraction of the channel and lever cavity of duodenoscopes. Neutralizer and sample concentration optimize recovery of viable bacteria on culture.

  19. Optimization of mechanical structures using particle swarm optimization

    International Nuclear Information System (INIS)

    Leite, Victor C.; Schirru, Roberto

    2015-01-01

    A wide variety of optimization problems, from logistics applications to the reloading of nuclear reactors, can be addressed with the particle swarm optimization (PSO) algorithm. This paper discusses the use of PSO in the treatment of problems related to mechanical structure optimization. The geometry and material characteristics of mechanical components are important for the proper functioning and performance of the systems where they are applied, particularly in the nuclear field. Calculations related to mechanical aspects are all made using ANSYS, while the PSO is programmed in MATLAB. (author)

  20. Optimization of mechanical structures using particle swarm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Leite, Victor C.; Schirru, Roberto, E-mail: victor.coppo.leite@lmp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (LMP/PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Lab. de Monitoracao de Processos

    2015-07-01

    A wide variety of optimization problems, from logistics applications to the reloading of nuclear reactors, can be addressed with the particle swarm optimization (PSO) algorithm. This paper discusses the use of PSO in the treatment of problems related to mechanical structure optimization. The geometry and material characteristics of mechanical components are important for the proper functioning and performance of the systems where they are applied, particularly in the nuclear field. Calculations related to mechanical aspects are all made using ANSYS, while the PSO is programmed in MATLAB. (author)
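    The two records above couple a MATLAB implementation of PSO with ANSYS for the structural calculations; neither code is reproduced in the abstracts. As a minimal, self-contained sketch of the underlying technique, the following Python snippet implements the standard global-best PSO update rule on a toy objective standing in for a structural cost. All parameter values and names here are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()             # global best
    g_f = pbest_f.min()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        if pbest_f.min() < g_f:
            g, g_f = pbest[np.argmin(pbest_f)].copy(), pbest_f.min()
    return g, g_f

# Toy stand-in for a structural objective (e.g., mass or peak stress):
best_x, best_f = pso(lambda x: np.sum(x**2), dim=4)
print(best_x, best_f)
```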

  1. Special nuclear material inventory sampling plans

    International Nuclear Information System (INIS)

    Vaccaro, H.S.; Goldman, A.S.

    1987-01-01

    This paper presents improved procedures for obtaining statistically valid sampling plans for nuclear facilities. The double sampling concept and methods for developing optimal double sampling plans are described. An algorithm is described that is satisfactory for finding optimal double sampling plans and choosing appropriate detection and false alarm probabilities
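    The abstract does not give the algorithm itself. As a hedged illustration of the double sampling concept it builds on, the sketch below computes the operating characteristic (acceptance probability) of a generic attributes double-sampling plan; the plan parameters are hypothetical and are not those derived for nuclear facilities.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def accept_prob(p, n1, c1, r1, n2, c2):
    """P(accept) for a double sampling plan by attributes.

    Accept after stage 1 if defects d1 <= c1; reject if d1 >= r1;
    otherwise draw n2 more items and accept if d1 + d2 <= c2.
    """
    pa = sum(binom_pmf(d1, n1, p) for d1 in range(c1 + 1))
    for d1 in range(c1 + 1, r1):
        p2 = sum(binom_pmf(d2, n2, p) for d2 in range(c2 - d1 + 1))
        pa += binom_pmf(d1, n1, p) * p2
    return pa

# Hypothetical plan: n1=20, accept on 0, reject on 3; else n2=20, accept on 2 total.
for p in (0.01, 0.05, 0.10):
    print(p, round(accept_prob(p, 20, 0, 3, 20, 2), 4))
```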

  2. Optimization of a pulsed-field gel electrophoresis for molecular typing of Proteus mirabilis

    Directory of Open Access Journals (Sweden)

    Alper Karagöz

    2013-09-01

    Full Text Available Objective: For the detection of outbreaks caused by Proteus mirabilis, the clonal relations of strains are determined by methods such as pulsed-field gel electrophoresis (PFGE). The aim of this study was the optimization of a pulsed-field gel electrophoresis protocol for molecular typing of P. mirabilis. Methods: In this study, a PFGE protocol was optimized for use in molecular typing of P. mirabilis. Phylogenetic analyses of strains were evaluated with the Bionumerics software system (version 6.01; Applied Maths, Sint-Martens-Latem, Belgium). Results: Compared with PFGE protocols for other Gram-negative bacteria, the NotI enzyme is suitable for this bacterium. Electrophoresis conditions should be set as follows: block 1: initial pulse duration 1 s, final pulse duration 30 s, included angle 120°, voltage gradient 6 V/cm, temperature 14°C, time 8 hours; block 2: initial pulse duration 30 s, final pulse duration 70 s, included angle 120°, voltage gradient 6 V/cm, temperature 14°C, time 16 hours; TBE, pH = 8.4. Conclusion: P. mirabilis strains were typed by PFGE, and clonal relationships were determined with the Bionumerics analysis program. The procedure was simple, reproducible and suitable for these bacteria. Because it reduces time and the volumes of solutions and enzymes, it is also economical. It can provide useful information about outbreaks of nosocomial infections due to the studied bacteria and the degree of their prevalence. This optimized protocol allows PFGE results from different centers to be compared with the results of other laboratories. J Clin Exp Invest 2013; 4(3): 306-312. Key words: Proteus mirabilis, molecular typing, pulsed-field gel electrophoresis.

  3. Statistical properties of the surface velocity field in the northern Gulf of Mexico sampled by GLAD drifters

    OpenAIRE

    Mariano, A.J.; Ryan, E.H.; Huntley, H.S.; Laurindo, L.C.; Coelho, E.; Ozgokmen, TM; Berta, M.; Bogucki, D; Chen, S.S.; Curcic, M.; Drouin, K.L.; Gough, M; Haus, BK; Haza, A.C.; Hogan, P

    2016-01-01

    The Grand LAgrangian Deployment (GLAD) used multiscale sampling and GPS technology to observe time series of drifter positions with initial drifter separation of O(100 m) to O(10 km), and nominal 5 min sampling, during the summer and fall of 2012 in the northern Gulf of Mexico. Histograms of the velocity field and its statistical parameters are non-Gaussian; most are multimodal. The dominant periods for the surface velocity field are 1–2 days due to inertial oscillations, tides, and the sea b...

  4. Design of 2D time-varying vector fields.

    Science.gov (United States)

    Chen, Guoning; Kwatra, Vivek; Wei, Li-Yi; Hansen, Charles D; Zhang, Eugene

    2012-10-01

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains as well as manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatial constrained optimizations at the sampled times. The key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects.
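    To make the element-based design idea above concrete, here is a minimal planar sketch (not the authors' system) that sums two hand-placed basis fields, a sink and a rotational element, and blends them over time to produce a time-varying vector field sampled on a grid. Element placement, weights and the linear temporal blend are all illustrative assumptions.

```python
import numpy as np

def radial_element(px, py, cx, cy, sign=1.0):
    """Radial basis field (source for sign=+1, sink for sign=-1) centred at (cx, cy)."""
    dx, dy = px - cx, py - cy
    r2 = dx**2 + dy**2 + 1e-9
    return sign * dx / r2, sign * dy / r2

def rotational_element(px, py, cx, cy):
    """Counter-clockwise rotational basis field about (cx, cy)."""
    dx, dy = px - cx, py - cy
    r2 = dx**2 + dy**2 + 1e-9
    return -dy / r2, dx / r2

# Sample the blended, time-varying field on a grid for a few times t in [0, 1]:
xs, ys = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
for t in (0.0, 0.5, 1.0):
    u1, v1 = radial_element(xs, ys, -0.5, 0.0, sign=-1.0)  # sink element
    u2, v2 = rotational_element(xs, ys, 0.5, 0.0)          # vortex element
    u, v = (1 - t) * u1 + t * u2, (1 - t) * v1 + t * v2    # temporal blend
    print(t, float(np.hypot(u, v).mean()))
```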

  5. SOCIAL NETWORK OPTIMIZATION A NEW METHAHEURISTIC FOR GENERAL OPTIMIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Hassan Sherafat

    2017-12-01

    Full Text Available In recent years, metaheuristics have been studied and developed as powerful techniques for hard optimization problems. Some well-known techniques in this field are Genetic Algorithms, Tabu Search, Simulated Annealing, Ant Colony Optimization, and Swarm Intelligence, which have been applied successfully to many complex optimization problems. In this paper, we introduce a new metaheuristic for solving such problems based on the concept of social networks, named Social Network Optimization (SNO). We show that a wide range of NP-hard optimization problems may be solved by SNO.

  6. System performance optimization

    International Nuclear Information System (INIS)

    Bednarz, R.J.

    1978-01-01

    System Performance Optimization has become an important and difficult field for large scientific computer centres: important because the centres must satisfy increasing user demands at the lowest possible cost, and difficult because it requires a deep understanding of hardware, software and workload. The optimization is a dynamic process depending on changes in the hardware configuration, the current level of the operating system and the user-generated workload. With the increasing complexity of computer systems and software, the field for optimization manoeuvres broadens. The hardware of two manufacturers, IBM and CDC, is discussed. Four IBM and two CDC operating systems are described. The description concentrates on the organization of the operating systems, job scheduling and I/O handling. Performance definitions, workload specification and tools for system simulation are given. The measurement tools for System Performance Optimization are described. The results of the measurements and the various methods used for operating system tuning are discussed. (Auth.)

  7. Magnetic Resonance Super-resolution Imaging Measurement with Dictionary-optimized Sparse Learning

    Directory of Open Access Journals (Sweden)

    Li Jun-Bao

    2017-06-01

    Full Text Available Magnetic Resonance Super-resolution Imaging Measurement (MRIM) is an effective way of measuring materials. MRIM has wide applications in physics, chemistry, biology, geology, medicine and materials science, especially in medical diagnosis. It is feasible to improve the resolution of MR imaging by increasing radiation intensity, but high radiation intensity and long exposure to the magnetic field harm the human body. Thus, in practical applications the resolution of hardware-based imaging has reached its limit. Software-based super-resolution technology is an effective way to improve image resolution. This work proposes a framework for a dictionary-optimized, sparse-learning-based MR super-resolution method. The framework solves the problem of sample selection for dictionary learning in sparse reconstruction. A textural-complexity-based image quality representation is proposed to choose the optimal samples for dictionary learning. Comprehensive experiments show that dictionary-optimized sparse learning improves the performance of sparse representation.

  8. Optimal placement of active braces by using PSO algorithm in near- and far-field earthquakes

    Science.gov (United States)

    Mastali, M.; Kheyroddin, A.; Samali, B.; Vahdani, R.

    2016-03-01

    One of the most important issues in tall buildings is the lateral resistance of the load-bearing systems against applied loads such as earthquake, wind and blast. Dual systems comprising core wall systems (single or multi-cell cores) and moment-resisting frames are used as resistance systems in tall buildings. In addition to the adequate stiffness provided by the dual system, most tall buildings may have to rely on various control systems to reduce the level of unwanted motions stemming from severe dynamic loads. One of the main challenges in effectively controlling the motion of a structure is the limitation in optimally distributing the required control along the structure height. In this paper, concrete shear walls are used as a secondary resistance system at three different heights, with actuators installed in the braces. The optimal actuator positions are found using the optimized PSO algorithm as well as arbitrarily. The control performance of buildings equipped and controlled with the PSO-based placement is assessed and compared with arbitrary placement of the controllers, using both near- and far-field ground motions of the Kobe and Chi-Chi earthquakes.

  9. Irradiation test of HAFM and tag gas samples at the standard neutron field of 'YAYOI'

    International Nuclear Information System (INIS)

    Iguchi, Tetsuo

    1997-03-01

    To check the accuracy of helium accumulation neutron fluence monitors (HAFM) as a new technique for fast reactor neutron dosimetry, and the applicability of tag gas activation analysis to fast reactor failed fuel detection, their samples were irradiated at the standard neutron field of the fast neutron source reactor 'YAYOI' (Nuclear Engineering Research Laboratory, University of Tokyo). Since October 1996, HAFM samples such as 93% enriched boron (B) powders of 1 mg and natural B powders of 10 mg contained in vanadium (V) capsules were intermittently irradiated at the reactor core center (Glory hole: Gy) and/or under the leakage neutron field from the reactor core (Fast column: FC). In addition, new V capsules filled with enriched B of 40 mg and Be of 100 mg, respectively, were put into an experimental hole through the blanket surrounding the core. These neutron fields were monitored by activation foils consisting of Fe, Co, Ni, Au, 235U, 237Np, etc., mainly to confirm the results obtained from the 1995 preliminary works. In particular, neutron flux distributions in the vicinity of the irradiated samples were measured in more detail. By the end of March 1997, the irradiated neutron fluence had reached the goal necessary to produce a detectable number of He atoms, more than ~10^13, in each HAFM sample. Six kinds of tag gas samples, which are mixed gases of isotopically adjusted Xe and Kr contained in SUS capsules, were separately irradiated three times at Gy under a neutron fluence of ~10^16 n/cm^2 on average. After irradiation, γ-ray spectra were measured for each sample. Depending on the composition of the tag gas mixtures, the different patterns of γ-ray peak spectra from 79Kr, 125Xe, etc. produced through tag gas activation could be clearly identified. These experimental data will be very useful for the benchmark test of tag gas activation calculations applied to fast reactor failed fuel detection. (author)

  10. Autoblocking dose-limiting normal structures within a radiation treatment field: 3-D computer optimization of 'unconventional' field arrangements

    International Nuclear Information System (INIS)

    Bates, Brian A.; Cullip, Timothy J.; Rosenman, Julian G.

    1995-01-01

    Purpose/Objective: To demonstrate that one can obtain a homogeneous dose distribution within a specified gross tumor volume (GTV) while severely limiting the dose to a structure surrounded by that tumor volume. We present three clinical examples below. Materials and Methods: Using planning CT scans from previously treated patients, we designed a variety of radiation treatment plans in which the dose-critical normal structure was blocked, even if it meant blocking some of the tumor. To deal with the resulting dose inhomogeneities within the tumor, we introduced 3D compensation. Examples presented here include (1) blocking the spinal cord segment while treating an entire vertebral body, (2) blocking both kidneys while treating the entire peritoneal cavity, and (3) blocking one parotid gland while treating the oropharynx in its entirety along with regional nodes. A series of multiple planar and non-coplanar beam templates with automatic anatomic blocking and field shaping was designed for each scenario. Three-dimensional compensators were designed that gave the most homogeneous dose distribution for the GTV. For each beam, rays were cast from the beam source through a 2D compensator grid and out through the tumor. The average tumor dose along each ray was then used to adjust the compensator thickness over successive iterations to achieve a uniform average dose. DVH calculations for the GTV, normal structures, and the 'auto-blocked' structure were made and used for inter-plan comparisons. Results: These optimized treatment plans successfully decreased the dose to the dose-limiting structure while preserving or even improving the dose distribution in the tumor volume as compared to traditional treatment plans. Conclusion: The use of 3D compensation allows one to obtain dose distributions that are, theoretically at least, far superior to those in common clinical use. Sensible beam templates, auto-blocking, auto-field shaping, and 3D compensators form a
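    The compensator iteration described above lends itself to a compact numerical sketch. The toy Python code below assumes simple exponential attenuation with a made-up coefficient mu and adjusts per-ray compensator thickness until the attenuated ray doses become uniform; it illustrates the iterative scheme only, not the clinical 3D implementation.

```python
import numpy as np

def design_compensator(ray_dose, mu=0.5, iters=50):
    """Iteratively thicken/thin a compensator so attenuated ray doses
    become uniform. ray_dose: open-field average tumour dose per ray;
    mu: attenuation coefficient of the compensator material (assumed)."""
    t = np.zeros_like(ray_dose)               # compensator thickness per ray
    for _ in range(iters):
        d = ray_dose * np.exp(-mu * t)        # dose after attenuation
        target = d.mean()
        # Thicken rays that are hot, thin rays that are cold:
        t += np.log(d / target) / mu
        t = np.maximum(t, 0.0)                # thickness cannot be negative
    return t, ray_dose * np.exp(-mu * t)

# Hypothetical open-field dose profile across a row of rays:
dose = np.array([1.00, 1.15, 1.30, 1.10, 0.95])
t, flat = design_compensator(dose)
print(t.round(3), flat.round(3))              # doses flatten to the coldest ray
```

    With the non-negativity clamp, the scheme converges to a uniform dose equal to the coldest ray's dose, which is the physically sensible limit for a purely attenuating compensator.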

  11. TRAN-STAT, Issue No. 3, January 1978. Topics discussed: some statistical aspects of compositing field samples

    International Nuclear Information System (INIS)

    Gilbert, R.O.

    1978-01-01

    Some statistical aspects of compositing field samples of soils for determining the content of Pu are discussed. Some of the potential problems involved in pooling samples are reviewed. This is followed by more detailed discussions and examples of compositing designs, adequacy of mixing, statistical models and their role in compositing, and related topics

  12. Linear models for airborne-laser-scanning-based operational forest inventory with small field sample size and highly correlated LiDAR data

    Science.gov (United States)

    Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.

    2015-01-01

    Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
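    As a rough illustration of why the SVD-plus-regularization approach tames overfitting and multicollinearity, the following sketch solves plain (non-Bayesian) ridge regression through the SVD of the predictor matrix on synthetic, strongly correlated "LiDAR-like" data. All names and values are illustrative; this is not the paper's Bayesian model.

```python
import numpy as np

def ridge_svd(X, y, lam=1.0):
    """Ridge regression solved through the SVD of the predictors.
    The shrinkage factor s/(s^2 + lam) damps directions with small
    singular values, which is where multicollinear metrics cause trouble."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    d = s / (s**2 + lam)                    # shrunken inverse singular values
    return Vt.T @ (d * (U.T @ y))           # coefficient vector

rng = np.random.default_rng(1)
n, p = 50, 30                               # few plots, many correlated metrics
base = rng.normal(size=(n, 3))              # 3 latent factors drive all predictors
X = base @ rng.normal(size=(3, p)) + 0.05 * rng.normal(size=(n, p))
beta_true = rng.normal(size=p)
y = X @ beta_true + 0.1 * rng.normal(size=n)
resid = X @ ridge_svd(X, y, lam=5.0) - y
print("relative fit error:", np.linalg.norm(resid) / np.linalg.norm(y))
```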

  13. Sample preparation optimization in fecal metabolic profiling.

    Science.gov (United States)

    Deda, Olga; Chatziioannou, Anastasia Chrysovalantou; Fasoula, Stella; Palachanis, Dimitris; Raikos, Νicolaos; Theodoridis, Georgios A; Gika, Helen G

    2017-03-15

    Metabolomic analysis of feces can provide useful insight into the metabolic status, the health/disease state of the human/animal and the symbiosis with the gut microbiome. As a result, there has recently been increased interest in the application of holistic analysis of feces for biomarker discovery. For metabolomics applications, the sample preparation process used prior to the analysis of fecal samples is of high importance, as it greatly affects the obtained metabolic profile, especially since feces as a matrix vary widely in their physicochemical characteristics and molecular content. However, there is still little information in the literature and a lack of a universal approach to sample treatment for fecal metabolic profiling. The scope of the present work was to study the conditions for sample preparation of rat feces with the ultimate goal of acquiring comprehensive metabolic profiles, either untargeted, by NMR spectroscopy and GC-MS, or targeted, by HILIC-MS/MS. A fecal sample pooled from male and female Wistar rats was extracted under various conditions by modifying the pH value, the nature of the organic solvent and the sample weight to solvent volume ratio. It was found that the 1/2 (w/v) ratio provided the highest number of metabolites under neutral and basic conditions in both untargeted profiling techniques. Concerning LC-MS profiles, neutral acetonitrile and propanol provided higher signals and wide metabolite coverage, though extraction efficiency is metabolite dependent. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Optimization of saddle coils for magnetic resonance imaging

    International Nuclear Information System (INIS)

    Salmon, Carlos Ernesto Garrido; Vidoto, Edson Luiz Gea; Martins, Mateus Jose; Tannus, Alberto

    2006-01-01

    In Nuclear Magnetic Resonance (NMR) experiments, besides the apparatus designed to acquire the NMR signal, it is necessary to generate a radio frequency electromagnetic field using a device capable of transducing electromagnetic power into a transverse magnetic field. We must generate this transverse homogeneous magnetic field inside the region of interest with minimum power consumption. Many configurations have been proposed for this task, from coils to resonators. For low field intensity (<0.5 T) and small sample dimensions (<30 cm), the saddle coil configuration has been widely used. In this work we present a simplified method for calculating the magnetic field distribution in these coils, considering the current density profile. We propose an optimized saddle configuration as a function of the dimensions of the region of interest, taking into account uniformity and sensitivity. In order to evaluate the magnetic field uniformity, three quantities were analyzed: non-uniformity, peak-to-peak homogeneity and relative uniformity. Some experimental results are presented to validate our calculation. (author)
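    A common building block for this kind of field calculation is the discretized Biot-Savart law. The sketch below uses a generic circular loop rather than the authors' saddle geometry, sums the segment contributions, and checks the result against the closed-form field at the loop centre, mu0*I/(2a); loop radius and discretization are arbitrary choices.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def biot_savart(path, point, current=1.0):
    """B field at `point` from a closed polyline `path` carrying `current`,
    via the discretized Biot-Savart law: dB = mu0 I (dl x r) / (4 pi |r|^3)."""
    dl = np.diff(path, axis=0)                       # segment vectors
    mid = 0.5 * (path[:-1] + path[1:])               # segment midpoints
    r = point - mid
    r3 = np.linalg.norm(r, axis=1, keepdims=True)**3
    dB = MU0 * current * np.cross(dl, r) / (4 * np.pi * r3)
    return dB.sum(axis=0)

# Circular loop of radius a in the xy-plane; the field at its centre should be
# mu0 I / (2 a) along z, a standard check of the discretization:
a = 0.1
theta = np.linspace(0, 2 * np.pi, 401)
loop = np.column_stack([a * np.cos(theta), a * np.sin(theta), np.zeros_like(theta)])
print(biot_savart(loop, np.array([0.0, 0.0, 0.0])), MU0 / (2 * a))
```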

  15. Ultrasonic field analysis program for transducer design in the nuclear industry

    International Nuclear Information System (INIS)

    Singh, G.P.; Rose, J.L.

    1980-02-01

    An ultrasonic field analysis program is presented that can be used for transducer design in the nuclear industry. Calculation routines that make use of Huygens' principle in a field analysis model are introduced that enable such field characteristics as axial and lateral resolution, beam symmetry, and gain variation throughout the ultrasonic field to be optimized. Mathematical details are presented along with several sample problems that show comparisons with classical results reported in the literature and with experimental data. Several sample problems that are of interest to the nuclear industry are also included, along with some that satisfy both academic and practical curiosity. These include transducer shape effects, pulse shape effects, crystal vibration variation, and an introduction to such novel transducer designs as annular arrays and dual element angle beam transducers
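    Huygens'-principle field models of this type superpose spherical wavelets from elements of the transducer face. As a hedged toy example, the following Python code computes the on-axis pressure magnitude of a circular piston and compares it with the known closed-form on-axis solution; the radius, frequency and mesh density are arbitrary assumptions, and only the axial field is evaluated.

```python
import numpy as np

# Toy parameters (assumptions): 10 mm piston radius, 2 MHz, water-like medium
a, f, c = 0.01, 2.0e6, 1500.0
k = 2 * np.pi * f / c
z = np.linspace(0.02, 0.2, 200)              # on-axis field points (m)

# Discretize the piston face into thin rings of Huygens wavelet sources:
n_r = 400
dr = a / n_r
r = (np.arange(n_r) + 0.5) * dr              # ring mid-radii
ring_area = 2 * np.pi * r * dr               # area of each ring

# Superpose spherical wavelets exp(ikd)/d from every ring:
p = np.array([abs(np.sum(np.exp(1j * k * np.sqrt(r**2 + zi**2))
                         / np.sqrt(r**2 + zi**2) * ring_area)) for zi in z])

# Closed-form on-axis magnitude for a circular piston, as a sanity check:
p_exact = (4 * np.pi / k) * np.abs(np.sin(k * (np.sqrt(a**2 + z**2) - z) / 2))
print("max relative deviation:", float(np.max(np.abs(p - p_exact)) / p_exact.max()))
```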

  16. Method Evaluation And Field Sample Measurements For The Rate Of Movement Of The Oxidation Front In Saltstone

    Energy Technology Data Exchange (ETDEWEB)

    Almond, P. M. [Savannah River Site (SRS), Aiken, SC (United States); Kaplan, D. I. [Savannah River Site (SRS), Aiken, SC (United States); Langton, C. A. [Savannah River Site (SRS), Aiken, SC (United States); Stefanko, D. B. [Savannah River Site (SRS), Aiken, SC (United States); Spencer, W. A. [Savannah River Site (SRS), Aiken, SC (United States); Hatfield, A. [Clemson University, Clemson, SC (United States); Arai, Y. [Clemson University, Clemson, SC (United States)

    2012-08-23

    The objective of this work was to develop and evaluate a series of methods and validate their capability to measure differences in oxidized versus reduced saltstone. Validated methods were then applied to samples cured under field conditions to simulate Performance Assessment (PA) needs for the Saltstone Disposal Facility (SDF). Four analytical approaches were evaluated using laboratory-cured saltstone samples. These methods were X-ray absorption spectroscopy (XAS), diffuse reflectance spectroscopy (DRS), chemical redox indicators, and thin-section leaching methods. XAS and thin-section leaching methods were validated as viable methods for studying oxidation movement in saltstone. Each method used samples that were spiked with chromium (Cr) as a tracer for oxidation of the saltstone. The two methods were subsequently applied to field-cured samples containing chromium to characterize the oxidation state of chromium as a function of distance from the exposed air/cementitious material surface.

  17. Existence of optimal controls for systems governed by mean-field ...

    African Journals Online (AJOL)

    In this paper, we study the existence of an optimal control for systems governed by stochastic differential equations of mean-field type. For nonlinear systems, we prove the existence of an optimal relaxed control by using tightness techniques and the Skorokhod selection theorem. The optimal control is a measure-valued process ...

  18. Optimization of the Extraction of the Volatile Fraction from Honey Samples by SPME-GC-MS, Experimental Design, and Multivariate Target Functions

    Directory of Open Access Journals (Sweden)

    Elisa Robotti

    2017-01-01

    Full Text Available Head space (HS) solid phase microextraction (SPME) followed by gas chromatography with mass spectrometry detection (GC-MS) is the most widespread technique to study the volatile profile of honey samples. In this paper, the experimental SPME conditions were optimized by a multivariate strategy. Both sensitivity and repeatability were optimized by experimental design techniques considering three factors: extraction temperature (from 50°C to 70°C), time of exposure of the fiber (from 20 min to 60 min), and amount of salt added (from 0 to 27.50%). Each experiment was evaluated by Principal Component Analysis (PCA), which allows all the analytes to be taken into consideration at the same time, preserving the information about their different characteristics. Optimal extraction conditions were identified independently for signal intensity (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) and repeatability (extraction temperature: 50°C; extraction time: 60 min; salt percentage: 27.50% w/w), and a final global compromise (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) was also reached. Considerations about the choice of the best internal standards were also drawn. The whole optimized procedure was then applied to the analysis of a multiflower honey sample and more than 100 compounds were identified.

  19. Poloidal field leakage optimization in ETE

    Energy Technology Data Exchange (ETDEWEB)

    Shibata, Carlos Shinya; Montes, Antonio

    1996-12-01

    A very simple but efficient numerical algorithm is used to minimize the Ohmic coil field leakage into the plasma region of the tokamak ETE. After a few iterations the code provides the positions of and the current required for two pairs of compensation coils. The resulting optimum field intensity distribution is presented and discussed. (author). 5 refs., 4 figs., 2 tabs.

  20. Poloidal field leakage optimization in ETE

    International Nuclear Information System (INIS)

    Shibata, Carlos Shinya; Montes, Antonio.

    1996-01-01

    A very simple but efficient numerical algorithm is used to minimize the Ohmic coil field leakage into the plasma region of the tokamak ETE. After a few iterations the code provides the positions of and the current required for two pairs of compensation coils. The resulting optimum field intensity distribution is presented and discussed. (author). 5 refs., 4 figs., 2 tabs

  1. SamplingStrata: An R Package for the Optimization of Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Giulio Barcaroli

    2014-11-01

    Full Text Available When designing a sampling survey, constraints are usually set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata, by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable from the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of the partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, allows the best stratification for a population frame to be determined, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.
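    SamplingStrata itself is an R package; as a language-neutral illustration of the evaluation step inside such a search, the Python sketch below computes the sample size a candidate stratification requires under Neyman allocation with a finite population correction, for a single target variable on a hypothetical frame. The genetic search over partitions, and the multivariate and multi-domain machinery, are deliberately omitted.

```python
import numpy as np

def neyman_sample_size(y, labels, target_cv=0.05):
    """Sample size required under Neyman allocation for one target variable.

    y: values of the target variable over the frame
    labels: stratum label of each frame unit (a candidate stratification)
    target_cv: required coefficient of variation of the estimated mean
    """
    N = len(y)
    V = (target_cv * y.mean())**2                 # target variance of the mean
    strata = np.unique(labels)
    W = np.array([(labels == h).mean() for h in strata])   # stratum weights
    S = np.array([y[labels == h].std(ddof=1) for h in strata])
    n = (W @ S)**2 / (V + (W @ S**2) / N)         # with finite population correction
    return int(np.ceil(n))

# Hypothetical frame, stratified by a correlated auxiliary variable x:
rng = np.random.default_rng(0)
x = rng.gamma(2.0, 10.0, size=5000)
y = 3.0 * x + rng.normal(0, 5, size=5000)
coarse = np.digitize(x, np.quantile(x, [0.5]))             # 2 strata
fine = np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))   # 4 strata
print(neyman_sample_size(y, coarse), neyman_sample_size(y, fine))
```

    A search such as SamplingStrata's genetic algorithm would repeatedly merge or split atomic strata and keep the partitions that drive this required sample size (the cost) down while still meeting the precision constraints.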

  2. Analysis of urinary neurotransmitters by capillary electrophoresis: Sensitivity enhancement using field-amplified sample injection and molecular imprinted polymer solid phase extraction

    International Nuclear Information System (INIS)

    Claude, Berengere; Nehme, Reine; Morin, Philippe

    2011-01-01

    Highlights: → Field-amplified sample injection (FASI) improves the sensitivity of capillary electrophoresis through online pre-concentration of samples. → The cationic analytes are stacked at the capillary inlet between a zone of low conductivity - sample and pre-injection plug - and a zone of high conductivity - running buffer. → The limits of quantification are 500 times lower than those obtained with hydrodynamic injection. → The presence of salts in the matrix greatly reduces the sensitivity of the FASI/CE-UV method. - Abstract: Capillary electrophoresis (CE) has been investigated for the analysis of some neurotransmitters, dopamine (DA), 3-methoxytyramine (3-MT) and serotonin (5-hydroxytryptamine, 5-HT), at nanomolar concentrations in urine. Field-amplified sample injection (FASI) has been used to improve the sensitivity through online pre-concentration of samples. The cationic analytes were stacked at the capillary inlet between a zone of low conductivity - sample and pre-injection plug - and a zone of high conductivity - running buffer. Several FASI parameters have been optimized (ionic strength of the running buffer, concentration of the sample protonation agent, composition of the sample solvent and nature of the pre-injection plug). Best results were obtained using H3PO4-LiOH (pH 4, ionic strength of 80 mmol/L) as running buffer, 100 μmol/L of H3PO4 in methanol-water 90/10 (v/v) as sample solvent and 100 μmol/L of H3PO4 in water for the pre-injection plug. In these conditions, the linearity was verified in the 50-300 nmol/L concentration range for DA, 3-MT and 5-HT with a determination coefficient (r2) higher than 0.99. The limits of quantification (10 nmol/L for DA and 3-MT, 5.9 nmol/L for 5-HT) were 500 times lower than those obtained with hydrodynamic injection. However, if this method is applied to the analysis of neurotransmitters in urine, the presence of salts in the matrix greatly reduces the sensitivity

  3. Analysis of urinary neurotransmitters by capillary electrophoresis: Sensitivity enhancement using field-amplified sample injection and molecular imprinted polymer solid phase extraction

    Energy Technology Data Exchange (ETDEWEB)

    Claude, Berengere, E-mail: berengere.claude@univ-orleans.fr [Institut de Chimie Organique et Analytique, CNRS FR 2708 UMR 6005, Universite d' Orleans, 45067 Orleans (France); Nehme, Reine; Morin, Philippe [Institut de Chimie Organique et Analytique, CNRS FR 2708 UMR 6005, Universite d' Orleans, 45067 Orleans (France)

    2011-08-12

    Highlights: → Field-amplified sample injection (FASI) improves the sensitivity of capillary electrophoresis through online pre-concentration of samples. → The cationic analytes are stacked at the capillary inlet between a zone of low conductivity - sample and pre-injection plug - and a zone of high conductivity - running buffer. → The limits of quantification are 500 times lower than those obtained with hydrodynamic injection. → The presence of salts in the matrix greatly reduces the sensitivity of the FASI/CE-UV method. - Abstract: Capillary electrophoresis (CE) has been investigated for the analysis of some neurotransmitters, dopamine (DA), 3-methoxytyramine (3-MT) and serotonin (5-hydroxytryptamine, 5-HT), at nanomolar concentrations in urine. Field-amplified sample injection (FASI) has been used to improve the sensitivity through online pre-concentration of samples. The cationic analytes were stacked at the capillary inlet between a zone of low conductivity - sample and pre-injection plug - and a zone of high conductivity - running buffer. Several FASI parameters have been optimized (ionic strength of the running buffer, concentration of the sample protonation agent, composition of the sample solvent and nature of the pre-injection plug). Best results were obtained using H3PO4-LiOH (pH 4, ionic strength of 80 mmol/L) as running buffer, 100 μmol/L of H3PO4 in methanol-water 90/10 (v/v) as sample solvent and 100 μmol/L of H3PO4 in water for the pre-injection plug. In these conditions, the linearity was verified in the 50-300 nmol/L concentration range for DA, 3-MT and 5-HT with a determination coefficient (r2) higher than 0.99. The limits of quantification (10 nmol/L for DA and 3-MT, 5.9 nmol/L for 5-HT) were 500 times lower than those obtained with hydrodynamic injection. However, if this method is applied to the analysis of

  4. Evaluation and optimization of DNA extraction and purification procedures for soil and sediment samples.

    Science.gov (United States)

    Miller, D N; Bryant, J E; Madsen, E L; Ghiorse, W C

    1999-11-01

    We compared and statistically evaluated the effectiveness of nine DNA extraction procedures by using frozen and dried samples of two silt loam soils and a silt loam wetland sediment with different organic matter contents. The effects of different chemical extractants (sodium dodecyl sulfate [SDS], chloroform, phenol, Chelex 100, and guanidinium isothiocyanate), different physical disruption methods (bead mill homogenization and freeze-thaw lysis), and lysozyme digestion were evaluated based on the yield and molecular size of the recovered DNA. Pairwise comparisons of the nine extraction procedures revealed that bead mill homogenization with SDS combined with either chloroform or phenol optimized both the amount of DNA extracted and the molecular size of the DNA (maximum size, 16 to 20 kb). Neither lysozyme digestion before SDS treatment nor guanidine isothiocyanate treatment nor addition of Chelex 100 resin improved the DNA yields. Bead mill homogenization in a lysis mixture containing chloroform, SDS, NaCl, and phosphate-Tris buffer (pH 8) was found to be the best physical lysis technique when DNA yield and cell lysis efficiency were used as criteria. The bead mill homogenization conditions were also optimized for speed and duration with two different homogenizers. Recovery of high-molecular-weight DNA was greatest when we used lower speeds and shorter times (30 to 120 s). We evaluated four different DNA purification methods (silica-based DNA binding, agarose gel electrophoresis, ammonium acetate precipitation, and Sephadex G-200 gel filtration) for DNA recovery and removal of PCR inhibitors from crude extracts. Sephadex G-200 spin column purification was found to be the best method for removing PCR-inhibiting substances while minimizing DNA loss during purification. Our results indicate that for these types of samples, optimum DNA recovery requires brief, low-speed bead mill homogenization in the presence of a phosphate-buffered SDS-chloroform mixture, followed

  5. Handbook of optimization in telecommunications

    CERN Document Server

    Pardalos, Panos M

    2008-01-01

    Covers the field of optimization in telecommunications, and the optimization developments that are frequently applied to telecommunications. This book aims to provide a reference tool for scientists and engineers in telecommunications who depend upon optimization.

  6. Optimization of potential field method parameters through networks for swarm cooperative manipulation tasks

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2016-10-01

    Full Text Available An interesting current research field related to autonomous robots is mobile manipulation performed by cooperating robots (in terrestrial, aerial and underwater environments). Focusing on the underwater scenario, cooperative manipulation by Intervention-Autonomous Underwater Vehicles (I-AUVs) is a complex and difficult application compared with terrestrial or aerial ones because of many technical issues, such as underwater localization and limited communication. A decentralized approach for cooperative mobile manipulation of I-AUVs based on Artificial Neural Networks (ANNs) is proposed in this article. This strategy exploits the potential field method; a multi-layer control structure is developed to manage the coordination of the swarm, the guidance and navigation of the I-AUVs, and the manipulation task. In this article, the new strategy has been implemented in a simulation environment, simulating the transportation of an object. The object is moved along a desired trajectory in an unknown environment and is transported by four underwater mobile robots, each provided with a seven-degrees-of-freedom robotic arm. The simulation results are optimized thanks to the ANNs used for tuning the potentials.
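    The potential field method referenced above can be sketched compactly. The toy planar example below uses illustrative gains and geometry, with none of the ANN tuning, swarm coordination or I-AUV dynamics of the article: it simply descends the classic attractive-plus-repulsive potential toward a goal while avoiding a point obstacle.

```python
import numpy as np

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.05, d0=1.0, step=0.05):
    """One gradient-descent step on the classic attractive + repulsive potential."""
    force = -k_att * (pos - goal)          # -grad of U_att = 0.5*k_att*|pos-goal|^2
    for ob in obstacles:
        diff = pos - ob
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                  # repulsion acts only within radius d0
            # -grad of U_rep = 0.5*k_rep*(1/d - 1/d0)^2:
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return pos + step * force

pos, goal = np.array([0.0, 0.0]), np.array([4.0, 3.0])
obstacles = [np.array([2.0, 1.2])]         # a single point obstacle near the path
for i in range(500):
    pos = potential_step(pos, goal, obstacles)
    if np.linalg.norm(pos - goal) < 0.05:
        break
print(i, pos.round(2))                     # steps taken and final position
```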

  7. Field portable low temperature porous layer open tubular cryoadsorption headspace sampling and analysis part I: Instrumentation.

    Science.gov (United States)

    Bruno, Thomas J

    2016-01-15

    Building on the successful application in the laboratory of PLOT-cryoadsorption as a means of collecting vapor (or headspace) samples for chromatographic analysis, in this paper a field portable apparatus is introduced. This device fits inside of a briefcase (aluminum tool carrier), and can be easily transported by vehicle or by air. The portable apparatus functions entirely on compressed air, making it suitable for use in locations lacking electrical power, and for use in flammable and explosive environments. The apparatus consists of four aspects: a field capable PLOT-capillary platform, the supporting equipment platform, the service interface between the PLOT-capillary and the supporting equipment, and the necessary peripherals. Vapor sampling can be done with either a hand piece (containing the PLOT capillary) or with a custom fabricated standoff module. Both the hand piece and the standoff module can be heated and cooled to facilitate vapor collection and subsequent vapor sample removal. The service interface between the support platform and the sampling units makes use of a unique counter current approach that minimizes loss of cooling and heating due to heat transfer with the surroundings (recuperative thermostatting). Several types of PLOT-capillary elements and sampling probes are described in this report. Applications to a variety of samples relevant to forensic and environmental analysis are discussed in a companion paper. Published by Elsevier B.V.

  8. Application of field synergy principle for optimization fluid flow and convective heat transfer in a tube bundle of a pre-heater

    International Nuclear Information System (INIS)

    Hamid, Mohammed O.A.; Zhang, Bo; Yang, Luopeng

    2014-01-01

    The big problems facing solar-assisted MED (multiple-effect distillation) desalination units are low efficiency and bulky heat exchangers, which worsen their economic feasibility. In an attempt to develop heat transfer technologies with high energy efficiency, a mathematical study is established, and an optimization analysis using the FSP (field synergy principle) is proposed to guide heat transfer enhancement of a pre-heater in a solar-assisted MED desalination unit. Numerical simulations are performed on the fluid flow and heat transfer characteristics in circular and elliptical tube bundles. The numerical results are analyzed using the concepts of synergy angle and synergy number as indications of the synergy between the velocity vector and temperature gradient fields. Heat transfer in the elliptical tube bundle is enhanced significantly with increasing initial velocity of the feed seawater, increasing field synergy number and decreasing synergy angle. Under the same operating conditions for the two designs, the total average synergy angle is 78.97° in the circular and 66.31° in the elliptical tube bundle. Optimization of the pre-heater by the FSP shows that with the elliptical tube bundle design, the average synergy number and heat transfer rate are increased by 22.68% and 35.98%, respectively. - Highlights: • FSP (field synergy principle) is used to investigate heat transfer enhancement. • Numerical simulations are performed in circular and elliptical tube pre-heaters. • Numerical results are analyzed using the concepts of synergy angle and synergy number. • Optimization of the elliptical tube bundle by FSP has better performance
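    The synergy angle at the heart of FSP is simply the local angle between the velocity vector and the temperature gradient. A minimal sketch, assuming made-up analytic fields on a unit square, computes it pointwise and averages over the domain:

```python
import numpy as np

def synergy_angle_deg(u, v, dTdx, dTdy):
    """Local synergy angle between velocity (u, v) and temperature gradient."""
    dot = u * dTdx + v * dTdy
    norm = np.hypot(u, v) * np.hypot(dTdx, dTdy) + 1e-12
    return np.degrees(np.arccos(np.clip(dot / norm, -1.0, 1.0)))

# Made-up fields on a unit square: uniform flow, oblique temperature gradient
x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
u, v = np.ones_like(x), np.zeros_like(x)          # flow in +x direction
T = x + 0.5 * y                                   # so grad T = (1, 0.5)
dTdy, dTdx = np.gradient(T, y[:, 0], x[0, :])     # gradients along rows, columns
theta = synergy_angle_deg(u, v, dTdx, dTdy)
print("average synergy angle:", theta.mean().round(2), "deg")  # arctan(0.5) ~ 26.57
```

    Smaller angles mean the flow is better aligned with the temperature gradient, which is exactly the condition FSP associates with enhanced convective heat transfer.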

  9. Some statistical and sampling needs for detecting spills or migration at commercial low-level radioactive waste disposal sites

    International Nuclear Information System (INIS)

    Thomas, J.M.; Eberhardt, L.L.; Skalski, J.R.; Simmons, M.A.

    1984-05-01

    As part of a larger study funded by the US Nuclear Regulatory Commission we have been investigating field sampling strategies and compositing as a means of detecting spills or migration at commercial low-level radioactive waste disposal sites. The overall project is designed to produce information for developing guidance on implementing 10 CFR part 61. Compositing (pooling samples) for detection is discussed first, followed by our development of a statistical test to allow a decision as to whether any component of a composite exceeds a prescribed maximum acceptable level. The question of optimal field sampling designs and an Apple computer program designed to show the difficulties in constructing efficient field designs and using compositing schemes are considered. 6 references, 3 figures, 3 tables
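    The paper's statistical test is not reproduced in the abstract. As a simpler, conservative stand-in that conveys the compositing idea, the sketch below bounds the hottest possible single aliquot in a composite of k equal aliquots by k times the composite concentration and screens that bound against an action level; the numbers are hypothetical.

```python
def composite_screen(composite_conc, k, action_level):
    """Conservative screen for a composite of k equal aliquots.

    If k aliquots are pooled in equal parts, the hottest single aliquot
    can contribute at most k * composite_conc, so the composite clears
    the action level only when that worst-case bound stays below it."""
    worst_case = k * composite_conc
    return worst_case < action_level, worst_case

# A 10-aliquot composite measuring 0.4 (units) against an action level of 5:
ok, bound = composite_screen(0.4, 10, 5.0)
print(ok, bound)  # True: even the worst-case single aliquot is 4.0 < 5.0
```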

  10. Modelling and comparison of trapped fields in (RE)BCO bulk superconductors for activation using pulsed field magnetization

    Science.gov (United States)

    Ainslie, M. D.; Fujishiro, H.; Ujiie, T.; Zou, J.; Dennis, A. R.; Shi, Y.-H.; Cardwell, D. A.

    2014-06-01

    The ability to generate a permanent, stable magnetic field unsupported by an electromotive force is fundamental to a variety of engineering applications. Bulk high temperature superconducting (HTS) materials can trap magnetic fields of magnitude over ten times higher than the maximum field produced by conventional magnets, which is limited practically to rather less than 2 T. In this paper, two large c-axis oriented, single-grain YBCO and GdBCO bulk superconductors are magnetized by the pulsed field magnetization (PFM) technique at temperatures of 40 and 65 K, and the characteristics of the resulting trapped field profile are investigated with a view to magnetizing such samples as trapped field magnets (TFMs) in situ inside a trapped flux-type superconducting electric machine. A comparison is made between the temperatures at which the pulsed magnetic field is applied, and the results have strong implications for the optimum operating temperature for TFMs in trapped flux-type superconducting electric machines. The effects of inhomogeneities, which occur during the growth process of single-grain bulk superconductors, on the trapped field and maximum temperature rise in the sample are modelled numerically using a 3D finite-element model based on the H-formulation and implemented in Comsol Multiphysics 4.3a. The results agree qualitatively with the observed experimental results, in that inhomogeneities act to distort the trapped field profile and reduce the magnitude of the trapped field due to localized heating within the sample and preferential movement and pinning of flux lines around the growth section regions (GSRs) and growth sector boundaries (GSBs), respectively. The modelling framework will allow further investigation of various inhomogeneities that arise during the processing of (RE)BCO bulk superconductors, including inhomogeneous Jc distributions and the presence of current-limiting grain boundaries and cracks, and it can be used to assist optimization of

  11. Application of Chitosan-Zinc Oxide Nanoparticles for Lead Extraction From Water Samples by Combining Ant Colony Optimization with Artificial Neural Network

    Science.gov (United States)

    Khajeh, M.; Pourkarami, A.; Arefnejad, E.; Bohlooli, M.; Khatibi, A.; Ghaffari-Moghaddam, M.; Zareian-Jahromi, S.

    2017-09-01

    Chitosan-zinc oxide nanoparticles (CZPs) were developed for solid-phase extraction. Combined artificial neural network-ant colony optimization (ANN-ACO) was used for the simultaneous preconcentration and determination of lead (Pb2+) ions in water samples prior to graphite furnace atomic absorption spectrometry (GF AAS). The solution pH, mass of adsorbent CZPs, amount of 1-(2-pyridylazo)-2-naphthol (PAN), which was used as a complexing agent, eluent volume, eluent concentration, and flow rates of sample and eluent were used as input parameters of the ANN model, and the percentage of extracted Pb2+ ions was used as the output variable of the model. A multilayer perception network with a back-propagation learning algorithm was used to fit the experimental data. The optimum conditions were obtained based on the ACO. Under the optimized conditions, the limit of detection for Pb2+ ions was found to be 0.078 μg/L. This procedure was also successfully used to determine the amounts of Pb2+ ions in various natural water samples.

  12. Optimization and approximation

    CERN Document Server

    Pedregal, Pablo

    2017-01-01

    This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.

  13. Field sampling and travel report

    Science.gov (United States)

    Dr. Sigua was involved with two field visits of watersheds with different livestock production systems (poultry, swine, and beef/dairy cattle); one in the sub-basins of Pinhal River Watershed (October 23, 2008) and at the micro-basins of the Rio Pine Forest (October 29, 2008) where studies of assess...

  14. Research of Dust Field Optimization Distribution Based on Parameters Change of Air Duct Outlet in Fully Mechanized Excavation Face of Coal Mine

    Science.gov (United States)

    Gong, Xiao-Yan; Xia, Zhi-Xin; Wu, Yue; Mo, Jin-Ming; Zhang, Xin-Yi

    2017-12-01

    Aiming at the problem of dust accumulation and sharply rising pollution risk in the fully mechanized excavation face, caused by unreasonable air duct outlet airflow during long-distance driving, this paper proposes a new idea for optimizing the dust distribution by changing the angle, caliber, and front-rear distance of the air duct outlet. Taking the fully mechanized excavation face of the Ningtiaota coal mine, located in Shaanxi province, as the research object, a numerical simulation scheme of the dust field was established, the safety hazards of the original dust field distribution were simulated and analyzed, and numerical simulation and optimization analysis of the dust distribution under changes of the angle, caliber, and front-rear distance of the air duct outlet were carried out. The adjustment scheme of the optimized dust distribution was obtained, which provides a theoretical basis for reducing the probability of dust explosion and the degree of pollution.

  15. Modeling Optimal Cutoffs for the Brazilian Household Food Insecurity Measurement Scale in a Nationwide Representative Sample.

    Science.gov (United States)

    Interlenghi, Gabriela S; Reichenheim, Michael E; Segall-Corrêa, Ana M; Pérez-Escamilla, Rafael; Moraes, Claudia L; Salles-Costa, Rosana

    2017-07-01

    Background: This is the second part of a model-based approach to examine the suitability of the current cutoffs applied to the raw score of the Brazilian Household Food Insecurity Measurement Scale [Escala Brasileira de Insegurança Alimentar (EBIA)]. The approach allows identification of homogeneous groups who correspond to severity levels of food insecurity (FI) and, by extension, discriminant cutoffs able to accurately distinguish these groups. Objective: This study aims to examine whether the model-based approach for identifying optimal cutoffs first implemented in a local sample is replicated in a countrywide representative sample. Methods: Data were derived from the Brazilian National Household Sample Survey of 2013 (n = 116,543 households). Latent class factor analysis (LCFA) models from 2 to 5 classes were applied to the scale's items to identify the number of underlying FI latent classes. Next, identification of optimal cutoffs on the overall raw score was ascertained from these identified classes. Analyses were conducted on the aggregate data and by macroregions. Finally, model-based classifications (latent classes and the groupings identified thereafter) were contrasted with the traditionally used classification. Results: LCFA identified 4 homogeneous groups with a very high degree of class separation (entropy = 0.934-0.975). The following cutoffs were identified in the aggregate data for households with children and/or adolescents: between 1 and 2 (1/2), 5 and 6 (5/6), and 10 and 11 (10/11); this categorization emerged consistently in all analyses. Conclusions: Nationwide findings corroborate previous local evidence that households with an overall score of 1 are more akin to those scoring negative on all items. These results may contribute to guide experts' and policymakers' decisions on the most appropriate EBIA cutoffs. © 2017 American Society for Nutrition.

  16. Optimization of microwave-assisted extraction with saponification (MAES) for the determination of polybrominated flame retardants in aquaculture samples.

    Science.gov (United States)

    Fajar, N M; Carro, A M; Lorenzo, R A; Fernandez, F; Cela, R

    2008-08-01

    The efficiency of microwave-assisted extraction with saponification (MAES) for the determination of seven polybrominated flame retardants (polybrominated biphenyls, PBBs; and polybrominated diphenyl ethers, PBDEs) in aquaculture samples is described and compared with microwave-assisted extraction (MAE). Chemometric techniques based on experimental designs and desirability functions were used for simultaneous optimization of the operational parameters used in both MAES and MAE processes. The application of MAES to this group of contaminants in aquaculture samples, to which it had not previously been applied, was shown to be superior to MAE in terms of extraction efficiency, extraction time and lipid content extracted from complex matrices (0.7% as against 18.0% for MAE extracts). PBBs and PBDEs were determined by gas chromatography with micro-electron capture detection (GC-μECD). The quantification limits for the analytes were 40-750 pg/g (except for BB-15, which was 1.43 ng/g). Precision for MAES-GC-μECD (%RSD < 11%) was significantly better than for MAE-GC-μECD (%RSD < 20%). The accuracy of both optimized methods was satisfactorily demonstrated by analysis of an appropriate certified reference material (CRM), WMF-01.

  17. Design of 2D Time-Varying Vector Fields

    KAUST Repository

    Chen, Guoning

    2012-10-01

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains as well as manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatial constrained optimizations at the sampled times. The key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects. © 1995-2012 IEEE.

  18. Design of 2D Time-Varying Vector Fields

    KAUST Repository

    Chen, Guoning; Kwatra, Vivek; Wei, Li-Yi; Hansen, Charles D.; Zhang, Eugene

    2012-01-01

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains as well as manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatial constrained optimizations at the sampled times. The key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects. © 1995-2012 IEEE.

  19. Gas chromatographic-mass spectrometric analysis of urinary volatile organic metabolites: Optimization of the HS-SPME procedure and sample storage conditions.

    Science.gov (United States)

    Živković Semren, Tanja; Brčić Karačonji, Irena; Safner, Toni; Brajenović, Nataša; Tariba Lovaković, Blanka; Pizent, Alica

    2018-01-01

    Non-targeted metabolomics research of the human volatile urinary metabolome can be used to identify potential biomarkers associated with changes in metabolism related to various health disorders. To ensure reliable analysis of urinary volatile organic metabolites (VOMs) by gas chromatography-mass spectrometry (GC-MS), parameters affecting the headspace solid-phase microextraction (HS-SPME) procedure were evaluated and optimized. The influence of incubation and extraction temperatures and times, fibre coating material, and salt addition on SPME efficiency was investigated by multivariate optimization methods using reduced factorial and Doehlert matrix designs. The results showed optimum values to be 60°C for temperature, 50 min for extraction time, and 35 min for incubation time. The proposed conditions were applied to investigate the stability of urine samples under different storage conditions and freeze-thaw processes. The sum of peak areas of urine samples stored at 4°C, -20°C, and -80°C for up to six months showed a time-dependent decrease, although storage at -80°C resulted in only a slight, non-significant reduction compared to the fresh sample. However, due to the volatile nature of the analysed compounds, more than two freeze/thaw cycles of a sample stored for six months at -80°C should be avoided whenever possible. Copyright © 2017 Elsevier B.V. All rights reserved.
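
    A Doehlert matrix design of the kind used here places experimental runs on a uniform shell around a centre point, so two factors can be screened with only seven runs. The sketch below shows the standard two-factor coded design; the factor ranges are illustrative assumptions, not the study's actual design space.

```python
import numpy as np

# Coded two-factor Doehlert design: a hexagon of six points plus a centre point.
doehlert_coded = np.array([
    [ 0.0,  0.0  ],
    [ 1.0,  0.0  ],
    [ 0.5,  0.866],
    [-0.5,  0.866],
    [-1.0,  0.0  ],
    [-0.5, -0.866],
    [ 0.5, -0.866],
])

def decode(coded, centre, half_range):
    """Map coded levels in [-1, 1] to real factor values."""
    return centre + coded * half_range

# Hypothetical ranges: extraction temperature 40-80 °C, extraction time 20-80 min.
temps = decode(doehlert_coded[:, 0], centre=60.0, half_range=20.0)
times = decode(doehlert_coded[:, 1], centre=50.0, half_range=30.0)

for T, t in zip(temps, times):
    print(f"run at {T:5.1f} °C for {t:5.1f} min")
```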

  20. Radiological optimization

    International Nuclear Information System (INIS)

    Zeevaert, T.

    1998-01-01

    Radiological optimization is one of the basic principles in each radiation-protection system and a basic requirement in the safety standards for radiation protection in the European Communities. The objectives of the research performed in this field at the Belgian Nuclear Research Centre SCK-CEN are: (1) to implement the ALARA principles in activities with radiological consequences; (2) to develop methodologies for optimization techniques in decision-aiding; (3) to optimize radiological assessment models by validation and intercomparison; (4) to improve methods to assess in real time the radiological hazards in the environment in case of an accident; (5) to develop methods and programmes to assist decision-makers during a nuclear emergency; (6) to support the policy of radioactive waste management authorities in the field of radiation protection; (7) to investigate existing software programmes in the domain of multi-criteria analysis. The main achievements for 1997 are given.

  1. [Orthogonal design method to optimize rehabilitation prescription of pulsed electric field at Jiaji (EX-B 2) points for spinal cord injury].

    Science.gov (United States)

    Zhang, Lifeng; Zhang, Hui; Wang, Lin; Liu, Yanyan; Sun, Xianyue; Li, Lingyan; Hou, Jing

    2015-01-01

    Using the orthogonal design method to optimize the prescription of pulsed electric field at Jiaji (EX-B 2) points for spinal cord injury (SCI), fifty-six patients with SCI were selected, of whom 36 were assigned to the orthogonal design trial and 20 to clinical verification. For the 36 patients in the orthogonal design trial, the Frankel grading scale was used as the observation index to screen the optimal prescription of pulsed electric field. Pulse frequency (factor A) included low frequency (A(I), 10^2 Hz), moderate frequency (A(II), 10^4 Hz) and high frequency (A(III), 10^3 Hz); pulse amplitude (factor B) included 0-30 V (B(I)), 0-60 V (B(II)) and 0-90 V (B(III)); pulse width (factor C) included 0.4 ms (C(I)), 0.6 ms (C(II)) and 0.9 ms (C(III)); acupuncture time (factor D) included one month (D(I)), three months (D(II)) and five months (D(III)). Twenty patients were used for clinical efficacy observation, and the effects of the screened optimal prescription of pulsed electric field at Jiaji (EX-B 2) points combined with regular rehabilitation training on spasm severity, scores of sensory and motor functions, Barthel index and Frankel score were observed. (1) The orthogonal design trial gave the optimal prescription A(III) B(III) C(I) D(III), i.e. high frequency (10^3 Hz), 0-90 V of pulse amplitude, 0.4 ms of pulse width and 5 months of treatment time. (2) In the clinical verification with 20 patients, Ashworth score, tendon reflex and clonus were all significantly improved (P<0.05). The optimal prescription of pulsed electric field at Jiaji (EX-B 2) points for spinal cord injury is high frequency (10^3 Hz), 0-90 V of pulse amplitude, 0.4 ms of pulse width and 5 months of treatment time. The optimal prescription of pulsed electric field at Jiaji (EX-B 2) points combined with regular rehabilitation could obviously improve spasm severity, enhance sensory and motor functions, and ameliorate activity of daily life.
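
    Orthogonal-design screening of this kind evaluates four three-level factors in only nine runs and picks each factor's best level from its marginal mean response. Below is a minimal sketch with the standard L9(3^4) array; the response values are invented, since the trial's raw Frankel-scale data are not given in the record.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs x 4 factors, levels coded 1-3.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

# Hypothetical improvement score observed for each of the nine runs.
response = np.array([2.0, 3.1, 3.5, 2.8, 4.0, 3.3, 3.9, 3.0, 3.6])

# For each factor, average the response over the runs at each level; the
# level with the highest marginal mean joins the optimal prescription.
for f, name in enumerate(["A frequency", "B amplitude", "C width", "D duration"]):
    means = [response[L9[:, f] == lvl].mean() for lvl in (1, 2, 3)]
    print(f"factor {name}: level means {np.round(means, 2)}, "
          f"best level {int(np.argmax(means)) + 1}")
```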

  2. Decision support tool for soil sampling of heterogeneous pesticide (chlordecone) pollution.

    Science.gov (United States)

    Clostre, Florence; Lesueur-Jannoyer, Magalie; Achard, Raphaël; Letourmy, Philippe; Cabidoche, Yves-Marie; Cattan, Philippe

    2014-02-01

    When field pollution is heterogeneous due to localized pesticide application, as is the case for chlordecone (CLD), the mean level of pollution is difficult to assess. Our objective was to design a decision support tool to optimize soil sampling. We analyzed the heterogeneity of CLD soil content at 0-30 and 30-60 cm depth, within and between nine plots (0.4 to 1.8 ha) on andosol and ferralsol. We determined that 20 pooled subsamples per plot were a satisfactory compromise with respect to both cost and accuracy. Globally, CLD content was greater for andosols and for the upper soil horizon (0-30 cm). Soil organic carbon cannot account for CLD intra-field variability. Cropping systems and tillage practices influence the CLD content and distribution: CLD pollution was higher under intensive banana cropping systems, and while the upper soil horizon was more polluted than the lower one under shallow tillage, deeper tillage homogenized the pollution in the soil profile. The decision tool we propose compiles and organizes these results to better assess CLD soil pollution in terms of sampling depth, distance, and unit at field scale. It accounts for sampling objectives, farming practices (cropping system, tillage), type of soil, and topographical characteristics (slope) to design a relevant sampling plan. This decision support tool is also adaptable to other types of heterogeneous agricultural pollution at field level.
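
    The cost-accuracy compromise behind the 20-subsample recommendation can be framed with the usual sample-size approximation n ≈ (z·CV/E)^2, where CV is the within-plot coefficient of variation and E the acceptable relative error. A sketch under simple assumptions (independent subsamples, normal approximation; the CV value below is invented for illustration):

```python
import math

def pooled_subsamples(cv, rel_error, z=1.96):
    """Approximate number of subsamples so that the plot mean falls within
    rel_error of the true value at ~95% confidence (normal approximation)."""
    return math.ceil((z * cv / rel_error) ** 2)

# With ~45% within-plot variability and a 20% acceptable relative error,
# roughly 20 pooled subsamples are needed.
print(pooled_subsamples(cv=0.45, rel_error=0.20))  # -> 20
```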

  3. Near Field and Far Field Effects in the Taguchi-Optimized Design of an InP/GaAs-Based Double Wafer-Fused MQW Long-Wavelength Vertical-Cavity Surface-Emitting Laser

    Science.gov (United States)

    Menon, P. S.; Kandiah, K.; Mandeep, J. S.; Shaari, S.; Apte, P. R.

    Long-wavelength VCSELs (LW-VCSEL) operating in the 1.55 μm wavelength regime offer the advantages of low dispersion and optical loss in fiber optic transmission systems which are crucial in increasing data transmission speed and reducing implementation cost of fiber-to-the-home (FTTH) access networks. LW-VCSELs are attractive light sources because they offer unique features such as low power consumption, narrow beam divergence and ease of fabrication for two-dimensional arrays. This paper compares the near field and far field effects of the numerically investigated LW-VCSEL for various design parameters of the device. The optical intensity profile far from the device surface, in the Fraunhofer region, is important for the optical coupling of the laser with other optical components. The near field pattern is obtained from the structure output whereas the far-field pattern is essentially a two-dimensional fast Fourier Transform (FFT) of the near-field pattern. Design parameters such as the number of wells in the multi-quantum-well (MQW) region, the thickness of the MQW and the effect of using Taguchi's orthogonal array method to optimize the device design parameters on the near/far field patterns are evaluated in this paper. We have successfully increased the peak lasing power from an initial 4.84 mW to 12.38 mW at a bias voltage of 2 V and optical wavelength of 1.55 μm using Taguchi's orthogonal array. As a result of the Taguchi optimization and fine tuning, the device threshold current is found to increase along with a slight decrease in the modulation speed due to increased device widths.
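
    The far-field relation used above (the Fraunhofer pattern as a two-dimensional FFT of the near-field pattern) is straightforward to express in code. A minimal sketch with a toy Gaussian mode standing in for the simulated VCSEL near field; grid size and padding are arbitrary choices:

```python
import numpy as np

def far_field_intensity(near_field, pad=4):
    """Fraunhofer (far-field) intensity as the 2D FFT of the near-field
    amplitude; zero-padding the aperture refines the angular sampling."""
    n = near_field.shape[0]
    padded = np.zeros((pad * n, pad * n), dtype=complex)
    padded[:n, :n] = near_field
    return np.abs(np.fft.fftshift(np.fft.fft2(padded))) ** 2

# Toy near field: a Gaussian fundamental mode on a 64 x 64 aperture grid.
x = np.linspace(-1.0, 1.0, 64)
X, Y = np.meshgrid(x, x)
near = np.exp(-(X**2 + Y**2) / 0.1)
intensity = far_field_intensity(near)
```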

  4. Dark field electron holography for strain measurement

    Energy Technology Data Exchange (ETDEWEB)

    Beche, A., E-mail: armand.beche@fei.com [CEA-Grenoble, INAC/SP2M/LEMMA, F-38054 Grenoble (France); Rouviere, J.L. [CEA-Grenoble, INAC/SP2M/LEMMA, F-38054 Grenoble (France); Barnes, J.P.; Cooper, D. [CEA-LETI, Minatec Campus, F-38054 Grenoble (France)

    2011-02-15

    Dark field electron holography is a new TEM-based technique for measuring strain with nanometer scale resolution. Here we present the procedure to align a transmission electron microscope and obtain dark field holograms, as well as the theoretical background necessary to reconstruct strain maps from holograms. A series of experimental parameters such as biprism voltage, sample thickness, exposure time, tilt angle and choice of diffracted beam are then investigated on a silicon-germanium layer epitaxially embedded in a silicon matrix in order to obtain optimal dark field holograms over a large field of view with good spatial resolution and strain sensitivity. -- Research Highlights: → Step-by-step explanation of the dark field electron holography technique. → Presentation of the theoretical equations to obtain quantitative strain maps. → Description of experimental parameters influencing dark field holography results. → Quantitative strain measurement on a SiGe layer embedded in a silicon matrix.
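
    Strain reconstruction from a hologram rests on isolating one interference sideband in Fourier space and differentiating the recovered phase. The sketch below follows that standard pipeline; the strain relation eps_xx ≈ -(1/(2πg))·∂φ/∂x is textbook holography practice, the carrier position and mask width are placeholders, and this is not the authors' code:

```python
import numpy as np

def phase_from_hologram(holo, carrier, half_width):
    """Recover the phase by masking one sideband in Fourier space,
    re-centring it, and inverse transforming."""
    F = np.fft.fftshift(np.fft.fft2(holo))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    ky, kx = carrier                    # sideband offset from the centre
    mask = np.zeros_like(F)
    mask[cy + ky - half_width:cy + ky + half_width,
         cx + kx - half_width:cx + kx + half_width] = 1.0
    side = np.roll(F * mask, (-ky, -kx), axis=(0, 1))  # sideband to centre
    return np.angle(np.fft.ifft2(np.fft.ifftshift(side)))

def strain_xx(phase, g, pixel_size):
    """eps_xx ~ -(1/(2*pi*g)) * dphi/dx for reciprocal-lattice vector g."""
    dphi_dx = np.gradient(np.unwrap(phase, axis=1), pixel_size, axis=1)
    return -dphi_dx / (2.0 * np.pi * g)
```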

  5. Synthesis of Optimal Processing Pathway for Microalgae-based Biorefinery under Uncertainty

    DEFF Research Database (Denmark)

    Rizwan, Muhammad; Lee, Jay H.; Gani, Rafiqul

    2015-01-01

    The research in the field of microalgae-based biofuels and chemicals is in an early phase of development, and therefore a wide range of uncertainties exist due to inconsistencies among, and shortage of, technical information. In order to handle and address these uncertainties to ensure robust decision making, we propose a systematic framework for the synthesis and optimal design of a microalgae-based processing network under uncertainty. By incorporating major uncertainties into the biorefinery superstructure model we developed previously, a stochastic mixed integer nonlinear programming (sMINLP) problem is formulated for determining the optimal biorefinery structure under given parameter uncertainties modelled as sampled scenarios. The solution to the sMINLP problem determines the optimal decisions with respect to processing technologies, material flows, and product portfolio in the presence
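
    At toy scale, the scenario-sampled formulation amounts to scoring each candidate processing pathway against sampled realizations of the uncertain parameters and keeping the best expected value. The sketch below uses invented stage options and economics, and brute-force enumeration in place of an MINLP solver:

```python
import itertools
import random

random.seed(0)

# Hypothetical stage-wise technology options for a small processing network.
options = {
    "cultivation": ["open_pond", "photobioreactor"],
    "harvesting":  ["flocculation", "centrifuge"],
    "conversion":  ["transesterification", "hydrothermal"],
}
base_value = {"open_pond": 3.0, "photobioreactor": 4.5,
              "flocculation": 1.0, "centrifuge": 0.5,
              "transesterification": 2.0, "hydrothermal": 2.8}

def profit(pathway, scenario):
    """Toy profit: base value per choice scaled by scenario multipliers."""
    return sum(base_value[c] * scenario[c] for c in pathway)

# Sampled scenarios: random yield/cost multipliers per technology choice.
scenarios = [{c: random.uniform(0.7, 1.3) for v in options.values() for c in v}
             for _ in range(200)]

# Pick the pathway with the highest expected profit over the scenarios.
best = max(itertools.product(*options.values()),
           key=lambda p: sum(profit(p, s) for s in scenarios) / len(scenarios))
print(best)
```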

  6. Full-zone spectral envelope function formalism for the optimization of line and point tunnel field-effect transistors

    Energy Technology Data Exchange (ETDEWEB)

    Verreck, Devin, E-mail: devin.verreck@imec.be; Groeseneken, Guido [imec, Kapeldreef 75, 3001 Leuven (Belgium); Department of Electrical Engineering, KU Leuven, 3001 Leuven (Belgium); Verhulst, Anne S.; Mocuta, Anda; Collaert, Nadine; Thean, Aaron [imec, Kapeldreef 75, 3001 Leuven (Belgium); Van de Put, Maarten; Magnus, Wim [imec, Kapeldreef 75, 3001 Leuven (Belgium); Department of Physics, Universiteit Antwerpen, 2020 Antwerpen (Belgium); Sorée, Bart [imec, Kapeldreef 75, 3001 Leuven (Belgium); Department of Physics, Universiteit Antwerpen, 2020 Antwerpen (Belgium); Department of Electrical Engineering, KU Leuven, 3001 Leuven (Belgium)

    2015-10-07

    Efficient quantum mechanical simulation of tunnel field-effect transistors (TFETs) is indispensable to allow for an optimal configuration identification. We therefore present a full-zone 15-band quantum mechanical solver based on the envelope function formalism and employing a spectral method to reduce computational complexity and handle spurious solutions. We demonstrate the versatility of the solver by simulating a 40 nm wide In0.53Ga0.47As lineTFET and comparing it to p-n-i-n configurations with various pocket and body thicknesses. We find that the lineTFET performance is not degraded compared to semi-classical simulations. Furthermore, we show that a suitably optimized p-n-i-n TFET can obtain similar performance to the lineTFET.

  7. Cryogen free high magnetic field and low temperature sample environments for neutron scattering - latest developments

    International Nuclear Information System (INIS)

    Burgoyne, John

    2016-01-01

    Continuous progress has been made over many years in the provision of low- and ultra-low-temperature sample environments, together with new high-field superconducting magnets and increased convenience for both the user and the neutron research facility via new cooling technologies. In Oxford Instruments' experience, this has been achieved in many cases through close collaboration with neutron scientists, and with the neutron facilities' sample environment leaders in particular. Superconducting magnet designs ranging from compact Small Angle (SANS) systems up to custom-engineered wide-angle scattering systems have been continuously developed. Recondensing, or 'zero boil-off' (ZBO), systems are well established for situations in which a high-field magnet is not conducive to totally cryogen-free cooling solutions, and offer a reliable route with the best trade-offs of maximum system capability versus running costs and user convenience. Fully cryogen-free solutions for cryostats, dilution refrigerators, and medium-field magnets are readily available. Here we present the latest technology developments in these options, describing the state of the art, the relative advantages of each, and the opportunities they offer to the neutron science community. (author)

  8. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. Since running large physical simulations requires powerful computers, the thesis is effectively split into two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts is outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.
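
    The core idea, estimating a series by sampling its terms stochastically rather than summing them order by order, can be shown on a series with a known sum. A toy sketch using a geometric proposal over the expansion order with importance weights; real diagrammatic Monte Carlo explores a far richer configuration space:

```python
import math
import random

random.seed(1)

def mc_series(x, q=0.5, samples=200_000):
    """Monte Carlo estimate of sum_n x**n / n! (= e**x): sample the order n
    with probability (1 - q) * q**n and reweight each sampled term."""
    total = 0.0
    for _ in range(samples):
        n = 0
        while random.random() < q:      # geometric sampling of the order
            n += 1
        p_n = (1.0 - q) * q**n
        total += (x**n / math.factorial(n)) / p_n   # importance weight
    return total / samples

print(mc_series(1.0), math.e)           # estimate converges to e = 2.71828...
```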

  9. φq-field theory for portfolio optimization: “fat tails” and nonlinear correlations

    Science.gov (United States)

    Sornette, D.; Simonetti, P.; Andersen, J. V.

    2000-08-01

    Physics and finance are both fundamentally based on the theory of random walks (and their generalizations to higher dimensions) and on the collective behavior of large numbers of correlated variables. The archetype exemplifying this situation in finance is the portfolio optimization problem, in which one desires to diversify over a set of possibly dependent assets to optimize the return and minimize the risks. The standard mean-variance solution introduced by Markowitz and its subsequent developments is basically a mean-field Gaussian solution. It has severe limitations for practical applications due to the strongly non-Gaussian structure of distributions and the nonlinear dependence between assets. Here, we present in detail a general analytical characterization of the distribution of returns for a portfolio constituted of assets whose returns are described by an arbitrary joint multivariate distribution. To this end, we introduce a nonlinear transformation that maps the returns onto Gaussian variables whose covariance matrix provides a new measure of dependence between the non-normal returns, generalizing the covariance matrix into a nonlinear covariance matrix. This nonlinear covariance matrix is tailored to the specific fat-tail structure of the underlying marginal distributions, thus ensuring stability and good conditioning. The portfolio distribution is then obtained as the solution of a mapping to a so-called φq field theory in particle physics, of which we offer an extensive treatment using Feynman diagrammatic techniques and large deviation theory, illustrated in detail for multivariate Weibull distributions. The interaction (non-mean-field) structure in this field theory is a direct consequence of the non-Gaussian nature of the distribution of asset price returns. We find that minimizing the portfolio variance (i.e. the relatively "small" risks) may often increase the large risks, as measured by higher normalized cumulants. Extensive
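
    The nonlinear transformation described above maps each marginal distribution onto a Gaussian variable before measuring dependence, so the covariance is computed on the Gaussianized data. A minimal sketch using an empirical-CDF rank transform; the paper works with analytic marginals such as Weibull laws, and the fat-tailed data here are invented:

```python
import numpy as np
from scipy.stats import norm

def nonlinear_covariance(returns):
    """Gaussianize each asset's returns through its empirical CDF (a rank
    transform), then take the covariance of the transformed variables."""
    n = returns.shape[0]
    ranks = returns.argsort(axis=0).argsort(axis=0) + 1.0
    gaussianized = norm.ppf(ranks / (n + 1.0))
    return np.cov(gaussianized, rowvar=False)

# Toy fat-tailed returns for three assets (Student-t marginals).
rng = np.random.default_rng(0)
r = rng.standard_t(df=3, size=(1000, 3))
r[:, 1] += 0.5 * r[:, 0]               # induce dependence between assets
print(nonlinear_covariance(r).round(2))
```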

  10. Communication: Calculation of interatomic forces and optimization of molecular geometry with auxiliary-field quantum Monte Carlo

    Science.gov (United States)

    Motta, Mario; Zhang, Shiwei

    2018-05-01

    We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
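
    Given a force routine, the geometry-optimization step itself is generic. A minimal sketch of the steepest-descent loop with a toy harmonic diatomic standing in for the AFQMC force evaluation; step size, tolerance, units, and the model potential are all invented:

```python
import numpy as np

def optimize_geometry(coords, force_fn, step=0.02, tol=1e-3, max_iter=500):
    """Steepest descent: move atoms along the forces (negative energy
    gradient) until the largest force component drops below tol."""
    for _ in range(max_iter):
        forces = force_fn(coords)
        if np.abs(forces).max() < tol:
            break
        coords = coords + step * forces
    return coords

def toy_forces(coords):
    """Harmonic bond E = (r - 1.1)**2 between two atoms."""
    d = coords[1] - coords[0]
    r = np.linalg.norm(d)
    f1 = -2.0 * (r - 1.1) * d / r       # force on atom 1; atom 0 gets -f1
    return np.array([-f1, f1])

print(optimize_geometry(np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]]), toy_forces))
```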

  11. Highly reliable field electron emitters produced from reproducible damage-free carbon nanotube composite pastes with optimal inorganic fillers

    Science.gov (United States)

    Kim, Jae-Woo; Jeong, Jin-Woo; Kang, Jun-Tae; Choi, Sungyoul; Ahn, Seungjoon; Song, Yoon-Ho

    2014-02-01

    Highly reliable field electron emitters were developed using a formulation for reproducible damage-free carbon nanotube (CNT) composite pastes with optimal inorganic fillers and a ball-milling method. We carefully controlled the ball-milling sequence and time to avoid any damage to the CNTs, which incorporated fillers that were fully dispersed as paste constituents. The field electron emitters fabricated by printing the CNT pastes were found to exhibit almost perfect adhesion of the CNT emitters to the cathode, along with good uniformity and reproducibility. A high field enhancement factor of around 10 000 was achieved from the CNT field emitters developed. By selecting nano-sized metal alloys and oxides and using the same formulation sequence, we also developed reliable field emitters that could survive high-temperature post processing. These field emitters had high durability to post vacuum annealing at 950 °C, guaranteeing survival of the brazing process used in the sealing of field emission x-ray tubes. We evaluated the field emitters in a triode configuration in the harsh environment of a tiny vacuum-sealed vessel and observed very reliable operation for 30 h at a high current density of 350 mA cm⁻². The CNT pastes and related field emitters that were developed could be usefully applied in reliable field emission devices.

  12. Advanced backend optimization

    CERN Document Server

    Touati, Sid

    2014-01-01

    This book is a summary of more than a decade of research in the area of backend optimization. It contains the latest fundamental research results in this field. While existing books are often more oriented toward Master's students, this book is aimed more towards professors and researchers, as it contains more advanced subjects. It is unique in the sense that it contains information that has not previously been covered by other books in the field, with chapters on phase ordering in optimizing compilation, register saturation in instruction-level parallelism, and code size reduction for software pipelining.

  13. Euler's fluid equations: Optimal control vs optimization

    International Nuclear Information System (INIS)

    Holm, Darryl D.

    2009-01-01

    An optimization method used in image-processing (metamorphosis) is found to imply Euler's equations for incompressible flow of an inviscid fluid, without requiring that the Lagrangian particle labels exactly follow the flow lines of the Eulerian velocity vector field. Thus, an optimal control problem and an optimization problem for incompressible ideal fluid flow both yield the same Euler fluid equations, although their Lagrangian parcel dynamics are different. This is a result of the gauge freedom in the definition of the fluid pressure for an incompressible flow, in combination with the symmetry of fluid dynamics under relabeling of their Lagrangian coordinates. Similar ideas are also illustrated for SO(N) rigid body motion.
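
    For reference, the incompressible Euler equations that both formulations recover are, in standard notation (with u the Eulerian velocity field and p the pressure enforcing incompressibility):

$$\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\,\mathbf{u} = -\nabla p, \qquad \nabla\cdot\mathbf{u} = 0.$$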

  14. Optimization of a Pre-MEKC Separation SPE Procedure for Steroid Molecules in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Ilona Olędzka

    2013-11-01

    Many steroid hormones can be considered potential biomarkers, and their determination in body fluids can create opportunities for the rapid diagnosis of many diseases and disorders of the human body. Most existing methods for the determination of steroids are time- and labor-consuming and quite costly. Therefore, the aim of analytical laboratories is to develop new, relatively low-cost and rapid methodologies for their determination in biological samples. Because there is little literature data on concentrations of steroid hormones in urine samples, we have made attempts at the electrophoretic determination of these compounds. For this purpose, an extraction procedure for the optimized separation and simultaneous determination of seven steroid hormones in urine samples has been investigated. The isolation of analytes from biological samples was performed by liquid-liquid extraction (LLE) with dichloromethane and compared to solid-phase extraction (SPE) with C18 and hydrophilic-lipophilic balance (HLB) columns. To separate all the analytes, a micellar electrokinetic capillary chromatography (MEKC) technique was employed, with a running buffer (pH 9.2) composed of 10 mM sodium tetraborate decahydrate (borax), 50 mM sodium dodecyl sulfate (SDS), and 10% methanol. The methodology developed in this work for the determination of steroid hormones meets all the requirements of analytical methods. The applicability of the method has been confirmed by the analysis of urine samples collected from volunteers, both men and women (students and amateur bodybuilders, using and not using steroid doping). The data obtained during this work can be successfully used for further research on the determination of steroid hormones in urine samples.

  15. Detection of pesticides residues in water samples from organic and conventional paddy fields of Ledang, Johor, Malaysia

    Science.gov (United States)

    Abdullah, Md Pauzi; Othman, Mohamed Rozali; Ishak, Anizan; Nabhan, Khitam Jaber

    2016-11-01

    Pesticides have been used extensively by farmers in Malaysia during the last few decades. Sixteen water samples, collected from both organic and conventional paddy fields in Ledang, Johor, were analyzed to determine the occurrence and distribution of organochlorine (OCP) and organophosphorus (OPP) pesticide residues. A GC-ECD instrument was used to identify and quantify these pesticide residues. Pesticide residues detected in conventional fields (about 0.036-0.508 µg/L) were higher than those detected in organic fields (about 0.015-0.428 µg/L). However, the concentrations of pesticide residues in water samples from both types of paddy field exceed the limit for human consumption set by the European Economic Commission (EEC) (Directive 98/83/EC) of 0.1 µg/L for any single pesticide or 0.5 µg/L for total pesticides. The results show that the organic plot is still contaminated with pesticides even though pesticides were not used there at all, possibly due to historical use as well as airborne contamination.

  16. Nonlinear optimization

    CERN Document Server

    Ruszczynski, Andrzej

    2011-01-01

    Optimization is one of the most important areas of modern applied mathematics, with applications in fields from engineering and economics to finance, statistics, management science, and medicine. While many books have addressed its various aspects, Nonlinear Optimization is the first comprehensive treatment that will allow graduate students and researchers to understand its modern ideas, principles, and methods within a reasonable time, but without sacrificing mathematical precision. Andrzej Ruszczynski, a leading expert in the optimization of nonlinear stochastic systems, integrates the theory and the methods of nonlinear optimization in a unified, clear, and mathematically rigorous fashion, with detailed and easy-to-follow proofs illustrated by numerous examples and figures. The book covers convex analysis, the theory of optimality conditions, duality theory, and numerical methods for solving unconstrained and constrained optimization problems. It addresses not only classical material but also modern top...

  17. A Novel Field Deployable Point-of-Care Diagnostic Test for Cutaneous Leishmaniasis

    Science.gov (United States)

    2015-10-01

    could enhance the success of the RPA method in the field, including 1) isolation of DNA from clinical samples using a mini (portable) extractor at...the POC or FTA Whatman filter paper specially designed to improve DNA preservation and purification at POC. Aim 1: To optimize the analytical...sensitivity and specificity of the genus- and complex-specific RPA-LF tests using Leishmania isolates and clinical samples from collaborating study sites. We

  18. Bifurcations of optimal vector fields: an overview

    NARCIS (Netherlands)

    Kiseleva, T.; Wagener, F.; Rodellar, J.; Reithmeier, E.

    2009-01-01

    We develop a bifurcation theory for the solution structure of infinite horizon optimal control problems with one state variable. It turns out that qualitative changes of this structure are connected to local and global bifurcations in the state-costate system. We apply the theory to investigate an

  19. Mass transfer of H2O between petroleum and water: implications for oil field water sample quality

    International Nuclear Information System (INIS)

    McCartney, R.A.; Ostvold, T.

    2005-01-01

    Water mass transfer can occur between water and petroleum during changes in pressure and temperature. This process can result in the dilution or concentration of dissolved ions in the water phase of oil field petroleum-water samples. In this study, PVT simulations were undertaken for 4 petroleum-water systems covering a range of reservoir conditions (80-185 °C; 300-1000 bar) and a range of water-petroleum mixtures (volume ratios of 1:1000-300:1000) to quantify the extent of H2O mass transfer as a result of pressure and temperature changes. Conditions were selected to be relevant to different types of oil field water sample (i.e. surface, downhole and core samples). The main variables determining the extent of dilution and concentration were found to be: (a) reservoir pressure and temperature, (b) pressure and temperature of separation of water and petroleum, (c) petroleum composition, and (d) petroleum:water ratio (PWR). The results showed that significant dilution and concentration of water samples could occur, particularly at high PWR. It was not possible to establish simple guidelines for identifying good and poor quality samples due to the interplay of the above variables. Sample quality is best investigated using PVT software of the type used in this study. (author)

  20. Optimizing detectability

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    HPLC is useful for trace and ultratrace analyses of a variety of compounds. For most applications, HPLC is useful for determinations in the nanogram-to-microgram range; however, detection limits of a picogram or less have been demonstrated in certain cases. These determinations require state-of-the-art capability; several examples of such determinations are provided in this chapter. As mentioned before, to detect and/or analyze low quantities of a given analyte at submicrogram or ultratrace levels, it is necessary to optimize the whole separation system, including the quantity and type of sample, sample preparation, HPLC equipment, chromatographic conditions (including column), choice of detector, and quantitation techniques. A limited discussion is provided here for optimization based on theoretical considerations, chromatographic conditions, detector selection, and miscellaneous approaches to detectability optimization. 59 refs