WorldWideScience

Sample records for optimal sampling frequency

  1. β-NMR sample optimization

    CERN Document Server

    Zakoucka, Eva

    2013-01-01

    During my summer student programme I was working on sample optimization for a new β-NMR project at the ISOLDE facility. The β-NMR technique is well established in solid-state physics, and only recently has it been introduced to applications in biochemistry and the life sciences. The β-NMR collaboration will be applying to the INTC committee in September for beam time for three nuclei: Cu, Zn and Mg. Sample optimization for Mg was already performed last year during the summer student programme, so sample optimization for Cu and Zn had to be completed as well for the project proposal. My part in the project was to perform thorough literature research on techniques for studying Cu and Zn complexes in native conditions, search for relevant binding candidates for Cu and Zn applicable to β-NMR, and finally evaluate selected binding candidates using UV-VIS spectrometry.

  2. Importance of sampling frequency when collecting diatoms

    KAUST Repository

    Wu, Naicheng; Faber, Claas; Sun, Xiuming; Qu, Yueming; Wang, Chao; Ivetic, Snjezana; Riis, Tenna; Ulrich, Uta; Fohrer, Nicola

    2016-01-01

    There has been increasing interest in diatom-based bio-assessment but we still lack a comprehensive understanding of how to capture diatoms’ temporal dynamics with an appropriate sampling frequency (ASF). To cover this research gap, we collected

  3. Optimal relaxed causal sampler using sampled-data system theory

    NARCIS (Netherlands)

    Shekhawat, Hanumant; Meinsma, Gjerrit

    This paper studies the design of an optimal relaxed causal sampler using sampled data system theory. A lifted frequency domain approach is used to obtain the existence conditions and the optimal sampler. A state space formulation of the results is also provided. The resulting optimal relaxed causal

  4. Resolution optimization with irregularly sampled Fourier data

    International Nuclear Information System (INIS)

    Ferrara, Matthew; Parker, Jason T; Cheney, Margaret

    2013-01-01

    Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications. (paper)

  5. Importance of sampling frequency when collecting diatoms

    KAUST Repository

    Wu, Naicheng

    2016-11-14

    There has been increasing interest in diatom-based bio-assessment but we still lack a comprehensive understanding of how to capture diatoms’ temporal dynamics with an appropriate sampling frequency (ASF). To cover this research gap, we collected and analyzed daily riverine diatom samples over a 1-year period (25 April 2013–30 April 2014) at the outlet of a German lowland river. The samples were classified into five clusters (1–5) by a Kohonen Self-Organizing Map (SOM) method based on similarity between species compositions over time. ASFs were determined to be 25 days for Cluster 2 (June-July 2013) and 13 days for Cluster 5 (February-April 2014), whereas no specific ASFs were found for Clusters 1 (April-May 2013) and 3 (August-November 2013) (>30 days) or for Cluster 4 (December 2013 - January 2014) (<1 day). ASFs showed dramatic seasonality and were negatively related to hydrological wetness conditions, suggesting that the sampling interval should be reduced with increasing catchment wetness. A key implication of our findings for freshwater management is that long-term bio-monitoring protocols should be designed to track algal temporal dynamics with an appropriate sampling frequency.

  6. The optimal sampling of outsourcing product

    International Nuclear Information System (INIS)

    Yang Chao; Pei Jiacheng

    2014-01-01

    In order to improve quality and cost, c = 0 sampling has been introduced into the inspection of outsourced product. According to the current quality level (p = 0.4%), we determined the optimal sampling plan: Ac = 0; n = 55 for N ≤ 3000; n = 86 for 3001 ≤ N ≤ 10000; n = 108 for N ≥ 10001. Through analysis of the OC curve, we came to the conclusion that when N ≤ 3000, the protective ability of the optimal sampling plan for product quality is stronger than that of the current plan. For the same 'consumer risk', the product quality under the optimal plan is superior to that under the current plan. (authors)
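
    As a quick check of the protection such a plan offers (an illustration using the standard binomial OC-curve model, not a computation from the paper):

      from math import comb

      def oc_accept_prob(p, n, c=0):
          # P(accept a lot) = P(number of defectives in the sample <= c),
          # under the binomial model of single sampling
          return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

      for n in (55, 86, 108):                  # the sample sizes quoted above
          print(f"n = {n:3d}: P(accept at p = 0.4%) = {oc_accept_prob(0.004, n):.3f}")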

  7. Numerical solution of optimal departure frequency of Taipei TMS

    Science.gov (United States)

    Young, Lih-jier; Chiu, Chin-Hsin

    2016-05-01

    Route Number 5 (Bannan Line) of Taipei Mass Rapid Transit (MRT) is the most popular line in the Taipei Metro System, especially during rush-hour periods. It has been estimated that there are more than 8,000 passengers on the ticket platform during 18:00∼19:00 at Taipei main station. The purpose of this research is to determine an appropriate train departure frequency for the passenger demand. Monte Carlo simulation is used to optimize the departure frequency according to the passenger information provided by the 22 stations of Route Number 5, i.e., 22 random variables. We used 30,000 iterations to obtain samples of the optimized departure frequency, i.e., 10 trains/hr, which matches the practical situation.
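
    A toy version of such a Monte Carlo experiment, with all demand figures and the train capacity invented for illustration (the abstract does not state them), can convey the idea:

      import random

      N_ITER = 30_000        # iteration count quoted in the abstract
      N_STATIONS = 22        # stations providing passenger information
      CAPACITY = 2_000       # assumed passengers per train (hypothetical)

      def trains_needed():
          # hypothetical hourly demand: one random variable per station
          demand = sum(max(0.0, random.gauss(800, 150)) for _ in range(N_STATIONS))
          return -(-demand // CAPACITY)        # ceiling division: trains per hour

      samples = [trains_needed() for _ in range(N_ITER)]
      print("estimated departure frequency:", sum(samples) / N_ITER, "trains/hr")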

  8. Optimal depth-based regional frequency analysis

    Directory of Open Access Journals (Sweden)

    H. Wazneh

    2013-06-01

    Classical methods of regional frequency analysis (RFA) of hydrological variables face two drawbacks: (1) the restriction to a particular region which can lead to a loss of some information and (2) the definition of a region that generates a border effect. To reduce the impact of these drawbacks on regional modeling performance, an iterative method was proposed recently, based on the statistical notion of the depth function and a weight function φ. This depth-based RFA (DBRFA) approach was shown to be superior to traditional approaches in terms of flexibility, generality and performance. The main difficulty of the DBRFA approach is the optimal choice of the weight function φ (e.g., φ minimizing estimation errors). In order to avoid a subjective choice and naïve selection procedures of φ, the aim of the present paper is to propose an algorithm-based procedure to optimize the DBRFA and automate the choice of φ according to objective performance criteria. This procedure is applied to estimate flood quantiles in three different regions in North America. One of the findings from the application is that the optimal weight function depends on the considered region and can also quantify the region's homogeneity. By comparing the DBRFA to the canonical correlation analysis (CCA) method, results show that the DBRFA approach leads to better performances both in terms of relative bias and mean square error.

  9. Optimal depth-based regional frequency analysis

    Science.gov (United States)

    Wazneh, H.; Chebana, F.; Ouarda, T. B. M. J.

    2013-06-01

    Classical methods of regional frequency analysis (RFA) of hydrological variables face two drawbacks: (1) the restriction to a particular region which can lead to a loss of some information and (2) the definition of a region that generates a border effect. To reduce the impact of these drawbacks on regional modeling performance, an iterative method was proposed recently, based on the statistical notion of the depth function and a weight function φ. This depth-based RFA (DBRFA) approach was shown to be superior to traditional approaches in terms of flexibility, generality and performance. The main difficulty of the DBRFA approach is the optimal choice of the weight function φ (e.g., φ minimizing estimation errors). In order to avoid a subjective choice and naïve selection procedures of φ, the aim of the present paper is to propose an algorithm-based procedure to optimize the DBRFA and automate the choice of φ according to objective performance criteria. This procedure is applied to estimate flood quantiles in three different regions in North America. One of the findings from the application is that the optimal weight function depends on the considered region and can also quantify the region's homogeneity. By comparing the DBRFA to the canonical correlation analysis (CCA) method, results show that the DBRFA approach leads to better performances both in terms of relative bias and mean square error.

  10. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  11. Optimal sampling schemes applied in geology

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-05-01

    Presentation outline: 1 Introduction to hyperspectral remote sensing; 2 Objective of Study 1; 3 Study Area; 4 Data used; 5 Methodology; 6 Results; 7 Background and Research Question for Study 2; 8 Study Area and Data; 9 Methodology; 10 Results; 11 Conclusions. (Debba, CSIR: Optimal Sampling Schemes applied in Geology, UP 2010.)

  12. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available samples is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values, or from getting stuck in local optima, as conventional numerical optimization techniques usually do. The simulation results indicate the soundness of the new method.

  13. Mixed Frequency Data Sampling Regression Models: The R Package midasr

    Directory of Open Access Journals (Sweden)

    Eric Ghysels

    2016-08-01

    When modeling economic relationships it is increasingly common to encounter data sampled at different frequencies. We introduce the R package midasr which enables estimating regression models with variables sampled at different frequencies within a MIDAS regression framework put forward in work by Ghysels, Santa-Clara, and Valkanov (2002). In this article we define a general autoregressive MIDAS regression model with multiple variables of different frequencies and show how it can be specified using the familiar R formula interface and estimated using various optimization methods chosen by the researcher. We discuss how to check the validity of the estimated model both in terms of numerical convergence and statistical adequacy of a chosen regression specification, how to perform model selection based on an information criterion, how to assess the forecasting accuracy of the MIDAS regression model, and how to obtain a forecast aggregation of different MIDAS regression models. We illustrate the capabilities of the package with a simulated MIDAS regression model and give two empirical examples of the application of MIDAS regression.
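
    The parsimony of MIDAS regressions comes from a low-dimensional lag-weighting function; a standard choice is the normalized exponential Almon lag polynomial. A minimal Python sketch of those weights, with illustrative parameter values (the package itself is used from R):

      import numpy as np

      def nealmon_weights(theta1, theta2, n_lags):
          # normalized exponential Almon lag weights; two parameters
          # control the shape of all n_lags weights
          i = np.arange(1, n_lags + 1)
          w = np.exp(theta1 * i + theta2 * i**2)
          return w / w.sum()                    # weights sum to one

      w = nealmon_weights(theta1=0.1, theta2=-0.05, n_lags=12)
      # a high-frequency regressor enters the low-frequency regression
      # as the weighted sum (w * x_lags).sum()
      print(w.round(3))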

  14. Sample Adaptive Offset Optimization in HEVC

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2014-11-01

    As the next generation of video coding standard, High Efficiency Video Coding (HEVC) adopted many useful tools to improve coding efficiency. Sample Adaptive Offset (SAO) is a technique that reduces sample distortion by providing offsets to pixels in the in-loop filter. In SAO, the pixels in a Largest Coding Unit (LCU) are classified into several categories, and categories and offsets are then assigned based on Rate-Distortion Optimization (RDO) of the reconstructed pixels in the LCU. All pixels in an LCU undergo the same SAO process; however, the transform and inverse transform make the distortion of pixels at Transform Unit (TU) edges larger than the distortion inside the TU, even after deblocking filtering (DF) and SAO. The SAO categories can also be refined, since they are not appropriate in many cases. This paper proposes a TU-edge offset mode and a category refinement for SAO in HEVC. Experimental results show that these two optimizations give -0.13 and -0.2 BD-rate gains, respectively, compared with the SAO in HEVC. The proposed algorithm using both optimizations gives a -0.23 BD-rate gain compared with the SAO in HEVC, a 47% increase, with nearly no increase in coding time.

  15. Media planning by optimizing contact frequencies

    NARCIS (Netherlands)

    N. Piersma (Nanda); S. Kapsenberg; P. Kloprogge; A.P.M. Wagelmans (Albert)

    1998-01-01

    In this paper we study a model to estimate the probability that a target group of an advertising campaign is reached by a commercial message a given number of times. This contact frequency distribution is known to be computationally difficult to calculate because of dependence between

  16. Cooperative Game Study of Airlines Based on Flight Frequency Optimization

    Directory of Open Access Journals (Sweden)

    Wanming Liu

    2014-01-01

    By applying game theory, the relationship between airline ticket price and optimal flight frequency is analyzed. The paper establishes the payoff matrix of flight frequency in the noncooperation scenario and a flight frequency optimization model in the cooperation scenario. The airline alliance profit distribution is converted into a profit distribution game based on cooperative game theory. The profit distribution game is proved to be convex, and an optimal distribution strategy exists. The results show that joining the airline alliance can increase an airline's overall profit, that changes in negotiated prices and costs benefit the profit distribution of large airlines, and that the distribution result is in accordance with aviation development.

  17. Sampling frequency of ciliated protozoan microfauna for seasonal distribution research in marine ecosystems.

    Science.gov (United States)

    Xu, Henglong; Yong, Jiang; Xu, Guangjian

    2015-12-30

    Sampling frequency is important to obtain sufficient information for temporal research of microfauna. To determine an optimal strategy for exploring the seasonal variation in ciliated protozoa, a dataset from the Yellow Sea, northern China was studied. Samples were collected with 24 (biweekly), 12 (monthly), 8 (twice per season) and 4 (seasonally) sampling events. Compared to the 24 samplings (100%), the 12-, 8- and 4-samplings recovered 94%, 94%, and 78% of the total species, respectively. In terms of the seasonal distribution, the 8-sampling regime may recover >75% of the seasonal variance, while the traditional 4-sampling regime explains considerably less. With increasing sampling frequency, the biotic data showed stronger correlations with seasonal variables (e.g., temperature, salinity) in combination with nutrients. It is suggested that 8 sampling events per year may be an optimal sampling strategy for ciliated protozoan seasonal research in marine ecosystems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Sampling frequency affects ActiGraph activity counts

    DEFF Research Database (Denmark)

    Brønd, Jan Christian; Arvidsson, Daniel

    …that is normally performed at frequencies higher than 2.5 Hz. With the ActiGraph model GT3X one has the option to select a sample frequency from 30 to 100 Hz. This study investigated the effect of the sampling frequency on the output of the bandpass filter. Methods: A synthetic frequency sweep of 0-15 Hz was generated in Matlab and sampled at frequencies of 30-100 Hz. Also, acceleration signals during indoor walking and running were sampled at 30 Hz using the ActiGraph GT3X and resampled in Matlab to frequencies of 40-100 Hz. All data were processed with the ActiLife software. Results: Acceleration frequencies between 5 and 15 Hz escaped the bandpass filter when sampled at 40, 50, 70, 80 and 100 Hz, while this was not the case when sampled at 30, 60 and 90 Hz. During the ambulatory activities this artifact resulted in different activity count output from the ActiLife software at different sampling frequencies…
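
    The mechanism can be reproduced in a toy experiment: activity counts come from a band-pass filter applied to a 30 Hz data stream, so any resampling stage that folds high-frequency content back into the pass band lets it survive the filter. A sketch, assuming a 0.25-2.5 Hz pass band and a deliberately naive nearest-neighbour resampler (ActiLife's actual pipeline is proprietary):

      import numpy as np
      from scipy import signal

      FS_OUT = 30.0                  # rate at which counts are computed (assumed)
      BAND = (0.25, 2.5)             # assumed body-movement pass band in Hz

      def in_band_rms(fs_in, f_sig=7.0, seconds=120):
          t_in = np.arange(0, seconds, 1.0 / fs_in)
          x = np.sin(2 * np.pi * f_sig * t_in)   # pure 7 Hz tone: should be rejected
          # naive nearest-neighbour resampling to 30 Hz (aliasing-prone)
          t_out = np.arange(0, seconds, 1.0 / FS_OUT)
          idx = np.minimum(np.round(t_out * fs_in).astype(int), len(x) - 1)
          y = x[idx]
          nyq = FS_OUT / 2.0
          sos = signal.butter(4, [BAND[0] / nyq, BAND[1] / nyq],
                              btype="bandpass", output="sos")
          return np.sqrt(np.mean(signal.sosfiltfilt(sos, y) ** 2))

      for fs in (30, 40, 50, 60, 70, 80, 90, 100):
          # near-zero residual at 30/60/90 Hz; nonzero leakage otherwise
          print(f"sampled at {fs:3d} Hz -> in-band residual RMS = {in_band_rms(fs):.4f}")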

  19. A software sampling frequency adaptive algorithm for reducing spectral leakage

    Institute of Scientific and Technical Information of China (English)

    PAN Li-dong; WANG Fei

    2006-01-01

    Spectral leakage caused by synchronization error in a nonsynchronous sampling system is an important factor reducing the accuracy of spectral analysis and harmonic measurement. This paper presents a software sampling frequency adaptive algorithm that obtains the actual signal frequency more accurately, then adjusts the sampling interval based on the frequency calculated by the software algorithm and modifies the sampling frequency adaptively. It reduces the synchronization error and the impact of spectral leakage, thereby improving the accuracy of spectral analysis and harmonic measurement for power system signals whose frequency changes slowly. Simulations show that the algorithm has high precision, and it can be a practical method for power system harmonic analysis since it is easily implemented.
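
    A minimal sketch of the software side of such a scheme, estimating the actual frequency from an FFT peak and retuning the sampling rate so that a window spans an integer number of cycles (illustrative only; not the paper's exact algorithm):

      import numpy as np

      def estimate_frequency(x, fs):
          # coarse FFT peak plus parabolic interpolation of the peak bin
          # (assumes the peak is not at the edge of the spectrum)
          X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
          k = int(np.argmax(X[1:])) + 1
          a, b, c = X[k - 1], X[k], X[k + 1]
          delta = 0.5 * (a - c) / (a - 2 * b + c)   # sub-bin offset
          return (k + delta) * fs / len(x)

      fs = 1000.0
      t = np.arange(2048) / fs
      x = np.sin(2 * np.pi * 50.3 * t)              # drifted power-system frequency
      f_hat = estimate_frequency(x, fs)
      # retune so that 512 samples cover exactly 10 cycles of the signal
      fs_new = 512 * f_hat / 10
      print(f"estimated f = {f_hat:.3f} Hz, resample at {fs_new:.1f} Hz")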

  20. Frequency response as a surrogate eigenvalue problem in topology optimization

    DEFF Research Database (Denmark)

    Andreassen, Erik; Ferrari, Federico; Sigmund, Ole

    2018-01-01

    This article discusses the use of frequency response surrogates for eigenvalue optimization problems in topology optimization that may be used to avoid solving the eigenvalue problem. The motivation is to avoid complications that arise from multiple eigenvalues and the computational complexity as...

  1. Software for CATV Design and Frequency Plan Optimization

    OpenAIRE

    Hala, O.

    2007-01-01

    The paper deals with the structure of a software tool used for the design and sub-optimization of the frequency plan in CATV networks, its description, and the design method. The software performance is described, and a simple design example of the energy balance of a simplified CATV network is given. The software was created in the Delphi programming environment, and the local optimization was performed in Matlab.

  2. Multi-frequency direct sampling method in inverse scattering problem

    Science.gov (United States)

    Kang, Sangwoo; Lambert, Marc; Park, Won-Kwang

    2017-10-01

    We consider the direct sampling method (DSM) for the two-dimensional inverse scattering problem. Although DSM is fast, stable, and effective, some phenomena remain unexplained by the existing results. We show that the imaging function of the direct sampling method can be expressed by a Bessel function of order zero. We also clarify the previously unexplained imaging phenomena and suggest a multi-frequency DSM to overcome the limitations of traditional DSM. Our method is evaluated in simulation studies using both single and multiple frequencies.
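
    Schematically, the structural result reported above can be stated as follows (for a point-like scatterer at z′ illuminated at wavenumber k; this is the shape of the result, not the paper's precise normalization):

      I_k(z) \;\propto\; \left| J_0\!\left( k \, \lvert z - z' \rvert \right) \right|

    The sidelobes of the indicator are thus the oscillations of J_0; since their positions scale with k, averaging the indicator over several frequencies preserves the common main lobe at z′ while the sidelobes partially cancel, which is the rationale for the multi-frequency DSM.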

  3. Frequency Tuning of Vibration Absorber Using Topology Optimization

    Science.gov (United States)

    Harel, Swapnil Subhash

    A tuned mass absorber is a system for reducing the amplitude of one oscillator by coupling it to a second oscillator. If tuned correctly, the maximum amplitude of the first oscillator in response to a periodic driver will be lowered, and much of the vibration will be 'transferred' to the second oscillator. The tuned vibration absorber (TVA) has been utilized for vibration control in many sectors of civil, automotive and aerospace engineering for many decades since its inception. Time and again we come across situations in which a vibratory system is required to run near resonance. In the past, approaches have been made to design such auxiliary spring-mass tuned absorbers for the safety of structures. This research focuses on the development and optimization of continuously tuned mass absorbers as a substitute for discretely tuned mass absorbers (spring-mass systems). After studying the structural behavior, the boundary conditions and the frequency to which the absorber is to be tuned are determined. A modal analysis approach is used to determine mode shapes and frequencies. The absorber is designed and optimized using a topology optimization tool, which simultaneously designs, optimizes and tunes the absorber to the desired frequency. The tuned, optimized absorber, after post-processing, is attached to the target structure. The number of absorbers is then increased to widen the bandwidth and thereby improve the safety of the structure over a wider frequency range. The frequency response analysis is carried out using various combinations of structure and number of absorber cells.

  4. Analysis of modal frequency optimization of railway vehicle car body

    Directory of Open Access Journals (Sweden)

    Wenjing Sun

    2016-04-01

    High structural modal frequencies of the car body are beneficial as they ensure better vibration control and enhance the ride quality of railway vehicles. Modal sensitivity optimization and elastic suspension parameters for the equipment beneath the chassis of the car body are proposed in order to improve the modal frequencies of car bodies under service conditions. Modal sensitivity optimization is based on sensitivity analysis theory, treating the thickness of the body frame at various positions as design variables. The equipment suspension design analyzes the influence of suspension parameters on the modal frequencies of the car body through an equipment-car body coupled model. Results indicate that both methods can effectively improve the modal parameters of the car body. Modal sensitivity optimization increases the vertical bending frequency from 9.70 to 10.60 Hz, while optimization of the elastic suspension parameters increases the vertical bending frequency to 10.51 Hz. The suspension design can be used without altering the structure of the car body while ensuring better ride quality.

  5. Optimal Load Control via Frequency Measurement and Neighborhood Area Communication

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, CH; Topcu, U; Low, SH

    2013-11-01

    We propose a decentralized optimal load control scheme that provides contingency reserve in the presence of a sudden generation drop. The scheme takes advantage of the flexibility of frequency-responsive loads and neighborhood area communication to solve an optimal load control problem that balances load and generation while minimizing the end-use disutility of participating in load control. Local frequency measurements enable individual loads to estimate the total mismatch between load and generation. Neighborhood area communication helps mitigate the effects of inconsistencies in the local estimates due to frequency measurement noise. Case studies show that the proposed scheme can balance load with generation and restore the frequency within seconds after a generation drop, even when the loads use a highly simplified power system model in their algorithms. We also investigate tradeoffs between the amount of communication and the performance of the proposed scheme through simulation-based experiments.

  6. Efficient estimation for ergodic diffusions sampled at high frequency

    DEFF Research Database (Denmark)

    Sørensen, Michael

    A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class

  7. Optimal Frequency Ranges for Sub-Microsecond Precision Pulsar Timing

    Science.gov (United States)

    Lam, Michael Timothy; McLaughlin, Maura; Cordes, James; Chatterjee, Shami; Lazio, Joseph

    2018-01-01

    Precision pulsar timing requires optimization against measurement errors and astrophysical variance from the neutron stars themselves and the interstellar medium. We investigate optimization of arrival time precision as a function of radio frequency and bandwidth. We find that increases in bandwidth that reduce the contribution from receiver noise are countered by the strong chromatic dependence of interstellar effects and intrinsic pulse-profile evolution. The resulting optimal frequency range is therefore telescope and pulsar dependent. We demonstrate the results for five pulsars included in current pulsar timing arrays and determine that they are not optimally observed at current center frequencies. We also find that arrival-time precision can be improved by increases in total bandwidth. Wideband receivers centered at high frequencies can reduce required overall integration times and provide significant improvements in arrival time uncertainty, by a factor of ~√2 in most cases, assuming a fixed integration time. We also discuss how timing programs can be extended to pulsars with larger dispersion measures through the use of higher-frequency observations.
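
    Schematically, and ignoring the chromatic terms that the abstract notes eventually dominate, the bandwidth scaling follows radiometer-like noise behaviour:

      \sigma_{\mathrm{TOA}} \;\propto\; \frac{T_{\mathrm{sys}}}{\sqrt{B\,\tau}}

    so at fixed integration time τ, doubling the usable bandwidth B reduces the white-noise part of the arrival-time uncertainty by the quoted factor of √2.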

  8. Software for CATV Design and Frequency Plan Optimization

    Directory of Open Access Journals (Sweden)

    O. Hala

    2007-09-01

    The paper deals with the structure of a software tool used for the design and sub-optimization of the frequency plan in CATV networks, its description, and the design method. The software performance is described, and a simple design example of the energy balance of a simplified CATV network is given. The software was created in the Delphi programming environment, and the local optimization was performed in Matlab.

  9. Designing waveforms for temporal encoding using a frequency sampling method

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

    In this paper a method for designing waveforms for temporal encoding in medical ultrasound imaging is described. The method is based on least squares optimization and is used to design nonlinear frequency modulated signals for synthetic transmit aperture imaging. The waveform obtained with the proposed design method was compared to a linear frequency modulated signal with amplitude tapering, previously used in clinical studies for synthetic transmit aperture imaging. The latter had a relatively flat spectrum, which implied that the waveform tried to excite all frequencies, including ones with low amplification. The proposed waveform, on the other hand, was designed so that only frequencies where the transducer had a large amplification were excited. Hereby, unnecessary heating of the transducer could be avoided and the signal-to-noise ratio could be increased. The experimental ultrasound scanner RASMUS was used to evaluate…

  10. General solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging.

    Science.gov (United States)

    Nakata, Toshihiko; Ninomiya, Takanori

    2006-10-10

    A general solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging is presented. Phase-modulated heterodyne interference light generated by a linear region of periodic displacement is captured by a charge-coupled device image sensor, in which the interference light is sampled at a rate lower than the Nyquist frequency. The frequencies of the components of the light, such as the sideband and carrier (which carry the photodisplacement and topography information, respectively), are downconverted and sampled simultaneously based on the integration and sampling effects of the sensor. A general solution of frequency and amplitude in this downconversion is derived by Fourier analysis of the sampling procedure. The optimal frequency condition for the heterodyne beat signal, modulation signal, and sensor gate pulse is derived such that undesirable components are eliminated and each information component is converted into an orthogonal function, allowing each to be discretely reproduced from the Fourier coefficients. The optimal frequency parameters that maximize the sideband-to-carrier amplitude ratio are determined, theoretically demonstrating high selectivity of over 80 dB. Preliminary experiments demonstrate that this technique is capable of simultaneous imaging of reflectivity, topography, and photodisplacement for the detection of subsurface lattice defects at a speed corresponding to an acquisition time of only 0.26 s per 256 x 256 pixel area.
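
    The downconversion at work here is the standard undersampling identity (stated generically; the paper's Fourier analysis of the sensor's integrate-and-sample behaviour is more detailed): a component at frequency f sampled at a rate f_s below the Nyquist rate reappears at

      f_{\mathrm{alias}} \;=\; \bigl|\, f - f_s \cdot \mathrm{round}(f / f_s) \,\bigr|

    which is what allows the beat, modulation and gate-pulse frequencies to be chosen so that each information component lands on a distinct, orthogonal low-frequency carrier.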

  11. Dental anthropology of a Brazilian sample: Frequency of nonmetric traits.

    Science.gov (United States)

    Tinoco, Rachel Lima Ribeiro; Lima, Laíse Nascimento Correia; Delwing, Fábio; Francesquini, Luiz; Daruge, Eduardo

    2016-01-01

    Dental elements are valuable tools in the study of ancient populations and species, and key features for human identification; within dental anthropology, nonmetric traits, standardized by ASUDAS, are closely related to ancestry. This study aimed to analyze the frequency of six nonmetric traits in a sample from Southeast Brazil, composed of 130 dental casts from individuals aged between 18 and 30 without foreign parents or grandparents. A single examiner observed the presence or absence of shoveling, Carabelli's cusp, fifth cusp, 3-cusped UM2, sixth cusp, and 4-cusped LM2. The frequencies obtained differed from those reported by other studies for Amerindian and South American samples, and were closer to European and sub-Saharan frequencies, showing the influence of these groups on the current Brazilian population. Sexual dimorphism was found in the frequencies of Carabelli's cusp, 3-cusped UM2, and sixth cusp. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  12. Optimal CCD readout by digital correlated double sampling

    Science.gov (United States)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
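
    A toy time-domain model of the estimator shows the basic trade-off (white read noise only; real CCD noise also contains 1/f and quantization components, which is what makes the filter and sampling-rate analysis in the paper nontrivial):

      import numpy as np

      rng = np.random.default_rng(0)

      def dcds_estimate(signal_e, n_samp, read_noise=5.0):
          # oversample the reset and video levels, then difference the means
          # (a rectangular digital filter)
          reset = 100.0 + read_noise * rng.standard_normal(n_samp)
          video = 100.0 - signal_e + read_noise * rng.standard_normal(n_samp)
          return reset.mean() - video.mean()

      for n in (1, 4, 16, 64):
          est = [dcds_estimate(50.0, n) for _ in range(20_000)]
          # white-noise part drops as sqrt(2/n) * read_noise
          print(f"N = {n:2d} samples/level: output noise = {np.std(est):.2f} e-")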

  13. Estimating fish swimming metrics and metabolic rates with accelerometers: the influence of sampling frequency.

    Science.gov (United States)

    Brownscombe, J W; Lennox, R J; Danylchuk, A J; Cooke, S J

    2018-06-21

    Accelerometry is growing in popularity for remotely measuring fish swimming metrics, but appropriate sampling frequencies for accurately measuring these metrics are not well studied. This research examined the influence of sampling frequency (1-25 Hz) with tri-axial accelerometer biologgers on estimates of overall dynamic body acceleration (ODBA), tail-beat frequency, swimming speed and metabolic rate of bonefish Albula vulpes in a swim-tunnel respirometer and free-swimming in a wetland mesocosm. In the swim tunnel, sampling frequencies of ≥ 5 Hz were sufficient to establish strong relationships between ODBA, swimming speed and metabolic rate. However, in free-swimming bonefish, estimates of metabolic rate were more variable below 10 Hz. Sampling frequencies should be at least twice the maximum tail-beat frequency to estimate this metric effectively, which is generally higher than the frequency required to estimate ODBA, swimming speed and metabolic rate. While the optimal sampling frequency probably varies among species due to tail-beat frequency and swimming style, this study provides a reference point with a medium body-sized sub-carangiform teleost fish, enabling researchers to measure these metrics effectively and maximize study duration. This article is protected by copyright. All rights reserved.
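
    Once the static (gravitational) component is removed, ODBA is a simple computation; a common running-mean recipe is sketched below, with the window length and rates chosen for illustration rather than taken from the study:

      import numpy as np

      def odba(acc_xyz, fs, window_s=2.0):
          # acc_xyz: (n, 3) array in g; running mean estimates the static
          # component per axis, and ODBA sums the absolute dynamic residuals
          w = max(1, int(window_s * fs))
          kernel = np.ones(w) / w
          static = np.column_stack(
              [np.convolve(acc_xyz[:, i], kernel, mode="same") for i in range(3)]
          )
          return np.abs(acc_xyz - static).sum(axis=1)

      fs = 10                                   # within the 1-25 Hz range tested
      t = np.arange(0, 30, 1 / fs)
      swim = np.column_stack([np.zeros_like(t),
                              0.2 * np.sin(2 * np.pi * 2.5 * t),    # 2.5 Hz tail beat
                              1.0 + 0.05 * np.sin(2 * np.pi * 2.5 * t)])
      print("mean ODBA:", odba(swim, fs).mean().round(3))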

  14. Optimal stride frequencies in running at different speeds

    NARCIS (Netherlands)

    Van Oeveren, Ben T.; De Ruiter, Cornelis J.; Beek, Peter J.; Van Dieën, Jaap H.

    2017-01-01

    During running at a constant speed, the optimal stride frequency (SF) can be derived from the U-shaped relationship between SF and heart rate (HR). Changing SF towards the optimum of this relationship is beneficial for energy expenditure and may positively change the biomechanics of running. In the

  15. Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of UFEs considering the constraint of integer harmonics. The MVDR filter is designed based on noise statistics making it robust...

  16. Linear Optimization of Frequency Spectrum Assignments Across System

    Science.gov (United States)

    2016-03-01

    Keywords: selection tools, frequency allocation, transmission optimization, electromagnetic maneuver warfare, electronic protection, assignment model.

  17. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    Science.gov (United States)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory, which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed, and regularizations such as damping and smoothing are often applied to analyze the trade-off between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional trade-off analysis using surface wave dispersion measurements from global as well as regional studies.
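
    For the mechanics behind this: in the SVD framework, the Tikhonov-regularized solution has a closed filter-factor form, sketched here on synthetic data (the study's empirical-Bayes selection of the damping parameter is not reproduced):

      import numpy as np

      def tikhonov_svd(G, d, lam):
          # solution of min ||G m - d||^2 + lam^2 ||m||^2 via filter factors
          U, s, Vt = np.linalg.svd(G, full_matrices=False)
          f = s / (s**2 + lam**2)            # damps small singular values
          return Vt.T @ (f * (U.T @ d))

      rng = np.random.default_rng(1)
      G = rng.standard_normal((100, 40))     # synthetic sensitivity matrix
      m_true = rng.standard_normal(40)
      d = G @ m_true + 0.5 * rng.standard_normal(100)
      for lam in (0.1, 1.0, 10.0):
          m = tikhonov_svd(G, d, lam)
          print(f"lambda = {lam:5.1f}: model error = {np.linalg.norm(m - m_true):.2f}")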

  18. Optimization of sampling parameters for standardized exhaled breath sampling.

    Science.gov (United States)

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

    The lack of standardization of breath sampling is a major contributor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. Increasing the sample volume improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and of the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between the whole and lower airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes the contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are 500 ml sample volume

  19. Demosaicking Based on Optimization and Projection in Different Frequency Bands

    Directory of Open Access Journals (Sweden)

    Omer, Osama A.

    2008-01-01

    A fast and effective iterative demosaicking algorithm is described for reconstructing a full-color image from single color filter array data. The missing color values are interpolated on the basis of optimization and projection in different frequency bands. A filter bank is used to decompose an initially interpolated image into low-frequency and high-frequency bands. In the low-frequency band, a quadratic cost function is minimized in accordance with the observation that the low-frequency components of chrominance vary slowly within an object region. In the high-frequency bands, the high-frequency components of the unknown values are projected onto the high-frequency components of the known values. Comparison of the proposed algorithm with seven state-of-the-art demosaicking algorithms showed that it outperforms all of them, on average over 20 test images, in terms of objective quality, and that it is competitive with them from the subjective quality and complexity points of view.

  20. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large-scale sample surveys: the biological sample survey of commercial landings (BSCL), the experimental fishing sample survey (EFSS), and the commercial landings and effort sample survey (CLES).

  1. A Frequency Domain Design Method For Sampled-Data Compensators

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Jannerup, Ole Erik

    1990-01-01

    A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system which satisfies specific design criteria. The new design method will graphically show how the discrete

  2. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    Modern technologies such as the Internet, corporate intranets, data warehouses, ERP systems, satellites, digital sensors, embedded systems and mobile networks generate such massive amounts of data that it is becoming very difficult to analyze and understand it all, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution: using a fraction of the computing resources, sampling can often provide the same level of accuracy. The process of sampling requires much care because many factors are involved in determining the correct sample size. The approach proposed in this paper tries to solve this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the sufficient sample size, which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
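
    The abstract does not give the formula itself; a standard sample-size calculation of the same shape (set a few parameters, get a sufficient sample size back) is Cochran's formula with a finite-population correction, sketched here as an assumption rather than as the paper's method:

      from math import ceil

      def sufficient_sample_size(N, z=1.96, p=0.5, e=0.02):
          # z: confidence coefficient, p: assumed proportion (0.5 is the
          # conservative worst case), e: tolerated margin of error
          n0 = z**2 * p * (1 - p) / e**2       # infinite-population size
          return ceil(n0 / (1 + (n0 - 1) / N)) # finite-population correction

      print(sufficient_sample_size(N=1_000_000))   # ~2400 rows from a 1M-row set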

  3. Sample preparation optimization in fecal metabolic profiling.

    Science.gov (United States)

    Deda, Olga; Chatziioannou, Anastasia Chrysovalantou; Fasoula, Stella; Palachanis, Dimitris; Raikos, Nicolaos; Theodoridis, Georgios A; Gika, Helen G

    2017-03-15

    Metabolomic analysis of feces can provide useful insight into the metabolic status, the health/disease state of the human/animal, and the symbiosis with the gut microbiome. As a result, there has recently been increased interest in the application of holistic analysis of feces for biomarker discovery. For metabolomics applications, the sample preparation process used prior to the analysis of fecal samples is of high importance, as it greatly affects the obtained metabolic profile, especially since feces as a matrix vary widely in physicochemical characteristics and molecular content. However, there is still little information in the literature and no universal approach to sample treatment for fecal metabolic profiling. The scope of the present work was to study the conditions for sample preparation of rat feces with the ultimate goal of acquiring comprehensive metabolic profiles, either untargeted, by NMR spectroscopy and GC-MS, or targeted, by HILIC-MS/MS. A fecal sample pooled from male and female Wistar rats was extracted under various conditions by modifying the pH value, the nature of the organic solvent, and the sample weight to solvent volume ratio. It was found that a 1/2 sample weight to solvent volume ratio (w_f/v_s) provided the highest number of metabolites under neutral and basic conditions in both untargeted profiling techniques. Concerning LC-MS profiles, neutral acetonitrile and propanol provided higher signals and wide metabolite coverage, though extraction efficiency is metabolite dependent. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Optimizing Power–Frequency Droop Characteristics of Distributed Energy Resources

    Energy Technology Data Exchange (ETDEWEB)

    Guggilam, Swaroop S.; Zhao, Changhong; Dall'Anese, Emiliano; Chen, Yu Christine; Dhople, Sairaj V.

    2018-05-01

    This paper outlines a procedure to design power-frequency droop slopes for distributed energy resources (DERs) installed in distribution networks so that they participate optimally in primary frequency response. In particular, the droop slopes are engineered such that DERs respond in proportion to their power ratings and are not unfairly penalized in power provisioning based on their location in the distribution network. The main contribution of our approach is that a specified level of frequency regulation can be guaranteed at the feeder head while ensuring that the outputs of individual DERs conform to a well-defined notion of fairness. The approach we adopt leverages an optimization-based perspective and suitable linearizations of the power-flow equations to embed notions of fairness, and information regarding the physics of the power flows within the distribution network, into the droop slopes. Time-domain simulations of a differential-algebraic-equation model of the 39-bus New England test-case system, augmented with three instances of the IEEE 37-node distribution network with frequency-sensitive DERs, are provided to validate our approach.
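
    A toy example of rating-proportional droop response, with all numbers hypothetical (the paper's slopes are additionally shaped by linearized power-flow constraints and feeder-head requirements):

      ratings_kw = [5.0, 10.0, 25.0]    # hypothetical DER power ratings
      delta_f = -0.05                   # frequency deviation (Hz) after a generation drop
      f_dead, f_max = 0.01, 0.5         # assumed deadband and saturation deviation

      for r in ratings_kw:
          # full rated output is reached at the maximum deviation, so the
          # slope (kW per Hz) and the response scale with the rating
          slope = r / f_max
          response = min(r, slope * max(0.0, abs(delta_f) - f_dead))
          print(f"rating {r:5.1f} kW -> slope {slope:5.1f} kW/Hz, response {response:.2f} kW")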

  5. Joint fundamental frequency and order estimation using optimal filtering

    Directory of Open Access Journals (Sweden)

    Jakobsson Andreas

    2011-01-01

    In this paper, the problem of jointly estimating the number of harmonics and the fundamental frequency of periodic signals is considered. We show how this problem can be solved using a number of methods that either are, or can be interpreted as, filtering methods in combination with a statistical model selection criterion. The methods in question are the classical comb filtering method, a maximum likelihood method, and some recently proposed filtering methods based on optimal filtering, while the model selection criterion is derived herein from the maximum a posteriori principle. The asymptotic properties of the optimal filtering methods are analyzed and an order-recursive efficient implementation is derived. Finally, the estimators have been compared in computer simulations that show that the optimal filtering methods perform well under various conditions. It has previously been demonstrated that the optimal filtering methods perform extremely well with respect to fundamental frequency estimation under adverse conditions, and this fact, combined with the new results on model order estimation and efficient implementation, suggests that these methods form an appealing alternative to classical methods for analyzing multi-pitch signals.
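
    To make the joint estimation concrete, a deliberately crude sketch in the comb-filtering spirit is given below: harmonic magnitudes are summed over a grid of candidate fundamental frequencies, with a per-harmonic penalty standing in for the statistical model selection criterion (this is not the paper's optimal-filtering or MAP estimator):

      import numpy as np

      def estimate_f0_order(x, fs, f0_grid, max_order=8):
          X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
          freqs = np.fft.rfftfreq(len(x), 1 / fs)
          penalty = 0.15 * X.max()        # crude per-harmonic order penalty
          best = (None, 0, -np.inf)
          for f0 in f0_grid:
              for L in range(1, max_order + 1):
                  bins = [int(np.argmin(np.abs(freqs - h * f0)))
                          for h in range(1, L + 1)]
                  score = X[bins].sum() - penalty * L
                  if score > best[2]:
                      best = (f0, L, score)
          return best[0], best[1]

      fs = 8000
      t = np.arange(4096) / fs
      x = sum(np.sin(2 * np.pi * 220 * h * t) / h for h in range(1, 4))  # 3 harmonics
      print(estimate_f0_order(x, fs, np.arange(100.0, 400.0, 2.0)))      # ~ (220.0, 3)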

  6. Optimizing sampling approaches along ecological gradients

    DEFF Research Database (Denmark)

    Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel

    2016-01-01

    1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying...... patterns in a statistically solid and reproducible manner, given the normal restrictions in labour, time and money. However, a technical guideline about an adequate sampling design to maximize prediction success under restricted resources is lacking. This study aims at developing such a solid...... and reproducible guideline for sampling along gradients in all fields of ecology and science in general. 2. We conducted simulations with artificial data for five common response types known in ecology, each represented by a simple function (no response, linear, exponential, symmetric unimodal and asymmetric...

  7. Neuro-genetic system for optimization of GMI samples sensitivity.

    Science.gov (United States)

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are widely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices with huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, and the DC level and frequency of the excitation current. However, this dependency has yet to be sufficiently well modeled in quantitative terms, so the search for the set of parameters that optimizes sample sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Sampling methods for low-frequency electromagnetic imaging

    International Nuclear Information System (INIS)

    Gebauer, Bastian; Hanke, Martin; Schneider, Christoph

    2008-01-01

    For the detection of hidden objects by low-frequency electromagnetic imaging the linear sampling method works remarkably well despite the fact that the rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfils the assumptions for the fully justified variant of the linear sampling method, the so-called factorization method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds

  9. Combining agreement and frequency rating scales to optimize psychometrics in measuring behavioral health functioning.

    Science.gov (United States)

    Marfeo, Elizabeth E; Ni, Pengsheng; Chan, Leighton; Rasch, Elizabeth K; Jette, Alan M

    2014-07-01

    The goal of this article was to investigate the optimal use of frequency vs. agreement rating scales in two subdomains of the newly developed Work Disability Functional Assessment Battery: the Mood & Emotions and Behavioral Control scales. A psychometric study comparing rating-scale performance was embedded in a cross-sectional survey used to develop a new instrument for measuring behavioral health functioning among adults applying for disability benefits in the United States. Within the sample of 1,017 respondents, the range of response category endorsement was similar for both frequency and agreement item types for both scales. There were fewer missing values in the frequency items than in the agreement items. Both frequency and agreement items showed acceptable reliability. The frequency items demonstrated optimal effectiveness around the mean ± 1-2 standard deviation score range; the agreement items performed better at the extreme score ranges. Findings suggest an optimal response format requires a mix of both agreement-based and frequency-based items. Frequency items perform better in the normal range of responses, capturing specific behaviors, reactions, or situations that may elicit a specific response. Agreement items do better for those whose scores are more extreme and capture subjective content related to general attitudes, behaviors, or feelings of work-related behavioral health functioning. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Degeneracy, frequency response and filtering in IMRT optimization

    International Nuclear Information System (INIS)

    Llacer, Jorge; Agazaryan, Nzhde; Solberg, Timothy D; Promberger, Claus

    2004-01-01

    This paper attempts to provide an answer to some questions that remain either poorly understood, or not well documented in the literature, on basic issues related to intensity modulated radiation therapy (IMRT). The questions examined are: the relationship between degeneracy and frequency response of optimizations, effects of initial beamlet fluence assignment and stopping point, what does filtering of an optimized beamlet map actually do and how could image analysis help to obtain better optimizations? Two target functions are studied, a quadratic cost function and the log likelihood function of the dynamically penalized likelihood (DPL) algorithm. The algorithms used are the conjugate gradient, the stochastic adaptive simulated annealing and the DPL. One simple phantom is used to show the development of the analysis tools used and two clinical cases of medium and large dose matrix size (a meningioma and a prostate) are studied in detail. The conclusions reached are that the high number of iterations that is needed to avoid degeneracy is not warranted in clinical practice, as the quality of the optimizations, as judged by the DVHs and dose distributions obtained, does not improve significantly after a certain point. It is also shown that the optimum initial beamlet fluence assignment for analytical iterative algorithms is a uniform distribution, but such an assignment does not help a stochastic method of optimization. Stopping points for the studied algorithms are discussed and the deterioration of DVH characteristics with filtering is shown to be partially recoverable by the use of space-variant filtering techniques

  11. Degeneracy, frequency response and filtering in IMRT optimization

    Energy Technology Data Exchange (ETDEWEB)

    Llacer, Jorge [EC Engineering Consultants LLC, 130 Forest Hill Drive, Los Gatos, CA 95032 (United States); Agazaryan, Nzhde [Department of Radiation Oncology, University of California, Los Angeles, CA 90095 (United States); Solberg, Timothy D [Department of Radiation Oncology, University of California, Los Angeles, CA 90095 (United States); Promberger, Claus [BrainLAB AG, Ammerthalstrasse 8, 85551 Heimstetten (Germany)

    2004-07-07

    This paper attempts to provide an answer to some questions that remain either poorly understood, or not well documented in the literature, on basic issues related to intensity modulated radiation therapy (IMRT). The questions examined are: the relationship between degeneracy and the frequency response of optimizations, the effects of initial beamlet fluence assignment and stopping point, what filtering of an optimized beamlet map actually does, and how image analysis could help to obtain better optimizations. Two target functions are studied, a quadratic cost function and the log likelihood function of the dynamically penalized likelihood (DPL) algorithm. The algorithms used are the conjugate gradient, the stochastic adaptive simulated annealing and the DPL. One simple phantom is used to show the development of the analysis tools used, and two clinical cases of medium and large dose matrix size (a meningioma and a prostate) are studied in detail. The conclusions reached are that the high number of iterations needed to avoid degeneracy is not warranted in clinical practice, as the quality of the optimizations, as judged by the DVHs and dose distributions obtained, does not improve significantly after a certain point. It is also shown that the optimum initial beamlet fluence assignment for analytical iterative algorithms is a uniform distribution, but such an assignment does not help a stochastic method of optimization. Stopping points for the studied algorithms are discussed, and the deterioration of DVH characteristics with filtering is shown to be partially recoverable by the use of space-variant filtering techniques.

  12. FREQUENCY OPTIMIZATION FOR SECURITY MONITORING OF COMPUTER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Bogatyrev V.A.

    2015-03-01

    Full Text Available The subject area of the proposed research is monitoring facilities for the protection of computer systems exposed to destructive attacks of accidental and malicious nature. An interval optimization model of test monitoring for detecting hazardous states of security breach caused by destructive attacks is proposed. The optimization objective is to maximize the profit from servicing requests under uncertainty and variance in the intensity of the destructive attacks, including penalties incurred when requests are serviced in hazardous conditions. The vector problem of maximizing system availability while minimizing the probabilities of downtime and hazardous states is reduced to a scalar optimization problem based on the criterion of maximizing the profit from information services (servicing of requests), which integrates these individual criteria. Optimization variants are considered that define averaged periods of monitoring activity and that adapt these periods to changes in the intensity of the destructive attacks. The efficiency of adapting the monitoring frequency to changes in attack activity is demonstrated. The proposed solutions can be applied to optimizing test monitoring intervals for the detection of hazardous states of security breach, making it possible to increase system effectiveness and, specifically, to maximize the expected profit from information services.

  13. Robust and efficient multi-frequency temporal phase unwrapping: optimal fringe frequency and pattern sequence selection.

    Science.gov (United States)

    Zhang, Minliang; Chen, Qian; Tao, Tianyang; Feng, Shijie; Hu, Yan; Li, Hui; Zuo, Chao

    2017-08-21

    Temporal phase unwrapping (TPU) is an essential algorithm in fringe projection profilometry (FPP), especially when measuring complex objects with discontinuities and isolated surfaces. Among others, the multi-frequency TPU has been proven to be the most reliable algorithm in the presence of noise. For a practical FPP system, in order to achieve an accurate, efficient, and reliable measurement, one needs to make wise choices about three key experimental parameters: the highest fringe frequency, the phase-shifting steps, and the fringe pattern sequence. However, there has been very little research on how to optimize these parameters quantitatively, especially considering all three aspects simultaneously from a theoretical and analytical perspective. In this work, we propose a new scheme to simultaneously determine the optimal fringe frequency, phase-shifting steps and pattern sequence under multi-frequency TPU, robustly achieving high-accuracy measurement with a minimum number of fringe frames. Firstly, noise models regarding phase-shifting algorithms as well as 3-D coordinates are established under a projector defocusing condition, which lead to the optimal highest fringe frequency for an FPP system. Then, a new concept termed frequency-to-frame ratio (FFR), which evaluates the magnitude of the contribution of each frame to TPU, is defined, on which an optimal phase-shifting combination scheme is proposed. Finally, a judgment criterion is established that can be used to judge whether the ratio between adjacent fringe frequencies is conducive to stably and efficiently unwrapping the phase. The proposed method provides a simple and effective theoretical framework to improve the accuracy, efficiency, and robustness of a practical FPP system in actual measurement conditions. The correctness of the derived models as well as the validity of the proposed schemes have been verified through extensive simulations and experiments. Based on a normal monocular 3-D FPP hardware system
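
    The core two-frequency unwrapping step that multi-frequency TPU chains together can be sketched in Python (a minimal illustration of the principle, not the paper's optimal frequency and pattern selection; all values are invented):

    ```python
    import numpy as np

    def unwrap_two_frequency(phi_high, phi_low, f_high, f_low):
        """Unwrap the wrapped high-frequency phase phi_high (radians) using the
        already-unwrapped low-frequency phase phi_low: the fringe order k is the
        integer that best reconciles the scaled low-frequency phase with the
        wrapped high-frequency phase."""
        scale = f_high / f_low
        k = np.round((scale * phi_low - phi_high) / (2.0 * np.pi))
        return phi_high + 2.0 * np.pi * k

    # Illustrative use: a phase ramp observed at two fringe frequencies (ratio 8).
    true_phase = np.linspace(0.0, 16.0 * np.pi, 1000)   # phase at the high frequency
    phi_low = true_phase / 8.0                          # unit-frequency phase, stays unwrapped
    phi_high = np.angle(np.exp(1j * true_phase))        # wrapped high-frequency phase
    recovered = unwrap_two_frequency(phi_high, phi_low, 8.0, 1.0)
    assert np.allclose(recovered, true_phase)
    ```

    In practice, noise in phi_low limits how large the frequency ratio can be before the rounding step fails, which is exactly the trade-off that the frequency-selection criterion above addresses.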

  14. Optimization of a Virtual Power Plant to Provide Frequency Support.

    Energy Technology Data Exchange (ETDEWEB)

    Neely, Jason C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Johnson, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gonzalez, Sigifredo [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lave, Matthew Samuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Delhotal, Jarod James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    Increasing the penetration of distributed renewable sources, including photovoltaic (PV) sources, poses technical challenges for grid management. The grid has been optimized over decades to rely upon large centralized power plants with well-established feedback controls, but now non-dispatchable, renewable sources are displacing these controllable generators. This one-year study was funded by the Department of Energy (DOE) SunShot program and is intended to better utilize those variable resources by providing electric utilities with the tools to implement frequency regulation and primary frequency reserves using aggregated renewable resources, known as a virtual power plant. The goal is to eventually enable the integration of hundreds of gigawatts into US power systems.

  15. Optimal supplementary frequency controller design using the wind farm frequency model and controller parameters stability region.

    Science.gov (United States)

    Toulabi, Mohammadreza; Bahrami, Shahab; Ranjbar, Ali Mohammad

    2018-03-01

    In most existing studies, the frequency response of variable speed wind turbines (VSWTs) is simply realized by changing the torque set-point via appropriate inputs such as the frequency deviation signal. However, the effective dynamics and a systematic design process have not been comprehensively discussed yet. Accordingly, this paper proposes a proportional-derivative frequency controller and investigates its performance in a wind farm consisting of several VSWTs. A band-pass filter is deployed before the proposed controller to avoid responding to either steady-state frequency deviations or a high rate of change of frequency. To design the controller, the frequency model of the wind farm is first characterized. The proposed controller is then designed based on the obtained open-loop system. The stability region associated with the controller parameters is analytically determined by decomposing the closed-loop system's characteristic polynomial into its odd and even parts. The performance of the proposed controller is evaluated through extensive simulations in the MATLAB/Simulink environment in a power system comprising a high penetration of VSWTs equipped with the proposed controller. Finally, based on the obtained feasible area and an appropriate objective function, the optimal values of the controller parameters are determined using the genetic algorithm (GA). Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
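
    A minimal sketch of the described signal path, not the paper's model: a band-pass filter on the measured frequency deviation followed by a proportional-derivative controller. The filter corners, sampling rate, and PD gains below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import signal

    fs = 50.0                      # controller sampling rate (Hz), assumed
    f_lo, f_hi = 0.01, 1.0         # band-pass corners (Hz), assumed: reject
                                   # steady-state offsets and high rate-of-change components
    sos = signal.butter(2, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")

    Kp, Kd = 20.0, 5.0             # illustrative PD gains
    dt = 1.0 / fs
    t = np.arange(0.0, 60.0, dt)
    df = -0.2 * (1.0 - np.exp(-t / 5.0))   # synthetic frequency deviation (Hz)

    df_filt = signal.sosfilt(sos, df)      # band-pass filtered deviation
    dP = Kp * df_filt + Kd * np.gradient(df_filt, dt)   # supplementary power command
    ```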

  16. Accident frequency and unrealistic optimism: Children's assessment of risk.

    Science.gov (United States)

    Joshi, Mary Sissons; Maclean, Morag; Stevens, Claire

    2018-02-01

    Accidental injury is a major cause of mortality and morbidity among children, warranting research on their risk perceptions. Three hundred and seven children aged 10-11 years assessed the frequency, danger and personal risk likelihood of 8 accidents. Two social-cognitive biases were manifested. The frequency of rare accidents (e.g. drowning) was overestimated, and the frequency of common accidents (e.g. bike accidents) underestimated; and the majority of children showed unrealistic optimism, tending to see themselves as less likely to suffer these accidents in comparison to their peers, offering superior skills or parental control of the environment as an explanation. In the case of pedestrian accidents, children recognised their seriousness, underestimated the frequency of this risk and regarded their own road crossing skill as protection. These findings highlight the challenging task facing safety educators who, when teaching conventional safety knowledge and routines, also need to alert children to the danger of over-confidence without disabling them through fear. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, detect anomalies and bursts, guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed by considering optimal network segmentation and the modularity index within a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.

  18. Using remotely-sensed data for optimal field sampling

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-09-01

    Full Text Available Statistics is the science pertaining to the collection, summary, analysis, interpretation and presentation of data. It is often impractical... studies are: where to sample, what to sample and how many samples to obtain. Conventional sampling techniques are not always suitable in environmental studies and scientists have explored the use of remotely-sensed data as ancillary information to aid...

  19. Reducing the sampling frequency of groundwater monitoring wells

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, V.M.; Ridley, M.N. [Lawrence Livermore National Lab., CA (United States); Tuckfield, R.C.; Anderson, R.A. [Westinghouse, Savannah River Co., Aiken, SC (United States)

    1996-01-01

    As part of a joint LLNL/SRTC project, a methodology for selecting sampling frequencies is evolving that introduces statistical thinking and cost effectiveness into the sampling schedule selection practices now commonly employed on environmental projects. Our current emphasis is on descriptive rather than inferential statistics. Environmental monitoring data are inherently messy, being plagued by such problems as extremely high variability and left-censoring. As a result, real data often fail to meet the assumptions required for the appropriate application of many statistical methods. Rather than abandon the quantitative approach in these cases, however, the methodology employs simple statistical techniques to bring a measure of objectivity and reproducibility to the process. The techniques are applied within the framework of decision logic, which interprets the numerical results from the standpoint of chemistry-related professional judgment and the regulatory context. This paper presents the methodology's basic concepts together with early implementation results, showing the estimated cost savings. 6 refs., 3 figs.

  20. Optimal sampling schemes for vegetation and geological field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2012-07-01

    Full Text Available The presentation made to Wits Statistics Department was on common classification methods used in the field of remote sensing, and the use of remote sensing to design optimal sampling schemes for field visits with applications in vegetation...

  1. Time-Frequency Based Instantaneous Frequency Estimation of Sparse Signals from an Incomplete Set of Samples

    Science.gov (United States)

    2014-06-17

    [Figure: Wigner distribution, L-Wigner distribution, and their auto-correlation functions.] ...bilinear or higher order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with

  2. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available Case studies include optimized field sampling representing the overall distribution of a particular mineral and deriving optimal exploration target zones. Continuum removal for vegetation [13, 27, 46]: the convex hull transform is a method... of normalizing spectra [16, 41]. The convex hull technique is analogous to fitting a rubber band over a spectrum to form a continuum. Figure 5 shows the concept of the convex hull transform. The difference between the hull and the original spectrum...
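
    The convex hull ("rubber band") transform described above can be sketched directly: divide the spectrum by its upper convex hull, computed here with Andrew's monotone chain. The example spectrum is invented for illustration.

    ```python
    import numpy as np

    def continuum_removal(wavelengths, reflectance):
        """Divide a spectrum by its upper convex hull so that absorption
        features stand out relative to the continuum."""
        hull = []  # indices of upper-hull vertices, left to right
        for i in range(len(wavelengths)):
            while len(hull) >= 2:
                x1, y1 = wavelengths[hull[-2]], reflectance[hull[-2]]
                x2, y2 = wavelengths[hull[-1]], reflectance[hull[-1]]
                x3, y3 = wavelengths[i], reflectance[i]
                # Pop the middle point if it lies on or below the chord 1-3.
                if (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1) >= 0:
                    hull.pop()
                else:
                    break
            hull.append(i)
        continuum = np.interp(wavelengths, wavelengths[hull], reflectance[hull])
        return reflectance / continuum

    # Illustrative spectrum: a gentle slope with one absorption dip near 1400 nm.
    wl = np.linspace(400.0, 2500.0, 500)
    spec = 0.5 + 1e-4 * (wl - 400.0) - 0.2 * np.exp(-((wl - 1400.0) / 60.0) ** 2)
    cr = continuum_removal(wl, spec)   # ~1.0 on the hull, < 1.0 inside the dip
    ```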

  3. Sampling optimization for printer characterization by direct search.

    Science.gov (United States)

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.

  4. Assessment of Optimal Flexibility in Ensemble of Frequency Responsive Loads

    Energy Technology Data Exchange (ETDEWEB)

    Kundu, Soumya; Hansen, Jacob; Lian, Jianming; Kalsi, Karanjit

    2018-04-19

    The potential of electrical loads to provide grid ancillary services is often limited by the uncertainties associated with load behavior. Knowledge of the uncertainties expected in a load control program would invariably yield better-informed control policies, opening up the possibility of extracting the maximal load control potential without affecting grid operations. In the context of frequency responsive load control, a probabilistic uncertainty analysis framework is presented to quantify the expected error between the target and actual load response under uncertainties in the load dynamics. A closed-form expression for an optimal demand flexibility, minimizing the expected error between actual and committed flexibility, is provided. Analytical results are validated through Monte Carlo simulations of ensembles of electric water heaters.

  5. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    Full Text Available The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  6. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  7. Optimal stride frequencies in running at different speeds.

    Directory of Open Access Journals (Sweden)

    Ben T van Oeveren

    Full Text Available During running at a constant speed, the optimal stride frequency (SF) can be derived from the u-shaped relationship between SF and heart rate (HR). Changing SF towards the optimum of this relationship is beneficial for energy expenditure and may positively change the biomechanics of running. In the current study, the effects of speed on the optimal SF and the nature of the u-shaped relation were empirically tested using Generalized Estimating Equations. To this end, HR was recorded from twelve healthy (4 males, 8 females) inexperienced runners, who completed runs at three speeds. The three speeds were 90%, 100% and 110% of self-selected speed. A self-selected SF (SFself) was determined for each of the speeds prior to the speed series. The speed series started with a freely chosen SF condition, followed by five imposed SF conditions (SFself, 70, 80, 90, 100 strides·min-1) assigned in random order. The conditions lasted 3 minutes with 2.5 minutes of walking in between. SFself increased significantly (p<0.05) with speed, with averages of 77, 79 and 80 strides·min-1 at 2.4, 2.6 and 2.9 m·s-1, respectively. As expected, the relation between SF and HR could be described by a parabolic curve for all speeds. Speed did not significantly affect the curvature, nor did it affect the optimal SF. We conclude that over the speed range tested, inexperienced runners may not need to adapt their SF to running speed. However, since SFself was lower than the SFopt of 83 strides·min-1, the runners could reduce HR by increasing their SFself.
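
    The u-shaped SF-HR relation and its optimum can be illustrated with a small sketch: fit a parabola to (SF, HR) pairs and take its vertex. The heart-rate values below are invented for illustration.

    ```python
    import numpy as np

    sf = np.array([70.0, 75.0, 80.0, 90.0, 100.0])      # strides/min (imposed conditions)
    hr = np.array([162.0, 158.0, 156.0, 158.0, 166.0])  # beats/min (illustrative)

    # Fit HR = a*SF^2 + b*SF + c; the vertex -b/(2a) is the optimal SF.
    a, b, c = np.polyfit(sf, hr, 2)
    sf_opt = -b / (2.0 * a)   # only meaningful if a > 0 (u-shape, HR minimum)
    ```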

  8. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL), using alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either by linear programming (for the l1-norm) or with closed-form solutions (for the lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality on test data, we introduce neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  9. Monte Carlo importance sampling optimization for system reliability applications

    International Nuclear Information System (INIS)

    Campioni, Luca; Vestrucci, Paolo

    2004-01-01

    This paper focuses on the reliability analysis of multicomponent systems by the importance sampling technique and, in particular, tackles the optimization aspect. A methodology based on the minimization of the variance at the component level is proposed for the class of systems consisting of independent components. The claim is that, by means of such a methodology, the optimal biasing can be achieved without resorting to the typical trial-based approach.
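
    The component-level idea can be sketched as follows: draw component failures under biased probabilities and reweight each outcome by the likelihood ratio of the true law to the biased law. The system structure and all probability values are invented for illustration; the paper's variance-minimizing choice of bias is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    p = np.array([1e-4, 5e-4, 2e-4])   # true component failure probabilities
    q = np.array([1e-2, 5e-2, 2e-2])   # biased sampling probabilities (assumed)

    def system_fails(x):
        # Example structure: component 0 in series with the parallel pair (1, 2).
        return x[0] | (x[1] & x[2])

    n = 100_000
    est = np.empty(n)
    for i in range(n):
        x = rng.random(3) < q          # component states under the biased law
        # Likelihood ratio of the true law to the biased law for this sample.
        w = np.prod(np.where(x, p / q, (1 - p) / (1 - q)))
        est[i] = w * system_fails(x)

    print(est.mean(), est.std(ddof=1) / np.sqrt(n))   # estimate and standard error
    ```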

  10. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability hold back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use for solving optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows one to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
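
    A minimal Python sketch of spatial simulated annealing for the MSSD criterion (not the spsann R code): one point is perturbed per iteration, the maximum perturbation distance shrinks linearly, and worse states are accepted with an exponentially decaying probability. All tuning values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def mssd(samples, grid):
        """Mean squared shortest distance from every grid node to its nearest sample."""
        d2 = ((grid[:, None, :] - samples[None, :, :]) ** 2).sum(axis=2)
        return d2.min(axis=1).mean()

    g = np.linspace(0.0, 1.0, 25)
    grid = np.array([(x, y) for x in g for y in g])   # prediction grid, unit square
    samples = rng.random((15, 2))                     # initial random sample pattern

    energy = mssd(samples, grid)
    temp, n_iter = 1e-3, 5000
    for it in range(n_iter):
        max_shift = 0.2 * (1.0 - it / n_iter)         # linearly shrinking perturbation
        cand = samples.copy()
        j = rng.integers(len(cand))
        cand[j] = np.clip(cand[j] + rng.uniform(-max_shift, max_shift, 2), 0.0, 1.0)
        e_new = mssd(cand, grid)
        # Always accept improvements; accept worse states with decaying probability.
        if e_new < energy or rng.random() < np.exp((energy - e_new) / temp):
            samples, energy = cand, e_new
        temp *= 0.999                                 # exponential cooling
    ```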

  11. Optimization of protein samples for NMR using thermal shift assays

    International Nuclear Information System (INIS)

    Kozak, Sandra; Lercher, Lukas; Karanth, Megha N.; Meijers, Rob; Carlomagno, Teresa; Boivin, Stephane

    2016-01-01

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  12. Optimization of protein samples for NMR using thermal shift assays

    Energy Technology Data Exchange (ETDEWEB)

    Kozak, Sandra [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Lercher, Lukas; Karanth, Megha N. [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Meijers, Rob [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Carlomagno, Teresa, E-mail: teresa.carlomagno@oci.uni-hannover.de [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Boivin, Stephane, E-mail: sboivin77@hotmail.com, E-mail: s.boivin@embl-hamburg.de [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany)

    2016-04-15

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  13. On Optimal, Minimal BRDF Sampling for Reflectance Acquisition

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Jensen, Henrik Wann; Ramamoorthi, Ravi

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) is critical for rendering, and accurate material representation requires data-driven reflectance models. However, isotropic BRDFs are 3D functions, and measuring the reflectance of a flat sample can require a million incident and outgoing direction pairs, making the use of measured BRDFs impractical. In this paper, we address the problem of reconstructing a measured BRDF from a limited number of samples. We present a novel mapping of the BRDF space, allowing for extraction of descriptive principal components from measured databases, such as the MERL BRDF database. We optimize for the best sampling directions, and explicitly provide the optimal set of incident and outgoing directions in the Rusinkiewicz parameterization for n = {1, 2, 5, 10, 20} samples. Based on the principal components, we describe a method for accurately reconstructing BRDF...

  14. Optimal updating magnitude in adaptive flat-distribution sampling.

    Science.gov (United States)

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
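
    A minimal sketch of the single-bin updating scheme with the inverse-time magnitude, on a toy one-dimensional landscape (the early constant-magnitude stage is simplified; all values are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_bins = 20
    E = np.arange(n_bins, dtype=float)   # toy energy levels, one per bin
    bias = np.zeros(n_bins)              # adaptive bias potential
    state = 0

    for t in range(1, 200_001):
        # Propose a neighboring bin; Metropolis acceptance on the biased energy.
        prop = (state + rng.choice([-1, 1])) % n_bins
        dE = (E[prop] + bias[prop]) - (E[state] + bias[state])
        if dE <= 0 or rng.random() < np.exp(-dE):
            state = prop
        # Single-bin update: constant magnitude early, then the asymptotically
        # optimal inverse-time schedule n_bins / t.
        bias[state] += min(1.0, n_bins / t)

    # After convergence the visiting histogram is approximately flat and
    # -bias recovers the free-energy profile (here just E) up to a constant.
    ```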

  15. Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP

    Science.gov (United States)

    Roshani, E.; Berg, A. A.; Lindsay, J.

    2013-12-01

    The Soil Moisture Active Passive satellite (SMAP) is scheduled for launch in October 2014. Global efforts are underway to establish soil moisture monitoring networks for both pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6-week period for soil moisture and several other parameters, simultaneous with remotely sensed images of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement providing the most efficient representation of the studied area. In this analysis a method for the optimization of sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (the non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, which is a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: (A) sampling sites should be accessible to the crew on the ground; (B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value; and (C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum constraint. The first constraint is implemented to keep the approach practical. The second and third constraints are considered to guarantee that the collected samples from each soil texture category

  16. Optimizing switching frequency of the soliton transistor by numerical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Izadyar, S., E-mail: S_izadyar@yahoo.co [Department of Electronics, Khaje Nasir Toosi University of Technology, Shariati Ave., Tehran (Iran, Islamic Republic of); Niazzadeh, M.; Raissi, F. [Department of Electronics, Khaje Nasir Toosi University of Technology, Shariati Ave., Tehran (Iran, Islamic Republic of)

    2009-10-15

    In this paper, by numerical simulation we have examined different ways to increase the soliton transistor's switching frequency. The speed of the solitons in a soliton transistor depends on various parameters such as the loss of the junction, the applied bias current, and the transmission line characteristics. Three different ways have been examined: (i) decreasing the size of the transistor without losing the transistor effect; (ii) decreasing the loss of the junction to increase the soliton speed; and (iii) optimizing the bias current to obtain the maximum possible speed. We have obtained the shortest possible length that accommodates at least one working soliton inside the transistor. The dimension of the soliton can be decreased by changing the inductance of the transmission line, causing a further decrease in the size of the transistor; however, a trade-off between the size and the inductance is needed to obtain the optimum switching speed. Decreasing the loss can be accomplished by increasing the characteristic tunneling resistance of the device; however, a trade-off is again needed to make soliton and antisoliton annihilation possible. By increasing the bias current, the forces acting on the solitons increase, and so does their speed. Due to the nonuniform application of bias current, a self-induced magnetic field is created which can result in the creation of unwanted solitons. Optimal application of the bias current allows larger bias currents and larger soliton speeds. Simulations have provided us with such an arrangement of bias current paths.

  17. Optimizing switching frequency of the soliton transistor by numerical simulation

    International Nuclear Information System (INIS)

    Izadyar, S.; Niazzadeh, M.; Raissi, F.

    2009-01-01

    In this paper, by numerical simulation we have examined different ways to increase the soliton transistor's switching frequency. The speed of the solitons in a soliton transistor depends on various parameters such as the loss of the junction, the applied bias current, and the transmission line characteristics. Three different ways have been examined: (i) decreasing the size of the transistor without losing the transistor effect; (ii) decreasing the loss of the junction to increase the soliton speed; and (iii) optimizing the bias current to obtain the maximum possible speed. We have obtained the shortest possible length that accommodates at least one working soliton inside the transistor. The dimension of the soliton can be decreased by changing the inductance of the transmission line, causing a further decrease in the size of the transistor; however, a trade-off between the size and the inductance is needed to obtain the optimum switching speed. Decreasing the loss can be accomplished by increasing the characteristic tunneling resistance of the device; however, a trade-off is again needed to make soliton and antisoliton annihilation possible. By increasing the bias current, the forces acting on the solitons increase, and so does their speed. Due to the nonuniform application of bias current, a self-induced magnetic field is created which can result in the creation of unwanted solitons. Optimal application of the bias current allows larger bias currents and larger soliton speeds. Simulations have provided us with such an arrangement of bias current paths.

  18. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  19. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass at 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300 m transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from the corresponding LAI transects, and had dimensions of 0.1 m by 2 m. We conducted regression analyses

  20. Optimized preparation of urine samples for two-dimensional electrophoresis and initial application to patient samples

    DEFF Research Database (Denmark)

    Lafitte, Daniel; Dussol, Bertrand; Andersen, Søren

    2002-01-01

    OBJECTIVE: We optimized the preparation of urinary samples to obtain a comprehensive map of the urinary proteins of healthy subjects, and then compared this map with the ones obtained from patient samples to show that the pattern was specific to their kidney disease. DESIGN AND METHODS: The urinary...

  1. OLT-centralized sampling frequency offset compensation scheme for OFDM-PON.

    Science.gov (United States)

    Chen, Ming; Zhou, Hui; Zheng, Zhiwei; Deng, Rui; Chen, Qinghui; Peng, Miao; Liu, Cuiwei; He, Jing; Chen, Lin; Tang, Xionggui

    2017-08-07

    We propose an optical line terminal (OLT)-centralized sampling frequency offset (SFO) compensation scheme for adaptively-modulated OFDM-PON systems. By using the proposed SFO scheme, the phase rotation and inter-symbol interference (ISI) caused by SFOs between OLT and multiple optical network units (ONUs) can be centrally compensated in the OLT, which reduces the complexity of ONUs. Firstly, the optimal fast Fourier transform (FFT) size is identified in the intensity-modulated and direct-detection (IMDD) OFDM system in the presence of SFO. Then, the proposed SFO compensation scheme including phase rotation modulation (PRM) and length-adaptive OFDM frame has been experimentally demonstrated in the downlink transmission of an adaptively modulated optical OFDM with the optimal FFT size. The experimental results show that up to ± 300 ppm SFO can be successfully compensated without introducing any receiver performance penalties.
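
    The SFO-induced phase rotation grows linearly with subcarrier index and accumulates over OFDM symbols; below is a minimal sketch of the per-subcarrier de-rotation that such schemes perform (the ISI/ICI term is ignored; the function and parameter names are illustrative, not the paper's):

    ```python
    import numpy as np

    def sfo_phase_derotation(rx_symbols, eps, n_fft, n_cp):
        """Compensate the first-order SFO phase rotation on received
        frequency-domain OFDM symbols.

        rx_symbols : complex array, shape (n_symbols, n_fft)
        eps        : relative sampling frequency offset (e.g. 300e-6 = 300 ppm)
        """
        n_sym = rx_symbols.shape[0]
        k = np.fft.fftfreq(n_fft) * n_fft        # subcarrier indices ..., -1, 0, 1, ...
        m = np.arange(1, n_sym + 1)[:, None]     # OFDM symbol indices
        # Phase grows linearly with subcarrier index and accumulates over symbols.
        phase = 2.0 * np.pi * k[None, :] * m * eps * (n_fft + n_cp) / n_fft
        return rx_symbols * np.exp(-1j * phase)
    ```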

  2. Ad-Hoc vs. Standardized and Optimized Arthropod Diversity Sampling

    Directory of Open Access Journals (Sweden)

    Pedro Cardoso

    2009-09-01

    Full Text Available The use of standardized and optimized protocols has recently been advocated for different arthropod taxa instead of ad-hoc sampling or sampling with protocols defined on a case-by-case basis. We present a comparison of both sampling approaches applied to spiders in a natural area of Portugal. Tests were made of their efficiency, over-collection of common species, singleton proportions, species abundance distributions, average specimen size, average taxonomic distinctness and the behavior of richness estimators. The standardized protocol revealed three main advantages: (1) higher efficiency; (2) more reliable estimations of true richness; and (3) meaningful comparisons between undersampled areas.

  3. Topology optimization of radio frequency and microwave structures

    DEFF Research Database (Denmark)

    Aage, Niels

    ...in this thesis, concerns the optimization of devices for wireless energy transfer via strongly coupled magnetic resonators. A single design problem is considered to demonstrate proof of concept. The resulting design illustrates the possibilities of the optimization method, but also reveals its numerical... of efficient antennas and power supplies. A topology optimization methodology is proposed based on a design parameterization which incorporates the skin effect. The numerical optimization procedure is implemented in Matlab, for 2D problems, and in a parallel C++ optimization framework, for 3D design problems... formalism, a two-step optimization procedure is presented. This scheme is applied to the design and optimization of a hemispherical sub-wavelength antenna. The optimized antenna configuration displayed a ratio of radiated power to input power in excess of 99%. The third, and last, design problem considered...

  4. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  5. Distributed Optimization Design of Continuous-Time Multiagent Systems With Unknown-Frequency Disturbances.

    Science.gov (United States)

    Wang, Xinghu; Hong, Yiguang; Yi, Peng; Ji, Haibo; Kang, Yu

    2017-05-24

    In this paper, a distributed optimization problem is studied for continuous-time multiagent systems with unknown-frequency disturbances. A distributed gradient-based control is proposed for the agents to achieve the optimal consensus with estimating unknown frequencies and rejecting the bounded disturbance in the semi-global sense. Based on convex optimization analysis and adaptive internal model approach, the exact optimization solution can be obtained for the multiagent system disturbed by exogenous disturbances with uncertain parameters.
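
    The paper's adaptive internal-model disturbance rejection is beyond a short sketch, but the gradient-plus-consensus core can be illustrated on a toy quadratic problem (a primal-dual consensus flow, Euler-discretized; every value below is an illustrative assumption):

    ```python
    import numpy as np

    # Each agent i privately holds f_i(x) = 0.5 * (x - a_i)^2, so the optimal
    # consensus value minimizing sum_i f_i is mean(a) = 1.5.
    a = np.array([1.0, 4.0, -2.0, 3.0])
    L = np.array([[ 2, -1,  0, -1],      # Laplacian of a 4-agent ring graph
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [-1,  0, -1,  2]], dtype=float)

    x = np.zeros(4)   # local estimates
    v = np.zeros(4)   # integral (dual) states enforcing exact consensus
    dt = 0.01
    for _ in range(20_000):
        grad = x - a                      # local gradients
        x_dot = -grad - L @ x - L @ v     # gradient + consensus + dual feedback
        v_dot = L @ x
        x, v = x + dt * x_dot, v + dt * v_dot

    # x -> [1.5, 1.5, 1.5, 1.5]: exact optimal consensus from purely local updates.
    ```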

  6. Research on a Micro-Grid Frequency Modulation Strategy Based on Optimal Utilization of Air Conditioners

    Directory of Open Access Journals (Sweden)

    Qingzhu Wan

    2016-12-01

    Full Text Available With the proportion of air conditioners increasing gradually, they can provide a certain amount of frequency-controlled reserve for a micro-grid. To optimize the utilization of air conditioners while considering load response characteristics and customer comfort, a frequency adjustment model is provided in which the trigger temperature of the air-conditioner compressor is a quadratic function of the frequency deviation; it can be used to regulate the trigger temperature when the micro-grid frequency rises and falls. This frequency adjustment model is combined with the primary and secondary frequency modulation methods of the energy storage system in order to optimize the frequency of a micro-grid. The simulation results show that the frequency modulation strategy for air conditioners can effectively improve the frequency modulation ability of air conditioners and the frequency modulation effects of a micro-grid in coordination with an energy storage system.

  7. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
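
    A toy sketch of the filtering idea (not Sandia's CGS implementation): a naive Bayes classifier trained on already-evaluated designs screens new candidates, and only those predicted promising are evaluated. The objective and all settings are invented for illustration.

    ```python
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    rng = np.random.default_rng(7)

    def objective(x):                          # stand-in for an expensive evaluation
        return -np.sum(x * np.arange(1, 13))   # 12 binary design variables, minimize

    X = rng.integers(0, 2, (40, 12))           # seed population of evaluated designs
    y = np.array([objective(x) for x in X])

    for gen in range(20):
        labels = y <= np.median(y)             # "promising" = better (lower) half
        clf = BernoulliNB().fit(X, labels)
        cand = rng.integers(0, 2, (200, 12))   # candidates (a GA would mutate parents)
        keep = cand[clf.predict_proba(cand)[:, 1] > 0.5][:20]
        if len(keep) == 0:
            continue
        X = np.vstack([X, keep])               # evaluate only the filtered candidates
        y = np.concatenate([y, [objective(x) for x in keep]])
    ```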

  8. Simultaneous beam sampling and aperture shape optimization for SPORT

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu

    2015-01-01

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm that explores the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  9. Simultaneous beam sampling and aperture shape optimization for SPORT

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Ye, Yinyu [Department of Management Science and Engineering, Stanford University, Stanford, California 94305 (United States)

    2015-02-15

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm that explores the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  10. Simultaneous beam sampling and aperture shape optimization for SPORT.

    Science.gov (United States)

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm that explores the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck and a prostate case

  11. Optimal execution in high-frequency trading with Bayesian learning

    Science.gov (United States)

    Du, Bian; Zhu, Hongliang; Zhao, Jingdong

    2016-11-01

    We consider optimal trading strategies in which traders submit bid and ask quotes to maximize the expected quadratic utility of total terminal wealth in a limit order book. The trader's bid and ask quotes are changed by the Poisson arrival of market orders. Meanwhile, the trader may update his estimate of other traders' target sizes and directions by Bayesian learning. The solution of optimal execution in the limit order book is a two-step procedure. First, we model inactive trading, with no limit orders in the market: the dealer simply holds dollars and shares of stock until the terminal time. Second, he calibrates his bid and ask quotes to the limit order book. The optimal solutions are given by dynamic programming and are in fact globally optimal. Numerical simulations of the value function and the optimal quotes are given in the last part of the article.
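    The Bayesian-learning ingredient can be illustrated with a conjugate normal-normal update of the dealer's belief about the other traders' net target size, with the posterior mean then skewing the quotes. A minimal sketch follows; the prior, noise variance, mid-price, and skew coefficient are all illustrative assumptions, not values from the paper.

        # Minimal sketch of the Bayesian-learning ingredient: a Gaussian belief about
        # the other traders' net target size, updated from noisy order-flow signals,
        # then used to skew the quotes. All numbers are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(1)
        mu, var = 0.0, 100.0        # prior mean/variance of the net target size
        obs_var = 25.0              # assumed noise variance of each observation
        true_target = 12.0          # the unknown quantity being learned (toy value)

        for _ in range(20):
            y = true_target + rng.normal(0.0, np.sqrt(obs_var))  # observed signed flow
            prec = 1.0 / var + 1.0 / obs_var                     # normal-normal update
            mu = (mu / var + y / obs_var) / prec
            var = 1.0 / prec

        mid, half_spread, skew = 100.0, 0.05, 0.001              # hypothetical quoting rule
        bid = mid - half_spread - skew * mu
        ask = mid + half_spread - skew * mu
        print(f"belief {mu:.2f} +/- {np.sqrt(var):.2f}; quotes {bid:.3f}/{ask:.3f}")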

  12. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist-rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
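    The block-level step lends itself to a compact illustration: fit a quadratic to a few (bit-depth, quality) observations and take the stationary point. The sketch below uses made-up sample points in place of the paper's trained model parameters.

        # Sketch of the block-level step: fit quality as a quadratic in bit-depth
        # and take the stationary point. The sample points are made up; the paper
        # trains the model parameters from data instead.
        import numpy as np

        bits = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
        score = np.array([28.1, 31.0, 32.6, 33.0, 32.4])  # toy rate-penalized PSNR (dB)

        c2, c1, c0 = np.polyfit(bits, score, deg=2)       # score(b) ~ c2*b^2 + c1*b + c0
        b_opt = -c1 / (2.0 * c2)                          # d(score)/db = 0
        print(f"model-optimal bit-depth ~ {b_opt:.2f} bits")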

  13. On the optimal sampling of bandpass measurement signals through data acquisition systems

    International Nuclear Information System (INIS)

    Angrisani, L; Vadursi, M

    2008-01-01

    Data acquisition systems (DAS) play a fundamental role in many modern measurement solutions. One of the parameters characterizing a DAS is its maximum sample rate, which imposes constraints on the signals that can be digitized alias-free. Bandpass sampling theory singles out separated ranges of admissible sample rates, which can be significantly lower than the carrier frequency. But how should the most convenient sample rate be chosen for the purpose at hand? The paper proposes a method for the automatic selection of the optimal sample rate in measurement applications involving bandpass signals; the effects of sample clock instability and limited resolution are also taken into account. The method allows the user to choose the location of the spectral replicas of the sampled signal in terms of normalized frequency, and the minimum guard band between replicas, thus introducing a feature that no DAS currently available on the market seems to offer. A number of experimental tests on bandpass digitally modulated signals are carried out to assess the concurrence of the obtained central frequency with the expected one
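    The admissible ranges the abstract refers to follow from the classic bandpass sampling bounds, 2*f_hi/n <= fs <= 2*f_lo/(n-1) for integer aliasing zone n. A minimal sketch enumerating them for an assumed example band:

        # Sketch: enumerate the alias-free sample-rate ranges for a bandpass signal
        # occupying [f_lo, f_hi]. The band edges are an assumed example.
        import math

        f_lo, f_hi = 20.0e6, 22.0e6
        bw = f_hi - f_lo
        for n in range(1, math.floor(f_hi / bw) + 1):     # aliasing zone index
            fs_min = 2.0 * f_hi / n
            fs_max = 2.0 * f_lo / (n - 1) if n > 1 else math.inf
            hi = "unbounded" if math.isinf(fs_max) else f"{fs_max / 1e6:.2f} MHz"
            print(f"zone n={n:2d}: {fs_min / 1e6:.2f} MHz <= fs <= {hi}")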

  14. CiOpt: a program for optimization of the frequency response of linear circuits

    OpenAIRE

    Miró Sans, Joan Maria; Palà Schönwälder, Pere

    1991-01-01

    An interactive personal-computer program for optimizing the frequency response of linear lumped circuits (CiOpt) is presented. CiOpt has proved to be an efficient tool in improving designs where the inclusion of more accurate device models distorts the desired frequency response, as well as in device modeling. The outputs of CiOpt are the element values which best match the obtained and the desired frequency response. The optimization algorithms used (the Fletcher-Powell and Newton's methods,...

  15. PWM pulse pattern optimization method using carrier frequency modulation. Carrier shuhasu hencho ni yoru PWM pulse pattern saitekikaho

    Energy Technology Data Exchange (ETDEWEB)

    Iwaji, Y.; Fukuda, S. (Hokkaido University, Sapporo (Japan))

    1991-07-15

    Sinusoidal inverters are becoming more widely used, keeping pace with the development of semiconductor switching devices. This paper discusses optimizing a PWM pulse pattern at an inverter output to drive an induction motor, proposes methods for reducing distortion and torque ripple using carrier frequency modulation (CFM), and describes a way to realize the improvement with a single-chip microcomputer. The method defines evaluation parameters corresponding to the distortion and torque ripple, and optimizes the CFM depth with respect to these parameters. The voltage vectors and time widths of the PWM pulse pattern are selected so that the time-integrated space vector of the three-phase voltage approaches a circular locus. Furthermore, the carrier frequency, that is, the sampling frequency of the inverter, is also adjusted so that the above evaluation parameters are minimized. The addition of a new variable, the frequency modulation depth, provides freedom in selecting an output characteristic as required by the application. 12 refs., 18 figs.

  16. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely, and by subsequently reweighting the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...

  17. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was a strong attractant for beetles and a repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when...

  18. 7 CFR 58.643 - Frequency of sampling.

    Science.gov (United States)

    2010-01-01

    ... each type of mix, and for the finished frozen product one sample from each flavor made. (b) Composition...

  19. Evaluation of the Frequency for Gas Sampling for the High Burnup Confirmatory Data Project

    Energy Technology Data Exchange (ETDEWEB)

    Stockman, Christine T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Alsaed, Halim A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bryan, Charles R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marschman, Steven C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Scaglione, John M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-05-01

    This report provides a technically based gas sampling frequency strategy for the High Burnup (HBU) Confirmatory Data Project. The evaluation of (1) the types and magnitudes of gases that could be present in the project cask and (2) the degradation mechanisms that could change gas compositions culminates in an adaptive gas sampling frequency strategy. This adaptive strategy is compared against the sampling frequency that has been developed based on operational considerations.

  20. Real Time Optimal Control of Supercapacitor Operation for Frequency Response

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Yusheng; Panwar, Mayank; Mohanpurkar, Manish; Hovsapian, Rob

    2016-07-01

    Supercapacitors are gaining wider application in power systems due to their fast dynamic response. Utilizing supercapacitors by means of power electronics interfaces for power compensation is a proven effective technique. For applications such as frequency restoration, however, the cost of supercapacitor maintenance as well as the energy loss in the power electronics interfaces must be addressed, and it is infeasible to use traditional optimization control methods to mitigate the impacts of frequent cycling. This paper proposes a Front End Controller (FEC) using Generalized Predictive Control featuring real-time receding-horizon optimization. The optimization constraints are based on cost and thermal management to enhance the utilization efficiency of supercapacitors. A rigorous mathematical derivation is conducted, and test results acquired from a Digital Real Time Simulator are provided to demonstrate the effectiveness of the approach.

  1. Frequency Optimization for Enhancement of Surface Defect Classification Using the Eddy Current Technique

    Science.gov (United States)

    Fan, Mengbao; Wang, Qi; Cao, Binghua; Ye, Bo; Sunny, Ali Imam; Tian, Guiyun

    2016-01-01

    Eddy current testing is quite a popular non-contact and cost-effective method for nondestructive evaluation of product quality and structural integrity. Excitation frequency is one of the key performance factors for defect characterization. In the literature, there are many interesting papers dealing with wide spectral content and optimal frequency in terms of detection sensitivity. However, research activity on frequency optimization with respect to characterization performance is lacking. In this paper, an investigation into the optimum excitation frequency has been conducted to enhance surface defect classification performance. The influence of excitation frequency on a group of defects was revealed in terms of detection sensitivity, contrast between defect features, and classification accuracy using kernel principal component analysis (KPCA) and a support vector machine (SVM). It is observed that, on the whole, probe signals are most sensitive for a group of defects when the excitation frequency is set near the frequency at which the maximum probe signal is retrieved for the largest defect. After the use of KPCA, the margins between the defect features are optimum from the perspective of the SVM, which adopts optimal hyperplanes for structural risk minimization. As a result, the best classification accuracy is obtained. The main contribution is that the influence of excitation frequency on defect characterization is interpreted, and experiment-based procedures are proposed to determine the optimal excitation frequency for a group of defects, rather than a single defect, with respect to optimal characterization performance. PMID:27164112
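    The KPCA-plus-SVM classification stage can be sketched in a few lines. The pipeline below runs on synthetic stand-in data; in the paper the inputs would be eddy-current probe responses acquired at the optimized excitation frequency, and the kernel settings here are arbitrary assumptions.

        # Sketch of the KPCA + SVM stage on synthetic stand-in features. In the
        # paper the inputs are eddy-current responses at the optimized frequency;
        # the kernel and its parameters here are arbitrary assumptions.
        from sklearn.datasets import make_classification
        from sklearn.decomposition import KernelPCA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=200, n_features=10, n_informative=6,
                                   n_classes=3, random_state=0)
        clf = make_pipeline(KernelPCA(n_components=4, kernel="rbf", gamma=0.1),
                            SVC(kernel="linear", C=1.0))
        print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())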

  2. Optimization of vehicle compartment low frequency noise based on Radial Basis Function Neuro-Network Approximation Model

    Directory of Open Access Journals (Sweden)

    HU Qi-guo

    2017-01-01

    Full Text Available For reducing vehicle compartment low-frequency noise, the optimal Latin hypercube sampling method was applied to perform the experimental design, sampling in the factorial design space. The thickness parameters of the panels with the largest acoustic contributions were considered as factors, with the vehicle mass, the seventh body modal frequency, the peak sound pressure at the test point, and the root-mean-square sound pressure as responses. Using the RBF (radial basis function) neural network method, an approximation model of the four responses in terms of the six factors was established, and an error analysis of the approximation model was performed. To optimize the panel thickness parameters, the adaptive simulated annealing algorithm was implemented. Optimization results show that the peak sound pressure at the driver's head was reduced by 4.45 dB and 5.47 dB at 158 Hz and 134 Hz, respectively, and the sound pressure at the test point was significantly reduced at other frequencies as well. The results indicate that through the optimization the vehicle interior cavity noise was reduced effectively, and the acoustical comfort of the vehicle was improved significantly.
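    The design-of-experiments chain described here (space-filling design, surrogate fit, global search on the surrogate) is easy to sketch. In the outline below the acoustic simulation is replaced by a toy function, and the sampler, interpolator, and optimizer choices are assumptions standing in for the paper's specific tools.

        # Sketch of the chain: Latin hypercube design -> RBF surrogate -> global
        # search on the surrogate. The "noise response" is a toy function standing
        # in for the vehicle acoustic simulation; tool choices are assumptions.
        import numpy as np
        from scipy.stats import qmc
        from scipy.interpolate import RBFInterpolator
        from scipy.optimize import dual_annealing

        def noise_response(x):                       # stand-in for the FE/acoustic model
            return np.sum((x - 0.3) ** 2, axis=-1)

        sampler = qmc.LatinHypercube(d=6, seed=0)    # six panel-thickness factors
        X = sampler.random(n=60)                     # design points in [0, 1]^6
        y = noise_response(X)

        surrogate = RBFInterpolator(X, y)            # RBF approximation model
        result = dual_annealing(lambda x: float(surrogate(x[None, :])),
                                bounds=[(0.0, 1.0)] * 6, seed=0, maxiter=200)
        print("surrogate optimum at:", np.round(result.x, 3))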

  3. Search for the optimally suited cantilever type for high-frequency MFM

    International Nuclear Information System (INIS)

    Koblischka, M R; Wei, J D; Kirsch, M; Lessel, M; Pfeifer, R; Brust, M; Hartmann, U; Richter, C; Sulzbach, T

    2007-01-01

    To optimize the performance of the high-frequency MFM (HF-MFM) technique [1-4], we performed a search for the best suited cantilever type and magnetic material coating. Using a HF-MFM setup with hard disk writer poles as test samples, we carried out HF-MFM imaging at frequencies up to 2 GHz. For HF-MFM, it is an essential ingredient that the tip material can follow the fast switching of the high-frequency fields. In this contribution, we investigated six different types of cantilevers: (i) the 'standard' MFM tip (Nanoworld Pointprobe) with 30 nm CoCr coating, (ii) an 'SSS' (Nanoworld SuperSharpSilicon(TM)) cantilever with a 10 nm CoCr coating, (iii) a (Ni,Zn)-ferrite coated Pointprobe tip, (iv) a Ba3Co2Fe23O41 (BCFO) coated Pointprobe tip, (v) a low-coercivity NiCo alloy coated tip, and (vi) a permalloy-coated tip

  4. A Dictionary of Basic Pashto Frequency List I, Project Description and Samples, and Frequency List II.

    Science.gov (United States)

    Heston, Wilma

    The three-volume set of materials describes and presents the results to date of a federally-funded project to develop Pashto-English and English-Pashto dictionaries. The goal was to produce a list of 12,000 basic Pashto words for English-speaking users. Words were selected based on frequency in various kinds of oral and written materials, and were…

  5. Application of energies of optimal frequency bands for fault diagnosis based on modified distance function

    Energy Technology Data Exchange (ETDEWEB)

    Zamanian, Amir Hosein [Southern Methodist University, Dallas (United States); Ohadi, Abdolreza [Amirkabir University of Technology (Tehran Polytechnic), Tehran (Iran, Islamic Republic of)

    2017-06-15

    Low-dimensional relevant feature sets are ideal for avoiding extra data mining in classification. The current work investigates the feasibility of utilizing the energies of vibration signals in optimal frequency bands as features for machine fault diagnosis applications. Energies in different frequency bands were derived based on Parseval's theorem. The optimal feature sets were extracted by optimizing the related frequency bands using a genetic algorithm and a Modified distance function (MDF). The frequency bands and the number of bands were optimized based on the MDF, which is designed to (a) maximize the distance between class centers, (b) minimize the dispersion of features within each class, and (c) minimize the dimension of the extracted feature sets. Experimental signals from two different gearboxes were used to demonstrate the efficiency of the presented technique. The results show the effectiveness of the presented technique in gear fault diagnosis applications.
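    The feature-extraction core, band energies obtained from the spectrum via Parseval's theorem, can be sketched directly. The signal and the band edges below are illustrative; in the paper the bands themselves are the quantities tuned by the genetic algorithm.

        # Sketch: band-energy features from the spectrum via Parseval's theorem.
        # The test signal and band edges are illustrative; in the paper the band
        # edges are the quantities tuned by the genetic algorithm.
        import numpy as np

        fs = 10_000.0
        t = np.arange(0.0, 1.0, 1.0 / fs)
        x = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 3_000 * t)

        spec = np.abs(np.fft.fft(x)) ** 2 / len(x)   # Parseval: spec.sum() == (x**2).sum()
        freqs = np.fft.fftfreq(len(x), d=1.0 / fs)

        for lo, hi in [(0, 500), (500, 2000), (2000, 5000)]:
            band = (np.abs(freqs) >= lo) & (np.abs(freqs) < hi)
            print(f"{lo:>4}-{hi:<4} Hz band energy: {spec[band].sum():10.1f}")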

  6. Improvement of Low-Frequency Sound Field Obtained by an Optimized Boundary

    Institute of Scientific and Technical Information of China (English)

    JING Lu; ZHU Xiao-tian

    2006-01-01

    An approach based on finite element analysis was introduced to improve the low-frequency sound field. The optimized scatterers on the wall redistribute the modes of the room and provide effective diffusion of the sound field. The frequency response, eigenfrequencies, spatial distribution and transient response were calculated, and experimental data were obtained with a 1:5 scaled setup. The results show that the optimized treatment has a positive effect on the sound field and the improvement is obvious.

  7. Optimization of Quantum-state-preserving Frequency Conversion by Changing the Input Signal

    DEFF Research Database (Denmark)

    Andersen, Lasse Mejling; Reddy, D. V.; McKinstrie, C. J.

    We optimize frequency conversion based on four-wave mixing by using the input modes of the system. We find a 10-25% higher conversion efficiency relative to a pump-shaped input signal.

  8. Alfven continuum and high-frequency eigenmodes in optimized stellarators

    International Nuclear Information System (INIS)

    Kolesnichenko, Ya.I.; Lutsenko, V.V.; Wobig, H.; Yakovenko, Yu.V.; Fesenyuk, O.P.

    2001-01-01

    An equation for shear Alfven eigenmodes (AE) in optimized stellarators of the Wendelstein line (Helias configurations) is derived. The metric tensor coefficients contained in this equation are calculated analytically. Two numerical codes are developed: the first, COBRA (COntinuum BRanches of Alfven waves), is intended for investigating the structure of the Alfven continuum; the second, BOA (Branches Of Alfven modes), solves the eigenvalue problem. The family of possible gaps in the Alfven continuum of a Helias configuration is obtained. It is predicted that there exist gaps which arise due to, or are strongly affected by, the variation of the shape of the plasma cross section along the large azimuth of the torus. In such gaps, discrete eigenmodes, namely helicity-induced eigenmodes (HAE21) and mirror-induced eigenmodes (MAE), are found. It is shown that plasma inhomogeneity may suppress AEs with a wide region of localization

  9. Etching of Niobium Sample Placed on Superconducting Radio Frequency Cavity Surface in Ar/CL2 Plasma

    International Nuclear Information System (INIS)

    Upadhyay, Janardan; Phillips, Larry; Valente, Anne-Marie

    2011-01-01

    Plasma based surface modification is a promising alternative to wet etching of superconducting radio frequency (SRF) cavities. It has been proven with flat samples that the bulk niobium (Nb) removal rate and the surface roughness after plasma etching are equal to or better than those of wet etching processes. To optimize the plasma parameters, we are using a single cell cavity with 20 sample holders symmetrically distributed over the cell. These holders serve as diagnostic ports for the measurement of the plasma parameters and hold the Nb samples to be etched. The plasma properties at RF (100 MHz) and MW (2.45 GHz) frequencies are being measured with the help of electrical and optical probes at different pressures and RF power levels inside the cavity. Niobium coupons placed on several holders around the cell are being etched simultaneously. The etching results will be presented at this conference.

  10. Etching of Niobium Sample Placed on Superconducting Radio Frequency Cavity Surface in Ar/CL2 Plasma

    Energy Technology Data Exchange (ETDEWEB)

    Janardan Upadhyay, Larry Phillips, Anne-Marie Valente

    2011-09-01

    Plasma based surface modification is a promising alternative to wet etching of superconducting radio frequency (SRF) cavities. It has been proven with flat samples that the bulk niobium (Nb) removal rate and the surface roughness after plasma etching are equal to or better than those of wet etching processes. To optimize the plasma parameters, we are using a single cell cavity with 20 sample holders symmetrically distributed over the cell. These holders serve as diagnostic ports for the measurement of the plasma parameters and hold the Nb samples to be etched. The plasma properties at RF (100 MHz) and MW (2.45 GHz) frequencies are being measured with the help of electrical and optical probes at different pressures and RF power levels inside the cavity. Niobium coupons placed on several holders around the cell are being etched simultaneously. The etching results will be presented at this conference.

  11. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of the Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of a qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary, and a POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of the optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive, so there is a tendency to reduce sample sizes, which in turn reduces the conservatism associated with the derived POD curve. Not much guidance on the correct sample size can be found in the published literature, where qualitative statements are often given with no further justification. The aim of this paper is to summarise the findings of such work. (author)

  12. Design of Meander-Line Antennas for Radio Frequency Identification Based on Multiobjective Optimization

    Directory of Open Access Journals (Sweden)

    X. L. Travassos

    2012-01-01

    Full Text Available This paper presents optimization problem formulations to design meander-line antennas for passive UHF radio frequency identification tags based on given specifications of input impedance, frequency range, and geometric constraints. In this application, there is a need for directive transponders to properly select the target tag, which in turn must be ideally isotropic. The design of an effective meander-line antenna for RFID purposes requires balancing geometrical characteristics with the microchip impedance. Therefore, there is an issue of optimization in determining the antenna parameters for best performance. The antenna is analyzed by a method of moments. Some results using a deterministic optimization algorithm are shown.

  13. Optimizing an Actuator Array for the Control of Multi-Frequency Noise in Aircraft Interiors

    Science.gov (United States)

    Palumbo, D. L.; Padula, S. L.

    1997-01-01

    Techniques developed for selecting an optimized actuator array for interior noise reduction at a single frequency are extended to the multi-frequency case. Transfer functions for 64 actuators were obtained at 5 frequencies from ground testing the rear section of a fully trimmed DC-9 fuselage. A single loudspeaker facing the left side of the aircraft was the primary source. A combinatorial search procedure (tabu search) was employed to find optimum actuator subsets of from 2 to 16 actuators. Noise reduction predictions derived from the transfer functions were used as a basis for evaluating actuator subsets during optimization. Results indicate that it is necessary to constrain actuator forces during optimization. Unconstrained optimizations selected actuators which require unrealistically large forces. Two methods of constraint are evaluated. It is shown that a fast, but approximate, method yields results equivalent to an accurate, but computationally expensive, method.
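    The subset-selection step can be illustrated with a generic tabu search over actuator subsets under a force constraint. The benefit and force numbers below are random stand-ins for the measured transfer-function predictions; the swap neighbourhood and tabu tenure are assumptions.

        # Sketch of a tabu search over actuator subsets with a force constraint.
        # Benefits and forces are random stand-ins for the measured transfer
        # functions; the swap neighbourhood and tabu tenure are assumptions.
        import numpy as np

        rng = np.random.default_rng(2)
        n_act, k = 64, 6
        gain = rng.uniform(0.0, 1.0, n_act)          # toy per-actuator benefit
        force = rng.uniform(0.5, 2.0, n_act)         # toy required force
        FORCE_LIMIT = 9.0

        def score(subset):
            idx = list(subset)
            if force[idx].sum() > FORCE_LIMIT:       # constrain actuator forces
                return -np.inf
            return gain[idx].sum()

        current = set(rng.choice(n_act, k, replace=False))
        best, tabu = set(current), []
        for _ in range(300):
            moves = [(i, j) for i in current for j in range(n_act)
                     if j not in current and (i, j) not in tabu]
            i, j = max(moves, key=lambda m: score(current - {m[0]} | {m[1]}))
            current = current - {i} | {j}
            tabu = (tabu + [(j, i)])[-20:]           # forbid the reverse swap briefly
            if score(current) > score(best):
                best = set(current)
        print("best subset:", sorted(int(a) for a in best), "score:", round(score(best), 3))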

  14. Draft evaluation of the frequency for gas sampling for the high burnup confirmatory data project

    Energy Technology Data Exchange (ETDEWEB)

    Stockman, Christine T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Alsaed, Halim A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bryan, Charles R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-03-26

    This report fulfills the M3 milestone M3FT-15SN0802041, “Draft Evaluation of the Frequency for Gas Sampling for the High Burn-up Storage Demonstration Project” under Work Package FT-15SN080204, “ST Field Demonstration Support – SNL”. This report provides a technically based gas sampling frequency strategy for the High Burnup (HBU) Confirmatory Data Project. The evaluation of: 1) the types and magnitudes of gases that could be present in the project cask and, 2) the degradation mechanisms that could change gas compositions culminates in an adaptive gas sampling frequency strategy. This adaptive strategy is compared against the sampling frequency that has been developed based on operational considerations. Gas sampling will provide information on the presence of residual water (and byproducts associated with its reactions and decomposition) and breach of cladding, which could inform the decision of when to open the project cask.

  15. The Importance of Pressure Sampling Frequency in Models for Determination of Critical Wave Loadingson Monolithic Structures

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Andersen, Thomas Lykke; Meinert, Palle

    2008-01-01

    This paper discusses the influence of the wave load sampling frequency on the calculated sliding distance in an overall stability analysis of a monolithic caisson. It is demonstrated by a specific example of caisson design that for this kind of analysis the sampling frequency in a small scale model could be as low as 100 Hz in model scale. However, for the design of structural elements like the wave wall on the top of a caisson, the wave load sampling frequency must be much higher, in the order of 1000 Hz in the model. Elastic-plastic deformations of foundation and structure were not included in the analysis.

  16. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  17. Frequency Response of the Sample Vibration Mode in Scanning Probe Acoustic Microscope

    International Nuclear Information System (INIS)

    Ya-Jun, Zhao; Qian, Cheng; Meng-Lu, Qian

    2010-01-01

    Based on the interaction mechanism between tip and sample in the contact mode of a scanning probe acoustic microscope (SPAM), an active mass of the sample is introduced into the mass-spring model. The tip motion and the frequency response of the sample vibration mode in the SPAM are calculated by the Lagrange equation with a dissipation function. For the silicon tip and glass assembly in the SPAM, the frequency response is simulated and is in agreement with the experimental result. Living myoblast cells on a glass slide are imaged at resonance frequencies of the SPAM system, namely 20 kHz, 30 kHz and 120 kHz. It is shown that good contrast in SPAM images can be obtained when the system is operated at its resonance frequencies in the high- and low-frequency regions

  18. The optimal operation of cooling tower systems with variable-frequency control

    Science.gov (United States)

    Cao, Yong; Huang, Liqing; Cui, Zhiguo; Liu, Jing

    2018-02-01

    This study investigates the energy performance of chiller and cooling tower systems integrated with variable-frequency control for cooling tower fans and condenser water pumps. For an example chiller system serving an office building, chiller and cooling tower models were developed to assess how different variable-frequency control methods for cooling tower fans and condenser water pumps influence the trade-off between chiller power, pump power and fan power under various operating conditions. The matching relationship between cooling tower fan frequency and condenser water pump frequency at the system's optimal energy consumption is introduced to achieve optimum system performance.

  19. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    Science.gov (United States)

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to three major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shortened the acquisition time and improved the quality of FP reconstructions. They may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.

  20. Extended exergy concept to facilitate designing and optimization of frequency-dependent direct energy conversion systems

    International Nuclear Information System (INIS)

    Wijewardane, S.; Goswami, Yogi

    2014-01-01

    Highlights: • Proved the exergy method is not adequate to optimize frequency-dependent energy conversion. • The exergy concept is modified to facilitate the thermoeconomic optimization of photocells. • The exergy of arbitrary radiation is used for a practical purpose. • The utility of the concept is illustrated using pragmatic examples. - Abstract: Providing the radiation within the acceptable (responsive) frequency range(s) is a common method to increase the efficiency of frequency-dependent energy conversion systems, such as photovoltaics and nano-scale rectennas. Appropriately designed auxiliary items, such as spectrally selective thermal emitters, optical filters, and lenses, are used for this purpose. However, any energy conversion method that utilizes auxiliary components to increase the efficiency of a system has to justify the potential cost incurred by those auxiliary components through the economic gain emerging from the increased system efficiency. Therefore, much effort should be devoted to designing innovative systems that effectively integrate the auxiliary items, and to optimizing the system with economic considerations. Exergy is the widely used method to design and optimize conventional energy conversion systems. Although the exergy concept has been used to analyze photovoltaic systems, it has not been used effectively to design and optimize such systems. In this manuscript, we present a modified exergy method for effectively designing and economically optimizing frequency-dependent energy conversion systems, and we illustrate the utility of this concept using examples of thermophotovoltaic, photovoltaic/thermal and concentrated solar photovoltaic systems.

  1. Frequency Mixing Magnetic Detection Scanner for Imaging Magnetic Particles in Planar Samples.

    Science.gov (United States)

    Hong, Hyobong; Lim, Eul-Gyoon; Jeong, Jae-Chan; Chang, Jiho; Shin, Sung-Woong; Krause, Hans-Joachim

    2016-06-09

    The setup of a planar Frequency Mixing Magnetic Detection (p-FMMD) scanner for performing Magnetic Particle Imaging (MPI) of flat samples is presented. It consists of two magnetic measurement heads on both sides of the sample, mounted on the legs of a u-shaped support. The sample is locally exposed to a magnetic excitation field consisting of two distinct frequencies, a stronger component at about 77 kHz and a weaker field at 61 Hz. The nonlinear magnetization characteristics of superparamagnetic particles give rise to the generation of intermodulation products. A selected sum-frequency component of the high- and low-frequency magnetic fields incident on the magnetically nonlinear particles is recorded by demodulation electronics. In contrast to a conventional MPI scanner, p-FMMD does not require the application of a strong magnetic field to the whole sample because mixing of the two frequencies occurs locally. Thus, the lateral dimensions of the sample are limited only by the scanning range and the supports, while the sample height determines the spatial resolution; in the current setup it is limited to 2 mm. As examples, we present two 20 mm × 25 mm p-FMMD images acquired from samples with 1 µm diameter maghemite particles in a silanol matrix and with 50 nm magnetite particles in an aminosilane matrix. The results show that the novel MPI scanner can be applied to the analysis of thin biological samples and for medical diagnostic purposes.
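    The frequency-mixing principle is easy to reproduce numerically: drive any saturating magnetization curve with a strong tone at f1 and a weak tone at f2, and intermodulation lines appear at f1 ± 2·f2. The sketch below uses a tanh curve and illustrative amplitudes; only the two drive frequencies are taken from the abstract.

        # Sketch: a saturating magnetization driven by a strong tone f1 and a weak
        # tone f2 produces intermodulation lines at f1 +/- 2*f2. Only the two drive
        # frequencies come from the abstract; everything else is illustrative.
        import numpy as np

        fs = 2_000_000.0                             # simulation sample rate (Hz)
        t = np.arange(0.0, 1.0, 1.0 / fs)
        f1, f2 = 77_000.0, 61.0                      # strong HF tone, weak LF tone
        H = 1.0 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
        M = np.tanh(H)                               # toy nonlinear magnetization

        spec = np.abs(np.fft.rfft(M)) / len(M)
        freqs = np.fft.rfftfreq(len(M), d=1.0 / fs)
        for f in (f1, f1 + 2 * f2, f1 - 2 * f2):
            k = int(np.argmin(np.abs(freqs - f)))
            print(f"{f / 1e3:9.3f} kHz component: {spec[k]:.2e}")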

  2. AMORE-HX: a multidimensional optimization of radial enhanced NMR-sampled hydrogen exchange

    International Nuclear Information System (INIS)

    Gledhill, John M.; Walters, Benjamin T.; Wand, A. Joshua

    2009-01-01

    The Cartesian sampled three-dimensional HNCO experiment is inherently limited in time resolution and sensitivity for the real-time measurement of protein hydrogen exchange. This is largely overcome by use of the radial HNCO experiment, which employs optimized sampling angles. The significant practical limitation presented by the use of three-dimensional data, namely the large data storage and processing requirements, is largely overcome by taking advantage of the inherent capability of the 2D-FT to process selected regions of frequency space without artifact or limitation. Decomposition of angle spectra into positive and negative ridge components provides increased resolution and allows statistical averaging of intensity and therefore increased precision. Strategies for averaging ridge cross sections within and between angle spectra are developed to allow further statistical approaches for increasing the precision of the measured hydrogen occupancy. Intensity artifacts potentially introduced by over-pulsing are effectively eliminated by use of the BEST approach

  3. Three-Dimensional Dynamic Topology Optimization with Frequency Constraints Using Composite Exponential Function and ICM Method

    Directory of Open Access Journals (Sweden)

    Hongling Ye

    2015-01-01

    Full Text Available The dynamic topology optimization of three-dimensional continuum structures subject to frequency constraints is investigated using the Independent Continuous Mapping (ICM) method. The composite exponential function (CEF) is selected as the filter function which recognizes the design variables and implements the changing process of the design variables from "discrete" to "continuous" and back to "discrete." Explicit formulations of the frequency constraints are given based on the filter functions and a first-order Taylor series expansion, and an improved optimization model is formulated using the CEF and the explicit frequency constraints. A dual sequential quadratic programming (DSQP) algorithm is used to solve the model. The program is developed on the platform of MSC Patran & Nastran. Finally, numerical examples are given to demonstrate the validity and applicability of the proposed method.

  4. Optimization of Natural Frequencies and Sound Power of Beams Using Functionally Graded Material

    Directory of Open Access Journals (Sweden)

    Nabeel T. Alshabatat

    2014-01-01

    Full Text Available This paper presents a design method to optimize the material distribution of functionally graded beams with respect to some vibration and acoustic properties. The change of the material distribution through the beam length alters the stiffness and the mass of the beam. This can be used to alter a specific beam natural frequency, and it can also be used to reduce the sound power radiated from the vibrating beam. Two novel volume fraction laws are used to describe the material volume distributions through the length of the FGM beam. The proposed method couples the finite element method (for the modal and harmonic analysis), a lumped parameter model (for calculating the radiated sound power), and an optimization technique based on a genetic algorithm. As a demonstration of this technique, the optimization procedure is applied to maximize the fundamental frequency of FGM cantilever and clamped beams and to minimize the sound radiation from a vibrating clamped FGM beam at a specific frequency.

  5. Focusing light through dynamical samples using fast continuous wavefront optimization.

    Science.gov (United States)

    Blochet, B; Bourdieu, L; Gigan, S

    2017-12-01

    We describe a fast continuous optimization wavefront shaping system able to focus light through dynamic scattering media. A micro-electro-mechanical system-based spatial light modulator, a fast photodetector, and field programmable gate array electronics are combined to implement a continuous optimization of a wavefront with a single-mode optimization rate of 4.1 kHz. The system performances are demonstrated by focusing light through colloidal solutions of TiO2 particles in glycerol with tunable temporal stability.

  6. Symbol synchronization and sampling frequency synchronization techniques in real-time DDO-OFDM systems

    Science.gov (United States)

    Chen, Ming; He, Jing; Cao, Zizheng; Tang, Jin; Chen, Lin; Wu, Xian

    2014-09-01

    In this paper, we propose and experimentally demonstrate symbol synchronization and sampling frequency synchronization techniques for a real-time direct-detection optical orthogonal frequency division multiplexing (DDO-OFDM) system, over 100-km standard single mode fiber (SSMF), using a cost-effective directly modulated distributed feedback (DFB) laser. The experimental results show that the proposed symbol synchronization based on a training sequence (TS) has low complexity and high accuracy even at a sampling frequency offset (SFO) of 5000 ppm. Meanwhile, the proposed pilot-assisted sampling frequency synchronization between the digital-to-analog converter (DAC) and the analog-to-digital converter (ADC) is capable of accurately estimating SFOs; the technique can also compensate SFO effects to within a small residual SFO caused by deviations of the SFO estimate and a low-precision or unstable clock source. The two synchronization techniques are suitable for high-speed DDO-OFDM transmission systems.

  7. Optimal Control and Operation Strategy for Wind Turbines Contributing to Grid Primary Frequency Regulation

    Directory of Open Access Journals (Sweden)

    Mun-Kyeom Kim

    2017-09-01

    Full Text Available This study introduces a frequency regulation strategy to enable the participation of wind turbines with permanent magnet synchronous generators (PMSGs. The optimal strategy focuses on developing the frequency support capability of PMSGs connected to the power system. Active power control is performed using maximum power point tracking (MPPT and de-loaded control to supply the required power reserve following a disturbance. A kinetic energy (KE reserve control is developed to enhance the frequency regulation capability of wind turbines. The coordination with the de-loaded control prevents instability in the PMSG wind system due to excessive KE discharge. A KE optimization method that maximizes the sum of the KE reserves at wind farms is also adopted to determine the de-loaded power reference for each PMSG wind turbine using the particle swarm optimization (PSO algorithm. To validate the effectiveness of the proposed optimal control and operation strategy, three different case studies are conducted using the PSCAD/EMTDC simulation tool. The results demonstrate that the optimal strategy enhances the frequency support contribution from PMSG wind turbines.

  8. Numerical optimization of quasi-optical mode converter for frequency step-tunable gyrotron

    International Nuclear Information System (INIS)

    Drumm, O.

    2002-08-01

    This work concentrates on the design of a quasi-optical mode converter for a frequency step-tunable gyrotron. Special attention is paid to the optimization of the conversion and forming of the excited wave at different frequencies inside the resonator. The investigations were part of the HGF strategy fund project "Optimization of Tokamak Operation with Controlled ECRH Deposition". In the resonator of the gyrotron, modes can be excited at frequencies between 105 and 140 GHz. With the designed converter, the desired field distribution at the output window is approximately obtained for all frequencies. The newly gained knowledge and the developed synthesis methods are applied to this practical example and verified. In this work, the waveguide antenna and the mirror system of the quasi-optical mode converter are presented separately from each other. At the beginning, the synthesis of the aperture antenna for a frequency step-tunable design of the Vlasov type as well as the Denisov type is considered. Concluding the investigation, the important parameters for the design of all antennas are summarized and the frequency behavior is compared. In the second part of this work, new broadband design methods for the synthesis of the mirror surface are presented. These mirrors make optimal wave forming equally possible for all frequencies. Therefore, new quality criteria are introduced for the broadband evaluation of the mirror, and the surface is varied until the criteria reach an optimum. For the numerical optimization, the gradient method and the extended Katsenelenbaum-Semenov algorithm are implemented and applied; the efficient realization of the described algorithms on a computer is the significant point. The theoretical background of the presented methods for the synthesis of a mirror system is based on the general solution of the Helmholtz equation. Due to this, these methods can also be utilized in other fields outside the microwave applications in...

  9. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  10. Effect of Sampling Frequency for Real-Time Tablet Coating Monitoring Using Near Infrared Spectroscopy.

    Science.gov (United States)

    Igne, Benoît; Arai, Hiroaki; Drennen, James K; Anderson, Carl A

    2016-09-01

    While the sampling of pharmaceutical products typically follows well-defined protocols, the parameterization of spectroscopic methods and their associated sampling frequency is not standard. Whereas for blending the sampling frequency is limited by the nature of the process, in other processes, such as tablet film coating, practitioners must determine the best approach to collecting spectral data. The present article studied how sampling practices affected the interpretation of the results provided by a near-infrared spectroscopy method for the monitoring of tablet moisture and coating weight gain during a pan-coating experiment. Several coating runs were monitored with different sampling frequencies (with or without co-adds, also known as sub-samples) and with spectral averaging corresponding to processing cycles (1 to 15 pan rotations). Beyond integrating the sensor into the equipment, the present work demonstrated that it is necessary to have a good sense of the underlying phenomena that have the potential to affect the quality of the signal. The effects of co-adds and averaging were significant with respect to the quality of the spectral data. However, the type of output obtained from a sampling method dictates the type of information that one can gain on the dynamics of a process. Thus, different sampling frequencies may be needed at different stages of process development. © The Author(s) 2016.

  11. Using high-frequency sampling to detect effects of atmospheric pollutants on stream chemistry

    Science.gov (United States)

    Stephen D. Sebestyen; James B. Shanley; Elizabeth W. Boyer

    2009-01-01

    We combined information from long-term (weekly over many years) and short-term (high-frequency during rainfall and snowmelt events) stream water sampling efforts to understand how atmospheric deposition affects stream chemistry. Water samples were collected at the Sleepers River Research Watershed, VT, a temperate upland forest site that receives elevated atmospheric...

  12. Impact of sampling frequency in the analysis of tropospheric ozone observations

    Directory of Open Access Journals (Sweden)

    M. Saunois

    2012-08-01

    Full Text Available Measurements of ozone vertical profiles are valuable for the evaluation of atmospheric chemistry models and contribute to the understanding of the processes controlling the distribution of tropospheric ozone. The longest record of ozone vertical profiles is provided by ozone sondes, which have a typical frequency of 4 to 12 profiles a month. Here we quantify the uncertainty introduced by low-frequency sampling in the determination of means and trends. To do this, the high-frequency MOZAIC (Measurements of OZone, water vapor, carbon monoxide and nitrogen oxides by in-service AIrbus airCraft) profiles over airports, such as Frankfurt, have been subsampled at two typical ozone sonde frequencies of 4 and 12 profiles per month. We found the lowest sampling uncertainty on seasonal means at 700 hPa over Frankfurt, around 5% for a frequency of 12 profiles per month and 10% for a frequency of 4 profiles per month. However, the uncertainty can reach up to 15 and 29% at the lowest altitude levels. As a consequence, the sampling uncertainty at the lowest frequency could be higher than the typical 10% accuracy of the ozone sondes and should be carefully considered for observation comparison and model evaluation. We found that the 95% confidence limit on the seasonal mean derived from the subsamples is similar to the sampling uncertainty and suggest using it as an estimate of the sampling uncertainty. Similar results are found at six other Northern Hemisphere sites. We show that the sampling substantially impacts the inter-annual variability and the trend derived over the period 1998-2008, both in magnitude and in sign, throughout the troposphere. A tropical case is also discussed using the MOZAIC profiles taken over Windhoek, Namibia between 2005 and 2008. For this site, we found that the sampling uncertainty in the free troposphere is around 8 and 12% at 12 and 4 profiles a month, respectively.
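    The subsampling experiment itself is straightforward to reproduce in outline: draw sparse sonde-like subsamples from a dense series and look at the spread of the resulting seasonal means. The sketch below uses a synthetic daily ozone series rather than MOZAIC data, so the numbers are purely illustrative.

        # Sketch of the subsampling experiment on a synthetic daily ozone series:
        # draw sparse sonde-like subsamples and look at the spread of the seasonal
        # mean. Numbers are illustrative, not MOZAIC results.
        import numpy as np

        rng = np.random.default_rng(3)
        days = 90                                      # one season of daily values
        ozone = (50 + 10 * np.sin(2 * np.pi * np.arange(days) / days)
                 + rng.normal(0.0, 8.0, days))

        true_mean = ozone.mean()
        for per_month in (4, 12):
            n = per_month * 3
            means = [ozone[rng.choice(days, n, replace=False)].mean()
                     for _ in range(2000)]
            rel = 100.0 * np.std(means) / true_mean
            print(f"{per_month:>2} profiles/month: sampling uncertainty ~ {rel:.1f}%")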

  13. Optimal grade control sampling practice in open-pit mining

    DEFF Research Database (Denmark)

    Engström, Karin; Esbensen, Kim Harry

    2017-01-01

    Misclassification of ore grades results in lost revenues, and the need for representative sampling procedures in open pit mining is increasingly important in all mining industries. This study evaluated possible improvements in sampling representativity from the use of Reverse Circulation (RC) drill sampling compared to manual Blast Hole (BH) sampling in the Leveäniemi open pit mine, northern Sweden. The variographic experiment results showed that sampling variability was lower for RC than for BH sampling. However, the total costs of RC drill sampling significantly exceed the current costs of manual BH sampling, which would need to be compensated for by other benefits to motivate the introduction of RC drilling. The main conclusion is that manual BH sampling can be fit-for-purpose in the studied open pit mine. However, with so many mineral commodities and mining methods in use globally...

  14. Practical iterative learning control with frequency domain design and sampled data implementation

    CERN Document Server

    Wang, Danwei; Zhang, Bin

    2014-01-01

    This book is on iterative learning control (ILC), with a focus on design and implementation. We approach ILC design based on frequency-domain analysis and address ILC implementation based on sampled-data methods. This is the first book on ILC to combine frequency-domain and sampled-data methodologies. The frequency-domain design methods offer ILC users insight into convergence performance, which is of practical benefit. This book presents a comprehensive framework with various methodologies to ensure that the learnable bandwidth in the ILC system is set with a balance between learning performance and learning stability. The sampled-data implementation ensures effective execution of ILC in practical dynamic systems, and the presented sampled-data ILC methods also ensure the balance of performance and stability of the learning process. Furthermore, the presented theories and methodologies are tested with an ILC-controlled robotic system. The experimental results show that the machines can work in much h...

  15. Variable Sampling Composite Observer Based Frequency Locked Loop and its Application in Grid Connected System

    Directory of Open Access Journals (Sweden)

    ARUN, K.

    2016-05-01

    Full Text Available A modified digital signal processing procedure is described for the on-line estimation of the DC, fundamental and harmonics of a periodic signal. A frequency locked loop (FLL) incorporated within the parallel structure of observers is proposed to accommodate a wide range of frequency drift. The frequency error generated under drifting frequencies is used to change the sampling frequency of the composite observer, so that the number of samples per cycle of the periodic waveform remains constant. A standard coupled oscillator with automatic gain control is used as a numerically controlled oscillator (NCO) to generate the enabling pulses for the digital observer. The NCO gives an integer multiple of the fundamental frequency, making it suitable for power quality applications. Another observer with DC and second-harmonic blocks in the feedback path acts as a filter and reduces the double-frequency content. A systematic study of the FLL is presented and a method is proposed to design the controller. The performance of the FLL is validated through simulation and experimental studies. To illustrate applications of the new FLL, the estimation of individual harmonics from a nonlinear load and the design of a variable sampling resonant controller for a single-phase grid-connected inverter are presented.

  16. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2015-09-01

    Full Text Available This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and the Discrete Fourier Transform (DFT) inevitably cause spectral leakage and the picket-fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase and DC component are obtained by minimizing the least squares (LS) fitting error of three-parameter sine fitting. By setting reasonable stop conditions or the number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated for different methods with respect to the unbiased Cramer-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimation is consistent with the tendency of the CRLB as the SNR increases, even for a small number of samples. The average RMSE of the frequency estimation is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
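    The core idea, bracketing the frequency from the FFT peak and its neighbours and then refining it by golden-section search on the sine-fit residual, can be sketched in a few lines. The sketch below is a generic reconstruction of that idea, not the authors' DGSSA code; the signal parameters are arbitrary.

        # Sketch of the DGSSA idea: bracket the frequency with the FFT peak bin and
        # its neighbours, then golden-section search the frequency minimizing the
        # three-parameter sine-fit residual. Signal parameters are arbitrary.
        import numpy as np

        fs, n = 1000.0, 512
        t = np.arange(n) / fs
        rng = np.random.default_rng(4)
        f_true = 123.37
        x = 1.3 * np.sin(2 * np.pi * f_true * t + 0.7) + 0.2 + 0.05 * rng.normal(size=n)

        def residual(f):
            # With f fixed, amplitude/phase/DC enter linearly: least-squares fit.
            A = np.column_stack([np.cos(2 * np.pi * f * t),
                                 np.sin(2 * np.pi * f * t), np.ones(n)])
            coef = np.linalg.lstsq(A, x, rcond=None)[0]
            return float(np.sum((A @ coef - x) ** 2))

        k = int(np.argmax(np.abs(np.fft.rfft(x)[1:]))) + 1   # peak bin (skip DC)
        a, b = (k - 1) * fs / n, (k + 1) * fs / n            # bracket from 3 bins
        g = (np.sqrt(5) - 1) / 2
        for _ in range(60):                                  # golden-section search
            c, d = b - g * (b - a), a + g * (b - a)
            if residual(c) < residual(d):
                b = d
            else:
                a = c
        print(f"estimated {0.5 * (a + b):.4f} Hz (true {f_true} Hz)")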

  17. Geometric optimization of the 56 MHz SRF cavity and its frequency table

    International Nuclear Information System (INIS)

    Chang, X.; Ben-Zvi, I.

    2008-01-01

    It is essential to know the frequency of a Superconducting Radio Frequency (SRF) cavity at its 'just fabricated' stage, because frequency is the key parameter in constructing the cavity. In this paper, we report our work on assessing it. We can estimate the frequency change from stage to stage theoretically and/or by simulation. At the operating stage, the frequency can be calculated accurately, and, from this value, we obtain the frequencies at other stages. They are listed in a table that serves to check the processes from stage to stage. Equally important is optimizing the geometric shape of the SRF cavity so that the peak electric field and peak magnetic field are as low as possible. It is particularly desirable in the 56 MHz SRF cavity of RHIC to maximize the frequency sensitivity of the slow tuner. After undertaking such optimization, our resultant peak electric field is only 44.1 MV/m, and the peak magnetic field is 1049 G at 2.5 MV of voltage across the cavity gap. It is reported that the peak magnetic field limit for quenching superconductivity in an SRF cavity is 1800 G (1), and that the peak electric field limit is more than 100 MV/m (2). Our simulations employed the codes Superfish and Microwave Studio

  18. Triangular Geometrized Sampling Heuristics for Fast Optimal Motion Planning

    Directory of Open Access Journals (Sweden)

    Ahmed Hussain Qureshi

    2015-02-01

    Full Text Available Rapidly-exploring Random Tree (RRT)-based algorithms have become increasingly popular due to their lower computational complexity as compared with other path planning algorithms. The recently presented RRT* motion planning algorithm improves upon the original RRT algorithm by providing optimal path solutions. While RRT determines an initial collision-free path fairly quickly, RRT* guarantees almost-certain convergence to an optimal, obstacle-free path from the start to the goal points for any given geometrical environment. However, the main limitations of RRT* include its slow processing rate and high memory consumption, due to the large number of iterations required for calculating the optimal path. In order to overcome these limitations, we present another improvement, i.e., the Triangular Geometrized-RRT* (TG-RRT*) algorithm, which utilizes triangular geometrical methods to improve the performance of the RRT* algorithm in terms of processing time and a decreased number of iterations required for an optimal path solution. Simulations comparing the performance results of the improved TG-RRT* with RRT* are presented to demonstrate the overall improvement in performance and optimal path detection.

  19. Achieving Optimal Quantum Acceleration of Frequency Estimation Using Adaptive Coherent Control.

    Science.gov (United States)

    Naghiloo, M; Jordan, A N; Murch, K W

    2017-11-03

    Precision measurements of frequency are critical to accurate time keeping and are fundamentally limited by quantum measurement uncertainties. While for time-independent quantum Hamiltonians the uncertainty of any parameter scales at best as 1/T, where T is the duration of the experiment, recent theoretical works have predicted that explicitly time-dependent Hamiltonians can yield a 1/T^2 scaling of the uncertainty for an oscillation frequency. This quantum acceleration in precision requires coherent control, which is generally adaptive. We experimentally realize this quantum improvement in frequency sensitivity with superconducting circuits, using a single transmon qubit. With optimal control pulses, the theoretically ideal frequency precision scaling is reached for times shorter than the decoherence time. This result demonstrates a fundamental quantum advantage for frequency estimation.

  20. Optimal experiment design in a filtering context with application to sampled network data

    OpenAIRE

    Singhal, Harsh; Michailidis, George

    2010-01-01

    We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...

  1. An optimal frequency range for assessing the pressure reactivity index in patients with traumatic brain injury.

    Science.gov (United States)

    Howells, Tim; Johnson, Ulf; McKelvey, Tomas; Enblad, Per

    2015-02-01

    The objective of this study was to identify the optimal frequency range for computing the pressure reactivity index (PRx). PRx is a clinical method for assessing cerebral pressure autoregulation based on the correlation of spontaneous variations of arterial blood pressure (ABP) and intracranial pressure (ICP). Our hypothesis was that optimizing the methodology for computing PRx in this way could produce a more stable, reliable and clinically useful index of autoregulation status. The study population was a series of 131 traumatic brain injury patients. Pressure reactivity indices were computed in various frequency bands during the first 4 days following injury using bandpass filtering of the input ABP and ICP signals. Patient outcome was assessed using the extended Glasgow Outcome Scale (GOSe). The optimization criterion was the strength of the correlation between GOSe and the mean index value over the first 4 days following injury. Stability of the indices was measured as the mean absolute deviation of the minute-by-minute index values from their 30-min moving averages. The optimal index frequency range for prediction of outcome was identified as 0.018-0.067 Hz (oscillations with periods from 55 to 15 s). The index based on this frequency range correlated with GOSe with ρ = -0.46, compared to -0.41 for standard PRx, and reduced the 30-min variation by 23%.
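
    A banded PRx of the kind described can be sketched as a windowed Pearson correlation of bandpass-filtered ABP and ICP; the filter order, window length, and sampling-rate handling below are assumptions for illustration, not the study's exact pipeline:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def banded_prx(abp, icp, fs, band=(0.018, 0.067), win_s=300):
        # bandpass both signals to the chosen range (fs must exceed 2 * band[1])
        b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        abp_f, icp_f = filtfilt(b, a, abp), filtfilt(b, a, icp)
        # correlate in consecutive windows of win_s seconds
        n = int(win_s * fs)
        return np.array([np.corrcoef(abp_f[i:i + n], icp_f[i:i + n])[0, 1]
                         for i in range(0, len(abp_f) - n + 1, n)])
    ```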

  2. A Port-Hamiltonian Approach to Optimal Frequency Regulation in Power Grids

    NARCIS (Netherlands)

    Stegink, Tjerk; Persis, Claudio De; Schaft, Arjan van der

    2015-01-01

    This paper studies the problem of frequency regulation in power grids while maximizing the social welfare. Two price-based controllers are proposed: the first is an internal-model-based controller and the second is based on a continuous gradient method for optimization. Both controllers can be...

  3. Tuning Range Optimization of a Planar Inverted F Antenna for LTE Low Frequency Bands

    DEFF Research Database (Denmark)

    Barrio, Samantha Caporal Del; Pelosi, Mauro; Franek, Ondrej

    2011-01-01

    This paper presents a Planar Inverted F Antenna (PIFA) tuned with a fixed capacitor to the low frequency bands supported by the Long Term Evolution (LTE) technology. The tuning range is investigated and optimized with respect to the bandwidth and the efficiency of the resulting antenna. Simulations and mock-ups are presented.

  4. Optimized Wavelength-Tuned Nonlinear Frequency Conversion Using a Liquid Crystal Clad Waveguide

    Science.gov (United States)

    Stephen, Mark A. (Inventor)

    2018-01-01

    An optimized wavelength-tuned nonlinear frequency conversion process using a liquid crystal clad waveguide. The process includes implanting ions on a top surface of a lithium niobate crystal to form an ion implanted lithium niobate layer. The process also includes utilizing a tunable refractive index of a liquid crystal to rapidly change an effective index of the lithium niobate crystal.

  5. Control mechanisms for battery energy storage system performing primary frequency regulation and self-consumption optimization

    NARCIS (Netherlands)

    Pliatskas Stylianidis, A.

    2016-01-01

    This report contains the design of a model for the integration of a battery energy storage system at the household level and its use for primary frequency regulation and self-consumption optimization. The main goal of this project was to investigate what the possible applications are and which are the most suitable for...

  6. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This...

  7. Implementation of PLL and FLL trackers for signals with high harmonic content and low sampling frequency

    DEFF Research Database (Denmark)

    Mathe, Laszlo; Iov, Florin; Sera, Dezso

    2014-01-01

    The accurate tracking of phase, frequency, and amplitude of different frequency components from a measured signal is an essential requirement for much digitally controlled equipment. The accurate and robust tracking of a frequency component from a complex signal has been successfully applied, for example, in grid-connected inverters, sensorless motor control for rotor position estimation, grid voltage monitoring for AC-DC converters, etc. Usually, the design of such trackers is done in the continuous time domain. The discretization introduces errors which change the performance, especially when the input signal is rich in harmonics and the sampling frequency is close to the tracked frequency component. In this paper different discretization methods and implementation issues, such as Tustin and Backward-Forward Euler, are discussed and compared. A special case is analyzed, when the input signal is rich...

  8. SNP calling, genotype calling, and sample allele frequency estimation from new-generation sequencing data

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Korneliussen, Thorfinn Sand; Albrechtsen, Anders

    2012-01-01

    We present a statistical framework for estimation and application of sample allele frequency spectra from New-Generation Sequencing (NGS) data. In this method, we first estimate the allele frequency spectrum using maximum likelihood. In contrast to previous methods, the likelihood function is cal... be extended to various other cases, including cases with deviations from Hardy-Weinberg equilibrium. We evaluate the statistical properties of the methods using simulations and by application to a real data set.
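
    Although the record is truncated, the core idea, maximum-likelihood estimation of a sample allele frequency from per-individual genotype likelihoods, can be sketched as below (a grid-search MLE under an assumed Hardy-Weinberg prior; the paper's actual algorithm and its extensions differ in detail):

    ```python
    import numpy as np

    def allele_freq_mle(gl):
        # gl: (n_individuals, 3) genotype likelihoods P(reads | g)
        # for g = 0, 1, 2 copies of the minor allele
        best_f, best_ll = 0.0, -np.inf
        for f in np.linspace(0.001, 0.999, 999):
            prior = np.array([(1 - f) ** 2, 2 * f * (1 - f), f ** 2])  # HWE prior
            ll = np.sum(np.log(gl @ prior))  # marginalize over genotypes
            if ll > best_ll:
                best_f, best_ll = f, ll
        return best_f
    ```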

  9. A note on eigenfrequency sensitivities and structural eigenfrequency optimization based on local sub-domain frequencies

    DEFF Research Database (Denmark)

    Pedersen, Pauli; Pedersen, Niels Leergaard

    2014-01-01

    ... foundation. A numerical heuristic redesign procedure is proposed and illustrated with examples. For the ideal case, an optimality criterion is fulfilled if the design has the same sub-domain frequency (local Rayleigh quotient). Sensitivity analysis shows an important relation between the squared system eigenfrequency and the squared local sub-domain frequency for a given eigenmode. Higher order eigenfrequencies may also be controlled in this manner. The presented examples are based on 2D finite element models with the use of subspace iteration for analysis and a heuristic recursive design procedure based on the derived optimality condition. The design that maximizes a frequency depends on the total amount of available material and on a necessary interpolation, as illustrated by different design cases. In this note we have assumed a linear and conservative eigenvalue problem without multiple eigenvalues. The presence...

  10. Dynamic regime of coherent population trapping and optimization of frequency modulation parameters in atomic clocks.

    Science.gov (United States)

    Yudin, V I; Taichenachev, A V; Basalaev, M Yu; Kovalenko, D V

    2017-02-06

    We theoretically investigate the dynamic regime of coherent population trapping (CPT) in the presence of frequency modulation (FM). We have formulated the criteria for quasi-stationary (adiabatic) and dynamic (non-adiabatic) responses of an atomic system driven by this FM. Using the density matrix formalism for a Λ system, the error signal is exactly calculated and optimized. It is shown that the optimal FM parameters correspond to the dynamic regime of atomic-field interaction, which differs significantly from the conventional description of CPT resonances within the quasi-stationary approach (under small modulation frequency). The obtained theoretical results are in good qualitative agreement with different experiments. We have also found a CPT analogue of the Pound-Drever-Hall regime of frequency stabilization.

  11. Frequency locking of a field-widened Michelson interferometer based on optimal multi-harmonics heterodyning.

    Science.gov (United States)

    Cheng, Zhongtao; Liu, Dong; Zhou, Yudi; Yang, Yongying; Luo, Jing; Zhang, Yupeng; Shen, Yibing; Liu, Chong; Bai, Jian; Wang, Kaiwei; Su, Lin; Yang, Liming

    2016-09-01

    A general resonant frequency locking scheme for a field-widened Michelson interferometer (FWMI), which is intended as a spectral discriminator in a high-spectral-resolution lidar, is proposed based on optimal multi-harmonics heterodyning. By transferring the energy of a reference laser to multi-harmonics of different orders generated by optimal electro-optic phase modulation, the heterodyne signal of these multi-harmonics through the FWMI can reveal the resonant frequency drift of the interferometer very sensitively within a large frequency range. This approach can overcome the locking difficulty induced by the low finesse of the FWMI, thus contributing to excellent locking accuracy and lock acquisition range without any constraint on the interferometer itself. The theoretical and experimental results are presented to verify the performance of this scheme.

  12. A Frequency Matching Method for Generation of a Priori Sample Models from Training Images

    DEFF Research Database (Denmark)

    Lange, Katrine; Cordua, Knud Skou; Frydendall, Jan

    2011-01-01

    This paper presents a Frequency Matching Method (FMM) for generation of a priori sample models based on training images and illustrates its use by an example. In geostatistics, training images are used to represent a priori knowledge or expectations of models, and the FMM can be used to generate new images that share the same multi-point statistics as a given training image. The FMM proceeds by iteratively updating voxel values of an image until the frequency of patterns in the image matches the frequency of patterns in the training image, making the resulting image statistically indistinguishable from the training image.

  13. An extension of command shaping methods for controlling residual vibration using frequency sampling

    Science.gov (United States)

    Singer, Neil C.; Seering, Warren P.

    1992-01-01

    The authors present an extension to the impulse shaping technique for commanding machines to move with reduced residual vibration. The extension, called frequency sampling, is a method for generating constraints that are used to obtain shaping sequences which minimize residual vibration in systems such as robots whose resonant frequencies change during motion. The authors present a review of impulse shaping methods, a development of the proposed extension, and a comparison of results of tests conducted on a simple model of the space shuttle robot arm. Frequency sampling provides a method for minimizing the impulse sequence duration required to give the desired insensitivity.

  14. Global Time Tomography of Finite Frequency Waves with Optimized Tetrahedral Grids.

    Science.gov (United States)

    Montelli, R.; Montelli, R.; Nolet, G.; Dahlen, F. A.; Masters, G.; Hung, S.

    2001-12-01

    Besides true velocity heterogeneities, tomographic images reflect the effect of data errors, model parametrization, linearization, uncertainties involved with the solution of the forward problem, and the greatly inadequate sampling of the earth by seismic rays. These influences cannot be easily separated and often produce artefacts in the final image with amplitudes comparable to those of the velocity heterogeneities. In practice, the tomographer uses some form of damping of the ill-resolved aspects of the model to get a unique solution and reduce the influence of the errors. However, damping is not fully adequate, and may reveal a strong influence of the ray path coverage in tomographic images. If some cells are ill-determined, regularization techniques may lead to heterogeneity because these cells are damped towards zero. Thus we want a uniform resolution of the parameters in our model. This can be obtained by using an irregular grid with variable length scales. We have introduced an irregular parametrization of the velocity structure by using a Delaunay triangulation. Extensive work on error analysis of tomographic images, together with mesh optimization, has shown that both resolution and ray density can provide the critical information needed to re-design grids. However, criteria based on resolution are preferred in the presence of narrow ray beams coming from the same direction. This can be understood if we realise that resolution is determined not only by the number of rays crossing a region, but also by their azimuthal coverage. We shall discuss various strategies for grid optimization. In general the computation of the travel times is restricted to ray theory, the infinite-frequency approximation of the elastodynamic equation of motion. This simplifies the mathematics and is therefore widely applied in seismic tomography. But ray theory does not account for scattering, wavefront healing and other diffraction effects that render the traveltime of a finite...

  15. Implications of Microwave Holography Using Minimum Required Frequency Samples for Weakly- and Strongly-Scattering Indications

    Science.gov (United States)

    Fallahpour, M.; Case, J. T.; Kharkovsky, S.; Zoughi, R.

    2010-01-01

    Microwave imaging techniques, an integral component of nondestructive testing and evaluation (NDTE), have received significant attention in the past decade. These techniques have included the implementation of synthetic aperture focusing (SAF) algorithms for obtaining high spatial resolution images. The next important step in these developments is the implementation of 3-D holographic imaging algorithms. These are well-known wideband imaging techniques requiring a swept-frequency (i.e., wideband) measurement and, unlike SAF, which is a single-frequency technique, are not easily performed on a real-time basis. This is due to the fact that a significant number of data points (in the frequency domain) must be obtained within the frequency band of interest. This not only makes for a complex imaging system design, it also significantly increases the image-production time. Consequently, in an attempt to reduce the measurement time and system complexity, an investigation was conducted to determine the minimum required number of frequency samples needed to image a specific object while preserving a desired maximum measurement range and range resolution. To this end the 3-D holographic algorithm was modified to use properly interpolated frequency data. Measurements of the complex reflection coefficient for several samples were conducted using a swept-frequency approach. Subsequently, holographic images were generated using data containing a relatively large number of frequency samples and were compared with images generated from the reduced data sets. Quantitative metrics such as average, contrast, and signal-to-noise ratio were used to evaluate the quality of images generated using reduced data sets. Furthermore, this approach was applied to both weakly- and strongly-scattering indications. This paper presents the methods used and the results of this investigation.

  16. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that point, as follows from the formulas used to calculate the CIs, precision continued to increase, but the gain was smaller for the same additional number of GPs. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the...
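
    The trade-off between the number of participants and the measurement frequency can be explored with a small Monte Carlo sketch like the one below; the mean weekly hours and the between- and within-GP standard deviations are hypothetical placeholders, not values from the study:

    ```python
    import numpy as np

    def ci_halfwidth(n_gps, meas_per_gp, mean_h=45.0, sd_between=8.0,
                     sd_within=12.0, n_sim=2000, seed=0):
        # 95% CI half-width of the estimated mean weekly hours, combining
        # sample fluctuation (between GPs) and measurement fluctuation (within GPs)
        rng = np.random.default_rng(seed)
        estimates = np.empty(n_sim)
        for i in range(n_sim):
            true_hours = rng.normal(mean_h, sd_between, n_gps)
            obs = true_hours[:, None] + rng.normal(0, sd_within, (n_gps, meas_per_gp))
            estimates[i] = obs.mean()
        return 1.96 * estimates.std()

    # more measurements per GP partly substitute for more GPs
    print(ci_halfwidth(300, 56))   # one SMS per 3-h slot over a week (~56 slots)
    print(ci_halfwidth(100, 168))  # one measurement per hour
    ```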

  17. Optimizing Bus Frequencies under Uncertain Demand: Case Study of the Transit Network in a Developing City

    Directory of Open Access Journals (Sweden)

    Zhengfeng Huang

    2013-01-01

    Full Text Available Various factors can make predicting bus passenger demand uncertain. In this study, a bilevel programming model for optimizing bus frequencies based on uncertain bus passenger demand is formulated. Two terms constitute the upper-level objective. The first is the transit network cost, consisting of the passengers' expected travel time and operating costs, and the second is the transit network robustness performance, indicated by the variance in passenger travel time. The second term reflects the risk aversion of the decision maker, and it allows even highly uncertain demand to be met by bus operation at the optimal transit frequency. With the transit links' proportional flow eigenvalues (mean and covariance) obtained from the lower-level model, the upper-level objective is formulated by the analytical method. In the lower-level model, these two eigenvalues are calculated by analyzing the propagation of mean transit trips and their variation in the optimal-strategy transit assignment process. The genetic algorithm (GA) used to solve the model is tested on an example network. Finally, the model is applied to determining optimal bus frequencies in the city of Liupanshui, China. The total cost of the transit system in Liupanshui can be reduced by about 6% via this method.

  18. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)

  19. Optimism is universal: exploring the presence and benefits of optimism in a representative sample of the world.

    Science.gov (United States)

    Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D

    2013-10-01

    Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations. © 2012 Wiley Periodicals, Inc.

  20. Determination of Optimal Double Sampling Plan using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sampath Sundaram

    2012-03-01

    Full Text Available Designing a double sampling plan requires identification of sample sizes and acceptance numbers. In this paper a genetic algorithm has been designed for the selection of optimal acceptance numbers and sample sizes for the specified producer's risk and consumer's risk. Implementation of the algorithm has been illustrated numerically for different choices of quantities involved in a double sampling plan.
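
    As context for what such a GA searches over, the sketch below evaluates a double sampling plan's operating characteristic and average sample number and then finds a feasible plan by brute force; it is an illustrative stand-in with assumed risk points and the common n2 = 2*n1 convention, not the paper's algorithm:

    ```python
    import numpy as np
    from scipy.stats import binom

    def prob_accept(p, n1, c1, n2, c2):
        # accept on 1st sample if d1 <= c1; take a 2nd sample if c1 < d1 <= c2
        d1 = np.arange(c1 + 1, c2 + 1)
        return binom.cdf(c1, n1, p) + np.sum(binom.pmf(d1, n1, p) * binom.cdf(c2 - d1, n2, p))

    def best_plan(aql=0.01, ltpd=0.06, alpha=0.05, beta=0.10):
        best, best_asn = None, np.inf
        for n1 in range(10, 200, 5):
            n2 = 2 * n1
            for c1 in range(4):
                for c2 in range(c1 + 1, c1 + 6):
                    if prob_accept(aql, n1, c1, n2, c2) < 1 - alpha:  # producer's risk
                        continue
                    if prob_accept(ltpd, n1, c1, n2, c2) > beta:     # consumer's risk
                        continue
                    p_second = binom.cdf(c2, n1, aql) - binom.cdf(c1, n1, aql)
                    asn = n1 + n2 * p_second  # average sample number at AQL
                    if asn < best_asn:
                        best, best_asn = (n1, c1, n2, c2), asn
        return best, best_asn
    ```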

  1. The effect of sampling rate and anti-aliasing filters on high-frequency response spectra

    Science.gov (United States)

    Boore, David M.; Goulet, Christine

    2013-01-01

    The most commonly used intensity measure in ground-motion prediction equations is the pseudo-absolute response spectral acceleration (PSA), for response periods from 0.01 to 10 s (or frequencies from 0.1 to 100 Hz). PSAs are often derived from recorded ground motions, and these motions are usually filtered to remove high and low frequencies before the PSAs are computed. In this article we are only concerned with the removal of high frequencies. In modern digital recordings, this filtering corresponds at least to an anti-aliasing filter applied before conversion to digital values. Additional high-cut filtering is sometimes applied both to digital and to analog records to reduce high-frequency noise. Potential errors on the short-period (high-frequency) response spectral values are expected if the true ground motion has significant energy at frequencies above that of the anti-aliasing filter. This is especially important for areas where the instrumental sample rate and the associated anti-aliasing filter corner frequency (above which significant energy in the time series is removed) are low relative to the frequencies contained in the true ground motions. A ground-motion simulation study was conducted to investigate these effects and to develop guidance for defining the usable bandwidth for high-frequency PSA. The primary conclusion is that if the ratio of the maximum Fourier acceleration spectrum (FAS) to the FAS at a frequency fsaa corresponding to the start of the anti-aliasing filter is more than about 10, then PSA for frequencies above fsaa should be little affected by the recording process, because the ground-motion frequencies that control the response spectra will be less than fsaa. A second topic of this article concerns the resampling of the digital acceleration time series to a higher sample rate often used in the computation of short-period PSA. We confirm previous findings that sinc-function interpolation is preferred to the standard practice of using...
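
    The recommended sinc-function interpolation can be illustrated with a minimal Whittaker-Shannon resampler (a direct O(N^2) sketch for short records; production code would window and vectorize it):

    ```python
    import numpy as np

    def sinc_resample(x, fs_in, fs_out):
        # ideal band-limited interpolation of a uniformly sampled record
        t_in = np.arange(len(x)) / fs_in
        n_out = int(round(len(x) * fs_out / fs_in))
        t_out = np.arange(n_out) / fs_out
        return np.array([np.sum(x * np.sinc((t - t_in) * fs_in)) for t in t_out])

    # e.g., upsample a 200-samples-per-second accelerogram to 1000 sps:
    # acc_1000 = sinc_resample(acc_200, 200.0, 1000.0)
    ```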

  2. FREQUENCY OF ANEUPLOID SPERMATOZOA STUDIED BY MULTICOLOR FISH IN SERIAL SEMEN SAMPLES

    Science.gov (United States)

    Frequency of aneuploid spermatozoa studied by multicolor FISH in serial semen samples. M. Vozdova (1), S. D. Perreault (2), O. Rezacova (1), D. Zudova (1), Z. Zudova (3), S. G. Selevan (4), J. Rubes (1,5). (1) Veterinary Research Institute, Brno, Czech Republic; (2) U.S. Environmental Protection A...

  3. On the Berry-Esséen bound of frequency polygons for ϕ-mixing samples.

    Science.gov (United States)

    Huang, Gan-Ji; Xing, Guodong

    2017-01-01

    Under some mild assumptions, the Berry-Esséen bound of frequency polygons for ϕ -mixing samples is presented. By the bound derived, we obtain the corresponding convergence rate of uniformly asymptotic normality, which is nearly [Formula: see text] under the given conditions.

  4. An Effective Experimental Optimization Method for Wireless Power Transfer System Design Using Frequency Domain Measurement

    Directory of Open Access Journals (Sweden)

    Sangyeong Jeong

    2017-10-01

    Full Text Available This paper proposes an experimental optimization method for a wireless power transfer (WPT) system. The power transfer characteristics of a WPT system with arbitrary loads and various types of coupling and compensation networks can be extracted by frequency domain measurements. The various performance parameters of the WPT system, such as input real/imaginary/apparent power, power factor, efficiency, output power, and voltage gain, can be accurately extracted in the frequency domain from a single passive measurement. Subsequently, the design parameters can be efficiently tuned by separating the overall design steps into two parts. The extracted performance parameters of the WPT system were validated with time-domain experiments.

  5. Agent based Particle Swarm Optimization for Load Frequency Control of Distribution Grid

    DEFF Research Database (Denmark)

    Cha, Seung-Tae; Saleem, Arshad; Wu, Qiuwei

    2012-01-01

    This paper presents a Particle Swarm Optimization (PSO)-based multi-agent controller. A real-time digital simulator (RTDS) is used for modelling the power system, while a PSO based multi-agent LFC algorithm is developed in JAVA for communicating with resource agents and determines the scenario... to stabilize the frequency and voltage after the system enters the islanding operation mode. The proposed algorithm is based on the formulation of an optimization problem using agent based PSO. The modified IEEE 9-bus system is employed to illustrate the performance of the proposed controller via RTDS...

  6. Spectral-Amplitude-Coded OCDMA Optimized for a Realistic FBG Frequency Response

    Science.gov (United States)

    Penon, Julien; El-Sahn, Ziad A.; Rusch, Leslie A.; Larochelle, Sophie

    2007-05-01

    We develop a methodology for numerical optimization of fiber Bragg grating frequency response to maximize the achievable capacity of a spectral-amplitude-coded optical code-division multiple-access (SAC-OCDMA) system. The optimal encoders are realized, and we experimentally demonstrate an incoherent SAC-OCDMA system with seven simultaneous users. We report a bit error rate (BER) of 2.7 x 10^-8 at 622 Mb/s for a fully loaded network (seven users) using a 9.6-nm optical band. We achieve error-free transmission (BER < 1 x 10^-9) for up to five simultaneous users.

  7. Efficiency Optimization Methods in Low-Power High-Frequency Digitally Controlled SMPS

    Directory of Open Access Journals (Sweden)

    Aleksandar Prodić

    2010-06-01

    Full Text Available This paper gives a review of several power efficiency optimization techniques that utilize the advantages of emerging digital control in high frequency switch-mode power supplies (SMPS), processing power from a fraction of a watt to several hundreds of watts. Loss mechanisms in semiconductor components are briefly reviewed, and the related principles of online efficiency optimization through power stage segmentation and gate voltage variation are presented. Practical implementations of such methods utilizing load prediction or data extraction from a digital control loop are shown. The benefits of the presented efficiency methods are verified through experimental results, showing efficiency improvements ranging from 2% to 30%, depending on the load conditions.

  8. Multiobjective Optimization for Electronic Circuit Design in Time and Frequency Domains

    Directory of Open Access Journals (Sweden)

    J. Dobes

    2013-04-01

    Full Text Available Multiobjective optimization provides an extraordinary opportunity for the finest design of electronic circuits because it allows contradictory requirements to be balanced mathematically, together with possible constraints. In this paper, an original and substantial improvement of an existing method for multiobjective optimization known as GAM (Goal Attainment Method) is suggested. In our proposal, the GAM algorithm itself is combined with a procedure that automatically provides a set of parameters -- weights, coordinates of the reference point -- for which the method generates noninferior solutions uniformly spread over an appropriately selected part of the Pareto front. Moreover, the resulting set of obtained solutions is then presented in a suitable graphic form so that the solution representing the most satisfactory tradeoff can be easily chosen by the designer. Our system generates various types of plots that conveniently characterize results of up to four-dimensional problems. Technically, the procedures of the multiobjective optimization were created as a software add-on to the CIA (Circuit Interactive Analyzer) program. This approach enabled us to utilize many powerful features of this program, including the sensitivity analyses in time and frequency domains. As a result, the system is also able to perform the multiobjective optimization in the time domain, and even highly nonlinear circuits can be significantly improved by our program. As a demonstration of this feature, a multiobjective optimization of a C-class power amplifier in the time domain is thoroughly described in the paper. Further, a four-dimensional optimization of a video amplifier is demonstrated with an original graphic representation of the Pareto front, and a comparison with the weighting method is also given. As an example of improving noise properties, a multiobjective optimization of a low-noise amplifier is performed, and the results in the frequency domain are shown

  9. Optimization of Dimensions of Cylindrical Piezoceramics as Radio-Clean Low Frequency Acoustic Sensors

    Directory of Open Access Journals (Sweden)

    M. Ardid

    2017-01-01

    Full Text Available Circular piezoelectric transducers with axial polarization are proposed as low frequency acoustic sensors for dark matter bubble chamber detectors. The axial vibration behaviour of the transducer is studied by three different methods: analytical models, FEM simulation, and an experimental setup. To optimize the disk geometry for this application, the dependence of the vibrational modes as a function of the diameter-to-thickness ratio, from 0.5 (a tall cylinder) to 20.0 (a thin disk), has been studied. Resonant and antiresonant frequencies for each of the lowest modes are determined and electromechanical coupling coefficients are calculated. From this analysis, and due to the requirements of radiopurity and small volume, optimal diameter-to-thickness ratios for good transducer performance are discussed.

  10. An integrated approach for optimal frequency regulation service procurement in India

    International Nuclear Information System (INIS)

    Parida, S.K.; Singh, S.N.; Srivastava, S.C.

    2009-01-01

    Ancillary services (AS) management has become an important issue to be addressed in the Indian power system after adoption of the restructuring and unbundling processes following the enactment of the Indian Electricity Act 2003. In an electricity market, frequency regulation is one of the ancillary services, which must be procured by the system operator (SO) from the market participants by some regulatory mechanism or using market-based approaches. It is important for the SO to optimally procure this service from the AS market. In this paper, an approach for determining the optimal frequency regulation service procurement has been proposed for equitable payment to generators and recovery from the customers. The effectiveness of the proposed method has been demonstrated on a practical Northern Regional Electricity Board (NREB) system of India. (author)

  11. Optimization of Modulation Waveforms for Improved EMI Attenuation in Switching Frequency Modulated Power Converters

    Directory of Open Access Journals (Sweden)

    Deniss Stepins

    2015-01-01

    Full Text Available Electromagnetic interference (EMI) is one of the major problems of switching power converters. This paper is devoted to switching frequency modulation used for conducted EMI suppression in switching power converters. A comprehensive theoretical analysis of the switching power converter conducted EMI spectrum and of the EMI attenuation due to the use of traditional ramp and multislope ramp modulation waveforms is presented. Expressions to calculate the EMI spectrum and attenuation are derived. An optimization procedure for the multislope ramp modulation waveform is proposed to get maximum benefit from switching frequency modulation for EMI reduction. Experimental verification is also performed to prove that the optimized multislope ramp modulation waveform is a very useful solution for effective EMI reduction in switching power converters.
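
    The mechanism is easy to reproduce numerically: sweeping the switching frequency spreads the switching energy over sidebands and lowers the spectral peaks. The sketch below compares the peak spectral level of an unmodulated and a ramp-modulated square wave; all waveform parameters are illustrative, not taken from the paper:

    ```python
    import numpy as np
    from scipy.signal import sawtooth

    fs = 50e6                       # simulation sample rate
    t = np.arange(0, 5e-3, 1 / fs)
    f0, df, fm = 100e3, 10e3, 1e3   # nominal switching freq, deviation, modulation rate

    def switching_wave(modulated):
        dev = df * sawtooth(2 * np.pi * fm * t) if modulated else np.zeros_like(t)
        phase = 2 * np.pi * np.cumsum(f0 + dev) / fs  # integrate instantaneous frequency
        return np.sign(np.sin(phase))

    def peak_db(x):
        return 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(x.size))).max())

    print("peak EMI attenuation ~ %.1f dB" %
          (peak_db(switching_wave(False)) - peak_db(switching_wave(True))))
    ```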

  12. Optimal sample size for probability of detection curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2013-01-01

    Highlights: • We investigate the sample size requirement to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in sufficient numbers so as to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of the NDE reliability is necessary to quantify risk reduction after inspection, and probability of detection (POD) curves provide such a metric. The Joint Research Centre, Petten, The Netherlands supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing of test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is the appropriate sample size. Not...

  13. Note: Radio frequency surface impedance characterization system for superconducting samples at 7.5 GHz.

    Science.gov (United States)

    Xiao, B P; Reece, C E; Phillips, H L; Geng, R L; Wang, H; Marhauser, F; Kelley, M J

    2011-05-01

    A radio frequency (RF) surface impedance characterization (SIC) system that uses a novel sapphire-loaded niobium cavity operating at 7.5 GHz has been developed as a tool to measure the RF surface impedance of flat superconducting material samples. The SIC system can presently make direct calorimetric RF surface impedance measurements on the central 0.8 cm^2 area of 5 cm diameter disk samples from 2 to 20 K exposed to RF magnetic fields up to 14 mT. To illustrate system utility, we present first measurement results for a bulk niobium sample.

  14. Estimating an appropriate sampling frequency for monitoring ground water well contamination

    International Nuclear Information System (INIS)

    Tuckfield, R.C.

    1994-01-01

    Nearly 1,500 ground water wells at the Savannah River Site (SRS) are sampled quarterly to monitor contamination by radionuclides and other hazardous constituents from nearby waste sites. Some 10,000 water samples were collected in 1993 at a laboratory analysis cost of $10,000,000. No widely accepted statistical method has been developed, to date, for estimating a technically defensible ground water sampling frequency consistent and compliant with federal regulations. Such a method is presented here based on the concept of statistical independence among successively measured contaminant concentrations in time
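
    The independence criterion suggests sampling no more often than the concentration series decorrelates. A minimal sketch of that idea, assuming a regularly spaced historical series (the author's formal procedure will differ in detail):

    ```python
    import numpy as np

    def decorrelation_lag(conc):
        # first lag at which the sample autocorrelation falls inside the
        # approximate 95% white-noise band; sampling faster than this lag
        # yields statistically redundant measurements
        x = np.asarray(conc, dtype=float) - np.mean(conc)
        acf = np.correlate(x, x, "full")[len(x) - 1:] / (np.var(x) * len(x))
        bound = 1.96 / np.sqrt(len(x))
        inside = np.nonzero(np.abs(acf[1:]) < bound)[0]
        return int(inside[0]) + 1 if inside.size else None

    # e.g., with a quarterly history, a returned lag of 2 suggests semiannual sampling
    ```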

  15. Distortions in frequency spectra of signals associated with sampling-pulse shapes

    International Nuclear Information System (INIS)

    Njau, E.C.

    1983-04-01

    A method developed earlier by the author [IC/82/44; IC/82/45] is used to investigate distortions introduced into the frequency spectra of signals by the shapes of the sampling pulses involved. Conditions are established under which the use of trapezoid or exponentially-edged pulses to digitize signals can make the frequency spectra of the resultant data samples devoid of the main features of the signals. This observation does not, however, apply in any way to cosinusoidally-edged pulses or to pulses with cosine-squared edges. Since parts of the Earth's surface and atmosphere receive direct solar energy in discrete samples (i.e. only from sunrise to sunset), we have extended the technique and attempted to develop a theory that explains the observed solar-terrestrial relationships. A very good agreement is obtained between the theory and previous long-term and short-term observations. (author)

  16. Time-Scale and Time-Frequency Analyses of Irregularly Sampled Astronomical Time Series

    Directory of Open Access Journals (Sweden)

    S. Roques

    2005-09-01

    Full Text Available We evaluate the quality of spectral restoration in the case of irregularly sampled signals in astronomy. We study in detail a time-scale method leading to a global wavelet spectrum comparable to the Fourier periodogram, and a time-frequency matching pursuit allowing us to identify the frequencies and to control the error propagation. In both cases, the signals are first resampled with a linear interpolation. Both results are compared with those obtained using Lomb's periodogram and using the weighted wavelet Z-transform developed in astronomy for unevenly sampled variable star observations. These approaches are applied to simulations and to the light variations of four variable stars. This leads to the conclusion that the matching pursuit is more efficient for recovering the spectral contents of a pulsating star, even with a preliminary resampling. In particular, the results are almost independent of the quality of the initial irregular sampling.
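
    For comparison with the interpolation-based approaches above, a spectrum for irregular sampling can also be computed directly, e.g. with SciPy's Lomb-Scargle implementation (synthetic data; the frequency grid is chosen arbitrarily):

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0.0, 100.0, 300))   # irregular sampling times
    y = np.sin(2 * np.pi * 0.17 * t) + 0.3 * rng.standard_normal(t.size)

    freqs = np.linspace(0.01, 0.5, 2000)        # cycles per unit time
    power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs, normalize=True)
    print("peak at ~%.3f (true 0.17)" % freqs[np.argmax(power)])
    ```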

  17. Determination and optimization of spatial samples for distributed measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Huo, Xiaoming (Georgia Institute of Technology, Atlanta, GA); Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong (Georgia Institute of Technology, Atlanta, GA)

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.

  18. Optimization conditions of samples saponification for tocopherol analysis.

    Science.gov (United States)

    Souza, Aloisio Henrique Pereira; Gohara, Aline Kirie; Rodrigues, Ângela Claudia; Ströher, Gisely Luzia; Silva, Danielle Cristina; Visentainer, Jesuí Vergílio; Souza, Nilson Evelázio; Matsushita, Makoto

    2014-09-01

    A full factorial design 2^2 (two factors at two levels) with duplicates was performed to investigate the influence of the factors agitation time (2 and 4 h) and percentage of KOH (60% and 80% w/v) in the saponification of samples for the determination of α-, β- and γ+δ-tocopherols. The study used samples of peanuts (cultivar armadillo) produced and marketed in Maringá, PR. The factors % KOH and agitation time were significant, and an increase in their values contributed negatively to the responses. The interaction effect was not significant for the response δ-tocopherol, and the contribution of this effect to the other responses was positive, but less than 10%. The ANOVA and response surface analysis showed that the most efficient saponification procedure was obtained using a 60% (w/v) solution of KOH and an agitation time of 2 h. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Online Optimal Switching Frequency Selection for Grid-Connected Voltage Source Inverters

    Directory of Open Access Journals (Sweden)

    Saher Albatran

    2017-12-01

    Full Text Available Enhancing the performance of voltage source inverters (VSIs) without changing the hardware structure has recently attracted increased interest. In this study, an optimization algorithm enhancing the quality of the output power and the efficiency of three-phase grid-connected VSIs is proposed. To that end, the proposed algorithm varies the switching frequency (fsw) to maintain the best balance between the switching losses of the insulated-gate bipolar transistor (IGBT) power module and the output power quality under all loading conditions, including the ambient temperature effect. Since these two measures conflict with respect to the switching frequency, the theory of multi-objective optimization is employed. The proposed algorithm is executed on the platform of an Altera® DE2-115 field-programmable gate array (FPGA), in which the optimal value of the switching frequency is determined online without the need for heavy offline calculations and/or lookup tables. By adopting the proposed algorithm, the VSI efficiency is improved without degrading the output power quality. Therefore, the proposed algorithm enhances the lifetime of the IGBT power module because of reduced variations in the module's junction temperature. An experimental prototype is built, and experimental tests are conducted to verify the viability of the proposed algorithm.
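
    The underlying trade-off can be caricatured with a scalarized objective: switching loss grows roughly linearly with fsw while ripple-related distortion falls roughly as 1/fsw. The coefficients below are purely hypothetical and stand in for the paper's loss and power-quality models:

    ```python
    import numpy as np

    def total_cost(fsw, w=0.5, k_loss=2e-6, k_thd=1e3):
        loss = k_loss * fsw   # switching-loss proxy, ~linear in fsw
        thd = k_thd / fsw     # ripple/THD proxy, ~1/fsw
        return w * loss + (1 - w) * thd

    fsw_grid = np.linspace(2e3, 100e3, 1000)
    fsw_opt = fsw_grid[np.argmin(total_cost(fsw_grid))]
    print("optimal switching frequency ~ %.1f kHz" % (fsw_opt / 1e3))
    ```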

  20. Renal function monitoring in heart failure - what is the optimal frequency? A narrative review.

    Science.gov (United States)

    Al-Naher, Ahmed; Wright, David; Devonald, Mark Alexander John; Pirmohamed, Munir

    2018-01-01

    The second most common cause of hospitalization due to adverse drug reactions in the UK is renal dysfunction due to diuretics, particularly in patients with heart failure, where diuretic therapy is a mainstay of treatment regimens. Therefore, the optimal frequency for monitoring renal function in these patients is an important consideration for preventing renal failure and hospitalization. This review looks at the current evidence for optimal monitoring practices of renal function in patients with heart failure, according to national and international guidelines on the management of heart failure (AHA/NICE/ESC/SIGN). Current guidance on renal function monitoring is in large part based on expert opinion, with a lack of clinical studies that have specifically evaluated the optimal frequency of renal function monitoring in patients with heart failure. Furthermore, there is variability between guidelines, and recommendations are typically nonspecific. Safer prescribing of diuretics in combination with other anti-heart-failure treatments requires better evidence on the frequency of renal function monitoring. We suggest developing more personalized monitoring rather than following the current medication-based guidance. Such flexible clinical guidelines could be implemented using intelligent clinical decision support systems. Personalized renal function monitoring would be more effective in preventing renal decline, rather than reacting to it. © 2017 The Authors. British Journal of Clinical Pharmacology published by John Wiley & Sons Ltd on behalf of British Pharmacological Society.

  1. The influence of environmental parameters on the optimal frequency in a shallow underwater acoustic channel

    Science.gov (United States)

    Zarnescu, George

    2015-02-01

    In a shallow underwater acoustic channel the delayed replicas of a transmitted signal are mainly due to interactions with the sea surface and the bottom layer. If a specific underwater region on the globe is considered, for which the sedimentary layer structure is constant across the transmission distance, then the variability of the amplitude-delay profile is determined by daily and seasonal changes of the sound speed profile (SSP) and by weather changes, such as variations of the wind speed. Such a parameter influences the attenuation at the surface, the noise level and the profile of the sea surface. The temporal variation of the impulse response in a shallow underwater acoustic channel determines the variability of the optimal transmission frequency. If the ways in which the optimal frequency changes can be predicted, then an adaptive analog transceiver can be easily designed for an underwater acoustic modem, or the times when a communication link has high throughput can be identified. In this article we highlight the way in which the amplitude-delay profile is affected by the sound speed profile, wind speed and channel depth, and we emphasize the changes of the optimal transmission frequency in a configuration where the transmitter and receiver are placed on the seafloor and the bathymetry profile is considered flat, with a given composition.

  2. Optimal frequency of rabies vaccination campaigns in Sub-Saharan Africa.

    Science.gov (United States)

    Bilinski, Alyssa M; Fitzpatrick, Meagan C; Rupprecht, Charles E; Paltiel, A David; Galvani, Alison P

    2016-11-16

    Rabies causes more than 24 000 human deaths annually in Sub-Saharan Africa. The World Health Organization recommends annual canine vaccination campaigns with at least 70% coverage to control the disease. While previous studies have considered optimal coverage of animal rabies vaccination, variation in the frequency of vaccination campaigns has not been explored. To evaluate the cost-effectiveness of rabies canine vaccination campaigns at varying coverage and frequency, we parametrized a rabies virus transmission model to two districts of northwest Tanzania, Ngorongoro (pastoral) and Serengeti (agro-pastoral). We found that optimal vaccination strategies were every 2 years, at 80% coverage in Ngorongoro and annually at 70% coverage in Serengeti. We further found that the optimality of these strategies was sensitive to the rate of rabies reintroduction from outside the district. Specifically, if a geographically coordinated campaign could reduce reintroduction, vaccination campaigns every 2 years could effectively manage rabies in both districts. Thus, coordinated campaigns may provide monetary savings in addition to public health benefits. Our results indicate that frequency and coverage of canine vaccination campaigns should be evaluated simultaneously and tailored to local canine ecology as well as to the risk of disease reintroduction from surrounding regions. © 2016 The Author(s).

  3. Object discrimination using optimized multi-frequency auditory cross-modal haptic feedback.

    Science.gov (United States)

    Gibson, Alison; Artemiadis, Panagiotis

    2014-01-01

    As the field of brain-machine interfaces and neuro-prosthetics continues to grow, there is a high need for sensor and actuation mechanisms that can provide haptic feedback to the user. Current technologies employ expensive, invasive and often inefficient force feedback methods, resulting in an unrealistic solution for individuals who rely on these devices. This paper responds through the development, integration and analysis of a novel feedback architecture where haptic information during the neural control of a prosthetic hand is perceived through multi-frequency auditory signals. Through representing force magnitude with volume and force location with frequency, the feedback architecture can translate the haptic experiences of a robotic end effector into the alternative sensory modality of sound. Previous research with the proposed cross-modal feedback method confirmed its learnability, so the current work aimed to investigate which frequency map (i.e. frequency-specific locations on the hand) is optimal in helping users distinguish between hand-held objects and tasks associated with them. After short use with the cross-modal feedback during the electromyographic (EMG) control of a prosthetic hand, testing results show that users are able to use audial feedback alone to discriminate between everyday objects. While users showed adaptation to three different frequency maps, the simplest map containing only two frequencies was found to be the most useful in discriminating between objects. This outcome provides support for the feasibility and practicality of the cross-modal feedback method during the neural control of prosthetics.

  4. Statistical Analysis of Solar PV Power Frequency Spectrum for Optimal Employment of Building Loads

    Energy Technology Data Exchange (ETDEWEB)

    Olama, Mohammed M [ORNL; Sharma, Isha [ORNL; Kuruganti, Teja [ORNL; Fugate, David L [ORNL

    2017-01-01

    In this paper, a statistical analysis of the frequency spectrum of solar photovoltaic (PV) power output is conducted. This analysis quantifies the frequency content that can be used for purposes such as developing optimal employment of building loads and distributed energy resources. One year of solar PV power output data was collected and analyzed using one-second resolution to find ideal bounds and levels for the different frequency components. The annual, seasonal, and monthly statistics of the PV frequency content are computed and illustrated in boxplot format. To examine the compatibility of building loads for PV consumption, a spectral analysis of building loads such as Heating, Ventilation and Air-Conditioning (HVAC) units and water heaters was performed. This defined the bandwidth over which these devices can operate. Results show that nearly all of the PV output (about 98%) is contained within frequencies lower than 1 mHz (equivalent to ~15 min), which is compatible for consumption with local building loads such as HVAC units and water heaters. Medium frequencies in the range of ~15 min to ~1 min are likely to be suitable for consumption by fan equipment of variable air volume HVAC systems that have time constants in the range of few seconds to few minutes. This study indicates that most of the PV generation can be consumed by building loads with the help of proper control strategies, thereby reducing impact on the grid and the size of storage systems.
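
    The headline statistic, the share of PV spectral power below a frequency cutoff, can be estimated from a power spectral density, e.g. via Welch's method (the segment length and cutoff below are illustrative):

    ```python
    import numpy as np
    from scipy.signal import welch

    def power_fraction_below(x, fs, f_cut):
        # fraction of total spectral power at frequencies <= f_cut
        f, pxx = welch(x, fs=fs, nperseg=8192)
        return np.trapz(pxx[f <= f_cut], f[f <= f_cut]) / np.trapz(pxx, f)

    # e.g., 1-second PV output samples, 1 mHz cutoff (~15-minute variations):
    # frac = power_fraction_below(pv_watts, fs=1.0, f_cut=1e-3)
    ```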

  5. A Frequency Control Approach for Hybrid Power System Using Multi-Objective Optimization

    Directory of Open Access Journals (Sweden)

    Mohammed Elsayed Lotfy

    2017-01-01

    Full Text Available A hybrid power system uses many wind turbine generators (WTG) and solar photovoltaics (PV) in isolated small areas. However, the output power of these renewable sources is not constant and can fluctuate rapidly, which has a serious effect on system frequency and the continuity of demand supply. In order to solve this problem, this paper presents a new frequency control scheme for a hybrid power system to ensure the supply of high-quality power in isolated areas. The proposed power system consists of a WTG, PV, aqua-electrolyzer (AE), fuel cell (FC), battery energy storage system (BESS), flywheel (FW) and diesel engine generator (DEG). Furthermore, plug-in hybrid electric vehicles (EVs) are implemented at the customer side. A full-order observer is utilized to estimate the supply error. Then, the estimated supply error is considered in the frequency domain. The high-frequency component is reduced by the BESS and FW, while the low-frequency component of the supply error is mitigated using the FC, EVs and DEG. Two PI controllers are implemented in the proposed system to control the system frequency and reduce the supply error. The epsilon multi-objective genetic algorithm (ε-MOGA) is applied to optimize the controllers' parameters. The performance of the proposed control scheme is compared with that of recent well-established techniques, such as a PID controller tuned by the quasi-oppositional harmony search algorithm (QOHSA). The effectiveness and robustness of the hybrid power system are investigated under various operating conditions.

  6. Optimal purification and sensitive quantification of DNA from fecal samples

    DEFF Research Database (Denmark)

    Jensen, Annette Nygaard; Hoorfar, Jeffrey

    2002-01-01

    Application of reliable, rapid and sensitive methods to laboratory diagnosis of zoonotic infections continues to challenge microbiological laboratories. The recovery of DNA from a swine fecal sample and a bacterial culture extracted by a conventional phenol-chloroform extraction method was compared...... = 0.99 and R-2 = 1.00). In conclusion, silica-membrane columns can provide a more convenient and less hazardous alternative to the conventional phenol-based method. The results have implications for further improvement of sensitive amplification methods for laboratory diagnosis....

  7. Optimal design of tilt carrier frequency computer-generated holograms to measure aspherics.

    Science.gov (United States)

    Peng, Jiantao; Chen, Zhe; Zhang, Xingxiang; Fu, Tianjiao; Ren, Jianyue

    2015-08-20

    Computer-generated holograms (CGHs) provide an approach to high-precision metrology of aspherics. A CGH is designed under the trade-off among size, mapping distortion, and line spacing. This paper describes an optimal design method based on the parametric model for tilt carrier frequency CGHs placed outside the interferometer focus points. Under the condition of retaining an admissible size and a tolerable mapping distortion, the optimal design method has two advantages: (1) separating the parasitic diffraction orders to improve the contrast of the interferograms and (2) achieving the largest line spacing to minimize sensitivity to fabrication errors. This optimal design method is applicable to common concave aspherical surfaces and illustrated with CGH design examples.

  8. Digital radiography: optimization of image quality and dose using multi-frequency software.

    Science.gov (United States)

    Precht, H; Gerke, O; Rosendahl, K; Tingberg, A; Waaler, D

    2012-09-01

    New developments in the processing of digital radiographs (DR), including multi-frequency processing (MFP), allow optimization of image quality and radiation dose. This is particularly promising in children, as they are believed to be more sensitive to ionizing radiation than adults. To examine whether the use of MFP software reduces the radiation dose without compromising quality at DR of the femur in 5-year-old-equivalent anthropomorphic and technical phantoms, a total of 110 images of an anthropomorphic phantom were acquired on a DR system (Canon DR with CXDI-50 C detector and MLT[S] software) and analyzed by three pediatric radiologists using Visual Grading Analysis. In addition, 3,500 images taken of a technical contrast-detail phantom (CDRAD 2.0) provided an objective image-quality assessment. Optimal image quality was maintained at a dose reduction of 61% with MLT(S)-optimized images. Even for images of diagnostic quality, MLT(S) provided a dose reduction of 88% as compared to the reference image. The impact of the software on image quality was significant for dose (mAs), dynamic range dark region and frequency band. By optimizing image-processing parameters, a significant dose reduction is possible without significant loss of image quality.

  9. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 1: Frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion article (Part 2).
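
    As a point of reference, a minimal classical Lomb-Scargle periodogram on an irregularly sampled series, the building block that the paper extends with trend handling, WOSA averaging and CARMA noise; the synthetic series below is an assumption for illustration:

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0, 100, 300))           # irregular sampling times
    y = np.sin(2 * np.pi * t / 10) + 0.5 * rng.standard_normal(t.size)

    periods = np.linspace(2, 50, 2000)
    omega = 2 * np.pi / periods                     # scipy expects angular frequencies
    pgram = lombscargle(t, y - y.mean(), omega)
    print(f"detected period ~ {periods[pgram.argmax()]:.2f} (true value: 10)")
    ```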

  10. Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.

    Science.gov (United States)

    Sidek, Khairul Azami; Khalil, Ibrahim

    2013-01-01

    Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high-security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying the interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved higher matching percentages, by up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using ECG recordings with lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improved the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, as compared with up to 97.2% without interpolation, verifies the study's claim that applying interpolation techniques enhances the quality of the ECG data.
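
    A small sketch of the two interpolants compared in the paper, applied to a toy stand-in for a low-rate ECG trace; the sampling rates and waveform are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline, PchipInterpolator

    fs_low, fs_high = 128, 512                       # illustrative rates, not the paper's
    t_low = np.arange(0, 2, 1 / fs_low)
    ecg = np.sin(2 * np.pi * 1.2 * t_low) * np.exp(-((t_low % 0.83) / 0.05) ** 2)

    t_high = np.arange(0, t_low[-1], 1 / fs_high)    # 4x denser time grid
    ecg_pchip = PchipInterpolator(t_low, ecg)(t_high)   # shape-preserving
    ecg_spline = CubicSpline(t_low, ecg)(t_high)        # smoother, may overshoot
    ```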

  11. Topology optimization and fabrication of low frequency vibration energy harvesting microdevices

    International Nuclear Information System (INIS)

    Deng, Jiadong; Rorschach, Katherine; Baker, Evan; Sun, Cheng; Chen, Wei

    2015-01-01

    Topological design of miniaturized resonating structures capable of harvesting electrical energy from low-frequency environmental mechanical vibrations faces a particular physical challenge due to conflicting design requirements: a low resonant frequency and miniaturization. In this paper, structural static stiffness to resist undesired lateral deformation is included in the objective function, to prevent the structure from degenerating and to force the solution to be manufacturable. The rational approximation of material properties interpolation scheme is introduced to deal with the problems of local vibration and instability of low-density areas induced by the design-dependent body forces. Both density and level-set based topology optimization (TO) methods are investigated in their parameterization, sensitivity analysis, and applicability to low-frequency energy harvester TO problems. Continuum-based variational formulations for sensitivity analysis and material-derivative-based shape sensitivity analysis are presented for the density method and the level-set method, respectively, and their similarities and differences are highlighted. An external damper is introduced to simulate the energy output of the resonator due to electrical damping, and Rayleigh proportional damping is used for mechanical damping. Optimization results for different scenarios are tested to illustrate the influences of dynamic and static loads. To demonstrate manufacturability, the designs are built to scale using a 3D microfabrication method and assembled into vibration energy harvester prototypes. The fabricated devices based on the optimal results from the different TO techniques are tested and compared with the simulation results. The structures obtained by the level-set based TO method require less post-processing before fabrication, and the structures obtained by the density-based TO method have resonant frequencies as low as 100 Hz. The electrical voltage response

  12. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on the accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (x̄ = 224) for radiotracking data and 16-130 km2 (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with the number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve the accuracy and precision of home range estimates.
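
    The MCP estimate itself is a convex-hull area; a sketch of how it grows with the number of locations, on synthetic points rather than the bear data:

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(2)
    fixes = rng.standard_normal((400, 2)) * 5.0     # synthetic GPS locations, km

    for n in (15, 60, 150, 400):                    # radiotracking- to GPS-sized samples
        subset = fixes[rng.choice(400, n, replace=False)]
        hull = ConvexHull(subset)
        print(f"n={n:3d}  MCP area = {hull.volume:7.1f} km^2")  # .volume is area in 2D
    ```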

  13. Estimating species–area relationships by modeling abundance and frequency subject to incomplete sampling

    Science.gov (United States)

    Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio

    2016-01-01

    Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied

  14. Fiber optics frequency comb enabled linear optical sampling with operation wavelength range extension.

    Science.gov (United States)

    Liao, Ruolin; Wu, Zhichao; Fu, Songnian; Zhu, Shengnan; Yu, Zhe; Tang, Ming; Liu, Deming

    2018-02-01

    Although the linear optical sampling (LOS) technique is powerful enough to characterize various advanced modulation formats with high symbol rates, the central wavelength of a pulsed local oscillator (LO) needs to be carefully set according to that of the signal under test, due to the coherent mixing operation. Here, we experimentally demonstrate wideband LOS enabled by a fiber optics frequency comb (FOFC). Meanwhile, when the broadband FOFC acts as the pulsed LO, we propose a scheme to mitigate the enhanced sampling error arising in the non-ideal response of a balanced photodetector. Finally, precise characterizations of arbitrary 128 Gbps PDM-QPSK wavelength channels from 1550 to 1570 nm are successfully achieved, when a 101.3 MHz frequency spaced comb with a 3 dB spectral power ripple of 20 nm is used.

  15. Measuring saccade peak velocity using a low-frequency sampling rate of 50 Hz.

    Science.gov (United States)

    Wierts, Roel; Janssen, Maurice J A; Kingma, Herman

    2008-12-01

    During the last decades, small head-mounted video eye trackers have been developed in order to record eye movements. Real-time systems, with a low sampling frequency of 50/60 Hz, are used in clinical vestibular practice, but are generally considered unsuitable for measuring fast eye movements. In this paper, it is shown that saccadic eye movements with an amplitude of at least 5 degrees can, to a good approximation, be considered band-limited up to a frequency of 25-30 Hz. Using the Nyquist theorem to reconstruct saccadic eye movement signals at higher temporal resolution, it is shown that accurate values for saccade peak velocities, recorded at 50 Hz, can be obtained, but saccade peak accelerations and decelerations cannot. In conclusion, video eye trackers sampling at 50/60 Hz are appropriate for detecting clinically relevant saccade peak velocities, in contrast to what has been stated until now.
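
    A sketch of the band-limited argument: Whittaker-Shannon (sinc) reconstruction of a toy 10-degree saccade sampled at 50 Hz, with the peak velocity read off the dense reconstruction; the saccade profile is an assumption for illustration:

    ```python
    import numpy as np

    fs = 50.0                                        # video eye tracker sampling rate
    t_n = np.arange(0, 0.4, 1 / fs)                  # 50 Hz sample instants
    pos = 10 / (1 + np.exp(-(t_n - 0.2) / 0.02))     # toy 10-degree saccade profile

    t_dense = np.linspace(0, t_n[-1], 4000)
    # Whittaker-Shannon: x(t) = sum_n x[n] * sinc((t - n*T) * fs)
    recon = (pos * np.sinc((t_dense[:, None] - t_n[None, :]) * fs)).sum(axis=1)
    peak_vel = np.abs(np.gradient(recon, t_dense)).max()
    print(f"peak velocity ~ {peak_vel:.0f} deg/s")
    ```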

  16. MalHaploFreq: A computer programme for estimating malaria haplotype frequencies from blood samples

    Directory of Open Access Journals (Sweden)

    Smith Thomas A

    2008-07-01

    Full Text Available Abstract Background Molecular markers, particularly those associated with drug resistance, are important surveillance tools that can inform policy choice. People infected with falciparum malaria often contain several genetically-distinct clones of the parasite; genotyping the patients' blood reveals whether or not the marker is present (i.e. its prevalence), but does not reveal its frequency. For example, a person with four malaria clones may contain both mutant and wildtype forms of a marker, but it is not possible to distinguish the relative frequencies of the mutant and wildtype forms, i.e. 1:3, 2:2 or 3:1. Methods An appropriate method for obtaining frequencies from prevalence data is Maximum Likelihood analysis. A computer programme has been developed that allows the frequency of markers, and haplotypes defined by up to three codons, to be estimated from blood phenotype data. Results The programme has been fully documented [see Additional File 1] and provided with a user-friendly interface suitable for large-scale analyses. It returns accurate frequencies and 95% confidence intervals from simulated data sets and has been extensively tested on field data sets. Conclusion The programme is included [see Additional File 2] and may be freely downloaded. It can then be used to extract molecular marker and haplotype frequencies from their prevalence in human blood samples. This should enhance the use of frequency data to inform antimalarial drug policy choice.
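
    A one-marker toy version of the Maximum Likelihood step described above, assuming the number of clones per sample (multiplicity of infection) is known; MalHaploFreq itself handles haplotypes defined by up to three codons, which this sketch omits:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(3)
    moi = np.array([1, 2, 3] * 100)                  # clones per sample (assumed known)
    p_true = 0.3
    detected = rng.random(moi.size) < 1 - (1 - p_true) ** moi   # mutant seen at all?

    def neg_log_lik(p):
        prob = 1 - (1 - p) ** moi                    # P(sample shows the mutant marker)
        return -np.sum(np.where(detected, np.log(prob), np.log(1 - prob)))

    fit = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
    print(f"ML frequency estimate: {fit.x:.3f} (true value: 0.3)")
    ```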

  17. The Value Estimation of an HFGW Frequency Time Standard for Telecommunications Network Optimization

    Science.gov (United States)

    Harper, Colby; Stephenson, Gary

    2007-01-01

    The emerging technology of gravitational wave control is used to augment a communication system using a development roadmap suggested in Stephenson (2003) for applications emphasized in Baker (2005). In the present paper, consideration is given to the value of a High Frequency Gravitational Wave (HFGW) channel purely as a method of frequency and time reference distribution for use within conventional Radio Frequency (RF) telecommunications networks. Specifically, the native value of conventional telecommunications networks may be optimized by using an unperturbed frequency time standard (FTS) to (1) improve terminal navigation and Doppler estimation performance via improved time difference of arrival (TDOA) from a universal time reference, and (2) improve acquisition speed, coding efficiency, and dynamic bandwidth efficiency through the use of a universal frequency reference. A model utilizing a discounted cash flow technique provides an estimate of the additional value that HFGW FTS technology could bring to a mixed-technology HFGW/RF network. By applying a simple net present value analysis with supporting reference valuations to such a network, it is demonstrated that an HFGW FTS could create a sizable improvement within an otherwise conventional RF telecommunications network. Our conservative model establishes a low-side value estimate of approximately 50B USD Net Present Value for an HFGW FTS service, with reasonable potential high-side values at significant multiples of this low-side value floor.
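
    The valuation machinery here is ordinary discounted cash flow; a minimal sketch follows, with placeholder cash flows and discount rate rather than the authors' inputs:

    ```python
    import numpy as np

    def npv(rate, cash_flows):
        """Net present value of yearly cash flows, first entry at t = 1 year."""
        years = np.arange(1, len(cash_flows) + 1)
        return np.sum(np.asarray(cash_flows) / (1 + rate) ** years)

    flows = [5e9] * 15                     # hypothetical annual benefit of an HFGW FTS
    print(f"NPV at 10%: {npv(0.10, flows) / 1e9:.1f} B USD")
    ```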

  18. Frequency-Selective Signal Sensing with Sub-Nyquist Uniform Sampling Scheme

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2015-01-01

    In this paper the authors discuss the problem of acquisition and reconstruction of a signal polluted by adjacent-channel interference. The authors propose a method to find a sub-Nyquist uniform sampling pattern which allows for correct reconstruction of selected frequencies. The method is inspired...... by the Restricted Isometry Property, which is known from the field of compressed sensing. Then, compressed sensing is used to successfully reconstruct a wanted signal even if some of the uniform samples were randomly lost, e.g. due to ADC saturation. An experiment which tests the proposed method in practice...

  19. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. With this method, metamodels are constructed repeatedly through the addition of sampling points, namely the extrema of the current metamodel and the minimum points of a density function, so that progressively more accurate metamodels are obtained. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples. PMID:25133206
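
    A one-dimensional sketch of the add-points-and-refit loop, using SciPy's RBFInterpolator; the exploitation/exploration rule below (surrogate minimizer plus sparsest-region point) is a simplified stand-in for the paper's extrema-plus-density-function criterion:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    f = lambda x: np.sin(3 * x) + 0.5 * x ** 2          # "expensive" model (toy)
    X = np.array([[-2.0], [0.0], [2.0]])                # initial sample sites
    grid = np.linspace(-2, 2, 401)[:, None]

    for _ in range(5):
        surrogate = RBFInterpolator(X, f(X).ravel())    # refit the RBF metamodel
        x_min = grid[surrogate(grid).argmin(), 0]       # extremum of the metamodel
        gap = np.abs(grid - X.ravel()).min(axis=1)
        x_far = grid[gap.argmax(), 0]                   # sparsest region of the domain
        for x_new in (x_min, x_far):
            if np.abs(X - x_new).min() > 1e-9:          # skip near-duplicate sites
                X = np.vstack([X, [[x_new]]])

    print(f"best sampled point so far: x = {X[f(X).ravel().argmin(), 0]:.3f}")
    ```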

  20. The T-lock: automated compensation of radio-frequency induced sample heating

    International Nuclear Information System (INIS)

    Hiller, Sebastian; Arthanari, Haribabu; Wagner, Gerhard

    2009-01-01

    Modern high-field NMR spectrometers can stabilize the nominal sample temperature at a precision of less than 0.1 K. However, the actual sample temperature may differ from the nominal value by several degrees because the sample heating caused by high-power radio frequency pulses is not readily detected by the temperature sensors. Without correction, transfer of chemical shifts between different experiments causes problems in the data analysis. In principle, the temperature differences can be corrected by manual procedures, but this is cumbersome and not fully reliable. Here, we introduce the concept of a 'T-lock', which automatically maintains the sample at the same reference temperature over the course of different NMR experiments. The T-lock works by continuously measuring the resonance frequency of a suitable spin and simultaneously adjusting the temperature control, thus locking the sample temperature at the reference value. For three different nuclei, 13C, 17O and 31P in the compounds alanine, water, and phosphate, respectively, the T-lock accuracy was found to be <0.1 K. The use of dummy scan periods with variable lengths allows a reliable establishment of thermal equilibrium before the acquisition of an experiment starts.
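
    A conceptual sketch of the T-lock loop: measure the resonance frequency, convert its deviation from the reference into an apparent temperature error, and correct the regulator set point. The toy spectrometer class, the -100 Hz/K slope and the gain are assumptions for illustration, not the instrument interface:

    ```python
    class ToySpectrometer:
        """Stand-in for the instrument: resonance frequency drifts with temperature."""
        def __init__(self, rf_heating_k=0.8, slope_hz_per_k=-100.0, f0_hz=678e6):
            self.set_point_k = 300.0          # regulator set point
            self.heat = rf_heating_k          # undetected RF heating, K (assumed)
            self.slope, self.f0 = slope_hz_per_k, f0_hz
        def read_frequency(self):             # actual temp = set point + RF heating
            return self.f0 + self.slope * (self.set_point_k + self.heat - 300.0)

    spec = ToySpectrometer()
    f_ref = spec.f0                           # reference frequency at exactly 300.0 K
    for _ in range(10):                       # T-lock iterations
        d_temp = (spec.read_frequency() - f_ref) / spec.slope
        spec.set_point_k -= 0.5 * d_temp      # proportional correction of set point
    print(f"residual temperature error: {spec.set_point_k + spec.heat - 300.0:.3f} K")
    ```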

  1. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement, by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The proposed method is shown to be more efficient because it requires only a small number of sample points, according to the comparison results.
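
    A toy version of the first step: a bare-bones PSO tunes the Kriging (Gaussian-process) length-scale by maximizing the log marginal likelihood; the data, swarm settings and single-parameter search space are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(4)
    X = rng.uniform(-3, 3, (25, 1))
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(25)

    def fitness(log_ls):                       # GP log marginal likelihood at this scale
        gpr = GaussianProcessRegressor(kernel=RBF(np.exp(log_ls)), optimizer=None)
        return gpr.fit(X, y).log_marginal_likelihood_value_

    pos = rng.uniform(-2, 2, 12)               # particle positions (log length-scale)
    vel = np.zeros(12)
    best_p, best_g = pos.copy(), pos[0]
    for _ in range(30):
        for i in range(12):
            if fitness(pos[i]) > fitness(best_p[i]): best_p[i] = pos[i]
            if fitness(best_p[i]) > fitness(best_g): best_g = best_p[i]
        r1, r2 = rng.random(12), rng.random(12)
        vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (best_g - pos)
        pos += vel
    print(f"PSO-optimized length-scale: {np.exp(best_g):.3f}")
    ```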

  2. Optimal sampling period of the digital control system for the nuclear power plant steam generator water level control

    International Nuclear Information System (INIS)

    Hur, Woo Sung; Seong, Poong Hyun

    1995-01-01

    A great effort has been made to improve nuclear plant control systems by use of digital technologies, and a long-term schedule for control system upgrades has been prepared with the aim of implementation in next-generation nuclear plants. In the case of a digital control system, it is important to decide the sampling period for analysis and design of the system, because the performance and stability of a digital control system depend on the value of its sampling period. There is, however, currently no systematic method used universally for determining the sampling period of a digital control system. A traditional way to select the sampling frequency is to use 20 to 30 times the bandwidth of the analog control system which has the same system configuration and parameters as the digital one. In this paper, a new method to select the sampling period is suggested which takes into account the performance as well as the stability of the digital control system. By use of Irving's steam generator model, the optimal sampling period of an assumed digital control system for steam generator level control is estimated and then verified in the digital control simulation system for Kori-2 nuclear power plant steam generator level control. Consequently, we conclude that the optimal sampling period of the digital control system for Kori-2 nuclear power plant steam generator level control is 1 second for all power ranges. 7 figs., 3 tabs., 8 refs. (Author)
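
    How the choice of sampling period plays out can be sketched by discretizing a loop at several candidate periods and checking the closed-loop poles; the first-order plant and gain below are illustrative assumptions, not Irving's steam generator model:

    ```python
    import numpy as np
    from scipy.signal import cont2discrete

    # Toy first-order plant x' = -0.5 x + u under discrete proportional feedback.
    A, B, C, D = (np.array([[-0.5]]), np.array([[1.0]]),
                  np.array([[1.0]]), np.array([[0.0]]))
    K = 2.0                                            # feedback gain (illustrative)

    for dt in (0.1, 0.5, 1.0, 2.0, 5.0):               # candidate sampling periods, s
        Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt, method="zoh")
        poles = np.linalg.eigvals(Ad - (Bd * K) @ Cd)  # discrete closed-loop poles
        stable = bool(np.all(np.abs(poles) < 1))       # inside the unit circle?
        print(f"T = {dt:4.1f} s  |pole| = {np.abs(poles).max():.3f}  stable: {stable}")
    ```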

  3. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 2: Extension to time-frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is, however, defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Markov chain Monte Carlo (MCMC) methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.
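
    A regular-sampling analogue of the scalogram discussed above, using a complex Morlet wavelet from PyWavelets; WAVEPAL generalizes this picture to irregular time axes, trends and CARMA noise, which this sketch does not attempt:

    ```python
    import numpy as np
    import pywt

    fs = 10.0                                          # Hz, regular grid for the sketch
    t = np.arange(0, 100, 1 / fs)
    sig = np.sin(2 * np.pi * 0.5 * t * (1 + t / 200))  # chirp: frequency drifts upward

    scales = np.geomspace(4, 128, 80)
    coefs, freqs = pywt.cwt(sig, scales, "cmor1.5-1.0", sampling_period=1 / fs)
    scalogram = np.abs(coefs) ** 2                     # power in the time-frequency plane
    print(f"frequency range covered: {freqs.min():.2f}-{freqs.max():.2f} Hz")
    ```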

  4. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution

    International Nuclear Information System (INIS)

    Cho, Sanghee; Grazioso, Ron; Zhang Nan; Aykac, Mehmet; Schmand, Matthias

    2011-01-01

    The main focus of our study is to investigate how the performance of digital timing methods is affected by the sampling rate, anti-aliasing filter and signal interpolation filter. We used the Nyquist sampling theorem to address some basic questions, such as: what is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, the aliasing effect produces artifacts in the timing resolution estimates; the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally above the Nyquist rate, proper signal interpolation is important. A sharp roll-off (higher-order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational cost is higher. We demonstrate the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed no significant timing resolution degradation down to a 1.3 GHz sampling frequency, and the computational requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool, checking for constant timing resolution behavior of a given timing pick-off method regardless of source location changes. Lastly, a performance comparison of several digital timing methods is also shown.

  5. SamplingStrata: An R Package for the Optimization of Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Giulio Barcaroli

    2014-11-01

    Full Text Available When designing a sampling survey, constraints are usually set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata, by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable from the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of the partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, allows determination of the best stratification for a population frame, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.

  6. An Agent-Based Model for Optimization of Road Width and Public Transport Frequency

    Directory of Open Access Journals (Sweden)

    Mark E. Koryagin

    2015-04-01

    Full Text Available An urban passenger transportation problem is studied. Municipal authorities and passengers are regarded as participants in the passenger transportation system. The municipal authorities have to optimise road width and public transport frequency. The road consists of a dedicated bus lane and lanes for passenger cars. The car travel time depends on the number of road lanes and passengers’ choice of travel mode. The passengers’ goal is to minimize total travel costs, including time value. The passengers try to find the optimal ratio between public transport and cars. The conflict between municipal authorities and the passengers is described as a game theoretic model. The existence of Nash equilibrium in the model is proved. The numerical example shows the influence of the value of time and intensity of passenger flow on the equilibrium road width and public transport frequency.

  7. Flux pinning characteristics in cylindrical niobium samples used for superconducting radio frequency cavity fabrication

    Science.gov (United States)

    Dhavale, Asavari S.; Dhakal, Pashupati; Polyanskii, Anatolii A.; Ciovati, Gianluigi

    2012-06-01

    We present the results from DC magnetization and penetration depth measurements of cylindrical bulk large-grain (LG) and fine-grain (FG) niobium samples used for the fabrication of superconducting radio frequency (SRF) cavities. The surface treatment consisted of electropolishing and low-temperature baking as they are typically applied to SRF cavities. The magnetization data are analyzed using a modified critical state model. The critical current density Jc and pinning force Fp are calculated from the magnetization data and their temperature dependence and field dependence are presented. The LG samples have lower critical current density and pinning force density compared to FG samples, favorable to lower flux trapping efficiency. This effect may explain the lower values of residual resistance often observed in LG cavities than FG cavities.

  9. High frequency of parvovirus B19 DNA in bone marrow samples from rheumatic patients

    DEFF Research Database (Denmark)

    Lundqvist, Anders; Isa, Adiba; Tolfvenstam, Thomas

    2005-01-01

    BACKGROUND: Human parvovirus B19 (B19) polymerase chain reaction (PCR) is now a routine analysis and serves as a diagnostic marker as well as a complement or alternative to B19 serology. The clinical significance of a positive B19 DNA finding is however dependent on the type of tissue or body fluid...... analysed and of the immune status of the patient. OBJECTIVES: To analyse the clinical significance of B19 DNA positivity in bone marrow samples from rheumatic patients. STUDY DESIGN: Parvovirus B19 DNA was analysed in paired bone marrow and serum samples by nested PCR technique. Serum was also analysed...... negative group. A high frequency of parvovirus B19 DNA was thus detected in bone marrow samples in rheumatic patients. The clinical data does not support a direct association between B19 PCR positivity and rheumatic disease manifestation. Therefore, the clinical significance of B19 DNA positivity in bone...

  10. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, M; Li, R; Xing, L [Stanford University School of Medicine, Stanford, CA (United States)]; Ye, Y [Stanford Univ, Management Science and Engineering, Stanford, CA (United States)]; Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)]

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm then continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provide an effective way to optimize simultaneously the large collection of station parameters and significantly improves

  12. A small perturbation based optimization approach for the frequency placement of high aspect ratio wings

    Science.gov (United States)

    Goltsch, Mandy

    Design denotes the transformation of an identified need to its physical embodiment in a traditionally iterative approach of trial and error. Conceptual design plays a prominent role, but the almost infinite number of possible solutions at the outset of design necessitates fast evaluations. The corresponding practice of empirical equations and low-fidelity analyses becomes obsolete in the light of novel concepts. Ever-increasing system complexity and resource scarcity mandate new approaches to adequately capture system characteristics. Contemporary concerns in atmospheric science and homeland security have created an operational need for unconventional configurations. Unmanned long-endurance flight at high altitudes offers a unique showcase for the exploration of new design spaces and the incidental deficit of conceptual modeling and simulation capabilities. Structural and aerodynamic performance requirements necessitate lightweight materials and high aspect ratio wings, resulting in distinct structural and aeroelastic response characteristics that stand in close correlation with natural vibration modes. The present research effort revolves around the development of an efficient and accurate optimization algorithm for high aspect ratio wings subject to natural frequency constraints. Foundational cornerstones are beam dimensional reduction and modal perturbation redesign. Local and global analyses inherent to the former suggest corresponding levels of local and global optimization. The present approach departs from this suggestion. It introduces local-level surrogate models to enable a methodology that consists of multi-level analyses feeding into a single-level optimization. The innovative heart of the new algorithm originates in small perturbation theory. A sequence of small perturbation solutions allows the optimizer to make incremental movements within the design space. It enables a directed search that is free of costly gradients. System matrices are decomposed

  13. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup that improves the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor-of-2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  14. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterizing drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration-time profile, a population plasma pharmacokinetic model and the limit of quantification (LOQ) of the BAL method, and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid are ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of pulmonary distribution for both fast and slowly equilibrating drugs.
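
    The two-point rule sketched on a one-compartment oral-dose plasma curve: take the first sample as early as quantifiable, then find the matching concentration on the declining limb; the pharmacokinetic parameters and LOQ are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    ka, ke, A, loq = 2.0, 0.25, 10.0, 0.5            # absorption/elimination, scale, LOQ

    def conc(t):                                     # one-compartment oral-dose profile
        return A * (np.exp(-ke * t) - np.exp(-ka * t))

    grid = np.linspace(0.01, 24, 2400)
    t_early = grid[np.argmax(conc(grid) >= loq)]     # earliest quantifiable time
    t_peak = grid[np.argmax(conc(grid))]
    t_late = brentq(lambda t: conc(t) - conc(t_early), t_peak, 24)  # declining limb
    print(f"early sample: {t_early:.2f} h, late sample: {t_late:.2f} h")
    ```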

  15. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    Full Text Available During environmental testing, the estimation of random vibration signals (RVS) is an important technique for ensuring airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimation indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating data from a single flight test of a certain aircraft. At last, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The result shows that GBM has superiority for estimating dynamic signals with small samples, and the estimated reliability is proved to be 100% at the given confidence level.
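
    A plain bootstrap interval for the mean peak level of a small vibration sample, as a baseline for the idea; the paper's GBM additionally passes each resample through a grey GM(1,1) predictor to capture frequency-varying behaviour, which this sketch omits:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    peaks = np.array([4.1, 3.8, 4.6, 4.0, 4.9, 3.7, 4.3])  # small sample of peaks, g

    boot = np.array([rng.choice(peaks, peaks.size, replace=True).mean()
                     for _ in range(5000)])                 # resample-and-average
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"estimated value: {boot.mean():.2f} g, 95% interval: [{lo:.2f}, {hi:.2f}] g")
    ```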

  16. Optimal Control Method for Wind Farm to Support Temporary Primary Frequency Control with Minimized Wind Energy Cost

    DEFF Research Database (Denmark)

    Wang, Haijiao; Chen, Zhe; Jiang, Quanyuan

    2015-01-01

    This study proposes an optimal control method for a wind farm (WF) based on variable-speed wind turbines (VSWTs) to support temporary primary frequency control. The control method consists of two layers: temporary frequency support control (TFSC) of the VSWT, and temporary support power optimal dispatch (TSPOD) of the WF. With TFSC, the VSWT can temporarily provide extra power to support the system frequency under varying and wide-range wind speed. In the WF control centre, TSPOD optimally dispatches the frequency support power orders to the VSWTs that operate under different wind speeds, minimises the wind energy cost of frequency support, and satisfies the support capabilities of the VSWTs. The effectiveness of the whole control method is verified in the IEEE-RTS built in MATLAB/Simulink, and compared with a published de-loading method.

  17. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for sets of physical and chemical properties and to investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimation variance at these locations. The addition of optimal samples for specific regions increased the accuracy by up to 2% for chemical and 1% for physical properties. The use of a sample grid and a medium-scale variogram as prior information for the conception of additional sampling schemes proved very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.
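
    A minimal spatial-simulated-annealing loop in the same spirit: sample points are nudged to minimize the mean distance from prediction nodes to their nearest sample, a common proxy for mean kriging variance (the study minimizes kriging variance itself, using a medium-scale variogram):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    nodes = np.stack(np.meshgrid(np.linspace(0, 1, 25),
                                 np.linspace(0, 1, 25)), -1).reshape(-1, 2)
    pts = rng.random((20, 2))                          # initial sampling scheme

    def objective(p):                                  # proxy for mean kriging variance
        d = np.linalg.norm(nodes[:, None, :] - p[None, :, :], axis=2)
        return d.min(axis=1).mean()

    temp, obj = 0.05, objective(pts)
    for _ in range(3000):
        cand = pts.copy()
        i = rng.integers(20)
        cand[i] = np.clip(cand[i] + rng.normal(0, 0.05, 2), 0, 1)   # nudge one point
        cand_obj = objective(cand)
        if cand_obj < obj or rng.random() < np.exp((obj - cand_obj) / temp):
            pts, obj = cand, cand_obj                  # accept (always if better)
        temp *= 0.999                                  # cooling schedule
    print(f"final mean nearest-sample distance: {obj:.4f}")
    ```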

  18. Optimization of a space spectrograph main frame and frequency response analysis of the frame

    Science.gov (United States)

    Zhang, Xin-yu; Chen, Zhi-yuan; Yang, Shi-mo

    2009-07-01

    A space spectrograph main structure is optimized and examined in order to satisfy operational needs in space. The spectrograph will be transported into its operational orbit by the launch vehicle and will undergo a severe dynamic environment during the spacecraft injection period. Unexpected shocks may cause a decline in observation accuracy and even equipment damage. The main frame is one of the most important parts, because its mechanical performance has great influence on the operational life of the spectrograph, the accuracy of observation, etc. To reduce cost and confirm stability, lower weight and higher structural stiffness of the frame are simultaneously required. Structure optimization was conducted considering the initial-design modal analysis results. The fundamental modal frequency rose by 10.34% while the overall weight decreased by 8.63% compared to the initial design. The purpose of this study is to analyze the mechanical properties of the new main frame design and verify whether it can satisfy strict optical demands under the dynamic impact during spacecraft injection. For understanding and forecasting the frequency response characteristics of the main structure in the mechanical environment experiment, dynamic analysis of the structure was performed, simulating impulse loads from the bottom base. Frequency response analysis (FRA) of the frame was then performed using the FEA software MSC.PATRAN/NASTRAN. Results of shock response spectrum (SRS) responses from the base excitations were given. Stress and acceleration dynamic responses of essential positions during the spacecraft injection course were also calculated, and the spectrometer structure design was examined considering stiffness/strength demands. In this simulation, the maximum stresses of the Cesic material in the two acceleration application cases are 45.1 and 74.1 MPa, respectively, both less than the yield strength. As demonstrated by the simulation, the strength reserve of the frame is

  19. Binary particle swarm optimization for frequency band selection in motor imagery based brain-computer interfaces.

    Science.gov (United States)

    Wei, Qingguo; Wei, Zhonghai

    2015-01-01

    A brain-computer interface (BCI) enables people suffering from affective neurological diseases to communicate with the external world. Common spatial pattern (CSP) is an effective algorithm for feature extraction in motor imagery based BCI systems. However, many studies have shown that the performance of CSP depends heavily on the frequency band of the EEG signals used for the construction of covariance matrices. The use of different frequency bands to extract signal features may lead to different classification performances, determined by the discriminative and complementary information they contain. In this study, the broad frequency band (8-30 Hz) is divided into 10 sub-bands of bandwidth 4 Hz with 2 Hz overlap. Binary particle swarm optimization (BPSO) is used to find the best sub-band set to improve the performance of CSP and the subsequent classification. Experimental results demonstrate that the proposed method achieved an average improvement of 6.91% in cross-validation accuracy when compared to broadband CSP.
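
    A skeleton binary PSO over the 10 sub-bands; in the real pipeline the fitness is cross-validated classification accuracy after CSP on the selected bands, for which a synthetic score (rewarding three "informative" bands, penalizing set size) stands in here:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_particles, n_bands = 20, 10
    informative = np.zeros(n_bands)
    informative[[2, 5, 7]] = 1.0                       # synthetic "useful" sub-bands

    def fitness(mask):                                 # placeholder for CV accuracy
        return mask @ informative - 0.1 * mask.sum()

    X = (rng.random((n_particles, n_bands)) > 0.5).astype(float)   # bit vectors
    V = np.zeros_like(X)
    P, g = X.copy(), X[0].copy()
    for _ in range(50):
        for i in range(n_particles):
            if fitness(X[i]) > fitness(P[i]): P[i] = X[i]
            if fitness(P[i]) > fitness(g): g = P[i].copy()
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)
        X = (rng.random(X.shape) < 1 / (1 + np.exp(-V))).astype(float)  # sigmoid rule
    print(f"selected sub-bands: {np.flatnonzero(g)}")
    ```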

  20. Resonant Frequency Calculation and Optimal Design of Peano Fractal Antenna for Partial Discharge Detection

    Directory of Open Access Journals (Sweden)

    Jian Li

    2012-01-01

    Full Text Available Ultra-high-frequency (UHF) approaches have attracted increasing attention recently and are considered a promising technology for online monitoring of partial discharge (PD) signals. This paper presents a Peano fractal antenna for UHF PD online monitoring of transformers, with small size and multiband behaviour. An approximate formula for calculating the first resonant frequency of the Peano fractal antenna is presented. The results show that the first resonant frequency of the Peano fractal antenna is lower than that of the Hilbert fractal antenna when the outer dimensions are approximately equivalent. The optimal geometric parameters of the antenna were obtained through simulation. Actual PD experiments were carried out for two typical artificial insulation defect models, with the proposed antenna and the existing Hilbert antenna both used for the PD measurement. The experimental results show that the Peano fractal antenna is qualified for online UHF PD monitoring and slightly more suitable than the Hilbert fractal antenna for pattern recognition, based on analysis of the waveforms of the detected UHF PD signals.

  1. Mitigation of Power frequency Magnetic Fields. Using Scale Invariant and Shape Optimization Methods

    Energy Technology Data Exchange (ETDEWEB)

    Salinas, Ener; Yueqiang Liu; Daalder, Jaap; Cruz, Pedro; Antunez de Souza, Paulo Roberto Jr; Atalaya, Juan Carlos; Paula Marciano, Fabianna de; Eskinasy, Alexandre

    2006-10-15

    The present report describes the development and application of two novel methods for implementing mitigation techniques for magnetic fields at power frequencies. The first method makes use of scaling rules for electromagnetic quantities, while the second applies a 2D shape optimization algorithm based on gradient methods. Before this project, the first method had already been successfully applied (by some of the authors of this report) to electromagnetic designs involving purely conductive materials (e.g. copper, aluminium), which implied a linear formulation. Here we went beyond this approach and developed a formulation involving ferromagnetic (i.e. non-linear) materials. Surprisingly, we obtained good equivalent replacements for test transformers by varying the input current. Although the validity of this equivalence is constrained to regions not too close to the source, the results can still be considered useful, as most field mitigation techniques are developed precisely to reduce the magnetic field in regions relatively far from the sources. The shape optimization method was applied in this project to calculate the optimal geometry of a purely conductive plate to mitigate the magnetic field originating from underground cables. The objective function was a weighted combination of the magnetic energy in the region of interest and the dissipated heat in the shielding material. To our surprise, the process produced shapes of complex structure, difficult to interpret and probably even harder to anticipate. However, the practical implementation (using approximations of these shapes) gave excellent experimental mitigation factors.

  2. Efficient computation of the joint sample frequency spectra for multiple populations.

    Science.gov (United States)

    Kamm, John A; Terhorst, Jonathan; Song, Yun S

    2017-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
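
    For intuition about the statistic being computed, the snippet below evaluates the expected SFS in the simplest textbook case only: one constant-size population under the neutral coalescent, where E[ξ_i] = θ/i for i = 1, …, n−1. This baseline is well known; the record's contribution (implemented in momi) is the much harder multi-population, variable-size generalization.

    ```python
    import numpy as np

    def expected_sfs_constant(n, theta=1.0):
        """Expected SFS for a sample of size n from one constant-size population:
        E[xi_i] = theta / i for i = 1..n-1 (standard neutral coalescent)."""
        i = np.arange(1, n)
        return theta / i

    sfs = expected_sfs_constant(n=10)
    print(np.round(sfs / sfs.sum(), 3))  # normalized spectrum: singletons dominate
    ```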

  3. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Full Text Available Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which also depends on the robot's position in Cartesian space. To compensate for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes an approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process, modeled through its components (product, process and resource), and by automatically configuring a sampling-based motion planning problem solved with the transition-based rapidly-exploring random tree (T-RRT) algorithm. The approach is simulated in CAM software for a machining path, demonstrating its functionality and outlining future potential for optimal motion generation in robotic machining processes.

  4. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  5. Instantaneous Fundamental Frequency Estimation with Optimal Segmentation for Nonstationary Voiced Speech

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

    In speech processing, the speech is often considered stationary within segments of 20–30 ms even though this is well known not to be true. In this paper, we take the non-stationarity of voiced speech into account by using a linear chirp model to describe the speech signal. We propose a maximum likelihood estimator of the fundamental frequency and chirp rate of this model, and show that it reaches the Cramer-Rao bound. Since the speech varies over time, a fixed segment length is not optimal, and we propose to make a segmentation of the signal based on the maximum a posteriori (MAP) criterion. Using this segmentation, the results show a better fit of the chirp model than the harmonic model to the speech signal. The methods are based on an assumption of white Gaussian noise, and, therefore, two prewhitening filters are also proposed.
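
    As a toy illustration of the estimation principle (a one-component simplification, not the paper's harmonic chirp formulation), the sketch below grid-searches the maximum likelihood estimate of a linear chirp's starting frequency and chirp rate. Under white Gaussian noise, ML reduces to maximizing the magnitude of the correlation between the data and the candidate chirp. Signal parameters are invented for the example.

    ```python
    import numpy as np

    fs = 8000.0                      # sampling rate, Hz
    t = np.arange(256) / fs
    true_f0, true_k = 200.0, 1500.0  # start frequency (Hz) and chirp rate (Hz/s)
    rng = np.random.default_rng(1)
    x = np.cos(2 * np.pi * (true_f0 * t + 0.5 * true_k * t**2))
    x += 0.1 * rng.standard_normal(t.size)

    best, best_val = None, -np.inf
    for f0 in np.arange(100.0, 300.0, 1.0):
        for k in np.arange(-3000.0, 3100.0, 100.0):
            # Correlate with a complex chirp; |.| removes the unknown phase.
            ref = np.exp(-2j * np.pi * (f0 * t + 0.5 * k * t**2))
            val = np.abs(np.dot(x, ref))
            if val > best_val:
                best, best_val = (f0, k), val

    print("ML estimate (f0, chirp rate):", best)  # close to (200.0, 1500.0)
    ```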

  6. A Data-Driven Frequency-Domain Approach for Robust Controller Design via Convex Optimization

    CERN Document Server

    AUTHOR|(CDS)2092751; Martino, Michele

    The objective of this dissertation is to develop data-driven frequency-domain methods for designing robust controllers through the use of convex optimization algorithms. Many of today's industrial processes are becoming more complex, and deriving accurate physical models for these plants from first principles may be impossible. Even when a model is available, it may be too complex to use for an appropriate controller design. With the increased developments in the computing world, large amounts of measured data can be easily collected and stored for processing purposes; data can also be collected and used online. Thus it is very sensible to make full use of this data for controller design, performance evaluation, and stability analysis. The design methods proposed in this work ensure that the dynamics of a system are captured in an experiment and avoid the problem of unmodeled dynamics associated with parametric models. The devised methods consider robust designs...

  7. Nonlinear optimization of the modern synchrotron radiation storage ring based on frequency map analysis

    International Nuclear Information System (INIS)

    Tian Shunqiang; Liu Guimin; Hou Jie; Chen Guangling; Wan Chenglan; Li Haohu

    2009-01-01

    In this paper, we present a rule for improving the nonlinear solution with frequency map analysis (FMA) without repeatedly revisiting the optimization algorithm. Two aspects of FMA are emphasized. The first is the tune shift with amplitude, which can be used to improve the harmonic sextupole solution and thus obtain a large dynamic aperture. The second is the tune diffusion rate, which can be used to select a quiet tune. These ideas are applied to the storage ring of the Shanghai Synchrotron Radiation Facility (SSRF), and the detailed procedures, as well as improved solutions, are presented in this paper. The nonlinear behavior of off-momentum particles is also discussed. (authors)

  8. Frequency, Antimicrobial Resistance and Genetic Diversity of Klebsiella pneumoniae in Food Samples.

    Directory of Open Access Journals (Sweden)

    Yumei Guo

    Full Text Available This study aimed to assess the frequency of Klebsiella pneumoniae in food samples and to detect antibiotic resistance phenotypes, antimicrobial resistance genes and the molecular subtypes of the recovered isolates. A total of 998 food samples were collected, and 99 (9.9%) K. pneumoniae strains were isolated; the frequencies were 8.2% (4/49) in fresh raw seafood, 13.8% (26/188) in fresh raw chicken, 11.4% (34/297) in frozen raw food and 7.5% (35/464) in cooked food samples. Antimicrobial resistance was observed against 16 antimicrobials. The highest resistance rate was observed for ampicillin (92.3%), followed by tetracycline (31.3%), trimethoprim-sulfamethoxazole (18.2%), and chloramphenicol (10.1%). Two K. pneumoniae strains were identified as extended-spectrum β-lactamase (ESBL) producers: one strain carried three β-lactamase genes (blaSHV, blaCTX-M-1, and blaCTX-M-10) and one carried only the blaSHV gene. Nineteen multidrug-resistant (MDR) strains were detected; the percentage of MDR strains in fresh raw chicken samples was significantly higher than in other sample types (P<0.05). Six of the 18 trimethoprim-sulfamethoxazole-resistant strains carried the folate pathway inhibitor gene (dhfr). Four isolates were screened by PCR for quinolone resistance genes; aac(6')-Ib-cr, qnrB, qnrA and qnrS were detected. In addition, gyrA gene mutations such as T247A (Ser83Ile), C248T (Ser83Phe), and A260C (Asp87Ala) and a parC C240T (Ser80Ile) mutation were identified. Five isolates were screened for aminoglycoside resistance genes; aacA4, aacC2, and aadA1 were detected. Pulsed-field gel electrophoresis-based subtyping identified 91 different patterns. Our results indicate that food, especially fresh raw chicken, is a reservoir of antimicrobial-resistant K. pneumoniae, and the potential health risks posed by such strains should not be underestimated. Our results demonstrated high prevalence, antibiotic resistance rate and genetic diversity of K. pneumoniae in food in China. Improved

  9. Optical Frequency Optimization of a High Intensity Laser Power Beaming System Utilizing VMJ Photovoltaic Cells

    Science.gov (United States)

    Raible, Daniel E.; Dinca, Dragos; Nayfeh, Taysir H.

    2012-01-01

    An effective form of wireless power transmission (WPT) has been developed to enable extended mission durations, increased coverage and added capabilities for both space and terrestrial applications that may benefit from optically delivered electrical energy. The high intensity laser power beaming (HILPB) system enables long-range optical 'refueling' of electric platforms such as micro unmanned aerial vehicles (MUAV), airships, robotic exploration missions and spacecraft platforms. To further advance the HILPB technology, this investigation focuses on determining the optimal laser wavelength to be used with the HILPB receiver, which utilizes vertical multi-junction (VMJ) photovoltaic cells. Frequency optimization of the laser system is necessary in order to maximize the conversion efficiency at continuous high intensities, and thus increase the delivered power density of the HILPB system. Initial spectral characterizations of the device performed at the NASA Glenn Research Center (GRC) indicate the approximate range of peak optical-to-electrical conversion efficiencies, but these data sets represent transient conditions under lower levels of illumination. Extending these results to high levels of steady-state illumination, with attention to the compatibility of available commercial off-the-shelf semiconductor laser sources and atmospheric transmission constraints, is the primary focus of this paper. Experimental hardware results utilizing high-power continuous wave (CW) semiconductor lasers at four different operational frequencies near the indicated band gap of the photovoltaic VMJ cells are presented and discussed. In addition, the highest receiver power density achieved to date is demonstrated using a single photovoltaic VMJ cell, which provided an exceptionally high electrical output of 13.6 W/sq cm at an optical-to-electrical conversion efficiency of 24 percent. These results are very promising and scalable, as a potential 1.0 sq m HILPB receiver of

  10. High frequency of sub-optimal semen quality in an unselected population of young men

    DEFF Research Database (Denmark)

    Andersen, A G; Jensen, T K; Carlsen, E

    2000-01-01

    for military service, this provided a unique opportunity to study the reproductive function in an unbiased population. Altogether 891 young men delivered a blood sample in which reproductive hormones were measured. From 708 of these men data were also obtained on semen quality and testis size. The median sperm… immotile spermatozoa and follicle stimulating hormone. Possible causes for this high frequency of young men with suboptimal semen quality are obscure and need to be explored. Whether these findings apply to young male populations of comparable countries remains to be seen.

  11. OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Maja Vrkljan

    2004-12-01

    Full Text Available The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. An analytical FAAS method for determining the cobalt, chromium, copper, nickel, lead and zinc content in a gabbro sample and the geochemical standard AGV-1 was applied for verification. Dissolution in mixtures of various inorganic acids was tested, as well as the Na2CO3 fusion technique. The results obtained by the different methods were compared, and dissolution in the mixture of HNO3 + HF is recommended as optimal.

  12. Frequency, stability and differentiation of self-reported school fear and truancy in a community sample

    Directory of Open Access Journals (Sweden)

    Metzke Christa

    2008-07-01

    Full Text Available Abstract Background Surprisingly little is known about the frequency, stability, and correlates of school fear and truancy based on self-reported data of adolescents. Methods Self-reported school fear and truancy were studied in a total of N = 834 subjects of the community-based Zurich Adolescent Psychology and Psychopathology Study (ZAPPS) at two times, with average ages of thirteen and sixteen years. Group definitions were based on two behavioural items of the Youth Self-Report (YSR). Comparisons included a control group without indicators of school fear or truancy. The three groups were compared across questionnaires measuring emotional and behavioural problems, life-events, self-related cognitions, perceived parental behaviour, and perceived school environment. Results The frequency of self-reported school fear decreased over time (6.9 vs. 3.6%) whereas there was an increase in truancy (5.0 vs. 18.4%). Subjects with school fear displayed a pattern of associated internalizing problems and truants were characterized by associated delinquent behaviour. Among other associated psychosocial features, the distress arising from the perceived school environment in students with school fear is most noteworthy. Conclusion These findings from a community study show that school fear and truancy are frequent and display different developmental trajectories. Furthermore, previous results based on smaller, selected clinical samples are corroborated, indicating that the two groups display distinct types of school-related behaviour.

  13. Comparison of mobile and stationary spore-sampling techniques for estimating virulence frequencies in aerial barley powdery mildew populations

    DEFF Research Database (Denmark)

    Hovmøller, M.S.; Munk, L.; Østergård, Hanne

    1995-01-01

    Gene frequencies in samples of aerial populations of barley powdery mildew (Erysiphe graminis f.sp. hordei), which were collected in adjacent barley areas and in successive periods of time, were compared using mobile and stationary sampling techniques. Stationary samples were collected from trap ...

  14. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
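
    A minimal numerical version of the idea, under invented payoffs and costs: draw n free samples from each of two Bernoulli options, choose the higher sample mean, and pick the n that trades off the probability of choosing the worse option against the cost of sampling. This is only a sketch of the cost-constrained optimal sample size notion, not the paper's full decision-theoretic treatment.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    p_a, p_b = 0.6, 0.4       # true (hypothetical) reward probabilities
    cost_per_sample = 0.002   # assumed opportunity cost per free draw
    error_cost = 1.0          # assumed utility lost by choosing the worse option

    def expected_loss(n, reps=20000):
        """Monte Carlo estimate of error probability plus total sampling cost."""
        mean_a = rng.binomial(n, p_a, reps) / n
        mean_b = rng.binomial(n, p_b, reps) / n
        p_err = np.mean(mean_a <= mean_b)   # ties counted as errors, conservatively
        return error_cost * p_err + cost_per_sample * 2 * n

    losses = {n: expected_loss(n) for n in range(1, 51)}
    print("optimal draws per option:", min(losses, key=losses.get))
    ```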

  15. The frequency of sexual dysfunctions in male partners of women with vaginismus in a Turkish sample.

    Science.gov (United States)

    Dogan, S; Dogan, M

    2008-01-01

    The aim of this investigation is to determine the sexual history traits, sexual satisfaction level and frequency of sexual dysfunctions in men whose partners have vaginismus. The study included 32 male partners of vaginismic patients, who presented at a psychiatry department. Subjects were evaluated by a semi-structured questionnaire. The questionnaire was developed by researchers for assessing sexually dysfunctional patients and included detailed questions with regard to socio-demographic variables, general medical and sexual history. All participants also received the Golombok Rust Inventory of Sexual Satisfaction (GRISS). According to DSM-IV-TR criteria, 65.6% of the investigated males were diagnosed with one or more sexual dysfunctions. The most common problem was premature ejaculation (50%) and the second one was erectile dysfunction (28%). The transformed GRISS subscale scores provided similar data. It is concluded that the assessment of sexual functions of males who have vaginismic partners should be an integral part of the management procedure of vaginismus for optimal outcome.

  16. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    Science.gov (United States)

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimizing the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can effectively be used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey was applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expected distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD which uses the digital gradient of the gridded ECa data as weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The
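
    The sketch below implements the MMSD criterion in its barest form: spatial simulated annealing that spreads n points over a unit square by minimizing the mean distance from evaluation locations to their nearest sample. It omits everything field-specific (constraints, ECa-gradient weighting, kriging variance), and all numbers are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_samples, n_eval = 15, 2000
    eval_pts = rng.random((n_eval, 2))   # stand-in for the field's evaluation grid
    pts = rng.random((n_samples, 2))     # initial sampling scheme

    def mmsd(p):
        """Mean distance from each evaluation point to its nearest sample."""
        d = np.linalg.norm(eval_pts[:, None, :] - p[None, :, :], axis=2)
        return d.min(axis=1).mean()

    temp, current = 0.05, mmsd(pts)
    for _ in range(5000):
        cand = pts.copy()
        i = rng.integers(n_samples)
        cand[i] = np.clip(cand[i] + rng.normal(0, 0.05, 2), 0, 1)  # perturb one point
        val = mmsd(cand)
        # Metropolis acceptance: always take improvements, sometimes worse moves.
        if val < current or rng.random() < np.exp((current - val) / temp):
            pts, current = cand, val
        temp *= 0.999  # geometric cooling law

    print("final MMSD:", round(current, 4))
    ```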

  17. Surface analyses of electropolished niobium samples for superconducting radio frequency cavity

    International Nuclear Information System (INIS)

    Tyagi, P. V.; Nishiwaki, M.; Saeki, T.; Sawabe, M.; Hayano, H.; Noguchi, T.; Kato, S.

    2010-01-01

    The performance of superconducting radio frequency niobium cavities is sometimes limited by contamination present on the cavity surface. In recent years extensive research has been done to enhance cavity performance by applying improved surface treatments such as mechanical grinding, electropolishing (EP), chemical polishing, tumbling, etc., followed by various rinsing methods such as ultrasonic pure water rinse, alcohol rinse, high pressure water rinse, hydrogen peroxide rinse, etc. Although good cavity performance has been obtained lately by various post-EP cleaning methods, the detailed nature of the surface contaminants is still not fully characterized, and further efforts in this area are desired. Prior x-ray photoelectron spectroscopy (XPS) analyses of EPed niobium samples treated with fresh EP acid demonstrated that the surfaces were covered mainly with niobium oxide (Nb2O5) along with carbon; in addition, small quantities of sulfur and fluorine were found in secondary ion mass spectrometry (SIMS) analysis. In this article, the authors present analyses of surface contamination for a series of EPed niobium samples located at various positions of a single-cell niobium cavity followed by ultrapure water rinsing, as well as their endeavor to understand the aging effect of the EP acid solution in terms of the contamination present at the inner surface of the cavity, with the help of surface analytical tools such as XPS, SIMS, and scanning electron microscopy at KEK.

  18. Surface analyses of electropolished niobium samples for superconducting radio frequency cavity

    Energy Technology Data Exchange (ETDEWEB)

    Tyagi, P. V.; Nishiwaki, M.; Saeki, T.; Sawabe, M.; Hayano, H.; Noguchi, T.; Kato, S. [GUAS, Tsukuba, Ibaraki 305-0801 (Japan); KEK, Tsukuba, Ibaraki 305-0801 (Japan); KAKEN Inc., Hokota, Ibaraki 311-1416 (Japan); GUAS, Tsukuba, Ibaraki 305-0801 (Japan) and KEK, Tsukuba, Ibaraki 305-0801 (Japan)

    2010-07-15

    The performance of superconducting radio frequency niobium cavities is sometimes limited by contamination present on the cavity surface. In recent years extensive research has been done to enhance cavity performance by applying improved surface treatments such as mechanical grinding, electropolishing (EP), chemical polishing, tumbling, etc., followed by various rinsing methods such as ultrasonic pure water rinse, alcohol rinse, high pressure water rinse, hydrogen peroxide rinse, etc. Although good cavity performance has been obtained lately by various post-EP cleaning methods, the detailed nature of the surface contaminants is still not fully characterized, and further efforts in this area are desired. Prior x-ray photoelectron spectroscopy (XPS) analyses of EPed niobium samples treated with fresh EP acid demonstrated that the surfaces were covered mainly with niobium oxide (Nb2O5) along with carbon; in addition, small quantities of sulfur and fluorine were found in secondary ion mass spectrometry (SIMS) analysis. In this article, the authors present analyses of surface contamination for a series of EPed niobium samples located at various positions of a single-cell niobium cavity followed by ultrapure water rinsing, as well as their endeavor to understand the aging effect of the EP acid solution in terms of the contamination present at the inner surface of the cavity, with the help of surface analytical tools such as XPS, SIMS, and scanning electron microscopy at KEK.

  19. Characteristic of selected frequency luminescence for samples collected in deserts north to Beijing

    International Nuclear Information System (INIS)

    Li Dongxu; Wei Mingjian; Wang Junping; Pan Baolin; Zhao Shiyuan; Liu Zhaowen

    2009-01-01

    Surface sand samples were collected at eight sites in the Horqin and Otindag deserts located north of Beijing. A BG2003 luminescence spectrograph was used to analyze the emitted photons, and characteristic spectra of the selected frequency luminescence were obtained. High intensities of emitted photons were found under thermal stimulation from 85 °C to 135 °C and from 350 °C to 400 °C, belonging to traps at 4.13 eV (300 nm), 4.00 eV (310 nm), 3.88 eV (320 nm) and 2.70 eV (460 nm); the photons belonging to the traps at 4.00 eV (310 nm), 3.88 eV (320 nm) and 2.70 eV (460 nm) were stimulated by green laser. Sand samples from all eight sites respond at each wavelength to increases in a definite radiation dose, which makes the characteristic spectrum a radiation dosimetry basis for dating. The characteristic spectra also show definite regional features. (authors)

  20. Frequency and antimicrobial susceptibility of acinetobacter species isolated from blood samples of paediatric patients

    International Nuclear Information System (INIS)

    Javed, A.; Zafar, A.; Ejaz, H.; Zubair, M.

    2012-01-01

    Objective: Acinetobacter species is a major nosocomial pathogen causing serious infections in immunocompromised and hospitalized patients. The aim of this study was to determine the frequency and antimicrobial susceptibility pattern of Acinetobacter species in blood samples of paediatric patients. Methodology: This cross-sectional observational study was conducted from January to October 2011 at The Children's Hospital and Institute of Child Health, Lahore. A total of 12,032 blood samples were analysed during the study period. Acinetobacter species were identified and tested for antimicrobial susceptibility by the Kirby-Bauer disc diffusion method. Results: The blood cultures showed growth in 1,141 cases, out of which 46 (4.0%) were Acinetobacter species. The gender distribution of Acinetobacter species was 29 (63.0%) in males and 17 (37.0%) in females. Good antimicrobial susceptibility of Acinetobacter species was seen with sulbactam-cefoperazone (93.0%) and with imipenem and meropenem (82.6%), whereas susceptibility to … (30.4%) was poor. Conclusion: The results of the present study show a high rate of resistance of Acinetobacter species to cephalosporins in nosocomial infections. Sulbactam-cefoperazone, carbapenems and piperacillin-tazobactam showed effective antimicrobial susceptibility against Acinetobacter species. (author)

  1. Landslide Susceptibility Assessment Using Frequency Ratio Technique with Iterative Random Sampling

    Directory of Open Access Journals (Sweden)

    Hyun-Joo Oh

    2017-01-01

    Full Text Available This paper assesses the performance of landslide susceptibility analysis using the frequency ratio (FR) with iterative random sampling. A pair of before-and-after digital aerial photographs with 50 cm spatial resolution was used to detect landslide occurrences in the Yongin area, Korea. Iterative random sampling was run ten times in total, each time generating the training and validation datasets. Thirteen landslide causative factors were derived from the topographic, soil, forest, and geological maps. The FR scores were calculated from the causative factors and training occurrences in each of the ten runs. Ten landslide susceptibility maps were obtained by integrating the causative factors with their assigned FR scores, and each map was validated against the corresponding validation dataset. The FR method achieved susceptibility accuracies from 89.48% to 93.21%, always above 89%. Moreover, the ten-fold iterative FR modeling may contribute to a better understanding of a regularized relationship between the causative factors and landslide susceptibility. This makes it possible to incorporate knowledge-driven considerations of the causative factors into landslide susceptibility analysis, and the approach can be extensively applied to other areas.
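
    The core FR computation is simple enough to show directly: the FR score of a factor class is the proportion of landslide occurrences in that class divided by the class's areal proportion, with values above 1 indicating a positive association. The counts below are synthetic, purely for illustration.

    ```python
    import numpy as np

    # Hypothetical pixel counts for four classes of one causative factor
    # (e.g. slope classes) and the landslide pixels observed in each class.
    class_pixels     = np.array([50000, 30000, 15000, 5000])
    landslide_pixels = np.array([   40,    90,    60,   10])

    fr = (landslide_pixels / landslide_pixels.sum()) / (class_pixels / class_pixels.sum())
    print(np.round(fr, 2))  # >1 means the class is positively associated with landslides

    # A susceptibility map is then obtained by summing, at each pixel, the FR
    # scores of the classes that pixel falls into across all causative factors.
    ```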

  2. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    International Nuclear Information System (INIS)

    Oliveira, Karina B. de; Oliveira, Bras H. de

    2013-01-01

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments to optimize sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 deg C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min-1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  3. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments to optimize sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 deg C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min-1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  4. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on the error-similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps for a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  5. Optimizing headspace sampling temperature and time for analysis of volatile oxidation products in fish oil

    DEFF Research Database (Denmark)

    Rørbæk, Karen; Jensen, Benny

    1997-01-01

    Headspace-gas chromatography (HS-GC), based on adsorption to Tenax GR(R), thermal desorption and GC, has been used for analysis of volatiles in fish oil. To optimize sampling conditions, the effect of heating the fish oil at various temperatures and times was evaluated from anisidine values (AV...

  6. Isolation and identification of phytase-producing strains from soil samples and optimization of production parameters

    Directory of Open Access Journals (Sweden)

    Masoud Mohammadi

    2017-09-01

    Discussion and conclusion: Penicillium sp., isolated from a soil sample near Qazvin, was able to produce highly active phytase under optimized environmental conditions, and could be a suitable candidate for commercial production of phytase for use as a supplement in the poultry feed industry.

  7. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; and (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. The theoretical considerations are integrated with a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations; one of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
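
    As a concrete anchor for the allocation question, the snippet below computes the classical Neyman (minimum-variance) allocation for stratified sampling, the textbook special case that stratified-design results like these reduce to: stratum sample sizes proportional to N_h·S_h. The stratum sizes and standard deviations are hypothetical.

    ```python
    import numpy as np

    N_h = np.array([5000, 3000, 2000])   # stratum population sizes (assumed)
    S_h = np.array([12.0, 20.0, 35.0])   # stratum standard deviations (assumed)
    n_total = 400                        # total sample size to distribute

    # Neyman allocation: n_h proportional to N_h * S_h minimizes the variance
    # of the stratified mean estimator for a fixed total sample size.
    n_h = n_total * (N_h * S_h) / np.sum(N_h * S_h)
    print(np.round(n_h).astype(int))     # more samples where variability is high
    ```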

  8. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques and to ensure the accuracy of optimization. However, earlier studies have drawbacks because their optimization loops involve three phases as well as empirical parameters. We propose a united sampling criterion that simplifies the algorithm and achieves the global optimum of constrained problems without any empirical parameters. It is able to select points located in the feasible region with high model uncertainty as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model because the sample points are not concentrated in extremely small regions, as in super-EGO. The performance of the proposed method, in terms of the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
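
    For context, the snippet below shows the standard expected-improvement (EI) infill criterion on which EGO-type algorithms build; the paper's united criterion extends this kind of sampling toward constraint boundaries. The surrogate means and standard deviations here are toy values that would normally come from a kriging model.

    ```python
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best):
        """EI for minimization, given surrogate mean mu and std sigma at candidates."""
        sigma = np.maximum(sigma, 1e-12)   # guard against zero predictive variance
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    # Toy surrogate predictions at three candidate points:
    mu = np.array([1.2, 0.9, 1.0])
    sigma = np.array([0.05, 0.30, 0.60])
    print(expected_improvement(mu, sigma, f_best=1.0))
    # EI favors points with low predicted value and/or high model uncertainty.
    ```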

  9. The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes

    Science.gov (United States)

    Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark

    2000-01-01

    Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log(base 10) (L(sub [OII])/W) approximately > 35 [or radio luminosity log(base 10) (L(sub 151)/ W/Hz.sr) approximately > 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle theta(sub trans) approximately equal 53 deg. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in theta(sub trans) and/or a gradual increase in the fraction of lightly-reddened quasars with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.
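
    The quoted half-opening angle maps directly onto the high-luminosity quasar fraction: for randomly oriented sources, the fraction viewed within theta_trans of the radio axis is 1 − cos(theta_trans). A two-line check of that geometry:

    ```python
    import numpy as np

    # For random orientations, P(viewing angle < theta) = 1 - cos(theta),
    # so a torus half-opening angle of ~53 deg implies a quasar fraction of ~0.4.
    theta_trans = np.radians(53.0)
    print(round(1.0 - np.cos(theta_trans), 2))  # -> 0.4
    ```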

  10. NSGA-II based optimal control scheme of wind thermal power system for improvement of frequency regulation characteristics

    Directory of Open Access Journals (Sweden)

    S. Chaine

    2015-09-01

    Full Text Available This work presents a methodology to optimize the controller parameters of a doubly fed induction generator modeled for frequency regulation in an interconnected two-area wind-integrated thermal power system. The gains of the integral controller of the automatic generation control loop and the proportional and derivative controllers of the doubly fed induction generator inertial control loop are optimized in a coordinated manner by employing the multi-objective non-dominated sorting genetic algorithm-II (NSGA-II). To reduce the number of optimization parameters, a sensitivity analysis is performed, which shows that the three controller parameters mentioned above are the most sensitive. NSGA-II exhibits better optimization efficiency than linear programming, the genetic algorithm, particle swarm optimization, and the cuckoo search algorithm. The designed optimal controller performs robustly even under variations in wind energy penetration levels, disturbances, parameters and operating conditions of the system.

  11. Time optimization of 90Sr measurements: Sequential measurement of multiple samples during ingrowth of 90Y

    International Nuclear Information System (INIS)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-01-01

    The aim of this paper is to contribute to a more rapid determination of a series of samples containing 90 Sr by making the Cherenkov measurement of the daughter nuclide 90 Y more time efficient. There are many instances when optimization of the measurement method is favorable, such as situations requiring rapid results in order to make urgent decisions, or, on the other hand, the need to maximize the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the ingrowth time as well as individual measurement times for n samples in a series. This work is focused on the measurement of 90 Y during ingrowth, after an initial chemical separation of strontium, under the assumption that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time is less than when the same measurement time is used for all samples. It was found that by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm. - Highlights: • An approach roughly a factor of three more efficient than an un-optimized method. • The optimization gives a more efficient use of instrument time. • The efficiency increase ranges from a factor of three to 10, for 10 to 40 samples.
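
    The physics underlying the timing model is the ingrowth law for 90Y after a clean strontium separation: with the 90Sr activity effectively constant over the measurement, A_Y(t) = A_Sr·(1 − e^(−λ_Y·t)), where the 90Y half-life is about 64 h. The sketch below inverts this to get the waiting time for a target ingrowth fraction; it is a building block only, not the paper's full multi-sample optimization.

    ```python
    import numpy as np

    half_life_h = 64.0                 # approximate 90Y half-life in hours
    lam = np.log(2) / half_life_h      # decay constant lambda_Y

    def time_to_ingrowth(fraction):
        """Hours until 90Y activity reaches the given fraction of equilibrium."""
        return -np.log(1.0 - fraction) / lam

    for f in (0.5, 0.9, 0.95):
        print(f"{int(f * 100)}% ingrowth after {time_to_ingrowth(f):.1f} h")
    ```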

  12. Optimal sampling plan for clean development mechanism energy efficiency lighting projects

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2013-01-01

    Highlights: • A metering cost minimisation model is built to assist the sampling plan for CDM projects. • The model minimises the total metering cost by the determination of optimal sample size. • The required 90/10 criterion sampling accuracy is maintained. • The proposed metering cost minimisation model is applicable to other CDM projects as well. - Abstract: Clean development mechanism (CDM) project developers are always interested in achieving required measurement accuracies with the least metering cost. In this paper, a metering cost minimisation model is proposed for the sampling plan of a specific CDM energy efficiency lighting project. The problem arises from the particular CDM sampling requirement of 90% confidence and 10% precision for the small-scale CDM energy efficiency projects, which is known as the 90/10 criterion. The 90/10 criterion can be met through solving the metering cost minimisation problem. All the lights in the project are classified into different groups according to uncertainties of the lighting energy consumption, which are characterised by their statistical coefficient of variance (CV). Samples from each group are randomly selected to install power meters. These meters include less expensive ones with less functionality and more expensive ones with greater functionality. The metering cost minimisation model will minimise the total metering cost through the determination of the optimal sample size at each group. The 90/10 criterion is formulated as constraints to the metering cost objective. The optimal solution to the minimisation problem will therefore minimise the metering cost whilst meeting the 90/10 criterion, and this is verified by a case study. Relationships between the optimal metering cost and the population sizes of the groups, CV values and the meter equipment cost are further explored in three simulations. The metering cost minimisation model proposed for lighting systems is applicable to other CDM projects as
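
    For a single homogeneous group, the 90/10 criterion translates into a familiar sample-size formula under the normal approximation: n0 = (z·CV/e)^2 with z = 1.645 (90% confidence) and e = 0.1 (10% precision), adjusted by a finite-population correction. The sketch below is this generic calculation, not the paper's cost-minimisation model, and the CV values and population size are invented.

    ```python
    import numpy as np

    def sample_size_90_10(cv, population):
        """Meters needed for 90% confidence / 10% precision in one group."""
        n0 = (1.645 * cv / 0.10) ** 2                            # infinite population
        return int(np.ceil(n0 * population / (n0 + population)))  # FPC-adjusted

    for cv in (0.3, 0.5, 1.0):
        print(f"CV = {cv}: n = {sample_size_90_10(cv, population=10000)}")
    ```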

  13. Crack identification method in beam-like structures using changes in experimentally measured frequencies and Particle Swarm Optimization

    Science.gov (United States)

    Khatir, Samir; Dekemele, Kevin; Loccufier, Mia; Khatir, Tawfiq; Abdel Wahab, Magd

    2018-02-01

    In this paper, a technique is presented for the detection and localization of an open crack in beam-like structures using experimentally measured natural frequencies and the Particle Swarm Optimization (PSO) method. The technique considers the variation in local flexibility near the crack. The natural frequencies of a cracked beam are determined experimentally and numerically using the Finite Element Method (FEM). The optimization algorithm is programmed in MATLAB. The algorithm is used to estimate the location and severity of a crack by minimizing the differences between measured and calculated frequencies. The method is verified using experimentally measured data on a cantilever steel beam. The Fourier transform is adopted to improve the frequency resolution. The results demonstrate the good accuracy of the proposed technique.

  14. Optimized Irregular Low-Density Parity-Check Codes for Multicarrier Modulations over Frequency-Selective Channels

    Directory of Open Access Journals (Sweden)

    Valérian Mannoni

    2004-09-01

    Full Text Available This paper deals with optimized channel coding for OFDM transmissions (COFDM) over frequency-selective channels using irregular low-density parity-check (LDPC) codes. Firstly, we introduce a new characterization of LDPC code irregularity called the “irregularity profile.” Then, using this parameterization, we derive a new criterion based on the minimization of the transmission bit error probability to design an irregular LDPC code suited to the frequency selectivity of the channel. The optimization of this criterion is done using the Gaussian approximation technique. Simulations illustrate the good performance of our approach for different transmission channels.

  15. Design, Simulation, and Optimization of a Frequency-Tunable Vibration Energy Harvester That Uses a Magnetorheological Elastomer

    Directory of Open Access Journals (Sweden)

    Wan Sun

    2015-01-01

    Full Text Available This study focuses on the design, simulation, and load power optimization for the development of a novel frequency-tunable electromagnetic vibration energy harvester. A unique characteristic of magnetorheological elastomers (MREs) is utilized: the shear modulus can be varied by changing the strength of an applied magnetic field. The electromagnetic energy harvester is fabricated, the external electric circuit is connected, and the performance is evaluated through a series of experiments. The resonant frequencies and the parasitic damping constant are measured experimentally for different tuning magnet gap distances, which validates the application of the MRE to the development of a frequency-tunable energy harvesting system. The harvested energy of the system is measured by the voltage across the load resistor. The maximum load power is attained by optimizing the external circuit connected to the coil system. Analysis results are presented for harvesting the maximum load power in terms of the coil parameters and the external circuit resistance. The optimality of the load resistance is validated by comparing the analytical results with experimental results. The optimal load resistances under various resonance frequencies are also found for the design and composition of the optimal energy harvesting circuit of the harvester system.

  16. Optimization of Pulsed Operation of the Superconducting Radio-Frequency (SRF) Cavities at the Spallation Neutron Source (SNS)

    International Nuclear Information System (INIS)

    Kim, Sang-Ho; Campisi, Isidoro E.

    2007-01-01

    In order to address the optimization of pulsed operation, a systematic computational analysis has been made and compared with operational experience with the superconducting radio-frequency (SRF) cavities at the Spallation Neutron Source (SNS). The analysis indicates that the SNS SRF cavities can be operated at temperatures higher than 2.1 K, a fact resulting from the pulsed nature of the superconducting cavities, the specific configuration of the existing cryogenic plant, and the operating frequency.

  17. Optimal feeding frequency of captive head-started green turtles (Chelonia mydas).

    Science.gov (United States)

    Kanghae, H; Thongprajukaew, K; Yeetam, P; Jarit-Ngam, T; Hwan-Air, W; Rueangjeen, S; Kittiwattanawong, K

    2017-08-01

    Optimal feeding frequency was investigated to improve the head-started propagation programme for juvenile green turtles (Chelonia mydas). The 15-day-old turtles (25-26 g body weight) were fed to ad libitum intake at one (1MD), two (2MD), three (3MD) or four (4MD) meals daily over a 3-month trial. Responses in growth, feed utilization, faecal characteristics, haematological parameters and carapace elemental composition were used to compare treatment effects. At the end of the feeding trial, no treatment had induced mortality. Growth performance in terms of weight gain and specific growth rate was similar in turtles fed 2MD, 3MD or 4MD (p > 0.05), but the 1MD group differed from these (p < 0.05). Turtles fed 2MD had significantly lower feed intake than the 3MD and 4MD groups, but the feed conversion ratios were similar. Faecal digestive enzyme analysis indicated higher catabolism of lipid and protein in the deprivation group (1MD) compared with turtles fed at least twice daily. The feeding frequency did not affect the specific activities of carbohydrate-digesting enzymes. The enzyme activity results were corroborated by the transition enthalpy characteristics of the faeces, indicating the nutrients remaining after digestion. The 2MD treatment also improved the haematological characteristics and the carapace quality relative to low or excess feeding. Overall, the findings indicate that feeding juvenile green turtles twice a day is the preferred option in their head-started propagation: it promotes growth, reduces feed consumption, and improves health and carapace quality. Journal of Animal Physiology and Animal Nutrition © 2016 Blackwell Verlag GmbH.

  18. Single-trial log transformation is optimal in frequency analysis of resting EEG alpha.

    Science.gov (United States)

    Smulders, Fren T Y; Ten Oever, Sanne; Donkers, Franc C L; Quaedflieg, Conny W E M; van de Ven, Vincent

    2018-02-01

    The appropriate definition and scaling of the magnitude of electroencephalogram (EEG) oscillations is an underdeveloped area. The aim of this study was to optimize the analysis of resting EEG alpha magnitude, focusing on alpha peak frequency and nonlinear transformation of alpha power. A family of nonlinear transforms, Box-Cox transforms, was applied to find the transform that (a) maximized an undisputed effect: the increase in alpha magnitude when the eyes are closed (Berger effect), and (b) made the distribution of alpha magnitude closest to normal across epochs within each participant, or across participants. The transformations were performed either at the single-epoch level or at the epoch-average level. Alpha peak frequency showed large individual differences, yet good correspondence between various ways of estimating it in 2 min of eyes-closed and 2 min of eyes-open resting EEG data. Both alpha magnitude and the Berger effect were larger for individual alpha than for a generic (8-12 Hz) alpha band. The log-transform on single epochs (a) maximized the t-value of the contrast between the eyes-open and eyes-closed conditions when tested within each participant, and (b) rendered near-normally distributed alpha power across epochs and participants, thereby making further transformation of epoch averages superfluous. The results suggest that the log-normal distribution is a fundamental property of variations in alpha power across time on the order of seconds. Moreover, effects on alpha power appear to be multiplicative rather than additive. These findings support the use of the log-transform on single epochs to achieve appropriate scaling of alpha magnitude. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
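
    The study's central claim has a compact numerical analogue: if single-epoch alpha power is log-normally distributed, the maximum likelihood Box-Cox parameter should land near 0, i.e. at the log transform. The sketch below demonstrates this on simulated (not EEG) data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    # Simulated "alpha power" across 500 epochs, drawn log-normal by construction.
    alpha_power = rng.lognormal(mean=1.0, sigma=0.6, size=500)

    # stats.boxcox returns the transformed data and the ML estimate of lambda;
    # lambda -> 0 corresponds to the log transform, lambda = 1 to no transform.
    transformed, lam = stats.boxcox(alpha_power)
    print(f"estimated Box-Cox lambda: {lam:.2f}")  # expected to be near 0
    ```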

  19. Optimized IMAC-IMAC protocol for phosphopeptide recovery from complex biological samples

    DEFF Research Database (Denmark)

    Ye, Juanying; Zhang, Xumin; Young, Clifford

    2010-01-01

    using Fe(III)-NTA IMAC resin, and it proved to be highly selective in the phosphopeptide enrichment of a highly diluted standard sample (1:1000) prior to MALDI MS analysis. We also observed that higher iron purity led to increased IMAC enrichment efficiency. The optimized method was then adapted to phosphoproteome analyses of cell lysates of high protein complexity. From either 20 microg of mouse sample or 50 microg of Drosophila melanogaster sample, more than 1000 phosphorylation sites were identified in each study using IMAC-IMAC and LC-MS/MS. We demonstrate efficient separation of multiply phosphorylated… characterization of phosphoproteins in functional phosphoproteomics research projects.

  20. Scenario-based stochastic optimal operation of wind, photovoltaic, pump-storage hybrid system in frequency-based pricing

    International Nuclear Information System (INIS)

    Zare Oskouei, Morteza; Sadeghi Yazdankhah, Ahmad

    2015-01-01

    Highlights: • Two-stage objective function is proposed for optimization problem. • Hourly-based optimal contractual agreement is calculated. • Scenario-based stochastic optimization problem is solved. • Improvement of system frequency by utilizing PSH unit. - Abstract: This paper proposes an operating strategy for a microgrid-connected wind farm, photovoltaic and pump-storage hybrid system. The strategy consists of two stages. In the first stage, the optimal hourly contractual agreement is determined. The second stage maximizes profit by adapting the energy management strategy of the wind and photovoltaic units in coordination with the optimum operating schedule of the storage device, under frequency-based pricing in a day-ahead electricity market. The pump-storage hydro plant is utilized to minimize unscheduled interchange flow and maximize the system benefit by participating in frequency control based on energy price. Because of uncertainties in the power generation of renewable sources and in market prices, generation scheduling is modeled as a stochastic optimization problem. Parameter uncertainties are modeled by scenario generation and scenario reduction. The optimization problem is formulated and solved with the General Algebraic Modeling System (GAMS) using CPLEX. In order to verify the efficiency of the method, the algorithm is applied to various scenarios with different wind and photovoltaic power productions in a day-ahead electricity market. The numerical results demonstrate the effectiveness of the proposed approach.
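
    The scenario machinery can be prototyped outside GAMS. Below is an illustrative Python sketch, with invented numbers, of Monte Carlo scenario generation followed by fast-forward scenario reduction, one common reduction scheme; this is not the paper's implementation:

    ```python
    # Illustrative sketch: Monte Carlo generation of hourly wind-output
    # scenarios, then fast-forward selection to keep a tractable subset for
    # a stochastic day-ahead model. All numbers are placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    n_scenarios, horizon, n_keep = 200, 24, 10

    # Hourly wind-output scenarios (per-unit), truncated to [0, 1].
    scenarios = np.clip(rng.normal(0.5, 0.2, (n_scenarios, horizon)), 0.0, 1.0)
    probs = np.full(n_scenarios, 1.0 / n_scenarios)

    # Fast-forward selection: greedily pick the scenario that minimizes the
    # probability-weighted distance of all scenarios to the selected set.
    dist = np.linalg.norm(scenarios[:, None, :] - scenarios[None, :, :], axis=2)
    selected = []
    for _ in range(n_keep):
        remaining = [i for i in range(n_scenarios) if i not in selected]
        def cost(c):
            cols = selected + [c]
            return np.sum(probs * dist[:, cols].min(axis=1))
        selected.append(min(remaining, key=cost))

    # Redistribute probability: each dropped scenario adds its mass to the
    # nearest kept scenario.
    new_probs = np.zeros(n_keep)
    nearest = dist[:, selected].argmin(axis=1)
    for i, k in enumerate(nearest):
        new_probs[k] += probs[i]
    print("kept scenarios:", selected, "probabilities:", new_probs.round(3))
    ```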

  1. Maternal obesity alters immune cell frequencies and responses in umbilical cord blood samples.

    Science.gov (United States)

    Wilson, Randall M; Marshall, Nicole E; Jeske, Daniel R; Purnell, Jonathan Q; Thornburg, Kent; Messaoudi, Ilhem

    2015-06-01

    Maternal obesity is one of several key factors thought to modulate neonatal immune system development. Data from murine studies demonstrate worse outcomes in models of infection, autoimmunity, and allergic sensitization in offspring of obese dams. In humans, children born to obese mothers are at increased risk for asthma. These findings suggest a dysregulation of immune function in the children of obese mothers; however, the underlying mechanisms remain poorly understood. The aim of this study was to examine the relationship between maternal body weight and the human neonatal immune system. Umbilical cord blood samples were collected from infants born to lean, overweight, and obese mothers. Frequency and function of major innate and adaptive immune cell populations were quantified using flow cytometry and multiplex analysis of circulating factors. Compared to babies born to lean mothers, babies of obese mothers had fewer eosinophils and CD4 T helper cells, reduced monocyte and dendritic cell responses to Toll-like receptor ligands, and increased plasma levels of IFN-α2 and IL-6 in cord blood. These results support the hypothesis that maternal obesity influences programming of the neonatal immune system, providing a potential link to increased incidence of chronic inflammatory diseases such as asthma and cardiovascular disease in the offspring. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Design of aseismic class components: measurement of frequency parameters and optimization of analytical models

    International Nuclear Information System (INIS)

    Panet, M.; Delmas, J.; Ballester, J.L.

    1993-04-01

    In each plant unit, there are about 250 earthquake-qualified safety-related valves. Justifying their aseismic capacity has proved complex. The structures are so diversified that it is not easy for designers to determine a generic model. Generally speaking, the models tend to overestimate the resonance frequencies. An approach more representative of the actual structure of the component was consequently sought, on which qualification of technological options with respect to the safety authorities would be based, thereby optimizing vibrating-table qualification test schedules. The paper describes application of the approximate spectral identification method from the OPTDIM system, which determines basic structural modal data to forecast the approximate eigenfrequencies of a sub-domain materialized by the component. It is used for a posteriori justification of topworks in operating equipment (900 MWe series) with respect to the f ≥ 33 Hz condition, which guarantees zero amplification of seismically induced internal loads. In the seismic design context, and supplementing the preliminary eigenfrequency studies, inverse-method solution techniques are used to define the most representative model of the modal behaviour of an electrically controlled motor-operated valve. (authors). 6 figs., 6 tabs., 11 refs

  3. Compressive sensing-based wideband capacitance measurement with a fixed sampling rate lower than the highest exciting frequency

    International Nuclear Information System (INIS)

    Xu, Lijun; Ren, Ying; Sun, Shijie; Cao, Zhang

    2016-01-01

    In this paper, an under-sampling method for wideband capacitance measurement is proposed based on the compressive sensing strategy. As the excitation signal is sparse in the frequency domain, a compressed sampling method using a random demodulator was adopted, which greatly decreases the sampling rate. In addition, four switches were used to replace the multiplier in the random demodulator. As a result, not only can the sampling rate be much smaller than the signal excitation frequency, but the circuit's structure is also simpler and its power consumption lower. A hardware prototype was constructed to validate the method. In the prototype, an excitation voltage with a frequency up to 200 kHz was applied to a capacitance-to-voltage converter. The output signal of the converter was randomly modulated by a pseudo-random sequence through the four switches. After a low-pass filter, the signal was sampled by an analog-to-digital converter at a sampling rate of 50 kHz, four times lower than the highest exciting frequency. The frequency and amplitude of the signal were then reconstructed to obtain the measured capacitance. Both theoretical analysis and experiments were carried out to show the feasibility of the proposed method and to evaluate the performance of the prototype, including its linearity, sensitivity, repeatability, accuracy and stability within a given measurement range. (paper)
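
    The random-demodulator principle is easy to simulate. The following toy Python sketch (an idealized discrete-time model with invented parameters, not the authors' hardware or code) chips a two-tone signal with a pseudo-random ±1 sequence, integrates over blocks to mimic the low-pass filter and low-rate ADC, and recovers the tones by orthogonal matching pursuit:

    ```python
    # Toy random-demodulator model: a sparse two-tone signal is modulated by
    # a +/-1 chipping sequence, block-integrated (low-pass + low-rate ADC),
    # and the tone frequencies are recovered on a known grid.
    import numpy as np

    rng = np.random.default_rng(2)
    N, R = 256, 8                 # Nyquist-rate samples, decimation factor
    M = N // R                    # low-rate measurements (8x under-sampling)
    t = np.arange(N) / N
    freqs = np.arange(1, 60)      # candidate tone frequencies (grid)

    # Sparse input: two tones from the grid.
    true = {17: 1.0, 43: 0.6}
    x = sum(a * np.cos(2 * np.pi * f * t) for f, a in true.items())

    chips = rng.choice([-1.0, 1.0], size=N)          # pseudo-random modulation
    y = (chips * x).reshape(M, R).sum(axis=1)        # integrate-and-sample

    # Dictionary: each column is the measurement of one candidate tone.
    A = np.stack([(chips * np.cos(2 * np.pi * f * t)).reshape(M, R).sum(axis=1)
                  for f in freqs], axis=1)

    # Orthogonal matching pursuit, 2 iterations (known sparsity).
    support, resid = [], y.copy()
    for _ in range(2):
        support.append(int(np.argmax(np.abs(A.T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
    print("recovered:", {int(freqs[s]): round(float(c), 2)
                         for s, c in zip(support, coef)})
    ```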

  4. Optimization of sampling for the determination of the mean Radium-226 concentration in surface soil

    International Nuclear Information System (INIS)

    Williams, L.R.; Leggett, R.W.; Espegren, M.L.; Little, C.A.

    1987-08-01

    This report describes a field experiment that identifies an optimal method for determination of compliance with the US Environmental Protection Agency's Ra-226 guidelines for soil. The primary goals were to establish practical levels of accuracy and precision in estimating the mean Ra-226 concentration of surface soil in a small contaminated region; to obtain empirical information on composite vs. individual soil sampling and on random vs. uniformly spaced sampling; and to examine the practicality of using gamma measurements in predicting the average surface radium concentration and in estimating the number of soil samples required to obtain a given level of accuracy and precision. Numerous soil samples were collected on each of six sites known to be contaminated with uranium mill tailings. Three types of samples were collected on each site: 10-composite samples, 20-composite samples, and individual or post hole samples. Ten-composite sampling is the method of choice because it yields a given level of accuracy and precision for the least cost. Gamma measurements can be used to reduce surface soil sampling on some sites. 2 refs., 5 figs., 7 tabs

  5. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameters s and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE), or parallel tempering, is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter-choice problem can be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  6. Plasma treatment of bulk niobium surface for superconducting rf cavities: Optimization of the experimental conditions on flat samples

    Directory of Open Access Journals (Sweden)

    M. Rašković

    2010-11-01

    Full Text Available Accelerator performance, in particular the average accelerating field and the cavity quality factor, depends on the physical and chemical characteristics of the superconducting radio-frequency (SRF) cavity surface. Plasma based surface modification provides an excellent opportunity to eliminate nonsuperconductive pollutants in the penetration depth region and to remove the mechanically damaged surface layer, which improves the surface roughness. Here we show that the plasma treatment of bulk niobium (Nb) presents an alternative surface preparation method to the commonly used buffered chemical polishing and electropolishing methods. We have optimized the experimental conditions in the microwave glow discharge system and their influence on the Nb removal rate on flat samples. We have achieved an etching rate of 1.7 μm/min using only 3% chlorine in the reactive mixture. Combining a fast etching step with a moderate one, we have improved the surface roughness without exposing the sample surface to the environment. We intend to apply the optimized experimental conditions to the preparation of single cell cavities, pursuing the improvement of their rf performance.

  7. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
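
    A toy decision-theoretic computation can reproduce the quoted O(N^(1/2)) growth. In the sketch below the assumptions are mine (normal endpoints, a normal prior on the treatment effect, and a utility counting the expected benefit to the N - 2n future patients); it is not the paper's exact utility, but the optimal per-arm size n* scales roughly with the square root of the population size:

    ```python
    # Toy decision-theoretic sample-size calculation: 2n patients are enrolled
    # in a two-arm trial; the remaining N - 2n receive whichever arm looked
    # better. Maximizing the expected benefit to those future patients makes
    # the optimal n grow like sqrt(N).
    import numpy as np
    from scipy.stats import norm

    sigma, tau = 1.0, 0.3                      # outcome sd, prior sd of effect
    deltas = np.linspace(-1.5, 1.5, 2001)      # grid over the effect size
    prior = norm.pdf(deltas, scale=tau)
    prior /= prior.sum()

    def expected_gain_per_patient(n):
        # P(trial picks the truly better arm | delta), n patients per arm.
        p_correct = norm.cdf(np.abs(deltas) * np.sqrt(n) / (np.sqrt(2) * sigma))
        return np.sum(prior * np.abs(deltas) * p_correct)

    def optimal_n(N):
        n_grid = np.unique(np.geomspace(1, N // 2, 400).astype(int))
        utility = (N - 2 * n_grid) * np.array(
            [expected_gain_per_patient(k) for k in n_grid])
        return int(n_grid[np.argmax(utility)])

    for N in [1_000, 10_000, 100_000]:
        n_star = optimal_n(N)
        print(f"N={N:>7}  n*={n_star:>5}  n*/sqrt(N)={n_star / np.sqrt(N):.2f}")
    ```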

  8. Optimization of the two-sample rank Neyman-Pearson detector

    Science.gov (United States)

    Akimov, P. S.; Barashkov, V. M.

    1984-10-01

    The development of optimal algorithms concerned with rank considerations in the case of finite sample sizes involves considerable mathematical difficulties. The present investigation provides results related to the design and analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ..., xn in the experimental communications channel. The rank of an observation is computed on the basis of relations between x and the variable y, representing interference. Attention is given to conditions in the absence of a signal, the probability of detection of an arriving signal, details regarding the utilization of the Neyman-Pearson criterion, the scheme of an optimal rank, multichannel, incoherent detector, and an analysis of the detector.
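
    A generic two-sample rank-sum detector in the Neyman-Pearson setting can be sketched as follows (a standard construction for illustration, not necessarily the authors' exact statistic); the threshold is fixed from the distribution-free noise-only null so the false-alarm probability stays at alpha:

    ```python
    # Two-sample rank detector: rank the n observations within the pooled
    # sample of observations and m reference noise readings, and compare the
    # rank sum against a Neyman-Pearson threshold from the noise-only null.
    import numpy as np

    rng = np.random.default_rng(3)
    n, m, alpha = 16, 64, 0.05    # observations, noise readings, Pfa

    def rank_stat(x, y):
        """Sum of the ranks of x within the pooled sample (x, y)."""
        pooled = np.concatenate([x, y])
        ranks = pooled.argsort().argsort() + 1
        return ranks[:len(x)].sum()

    # Noise-only Monte Carlo fixes the threshold. The statistic is
    # distribution-free, so any continuous noise law gives the same null.
    null = np.array([rank_stat(rng.normal(size=n), rng.normal(size=m))
                     for _ in range(20_000)])
    threshold = np.quantile(null, 1 - alpha)

    # Detection probability for a weak constant signal in Gaussian noise.
    trials = [rank_stat(rng.normal(loc=0.5, size=n), rng.normal(size=m))
              for _ in range(2_000)]
    pd = np.mean(np.array(trials) > threshold)
    print(f"threshold={threshold:.0f}  Pd={pd:.2f}")
    ```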

  9. Split Hopkinson Resonant Bar Test for Sonic-Frequency Acoustic Velocity and Attenuation Measurements of Small, Isotropic Geologic Samples

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, S.

    2011-04-01

    Mechanical properties (seismic velocities and attenuation) of geological materials are often frequency dependent, which necessitates measurements of the properties at frequencies relevant to a problem at hand. Conventional acoustic resonant bar tests allow measuring seismic properties of rocks and sediments at sonic frequencies (several kilohertz) that are close to the frequencies employed for geophysical exploration of oil and gas resources. However, the tests require a long, slender sample, which is often difficult to obtain from the deep subsurface or from weak and fractured geological formations. In this paper, an alternative measurement technique to conventional resonant bar tests is presented. This technique uses only a small, jacketed rock or sediment core sample mediating a pair of long, metal extension bars with attached seismic source and receiver - the same geometry as the split Hopkinson pressure bar test for large-strain, dynamic impact experiments. Because of the length and mass added to the sample, the resonance frequency of the entire system can be lowered significantly, compared to the sample alone. The experiment can be conducted under elevated confining pressures up to tens of MPa and temperatures above 100 °C, and concurrently with x-ray CT imaging. The described Split Hopkinson Resonant Bar (SHRB) test is applied in two steps. First, extension and torsion-mode resonance frequencies and attenuation of the entire system are measured. Next, numerical inversions for the complex Young's and shear moduli of the sample are performed. One particularly important step is the correction of the inverted Young's moduli for the effect of sample-rod interfaces. Examples of the application are given for homogeneous, isotropic polymer samples and a natural rock sample.

  10. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
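
    A minimal sketch of the weighted binary matrix sampling idea is given below (the model, error metric and update rule are simplified placeholders, not the published VISSA code): variables are drawn into sub-models with adaptive inclusion probabilities, and the probabilities grow for variables that appear in the better-scoring sub-models:

    ```python
    # Simplified WBMS loop: sample binary variable subsets with per-variable
    # inclusion weights, score each sub-model by cross-validated MSE, and
    # pull the weights toward the best-performing subsets.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n, p, informative = 80, 40, 5
    X = rng.normal(size=(n, p))
    y = X[:, :informative] @ rng.uniform(1, 2, informative) \
        + 0.1 * rng.normal(size=n)

    weights = np.full(p, 0.5)                   # inclusion probabilities
    for step in range(15):
        # Binary sampling matrix: each row is one sub-model's variable subset.
        B = rng.random((200, p)) < weights
        scores = np.array([
            -cross_val_score(LinearRegression(), X[:, row], y,
                             scoring="neg_mean_squared_error", cv=3).mean()
            if row.any() else np.inf
            for row in B])
        best = B[np.argsort(scores)[:20]]       # keep the best 10% sub-models
        weights = 0.5 * weights + 0.5 * best.mean(axis=0)  # shrink the space
    print("top variables:", np.argsort(weights)[::-1][:informative])
    ```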

  11. Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples

    Directory of Open Access Journals (Sweden)

    Hyunok Oh

    2003-05-01

    Full Text Available In multimedia and graphics applications, data samples of nonprimitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model with a given execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and by 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirement.

  12. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    Directory of Open Access Journals (Sweden)

    Arnaud Chapon

    Full Text Available Searching for low-energy β contamination in industrial environments requires Liquid Scintillation Counting. This indirect measurement method presupposes fine control from sampling to the measurement itself. Thus, in this paper, we focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters and definition of energy windows, by maximizing a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except for those relative to some PerkinElmer-specific parameters, most of the results presented here can be extended to other counters. Keywords: Liquid Scintillation Counting (LSC), PerkinElmer, Tri-Carb, Smear, Swipe
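
    The window-optimization step can be illustrated with a hedged sketch (synthetic spectra and invented numbers, not the paper's data): given source and background spectra binned over the counter's channels, pick the energy window maximizing the usual figure of merit FOM = E^2/B, with E the counting efficiency in percent and B the background count in the window:

    ```python
    # Grid search for the energy window maximizing FOM = E^2 / B over
    # synthetic source and background spectra.
    import numpy as np

    rng = np.random.default_rng(5)
    channels = np.arange(1024)
    source = np.exp(-0.5 * ((channels - 180) / 70) ** 2)   # beta-like hump
    source = rng.poisson(2_000 * source / source.sum())
    background = rng.poisson(1.0, size=1024)               # flat background
    total_emitted = 10_000                                 # known activity

    def fom(window):
        lo, hi = window
        efficiency = 100 * source[lo:hi].sum() / total_emitted   # E in %
        return efficiency ** 2 / max(background[lo:hi].sum(), 1)

    best = max(((lo, hi)
                for lo in range(0, 1024, 16)
                for hi in range(lo + 16, 1024, 16)), key=fom)
    print("optimal window:", best, "FOM:", round(fom(best), 1))
    ```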

  13. Frequency-Modulated Continuous Flow Analysis Electrospray Ionization Mass Spectrometry (FM-CFA-ESI-MS) for Sample Multiplexing.

    Science.gov (United States)

    Filla, Robert T; Schrell, Adrian M; Coulton, John B; Edwards, James L; Roper, Michael G

    2018-02-20

    A method for multiplexed sample analysis by mass spectrometry without the need for chemical tagging is presented. In this new method, each sample is pulsed at a unique frequency, mixed, and delivered to the mass spectrometer while maintaining a constant total flow rate. Reconstructed ion currents are then a time-dependent signal consisting of the sum of the ion currents from the various samples. Spectral deconvolution of each reconstructed ion current reveals the identity of each sample, encoded by its unique frequency, and its concentration, encoded by the peak height in the frequency domain. This technique is different from other approaches that have been described, which have used modulation techniques to increase the signal-to-noise ratio of a single sample. As proof of concept of this new method, two samples containing up to 9 analytes were multiplexed. The linear dynamic range of the calibration curve was increased with extended acquisition times of the experiment and longer oscillation periods of the samples. Because of the combination of the samples, salt had little effect on the ability of this method to achieve relative quantitation. Continued development of this method is expected to allow for increased numbers of samples that can be multiplexed.
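
    The frequency encoding and spectral deconvolution can be illustrated numerically. In the toy sketch below (flow shares, frequencies and concentrations are invented, and the makeup flow that would hold the total constant is not modeled), each sample's contribution is read off at its own frequency in the Fourier spectrum of the summed ion current:

    ```python
    # Two samples pulsed at distinct frequencies; the summed ion current is
    # Fourier transformed and each sample's peak height tracks its analyte
    # concentration.
    import numpy as np

    fs, duration = 10.0, 600.0              # spectra per second, seconds
    t = np.arange(0, duration, 1 / fs)
    f_a, f_b = 0.05, 0.08                   # per-sample modulation frequencies
    conc_a, conc_b = 1.0, 0.4               # relative analyte concentrations

    # Each sample's flow share is modulated at its own frequency.
    share_a = 0.5 + 0.5 * np.sin(2 * np.pi * f_a * t)
    share_b = 0.5 + 0.5 * np.sin(2 * np.pi * f_b * t)
    ion_current = (conc_a * share_a + conc_b * share_b
                   + 0.05 * np.random.default_rng(6).normal(size=t.size))

    spectrum = np.abs(np.fft.rfft(ion_current)) / t.size * 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    for name, f in [("sample A", f_a), ("sample B", f_b)]:
        peak = spectrum[np.argmin(np.abs(freqs - f))]
        # The modulation amplitude is 0.5 * concentration, so peak * 2
        # recovers the concentration.
        print(f"{name}: peak {peak:.2f} -> concentration ~ {2 * peak:.2f}")
    ```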

  14. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  15. Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2015-01-01

    Roč. 52, č. 2 (2015), s. 419-440 ISSN 0021-9002 Grant - others:GA AV ČR(CZ) 171396 Institutional support: RVO:67985556 Keywords : Dominated Convergence theorem for the expected average criterion * Discrepancy function * Kolmogorov inequality * Innovations * Strong sample-path optimality Subject RIV: BC - Control Systems Theory Impact factor: 0.665, year: 2015 http://library.utia.cas.cz/separaty/2015/E/sladky-0449029.pdf

  16. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Science.gov (United States)

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
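
    One simple construction in the same spirit (not necessarily the authors' exact method): sampling n diploid individuals from a finite population of N diploids amounts to drawing 2n gene copies from 2N without replacement, so the sample allele count is hypergeometric, and inverting the two-tailed hypergeometric test yields a confidence set for the population allele count:

    ```python
    # Confidence interval for a population allele frequency by inverting the
    # hypergeometric test over all candidate population allele counts M.
    from scipy.stats import hypergeom

    def allele_freq_ci(k, n, N, alpha=0.05):
        """CI for the population allele frequency, given k allele copies
        observed in n sampled diploids from a population of N diploids."""
        lo, hi = None, None
        for M in range(0, 2 * N + 1):          # candidate population counts
            dist = hypergeom(2 * N, M, 2 * n)
            # Keep M if the observed k falls in neither alpha/2 tail.
            if dist.cdf(k) > alpha / 2 and dist.sf(k - 1) > alpha / 2:
                lo = M if lo is None else lo
                hi = M
        return lo / (2 * N), hi / (2 * N)

    # 14 allele copies among 25 sampled individuals from 500 in total.
    print(allele_freq_ci(k=14, n=25, N=500))
    ```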

  17. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Directory of Open Access Journals (Sweden)

    Tak Fung

    Full Text Available The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

  18. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Roč. 19, č. 30 (2012), s. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  19. Suicide in bipolar disorder in a national English sample, 1996-2009: frequency, trends and characteristics.

    Science.gov (United States)

    Clements, C; Morriss, R; Jones, S; Peters, S; Roberts, C; Kapur, N

    2013-12-01

    Bipolar disorder (BD) has been reported to be associated with high risk of suicide. We aimed to investigate the frequency and characteristics of suicide in people with BD in a national sample. Suicide in BD in England from 1996 to 2009 was explored using descriptive statistics on data collected by the National Confidential Inquiry into Suicide and Homicide by People with Mental Illness (NCI). Suicide cases with a primary diagnosis of BD were compared to suicide cases with any other primary diagnosis. During the study period 1489 individuals with BD died by suicide, an average of 116 cases/year. Compared to other primary diagnosis suicides, those with BD were more likely to be female, more than 5 years post-diagnosis, current/recent in-patients, to have more than five in-patient admissions, and to have depressive symptoms. In BD suicides the most common co-morbid diagnoses were personality disorder and alcohol dependence. Approximately 40% were not prescribed mood stabilizers at the time of death. More than 60% of BD suicides were in contact with services the week prior to suicide but were assessed as low risk. Given the high rate of suicide in BD and the low estimates of risk, it is important that health professionals can accurately identify patients most likely to experience poor outcomes. Factors such as alcohol dependence/misuse, personality disorder, depressive illness and current/recent in-patient admission could characterize a high-risk group. Future studies need to operationalize clinically useful indicators of suicide risk in BD.

  20. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has elaborated spatial descriptions and hydrological behaviors. This trend, however, is accompanied by increasing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms utilizing iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
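
    The GLUE-with-heuristic-sampling idea can be sketched on a toy problem (everything below, including the model, data and behavioural threshold, is an invented placeholder): a hand-rolled differential-evolution loop populates the sample archive, and the behavioural subset then yields prediction bounds:

    ```python
    # Toy GLUE with a differential-evolution sampler: archive every evaluated
    # parameter set, keep the behavioural ones, and form prediction bounds.
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.linspace(0, 10, 50)
    true_params = np.array([2.0, 0.5])
    obs = true_params[0] * np.exp(-true_params[1] * t) \
        + 0.05 * rng.normal(size=t.size)

    def model(p):                  # toy exponential-recession response
        return p[0] * np.exp(-p[1] * t)

    def likelihood(p):             # Nash-Sutcliffe-style efficiency
        return 1 - np.sum((model(p) - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Simplified differential evolution, archiving all accepted members.
    pop = rng.uniform([0, 0], [5, 2], size=(30, 2))
    archive = []
    for gen in range(40):
        for i in range(len(pop)):
            a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
            trial = np.clip(a + 0.8 * (b - c), [0, 0], [5, 2])
            if likelihood(trial) > likelihood(pop[i]):
                pop[i] = trial
            archive.append(pop[i].copy())
    arch = np.array(archive)

    # GLUE step: behavioural sets and 5-95% prediction bounds (unweighted
    # quantiles here for brevity; GLUE proper uses likelihood weights).
    L = np.array([likelihood(p) for p in arch])
    behavioural = arch[L > 0.7]
    preds = np.array([model(p) for p in behavioural])
    lower, upper = np.quantile(preds, [0.05, 0.95], axis=0)
    print(f"{len(behavioural)} behavioural sets; bounds at t=0: "
          f"[{lower[0]:.2f}, {upper[0]:.2f}]")
    ```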

  1. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    Science.gov (United States)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valuable information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct a data-driven industrial process parameter model from mechanical vibration and acoustic frequency spectra, based on selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert's cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.

  2. Tuning and optimization of the field distribution for 4-rod Radio Frequency Quadrupole linacs

    International Nuclear Information System (INIS)

    Schmidt, Janet Susan

    2014-01-01

    In this thesis, the tuning process of the 4-rod Radio Frequency Quadrupole has been analyzed; a theory for predicting the tuning plates' influence on the longitudinal voltage distribution was developed, together with RF design options for the optimization of the fringe fields. The basic principles of the RFQ's particle dynamics and resonant behavior are introduced in the theory part of this thesis. All studies presented are based on work on four RFQs from recent linac projects. These RFQs are described in one chapter, where the projects are introduced together with details about the RFQ parameters and performance. In the meantime, two of these RFQs are in full operation at NSCL at MSU and FNAL, one is operating in the test phase of the MedAustron Cancer Therapy Center, and the fourth, for LANL, is about to be built. The longitudinal voltage distribution has been studied in detail, with a focus on the influence of the RF design through tuning elements and parameters such as the electrode overlap or the distance between stems. The field-flatness simulation methods developed as part of this thesis, as well as simulations with CST MWS, have been analyzed and compared to measurements. The lumped circuit model has proven to predict results with an accuracy that can be used in the tuning process of 4-rod RFQs. Together with results from the tuning studies, the studies on the fringe fields of the 4-rod structure led to a proposal for a 4-rod RFQ model with an improved field distribution in the transverse and longitudinal electric fields.

  3. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions in the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected the fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing different sampling rates, we found that including 10% of interior voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2-3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can reduce optimization time by more than a factor of 2 without significantly degrading dose quality.
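
    A schematic version of the voxel-sampling step (the influence matrix and all sizes are random stand-ins, not clinical data): interior voxels are clustered by their influence-matrix signatures and a fixed fraction is drawn from each cluster, while boundary voxels would all be kept:

    ```python
    # Cluster interior voxels by influence-matrix signature, then draw a
    # pre-set fraction from each cluster.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(8)
    n_interior, n_beamlets, rate = 5_000, 60, 0.10
    influence = rng.gamma(2.0, 1.0, size=(n_interior, n_beamlets))  # D_ij

    clusters = KMeans(n_clusters=50, n_init=5,
                      random_state=0).fit_predict(influence)
    sampled = []
    for c in np.unique(clusters):
        members = np.flatnonzero(clusters == c)
        take = max(1, int(rate * len(members)))   # pre-set sampling rate
        sampled.extend(rng.choice(members, take, replace=False))
    print(f"kept {len(sampled)} of {n_interior} interior voxels "
          f"(boundary voxels would all be kept)")
    ```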

  4. Evaluation of sample preparation methods and optimization of nickel determination in vegetable tissues

    Directory of Open Access Journals (Sweden)

    Rodrigo Fernando dos Santos Salazar

    2011-02-01

    Full Text Available Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated by food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption (ETAAS) in vegetable samples; and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. For both analytical techniques, ETAAS and FAAS, the results were validated by analyte addition and recovery tests. The most viable method tested for quantification of this element was HClO4-HNO3 wet digestion. All samples but the carrot tissue collected in Lorena contained Ni levels above those permitted by the Brazilian Ministry of Health. The most disturbing results, requiring more detailed studies, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.

  5. Demonstration and Optimization of BNFL's Pulsed Jet Mixing and RFD Sampling Systems Using NCAW Simulant

    International Nuclear Information System (INIS)

    Bontha, J.R.; Golcar, G.R.; Hannigan, N.

    2000-01-01

    The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries, and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes it very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft-ID, 15-ft-tall dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loading; minimize/optimize air usage by changing sequencing of the Pulsed Jet mixers or by altering cycle times; and demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%.

  6. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors shows that an increase in the external or internal coefficient has a negative influence on the sampling level, the changing rate of the potential market has no significant influence, and repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, yielding a two-stage method to estimate the impact of the relevant parameters when the parameters are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  7. Optimal sample to tracer ratio for isotope dilution mass spectrometry: the polyisotopic case

    International Nuclear Information System (INIS)

    Laszlo, G.; Ridder, P. de; Goldman, A.; Cappis, J.; Bievre, P. de

    1991-01-01

    The Isotope Dilution Mass Spectrometry (IDMS) measurement technique provides a means for determining the unknown amount of various isotopes of an element in a sample solution of known mass. The sample solution is mixed with an auxiliary solution, or tracer, containing a known amount of the same element with the same isotopes but of different relative abundances or isotopic composition, and the induced change in the isotopic composition is measured by isotope mass spectrometry. The technique involves the measurement of the abundance ratio of each isotope to a (same) reference isotope in the sample solution, in the tracer solution, and in the blend of the sample and tracer solutions. These isotope ratio measurements, the known element amount in the tracer, and the known mass of sample solution are used to calculate the unknown amount of one isotope in the sample solution, and subsequently the unknown amount of the element. The purpose of this paper is to examine the optimization of the ratio of the estimated unknown amount of element in the sample solution to the known amount of element in the tracer solution, in order to minimize the relative uncertainty in the determination of the unknown amount of element.
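
    For the two-isotope case, the classic error-magnification argument can be reproduced numerically (the values below are arbitrary, and this is an illustration of the standard result rather than the paper's polyisotopic treatment): with a constant relative precision on the measured blend ratio R_b, the relative uncertainty on the sample amount is minimized when R_b is near the geometric mean of the sample ratio R_x and tracer ratio R_y:

    ```python
    # Error magnification for a two-isotope IDMS blend: the sample amount is
    # proportional to (R_y - R_b)/(R_b - R_x), so propagating a constant
    # relative error on R_b gives the factor below, minimized at
    # R_b = sqrt(R_x * R_y).
    import numpy as np

    R_x, R_y = 0.01, 100.0          # isotope ratios of sample and tracer

    def error_magnification(R_b):
        """Relative error in the amount per unit relative error in R_b."""
        return R_b * (R_y - R_x) / ((R_y - R_b) * (R_b - R_x))

    R_b = np.linspace(0.05, 50, 100_000)
    best = R_b[np.argmin(error_magnification(R_b))]
    print(f"optimal blend ratio ~ {best:.2f}, geometric mean "
          f"sqrt(R_x*R_y) = {np.sqrt(R_x * R_y):.2f}")
    ```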

  8. Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt; Sørensen, Michael

    Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale…

  9. Flaw-size measurement in weld samples by ultrasonic frequency analysis

    International Nuclear Information System (INIS)

    Adler, L.; Cook, K.V.; Whaley, H.L. Jr.; McClung, R.W.

    1975-01-01

    An ultrasonic frequency-analysis technique was developed and applied to characterize flaws in an 8-in. (203-mm) thick heavy-section steel weld specimen. The technique uses a multitransducer system. The spectrum of the received broad-band signal is frequency analyzed at two different receivers for each of the flaws. From the two spectra, the size and orientation of the flaw are determined by use of an analytic model proposed earlier. (auth)

  10. [Sampling optimization for tropical invertebrates: an example using dung beetles (Coleoptera: Scarabaeinae) in Venezuela].

    Science.gov (United States)

    Ferrer-Paris, José Rafael; Sánchez-Mercado, Ada; Rodríguez, Jon Paul

    2013-03-01

    The development of efficient sampling protocols is an essential prerequisite to evaluate and identify priority conservation areas. There are few protocols for fauna inventory and monitoring at wide geographical scales in the tropics, where the complexity of communities and high biodiversity levels make the implementation of efficient protocols more difficult. We propose here a simple strategy to optimize the capture of dung beetles, applied to sampling with baited traps and generalizable to other sampling methods. We analyzed data from eight transects sampled between 2006-2008 with the aim of developing a uniform sampling design that allows confident estimation of species richness, abundance and composition at wide geographical scales. We examined four characteristics of any sampling design that affect the effectiveness of the sampling effort: the number of traps, sampling duration, type and proportion of bait, and spatial arrangement of the traps along transects. We used species accumulation curves, rank-abundance plots, indicator species analysis, and multivariate correlograms. We captured 40 337 individuals (115 species/morphospecies of 23 genera). Most species were attracted by both dung and carrion, but two thirds had greater relative abundance in traps baited with human dung. Different aspects of the sampling design influenced each diversity attribute in different ways. To obtain reliable richness estimates, the number of traps was the most important aspect. Accurate abundance estimates were obtained when the sampling period was increased, while the spatial arrangement of traps was determinant in capturing the species composition pattern. An optimum sampling strategy for accurate estimates of richness, abundance and diversity should: (1) set 50-70 traps to maximize the number of species detected, (2) sample during 48-72 hours and set trap groups along the transect to reliably estimate species abundance, and (3) set traps in groups of at least 10 traps to …

  11. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    Science.gov (United States)

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  12. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    International Nuclear Information System (INIS)

    Carpy, R; Picker, G; Amann, B; Ranebo, H; Vincent-Bonnieu, S; Minster, O; Winter, J; Dettmann, J; Castiglione, L; Höhler, R; Langevin, D

    2011-01-01

    At the end of 2009 and in early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. With the use of this cell, different sample compositions of 'wet foams' have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the ISS Fluid Science Laboratory. The sample cell supports multiple observation methods, such as Diffusing-Wave and Diffuse Transmission Spectrometry, Time Resolved Correlation Spectroscopy and microscope observation; all of these methods are applied in the cell, which has a relatively small experiment volume. These units will be on-orbit replaceable sets that will allow processing of multiple sample compositions (in the range of >40).

  13. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming problem without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals can obtain large sampling sizes. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments showed that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.

  14. Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.

    Science.gov (United States)

    Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh

    2012-02-28

    Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.

  15. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.

    Science.gov (United States)

    Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with the urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and its reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed strong linearity over a range of concentrations from 10^6 to 10^0 leptospires/mL, with low limits of detection. The optimized protocol for quantification of Leptospira in environmental waters (river, pond and sewage) consists of concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden.

  16. Investigation and optimization of low-frequency noise performance in readout electronics of dc superconducting quantum interference device

    International Nuclear Information System (INIS)

    Zhao, Jing; Zhang, Yi; Krause, Hans-Joachim; Lee, Yong-Ho

    2014-01-01

    We investigated and optimized the low-frequency noise characteristics of a preamplifier used for the readout of direct current superconducting quantum interference devices (SQUIDs). When the SQUID output was detected directly using a room-temperature low-voltage-noise preamplifier, the low-frequency noise of the SQUID system was found to be dominated by the input current noise of the preamplifier in the case of a large dynamic resistance of the SQUID. To reduce the current noise of the preamplifier in the low-frequency range, we investigated the dependence of total preamplifier noise on the collector current and source resistance. When the collector current was decreased from 8.4 mA to 3 mA in the preamplifier made of 3 parallel SSM2220 transistor pairs, the low-frequency total voltage noise of the preamplifier (at 0.1 Hz) decreased by about 3 times for a source resistance of 30 Ω, whereas the white noise level remained nearly unchanged. Since the relative contributions of the preamplifier's input voltage and current noise differ depending on the dynamic resistance or flux-to-voltage transfer of the SQUID, the results showed that the total noise of a SQUID system in the low-frequency range can be improved significantly by optimizing the preamplifier circuit parameters, mainly the collector current in the case of low-noise bipolar transistor pairs.

  18. Optimization of carrier frequency and duty cycle for pulse modulation of biological signals.

    Science.gov (United States)

    Tandon, S N; Singh, S; Sharma, P K; Khosla, S

    1980-10-01

Digital modulation techniques are commonly used for the recording and transmission of biological signals. Hitherto, the choice of subcarrier frequency for the recording or transmission of biological signals has been arbitrary, and this usually results in a poor signal-to-noise ratio (SNR) due to the limited frequency characteristics of the system. In the present study the frequency characteristics of the system (first-order approximation) have been taken to be those of a Butterworth filter. Computations based on this assumption show that for a given input signal there exists an optimum subcarrier frequency and a corresponding optimum duty cycle which give the maximum SNR of the system. For convenience, a nomogram has been prepared, and it has been shown that for a given frequency response of the system, the nomogram can be used for selecting an optimum subcarrier frequency and a corresponding duty cycle. The theoretical formulations have been verified experimentally.
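
    The computation behind such a nomogram can be pictured as evaluating, for each candidate subcarrier frequency and duty cycle, how the harmonics of the pulse train are attenuated by the band-limited system. Below is a minimal sketch of that evaluation step; the fourth-order Butterworth channel, its 10 kHz cutoff and the candidate operating point are illustrative assumptions.

      import numpy as np
      from scipy.signal import butter, freqs

      # Hypothetical channel: 4th-order analog Butterworth low-pass, 10 kHz cutoff.
      b, a = butter(4, 2 * np.pi * 10e3, btype='low', analog=True)

      def carrier_through_channel(fc, duty, n_harmonics=10):
          """Return (harmonic frequencies, magnitudes after the channel) for a
          unit rectangular pulse train at subcarrier frequency fc with the given
          duty cycle. The n-th Fourier magnitude of the train is 2*duty*|sinc(n*duty)|."""
          n = np.arange(1, n_harmonics + 1)
          cn = 2 * duty * np.abs(np.sinc(n * duty))
          _, h = freqs(b, a, worN=2 * np.pi * fc * n)   # response at each harmonic (rad/s)
          return fc * n, cn * np.abs(h)

      # Evaluate one candidate operating point: 5 kHz subcarrier, 50% duty cycle.
      freqs_hz, mags = carrier_through_channel(5e3, 0.5)
      for f, m in zip(freqs_hz, mags):
          print(f"{f/1e3:5.1f} kHz  ->  {m:.4f}")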

  19. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, referred to as the 'failure probability function (FPF)'. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The computational effort required for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology.
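
    The weighting idea can be sketched concretely: samples drawn once under a reference design are reused to estimate the failure probability at any other design, with each sample weighted by the ratio of probability densities. A toy Monte Carlo illustration, assuming a normally distributed variable whose mean is the design parameter and a simple limit-state function, is given below.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      def g(x):
          """Toy limit-state function: failure when g(x) <= 0."""
          return 6.0 - x

      # Draw once under a reference design d0; reuse for every candidate d.
      d0, sigma, n = 3.0, 1.0, 200_000
      x = rng.normal(d0, sigma, n)
      fail = (g(x) <= 0.0)

      def p_fail(d):
          """Failure probability at design d as a weighted sum over the
          d0-samples (FPF idea: w_i = f(x_i; d) / f(x_i; d0))."""
          w = stats.norm.pdf(x, d, sigma) / stats.norm.pdf(x, d0, sigma)
          return np.mean(fail * w)

      for d in (2.5, 3.0, 3.5):
          exact = stats.norm.sf(6.0, d, sigma)
          print(f"d={d}: weighted estimate {p_fail(d):.2e}, exact {exact:.2e}")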

  20. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Energy Technology Data Exchange (ETDEWEB)

    Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it [Sapienza Università di Roma, Dipartimento di Ingegneria Civile, Edile e Ambientale (Italy); Alfonso, L. [Hydroinformatics Chair Group, UNESCO-IHE, Delft (Netherlands); Di Baldassarre, G. [Department of Earth Sciences, Program for Air, Water and Landscape Sciences, Uppsala University (Sweden)

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.
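
    One common way to operationalize the 'maximum information, minimum redundancy' criterion is a greedy search that repeatedly adds the candidate cross section whose (discretized) water-level series contributes the most entropy relative to its mutual information with the sections already selected, with the bridge sections forced into the initial set. The sketch below shows such a selection loop on synthetic series; the quantization step and the simple additive score are assumptions, not the paper's exact objective.

      import numpy as np
      from collections import Counter

      def entropy(*series):
          """Joint Shannon entropy (bits) of one or more discretized series."""
          joint = Counter(zip(*series))
          p = np.array(list(joint.values()), dtype=float)
          p /= p.sum()
          return float(-(p * np.log2(p)).sum())

      def greedy_select(series, k, preselected=()):
          """Pick k sections maximizing added entropy minus redundancy."""
          chosen = list(preselected)            # e.g., sections forced at bridges
          candidates = [i for i in range(len(series)) if i not in chosen]
          while len(chosen) < k and candidates:
              def score(i):
                  h = entropy(series[i])
                  redundancy = sum(entropy(series[i]) + entropy(series[j])
                                   - entropy(series[i], series[j]) for j in chosen)
                  return h - redundancy         # information minus mutual information
              best = max(candidates, key=score)
              chosen.append(best)
              candidates.remove(best)
          return chosen

      # Toy data: simulated water levels at 12 candidate sections, quantized to 0.1 m.
      rng = np.random.default_rng(1)
      levels = rng.normal(size=(12, 500)).cumsum(axis=1) * 0.05
      series = [tuple(np.round(s, 1)) for s in levels]
      print(greedy_select(series, k=5, preselected=[3]))   # section 3 = a bridge, say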

  2. Analysis of the rebalancing frequency in log-optimal portfolio selection

    OpenAIRE

    Kuhn, Daniel; Luenberger, David G.

    2010-01-01

    In a dynamic investment situation, the right timing of portfolio revisions and adjustments is essential to sustain long-term growth. A high rebalancing frequency reduces the portfolio performance in the presence of transaction costs, whereas a low rebalancing frequency entails a static investment strategy that hardly reacts to changing market conditions. This article studies a family of portfolio problems in a Black-Scholes type economy which depend parametrically on the rebalancing frequency...

  3. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used for anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. From the DVHs, quantities such as the conformity index COIN and COIN integrals are derived. This is achieved by using partially uniformly distributed sampling points, with a density in each region obtained from a survey of the gradients or the variance of the dose distribution in that region. The shape of the sampling regions is adapted to the patient anatomy and to the shape and size of the implant. The application of this method requires a single preprocessing step, which takes only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using a number of sampling points 5-10 times smaller than with uniformly distributed points.
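
    The allocation step resembles Neyman-style stratified sampling: regions where a pilot survey shows strong dose variability receive proportionally more points, and the DVH is assembled from volume-weighted per-region tallies. A compact one-dimensional sketch under an assumed analytic dose profile is given below.

      import numpy as np

      rng = np.random.default_rng(2)

      def dose(x):
          """Toy 1-D dose profile: inverse-square falloff from a source at x = 0."""
          return 1.0 / (0.1 + x**2)

      # Disjoint strata along the profile; near the source the gradient is steep.
      strata = [(0.0, 1.0), (1.0, 4.0), (4.0, 10.0)]
      widths = np.array([hi - lo for lo, hi in strata])

      # Pilot survey: estimate the dose variability inside each stratum.
      stds = np.array([dose(rng.uniform(lo, hi, 500)).std() for lo, hi in strata])

      # Neyman-style allocation: sampling density follows width * std-dev.
      n_total = 6000
      n_per = np.maximum(1, np.round(
          n_total * widths * stds / (widths * stds).sum())).astype(int)

      # Assemble the cumulative DVH (fraction of volume receiving >= threshold),
      # weighting each stratum by its share of the total volume.
      thresholds = np.logspace(-2, 1, 50)
      dvh = np.zeros_like(thresholds)
      for (lo, hi), n, w in zip(strata, n_per, widths):
          d = dose(rng.uniform(lo, hi, n))
          dvh += w / widths.sum() * (d[None, :] >= thresholds[:, None]).mean(axis=1)

      print("points per stratum:", n_per)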

  4. Optimal Scheduling of Distributed Energy Resources and Responsive Loads in Islanded Microgrids Considering Voltage and Frequency Security Constraints

    DEFF Research Database (Denmark)

    Vahedipour-Dahraie, Mostafa; Najafi, Hamid Reza; Anvari-Moghaddam, Amjad

    2018-01-01

    in islanded MGs with regard to voltage and frequency security constraints. Based on the proposed model, scheduling of the controllable units in both supply and demand sides is done in a way not only to maximize the expected profit of MG operator (MGO), but also to minimize the energy payments of customers...... on the system’s performance in terms of voltage and frequency stability. Moreover, optimal coordination of DERs and responsive loads can increase the expected profit of MGO significantly. The effectiveness of the proposed scheduling approach is verified on an islanded MG test system over a 24-h period....

  5. Optimal sampling plan for clean development mechanism lighting projects with lamp population decay

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2014-01-01

Highlights: • A metering cost minimisation model is built with lamp population decay to optimise the sampling plan of CDM lighting projects. • The model minimises the total metering cost and optimises the annual sample size during the crediting period. • The required 90/10 criterion sampling accuracy is satisfied for each CDM monitoring report. - Abstract: This paper proposes a metering cost minimisation model that minimises metering cost under the constraint of sampling accuracy requirements for clean development mechanism (CDM) energy efficiency (EE) lighting projects. Small-scale (SSC) CDM EE lighting projects usually expect a crediting period of 10 years, during which the lamp population decays over time. The SSC CDM sampling guideline requires that the monitored key parameters for the quantification of carbon emission reductions satisfy a sampling accuracy of 90% confidence and 10% precision, known as the 90/10 criterion. For the existing registered CDM lighting projects, sample sizes are decided either by professional judgment or by rule of thumb, without any optimisation. Lighting samples are randomly selected and their energy consumption is monitored continuously by power meters. In this study, the sample size determination problem is formulated as a metering cost minimisation model by incorporating the linear lamp decay model given by the CDM guideline AMS-II.J. The 90/10 criterion is formulated as a set of constraints on the metering cost minimisation problem. Optimal solutions minimise the metering cost whilst satisfying the 90/10 criterion for each reporting period. The proposed metering cost minimisation model is applicable to other CDM lighting projects with different population decay characteristics as well.
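
    Under the 90/10 criterion, the required sample size follows the standard normal-approximation formula with a finite-population correction, where the population itself shrinks each year according to the decay model. A minimal sketch is shown below; the coefficient of variation and the 7%/year linear decay rate are assumed values, not figures from the guideline.

      import math

      def sample_size_90_10(population, cv=0.5, z=1.645, precision=0.10):
          """Sample size meeting 90% confidence / 10% precision for a mean,
          using the normal approximation with finite-population correction.
          cv is an assumed coefficient of variation of lamp energy use."""
          n0 = (z * cv / precision) ** 2
          return math.ceil(n0 / (1.0 + n0 / population))

      def surviving_lamps(n_lamps, year, annual_decay=0.07):
          """Linear population decay in the spirit of AMS-II.J (rate assumed)."""
          return max(0, round(n_lamps * (1.0 - annual_decay * year)))

      # Annual sample sizes over a 10-year crediting period for 100,000 lamps.
      for year in range(1, 11):
          pop = surviving_lamps(100_000, year)
          print(f"year {year:2d}: population {pop:6d}, "
                f"required sample {sample_size_90_10(pop)}")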

  6. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Tijssen, Rob H.N. [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Senneville, Baudouin D. de [Imaging Division, University Medical Center Utrecht, Utrecht (Netherlands); L' Institut de Mathématiques de Bordeaux, Unité Mixte de Recherche 5251, Centre National de la Recherche Scientifique/University of Bordeaux, Bordeaux (France); Heerkens, Hanne D.; Vulpen, Marco van; Lagendijk, Jan J.W.; Berg, Cornelis A.T. van den [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands)

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  7. Optimal sampling in damage detection of flexural beams by continuous wavelet transform

    International Nuclear Information System (INIS)

    Basu, B; Broderick, B M; Montanari, L; Spagnoli, A

    2015-01-01

Modern measurement techniques are improving in their capability to capture spatial displacement fields occurring in deformed structures with high precision and in a quasi-continuous manner. This in turn has made the use of vibration-based damage identification methods more effective and reliable for real applications. However, practical measurement and data processing issues still present barriers to the application of these methods in identifying several types of structural damage. This paper deals with spatial continuous wavelet transform (CWT) damage identification methods in beam structures, with the aim of addressing the following key questions: (i) can the cost of damage detection be reduced by down-sampling? (ii) what is the minimum number of sampling intervals required for optimal damage detection? The first three free vibration modes of a cantilever and a simply supported beam with an edge open crack are numerically simulated. A thorough parametric study is carried out by taking into account the key parameters governing the problem, including the level of noise, crack depth and location, and the mechanical and geometrical parameters of the beam. The results are employed to assess the optimal number of sampling intervals for effective damage detection.
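
    In practice the detector is the spatial CWT of a measured mode shape: a localized stiffness loss produces a kink that appears as a peak in the wavelet coefficients at the crack location, and down-sampling the shape tests how coarse the measurement grid may become before that peak is lost. A small sketch on a synthetic mode shape with an artificial (deliberately exaggerated) slope discontinuity is given below.

      import numpy as np
      import pywt

      # Synthetic first mode shape of a cantilever-like beam (unit length, 512 pts).
      x = np.linspace(0.0, 1.0, 512)
      mode = 1.0 - np.cos(0.5 * np.pi * x)

      # Add a slope discontinuity at x = 0.4 to mimic an open crack; severity
      # is exaggerated here so the effect is visible without noise.
      crack_at = 0.4
      mode += 0.2 * np.where(x > crack_at, x - crack_at, 0.0)

      # Down-sample to study the effect of the number of sampling intervals.
      step = 4                                  # keep every 4th point
      xs, ms = x[::step], mode[::step]

      # Spatial CWT with a Mexican-hat wavelet; damage appears as a local peak.
      coeffs, _ = pywt.cwt(ms, scales=np.arange(1, 9), wavelet='mexh')
      row = np.abs(coeffs[2])                   # one small scale
      interior = slice(16, -16)                 # skip boundary-distortion zones
      peak = xs[interior][np.argmax(row[interior])]
      print(f"wavelet peak near x = {peak:.2f} (crack at x = {crack_at})")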

  8. Effects of diurnal emission patterns and sampling frequency on precision of measurement methods for daily ammonia emissions from animal houses

    NARCIS (Netherlands)

    Estelles, F.; Calvet, S.; Ogink, N.W.M.

    2010-01-01

    Ammonia concentrations and airflow rates are the main parameters needed to determine ammonia emissions from animal houses. It is possible to classify their measurement methods into two main groups according to the sampling frequency: semi-continuous and daily average measurements. In the first

  9. The effect of sampling frequency on the accuracy of estimates of milk ...

    African Journals Online (AJOL)

    The results of this study support the five-weekly sampling procedure currently used by the South African National Dairy Cattle Performance Testing Scheme. However, replacement of proportional bulking of individual morning and evening samples with a single evening milk sample would not compromise accuracy provided ...

  10. Optimization of a phased-array transducer for multiple harmonic imaging in medical applications: frequency and topology.

    Science.gov (United States)

    Matte, Guillaume M; Van Neer, Paul L M J; Danilouchkine, Mike G; Huijssen, Jacob; Verweij, Martin D; de Jong, Nico

    2011-03-01

Second-harmonic imaging is currently one of the standards in commercial echographic systems for diagnosis, because of its high spatial resolution and low sensitivity to clutter and near-field artifacts. The use of nonlinear phenomena offers a great set of solutions to improve echographic image resolution. To further enhance the resolution and image quality, the combination of the 3rd to 5th harmonics--dubbed the superharmonics--could be used. However, this requires a bandwidth exceeding that of conventional transducers. A promising solution features a phased-array design with interleaved low- and high-frequency elements for transmission and reception, respectively. Because the amplitude of the backscattered higher harmonics at the transducer surface is relatively low, it is highly desirable to increase the sensitivity in reception. Therefore, we investigated the optimization of the number of elements in the receiving aperture as well as their arrangement (topology). A variety of configurations was considered, from one transmit element per receive element (1/2) up to one transmit element per 7 receive elements (1/8). The topologies are assessed based on the ratio of the harmonic peak pressures in the main and grating lobes. Further, the higher harmonic level is maximized by optimization of the center frequency of the transmitted pulse. The achievable SNR for a specific application is a compromise between the frequency-dependent attenuation and nonlinearity at a required penetration depth. To calculate the SNR of the complete imaging chain, we use an approach analogous to the sonar equation used in underwater acoustics. The generated harmonic pressure fields caused by nonlinear wave propagation were modeled with the iterative nonlinear contrast source (INCS) method, the KZK equation, or Burgers' equation. The optimal topology for superharmonic imaging was an interleaved design with 1 transmit element per 6 receive elements. It improves the SNR by ~5 dB compared with

  11. Partner wealth predicts self-reported orgasm frequency in a sample of Chinese women

    NARCIS (Netherlands)

    Pollet, T.V.; Nettle, D.

    There has been considerable speculation about the adaptive significance of the human female orgasm, with one hypothesis being that it promotes differential affiliation or conception with high-quality males. We investigated the relationship between women's self-reported orgasm frequency and the

  12. A novel sampling method for multiple multiscale targets from scattering amplitudes at a fixed frequency

    Science.gov (United States)

    Liu, Xiaodong

    2017-08-01

A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for the characterization of underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For the sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can handle multiple multiscale targets, even when the different components are close to each other.
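
    With discrete far-field data the indicator is a double sum over incident and observation directions, so evaluating it on a whole grid of sampling points is indeed a single matrix product. A compact sketch for a 2-D Helmholtz setting, using the synthetic far-field pattern of one point scatterer, follows; the discretization choices and the specific plane-wave form of the indicator are assumptions made for illustration.

      import numpy as np

      k = 2 * np.pi                      # wavenumber (wavelength 1)
      m = 64                             # incident/observation directions
      theta = 2 * np.pi * np.arange(m) / m
      dirs = np.stack([np.cos(theta), np.sin(theta)], axis=1)

      # Synthetic far-field matrix U[j, i] for a point scatterer at z0:
      # u_inf(xhat_j; d_i) ~ exp(-ik xhat_j.z0) * exp(ik d_i.z0).
      z0 = np.array([0.7, -0.3])
      U = np.exp(-1j * k * (dirs @ z0))[:, None] * np.exp(1j * k * (dirs @ z0))[None, :]

      # Indicator on a grid of sampling points z, via one matrix product:
      # I(z) = |sum_j sum_i e^{ik z.xhat_j} U[j,i] e^{-ik z.d_i}| / m^2.
      grid = np.linspace(-1.5, 1.5, 121)
      zz = np.array([[xg, yg] for xg in grid for yg in grid])
      A = np.exp(1j * k * (zz @ dirs.T))     # e^{ik z.xhat_j}, shape (nz, m)
      B = np.exp(-1j * k * (zz @ dirs.T))    # e^{-ik z.d_i}
      indicator = np.abs(np.einsum('zj,ji,zi->z', A, U, B)) / m**2

      best = zz[np.argmax(indicator)]
      print("indicator peaks at", best, "true scatterer at", z0)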

  13. Optimal Design of a High Efficiency LLC Resonant Converter with a Narrow Frequency Range for Voltage Regulation

    Directory of Open Access Journals (Sweden)

    Junhao Luo

    2018-05-01

As a key factor in the design of a voltage-adjustable LLC resonant converter, the frequency regulation range is very important to the optimization of magnetic components and to efficiency improvement. This paper presents a novel optimal design method for LLC resonant converters which narrows the frequency variation range and ensures high efficiency under the premise of achieving the required gain. A simplified gain model was utilized to simplify the calculation, and the expected efficiency was initially set at 96.5%. The restricted area for the parameter optimization design can be obtained by taking the intersection of the gain requirement, the efficiency requirement, and three ZVS (Zero Voltage Switching) restrictions. The proposed method was verified by simulation and by experiments on a 150 W prototype. The results show that the proposed method can achieve ZVS from full-load to no-load conditions and can reach 1.6 times the normalized voltage gain within a frequency variation range of 18 kHz, with a peak efficiency of up to 96.3%. Moreover, the expected efficiency is adjustable, which means a converter with a higher efficiency can be designed. The proposed method can also be used for the design of high-power LLC resonant converters to obtain a wide output voltage range and higher efficiency.
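
    Designs of this kind are usually screened with the first-harmonic approximation (FHA) of the LLC tank, whose normalized gain depends only on the inductance ratio Ln, the quality factor Q and the normalized switching frequency fn. The sketch below uses the textbook FHA expression to find the frequency span required to cover a gain window; the Ln and Q values are illustrative, not the 150 W prototype's parameters.

      import numpy as np

      def llc_gain(fn, Ln, Q):
          """First-harmonic-approximation gain of an LLC tank.
          fn = fs/fr, Ln = Lm/Lr, Q = sqrt(Lr/Cr)/Rac (common textbook form)."""
          real = 1.0 + (1.0 / Ln) * (1.0 - 1.0 / fn**2)
          imag = Q * (fn - 1.0 / fn)
          return 1.0 / np.hypot(real, imag)

      # Illustrative tank (not the paper's prototype): Ln = 5, Q = 0.25.
      fn = np.linspace(0.3, 1.6, 4001)
      gain = llc_gain(fn, Ln=5.0, Q=0.25)

      # Operate on the inductive (ZVS) side of the gain peak and find the
      # frequency span needed to cover a 0.9 ... 1.6 gain requirement.
      ipk = int(np.argmax(gain))
      f_for_max = fn[ipk:][np.argmin(np.abs(gain[ipk:] - 1.6))]
      f_for_min = fn[ipk:][np.argmin(np.abs(gain[ipk:] - 0.9))]
      print(f"M=1.6 at fn={f_for_max:.3f}, M=0.9 at fn={f_for_min:.3f}, "
            f"span = {f_for_min - f_for_max:.3f} fr")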

  14. High frequency oscillatory ventilation with lung volume optimization in very low birth weight newborns – a nine-year experience

    Directory of Open Access Journals (Sweden)

    José Nona

    2009-09-01

Objective: To evaluate the clinical outcome of very low birth weight newborns submitted to high frequency oscillatory ventilation with a strategy of early lung volume optimization. Methods: Descriptive prospective study over a nine-year period, from January 1st, 1999 to January 1st, 2008. All the very low birth weight newborns were born in Dr. Alfredo da Costa Maternity, Lisbon, Portugal, were admitted to the Neonatal Intensive Care Unit and were submitted to high frequency oscillatory ventilation with early lung volume optimization; these newborns were followed up since birth and their charts were analyzed periodically until hospital discharge. Results: From a total population of 730 very low birth weight inborn infants, 117 babies died (16%) and 613 survived (84%). The median birth weight was 975 g and the median gestational age was 28 weeks. For the survivors, the median ventilation and oxygenation times were 3 and 18 days, respectively. The incidence of chronic lung disease was 9.5%, with nine newborns discharged on oxygen therapy. The incidence of intraventricular hemorrhage grade III-IV (total population group) was 11.5% and the incidence of retinopathy of prematurity grade 3 or higher was 8.0%. Conclusions: High frequency oscillatory ventilation with an early lung volume optimization strategy reduced the need for respiratory support and improved pulmonary and global outcomes in very low birth weight infants with respiratory distress syndrome.

  15. Design of a Fractional Order Frequency PID Controller for an Islanded Microgrid: A Multi-Objective Extremal Optimization Method

    Directory of Open Access Journals (Sweden)

    Huan Wang

    2017-10-01

Fractional-order proportional-integral-derivative (FOPID) controllers have attracted increasing attention recently due to their better control performance than traditional integer-order proportional-integral-derivative (PID) controllers. However, there are only a few studies concerning the fractional-order control of microgrids based on evolutionary algorithms. From the perspective of multi-objective optimization, this paper presents an effective FOPID-based frequency controller design method, called MOEO-FOPID, for an islanded microgrid, which uses a multi-objective extremal optimization (MOEO) algorithm to minimize frequency deviation and controller output signal simultaneously in order to ultimately improve the efficient operation of distributed generations and energy storage devices. Its superiority to nondominated sorting genetic algorithm-II (NSGA-II)-based FOPID/PID controllers and other recently reported single-objective evolutionary algorithms, such as Kriging-based surrogate modeling and real-coded population extremal optimization-based FOPID controllers, is demonstrated by simulation studies on a typical islanded microgrid in terms of control performance, including frequency deviation, deficit grid power, controller output signal and robustness.

  16. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    Science.gov (United States)

    Carpy, R.; Picker, G.; Amann, B.; Ranebo, H.; Vincent-Bonnieu, S.; Minster, O.; Winter, J.; Dettmann, J.; Castiglione, L.; Höhler, R.; Langevin, D.

    2011-12-01

At the end of 2009 and in early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. With the use of this cell, different sample compositions of "wet foams" have been optimized for mixtures of chemicals such as water, dodecanol, Pluronic, Aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the Fluid Science Laboratory (FSL) on the ISS. The sample cell supports multiple observation methods, such as Diffusing-Wave and Diffuse Transmission Spectrometry, Time Resolved Correlation Spectroscopy [1] and microscope observation; all of these methods are applied in the cell with a relatively small experiment volume (~40 mL).

  17. Population pharmacokinetic analysis of clopidogrel in healthy Jordanian subjects with emphasis on optimal sampling strategy.

    Science.gov (United States)

    Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A

    2013-05-01

    Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or to a lesser extent an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM(®) to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse phase high performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in C(max) (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model, AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Influence of sampling frequency and load calculation methods on quantification of annual river nutrient and suspended solids loads.

    Science.gov (United States)

    Elwan, Ahmed; Singh, Ranvir; Patterson, Maree; Roygard, Jon; Horne, Dave; Clothier, Brent; Jones, Geoffrey

    2018-01-11

Better management of water quality in streams, rivers and lakes requires precise and accurate estimates of different contaminant loads. We assessed four sampling frequencies (2-day, weekly, fortnightly and monthly) and five load calculation methods (global mean (GM), rating curve (RC), ratio estimator (RE), flow-stratified (FS) and flow-weighted (FW)) to quantify loads of nitrate-nitrogen (NO3−-N), soluble inorganic nitrogen (SIN), total nitrogen (TN), dissolved reactive phosphorus (DRP), total phosphorus (TP) and total suspended solids (TSS) in the Manawatu River, New Zealand. The estimated annual river loads were compared to the reference 'true' loads, calculated using daily measurements of flow and water quality from May 2010 to April 2011, to quantify bias (i.e. accuracy) and root mean square error (RMSE) (i.e. accuracy and precision). The GM method resulted in relatively higher RMSE values and a consistent negative bias (i.e. underestimation) in estimates of annual river loads across all sampling frequencies. The RC method resulted in the lowest RMSE for TN, TP and TSS at monthly sampling frequency, yet RC highly overestimated the loads for parameters that showed a dilution effect, such as NO3−-N and SIN. The FW and RE methods gave similar results, and there was no essential improvement in using RE over FW. In general, FW and RE performed better than FS in terms of bias, but FS performed slightly better than FW and RE in terms of RMSE for most of the water quality parameters (DRP, TP, TN and TSS) using a monthly sampling frequency. We found no significant decrease in RMSE values for estimates of NO3−-N, SIN, TN and DRP loads when the sampling frequency was increased from monthly to fortnightly. The bias and RMSE values in estimates of TP and TSS loads (estimated by FW, RE and FS), however, showed a significant decrease in the case of weekly or 2-day sampling. This suggests potential for a higher sampling frequency during flow peaks for more precise
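
    The two simplest estimators can be written in a few lines: the global mean multiplies the mean sampled concentration by the total discharge volume, while the flow-weighted method uses the discharge-weighted mean concentration, which compensates for grab samples under-representing high flows. A sketch with synthetic records follows; the toy flow record and concentration-discharge rating are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(3)

      # Continuous flow record (m^3/s, 15-min resolution for one year), synthetic.
      q_cont = np.exp(rng.normal(2.0, 0.8, 365 * 96))
      seconds = 15 * 60
      total_volume = q_cont.sum() * seconds                # m^3 over the year

      # Monthly grab samples: concentration (g/m^3) rises with flow (toy rating).
      idx = rng.choice(q_cont.size, 12, replace=False)
      q_s = q_cont[idx]
      c_s = 0.5 * q_s**0.3 * np.exp(rng.normal(0.0, 0.1, 12))

      # Global mean (GM): mean concentration times total volume (tonnes/yr).
      load_gm = c_s.mean() * total_volume / 1e6

      # Flow-weighted (FW): discharge-weighted mean concentration times volume.
      load_fw = (np.sum(c_s * q_s) / np.sum(q_s)) * total_volume / 1e6

      # Reference 'true' load from the continuous record and the toy rating.
      load_true = np.sum(0.5 * q_cont**0.3 * q_cont) * seconds / 1e6

      print(f"GM {load_gm:.1f} t, FW {load_fw:.1f} t, true {load_true:.1f} t")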

  19. Optimization of multi-channel neutron focusing guides for extreme sample environments

    International Nuclear Information System (INIS)

    Di Julio, D D; Lelièvre-Berna, E; Andersen, K H; Bentley, P M; Courtois, P

    2014-01-01

    In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
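
    All three population-based optimizers treat the ray-tracing simulation as a black-box figure of merit over the guide's geometric parameters. A minimal sketch of that outer loop, using SciPy's differential evolution around a placeholder objective, follows; in the real workflow the objective would launch the Monte Carlo ray tracer, and the parameter names and bounds here are hypothetical.

      import numpy as np
      from scipy.optimize import differential_evolution

      def neutron_gain(params):
          """Placeholder figure of merit standing in for a Monte Carlo ray-tracing
          run; the real objective would launch the simulation with these guide
          parameters and return -flux_gain_at_sample. Here: a smooth toy surface."""
          channel_width, curvature, m_coating = params
          return -(2.5 - (channel_width - 0.01)**2 * 1e4
                       - (curvature - 0.002)**2 * 1e5
                       - 0.1 * (m_coating - 4.0)**2)

      bounds = [(0.005, 0.03),    # channel width (m), hypothetical
                (0.0005, 0.005),  # curvature (1/m), hypothetical
                (2.0, 6.0)]       # supermirror m-value, hypothetical

      result = differential_evolution(neutron_gain, bounds, seed=4, maxiter=50)
      print("best parameters:", result.x, "gain ~", -result.fun)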

  20. Neutron activation analysis for the optimal sampling and extraction of extractable organohalogens in human hair

    International Nuclear Information System (INIS)

    Zhang, H.; Chai, Z.F.; Sun, H.B.; Xu, H.F.

    2005-01-01

Many persistent organohalogen compounds such as DDTs and polychlorinated biphenyls have caused serious environmental pollution problems that now involve all life. Neutron activation analysis (NAA) is known to be a very convenient method for halogen analysis and is also the only method currently available for simultaneously determining organic chlorine, bromine and iodine in one extract. Human hair is a convenient material to evaluate the burden of such compounds in the human body and can be easily collected from people over wide ranges of age, sex, residential area, eating habits and working environment. To effectively extract organohalogen compounds from human hair, in the present work the optimal Soxhlet extraction times of extractable organohalogens (EOX) and extractable persistent organohalogens (EPOX) from hair of different lengths were studied by NAA. The results indicated that the optimal Soxhlet extraction time of EOX and EPOX from human hair was 8-11 h, and the highest EOX and EPOX contents were observed in the hair powder extract. The concentrations of both EOX and EPOX in different hair sections were in the order hair powder ≥ 2 mm > 5 mm, indicating that milling hair samples into powder or cutting them into very short sections gives not only a homogeneous hair sample but also the best extraction efficiency.

  1. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    Science.gov (United States)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods, which confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically

  2. Frequency, stability and differentiation of self-reported school fear and truancy in a community sample

    OpenAIRE

    Steinhausen, Hans-Christoph; Müller, Nora; Metzke, Christa Winkler

    2008-01-01

Background: Surprisingly little is known about the frequency, stability, and correlates of school fear and truancy based on self-reported data of adolescents. Methods: Self-reported school fear and truancy were studied in a total of N = 834 subjects of the community-based Zurich Adolescent Psychology and Psychopathology Study (ZAPPS) at two times, with average ages of thirteen and sixteen years. Group definitions were based on two behavioural items of the Youth Self-Report (YSR). Comp...

  3. Allele Frequency Data for 17 Short Tandem Repeats in a Czech Population Sample

    Czech Academy of Sciences Publication Activity Database

    Šimková, H.; Faltus, Václav; Marván, Richard; Pexa, T.; Stenzl, V.; Brouček, J.; Hořínek, A.; Mazura, Ivan; Zvárová, Jana

    2009-01-01

    Roč. 4, č. 1 (2009), e15-e17 ISSN 1872-4973 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : short tandem repeat (STR) * allelic frequency * PowerPlex 16 System * AmpflSTR Identifiler * population genetics * Czech Republic Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 2.421, year: 2009

  4. Measurement of flaw size in a weld sample by ultrasonic frequency analysis

    International Nuclear Information System (INIS)

    Whaley, H.L. Jr.; Adler, L.; Cook, K.V.; McClung, R.W.

    1975-05-01

    An ultrasonic frequency analysis technique has been developed and applied to the measurement of flaws in an 8-in.-thick heavy-section steel specimen belonging to the Pressure Vessel Research Committee program. Using the technique the flaws occurring in the weld area were characterized in quantitative terms of both dimension and orientation. Several modifications of the technique were made during the study to include the application of several transducers and to consider ultrasonic mode conversion. (U.S.)

  5. Laser ablation: Laser parameters: Frequency, pulse length, power, and beam character play significant roles with regard to sampling complex samples for ICP/MS analysis

    International Nuclear Information System (INIS)

    Smith, M.R.; Alexander, M.L.; Hartman, J.S.; Koppenaal, D.W.

    1996-01-01

Inductively coupled plasma mass spectrometry is used to investigate the influence of laser parameters with regard to sampling complex matrices, ranging from relatively homogeneous glasses to multi-phase sludge/slurry materials, including radioactive Hanford tank waste. The composition of the plume produced by the pulsed laser is evaluated as a function of wavelength, pulse energy, pulse length, focus, and beam power profile. The authors' studies indicate that these parameters play varying and often synergistic roles regarding quantitative results. (In a companion paper, particle transport and size distribution studies are presented.) The work described here illustrates how laser parameters such as focusing, and consequently power density and beam power profile, influence precision and accuracy. Representative sampling by the LA approach is largely dependent on the sample's optical properties as well as on the laser parameters. Experimental results indicate that optimal laser parameters--short wavelength (UV), relatively low pulse energy (300 mJ), low-to-sub-ns pulse lengths, and laser beams with reasonable power distributions (i.e., Gaussian or top-hat beam profiles)--provide superior precision and accuracy. Remote LA-ICP/MS analyses of radioactive sludges are used to illustrate these optimal laser ablation sampling conditions.

  6. Optimizing placement and equalization of multiple low frequency loudspeakers in rooms

    DEFF Research Database (Denmark)

    Celestinos, Adrian; Nielsen, Sofus Birkedal

    2005-01-01

    loudspeakers in rooms a simulation tool has been created based on finite-difference time-domain approximations (FDTD). Simulations have shown that by increasing the number of loudspeakers and modifying their placement a significant improvement is achieved. A more even sound pressure level distribution along...... a listening area is obtained. The placement of loudspeakers has been optimized. Furthermore an equalization strategy can be implemented for optimization purpose. This solution can be combined with multi channel sound systems....
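
    FDTD models of this kind march sound pressure and particle velocity forward in time on a staggered grid, so the simulated response at listening positions can be compared across loudspeaker placements. Below is a minimal one-dimensional sketch with rigid ends; the room length, grid resolution, source pulse and positions are assumed values.

      import numpy as np

      c, rho = 343.0, 1.21            # speed of sound (m/s), air density (kg/m^3)
      L, nx = 5.0, 250                # room length (m), number of grid cells
      dx = L / nx
      dt = 0.9 * dx / c               # time step below the 1-D Courant limit
      steps = 4000

      p = np.zeros(nx + 1)            # pressure nodes
      u = np.zeros(nx)                # staggered particle-velocity nodes
      src, mic = 20, 180              # loudspeaker and listening positions (indices)
      trace = np.empty(steps)

      for n in range(steps):
          # Velocity update from the pressure gradient.
          u -= dt / (rho * dx) * (p[1:] - p[:-1])
          # Pressure update from the velocity divergence; rigid walls (u = 0 outside).
          p[1:-1] -= rho * c**2 * dt / dx * (u[1:] - u[:-1])
          p[0] -= rho * c**2 * dt / dx * u[0]
          p[-1] += rho * c**2 * dt / dx * u[-1]
          # Gaussian-pulse source injected at the loudspeaker node.
          p[src] += np.exp(-((n - 80) / 25.0) ** 2)
          trace[n] = p[mic]

      # Magnitude spectrum at the listening position reveals the room modes.
      trace -= trace.mean()
      spec = np.abs(np.fft.rfft(trace * np.hanning(steps)))
      f = np.fft.rfftfreq(steps, dt)
      print("strongest excited mode near %.1f Hz" % f[1 + np.argmax(spec[1:])])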

  7. Optimization of a Pre-MEKC Separation SPE Procedure for Steroid Molecules in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Ilona Olędzka

    2013-11-01

Many steroid hormones can be considered potential biomarkers, and their determination in body fluids can create opportunities for the rapid diagnosis of many diseases and disorders of the human body. Most existing methods for the determination of steroids are usually time- and labor-consuming and quite costly. Therefore, the aim of analytical laboratories is to develop new, relatively low-cost and rapidly implemented methodologies for their determination in biological samples. Because there is little literature data on concentrations of steroid hormones in urine samples, we have made attempts at the electrophoretic determination of these compounds. For this purpose, an extraction procedure for the optimized separation and simultaneous determination of seven steroid hormones in urine samples has been investigated. The isolation of analytes from biological samples was performed by liquid-liquid extraction (LLE) with dichloromethane and compared to solid phase extraction (SPE) with C18 and hydrophilic-lipophilic balance (HLB) columns. To separate all the analytes, a micellar electrokinetic capillary chromatography (MEKC) technique was employed. For full separation of all the analytes, a running buffer (pH 9.2) composed of 10 mM sodium tetraborate decahydrate (borax), 50 mM sodium dodecyl sulfate (SDS), and 10% methanol was selected. The methodology developed in this work for the determination of steroid hormones meets all the requirements of analytical methods. The applicability of the method has been confirmed for the analysis of urine samples collected from volunteers, both men and women (students, amateur bodybuilders), using and not using steroid doping. The data obtained during this work can be successfully used for further research on the determination of steroid hormones in urine samples.

  8. Enhancing the Frequency Adaptability of Periodic Current Controllers with a Fixed Sampling Rate for Grid-Connected Power Converters

    DEFF Research Database (Denmark)

    Yang, Yongheng; Zhou, Keliang; Blaabjerg, Frede

    2016-01-01

Grid-connected power converters should employ advanced current controllers, e.g., Proportional Resonant (PR) and Repetitive Controllers (RC), in order to produce high-quality feed-in currents that are required to be synchronized with the grid. The synchronization is actually to detect the instantaneous grid information (e.g., frequency and phase of the grid voltage) for the current control, which is commonly performed by a Phase-Locked-Loop (PLL) system. Hence, harmonics and deviations in the frequency estimated by the PLL could lead to current tracking performance degradation, especially... ...of the resonant controllers and by approximating the fractional delay using a Lagrange interpolating polynomial for the RC, respectively, the frequency-variation-immunity of these periodic current controllers with a fixed sampling rate is improved. Experiments on a single-phase grid-connected system are presented...
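
    The fractional-delay part can be made concrete: when the grid period corresponds to N + d samples with fractional d, the repetitive controller keeps an integer delay line and approximates the remaining d samples with a short FIR filter whose coefficients come from Lagrange interpolation. A sketch of that coefficient computation follows.

      import numpy as np

      def lagrange_fd_coeffs(d, order=3):
          """FIR coefficients h[k] approximating a fractional delay of d samples
          (0 <= d <= order) by Lagrange interpolation:
          h[k] = prod_{i != k} (d - i) / (k - i)."""
          k = np.arange(order + 1)
          h = np.ones(order + 1)
          for i in range(order + 1):
              mask = k != i
              h[mask] *= (d - i) / (k[mask] - i)
          return h

      # Example: a 50 Hz grid sampled at 10 kHz gives exactly N = 200 taps; if
      # the grid drifts to ~49.6 Hz the period becomes ~201.6 samples, so the RC
      # keeps an integer delay and approximates the 0.6-sample remainder.
      print(lagrange_fd_coeffs(0.6))            # 0.6-sample fractional delay
      print(lagrange_fd_coeffs(1.6, order=3))   # centred use: 1.6-sample delay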

  9. Modeling Optimal Cutoffs for the Brazilian Household Food Insecurity Measurement Scale in a Nationwide Representative Sample.

    Science.gov (United States)

    Interlenghi, Gabriela S; Reichenheim, Michael E; Segall-Corrêa, Ana M; Pérez-Escamilla, Rafael; Moraes, Claudia L; Salles-Costa, Rosana

    2017-07-01

Background: This is the second part of a model-based approach to examine the suitability of the current cutoffs applied to the raw score of the Brazilian Household Food Insecurity Measurement Scale [Escala Brasileira de Insegurança Alimentar (EBIA)]. The approach allows the identification of homogeneous groups that correspond to severity levels of food insecurity (FI) and, by extension, of discriminant cutoffs able to accurately distinguish these groups. Objective: This study aims to examine whether the model-based approach for identifying optimal cutoffs first implemented in a local sample is replicated in a countrywide representative sample. Methods: Data were derived from the Brazilian National Household Sample Survey of 2013 (n = 116,543 households). Latent class factor analysis (LCFA) models from 2 to 5 classes were applied to the scale's items to identify the number of underlying FI latent classes. Next, identification of optimal cutoffs on the overall raw score was ascertained from these identified classes. Analyses were conducted on the aggregate data and by macroregions. Finally, model-based classifications (latent classes and groupings identified thereafter) were contrasted with the traditionally used classification. Results: LCFA identified 4 homogeneous groups with a very high degree of class separation (entropy = 0.934-0.975). In households with children and/or adolescents, the following cutoffs were identified in the aggregate data: between 1 and 2 (1/2), 5 and 6 (5/6), and 10 and 11 (10/11); this pattern emerged consistently in all analyses. Conclusions: Nationwide findings corroborate previous local evidence that households with an overall score of 1 are more akin to those scoring negative on all items. These results may contribute to guiding experts' and policymakers' decisions on the most appropriate EBIA cutoffs. © 2017 American Society for Nutrition.

  10. Energy consumption optimization of the total-FETI solver by changing the CPU frequency

    Science.gov (United States)

    Horak, David; Riha, Lubomir; Sojka, Radim; Kruzik, Jakub; Beseda, Martin; Cermak, Martin; Schuchart, Joseph

    2017-07-01

The energy consumption of supercomputers is one of the critical problems for the upcoming exascale supercomputing era. Awareness of power and energy consumption is required on both the software and hardware side. This paper deals with the energy consumption evaluation of the Finite Element Tearing and Interconnect (FETI) based solvers of linear systems, an established method for solving real-world engineering problems. We have evaluated the effect of the CPU frequency on the energy consumption of the FETI solver using a linear elasticity 3D cube synthetic benchmark. In this problem, we have evaluated the effect of frequency tuning on the energy consumption of the essential processing kernels of the FETI method. The paper provides results for two types of frequency tuning: (1) static tuning and (2) dynamic tuning. For static tuning experiments, the frequency is set before execution and kept constant during the runtime. For dynamic tuning, the frequency is changed during the program execution to adapt the system to the actual needs of the application. The paper shows that static tuning brings up to 12% energy savings when compared to the default CPU settings (the highest clock rate). Dynamic tuning improves this further, by up to an additional 3%.

  11. Frequency of Aggressive Behaviors in a Nationally Representative Sample of Iranian Children and Adolescents: The CASPIAN-IV Study.

    Science.gov (United States)

    Sadinejad, Morteza; Bahreynian, Maryam; Motlagh, Mohammad-Esmaeil; Qorbani, Mostafa; Movahhed, Mohsen; Ardalan, Gelayol; Heshmat, Ramin; Kelishadi, Roya

    2015-01-01

This study aims to explore the frequency of aggressive behaviors among a nationally representative sample of Iranian children and adolescents. This nationwide study was performed on a multi-stage sample of 6-18-year-old students living in 30 provinces of Iran. Students were asked to confidentially report the frequency of aggressive behaviors, including physical fighting, bullying and being bullied, in the previous 12 months, using the questionnaire of the World Health Organization Global School Health Survey. In this cross-sectional study, 13,486 students completed the study (90.6% participation rate); they consisted of 49.2% girls and 75.6% urban residents. The mean age of participants was 12.47 years (95% confidence interval: 12.29, 12.65). In total, physical fighting was more prevalent among boys than girls (48% vs. 31%), and bullying other classmates was also more frequent among boys than girls (29% vs. 25%). Physical fighting was more prevalent among rural residents (40% vs. 39%, P = 0.61), while being bullied was more common among urban students (27% vs. 26%, P = 0.69). Although in this study the frequency of aggressive behaviors was lower than in many other populations, these findings still emphasize the importance of designing preventive interventions that target students, especially in early adolescence, and of increasing their awareness of aggressive behaviors. Implications for future research and aggression prevention programming are discussed.

  12. Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation

    International Nuclear Information System (INIS)

    Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A.; Bouquerel, Hélène

    2016-01-01

Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air-to-liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved by scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L−1 and 10% for 10 mBq L−1. While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L−1, a conservative experimental estimate is rather 5 mBq L−1, corresponding to 0.14 fg g−1. The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported. - Highlights: • Radium-226 concentration measured with optimized accumulation in a container. • Radon-222 in air measured precisely with scintillation flasks and long countings. • Method tested by repetition tests, dilution experiments, and successful blind tests. • Estimated conservative detection limit without pre-concentration is 5 mBq L−1. • Method is portable, cost

  13. A CMOS-compatible silicon substrate optimization technique and its application in radio frequency crosstalk isolation

    International Nuclear Information System (INIS)

    Li Chen; Liao Huailin; Huang Ru; Wang Yangyuan

    2008-01-01

In this paper, a complementary metal-oxide semiconductor (CMOS)-compatible silicon substrate optimization technique is proposed to achieve effective isolation. The selective growth of porous silicon is used to effectively suppress the substrate crosstalk. The isolation structures are fabricated in a standard CMOS process, and then this post-CMOS substrate optimization technique is carried out to greatly improve the crosstalk isolation performance. Three-dimensional electromagnetic simulation is implemented to verify the clear effect of our substrate optimization technique. The morphology and growth conditions of the fabricated porous silicon have been investigated in detail. Furthermore, a thick selectively grown porous silicon (SGPS) trench for crosstalk isolation has been formed, and an improvement of about 20 dB in substrate isolation is achieved. These results demonstrate that our post-CMOS SGPS technique is very promising for RF IC applications.

  14. Frequency of single nucleotide polymorphisms of some immune response genes in a population sample from São Paulo, Brazil

    Directory of Open Access Journals (Sweden)

    Léa Campos de Oliveira

    2011-09-01

Objective: To present the frequency of single nucleotide polymorphisms of a few immune response genes in a population sample from São Paulo City (SP), Brazil. Methods: Data on allele frequencies of known polymorphisms of innate and acquired immunity genes were presented, the majority with proven impact on gene function. Data were gathered from a sample of healthy individuals, non-HLA identical siblings of bone marrow transplant recipients from the Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, obtained between 1998 and 2005. The number of samples varied for each single nucleotide polymorphism analyzed by polymerase chain reaction followed by restriction enzyme cleavage. Results: Allele and genotype distributions of 41 different gene polymorphisms, mostly of cytokines but also including other immune response genes, were presented. Conclusion: We believe that the data presented here can be of great value for case-control studies, to define which polymorphisms are present at biologically relevant frequencies and to assess targets for therapeutic intervention in polygenic diseases with a component of immune and inflammatory responses.

  15. Surface Characterization of Nb Samples Electro-polished Together With Real Superconducting Radio-frequency Accelerator Cavities

    International Nuclear Information System (INIS)

    Zhao, Xin; Geng, Rong-Li; Tyagi, P.V.; Hayano, Hitoshi; Kato, Shigeki; Nishiwaki, Michiru; Saeki, Takayuki; Sawabe, Motoaki

    2010-01-01

    We report the results of surface characterizations of niobium (Nb) samples electropolished together with a single cell superconducting radio-frequency accelerator cavity. These witness samples were located in three regions of the cavity, namely at the equator, the iris and the beam-pipe. Auger electron spectroscopy (AES) was utilized to probe the chemical composition of the topmost four atomic layers. Scanning electron microscopy with energy dispersive X-ray for elemental analysis (SEM/EDX) was used to observe the surface topography and chemical composition at the micrometer scale. A few atomic layers of sulfur (S) were found covering the samples non-uniformly. Niobium oxide granules with a sharp geometry were observed on every sample. Some Nb-O granules appeared to also contain sulfur.

  16. Effectiveness of increasing the frequency of posaconazole syrup administration to achieve optimal plasma concentrations in patients with haematological malignancy.

    Science.gov (United States)

    Park, Wan Beom; Cho, Joo-Youn; Park, Sang-In; Kim, Eun Jung; Yoon, Seonghae; Yoon, Seo Hyun; Lee, Jeong-Ok; Koh, Youngil; Song, Kyoung-Ho; Choe, Pyoeng Gyun; Yu, Kyung-Sang; Kim, Eu Suk; Bang, Su Mi; Kim, Nam Joong; Kim, Inho; Oh, Myoung-Don; Kim, Hong Bin; Song, Sang Hoon

    2016-07-01

Few data are available on whether adjusting the dose of posaconazole syrup is effective in patients receiving anti-cancer chemotherapy. The aim of this prospective study was to analyse the impact of increasing the frequency of posaconazole administration on achieving optimal plasma concentrations in adult patients with haematological malignancy. A total of 133 adult patients receiving chemotherapy for acute myeloid leukaemia or myelodysplastic syndrome who received posaconazole syrup 200 mg three times daily for fungal prophylaxis were enrolled in this study. Drug trough levels were measured by liquid chromatography-tandem mass spectrometry. In 20.2% of patients (23/114) the steady-state concentration of posaconazole was suboptimal, and in these patients the administration frequency was increased to 200 mg four times daily. On Day 15, the median posaconazole concentration had increased significantly, from 368 ng/mL [interquartile range (IQR), 247-403 ng/mL] to 548 ng/mL (IQR, 424-887 ng/mL) (P = 0.0003). The median increase in posaconazole concentration was 251 ng/mL (IQR, 93-517 ng/mL). Among the patients with initially suboptimal levels, 79% achieved the optimal level. These findings suggest that increasing the administration frequency of posaconazole syrup is effective for achieving optimal levels in patients with haematological malignancy undergoing chemotherapy. Copyright © 2016 Elsevier B.V. and International Society of Chemotherapy. All rights reserved.

  17. Application of frequency methods for optimization of tuning parameters of fast-response control systems

    Energy Technology Data Exchange (ETDEWEB)

    Gruzdev, I.A.; Temirbulatov, R.A.; Ladvishchenko, B.G.; Zhenenko, G.N.

    1980-09-01

    The electric power system is considered as a system of matrices of transfer functions (frequency characteristics) describing the uncontrolled system and the control devices for generator excitation and steam turbine torques. This mathematical model can be used for the study of the static stability of complex electric power systems. 5 refs.

  18. The effect of sampling frequency on the accuracy of estimates of milk ...

    African Journals Online (AJOL)

    Unknown

    1ARC-Animal Improvement Institute, Private Bag X5013, Stellenbosch 7599, South Africa; 2Department of Animal Science, University of Stellenbosch, Stellenbosch, ... weekly sampling procedure currently used by the South African National Dairy Cattle Performance Testing Scheme. However, replacement of proportional ...

  19. Optimization of the soliton self-frequency shift in a tapered photonic crystal fiber

    DEFF Research Database (Denmark)

    Judge, A.C.; Bang, Ole; Eggleton, B.J.

    2009-01-01

    ... nonuniform waists, an additional enhancement of the SSFS is achieved by varying the taper waist diameter along its length in a carefully designed fashion in order to present an optimal level of group-velocity dispersion to the soliton at each point, thus avoiding the spectral recoil due to the emission...

  20. Optimization of high frequency flip-chip interconnects for digital superconducting circuits

    International Nuclear Information System (INIS)

    Rafique, M R; Engseth, H; Kidiyarova-Shevchenko, A

    2006-01-01

    This paper presents the results of theoretical optimization of the multi-chip-module (MCM) contact and driver circuitries for gigabit chip-to-chip communication. Optimization has been done using 3D electromagnetic (EM) simulations of MCM contacts and time domain simulations of drivers and receivers. A single optimized MCM contact has a signal reflection of less than -20 dB over more than 400 GHz bandwidth. The MCM data link with the optimized SFQ driver, receiver and two MCM contacts has operational margins on the global bias current of ±30% at 30 Gbit s⁻¹ speed and can operate above 100 Gbit s⁻¹. Wide bandwidth transmission requires the realization of an advanced flip-chip process with a small dimension of the MCM contact (less than 30 μm diameter of the contact pad) and a small height of the flip-chip contact bumps, of the order of 2 μm. Current processes, with about 7 μm bump height, require the application of a double-flux-quantum (DFQ) driver. The data link with the DFQ driver was also simulated; it has operational margins on the global bias current of ±30% at 30 Gbit s⁻¹, but the maximum speed of operation is 61 Gbit s⁻¹. Several test structures have been designed for measurements of signal reflection, bit error rate and operational margins of the data link.

  1. Frequency of Haemophilus spp. in urinary and genital tract samples

    Directory of Open Access Journals (Sweden)

    Tatjana Marijan

    2010-02-01

    Full Text Available Aim To determine the prevalence and antibiotic susceptibility of Haemophilus influenzae and H. parainfluenzae isolated from the urinary and genital tracts. Methods Identification of Haemophilus spp. strains was carried out using the API NH identification system, and antibiotic susceptibility testing was performed by the Kirby-Bauer disk diffusion method. Results A total of 50 (0.03%) H. influenzae and 14 (0.01%) H. parainfluenzae isolates (out of 180,415 samples) were recovered from the genitourinary tract. From urine samples of girls under 15 years of age these bacteria were isolated in 13 (0.88%) and two (0.13%) cases, respectively, and in only one case (0.11%) of UTI in boys (H. influenzae). In persons of fertile age, only H. influenzae was found in urine samples, from five women (0.04%) and three men (0.22%). As a cause of vulvovaginitis, H. influenzae was isolated in four (5.63%) and H. parainfluenzae in two (2.82%) girls. In persons of fertile age, H. influenzae was isolated from 10 (0.49%) cervical smears and from nine (1.74%) male samples; H. parainfluenzae was isolated from seven (1.36%) male samples (p<0.01). Susceptibility testing of H. influenzae and H. parainfluenzae revealed that both pathogens were significantly resistant only to cotrimoxazole (26.0% and 42.9%, respectively). Conclusion In the etiology of genitourinary infections of girls during childhood, genital infections of women of fertile age (especially pregnant women), and men with epididymitis and/or orchitis, it is important to consider this rare and, in terms of cultivation, demanding bacterium.

  2. Evaluation and optimization of DNA extraction and purification procedures for soil and sediment samples.

    Science.gov (United States)

    Miller, D N; Bryant, J E; Madsen, E L; Ghiorse, W C

    1999-11-01

    We compared and statistically evaluated the effectiveness of nine DNA extraction procedures by using frozen and dried samples of two silt loam soils and a silt loam wetland sediment with different organic matter contents. The effects of different chemical extractants (sodium dodecyl sulfate [SDS], chloroform, phenol, Chelex 100, and guanidinium isothiocyanate), different physical disruption methods (bead mill homogenization and freeze-thaw lysis), and lysozyme digestion were evaluated based on the yield and molecular size of the recovered DNA. Pairwise comparisons of the nine extraction procedures revealed that bead mill homogenization with SDS combined with either chloroform or phenol optimized both the amount of DNA extracted and the molecular size of the DNA (maximum size, 16 to 20 kb). Neither lysozyme digestion before SDS treatment nor guanidine isothiocyanate treatment nor addition of Chelex 100 resin improved the DNA yields. Bead mill homogenization in a lysis mixture containing chloroform, SDS, NaCl, and phosphate-Tris buffer (pH 8) was found to be the best physical lysis technique when DNA yield and cell lysis efficiency were used as criteria. The bead mill homogenization conditions were also optimized for speed and duration with two different homogenizers. Recovery of high-molecular-weight DNA was greatest when we used lower speeds and shorter times (30 to 120 s). We evaluated four different DNA purification methods (silica-based DNA binding, agarose gel electrophoresis, ammonium acetate precipitation, and Sephadex G-200 gel filtration) for DNA recovery and removal of PCR inhibitors from crude extracts. Sephadex G-200 spin column purification was found to be the best method for removing PCR-inhibiting substances while minimizing DNA loss during purification. Our results indicate that for these types of samples, optimum DNA recovery requires brief, low-speed bead mill homogenization in the presence of a phosphate-buffered SDS-chloroform mixture, followed

  3. Optimization of a radiochemistry method for plutonium determination in biological samples

    International Nuclear Information System (INIS)

    Cerchetti, Maria L.; Arguelles, Maria G.

    2005-01-01

    Plutonium has been widely used for civilian and military activities. Nevertheless, the methods to monitor occupational exposure have not evolved at the same pace, remaining one of the major challenges for radiological protection practice. Due to the low acceptable incorporation limit, the usual determination is based on indirect methods in urine samples. Our main objective was to optimize a technique used to monitor internal contamination of workers exposed to plutonium isotopes. Different parameters were modified and their influence on the three steps of the method was evaluated; those which gave the highest yield and feasibility were selected. The method involves: 1) sample concentration (coprecipitation); 2) plutonium purification; and 3) source preparation by electrodeposition. In the coprecipitation phase, changes in temperature and carrier concentration were evaluated. In the ion-exchange separation, changes in the type of resin, the hydroxylamine elution solution (concentration and volume), column length and column recycling were evaluated. Finally, in the electrodeposition phase, we modified the electrolytic solution, pH and time. Measurements were made by liquid scintillation counting and alpha spectrometry (PIPS). We obtained the following yields: 88% for coprecipitation (at 60°C with 2 ml of CaHPO4), 71% for ion exchange (AG 1×8 resin, Cl⁻ form, 100-200 mesh, with 0.1 N hydroxylamine in 0.2 N HCl as eluent and a column length between 4.5 and 8 cm), and 93% for electrodeposition (H2SO4-NH4OH, 100 minutes and pH from 2 to 2.8). The expanded uncertainty was 30% (95% confidence level), the decision threshold (Lc) was 0.102 Bq/L and the minimum detectable activity was 0.218 Bq/L of urine. We obtained an optimized method to screen workers exposed to plutonium. (author)

  4. Frequency of isolation of Campylobacter from roasted chicken samples from Mexico City.

    Science.gov (United States)

    Quiñones-Ramírez, E I; Vázquez-Salinas, C; Rodas-Suárez, O R; Ramos-Flores, M O; Rodríguez-Montaño, R

    2000-01-01

    The presence of Campylobacter spp. was investigated in 100 samples of roasted chicken tacos sold in well-established commercial outlets and semisettled street stands in Mexico City. From 600 colonies displaying Campylobacter morphology only 123 isolates were positive. From these isolates, 51 (41%) were identified as C. jejuni, 23 (19%) as C. coli, and 49 (40%) as other species of this genus. All of the 27 positive samples came from one location where handling practices allowed cross-contamination of the cooked product. The results indicate that these ready-to-consume products are contaminated with these bacteria, representing a potential risk for consumers, especially in establishments lacking adequate sanitary measures to prevent cross-contamination.

  5. Inference for Local Distributions at High Sampling Frequencies: A Bootstrap Approach

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Varneskov, Rasmus T.

    of "large" jumps. Our locally dependent wild bootstrap (LDWB) accommodate issues related to the stochastic scale and jumps as well as account for a special block-wise dependence structure induced by sampling errors. We show that the LDWB replicates first and second-order limit theory from the usual...... empirical process and the stochastic scale estimate, respectively, as well as an asymptotic bias. Moreover, we design the LDWB sufficiently general to establish asymptotic equivalence between it and and a nonparametric local block bootstrap, also introduced here, up to second-order distribution theory....... Finally, we introduce LDWB-aided Kolmogorov-Smirnov tests for local Gaussianity as well as local von-Mises statistics, with and without bootstrap inference, and establish their asymptotic validity using the second-order distribution theory. The finite sample performance of CLT and LDWB-aided local...

  6. Optimal fuzzy logic-based PID controller for load-frequency control including superconducting magnetic energy storage units

    International Nuclear Information System (INIS)

    Pothiya, Saravuth; Ngamroo, Issarachai

    2008-01-01

    This paper proposes a new optimal fuzzy logic-based proportional-integral-derivative (FLPID) controller for load frequency control (LFC) including superconducting magnetic energy storage (SMES) units. Conventionally, the membership functions and control rules of fuzzy logic control are obtained by trial and error or from designers' experience. To overcome this problem, the multiple tabu search (MTS) algorithm is applied to simultaneously tune the PID gains, membership functions and control rules of the FLPID controller to minimize frequency deviations of the system against load disturbances. The MTS algorithm introduces additional techniques for improving the search process, such as initialization, adaptive search, multiple searches, crossover and a restarting process. Simulation results explicitly show that the performance of the optimum FLPID controller is superior to the conventional PID controller and the non-optimum FLPID controller in terms of overshoot, settling time and robustness against variations of system parameters.
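
    As a rough illustration of the tuning loop such a method implies, the sketch below simulates a toy single-area LFC model under a step load disturbance and searches PID gains to minimize an ITAE cost. All model parameters are hypothetical, and a plain random neighbourhood search stands in for the paper's multiple tabu search:

```python
import numpy as np

def simulate_lfc(kp, ki, kd, t_end=20.0, dt=0.001):
    """Toy single-area load-frequency model: governor lag (Tg), turbine
    lag (Tt), rotating mass (M, D) and droop (R), hit by a 0.01 p.u. step
    load increase at t = 0. Returns an ITAE cost on the frequency
    deviation df. All parameters are hypothetical."""
    Tg, Tt, M, D, R = 0.08, 0.3, 10.0, 0.8, 2.4
    xg = xt = df = integ = prev_err = 0.0
    cost = 0.0
    for k in range(int(t_end / dt)):
        err = -df                              # drive frequency deviation to zero
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv - df / R   # PID plus droop
        xg += dt * (u - xg) / Tg               # governor lag
        xt += dt * (xg - xt) / Tt              # turbine lag
        df += dt * (xt - 0.01 - D * df) / M    # swing equation
        cost += (k * dt) * abs(df) * dt        # ITAE criterion
    return cost

# crude random neighbourhood search standing in for the multiple tabu search
rng = np.random.default_rng(0)
best, best_cost = (1.0, 0.5, 0.1), simulate_lfc(1.0, 0.5, 0.1)
for _ in range(100):
    cand = tuple(float(np.clip(g + rng.normal(0, 0.1), 0.0, 5.0)) for g in best)
    c = simulate_lfc(*cand)
    if c < best_cost:
        best, best_cost = cand, c
print("gains:", best, "ITAE:", best_cost)
```

    The FLPID of the paper additionally tunes membership functions and rule tables; the cost function, however, plays the same role as the ITAE term above.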

  7. Optimizing electrical conductivity and optical transparency of IZO thin film deposited by radio frequency (RF) magnetron sputtering

    Science.gov (United States)

    Zhang, Lei

    Transparent conducting oxide (TCO) thin films of In2O3, SnO2, ZnO, and their mixtures have been extensively used in optoelectronic applications such as transparent electrodes in solar photovoltaic devices. In this project, amorphous indium-zinc oxide (IZO) thin films were deposited by radio frequency (RF) magnetron sputtering from an In2O3-10 wt.% ZnO sintered ceramic target, optimizing the RF power, argon gas flow rate, and film thickness to reach maximum conductivity and transparency in the visible spectrum. The results indicated that the optimized conductivity and transparency of the IZO thin film approach those of ITO, and improve further when the film is deposited at one specific tilt angle. This work was supported by the National Science Foundation (NSF) MRSEC program at the University of Nebraska-Lincoln and was hosted by Professor Jeff Shields' lab.

  8. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were compared. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge exactly, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining sampling configurations and mapping the spatial distribution of soil organic matter with low cost and high efficiency.
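
    The core of such a scheme-optimization step can be sketched as follows: simulated annealing swaps candidate road-accessible sites in and out of the design to minimize a spatial criterion. Here a simple mean nearest-sample coverage distance stands in for the paper's soil-landscape objective, and all coordinates and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical road-accessible candidate locations on a 100 x 100 km area
candidates = rng.uniform(0, 100, size=(500, 2))

def coverage_cost(idx):
    """Mean distance from every candidate point to its nearest selected site
    (a simple stand-in for the paper's soil-landscape design criterion)."""
    sel = candidates[list(idx)]
    d = np.linalg.norm(candidates[:, None, :] - sel[None, :, :], axis=2)
    return d.min(axis=1).mean()

n_samples, T, alpha = 13, 1.0, 0.995
current = set(rng.choice(len(candidates), n_samples, replace=False))
cost = coverage_cost(current)
for _ in range(3000):
    out = rng.choice(list(current))            # propose swapping one site out
    inn = int(rng.integers(len(candidates)))   # ... for one site currently unused
    if inn in current:
        continue
    cand = (current - {out}) | {inn}
    c = coverage_cost(cand)
    # accept improvements always, worsenings with annealing probability
    if c < cost or rng.random() < np.exp((cost - c) / T):
        current, cost = cand, c
    T *= alpha                                 # cool the temperature
print(sorted(current), round(cost, 3))
```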

  9. Acute cognitive dysfunction after hip fracture: frequency and risk factors in an optimized, multimodal, rehabilitation program

    DEFF Research Database (Denmark)

    Bitsch, Martin; Foss, Nicolai Bang; Kristensen, Billy Bjarne

    2006-01-01

    BACKGROUND: Patients undergoing hip fracture surgery often experience acute post-operative cognitive dysfunction (APOCD). The pathogenesis of APOCD is probably multifactorial, and no single intervention has been successful in its prevention. No studies have investigated the incidence of APOCD after hip fracture surgery in an optimized, multimodal, peri-operative rehabilitation regimen. METHODS: One hundred unselected hip fracture patients treated in a well-defined, optimized, multimodal, peri-operative rehabilitation regimen were included. Patients were tested upon admission and on the second, fourth and seventh post-operative days with the Mini Mental State Examination (MMSE) score. RESULTS: Thirty-two per cent of patients developed a significant post-operative cognitive decline, which was associated with several pre-fracture patient characteristics, including age and cognitive function...

  10. The Physics of Ultrabroadband Frequency Comb Generation and Optimized Combs for Measurements in Fundamental Physics

    Science.gov (United States)

    2016-07-02

    ... phase-matched cascaded frequency generation, high harmonic generation, fine structure constant measurements, carrier-envelope phase stabilization, ultrafast ... Pulses at a ... MHz repetition rate are generated from a picosecond fiber laser (Pritel FFL-500) before amplification in an erbium-doped fiber amplifier (EDFA). The ... width is tunable from 1 to 36 nm with central wavelength tunable over 1527-1550 nm. The pump pulses were combined with the seed and injected into 9.5 m of Ge-doped ...

  11. A simple optimized microwave digestion method for multielement monitoring in mussel samples

    International Nuclear Information System (INIS)

    Saavedra, Y.; Gonzalez, A.; Fernandez, P.; Blanco, J.

    2004-01-01

    With the aim of obtaining a set of common decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using the sample weight, digestion time and acid addition as factors. It was found that the optimal conditions were 0.5 g of freeze-dried and triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. It was possible to carry out the atomic absorption determinations using calibrations with aqueous standards and matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels and has been found to perform well.

  12. Optimal sampling designs for estimation of Plasmodium falciparum clearance rates in patients treated with artemisinin derivatives

    Science.gov (United States)

    2013-01-01

    Background The emergence of Plasmodium falciparum resistance to artemisinins in Southeast Asia threatens the control of malaria worldwide. The pharmacodynamic hallmark of artemisinin derivatives is rapid parasite clearance (a short parasite half-life); therefore, the in vivo phenotype of slow clearance defines reduced susceptibility to the drug. Measurement of parasite counts every six hours during the first three days after treatment has been recommended to measure the parasite clearance half-life, but it remains unclear whether simpler sampling intervals and frequencies might also be sufficient to reliably estimate this parameter. Methods A total of 2,746 parasite density-time profiles were selected from 13 clinical trials in Thailand, Cambodia, Mali, Vietnam, and Kenya. In these studies, parasite densities were measured every six hours until negative after treatment with an artemisinin derivative (alone or in combination with a partner drug). The WWARN Parasite Clearance Estimator (PCE) tool was used to estimate "reference" half-lives from these six-hourly measurements. The effect of four alternative sampling schedules on half-life estimation was investigated and compared to the reference half-life: sampling at time zero, 6, 12 and 24 hours (A1); zero, 6, 18 and 24 hours (A2); zero, 12, 18 and 24 hours (A3); or zero, 12 and 24 hours (A4), and then every 12 hours. Statistical bootstrap methods were used to estimate the sampling distribution of half-lives for parasite populations with different geometric mean half-lives. A simulation study was performed to investigate a suite of 16 potential alternative schedules, and the half-life estimates generated by each of the schedules were compared to the "true" half-life. The candidate schedules in the simulation study included (among others) six-hourly sampling, schedule A1, schedule A4, and a convenience sampling schedule at six, seven, 24, 25, 48 and 49 hours. Results The median (range) parasite half-life for all clinical studies combined was 3.1 (0
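
    A minimal sketch of the estimation problem being studied: the clearance half-life follows from the log-linear decay slope, and a sparser schedule can be compared against the six-hourly reference on simulated profiles. This is a simplification of the WWARN PCE, which also handles lag and tail phases; all numbers here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def half_life(times_h, densities):
    """Slope-based clearance half-life: fit log(density) ~ a - b*t and
    return ln(2)/b (ignores the lag/tail trimming the PCE performs)."""
    t = np.asarray(times_h, float)
    y = np.log(np.asarray(densities, float))
    b = -np.polyfit(t, y, 1)[0]
    return np.log(2) / b

# simulate one patient with a "true" 3.1 h half-life and 20% lognormal noise
true_hl = 3.1
t6 = np.arange(0, 49, 6)                         # six-hourly reference design
dens = 1e5 * np.exp(-np.log(2) / true_hl * t6) * rng.lognormal(0, 0.2, t6.size)

schedule_a4 = np.array([0, 12, 24, 36, 48])      # sparser candidate (A4-style)
mask = np.isin(t6, schedule_a4)
print("6-hourly estimate  :", round(half_life(t6, dens), 2), "h")
print("sparse A4 estimate :", round(half_life(t6[mask], dens[mask]), 2), "h")
```

    Repeating this over many simulated patients gives the sampling distribution of the estimator under each schedule, which is the comparison the bootstrap and simulation study formalize.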

  13. Frequency of hepatitis E and hepatitis A viruses in water samples collected from Faisalabad, Pakistan

    Directory of Open Access Journals (Sweden)

    Tahir Ahmad

    2015-12-01

    Full Text Available Hepatitis E and hepatitis A viruses are both highly prevalent in Pakistan, presenting mainly as sporadic disease. The aim of the current study was to isolate and characterize the specific genotype of hepatitis E virus from water bodies of Faisalabad, Pakistan. Drinking water and sewage samples were qualitatively analyzed using RT-PCR. An HEV genotype 1 strain was recovered from sewage water of Faisalabad. The prevalence of HEV and HAV in sewage water suggests the possibility of a gradual decline in the protection level conferred by the circulating vaccine in the Pakistani population.

  14. Evaluation of the Problem Behavior Frequency Scale-Teacher Report Form for Assessing Behavior in a Sample of Urban Adolescents.

    Science.gov (United States)

    Farrell, Albert D; Goncy, Elizabeth A; Sullivan, Terri N; Thompson, Erin L

    2018-02-01

    This study evaluated the structure and validity of the Problem Behavior Frequency Scale-Teacher Report Form (PBFS-TR) for assessing students' frequency of specific forms of aggression and victimization, and positive behavior. Analyses were conducted on two waves of data from 727 students from two urban middle schools (Sample 1) who were rated by their teachers on the PBFS-TR and the Social Skills Improvement System (SSIS), and on data collected from 1,740 students from three urban middle schools (Sample 2) for whom data on both the teacher and student report version of the PBFS were obtained. Confirmatory factor analyses supported first-order factors representing 3 forms of aggression (physical, verbal, and relational), 3 forms of victimization (physical, verbal and relational), and 2 forms of positive behavior (prosocial behavior and effective nonviolent behavior), and higher-order factors representing aggression, victimization, and positive behavior. Strong measurement invariance was established over gender, grade, intervention condition, and time. Support for convergent validity was found based on correlations between corresponding scales on the PBFS-TR and teacher ratings on the SSIS in Sample 1. Significant correlations were also found between teacher ratings on the PBFS-TR and student ratings of their behavior on the Problem Behavior Frequency Scale-Adolescent Report (PBFS-AR) and a measure of nonviolent behavioral intentions in Sample 2. Overall the findings provided support for the PBFS-TR and suggested that teachers can provide useful data on students' aggressive and prosocial behavior and victimization experiences within the school setting. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Optimized Analytical Method to Determine Gallic and Picric Acids in Pyrotechnic Samples by Using HPLC/UV (Reverse Phase)

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-01-01

    A study on the optimization and development of a chromatographic method for the determination of gallic and picric acids in pyrotechnic samples is presented. To achieve this, both the HPLC analytical conditions with diode array detection and the extraction step for a selected sample were studied. (Author)

  16. Ambush frequency should increase over time during optimal predator search for prey.

    Science.gov (United States)

    Alpern, Steve; Fokkink, Robbert; Timmer, Marco; Casas, Jérôme

    2011-11-07

    We advance and apply the mathematical theory of search games to model the problem faced by a predator searching for prey. Two search modes are available: ambush and cruising search. Some species can adopt either mode, with their choice at a given time traditionally explained in terms of varying habitat and physiological conditions. We present an additional explanation of the observed predator alternation between these search modes, which is based on the dynamical nature of the search game they are playing: the possibility of ambush decreases the propensity of the prey to frequently change locations and thereby renders it more susceptible to the systematic cruising search portion of the strategy. This heuristic explanation is supported by showing that in a new idealized search game where the predator is allowed to ambush or search at any time, and the prey can change locations at intermittent times, optimal predator play requires an alternation (or mixture) over time of ambush and cruise search. Thus, our game is an extension of the well-studied 'Princess and Monster' search game. Search games are zero sum games, where the pay-off is the capture time and neither the Searcher nor the Hider knows the location of the other. We are able to determine the optimal mixture of the search modes when the predator uses a mixture which is constant over time, and also to determine how the mode mixture changes over time when dynamic strategies are allowed (the ambush probability increases over time). In particular, we establish the 'square root law of search predation': the optimal proportion of active search equals the square root of the fraction of the region that has not yet been explored.
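
    Stated symbolically (a direct transcription of the law as worded above; u(t) denotes the fraction of the region not yet explored at time t, and a(t) the optimal proportion of active cruising search):

```latex
% square root law of search predation, as stated in the abstract
\[
  a(t) \;=\; \sqrt{\,u(t)\,}
\]
```

    Since u(t) shrinks as the search progresses, a(t) falls over time and the ambush proportion 1 - a(t) rises, matching the title's claim.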

  17. On Mathematical Optimization for the Visualization of Frequencies and Adjacencies as Rectangular Maps

    DEFF Research Database (Denmark)

    Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero

    2018-01-01

    In this paper we address the problem of visualizing a frequency distribution and an adjacency relation attached to a set of individuals. We represent this information using a rectangular map, i.e., a subdivision of a rectangle into rectangular portions so that each portion is associated with one individual, ... representing as many adjacent individuals as adjacent rectangular portions as possible and adding as few false adjacencies, i.e., adjacencies between rectangular portions corresponding to non-adjacent individuals, as possible. We formulate this visualization problem as a Mixed Integer Linear Programming (MILP) model. We propose...

  18. Frequency of aggressive behaviors in a nationally representative sample of Iranian children and adolescents: The CASPIAN-IV study

    Directory of Open Access Journals (Sweden)

    Morteza Sadinejad

    2015-01-01

    Full Text Available Background: This study aims to explore the frequency of aggressive behaviors among a nationally representative sample of Iranian children and adolescents. Methods: This nationwide study was performed on a multi-stage sample of 6-18-year-old students living in 30 provinces of Iran. Students were asked to confidentially report the frequency of aggressive behaviors, including physical fighting, bullying and being bullied, in the previous 12 months, using the questionnaire of the World Health Organization Global School Health Survey. Results: In this cross-sectional study, 13,486 students completed the study (90.6% participation rate); they consisted of 49.2% girls and 75.6% urban residents. The mean age of participants was 12.47 years (95% confidence interval: 12.29, 12.65). In total, physical fighting was more prevalent among boys than girls (48% vs. 31%, P < 0.001). The two other behaviors, being bullied and bullying classmates, were also more frequent among boys than girls (29% vs. 25%, P < 0.001, and 20% vs. 14%, P < 0.001, respectively). Physical fighting was more prevalent among rural residents (40% vs. 39%, P = 0.61), while being bullied was more common among urban students (27% vs. 26%, P = 0.69). Conclusions: Although in this study the frequency of aggressive behaviors was lower than in many other populations, these findings still emphasize the importance of designing preventive interventions that target students, especially in early adolescence, and of increasing their awareness of aggressive behaviors. Implications for future research and aggression prevention programming are recommended.

  19. Design and optimization of stress centralized MEMS vector hydrophone with high sensitivity at low frequency

    Science.gov (United States)

    Zhang, Guojun; Ding, Junwen; Xu, Wei; Liu, Yuan; Wang, Renxin; Han, Janjun; Bai, Bing; Xue, Chenyang; Liu, Jun; Zhang, Wendong

    2018-05-01

    A micro-hydrophone based on the piezoresistive effect, the "MEMS vector hydrophone," was developed for acoustic detection applications. To improve the sensitivity of the MEMS vector hydrophone at low frequency, we report a stress-centralized MEMS vector hydrophone (SCVH) mainly used in the 20-500 Hz band. A stress concentration area was realized in the sensitive unit of the hydrophone by silicon micromachining technology. Piezoresistors were then placed in the stress concentration area for better mechanical response, thereby obtaining higher sensitivity. Static analysis was done to compare the mechanical response of three different sensitive microstructures: the SCVH, the conventional micro-silicon four-beam vector hydrophone (CFVH) and the Lollipop-shaped vector hydrophone (LVH). Fluid-structure interaction (FSI) analysis was used to determine the natural frequency of the SCVH, to ensure the measurable bandwidth. Finally, a calibration experiment in a standing-wave field was carried out to test the properties of the SCVH and verify the accuracy of the simulation. The results show that the sensitivity of the SCVH is nearly 17.2 dB higher than that of the CFVH and 7.6 dB higher than that of the LVH over 20-500 Hz.

  20. A Dynamic Programming Model for Optimizing Frequency of Time-Lapse Seismic Monitoring in Geological CO2 Storage

    Science.gov (United States)

    Bhattacharjya, D.; Mukerji, T.; Mascarenhas, O.; Weyant, J.

    2005-12-01

    Designing a cost-effective and reliable monitoring program is crucial to the success of any geological CO2 storage project. Effective design entails determining both the optimal measurement modality and the frequency of monitoring the site. Time-lapse seismic provides the best spatial coverage and resolution for reservoir monitoring. Initial results from Sleipner (Norway) have demonstrated effective monitoring of CO2 plume movement. However, time-lapse seismic is an expensive monitoring technique, especially over the long-term life of a storage project, and should be used judiciously. We present a mathematical model based on dynamic programming that can be used to estimate the site-specific optimal frequency of time-lapse surveys. The dynamics of the CO2 sequestration process are simplified and modeled as a four-state Markov process with transition probabilities. The states are M: injected CO2 safely migrating within the target zone; L: leakage from the target zone to the adjacent geosphere; R: safe migration after recovery from the leakage state; and S: seepage from the geosphere to the biosphere. The states are observed only when a monitoring survey is performed. We assume that the system may go to state S only from state L. We also assume that once observed to be in state L, remedial measures are always taken to bring it back to state R. Remediation benefits are captured by calculating the expected penalty if CO2 seeped into the biosphere. There is a trade-off between the conflicting objectives of minimizing the discounted costs of performing the next time-lapse survey and minimizing the risk of seepage and its associated costly consequences. A survey performed earlier would spot leakage earlier, remedial measures would be deployed earlier, and costs attributed to excessive seepage would be saved. On the other hand, there are also costs for the survey and remedial measures. The problem is solved numerically using Bellman's optimality principle of dynamic programming.
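
    A toy version of this trade-off can be written down directly: propagate the four-state chain forward, charge a penalty for probability mass entering the seepage state, charge a cost per survey, and compare fixed survey intervals. This evaluates fixed schedules rather than solving the full Bellman recursion, and all probabilities and costs are hypothetical:

```python
import numpy as np

# four states from the abstract: M (safe migration), L (leakage),
# R (recovered), S (seepage); per-year transition probabilities are made up
M, L, R, S = range(4)
P = np.array([[0.97, 0.03, 0.00, 0.00],
              [0.00, 0.90, 0.00, 0.10],   # unobserved leakage may seep
              [0.98, 0.02, 0.00, 0.00],
              [0.00, 0.00, 0.00, 1.00]])  # seepage is absorbing
survey_cost, seep_penalty, gamma = 1.0, 50.0, 0.95

def expected_cost(interval, horizon=100):
    """Expected discounted cost when surveying every `interval` years;
    a survey that finds state L triggers remediation (L -> R)."""
    p = np.zeros(4); p[M] = 1.0
    cost = 0.0
    for year in range(1, horizon + 1):
        new_seep = p @ P[:, S] - p[S]          # fresh mass entering S this year
        cost += gamma**year * seep_penalty * new_seep
        p = p @ P
        if year % interval == 0:               # survey, then remediate
            cost += gamma**year * survey_cost
            p[R] += p[L]; p[L] = 0.0
    return cost

for k in (1, 2, 5, 10):
    print(f"survey every {k:2d} yr: expected cost {expected_cost(k):6.2f}")
```

    The dynamic-programming model of the paper generalizes this by optimizing the survey decision state-by-state instead of fixing a single interval.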

  1. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    Science.gov (United States)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  2. Optimal control of a high-frequency class-D amplifier

    DEFF Research Database (Denmark)

    Dahl, Nicolai J.; Iversen, Niels Elkjær; Knott, Arnold

    2018-01-01

    Control loops have been used with switch-mode audio amplifiers to improve the sound quality of the amplifier. Because these amplifiers use a high-frequency modulation, precautions must be taken in the controller design. Further, the quality factor of the output filter can have a great impact on the controller's capabilities to suppress noise and track the audio signal. In this paper, design methods for modern control are presented. The control method proves to easily overcome the challenge of designing a well-performing controller when the output filter has a high quality factor. The results show that the controller is able to produce a clear improvement in Total Harmonic Distortion, with up to a 30-times improvement compared to open loop, and a clear reduction in noise. This places the audio quality on par with current solutions.

  3. Using an integrated automated system to optimize retention and increase frequency of blood donations.

    Science.gov (United States)

    Whitney, J Garrett; Hall, Robert F

    2010-07-01

    This study examines the impact of an integrated, automated phone system on reinforcing retention and increasing the frequency of donations among blood donors. Developed by incorporating data gathered over the past 7 years, the system uses computerized phone messaging to contact blood donors with individualized, multilevel notifications. Donors are contacted at planned intervals to acknowledge and recognize their donations, informed where their blood was sent, asked to participate in a survey, and reminded when they are eligible to donate again. The report statistically evaluates the impact of the various components of the system on donor retention and blood donations and quantifies the fiscal advantages to blood centers. By using the information and support systems provided by the automated services and then incorporating the phlebotomists and recruiters to reinforce donor retention, both retention and donations increase. © 2010 American Association of Blood Banks.

  4. Identification of hydrologic and geochemical pathways using high frequency sampling, REE aqueous sampling and soil characterization at Koiliaris Critical Zone Observatory, Crete

    Energy Technology Data Exchange (ETDEWEB)

    Moraetis, Daniel, E-mail: moraetis@mred.tuc.gr [Department of Environmental Engineering, Technical University of Crete, 73100 Chania (Greece); Stamati, Fotini; Kotronakis, Manolis; Fragia, Tasoula; Paranychnianakis, Nikolaos; Nikolaidis, Nikolaos P. [Department of Environmental Engineering, Technical University of Crete, 73100 Chania (Greece)

    2011-06-15

    Highlights: > Identification of hydrological and geochemical pathways within a complex watershed. > Increased N-NO3 concentrations and E.C. values in river water during flash flood events. > Soil degradation and its impact on water infiltration within the Koiliaris watershed. > Analysis of Rare Earth Elements in water bodies for identification of karstic water. - Abstract: Koiliaris River watershed is a Critical Zone Observatory that represents severely degraded soils due to intensive agricultural activities and biophysical factors. It has typical Mediterranean soils under the imminent threat of desertification, which is expected to intensify due to projected climate change. High-frequency hydro-chemical monitoring, with targeted sampling for Rare Earth Element (REE) analysis of different water bodies, and geochemical characterization of soils were used for the identification of hydrologic and geochemical pathways. The high-frequency monitoring of water chemistry data highlighted the chemical alterations of Koiliaris River water during flash flood events. Soil physical and chemical characterization surveys were used to identify erodibility patterns within the watershed and the influence of soils on surface and ground water chemistry. The methodology presented can be used to identify the impacts of degraded soils on surface and ground water quality, as well as in the design of methods to minimize the impacts of land use practices.

  5. A real-frequency solver for the Anderson impurity model based on bath optimization and cluster perturbation theory

    Science.gov (United States)

    Zingl, Manuel; Nuss, Martin; Bauernfeind, Daniel; Aichhorn, Markus

    2018-05-01

    Recently solvers for the Anderson impurity model (AIM) working directly on the real-frequency axis have gained much interest. A simple and yet frequently used impurity solver is exact diagonalization (ED), which is based on a discretization of the AIM bath degrees of freedom. Usually, the bath parameters cannot be obtained directly on the real-frequency axis, but have to be determined by a fit procedure on the Matsubara axis. In this work we present an approach where the bath degrees of freedom are first discretized directly on the real-frequency axis using a large number of bath sites (≈ 50). Then, the bath is optimized by unitary transformations such that it separates into two parts that are weakly coupled. One part contains the impurity site and its interacting Green's functions can be determined with ED. The other (larger) part is a non-interacting system containing all the remaining bath sites. Finally, the Green's function of the full AIM is calculated via coupling these two parts with cluster perturbation theory.

  6. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
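
    A minimal sketch of the idea under stated assumptions: hyper-parameter candidates are first evaluated on small training subsets, and a Gaussian-process surrogate proposes the next candidate as the sample size grows. Only one hyper-parameter of one algorithm is searched here, with scikit-learn standing in for the paper's implementation; the data set and search range are hypothetical:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# synthetic "clinical" data; the searched hyper-parameter is log10(alpha)
X = rng.normal(size=(20000, 20))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=20000) > 0).astype(int)

def error(log_alpha, n):
    """Cross-validated error rate on a random subsample of n training rows."""
    sub = rng.choice(len(X), n, replace=False)
    clf = SGDClassifier(alpha=10.0 ** log_alpha, random_state=0)
    return 1.0 - cross_val_score(clf, X[sub], y[sub], cv=3).mean()

evals = [(a, error(a, 500)) for a in (-6.0, -4.0, -2.0)]   # cheap warm-up runs
for n in (2000, 8000):                                     # progressive samples
    gp = GaussianProcessRegressor(normalize_y=True)
    gp.fit(np.array([[a] for a, _ in evals]), [e for _, e in evals])
    grid = np.linspace(-7.0, -1.0, 61).reshape(-1, 1)
    mu, sd = gp.predict(grid, return_std=True)
    a_next = float(grid[np.argmin(mu - sd), 0])            # optimistic pick
    evals.append((a_next, error(a_next, n)))
print("best (log10 alpha, error):", min(evals, key=lambda t: t[1]))
```

    Pooling evaluations from different sample sizes into one surrogate is a deliberate simplification; the paper's method additionally models how error estimates improve as the sample grows.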

  7. Modeling of frequency-domain scalar wave equation with the average-derivative optimal scheme based on a multigrid-preconditioned iterative solver

    Science.gov (United States)

    Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue

    2018-01-01

    Efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discretization scheme and an appropriate solution method. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage savings for the system of linear equations and flexibility for arbitrary directional sampling intervals. However, using an LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BiCGSTAB iterative solver suited to the average-derivative optimal scheme. The choice of the preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computations with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where a significant reduction in computer memory and an improvement in computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that unequal directional sampling intervals can weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, reduce its accuracy in some cases, which implies the need for reasonable control of the directional sampling intervals in the discretization.
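
    The solver structure described here can be sketched as follows, assuming pyamg is available for the multigrid hierarchy. A standard 5-point stencil and a real shifted operator -Δ + k²I are used as simple stand-ins for the paper's average-derivative scheme and carefully analyzed preconditioner:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, LinearOperator
import pyamg   # assumed available; supplies the algebraic multigrid hierarchy

# 2D Helmholtz operator  -Laplacian(u) - k^2 u = f  on an n x n grid
# (real arithmetic; absorbing boundaries omitted for brevity)
n, h, k = 200, 10.0, 0.004
I = sp.identity(n, format="csr")
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2
lap = sp.kron(I, T) + sp.kron(T, I)
A = (lap - k**2 * sp.identity(n * n)).tocsr()

# multigrid preconditioner built on the definite shifted operator
# -Laplacian + k^2 I, applied approximately inside each BiCGSTAB step
ml = pyamg.smoothed_aggregation_solver((lap + k**2 * sp.identity(n * n)).tocsr())
M = LinearOperator(A.shape, matvec=lambda v: ml.solve(v, tol=1e-3))

b = np.zeros(n * n)
b[(n // 2) * n + n // 2] = 1.0 / h**2           # point source at the centre
u, info = bicgstab(A, b, M=M, maxiter=300)
print("converged" if info == 0 else f"bicgstab info = {info}")
```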

  8. Implication of the first decision on visual information-sampling in the spatial frequency domain in pulmonary nodule recognition

    Science.gov (United States)

    Pietrzyk, Mariusz W.; Manning, David; Donovan, Tim; Dix, Alan

    2010-02-01

    Aim: To investigate the impact on visual sampling strategy and pulmonary nodule recognition of the image-based properties of background locations in dwelled regions where the first overt decision was made. Background: Recent studies in mammography show that the first overt decision (TP or FP) has an influence on further image reading, including the correctness of the following decisions. Furthermore, a correlation between the spatial frequency properties of the local background at decision sites and the correctness of the first decision has been reported. Methods: Subjects with different levels of radiological experience were eye-tracked during detection of pulmonary nodules in PA chest radiographs. The number of outcomes and the overall quality of performance were analysed in terms of the cases where correct or incorrect decisions were made. JAFROC methodology was applied. The spatial frequency properties of selected local backgrounds related to particular decisions were studied. ANOVA was used to compare the logarithmic values of the energy carried by non-redundant stationary wavelet packet coefficients. Results: A strong correlation was found between the number of TPs as a first decision and the JAFROC score (r = 0.74). The number of FPs as a first decision was negatively correlated with JAFROC (r = -0.75). Moreover, the spatial frequency profiles of the outcomes depended on the correctness of the first choice.

  9. Testing of Alignment Parameters for Ancient Samples: Evaluating and Optimizing Mapping Parameters for Ancient Samples Using the TAPAS Tool

    Directory of Open Access Journals (Sweden)

    Ulrike H. Taron

    2018-03-01

    Full Text Available High-throughput sequence data retrieved from ancient or other degraded samples have led to unprecedented insights into the evolutionary history of many species, but the analysis of such sequences also poses specific computational challenges. The most commonly used approach involves mapping sequence reads to a reference genome. However, this process becomes increasingly challenging with an elevated genetic distance between target and reference or with the presence of contaminant sequences with high sequence similarity to the target species. The evaluation and testing of mapping efficiency and stringency are thus paramount for the reliable identification and analysis of ancient sequences. In this paper, we present 'TAPAS' (Testing of Alignment Parameters for Ancient Samples), a computational tool that enables the systematic testing of mapping tools for ancient data by simulating sequence data reflecting the properties of an ancient dataset and performing test runs using the mapping software and parameter settings of interest. We showcase TAPAS by using it to assess and improve the mapping strategy for a degraded sample from a banded linsang (Prionodon linsang), for which no closely related reference is currently available. This enables a 1.8-fold increase in the number of mapped reads without sacrificing mapping specificity. The increase of mapped reads effectively reduces the need for additional sequencing, thus making more economical use of time, resources, and sample material.

  10. Determining the optimal system-specific cut-off frequencies for filtering in-vitro upper extremity impact force and acceleration data by residual analysis.

    Science.gov (United States)

    Burkhart, Timothy A; Dunning, Cynthia E; Andrews, David M

    2011-10-13

    The fundamental nature of impact testing requires a cautious approach to signal processing, to minimize noise while preserving important signal information. However, few recommendations exist regarding the most suitable filter frequency cut-offs to achieve these goals. Therefore, the purpose of this investigation is twofold: to illustrate how residual analysis can be utilized to quantify optimal system-specific filter cut-off frequencies for force, moment, and acceleration data resulting from in-vitro upper extremity impacts, and to show how optimal cut-off frequencies can vary based on impact condition intensity. Eight human cadaver radii specimens were impacted with a pneumatic impact testing device at impact energies that increased from 20J, in 10J increments, until fracture occurred. The optimal filter cut-off frequency for pre-fracture and fracture trials was determined with a residual analysis performed on all force and acceleration waveforms. Force and acceleration data were filtered with a dual pass, 4th order Butterworth filter at each of 14 different cut-off values ranging from 60Hz to 1500Hz. Mean (SD) pre-fracture and fracture optimal cut-off frequencies for the force variables were 605.8 (82.7)Hz and 513.9 (79.5)Hz, respectively. Differences in the optimal cut-off frequency were also found between signals (e.g. Fx (medial-lateral), Fy (superior-inferior), Fz (anterior-posterior)) within the same test. These optimal cut-off frequencies do not universally agree with the recommendations of filtering all upper extremity impact data using a cut-off frequency of 600Hz. This highlights the importance of quantifying the filter frequency cut-offs specific to the instrumentation and experimental set-up. Improper digital filtering may lead to erroneous results and a lack of standardized approaches makes it difficult to compare findings of in-vitro dynamic testing between laboratories. Copyright © 2011 Elsevier Ltd. All rights reserved.
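
    A sketch of the residual-analysis procedure on a synthetic impact signal follows (hypothetical sampling rate and waveform; the cut-off range follows the study). The optimal cut-off is read off where the residual curve meets the extrapolated noise line, which this sketch leaves to inspection:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def residual_analysis(signal, fs, cutoffs):
    """RMS residual between the raw signal and its dual-pass Butterworth
    filtered version (2nd order passed forward and backward, i.e. a net
    4th-order zero-lag filter), over a range of cut-off frequencies."""
    out = []
    for fc in cutoffs:
        b, a = butter(2, fc / (fs / 2.0), btype="low")
        out.append(np.sqrt(np.mean((signal - filtfilt(b, a, signal)) ** 2)))
    return np.array(out)

# toy impact-like trace: decaying 400 Hz burst plus broadband noise
fs = 20000.0                                   # hypothetical sampling rate (Hz)
t = np.arange(0.0, 0.1, 1.0 / fs)
force = 1000.0 * np.exp(-40.0 * t) * np.sin(2 * np.pi * 400.0 * t)
noisy = force + np.random.default_rng(0).normal(0.0, 5.0, t.size)

cutoffs = np.arange(60, 1501, 60)              # 60-1500 Hz, as in the study
for fc, r in zip(cutoffs, residual_analysis(noisy, fs, cutoffs)):
    print(f"{fc:5d} Hz  residual RMS = {r:8.3f} N")
```

    Run per channel and per impact condition, this reproduces the study's observation that the knee of the residual curve, and hence the optimal cut-off, is system- and intensity-specific.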

  11. Optimal degree of protonation for ¹H detection of aliphatic sites in randomly deuterated proteins as a function of the MAS frequency

    Energy Technology Data Exchange (ETDEWEB)

    Asami, Sam [Helmholtz-Zentrum Muenchen (HMGU), Deutsches Forschungszentrum fuer Gesundheit und Umwelt (HMGU) (Germany); Szekely, Kathrin; Schanda, Paul; Meier, Beat H. [Eidgenoessische Technische Hochschule Zuerich (ETH Zuerich) (Switzerland); Reif, Bernd, E-mail: reif@tum.de [Helmholtz-Zentrum Muenchen (HMGU), Deutsches Forschungszentrum fuer Gesundheit und Umwelt (HMGU) (Germany)

    2012-10-15

    The ¹H dipolar network, which is the major obstacle for applying proton detection in the solid state, can be reduced by deuteration, employing the RAP (Reduced Adjoining Protonation) labeling scheme, which yields random protonation at non-exchangeable sites. We present here a systematic study on the optimal degree of random sidechain protonation in RAP samples as a function of the MAS (magic angle spinning) frequency. In particular, we compare ¹H sensitivity and linewidth of a microcrystalline protein, the SH3 domain of chicken α-spectrin, for samples prepared with 5-25% H₂O in the E. coli growth medium, in the MAS frequency range of 20-60 kHz. At an external field of 19.96 T (850 MHz), we find that using a proton concentration between 15 and 25% in the M9 medium yields the best compromise in terms of sensitivity and resolution, with an achievable average ¹H linewidth on the order of 40-50 Hz. Comparing sensitivities at a MAS frequency of 60 versus 20 kHz, a gain in sensitivity by a factor of 4-4.5 is observed in INEPT-based ¹H-detected 1D ¹H,¹³C correlation experiments. In total, we find that spectra recorded with a 1.3 mm rotor at 60 kHz have almost the same sensitivity as spectra recorded with a fully packed 3.2 mm rotor at 20 kHz, even though ≈20× less material is employed. The improved sensitivity is attributed to ¹H line narrowing due to fast MAS and to the increased efficiency of the 1.3 mm coil.

  12. Application of CRAFT (complete reduction to amplitude frequency table) in nonuniformly sampled (NUS) 2D NMR data processing.

    Science.gov (United States)

    Krishnamurthy, Krish; Hari, Natarajan

    2017-09-15

    The recently published CRAFT (complete reduction to amplitude frequency table) technique converts the raw FID data (i.e., time-domain data) into a table of frequencies, amplitudes, decay rate constants, and phases. It offers an alternate approach to decimate time-domain data with a minimal preprocessing step. It has been shown that application of the CRAFT technique to process the t1 dimension of 2D data significantly improved the detectable resolution through its ability to analyze without the ubiquitous apodization of extensively zero-filled data. It was noted earlier that CRAFT did not resolve sinusoids that were not already resolvable in the time domain (i.e., t1max-dependent resolution). We present a combined NUS-IST-CRAFT approach wherein the NUS acquisition technique (sparse sampling) increases the intrinsic resolution in the time domain (by increasing t1max), IST fills the gaps in the sparse sampling, and CRAFT processing extracts the information without loss due to severe apodization. NUS and CRAFT are thus complementary techniques for improving intrinsic and usable resolution. We show that significant improvement can be achieved with this combination over conventional NUS-IST processing. With reasonable sensitivity, the models can be extended to significantly higher t1max to generate an indirect-DEPT spectrum that rivals the direct-observe counterpart. Copyright © 2017 John Wiley & Sons, Ltd.
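
    The notion of reducing an FID to an amplitude-frequency table can be illustrated in miniature: fit a single exponentially damped sinusoid to a synthetic FID and recover (amplitude, frequency, decay rate, phase). CRAFT itself decomposes multi-component data with a far more sophisticated analysis; this one-component toy only conveys the parametrization:

```python
import numpy as np
from scipy.optimize import curve_fit

def fid_model(t, amp, freq, r2, phase):
    """Single exponentially damped sinusoid."""
    return amp * np.exp(-r2 * t) * np.cos(2 * np.pi * freq * t + phase)

rng = np.random.default_rng(3)
t = np.arange(0.0, 0.5, 1e-3)                   # 500 points, dt = 1 ms
true = (1.0, 55.0, 6.0, 0.4)                    # amp, Hz, s^-1, rad
fid = fid_model(t, *true) + rng.normal(0.0, 0.02, t.size)

# coarse frequency from the FFT peak, then refine all four parameters
freqs = np.fft.rfftfreq(t.size, d=1e-3)
f0 = freqs[np.argmax(np.abs(np.fft.rfft(fid)))]
popt, _ = curve_fit(fid_model, t, fid, p0=(1.0, f0, 5.0, 0.0))
print("amp, freq, decay, phase =", np.round(popt, 3))
```

    The fitted tuple is one row of an "amplitude frequency table"; because the model is parametric in time, its resolution is not limited by apodization or zero-filling choices, which is the property the abstract exploits.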

  13. Differences in Orgasm Frequency Among Gay, Lesbian, Bisexual, and Heterosexual Men and Women in a U.S. National Sample.

    Science.gov (United States)

    Frederick, David A; John, H Kate St; Garcia, Justin R; Lloyd, Elisabeth A

    2018-01-01

    There is a notable gap between heterosexual men and women in frequency of orgasm during sex. Little is known, however, about sexual orientation differences in orgasm frequency. We examined how over 30 different traits or behaviors were associated with frequency of orgasm when sexually intimate during the past month. We analyzed a large US sample of adults (N = 52,588) who identified as heterosexual men (n = 26,032), gay men (n = 452), bisexual men (n = 550), lesbian women (n = 340), bisexual women (n = 1112), and heterosexual women (n = 24,102). Heterosexual men were most likely to say they usually-always orgasmed when sexually intimate (95%), followed by gay men (89%), bisexual men (88%), lesbian women (86%), bisexual women (66%), and heterosexual women (65%). Compared to women who orgasmed less frequently, women who orgasmed more frequently were more likely to: receive more oral sex, have longer duration of last sex, be more satisfied with their relationship, ask for what they want in bed, praise their partner for something they did in bed, call/email to tease about doing something sexual, wear sexy lingerie, try new sexual positions, anal stimulation, act out fantasies, incorporate sexy talk, and express love during sex. Women were more likely to orgasm if their last sexual encounter included deep kissing, manual genital stimulation, and/or oral sex in addition to vaginal intercourse. We consider sociocultural and evolutionary explanations for these orgasm gaps. The results suggest a variety of behaviors couples can try to increase orgasm frequency.

  14. Optimization of measurement methods for a multi-frequency electromagnetic field from mobile phone base station using broadband EMF meter

    Directory of Open Access Journals (Sweden)

    Paweł Bieńkowski

    2015-10-01

    Full Text Available Background: This paper presents the characteristics of the mobile phone base station (BS) as an electromagnetic field (EMF) source. The most common system configurations and their construction are described. The parameters of the radiated EMF are discussed in the context of the access methods and other parameters of the radio transmission. Attention is also paid to the antennas used in this technology. Material and Methods: The influence of individual components of a multi-frequency EMF, most commonly found in BS surroundings, on the resultant EMF strength value indicated by popular broadband EMF meters was analyzed. Examples of the metrological characteristics of the most common EMF probes and 2 measurement scenarios for a multisystem base station, with and without microwave relays, are shown. Results: The presented method for measuring the multi-frequency EMF using 2 broadband probes allows for significant minimization of measurement uncertainty. Equations and formulas that can be used to calculate the actual EMF intensity from multi-frequency sources are shown. They have been verified in laboratory conditions on a specific standard setup as well as in real conditions in a survey of an existing base station with microwave relays. Conclusions: The presented measurement methodology for multi-frequency EMF from BS with microwave relays was validated both in laboratory and real conditions. It has been proven that the described measurement methodology is the optimal approach to the evaluation of EMF exposure in BS surroundings. Alternative approaches with much greater uncertainty (precaution method) or a more complex measuring procedure (sources exclusion method) are also presented. Med Pr 2015;66(5):701–712
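
    The combination rule behind such two-probe measurements is typically the root-sum-of-squares of the component field strengths, together with a limit-weighted exposure quotient when the components fall under different frequency-dependent limits. This is the standard relation for components at distinct frequencies; the paper's exact formulas may differ in detail:

```latex
% resultant field strength and weighted exposure quotient, with E_i the
% field of the i-th frequency component and L_i the limit at that frequency
\[
  E_{\mathrm{res}} = \sqrt{\sum_{i=1}^{N} E_i^{2}},
  \qquad
  W = \sum_{i=1}^{N} \left( \frac{E_i}{L_i} \right)^{2} \le 1 .
\]
```

    A broadband meter reports something close to E_res within its probe's band, which is why splitting the spectrum between two probes and combining their readings reduces the uncertainty described above.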

  15. [Optimization of measurement methods for a multi-frequency electromagnetic field from mobile phone base station using broadband EMF meter].

    Science.gov (United States)

    Bieńkowski, Paweł; Cała, Paweł; Zubrzak, Bartłomiej

    2015-01-01

    This paper presents the characteristics of the mobile phone base station (BS) as an electromagnetic field (EMF) source. The most common system configurations and their construction are described. The parameters of the radiated EMF are discussed in the context of the access methods and other parameters of the radio transmission. Attention is also paid to the antennas used in this technology. The influence of individual components of a multi-frequency EMF, most commonly found in BS surroundings, on the resultant EMF strength value indicated by popular broadband EMF meters was analyzed. Examples of the metrological characteristics of the most common EMF probes and 2 measurement scenarios for a multisystem base station, with and without microwave relays, are shown. The presented method for measuring the multi-frequency EMF using 2 broadband probes allows for significant minimization of measurement uncertainty. Equations and formulas that can be used to calculate the actual EMF intensity from multi-frequency sources are shown. They have been verified in laboratory conditions on a specific standard setup as well as in real conditions in a survey of an existing base station with microwave relays. The presented measurement methodology for multi-frequency EMF from BS with microwave relays was validated both in laboratory and real conditions. It has been proven that the described measurement methodology is the optimal approach to the evaluation of EMF exposure in BS surroundings. Alternative approaches with much greater uncertainty (precaution method) or a more complex measuring procedure (sources exclusion method) are also presented. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  16. Calculation of frequency of optimal inspection in non-notice inspection game

    International Nuclear Information System (INIS)

    Kumakura, Shinichi; Gotoh, Yoshiki; Kikuchi, Masahiro

    2011-01-01

    We consider a non-notice inspection game between an inspection party, who verifies the absence of diversion of nuclear materials and misuse of the nuclear facility, and a facility operator, who may attempt them. In the game, the payoff for each player, the inspection party and the facility operator, is composed of various elements (parameters) such as facility type, type of nuclear material, number of inspections and others. Their payoffs consist of profits and costs (negative profits). Because of the random nature of non-notice inspection, its deterrence effect and the number of inspections can affect their payoffs. In this paper, the payoffs, taking the above inspection environment into consideration, are represented as functions of the number of inspections. The optimal number is then calculated from a condition on the payoffs with respect to the number of inspections. Because the derived number depends on each parameter of the inspection environment, a comparative statics analysis is performed to observe how the equilibrium number of inspections changes as these parameters, including the deterrence effect, are varied. Based on the analysis results, necessary conditions for reducing the number of inspections while maintaining the inspection effect are pointed out. (author)
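
    To make the payoff reasoning above concrete, the following sketch computes the smallest inspection number that deters diversion and shows the comparative statics with respect to the deterrence effect. All payoff parameters (per-inspection detection probability, diversion gain, sanction) are hypothetical, not taken from the paper:

```python
# Minimal sketch: smallest inspection number n that deters diversion,
# i.e. makes the operator's expected payoff from diverting fall below
# the payoff of legal behaviour (normalised to 0). Parameters are
# hypothetical.

def detection_prob(n: int, d: float) -> float:
    """P(at least one of n random inspections detects a diversion),
    assuming independent inspections with per-inspection detection d."""
    return 1.0 - (1.0 - d) ** n

def divert_payoff(n: int, d: float, gain: float = 10.0,
                  sanction: float = 15.0) -> float:
    p = detection_prob(n, d)
    return (1.0 - p) * gain - p * sanction

def optimal_n(d: float, legal_payoff: float = 0.0) -> int:
    return next(n for n in range(1, 1000)
                if divert_payoff(n, d) <= legal_payoff)

# Comparative statics: the equilibrium n falls as deterrence grows.
for d in (0.05, 0.15, 0.35):
    print(f"per-inspection detection {d:.2f}: optimal n = {optimal_n(d)}")
```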

  17. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    Directory of Open Access Journals (Sweden)

    A. Chebbi

    2013-10-01

    Full Text Available Based on rainfall intensity-duration-frequency (IDF) curves, fitted in several locations of a given area, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables, and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short- and a long-term horizon were studied, and optimal networks were identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns a hypothetical network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World
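
    As a rough illustration of the variance-reduction idea described above, the sketch below minimizes a surrogate of the mean kriging variance (the mean exponential-variogram value at the distance to the nearest gauge) by simulated annealing. The variogram parameters, grids, and station coordinates are all hypothetical stand-ins, not the Tunisian network data:

```python
import math
import random

# Surrogate objective: mean exponential-variogram value at the distance
# from each grid point to its nearest gauge (a stand-in for the mean
# kriging variance). Simulated annealing relocates the k new gauges.

def gamma(h, sill=1.0, rng_m=40.0):             # exponential variogram
    return sill * (1.0 - math.exp(-3.0 * h / rng_m))

def mean_variance(gauges, grid):
    return sum(min(gamma(math.dist(p, g)) for g in gauges)
               for p in grid) / len(grid)

def anneal(existing, candidates, grid, k=3, steps=2000, t0=0.1):
    new = random.sample(candidates, k)
    cur = mean_variance(existing + new, grid)
    best, best_new = cur, list(new)
    for i in range(steps):
        t = t0 * (1.0 - i / steps)              # linear cooling schedule
        trial = list(new)
        trial[random.randrange(k)] = random.choice(candidates)
        val = mean_variance(existing + trial, grid)
        if val < cur or random.random() < math.exp(-(val - cur) / max(t, 1e-9)):
            new, cur = trial, val
            if cur < best:
                best, best_new = cur, list(new)
    return best_new, best

grid = [(x, y) for x in range(0, 100, 10) for y in range(0, 100, 10)]
existing = [(10, 10), (50, 80), (90, 20)]       # hypothetical network
sites, obj = anneal(existing, grid, grid, k=3)
print("new gauge sites:", sites, "surrogate variance:", round(obj, 3))
```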

  18. Determination of total concentration of chemically labeled metabolites as a means of metabolome sample normalization and sample loading optimization in mass spectrometry-based metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2012-12-18

    For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting materials from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting materials for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at the detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with (12)C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with (13)C-dansylation. Equal amounts of the (12)C
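
    A minimal sketch of the normalization step described above, with hypothetical concentrations: given LC-UV estimates of the total labeled-metabolite concentration in each sample, compute the volume each sample should contribute so that equal metabolite amounts enter the pool:

```python
# Equal-amount pooling: LC-UV gives the total labeled-metabolite
# concentration of each sample; pool volumes are set so every sample
# contributes the same amount. Numbers are hypothetical.

concentrations_mM = {"urine_01": 2.1, "urine_02": 4.7, "urine_03": 3.3}
target_amount_umol = 1.0          # equal amount each sample contributes

for sample, c in concentrations_mM.items():
    volume_uL = target_amount_umol / c * 1000.0   # umol / (umol/mL) -> uL
    print(f"{sample}: pool {volume_uL:.0f} uL")
```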

  19. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    Science.gov (United States)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until the desired number of alternatives has been generated. The key step at each iteration is to run a random distance along the line through the current hit point in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null-space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line; the algorithm can move in one
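
    The following sketch illustrates the hit-and-run procedure on a toy problem; the objective, box bounds, near-optimal threshold, and starting point are hypothetical, and the non-linear constraint is handled with a shrinking, slice-style bracket along each random line as described above:

```python
import numpy as np

# Hit-and-run over {x in [lo, hi]^d : f(x) <= threshold}; the non-linear
# inequality is handled by a shrinking, slice-style bracket along each
# random line. Objective, bounds, and threshold are toy stand-ins.

rng = np.random.default_rng(0)
d, lo, hi = 2, -2.0, 2.0
f = lambda x: float(np.sum(x**2) + np.sum(np.sin(3.0 * x)))
threshold = 0.5                         # e.g. f* plus a tolerance
feasible = lambda x: f(x) <= threshold

def line_bounds(x, u):
    """Interval [tmin, tmax] keeping x + t*u inside the box bounds."""
    t1, t2 = (lo - x) / u, (hi - x) / u
    return float(np.minimum(t1, t2).max()), float(np.maximum(t1, t2).min())

def hit_and_run(x, n):
    out = [x.copy()]
    for _ in range(n):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)          # random direction
        tmin, tmax = line_bounds(x, u)
        while True:                     # shrink until a feasible hit
            t = rng.uniform(tmin, tmax)
            y = x + t * u
            if feasible(y):
                x = y
                break
            if t < 0.0:
                tmin = t
            else:
                tmax = t
        out.append(x.copy())
    return np.array(out)

samples = hit_and_run(np.zeros(d), 500)   # f(0) = 0 <= threshold
print(samples.shape, samples.mean(axis=0))
```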

  20. Evolution of concentration-discharge relations revealed by high frequency diurnal sampling of stream water during spring snowmelt

    Science.gov (United States)

    Olshansky, Y.; White, A. M.; Thompson, M.; Moravec, B. G.; McIntosh, J. C.; Chorover, J.

    2017-12-01

    Concentration-discharge (C-Q) relations contain potentially important information on critical zone (CZ) processes, including weathering reactions, water flow paths and nutrient export. To examine C-Q relations in a small (3.3 km2) headwater catchment, La Jara Creek in the Jemez River Basin Critical Zone Observatory, daily diurnal stream water samples were collected during the 2017 spring snowmelt from two flumes located at the outlet of La Jara Creek and at a high-elevation zero-order basin within the catchment. Previous studies from this site (McIntosh et al., 2017) suggested that high-frequency sampling was needed to improve the interpretation of C-Q relations. The dense sampling covered two ascending and two descending limbs of the snowmelt hydrograph, from March 1 to May 15, 2017. While Na showed an inverse correlation (dilution) with discharge, most other solutes exhibited either positive (concentration) trends (K, Mg, Fe, Al, dissolved organic carbon) or chemostatic trends (Ca, Mn, Si, dissolved inorganic carbon and dissolved nitrogen). Hysteresis in the C-Q relation was most pronounced for bio-cycled cations (K, Mg) and for Fe, which exhibited concentration during the first ascending limb followed by a chemostatic trend. A pulsed increase in Si concentration immediately after the first ascending limb in both flumes suggests mixing of deep groundwater with surface water. A continual increase in Ge/Si concentrations followed by a rapid decrease after the second rising limb may suggest a fast transition between soil water and groundwater dominating the stream flow. Fourier transform infrared spectroscopy of selected samples across the hydrograph demonstrated pronounced changes in dissolved organic matter molecular composition as the spring snowmelt advanced. X-ray micro-spectroscopy of colloidal material isolated from the collected water samples indicated a significant role for organic matter in the transport of inorganic colloids. Analyses of high

  1. The Proteome of Ulcerative Colitis in Colon Biopsies from Adults - Optimized Sample Preparation and Comparison with Healthy Controls.

    Science.gov (United States)

    Schniers, Armin; Anderssen, Endre; Fenton, Christopher Graham; Goll, Rasmus; Pasing, Yvonne; Paulssen, Ruth Hracky; Florholmen, Jon; Hansen, Terkel

    2017-12-01

    The purpose of the study was to optimize the sample preparation and to further use the improved sample preparation to identify proteome differences between inflamed ulcerative colitis tissue from untreated adults and healthy controls. To optimize the sample preparation, we studied the effect of adding different detergents to a urea-containing lysis buffer for a Lys-C/trypsin tandem digestion. With the optimized method, we prepared clinical samples from six ulcerative colitis patients and six healthy controls and analysed them by LC-MS/MS. We examined the acquired data to identify differences between the states. We improved the protein extraction and the number of protein identifications by utilizing a urea and sodium deoxycholate containing buffer. Comparing ulcerative colitis and healthy tissue, we found 168 of the 2366 identified proteins to be differentially abundant. Inflammatory proteins are more abundant in ulcerative colitis, while proteins related to anion transport and mucus production are less abundant. A high proportion of S100 proteins is differentially abundant, notably with both up-regulated and down-regulated proteins. The optimized sample preparation method will improve future proteomic studies on colon mucosa. The observed protein abundance changes and their enrichment in various groups improve our understanding of ulcerative colitis at the protein level. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Development and optimization of the determination of pharmaceuticals in water samples by SPE and HPLC with diode-array detection.

    Science.gov (United States)

    Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra

    2013-09-01

    This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges to extract the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, the elution solvent, and the sorbent mass were optimized. In addition to the optimization of the SPE procedure, the optimal HPLC column was selected among columns with different stationary phases from different manufacturers. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases except ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good intra- and interday precision with RSDs below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, these methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
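
    A minimal sketch of the Pareto-front selection step on hypothetical (energy, score) pairs: keep the min-min non-dominated conformations and prefer the lowest-energy member as a replica-exchange candidate. This is only an illustration of the selection idea, not the RosettaLigand pipeline itself:

```python
# Min-min Pareto filtering over hypothetical (energy, score) pairs;
# the lowest-energy member of the front is preferred for exchange.

def pareto_front(points):
    """Non-dominated subset when both objectives are minimised."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                       for q in points)]

conformations = [(-12.1, 3.4), (-10.5, 2.1), (-13.0, 5.0),
                 (-9.8, 1.9), (-11.0, 4.0)]          # (energy, score)
front = sorted(pareto_front(conformations))
print("Pareto front:", front)
print("exchange candidate (lowest energy):", front[0])
```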

  4. Optimized fan-shaped chiral metamaterial as an ultrathin narrow-band circular polarizer at visible frequencies

    Science.gov (United States)

    He, Yizhuo; Wang, Xinghai; Ingram, Whitney; Ai, Bin; Zhao, Yiping

    2018-04-01

    Chiral metamaterials have a remarkable ability to manipulate the circular polarizations of light, which can be utilized to build ultrathin circular polarizers. Here we build a narrow-band circular polarizer at visible frequencies based on plasmonic fan-shaped chiral nanostructures. In order to achieve the best optical performance, we systematically investigate how different fabrication factors affect the chiral optical response of the fan-shaped chiral nanostructures, including the incident angle of vapor deposition, nanostructure thickness, and post-deposition annealing. The optimized fan-shaped nanostructures show two narrow bands for different circular polarizations, with maximum extinction ratios of 7.5 and 6.9 located at wavelengths of 687 nm and 774 nm, respectively.

  5. Subdivision, Sampling, and Initialization Strategies for Simplical Branch and Bound in Global Optimization

    DEFF Research Database (Denmark)

    Clausen, Jens; Zilinskas, A.

    2002-01-01

    We consider the problem of optimizing a Lipschitzian function. The branch and bound technique is a well-known solution method, and the key components for this are the subdivision scheme, the bound calculation scheme, and the initialization. For Lipschitzian optimization, the bound calculations are

  6. Hyphenation of optimized microfluidic sample preparation with nano liquid chromatography for faster and greener alkaloid analysis

    NARCIS (Netherlands)

    Shen, Y.; Beek, van T.A.; Zuilhof, H.; Chen, B.

    2013-01-01

    A glass liquid–liquid extraction (LLE) microchip with three parallel 3.5 cm long and 100 µm wide interconnecting channels was optimized in terms of more environmentally friendly (greener) solvents and extraction efficiency. In addition, the optimized chip was successfully hyphenated with nano-liquid

  7. The optimal amount and allocation of sampling effort for plant health inspection

    NARCIS (Netherlands)

    Surkov, I.; Oude Lansink, A.G.J.M.; Werf, van der W.

    2009-01-01

    Plant import inspection can prevent the introduction of exotic pests and diseases, thereby averting economic losses. We explore the optimal allocation of a fixed budget, taking into account risk differentials, and the optimal-sized budget to minimise total pest costs. A partial-equilibrium market

  8. Determination of head conductivity frequency response in vivo with optimized EIT-EEG.

    Science.gov (United States)

    Dabek, Juhani; Kalogianni, Konstantina; Rotgans, Edwin; van der Helm, Frans C T; Kwakkel, Gert; van Wegen, Erwin E H; Daffertshofer, Andreas; de Munck, Jan C

    2016-02-15

    Electroencephalography (EEG) benefits from accurate head models. Dipole source modelling errors can be reduced from over 1 cm to a few millimetres by replacing generic head geometry and conductivity with tailored ones. When adequate head geometry is available, electrical impedance tomography (EIT) can be used to infer the conductivities of head tissues. In this study, the boundary element method (BEM) is applied with three-compartment (scalp, skull and brain) subject-specific head models. The optimal injection of small currents into the head with a modular EIT current injector, and voltage measurement by an EEG amplifier, is first sought by simulations. The measurement with a 64-electrode EEG layout is studied with respect to three noise sources affecting EIT: background EEG, deviations from the fitting assumption of equal scalp and brain conductivities, and smooth model geometry deviations from the true head geometry. The effects of these noise sources were investigated as a function of the positioning of the injection and extraction electrodes and the number of electrode combinations used sequentially. The deviation from equal scalp and brain conductivities produces rather deterministic errors in the three conductivities irrespective of the current injection locations. With a realistic measurement of around 2 min and around 8 distant, distinct current injection pairs, the error from the other noise sources is reduced to around 10% or less in the skull conductivity. The analysis of subsequent real measurements, however, suggests that there could be subject-specific local thinnings in the skull, which could amplify the conductivity fitting errors. With proper analysis of multiplexed sinusoidal EIT current injections, the measurements on average yielded conductivities of 340 mS/m (scalp and brain) and 6.6 mS/m (skull) at 2 Hz. From 11 to 127 Hz, the conductivities increased by 1.6% (scalp and brain) and 6.7% (skull) on average. The proper analysis was ensured by using recombination of

  9. High frequency of sub-optimal semen quality in an unselected population of young men

    DEFF Research Database (Denmark)

    Andersen, A G; Jensen, Tina Kold; Carlsen, E

    2000-01-01

    ... for military service, this provided a unique opportunity to study the reproductive function in an unbiased population. Altogether 891 young men delivered a blood sample in which reproductive hormones were measured. From 708 of these men data were also obtained on semen quality and testis size. The median sperm concentration was 41 x 10(6)/ml (mean 57.4 x 10(6)/ml). Men with ejaculation abstinence above 48 h had slightly higher sperm concentrations (median 45 x 10(6)/ml, mean 63.2 x 10(6)/ml), but even in this subgroup, 21 and 43% respectively had sperm counts below 20 x 10(6)/ml and 40 x 10(6)/ml. Among men with no history of reproductive diseases and a period of abstinence above 48 h, as many as 18 and 40% respectively had concentrations below 20 and 40 x 10(6)/ml. Sperm counts were positively correlated with testis size, percentage normal spermatozoa and inhibin B, and negatively correlated with percentage

  10. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    Energy Technology Data Exchange (ETDEWEB)

    Rabiet, M., E-mail: marion.rabiet@unilim.f [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France); Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M. [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France)

    2010-03-15

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the influence of sampling frequency on flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 x 10(-3) g/Ha to 0.34 g/Ha. These results highlight the major role of floods in the transport of pesticides in this small stream, which contributed to more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights into the fluxes of pesticides in the surface water of a vineyard catchment, notably during flood events.
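
    A minimal sketch of why sampling frequency matters for flux estimates (all concentrations and discharges below are hypothetical): the event load integrated from a high-frequency record versus the estimate implied by a single pre-flood grab sample:

```python
# Event load from a high-frequency record vs. a single grab sample.
# Concentrations and discharges below are hypothetical.

dt_s = 600.0                                     # 10-min time step
conc_ug_L = [0.1, 0.3, 2.5, 4.0, 1.2, 0.4]       # dissolved pesticide
q_L_s = [20.0, 45.0, 160.0, 210.0, 90.0, 30.0]   # stream discharge

true_load_ug = sum(c * q * dt_s for c, q in zip(conc_ug_L, q_L_s))
grab_load_ug = conc_ug_L[0] * sum(q * dt_s for q in q_L_s)  # pre-flood grab

print(f"high-frequency load: {true_load_ug / 1e6:.2f} g")
print(f"grab-sample load:    {grab_load_ug / 1e6:.2f} g")
```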

  11. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    International Nuclear Information System (INIS)

    Rabiet, M.; Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M.

    2010-01-01

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the influence of sampling frequency on flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 x 10(-3) g/Ha to 0.34 g/Ha. These results highlight the major role of floods in the transport of pesticides in this small stream, which contributed to more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights into the fluxes of pesticides in the surface water of a vineyard catchment, notably during flood events.

  12. Efficiency optimization of class-D biomedical inductive wireless power transfer systems by means of frequency adjustment.

    Science.gov (United States)

    Schormans, Matthew; Valente, Virgilio; Demosthenous, Andreas

    2015-01-01

    Inductive powering for implanted medical devices is a commonly employed technique that allows implants to avoid more hazardous alternatives such as transcutaneous wires or implanted batteries. However, wireless powering in this way also comes with a number of difficulties and conflicting requirements, which are often met by designs based on compromise. In particular, one aspect common to most inductive power links is that they are driven at a fixed frequency, which may not be optimal depending on factors such as coupling and load. In this paper, a method is proposed in which an inductive power link is driven at a frequency that is maintained at an optimum value f(opt), ensuring that the link stays in resonance. In order to maintain this resonance, a phase tracking technique is employed at the primary side of the link; this allows for compensation of changes in coil separation and load. The technique is shown to provide significant improvements in maintained secondary voltage and efficiency for a range of loads when the link is overcoupled.
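
    A minimal sketch of the frequency-adjustment idea, with hypothetical component values and a simple series-RLC phase model (not the paper's circuit): compute the tank's resonant frequency and track it by nudging the drive frequency proportionally to the measured voltage-current phase:

```python
import math

# Series-RLC stand-in for the primary tank: resonance is where the
# voltage-current phase crosses zero, so a proportional controller that
# nudges the drive frequency against the measured phase tracks f_opt.

L, C, R = 10e-6, 100e-9, 1.0                 # hypothetical tank values

def tank_phase(f):
    """Phase (rad) of tank current relative to the drive voltage."""
    w = 2.0 * math.pi * f
    return -math.atan2(w * L - 1.0 / (w * C), R)

f_res = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
f = 0.8 * f_res                              # detuned, e.g. load changed
for _ in range(60):
    f += 2e3 * tank_phase(f)                 # proportional phase tracking
print(f"analytic resonance {f_res / 1e3:.1f} kHz, tracked {f / 1e3:.1f} kHz")
```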

  13. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treatment

  14. Executive control resources and frequency of fatty food consumption: findings from an age-stratified community sample.

    Science.gov (United States)

    Hall, Peter A

    2012-03-01

    Fatty foods are regarded as highly appetitive, and self-control is often required to resist consumption. Executive control resources (ECRs) are potentially facilitative of self-control efforts, and therefore could predict success in the domain of dietary self-restraint. It is not currently known whether stronger ECRs facilitate resistance to fatty food consumption, and moreover, it is unknown whether such an effect would be stronger in some age groups than others. The purpose of the present study was to examine the association between ECRs and consumption of fatty foods among healthy community-dwelling adults across the adult life span. An age-stratified sample of individuals between 18 and 89 years of age attended two laboratory sessions. During the first session they completed two computer-administered tests of ECRs (Stroop and Go-NoGo) and a test of general cognitive function (Wechsler Abbreviated Scale of Intelligence); participants completed two consecutive 1-week recall measures to assess frequency of fatty and nonfatty food consumption. Regression analyses revealed that stronger ECRs were associated with lower frequency of fatty food consumption over the 2-week interval. This association was observed for both measures of ECR and a composite measure. The effect remained significant after adjustment for demographic variables (age, gender, socioeconomic status), general cognitive function, and body mass index. The observed effect of ECRs on fatty food consumption frequency was invariant across age group, and did not generalize to nonfatty food consumption. ECRs may be potentially important, though understudied, determinants of dietary behavior in adults across the life span.

  15. Transmission characteristics and optimal diagnostic samples to detect an FMDV infection in vaccinated and non-vaccinated sheep

    NARCIS (Netherlands)

    Eble, P.L.; Orsel, K.; Kluitenberg-van Hemert, F.; Dekker, A.

    2015-01-01

    We wanted to quantify transmission of FMDV Asia-1 in sheep and to evaluate which samples would be optimal for detection of an FMDV infection in sheep. For this, we used 6 groups of 4 non-vaccinated and 6 groups of 4 vaccinated sheep. In each group 2 sheep were inoculated and contact exposed to 2

  16. Polymorphisms in the Innate Immune IFIH1 Gene, Frequency of Enterovirus in Monthly Fecal Samples during Infancy, and Islet Autoimmunity

    Science.gov (United States)

    Witsø, Elisabet; Tapia, German; Cinek, Ondrej; Pociot, Flemming Michael; Stene, Lars C.; Rønningen, Kjersti S.

    2011-01-01

    Interferon induced with helicase C domain 1 (IFIH1) senses and initiates antiviral activity against enteroviruses. Genetic variants of IFIH1, one common and four rare SNPs, have been associated with lower risk for type 1 diabetes. Our aim was to test whether these type 1 diabetes-associated IFIH1 polymorphisms are associated with the occurrence of enterovirus infection in the gut of healthy children, or influence the lack of association between gut enterovirus infection and islet autoimmunity. After testing of 46,939 Norwegian newborns, 421 children carrying the high risk genotype for type 1 diabetes (HLA-DR4-DQ8/DR3-DQ2) as well as 375 children without this genotype were included for monthly fecal collections from 3 to 35 months of age, and genotyped for the IFIH1 polymorphisms. A total of 7,793 fecal samples were tested for presence of enterovirus RNA using real time reverse transcriptase PCR. We found no association with frequency of enterovirus in the gut for the common IFIH1 polymorphism rs1990760, or either of the rare variants of rs35744605, rs35667974, rs35337543, while the enterovirus prevalence marginally differed in samples from the 8 carriers of a rare allele of rs35732034 (26.1%, 18/69 samples) as compared to wild-type homozygotes (12.4%, 955/7724 samples); odds ratio 2.5, p = 0.06. The association was stronger when infections were restricted to those with high viral loads (odds ratio 3.3, 95% CI 1.3–8.4, p = 0.01). The lack of association between enterovirus frequency and islet autoimmunity reported in our previous study was not materially influenced by the IFIH1 SNPs. We conclude that the type 1 diabetes-associated IFIH1 polymorphisms have no, or only minor influence on the occurrence, quantity or duration of enterovirus infection in the gut. Its effect on the risk of diabetes is likely to lie elsewhere in the pathogenic process than in the modification of gut infection. PMID:22110759

  17. Polymorphism discovery and allele frequency estimation using high-throughput DNA sequencing of target-enriched pooled DNA samples

    Directory of Open Access Journals (Sweden)

    Mullen Michael P

    2012-01-01

    Full Text Available Abstract Background The central role of the somatotrophic axis in animal post-natal growth, development and fertility is well established. Therefore, the identification of genetic variants affecting quantitative traits within this axis is an attractive goal. However, large sample numbers are a pre-requisite for the identification of genetic variants underlying complex traits and although technologies are improving rapidly, high-throughput sequencing of large numbers of complete individual genomes remains prohibitively expensive. Therefore, using a pooled DNA approach coupled with target enrichment and high-throughput sequencing, the aim of this study was to identify polymorphisms and estimate allele frequency differences across 83 candidate genes of the somatotrophic axis, in 150 Holstein-Friesian dairy bulls divided into two groups divergent for genetic merit for fertility. Results In total, 4,135 SNPs and 893 indels were identified during the resequencing of the 83 candidate genes. Nineteen percent (n = 952) of variants were located within 5' and 3' UTRs. Seventy-two percent (n = 3,612) were intronic and 9% (n = 464) were exonic, including 65 indels and 236 SNPs resulting in non-synonymous substitutions (NSS). Allele frequency estimates from the pooled sequencing were validated for 43 SNPs by individual genotyping with Sequenom® MassARRAY. No significant differences (P > 0.1) were observed between the two methods for any of the 43 SNPs across both pools (i.e., 86 tests in total). Conclusions The results of the current study support previous findings of the use of DNA sample pooling and high-throughput sequencing as a viable strategy for polymorphism discovery and allele frequency estimation. Using this approach we have characterised the genetic variation within genes of the somatotrophic axis and related pathways, central to mammalian post-natal growth and development and subsequent lactogenesis and fertility. We have identified a large number of variants segregating at significantly different frequencies between cattle groups divergent for calving

  18. Ionizing radiation as optimization method for aluminum detection from drinking water samples

    International Nuclear Information System (INIS)

    Bazante-Yamguish, Renata; Geraldo, Aurea Beatriz C.; Moura, Eduardo; Manzoli, Jose Eduardo

    2013-01-01

    The presence of organic compounds in water samples is often responsible for metal complexation; depending on the analytic method, the organic fraction may distort the evaluation of the real values of metal concentration. Pre-treatment of the samples is advised when organic compounds act as interfering agents, and sample mineralization may be accomplished by several chemical and/or physical methods. Here, ionizing radiation was used as an advanced oxidation process (AOP) for sample pre-treatment before the analytic determination of total and dissolved aluminum by ICP-OES in drinking water samples from wells and a spring source located in the Billings dam region. Before irradiation, the spring source and well samples showed aluminum levels of 0.020 mg/l and 0.2 mg/l, respectively; after irradiation, both samples showed an 8-fold increase in aluminum concentration. These results are discussed considering other physical and chemical parameters and peculiarities of the sample sources. (author)

  19. Investigation of optimal seismic design methodology for piping systems supported by elasto-plastic dampers. Part. 2. Applicability for seismic waves with various frequency characteristics

    International Nuclear Information System (INIS)

    Ito, Tomohiro; Michiue, Masashi; Fujita, Katsuhisa

    2010-01-01

    In this study, the applicability of a previously developed optimal seismic design methodology, which can consider the structural integrity of not only piping systems but also elasto-plastic supporting devices, is studied for seismic waves with various frequency characteristics. This methodology employs a genetic algorithm and can search the optimal conditions such as the supporting location and the capacity and stiffness of the supporting devices. Here, a lead extrusion damper is treated as a typical elasto-plastic damper. Numerical simulations are performed using a simple piping system model. As a result, it is shown that the proposed optimal seismic design methodology is applicable to the seismic design of piping systems subjected to seismic waves with various frequency characteristics. The mechanism of optimization is also clarified. (author)

  20. Microelectrical Impedance Spectroscopy for the Differentiation between Normal and Cancerous Human Urothelial Cell Lines: Real-Time Electrical Impedance Measurement at an Optimal Frequency

    Directory of Open Access Journals (Sweden)

    Yangkyu Park

    2016-01-01

    Full Text Available Purpose. To distinguish between normal (SV-HUC-1) and cancerous (TCCSUP) human urothelial cell lines using microelectrical impedance spectroscopy (μEIS). Materials and Methods. Two types of μEIS devices were designed and used in combination to measure the impedance of SV-HUC-1 and TCCSUP cells flowing through the channels of the devices. The first device (μEIS-OF) was designed to determine the optimal frequency at which the impedance of the two cell lines is most distinguishable. The μEIS-OF trapped the flowing cells and measured their impedance at frequencies ranging from 5 kHz to 1 MHz. The second device (μEIS-RT) was designed for real-time impedance measurement of the cells at the optimal frequency. The impedance was measured instantaneously as the cells passed the sensing electrodes of the μEIS-RT. Results. The optimal frequency, which maximized the average difference of the amplitude and phase angle between the two cell lines (p < 0.001), was determined to be 119 kHz. The real-time impedance of the cell lines was measured at 119 kHz; the two cell lines differed significantly in terms of amplitude and phase angle (p < 0.001). Conclusion. The μEIS-RT can discriminate SV-HUC-1 and TCCSUP cells by measuring the impedance at the optimal frequency determined by the μEIS-OF.
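
    The optimal-frequency search can be sketched as follows on synthetic spectra. The parallel-RC cell model, its parameters, and the noise level are hypothetical stand-ins, not the μEIS data: score each frequency by the separation between the two populations and pick the maximum:

```python
import numpy as np

# Synthetic |Z| spectra for two cell populations (parallel-RC membrane
# model, hypothetical parameters); the optimal frequency maximises the
# mean separation normalised by the pooled spread.

rng = np.random.default_rng(0)
freqs = np.logspace(np.log10(5e3), 6, 50)          # 5 kHz .. 1 MHz

def spectra(r_mem, c_mem, n_cells, noise=0.02):
    w = 2.0 * np.pi * freqs
    z = r_mem / np.sqrt(1.0 + (w * r_mem * c_mem) ** 2)
    return z * (1.0 + noise * rng.standard_normal((n_cells, freqs.size)))

normal = spectra(1.0e5, 0.8e-9, 30)                # SV-HUC-1-like
cancer = spectra(0.6e5, 1.5e-9, 30)                # TCCSUP-like

score = (np.abs(normal.mean(0) - cancer.mean(0))
         / np.sqrt(normal.var(0) + cancer.var(0)))
print(f"optimal frequency ~ {freqs[score.argmax()] / 1e3:.0f} kHz")
```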

  1. Optimization of sample preparation variables for wedelolactone from Eclipta alba using Box-Behnken experimental design followed by HPLC identification.

    Science.gov (United States)

    Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S

    2013-07-01

    Coumestan wedelolactone is an important phytocomponent from Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study is to develop and optimize supercritical carbon dioxide assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. Response surface methodology was employed to optimize the sample preparation using supercritical carbon dioxide. The optimization involved investigating the quantitative effects of the sample preparation parameters, viz. operating pressure, temperature, modifier concentration and extraction time, on the yield of wedelolactone using a Box-Behnken design. The wedelolactone content was determined using a validated HPLC methodology. The experimental data were fitted to a second-order polynomial equation using multiple regression analysis and analyzed using the appropriate statistical method. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44% and extraction time, 60 min. Optimum extraction conditions demonstrated a wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, which was in good agreement with the predicted values. Temperature and modifier concentration showed a significant effect on the wedelolactone yield. The supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
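
    A minimal sketch of the response-surface step, using hypothetical yields and only two of the four factors (a small face-centred design standing in for the Box-Behnken design): fit the second-order polynomial by least squares, then locate the in-range optimum in coded units:

```python
import itertools
import numpy as np

# Two-factor face-centred design with hypothetical yields, standing in
# for the four-factor Box-Behnken design: fit a second-order model by
# least squares, then grid-search the in-range optimum in coded units.

X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0],
              [0, -1], [0, 1], [-1, 0], [1, 0]], dtype=float)
y = np.array([10.2, 12.8, 11.5, 14.9, 15.3, 13.0, 13.9, 12.1, 14.4])

def features(x):
    a, b = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(a), a, b, a * b, a**2, b**2], axis=-1)

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

grid = np.array(list(itertools.product(np.linspace(-1, 1, 41), repeat=2)))
pred = features(grid) @ beta
print("optimum (coded levels):", grid[pred.argmax()],
      "predicted yield:", round(float(pred.max()), 2))
```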

  2. Sampling optimization trade-offs for long-term monitoring of gamma dose rates

    NARCIS (Netherlands)

    Melles, S.J.; Heuvelink, G.B.M.; Twenhöfel, C.J.W.; Stöhlker, U.

    2008-01-01

    This paper applies a recently developed optimization method to examine the design of networks that monitor radiation under routine conditions. Annual gamma dose rates were modelled by combining regression with interpolation of the regression residuals using spatially exhaustive predictors and an

  3. Counting, enumerating and sampling of execution plans in a cost-based query optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    1999-01-01

    textabstractTesting an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on

  4. Counting, Enumerating and Sampling of Execution Plans in a Cost-Based Query Optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    2000-01-01

    textabstractTesting an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on the query

  5. Evaluation of dynamically dimensioned search algorithm for optimizing SWAT by altering sampling distributions and searching range

    Science.gov (United States)

    The primary advantage of the Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability to search for parameter sets that satisfy statistical guidelines, while requiring only one algorithm parameter (the perturbation factor).
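
    A minimal sketch of DDS on a toy objective standing in for a SWAT calibration run; the bounds, objective, and evaluation budget are hypothetical, and bound handling is simplified to clipping where the original algorithm reflects:

```python
import numpy as np

# DDS: perturb a dynamically shrinking random subset of decision
# variables around the current best with N(0, r * range) steps and
# accept greedily; r is the single algorithm parameter.

rng = np.random.default_rng(0)

def dds(f, lo, hi, n_eval=1000, r=0.2):
    d = lo.size
    x_best = lo + rng.random(d) * (hi - lo)
    f_best = f(x_best)
    for i in range(1, n_eval):
        p = 1.0 - np.log(i) / np.log(n_eval)   # inclusion probability decays
        mask = rng.random(d) < p
        if not mask.any():
            mask[rng.integers(d)] = True       # always perturb one variable
        x = x_best.copy()
        x[mask] += rng.standard_normal(mask.sum()) * r * (hi - lo)[mask]
        x = np.clip(x, lo, hi)                 # simplified; DDS reflects
        fx = f(x)
        if fx <= f_best:
            x_best, f_best = x, fx
    return x_best, f_best

sphere = lambda x: float(np.sum((x - 1.234) ** 2))   # toy "calibration"
lo, hi = np.full(8, -5.0), np.full(8, 5.0)
x, fx = dds(sphere, lo, hi)
print(np.round(x, 2), round(fx, 4))
```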

  6. Relationships between depressive symptoms and perceived social support, self-esteem, & optimism in a sample of rural adolescents.

    Science.gov (United States)

    Weber, Scott; Puskar, Kathryn Rose; Ren, Dianxu

    2010-09-01

    Stress, developmental changes and social adjustment problems can be significant in rural teens. Screening for psychosocial problems by teachers and other school personnel is infrequent but can be a useful health promotion strategy. We used a cross-sectional survey descriptive design to examine the inter-relationships between depressive symptoms and perceived social support, self-esteem, and optimism in a sample of rural school-based adolescents. Depressive symptoms were negatively correlated with peer social support, family social support, self-esteem, and optimism. Findings underscore the importance for teachers and other school staff to provide health education. Results can be used as the basis for education to improve optimism, self-esteem, social supports and, thus, depression symptoms of teens.

  7. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Kirsi Harju

    2015-11-01

    Full Text Available Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods for the mussel samples were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within the z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD).

  8. Optimizing detection of noble gas emission at a former UNE site: sample strategy, collection, and analysis

    Science.gov (United States)

    Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.

    2013-12-01

    Underground nuclear tests may be first detected by seismic or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that, in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used in an OSI scenario, on-site sampling conditions, required sampling volumes and establishment of background concentrations of noble gases require the development of specialized methodologies. To facilitate the development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion site located in welded volcanic tuff. A mixture of SF6, Xe-127 and Ar-37 was metered into 4400 m3 of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. The effectiveness of the various sampling approaches and the results of the tracer gas measurements will be presented.

  9. High-frequency, long-duration water sampling in acid mine drainage studies: a short review of current methods and recent advances in automated water samplers

    Science.gov (United States)

    Chapin, Thomas

    2015-01-01

    Hand-collected grab samples are the most common water sampling method but using grab sampling to monitor temporally variable aquatic processes such as diel metal cycling or episodic events is rarely feasible or cost-effective. Currently available automated samplers are a proven, widely used technology and typically collect up to 24 samples during a deployment. However, these automated samplers are not well suited for long-term sampling in remote areas or in freezing conditions. There is a critical need for low-cost, long-duration, high-frequency water sampling technology to improve our understanding of the geochemical response to temporally variable processes. This review article will examine recent developments in automated water sampler technology and utilize selected field data from acid mine drainage studies to illustrate the utility of high-frequency, long-duration water sampling.

  10. Optimization and Implementation of Scaling-Free CORDIC-Based Direct Digital Frequency Synthesizer for Body Care Area Network Systems

    Directory of Open Access Journals (Sweden)

    Ying-Shen Juang

    2012-01-01

    Full Text Available Coordinate rotation digital computer (CORDIC) is an efficient algorithm for computations of trigonometric functions. Scaling-free-CORDIC is one of the famous CORDIC implementations with advantages of speed and area. In this paper, a novel direct digital frequency synthesizer (DDFS) based on scaling-free CORDIC is presented. The proposed multiplier-less architecture with small ROM and pipeline data path has advantages of high data rate, high precision, high performance, and less hardware cost. The design procedure with performance and hardware analysis for optimization has also been given. It is verified by Matlab simulations and then implemented with a field programmable gate array (FPGA) in Verilog. The spurious-free dynamic range (SFDR) is over 86.85 dBc, and the signal-to-noise ratio (SNR) is more than 81.12 dB. The scaling-free CORDIC-based architecture is suitable for VLSI implementations for the DDFS applications in terms of hardware cost, power consumption, SNR, and SFDR. The proposed DDFS is very suitable for medical instruments and body care area network systems.

  11. Towards an optimal sampling strategy for assessing genetic variation within and among white clover (Trifolium repens L.) cultivars using AFLP

    Directory of Open Access Journals (Sweden)

    Khosro Mehdi Khanlou

    2011-01-01

    Full Text Available Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants, such as the aforementioned, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using less than 15 plants per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of the total intra-cultivar genetic variation was covered. Based on AMOVA, a sample of 20 plants per cultivar was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.

  12. Optimal sample preparation for nanoparticle metrology (statistical size measurements) using atomic force microscopy

    International Nuclear Information System (INIS)

    Hoo, Christopher M.; Doan, Trang; Starostin, Natasha; West, Paul E.; Mecartney, Martha L.

    2010-01-01

    Optimal deposition procedures are determined for nanoparticle size characterization by atomic force microscopy (AFM). Accurate nanoparticle size distribution analysis with AFM requires non-agglomerated nanoparticles on a flat substrate. The deposition of polystyrene (100 nm), silica (300 and 100 nm), gold (100 nm), and CdSe quantum dot (2-5 nm) nanoparticles by spin coating was optimized for size distribution measurements by AFM. Factors influencing deposition include spin speed, concentration, solvent, and pH. A comparison using spin coating, static evaporation, and a new fluid cell deposition method for depositing nanoparticles is also made. The fluid cell allows for a more uniform and higher density deposition of nanoparticles on a substrate at laminar flow rates, making nanoparticle size analysis via AFM more efficient and also offers the potential for nanoparticle analysis in liquid environments.

  13. Optimizing human semen cryopreservation by reducing test vial volume and repetitive test vial sampling

    DEFF Research Database (Denmark)

    Jensen, Christian F S; Ohl, Dana A; Parker, Walter R

    2015-01-01

    OBJECTIVE: To investigate optimal test vial (TV) volume, utility and reliability of TVs, intermediate temperature exposure (-88°C to -93°C) before cryostorage, cryostorage in nitrogen vapor (VN2) and liquid nitrogen (LN2), and long-term stability of VN2 cryostorage of human semen. DESIGN: Prospective clinical laboratory study. SETTING: University assisted reproductive technology (ART) laboratory. PATIENT(S): A total of 594 patients undergoing semen analysis and cryopreservation. INTERVENTION(S): Semen analysis, cryopreservation with different intermediate steps and in different volumes (50-1,000 μL), and long-term storage in LN2 or VN2. MAIN OUTCOME MEASURE(S): Optimal TV volume, prediction of cryosurvival (CS) in ART procedure vials (ARTVs) with pre-freeze semen parameters and TV CS, post-thaw motility after two- or three-step semen cryopreservation and cryostorage in VN2 and LN2. RESULT

  14. Optimization of Sample Preparation processes of Bone Material for Raman Spectroscopy.

    Science.gov (United States)

    Chikhani, Madelen; Wuhrer, Richard; Green, Hayley

    2018-03-30

    Raman spectroscopy has recently been investigated for use in the calculation of postmortem interval from skeletal material. The fluorescence generated by samples, which affects the interpretation of Raman data, is a major limitation. This study compares the effectiveness of two sample preparation techniques, chemical bleaching and scraping, in the reduction of fluorescence from bone samples during testing with Raman spectroscopy. Visual assessment of Raman spectra obtained at 1064 nm excitation following the preparation protocols indicates an overall reduction in fluorescence. Results demonstrate that scraping is more effective at resolving fluorescence than chemical bleaching. The scraping of skeletonized remains prior to Raman analysis is a less destructive method and allows for the preservation of a bone sample in a state closest to its original form, which is beneficial in forensic investigations. It is recommended that bone scraping supersedes chemical bleaching as the preferred method for sample preparation prior to Raman spectroscopy. © 2018 American Academy of Forensic Sciences.

  15. Optimism and self-esteem are related to sleep. Results from a large community-based sample.

    Science.gov (United States)

    Lemola, Sakari; Räikkönen, Katri; Gomez, Veronica; Allemand, Mathias

    2013-12-01

    There is evidence that positive personality characteristics, such as optimism and self-esteem, are important for health. Less is known about possible determinants of positive personality characteristics. To test the relationship of optimism and self-esteem with insomnia symptoms and sleep duration, sleep parameters, optimism, and self-esteem were assessed by self-report in a community-based sample of 1,805 adults aged between 30 and 84 years in the USA. Moderation of the relation between sleep and positive characteristics by gender and age, as well as potential confounding of the association by depressive disorder, was tested. Individuals with insomnia symptoms scored lower on optimism and self-esteem largely independent of age and sex, controlling for symptoms of depression and sleep duration. Short sleep duration (<7 h) was related to lower optimism and self-esteem when compared to individuals sleeping 7-8 h, controlling for depressive symptoms. Long sleep duration (>9 h) was also related to low optimism and self-esteem independent of age and sex. Good and sufficient sleep is associated with positive personality characteristics. This relationship is independent of the association between poor sleep and depression.

  16. A Counterexample on Sample-Path Optimality in Stable Markov Decision Chains with the Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2014-01-01

    Roč. 163, č. 2 (2014), s. 674-684 ISSN 0022-3239 Grant - others:PSF Organization(US) 012/300/02; CONACYT (México) and ASCR (Czech Republic)(MX) 171396 Institutional support: RVO:67985556 Keywords : Strong sample-path optimality * Lyapunov function condition * Stationary policy * Expected average reward criterion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.509, year: 2014 http://library.utia.cas.cz/separaty/2014/E/sladky-0432661.pdf

  17. Optimization Extracting Technology of Cynomorium songaricum Rupr. Saponins by Ultrasonic and Determination of Saponins Content in Samples with Different Source

    OpenAIRE

    Xiaoli Wang; Qingwei Wei; Xinqiang Zhu; Chunmei Wang; Yonggang Wang; Peng Lin; Lin Yang

    2015-01-01

    The extraction process was optimized by single-factor and orthogonal experiments (L9(3^4)). Moreover, the content determination method was validated. The optimum ultrasonic extraction conditions were: ethanol concentration of 75%, ultrasonic power of 420 W, solid-liquid ratio of 1:15, extraction duration of 45 min, extraction temperature of 90°C, and extraction performed 2 times. Saponin content in the Guazhou samples was significantly higher than in those from Xinjiang and Inner Mongolia. Meanwhile, G...

  18. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction, as in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions, whereas antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.
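
    The pH effect described above follows from the Henderson-Hasselbalch equation; a minimal sketch with illustrative (not measured) pKa values shows how the neutral, SPE-retained fraction of an analyte shifts between acidic and neutral extraction conditions:

```python
# Henderson-Hasselbalch: fraction of an ionisable analyte in neutral
# (SPE-retained) form at a given pH. pKa values are illustrative.

def neutral_fraction(pka: float, ph: float, acidic: bool) -> float:
    if acidic:                                   # HA <-> A- + H+
        return 1.0 / (1.0 + 10.0 ** (ph - pka))
    return 1.0 / (1.0 + 10.0 ** (pka - ph))      # B + H+ <-> BH+

for name, pka, acidic in [("sulfonamide-like acid", 5.7, True),
                          ("macrolide-like base", 8.9, False)]:
    for ph in (2.0, 7.0):
        frac = neutral_fraction(pka, ph, acidic)
        print(f"{name} (pKa {pka}) at pH {ph}: neutral fraction {frac:.3f}")
```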

  19. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    Directory of Open Access Journals (Sweden)

    D. Ramyachitra

    2015-09-01

    Full Text Available Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resulting dataset using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN), and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  20. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    Science.gov (United States)

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resulting dataset using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN), and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.
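
    As a rough illustration of the optimization step (not of IVPSO's interval-value logic, which the record does not detail), the sketch below implements a standard particle swarm optimizer and applies it to a toy gene-selection objective; all parameters and the toy data are illustrative assumptions.

    ```python
    import numpy as np

    def pso(objective, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer over positions in [0, 1]^dim."""
        rng = np.random.default_rng(seed)
        x = rng.random((n_particles, dim))            # particle positions
        v = np.zeros((n_particles, dim))              # particle velocities
        pbest = x.copy()                              # personal bests
        pbest_val = np.array([objective(p) for p in x])
        g = pbest[np.argmin(pbest_val)].copy()        # global best
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, 0.0, 1.0)
            vals = np.array([objective(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            g = pbest[np.argmin(pbest_val)].copy()
        return g, pbest_val.min()

    # Toy gene-selection objective: positions > 0.5 "select" a gene, and the
    # objective counts disagreements with a pretend informative-gene mask.
    relevant = np.zeros(50, dtype=bool); relevant[:5] = True
    obj = lambda p: np.mean((p > 0.5) != relevant)
    best, err = pso(obj, dim=50)
    print("selected genes:", np.where(best > 0.5)[0], "objective:", err)
    ```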

  1. Effects of feeding frequency on apparent energy and nutrient digestibility/availability of channel catfish, Ictalurus punctatus, reared at optimal and suboptimal temperatures

    Science.gov (United States)

    This study examined the effects of feeding frequency (daily versus every other day [EOD]) on nutrient digestibility/availability of channel catfish, Ictalurus punctatus, reared at optimal (30 °C) and suboptimal (24 °C) temperatures. A 28% protein practical diet was used as the test diet, and chromic o...

  2. Optimized Clinical Use of RNALater and FFPE Samples for Quantitative Proteomics

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Kastaniegaard, Kenneth; Padurariu, Simona

    2015-01-01

    Introduction and Objectives The availability of patient samples is essential for clinical proteomic research. Biobanks worldwide store mainly samples stabilized in RNAlater as well as formalin-fixed and paraffin embedded (FFPE) biopsies. Biobank material is a potential source for clinical...... we compare to FFPE and frozen samples being the control. Methods From the sigmoideum of two healthy participants, twenty-four biopsies were extracted using endoscopy. The biopsies were stabilized either by direct freezing, RNAlater, FFPE, or incubation for 30 min at room temperature prior to FFPE...... information. Conclusion We have demonstrated that quantitative proteome analysis and pathway mapping of samples stabilized in RNAlater as well as by FFPE is feasible with minimal impact on the quality of protein quantification and post-translational modifications....

  3. COARSE: Convex Optimization based autonomous control for Asteroid Rendezvous and Sample Exploration, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Sample return missions, by nature, require high levels of spacecraft autonomy. Developments in hardware avionics have led to more capable real-time onboard computing...

  4. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
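
    The iterative "most dissimilar site" step can be sketched with a simple greedy max-min rule in standardized environmental space. This is a simplified stand-in for the MaxEnt-based procedure, with made-up gridded data:

    ```python
    import numpy as np

    def select_dissimilar_sites(env, n_sites=8):
        """Greedy max-min selection: each new site is the cell farthest (in
        standardized environmental space) from all sites chosen so far."""
        z = (env - env.mean(axis=0)) / env.std(axis=0)   # standardize each factor
        chosen = [0]                                     # seed with an arbitrary cell
        for _ in range(n_sites - 1):
            d = np.linalg.norm(z[:, None, :] - z[chosen][None, :, :], axis=2)
            chosen.append(int(d.min(axis=1).argmax()))   # farthest from nearest chosen
        return chosen

    # Hypothetical grid: columns = temperature, precipitation, elevation, vegetation
    rng = np.random.default_rng(1)
    env = rng.random((5000, 4))
    print(select_dissimilar_sites(env, n_sites=8))
    ```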

  5. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
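
    The two initial-design choices compared in the study are easy to reproduce. The sketch below generates a Latin hypercube sample with either random within-stratum points or stratum midpoints; such a design would then be fed to an OLHS optimizer:

    ```python
    import numpy as np

    def lhs(n, dim, midpoint=False, seed=0):
        """Latin hypercube sample: one point per stratum in each dimension.
        midpoint=True places points at stratum centres (midpoint LHS);
        otherwise points fall uniformly at random within strata (random LHS)."""
        rng = np.random.default_rng(seed)
        u = 0.5 if midpoint else rng.random((n, dim))
        # independently permute the strata in each dimension
        perms = np.argsort(rng.random((n, dim)), axis=0)
        return (perms + u) / n

    print(lhs(5, 2, midpoint=True))    # initial design option (2), midpoint LHS
    print(lhs(5, 2, midpoint=False))   # initial design option (1), random LHS
    ```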

  6. The optimally sampled galaxy-wide stellar initial mass function. Observational tests and the publicly available GalIMF code

    Science.gov (United States)

    Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel

    2017-11-01

    Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of the optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. Particularly, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF also results in a relation between the SFR of a galaxy and the mass of its most massive star (under the assumption of no binaries). For the first time, we show optimally sampled galaxy-wide IMFs (OSGIMF) that mimic the IGIMF with an additional serrated feature. Finally, a Python module, GalIMF, is provided allowing the calculation of the IGIMF and OSGIMF dependent on the galaxy-wide SFR and metallicity. A copy of the Python code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126
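
    The idea of optimal (deterministic) sampling can be illustrated for a single power-law IMF: walking down from the most massive star, each successive mass segment is chosen to contain exactly one star. This sketch uses a Salpeter slope and an arbitrary normalization; it is not the GalIMF implementation:

    ```python
    import numpy as np

    def optimal_sample_imf(k=100.0, m_max=50.0, m_min=0.08, alpha=2.35):
        """Deterministically discretize a power-law IMF xi(m) = k * m**-alpha:
        each segment [m_lo, m_hi] integrates to exactly one star, whose mass
        is the total stellar mass of that segment (no random draws)."""
        bounds = [m_max]
        masses = []
        while bounds[-1] > m_min:
            m_hi = bounds[-1]
            # next bound so the number of stars in the segment equals one
            m_lo = (m_hi**(1 - alpha) + (alpha - 1) / k) ** (1 / (1 - alpha))
            if m_lo < m_min:
                break
            # the single star carries the segment's total stellar mass
            m_star = k / (2 - alpha) * (m_hi**(2 - alpha) - m_lo**(2 - alpha))
            masses.append(m_star)
            bounds.append(m_lo)
        return np.array(masses)

    stars = optimal_sample_imf()
    print(f"{stars.size} stars, most massive = {stars[0]:.2f} Msun")
    ```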

  7. MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling

    Directory of Open Access Journals (Sweden)

    Kitchen James L

    2012-11-01

    Full Text Available Abstract Background Next generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that implements primer reuse to design degenerate primers around SNPs, thus finding the fewest necessary primers at the lowest cost whilst maintaining acceptable coverage. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse, in an algorithm we call Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR). Results After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved lower primer costs per amplicon base covered of 0.21, 0.19, and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. Conclusions MCMC-ODPR is a useful tool for designing primers at various melting temperatures at good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.

  8. MCMC-ODPR: primer design optimization using Markov Chain Monte Carlo sampling.

    Science.gov (United States)

    Kitchen, James L; Moore, Jonathan D; Palmer, Sarah A; Allaby, Robin G

    2012-11-05

    Next generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that implements primer reuse to design degenerate primers around SNPs, thus finding the fewest necessary primers at the lowest cost whilst maintaining acceptable coverage. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse, in an algorithm we call Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR). After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved lower primer costs per amplicon base covered of 0.21, 0.19, and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. MCMC-ODPR is a useful tool for designing primers at various melting temperatures at good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.
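
    The Metropolis-Hastings optimization at the core of MCMC-ODPR can be sketched generically: propose a small change to the design, always accept improvements, and accept worse designs with Boltzmann probability. The primer-price objective below is a made-up stand-in, not the paper's scoring:

    ```python
    import math, random

    def metropolis_optimize(cost, propose, x0, iters=5000, temp=0.05, seed=0):
        """Metropolis-Hastings search for a low-cost design."""
        rng = random.Random(seed)
        x, c = x0, cost(x0)
        best, best_c = x, c
        for _ in range(iters):
            y = propose(x, rng)
            cy = cost(y)
            if cy <= c or rng.random() < math.exp((c - cy) / temp):
                x, c = y, cy                       # accept the move
                if c < best_c:
                    best, best_c = x, c
        return best, best_c

    # Toy stand-in for a primer-design state: choose 10 of 100 candidate
    # primers at minimum total price; prices are invented for illustration.
    price_rng = random.Random(1)
    prices = [price_rng.uniform(0.1, 1.0) for _ in range(100)]
    cost = lambda s: sum(prices[i] for i in s)

    def propose(s, rng):
        out = set(s)
        out.remove(rng.choice(sorted(out)))        # swap one primer out...
        while True:
            cand = rng.randrange(100)
            if cand not in out:                    # ...for one not in the set
                out.add(cand)
                break
        return frozenset(out)

    best, best_cost = metropolis_optimize(cost, propose, frozenset(range(10)))
    print(f"best total price: {best_cost:.2f}")
    ```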

  9. Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock

    DEFF Research Database (Denmark)

    Kirkeby, Carsten; Stockmarr, Anders; Bødker, Rene

    2013-01-01

    BACKGROUND: Estimating the abundance of Culicoides using light traps is influenced by a large variation in abundance in time and place. This study investigates the optimal trapping strategy to estimate the abundance or presence/absence of Culicoides on a field with grazing animals. We used 45 light...... absence of vectors on the field. The variation in the estimated abundance decreased steeply when using up to six traps, and was less pronounced when using more traps, although no clear cutoff was found. CONCLUSIONS: Despite spatial clustering in vector abundance, we found no effect of increasing...... monitoring programmes on fields with grazing animals....

  10. Optimized sample preparation for two-dimensional gel electrophoresis of soluble proteins from chicken bursa of Fabricius

    Directory of Open Access Journals (Sweden)

    Zheng Xiaojuan

    2009-10-01

    Full Text Available Abstract Background Two-dimensional gel electrophoresis (2-DE) is a powerful method to study protein expression and function in living organisms and diseases. This technique, however, has not been applied to the avian bursa of Fabricius (BF), a central immune organ. Here, optimized 2-DE sample preparation methodologies were constructed for chicken BF tissue. Using the optimized protocol, we performed further 2-DE analysis on a soluble protein extract from the BF of chickens infected with virulent avibirnavirus. To demonstrate the quality of the extracted proteins, several differentially expressed protein spots were excised from 2-DE gels and identified by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS). Results An extraction buffer containing 7 M urea, 2 M thiourea, 2% (w/v) 3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate (CHAPS), 50 mM dithiothreitol (DTT), 0.2% Bio-Lyte 3/10, 1 mM phenylmethylsulfonyl fluoride (PMSF), 20 U/ml Deoxyribonuclease I (DNase I), and 0.25 mg/ml Ribonuclease A (RNase A), combined with sonication and vortexing, yielded the best 2-DE data. Relative to non-frozen immobilized pH gradient (IPG) strips, frozen IPG strips did not result in significant changes in the 2-DE patterns after isoelectric focusing (IEF). When the optimized protocol was used to analyze the spleen and thymus, as well as avibirnavirus-infected bursa, high-quality 2-DE protein expression profiles were obtained. 2-DE maps of the BF of chickens infected with virulent avibirnavirus were visibly different and many differentially expressed proteins were found. Conclusion These results showed that method C, in concert with extraction buffer IV, was the most favorable for preparing samples for IEF and subsequent protein separation and yielded the best quality 2-DE patterns. The optimized protocol is a useful sample preparation method for comparative proteomics analysis of chicken BF tissues.

  11. Optimizing sampling strategy for radiocarbon dating of Holocene fluvial systems in a vertically aggrading setting

    International Nuclear Information System (INIS)

    Toernqvist, T.E.; Dijk, G.J. Van

    1993-01-01

    The authors address the question of how to determine the period of activity (sedimentation) of fossil (Holocene) fluvial systems in vertically aggrading environments. The available database consists of almost 100 14C ages from the Rhine-Meuse delta. Radiocarbon samples from the tops of lithostratigraphically correlative organic beds underneath overbank deposits (sample type 1) yield consistent ages, indicating a synchronous onset of overbank deposition over distances of at least up to 20 km along channel belts. Similarly, 14C ages from the base of organic residual channel fills (sample type 3) generally indicate a clear termination of within-channel sedimentation. In contrast, 14C ages from the base of organic beds overlying overbank deposits (sample type 2), commonly assumed to represent the end of fluvial sedimentation, show a large scatter reaching up to 1000 14C years. It is concluded that a combination of sample types 1 and 3 generally yields a satisfactory delimitation of the period of activity of a fossil fluvial system. 30 refs., 11 figs., 4 tabs

  12. Sterile Reverse Osmosis Water Combined with Friction Are Optimal for Channel and Lever Cavity Sample Collection of Flexible Duodenoscopes

    Directory of Open Access Journals (Sweden)

    Michelle J. Alfa

    2017-11-01

    Full Text Available Introduction: A simulated-use buildup biofilm (BBF) model was used to assess various extraction fluids and friction methods to determine the optimal sample collection method for polytetrafluorethylene channels. In addition, simulated-use testing was performed for the channel and lever cavity of duodenoscopes. Materials and methods: BBF was formed in polytetrafluorethylene channels using Enterococcus faecalis, Escherichia coli, and Pseudomonas aeruginosa. Sterile reverse osmosis (RO) water and phosphate-buffered saline with and without Tween80, as well as two neutralizing broths (Letheen and Dey–Engley), were each assessed with and without friction. Neutralizer was added immediately after sample collection and samples were concentrated using centrifugation. Simulated-use testing was done using TJF-Q180V and JF-140F Olympus duodenoscopes. Results: Despite variability in the bacterial CFU in the BBF model, none of the extraction fluids tested were significantly better than RO. Borescope examination showed far less residual material when friction was part of the extraction protocol. RO with flush-brush-flush (FBF) extraction provided significantly better recovery of E. coli (p = 0.02) from duodenoscope lever cavities compared to the CDC flush method. Discussion and conclusion: We recommend RO with friction for FBF extraction of the channel and lever cavity of duodenoscopes. Neutralizer and sample concentration optimize recovery of viable bacteria on culture.

  13. Optimizing sampling design to deal with mist-net avoidance in Amazonian birds and bats.

    Directory of Open Access Journals (Sweden)

    João Tiago Marques

    Full Text Available Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day, it was no longer advantageous to move the nets frequently; in bird surveys, that could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey the species present, then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas.

  14. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2010-01-01

    The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along such direction; this allows for a significant reduction of the variance of the failure probability estimator, with respect, for example, to standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve the computational efficiency by reducing the variance of the failure probability estimator, no evidence has been given yet that accurate and precise failure probability estimates can be obtained with a number of samples reduced to below a few hundreds, which may be required in case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated to the
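
    A minimal sketch of the Line Sampling estimator itself (with a known important direction and a toy linear limit state, so the exact answer is Phi(-3)) might look as follows; the root-finding bracket is an assumption:

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    def line_sampling(g, alpha, dim, n_lines=50, seed=0):
        """Line Sampling in standard-normal space: each random line parallel
        to the important direction alpha contributes Phi(-c), where c is the
        distance to the failure boundary g = 0 along that line."""
        alpha = alpha / np.linalg.norm(alpha)
        rng = np.random.default_rng(seed)
        pf = []
        for _ in range(n_lines):
            x = rng.standard_normal(dim)
            x_perp = x - (x @ alpha) * alpha          # component orthogonal to alpha
            # assume the failure boundary is crossed somewhere in (0, 20)
            c_star = brentq(lambda c: g(x_perp + c * alpha), 0.0, 20.0)
            pf.append(norm.cdf(-c_star))
        pf = np.array(pf)
        return pf.mean(), pf.std() / np.sqrt(n_lines)

    # Toy limit state (failure when g < 0) with known answer Phi(-3):
    dim = 10
    a_true = np.ones(dim) / np.sqrt(dim)
    g = lambda x: 3.0 - x @ a_true
    print(line_sampling(g, a_true, dim), "exact:", norm.cdf(-3.0))
    ```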

  15. Cadmium and lead determination by ICPMS: Method optimization and application in carabao milk samples

    Directory of Open Access Journals (Sweden)

    Riza A. Magbitang

    2012-06-01

    Full Text Available A method utilizing inductively coupled plasma mass spectrometry (ICPMS) as the element-selective detector with microwave-assisted nitric acid digestion as the sample pre-treatment technique was developed for the simultaneous determination of cadmium (Cd) and lead (Pb) in milk samples. The estimated detection limits were 0.09 µg kg-1 and 0.33 µg kg-1 for Cd and Pb, respectively. The method was linear in the concentration range 0.01 to 500 µg kg-1 with correlation coefficients of 0.999 for both analytes. The method was validated using certified reference material BCR 150 and the determined values for Cd and Pb were 18.24 ± 0.18 µg kg-1 and 807.57 ± 7.07 µg kg-1, respectively. Further validation using another certified reference material, NIST 1643e, resulted in determined concentrations of 6.48 ± 0.10 µg L-1 for Cd and 21.96 ± 0.87 µg L-1 for Pb. These determined values agree well with the certified values of the reference materials. The method was applied to processed and raw carabao milk samples collected in Nueva Ecija, Philippines. The Cd levels determined in the samples were in the range 0.11 ± 0.07 to 5.17 ± 0.13 µg kg-1 for the processed milk samples, and 0.11 ± 0.07 to 0.45 ± 0.09 µg kg-1 for the raw milk samples. The concentrations of Pb were in the range 0.49 ± 0.21 to 5.82 ± 0.17 µg kg-1 for the processed milk samples, and 0.72 ± 0.18 to 6.79 ± 0.20 µg kg-1 for the raw milk samples.

  16. D1S80 (pMCT118) allele frequencies in a Malay population sample from Malaysia.

    Science.gov (United States)

    Koh, C L; Lim, M E; Ng, H S; Sam, C K

    1997-01-01

    The D1S80 allele frequencies in 124 unrelated Malays from the Malaysian population were determined and 51 genotypes and 19 alleles were encountered. The D1S80 frequency distribution met Hardy-Weinberg expectations. The observed heterozygosity was 0.80 and the power of discrimination was 0.96.
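
    For reference, the reported summary statistics follow directly from allele frequencies under Hardy-Weinberg assumptions. The sketch below computes expected heterozygosity and the power of discrimination for a made-up allele frequency vector (not the published D1S80 frequencies):

    ```python
    import numpy as np

    # Illustrative allele frequencies (must sum to 1); 7 alleles, not 19.
    p = np.array([0.30, 0.25, 0.15, 0.10, 0.08, 0.07, 0.05])
    assert abs(p.sum() - 1.0) < 1e-9

    het = 1 - np.sum(p**2)                      # expected heterozygosity
    geno = []                                   # genotype frequencies under HWE
    for i in range(len(p)):
        for j in range(i, len(p)):
            geno.append(p[i]**2 if i == j else 2 * p[i] * p[j])
    pd = 1 - sum(g**2 for g in geno)            # power of discrimination
    print(f"H = {het:.2f}, PD = {pd:.2f}")
    ```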

  17. Method optimization for non-equilibrium solid phase microextraction sampling of HAPs for GC/MS analysis

    Science.gov (United States)

    Zawadowicz, M. A.; Del Negro, L. A.

    2010-12-01

    Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv-level, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that often is prohibitive to primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of ambient gaseous matrix, which is a cost-effective technique of selective VOC extraction, accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.

  18. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    International Nuclear Information System (INIS)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles

    2014-01-01

    Highlights: • We optimized sample preparation for MALDI-TOF analysis of poly(styrene-co-pentafluorostyrene) copolymers. • The influence of matrix choice was investigated. • The influence of the matrix/analyte ratio was examined. • The influence of the analyte/salt ratio (for the Ag+ salt) was studied. Abstract: The influence of the sample preparation parameters (the choice of the matrix, the matrix:analyte ratio, and the salt:analyte ratio) was investigated and optimal conditions were established for MALDI time-of-flight mass spectrometry analysis of poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as the matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions of the styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that the individual styrene chain length distributions are more sensitive to a change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution

  19. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    Energy Technology Data Exchange (ETDEWEB)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles, E-mail: cwilkins@uark.edu

    2014-01-15

    Highlights: • We optimized sample preparation for MALDI-TOF analysis of poly(styrene-co-pentafluorostyrene) copolymers. • The influence of matrix choice was investigated. • The influence of the matrix/analyte ratio was examined. • The influence of the analyte/salt ratio (for the Ag+ salt) was studied. Abstract: The influence of the sample preparation parameters (the choice of the matrix, the matrix:analyte ratio, and the salt:analyte ratio) was investigated and optimal conditions were established for MALDI time-of-flight mass spectrometry analysis of poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as the matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions of the styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that the individual styrene chain length distributions are more sensitive to a change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  20. Geochemical sampling scheme optimization on mine wastes based on hyperspectral data

    CSIR Research Space (South Africa)

    Zhao, T

    2008-07-01

    Full Text Available decontamination, for example, acid-generating minerals. Acid rock drainage can adversely affect the quality of drinking water and the health of riparian ecosystems. To assess or monitor the environmental impact of mining, sampling of mine waste is required...

  1. Robust, Sensitive, and Automated Phosphopeptide Enrichment Optimized for Low Sample Amounts Applied to Primary Hippocampal Neurons

    NARCIS (Netherlands)

    Post, Harm; Penning, Renske; Fitzpatrick, Martin; Garrigues, L.B.; Wu, W.; Mac Gillavry, H.D.; Hoogenraad, C.C.; Heck, A.J.R.; Altelaar, A.F.M.

    2017-01-01

    Because of the low stoichiometry of protein phosphorylation, targeted enrichment prior to LC–MS/MS analysis is still essential. The trend in phosphoproteome analysis is shifting toward an increasing number of biological replicates per experiment, ideally starting from very low sample amounts,

  2. Optimal sampling strategies to assess inulin clearance in children by the inulin single-injection method

    NARCIS (Netherlands)

    van Rossum, Lyonne K.; Mathot, Ron A. A.; Cransberg, Karlien; Vulto, Arnold G.

    2003-01-01

    Glomerular filtration rate in patients can be determined by estimating the plasma clearance of inulin with the single-injection method. In this method, a single bolus injection of inulin is administered and several blood samples are collected. For practical and convenient application of this method

  3. Optimization of deconvolution software used in the study of spectra of soil samples from Madagascar

    International Nuclear Information System (INIS)

    ANDRIAMADY NARIMANANA, S.F.

    2005-01-01

    The aim of this work is to perform the deconvolution of gamma spectra by using the peak deconvolution program. Synthetic spectra, reference materials and ten soil samples with various U-238 activities from three regions of Madagascar were used. This work concerns: soil sample spectra with low activities of about (47 ± 2) Bq.kg-1 from Ankatso, soil sample spectra with average activities of about (125 ± 2) Bq.kg-1 from Antsirabe, and soil sample spectra with high activities of about (21100 ± 120) Bq.kg-1 from Vinaninkarena. Singlet and multiplet peaks with various intensities were found in each soil spectrum. The Interactive Peak Fit (IPF) program in Genie-PC from Canberra Industries allows the deconvolution of many multiplet regions: the quartet within 235 keV - 242 keV; Pb-214 and Pb-212 within 294 keV - 301 keV; Th-232 daughters within 582 keV - 584 keV; Ac-228 within 904 keV - 911 keV and within 964 keV - 970 keV; and Bi-214 within 1401 keV - 1408 keV. Those peaks were used to quantify the radionuclides considered. However, IPF cannot resolve the Ra-226 peak at 186.1 keV.

  4. A systematic random sampling scheme optimized to detect the proportion of rare synapses in the neuropil.

    Science.gov (United States)

    da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C

    2009-05-30

    Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
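
    The sample-size reasoning behind using ~1000 disector sites for rare synapses can be sketched with binomial statistics; the proportions below are illustrative:

    ```python
    # Planning a rare-event sample: with n disector sites and a true synapse
    # fraction p, the hit count is ~binomial, so the relative standard error
    # of the estimated proportion is sqrt((1 - p) / (n * p)).
    def relative_se(p, n):
        return ((1 - p) / (n * p)) ** 0.5

    p = 0.002                                   # e.g. 0.2% of synapses labeled
    for n in (250, 1000, 4000):
        print(f"n = {n:4d}: expected hits = {n * p:4.1f}, "
              f"rel. SE = {relative_se(p, n):.2f}")
    # With ~1000 sites at p = 0.2%, only ~2 hits are expected and the relative
    # SE is ~0.7, which is why efficient site placement matters so much.
    ```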

  5. Optimization of fecal cytology in the dog: comparison of three sampling methods.

    Science.gov (United States)

    Frezoulis, Petros S; Angelidou, Elisavet; Diakou, Anastasia; Rallis, Timoleon S; Mylonakis, Mathios E

    2017-09-01

    Dry-mount fecal cytology (FC) is a component of the diagnostic evaluation of gastrointestinal diseases. There is limited information on the possible effect of the sampling method on the cytologic findings of healthy dogs or dogs admitted with diarrhea. We aimed to: (1) establish sampling method-specific expected values of selected cytologic parameters (isolated or clustered epithelial cells, neutrophils, lymphocytes, macrophages, spore-forming rods) in clinically healthy dogs; (2) investigate if the detection of cytologic abnormalities differs among methods in dogs admitted with diarrhea; and (3) investigate if there is any association between FC abnormalities and the anatomic origin (small- or large-bowel diarrhea) or the chronicity of diarrhea. Sampling with digital examination (DE), rectal scraping (RS), and rectal lavage (RL) was prospectively assessed in 37 healthy and 34 diarrheic dogs. The median numbers of isolated (p = 0.000) or clustered (p = 0.002) epithelial cells, and of lymphocytes (p = 0.000), differed among the 3 methods in healthy dogs. In the diarrheic dogs, the RL method was the least sensitive in detecting neutrophils, and isolated or clustered epithelial cells. Cytologic abnormalities were not associated with the origin or the chronicity of diarrhea. Sampling methods differed in their sensitivity to detect abnormalities in FC; DE or RS may be of higher sensitivity compared to RL. Anatomic origin or chronicity of diarrhea do not seem to affect the detection of cytologic abnormalities.

  6. Study of the Effect of Temporal Sampling Frequency on DSCOVR Observations Using the GEOS-5 Nature Run Results. Part II; Cloud Coverage

    Science.gov (United States)

    Holdaway, Daniel; Yang, Yuekui

    2016-01-01

    This is the second part of a study on how temporal sampling frequency affects satellite retrievals in support of the Deep Space Climate Observatory (DSCOVR) mission. Continuing from Part 1, which looked at Earth's radiation budget, this paper presents the effect of sampling frequency on DSCOVR-derived cloud fraction. The output from NASA's Goddard Earth Observing System version 5 (GEOS-5) Nature Run is used as the "truth". The effect of temporal resolution on potential DSCOVR observations is assessed by subsampling the full Nature Run data. A set of metrics, including uncertainty and absolute error in the subsampled time series, correlation between the original and the subsamples, and Fourier analysis have been used for this study. Results show that, for a given sampling frequency, the uncertainties in the annual mean cloud fraction of the sunlit half of the Earth are larger over land than over ocean. Analysis of correlation coefficients between the subsamples and the original time series demonstrates that even though sampling at certain longer time intervals may not increase the uncertainty in the mean, the subsampled time series is further and further away from the "truth" as the sampling interval becomes larger and larger. Fourier analysis shows that the simulated DSCOVR cloud fraction has underlying periodical features at certain time intervals, such as 8, 12, and 24 h. If the data is subsampled at these frequencies, the uncertainties in the mean cloud fraction are higher. These results provide helpful insights for the DSCOVR temporal sampling strategy.
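
    The subsampling and Fourier analysis can be mimicked on a synthetic hourly series with 12 h and 24 h cycles, loosely echoing the periodicities reported above (the signal itself is made up):

    ```python
    import numpy as np

    # Hourly "truth" series with 24 h and 12 h cycles plus noise.
    rng = np.random.default_rng(0)
    t = np.arange(24 * 365)                               # one year, hourly
    truth = (0.6 + 0.05 * np.sin(2 * np.pi * t / 24)
                 + 0.03 * np.sin(2 * np.pi * t / 12)
                 + 0.02 * rng.standard_normal(t.size))

    for step in (1, 4, 8, 12, 24):                        # sampling interval (h)
        sub = truth[6::step]                              # fixed-phase subsample
        print(f"every {step:2d} h: |mean error| = "
              f"{abs(sub.mean() - truth.mean()):.4f}")

    # Fourier analysis of the full series recovers the underlying periods:
    freqs = np.fft.rfftfreq(t.size, d=1.0)                # cycles per hour
    power = np.abs(np.fft.rfft(truth - truth.mean()))**2
    print(f"dominant period: {1 / freqs[power.argmax()]:.1f} h")
    ```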

  7. Optimization of sample absorbance for quantitative analysis in the presence of pathlength error in the IR and NIR regions

    International Nuclear Information System (INIS)

    Hirschfeld, T.; Honigs, D.; Hieftje, G.

    1985-01-01

    Optimal absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive the optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the cases of pathlength error >> photometric error (trivial) and various cases in which the pathlength error and the photometric error are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable.
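
    The trade-off can be made concrete numerically: propagate a constant transmittance error and a constant absolute pathlength error through Beer's law and scan for the absorbance that minimizes the relative concentration error. The error magnitudes below are illustrative, not the paper's values:

    ```python
    import numpy as np

    # Beer's law A = eps*c*l gives c = A/(eps*l); combine a constant
    # transmittance noise sigma_T with a constant absolute pathlength
    # error sigma_l (both values chosen for illustration only).
    def rel_error(A, sigma_T=0.005, sigma_l=0.002, l=1.0):
        sigma_A = sigma_T * 10.0**A / np.log(10)     # dA from dT at T = 10**-A
        return np.sqrt((sigma_A / A)**2 + (sigma_l / l)**2)

    A = np.linspace(0.05, 3.0, 600)
    err = rel_error(A)
    print(f"optimal absorbance ~ {A[err.argmin()]:.2f}")
    # ~0.43 when the photometric term dominates (the classic 1/ln(10) result);
    # with l fixed, the constant pathlength term only raises the error floor.
    ```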

  8. Centrifugation protocols: tests to determine optimal lithium heparin and citrate plasma sample quality.

    Science.gov (United States)

    Dimeski, Goce; Solano, Connie; Petroff, Mark K; Hynd, Matthew

    2011-05-01

    Currently, no clear guidelines exist for the most appropriate tests to determine sample quality from centrifugation protocols for plasma sample types with both lithium heparin in gel barrier tubes for biochemistry testing and citrate tubes for coagulation testing. Blood was collected from 14 participants in four lithium heparin and one serum tube with gel barrier. The plasma tubes were centrifuged at four different centrifuge settings and analysed for potassium (K(+)), lactate dehydrogenase (LD), glucose and phosphorus (Pi) at zero time, after storage for six hours at 21 °C, and after six days at 2-8 °C. At the same time, three citrate tubes were collected and centrifuged at three different centrifuge settings and analysed immediately for prothrombin time/international normalized ratio, activated partial thromboplastin time, derived fibrinogen and surface-activated clotting time (SACT). The biochemistry analytes indicate plasma is less stable than serum. Plasma sample quality is higher with longer centrifugation time and much higher g force. Blood cells present in the plasma lyse with time or are damaged when transferred into the reaction vessels, causing an increase in the K(+), LD and Pi above the outlined limits. The cells remain active and consume glucose even in cold storage. The SACT is the only coagulation parameter that was affected by platelets >10 × 10(9)/L in the citrate plasma. In addition to the platelet count, a limited but sensitive number of assays (K(+), LD, glucose and Pi for biochemistry, and SACT for coagulation) can be used to determine appropriate centrifuge settings to consistently obtain the highest quality lithium heparin and citrate plasma samples. The findings will aid laboratories in balancing the need to provide the most accurate results with the best turnaround time.

  9. [Optimization of solid-phase extraction for enrichment of toxic organic compounds in water samples].

    Science.gov (United States)

    Zhang, Ming-quan; Li, Feng-min; Wu, Qian-yuan; Hu, Hong-ying

    2013-05-01

    A concentration method for the enrichment of toxic organic compounds in water samples has been developed based on combined solid-phase extraction (SPE) to reduce impurities and improve recoveries of target compounds. This SPE method was evaluated at every stage to identify the sources of impurities. Based on blank runs of Waters Oasis HLB without water samples, the eluent of the SPE sorbent after dichloromethane and acetone contributed 85% of the impurities during the SPE process. To reduce the impurities from the SPE sorbent, Soxhlet extraction with dichloromethane, followed by acetone and lastly methanol, was applied to the sorbents for 24 hours, and the results showed that impurities were reduced significantly. In addition to Soxhlet extraction, six types of prevalent SPE sorbents were used to absorb 40 target compounds, whose lgK(ow) values were within the range of 1.46 to 8.1, and recovery rates were compared. Waters Oasis HLB showed the best recovery results for most of the common test compounds among the three styrene-divinylbenzene (SDB) polymer sorbents, at 77% on average. Furthermore, Waters SepPak AC-2 provided good recovery results for pesticides among the three types of activated carbon sorbents, with average recovery rates reaching 74%. Therefore, Waters Oasis HLB and Waters SepPak AC-2 were combined to obtain better recovery, and the average recovery rate for the tested 40 compounds with this new SPE method was 87%.

  10. Zoonotic species of the genus Arcobacter in poultry from different regions of Costa Rica: frequency of isolation and comparison of two types of sampling

    International Nuclear Information System (INIS)

    Valverde Bogantes, Esteban

    2014-01-01

    The presence of the zoonotic species of Arcobacter was evaluated in laying hens, broilers, ducks and geese of Costa Rica. The frequency of isolation of the genus Arcobacter was determined in samples of poultry using culture methods and molecular techniques, and the performance of cloacal swab sampling and fecal collection from poultry for the isolation of Arcobacter was compared. The isolation frequencies of Arcobacter species in poultry indicate a potential public health problem in Costa Rica, with poultry identified as sources of contamination and dispersion of the bacteria.

  11. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    International Nuclear Information System (INIS)

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

    We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors, and transfers data via Ethernet. This technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remotely controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG-sensor-based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control for data acquisition rate optimization. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased by several times without affecting the system operating performance on both local data acquisition and remote process control.
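
    A hypothetical sketch of the self-adaptive sampling idea follows; all names and the backoff rule are invented for illustration, since the record does not detail the coordination scheme:

    ```python
    import random
    import time
    from collections import deque

    def acquire_loop(read_sensor, send, n_samples=100, max_buffer=50, base_dt=0.001):
        """Slow the local sampling rate as the transmit buffer toward the
        remote controller fills, so acquisition and transfer stay in step."""
        buf = deque()
        for _ in range(n_samples):
            buf.append(read_sensor())
            while buf and send(buf[0]):          # drain what the link accepts
                buf.popleft()
            fill = len(buf) / max_buffer
            time.sleep(base_dt * (1 + 9 * fill)) # back off as the buffer fills

    # Dummy sensor and a link that accepts ~80% of transmit attempts:
    acquire_loop(lambda: random.random(), lambda s: random.random() < 0.8)
    ```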

  12. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    Science.gov (United States)

    Adu-Brimpong, Joel; Coffey, Nathan; Ayers, Colby; Berrigan, David; Yingling, Leah R.; Thomas, Samantha; Mitchell, Valerie; Ahuja, Chaarushi; Rivers, Joshua; Hartz, Jacob; Powell-Wiley, Tiffany M.

    2017-01-01

    Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6). Walk scores® ranged 0–91 (Mean = 46.7 ± 26.3). Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating the impact of specific built environment features on health behaviors and outcomes. PMID:28282878

  13. Population Pharmacokinetics of Gemcitabine and dFdU in Pancreatic Cancer Patients Using an Optimal Design, Sparse Sampling Approach.

    Science.gov (United States)

    Serdjebi, Cindy; Gattacceca, Florence; Seitz, Jean-François; Fein, Francine; Gagnière, Johan; François, Eric; Abakar-Mahamat, Abakar; Deplanque, Gael; Rachid, Madani; Lacarelle, Bruno; Ciccolini, Joseph; Dahan, Laetitia

    2017-06-01

    Gemcitabine remains a pillar in pancreatic cancer treatment. However, toxicities are frequently observed. Dose adjustment based on therapeutic drug monitoring might help decrease the occurrence of toxicities. In this context, this work aims at describing the pharmacokinetics (PK) of gemcitabine and its metabolite dFdU in pancreatic cancer patients and at identifying the main sources of their PK variability using a population PK approach, despite a sparsely sampled population and heterogeneous administration and sampling protocols. Data from 38 patients were included in the analysis. The three optimal sampling times were determined using KineticPro and the population PK analysis was performed on Monolix. Available patient characteristics, including cytidine deaminase (CDA) status, were tested as covariates. Correlation between PK parameters and occurrence of severe hematological toxicities was also investigated. A two-compartment model best fitted the gemcitabine and dFdU PK data (volume of distribution and clearance for gemcitabine: V1 = 45 L and CL1 = 4.03 L/min; for dFdU: V2 = 36 L and CL2 = 0.226 L/min). Renal function was found to influence gemcitabine clearance, and body surface area to impact the volume of distribution of dFdU. However, neither CDA status nor the occurrence of toxicities was correlated to PK parameters. Despite sparse sampling and heterogeneous administration and sampling protocols, population and individual PK parameters of gemcitabine and dFdU were successfully estimated using the Monolix population PK software. The estimated parameters were consistent with previously published results. Surprisingly, CDA activity did not influence gemcitabine PK, which was explained by the absence of CDA-deficient patients enrolled in the study. This work suggests that even sparse data are valuable to estimate population and individual PK parameters in patients, which will be usable to individualize the dose for an optimized benefit-to-risk ratio.
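
    Using the reported point estimates, a parent-metabolite simulation is straightforward; the dose, infusion duration, and the fraction of gemcitabine converted to dFdU (fm) are assumptions introduced for illustration:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Point estimates from the record: V1 = 45 L, CL1 = 4.03 L/min (gemcitabine);
    # V2 = 36 L, CL2 = 0.226 L/min (dFdU). Dose, infusion time, and fm are assumed.
    V1, CL1, V2, CL2, fm = 45.0, 4.03, 36.0, 0.226, 1.0
    dose_mg, t_inf = 1500.0, 30.0                      # 30-min infusion (illustrative)

    def rhs(t, y):
        a_gem, a_dfdu = y                              # drug amounts (mg)
        rate_in = dose_mg / t_inf if t <= t_inf else 0.0
        elim = CL1 / V1 * a_gem                        # gemcitabine elimination
        return [rate_in - elim, fm * elim - CL2 / V2 * a_dfdu]

    sol = solve_ivp(rhs, (0, 600), [0.0, 0.0], max_step=1.0,
                    t_eval=np.linspace(0, 600, 121))
    c_gem, c_dfdu = sol.y[0] / V1, sol.y[1] / V2       # concentrations (mg/L)
    print(f"gemcitabine Cmax ~ {c_gem.max():.1f} mg/L, "
          f"dFdU Cmax ~ {c_dfdu.max():.1f} mg/L")
    ```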

  14. Tracking a changing environment: optimal sampling, adaptive memory and overnight effects.

    Science.gov (United States)

    Dunlap, Aimee S; Stephens, David W

    2012-02-01

    Foraging in a variable environment presents a classic problem of decision making with incomplete information. Animals must track the changing environment, remember the best options and make choices accordingly. While several experimental studies have explored the idea that sampling behavior reflects the amount of environmental change, we take the next logical step in asking how change influences memory. We explore the hypothesis that memory length should be tied to the ecological relevance and the value of the information learned, and that environmental change is a key determinant of the value of memory. We use a dynamic programming model to confirm our predictions and then test memory length in a factorial experiment. In our experimental situation we manipulate rates of change in a simple foraging task for blue jays over a 36 h period. After jays experienced an experimentally determined change regime, we tested them at a range of retention intervals, from 1 to 72 h. Manipulated rates of change influenced learning and sampling rates: subjects sampled more and learned more quickly in the high change condition. Tests of retention revealed significant interactions between retention interval and the experienced rate of change. We observed a striking and surprising difference between the high and low change treatments at the 24h retention interval. In agreement with earlier work we find that a circadian retention interval is special, but we find that the extent of this 'specialness' depends on the subject's prior experience of environmental change. Specifically, experienced rates of change seem to influence how subjects balance recent information against past experience in a way that interacts with the passage of time. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Determination of Ergot Alkaloids: Purity and Stability Assessment of Standards and Optimization of Extraction Conditions for Cereal Samples

    DEFF Research Database (Denmark)

    Krska, R.; Berthiller, F.; Schuhmacher, R.

    2008-01-01

    as those that are the most common and physiologically active. The purity of the standards was investigated by means of liquid chromatography with diode array detection, electrospray ionization, and time-of-flight mass spectrometry (LC-DAD-ESI-TOF-MS). All of the standards assessed showed purity levels...... (PSA) before LC/MS/MS. Based on the results obtained from these optimization studies, a mixture of acetonitrile with ammonium carbonate buffer was used as extraction solvent, as recoveries for all analyzed ergot alkaloids were significantly higher than those with the other solvents. Different sample...

  16. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    Directory of Open Access Journals (Sweden)

    Joel Adu-Brimpong

    2017-03-01

    Full Text Available Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6). Walk scores® ranged 0–91 (Mean = 46.7 ± 26.3). Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating the impact of specific built environment features on health behaviors and outcomes.

  17. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    Science.gov (United States)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost
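
    As a baseline for sample-based optimal transport (not the feature-function algorithm developed in the thesis), entropy-regularized transport between two point clouds can be computed with a few lines of Sinkhorn iteration:

    ```python
    import numpy as np

    def sinkhorn(x, y, eps=0.5, iters=500):
        """Entropy-regularized OT between two uniformly weighted point clouds;
        returns the regularized transport cost (a standard baseline method)."""
        n, m = len(x), len(y)
        C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared L2 costs
        K = np.exp(-C / eps)
        a, b = np.ones(n) / n, np.ones(m) / m                # uniform marginals
        v = np.ones(m)
        for _ in range(iters):
            u = a / (K @ v)                                  # match row marginals
            v = b / (K.T @ u)                                # match column marginals
        P = u[:, None] * K * v[None, :]                      # transport plan
        return (P * C).sum()

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, (200, 2))
    y = rng.normal(2.0, 1.0, (200, 2))                       # cloud shifted by (2, 2)
    print(f"regularized OT cost ~ {sinkhorn(x, y):.2f}")     # near ||(2,2)||^2 = 8
    ```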

  18. Optimized cryo-focused ion beam sample preparation aimed at in situ structural studies of membrane proteins.

    Science.gov (United States)

    Schaffer, Miroslava; Mahamid, Julia; Engel, Benjamin D; Laugks, Tim; Baumeister, Wolfgang; Plitzko, Jürgen M

    2017-02-01

    While cryo-electron tomography (cryo-ET) can reveal biological structures in their native state within the cellular environment, it requires the production of high-quality frozen-hydrated sections that are thinner than 300 nm. Sample requirements are even more stringent for the visualization of membrane-bound protein complexes within dense cellular regions. Focused ion beam (FIB) sample preparation for transmission electron microscopy (TEM) is a well-established technique in materials science, but there are only a few examples of biological samples exhibiting sufficient quality for high-resolution in situ investigation by cryo-ET. In this work, we present a comprehensive description of a cryo-sample preparation workflow incorporating additional conductive-coating procedures. These coating steps eliminate the adverse effects of sample charging on imaging with the Volta phase plate, allowing data acquisition with improved contrast. We discuss optimized FIB milling strategies adapted from materials science and each critical step required to produce homogeneously thin, non-charging FIB lamellas that make large areas of unperturbed HeLa and Chlamydomonas cells accessible for cryo-ET at molecular resolution. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Optimization of the solvent-based dissolution method to sample volatile organic compound vapors for compound-specific isotope analysis.

    Science.gov (United States)

    Bouchard, Daniel; Wanner, Philipp; Luo, Hong; McLoughlin, Patrick W; Henderson, James K; Pirkle, Robert J; Hunkeler, Daniel

    2017-10-20

    The methodology of the solvent-based dissolution method used to sample gas-phase volatile organic compounds (VOC) for compound-specific isotope analysis (CSIA) was optimized to lower the method detection limits for TCE and benzene. The sampling methodology previously evaluated by [1] consists of pulling the air through a solvent to dissolve and accumulate the gaseous VOC. After the sampling process, the solvent can be treated in the same way as groundwater samples to perform routine CSIA, by diluting an aliquot of the solvent into water to reach the required concentration of the targeted contaminant. Among the solvents tested, tetraethylene glycol dimethyl ether (TGDE) showed the best aptitude for the method. TGDE has a high affinity for TCE and benzene, and hence efficiently dissolves the compounds during their transit through the solvent. The method detection limit for TCE (5 ± 1 μg/m³) and benzene (1.7 ± 0.5 μg/m³) is lower when using TGDE compared to methanol, which was previously used (385 μg/m³ for TCE and 130 μg/m³ for benzene) [2]. The method detection limit refers to the minimal gas-phase concentration in ambient air required to load sufficient VOC mass into TGDE to perform δ¹³C analysis. Due to a different analytical procedure, the method detection limit associated with δ³⁷Cl analysis was found to be 156 ± 6 μg/m³ for TCE. Furthermore, the experimental results validated the relationship between the gas-phase TCE and the progressive accumulation of dissolved TCE in the solvent during the sampling process. Accordingly, based on the air-solvent partitioning coefficient, the sampling methodology (e.g. sampling rate, sampling duration, amount of solvent) and the final TCE concentration in the solvent, the concentration of TCE in the gas phase prevailing during the sampling event can be determined. Moreover, the possibility to analyse for TCE concentration in the solvent after sampling (or other targeted VOCs) allows the field deployment of the sampling
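
    The mass-balance reasoning in the last sentences can be sketched as follows. The correction derived from the air-solvent partitioning coefficient is reduced here to a single hypothetical trapping-efficiency factor; all numbers are placeholders, not values from the paper.

```python
# Hedged sketch: back-calculating the mean gas-phase VOC concentration from the
# mass accumulated in the solvent. trapping_efficiency stands in (hypothetically)
# for the correction that the air-solvent partitioning coefficient would supply.
def gas_phase_concentration(c_solvent_ug_per_L, v_solvent_L, flow_L_per_min, t_min,
                            trapping_efficiency=1.0):
    """Mean gas-phase VOC concentration (ug/m3) over the sampling event."""
    mass_ug = c_solvent_ug_per_L * v_solvent_L        # VOC mass accumulated in the solvent
    air_volume_m3 = flow_L_per_min * t_min / 1000.0   # total air volume pulled through
    return mass_ug / (trapping_efficiency * air_volume_m3)

# e.g. 2 ug/L measured in 10 mL of TGDE after 2 h of sampling at 0.2 L/min:
print(gas_phase_concentration(2.0, 0.010, 0.2, 120))  # -> ~0.83 ug/m3
```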

  20. Optimal design of sampling and mapping schemes in the radiometric exploration of Chipilapa, El Salvador (Geo-statistics)

    International Nuclear Information System (INIS)

    Balcazar G, M.; Flores R, J.H.

    1992-01-01

    As part of the radiometric surface exploration carried out in the Chipilapa geothermal field, El Salvador, geo-statistical parameters were derived from variograms calculated from the field data. The maximum correlation distance of the radon samples along the different observation directions (N-S, E-W, NW-SE, NE-SW) was 121 m, which defines the monitoring grid for future prospecting in the same area. From this, an optimized (minimum-cost) spacing of the field samples was derived by means of geo-statistical techniques, without losing detection of the anomaly. (Author)
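
    As a hedged illustration of the variogram analysis described above, the sketch below computes an empirical semivariogram from synthetic "radon" data; the correlation range (cf. the 121 m reported) would be read off where the semivariance levels out. Locations and values are placeholders.

```python
# Hedged sketch: empirical semivariogram from synthetic point data.
import numpy as np

rng = np.random.default_rng(2)
xy = rng.uniform(0, 500, (200, 2))                         # sample locations (m)
z = np.sin(xy[:, 0] / 80.0) + 0.3 * rng.normal(size=200)   # synthetic 'radon' values

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # pairwise distances
dz2 = (z[:, None] - z[None, :]) ** 2

bins = np.arange(0, 300, 25)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (d > lo) & (d <= hi)
    gamma = 0.5 * dz2[mask].mean()                         # semivariance in this lag bin
    print(f"lag {lo:3.0f}-{hi:3.0f} m: gamma = {gamma:.3f}")
```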

  1. Optimal sample size for predicting viability of cabbage and radish seeds based on near infrared spectra of single seeds

    DEFF Research Database (Denmark)

    Shetty, Nisha; Min, Tai-Gi; Gislum, René

    2011-01-01

    The effects of the number of seeds in a training sample set on the ability to predict the viability of cabbage or radish seeds are presented and discussed. The supervised classification method extended canonical variates analysis (ECVA) was used to develop a classification model. Calibration sub-sets of different sizes were chosen randomly with several iterations and using the spectral-based sample selection algorithms DUPLEX and CADEX. An independent test set was used to validate the developed classification models. The results showed that 200 seeds were optimal in a calibration set for both cabbage … using all 600 seeds in the calibration set. Thus, the number of seeds in the calibration set can be reduced by up to 67% without significant loss of classification accuracy, which will effectively enhance the cost-effectiveness of NIR spectral analysis. Wavelength regions important …
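
    The CADEX (Kennard-Stone) selection mentioned above can be sketched as follows; the spectra are synthetic placeholders and the implementation is a generic version of the algorithm, not the authors' code.

```python
# Hedged sketch: Kennard-Stone (CADEX) sample selection, which greedily picks
# calibration samples that are maximally spread out in spectral space.
import numpy as np

def kennard_stone(X, n_select):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    selected = list(np.unravel_index(np.argmax(d), d.shape))  # two most distant samples
    while len(selected) < n_select:
        remaining = [i for i in range(len(X)) if i not in selected]
        # pick the candidate whose nearest already-selected sample is farthest away
        nxt = max(remaining, key=lambda i: d[i, selected].min())
        selected.append(nxt)
    return selected

X = np.random.default_rng(3).normal(size=(600, 10))   # 600 seeds x 10 spectral features
print(kennard_stone(X, 200)[:10])                     # first few selected seed indices
```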

  2. Immunosuppressant therapeutic drug monitoring by LC-MS/MS: workflow optimization through automated processing of whole blood samples.

    Science.gov (United States)

    Marinova, Mariela; Artusi, Carlo; Brugnolo, Laura; Antonelli, Giorgia; Zaninotto, Martina; Plebani, Mario

    2013-11-01

    Although, due to its high specificity and sensitivity, LC-MS/MS is an efficient technique for the routine determination of immunosuppressants in whole blood, it involves time-consuming manual sample preparation. The aim of the present study was therefore to develop an automated sample-preparation protocol for the quantification of sirolimus, everolimus and tacrolimus by LC-MS/MS using a liquid handling platform. Six-level commercially available blood calibrators were used for assay development, while four quality control materials and three blood samples from patients under immunosuppressant treatment were employed for the evaluation of imprecision. Barcode reading, sample re-suspension, transfer of whole blood samples into 96-well plates, addition of internal standard solution, mixing, and protein precipitation were performed with a liquid handling platform. After plate filtration, the deproteinised supernatants were submitted to on-line SPE. The only manual steps in the entire process were de-capping of the tubes and transfer of the well plates to the HPLC autosampler. Calibration curves were linear throughout the selected ranges. The imprecision and accuracy data for all analytes were highly satisfactory. The agreement between the results obtained with manual and with automated sample preparation was optimal (n = 390, r = 0.96). In daily routine use (100 patient samples), the typical overall turnaround time was less than 6 h. Our findings indicate that the proposed analytical system is suitable for routine analysis, since it is straightforward and precise. Furthermore, it incurs less manual workload and less risk of error in the quantification of whole blood immunosuppressant concentrations than conventional methods. © 2013.

  3. Optimized Field Sampling and Monitoring of Airborne Hazardous Transport Plumes; A Geostatistical Simulation Approach

    International Nuclear Information System (INIS)

    Chen, DI-WEN

    2001-01-01

    Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often are days post-incident and rely on hazardous manned air-vehicle measurements. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure the conditions during and after plume release. This requires a cooperative, computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives in an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information, collected in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images, based on existing "hard" experimental data and "soft" preliminary transport modeling results of the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information

  4. Optimization of Region-of-Interest Sampling Strategies for Hepatic MRI Proton Density Fat Fraction Quantification

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.

    2017-01-01

    BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population's mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%. Two- and three-ROI strategies could also reach ICC >0.995, but only 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of

  5. Optimization of region-of-interest sampling strategies for hepatic MRI proton density fat fraction quantification.

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B

    2018-04-01

    Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%. Two- and three-ROI strategies could also reach ICC >0.995, but only 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance
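
    A minimal sketch of the combinatorial subset evaluation used in this study follows; the segmental PDFF values are synthetic, and the Bland-Altman summary is simplified to the half-width of the limits of agreement.

```python
# Hedged sketch: estimate PDFF from every k-ROI subset and compare to the
# nine-ROI average; data are synthetic stand-ins for the study's measurements.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(4)
pdff = rng.uniform(1, 44, size=(391, 9))        # patients x 9 hepatic segments
nine_roi = pdff.mean(axis=1)

for k in (2, 3, 4):
    worst = 0.0
    for subset in combinations(range(9), k):
        est = pdff[:, list(subset)].mean(axis=1)
        bias = est - nine_roi
        loa_half_width = 1.96 * bias.std()       # Bland-Altman limits of agreement
        worst = max(worst, loa_half_width)
    print(f"{k}-ROI strategies: widest LOA half-width = {worst:.2f} pp")
```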

  6. Optimization of microwave-assisted extraction with saponification (MAES) for the determination of polybrominated flame retardants in aquaculture samples.

    Science.gov (United States)

    Fajar, N M; Carro, A M; Lorenzo, R A; Fernandez, F; Cela, R

    2008-08-01

    The efficiency of microwave-assisted extraction with saponification (MAES) for the determination of seven polybrominated flame retardants (polybrominated biphenyls, PBBs; and polybrominated diphenyl ethers, PBDEs) in aquaculture samples is described and compared with microwave-assisted extraction (MAE). Chemometric techniques based on experimental designs and desirability functions were used for simultaneous optimization of the operational parameters of both the MAES and MAE processes. MAES, which had not previously been applied to this type of analyte, was shown to be superior to MAE for this group of contaminants in aquaculture samples in terms of extraction efficiency, extraction time and lipid content extracted from complex matrices (0.7% as against 18.0% for MAE extracts). PBBs and PBDEs were determined by gas chromatography with micro-electron capture detection (GC-μECD). The quantification limits for the analytes were 40-750 pg g⁻¹ (except for BB-15, which was 1.43 ng g⁻¹). Precision for MAES-GC-μECD (%RSD < 11%) was significantly better than for MAE-GC-μECD (%RSD < 20%). The accuracy of both optimized methods was satisfactorily demonstrated by analysis of an appropriate certified reference material (CRM), WMF-01.

  7. Optimal sample size of signs for classification of radiational and oily soils

    International Nuclear Information System (INIS)

    Babayev, M.P.; Iskenderov, S.M.; Aghayev, R.A.

    2012-01-01

    Full text: This article deals with the classification of radiation-contaminated and oil-polluted soils, which should in essence be a compact intelligence system containing maximum information on the classes of soil objects in the accepted feature space. Accumulated experience shows that the set of the most informative soil attributes comprises at most 7-8 indices. In our opinion, the more correct approach to selecting the most informative (most important) indices is trial and error, that is, the experimental method, which draws on the broad experience and intuition of a researcher, or a group of researchers, engaged for many years in soil science. At the present stage of the formal apparatus of soil classification, and more specifically in its section on assessing the informativeness of soil attributes, the approach is, in our opinion, purely mathematized and in some cases does not reflect the true picture. Here, 21 pairwise correlation coefficients are calculated between the selected soil attributes as a measure of linear association. The length of the correlation series is limited to 6, since increasing it would sharply increase the amount of computation. It is pertinent to note that this is the first attempt to create correlation matrices of the most important attributes of radiation-contaminated and oil-polluted soils.
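
    The pairwise-correlation step can be illustrated as follows: for 7 soil attributes there are C(7,2) = 21 coefficients, matching the count mentioned above. All data are synthetic placeholders.

```python
# Hedged sketch: the 21 pairwise correlations among 7 soil attributes.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(5)
attributes = rng.normal(size=(50, 7))        # 50 soil samples x 7 attributes
corr = np.corrcoef(attributes, rowvar=False)

pairs = list(combinations(range(7), 2))
print(len(pairs), "pairs")                   # -> 21
for i, j in pairs[:5]:
    print(f"r(attr{i}, attr{j}) = {corr[i, j]:+.2f}")
```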

  8. A boundary-optimized rejection region test for the two-sample binomial problem.

    Science.gov (United States)

    Gabriel, Erin E; Nason, Martha; Fay, Michael P; Follmann, Dean A

    2018-03-30

    Testing the equality of 2 proportions for a control group versus a treatment group is a well-researched statistical problem. In some settings, there may be strong historical data that allow one to reliably expect that the control proportion is one, or nearly so. While one-sample tests or comparisons to historical controls could be used, neither can rigorously control the type I error rate in the event the true control rate changes. In this work, we propose an unconditional exact test that exploits the historical information while controlling the type I error rate. We sequentially construct a rejection region by first maximizing the rejection region in the space where all controls have an event, subject to the constraint that our type I error rate does not exceed α for any true event rate; then with any remaining α we maximize the additional rejection region in the space where one control avoids the event, and so on. When the true control event rate is one, our test is the most powerful nonrandomized test for all points in the alternative space. When the true control event rate is nearly one, we demonstrate that our test has equal or higher mean power, averaging over the alternative space, than a variety of well-known tests. For the comparison of 4 controls and 4 treated subjects, our proposed test has higher power than all comparator tests. We demonstrate the properties of our proposed test by simulation and use our method to design a malaria vaccine trial. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
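
    A minimal sketch of the size (type I error) constraint described above follows. The rejection region shown is an arbitrary illustration for n = m = 4, not the paper's optimized region; in the unconditional exact formulation, the size of the test is the supremum over the common true event rate p.

```python
# Hedged sketch: the size of a candidate rejection region R, a set of outcomes
# (x_control, x_treated), is max over p of P(R | p) under the shared-rate null.
import numpy as np
from scipy.stats import binom

n = m = 4
region = {(4, 0), (4, 1)}        # reject when all controls respond and <=1 treated does

def size(region, grid=np.linspace(0, 1, 1001)):
    worst = 0.0
    for p in grid:               # under H0, both groups share event rate p
        prob = sum(binom.pmf(xc, n, p) * binom.pmf(xt, m, p) for xc, xt in region)
        worst = max(worst, prob)
    return worst

print(f"max type I error over p: {size(region):.4f}")
```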

  9. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    International Nuclear Information System (INIS)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-01-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real-time. However, with current sensors, vehicles must exit the traffic stream and slow down to match the sensors' speed capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experience of operating engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition for high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors. (paper)

  10. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    Science.gov (United States)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-06-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real-time. However, with current sensors, vehicles must exit the traffic stream and slow down to match the sensors' speed capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experience of operating engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition for high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.
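
    The study's speed-dependent sample-rate requirement is not reproduced here; the sketch below only illustrates the general scaling, under the stated (hypothetical) assumption that a tire-sensor interaction of length L must be resolved with at least k samples, giving f >= k * v / L.

```python
# Hedged sketch: minimum sample rate as a function of vehicle speed.
# interaction_length_m and samples_per_pulse are illustrative assumptions,
# not parameters from the paper.
def min_sample_rate_hz(speed_kmh, interaction_length_m=0.5, samples_per_pulse=20):
    v = speed_kmh / 3.6                        # vehicle speed in m/s
    pulse_duration = interaction_length_m / v  # time the tire loads the sensor
    return samples_per_pulse / pulse_duration

for speed in (30, 60, 90, 120):
    print(f"{speed:3d} km/h -> {min_sample_rate_hz(speed):6.0f} Hz")
```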

  11. Optimized pre-thinning procedures of ion-beam thinning for TEM sample preparation by magnetorheological polishing.

    Science.gov (United States)

    Luo, Hu; Yin, Shaohui; Zhang, Guanhua; Liu, Chunhui; Tang, Qingchun; Guo, Meijian

    2017-10-01

    Ion-beam thinning is a well-established sample preparation technique for transmission electron microscopy (TEM), but its tedious procedures and labor-consuming pre-thinning seriously reduce its efficiency. In this work, we present a simple pre-thinning technique that uses magnetorheological (MR) polishing to replace manual lapping and dimpling, and demonstrate the successful preparation of electron-transparent single-crystal silicon samples after MR polishing and single-sided ion milling. Dimples pre-thinned to less than 30 μm and with little mechanical surface damage were repeatedly produced under optimized MR polishing conditions. Samples pre-thinned by both MR polishing and the traditional technique were ion-beam thinned from the rear side until perforation, and then observed by optical microscopy and TEM. The results show that the specimen pre-thinned by the MR technique was free from dimpling-related defects, which were still residual in the sample pre-thinned by the conventional technique. High-resolution TEM images could be acquired after MR polishing and single-sided ion thinning. MR polishing promises to be an adaptable and efficient pre-thinning method for the preparation of TEM specimens, especially for brittle ceramics. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. An Optimized DNA Analysis Workflow for the Sampling, Extraction, and Concentration of DNA obtained from Archived Latent Fingerprints.

    Science.gov (United States)

    Solomon, April D; Hytinen, Madison E; McClain, Aryn M; Miller, Marilyn T; Dawson Cruz, Tracey

    2018-01-01

    DNA profiles have been obtained from fingerprints, but there is limited knowledge regarding DNA analysis from archived latent fingerprints, i.e., touch DNA "sandwiched" between adhesive and paper. Thus, this study sought to comparatively analyze a variety of collection and analytical methods in an effort to establish an optimized workflow for this specific sample type. Untreated and treated archived latent fingerprints were utilized to compare different biological sampling techniques, swab diluents, DNA extraction systems, DNA concentration practices, and post-amplification purification methods. Archived latent fingerprints disassembled and sampled via direct cutting, followed by DNA extraction using the QIAamp® DNA Investigator Kit and concentration with Centri-Sep™ columns, increased the odds of obtaining an STR profile. Using the recommended DNA workflow, 9 of the 10 samples provided STR profiles, which included 7-100% of the expected STR alleles and two full profiles. Thus, with carefully selected procedures, archived latent fingerprints can be a viable DNA source for criminal investigations, including cold/postconviction cases. © 2017 American Academy of Forensic Sciences.

  13. Comparison of allele frequencies of Plasmodium falciparum merozoite antigens in malaria infections sampled in different years in a Kenyan population.

    Science.gov (United States)

    Ochola-Oyier, Lynette Isabella; Okombo, John; Wagatua, Njoroge; Ochieng, Jacob; Tetteh, Kevin K; Fegan, Greg; Bejon, Philip; Marsh, Kevin

    2016-05-06

    Plasmodium falciparum merozoite antigens elicit antibody responses in malaria-endemic populations, some of which are clinically protective, which is one of the reasons why merozoite antigens are the focus of malaria vaccine development efforts. Polymorphisms in several merozoite antigen-encoding genes are thought to arise as a result of selection by the human immune system. The allele frequency distribution of 15 merozoite antigens over a two-year period, 2007 and 2008, was examined in parasites obtained from children with uncomplicated malaria. In the same population, allele frequency changes pre- and post-anti-malarial treatment were also examined. Any gene which showed a significant shift in allele frequencies was also assessed longitudinally in asymptomatic and complicated malaria infections. Fluctuating allele frequencies were identified in codons 147 and 148 of reticulocyte-binding homologue (Rh) 5, with a shift from HD to YH haplotypes over the two-year period in uncomplicated malaria infections. However, in both the asymptomatic and complicated malaria infections YH was the dominant and stable haplotype over the two-year and ten-year periods, respectively. A logistic regression analysis of all three malaria infection populations between 2007 and 2009 revealed that the chance of being infected with the HD haplotype decreased with time from 2007 to 2009 and increased in the uncomplicated and asymptomatic infections. Rh5 codons 147 and 148 showed heterogeneity at both an individual and population level and may be under some degree of immune selection.

  14. Novel synthesis of nanocomposite for the extraction of Sildenafil Citrate (Viagra) from water and urine samples: Process screening and optimization.

    Science.gov (United States)

    Asfaram, Arash; Ghaedi, Mehrorang; Purkait, Mihir Kumar

    2017-09-01

    A sensitive analytical method is investigated to concentrate and determine trace levels of Sildenafil Citrate (SLC) present in water and urine samples. The method is based on sample treatment using dispersive solid-phase micro-extraction (DSPME) with a laboratory-made Mn@CuS/ZnS nanocomposite loaded on activated carbon (Mn@CuS/ZnS-NCs-AC) as a sorbent for the target analyte. The efficiency was enhanced by combining ultrasound assistance with dispersive nanocomposite solid-phase micro-extraction (UA-DNSPME). Four significant variables affecting SLC recovery, namely pH, eluent volume, sonication time and adsorbent mass, were selected by Plackett-Burman design (PBD) experiments. These selected factors were then optimized by a central composite design (CCD) to maximize the extraction of SLC. The results showed that the optimum conditions for maximizing the extraction of SLC were pH 6.0, 300 μL of eluent (acetonitrile), 10 mg of adsorbent and 6 min of sonication. Under optimized conditions, good linearity for SLC was obtained from 30 to 4000 ng mL⁻¹ with an R² of 0.99. The limit of detection (LOD) was 2.50 ng mL⁻¹, and the recoveries at two spiked levels ranged from 97.37 to 103.21% with a relative standard deviation (RSD) of less than 4.50% (n = 15). The enhancement factor (EF) was 81.91. The results show that combining ultrasound assistance with DNSPME is a suitable method for the determination of SLC in water and urine samples. Copyright © 2017 Elsevier B.V. All rights reserved.
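
    A hedged sketch of a central composite design for the four screened factors follows. The design is generated in coded units, and the rotatable axial distance is a textbook choice, not necessarily the value used in the paper.

```python
# Hedged sketch: a rotatable central composite design in coded units for four
# factors (pH, eluent volume, sonication time, adsorbent mass). Actual factor
# ranges and the number of centre replicates are illustrative assumptions.
from itertools import product
import numpy as np

k = 4
alpha = (2 ** k) ** 0.25                                    # rotatable axial distance (= 2)
factorial = np.array(list(product([-1.0, 1.0], repeat=k)))  # 16 corner runs
axial = np.vstack([sign * alpha * np.eye(k) for sign in (-1, 1)])  # 8 star runs
center = np.zeros((3, k))                                   # replicated centre runs
design = np.vstack([factorial, axial, center])
print(design.shape)                                         # -> (27, 4) coded runs
```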

  15. Exploring structural variability in X-ray crystallographic models using protein local optimization by torsion-angle sampling

    International Nuclear Information System (INIS)

    Knight, Jennifer L.; Zhou, Zhiyong; Gallicchio, Emilio; Himmel, Daniel M.; Friesner, Richard A.; Arnold, Eddy; Levy, Ronald M.

    2008-01-01

    Torsion-angle sampling, as implemented in the Protein Local Optimization Program (PLOP), is used to generate multiple structurally variable single-conformer models which are in good agreement with X-ray data. An ensemble-refinement approach to differentiate between positional uncertainty and conformational heterogeneity is proposed. Modeling structural variability is critical for understanding protein function and for modeling reliable targets for in silico docking experiments. Because of the time-intensive nature of manual X-ray crystallographic refinement, automated refinement methods that thoroughly explore conformational space are essential for the systematic construction of structurally variable models. Using five proteins spanning resolutions of 1.0–2.8 Å, it is demonstrated how torsion-angle sampling of backbone and side-chain libraries with filtering against both the chemical energy, using a modern effective potential, and the electron density, coupled with minimization of a reciprocal-space X-ray target function, can generate multiple structurally variable models which fit the X-ray data well. Torsion-angle sampling as implemented in the Protein Local Optimization Program (PLOP) has been used in this work. Models with the lowest R free values are obtained when electrostatic and implicit solvation terms are included in the effective potential. HIV-1 protease, calmodulin and SUMO-conjugating enzyme illustrate how variability in the ensemble of structures captures structural variability that is observed across multiple crystal structures and is linked to functional flexibility at hinge regions and binding interfaces. An ensemble-refinement procedure is proposed to differentiate between variability that is a consequence of physical conformational heterogeneity and that which reflects uncertainty in the atomic coordinates

  16. Optimization of low-frequency low-intensity ultrasound-mediated microvessel disruption on prostate cancer xenografts in nude mice using an orthogonal experimental design.

    Science.gov (United States)

    Yang, Yu; Bai, Wenkun; Chen, Yini; Lin, Yanduan; Hu, Bing

    2015-11-01

    The present study aimed to provide a complete exploration of the effects of sound intensity, frequency, duty cycle, microbubble volume and irradiation time on low-frequency low-intensity ultrasound (US)-mediated microvessel disruption, and to identify the optimal combination of the five factors that maximizes the blockage effect. An orthogonal experimental design approach was used. Enhanced US imaging and acoustic quantification were performed to assess tumor blood perfusion. In the confirmatory test, in addition to acoustic quantification, specimens of the tumor were stained with hematoxylin and eosin and observed using light microscopy. The results revealed that sound intensity, frequency, duty cycle, microbubble volume and irradiation time all had a significant effect on the average peak intensity (API). The extent of the impact of the variables on the API was in the following order: sound intensity; frequency; duty cycle; microbubble volume; irradiation time. The optimum conditions were found to be as follows: sound intensity, 1.00 W/cm²; frequency, 20 Hz; duty cycle, 40%; microbubble volume, 0.20 ml; and irradiation time, 3 min. In the confirmatory test, the API was 19.97 ± 2.66 immediately after treatment, and histological examination revealed signs of tumor blood vessel injury in the optimum parameter combination group. In conclusion, the Taguchi L18(3⁶) orthogonal array design was successfully applied to determine the parameter combination that minimizes the API following treatment. Under the optimum orthogonal design conditions, a minimum API of 19.97 ± 2.66 after low-frequency low-intensity US-mediated blood perfusion blockage was obtained.

  17. FREQUENCY ANALYSIS OF RLE-BLOCKS REPETITIONS IN THE SERIES OF BINARY CODES WITH OPTIMAL MINIMAX CRITERION OF AUTOCORRELATION FUNCTION

    Directory of Open Access Journals (Sweden)

    A. A. Kovylin

    2013-01-01

    Full Text Available The article describes the problem of searching for binary pseudo-random sequences with a quasi-ideal autocorrelation function, which are to be used in contemporary communication systems, including mobile and wireless data transfer interfaces. In the synthesis of sets of binary sequences, the aim is to form them on the basis of the minimax criterion, by which a sequence is considered optimal for the intended application. In the course of the research, optimal sequences of order up to 52 were obtained, and a run-length encoding (RLE) analysis was carried out. The analysis revealed regularities in the distribution of the number of runs of different lengths in the codes that are optimal under the chosen criterion, which should make it possible to optimize the search for such codes in the future.
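
    The two analyses mentioned above can be illustrated as follows: the aperiodic autocorrelation sidelobe maximum (the minimax criterion) and the run-length (RLE) structure of a binary ±1 sequence. The Barker-7 code is used only as a convenient, well-known example, not as one of the article's sequences.

```python
# Hedged sketch: minimax autocorrelation criterion and run-length structure.
def max_sidelobe(seq):
    n = len(seq)
    acf = [abs(sum(seq[i] * seq[i + k] for i in range(n - k))) for k in range(1, n)]
    return max(acf)                       # minimax criterion: minimize this value

def run_lengths(seq):
    runs, count = [], 1
    for a, b in zip(seq, seq[1:]):
        if a == b:
            count += 1
        else:
            runs.append(count)
            count = 1
    return runs + [count]

s = [1, 1, 1, -1, -1, 1, -1]              # Barker-7, known to have sidelobes <= 1
print(max_sidelobe(s), run_lengths(s))    # -> 1 [3, 2, 1, 1]
```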

  18. Difference optimization: Automatic correction of relative frequency and phase for mean non-edited and edited GABA 1H MEGA-PRESS spectra

    Science.gov (United States)

    Cleve, Marianne; Krämer, Martin; Gussew, Alexander; Reichenbach, Jürgen R.

    2017-06-01

    Phase and frequency corrections of magnetic resonance spectroscopic data are of major importance for obtaining reliable and unambiguous metabolite estimates, as validated in recent research for single-shot scans with the same spectral fingerprint. However, when using the J-difference editing technique ¹H MEGA-PRESS, misalignment between the mean edited (ON) and non-edited (OFF) spectra that may remain even after correction of the corresponding individual single-shot scans results in subtraction artefacts compromising reliable GABA quantitation. We present a fully automatic routine that iteratively optimizes the relative frequency and phase between the mean ON and OFF ¹H MEGA-PRESS spectra while minimizing the sum of the magnitude of the difference spectrum (L1 norm). The proposed method was applied to simulated spectra at different SNR levels with deliberately preset frequency and phase errors. Difference optimization proved to be more sensitive to small signal fluctuations, e.g. those arising from subtraction artefacts, and outperformed the alternative spectral registration approach, which, in contrast to our proposed linear approach, uses a nonlinear least squares minimization (L2 norm), at all investigated levels of SNR. Moreover, the proposed method was applied to 47 MEGA-PRESS datasets acquired in vivo at 3 T. The results of the alignment between the mean OFF and ON spectra were compared by applying (a) no correction, (b) difference optimization or (c) spectral registration. Since the true frequency and phase errors are not known for in vivo data, manually corrected spectra were used as the gold-standard reference (d). Automatically corrected data applying either method (b) or method (c) showed distinct improvements in spectral quality, as revealed by the mean Pearson correlation coefficient between corresponding real-part mean DIFF spectra of R_bd = 0.997 ± 0.003 (method (b) vs. (d)), compared to R_ad = 0.764 ± 0.220 (method (a) vs
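
    A minimal sketch of the L1-norm alignment idea follows, using a simple grid search over relative frequency shift and phase on synthetic spectra; the authors' iterative optimizer and real MEGA-PRESS data are not reproduced.

```python
# Hedged sketch: find the shift/phase minimizing the L1 norm of the ON-OFF
# difference spectrum. Spectra are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(7)
f = np.linspace(-2, 2, 512)
off = np.exp(-((f - 0.2) ** 2) / 0.01)                     # synthetic OFF spectrum
on = np.roll(off, 5) * np.exp(1j * 0.3) + 0.01 * rng.normal(size=512)

def l1_diff(shift, phase):
    aligned = np.roll(on, -shift) * np.exp(-1j * phase)    # undo shift and phase
    return np.abs(aligned - off).sum()                     # L1 norm of the difference

shifts = range(-10, 11)
phases = np.linspace(-np.pi, np.pi, 73)
best = min(((s, p) for s in shifts for p in phases), key=lambda sp: l1_diff(*sp))
print("estimated shift (points), phase (rad):", best[0], round(best[1], 2))
```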

  19. The importance of the sampling frequency in determining short-time-averaged irradiance and illuminance for rapidly changing cloud cover

    International Nuclear Information System (INIS)

    Delaunay, J.J.; Rommel, M.; Geisler, J.

    1994-01-01

    The sampling interval is an important parameter which must be chosen carefully if measurements of the direct, global, and diffuse irradiance or illuminance are carried out to determine their averages over a given period. Using measurements from a day with rapidly moving clouds, we investigated the influence of the sampling interval on the uncertainty of the calculated 15-min averages. We conclude, for this averaging period, that the sampling interval should not exceed 60 s and 10 s for measurements of the diffuse and global components, respectively, to keep the influence of the sampling interval below 2%. For the direct component, even a 5 s sampling interval is too long to reach this level for days with extremely quickly changing insolation conditions. (author)
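
    The effect studied above can be reproduced qualitatively with the sketch below: 15-min averages computed from subsampled versions of a rapidly fluctuating synthetic irradiance series are compared against the densely sampled reference. The series is a made-up stand-in for real measurements.

```python
# Hedged sketch: averaging error as a function of the sampling interval.
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(900)                                  # one 15-min window, 1 s steps
# crude square-wave-like series mimicking fast cloud-driven irradiance swings
irr = 500 + 300 * np.sign(np.sin(0.05 * t + np.cumsum(rng.normal(0, 0.05, 900))))
true_mean = irr.mean()

for dt in (1, 5, 10, 60):                           # sampling intervals in seconds
    est = irr[::dt].mean()
    print(f"dt = {dt:2d} s: error = {100 * abs(est - true_mean) / true_mean:.2f} %")
```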

  20. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites and two features of Random Forests: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized to transfer sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
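
    A hedged sketch of the variable-importance subset selection follows, using scikit-learn's Random Forest on synthetic stand-ins for the MODIS composites; the sample-transfer (outlier identification) step is omitted.

```python
# Hedged sketch: rank time-series bands by Random Forest variable importance
# and retrain on the top half. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
X = rng.normal(size=(500, 36))                 # e.g. 36 ten-day composite bands
y = (X[:, 5] + X[:, 20] > 0).astype(int)       # labels driven by two acquisition dates

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
top = np.argsort(rf.feature_importances_)[::-1][:18]    # keep the top half

rf_half = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr[:, top], ytr)
print("full set :", rf.score(Xte, yte))
print("half set :", rf_half.score(Xte[:, top], yte))
```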

  1. Optimization and application of octadecyl-modified monolithic silica for solid-phase extraction of drugs in whole blood samples.

    Science.gov (United States)

    Namera, Akira; Saito, Takeshi; Ota, Shigenori; Miyazaki, Shota; Oikawa, Hiroshi; Murata, Kazuhiro; Nagao, Masataka

    2017-09-29

    Monolithic silica in MonoSpin for solid-phase extraction of drugs from whole blood samples was developed to facilitate high-throughput analysis. Monolithic silicas of various pore sizes and octadecyl contents were synthesized, and their effects on recovery rates were evaluated. The silica monolith M18-200 (20 μm through-pore size, 10.4 nm mesopore size, and 17.3% carbon content) achieved the best recovery of the target analytes in whole blood samples. The extraction proceeded with centrifugal force at 1000 rpm for 2 min, and the eluate was directly injected into the liquid chromatography-mass spectrometry system without any tedious steps such as evaporation of extraction solvents. Under the optimized conditions, low detection limits of 0.5-2.0 ng mL⁻¹ and calibration ranges up to 1000 ng mL⁻¹ were obtained. The recoveries of the target drugs in whole blood were 76-108% with relative standard deviations of less than 14.3%. These results indicate that the developed method based on monolithic silica is convenient, highly efficient, and applicable for detecting drugs in whole blood samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Dual frequency modulation with two cantilevers in series: a possible means to rapidly acquire tip–sample interaction force curves with dynamic AFM

    International Nuclear Information System (INIS)

    Solares, Santiago D; Chawla, Gaurav

    2008-01-01

    One common application of atomic force microscopy (AFM) is the acquisition of tip–sample interaction force curves. However, this can be a slow process when the user is interested in studying non-uniform samples, because existing contact- and dynamic-mode methods require that the measurement be performed at one fixed surface point at a time. This paper proposes an AFM method based on dual frequency modulation using two cantilevers in series, which could be used to measure the tip–sample interaction force curves and topography of the entire sample with a single surface scan, in a time that is comparable to the time needed to collect a topographic image with current AFM imaging modes. Numerical simulation results are provided along with recommended parameters to characterize tip–sample interactions resembling those of conventional silicon tips and carbon nanotube tips tapping on silicon surfaces

  3. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    Science.gov (United States)

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.

  4. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers.

    Science.gov (United States)

    Tisdale, Evgenia; Kennedy, Devin; Xu, Xiaodong; Wilkins, Charles

    2014-01-15

    The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers, contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to the change in the composition of the catalyst (the addition of small amount of CuBr2) than is the pentafluorostyrene component distribution. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn; Zhu, Weiliang, E-mail: wlzhu@mail.shcnc.ac.cn [ACS Key Laboratory of Receptor Research, Drug Discovery and Design Center, Shanghai Institute of Materia Medica, Chinese Academy of Sciences, 555 Zuchongzhi Road, Shanghai 201203 (China)]; Shi, Jiye, E-mail: Jiye.Shi@ucb.com [UCB Pharma, 216 Bath Road, Slough SL1 4EN (United Kingdom)]

    2015-03-28

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the hope of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives accurate evaluations of the structural and thermodynamic properties of the conformational transition which are in good agreement with the standard REMD simulation. Therefore, the hybrid REMD could greatly increase the computational efficiency and thus expand the application of REMD simulation to larger protein systems.

  6. Design and sampling plan optimization for RT-qPCR experiments in plants: a case study in blueberry

    Directory of Open Access Journals (Sweden)

    Jose V Die

    2016-03-01

    Full Text Available The qPCR assay has become a routine technology in plant biotechnology and agricultural research. It is unlikely to be technically improved, but there are still challenges, which center around minimizing the variability in results and transparency when reporting technical data in support of the conclusions of a study. There are a number of aspects of the pre- and post-assay workflow that contribute to variability of results. Here, through the study of the introduction of error in qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. In this study, we found that the stage for which increasing the number of replicates would be the most beneficial depends on the tissue used. For example, we would recommend the use of more RT replicates when working with leaf tissue, while the use of more sampling (RNA extraction) replicates would be recommended when working with stems or fruits to obtain the most reliable results. The use of more qPCR replicates provides the least benefit, as it is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner and produce more consistent and reproducible data.
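
    The study's script is not reproduced here; the sketch below only illustrates the budget-constrained search it describes, with made-up variance components and unit costs standing in for the study's estimates.

```python
# Hedged sketch: brute-force the numbers of sampling, RT and qPCR replicates
# that minimize the variance of the mean under a budget. All numbers are placeholders.
from itertools import product

var = {"sample": 0.30, "rt": 0.10, "qpcr": 0.02}     # error introduced at each stage
cost = {"sample": 5.0, "rt": 2.0, "qpcr": 0.5}       # cost per replicate (arbitrary units)
budget = 60.0

def plan_variance(ns, nrt, nq):
    # variance of the grand mean for a nested (hierarchical) design
    return (var["sample"] / ns
            + var["rt"] / (ns * nrt)
            + var["qpcr"] / (ns * nrt * nq))

def plan_cost(ns, nrt, nq):
    return ns * (cost["sample"] + nrt * (cost["rt"] + nq * cost["qpcr"]))

feasible = [(ns, nrt, nq) for ns, nrt, nq in product(range(1, 9), repeat=3)
            if plan_cost(ns, nrt, nq) <= budget]
best = min(feasible, key=lambda p: plan_variance(*p))
print("optimal (sampling, RT, qPCR) replicates:", best)
```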

  7. Multi-probe-based resonance-frequency electrical impedance spectroscopy for detection of suspicious breast lesions: improving performance using partial ROC optimization

    Science.gov (United States)

    Lederman, Dror; Zheng, Bin; Wang, Xingwei; Wang, Xiao Hui; Gur, David

    2011-03-01

    We have developed a multi-probe resonance-frequency electrical impedance spectroscope (REIS) system to detect breast abnormalities. Based on assessing asymmetry in REIS signals acquired between the left and right breasts, we developed several machine learning classifiers to classify younger women (i.e., under 50 years old) into two groups having high and low risk for developing breast cancer. In this study, we investigated a new method to optimize performance based on the area under a selected partial receiver operating characteristic (ROC) curve when optimizing an artificial neural network (ANN), and tested whether it could improve classification performance. From an ongoing prospective study, we selected a dataset of 174 cases for whom we have both REIS signals and diagnostic status verification. The dataset includes 66 "positive" cases recommended for biopsy due to detection of highly suspicious breast lesions and 108 "negative" cases determined by imaging-based examinations. A set of REIS-based feature differences, extracted from the two breasts using a mirror-matched approach, was computed and constituted an initial feature pool. Using a leave-one-case-out cross-validation method, we applied a genetic algorithm (GA) to train the ANN with an optimal subset of features. Two optimization criteria were separately used in the GA optimization, namely the area under the entire ROC curve (AUC) and the partial area under the ROC curve up to a predetermined threshold (i.e., 90% specificity). The results showed that although the ANN optimized using the entire AUC yielded higher overall performance (AUC = 0.83 versus 0.76), the ANN optimized using the partial ROC area criterion achieved substantially higher operational performance (i.e., increasing the sensitivity level from 28% to 48% at 95% specificity and/or from 48% to 58% at 90% specificity).
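
    The partial-AUC criterion can be computed as below; scikit-learn's standardized partial AUC (with the McClish correction) is used as a stand-in for the authors' GA fitness function, and the classifier scores are synthetic.

```python
# Hedged sketch: full AUC versus partial AUC restricted to the high-specificity
# region (FPR <= 0.10, i.e. specificity >= 90%). Scores are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(10)
y = np.concatenate([np.zeros(108), np.ones(66)])          # negatives, positives
scores = rng.normal(size=174) + 0.9 * y                   # hypothetical ANN outputs

full_auc = roc_auc_score(y, scores)
partial_auc = roc_auc_score(y, scores, max_fpr=0.10)      # standardized partial AUC
print(f"AUC = {full_auc:.3f}, pAUC (spec >= 90%) = {partial_auc:.3f}")
```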

  8. A study on reducing update frequency of the forecast samples in the ensemble-based 4DVar data assimilation method

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Aimei; Xu, Daosheng [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province; Chinese Academy of Meteorological Sciences, Beijing (China). State Key Lab. of Severe Weather; Qiu, Xiaobin [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province; Tianjin Institute of Meteorological Science (China); Qiu, Chongjian [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province

    2013-02-15

    In the ensemble-based four-dimensional variational assimilation method (SVD-En4DVar), a singular value decomposition (SVD) technique is used to select the leading eigenvectors, and the analysis variables are expressed as an expansion in the orthogonal basis of these eigenvectors. Experiments with a two-dimensional shallow-water equation model and simulated observations show that the truncation error and the rejection of observed signals due to the reduced-dimensional reconstruction of the analysis variables are the major factors that degrade the analysis when the ensemble size is not large enough. However, a larger ensemble imposes a daunting computational burden. Experiments with the shallow-water equation model also show that the forecast error covariances remain relatively constant over time. For that reason, we propose an approach that increases the number of members in the forecast ensemble while reducing the update frequency of the forecast error covariance, in order to increase analysis accuracy and reduce the computational cost. A series of experiments were conducted with the shallow-water equation model to test the efficiency of this approach. The experimental results indicate that this approach is promising. Further experiments with the WRF model show that this approach is also suitable for the real atmospheric data assimilation problem, but the update frequency of the forecast error covariances should not be too low. (orig.)

  9. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines

    Directory of Open Access Journals (Sweden)

    Jingjing Xu

    2015-08-01

    Full Text Available In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are open, it is proposed to construct wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem of multiple access interference (MAI), which arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission, and applying the D-PSO algorithm.

  10. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines.

    Science.gov (United States)

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-08-27

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are open, it is proposed to construct wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem of multiple access interference (MAI), which arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission, and applying the D-PSO algorithm.

  11. [Application of N-isopropyl-p-[123I] iodoamphetamine quantification of regional cerebral blood flow using iterative reconstruction methods: selection of the optimal reconstruction method and optimization of the cutoff frequency of the preprocessing filter].

    Science.gov (United States)

    Asazu, Akira; Hayashi, Masuo; Arai, Mami; Kumai, Yoshiaki; Akagi, Hiroyuki; Okayama, Katsuyoshi; Narumi, Yoshifumi

    2013-05-01

    In cerebral blood flow tests using N-isopropyl-p-[¹²³I]iodoamphetamine (¹²³I-IMP), quantitative results of greater accuracy than possible with the autoradiography (ARG) method can be obtained with attenuation and scatter correction and image reconstruction by filtered back projection (FBP). However, the cutoff frequency of the preprocessing Butterworth filter affects the quantitative value; hence, we sought an optimal cutoff frequency, derived from the correlation between the FBP method and xenon-enhanced computed tomography (XeCT) cerebral blood flow (CBF). In this study, we reconstructed images using ordered subsets expectation maximization (OSEM), a successive-approximation method which has recently come into wide use, and also three-dimensional (3D) OSEM, by which the resolution can be corrected with the addition of collimator broadening correction, to examine the effects of changing the cutoff frequency on the regional cerebral blood flow (rCBF) quantitative value, and to determine whether successive approximation is applicable to cerebral blood flow quantification. Our results showed that quantification of greater accuracy was obtained with reconstruction employing the 3D-OSEM method and a cutoff frequency set near 0.75-0.85 cycles/cm, which is higher than the frequency used in image reconstruction by the ordinary FBP method.
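
    A minimal sketch of a frequency-domain Butterworth low-pass with its cutoff expressed in cycles/cm follows; the filter order and pixel size are assumptions for illustration, not values from the clinical protocol.

```python
# Hedged sketch: Butterworth low-pass H(f) = 1 / (1 + (f / fc)^(2n)) applied to
# a 1D profile, with the cutoff fc given in cycles/cm as in the study above.
import numpy as np

def butterworth_lowpass(profile, pixel_size_cm, fc_cycles_per_cm, order=8):
    f = np.fft.rfftfreq(profile.size, d=pixel_size_cm)   # spatial frequencies (cycles/cm)
    H = 1.0 / (1.0 + (f / fc_cycles_per_cm) ** (2 * order))
    return np.fft.irfft(np.fft.rfft(profile) * H, n=profile.size)

noisy = np.random.default_rng(6).normal(size=128)
smoothed = butterworth_lowpass(noisy, pixel_size_cm=0.2, fc_cycles_per_cm=0.8)
print(smoothed.shape)
```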

  12. Application of N-isopropyl-p-[123I] iodoamphetamine quantification of regional cerebral blood flow using iterative reconstruction methods. Selection of the optimal reconstruction method and optimization of the cutoff frequency of the preprocessing filter

    International Nuclear Information System (INIS)

    Asazu, Akira; Hayashi, Masuo; Arai, Mami; Kumai, Yoshiaki; Akagi, Hiroyuki; Okayama, Katsuyoshi; Narumi, Yoshifumi

    2013-01-01

    In cerebral blood flow tests using N-isopropyl-p-[¹²³I]iodoamphetamine (¹²³I-IMP), quantitative results of greater accuracy than possible with the autoradiography (ARG) method can be obtained with attenuation and scatter correction and image reconstruction by filtered back projection (FBP). However, the cutoff frequency of the preprocessing Butterworth filter affects the quantitative value; hence, we sought an optimal cutoff frequency, derived from the correlation between the FBP method and xenon-enhanced computed tomography (XeCT) cerebral blood flow (CBF). In this study, we reconstructed images using ordered subsets expectation maximization (OSEM), a successive-approximation method which has recently come into wide use, and also three-dimensional (3D) OSEM, by which the resolution can be corrected with the addition of collimator broadening correction, to examine the effects of changing the cutoff frequency on the regional cerebral blood flow (rCBF) quantitative value, and to determine whether successive approximation is applicable to cerebral blood flow quantification. Our results showed