WorldWideScience

Sample records for optimal sampling schemes

  1. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  2. Optimal sampling schemes applied in geology

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-05-01

    Full Text Available Presentation slides (outline): 1 Introduction to hyperspectral remote sensing; 2 Objective of Study 1; 3 Study Area; 4 Data used; 5 Methodology; 6 Results; 7 Background and Research Question for Study 2; 8 Study Area and Data; 9 Methodology; 10 Results; 11 Conclusions. Debba (CSIR), Optimal Sampling Schemes applied in Geology, UP 2010.

  3. Optimal sampling schemes for vegetation and geological field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2012-07-01

    Full Text Available The presentation made to Wits Statistics Department was on common classification methods used in the field of remote sensing, and the use of remote sensing to design optimal sampling schemes for field visits with applications in vegetation...

  4. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available Sampling schemes case studies: optimized field sampling representing the overall distribution of a particular mineral; deriving optimal exploration target zones. CONTINUUM REMOVAL for vegetation [13, 27, 46]. The convex hull transform is a method... of normalizing spectra [16, 41]. The convex hull technique is analogous to fitting a rubber band over a spectrum to form a continuum. Figure 5 shows the concept of the convex hull transform. The difference between the hull and the original spectrum...

  5. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high cost of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevents their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for sets of physical and chemical properties and to investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize the kriging variance. The optimization procedure was validated by constructing maps of relative improvement that compare the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimation variance at these locations. The addition of optimal samples for specific regions increased the accuracy by up to 2% for chemical and 1% for physical properties. The use of a sample grid and a medium-scale variogram as prior information for the design of additional sampling schemes is very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.
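
    For reference, the quantity that SSA-type optimization seeks to reduce on average over the map is the ordinary kriging estimation variance; the standard geostatistical expression is given below (textbook notation, not an equation quoted from the paper).

```latex
% Ordinary kriging at an unsampled location x_0: the weights \lambda_i solve the
% kriging system subject to the unbiasedness constraint, and the resulting
% estimation variance is what sampling-scheme optimization tries to minimize.
\begin{aligned}
\hat{Z}(\mathbf{x}_0) &= \sum_{i=1}^{n} \lambda_i \, Z(\mathbf{x}_i),
\qquad \sum_{i=1}^{n} \lambda_i = 1, \\
\sigma^2_{\mathrm{OK}}(\mathbf{x}_0) &= \sum_{i=1}^{n} \lambda_i \, \gamma(\mathbf{x}_i,\mathbf{x}_0) + \mu,
\end{aligned}
```

    where γ is the variogram and μ is the Lagrange multiplier of the unbiasedness constraint.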

  6. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    Science.gov (United States)

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as weighting function; and the third criterion (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented with the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time.
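
    As a concrete illustration of the MMSD criterion described above, the sketch below (my own minimal example, not code from the paper) runs a plain Metropolis-type simulated annealing loop that spreads a handful of additional points over a unit-square field; the grid resolution, cooling schedule and point counts are arbitrary assumptions.

```python
# Minimal spatial simulated annealing (SSA) sketch with the MMSD criterion:
# place additional points so that the mean distance from an arbitrary field
# location to its nearest observation is as small as possible.
import numpy as np

rng = np.random.default_rng(0)

# Evaluation grid standing in for "an arbitrarily chosen point" in a unit-square field.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50)), -1).reshape(-1, 2)

def mmsd(points):
    """Mean of the shortest distances from every grid cell to its nearest sample."""
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2)
    return d.min(axis=1).mean()

existing = rng.random((20, 2))   # previously sampled locations (kept fixed)
new = rng.random((10, 2))        # additional points to be optimized

temp, cooling = 0.1, 0.995
current = mmsd(np.vstack([existing, new]))
for _ in range(2000):
    cand = new.copy()
    j = rng.integers(len(cand))
    cand[j] = np.clip(cand[j] + rng.normal(0, 0.05, 2), 0, 1)   # perturb one point
    trial = mmsd(np.vstack([existing, cand]))
    # Metropolis acceptance: always take improvements, occasionally accept worse moves.
    if trial < current or rng.random() < np.exp((current - trial) / temp):
        new, current = cand, trial
    temp *= cooling

print(f"final MMSD: {current:.4f}")
```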

  7. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plan curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing the soil sampling scheme. The optimal configuration was able to capture soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining the sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency.
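
    The regression step described above amounts to an ordinary least-squares fit of soil organic matter on the listed terrain attributes; the sketch below illustrates that step on synthetic stand-in data (the coefficients, noise level and sample size are invented for illustration).

```python
# Multiple linear regression of soil organic matter (SOM) on terrain attributes,
# here with synthetic data; the six columns follow the factors named in the abstract:
# slope, plan curvature, profile curvature, TWI, SPI, STI.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 6))                               # hypothetical terrain attributes
true_coef = np.array([0.5, -0.2, 0.1, 0.8, 0.3, -0.4])    # invented "ground truth"
som = 2.0 + X @ true_coef + rng.normal(0, 0.5, n)

A = np.column_stack([np.ones(n), X])                      # add intercept column
coef, *_ = np.linalg.lstsq(A, som, rcond=None)            # ordinary least squares
pred = A @ coef
r2 = 1 - np.sum((som - pred) ** 2) / np.sum((som - som.mean()) ** 2)
print("fitted intercept and coefficients:", np.round(coef, 2))
print("R^2 on the synthetic data:", round(r2, 3))
```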

  8. Geochemical sampling scheme optimization on mine wastes based on hyperspectral data

    CSIR Research Space (South Africa)

    Zhao, T

    2008-07-01

    Full Text Available decontamination, for example, acid-generating minerals. Acid rock drainage can adversely affect the quality of drinking water and the health of riparian ecosystems. To assess or monitor the environmental impact of mining, sampling of mine waste is required...

  9. A systematic random sampling scheme optimized to detect the proportion of rare synapses in the neuropil.

    Science.gov (United States)

    da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C

    2009-05-30

    Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
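
    To see why such a large number of sampling sites is needed for so rare a class, a simple binomial back-of-envelope calculation (my own illustration, not a figure from the paper) is instructive:

```latex
% Standard error of an estimated proportion \hat{p} from n independent sampling sites.
\widehat{\mathrm{SE}}(\hat{p}) = \sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}}
\approx \sqrt{\frac{0.002 \times 0.998}{1000}} \approx 1.4 \times 10^{-3},
```

    which is of the same order as the proportion itself (0.2%), so the disector sites must be chosen and counted as efficiently as possible.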

  10. Optimal design of sampling and mapping schemes in the radiometric exploration of Chipilapa, El Salvador (Geo-statistics)

    International Nuclear Information System (INIS)

    Balcazar G, M.; Flores R, J.H.

    1992-01-01

    As part of the radiometric surface exploration carried out in the Chipilapa geothermal field, El Salvador, the geo-statistical parameters were obtained from the variogram calculated from the field data. The maximum correlation distance of the radon samples in the different observation directions (N-S, E-W, NW-SE, NE-SW) was 121 m, which defines the monitoring grid for future prospecting in the same area. From this, an optimization (minimum cost) of the spacing of the field samples was derived by means of geo-statistical techniques, without losing the detection of the anomaly. (Author)

  11. An Optimization Scheme for ProdMod

    International Nuclear Information System (INIS)

    Gregory, M.V.

    1999-01-01

    A general-purpose dynamic optimization scheme has been devised in conjunction with the ProdMod simulator. The optimization scheme is suitable for the Savannah River Site (SRS) High Level Waste (HLW) complex operations and is able to handle different types of optimizations, such as linear and nonlinear. The optimization is performed in a stand-alone FORTRAN-based optimization driver, which is interfaced with the ProdMod simulator for the flow of information between the two.

  12. Optimal Sales Schemes for Network Goods

    DEFF Research Database (Denmark)

    Parakhonyak, Alexei; Vikander, Nick

    consumers simultaneously, serve them all sequentially, or employ any intermediate scheme. We show that the optimal sales scheme is purely sequential, where each consumer observes all previous sales before choosing whether to buy himself. A sequential scheme maximizes the amount of information available...

  13. Interpolation-free scanning and sampling scheme for tomographic reconstructions

    International Nuclear Information System (INIS)

    Donohue, K.D.; Saniie, J.

    1987-01-01

    In this paper a sampling scheme is developed for computer tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation

  14. Optimal Face-Iris Multimodal Fusion Scheme

    Directory of Open Access Journals (Sweden)

    Omid Sharifi

    2016-06-01

    Full Text Available Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score level, feature level and decision level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, Backtracking Search Algorithm (BSA, a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of schemes by reducing the number of features and selecting the optimized weights for feature level and score level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of proposed fusion schemes over unimodal and multimodal fusion methods.

  15. Evolutionary Algorithm for Optimal Vaccination Scheme

    International Nuclear Information System (INIS)

    Parousis-Orthodoxou, K J; Vlachos, D S

    2014-01-01

    The following work uses the dynamic capabilities of an evolutionary algorithm in order to obtain an optimal immunization strategy in a user-specified network. The produced algorithm uses a basic genetic algorithm with crossover and mutation techniques in order to locate certain nodes in the input network. These nodes are immunized in an SIR epidemic spreading process, and the performance of each immunization scheme is evaluated by the level of containment it provides against the spreading of the disease

  16. Optimization of a middle atmosphere diagnostic scheme

    Science.gov (United States)

    Akmaev, Rashid A.

    1997-06-01

    A new assimilative diagnostic scheme based on the use of a spectral model was recently tested on the CIRA-86 empirical model. It reproduced the observed climatology with an annual global rms temperature deviation of 3.2 K in the 15-110 km layer. The most important new component of the scheme is that the zonal forcing necessary to maintain the observed climatology is diagnosed from empirical data and subsequently substituted into the simulation model at the prognostic stage of the calculation in an annual cycle mode. The simulation results are then quantitatively compared with the empirical model, and the above-mentioned rms temperature deviation provides an objective measure of the 'distance' between the two climatologies. This quantitative criterion makes it possible to apply standard optimization procedures to the whole diagnostic scheme and/or the model itself. The estimates of the zonal drag have been improved in this study by introducing a nudging (Newtonian-cooling) term into the thermodynamic equation at the diagnostic stage. A properly optimized adjustment of the strength of this term makes it possible to further reduce the rms temperature deviation of the simulations down to approximately 2.7 K. These results suggest that direct optimization can successfully be applied to atmospheric model parameter identification problems of moderate dimensionality.
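
    The nudging (Newtonian-cooling) term mentioned above has the generic relaxation form shown below; the symbols and the relaxation coefficient α are my notation for illustration, not the paper's.

```latex
% Newtonian-cooling (nudging) term added to the thermodynamic equation: the model
% temperature T is relaxed toward the empirical field T_obs with an adjustable
% strength \alpha, which is the parameter tuned in the optimization.
\frac{\partial T}{\partial t} = \cdots \; - \; \alpha \left( T - T_{\mathrm{obs}} \right).
```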

  17. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    Science.gov (United States)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problem. The first one is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second scheme is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third scheme is an optimized compact finite difference scheme modified by us: 4th-order Runge-Kutta time stepping with a 4th-order pentadiagonal compact spatial discretization having maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (optimized compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.

  18. Failure Probability Estimation Using Asymptotic Sampling and Its Dependence upon the Selected Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Martinásková Magdalena

    2017-12-01

    Full Text Available The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol' sequences. All results are compared with the exact failure probability value.
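
    For orientation, a hedged sketch of the Asymptotic Sampling idea itself is given below: sample with an artificially inflated standard deviation sigma = 1/f so that failures become frequent, convert the estimated failure probabilities to reliability indices beta(f), fit the commonly used two-term model beta(f) ~ A*f + B/f, and extrapolate to f = 1. The limit-state function, sample sizes and scale factors are illustrative choices, not those used in the article.

```python
# Asymptotic Sampling (AS) sketch for failure-probability estimation.
# Assumed model: beta(f) = A*f + B/f, extrapolated to beta(1) = A + B.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def limit_state(x):
    """Toy linear limit state g(x) = 4 - sum(x)/sqrt(d); failure when g <= 0 (exact beta = 4)."""
    return 4.0 - x.sum(axis=1) / np.sqrt(x.shape[1])

d, n_mc = 10, 20000
fs = np.array([0.4, 0.5, 0.6, 0.7])                # scale factors f < 1, i.e. sigma = 1/f
betas = []
for f in fs:
    x = rng.normal(0.0, 1.0 / f, size=(n_mc, d))   # inflated-variance Monte Carlo
    pf = np.mean(limit_state(x) <= 0.0)
    betas.append(-norm.ppf(pf))                    # generalized reliability index beta(f)

M = np.column_stack([fs, 1.0 / fs])                # least-squares fit of beta(f) = A*f + B/f
(A, B), *_ = np.linalg.lstsq(M, np.array(betas), rcond=None)
beta1 = A + B
print(f"extrapolated beta(1) = {beta1:.2f}, Pf ~ {norm.cdf(-beta1):.1e} (exact 3.2e-05)")
```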

  19. Optimal powering schemes for legged robotics

    Science.gov (United States)

    Muench, Paul; Bednarz, David; Czerniak, Gregory P.; Cheok, Ka C.

    2010-04-01

    Legged robots have tremendous mobility, but they can also be very inefficient. These inefficiencies can be due to suboptimal control schemes, among other things. If your goal is to get from point A to point B in the least amount of time, your control scheme will be different than if your goal is to get there using the least amount of energy. In this paper, we seek a balance between these extremes by looking at both efficiency and speed. We model a walking robot as a rimless wheel and, using Pontryagin's Maximum Principle (PMP), we find an "on-off" control for the model and describe the switching curve between these control extremes.

  20. Field sampling scheme optimization using simulated annealing

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-10-01

    Full Text Available : silica (quartz, chalcedony, and opal) → alunite → kaolinite → illite → smectite → chlorite. Associated with this mineral alteration are high sulphidation gold deposits and low sulphidation base metal deposits. Gold mineralization is located... of vuggy (porous) quartz, opal and gray and black chalcedony veins. Vuggy quartz (porous quartz) is formed from extreme leaching of the host rock. It hosts high sulphidation gold mineralization and is evidence for a hypogene event. Alteration...

  1. Performance comparison of renewable incentive schemes using optimal control

    International Nuclear Information System (INIS)

    Oak, Neeraj; Lawson, Daniel; Champneys, Alan

    2014-01-01

    Many governments worldwide have instituted incentive schemes for renewable electricity producers in order to meet carbon emissions targets. These schemes aim to boost investment and hence growth in renewable energy industries. This paper examines four such schemes: premium feed-in tariffs, fixed feed-in tariffs, feed-in tariffs with contract for difference and the renewable obligations scheme. A generalised mathematical model of industry growth is presented and fitted with data from the UK onshore wind industry. The model responds to subsidy from each of the four incentive schemes. A utility or ‘fitness’ function that maximises installed capacity at some fixed time in the future while minimising total cost of subsidy is postulated. Using this function, the optimal strategy for provision and timing of subsidy for each scheme is calculated. Finally, a comparison of the performance of each scheme, given that they use their optimal control strategy, is presented. This model indicates that the premium feed-in tariff and renewable obligation scheme produce the joint best results. - Highlights: • Stochastic differential equation model of renewable energy industry growth and prices, using UK onshore wind data 1992–2010. • Cost of production reduces as cumulative installed capacity of wind energy increases, consistent with the theory of learning. • Studies the effect of subsidy using feed-in tariff schemes, and the ‘renewable obligations’ scheme. • We determine the optimal timing and quantity of subsidy required to maximise industry growth and minimise costs. • The premium feed-in tariff scheme and the renewable obligations scheme produce the best results under optimal control

  2. Optimal on/off scheme for all-optical switching

    DEFF Research Database (Denmark)

    Kristensen, Philip Trøst; Heuck, Mikkel; Mørk, Jesper

    2012-01-01

    We present a two-pulsed on/off scheme based on coherent control for fast switching of the optical energy in a micro cavity and use calculus of variations to optimize the switching in terms of energy.

  3. β-NMR sample optimization

    CERN Document Server

    Zakoucka, Eva

    2013-01-01

    During my summer student programme I was working on sample optimization for a new β-NMR project at the ISOLDE facility. The β-NMR technique is well-established in solid-state physics and just recently it is being introduced for applications in biochemistry and life sciences. The β-NMR collaboration will be applying for beam time to the INTC committee in September for three nuclei: Cu, Zn and Mg. Sample optimization for Mg was already performed last year during the summer student programme. Therefore sample optimization for Cu and Zn had to be completed as well for the project proposal. My part in the project was to perform thorough literature research on techniques studying Cu and Zn complexes in native conditions, search for relevant binding candidates for Cu and Zn applicable for β-NMR and eventually evaluate selected binding candidates using UV-VIS spectrometry.

  4. Optimal updating magnitude in adaptive flat-distribution sampling.

    Science.gov (United States)

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
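
    As a toy illustration of flat-distribution sampling with a decaying updating magnitude (my own minimal example, not the authors' code), the sketch below runs a Wang-Landau-style single-bin update on the number of heads among a set of coins and switches to the inverse-time (1/t) schedule once the modification factor has become small; the system, the fixed halving clock and all schedule parameters are illustrative assumptions (a real WL run would halve the factor on histogram-flatness checks).

```python
# Wang-Landau-style flat-histogram sampling of E = number of heads among N coins,
# with a single-bin update whose magnitude follows the 1/t (inverse-time) schedule
# after an initial halving stage.  The exact density of states is binomial(N, E),
# which lets us check the final estimate.
import numpy as np
from math import comb, log

rng = np.random.default_rng(0)
N = 12
state = rng.integers(0, 2, N)        # coin configuration
E = int(state.sum())                 # "energy" = number of heads
lng = np.zeros(N + 1)                # running estimate of ln g(E) (the bias potential)
f, total_steps = 1.0, 200000         # updating magnitude (ln-scale) and run length

for t in range(1, total_steps + 1):
    i = rng.integers(N)              # propose flipping one coin
    E_new = E + (1 - 2 * int(state[i]))
    # Accept with probability min(1, g(E)/g(E_new)) to flatten the visit histogram.
    if log(rng.random() + 1e-300) < lng[E] - lng[E_new]:
        state[i] ^= 1
        E = E_new
    lng[E] += f                      # single-bin update of the bias
    # Updating-magnitude schedule: halve f on a fixed clock, then follow f = 1/t.
    if f > 1.0 / t:
        if t % 2000 == 0:
            f *= 0.5
    else:
        f = 1.0 / t

lng -= lng[0]                        # remove the arbitrary additive constant
exact = np.array([log(comb(N, k)) for k in range(N + 1)])
print("max |ln g error|:", round(float(np.max(np.abs(lng - exact))), 2))
```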

  5. Optimized difference schemes for multidimensional hyperbolic partial differential equations

    Directory of Open Access Journals (Sweden)

    Adrian Sescu

    2009-04-01

    Full Text Available In numerical solutions to hyperbolic partial differential equations in multidimensions, in addition to dispersion and dissipation errors, there is a grid-related error (referred to as isotropy error or numerical anisotropy) that affects the directional dependence of the wave propagation. Difference schemes are mostly analyzed and optimized in one dimension, wherein the anisotropy correction may not be effective enough. In this work, optimized multidimensional difference schemes with arbitrary order of accuracy are designed to have improved isotropy compared to conventional schemes. The derivation is performed based on Taylor series expansion and Fourier analysis. The schemes are restricted to equally-spaced Cartesian grids, so the generalized curvilinear transformation method and Cartesian grid methods are good candidates.

  6. Multiobjective hyper heuristic scheme for system design and optimization

    Science.gov (United States)

    Rafique, Amer Farhan

    2012-11-01

    As system design is becoming more and more multifaceted, integrated, and complex, the traditional single-objective optimization approach to optimal design is becoming less and less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. Another objective of the intended approach is to improve the worth of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to provide the system designer with the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics in order to increase the assurance of reaching a global optimum solution. Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds population diversity, resulting in accomplishment of the pre-defined goals set in the proposed scheme.

  7. Investigation of optimal photoionization schemes for Sm by multi-step resonance ionization

    International Nuclear Information System (INIS)

    Cha, H.; Song, K.; Lee, J.

    1997-01-01

    Excited states of Sm atoms are investigated using multi-color resonance-enhanced multiphoton ionization spectroscopy. Among the ionization signals, the one observed at 577.86 nm is regarded as the most efficient excited state if a 1-color 3-photon scheme is applied, whereas an observed level located at 587.42 nm is regarded as the most efficient state if a 2-color scheme is used. For the 2-color scheme, a level located at 573.50 nm from this first excited state is one of the best second excited states for the optimal photoionization scheme. Based on this ionization scheme, various concentrations of standard solutions of samarium are determined. The minimum amount of sample that can be detected by the 2-color scheme is determined to be 200 fg. The detection sensitivity is limited mainly by contamination of the graphite atomizer. copyright 1997 American Institute of Physics

  8. Effects of sparse sampling schemes on image quality in low-dose CT

    International Nuclear Information System (INIS)

    Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena

    2013-01-01

    Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT), which can potentially reduce the health risk related to radiation dose. In particular, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promise in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data properties, and compare the effects of the sampling schemes on the image quality. Methods: Data properties of several sampling schemes are analyzed with respect to CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse sampling schemes, and simulated those schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT data set was used as a reference, and sparse sampling was realized numerically based on the CBCT data. Results: It is found that both sampling density and data incoherence affect the image quality in the CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases have shown promising results. These sampling schemes produced images with image quality similar to the reference image, and their structural similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction. Conclusions: The authors found that in CS-based image reconstructions both sampling density and data incoherence affect the image quality, and suggest that a sampling scheme should be devised and optimized by use of these indicators. With this strategic...

  9. Numerical Comparison of Optimal Charging Schemes for Electric Vehicles

    DEFF Research Database (Denmark)

    You, Shi; Hu, Junjie; Pedersen, Anders Bro

    2012-01-01

    of four different charging schemes, namely night charging, night charging with V2G, 24-hour charging and 24-hour charging with V2G, on the basis of real driving data and the electricity price of Denmark in 2003. For all schemes, optimal charging plans with 5-minute resolution are derived by solving a mixed-integer programming problem which aims to minimize the charging cost while taking into account the users' driving needs and the practical limitations of the EV battery. In the post-processing stage, the rainflow counting algorithm is implemented to assess the lifetime usage of a lithium...

  10. Optimization of bitumen-based upgrading and refining schemes

    Energy Technology Data Exchange (ETDEWEB)

    Munteanu, M.; Chen, J. [National Centre for Upgrading Technology, Devon, AB (Canada); Natural Resources Canada, Devon, AB (Canada). CanmetENERGY

    2009-07-01

    This poster highlighted the results of a study in which the entire refining scheme for Canadian bitumen as feedstocks was modelled and simulated under different process configurations, operating conditions and product structures. The aim of the study was to optimize the economic benefits, product quality and energy use under a range of operational scenarios. Optimal refining schemes were proposed along with process conditions for existing refinery configurations and objectives. The goal was to provide guidelines and information for upgrading and refining process design and retrofitting. Critical steps were identified with regards to the upgrading process. It was concluded that the information obtained from this study would lead to significant improvement in process performance and operations, and in reducing the capital cost for building new upgraders and refineries. The simulation results provided valuable information for increasing the marketability of bitumen, reducing greenhouse gas emissions and other environmental impacts associated with bitumen upgrading and refining. tabs., figs.

  11. OLT-centralized sampling frequency offset compensation scheme for OFDM-PON.

    Science.gov (United States)

    Chen, Ming; Zhou, Hui; Zheng, Zhiwei; Deng, Rui; Chen, Qinghui; Peng, Miao; Liu, Cuiwei; He, Jing; Chen, Lin; Tang, Xionggui

    2017-08-07

    We propose an optical line terminal (OLT)-centralized sampling frequency offset (SFO) compensation scheme for adaptively-modulated OFDM-PON systems. By using the proposed SFO scheme, the phase rotation and inter-symbol interference (ISI) caused by SFOs between OLT and multiple optical network units (ONUs) can be centrally compensated in the OLT, which reduces the complexity of ONUs. Firstly, the optimal fast Fourier transform (FFT) size is identified in the intensity-modulated and direct-detection (IMDD) OFDM system in the presence of SFO. Then, the proposed SFO compensation scheme including phase rotation modulation (PRM) and length-adaptive OFDM frame has been experimentally demonstrated in the downlink transmission of an adaptively modulated optical OFDM with the optimal FFT size. The experimental results show that up to ± 300 ppm SFO can be successfully compensated without introducing any receiver performance penalties.

  12. An optimal probabilistic multiple-access scheme for cognitive radios

    KAUST Repository

    Hamza, Doha R.; Aïssa, Sonia

    2012-01-01

    We study a time-slotted multiple-access system with a primary user (PU) and a secondary user (SU) sharing the same channel resource. The SU senses the channel at the beginning of the slot. If found free, it transmits with probability 1. If busy, it transmits with a certain access probability that is a function of its queue length and whether it has a new packet arrival. Both users, i.e., the PU and the SU, transmit with a fixed transmission rate by employing a truncated channel inversion power control scheme. We consider the case of erroneous sensing. The goal of the SU is to optimize its transmission scheduling policy to minimize its queueing delay under constraints on its average transmit power and the maximum tolerable primary outage probability caused by the miss detection of the PU. We consider two schemes regarding the secondary's reaction to transmission errors. Under the so-called delay-sensitive (DS) scheme, the packet received in error is removed from the queue to minimize delay, whereas under the delay-tolerant (DT) scheme, the said packet is kept in the buffer and is retransmitted until correct reception. Using the latter scheme, there is a probability of buffer loss that is also constrained to be lower than a certain specified value. We also consider the case when the PU maintains an infinite buffer to store its packets. In the latter case, we modify the SU access scheme to guarantee the stability of the PU queue. We show that the performance significantly changes if the realistic situation of a primary queue is considered. In all cases, although the delay minimization problem is nonconvex, we show that the access policies can be efficiently obtained using linear programming and grid search over one or two parameters. © 1967-2012 IEEE.

  13. An optimal probabilistic multiple-access scheme for cognitive radios

    KAUST Repository

    Hamza, Doha R.

    2012-09-01

    We study a time-slotted multiple-access system with a primary user (PU) and a secondary user (SU) sharing the same channel resource. The SU senses the channel at the beginning of the slot. If found free, it transmits with probability 1. If busy, it transmits with a certain access probability that is a function of its queue length and whether it has a new packet arrival. Both users, i.e., the PU and the SU, transmit with a fixed transmission rate by employing a truncated channel inversion power control scheme. We consider the case of erroneous sensing. The goal of the SU is to optimize its transmission scheduling policy to minimize its queueing delay under constraints on its average transmit power and the maximum tolerable primary outage probability caused by the miss detection of the PU. We consider two schemes regarding the secondary's reaction to transmission errors. Under the so-called delay-sensitive (DS) scheme, the packet received in error is removed from the queue to minimize delay, whereas under the delay-tolerant (DT) scheme, the said packet is kept in the buffer and is retransmitted until correct reception. Using the latter scheme, there is a probability of buffer loss that is also constrained to be lower than a certain specified value. We also consider the case when the PU maintains an infinite buffer to store its packets. In the latter case, we modify the SU access scheme to guarantee the stability of the PU queue. We show that the performance significantly changes if the realistic situation of a primary queue is considered. In all cases, although the delay minimization problem is nonconvex, we show that the access policies can be efficiently obtained using linear programming and grid search over one or two parameters. © 1967-2012 IEEE.

  14. Planning Framework for Mesolevel Optimization of Urban Runoff Control Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Qianqian; Blohm, Andrew; Liu, Bo

    2017-04-01

    A planning framework is developed to optimize runoff control schemes at scales relevant for regional planning at an early stage. The framework employs less sophisticated modeling approaches to allow a practical application in developing regions with limited data sources and computing capability. The methodology contains three interrelated modules: (1) the geographic information system (GIS)-based hydrological module, which aims at assessing local hydrological constraints and potential for runoff control according to regional land-use descriptions; (2) the grading module, which is built upon the method of fuzzy comprehensive evaluation and is used to establish a priority ranking system to assist the allocation of runoff control targets at the subdivision level; and (3) the genetic algorithm-based optimization module, which is included to derive Pareto-based optimal solutions for mesolevel allocation with multiple competing objectives. The optimization approach describes the trade-off between different allocation plans and simultaneously ensures that all allocation schemes satisfy the minimum requirement on runoff control. Our results highlight the importance of considering the mesolevel allocation strategy in addition to measures at macrolevels and microlevels in urban runoff management. (C) 2016 American Society of Civil Engineers.

  15. Study on a new meteorological sampling scheme developed for the OSCAAR code system

    International Nuclear Information System (INIS)

    Liu Xinhe; Tomita, Kenichi; Homma, Toshimitsu

    2002-03-01

    One important step in Level-3 Probabilistic Safety Assessment is meteorological sequence sampling. Previous studies on this step were mainly related to code systems using the straight-line plume model, and more effort is needed for those using the trajectory puff model, such as the OSCAAR code system. This report describes the development of a new meteorological sampling scheme for the OSCAAR code system that explicitly considers population distribution. The principles set for the development of this new sampling scheme include completeness, appropriate stratification, optimum allocation, practicability and so on. In this report, the procedures of the new sampling scheme and its application are discussed. The calculation results illustrate that, although it is quite difficult to optimize the stratification of meteorological sequences based on a few environmental parameters, the new scheme does gather the most adverse conditions into a single subset of meteorological sequences. The size of this subset may be as small as a few dozen, so that the tail of a complementary cumulative distribution function can remain relatively stable in different trials of the probabilistic consequence assessment code. (author)

  16. The optimal sampling of outsourcing product

    International Nuclear Information System (INIS)

    Yang Chao; Pei Jiacheng

    2014-01-01

    In order to improve quality and reduce cost, c = 0 sampling has been introduced for the inspection of outsourced product. According to the current quality level (p = 0.4%), the optimal sampling plan was confirmed to be: Ac = 0; if N ≤ 3000, n = 55; if 3001 ≤ N ≤ 10000, n = 86; if N ≥ 10001, n = 108. Through analysis of the OC curve, we came to the conclusion that when N ≤ 3000 the protective ability of the optimal sampling plan for product quality is stronger than that of the current sampling. For the same 'consumer risk', the product quality under the optimal sampling plan is superior to that under the current sampling. (authors)
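
    A small sketch of the plan quoted above is given below: the lot-size-to-sample-size lookup comes directly from the abstract, while the operating-characteristic (OC) curve uses the usual binomial approximation Pa(p) = (1 - p)^n for a c = 0 plan (the exact calculation for finite lots would use the hypergeometric distribution).

```python
# c = 0 (accept-on-zero) sampling plan from the abstract, with its approximate OC curve.
def sample_size(lot_size: int) -> int:
    """Sample size n for the quoted plan (acceptance number Ac = 0)."""
    if lot_size <= 3000:
        return 55
    if lot_size <= 10000:
        return 86
    return 108

def prob_accept(p: float, n: int) -> float:
    """Binomial-approximation OC curve: the lot is accepted only if all n items conform."""
    return (1.0 - p) ** n

if __name__ == "__main__":
    p_current = 0.004                                   # current quality level, p = 0.4%
    for lot in (2000, 5000, 20000):
        n = sample_size(lot)
        print(f"N = {lot:>6}: n = {n:>3}, Pa(p = 0.4%) = {prob_accept(p_current, n):.3f}")
```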

  17. A new configurational bias scheme for sampling supramolecular structures

    Energy Technology Data Exchange (ETDEWEB)

    De Gernier, Robin; Mognetti, Bortolo M., E-mail: bmognett@ulb.ac.be [Center for Nonlinear Phenomena and Complex Systems, Université Libre de Bruxelles, Code Postal 231, Campus Plaine, B-1050 Brussels (Belgium); Curk, Tine [Department of Chemistry, University of Cambridge, Cambridge CB2 1EW (United Kingdom); Dubacheva, Galina V. [Biosurfaces Unit, CIC biomaGUNE, Paseo Miramon 182, 20009 Donostia - San Sebastian (Spain); Richter, Ralf P. [Biosurfaces Unit, CIC biomaGUNE, Paseo Miramon 182, 20009 Donostia - San Sebastian (Spain); Université Grenoble Alpes, DCM, 38000 Grenoble (France); CNRS, DCM, 38000 Grenoble (France); Max Planck Institute for Intelligent Systems, 70569 Stuttgart (Germany)

    2014-12-28

    We present a new simulation scheme which allows an efficient sampling of reconfigurable supramolecular structures made of polymeric constructs functionalized by reactive binding sites. The algorithm is based on the configurational bias scheme of Siepmann and Frenkel and is powered by the possibility of changing the topology of the supramolecular network by a non-local Monte Carlo algorithm. Such a plan is accomplished by a multi-scale modelling that merges coarse-grained simulations, describing the typical polymer conformations, with experimental results accounting for free energy terms involved in the reactions of the active sites. We test the new algorithm for a system of DNA coated colloids for which we compute the hybridisation free energy cost associated to the binding of tethered single stranded DNAs terminated by short sequences of complementary nucleotides. In order to demonstrate the versatility of our method, we also consider polymers functionalized by receptors that bind a surface decorated by ligands. In particular, we compute the density of states of adsorbed polymers as a function of the number of ligand–receptor complexes formed. Such a quantity can be used to study the conformational properties of adsorbed polymers useful when engineering adsorption with tailored properties. We successfully compare the results with the predictions of a mean field theory. We believe that the proposed method will be a useful tool to investigate supramolecular structures resulting from direct interactions between functionalized polymers for which efficient numerical methodologies of investigation are still lacking.

  18. Optimal Interpolation scheme to generate reference crop evapotranspiration

    Science.gov (United States)

    Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco

    2018-05-01

    We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, the forcing meteorological variables, and their respective error variances for the Iberian Peninsula over the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically based climate model. To compute ETo we used five OI schemes to generate grids for the five observed climate variables needed to compute ETo with the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids is less sensitive to variations in the density and distribution of the observational network than that of grids generated by other interpolation methods. This is because our implementation of the OI method uses a physically based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions and provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network substantially reduce the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between quantity and quality of observations.
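
    For context, one step of an Optimal Interpolation analysis has the standard form x_a = x_b + K (y - H x_b) with gain K = B H^T (H B H^T + R)^{-1}; the sketch below applies it on a 1-D grid with an assumed exponential background-error covariance. All numbers, the covariance model and the grid are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal Optimal Interpolation (OI) analysis: blend a background (prior) field with
# point observations using the gain K = B H^T (H B H^T + R)^-1.
import numpy as np

grid_x = np.linspace(0.0, 100.0, 21)        # 1-D analysis grid (km)
x_b = 15.0 + 0.05 * grid_x                  # background field (e.g., temperature)
obs_x = np.array([20.0, 55.0, 80.0])        # observation locations (km)
y = np.array([17.5, 16.0, 20.5])            # observed values

def exp_cov(a, b, sigma2=1.0, L=25.0):
    """Exponential background-error covariance between two sets of locations."""
    return sigma2 * np.exp(-np.abs(a[:, None] - b[None, :]) / L)

B_go = exp_cov(grid_x, obs_x)               # cov(grid, obs)  ~ B H^T
B_oo = exp_cov(obs_x, obs_x)                # cov(obs, obs)   ~ H B H^T
R = 0.2 * np.eye(len(obs_x))                # observation-error covariance

H_xb = np.interp(obs_x, grid_x, x_b)        # background interpolated to the obs points

K = B_go @ np.linalg.inv(B_oo + R)          # OI gain
x_a = x_b + K @ (y - H_xb)                  # analysis field

print("analysis values at the grid points:", np.round(x_a, 2))
```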

  19. Optimal Retrofit Scheme for Highway Network under Seismic Hazards

    Directory of Open Access Journals (Sweden)

    Yongxi Huang

    2014-06-01

    Full Text Available Many older highway bridges in the United States (US) are inadequate for seismic loads and could be severely damaged or collapse in a relatively small earthquake. According to the most recent American Society of Civil Engineers' infrastructure report card, one-third of the bridges in the US are rated as structurally deficient, and many of these structurally deficient bridges are located in seismic zones. To improve this situation, at-risk bridges must be identified and evaluated, and effective retrofitting programs should be in place to reduce their seismic vulnerabilities. In this study, a new retrofit strategy decision scheme for highway bridges under seismic hazards is developed, which seamlessly integrates the scenario-based seismic analysis of bridges and the traffic network into the proposed optimization modeling framework. A full spectrum of bridge retrofit strategies is considered based on explicit structural assessment for each seismic damage state. As an empirical case study, the proposed retrofit strategy decision scheme is used to evaluate the bridge network in one of the active seismic zones in the US, Charleston, South Carolina. The developed modeling framework, on average, will help increase network throughput traffic capacity by 45% with a cost increase of only $15 million for the Mw 5.5 event and increase the capacity fourfold at a cost of only $32 million for the Mw 7.0 event.

  20. Prospective and retrospective spatial sampling scheme to characterize geochemicals in a mine tailings area

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-07-01

    Full Text Available This study demonstrates that designing sampling schemes using simulated annealing results in much better selection of samples from an existing scheme in terms of prediction accuracy. The presentation to the SASA Eastern Cape Chapter as an invited...

  1. Minimizing transient influence in WHPA delineation: An optimization approach for optimal pumping rate schemes

    Science.gov (United States)

    Rodriguez-Pretelin, A.; Nowak, W.

    2017-12-01

    For most groundwater protection management programs, Wellhead Protection Areas (WHPAs) have served as the primary protection measure. In their delineation, the influence of time-varying groundwater flow conditions is often underestimated because steady-state assumptions are commonly made. However, it has been demonstrated that temporal variations lead to significant changes in the required size and shape of WHPAs. Apart from natural transient groundwater drivers (e.g., changes in the regional angle of flow direction and seasonal natural groundwater recharge), anthropogenic causes such as transient pumping rates are among the most influential factors that require larger WHPAs. We hypothesize that WHPA programs that integrate adaptive and optimized pumping-injection management schemes can counter transient effects and thus reduce the additional areal demand in well protection under transient conditions. The main goal of this study is to present a novel management framework that optimizes pumping schemes dynamically in order to minimize the impact triggered by transient conditions in WHPA delineation. For optimizing pumping schemes, we consider three objectives: 1) to minimize the risk of pumping water from outside a given WHPA, 2) to maximize the groundwater supply, and 3) to minimize the involved operating costs. We solve transient groundwater flow with an available transient groundwater model and Lagrangian particle tracking. The optimization problem is formulated as a dynamic programming problem. Two different optimization approaches are explored: the first aims for single-objective optimization under objective (1) only; the second performs multiobjective optimization under all three objectives, where compromise pumping rates are selected from the current Pareto front. Finally, we look for WHPA outlines that are as small as possible, yet allow the optimization problem to find the most suitable solutions.

  2. Optimization of reliability centered predictive maintenance scheme for inertial navigation system

    International Nuclear Information System (INIS)

    Jiang, Xiuhong; Duan, Fuhai; Tian, Heng; Wei, Xuedong

    2015-01-01

    The goal of this study is to propose a reliability-centered predictive maintenance scheme for a complex-structure Inertial Navigation System (INS) with several redundant components. GO Methodology is applied to build the INS reliability analysis model, the GO chart. Component Remaining Useful Life (RUL) and system reliability are updated dynamically based on the combination of the components' lifetime distribution functions, stress samples, and the system GO chart. Considering the redundant design in the INS, maintenance time is based not only on the components' RUL, but also (and mainly) on when system reliability fails to meet the set threshold. The definition of component maintenance priority balances three factors: component importance to the system, risk degree, and detection difficulty. A Maintenance Priority Number (MPN) is introduced, which provides quantitative maintenance priority results for all components. A maintenance unit-time cost model is built based on the components' MPN, the components' RUL predictive model and the maintenance intervals for the optimization of the maintenance scope. The proposed scheme can serve as a reference for INS maintenance. Finally, three numerical examples prove that the proposed predictive maintenance scheme is feasible and effective. - Highlights: • A dynamic PdM with a rolling horizon is proposed for INS with redundant components. • GO Methodology is applied to build the system reliability analysis model. • A concept of MPN is proposed to quantify the maintenance sequence of components. • An optimization model is built to select the optimal group of maintenance components. • The optimization goal is minimizing the cost of maintaining system reliability
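
    The Maintenance Priority Number described above combines three rankings into one score; by analogy with the FMEA risk priority number, a multiplicative combination on 1-10 scales is a natural choice. The scoring scale, the product form and the component names below are my assumptions for illustration, not the paper's exact definition.

```python
# Illustrative Maintenance Priority Number (MPN): combine the three factors named
# in the abstract (importance to the system, risk degree, detection difficulty).
# The 1-10 scales and the product form mirror the FMEA risk priority number and
# are assumptions, not the paper's exact formula.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    importance: int       # 1 (negligible) .. 10 (critical to system function)
    risk: int             # 1 (unlikely/benign failure) .. 10 (likely/severe failure)
    detectability: int    # 1 (easy to detect) .. 10 (very hard to detect)

    @property
    def mpn(self) -> int:
        return self.importance * self.risk * self.detectability

components = [
    Component("gyroscope A", importance=9, risk=6, detectability=7),
    Component("accelerometer B", importance=7, risk=5, detectability=4),
    Component("power module", importance=8, risk=3, detectability=2),
]

# Higher MPN -> maintain earlier.
for c in sorted(components, key=lambda c: c.mpn, reverse=True):
    print(f"{c.name:16s} MPN = {c.mpn}")
```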

  3. Estimates and sampling schemes for the instrumentation of accountability systems

    International Nuclear Information System (INIS)

    Jewell, W.S.; Kwiatkowski, J.W.

    1976-10-01

    The problem of estimation of a physical quantity from a set of measurements is considered, where the measurements are made on samples with a hierarchical error structure, and where within-groups error variances may vary from group to group at each level of the structure; minimum mean squared-error estimators are developed, and the case where the physical quantity is a random variable with known prior mean and variance is included. Estimators for the error variances are also given, and optimization of experimental design is considered.
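
    For the simplest case mentioned (a single level of independent measurements and a known prior mean and variance), the minimum mean squared-error estimator reduces to the familiar precision-weighted combination shown below; this is standard estimation theory offered for orientation, not a formula copied from the report.

```latex
% Precision-weighted (minimum-MSE) combination of a prior N(mu_0, sigma_0^2)
% with independent measurements x_i having error variances sigma_i^2.
\hat{\mu} \;=\;
\frac{\dfrac{\mu_0}{\sigma_0^{2}} + \sum_{i=1}^{n} \dfrac{x_i}{\sigma_i^{2}}}
     {\dfrac{1}{\sigma_0^{2}} + \sum_{i=1}^{n} \dfrac{1}{\sigma_i^{2}}},
\qquad
\operatorname{Var}(\hat{\mu}) \;=\;
\left( \frac{1}{\sigma_0^{2}} + \sum_{i=1}^{n} \frac{1}{\sigma_i^{2}} \right)^{-1}.
```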

  4. Axially perpendicular offset Raman scheme for reproducible measurement of housed samples in a noncircular container under variation of container orientation.

    Science.gov (United States)

    Duy, Pham K; Chang, Kyeol; Sriphong, Lawan; Chung, Hoeil

    2015-03-17

    An axially perpendicular offset (APO) scheme that is able to directly acquire reproducible Raman spectra of samples contained in an oval container under variation of container orientation has been demonstrated. This scheme utilized an axially perpendicular geometry between the laser illumination and the Raman photon detection, namely, irradiation through a sidewall of the container and gathering of the Raman photon just beneath the container. In the case of either backscattering or transmission measurements, Raman sampling volumes for an internal sample vary when the orientation of an oval container changes; therefore, the Raman intensities of acquired spectra are inconsistent. The generated Raman photons traverse the same bottom of the container in the APO scheme; the Raman sampling volumes can be relatively more consistent under the same situation. For evaluation, the backscattering, transmission, and APO schemes were simultaneously employed to measure alcohol gel samples contained in an oval polypropylene container at five different orientations and then the accuracies of the determination of the alcohol concentrations were compared. The APO scheme provided the most reproducible spectra, yielding the best accuracy when the axial offset distance was 10 mm. Monte Carlo simulations were performed to study the characteristics of photon propagation in the APO scheme and to explain the origin of the optimal offset distance that was observed. In addition, the utility of the APO scheme was further demonstrated by analyzing samples in a circular glass container.

  5. Optimized spectroscopic scheme for enhanced precision CO measurements with applications to urban source attribution

    Science.gov (United States)

    Nottrott, A.; Hoffnagle, J.; Farinas, A.; Rella, C.

    2014-12-01

    Carbon monoxide (CO) is an urban pollutant generated by internal combustion engines which contributes to the formation of ground level ozone (smog). CO is also an excellent tracer for emissions from mobile combustion sources. In this work we present an optimized spectroscopic sampling scheme that enables enhanced precision CO measurements. The scheme was implemented on the Picarro G2401 Cavity Ring-Down Spectroscopy (CRDS) analyzer which measures CO2, CO, CH4 and H2O at 0.2 Hz. The optimized scheme improved the raw precision of CO measurements by 40% from 5 ppb to 3 ppb. Correlations of measured CO2, CO, CH4 and H2O from an urban tower were partitioned by wind direction and combined with a concentration footprint model for source attribution. The application of a concentration footprint for source attribution has several advantages. The upwind extent of the concentration footprint for a given sensor is much larger than the flux footprint. Measurements of mean concentration at the sensor location can be used to estimate source strength from a concentration footprint, while measurements of the vertical concentration flux are necessary to determine source strength from the flux footprint. Direct measurement of vertical concentration flux requires high frequency temporal sampling and increases the cost and complexity of the measurement system.

  6. Adaptive multi-objective Optimization scheme for cognitive radio resource management

    KAUST Repository

    Alqerm, Ismail; Shihada, Basem

    2014-01-01

    configuration by exploiting optimization and machine learning techniques. In this paper, we propose an Adaptive Multi-objective Optimization Scheme (AMOS) for cognitive radio resource management to improve spectrum operation and network performance

  7. Optimized variational analysis scheme of single Doppler radar wind data

    Science.gov (United States)

    Sasaki, Yoshi K.; Allen, Steve; Mizuno, Koki; Whitehead, Victor; Wilk, Kenneth E.

    1989-01-01

    A computer scheme for extracting singularities has been developed and applied to single Doppler radar wind data. The scheme is planned for use in real-time wind and singularity analysis and forecasting. The method, known as Doppler Operational Variational Extraction of Singularities, is outlined, focusing on the principle of local symmetry. Results are presented from the application of the scheme to a storm-generated gust front in Oklahoma on May 28, 1987.

  8. Optimal design of a hybridization scheme with a fuel cell using genetic optimization

    Science.gov (United States)

    Rodriguez, Marco A.

    The fuel cell is one of the most dependable "green power" technologies, readily available for immediate application. It enables direct conversion of hydrogen and other gases into electric energy without any pollution of the environment. However, efficient power generation is a strictly stationary process that cannot operate under a dynamic environment. Consequently, the fuel cell becomes practical only within a specially designed hybridization scheme, capable of power storage and power management functions. The resulting technology can be utilized to its full potential only when both the fuel cell element and the entire hybridization scheme are optimally designed. Design optimization in engineering is among the most complex computational tasks due to its multidimensionality, nonlinearity, discontinuity and presence of constraints in the underlying optimization problem. This research aims at the optimal utilization of fuel cell technology through the use of genetic optimization and advanced computing. This study implements genetic optimization in the definition of optimum hybridization rules for a PEM fuel cell/supercapacitor power system. PEM fuel cells exhibit high energy density, but they are not intended for pulsating power draw applications. They work better in steady-state operation and thus are often hybridized. In a hybrid system, the fuel cell provides power during steady-state operation while capacitors or batteries augment the power of the fuel cell during power surges. Capacitors and batteries can also be recharged when the motor is acting as a generator. Making analogies to driving cycles, three hybrid system operating modes are investigated: 'Flat' mode, 'Uphill' mode, and 'Downhill' mode. In the process of discovering the switching rules for these three modes, we also generate a model of a 30W PEM fuel cell. This study also proposes the optimum design of a 30W PEM fuel cell. The PEM fuel cell model and the hybridization switching rules are postulated

  9. Optimization of refueling-shuffling scheme in PWR core by random search strategy

    International Nuclear Information System (INIS)

    Wu Yuan

    1991-11-01

    A random method for simulating optimization of refueling management in a pressurized water reactor (PWR) core is described. The main purpose of the optimization is to select the 'best' refueling arrangement scheme which would produce maximum economic benefit under certain imposed conditions. To fulfill this goal, an effective optimization strategy, a two-stage random search method, was developed. First, the search is performed in a manner similar to the stratified sampling technique, and a local optimum can be reached by comparison of successive results. Then further random experiments are carried out across different strata to try to find the global optimum. In general, the method can be used as a practical tool for conventional fuel management schemes, and it can also be used in studies on the optimization of Low-Leakage fuel management. Some calculations were done for a typical PWR core on a CYBER-180/830 computer. The results show that the proposed method can obtain a satisfactory approximation at reasonably low computational cost
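
    As a rough illustration of the two-stage idea only (a toy continuous objective and made-up strata, not the reactor-physics model or the original code), the sketch below samples within each stratum first and then explores across strata from the best point found:

      import random

      def two_stage_search(objective, strata, n1=100, n2=300, seed=1):
          # Stage 1: uniform sampling inside each stratum, keep the best point found.
          rng = random.Random(seed)
          best_x, best_f = None, float("inf")
          for low, high in strata:                      # each stratum is a box [low, high]^3
              for _ in range(n1):
                  x = [rng.uniform(low, high) for _ in range(3)]
                  f = objective(x)
                  if f < best_f:
                      best_x, best_f = x, f
          # Stage 2: random experiments across strata, perturbing the incumbent
          # with draws from the union of all strata to look for a global optimum.
          lo = min(s[0] for s in strata); hi = max(s[1] for s in strata)
          for _ in range(n2):
              x = list(best_x)
              x[rng.randrange(3)] = rng.uniform(lo, hi)
              f = objective(x)
              if f < best_f:
                  best_x, best_f = x, f
          return best_x, best_f

      # Hypothetical smooth stand-in for the negative "economic benefit" of an arrangement.
      toy = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2 + (x[2] - 0.5) ** 2
      print(two_stage_search(toy, strata=[(-3.0, 0.0), (0.0, 3.0)]))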

  10. Near-optimal labeling schemes for nearest common ancestors

    DEFF Research Database (Denmark)

    Alstrup, Stephen; Bistrup Halvorsen, Esben; Larsen, Kasper Green

    2014-01-01

    and Korman (STOC'10) established that labels in ancestor labeling schemes have size log n + Θ(log log n), our new lower bound separates ancestor and NCA labeling schemes. Our upper bound improves the 10 log n upper bound by Alstrup, Gavoille, Kaplan and Rauhe (TOCS'04), and our theoretical result even...

  11. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as in the related parameter estimation. However, few studies give consideration to the issue of optimal sampling-time selection for parameter estimation. Time-course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of the parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and getting stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
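
    A minimal illustration of choosing sampling times to reduce parameter variance, using a D-optimal (Fisher-information) criterion on a toy exponential-decay model rather than the paper's quantum-inspired evolutionary algorithm; the model, noise level, candidate grid and budget below are all assumptions:

      import numpy as np
      from itertools import combinations

      # Toy model y(t) = A * exp(-k * t); parameters theta = (A, k).
      def sensitivities(t, A=1.0, k=0.5):
          # Partial derivatives of the model output w.r.t. A and k at time t.
          return np.array([np.exp(-k * t), -A * t * np.exp(-k * t)])

      def log_det_fim(times, sigma=0.05):
          # Fisher information for i.i.d. Gaussian noise: sum of s s^T / sigma^2.
          F = sum(np.outer(sensitivities(t), sensitivities(t)) for t in times) / sigma ** 2
          sign, logdet = np.linalg.slogdet(F)
          return logdet if sign > 0 else -np.inf

      candidate_grid = np.linspace(0.5, 10.0, 20)      # feasible sampling instants
      n_points = 4                                     # experimental budget
      best = max(combinations(candidate_grid, n_points), key=log_det_fim)
      print("D-optimal sampling times:", np.round(best, 2))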

  12. A hybrid iterative scheme for optimal control problems governed by ...

    African Journals Online (AJOL)

    MRT

    KEY WORDS: Optimal control problem; Fredholm integral equation; ... control problems governed by Fredholm integral and integro-differential equations is given in (Brunner and Yan, ..... The exact optimal trajectory and control functions are. 2.

  13. Binary cuckoo search based optimal PMU placement scheme for ...

    African Journals Online (AJOL)

    without including zero-injection effect, an Optimal PMU Placement strategy considering ..... in Indian power grid — A case study, Frontiers in Energy, Vol. ... optimization approach, Proceedings: International Conference on Intelligent Systems ...

  14. A simple and optimal ancestry labeling scheme for trees

    DEFF Research Database (Denmark)

    Dahlgaard, Søren; Knudsen, Mathias Bæk Tejs; Rotbart, Noy Galil

    2015-01-01

    We present a lg n + 2 lg lg n + 3 ancestry labeling scheme for trees. The problem was first presented by Kannan et al. [STOC '88] along with a simple 2 lg n solution. Motivated by applications to XML files, the label size was improved incrementally over the course of more than 20 years by a series...

  15. Optimization Route of Food Logistics Distribution Based on Genetic and Graph Cluster Scheme Algorithm

    OpenAIRE

    Jing Chen

    2015-01-01

    This study takes the concept of food logistics distribution as its starting point. By means of the optimization objective for food logistics distribution routes, an analysis of the route optimization model, and an interpretation of the genetic algorithm, it discusses the optimization of food logistics distribution routes based on the genetic and cluster scheme algorithm.

  16. Control charts for location based on different sampling schemes

    NARCIS (Netherlands)

    Mehmood, R.; Riaz, M.; Does, R.J.M.M.

    2013-01-01

    Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature for the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set

  17. Control Methods Utilizing Energy Optimizing Schemes in Refrigeration Systems

    DEFF Research Database (Denmark)

    Larsen, L.S; Thybo, C.; Stoustrup, Jakob

    2003-01-01

    The potential energy savings in refrigeration systems using energy-optimal control have been proved to be substantial. This, however, requires an intelligent control that drives the refrigeration system towards the energy-optimal state. This paper proposes an approach for a control which drives the condenser pressure towards an optimal state. The objective is to present a feasible method that can be used for energy-optimizing control. A simulation model of a simple refrigeration system is used as the basis for testing the control method.

  18. Laplace-Fourier-domain dispersion analysis of an average derivative optimal scheme for scalar-wave equation

    Science.gov (United States)

    Chen, Jing-Bo

    2014-06-01

    By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from the original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that within the relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments demonstrate the theoretical analysis.

  19. Optimal calculational schemes for solving multigroup photon transport problem

    International Nuclear Information System (INIS)

    Dubinin, A.A.; Kurachenko, Yu.A.

    1987-01-01

    A scheme for a complex algorithm for solving the multigroup equation of radiation transport is suggested. The algorithm is based on the method of successive collisions, the method of forward scattering and the spherical harmonics method, and is realized in the FORAP program (FORTRAN, BESM-6 computer). As an example, the results of calculating reactor photon transport in water are presented. The algorithm, suitably modified, may also be used for solving neutron transport problems

  20. Optimal spatial sampling scheme to characterize mine tailings

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-08-01

    Full Text Available The location and the covariates as external drift were used to estimate the heavy metal concentration, E[Z(x)] = b0 + b1·xu + b2·xv + b3·GOE(x) + b4·JAR(x) + b5·FER(x) + b6·HEM(x) + b7·KAO(x) + b8·COP(x), namely, a first... E[Z(x)] = b0 + Σ_{i=1}^{n} bi·yi(x) = Σ_{i=0}^{n} bi·yi(x), where y0(x) = 1. The method of merging both sources of information uses {yi(x)} as an external drift function for the estimation of Z(x). The drift of Z(x) is defined externally through...

  1. Optimization of reference library used in content-based medical image retrieval scheme

    International Nuclear Information System (INIS)

    Park, Sang Cheol; Sukthankar, Rahul; Mummert, Lily; Satyanarayanan, Mahadev; Zheng Bin

    2007-01-01

    Building an optimal image reference library is a critical step in developing the interactive computer-aided detection and diagnosis (I-CAD) systems of medical images using content-based image retrieval (CBIR) schemes. In this study, the authors conducted two experiments to investigate (1) the relationship between I-CAD performance and size of reference library and (2) a new reference selection strategy to optimize the library and improve I-CAD performance. The authors assembled a reference library that includes 3153 regions of interest (ROI) depicting either malignant masses (1592) or CAD-cued false-positive regions (1561) and an independent testing data set including 200 masses and 200 false-positive regions. A CBIR scheme using a distance-weighted K-nearest neighbor algorithm is applied to retrieve references that are considered similar to the testing sample from the library. The area under the receiver operating characteristic curve (Az) is used as an index to evaluate the I-CAD performance. In the first experiment, the authors systematically increased reference library size and tested I-CAD performance. The result indicates that scheme performance improves initially from Az = 0.715 to 0.874 and then plateaus when the library size reaches approximately half of its maximum capacity. In the second experiment, based on the hypothesis that a ROI should be removed if it performs poorly compared to a group of similar ROIs in a large and diverse reference library, the authors applied a new strategy to identify 'poorly effective' references. By removing 174 identified ROIs from the reference library, I-CAD performance significantly increases to Az = 0.914 (p<0.01). The study demonstrates that increasing reference library size and removing poorly effective references can significantly improve I-CAD performance
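
    A compact sketch of the distance-weighted K-nearest-neighbour scoring step described above; the feature vectors, weighting kernel and parameters below are illustrative assumptions, not the authors' exact implementation:

      import numpy as np

      def knn_malignancy_score(query, library_feats, library_labels, k=15, sigma=0.5):
          # library_labels: 1 for a malignant mass ROI, 0 for a CAD-cued false positive.
          d = np.linalg.norm(library_feats - query, axis=1)          # Euclidean distances
          idx = np.argsort(d)[:k]                                    # k nearest references
          w = np.exp(-d[idx] ** 2 / (2 * sigma ** 2))                # closer ROIs weigh more
          return float(np.sum(w * library_labels[idx]) / np.sum(w)), idx

      # Hypothetical 10-dimensional ROI features for a reference library of 3153 regions.
      rng = np.random.default_rng(0)
      feats = rng.normal(size=(3153, 10))
      labels = rng.integers(0, 2, size=3153)
      score, retrieved = knn_malignancy_score(rng.normal(size=10), feats, labels)
      print(f"likelihood-of-malignancy score: {score:.3f}")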

  2. Energy mesh optimization for multi-level calculation schemes

    International Nuclear Information System (INIS)

    Mosca, P.; Taofiki, A.; Bellier, P.; Prevost, A.

    2011-01-01

    The industrial calculations of third generation nuclear reactors are based on sophisticated strategies of homogenization and collapsing at different spatial and energetic levels. An important issue to ensure the quality of these calculation models is the choice of the collapsing energy mesh. In this work, we show a new approach to generate optimized energy meshes starting from the SHEM 281-group library. The optimization model is applied on 1D cylindrical cells and consists of finding an energy mesh which minimizes the errors between two successive collision probability calculations. The former is realized over the fine SHEM mesh with Livolant-Jeanpierre self-shielded cross sections and the latter is performed with collapsed cross sections over the energy mesh being optimized. The optimization is done by the particle swarm algorithm implemented in the code AEMC and multigroup flux solutions are obtained from standard APOLLO2 solvers. By this new approach, a set of new optimized meshes which encompass from 10 to 50 groups has been defined for PWR and BWR calculations. This set will allow users to adapt the energy detail of the solution to the complexity of the calculation (assembly, multi-assembly, two-dimensional whole core). Some preliminary verifications, in which the accuracy of the new meshes is measured compared to a direct 281-group calculation, show that the 30-group optimized mesh offers a good compromise between simulation time and accuracy for a standard 17 x 17 UO2 assembly with and without control rods. (author)

  3. Optimized low-order explicit Runge-Kutta schemes for high- order spectral difference method

    KAUST Repository

    Parsani, Matteo

    2012-01-01

    Optimal explicit Runge-Kutta (ERK) schemes with large stable step sizes are developed for method-of-lines discretizations based on the spectral difference (SD) spatial discretization on quadrilateral grids. These methods involve many stages and provide the optimal linearly stable time step for a prescribed SD spectrum and the minimum leading truncation error coefficient, while admitting a low-storage implementation. Using a large number of stages, the new ERK schemes lead to efficiency improvements larger than 60% over standard ERK schemes for 4th- and 5th-order spatial discretization.

  4. The same number of optimized parameters scheme for determining intermolecular interaction energies

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Ettenhuber, Patrick; Eriksen, Janus Juul

    2015-01-01

    We propose the Same Number Of Optimized Parameters (SNOOP) scheme as an alternative to the counterpoise method for treating basis set superposition errors in calculations of intermolecular interaction energies. The key point of the SNOOP scheme is to enforce that the number of optimized wave...... as numerically. Numerical results for second-order Møller-Plesset perturbation theory (MP2) and coupled-cluster with single, double, and approximate triple excitations (CCSD(T)) show that the SNOOP scheme in general outperforms the uncorrected and counterpoise approaches. Furthermore, we show that SNOOP...

  5. Optimal Scheme Selection of Agricultural Production Structure Adjustment - Based on DEA Model; Punjab (Pakistan)

    Institute of Scientific and Technical Information of China (English)

    Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan

    2015-01-01

    This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess the comparative efficiency of multiple candidate schemes and, on that basis, to choose the optimal scheme of agricultural production structure adjustment. Based on the results of the DEA model, we analysed the scale advantages of each candidate scheme, examined in depth the underlying reasons why some schemes were not DEA-efficient, and clarified the approach and methodology for improving these candidate plans. Finally, a further method was proposed to rank the schemes and select the optimal one. The research is useful for guiding practice when the adjustment of the agricultural production structure is carried out.
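
    For readers unfamiliar with DEA, the efficiency score of one scheme can be obtained from a small linear programme. The sketch below uses the generic input-oriented CCR envelopment model with made-up input/output data for five candidate schemes; it illustrates the general method only, not the paper's model or data:

      import numpy as np
      from scipy.optimize import linprog

      def ccr_efficiency(X, Y, j0):
          # Input-oriented CCR score of DMU j0. X: inputs (m x n), Y: outputs (s x n).
          m, n = X.shape
          s = Y.shape[0]
          # Variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
          c = np.zeros(n + 1); c[0] = 1.0
          # Inputs:  sum_j lambda_j * x_ij - theta * x_i,j0 <= 0
          A_in = np.hstack([-X[:, [j0]], X])
          # Outputs: -sum_j lambda_j * y_rj <= -y_r,j0
          A_out = np.hstack([np.zeros((s, 1)), -Y])
          res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                        b_ub=np.concatenate([np.zeros(m), -Y[:, j0]]),
                        bounds=[(None, None)] + [(0, None)] * n, method="highs")
          return res.fun

      # Hypothetical schemes: 2 inputs (land, labour) and 1 output (value of produce).
      X = np.array([[100.0, 120.0, 90.0, 110.0, 95.0],
                    [ 20.0,  25.0, 18.0,  30.0, 22.0]])
      Y = np.array([[ 60.0,  65.0, 58.0,  55.0, 64.0]])
      for j in range(5):
          print(f"scheme {j + 1}: efficiency = {ccr_efficiency(X, Y, j):.3f}")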

  6. Sample Adaptive Offset Optimization in HEVC

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2014-11-01

    Full Text Available As the next-generation video coding standard, High Efficiency Video Coding (HEVC) adopted many useful tools to improve coding efficiency. Sample Adaptive Offset (SAO) is a technique to reduce sample distortion by providing offsets to pixels in the in-loop filter. In SAO, pixels in a Largest Coding Unit (LCU) are classified into several categories, and then categories and offsets are assigned based on Rate-Distortion Optimization (RDO) of the reconstructed pixels in the LCU. Pixels in an LCU are operated on by the same SAO process; however, the transform and inverse transform make the distortion of pixels at the Transform Unit (TU) edge larger than the distortion inside the TU, even after deblocking filtering (DF) and SAO. The SAO categories can also be refined, since they are not appropriate in many cases. This paper proposes a TU edge offset mode and a category refinement for SAO in HEVC. Experimental results show that these two optimizations give -0.13 and -0.2 BD-rate gains, respectively, compared with the SAO in HEVC. The proposed algorithm using both optimizations achieves a -0.23 BD-rate gain compared with the SAO in HEVC, which is a 47% increase, with nearly no increase in coding time.
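
    The edge-offset part of SAO can be sketched in a few lines: each pixel is classified against its two neighbours along a chosen direction, and one offset per category is chosen to minimise the squared error against the original block. This is a simplified per-block illustration only, not the HEVC reference implementation or the paper's TU-edge extension:

      import numpy as np

      def edge_category(rec, axis=1):
          # SAO edge-offset categories 1..4 (0 = none); np.roll wrap-around at the block
          # boundary is ignored in this toy version.
          left = np.roll(rec, 1, axis)
          right = np.roll(rec, -1, axis)
          cat = np.zeros_like(rec, dtype=int)
          cat[(rec < left) & (rec < right)] = 1                  # local valley
          cat[((rec < left) & (rec == right)) | ((rec == left) & (rec < right))] = 2
          cat[((rec > left) & (rec == right)) | ((rec == left) & (rec > right))] = 3
          cat[(rec > left) & (rec > right)] = 4                  # local peak
          return cat

      def sao_offsets(orig, rec, axis=1):
          # Least-squares offset per category: mean of (original - reconstructed).
          cat = edge_category(rec, axis)
          return {c: float(np.mean(orig[cat == c] - rec[cat == c]))
                  for c in range(1, 5) if np.any(cat == c)}

      # Hypothetical 8x8 block: original samples and a noisy integer reconstruction.
      rng = np.random.default_rng(1)
      orig = rng.integers(0, 255, (8, 8)).astype(float)
      rec = np.clip(np.rint(orig + rng.normal(0, 3, (8, 8))), 0, 255)
      print(sao_offsets(orig, rec))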

  7. Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging.

    Science.gov (United States)

    Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin

    2018-04-01

    Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling scheme may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated, covering the sampling starting point, sampling sparsity, and sampling uniformity. In the investigation of the influence of the sampling starting point, we further distinguish two cases according to whether the timing sequence missing between the probe injection and the sampling starting time is considered. Results show that the mean value of BP exhibits an obvious growth trend with an increase in the delay of the sampling starting point, and has a strong correlation with the sampling sparsity. The growth trend is much more obvious if the missing timing sequence is discarded. The standard deviation of BP is inversely related to the sampling sparsity, and independent of the sampling uniformity and the delay of the sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results with simpler operations.

  8. Optimum sampling scheme for characterization of mine tailings

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-07-01

    Full Text Available The paper describes a novel method for sampling geochemical data to characterize mine tailings. The authors model the spatial relationships between a multi-element signature and, as covariates, abundance estimates of secondary iron-bearing minerals...

  9. An Optimal Control Scheme to Minimize Loads in Wind Farms

    DEFF Research Database (Denmark)

    Soleimanzadeh, Maryam; Wisniewski, Rafal

    2012-01-01

    This work presents a control algorithm for wind farms that optimizes the power production of the farm and helps to increase the lifetime of wind turbines components. The control algorithm is a centralized approach, and it determines the power reference signals for individual wind turbines...

  10. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
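
    The block-level step can be illustrated in a few lines: if the reconstruction PSNR is modelled as a quadratic in the quantization bit-depth, the stationary point follows from setting the derivative to zero. The coefficients and bit-depth range below are placeholders standing in for values that would come from the training stage described above:

      import numpy as np

      def optimal_bit_depth(a, b, c, b_min=4, b_max=12):
          # PSNR(d) ~= a*d**2 + b*d + c; d* solves dPSNR/dd = 2*a*d + b = 0.
          if a == 0:
              return b_max if b > 0 else b_min
          d_star = -b / (2.0 * a)
          return int(np.clip(round(d_star), b_min, b_max))

      # Hypothetical model coefficients fitted from training blocks.
      print(optimal_bit_depth(a=-0.35, b=6.2, c=18.0))   # -> 9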

  11. Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks.

    Science.gov (United States)

    Robinson, Y Harold; Rajaram, M

    2015-01-01

    Mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade the performance of the network, and multipath selection to improve the network lifetime is a challenging task. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds optimal loop-free paths to solve the link-disjoint path problem in a MANET, and it is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In this scheme, particle swarm optimization (PSO) is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. Optimal loop-free paths can be found using PSO to seek better link-quality nodes in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique.
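
    A bare-bones particle swarm optimizer, shown only to illustrate the velocity/position update loop that the scheme relies on; the inertia and acceleration constants and the toy objective are assumptions, not the EMPSO route-quality measure:

      import numpy as np

      def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, (n_particles, dim))         # positions
          v = np.zeros_like(x)                               # velocities
          pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
          g = pbest[np.argmin(pbest_f)].copy()               # global best
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = x + v
              f = np.apply_along_axis(objective, 1, x)
              improved = f < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], f[improved]
              g = pbest[np.argmin(pbest_f)].copy()
          return g, float(np.min(pbest_f))

      # Toy stand-in for a route-cost function.
      print(pso(lambda z: float(np.sum(z ** 2)), dim=4))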

  12. An Optimization Scheme for Water Pump Control in Smart Fish Farm with Efficient Energy Consumption

    Directory of Open Access Journals (Sweden)

    Israr Ullah

    2018-06-01

    Full Text Available Healthy fish production requires intensive care, and ensuring a stable and healthy production environment inside the farm tank is a challenging task. An Internet of Things (IoT) based automated system that can continuously monitor the fish tanks with optimal resource utilization is highly desirable. Significant cost reduction can be achieved if farm equipment and water pumps are operated only when required, using optimization schemes. In this paper, we present a general system design for smart fish farms. We have developed an optimization scheme for water pump control to maintain the desired water level in the fish tank with efficient energy consumption through appropriate selection of the pumping flow rate and tank filling level. The proposed optimization scheme attempts to achieve a trade-off between pumping duration and flow rate through selection of an optimized water level. A Kalman filter is applied to remove error in the sensor readings. We observed through simulation results that the optimization scheme achieves a significant reduction in energy consumption compared to the two alternative schemes, i.e., pumping with maximum and minimum flow rates. The proposed system can help in collecting data about the farm for long-term analysis and better decision making in the future for efficient resource utilization and overall profit maximization.
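
    The sensor-smoothing step can be sketched with a one-dimensional Kalman filter over noisy water-level readings; the process and measurement noise values and the synthetic data are assumptions for illustration, and the pump-control logic itself is omitted:

      import numpy as np

      def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
          # Scalar Kalman filter for a slowly varying level: x_k = x_{k-1} + process noise.
          x, p, out = x0, p0, []
          for z in measurements:
              p = p + q                        # predict (random-walk level model)
              k = p / (p + r)                  # Kalman gain
              x = x + k * (z - x)              # update with the new sensor reading
              p = (1.0 - k) * p
              out.append(x)
          return np.array(out)

      # Hypothetical noisy readings of a tank level rising from 0.5 m to 0.8 m.
      rng = np.random.default_rng(2)
      true_level = np.linspace(0.5, 0.8, 120)
      readings = true_level + rng.normal(0, 0.2, 120)
      print(kalman_1d(readings)[-5:])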

  13. An adaptive sampling scheme for deep-penetration calculation

    International Nuclear Information System (INIS)

    Wang, Ruihong; Ji, Zhicheng; Pei, Lucheng

    2013-01-01

    The deep-penetration problem has been one of the important and difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, an adaptive Monte Carlo method that uses the emission point as a sampling station for shielding calculation is investigated. The numerical results show that the adaptive method may improve the efficiency of shielding calculations and may, to some degree, overcome the under-estimation problem that easily occurs in deep-penetration calculations

  14. The Optimal Configuration Scheme of the Virtual Power Plant Considering Benefits and Risks of Investors

    Directory of Open Access Journals (Sweden)

    Jingmin Wang

    2017-07-01

    Full Text Available A virtual power plant (VPP) is a special virtual unit that integrates various distributed energy resources (DERs) distributed on the generation and consumption sides. The optimal configuration scheme of the VPP needs to break geographical restrictions to make full use of DERs while considering the uncertainties. First, the components of the DERs and the structure of the VPP are briefly introduced. Next, the cubic exponential smoothing method is adopted to predict the VPP load requirement. Finally, the optimal configuration of the DER capacities inside the VPP is calculated by using portfolio theory and genetic algorithms (GA). The results show that the configuration scheme can optimize the DER capacities considering uncertainties, guaranteeing the economic benefits of investors, and fully utilizing the DERs. Therefore, this paper provides a feasible reference for the optimal configuration scheme of the VPP from the perspective of investors.
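
    One common reading of the "cubic exponential smoothing" forecast is Brown's triple exponential smoothing, sketched below on a synthetic load series; the smoothing constant, horizon and data are placeholders, not values from the paper:

      import numpy as np

      def brown_triple_smoothing(y, alpha=0.3, horizon=3):
          # Brown's triple (cubic) exponential smoothing with an m-step-ahead forecast.
          s1 = s2 = s3 = y[0]
          for v in y:
              s1 = alpha * v + (1 - alpha) * s1
              s2 = alpha * s1 + (1 - alpha) * s2
              s3 = alpha * s2 + (1 - alpha) * s3
          a = 3 * s1 - 3 * s2 + s3
          b = alpha / (2 * (1 - alpha) ** 2) * ((6 - 5 * alpha) * s1
              - 2 * (5 - 4 * alpha) * s2 + (4 - 3 * alpha) * s3)
          c = alpha ** 2 / (1 - alpha) ** 2 * (s1 - 2 * s2 + s3)
          m = np.arange(1, horizon + 1)
          return a + b * m + 0.5 * c * m ** 2

      # Hypothetical daily VPP load requirement (MWh).
      load = np.array([41.0, 42.5, 44.1, 45.0, 46.8, 48.2, 49.9, 51.5])
      print(np.round(brown_triple_smoothing(load), 2))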

  15. A Novel Spectrum Scheduling Scheme with Ant Colony Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Liping Liu

    2018-01-01

    Full Text Available Cognitive radio is a promising technology for improving spectrum utilization, which allows cognitive users to access the licensed spectrum while primary users are absent. In this paper, we design a resource allocation framework based on graph theory for spectrum assignment in cognitive radio networks. The framework takes into account the constraints of interference to primary users and possible collisions among cognitive users. Based on the proposed model, we formulate a system utility function to maximize the system benefit. We then design an improved ant colony optimization algorithm (IACO) from two aspects: first, we introduce a differential evolution (DE) process to accelerate convergence speed through a monitoring mechanism; then we design a variable neighborhood search (VNS) process to prevent the algorithm from falling into local optima. Simulation results demonstrate that the improved algorithm achieves better performance.

  16. Resolution optimization with irregularly sampled Fourier data

    International Nuclear Information System (INIS)

    Ferrara, Matthew; Parker, Jason T; Cheney, Margaret

    2013-01-01

    Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications. (paper)

  17. ANALYSIS OF EXISTING SCHEMES AND THE OPTIMIZING SETTLEMENT CHOIS OF PILES WORK SCHEMES IN CLAY SOILS

    Directory of Open Access Journals (Sweden)

    BOLSHAKOV V. I.

    2016-09-01

    Full Text Available Summary. The existing schemes of pile behaviour in clay soils were considered and analyzed. 1. The Leningrad scientific school, in which the pile bearing capacity is attributed to thixotropic hardening of the clay soil and radial soil compaction around the pile shaft during driving with pile-driving equipment, assumed to persist over the service period. 2. The Odessa scientific school, in which the bearing capacity is attributed to soil heave forming at the pile tip during driving, the formation of compacted zones (a platform) in the plane of the pile tip, and the formation of a gap around the pile shaft as the soil is pushed along with the advancing tip. 3. The assumption that the pile bearing capacity forms through thixotropic soil hardening in time and radial soil compaction around the pile shaft cannot answer the following questions: (1) Why does a gap form around the shaft of driven piles if, by assumption, the soil is radially compacted around the shaft? (2) Why does a depression (deflection) form in the inter-pile space rather than a swelling of the soil mass (as radial compaction would imply)? (3) What produces the calculated soil resistance under the lower end (tip) of the pile, which is about 10 times higher than the calculated soil resistance in the tip plane according to Building Code V.2.1-10.2009? Justified answers to these and other technical and technological questions are provided by the premises of the Odessa scientific school, with additions and developments by the authors

  18. An Optimized Virtual Scheme for Reducing Collisions in MAC Layer

    OpenAIRE

    M. Sivakumar; S. Saravanan

    2015-01-01

    The main function of Medium Access Control (MAC) is to share the channel efficiently between all nodes. In a real-time scenario, there will be a certain amount of bandwidth wastage due to back-off periods: more bandwidth is wasted in the idle state if the back-off period is very high, while collisions may occur if the back-off period is small. So, an optimization is needed for this problem. The main objective of the work is to reduce the delay due to the back-off period, thereby reducing collision and...

  19. How to decide the optimal scheme and the optimal time for construction

    International Nuclear Information System (INIS)

    Gjermundsen, T.; Dalsnes, B.; Jensen, T.

    1991-01-01

    Since hydropower development in Norway began some 105 years ago, the mean annual generation has reached approximately 110 TWh. This means that there is a large potential for uprating and refurbishing (U/R). A project undertaken by the Norwegian Water Resources and Energy Administration (NVE) has identified energy resources of about 10 TWh annual generation obtainable by means of U/R. One problem in harnessing the potential owned by small and medium-sized electricity boards is the lack of simple tools to help make the right decisions. The paper describes a simple model to find the best scheme and the optimal time to start. The principle of present value is used. The main inputs are: production, price, annual costs of maintenance, the remaining lifetime and the social rate of return. The model calculates the present value of U/R/N for different points of time to start U/R/N. In addition, the present value of the existing plant is calculated. Several alternatives can be considered. The best one will be the one which gives the highest present value relative to the value of the existing plant. The internal rate of return is also calculated. To show the sensitivity, a star diagram is presented. The model gives the opportunity to include environmental charges and the value of effect (peak power). (Author)
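
    A toy version of the present-value comparison might look like the sketch below; the cash-flow assumptions (generation, price per kWh, maintenance cost, investment, lifetime, discount rate) are placeholders, not figures from the NVE study, and the comparison against the existing plant is omitted:

      import numpy as np

      def present_value(annual_net_income, lifetime, rate, start_year=0):
          # Discounted value today of a constant net income stream starting at start_year.
          years = np.arange(start_year, start_year + lifetime)
          return float(np.sum(annual_net_income / (1.0 + rate) ** (years + 1)))

      def best_start_year(gen_gwh, price, maint_cost, invest, lifetime, rate, horizon=10):
          # Net annual income of the uprated/refurbished (U/R) plant.
          net = gen_gwh * 1e6 * price - maint_cost
          candidates = {}
          for start in range(horizon):
              pv = present_value(net, lifetime, rate, start) - invest / (1.0 + rate) ** start
              candidates[start] = pv
          return max(candidates, key=candidates.get), candidates

      # Hypothetical scheme: 80 GWh/yr, 0.04 per kWh, 0.5 M/yr O&M, 25 M investment.
      year, pv = best_start_year(gen_gwh=80, price=0.04, maint_cost=5e5,
                                 invest=25e6, lifetime=40, rate=0.07)
      print("optimal start year:", year)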

  20. Evaluation of alternative macroinvertebrate sampling techniques for use in a new tropical freshwater bioassessment scheme

    OpenAIRE

    Isabel Eleanor Moore; Kevin Joseph Murphy

    2015-01-01

    Aim: The study aimed to determine the effectiveness of benthic macroinvertebrate dredge net sampling procedures as an alternative method to kick net sampling in tropical freshwater systems, specifically as an evaluation of sampling methods used in the Zambian Invertebrate Scoring System (ZISS) river bioassessment scheme. Tropical freshwater ecosystems are sometimes dangerous or inaccessible to sampling teams using traditional kick-sampling methods, so identifying an alternative procedure that...

  1. A Distributed Intrusion Detection Scheme about Communication Optimization in Smart Grid

    Directory of Open Access Journals (Sweden)

    Yunfa Li

    2013-01-01

    Full Text Available We first propose an efficient communication optimization algorithm in the smart grid. Based on the optimization algorithm, we propose an intrusion detection algorithm to detect malicious data and possible cyberattacks. In this scheme, each node acts independently when it processes communication flows or cybersecurity threats, and neither special hardware nor node cooperation is needed. In order to justify the feasibility and availability of this scheme, a series of experiments have been done. The results show that it is feasible and efficient to detect malicious data and possible cyberattacks with low computation and communication cost.

  2. Nearly optimal measurement schemes in a noisy Mach-Zehnder interferometer with coherent and squeezed vacuum

    Energy Technology Data Exchange (ETDEWEB)

    Gard, Bryan T.; You, Chenglong; Singh, Robinjeet; Lee, Hwang; Corbitt, Thomas R.; Dowling, Jonathan P. [Louisiana State University, Baton Rouge, LA (United States); Mishra, Devendra K. [Louisiana State University, Baton Rouge, LA (United States); V.S. Mehta College of Science, Physics Department, Bharwari, UP (India)

    2017-12-15

    The use of an interferometer to perform ultra-precise parameter estimation under noisy conditions is a challenging task. Here we discuss nearly optimal measurement schemes for a well-known, sensitive input state: squeezed vacuum and coherent light. We find that a single-mode intensity measurement, while the simplest and able to beat the shot-noise limit, is outperformed by other measurement schemes in the low-power regime. However, at high powers, intensity measurement is only outperformed by a small factor. Specifically, we confirm that an optimal measurement choice under lossless conditions is the parity measurement. In addition, we also discuss the performance of several other common measurement schemes when considering photon loss, detector efficiency, phase drift, and thermal photon noise. We conclude that, with noise considerations, homodyne remains near optimal in both the low- and high-power regimes. Surprisingly, some of the remaining investigated measurement schemes, including the previously optimal parity measurement, do not remain even near optimal when noise is introduced. (orig.)

  3. Adaptive multi-objective Optimization scheme for cognitive radio resource management

    KAUST Repository

    Alqerm, Ismail

    2014-12-01

    Cognitive Radio is an intelligent Software Defined Radio that is capable of altering its transmission parameters according to predefined objectives and wireless environment conditions. The cognitive engine is the actuator that performs radio parameter configuration by exploiting optimization and machine learning techniques. In this paper, we propose an Adaptive Multi-objective Optimization Scheme (AMOS) for cognitive radio resource management to improve spectrum operation and network performance. The optimization relies on adapting radio transmission parameters to environment conditions using constrained optimization models, called fitness functions, in an iterative manner. These functions include minimizing power consumption, bit error rate, delay and interference, while maximizing throughput and spectral efficiency. Cross-layer optimization is exploited to access environmental parameters from all TCP/IP stack layers. AMOS uses an adaptive Genetic Algorithm, in terms of its parameters and objective weights, as the vehicle of optimization. The proposed scheme has demonstrated quick response and efficiency in three different scenarios compared to other schemes. In addition, it shows its capability to optimize the performance of the TCP/IP layers as a whole, not only the physical layer.

  4. A numerical scheme for optimal transition paths of stochastic chemical kinetic systems

    International Nuclear Information System (INIS)

    Liu Di

    2008-01-01

    We present a new framework for finding the optimal transition paths of metastable stochastic chemical kinetic systems with large system size. The optimal transition paths are identified to be the most probable paths according to the Large Deviation Theory of stochastic processes. Dynamical equations for the optimal transition paths are derived using the variational principle. A modified Minimum Action Method (MAM) is proposed as a numerical scheme to solve the optimal transition paths. Applications to Gene Regulatory Networks such as the toggle switch model and the Lactose Operon Model in Escherichia coli are presented as numerical examples

  5. Optimal Tradable Credits Scheme and Congestion Pricing with the Efficiency Analysis to Congestion

    Directory of Open Access Journals (Sweden)

    Ge Gao

    2015-01-01

    Full Text Available We allow for three traffic scenarios: the tradable credits scheme, congestion pricing, and no traffic measure. The utility functions of different modes (car, bus, and bicycle) are developed by considering the impact of income on travelers' behavior, with the purpose of analyzing the demand distribution across modes. A social optimization model is built aiming at maximizing social welfare. The optimal tradable credits scheme (distribution of credits, credit charging, and the credit price), congestion pricing fees, bus frequency, and bus fare are obtained by solving the model. Mode choice behavior under the tradable credits scheme is also studied. Numerical examples are presented to demonstrate the model's availability and explore the effects of the three schemes on the traffic system's performance. Results show that congestion pricing would yield more social welfare than the other traffic measures. However, the tradable credits scheme gives travelers more consumer surplus than congestion pricing. Travelers' consumer surplus with congestion pricing is the minimum, which harms travelers' benefits. The tradable credits scheme is considered the best scenario when comparing the efficiency of the three scenarios.

  6. DRO: domain-based route optimization scheme for nested mobile networks

    Directory of Open Access Journals (Sweden)

    Chuang Ming-Chin

    2011-01-01

    Full Text Available Abstract The network mobility (NEMO) basic support protocol is designed to support NEMO management, and to ensure communication continuity between nodes in mobile networks. However, in nested mobile networks, NEMO suffers from the pinball routing problem, which results in long packet transmission delays. To solve the problem, we propose a domain-based route optimization (DRO) scheme that incorporates a domain-based network architecture and ad hoc routing protocols for route optimization. DRO also improves the intra-domain handoff performance, reduces the convergence time during route optimization, and avoids the out-of-sequence packet problem. A detailed performance analysis and simulations were conducted to evaluate the scheme. The results demonstrate that DRO outperforms existing mechanisms in terms of packet transmission delay (i.e., better route optimization), intra-domain handoff latency, convergence time, and packet tunneling overhead.

  7. 'Massfunktionen' as limit conditions of an optimization scheme for the telecobalt therapy

    International Nuclear Information System (INIS)

    Kirsch, M.; Forth, E.; Schumann, E.

    1978-01-01

    The basic ideas of the 'Score-Funktionen-Modell' of Hope and his collaborators are used for the establishment of the first stage of an optimization scheme for telecobalt therapy. The new 'Massfunktionen' for telecobalt therapy are limit conditions for the criterion of the optimum, i.e. the dose distribution in a body section. The 'Massfunktionen' provide an analytic registration of parameters of the dose distribution such as dose homogeneity in the focal region and sparing of the subcutaneous tissues, the radiosensitive organs and the healthy surroundings of the tumor. The functions are derived from the dose conditions in the irradiated body section. At the current stage of development of the optimization scheme, these functions allow one to decide whether an irradiation scheme is acceptable or not. (orig.)

  8. Flexible aluminum tubes and a least square multi-objective non-linear optimization scheme

    International Nuclear Information System (INIS)

    Endelt, Benny; Nielsen, Karl Brian; Olsen, Soeren

    2004-01-01

    The automotive industry currently uses rubber hoses as the media carrier between e.g. the radiator and the engine, and the basic idea is to replace the rubber hoses with flexible aluminum tubes. A good quality is defined through several quality measures; in the current case the key objective is to produce a flexible convolution through optimization of the tool geometry, but the process should also be stable, and the process stability is evaluated through Forming Limit Diagrams. Typically, the defined objectives are conflicting; the optimized configuration therefore represents a trade-off between the individual objectives, in this case flexibility versus process stability. The optimization problem is solved by iteratively minimizing the objective function. A second-order least-squares scheme is used for the approximation of the quadratic model, the change in the design parameters is evaluated through the trust-region scheme, and box constraints are introduced within the trust-region framework. Furthermore, the objective function is minimized by applying the non-monotone scheme, and the trust-region subproblem is solved by applying the Cholesky factorization scheme. An optimal bell-shaped geometry is identified and the design is verified experimentally

  9. Optimized helper data scheme for biometric verification under zero leakage constraint

    NARCIS (Netherlands)

    Groot, de J.A.; Linnartz, J.P.M.G.

    2012-01-01

    In biometric verification, special measures are needed to prevent a dishonest verifier from stealing privacy-sensitive information about the prover from the template database. We introduce an improved version of the zero leakage quantization scheme, which optimizes detection performance in terms of

  10. Unified Importance Sampling Schemes for Efficient Simulation of Outage Capacity over Generalized Fading Channels

    KAUST Repository

    Rached, Nadhir B.; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    The outage capacity (OC) is among the most important performance metrics of communication systems operating over fading channels. Of interest in the present paper is the evaluation of the OC at the output of the Equal Gain Combining (EGC) and the Maximum Ratio Combining (MRC) receivers. In this case, it can be seen that this problem turns out to be that of computing the Cumulative Distribution Function (CDF) for the sum of independent random variables. Since finding a closed-form expression for the CDF of the sum distribution is out of reach for a wide class of commonly used distributions, methods based on Monte Carlo (MC) simulations take pride of place. In order to allow for the estimation of the operating range of small outage probabilities, it is of paramount importance to develop fast and efficient estimation methods, as naive Monte Carlo (MC) simulations would require high computational complexity. In this line, we propose two unified, yet efficient, hazard rate twisting Importance Sampling (IS) based approaches that efficiently estimate the OC of MRC or EGC diversity techniques over generalized independent fading channels. The first estimator is shown to possess the asymptotic optimality criterion and applies for arbitrary fading models, whereas the second one achieves the well-desired bounded relative error property for the majority of the well-known fading variates. Moreover, the second estimator is shown to achieve the asymptotic optimality property under the particular Log-normal environment. Some selected simulation results are finally provided in order to illustrate the substantial computational gain achieved by the proposed IS schemes over naive MC simulations.
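
    As a much-simplified illustration of why importance sampling helps here, the sketch below estimates the small left-tail probability P(X1+...+XN < gamma) for a sum of i.i.d. unit-mean exponentials, using a plain scaled-exponential proposal with an explicit likelihood ratio; this is a generic IS example under toy assumptions, not the paper's hazard-rate-twisting estimators:

      import numpy as np

      def naive_mc(gamma, n=8, samples=10**5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.exponential(1.0, size=(samples, n))
          return float(np.mean(x.sum(axis=1) < gamma))

      def scaled_is(gamma, n=8, samples=10**5, theta=4.0, seed=0):
          # Sample from Exp(rate=theta), i.e. mean 1/theta, and reweight back to Exp(rate=1).
          rng = np.random.default_rng(seed)
          x = rng.exponential(1.0 / theta, size=(samples, n))
          s = x.sum(axis=1)
          # Likelihood ratio prod f(x)/g(x) with f = Exp(1) and g = Exp(theta).
          w = (1.0 / theta) ** n * np.exp((theta - 1.0) * s)
          return float(np.mean((s < gamma) * w))

      gamma = 1.0                      # deep left tail for n = 8
      print("naive MC :", naive_mc(gamma))
      print("scaled IS:", scaled_is(gamma))
      # Exact value: P(Gamma(8,1) < 1) ~ 1.0e-5, which naive MC struggles to resolve.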

  12. Optimization of sampling parameters for standardized exhaled breath sampling.

    Science.gov (United States)

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

    The lack of standardization of breath sampling is a major contributing factor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. The increase in sample volume improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between the whole and lower airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes the contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are 500 ml sample volume

  13. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large-scale sample surveys: the biological sample survey of commercial landings (BSCL), the experimental fishing sample survey (EFSS), and the commercial landings and effort sample survey (CLES).

  14. Further optimization of a parallel double-effect organosilicon distillation scheme through exergy analysis

    International Nuclear Information System (INIS)

    Sun, Jinsheng; Dai, Leilei; Shi, Ming; Gao, Hong; Cao, Xijia; Liu, Guangxin

    2014-01-01

    In our previous work, a significant improvement in organosilicon monomer distillation using parallel double-effect heat integration between a heavies removal column and six other columns, as well as heat integration between methyltrichlorosilane and dimethylchlorosilane columns, reduced the total exergy loss of the currently running counterpart by 40.41%. Further research regarding this optimized scheme demonstrated that it was necessary to reduce the higher operating pressure of the methyltrichlorosilane column, which is required for heat integration between the methyltrichlorosilane and dimethylchlorosilane columns. Therefore, in this contribution, a challenger scheme is presented with heat pumps introduced separately from the originally heat-coupled methyltrichlorosilane and dimethylchlorosilane columns in the above-mentioned optimized scheme, which is the prototype for this work. Both schemes are simulated using the same purity requirements used in running industrial units. The thermodynamic properties from the simulation are used to calculate the energy consumption and exergy loss of the two schemes. The results show that the heat pump option further reduces the flowsheet energy consumption and exergy loss by 27.35% and 10.98% relative to the prototype scheme. These results indicate that the heat pumps are superior to heat integration in the context of energy-savings during organosilicon monomer distillation. - Highlights: • Combine the paralleled double-effect and heat pump distillation to organosilicon distillation. • Compare the double-effect with the heat pump in saving energy. • Further cut down the flowsheet energy consumption and exergy loss by 27.35% and 10.98% respectively

  15. An adaptive robust optimization scheme for water-flooding optimization in oil reservoirs using residual analysis

    NARCIS (Netherlands)

    Siraj, M.M.; Van den Hof, P.M.J.; Jansen, J.D.

    2017-01-01

    Model-based dynamic optimization of the water-flooding process in oil reservoirs is a computationally complex problem and suffers from high levels of uncertainty. A traditional way of quantifying uncertainty in robust water-flooding optimization is by considering an ensemble of uncertain model

  16. Optimization of Compton-suppression and summing schemes for the TIGRESS HPGe detector array

    Science.gov (United States)

    Schumaker, M. A.; Svensson, C. E.; Andreoiu, C.; Andreyev, A.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Boston, A. J.; Chakrawarthy, R. S.; Churchman, R.; Drake, T. E.; Finlay, P.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Jones, B.; Maharaj, R.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Scraggs, H. C.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Watters, L. M.

    2007-04-01

    Methods of optimizing the performance of an array of Compton-suppressed, segmented HPGe clover detectors have been developed which rely on the physical position sensitivity of both the HPGe crystals and the Compton-suppression shields. These relatively simple analysis procedures promise to improve the precision of experiments with the TRIUMF-ISAC Gamma-Ray Escape-Suppressed Spectrometer (TIGRESS). Suppression schemes will improve the efficiency and peak-to-total ratio of TIGRESS for high γ-ray multiplicity events by taking advantage of the 20-fold segmentation of the Compton-suppression shields, while the use of different summing schemes will improve results for a wide range of experimental conditions. The benefits of these methods are compared for many γ-ray energies and multiplicities using a GEANT4 simulation, and the optimal physical configuration of the TIGRESS array under each set of conditions is determined.

  17. Experimental research of UWB over fiber system employing 128-QAM and ISFA-optimized scheme

    Science.gov (United States)

    He, Jing; Xiang, Changqing; Long, Fengting; Chen, Zuo

    2018-05-01

    In this paper, an optimized intra-symbol frequency-domain averaging (ISFA) scheme is proposed and experimentally demonstrated in an intensity-modulation and direct-detection (IMDD) multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system. According to the channel responses of the three MB-OFDM UWB sub-bands, the optimal ISFA window size for each sub-band is investigated. After 60-km standard single mode fiber (SSMF) transmission, the experimental results show that, at the bit error rate (BER) of 3.8 × 10⁻³, the receiver sensitivity of 128-quadrature amplitude modulation (QAM) can be improved by 1.9 dB using the proposed enhanced ISFA combined with the training sequence (TS)-based channel estimation scheme, compared with the conventional TS-based channel estimation. Moreover, the spectral efficiency (SE) is up to 5.39 bit/s/Hz.

  18. Optimal placement of combined heat and power scheme (cogeneration): application to an ethylbenzene plant

    International Nuclear Information System (INIS)

    Zainuddin Abd Manan; Lim Fang Yee

    2001-01-01

    Combined heat and power (CHP), also known as cogeneration, is widely accepted as a highly efficient energy saving measure, particularly in medium to large scale chemical process plants. To date, CHP application is well established in the developed countries. The advantage of a CHP scheme for a chemical plant is two-fold: (i) to drastically cut the electricity bill through on-site power generation, and (ii) to save on fuel bills through recovery of high-quality waste heat from power generation for process heating. In order to be effective, a CHP scheme must be placed at the right temperature level in the context of the overall process. Failure to do so might render a CHP venture worthless. This paper discusses the procedure for an effective implementation of a CHP scheme. An ethylbenzene process is used as a case study. A key visualization tool known as the grand composite curve is used to provide an overall picture of the process heat source and heat sink profiles. The grand composite curve, which is generated based on the first principles of Pinch Analysis, enables the CHP scheme to be optimally placed within the overall process scenario. (Author)

  19. An energy-efficient adaptive sampling scheme for wireless sensor networks

    NARCIS (Netherlands)

    Masoum, Alireza; Meratnia, Nirvana; Havinga, Paul J.M.

    2013-01-01

    Wireless sensor networks are new monitoring platforms. To cope with their resource constraints, in terms of energy and bandwidth, spatial and temporal correlation in sensor data can be exploited to find an optimal sampling strategy to reduce number of sampling nodes and/or sampling frequencies while

  20. Linear triangular optimization technique and pricing scheme in residential energy management systems

    Science.gov (United States)

    Anees, Amir; Hussain, Iqtadar; AlKhaldi, Ali Hussain; Aslam, Muhammad

    2018-06-01

    This paper presents a new linear optimization algorithm for power scheduling of electric appliances. The proposed system is applied in a smart home community, in which the community controller acts as a virtual distribution company for the end consumers. We also present a pricing scheme between the community controller and its residential users based on real-time pricing and likely block rates. The results of the proposed optimization algorithm demonstrate that by applying the anticipated technique, not only can end users minimise their consumption cost, but the peak-to-average power ratio can also be reduced, which is beneficial for the utilities as well.

  1. Tank waste remediation system optimized processing strategy with an altered treatment scheme

    International Nuclear Information System (INIS)

    Slaathaug, E.J.

    1996-03-01

    This report provides an alternative strategy evolved from the current Hanford Site Tank Waste Remediation System (TWRS) programmatic baseline for accomplishing the treatment and disposal of the Hanford Site tank wastes. This optimized processing strategy with an altered treatment scheme performs the major elements of the TWRS Program, but modifies the deployment of selected treatment technologies to reduce the program cost. The present program for development of waste retrieval, pretreatment, and vitrification technologies continues, but the optimized processing strategy reuses a single facility to accomplish the separations/low-activity waste (LAW) vitrification and the high-level waste (HLW) vitrification processes sequentially, thereby eliminating the need for a separate HLW vitrification facility

  2. Study on the structure optimization scheme design of a double-tube once-through steam generator

    International Nuclear Information System (INIS)

    Wei, Xinyu; Wu, Shifa; Wang, Pengfei; Zhao, Fuyu

    2016-01-01

    A double-tube once-through steam generator (DOTSG) consisting of an outer straight tube and an inner helical tube is studied in this work. First, the structure of the DOTSG is optimized by considering two different objective functions. The tube length and the total pressure drop are considered as the first and second objective functions, respectively. Because the DOTSG is divided into the subcooled, boiling, and superheated sections according to the different secondary fluid states, the pitches in the three sections are defined as the optimization variables. A multi-objective optimization model is established and solved by particle swarm optimization. The optimal pitch is small in the subcooled and superheated regions, and large in the boiling region. Considering the availability of the optimum structure at power levels below 100% full power, we propose a new operating scheme that can fix the boundaries between the three heat-transfer sections. The operating scheme is proposed on the basis of data at full power, and the operating parameters are calculated at low power levels. The primary inlet and outlet temperatures, as well as the flow rate and secondary outlet temperature, are changed according to the operating procedure.

  3. Study on the structure optimization scheme design of a double-tube once-through steam generator

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Xinyu; Wu, Shifa; Wang, Pengfei; Zhao, Fuyu [Dept. of Nuclear Science and Technology, Xi' an Jiaotong University, Xi' an (China)

    2016-08-15

    A double-tube once-through steam generator (DOTSG) consisting of an outer straight tube and an inner helical tube is studied in this work. First, the structure of the DOTSG is optimized by considering two different objective functions. The tube length and the total pressure drop are considered as the first and second objective functions, respectively. Because the DOTSG is divided into the subcooled, boiling, and superheated sections according to the different secondary fluid states, the pitches in the three sections are defined as the optimization variables. A multi-objective optimization model is established and solved by particle swarm optimization. The optimal pitch is small in the subcooled and superheated regions, and large in the boiling region. Considering the availability of the optimum structure at power levels below 100% full power, we propose a new operating scheme that can fix the boundaries between the three heat-transfer sections. The operating scheme is proposed on the basis of data at full power, and the operating parameters are calculated at low power levels. The primary inlet and outlet temperatures, as well as the flow rate and secondary outlet temperature, are changed according to the operating procedure.

  4. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    The latest technologies such as the Internet, corporate intranets, data warehouses, ERPs, satellites, digital sensors, embedded systems and mobile networks are all generating such a massive amount of data that it is getting very difficult to analyze and understand it all, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution: using a fraction of the computing resources, sampling can often provide the same level of accuracy. The sampling process requires much care because many factors are involved in determining the correct sample size. The approach proposed in this paper tries to find a solution to this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the sufficient sample size, which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
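
    The record above does not reproduce the statistical formula it refers to. As a rough illustration of how a "sufficient sample size" can be computed before probability sampling, the sketch below uses Cochran's classical formula with a finite-population correction; the formula choice and the default parameters are assumptions, not taken from the paper.

```python
import math

def cochran_sample_size(population_size, confidence_z=1.96, margin_of_error=0.05, p=0.5):
    """Estimate a sufficient sample size for a finite population.

    Uses Cochran's formula with a finite-population correction. The defaults
    (z-score, margin of error, expected proportion p) are illustrative only.
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population_size)   # finite-population correction
    return math.ceil(n)

# Example: a "huge dataset" of 10 million records
print(cochran_sample_size(10_000_000))  # ~385 records at 95% confidence, 5% error
```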

  5. Sample preparation optimization in fecal metabolic profiling.

    Science.gov (United States)

    Deda, Olga; Chatziioannou, Anastasia Chrysovalantou; Fasoula, Stella; Palachanis, Dimitris; Raikos, Νicolaos; Theodoridis, Georgios A; Gika, Helen G

    2017-03-15

    Metabolomic analysis of feces can provide useful insight on the metabolic status, the health/disease state of the human/animal and the symbiosis with the gut microbiome. As a result, there has recently been increased interest in the application of holistic analysis of feces for biomarker discovery. For metabolomics applications, the sample preparation process used prior to the analysis of fecal samples is of high importance, as it greatly affects the obtained metabolic profile, especially since feces, as a matrix, vary widely in their physicochemical characteristics and molecular content. However, there is still little information in the literature and a lack of a universal approach to sample treatment for fecal metabolic profiling. The scope of the present work was to study the conditions for sample preparation of rat feces with the ultimate goal of acquiring comprehensive metabolic profiles, either untargeted, by NMR spectroscopy and GC-MS, or targeted, by HILIC-MS/MS. A fecal sample pooled from male and female Wistar rats was extracted under various conditions by modifying the pH value, the nature of the organic solvent and the sample weight to solvent volume ratio. It was found that the 1/2 (w_f/v_s) ratio provided the highest number of metabolites under neutral and basic conditions in both untargeted profiling techniques. Concerning LC-MS profiles, neutral acetonitrile and propanol provided higher signals and wide metabolite coverage, though extraction efficiency is metabolite dependent. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Optimal relaxed causal sampler using sampled-date system theory

    NARCIS (Netherlands)

    Shekhawat, Hanumant; Meinsma, Gjerrit

    This paper studies the design of an optimal relaxed causal sampler using sampled data system theory. A lifted frequency domain approach is used to obtain the existence conditions and the optimal sampler. A state space formulation of the results is also provided. The resulting optimal relaxed causal

  7. Optimizing sampling approaches along ecological gradients

    DEFF Research Database (Denmark)

    Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel

    2016-01-01

    1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying...... patterns in a statistically solid and reproducible manner, given the normal restrictions in labour, time and money. However, a technical guideline about an adequate sampling design to maximize prediction success under restricted resources is lacking. This study aims at developing such a solid...... and reproducible guideline for sampling along gradients in all fields of ecology and science in general. 2. We conducted simulations with artificial data for five common response types known in ecology, each represented by a simple function (no response, linear, exponential, symmetric unimodal and asymmetric...

  8. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    Science.gov (United States)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01

    Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to look at signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on and translate prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous, atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimizing and predicting the performance of different calibration schemes will be presented. Even though this presentation will use the specific example of the Picarro G2401 CRDS Analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits
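
    As a practical illustration of the Allan deviation described above, the following sketch computes a non-overlapping Allan deviation for a uniformly sampled time series. It is a generic textbook estimator, not the exact processing used in the presentation or by the Picarro instrument software.

```python
import numpy as np

def allan_deviation(y, dt, taus):
    """Non-overlapping Allan deviation of a uniformly sampled series y.

    y    : 1-D array of measurements (e.g., CO2 mole fraction)
    dt   : sampling interval in seconds
    taus : iterable of averaging times (seconds), each a multiple of dt
    Returns (taus_used, adev).
    """
    y = np.asarray(y, dtype=float)
    out_tau, out_adev = [], []
    for tau in taus:
        m = int(round(tau / dt))              # samples per averaging bin
        n_bins = len(y) // m
        if m < 1 or n_bins < 2:
            continue
        bins = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(bins) ** 2)
        out_tau.append(m * dt)
        out_adev.append(np.sqrt(avar))
    return np.array(out_tau), np.array(out_adev)

# White noise: the Allan deviation should fall off roughly as 1/sqrt(tau)
rng = np.random.default_rng(0)
taus, adev = allan_deviation(rng.normal(size=100_000), dt=1.0, taus=[1, 10, 100, 1000])
print(np.round(adev, 4))
```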

  9. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme.

    Science.gov (United States)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-08-01

    Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes of medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme for mammographic masses. An image dataset including 1600 regions of interest (ROIs), in which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions), was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, local topological features, as well as features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross validation method. Among these four methods, SFFS has the highest efficacy; it takes only 3%-5% of the computational time of the GA approach and yields the highest performance level with the area under a receiver operating characteristic curve (AUC) = 0.864 ± 0.034. The results also demonstrated that, except when using GA, including the new texture features computed from the dilated mass segments improved the AUC results of the ANNs optimized
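
    The authors' own SFFS implementation is not reproduced in the record. As an illustrative stand-in, the sketch below wires a floating forward selector from the third-party mlxtend library around a small neural-network classifier; the feature matrix, labels, and target subset size are placeholders, not the study's data.

```python
# A minimal sketch of SFFS-based feature selection for a mass-vs-false-positive
# classifier, using the mlxtend library (not the authors' implementation).
import numpy as np
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))           # placeholder for the 1600 x 271 feature pool
y = rng.integers(0, 2, size=200)         # placeholder labels (malignant vs. false positive)

ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=1)
sffs = SFS(ann,
           k_features=10,                # target subset size (illustrative)
           forward=True, floating=True,  # floating=True gives the SFFS variant
           scoring="roc_auc", cv=5)
sffs = sffs.fit(X, y)
print(sffs.k_feature_idx_, round(sffs.k_score_, 3))
```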

  10. An optimal guarding scheme for thermal conductivity measurement using a guarded cut-bar technique, part 1 experimental study

    International Nuclear Information System (INIS)

    Xing, Changhu

    2014-01-01

    In the guarded cut-bar technique, a guard surrounding the measured sample and reference (meter) bars is temperature controlled to carefully regulate heat losses from the sample and reference bars. Guarding is typically carried out by matching the temperature profiles between the guard and the test stack of sample and meter bars. Problems arise in matching the profiles, especially when the thermal conductivities of the meter bars and of the sample differ, as is usually the case. In a previous numerical study, the applied guarding condition (guard temperature profile) was found to be an important factor in measurement accuracy. Different from the linear-matched or isothermal schemes recommended in the literature, the optimal guarding condition depends on the system geometry and on the thermal conductivity ratio of sample to meter bar. To validate the numerical results, an experimental study was performed to investigate the resulting error under different guarding conditions using stainless steel 304 as both the sample and the meter bars. The optimal guarding condition was further verified on a certified reference material, Pyroceram 9606, and on 99.95% pure iron, whose thermal conductivities are much smaller and much larger, respectively, than that of the stainless steel meter bars. Additionally, measurements were performed using three different inert gases to show the effect of the insulation's effective thermal conductivity on measurement error, revealing that the low-conductivity argon gas gives the lowest error sensitivity when deviating from the optimal condition. The results of this study provide a general guideline for this specific measurement method and for methods requiring optimal guarding or insulation

  11. A Fairness-Based Access Control Scheme to Optimize IPTV Fast Channel Changing

    Directory of Open Access Journals (Sweden)

    Junyu Lai

    2014-01-01

    Full Text Available IPTV services typically feature a longer channel-changing delay compared to conventional TV systems. The major contributor to this lies in the time spent on intraframe (I-frame) acquisition during channel changing. Currently, most widely adopted fast channel changing (FCC) methods rely on promptly transmitting to the client (the one conducting the channel change) a retained I-frame of the targeted channel as a separate unicast stream. However, this I-frame acceleration mechanism has an inherent scalability problem due to the explosion of channel-changing requests during commercial breaks. In this paper, we propose a fairness-based admission control (FAC) scheme for the original I-frame acceleration mechanism to enhance its scalability by decreasing the bandwidth demands. Based on the channel-changing history of every client, the FAC scheme can intelligently decide whether or not to conduct the I-frame acceleration for each channel-change request. Comprehensive simulation experiments demonstrate the potential of our proposed FAC scheme to effectively optimize the scalability of the I-frame acceleration mechanism, particularly during commercial breaks. Meanwhile, the FAC scheme only slightly increases the average channel-changing delay by temporarily disabling FCC (i.e., I-frame acceleration) for the clients who are addicted to frequent channel zapping.
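
    The record does not give the exact fairness criterion. The sketch below is a hypothetical per-client admission rule based only on recent channel-change history, meant to illustrate where such a decision sits in the request path, not the paper's FAC algorithm.

```python
# Illustrative sketch of a fairness-based admission decision for I-frame
# acceleration. A client is admitted only while its recent zapping rate stays
# below a per-client budget (assumed policy, not the paper's criterion).
from collections import deque
import time

class FairnessAdmissionControl:
    def __init__(self, window_s=60.0, max_changes_per_window=10):
        self.window_s = window_s
        self.max_changes = max_changes_per_window
        self.history = {}                          # client_id -> deque of timestamps

    def request_acceleration(self, client_id, now=None):
        now = time.time() if now is None else now
        q = self.history.setdefault(client_id, deque())
        while q and now - q[0] > self.window_s:    # drop events outside the window
            q.popleft()
        q.append(now)
        # Admit I-frame acceleration only for non-excessive zappers.
        return len(q) <= self.max_changes

fac = FairnessAdmissionControl()
print(fac.request_acceleration("client-42"))       # True for a first request
```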

  12. Energy-Efficient Optimization for HARQ Schemes over Time-Correlated Fading Channels

    KAUST Repository

    Shi, Zheng

    2018-03-19

    Energy efficiency of three common hybrid automatic repeat request (HARQ) schemes, including Type I HARQ, HARQ with chase combining (HARQ-CC) and HARQ with incremental redundancy (HARQ-IR), is analyzed, and joint power allocation and rate selection to maximize the energy efficiency is investigated in this paper. Unlike the prior literature, time-correlated fading channels are considered, and two widely studied quality of service (QoS) constraints, i.e., outage and goodput constraints, are also considered in the optimization, which further differentiates this work from prior ones. Using a unified expression of asymptotic outage probabilities, optimal transmission powers and the optimal rate are derived in closed form to maximize the energy efficiency while satisfying the QoS constraints. These closed-form solutions then enable a thorough analysis of the maximal energy efficiencies of various HARQ schemes. It is revealed that with a low outage constraint, the maximal energy efficiency achieved by Type I HARQ is $\frac{1}{4\ln 2}$ bits/J, while HARQ-CC and HARQ-IR can achieve the same maximal energy efficiency of $\frac{\kappa_\infty}{4\ln 2}$ bits/J, where $\kappa_\infty = 1.6617$. Moreover, time correlation in the fading channels has a negative impact on the energy efficiency, while a large maximal allowable number of transmissions is favorable for the improvement of energy efficiency. The effectiveness of the energy-efficient optimization is verified by extensive simulations, and the results also show that HARQ-CC can achieve the best tradeoff between energy efficiency and spectral efficiency among the three HARQ schemes.

  13. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    Directory of Open Access Journals (Sweden)

    Huan Chen

    Full Text Available This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.

  14. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    Science.gov (United States)

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
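
    As a toy illustration of the switch-selection side of such an ILP (with flow routing taken as fixed, unlike the joint model described above), the following sketch uses the PuLP library; the topology, polling costs, and the uniform reporting cost are made-up assumptions.

```python
# Simplified sketch of the polling-switch-selection part of the ILP
# (flow routing is fixed here, unlike the joint model in the paper).
import pulp

flows = {"f1": ["s1", "s2"], "f2": ["s2", "s3"], "f3": ["s1", "s3"]}  # flow -> switches on its path
poll_cost = {"s1": 5, "s2": 3, "s3": 4}          # cost of polling a switch once per period
report_cost = 1                                  # per-flow reporting cost (assumed uniform)

prob = pulp.LpProblem("polling_switch_selection", pulp.LpMinimize)
use = {s: pulp.LpVariable(f"use_{s}", cat="Binary") for s in poll_cost}
assign = {(f, s): pulp.LpVariable(f"assign_{f}_{s}", cat="Binary")
          for f, path in flows.items() for s in path}

# Objective: polling cost of selected switches + reporting cost of assignments
prob += (pulp.lpSum(poll_cost[s] * use[s] for s in poll_cost)
         + pulp.lpSum(report_cost * v for v in assign.values()))

for f, path in flows.items():                    # every flow is counted at exactly one switch
    prob += pulp.lpSum(assign[(f, s)] for s in path) == 1
for (f, s), v in assign.items():                 # a flow can only be assigned to a polled switch
    prob += v <= use[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: int(use[s].value()) for s in poll_cost})
```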

  15. Continuous quality control of the blood sampling procedure using a structured observation scheme

    DEFF Research Database (Denmark)

    Seemann, T. L.; Nybo, M.

    2015-01-01

    Background: An important preanalytical factor is the blood sampling procedure and its adherence to the guidelines, i.e. CLSI and ISO 15189, in order to ensure a consistent quality of the blood collection. Therefore, it is critically important to introduce quality control on this part of the process....... As suggested by the EFLM working group on the preanalytical phase, we introduced continuous quality control of the blood sampling procedure using a structured observation scheme to monitor the quality of blood sampling performed on an everyday basis. Materials and methods: Based on our own routines the EFLM....... Conclusion: It is possible to establish continuous quality control of blood sampling. It has been well accepted by the staff and we have already been able to identify critical areas in the sampling process. We find that continuous auditing increases focus on the quality of blood collection, which ensures

  16. Optimization Model for Machinery Selection of Multi-Crop Farms in Elsuki Agricultural Scheme

    Directory of Open Access Journals (Sweden)

    Mysara Ahmed Mohamed

    2017-07-01

    Full Text Available The machinery optimization model was developed to aid decision-makers and farm machinery managers in determining the optimal number of tractors, scheduling agricultural operations and minimizing total machinery costs. For the purposes of model verification, validation and application, input data were collected from primary and secondary sources of the Elsuki agricultural scheme for two seasons, namely 2011-2012 and 2013-2014. Model verification was performed by comparing the numbers of tractors of the Elsuki agricultural scheme for season 2011-2012 with those estimated by the model. The model succeeded in reducing the number of tractors and the operation total cost by 23%. The effect of the optimization model on the elements of direct cost saving indicated that the highest cost saving is reached with depreciation, repair and maintenance (23%) and the minimum cost saving is attained with fuel cost (22%). Sensitivity analysis in terms of changes in model input for cultivated area and total cost of operations showed that: increasing the operation total cost by 10% decreased the total number of tractors after optimization by 23%, and the total cost of operations also decreased by 23%; increasing the cultivated area by 10% decreased the total number of tractors after optimization by 12%, and the total cost of operations also decreased by 12%, from 16,669,206 SDG (1,111,280 $) to 14,636,376 SDG (975,758 $). For the case of the combined effect of area and operation total cost, the maximum number of tractors decreased by 12%, and the total cost of operations also decreased by 12%. It is recommended to apply the optimization model as a prerequisite for improving machinery management during implementation of machinery scheduling.

  17. Evaluation of alternative macroinvertebrate sampling techniques for use in a new tropical freshwater bioassessment scheme

    Directory of Open Access Journals (Sweden)

    Isabel Eleanor Moore

    2015-06-01

    Full Text Available Aim: The study aimed to determine the effectiveness of benthic macroinvertebrate dredge net sampling procedures as an alternative method to kick net sampling in tropical freshwater systems, specifically as an evaluation of sampling methods used in the Zambian Invertebrate Scoring System (ZISS) river bioassessment scheme. Tropical freshwater ecosystems are sometimes dangerous or inaccessible to sampling teams using traditional kick-sampling methods, so identifying an alternative procedure that produces similar results is necessary in order to collect data from a wide variety of habitats. Methods: Both kick and dredge nets were used to collect macroinvertebrate samples at 16 riverine sites in Zambia, ranging from backwaters and floodplain lagoons to fast flowing streams and rivers. The data were used to calculate ZISS, diversity (S: number of taxa present), and Average Score Per Taxon (ASPT) scores per site, using the two sampling methods to compare their sampling effectiveness. Environmental parameters, namely pH, conductivity, underwater photosynthetically active radiation (PAR), temperature, alkalinity, flow, and altitude, were also recorded and used in statistical analysis. Invertebrate communities present at the sample sites were determined using multivariate procedures. Results: Analysis of the invertebrate community and environmental data suggested that the testing exercise was undertaken in four distinct macroinvertebrate community types, supporting at least two quite different macroinvertebrate assemblages, and showing significant differences in habitat conditions. Significant correlations were found for all three bioassessment score variables between results acquired using the two methods, with dredge-sampling normally producing lower scores than the kick net procedures. Linear regression models were produced in order to correct each biological variable score collected by a dredge net to a score similar to that of one collected by kick net
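
    A minimal sketch of the kind of linear correction described above is given below; the scores are invented numbers used only to show the mechanics, not the study's data.

```python
# Illustrative sketch of correcting dredge-net scores toward kick-net scores
# with a simple linear regression (toy numbers, not the study's data).
import numpy as np

kick   = np.array([6.1, 5.4, 7.0, 4.8, 6.5, 5.9])   # ASPT from kick-net samples
dredge = np.array([5.2, 4.9, 6.1, 4.1, 5.6, 5.0])   # ASPT from dredge-net samples, same sites

slope, intercept = np.polyfit(dredge, kick, deg=1)   # kick ~ slope * dredge + intercept

def correct_dredge_score(score):
    """Map a dredge-net ASPT score onto the kick-net scale."""
    return slope * score + intercept

print(round(correct_dredge_score(5.5), 2))
```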

  18. Methodology for optimization of process integration schemes in a biorefinery under uncertainty

    International Nuclear Information System (INIS)

    González-Cortés, Meilyn; Martínez-Martínez, Yenisleidys; Albernas-Carvajal, Yailet; Pedraza-Garciga, Julio; Morales-Zamora, Marlen (Departamento de Ingeniería Química, Facultad de Química y Farmacia, Universidad Central Marta Abreu de las Villas, Cuba)

    2017-01-01

    Uncertainty has a great impact on investment decisions, on the operability of plants and on the feasibility of integration opportunities in chemical processes. This paper presents the steps for considering the optimization of process investment in process integration under conditions of uncertainty. The potential of sugarcane biomass for integration with several plants in a biorefinery scheme for obtaining chemical products and thermal and electric energy is shown. Among the factories with potential for this integration are pulp and paper and sugar factories and other derivative processes. These factories have common resources and also a variety of products that can be exchanged between them, so that certain products generated in one of them can serve as raw material in another plant. The methodology developed guides the identification of feasible investment projects under uncertainty. The objective function considered was the maximization of the net present value in the different scenarios generated from the integration scheme. (author)

  19. Optimization study on multiple train formation scheme of urban rail transit

    Science.gov (United States)

    Xia, Xiaomei; Ding, Yong; Wen, Xin

    2018-05-01

    The new organization method, represented by the mixed operation of multi-marshalling trains, can adapt to the characteristics of unevenly distributed passenger flow, but research on this aspect is still far from complete. This paper introduces the passenger sharing rate and a congestion penalty coefficient for different train formations. On this basis, an optimization model is established with the minimum passenger cost and operation cost as objectives, and operation frequency and passenger demand as constraints. The ideal point method is used to solve this model. Compared with the fixed-marshalling operation model, the scheme saves 9.24% and 4.43% of the overall cost, respectively. This result not only validates the model, but also illustrates the advantages of the multiple train formation scheme.

  20. Determination of Optimal Opening Scheme for Electromagnetic Loop Networks Based on Fuzzy Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Yang Li

    2016-01-01

    Full Text Available Studying the optimization of and decision-making for opening electromagnetic loop networks plays an important role in the planning and operation of power grids. First, the basic principle of the fuzzy analytic hierarchy process (FAHP) is introduced, and then an improved FAHP-based scheme evaluation method is proposed for decoupling electromagnetic loop networks based on a set of indicators reflecting the performance of the candidate schemes. The proposed method combines the advantages of the analytic hierarchy process (AHP) and fuzzy comprehensive evaluation. On the one hand, AHP effectively combines qualitative and quantitative analysis to ensure the rationality of the evaluation model; on the other hand, the judgment matrix and qualitative indicators are expressed with trapezoidal fuzzy numbers to make decision-making more realistic. The effectiveness of the proposed method is validated by the application results on the real power system of Liaoning province of China.

  1. Space-partition method for the variance-based sensitivity analysis: Optimal partition scheme and comparative study

    International Nuclear Information System (INIS)

    Zhai, Qingqing; Yang, Jun; Zhao, Yu

    2014-01-01

    Variance-based sensitivity analysis has been widely studied and has asserted itself among practitioners. Monte Carlo simulation methods are well developed for the calculation of variance-based sensitivity indices, but they do not make full use of each model run. Recently, several works mentioned a scatter-plot partitioning method to estimate the variance-based sensitivity indices from given data, where a single bunch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method in the estimation of variance-based sensitivity indices, and its convergence and other performance characteristics are investigated. Since the method heavily depends on the partition scheme, the influence of the partition scheme is discussed and the optimal partition scheme is proposed by minimizing the estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality for higher order sensitivity indices. The proposed space-partition method is compared with the more traditional method, and test cases show that it outperforms the traditional one.
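
    A minimal sketch of the scatter-plot partitioning idea is shown below: the range of one input is split into equiprobable bins, and the variance of the conditional means estimates the first-order index. The equiprobable partition is an illustrative choice; the paper's optimal, variance-minimizing partition scheme is not reproduced.

```python
# Minimal sketch of a scatter-plot (space-partition) estimator of a first-order
# variance-based sensitivity index from a single bunch of samples.
import numpy as np

def first_order_index(x_i, y, n_bins=20):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by partitioning the range of X_i."""
    edges = np.quantile(x_i, np.linspace(0, 1, n_bins + 1))
    bin_id = np.clip(np.searchsorted(edges, x_i, side="right") - 1, 0, n_bins - 1)
    cond_means = np.array([y[bin_id == b].mean() for b in range(n_bins)])
    weights = np.array([(bin_id == b).mean() for b in range(n_bins)])
    var_cond_mean = np.sum(weights * (cond_means - y.mean()) ** 2)
    return var_cond_mean / y.var()

# Test on the linear model Y = X1 + 2*X2, for which S_1 = 1/5 and S_2 = 4/5
rng = np.random.default_rng(0)
x = rng.normal(size=(100_000, 2))
y = x[:, 0] + 2.0 * x[:, 1]
print(round(first_order_index(x[:, 0], y), 3), round(first_order_index(x[:, 1], y), 3))
```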

  2. Luminosity optimization schemes in Compton experiments based on Fabry-Perot optical resonators

    Directory of Open Access Journals (Sweden)

    Alessandro Variola

    2011-03-01

    Full Text Available The luminosity of Compton x-ray and γ sources depends on the average current in the electron bunches, the energy of the laser pulses, and the geometry of the particle bunch to laser pulse collisions. To obtain high-power photon pulses, laser pulses can be stacked in a passive optical resonator (Fabry-Perot cavity), especially when a high average flux is required. But, in this case, owing to the presence of the optical cavity mirrors, the electron bunches have to collide at an angle with the laser pulses, with a consequent luminosity decrease. In this article a crab-crossing scheme is proposed for Compton sources, based on a laser amplified in a Fabry-Perot resonator, to eliminate the luminosity losses caused by the crossing angle, taking into account that in laser-electron collisions only the electron bunches can be tilted at the collision point. We report an analytical study of the crab-crossing scheme for Compton gamma sources. The analytical expression for the total yield of photons generated in Compton sources with the crab-crossing scheme of collision is derived. The optimal tilt angle of the bunch was found to be equal to half of the collision angle. At this crabbing angle, the maximal yield of scattered laser photons is attained, thanks to the maximization, in the collision process, of the time spent by the laser pulse in the electron bunch. Estimations for some Compton source projects are presented. Furthermore, some optical cavity configurations are analyzed and the luminosity calculated. As illustrated, the four-mirror two- or three-dimensional scheme is the most appropriate for Compton sources.
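
    For orientation, the sketch below evaluates the standard geometric luminosity reduction factor associated with a crossing angle and notes that an ideal crab tilt of half the full crossing angle recovers the head-on value. The formula is the generic Piwinski-type factor and the parameter values are invented, so this is only a hedged illustration, not the article's derivation for Compton sources.

```python
# Illustrative estimate of the geometric luminosity loss from a crossing angle,
# using R = 1/sqrt(1 + (sigma_z/sigma_x * tan(phi/2))^2). Ideal crab crossing
# (bunch tilted by phi/2) recovers R ~ 1. The numbers below are made up and
# are not taken from the cited article.
import math

def crossing_reduction_factor(sigma_z, sigma_x, phi_full):
    """Luminosity reduction factor for a full crossing angle phi_full (radians)."""
    return 1.0 / math.sqrt(1.0 + (sigma_z / sigma_x * math.tan(phi_full / 2.0)) ** 2)

sigma_z = 10e-3        # electron bunch length, 10 mm (assumed)
sigma_x = 50e-6        # transverse rms size at the collision point, 50 um (assumed)
phi = math.radians(2)  # 2-degree full crossing angle (assumed)

print(round(crossing_reduction_factor(sigma_z, sigma_x, phi), 3))  # without crabbing
# With an ideal crab tilt of phi/2 applied to the electron bunch, the factor returns to ~1.
```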

  3. Evolutional Optimization on Material Ordering and Inventory Control of Supply Chain through Incentive Scheme

    Science.gov (United States)

    Prasertwattana, Kanit; Shimizu, Yoshiaki; Chiadamrong, Navee

    This paper studied the material ordering and inventory control of supply chain systems. The effect of controlling policies is analyzed under three different configurations of the supply chain systems, and the formulated problem has been solved by using an evolutional optimization method known as Differential Evolution (DE). The numerical results show that the coordinating policy with the incentive scheme outperforms the other policies and can improve the performance of the overall system as well as all members under the concept of supply chain management.
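
    The record does not detail the DE setup. As an illustration of how such a policy search can be run with an off-the-shelf implementation, the sketch below tunes a two-parameter (s, S) ordering policy against a toy cost simulation using SciPy's differential_evolution; the policy form, cost terms and demand model are assumptions, not the paper's supply-chain model.

```python
# Minimal sketch of tuning ordering-policy parameters with Differential Evolution.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
demand = rng.poisson(lam=20, size=200)            # simulated daily demand

def total_cost(params, holding=1.0, shortage=5.0, order_fixed=50.0):
    s, big_s = params
    if big_s <= s:                                # infeasible: order-up-to level below reorder point
        return 1e9
    inv, cost = big_s, 0.0
    for d in demand:
        if inv <= s:                              # reorder up to S at/below the reorder point
            cost += order_fixed
            inv = big_s
        inv -= d
        cost += holding * max(inv, 0) + shortage * max(-inv, 0)
        inv = max(inv, 0)                         # lost-sales assumption
    return cost

result = differential_evolution(total_cost, bounds=[(0, 100), (0, 300)], seed=1)
print(np.round(result.x, 1), round(result.fun, 1))
```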

  4. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    Directory of Open Access Journals (Sweden)

    Yongkai An

    2015-07-01

    Full Text Available This paper introduces a surrogate model to identify an optimal exploitation scheme, with the western Jilin province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county, respectively, so as to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A comparison between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours, while the latter needs 25 days. The above results indicate that the surrogate model developed in this study could not only considerably reduce the computational burden of the simulation optimization process, but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately.
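
    A minimal sketch of the LHS-plus-kriging surrogate workflow described above is given below; a Gaussian process regressor stands in for the regression-kriging model, and the "simulator" is a made-up function rather than the groundwater-flow model.

```python
# Minimal sketch of building a kriging-type surrogate of an expensive simulator
# from Latin Hypercube samples (illustrative placeholders throughout).
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulator(x):
    """Placeholder for the groundwater-flow model (e.g., drawdown vs. pumping rates)."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

sampler = qmc.LatinHypercube(d=2, seed=0)
X_train = qmc.scale(sampler.random(n=40), l_bounds=[0, 0], u_bounds=[1, 1])
y_train = expensive_simulator(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.2),
                              normalize_y=True).fit(X_train, y_train)

X_test = qmc.scale(sampler.random(n=10), l_bounds=[0, 0], u_bounds=[1, 1])
abs_err = np.abs(gp.predict(X_test) - expensive_simulator(X_test))
print(np.round(abs_err, 3))   # out-of-sample errors of the surrogate
```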

  5. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.

  6. Optimal control, investment and utilization schemes for energy storage under uncertainty

    Science.gov (United States)

    Mirhosseini, Niloufar Sadat

    Energy storage has the potential to offer new means for added flexibility on the electricity systems. This flexibility can be used in a number of ways, including adding value towards asset management, power quality and reliability, integration of renewable resources and energy bill savings for the end users. However, uncertainty about system states and volatility in system dynamics can complicate the question of when to invest in energy storage and how best to manage and utilize it. This work proposes models to address different problems associated with energy storage within a microgrid, including optimal control, investment, and utilization. Electric load, renewable resources output, storage technology cost and electricity day-ahead and spot prices are the factors that bring uncertainty to the problem. A number of analytical methodologies have been adopted to develop the aforementioned models. Model Predictive Control and discretized dynamic programming, along with a new decomposition algorithm are used to develop optimal control schemes for energy storage for two different levels of renewable penetration. Real option theory and Monte Carlo simulation, coupled with an optimal control approach, are used to obtain optimal incremental investment decisions, considering multiple sources of uncertainty. Two stage stochastic programming is used to develop a novel and holistic methodology, including utilization of energy storage within a microgrid, in order to optimally interact with energy market. Energy storage can contribute in terms of value generation and risk reduction for the microgrid. The integration of the models developed here are the basis for a framework which extends from long term investments in storage capacity to short term operational control (charge/discharge) of storage within a microgrid. In particular, the following practical goals are achieved: (i) optimal investment on storage capacity over time to maximize savings during normal and emergency

  7. Evaluation of sampling schemes for in-service inspection of steam generator tubing

    International Nuclear Information System (INIS)

    Hanlen, R.C.

    1990-03-01

    This report is a follow-on of work initially sponsored by the US Nuclear Regulatory Commission (Bowen et al. 1989). The work presented here is funded by EPRI and is jointly sponsored by the Electric Power Research Institute (EPRI) and the US Nuclear Regulatory Commission (NRC). The goal of this research was to evaluate fourteen sampling schemes or plans. The main criterion used for evaluating plan performance was the effectiveness for sampling, detecting and plugging defective tubes. The performance criterion was evaluated across several choices of distributions of degraded/defective tubes, probability of detection (POD) curves and eddy-current sizing models. Conclusions from this study are dependent upon the tube defect distributions, sample size, and expansion rules considered. As degraded/defective tubes form "clusters" (i.e., maps 6A, 8A and 13A), the smaller sample sizes provide a capability of detecting and sizing defective tubes that approaches 100% inspection. When there is little or no clustering (i.e., maps 1A, 20 and 21), sample efficiency is approximately equal to the initial sample size taken. There is an indication (though not statistically significant) that the systematic sampling plans are better than the random sampling plans for equivalent initial sample size. There was no indication of an effect due to modifying the threshold value for the second stage expansion. The lack of an indication is likely due to the specific tube flaw sizes considered for the six tube maps. 1 ref., 11 figs., 19 tabs

  8. Using remotely-sensed data for optimal field sampling

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-09-01

    Full Text Available Statistics is the science pertaining to the collection, summary, analysis, interpretation and presentation of data. It is often impractical... studies are: where to sample, what to sample and how many samples to obtain. Conventional sampling techniques are not always suitable in environmental studies and scientists have explored the use of remotely-sensed data as ancillary information to aid...

  9. A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme

    Science.gov (United States)

    Ghoman, Satyajit S.

    The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of
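
    The POD-based design-space order reduction mentioned above can be illustrated with a few lines of linear algebra: dominant modes are extracted from the snapshot matrix of candidate configurations via an SVD. The sketch below uses random placeholder snapshots, not M3DOE output.

```python
# Minimal sketch of extracting dominant POD modes from an ensemble ("snapshots")
# of candidate configurations via the SVD of the snapshot matrix.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snapshots = 500, 30
snapshots = rng.normal(size=(n_dof, n_snapshots))      # columns = candidate configurations

mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1              # modes capturing 99% of the variance
basis = U[:, :k]                                         # reduced design-space basis

# Any candidate can now be represented by k modal coefficients instead of n_dof values.
coeffs = basis.T @ (snapshots[:, 0:1] - mean)
print(k, coeffs.shape)
```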

  10. Best Hiding Capacity Scheme for Variable Length Messages Using Particle Swarm Optimization

    Science.gov (United States)

    Bajaj, Ruchika; Bedi, Punam; Pal, S. K.

    Steganography is an art of hiding information in such a way that prevents the detection of hidden messages. Besides security of data, the quantity of data that can be hidden in a single cover medium, is also very important. We present a secure data hiding scheme with high embedding capacity for messages of variable length based on Particle Swarm Optimization. This technique gives the best pixel positions in the cover image, which can be used to hide the secret data. In the proposed scheme, k bits of the secret message are substituted into k least significant bits of the image pixel, where k varies from 1 to 4 depending on the message length. The proposed scheme is tested and results compared with simple LSB substitution, uniform 4-bit LSB hiding (with PSO) for the test images Nature, Baboon, Lena and Kitty. The experimental study confirms that the proposed method achieves high data hiding capacity and maintains imperceptibility and minimizes the distortion between the cover image and the obtained stego image.
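
    The sketch below illustrates only the k-bit LSB substitution step; the PSO search for the best pixel positions, which is the core of the proposed scheme, is not reproduced, and the positions are simply taken in order.

```python
# Minimal sketch of k-bit LSB substitution at given pixel positions.
import numpy as np

def embed_bits(cover, positions, bits, k):
    """Write groups of k secret bits into the k least significant bits of the chosen pixels."""
    stego = cover.copy().ravel()
    groups = [bits[i:i + k] for i in range(0, len(bits), k)]
    for pos, group in zip(positions, groups):
        value = int("".join(map(str, group)), 2)
        pixel = int(stego[pos])
        stego[pos] = (pixel >> k << k) | value   # clear the k LSBs, then insert the secret bits
    return stego.reshape(cover.shape)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
secret = rng.integers(0, 2, size=32).tolist()
stego = embed_bits(cover, positions=range(16), bits=secret, k=2)
print(np.abs(stego.astype(int) - cover.astype(int)).max())   # per-pixel distortion <= 3 for k = 2
```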

  11. An effective coded excitation scheme based on a predistorted FM signal and an optimized digital filter

    DEFF Research Database (Denmark)

    Misaridis, Thanasis; Jensen, Jørgen Arendt

    1999-01-01

    This paper presents a coded excitation imaging system based on a predistorted FM excitation and a digital compression filter designed for medical ultrasonic applications, in order to preserve both axial resolution and contrast. In radars, optimal Chebyshev windows efficiently weight a nearly...... as with pulse excitation (about 1.5 lambda), depending on the filter design criteria. The axial sidelobes are below -40 dB, which is the noise level of the measuring imaging system. The proposed excitation/compression scheme shows good overall performance and stability to the frequency shift due to attenuation...... be removed by weighting. We show that by using a predistorted chirp with amplitude or phase shaping for amplitude ripple reduction and a correlation filter that accounts for the transducer's natural frequency weighting, output sidelobe levels of -35 to -40 dB are directly obtained. When an optimized filter...

  12. Properties of the DREAM scheme and its optimization for application to proteins

    International Nuclear Information System (INIS)

    Westfeld, Thomas; Verel, René; Ernst, Matthias; Böckmann, Anja; Meier, Beat H.

    2012-01-01

    The DREAM scheme is an efficient adiabatic homonuclear polarization-transfer method suitable for multi-dimensional experiments in biomolecular solid-state NMR. The bandwidth and dynamics of the polarization transfer in the DREAM experiment depend on a number of experimental and spin-system parameters. In order to obtain optimal results, the dependence of the cross-peak intensity on these parameters needs to be understood and carefully controlled. We introduce a simplified model to semi-quantitatively describe the polarization-transfer patterns for the relevant spin systems. Numerical simulations for all natural amino acids (except tryptophane) show the dependence of the cross-peak intensities as a function of the radio-frequency-carrier position. This dependency can be used as a guide to select the desired conditions in protein spectroscopy. Practical guidelines are given on how to set up a DREAM experiment for optimized Cα/Cβ transfer, which is important in sequential assignment experiments.

  13. Model-Based Predictive Control Scheme for Cost Optimization and Balancing Services for Supermarket Refrigeration Systems

    DEFF Research Database (Denmark)

    Weerts, Hermanus H. M.; Shafiei, Seyed Ehsan; Stoustrup, Jakob

    2014-01-01

    A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate the regulatory power services as well as energy cost optimization of such systems in the smart grid. Nonlinear dynamics existed in large-scale refrigeration plants challenges the predictive...... control design. It is however shown that taking into account the knowledge of different time scales in the dynamical subsystems makes possible a linear formulation of a centralized predictive controller. A realistic scenario of regulatory power services in the smart grid is considered and formulated...... in the same objective as of cost optimization one. A simulation benchmark validated against real data and including significant dynamics of the system are employed to show the effectiveness of the proposed control scheme....

  14. A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs

    Science.gov (United States)

    Xu, Xin; Yuan, Minjiao; Liu, Xiao; Cai, Zhiping; Wang, Tian

    2018-01-01

    In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs to improve reliability for error-prone and unreliable wireless communication links where the transmission power is assumed to be identical across the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss-and-delay sensitive WSNs. The main contribution of the COOR scheme is making full use of the remaining energy in the network to increase the transmission power of most nodes, which provides a higher communication reliability or a further transmission distance. Two optimization strategies, referred to as COOR(R) and COOR(P), are proposed to improve network performance. When increasing the transmission power, the COOR(R) strategy chooses a node that has a higher communication reliability at the same distance in comparison to traditional opportunistic routing when selecting the next-hop candidate node. Since the reliability of data transmission is improved, the delay of the data reaching the sink is reduced by shortening the time of communication between candidate nodes. On the other hand, the COOR(P) strategy prefers a node that has the same communication reliability at a longer distance. As a result, network performance can be improved for the following reasons: (a) the delay is reduced, as fewer hops are needed for the packet to reach the sink under longer transmission distances; (b) the reliability can be improved, since it is the product of the reliability of every hop of the routing path

  15. A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs.

    Science.gov (United States)

    Xu, Xin; Yuan, Minjiao; Liu, Xiao; Liu, Anfeng; Xiong, Neal N; Cai, Zhiping; Wang, Tian

    2018-05-03

    In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs to improve reliability for error-prone and unreliable wireless communication links where the transmission power is assumed to be identical across the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss-and-delay sensitive WSNs. The main contribution of the COOR scheme is making full use of the remaining energy in the network to increase the transmission power of most nodes, which provides a higher communication reliability or a further transmission distance. Two optimization strategies, referred to as COOR(R) and COOR(P), are proposed to improve network performance. When increasing the transmission power, the COOR(R) strategy chooses a node that has a higher communication reliability at the same distance in comparison to traditional opportunistic routing when selecting the next-hop candidate node. Since the reliability of data transmission is improved, the delay of the data reaching the sink is reduced by shortening the time of communication between candidate nodes. On the other hand, the COOR(P) strategy prefers a node that has the same communication reliability at a longer distance. As a result, network performance can be improved for the following reasons: (a) the delay is reduced, as fewer hops are needed for the packet to reach the sink under longer transmission distances; (b) the reliability can be improved, since it is the product of the reliability of every hop of the routing path

  16. A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs

    Directory of Open Access Journals (Sweden)

    Xin Xu

    2018-05-01

    Full Text Available In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, the loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs to improve reliability for error-prone and unreliable wireless communication links where the transmission power is assumed to be identical in the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss-and-delay sensitive WSNs. The main contribution of the COOR scheme is making full use of the remaining energy in the network to increase the transmission power of most nodes, which provides higher communication reliability or a longer transmission distance. Two optimization strategies of the COOR scheme, referred to as COOR(R) and COOR(P), are proposed to improve network performance. In the case of increased transmission power, the COOR(R) strategy chooses a node that has higher communication reliability at the same distance than traditional opportunistic routing when selecting the next-hop candidate node. Since the reliability of data transmission is improved, the delay of the data reaching the sink is reduced by shortening the time of communication between candidate nodes. On the other hand, the COOR(P) strategy prefers a node that has the same communication reliability at a longer distance. As a result, network performance can be improved for the following reasons: (a) the delay is reduced, as fewer hops are needed for the packet to reach the sink when per-hop transmission distances are longer; (b) the reliability can be improved, since it is the product of the reliability of every hop of the routing path.

  17. Adaptive Digital Watermarking Scheme Based on Support Vector Machines and Optimized Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoyi Zhou

    2018-01-01

    Full Text Available Digital watermarking is an effective solution to the problem of copyright protection, thus maintaining the security of digital products in the network. An improved scheme to increase the robustness of embedded information on the basis of the discrete cosine transform (DCT) domain is proposed in this study. The embedding process consisted of two main procedures. Firstly, the embedding intensity with support vector machines (SVMs) was adaptively strengthened by training 1600 image blocks which are of different texture and luminance. Secondly, the embedding position with the optimized genetic algorithm (GA) was selected. To optimize GA, the best individual in the first place of each generation directly went into the next generation, and the best individual in the second position participated in the crossover and the mutation process. The transparency reaches 40.5 when GA’s generation number is 200. A case study was conducted on a 256 × 256 standard Lena image with the proposed method. After various attacks (such as cropping, JPEG compression, Gaussian low-pass filtering (3, 0.5), histogram equalization, and contrast increasing (0.5, 0.6)) on the watermarked image, the extracted watermark was compared with the original one. Results demonstrate that the watermark can be effectively recovered after these attacks. Even though the algorithm is weak against rotation attacks, it provides high quality in imperceptibility and robustness, and hence it is a successful candidate for implementing a novel image watermarking scheme meeting real-time requirements.

  18. Sampling optimization for printer characterization by direct search.

    Science.gov (United States)

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.
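
    As a rough illustration of the idea (not the authors' algorithm), the sketch below uses a simple direct search to place a small number of characterization samples on a synthetic one-dimensional printer tone curve so that interpolation from those samples matches a dense reference measurement; the tone-curve model, the error metric, and the coordinate-style search are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense reference characterization: device input levels and measured response
# (here a synthetic, printer-like nonlinear tone curve stands in for real data).
inputs = np.linspace(0.0, 1.0, 256)
response = inputs ** 1.8 + 0.02 * np.sin(6 * np.pi * inputs)

def max_error(sample_idx):
    """Worst-case error when the full curve is interpolated from the samples."""
    idx = np.unique(np.clip(sample_idx, 0, len(inputs) - 1))
    approx = np.interp(inputs, inputs[idx], response[idx])
    return np.max(np.abs(approx - response))

# Direct search: start from uniform samples and repeatedly try moving one
# sample index left/right, keeping any move that lowers the error.
n_samples = 8
idx = np.linspace(0, len(inputs) - 1, n_samples).astype(int)
step = 16
while step >= 1:
    improved = False
    for i in range(1, n_samples - 1):          # keep the endpoints fixed
        for delta in (-step, step):
            trial = idx.copy()
            trial[i] += delta
            if max_error(trial) < max_error(idx):
                idx = np.sort(trial)
                improved = True
    if not improved:
        step //= 2                              # refine the search step

print("uniform error   :", max_error(np.linspace(0, 255, n_samples).astype(int)))
print("optimized error :", max_error(idx))
print("sample levels   :", inputs[np.unique(idx)])
```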

  19. Optimization of trigeneration systems by Mathematical Programming: Influence of plant scheme and boundary conditions

    International Nuclear Information System (INIS)

    Piacentino, A.; Gallea, R.; Cardona, F.; Lo Brano, V.; Ciulla, G.; Catrini, P.

    2015-01-01

    Highlights: • Lay-out, design and operation of a trigeneration plant are optimized for a hotel building. • The temporal basis used for the optimization is properly selected. • The influence of the plant scheme on the optimal results is discussed. • Sensitivity analysis is performed for different levels of tax exemption on fuel. • Dynamic behavior of the cogeneration unit influences its optimal operation strategy. - Abstract: The large potential for energy saving by cogeneration and trigeneration in the building sector is scarcely exploited due to a number of obstacles in making the investments attractive. The analyst often encounters difficulties in identifying optimal design and operation strategies, since a number of factors, either endogenous (i.e. related to the energy load profiles) or exogenous (i.e. related to external conditions like energy prices and support mechanisms), influence the economic viability. In this paper a decision tool is adopted, which represents an upgrade of a software analyzed in previous papers; the tool simultaneously optimizes the plant lay-out, the sizes of the main components and their operation strategy. For a specific building in the hotel sector, a preliminary analysis is performed to identify the most promising plant configuration, in terms of type of cogeneration unit (either a microturbine or a diesel oil/natural gas-fueled reciprocating engine) and absorption chiller. Then, sensitivity analyses are carried out to investigate the effects induced by: (a) tax exemption for the fuel consumed in “efficient cogeneration” mode, (b) the dynamic behavior of the prime mover and its consequent capability to rapidly adjust its load level to follow the energy loads.

  20. A high-precision sampling scheme to assess persistence and transport characteristics of micropollutants in rivers.

    Science.gov (United States)

    Schwientek, Marc; Guillet, Gaëlle; Rügner, Hermann; Kuch, Bertram; Grathwohl, Peter

    2016-01-01

    Increasing numbers of organic micropollutants are emitted into rivers via municipal wastewaters. Due to their persistence, many pollutants pass wastewater treatment plants without substantial removal. Transport and fate of pollutants in receiving waters and export to downstream ecosystems are not well understood. In particular, a better knowledge of processes governing their environmental behavior is needed. Although a lot of data are available concerning the ubiquitous presence of micropollutants in rivers, accurate data on transport and removal rates are lacking. In this paper, a mass balance approach is presented, which is based on the Lagrangian sampling scheme, but extended to account for precise transport velocities and mixing along river stretches. The calculated mass balances allow accurate quantification of pollutants' reactivity along river segments. This is demonstrated for representative members of important groups of micropollutants, e.g. pharmaceuticals, musk fragrances, flame retardants, and pesticides. A model-aided analysis of the measured data series gives insight into the temporal dynamics of removal processes. The occurrence of different removal mechanisms such as photooxidation, microbial degradation, and volatilization is discussed. The results demonstrate that removal processes are highly variable in time and space, and this has to be considered in future studies. The high-precision sampling scheme presented could be a powerful tool for quantifying removal processes under different boundary conditions and in river segments with contrasting properties. Copyright © 2015. Published by Elsevier B.V.

  1. The effect of sampling scheme in the survey of atmospheric deposition of heavy metals in Albania by using moss biomonitoring.

    Science.gov (United States)

    Qarri, Flora; Lazo, Pranvera; Bekteshi, Lirim; Stafilov, Trajce; Frontasyeva, Marina; Harmens, Harry

    2015-02-01

    The atmospheric deposition of heavy metals in Albania was investigated by using a carpet-forming moss species (Hypnum cupressiforme) as bioindicator. Sampling was done in the dry seasons of autumn 2010 and summer 2011. Two different sampling schemes are discussed in this paper: a random sampling scheme with 62 sampling sites distributed over the whole territory of Albania and a systematic sampling scheme with 44 sampling sites distributed over the same territory. Unwashed, dried samples were totally digested by using microwave digestion, and the concentrations of metal elements were determined by inductively coupled plasma atomic emission spectroscopy (ICP-AES) and AAS (Cd and As). Twelve elements, comprising conservative elements (Al and Fe) and trace elements (As, Cd, Cr, Cu, Ni, Mn, Pb, V, Zn, and Li), were measured in moss samples; Li, as a typical lithogenic element, is also included. The results reflect local emission points. The median concentrations and statistical parameters of the elements were discussed by comparing the two sampling schemes. The results of both sampling schemes are compared with the results of other European countries. Different levels of contamination, evaluated by the respective contamination factor (CF) of each element, were obtained for the two sampling schemes, while the local emitters identified, such as iron-chromium metallurgy, the cement industry, oil refining, mining, and transport, were the same for both sampling schemes. In addition, natural sources, i.e., the accumulation of these metals in mosses caused by metal-enriched soil associated with wind-blown soil, were identified as another possible local emission factor.

  2. Optimization of the scheme for natural ecology planning of urban rivers based on ANP (analytic network process) model.

    Science.gov (United States)

    Zhang, Yichuan; Wang, Jiangping

    2015-07-01

    Rivers serve as a highly valued component in ecosystems and urban infrastructures. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for the successful construction of urban rivers. Therefore, relevant studies on the optimization of schemes for natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were included as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes could be made by the ANP method on natural ecology planning of urban rivers. This method could be used to provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for the optimization of schemes for urban green space planning and design.

  3. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    Full Text Available The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  4. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  5. Performance Analysis and Optimization of an Adaptive Admission Control Scheme in Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Shunfu Jin

    2013-01-01

    Full Text Available In cognitive radio networks, if all the secondary user (SU) packets join the system without any restrictions, the average latency of the SU packets will be greater, especially when the traffic load of the system is higher. For this, we propose an adaptive admission control scheme with a system access probability for the SU packets in this paper. We suppose the system access probability is inversely proportional to the total number of packets in the system and introduce an Adaptive Factor to adjust the system access probability. Accordingly, we build a discrete-time preemptive queueing model with adjustable joining rate. In order to obtain the steady-state distribution of the queueing model exactly, we construct a two-dimensional Markov chain. Moreover, we derive the formulas for the blocking rate, the throughput, and the average latency of the SU packets. Afterwards, we provide numerical results to investigate the influence of the Adaptive Factor on different performance measures. We also give the individually optimal strategy and the socially optimal strategy from the standpoints of the SU packets. Finally, we provide a pricing mechanism to coordinate the two optimal strategies.
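
    A minimal sketch of the admission idea is given below: the system access probability is taken to be inversely proportional to the number of packets already present, scaled by an Adaptive Factor, and the resulting discrete-time chain is solved for its steady state. This is a simplified one-dimensional birth-death chain with illustrative parameters, not the paper's two-dimensional Markov chain with primary-user preemption.

```python
import numpy as np

# Illustrative parameters (not from the paper): arrival/service probabilities
# per slot, buffer size, and the Adaptive Factor that scales admission.
lam, mu = 0.6, 0.5          # SU packet arrival and service probability per slot
capacity = 20               # maximum number of SU packets in the system
adaptive_factor = 2.0       # larger factor -> more permissive admission

def access_probability(n):
    """System access probability, inversely proportional to packets present."""
    return min(1.0, adaptive_factor / (n + 1))

# Build the transition matrix of the resulting discrete-time birth-death chain.
P = np.zeros((capacity + 1, capacity + 1))
for n in range(capacity + 1):
    arrive = lam * access_probability(n) if n < capacity else 0.0
    serve = mu if n > 0 else 0.0
    up = arrive * (1 - serve)
    down = serve * (1 - arrive)
    P[n, min(n + 1, capacity)] += up
    P[n, max(n - 1, 0)] += down
    P[n, n] += 1.0 - up - down

# Steady-state distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

states = np.arange(capacity + 1)
blocking = sum(pi[n] * (1 - access_probability(n)) for n in states)
throughput = mu * (1 - pi[0])
mean_in_system = float(states @ pi)
latency = mean_in_system / (lam * (1 - blocking))      # Little's law, in slots

print(f"blocking rate ~ {blocking:.3f}, throughput ~ {throughput:.3f}, "
      f"mean latency ~ {latency:.2f} slots")
```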

  6. Adjoint optimization scheme for lower hybrid current rampup and profile control in Tokamak

    International Nuclear Information System (INIS)

    Litaudon, X.; Moreau, D.; Bizarro, J.P.; Hoang, G.T.; Kupfer, K.; Peysson, Y.; Shkarofsky, I.P.; Bonoli, P.

    1992-12-01

    The purpose of this work is to take into account and study the effect of the electric field profiles on the Lower Hybrid (LH) current drive efficiency during transient phases such as rampup. As a complement to the full ray-tracing / Fokker-Planck studies, and for the purpose of optimization studies, we developed a simplified 1-D model based on the adjoint Karney-Fisch numerical results. This approach allows us to estimate the LH power deposition profile which would be required for ramping the current at a prescribed rate, total current density profile (q-profile) and surface loop voltage. For rampup optimization studies, we can therefore scan the whole parameter space and eliminate a posteriori those scenarios which correspond to unrealistic deposition profiles. We thus obtain the time evolution of the LH power, minor radius of the plasma, volt-second consumption and total energy dissipated. Optimization can thus be performed with respect to any of those criteria. This scheme is illustrated by some numerical simulations performed with TORE-SUPRA and NET/ITER parameters. We conclude with a derivation of a simple and general scaling law for the flux consumption during the rampup phase.

  7. Revisiting Intel Xeon Phi optimization of Thompson cloud microphysics scheme in Weather Research and Forecasting (WRF) model

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2015-10-01

    The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. The Intel Many Integrated Core (MIC) architecture ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.

  8. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  9. Monte Carlo importance sampling optimization for system reliability applications

    International Nuclear Information System (INIS)

    Campioni, Luca; Vestrucci, Paolo

    2004-01-01

    This paper focuses on the reliability analysis of multicomponent systems by the importance sampling technique, and, in particular, it tackles the optimization aspect. A methodology based on the minimization of the variance at the component level is proposed for the class of systems consisting of independent components. The claim is that, by means of such a methodology, the optimal biasing could be achieved without resorting to the typical approach by trials
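
    The sketch below illustrates component-level biasing for a small series-parallel system of independent components: component states are sampled from inflated failure probabilities and the estimator is corrected by the likelihood ratio. The system structure, the probabilities, and the fixed inflation factor are illustrative assumptions; the paper derives the biasing from a variance-minimization argument rather than using a fixed factor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Component failure probabilities (independent components), illustrative values.
p = np.array([1e-3, 2e-3, 5e-4, 1e-3])

def system_fails(x):
    """Structure function: series of component 0 with the parallel group 1-3."""
    return (x[..., 0] == 1) | ((x[..., 1] == 1) & (x[..., 2] == 1) & (x[..., 3] == 1))

def crude_mc(n):
    x = (rng.random((n, 4)) < p).astype(int)
    y = system_fails(x).astype(float)
    return y.mean(), y.std(ddof=1) / np.sqrt(n)

def importance_sampling(n, q):
    """Sample failures with biased probabilities q, reweight by the likelihood ratio."""
    x = (rng.random((n, 4)) < q).astype(int)
    # Per-sample ratio: prod_i p_i^x (1-p_i)^(1-x) / q_i^x (1-q_i)^(1-x)
    w = np.prod(np.where(x == 1, p / q, (1 - p) / (1 - q)), axis=1)
    y = system_fails(x).astype(float) * w
    return y.mean(), y.std(ddof=1) / np.sqrt(n)

n = 100_000
q = np.minimum(50 * p, 0.5)          # simple fixed biasing of each component
est_mc, se_mc = crude_mc(n)
est_is, se_is = importance_sampling(n, q)
print(f"crude MC   : {est_mc:.2e} +/- {se_mc:.1e}")
print(f"importance : {est_is:.2e} +/- {se_is:.1e}")
```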

  10. Exploring synergistic benefits of Water-Food-Energy Nexus through multi-objective reservoir optimization schemes.

    Science.gov (United States)

    Uen, Tinn-Shuan; Chang, Fi-John; Zhou, Yanlai; Tsai, Wen-Ping

    2018-08-15

    This study proposed a holistic three-fold scheme that synergistically optimizes the benefits of the Water-Food-Energy (WFE) Nexus by integrating the short/long-term joint operation of a multi-objective reservoir with irrigation ponds in response to urbanization. The three-fold scheme was implemented step by step: (1) optimizing short-term (daily scale) reservoir operation for maximizing hydropower output and final reservoir storage during typhoon seasons; (2) simulating long-term (ten-day scale) water shortage rates in consideration of the availability of irrigation ponds for both agricultural and public sectors during non-typhoon seasons; and (3) promoting the synergistic benefits of the WFE Nexus in a year-round perspective by integrating the short-term optimization and long-term simulation of reservoir operations. The pivotal Shihmen Reservoir and 745 irrigation ponds located in Taoyuan City of Taiwan, together with the surrounding urban areas, formed the study case. The results indicated that the optimal short-term reservoir operation obtained from the non-dominated sorting genetic algorithm II (NSGA-II) could largely increase hydropower output but only slightly affected water supply. The simulation results of the reservoir coupled with irrigation ponds indicated that such joint operation could significantly reduce agricultural and public water shortage rates by 22.2% and 23.7% on average, respectively, as compared to those of reservoir operation excluding irrigation ponds. The results of year-round short/long-term joint operation showed that water shortage rates could be reduced by 10% at most, the food production rate could be increased by up to 47%, and the hydropower benefit could increase by up to 9.33 million USD per year, respectively, in a wet year. Consequently, the proposed methodology could be a viable approach to promoting the synergistic benefits of the WFE Nexus, and the results provided unique insights for stakeholders and policymakers to pursue
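
    The multi-objective engine used here, NSGA-II, rests on non-dominated sorting; a minimal sketch of extracting the first (Pareto) front from candidate operating policies is shown below, with synthetic hydropower and shortage values standing in for the real objectives.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic objective values for 50 candidate operating policies:
# column 0 = hydropower output (maximize), column 1 = water shortage rate (minimize).
policies = np.column_stack([rng.uniform(50, 100, 50), rng.uniform(0.05, 0.4, 50)])

def dominates(a, b):
    """a dominates b: no worse in both objectives, strictly better in at least one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    """Indices of non-dominated points (the first front of NSGA-II's sorting)."""
    front = []
    for i, p in enumerate(points):
        if not any(dominates(q, p) for j, q in enumerate(points) if j != i):
            front.append(i)
    return front

front = pareto_front(policies)
for i in sorted(front, key=lambda k: policies[k, 0]):
    print(f"policy {i:2d}: hydropower {policies[i, 0]:6.1f}, shortage {policies[i, 1]:.3f}")
```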

  11. Design and experimental realization of an optimal scheme for teleportation of an n-qubit quantum state

    Science.gov (United States)

    Sisodia, Mitali; Shukla, Abhishek; Thapliyal, Kishore; Pathak, Anirban

    2017-12-01

    An explicit scheme (quantum circuit) is designed for the teleportation of an n-qubit quantum state. It is established that the proposed scheme requires an optimal amount of quantum resources, whereas larger amount of quantum resources have been used in a large number of recently reported teleportation schemes for the quantum states which can be viewed as special cases of the general n-qubit state considered here. A trade-off between our knowledge about the quantum state to be teleported and the amount of quantum resources required for the same is observed. A proof-of-principle experimental realization of the proposed scheme (for a 2-qubit state) is also performed using 5-qubit superconductivity-based IBM quantum computer. The experimental results show that the state has been teleported with high fidelity. Relevance of the proposed teleportation scheme has also been discussed in the context of controlled, bidirectional, and bidirectional controlled state teleportation.
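
    The building block that the n-qubit scheme generalizes is standard single-qubit teleportation; the following small statevector simulation (plain numpy, not the IBM hardware used in the paper) checks that a Bell measurement followed by classically controlled Pauli corrections delivers the input state with unit fidelity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Single-qubit gates and a Kronecker-product helper.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Random state |psi> = a|0> + b|1> to teleport (qubit 0).
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
a, b = a / norm, b / norm

# Initial 3-qubit state: |psi> on qubit 0, Bell pair (|00>+|11>)/sqrt(2) on qubits 1, 2.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(np.array([a, b]), bell)

# CNOT with control qubit 0 and target qubit 1 (qubit 2 untouched), then H on qubit 0.
cnot01 = np.zeros((8, 8), dtype=complex)
for i in range(8):
    c, t, r = (i >> 2) & 1, (i >> 1) & 1, i & 1
    cnot01[(c << 2) | ((t ^ c) << 1) | r, i] = 1
state = kron(H, I2, I2) @ (cnot01 @ state)

# Measure qubits 0 and 1 (sample an outcome, collapse the state).
probs = np.abs(state) ** 2
outcome = rng.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
mask = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1 for i in range(8)])
state = np.where(mask, state, 0)
state = state / np.linalg.norm(state)

# Classically controlled corrections on qubit 2: X^m1 then Z^m0.
if m1:
    state = kron(I2, I2, X) @ state
if m0:
    state = kron(I2, I2, Z) @ state

# Extract qubit 2 and compare with the input state.
received = state.reshape(4, 2)[m0 * 2 + m1]
received = received / np.linalg.norm(received)
fidelity = abs(np.vdot(np.array([a, b]), received)) ** 2
print(f"measurement outcome (m0, m1) = ({m0}, {m1}), fidelity = {fidelity:.6f}")
```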

  12. An Extended Multilocus Sequence Typing (MLST Scheme for Rapid Direct Typing of Leptospira from Clinical Samples.

    Directory of Open Access Journals (Sweden)

    Sabrina Weiss

    2016-09-01

    Full Text Available Rapid typing of Leptospira is currently impaired by requiring time consuming culture of leptospires. The objective of this study was to develop an assay that provides multilocus sequence typing (MLST data direct from patient specimens while minimising costs for subsequent sequencing.An existing PCR based MLST scheme was modified by designing nested primers including anchors for facilitated subsequent sequencing. The assay was applied to various specimen types from patients diagnosed with leptospirosis between 2014 and 2015 in the United Kingdom (UK and the Lao Peoples Democratic Republic (Lao PDR. Of 44 clinical samples (23 serum, 6 whole blood, 3 buffy coat, 12 urine PCR positive for pathogenic Leptospira spp. at least one allele was amplified in 22 samples (50% and used for phylogenetic inference. Full allelic profiles were obtained from ten specimens, representing all sample types (23%. No nonspecific amplicons were observed in any of the samples. Of twelve PCR positive urine specimens three gave full allelic profiles (25% and two a partial profile. Phylogenetic analysis allowed for species assignment. The predominant species detected was L. interrogans (10/14 and 7/8 from UK and Lao PDR, respectively. All other species were detected in samples from only one country (Lao PDR: L. borgpetersenii [1/8]; UK: L. kirschneri [1/14], L. santarosai [1/14], L. weilii [2/14].Typing information of pathogenic Leptospira spp. was obtained directly from a variety of clinical samples using a modified MLST assay. This assay negates the need for time-consuming culture of Leptospira prior to typing and will be of use both in surveillance, as single alleles enable species determination, and outbreaks for the rapid identification of clusters.

  13. Optimal Resource Allocation for NOMA-TDMA Scheme with α-Fairness in Industrial Internet of Things.

    Science.gov (United States)

    Sun, Yanjing; Guo, Yiyu; Li, Song; Wu, Dapeng; Wang, Bin

    2018-05-15

    In this paper, a joint non-orthogonal multiple access and time division multiple access (NOMA-TDMA) scheme is proposed in Industrial Internet of Things (IIoT), which allows multiple sensors to transmit in the same time-frequency resource block using NOMA. The user scheduling, time slot allocation, and power control are jointly optimized in order to maximize the system α-fair utility under a transmit power constraint and a minimum rate constraint. The optimization problem is nonconvex because of the fractional objective function and the nonconvex constraints. To deal with the original problem, we firstly convert the objective function in the optimization problem into a difference of two convex functions (D.C.) form, and then propose a NOMA-TDMA-DC algorithm to exploit the global optimum. Numerical results show that the NOMA-TDMA scheme significantly outperforms the traditional orthogonal multiple access scheme in terms of both spectral efficiency and user fairness.
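
    The α-fair utility being maximized has the standard form U_α(x) = x^(1−α)/(1−α) for α ≠ 1 and U_1(x) = log x; the short helper below (with made-up rate allocations, not values from the paper) shows how increasing α shifts the preference from raw throughput toward fairness.

```python
import numpy as np

def alpha_fair_utility(rates, alpha):
    """Sum of the standard alpha-fair utility over user rates (rates > 0)."""
    rates = np.asarray(rates, dtype=float)
    if np.isclose(alpha, 1.0):
        return float(np.sum(np.log(rates)))           # proportional fairness
    return float(np.sum(rates ** (1.0 - alpha) / (1.0 - alpha)))

# Two illustrative rate allocations (Mbit/s) for three sensors.
throughput_oriented = [9.0, 0.6, 0.4]      # high sum rate, unfair
fair_allocation = [3.5, 3.0, 3.0]          # lower sum rate, more even

for alpha in (0.0, 1.0, 2.0):
    u1 = alpha_fair_utility(throughput_oriented, alpha)
    u2 = alpha_fair_utility(fair_allocation, alpha)
    best = "throughput-oriented" if u1 > u2 else "fair"
    print(f"alpha={alpha}: U(throughput)={u1:7.3f}  U(fair)={u2:7.3f}  -> prefers {best}")
```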

  14. Optimization of the two-sample rank Neyman-Pearson detector

    Science.gov (United States)

    Akimov, P. S.; Barashkov, V. M.

    1984-10-01

    The development of optimal algorithms concerned with rank considerations in the case of finite sample sizes involves considerable mathematical difficulties. The present investigation provides results related to the design and analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ..., xn in the experimental communications channel. The rank of an observation is computed on the basis of relations between x and the variable y, which represents the interference. Attention is given to conditions in the absence of a signal, the probability of detection of an arriving signal, details regarding the use of the Neyman-Pearson criterion, the scheme of an optimal rank, multichannel, incoherent detector, and an analysis of the detector.
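
    A sketch of the two-sample rank detector idea is given below: each observation is ranked against a reference noise-only sample, the rank sum is the test statistic, and the Neyman-Pearson threshold is taken from the statistic's distribution under noise alone for a chosen false-alarm probability. The Gaussian noise model, sample sizes, and signal level are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(4)

def rank_statistic(x, y):
    """Sum over observations x of their ranks within the reference noise sample y."""
    y_sorted = np.sort(y)
    return int(np.searchsorted(y_sorted, x, side="right").sum())

n, m = 16, 64          # observations and reference noise readings (illustrative)
alpha = 0.01           # desired false-alarm probability

# Threshold from Monte Carlo of the statistic under H0 (noise only).
null_stats = np.array([
    rank_statistic(rng.normal(size=n), rng.normal(size=m)) for _ in range(20_000)
])
threshold = np.quantile(null_stats, 1 - alpha)

# Empirical detection probability for a weak constant signal added to the noise.
signal = 0.7
hits = sum(
    rank_statistic(signal + rng.normal(size=n), rng.normal(size=m)) > threshold
    for _ in range(5_000)
)
print(f"threshold = {threshold:.0f}, empirical Pd at Pfa={alpha}: {hits / 5_000:.3f}")
```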

  15. Geminal embedding scheme for optimal atomic basis set construction in correlated calculations

    Energy Technology Data Exchange (ETDEWEB)

    Sorella, S., E-mail: sorella@sissa.it [International School for Advanced Studies (SISSA), Via Beirut 2-4, 34014 Trieste, Italy and INFM Democritos National Simulation Center, Trieste (Italy); Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr [Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France); Mazzola, G., E-mail: gmazzola@phys.ethz.ch [Theoretische Physik, ETH Zurich, 8093 Zurich (Switzerland); Casula, M., E-mail: michele.casula@impmc.upmc.fr [CNRS and Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France)

    2015-12-28

    We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculation of bulk materials, namely, containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.

  16. A Spectrum Handoff Scheme for Optimal Network Selection in NEMO Based Cognitive Radio Vehicular Networks

    Directory of Open Access Journals (Sweden)

    Krishan Kumar

    2017-01-01

    Full Text Available When a mobile network changes its point of attachment in Cognitive Radio (CR) vehicular networks, the Mobile Router (MR) requires spectrum handoff. Network Mobility (NEMO) in CR vehicular networks is concerned with the management of this movement. In future NEMO-based CR vehicular network deployments, multiple radio access networks may coexist in the overlapping areas, having different characteristics in terms of multiple attributes. The CR vehicular node may have the capability to make calls for two or more types of nonsafety services, such as voice, video, and best effort, simultaneously. Hence, it becomes difficult for the MR to select the optimal network for the spectrum handoff. This can be done by performing spectrum handoff using Multiple Attributes Decision Making (MADM) methods, which is the objective of the paper. MADM methods such as grey relational analysis and cost-based methods are used. The application of MADM methods provides a wider and optimum choice among the available networks with quality of service. Numerical results reveal that the proposed scheme is effective for the spectrum handoff decision for optimal network selection with reduced complexity in NEMO-based CR vehicular networks.
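
    Of the MADM methods mentioned, grey relational analysis is easy to sketch: attributes are normalized so that 1 is always best, grey relational coefficients are computed against the ideal sequence, and a weighted mean gives each candidate network's grade. The candidate networks, attribute values, and weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Candidate networks x attributes: [bandwidth Mbps, delay ms, cost, reliability]
# (illustrative values).
networks = ["LTE", "WiMAX", "DSRC"]
data = np.array([
    [40.0, 60.0, 0.8, 0.95],
    [30.0, 80.0, 0.5, 0.90],
    [10.0, 20.0, 0.2, 0.85],
])
benefit = np.array([True, False, False, True])   # higher-is-better per attribute

# Normalize to [0, 1] so that 1 is always the best value of each attribute.
lo, hi = data.min(axis=0), data.max(axis=0)
norm = np.where(benefit, (data - lo) / (hi - lo), (hi - data) / (hi - lo))

# Grey relational coefficients against the ideal sequence (all ones).
rho = 0.5                                        # distinguishing coefficient
delta = np.abs(1.0 - norm)
coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

# Grey relational grade: weighted mean coefficient per candidate network.
weights = np.array([0.35, 0.30, 0.15, 0.20])
grades = coeff @ weights
for name, g in sorted(zip(networks, grades), key=lambda t: -t[1]):
    print(f"{name:6s} grade = {g:.3f}")
```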

  17. Accelerated Simplified Swarm Optimization with Exploitation Search Scheme for Data Clustering.

    Directory of Open Access Journals (Sweden)

    Wei-Chang Yeh

    Full Text Available Data clustering is commonly employed in many disciplines. The aim of clustering is to partition a set of data into clusters, in which objects within the same cluster are similar to each other and dissimilar to objects that belong to different clusters. Over the past decade, the evolutionary algorithm has been commonly used to solve clustering problems. This study presents a novel algorithm based on simplified swarm optimization, an emerging population-based stochastic optimization approach with the advantages of simplicity, efficiency, and flexibility. This approach combines variable vibrating search (VVS) and rapid centralized strategy (RCS) in dealing with the clustering problem. VVS is an exploitation search scheme that can refine the quality of solutions by searching the extreme points near the global best position. RCS is developed to accelerate the convergence rate of the algorithm by using the arithmetic average. To empirically evaluate the performance of the proposed algorithm, experiments are examined using 12 benchmark datasets, and corresponding results are compared with recent works. Results of statistical analysis indicate that the proposed algorithm is competitive in terms of the quality of solutions.

  18. An intelligent hybrid scheme for optimizing parking space: A Tabu metaphor and rough set based approach

    Directory of Open Access Journals (Sweden)

    Soumya Banerjee

    2011-03-01

    Full Text Available Congested roads, high traffic, and parking problems are major concerns for any modern city planning. Congestion of on-street spaces in official neighborhoods may give rise to inappropriate parking areas in office and shopping mall complexes during the peak time of official transactions. This paper proposes an intelligent and optimized scheme to solve the parking space problem for a small city (e.g., Mauritius) using a reactive search technique (named Tabu Search) assisted by rough set. Rough set is used for the extraction of uncertain rules that exist in the databases of parking situations. The inclusion of rough set theory depicts the accuracy and roughness, which are used to characterize uncertainty of the parking lot. Approximation accuracy is employed to depict the accuracy of a rough classification [1] according to different dynamic parking scenarios. As such, the proposed hybrid metaphor, comprising Tabu Search and rough set, could provide substantial research directions for other similar hard optimization problems.
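
    The rough-set side of the scheme rests on lower and upper approximations; the toy decision table below (made-up parking attributes, not the paper's data) shows how the approximation accuracy of a decision class is computed from the indiscernibility classes.

```python
from collections import defaultdict

# Toy decision table (illustrative): (hour_band, occupancy_level) -> decision
table = [
    (("peak", "high"), "reject"),
    (("peak", "high"), "park"),      # inconsistent with the row above
    (("peak", "low"), "park"),
    (("offpeak", "high"), "park"),
    (("offpeak", "low"), "park"),
    (("offpeak", "low"), "park"),
]

# Indiscernibility classes: objects sharing the same condition attributes.
classes = defaultdict(set)
for i, (cond, _) in enumerate(table):
    classes[cond].add(i)

target = {i for i, (_, dec) in enumerate(table) if dec == "park"}

# Lower approximation: classes fully inside the target set.
lower = set().union(*(c for c in classes.values() if c <= target))
# Upper approximation: classes that intersect the target set.
upper = set().union(*(c for c in classes.values() if c & target))

accuracy = len(lower) / len(upper)          # approximation accuracy of "park"
print("lower:", sorted(lower))
print("upper:", sorted(upper))
print(f"approximation accuracy = {accuracy:.2f}")
```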

  19. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows one to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
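
    Although spsann itself is an R package, the spatial simulated annealing idea is easy to illustrate in a few lines of Python: perturb one sample point at a time, evaluate the MSSD criterion over a prediction grid, and accept worse configurations with a probability that decays under a cooling schedule. The grid size, cooling rate, and jitter below are illustrative choices, not spsann defaults.

```python
import numpy as np

rng = np.random.default_rng(5)

# Prediction grid over a unit square and an initial random sample pattern.
gx, gy = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])
samples = rng.random((15, 2))

def mssd(pts):
    """Mean squared shortest distance from every grid node to its nearest sample."""
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

current = mssd(samples)
temperature, cooling, max_jitter = 0.01, 0.999, 0.2
for it in range(5_000):
    candidate = samples.copy()
    i = rng.integers(len(samples))
    # Perturb one point and clip it to the study area.
    candidate[i] = np.clip(candidate[i] + rng.normal(0, max_jitter, 2), 0, 1)
    value = mssd(candidate)
    # Metropolis acceptance: always take improvements, sometimes take worsenings.
    if value < current or rng.random() < np.exp((current - value) / temperature):
        samples, current = candidate, value
    temperature *= cooling                     # exponential cooling schedule

print(f"final MSSD = {current:.5f} (lower means more even spatial coverage)")
```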

  20. Subdivision, Sampling, and Initialization Strategies for Simplical Branch and Bound in Global Optimization

    DEFF Research Database (Denmark)

    Clausen, Jens; Zilinskas, A.

    2002-01-01

    We consider the problem of optimizing a Lipschitzian function. The branch and bound technique is a well-known solution method, and the key components for this are the subdivision scheme, the bound calculation scheme, and the initialization. For Lipschitzian optimization, the bound calculations are...

  1. A Latency and Coverage Optimized Data Collection Scheme for Smart Cities Based on Vehicular Ad-hoc Networks.

    Science.gov (United States)

    Xu, Yixuan; Chen, Xi; Liu, Anfeng; Hu, Chunhua

    2017-04-18

    Using mobile vehicles as "data mules" to collect data generated by a huge number of sensing devices that are widely spread across smart city is considered to be an economical and effective way of obtaining data about smart cities. However, currently most research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data on smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in LCODC scheme consists of not only vehicle to device transmission (V2D), but also vehicle to vehicle transmission (V2V). Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to the easy implementation of our scheme. In detail, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale and real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that with very limited costs, the LCODC scheme enables the average latency to decrease from several hours to around 12 min with respect to schemes where V2V transmission is disabled while the coverage rate is able to reach over 30%.

  2. A Latency and Coverage Optimized Data Collection Scheme for Smart Cities Based on Vehicular Ad-hoc Networks

    Directory of Open Access Journals (Sweden)

    Yixuan Xu

    2017-04-01

    Full Text Available Using mobile vehicles as “data mules” to collect data generated by a huge number of sensing devices that are widely spread across smart city is considered to be an economical and effective way of obtaining data about smart cities. However, currently most research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data on smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in LCODC scheme consists of not only vehicle to device transmission (V2D), but also vehicle to vehicle transmission (V2V). Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to the easy implementation of our scheme. In detail, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale and real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that with very limited costs, the LCODC scheme enables the average latency to decrease from several hours to around 12 min with respect to schemes where V2V transmission is disabled while the coverage rate is able to reach over 30%.

  3. A Latency and Coverage Optimized Data Collection Scheme for Smart Cities Based on Vehicular Ad-Hoc Networks

    Science.gov (United States)

    Xu, Yixuan; Chen, Xi; Liu, Anfeng; Hu, Chunhua

    2017-01-01

    Using mobile vehicles as “data mules” to collect data generated by a huge number of sensing devices that are widely spread across smart city is considered to be an economical and effective way of obtaining data about smart cities. However, currently most research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data on smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in LCODC scheme consists of not only vehicle to device transmission (V2D), but also vehicle to vehicle transmission (V2V). Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to the easy implementation of our scheme. In detail, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale and real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that with very limited costs, the LCODC scheme enables the average latency to decrease from several hours to around 12 min with respect to schemes where V2V transmission is disabled while the coverage rate is able to reach over 30%. PMID:28420218

  4. An Optimally Stable and Accurate Second-Order SSP Runge-Kutta IMEX Scheme for Atmospheric Applications

    Science.gov (United States)

    Rokhzadi, Arman; Mohammadian, Abdolmajid; Charron, Martin

    2018-01-01

    The objective of this paper is to develop an optimized implicit-explicit (IMEX) Runge-Kutta scheme for atmospheric applications focusing on stability and accuracy. Following the common terminology, the proposed method is called IMEX-SSP2(2,3,2), as it has second-order accuracy and is composed of diagonally implicit two-stage and explicit three-stage parts. This scheme enjoys the Strong Stability Preserving (SSP) property for both parts. This new scheme is applied to nonhydrostatic compressible Boussinesq equations in two different arrangements, including (i) semi-implicit and (ii) Horizontally Explicit-Vertically Implicit (HEVI) forms. The new scheme preserves the SSP property for larger regions of absolute monotonicity compared to the well-studied scheme in the same class. In addition, numerical tests confirm that the IMEX-SSP2(2,3,2) improves the maximum stable time step as well as the level of accuracy and computational cost compared to other schemes in the same class. It is demonstrated that the A-stability property as well as satisfying "second-stage order" and stiffly accurate conditions lead the proposed scheme to better performance than existing schemes for the applications examined herein.

  5. Optimization of protein samples for NMR using thermal shift assays

    International Nuclear Information System (INIS)

    Kozak, Sandra; Lercher, Lukas; Karanth, Megha N.; Meijers, Rob; Carlomagno, Teresa; Boivin, Stephane

    2016-01-01

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  6. Optimization of protein samples for NMR using thermal shift assays

    Energy Technology Data Exchange (ETDEWEB)

    Kozak, Sandra [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Lercher, Lukas; Karanth, Megha N. [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Meijers, Rob [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Carlomagno, Teresa, E-mail: teresa.carlomagno@oci.uni-hannover.de [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Boivin, Stephane, E-mail: sboivin77@hotmail.com, E-mail: s.boivin@embl-hamburg.de [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany)

    2016-04-15

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  7. On Optimal, Minimal BRDF Sampling for Reflectance Acquisition

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Jensen, Henrik Wann; Ramamoorthi, Ravi

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) is critical for rendering, and accurate material representation requires data-driven reflectance models. However, isotropic BRDFs are 3D functions, and measuring the reflectance of a flat sample can require a million incident and outgoing direction pairs, making the use of measured BRDFs impractical. In this paper, we address the problem of reconstructing a measured BRDF from a limited number of samples. We present a novel mapping of the BRDF space, allowing for extraction of descriptive principal components from measured databases, such as the MERL BRDF database. We optimize for the best sampling directions, and explicitly provide the optimal set of incident and outgoing directions in the Rusinkiewicz parameterization for n = {1, 2, 5, 10, 20} samples. Based on the principal components, we describe a method for accurately reconstructing BRDF...
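
    A hedged sketch of the core reconstruction idea: learn principal components from a database of fully measured reflectance curves, then recover a new material from a handful of measurements by solving a small least-squares problem in the component weights. Synthetic one-dimensional curves stand in for real BRDF data here, and the sample positions are chosen naively rather than by the paper's optimization.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "database": each row is a fully measured reflectance curve built
# from a few smooth latent factors plus noise (stand-in for MERL-style data).
n_dirs, n_materials, n_latent = 200, 60, 4
t = np.linspace(0, 1, n_dirs)
factors = np.stack([np.exp(-((t - c) ** 2) / 0.02) for c in (0.1, 0.4, 0.7, 0.95)])
database = rng.random((n_materials, n_latent)) @ factors + 0.01 * rng.normal(
    size=(n_materials, n_dirs))

# Principal components of the database (mean-centred SVD).
mean = database.mean(axis=0)
_, _, vt = np.linalg.svd(database - mean, full_matrices=False)
k = 5
components = vt[:k]                      # top-k principal directions

# A "new" material, measured only at a few sparse directions.
truth = rng.random(n_latent) @ factors
n_samples = 10
observed = np.linspace(0, n_dirs - 1, n_samples).astype(int)   # naive choice

# Solve for the component weights from the few observations, then reconstruct.
A = components[:, observed].T                       # (n_samples, k)
b = truth[observed] - mean[observed]
weights, *_ = np.linalg.lstsq(A, b, rcond=None)
reconstruction = mean + weights @ components

rmse = np.sqrt(np.mean((reconstruction - truth) ** 2))
print(f"reconstruction RMSE from {n_samples} samples: {rmse:.4f}")
```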

  8. A Novel Scheme for Optimal Control of a Nonlinear Delay Differential Equations Model to Determine Effective and Optimal Administrating Chemotherapy Agents in Breast Cancer.

    Science.gov (United States)

    Ramezanpour, H R; Setayeshi, S; Akbari, M E

    2011-01-01

    Determining the optimal and effective scheme for administering the chemotherapy agents in breast cancer is the main goal of this scientific research. The most important issue here is the amount of drug or radiation administered in chemotherapy and radiotherapy for increasing the patient's survival. This is because in these cases, the therapy not only kills the tumor cells, but also kills some of the healthy tissues and causes serious damage. In this paper we investigate the effect of optimal drug scheduling for a breast cancer model which consists of nonlinear ordinary time-delay differential equations. In this paper, a mathematical model of breast cancer tumors is discussed and then optimal control theory is applied to find out the optimal drug adjustment as an input control of the system. Finally we use the Sensitivity Approach (SA) to solve the optimal control problem. The goal of this paper is to determine the optimal and effective scheme for administering the chemotherapy agent, so that the tumor is eradicated, while the immune system remains above a suitable level. Simulation results confirm the effectiveness of our proposed procedure. In this paper a new scheme is proposed to design a therapy protocol for chemotherapy in breast cancer. In contrast to traditional pulse drug delivery, a continuous process is offered and optimized, according to the optimal control theory for time-delay systems.

  9. Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP

    Science.gov (United States)

    Roshani, E.; Berg, A. A.; Lindsay, J.

    2013-12-01

    The Soil Moisture Active Passive satellite (SMAP) is scheduled for launch in October 2014. Global efforts are underway for the establishment of soil moisture monitoring networks for both the pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6 week period for soil moisture and several other parameters simultaneous to remotely sensed images of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement to provide the most efficient representation of the studied area. In this analysis a method for optimization of sampling locations is presented which combines the state-of-the-art multi-objective optimization engine (non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from the kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, which is a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: A) sampling sites should be accessible to the crew on the ground, B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value, and finally C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum constraint. The first constraint is implemented into the proposed model to keep the practicality of the approach. The second and third constraints are considered to guarantee that the collected samples from each soil texture categories

  10. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  11. Performance of laboratories analysing welding fume on filter samples: results from the WASP proficiency testing scheme.

    Science.gov (United States)

    Stacey, Peter; Butler, Owen

    2008-06-01

    This paper emphasizes the need for occupational hygiene professionals to require evidence of the quality of welding fume data from analytical laboratories. The measurement of metals in welding fume using atomic spectrometric techniques is a complex analysis often requiring specialist digestion procedures. The results from a trial programme testing the proficiency of laboratories in the Workplace Analysis Scheme for Proficiency (WASP) to measure potentially harmful metals in several different types of welding fume showed that most laboratories underestimated the mass of analyte on the filters. The average recovery was 70-80% of the target value and >20% of reported recoveries for some of the more difficult welding fume matrices were welding fume trial filter samples. Consistent rather than erratic error predominated, suggesting that the main analytical factor contributing to the differences between the target values and results was the effectiveness of the sample preparation procedures used by participating laboratories. It is concluded that, with practice and regular participation in WASP, performance can improve over time.

  12. Relevance of sampling schemes in light of Ruelle's linear response theory

    International Nuclear Information System (INIS)

    Lucarini, Valerio; Wouters, Jeroen; Faranda, Davide; Kuna, Tobias

    2012-01-01

    We reconsider the theory of the linear response of non-equilibrium steady states to perturbations. We first show that using a general functional decomposition for space–time dependent forcings, we can define elementary susceptibilities that allow us to construct the linear response of the system to general perturbations. Starting from the definition of SRB measure, we then study the consequence of taking different sampling schemes for analysing the response of the system. We show that only a specific choice of the time horizon for evaluating the response of the system to a general time-dependent perturbation allows us to obtain the formula first presented by Ruelle. We also discuss the special case of periodic perturbations, showing that when they are taken into consideration the sampling can be fine-tuned to make the definition of the correct time horizon immaterial. Finally, we discuss the implications of our results in terms of strategies for analysing the outputs of numerical experiments by providing a critical review of a formula proposed by Reick

  13. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The calculated biomass estimates can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300-m transects, with clip-harvest plots spaced every 50 m, and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6-m sub-transects running perpendicular to the 300-m transect. Clip harvest plots were co-located 4 m from corresponding LAI transects, and had dimensions of 0.1 m by 2 m. We conducted regression analyses
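
    As a toy illustration of the LAI-as-biomass-proxy step, an ordinary least-squares fit of clip-harvest biomass on co-located LAI might look like the following; the numbers are synthetic and the linear form is an assumption, not the coefficients or model actually fitted in the NEON analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic co-located measurements: LAI from a canopy analyzer,
# biomass (g/m^2) from clip-harvest plots.
lai = rng.uniform(0.2, 2.5, size=40)
biomass = 120.0 * lai + rng.normal(0, 15, size=40)

# Ordinary least squares fit: biomass ~ a * LAI + b.
a, b = np.polyfit(lai, biomass, deg=1)
pred = a * lai + b
r2 = 1 - np.sum((biomass - pred) ** 2) / np.sum((biomass - biomass.mean()) ** 2)

print(f"biomass ~ {a:.1f} * LAI + {b:.1f}  (R^2 = {r2:.2f})")
# The fitted relation can then be used to predict biomass from LAI-only transects,
# informing how many clip-harvest plots are actually needed for a target uncertainty.
```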

  14. Optimized preparation of urine samples for two-dimensional electrophoresis and initial application to patient samples

    DEFF Research Database (Denmark)

    Lafitte, Daniel; Dussol, Bertrand; Andersen, Søren

    2002-01-01

    OBJECTIVE: We optimized the preparation of urinary samples to obtain a comprehensive map of the urinary proteins of healthy subjects and then compared this map with those obtained with patient samples to show that the pattern was specific to their kidney disease. DESIGN AND METHODS: The urinary...

  15. Optimal Performance of a Nonlinear Gantry Crane System via Priority-based Fitness Scheme in Binary PSO Algorithm

    International Nuclear Information System (INIS)

    Jaafar, Hazriq Izzuan; Ali, Nursabillilah Mohd; Selamat, Nur Asmiza; Kassim, Anuar Mohamed; Mohamed, Z; Abidin, Amar Faiz Zainal; Jamian, J J

    2013-01-01

    This paper presents the development of optimal PID and PD controllers for controlling a nonlinear gantry crane system. A Binary Particle Swarm Optimization (BPSO) algorithm that uses a priority-based fitness scheme is adopted to obtain the five optimal controller gains. The optimal gains are tested on a control structure that combines PID and PD controllers to examine system responses, including trolley displacement and payload oscillation. The dynamic model of the gantry crane system is derived using the Lagrange equation. Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE) and overshoot (OS). The proposed technique demonstrates that the implementation of the priority-based fitness scheme in BPSO is effective and able to move the trolley as fast as possible to the various desired positions.
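
    A bare-bones sketch of binary PSO of the kind described is given below: each particle's bit string encodes the five controller gains, and a sigmoid of the velocity gives the probability of each bit being set. The bit-to-gain encoding and the placeholder cost function are assumptions for illustration; the priority-based fitness scheme and the gantry-crane dynamics from the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles, n_bits, iters = 20, 25, 60   # 5 gains x 5 bits each (assumed encoding)

def decode(bits):
    """Map each 5-bit group to a controller gain in [0, 10]."""
    ints = bits.reshape(5, 5) @ (2 ** np.arange(5))
    return 10.0 * ints / 31.0

def fitness(bits):
    """Placeholder cost: distance of decoded gains from an arbitrary target set
    (the real cost would come from simulating the gantry-crane response)."""
    target = np.array([2.0, 7.5, 1.0, 4.0, 0.5])
    return float(np.sum((decode(bits) - target) ** 2))

pos = rng.integers(0, 2, size=(n_particles, n_bits))
vel = rng.normal(0, 1, size=(n_particles, n_bits))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_bits))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))                 # sigmoid -> bit probabilities
    pos = (rng.random((n_particles, n_bits)) < prob).astype(int)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best gains:", np.round(decode(gbest), 2), "| cost:", round(pbest_f.min(), 3))
```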

  16. Ad-Hoc vs. Standardized and Optimized Arthropod Diversity Sampling

    Directory of Open Access Journals (Sweden)

    Pedro Cardoso

    2009-09-01

    Full Text Available The use of standardized and optimized protocols has recently been advocated for different arthropod taxa, instead of ad-hoc sampling or sampling with protocols defined on a case-by-case basis. We present a comparison of both sampling approaches applied to spiders in a natural area of Portugal. Tests were made of their efficiency, over-collection of common species, singleton proportions, species abundance distributions, average specimen size, average taxonomic distinctness and behavior of richness estimators. The standardized protocol revealed three main advantages: (1) higher efficiency; (2) more reliable estimations of true richness; and (3) meaningful comparisons between undersampled areas.

  17. The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes

    Science.gov (United States)

    Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark

    2000-01-01

    Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log10(L[OII]/W) ≳ 35 [or radio luminosity log10(L151/(W/Hz·sr)) ≳ 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle θtrans ≈ 53 deg. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in θtrans and/or a gradual increase in the fraction of lightly-reddened lines of sight with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low-luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.

  18. Identification of isomers and control of ionization and dissociation processes using dual-mass-spectrometer scheme and genetic algorithm optimization

    International Nuclear Information System (INIS)

    Chen Zhou; Qiu-Nan Tong; Zhang Cong-Cong; Hu Zhan

    2015-01-01

    Identification of acetone and its two isomers, and the control of their ionization and dissociation processes, are performed using a dual-mass-spectrometer scheme. The scheme employs two sets of time-of-flight mass spectrometers to simultaneously acquire the mass spectra of two different molecules under the irradiation of identically shaped femtosecond laser pulses. The optimal laser pulses are found using a closed-loop learning method based on a genetic algorithm. Compared with the mass spectra of the two isomers obtained with the transform-limited pulse, those obtained under the irradiation of the optimal laser pulse show large differences, and the various reaction pathways of the two molecules are selectively controlled. The experimental results demonstrate that the scheme is quite effective and useful in studies of two molecules having common mass peaks, which makes a traditional single mass spectrometer unfeasible. (paper)

  19. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  20. Development of an Optimal Power Control Scheme for Wave-Offshore Hybrid Generation Systems

    Directory of Open Access Journals (Sweden)

    Seungmin Jung

    2015-08-01

    Full Text Available Integration technology for various distribution systems aimed at improving renewable energy utilization has been receiving attention in the power system industry. The wave-offshore hybrid generation system (HGS), which has a capacity of over 10 MW, was recently developed by adopting several voltage source converters (VSCs), while a control method for the adopted power conversion systems has not yet been configured in spite of the unique system characteristics of the designated structure. This paper deals with a reactive power assignment method for the developed hybrid system to improve the power transfer efficiency of the entire system. By developing and applying an optimization algorithm that utilizes the real-time active power profiles of each generator, the feasibility of reducing power transmission losses was confirmed. To assess the practical effect of the proposed control scheme, real system information from the demonstration process was applied in case studies, and the loss improvement rate was evaluated.

  1. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify the competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments show that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.

  2. A staggered-grid finite-difference scheme optimized in the time–space domain for modeling scalar-wave propagation in geophysical problems

    International Nuclear Information System (INIS)

    Tan, Sirui; Huang, Lianjie

    2014-01-01

    For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial but not temporal dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than optimized schemes using the standard stencil for similar modeling accuracy in a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent for similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.

  3. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
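
    The core CGS idea — screen candidate designs with a cheap classifier and spend expensive evaluations only on those predicted to be promising — can be sketched as follows. A Gaussian naive-Bayes classifier and a toy objective stand in for the report's Bayesian network classifier and microgrid model, and the threshold and sampling ranges are arbitrary assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)

def expensive_objective(x):
    """Stand-in for a costly simulation over discrete design variables."""
    return float(np.sum((x - 3) ** 2))

dim = 8

# Seed the classifier with a small set of fully evaluated designs.
X = rng.integers(0, 7, size=(40, dim))
costs = np.array([expensive_objective(x) for x in X])
threshold = np.median(costs)                 # "promising" = better than the median so far
clf = GaussianNB().fit(X, costs < threshold)

evaluations, best = len(X), costs.min()
for _ in range(500):
    cand = rng.integers(0, 7, size=(1, dim))
    # Skip candidates the classifier deems unlikely to beat the threshold.
    if clf.classes_.size == 2 and clf.predict_proba(cand)[0, 1] < 0.5:
        continue
    cost = expensive_objective(cand[0])
    evaluations += 1
    best = min(best, cost)
    X = np.vstack([X, cand])
    costs = np.append(costs, cost)
    clf.fit(X, costs < threshold)            # refit as labelled designs accumulate

print(f"best cost {best:.1f} after {evaluations} expensive evaluations")
```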

  4. Simultaneous beam sampling and aperture shape optimization for SPORT

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu

    2015-01-01

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  5. Simultaneous beam sampling and aperture shape optimization for SPORT

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Ye, Yinyu [Department of Management Science and Engineering, Stanford University, Stanford, California 94305 (United States)

    2015-02-15

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  6. Simultaneous beam sampling and aperture shape optimization for SPORT.

    Science.gov (United States)

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck and a prostate case
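
    The column-generation element of the workflow — repeatedly add the station that most improves the plan and stop once the marginal gain saturates — can be caricatured with a greedy loop over a toy benefit model; the diminishing-returns model below is a made-up stand-in for the actual dose optimization.

```python
import numpy as np

rng = np.random.default_rng(5)
n_candidates = 40
# Toy benefit model: each candidate station cuts the plan cost by a diminishing amount.
base_gain = rng.uniform(0.5, 5.0, size=n_candidates)

selected, cost, tol = [], 100.0, 0.2
while True:
    remaining = [i for i in range(n_candidates) if i not in selected]
    if not remaining:
        break
    # "Pricing" step: pick the station with the largest marginal benefit.
    gains = {i: base_gain[i] / (1 + len(selected)) for i in remaining}
    best = max(gains, key=gains.get)
    if gains[best] < tol:
        break                                # improvement has saturated
    selected.append(best)
    cost -= gains[best]
    # (A local refinement of apertures/angles would run here in the real algorithm.)

print(f"{len(selected)} stations selected, final plan cost {cost:.1f}")
```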

  7. Optimized Explicit Runge--Kutta Schemes for the Spectral Difference Method Applied to Wave Propagation Problems

    KAUST Repository

    Parsani, Matteo

    2013-04-10

    Explicit Runge--Kutta schemes with large stable step sizes are developed for integration of high-order spectral difference spatial discretizations on quadrilateral grids. The new schemes permit an effective time step that is substantially larger than the maximum admissible time step of standard explicit Runge--Kutta schemes available in the literature. Furthermore, they have a small principal error norm and admit a low-storage implementation. The advantages of the new schemes are demonstrated through application to the Euler equations and the linearized Euler equations.

  8. Optimized Explicit Runge--Kutta Schemes for the Spectral Difference Method Applied to Wave Propagation Problems

    KAUST Repository

    Parsani, Matteo; Ketcheson, David I.; Deconinck, W.

    2013-01-01

    Explicit Runge--Kutta schemes with large stable step sizes are developed for integration of high-order spectral difference spatial discretizations on quadrilateral grids. The new schemes permit an effective time step that is substantially larger than the maximum admissible time step of standard explicit Runge--Kutta schemes available in the literature. Furthermore, they have a small principal error norm and admit a low-storage implementation. The advantages of the new schemes are demonstrated through application to the Euler equations and the linearized Euler equations.

  9. Continuous quality control of the blood sampling procedure using a structured observation scheme

    DEFF Research Database (Denmark)

    Seemann, Tine Lindberg; Nybo, Mads

    2016-01-01

    INTRODUCTION: An observational study was conducted using a structured observation scheme to assess compliance with the local phlebotomy guideline, to identify necessary focus items, and to investigate whether adherence to the phlebotomy guideline improved. MATERIALS AND METHODS: The questionnaire...

  10. A note on a fatal error of optimized LFC private information retrieval scheme and its corrected results

    DEFF Research Database (Denmark)

    Tamura, Jim; Kobara, Kazukuni; Fathi, Hanane

    2010-01-01

    A number of lightweight PIR (Private Information Retrieval) schemes have been proposed in recent years. In JWIS2006, Kwon et al. proposed a new scheme (optimized LFCPIR, or OLFCPIR), which aimed at reducing the communication cost of Lipmaa's O(log² n) PIR (LFCPIR) to O(log n). However, in this paper we point out a fatal overflow error contained in OLFCPIR and show how the error can be corrected. Finally, we compare with LFCPIR to show that the communication cost of our corrected OLFCPIR is asymptotically the same as the previous LFCPIR.

  11. A quadratic form of the Coulomb operator and an optimization scheme for the extended Kohn-Sham models

    International Nuclear Information System (INIS)

    Kusakabe, Koichi

    2009-01-01

    To construct an optimization scheme for an extension of the Kohn-Sham approach, I introduce an operator form of the Coulomb interaction. This form is the sum of quadratic form pairs, which can be redefined in a self-consistent calculation of a multi-reference density functional theory. A detailed derivation of the form is given. A fluctuation term introduced in the extended Kohn-Sham scheme is expressed in this form for regularization. The present procedure also provides an exact derivation of effective negative interactions in charge fluctuation channels. Relevance to high-temperature superconductors is discussed.

  12. Effects of changes in Italian bioenergy promotion schemes for agricultural biogas projects: Insights from a regional optimization model

    International Nuclear Information System (INIS)

    Chinese, D.; Patrizio, P.; Nardin, G.

    2014-01-01

    Italy has witnessed an extraordinary growth in biogas generation from livestock effluents and agricultural activities in the last few years, as well as a severe isomorphic process leading to a market dominance of 999 kW power plants owned by “entrepreneurial farms”. Under the pressure of the economic crisis in the country, the Italian government has restructured renewable energy support schemes, introducing a new program in 2013. In this paper, the effects of the previous and current support schemes on the optimal plant size, feedstock mix and profitability were investigated by introducing a spatially explicit biogas supply chain optimization model, which accounts for different incentive structures. By applying the model to a regional case study, the homogenization observed to date is recognized as a result of former incentive structures. Considerable reductions in local economic potentials for agricultural biogas power plants without external heat use are estimated. New plants are likely to be manure-based and, due to the lower energy density of such feedstock, wider supply chains are expected although the optimal plant size will be smaller. The new support scheme will therefore most likely eliminate past distortions but also slow down investments in agricultural biogas plants. - Highlights: • We review the evolution of agricultural biogas support schemes in Italy over the last 20 years. • A biogas supply chain optimization model which accounts for feed-in tariffs is introduced. • The model is applied to a regional case study under the two most recent support schemes. • Incentives in force until 2013 caused homogenization towards maize-based 999 kWel plants. • Wider, manure-based supply chains feeding smaller plants are expected with future incentives

  13. Continuous quality control of the blood sampling procedure using a structured observation scheme.

    Science.gov (United States)

    Seemann, Tine Lindberg; Nybo, Mads

    2016-10-15

    An observational study was conducted using a structured observation scheme to assess compliance with the local phlebotomy guideline, to identify necessary focus items, and to investigate whether adherence to the phlebotomy guideline improved. The questionnaire from the EFLM Working Group for the Preanalytical Phase was adapted to local procedures. A pilot study of three months duration was conducted. Based on this, corrective actions were implemented and a follow-up study was conducted. All phlebotomists at the Department of Clinical Biochemistry and Pharmacology were observed. Three blood collections by each phlebotomist were observed at each session conducted at the phlebotomy ward and the hospital wards, respectively. Error frequencies were calculated for the phlebotomy ward and the hospital wards and for the two study phases. A total of 126 blood drawings by 39 phlebotomists were observed in the pilot study, while 84 blood drawings by 34 phlebotomists were observed in the follow-up study. In the pilot study, the three major error items were hand hygiene (42% error), mixing of samples (22%), and order of draw (21%). Minor significant differences were found between the two settings. After focus on the major aspects, the follow-up study showed significant improvement for all three items at both settings (P < 0.01, P < 0.01, and P = 0.01, respectively). Continuous quality control of the phlebotomy procedure revealed a number of items not conducted in compliance with the local phlebotomy guideline. It supported significant improvements in the adherence to the recommended phlebotomy procedures and facilitated documentation of the phlebotomy quality.

  14. Selection of optimal treatment scheme for brain metastases of non-small cell lung cancer

    International Nuclear Information System (INIS)

    Dong Mingxin; Zhao Tong; Huang Jingzi; Yu Shukun; Ma Yan; Tian Zhongcheng; Jin Xiangshun; Quan Jizhong; Liu Jin; Wang Dongxu

    2006-01-01

    Objective: To select the optimal treatment scheme for brain metastases of non-small cell lung cancers (NSCLCs). Methods: Seventy-two NSCLC cases diagnosed by pathology with brain metastases were randomly classified into three groups: Group I, 24 cases with whole brain conventional external fractionated irradiation of DT 36-41 Gy/4-5 w; Group II, 22 cases with γ-knife treatment plus whole brain conventional external fractionated irradiation; and Group III, 26 cases with γ-knife plus whole brain conventional external fractionated irradiation in combination with chemotherapy of Vm-26. The surrounding area of the tumor was strictly covered by the 50% para-central dose curve in γ-knife treatment (DT 16-25 Gy with a mean of 16 Gy). The multileaf collimator was selected according to the volume of the tumors. Chemotherapy of Vm-26 (60 mg/m2, d1-3) was applied during the treatment with whole brain conventional external fractionated irradiation (DT 19-29 Gy/2-3 w), 21 days per period, 2 periods in total. Results: The median survival time was estimated to be 6.0 months (range 1.2 to 19.0 months) in Group I, 9.2 months (4.4-30 months) in Group II, and 10.8 months (5.2-42.2 months) in Group III. The 1-year and 2-year survival rates were 34.6% and 12.6%, 62.2% and 30.2%, and 70.8% and 35.6% in Groups I, II, and III, respectively. Conclusion: For brain metastases of NSCLC, γ-knife plus whole brain conventional external fractionated irradiation combined with treatment of Vm-26 had a significantly beneficial influence on local control and on 1-year and 2-year survival. There were no complaints about the side effects of the treatment. (authors)

  15. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    Science.gov (United States)

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore, the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to 3 major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated at the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold in that: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shortened acquisition time and improved the quality of FP reconstructions. It may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
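
    One way to realize the non-uniform Fourier sampling described above is to place LEDs on concentric rings whose density is highest near the optical axis, where most of the signal energy of biological samples is concentrated. The sketch below generates such a layout; the ring radii and LED counts are illustrative choices, not the published 68-position design.

```python
import numpy as np

# Non-uniform LED layout: dense near the optical axis (low spatial frequencies,
# where most of the signal energy sits), sparse toward the edge.
rings = [(0.0, 1), (1.0, 6), (2.0, 10), (3.2, 14), (4.6, 16), (6.2, 18)]  # (radius, #LEDs)

positions = []
for radius, count in rings:
    if count == 1:
        positions.append((0.0, 0.0))
        continue
    # Stagger alternate rings slightly so adjacent apertures overlap in Fourier space.
    angles = np.linspace(0, 2 * np.pi, count, endpoint=False) + np.pi / count
    positions.extend(zip(radius * np.cos(angles), radius * np.sin(angles)))

positions = np.array(positions)
print(f"{len(positions)} LED positions over {len(rings)} rings "
      f"(a uniform grid of the same extent would need far more acquisitions)")
```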

  16. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameters (s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  17. Optimal stochastic reactive power scheduling in a microgrid considering voltage droop scheme of DGs and uncertainty of wind farms

    International Nuclear Information System (INIS)

    Khorramdel, Benyamin; Raoofat, Mahdi

    2012-01-01

    Distributed Generators (DGs) in a microgrid may operate under three different reactive power control strategies, including PV, PQ and voltage droop schemes. This paper proposes a new stochastic programming approach for reactive power scheduling of a microgrid, considering the uncertainty of wind farms. The proposed algorithm first finds the expected optimal operating point of each DG in the V-Q plane while the wind speed is a probabilistic variable. A multi-objective function with the goals of loss minimization, reactive power reserve maximization and voltage security margin maximization is optimized using four-stage multi-objective nonlinear programming. Then, using Monte Carlo simulation enhanced by a scenario reduction technique, the proposed algorithm simulates actual conditions and finds the optimal operating strategy of the DGs. If any DGs are scheduled to operate in the voltage droop scheme, the optimum droop is determined. In the second part of the research, to enhance the optimality of the results, a PSO algorithm is used for the multi-objective optimization problem. Numerical examples on the IEEE 34-bus test system including two wind turbines are studied. The results show the benefits of the voltage droop scheme for mitigating the impacts of wind uncertainty, and the preference for the PSO method in the proposed approach. -- Highlights: ► Reactive power scheduling in a microgrid considering loss and voltage security. ► Stochastic nature of wind farms affects reactive power scheduling and is considered. ► Advantages of using the voltage droop characteristics of DGs in voltage security are shown. ► Power loss, voltage security and VAR reserve are three goals of a multi-objective optimization. ► Monte Carlo method with scenario reduction is used to determine optimal control strategy of DGs.

  18. Energy-Efficient Optimization for HARQ Schemes over Time-Correlated Fading Channels

    KAUST Repository

    Shi, Zheng; Ma, Shaodan; Yang, Guanghua; Alouini, Mohamed-Slim

    2018-01-01

    in the optimization, which further differentiates this work from prior ones. Using a unified expression of asymptotic outage probabilities, optimal transmission powers and optimal rate are derived in closed-forms to maximize the energy efficiency while satisfying

  19. Towards an optimal sampling strategy for assessing genetic variation within and among white clover (Trifolium repens L.) cultivars using AFLP

    Directory of Open Access Journals (Sweden)

    Khosro Mehdi Khanlou

    2011-01-01

    Full Text Available Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants, such as the aforementioned, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using less than 15 per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of total intra-cultivar genetic variation was covered. Based on AMOVA, a 20-cultivar sample was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.

  20. Modeling of frequency-domain scalar wave equation with the average-derivative optimal scheme based on a multigrid-preconditioned iterative solver

    Science.gov (United States)

    Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue

    2018-01-01

    An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for scalar-wave modeling is advantageous in terms of the storage saving for the system of linear equations and the flexibility for arbitrary directional sampling intervals. However, using a LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BI-CGSTAB iterative solver fit for the average-derivative optimal scheme. The choice of the preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for the convergence. Furthermore, we find that for the computation with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that an unequal directional sampling interval will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of the directional sampling interval in the discretization.
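
    As a rough illustration of the iterative-solver side of this record, the sketch below assembles a small 2D Helmholtz-type system and solves it with SciPy's BiCGSTAB, using an incomplete-LU factorization as a stand-in for the paper's multigrid preconditioner; the grid, frequency, and velocity are arbitrary, and neither absorbing boundaries nor the average-derivative stencil are included.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

# 2D Helmholtz-type operator (Laplacian + k^2 I) on an n x n grid, 5-point stencil.
n, h = 80, 10.0                       # grid points per side, spacing (m)
freq, vel = 5.0, 2000.0               # Hz, m/s
k = 2 * np.pi * freq / vel

lap1d = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
eye = sp.identity(n)
A = (sp.kron(eye, lap1d) + sp.kron(lap1d, eye) + (k**2) * sp.identity(n * n)).tocsc()

# Point source in the middle of the model.
b = np.zeros(n * n)
b[(n // 2) * n + n // 2] = 1.0

# Incomplete-LU factorization used as the preconditioner (multigrid stand-in).
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, ilu.solve)

x, info = bicgstab(A, b, M=M, maxiter=1000)
print("converged" if info == 0 else f"info={info}",
      "| residual norm:", np.linalg.norm(A @ x - b))
```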

  1. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely, and by subsequently reweighting the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...

  2. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractive to beetles and repellent to true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by the ethanol-containing sampling solution we suggest ethylene glycol as a suitable sampling solution when

  3. SU-F-T-497: Spatiotemporally Optimal, Personalized Prescription Scheme for Glioblastoma Patients Using the Proliferation and Invasion Glioma Model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, M; Rockhill, J; Phillips, M [University Washington, Seattle, WA (United States)

    2016-06-15

    Purpose: To investigate a spatiotemporally optimal radiotherapy prescription scheme and its potential benefit for glioblastoma (GBM) patients using the proliferation and invasion (PI) glioma model. Methods: The standard prescription for GBM was assumed to deliver 46 Gy in 23 fractions to GTV1 + 2 cm margin and an additional 14 Gy in 7 fractions to GTV2 + 2 cm margin. We simulated the tumor proliferation and invasion in 2D according to the PI glioma model with a moving velocity of 0.029 (slow-move), 0.079 (average-move), and 0.13 (fast-move) mm/day, for GTV2 radii of 1 and 2 cm. For each tumor, the margin around GTV1 and GTV2 was varied over 0–6 cm and 1–3 cm, respectively. The total dose to GTV1 was constrained such that the equivalent uniform dose (EUD) to normal brain equals the EUD with the standard prescription. A non-stationary dose policy, where the fractional dose varies, was investigated to estimate the temporal effect of the radiation dose. The efficacy of an optimal prescription scheme was evaluated by the tumor cell-surviving fraction (SF), EUD, and the expected survival time. Results: The optimal prescription for the slow-move tumors was to use 3.0 (small) to 3.5 (large) cm margins for GTV1, and a 1.5 cm margin for GTV2. For the average- and fast-move tumors, it was optimal to use a 6.0 cm margin for GTV1, suggesting that whole brain therapy is optimal, and then 1.5 cm (average-move) and 1.5–3.0 cm (fast-move, small to large) margins for GTV2. It was optimal to deliver the boost sequentially using a linearly decreasing fractional dose for all tumors. The optimal prescription reduced the tumor SF to 0.001–0.465% of that resulting from the standard prescription, and increased tumor EUD by 25.3–49.3% and the estimated survival time by 7.6–22.2 months. Conclusion: It is feasible to optimize a prescription scheme depending on the individual tumor characteristics. A personalized prescription scheme could potentially increase tumor EUD and the expected survival time significantly without increasing EUD to
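
    The proliferation and invasion (PI) model referenced in this record is a reaction–diffusion description of tumor cell density, dc/dt = D∇²c + ρc(1 − c). A minimal forward simulation on a 2D grid is sketched below; the diffusion and proliferation rates, seed size, and detection threshold are illustrative values, not the patient-specific parameters used in the abstract.

```python
import numpy as np

# Proliferation-invasion model: dc/dt = D * laplacian(c) + rho * c * (1 - c)
D, rho = 0.10, 0.010          # mm^2/day, 1/day -- illustrative, not patient-calibrated
h, dt, days = 1.0, 0.5, 180   # grid spacing (mm), time step (day), simulated duration
n = 200                       # 200 x 200 mm domain

c = np.zeros((n, n))
c[n // 2 - 5:n // 2 + 5, n // 2 - 5:n // 2 + 5] = 0.8   # initial tumour seed

for _ in range(int(days / dt)):
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / h**2
    c = np.clip(c + dt * (D * lap + rho * c * (1 - c)), 0.0, 1.0)

# Radius of the region above an assumed "imaging-detectable" cell-density threshold.
radius = np.sqrt((c > 0.16).sum() / np.pi) * h
print(f"detectable tumour radius after {days} days: {radius:.1f} mm")
```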

  4. Optimized scheme in coal-fired boiler combustion based on information entropy and modified K-prototypes algorithm

    Science.gov (United States)

    Gu, Hui; Zhu, Hongxia; Cui, Yanfeng; Si, Fengqi; Xue, Rui; Xi, Han; Zhang, Jiayu

    2018-06-01

    An integrated combustion optimization scheme is proposed for the combined consideration of coal-fired boiler combustion efficiency and outlet NOx emission restrictions. Continuous attribute discretization and reduction techniques are handled as optimization preparation by the E-Cluster and C_RED methods, in which the segmentation numbers do not need to be provided in advance and can be continuously adapted to the data characteristics. In order to obtain multi-objective results with a clustering method for mixed data, a modified K-prototypes algorithm is then proposed. This algorithm can be divided into two stages: K-prototypes clustering with self-adaptation of the number of clusters, and clustering for multi-objective optimization. Field tests were carried out at a 660 MW coal-fired boiler to provide real data as a case study for controllable attribute discretization and reduction in the boiler system, and for obtaining optimization parameters under the [max ηb, min yNOx] multi-objective rule.
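
    For context, the K-prototypes dissimilarity that underlies the modified algorithm combines a squared Euclidean term on numeric attributes with a weighted mismatch count on categorical attributes. The sketch below shows that measure and a single assignment step; the weight γ, the toy records, and the prototypes are arbitrary, and the paper's self-adaptive clustering-number mechanism is not reproduced.

```python
import numpy as np

gamma = 0.5   # weight balancing categorical mismatches against numeric distance

def kproto_distance(x_num, x_cat, c_num, c_cat):
    """K-prototypes dissimilarity: squared Euclidean + gamma * #categorical mismatches."""
    return np.sum((x_num - c_num) ** 2) + gamma * np.sum(x_cat != c_cat)

# Toy boiler-operation records: numeric (load MW, O2 %) plus categorical (mill pattern).
records = [((660.0, 3.1), ("A",)), ((520.0, 4.0), ("B",)), ((655.0, 3.0), ("A",))]
prototypes = [((650.0, 3.0), ("A",)), ((500.0, 4.2), ("B",))]

for num, cat in records:
    d = [kproto_distance(np.array(num), np.array(cat), np.array(pn), np.array(pc))
         for pn, pc in prototypes]
    print(f"record {num} + {cat} -> cluster {int(np.argmin(d))}")
```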

  5. Optimal scheme of postoperative chemoradiotherapy in rectal cancer: phase III prospective randomized trial

    International Nuclear Information System (INIS)

    Kim, Young Seok; Kim, Jong Hoon; Choi, Eun Kyung

    2002-01-01

    To determine the optimal scheme of postoperative chemoradiotherapy in rectal cancer by comparing survival, patterns of failure, and toxicities in early and late radiotherapy groups using a phase III randomized prospective clinical trial. From January 1996 to March 1999, 307 patients with curatively resected AJCC stage II and III rectal cancer were randomly assigned to an 'early' (151 patients, arm I) or a 'late' (156 patients, arm II) radiotherapy group and were administered combined chemotherapy (5-FU 375 mg/m2/day, leucovorin 20 mg/m2, IV bolus daily, for 3 days with RT, 5 days without RT, 8 cycles with 4 weeks interval) and radiation therapy (whole pelvis with 45 Gy/25 fractions/5 weeks). Patients of arm I received radiation therapy from day 1 of the first cycle of chemotherapy and those of arm II from day 57, with the third cycle of chemotherapy. The median follow-up period of living patients was 40 months. Of the 307 patients enrolled, fifty patients did not receive the scheduled radiation therapy or chemotherapy. The overall survival rate and disease-free survival rate at 5 years were 78.3% and 68.7% in arm I, and 78.4% and 67.5% in arm II. The local recurrence rate was 6.6% and 6.4% (P = 0.46) in arms I and II, respectively, and no significant difference was observed between the distant metastasis rates of the two arms (23.8% and 29.5%, P = 0.16). During radiation therapy, grade 3 diarrhea or worse, by the NCI common toxicity criteria, was observed in 63.0% and 58.2% of the respective arms (P = N.S.), but most cases were controlled with supportive care. Hematologic toxicity (leukopenia) greater than RTOG grade 2 was found in only 1.3% and 2.6% of patients in each respective arm. There was no significant difference in survival, patterns of failure or toxicities between the early and late radiation therapy arms. Postoperative adjuvant chemoradiation was found to be a relatively safe treatment but higher compliance is needed.

  6. Optimal CCD readout by digital correlated double sampling

    Science.gov (United States)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
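
    The essence of DCDS — oversample the reset (pedestal) level and the video level, filter each digitally, and take the difference — can be illustrated with a plain boxcar filter and white noise, as below. The sample counts and noise level are arbitrary, and a boxcar under white noise is only the simplest case; the paper derives the optimal filter for realistic (non-white) CCD noise rather than assuming one.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pixels, n_samples = 2000, 32        # ADC samples taken per pedestal/signal window
signal, read_noise = 150.0, 12.0      # true signal level and per-sample noise (arbitrary units)

# Oversampled reset (pedestal) and signal windows for each pixel readout.
pedestal = rng.normal(0.0, read_noise, size=(n_pixels, n_samples))
video = rng.normal(signal, read_noise, size=(n_pixels, n_samples))

# Digital CDS with a plain boxcar (averaging) filter on each window.
dcds = video.mean(axis=1) - pedestal.mean(axis=1)

# Single-sample CDS (one sample per window) for comparison.
single = video[:, 0] - pedestal[:, 0]

print(f"single-sample CDS noise: {single.std():.2f}")
print(f"DCDS noise ({n_samples} samples/window): {dcds.std():.2f}")
print(f"expected gain ~ sqrt(N) = {np.sqrt(n_samples):.1f}x for white noise only")
```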

  7. Neuro-genetic system for optimization of GMI samples sensitivity.

    Science.gov (United States)

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Continuous quality control of the blood sampling procedure using a structured observation scheme

    OpenAIRE

    Lindberg Seemann, Tine; Nybo, Mads

    2016-01-01

    INTRODUCTION: An observational study was conducted using a structured observation scheme to assess compliance with the local phlebotomy guideline, to identify necessary focus items, and to investigate whether adherence to the phlebotomy guideline improved.MATERIALS AND METHODS: The questionnaire from the EFLM Working Group for the Preanalytical Phase was adapted to local procedures. A pilot study of three months duration was conducted. Based on this, corrective actions were implemented and a ...

  9. An Extended Multilocus Sequence Typing (MLST) Scheme for Rapid Direct Typing of Leptospira from Clinical Samples

    OpenAIRE

    Weiss, Sabrina; Menezes, Angela; Woods, Kate; Chanthongthip, Anisone; Dittrich, Sabine; Opoku-Boateng, Agatha; Kimuli, Maimuna; Chalker, Victoria

    2016-01-01

    Background Rapid typing of Leptospira is currently impaired by requiring time consuming culture of leptospires. The objective of this study was to develop an assay that provides multilocus sequence typing (MLST) data direct from patient specimens while minimising costs for subsequent sequencing. Methodology and Findings An existing PCR based MLST scheme was modified by designing nested primers including anchors for facilitated subsequent sequencing. The assay was applied to various specimen t...

  10. Programming scheme based optimization of hybrid 4T-2R OxRAM NVSRAM

    Science.gov (United States)

    Majumdar, Swatilekha; Kingra, Sandeep Kaur; Suri, Manan

    2017-09-01

    In this paper, we present a novel single-cycle programming scheme for 4T-2R NVSRAM, exploiting pulse-engineered input signals. OxRAM devices based on a 3 nm thick bi-layer active switching oxide and a 90 nm CMOS technology node were used for all simulations. The cell design is implemented for real-time non-volatility rather than last-bit or power-down non-volatility. A detailed analysis of the proposed single-cycle, parallel RRAM device programming scheme is presented in comparison to the two-cycle sequential RRAM programming used for similar 4T-2R NVSRAM bit-cells. The proposed single-cycle programming scheme coupled with the 4T-2R architecture leads to several benefits, such as the possibility of unconventional transistor sizing, 50% lower latency, 20% improvement in SNM and ∼20× reduced energy requirements, when compared against the two-cycle programming approach.

  11. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminace correction and optimized prediction

    Science.gov (United States)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing a natural and real scene as we see it in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by designing an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the lifting scheme (LS). The novelty in our work is that the prediction step has been replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
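
    For readers unfamiliar with lifting, the basic split/predict/update structure is shown below on a 1D Haar-style lift with perfect reconstruction; in the stereo-coding scheme of this record, the predict step would instead use the disparity-compensated and luminance-corrected view, which is not reproduced here.

```python
import numpy as np

def lifting_forward(signal):
    """One level of a Haar-style lifting transform: split, predict, update."""
    even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
    detail = odd - even            # predict: odd samples predicted from even ones
    approx = even + detail / 2.0   # update: preserve the running mean
    return approx, detail

def lifting_inverse(approx, detail):
    even = approx - detail / 2.0
    odd = detail + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([12, 14, 20, 22, 30, 28, 26, 24])
a, d = lifting_forward(x)
print("approximation:", a)
print("detail:       ", d)
print("perfect reconstruction:", np.allclose(lifting_inverse(a, d), x))
```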

  12. Parameter optimization of a computer-aided diagnosis scheme for the segmentation of microcalcification clusters in mammograms

    International Nuclear Information System (INIS)

    Gavrielides, Marios A.; Lo, Joseph Y.; Floyd, Carey E. Jr.

    2002-01-01

    Our purpose in this study is to develop a parameter optimization technique for the segmentation of suspicious microcalcification clusters in digitized mammograms. In previous work, a computer-aided diagnosis (CAD) scheme was developed that used local histogram analysis of overlapping subimages and a fuzzy rule-based classifier to segment individual microcalcifications, and clustering analysis for reducing the number of false positive clusters. The performance of this previous CAD scheme depended on a large number of parameters such as the intervals used to calculate fuzzy membership values and on the combination of membership values used by each decision rule. These parameters were optimized empirically based on the performance of the algorithm on the training set. In order to overcome the limitations of manual training and rule generation, the segmentation algorithm was modified in order to incorporate automatic parameter optimization. For the segmentation of individual microcalcifications, the new algorithm used a neural network with fuzzy-scaled inputs. The fuzzy-scaled inputs were created by processing the histogram features with a family of membership functions, the parameters of which were automatically extracted from the distribution of the feature values. The neural network was trained to classify feature vectors as either positive or negative. Individual microcalcifications were segmented from positive subimages. After clustering, another neural network was trained to eliminate false positive clusters. A database of 98 images provided training and testing sets to optimize the parameters and evaluate the CAD scheme, respectively. The performance of the algorithm was evaluated with a FROC analysis. At a sensitivity rate of 93.2%, there was an average of 0.8 false positive clusters per image. The results are very comparable with those taken using our previously published rule-based method. However, the new algorithm is more suited to generalize its

  13. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  14. A unified thermostat scheme for efficient configurational sampling for classical/quantum canonical ensembles via molecular dynamics

    Science.gov (United States)

    Zhang, Zhijun; Liu, Xinzijian; Chen, Zifei; Zheng, Haifeng; Yan, Kangyu; Liu, Jian

    2017-07-01

    We show a unified second-order scheme for constructing simple, robust, and accurate algorithms for typical thermostats for configurational sampling for the canonical ensemble. When Langevin dynamics is used, the scheme leads to the BAOAB algorithm that has been recently investigated. We show that the scheme is also useful for other types of thermostats, such as the Andersen thermostat and Nosé-Hoover chain, regardless of whether the thermostat is deterministic or stochastic. In addition to analytical analysis, two 1-dimensional models and three typical real molecular systems that range from the gas phase, clusters, to the condensed phase are used in numerical examples for demonstration. Accuracy may be increased by an order of magnitude for estimating coordinate-dependent properties in molecular dynamics (when the same time interval is used), irrespective of which type of thermostat is applied. The scheme is especially useful for path integral molecular dynamics because it consistently improves the efficiency for evaluating all thermodynamic properties for any type of thermostat.
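
    The BAOAB splitting referred to above can be written down compactly. The sketch below is a generic, textbook-style BAOAB integrator for Langevin dynamics applied to a one-dimensional harmonic oscillator; the force model, parameter values and the simple variance check are illustrative assumptions, not the authors' implementation or their unified scheme for other thermostats.

      import numpy as np

      def baoab_step(x, p, force, dt, mass, gamma, kT, rng):
          """One BAOAB step: B = half kick, A = half drift,
          O = exact Ornstein-Uhlenbeck momentum update, then A and B again."""
          c1 = np.exp(-gamma * dt)
          c2 = np.sqrt((1.0 - c1 * c1) * mass * kT)
          p = p + 0.5 * dt * force(x)                     # B
          x = x + 0.5 * dt * p / mass                     # A
          p = c1 * p + c2 * rng.standard_normal()         # O (thermostat)
          x = x + 0.5 * dt * p / mass                     # A
          p = p + 0.5 * dt * force(x)                     # B
          return x, p

      # Harmonic oscillator check: the configurational average <x^2> should be kT/k.
      k, mass, kT, gamma, dt = 1.0, 1.0, 1.0, 1.0, 0.05
      force = lambda x: -k * x
      rng = np.random.default_rng(1)
      x, p, acc, n = 0.0, 0.0, 0.0, 0
      for step in range(200000):
          x, p = baoab_step(x, p, force, dt, mass, gamma, kT, rng)
          if step >= 1000:
              acc, n = acc + x * x, n + 1
      print("<x^2> =", acc / n, "target", kT / k)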

  15. Low-complexity joint symbol synchronization and sampling frequency offset estimation scheme for optical IMDD OFDM systems.

    Science.gov (United States)

    Zhang, Zhen; Zhang, Qianwu; Chen, Jian; Li, Yingchun; Song, Yingxiong

    2016-06-13

    A low-complexity joint symbol synchronization and SFO estimation scheme for asynchronous optical IMDD OFDM systems based on only one training symbol is proposed. Numerical simulations and experimental demonstrations are also undertaken to evaluate the performance of the proposed scheme. The experimental results show that robust and precise symbol synchronization and SFO estimation can be achieved simultaneously at received optical power as low as -20 dBm in asynchronous OOFDM systems. SFO estimation accuracy in MSE can be lower than 1 × 10⁻¹¹ for SFOs ranging from -60 ppm to 60 ppm after 25 km SSMF transmission. Optimal system performance can be maintained as long as the cumulative number of frames employed for the calculation is less than 50 under the above-mentioned conditions. Meanwhile, the proposed joint scheme has a low level of operational complexity compared with existing methods when symbol synchronization and SFO estimation are considered together. These results can serve as an important reference for practical system designs.

  16. OPTIMIZATION OF THE TEMPERATURE CONTROL SCHEME FOR ROLLER COMPACTED CONCRETE DAMS BASED ON FINITE ELEMENT AND SENSITIVITY ANALYSIS METHODS

    Directory of Open Access Journals (Sweden)

    Huawei Zhou

    2016-10-01

    Full Text Available Achieving an effective combination of various temperature control measures is critical for temperature control and crack prevention of concrete dams. This paper presents a procedure for optimizing the temperature control scheme of roller compacted concrete (RCC) dams that couples the finite element method (FEM) with a sensitivity analysis method. In this study, seven temperature control schemes are defined according to variations in three temperature control measures: concrete placement temperature, water-pipe cooling time, and thermal insulation layer thickness. FEM is employed to simulate the equivalent temperature field and temperature stress field obtained under each of the seven designed temperature control schemes for a typical overflow dam monolith based on the actual characteristics of a RCC dam located in southwestern China. A sensitivity analysis is subsequently conducted to investigate the degree of influence each of the three temperature control measures has on the temperature field and temperature tensile stress field of the dam. Results show that the placement temperature has a substantial influence on the maximum temperature and tensile stress of the dam, and that the placement temperature cannot exceed 15 °C. The water-pipe cooling time and thermal insulation layer thickness have little influence on the maximum temperature, but both demonstrate a substantial influence on the maximum tensile stress of the dam. The thermal insulation thickness is significant for reducing the probability of cracking as a result of high thermal stress, and the maximum tensile stress can be controlled under the specification limit with a thermal insulation layer thickness of 10 cm. Finally, an optimized temperature control scheme for crack prevention is obtained based on the analysis results.

  17. A self-optimizing scheme for energy balanced routing in Wireless Sensor Networks using SensorAnt.

    Science.gov (United States)

    Shamsan Saleh, Ahmed M; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A; Ismail, Alyani

    2012-01-01

    Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic, which is adopted to enhance the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count, and average energy of both route and network. This method also distributes the traffic load of sensor nodes throughout the WSN, leading to reduced energy usage, an extended network lifetime, and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) protocol in terms of energy consumption, balancing, and efficiency.
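
    The flavour of such a multi-criteria route quality function and the ACO-style probabilistic next-hop choice can be sketched as follows. The weights, metric names and the two-neighbour example are illustrative assumptions only; they are not the actual SensorAnt quality function.

      import random

      def route_quality(min_residual_energy, hop_count, avg_route_energy,
                        avg_network_energy):
          # Favour routes whose weakest node still has energy, that are short,
          # and whose average energy exceeds the network average (toy weights).
          w1, w2, w3 = 0.5, 0.3, 0.2
          return (w1 * min_residual_energy
                  + w2 / hop_count
                  + w3 * avg_route_energy / max(avg_network_energy, 1e-9))

      def choose_next_hop(neighbours, pheromone, heuristic, alpha=1.0, beta=2.0):
          # Standard ACO roulette-wheel selection over candidate neighbours.
          weights = [pheromone[n] ** alpha * heuristic[n] ** beta for n in neighbours]
          r, acc = random.random() * sum(weights), 0.0
          for n, w in zip(neighbours, weights):
              acc += w
              if r <= acc:
                  return n
          return neighbours[-1]

      pheromone = {"n1": 0.8, "n2": 0.4}
      heuristic = {"n1": 0.6, "n2": 0.9}   # e.g. normalised residual energy
      print(choose_next_hop(["n1", "n2"], pheromone, heuristic))
      print(route_quality(0.7, hop_count=4, avg_route_energy=0.65,
                          avg_network_energy=0.6))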

  18. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of a qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of NDT reliability is necessary. A POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and, in turn, reduce the conservatism associated with the derived POD curve. Not much guidance on the correct sample size can be found in the published literature, where qualitative statements are often given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
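
    A common way to study the effect of sample size on a POD curve is to fit a hit/miss log-logistic model by maximum likelihood and read off quantities such as a90. The sketch below does exactly that on synthetic inspection data; the data generation, model form and parameter values are illustrative assumptions, not ENIQ results.

      import numpy as np
      from scipy.optimize import minimize

      # Synthetic hit/miss inspection data: flaw sizes (mm) and detection outcomes.
      rng = np.random.default_rng(2)
      sizes = rng.uniform(0.5, 10.0, size=60)        # sample size under study
      true_pod = lambda a: 1.0 / (1.0 + np.exp(-(np.log(a) - np.log(3.0)) / 0.3))
      hits = rng.random(60) < true_pod(sizes)

      def neg_log_likelihood(theta):
          # Log-logistic POD model: POD(a) = 1 / (1 + exp(-(ln a - mu) / sigma)).
          mu, log_sigma = theta
          sigma = np.exp(log_sigma)
          p = 1.0 / (1.0 + np.exp(-(np.log(sizes) - mu) / sigma))
          p = np.clip(p, 1e-9, 1.0 - 1e-9)
          return -np.sum(hits * np.log(p) + (~hits) * np.log(1.0 - p))

      res = minimize(neg_log_likelihood, x0=np.array([1.0, -1.0]), method="Nelder-Mead")
      mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
      a90 = np.exp(mu_hat + sigma_hat * np.log(9.0))   # flaw size detected with 90% POD
      print(f"a90 estimated from n={sizes.size} specimens: {a90:.2f} mm")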

  19. Nonlinear H∞ Optimal Control Scheme for an Underwater Vehicle with Regional Function Formulation

    Directory of Open Access Journals (Sweden)

    Zool H. Ismail

    2013-01-01

    Full Text Available A conventional region control technique cannot meet the demands for an accurate tracking performance in view of its inability to accommodate highly nonlinear system dynamics, imprecise hydrodynamic coefficients, and external disturbances. In this paper, a robust technique is presented for an Autonomous Underwater Vehicle (AUV) with region tracking function. Within this control scheme, nonlinear H∞ and region based control schemes are used. A Lyapunov-like function is presented for stability analysis of the proposed control law. Numerical simulations are presented to demonstrate the performance of the proposed tracking control of the AUV. It is shown that the proposed control law is robust against parameter uncertainties, external disturbances, and nonlinearities and it leads to uniform ultimate boundedness of the region tracking error.

  20. A new channel allocation scheme and performance optimizing for mobile multimedia wireless networks

    Institute of Scientific and Technical Information of China (English)

    ZHAO Fang-ming; JIANG Ling-ge; MA Ming-da

    2008-01-01

    A multimedia channel allocation scheme is proposed and studied in terms of the connection-level QoS. A new traffic model based on a multidimensional Markov chain is developed considering the traffic characteristics of two special periods of time. Pre-emptive priority strategies are used to classify real-time services and non-real-time services. Real-time service is given higher priority through its allowance to pre-empt channels used by non-real-time service. Considering the mobility of persons during a day, which affects the mobile user density, the simulation was conducted involving the two pre-emptive priority strategies. The result of some comparisons shows the feasibility of the proposed scheme.

  1. An optimal scheme for top quark mass measurement near the tt̄ threshold at future e⁺e⁻ colliders

    Science.gov (United States)

    Chen, Wei-Guo; Wan, Xia; Wang, You-Kai

    2018-05-01

    A top quark mass measurement scheme near the tt̄ production threshold in future e⁺e⁻ colliders, e.g. the Circular Electron Positron Collider (CEPC), is simulated. A χ² fitting method is adopted to determine the number of energy points to be taken and their locations. Our results show that the optimal energy point is located near the largest slope of the cross section vs. beam energy plot, and the most efficient scheme is to concentrate all luminosity on this single energy point in the case of one-parameter top mass fitting. This suggests that the so-called data-driven method could be the best choice for future real experimental measurements. Conveniently, the top mass statistical uncertainty can also be calculated directly by the error matrix even without any sampling and fitting. The agreement of the above two optimization methods has been checked. Our conclusion is that by taking 50 fb⁻¹ of total effective integrated luminosity data, the statistical uncertainty of the top potential-subtracted mass can be suppressed to about 7 MeV and the total uncertainty is about 30 MeV. This precision will help to identify the stability of the electroweak vacuum at the Planck scale. Supported by National Science Foundation of China (11405102) and the Fundamental Research Funds for the Central Universities of China (GK201603027, GK201803019)
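
    The single-point χ² strategy described above can be mimicked with a toy model. In the sketch below the threshold shape is a simple logistic placeholder (not the QCD threshold cross section used in the paper), the luminosity and mass values are illustrative, and the statistical uncertainty is read off from the Δχ² = 1 interval.

      import numpy as np

      def xsec(E, m_t, width=1.0, peak=0.6):
          # Toy threshold shape in pb: the steepest slope sits near E = 2 * m_t.
          return peak / (1.0 + np.exp(-(E - 2.0 * m_t) / width))

      def chi2(m_t, energies, n_obs, lumi):
          n_exp = xsec(energies, m_t) * lumi
          return np.sum((n_obs - n_exp) ** 2 / np.maximum(n_exp, 1.0))

      # All luminosity on one point near the largest slope (the optimal scheme).
      m_true, lumi = 172.5, 50000.0                 # GeV, pb^-1
      energies = np.array([2.0 * m_true])
      rng = np.random.default_rng(3)
      n_obs = rng.poisson(xsec(energies, m_true) * lumi)

      scan = np.linspace(m_true - 0.1, m_true + 0.1, 2001)
      chi2_vals = np.array([chi2(m, energies, n_obs, lumi) for m in scan])
      best = scan[np.argmin(chi2_vals)]
      within = scan[chi2_vals <= chi2_vals.min() + 1.0]   # Delta(chi2) = 1 band
      print("m_t =", round(best, 4), "GeV, stat. uncertainty ~",
            round((within.max() - within.min()) / 2.0, 4), "GeV")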

  2. Improving perfusion quantification in arterial spin labeling for delayed arrival times by using optimized acquisition schemes

    International Nuclear Information System (INIS)

    Kramme, Johanna; Diehl, Volker; Madai, Vince I.; Sobesky, Jan; Guenther, Matthias

    2015-01-01

    The improvement in Arterial Spin Labeling (ASL) perfusion quantification, especially for delayed bolus arrival times (BAT), with an acquisition redistribution scheme mitigating the T1 decay of the label in multi-TI ASL measurements is investigated. A multi inflow time (TI) 3D-GRASE sequence is presented which adapts the distribution of acquisitions accordingly, by keeping the scan time constant. The MR sequence increases the number of averages at long TIs and decreases their number at short TIs and thus compensating the T1 decay of the label. The improvement of perfusion quantification is evaluated in simulations as well as in-vivo in healthy volunteers and patients with prolonged BATs due to age or steno-occlusive disease. The improvement in perfusion quantification depends on BAT. At healthy BATs the differences are small, but become larger for longer BATs typically found in certain diseases. The relative error of perfusion is improved up to 30% at BATs > 1500 ms in comparison to the standard acquisition scheme. This adapted acquisition scheme improves the perfusion measurement in comparison to standard multi-TI ASL implementations. It provides relevant benefit in clinical conditions that cause prolonged BATs and is therefore of high clinical relevance for neuroimaging of steno-occlusive diseases.

  3. How old is this bird? The age distribution under some phase sampling schemes.

    Science.gov (United States)

    Hautphenne, Sophie; Massaro, Melanie; Taylor, Peter

    2017-12-01

    In this paper, we use a finite-state continuous-time Markov chain with one absorbing state to model an individual's lifetime. Under this model, the time of death follows a phase-type distribution, and the transient states of the Markov chain are known as phases. We then attempt to provide an answer to the simple question "What is the conditional age distribution of the individual, given its current phase"? We show that the answer depends on how we interpret the question, and in particular, on the phase observation scheme under consideration. We then apply our results to the computation of the age pyramid for the endangered Chatham Island black robin Petroica traversi during the monitoring period 2007-2014.
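
    For a concrete feel for the phase-type setup, the sketch below builds a two-phase lifetime model and computes the age distribution conditional on the currently observed phase, under the simplifying assumption that individuals are sampled from a population with a constant birth rate (one possible observation scheme; the paper shows that the answer depends on which scheme is in force). The rates and phase names are illustrative.

      import numpy as np
      from scipy.linalg import expm

      # Two transient phases ("juvenile", "adult") plus an implicit absorbing
      # death state; T is the sub-generator, alpha the initial phase distribution.
      alpha = np.array([1.0, 0.0])
      T = np.array([[-1.2, 1.0],     # juvenile -> adult at rate 1.0
                    [0.0, -0.4]])    # row deficits are the death rates

      def conditional_age_density(ages, phase):
          # Under constant-birth-rate sampling, the density of age a given the
          # current phase is proportional to P(alive and in that phase at age a).
          dens = np.array([(alpha @ expm(T * a))[phase] for a in ages])
          da = ages[1] - ages[0]
          return dens / (dens.sum() * da)

      ages = np.linspace(0.0, 15.0, 600)
      da = ages[1] - ages[0]
      for phase, name in enumerate(["juvenile", "adult"]):
          dens = conditional_age_density(ages, phase)
          print(name, "mean age:", round(float((ages * dens).sum() * da), 2))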

  4. Quantum dynamics calculations using symmetrized, orthogonal Weyl-Heisenberg wavelets with a phase space truncation scheme. II. Construction and optimization

    International Nuclear Information System (INIS)

    Poirier, Bill; Salam, A.

    2004-01-01

    In this paper, we extend and elaborate upon a wavelet method first presented in a previous publication [B. Poirier, J. Theo. Comput. Chem. 2, 65 (2003)]. In particular, we focus on construction and optimization of the wavelet functions, from theoretical and numerical viewpoints, and also examine their localization properties. The wavelets used are modified Wilson-Daubechies wavelets, which in conjunction with a simple phase space truncation scheme, enable one to solve the multidimensional Schroedinger equation. This approach is ideally suited to rovibrational spectroscopy applications, but can be used in any context where differential equations are involved

  5. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Science.gov (United States)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practices (BMPs) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP)—each with multiple Total Maximum Daily Load (TMDL) targets—were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million—marginally higher, but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.

  6. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  7. Joint Optimized CPU and Networking Control Scheme for Improved Energy Efficiency in Video Streaming on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sung-Woong Jo

    2017-01-01

    Full Text Available Video streaming service is one of the most popular applications for mobile users. However, mobile video streaming services consume a lot of energy, resulting in a reduced battery life. This is a critical problem that results in a degraded user’s quality of experience (QoE). Therefore, in this paper, a joint optimization scheme that controls both the central processing unit (CPU) and wireless networking of the video streaming process for improved energy efficiency on mobile devices is proposed. For this purpose, the energy consumption of the network interface and CPU is analyzed, and based on the energy consumption profile a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded in each packet. Simulation results show that the proposed algorithm can effectively improve the energy efficiency when compared with the existing algorithms.

  8. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    Energy Technology Data Exchange (ETDEWEB)

    Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional- to global-scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.

  9. A general scheme for training and optimization of the Grenander deformable template model

    DEFF Research Database (Denmark)

    Fisker, Rune; Schultz, Nette; Duta, N.

    2000-01-01

    A scheme is presented for applying the general deformable template model proposed by Grenander et al. (1991) to a new problem with minimal manual interaction, besides supplying a training set, which can be done by a non-expert user. The main contributions compared to previous work are a supervised learning scheme for the model parameters, a very fast general initialization algorithm and an adaptive likelihood model based on local means. The model parameters are trained by a combination of a 2D shape learning algorithm and a maximum likelihood based criterion. The fast initialization algorithm is based on a search approach using...

  10. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    Science.gov (United States)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher-computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data is often invisible for linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically

  11. Implementation of a compressive sampling scheme for wireless sensors to achieve energy efficiency in a structural health monitoring system

    Science.gov (United States)

    O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.

    2013-04-01

    Wireless sensors have emerged to offer low-cost sensors with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost, as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (>Nyquist) uniform sampling and storage of the entire target signal followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and operation on the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often moving evaluation of CS techniques to simulation and post-processing. The research presented here describes implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that data analyses on compressed data remain accurate.
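
    The acquisition-plus-reconstruction idea can be demonstrated on a synthetic record that is sparse in a frequency basis: the node keeps only a random subset of the uniform samples, and the full signal is recovered by a greedy sparse solver. The DCT-like basis, the sparsity level and the use of Orthogonal Matching Pursuit are illustrative choices, not the processing chain used on the Narada nodes.

      import numpy as np

      def omp(A, y, n_nonzero):
          # Orthogonal Matching Pursuit: greedy sparse recovery of x from y = A x.
          residual, support = y.copy(), []
          x = np.zeros(A.shape[1])
          for _ in range(n_nonzero):
              support.append(int(np.argmax(np.abs(A.T @ residual))))
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x[support] = coef
          return x

      # Synthetic "acceleration" record, sparse in a DCT-like frequency basis.
      N, M, K = 512, 128, 5
      n = np.arange(N)
      Psi = np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
      Psi /= np.linalg.norm(Psi, axis=0)
      rng = np.random.default_rng(4)
      coeffs = np.zeros(N)
      coeffs[rng.choice(N, K, replace=False)] = rng.normal(0.0, 1.0, K)
      signal = Psi @ coeffs

      keep = np.sort(rng.choice(N, M, replace=False))     # random sub-sampling
      rec = Psi @ omp(Psi[keep, :], signal[keep], n_nonzero=K)
      print("relative error:", np.linalg.norm(rec - signal) / np.linalg.norm(signal))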

  12. Subsolutions of an Isaacs Equation and Efficient Schemes for Importance Sampling: Convergence Analysis

    National Research Council Canada - National Science Library

    Dupuis, Paul; Wang, Hui

    2005-01-01

    Previous papers by authors establish the connection between importance sampling algorithms for estimating rare-event probabilities, two-person zero-sum differential games, and the associated Isaacs equation...

  13. Subsolutions of an Isaacs Equation and Efficient Schemes for Importance Sampling: Examples and Numerics

    National Research Council Canada - National Science Library

    Dupuis, Paul; Wang, Hui

    2005-01-01

    It has been established that importance sampling algorithms for estimating rare-event probabilities are intimately connected with two-person zero-sum differential games and the associated Isaacs equation...

  14. Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions.

    Energy Technology Data Exchange (ETDEWEB)

    Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew

    2006-01-01

    Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictate possible optimization algorithms. Often, a gradient based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multi fidelity models to develop a dynamic and computational time saving optimization algorithm. First, a direct search method is applied to the high fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high fidelity model to that of a computationally cheaper low fidelity model using space mapping techniques. Then, in the low fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and

  15. Evaluation of an Optimal Epidemiological Typing Scheme for Legionella pneumophila with Whole-Genome Sequence Data Using Validation Guidelines.

    Science.gov (United States)

    David, Sophia; Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R; Afshar, Baharak; Underwood, Anthony; Fry, Norman K; Parkhill, Julian; Harrison, Timothy G

    2016-08-01

    Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current "gold standard" typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard "typing panel," previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. Copyright © 2016 David et al.
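
    The index of discrimination quoted above is Simpson's (Hunter-Gaston) index: the probability that two isolates drawn at random from the panel receive different types. A minimal calculation is sketched below; the two toy type assignments are invented for illustration and are not data from the study.

      from collections import Counter

      def simpsons_index(type_assignments):
          # D = 1 - sum n_j (n_j - 1) / (N (N - 1)); values close to 1 indicate
          # high discriminatory power of the typing scheme.
          counts = Counter(type_assignments).values()
          n = sum(counts)
          return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

      sbt_types = ["ST1", "ST1", "ST1", "ST1", "ST47", "ST47", "ST62", "ST62", "ST62", "ST23"]
      wgs_types = ["A", "B", "C", "C", "D", "E", "F", "G", "H", "I"]
      print("SBT D =", round(simpsons_index(sbt_types), 3))
      print("WGS D =", round(simpsons_index(wgs_types), 3))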

  16. Importance Sampling Based Decision Trees for Security Assessment and the Corresponding Preventive Control Schemes: the Danish Case Study

    DEFF Research Database (Denmark)

    Liu, Leo; Rather, Zakir Hussain; Chen, Zhe

    2013-01-01

    Decision Trees (DT) based security assessment helps Power System Operators (PSO) by providing them with the most significant system attributes and guiding them in implementing the corresponding emergency control actions to prevent system insecurity and blackouts. DT is obtained offline from time...... and adopts a methodology of importance sampling to maximize the information contained in the database so as to increase the accuracy of DT. Further, this paper also studies the effectiveness of DT by implementing its corresponding preventive control schemes. These approaches are tested on the detailed model...

  17. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    Science.gov (United States)

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.

  18. MICRO FINANCING SCHEME BASED ON OPTIMIZATION OF NETWORK (MFS-ON): A PROPOSED IMPROVEMENT ON CURRENT PRACTICES

    Directory of Open Access Journals (Sweden)

    Soenartomo Soepomo

    2017-03-01

    Full Text Available One important method of reducing poverty is through finance. The poor lack the qualifications and capacity to borrow from the formal financial sector. Therefore they must resort to informal sources for their financing needs, albeit with very high cost implications. This dependency in turn disrupts their productive capacity, since the interest is very high. We focus on a special segment of the productive poor. We reviewed various financing schemes that are widely practiced both domestically and globally. We perceived that existing schemes were inadequate from several perspectives: (1) their partial nature, (2) substandard business practices, (3) lack of cooperation and (4) limited coverage. We propose an alternative financing scheme. The spirit of the approach emphasizes the critical role of self-sufficiency of the Microfinance Institution (MFI). Through self-sufficiency, an MFI can develop a healthy business with a reasonable rate of return. In addition to self-sufficiency, first, the proposal includes financing from the private sector through the mobilization of Corporate Social Responsibility (CSR) funds; the funding sources become broad and economies of scale can be achieved. Second, the proposal improves the risk-sharing mechanism by introducing regional government banks as well as insurers. Third, the proposal makes the distribution channel optimal by involving elements of society.

  19. Towards Efficient Energy Management of Smart Buildings Exploiting Heuristic Optimization with Real Time and Critical Peak Pricing Schemes

    Directory of Open Access Journals (Sweden)

    Sheraz Aslam

    2017-12-01

    Full Text Available The smart grid plays a vital role in decreasing electricity cost through Demand Side Management (DSM). Smart homes, a part of the smart grid, contribute greatly to minimizing electricity consumption cost via scheduling home appliances. However, user waiting time increases due to the scheduling of home appliances. This scheduling problem is the motivation to find an optimal solution that could minimize the electricity cost and Peak to Average Ratio (PAR) with minimum user waiting time. There are many studies on Home Energy Management (HEM) for cost minimization and peak load reduction. However, none of the systems gave sufficient attention to tackling multiple parameters (i.e., electricity cost and peak load reduction) at the same time while keeping user waiting time minimal for residential consumers with multiple homes. Hence, in this work, we propose an efficient HEM scheme using the well-known meta-heuristic Genetic Algorithm (GA), the recently developed Cuckoo Search Optimization Algorithm (CSOA) and the Crow Search Algorithm (CSA), which can be used for electricity cost and peak load alleviation with minimum user waiting time. The integration of a smart Electricity Storage System (ESS) is also taken into account for more efficient operation of the Home Energy Management System (HEMS). Furthermore, we took the real-time electricity consumption pattern for every residence, i.e., every home has its own living pattern. The proposed scheme is implemented in a smart building comprised of thirty smart homes (apartments); Real-Time Pricing (RTP) and Critical Peak Pricing (CPP) signals are examined in terms of electricity cost estimation for both a single smart home and a smart building. In addition, feasible regions are presented for single and multiple smart homes, which show the relationship among the electricity cost, electricity consumption and user waiting time. Experimental results demonstrate the effectiveness of our proposed scheme for single and multiple smart
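
    The cost/waiting-time trade-off that the meta-heuristics optimize can be illustrated for a single appliance under an RTP tariff. The sketch below simply enumerates start hours instead of running GA/CSOA/CSA, and the tariff, power rating and penalty weight are illustrative placeholders rather than values from the paper.

      import numpy as np

      def electricity_cost(schedule, power_kw, rtp_price):
          # Cost of a 24-slot on/off schedule under an hourly RTP signal.
          return float(np.sum(schedule * power_kw * rtp_price))

      def waiting_time(schedule, request_hour):
          on_hours = np.flatnonzero(schedule)
          return float(on_hours[0] - request_hour) if on_hours.size else 24.0

      def fitness(schedule, power_kw, rtp_price, request_hour, lam=0.5):
          # Toy objective: electricity cost plus a penalty on user waiting time.
          return electricity_cost(schedule, power_kw, rtp_price) \
              + lam * waiting_time(schedule, request_hour)

      rng = np.random.default_rng(5)
      rtp = rng.uniform(5.0, 30.0, 24)            # cents/kWh, hypothetical tariff
      best_start, best_f = None, float("inf")
      for start in range(8, 23):                  # run requested at 08:00, 2 h long
          sched = np.zeros(24)
          sched[start:start + 2] = 1.0
          f = fitness(sched, power_kw=1.5, rtp_price=rtp, request_hour=8)
          if f < best_f:
              best_start, best_f = start, f
      print("cheapest feasible start hour:", best_start, "objective:", round(best_f, 2))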

  20. Optimal dewatering schemes in the foundation design of an electronuclear plant

    International Nuclear Information System (INIS)

    Galeati, G.; Gambolati, G.

    1988-01-01

    A three-dimensional finite element model combined with an optimization approach based on linear mixed integer programming is developed and applied to assist in the design of the dewatering system for the electronuclear plant to be built by the Italian Electric Agency (ENEL) in Trino Vercellese, northwestern Italy. The foundations site is encompassed by a 25- to 35-m deep plastic wall with the purpose of protecting the unconfined aquifer from the significant water table lowering required by the construction project. To reduce further the propagation of the depression cone a large amount of the water pumped out is reinjected through ad hoc recharge ditches. The finite element optimization model includes both the natural and the artificial constraints and provides several optimal withdrawal strategies for the dewatering system design concerning the distribution of the abstraction wells and the corresponding pumping rates. Physical and economical objective functions are explored and the related solutions are discussed

  1. Optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme for Intel Many Integrated Core (MIC) architecture

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one for a vast number of CPU developers. However, getting maximum performance out of Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the original code on the Xeon Phi 7120P by a factor of 1.3x.

  2. Experimental Modeling of Monolithic Resistors for Silicon ICS with a Robust Optimizer-Driving Scheme

    Directory of Open Access Journals (Sweden)

    Philippe Leduc

    2002-06-01

    Full Text Available Today, an exhaustive library of models describing the electrical behavior of integrated passive components in the radio-frequency range is essential for the simulation and optimization of complex circuits. In this work, a preliminary study has been done on Tantalum Nitride (TaN) resistors integrated on silicon, and this leads to a single p-type lumped-element circuit. An efficient extraction technique is presented to provide a computer-driven optimizer with relevant initial model parameter values (the "guess-timate"). The results show that the lumped-element determination is in most cases unique, which leads to a precise simulation of self-resonant frequencies.

  3. Frequency-Selective Signal Sensing with Sub-Nyquist Uniform Sampling Scheme

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2015-01-01

    In this paper the authors discuss a problem of acquisition and reconstruction of a signal polluted by adjacent-channel interference. The authors propose a method to find a sub-Nyquist uniform sampling pattern which allows for correct reconstruction of selected frequencies. The method is inspired...... by the Restricted Isometry Property, which is known from the field of compressed sensing. Then, compressed sensing is used to successfully reconstruct a wanted signal even if some of the uniform samples were randomly lost, e. g. due to ADC saturation. An experiment which tests the proposed method in practice...

  4. Simple and efficient importance sampling scheme for a tandem queue with server slow-down

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2008-01-01

    This paper considers importance sampling as a tool for rare-event simulation. The system at hand is a so-called tandem queue with slow-down, which essentially means that the server of the first queue (or: upstream queue) switches to a lower speed when the second queue (downstream queue) exceeds some

  5. 40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Interpreting PCB concentration... § 761.79(b)(3) § 761.316 Interpreting PCB concentration measurements resulting from this sampling... composite is 20 µg/100 cm2, then the entire 9.5 square meters has a PCB surface concentration of 20 µg/100...

  6. On the non-orthogonal sampling scheme for Gabor's signal expansion

    NARCIS (Netherlands)

    Bastiaans, M.J.; Leest, van A.J.; Veen, J.P.

    2000-01-01

    Gabor's signal expansion and the Gabor transform are formulated on a non-orthogonal time-frequency lattice instead of on the traditional rectangular lattice [1,2]. The reason for doing so is that a non-orthogonal sampling geometry might be better adapted to the form of the window functions (in the

  7. TEM10 homodyne detection as an optimal small-displacement and tilt-measurement scheme

    DEFF Research Database (Denmark)

    Delaubert, Vincent; Treps, Nikolas; Lassen, Mikael Østergaard

    2006-01-01

    We report an experimental demonstration of optimal measurements of small displacement and tilt of a Gaussian beam - two conjugate variables - involving a homodyne detection with a TEM10 local oscillator. We verify that the standard split detection is only 64% efficient. We also show a displacement...

  8. Model-based predictive control scheme for cost optimization and balancing services for supermarket refrigeration Systems

    NARCIS (Netherlands)

    Weerts, H.H.M.; Shafiei, S.E.; Stoustrup, J.; Izadi-Zamanabadi, R.; Boje, E.; Xia, X.

    2014-01-01

    A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate the regulatory power services as well as energy cost optimization of such systems in the smart grid. Nonlinear dynamics existing in large-scale refrigeration plants challenge the predictive

  9. An Optimal Integrated Control Scheme for Permanent Magnet Synchronous Generator-Based Wind Turbines under Asymmetrical Grid Fault Conditions

    Directory of Open Access Journals (Sweden)

    Dan Wang

    2016-04-01

    Full Text Available In recent years, the increasing penetration level of wind energy into power systems has brought new issues and challenges. One of the main concerns is the issue of dynamic response capability during outer disturbance conditions, especially the fault-tolerance capability during asymmetrical faults. In order to improve the fault-tolerance and dynamic response capability under asymmetrical grid fault conditions, an optimal integrated control scheme for the grid-side voltage-source converter (VSC) of direct-driven permanent magnet synchronous generator (PMSG)-based wind turbine systems is proposed in this paper. The optimal control strategy includes a main controller and an additional controller. In the main controller, a double-loop controller based on differential flatness-based theory is designed for the grid-side VSC. Two parts are involved in the design process of the flatness-based controller: the reference trajectory generation of the flatness output and the implementation of the controller. In the additional control aspect, an auxiliary second harmonic compensation control loop based on an improved calculation method for grid-side instantaneous transmission power is designed by the quasi proportional resonant (Quasi-PR) control principle, which is able to simultaneously restrain the second harmonic components in active power and reactive power injected into the grid without the respective calculation of current control references. Moreover, to reduce the DC-link overvoltage during grid faults, the mathematical model of the DC-link voltage is analyzed and a feedforward modified control factor is added to the traditional DC voltage control loop in the grid-side VSC. The effectiveness of the optimal control scheme is verified in PSCAD/EMTDC simulation software.

  10. Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction

    Science.gov (United States)

    2016-01-01

    ...human-sized scene in 0.048 s to 0.101 s. Index Terms—Microwave imaging, multistatic radar, Fast Fourier Transform (FFT). I. INTRODUCTION: Near-field ... configuration, but its computational demands are extreme. Fast Fourier Transform (FFT) imaging has long been used to efficiently construct images sampled ... with the block diagram depicted in Fig. 4. It is noted that the multistatic-to-monostatic correction is valid over a finite imaging domain. However, as

  11. Focusing light through dynamical samples using fast continuous wavefront optimization.

    Science.gov (United States)

    Blochet, B; Bourdieu, L; Gigan, S

    2017-12-01

    We describe a fast continuous-optimization wavefront shaping system able to focus light through dynamic scattering media. A micro-electro-mechanical-system-based spatial light modulator, a fast photodetector, and field-programmable gate array electronics are combined to implement a continuous optimization of a wavefront with a single-mode optimization rate of 4.1 kHz. The system performance is demonstrated by focusing light through colloidal solutions of TiO2 particles in glycerol with tunable temporal stability.
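
    Continuous sequential wavefront optimization of the kind described above can be mimicked in a few lines against a static transmission-matrix model of the medium. The transmission matrix, the number of modes and the phase grid below are illustrative stand-ins for the MEMS modulator and the real sample; a real implementation keeps cycling so that the focus tracks the decorrelating medium.

      import numpy as np

      rng = np.random.default_rng(6)
      N_MODES = 64                                 # SLM segments ("modes")
      tm = (rng.normal(size=N_MODES) + 1j * rng.normal(size=N_MODES)) / np.sqrt(2.0)

      def focus_intensity(phases):
          # Detector signal behind the scattering medium, modelled by one row
          # of its transmission matrix.
          return np.abs(np.sum(np.exp(1j * phases) * tm)) ** 2

      phases = np.zeros(N_MODES)
      baseline = focus_intensity(phases)
      test_values = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
      for sweep in range(5):                       # keep cycling over the modes
          for m in range(N_MODES):
              scores = []
              for v in test_values:
                  trial = phases.copy()
                  trial[m] = v
                  scores.append(focus_intensity(trial))
              phases[m] = test_values[int(np.argmax(scores))]
      print("intensity gain over unshaped wavefront:",
            round(focus_intensity(phases) / baseline, 1))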

  12. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal

  13. Resource-Aware Load Balancing Scheme using Multi-objective Optimization in Cloud Computing

    OpenAIRE

    Kavita Rana; Vikas Zandu

    2016-01-01

    Cloud computing is a service-based, on-demand, pay-per-use model consisting of interconnected and virtualized resources delivered over the internet. In cloud computing, there are usually a number of jobs that need to be executed with the available resources to achieve optimal performance, the least possible total time for completion, the shortest response time, and efficient utilization of resources, etc. Hence, job scheduling is the most important concern; it aims to ensure that the user's requirements are ...

  14. An optimal implicit staggered-grid finite-difference scheme based on the modified Taylor-series expansion with minimax approximation method for elastic modeling

    Science.gov (United States)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2017-03-01

    Implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, whereas its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes the advantage of the TE method that guarantees great accuracy at small wavenumbers, and keeps the property of the MA method that keeps the numerical errors within a limited bound at the same time. Thus, it leads to great accuracy for numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis is made in comparison with the conventional TE-based ISFD scheme, indicating that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.

  15. Optimal grade control sampling practice in open-pit mining

    DEFF Research Database (Denmark)

    Engström, Karin; Esbensen, Kim Harry

    2017-01-01

    Misclassification of ore grades results in lost revenues, and the need for representative sampling procedures in open pit mining is increasingly important in all mining industries. This study evaluated possible improvements in sampling representativity with the use of Reverse Circulation (RC) drill... sampling compared to manual Blast Hole (BH) sampling in the Leveäniemi open pit mine, northern Sweden. The variographic experiment results showed that sampling variability was lower for RC than for BH sampling. However, the total costs for RC drill sampling significantly exceed current costs...... for manual BH sampling, which needs to be compensated for by other benefits to motivate the introduction of RC drilling. The main conclusion is that manual BH sampling can be fit-for-purpose in the studied open pit mine. However, with so many mineral commodities and mining methods in use globally...

  16. Optimized pulsed write schemes improve linearity and write speed for low-power organic neuromorphic devices

    Science.gov (United States)

    Keene, Scott T.; Melianas, Armantas; Fuller, Elliot J.; van de Burgt, Yoeri; Talin, A. Alec; Salleo, Alberto

    2018-06-01

    Neuromorphic devices are becoming increasingly appealing as efficient emulators of neural networks used to model real world problems. However, no hardware to date has demonstrated the necessary high accuracy and energy efficiency gain over CMOS in both (1) training via backpropagation and (2) in read via vector matrix multiplication. Such shortcomings are due to device non-idealities, particularly asymmetric conductance tuning in response to uniform voltage pulse inputs. Here, by formulating a general circuit model for capacitive ion-exchange neuromorphic devices, we show that asymmetric nonlinearity in organic electrochemical neuromorphic devices (ENODes) can be suppressed by an appropriately chosen write scheme. Simulations based upon our model suggest that a nonlinear write-selector could reduce the switching voltage and energy, enabling analog tuning via a continuous set of resistance states (100 states) with extremely low switching energy (~170 fJ · µm‑2). This work clarifies the pathway to neural algorithm accelerators capable of parallelism during both read and write operations.

  17. Practical splitting methods for the adaptive integration of nonlinear evolution equations. Part I: Construction of optimized schemes and pairs of schemes

    KAUST Repository

    Auzinger, Winfried; Hofstätter, Harald; Ketcheson, David I.; Koch, Othmar

    2016-01-01

    We present a number of new contributions to the topic of constructing efficient higher-order splitting methods for the numerical integration of evolution equations. Particular schemes are constructed via setup and solution of polynomial systems for the splitting coefficients. To this end we use and modify a recent approach for generating these systems for a large class of splittings. In particular, various types of pairs of schemes intended for use in adaptive integrators are constructed.
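    As a point of reference for what a splitting step does, the sketch below applies one classical second-order Strang splitting step to a linear problem u' = (A + B)u and measures the error against the exact matrix exponential. It does not reproduce the optimized higher-order schemes or the adaptive pairs constructed in the paper; the test matrices, step size and step count are arbitrary assumptions.

```python
# Minimal second-order Strang splitting step for u' = (A + B) u.
# Reference sketch only: the paper derives higher-order *optimized* splitting
# coefficients (and embedded pairs for adaptivity), which are not reproduced here.
import numpy as np
from scipy.linalg import expm

def strang_step(u, A, B, dt):
    """One Strang step: exp(dt/2 A) exp(dt B) exp(dt/2 A) applied to u."""
    return expm(0.5 * dt * A) @ (expm(dt * B) @ (expm(0.5 * dt * A) @ u))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = 0.5 * (A + A.T)   # assumed test matrices
B = rng.standard_normal((4, 4)); B = 0.5 * (B + B.T)
u0, dt, n = rng.standard_normal(4), 0.01, 100

u = u0.copy()
for _ in range(n):
    u = strang_step(u, A, B, dt)

u_exact = expm(n * dt * (A + B)) @ u0
print("splitting error:", np.linalg.norm(u - u_exact))
```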

  18. Practical splitting methods for the adaptive integration of nonlinear evolution equations. Part I: Construction of optimized schemes and pairs of schemes

    KAUST Repository

    Auzinger, Winfried

    2016-07-28

    We present a number of new contributions to the topic of constructing efficient higher-order splitting methods for the numerical integration of evolution equations. Particular schemes are constructed via setup and solution of polynomial systems for the splitting coefficients. To this end we use and modify a recent approach for generating these systems for a large class of splittings. In particular, various types of pairs of schemes intended for use in adaptive integrators are constructed.

  19. Optimal ordering and pricing policy for price sensitive stock–dependent demand under progressive payment scheme

    Directory of Open Access Journals (Sweden)

    Nita H. Shah

    2011-01-01

    Full Text Available The terminal condition of zero inventory level at the end of the cycle time adopted by Soni and Shah (2008, 2009) is not viable when demand is stock-dependent. To rectify this assumption, we extend their model for (1) an ending inventory that may be non-zero; (2) limited floor space; (3) a profit maximization model; (4) selling price as a decision variable; and (5) units in inventory deteriorating at a constant rate. An algorithm is developed to search for the optimal decision policy. The working of the proposed model is supported with a numerical example. Sensitivity analysis is carried out to investigate critical parameters.

  20. Triangular Geometrized Sampling Heuristics for Fast Optimal Motion Planning

    Directory of Open Access Journals (Sweden)

    Ahmed Hussain Qureshi

    2015-02-01

    Full Text Available Rapidly-exploring Random Tree (RRT)-based algorithms have become increasingly popular due to their lower computational complexity as compared with other path planning algorithms. The recently presented RRT* motion planning algorithm improves upon the original RRT algorithm by providing optimal path solutions. While RRT determines an initial collision-free path fairly quickly, RRT* guarantees almost certain convergence to an optimal, obstacle-free path from the start to the goal points for any given geometrical environment. However, the main limitations of RRT* include its slow processing rate and high memory consumption, due to the large number of iterations required for calculating the optimal path. In order to overcome these limitations, we present another improvement, i.e., the Triangular Geometrized-RRT* (TG-RRT*) algorithm, which utilizes triangular geometrical methods to improve the performance of the RRT* algorithm in terms of the processing time and a decreased number of iterations required for an optimal path solution. Simulations comparing the performance results of the improved TG-RRT* with RRT* are presented to demonstrate the overall improvement in performance and optimal path detection.
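    For readers unfamiliar with this family of planners, the sketch below shows the bare sampling/nearest-neighbour/steer loop that RRT-style planners share, on a 2-D unit square with no obstacles. It is plain RRT, not RRT* and not the TG-RRT* variant (no rewiring, no triangular heuristics); the step size, iteration budget and goal tolerance are illustrative choices.

```python
# Bare-bones RRT skeleton in a 2-D unit square: sample, find nearest node, steer.
# Illustrates the loop that RRT* and TG-RRT* build on; no obstacles and no
# rewiring/optimality step, so this is *not* the algorithm of the paper.
import numpy as np

def rrt(start, goal, n_iter=2000, step=0.05, goal_tol=0.05, seed=1):
    rng = np.random.default_rng(seed)
    nodes, parents = [np.asarray(start, float)], [0]
    goal = np.asarray(goal, float)
    for _ in range(n_iter):
        sample = rng.random(2)                         # uniform sample in [0,1]^2
        dists = [np.linalg.norm(sample - n) for n in nodes]
        i = int(np.argmin(dists))                      # nearest existing node
        direction = sample - nodes[i]
        new = nodes[i] + step * direction / (np.linalg.norm(direction) + 1e-12)
        nodes.append(new); parents.append(i)
        if np.linalg.norm(new - goal) < goal_tol:      # goal reached: trace back
            path, j = [new], len(nodes) - 1
            while j != 0:
                j = parents[j]; path.append(nodes[j])
            return path[::-1]
    return None

path = rrt(start=(0.1, 0.1), goal=(0.9, 0.9))
print("path length (nodes):", None if path is None else len(path))
```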

  1. Finding an Optimal Thermo-Mechanical Processing Scheme for a Gum-Type Ti-Nb-Zr-Fe-O Alloy

    Science.gov (United States)

    Nocivin, Anna; Cojocaru, Vasile Danut; Raducanu, Doina; Cinca, Ion; Angelescu, Maria Lucia; Dan, Ioan; Serban, Nicolae; Cojocaru, Mirela

    2017-09-01

    A gum-type alloy was subjected to a thermo-mechanical processing scheme to establish a suitable process for obtaining superior structural and behavioural characteristics. Three processes were proposed: a homogenization treatment, a cold-rolling process and a solution treatment with three heating temperatures: 1073 K (800 °C), 1173 K (900 °C) and 1273 K (1000 °C). Results of all three proposed processes were analyzed using x-ray diffraction and scanning electron microscopy imaging, to establish and compare the structural modifications. The behavioural status was completed with micro-hardness and tensile strength tests. The optimal results were obtained for solution treatment at 1073 K.

  2. Optimization of operation schemes in boiling water reactors using neural networks

    International Nuclear Information System (INIS)

    Ortiz S, J. J.; Castillo M, A.; Pelta, D. A.

    2012-10-01

    In previous works, the results of a recurrent neural network used to find the best combination of several groups of fuel cells, fuel loads and control rod patterns were presented. These solution groups for each fuel management problem had previously been optimized by diverse optimization techniques. The neural network chooses the partial solutions so that their combination corresponds to a good configuration of the reactor according to an objective function. The values of the variables involved in this objective function are obtained through the simulation of the combination of partial solutions by means of Simulate-3. In the present work, a multilayer neural network that learned how to predict some results of Simulate-3 was used, so that it could be substituted into the objective function and thus accelerate the response time of the whole system. The preliminary results shown in this work are encouraging enough to continue efforts in this direction and to improve the response quality of the system. (Author)

  3. Design and implementation of an optimal laser pulse front tilting scheme for ultrafast electron diffraction in reflection geometry with high temporal resolution

    Directory of Open Access Journals (Sweden)

    Francesco Pennacchio

    2017-07-01

    Full Text Available Ultrafast electron diffraction is a powerful technique to investigate out-of-equilibrium atomic dynamics in solids with high temporal resolution. When diffraction is performed in reflection geometry, the main limitation is the mismatch in group velocity between the overlapping pump light and the electron probe pulses, which affects the overall temporal resolution of the experiment. A solution already available in the literature involved pulse front tilt of the pump beam at the sample, providing a sub-picosecond time resolution. However, in the reported optical scheme, the tilted pulse is characterized by a temporal chirp of about 1 ps at 1 mm away from the centre of the beam, which limits the investigation of surface dynamics in large crystals. In this paper, we propose an optimal tilting scheme designed for a radio-frequency-compressed ultrafast electron diffraction setup working in reflection geometry with 30 keV electron pulses containing up to 10^5 electrons/pulse. To characterize our scheme, we performed optical cross-correlation measurements, obtaining an average temporal width of the tilted pulse lower than 250 fs. The calibration of the electron-laser temporal overlap was obtained by monitoring the spatial profile of the electron beam when interacting with the plasma optically induced at the apex of a copper needle (plasma lensing effect). Finally, we report the first time-resolved results obtained on graphite, where the electron-phonon coupling dynamics is observed, showing an overall temporal resolution in the sub-500 fs regime. The successful implementation of this configuration opens the way to directly probe structural dynamics of low-dimensional systems in the sub-picosecond regime, with pulsed electrons.

  4. Optimal experiment design in a filtering context with application to sampled network data

    OpenAIRE

    Singhal, Harsh; Michailidis, George

    2010-01-01

    We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...

  5. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This

  6. On optimal improvements of classical iterative schemes for Z-matrices

    Science.gov (United States)

    Noutsos, D.; Tzoumas, M.

    2006-04-01

    Many researchers have considered preconditioners, applied to linear systems, whose matrix coefficient is a Z- or an M-matrix, that make the associated Jacobi and Gauss-Seidel methods converge asymptotically faster than the unpreconditioned ones. Such preconditioners are chosen so that they eliminate the off-diagonal elements of the same column or the elements of the first upper diagonal [Milaszewicz, LAA 93 (1987) 161-170], Gunawardena et al. [LAA 154-156 (1991) 123-143]. In this work we generalize the previous preconditioners to obtain optimal methods. "Good" Jacobi and Gauss-Seidel algorithms are given and preconditioners, that eliminate more than one entry per row, are also proposed and analyzed. Moreover, the behavior of the above preconditioners to the Krylov subspace methods is studied.
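    To make the construction concrete, the sketch below builds the simplest of the preconditioners discussed here, the Gunawardena-type P = I + S that eliminates the first upper diagonal, applies it to a small tridiagonal Z-matrix, and compares the spectral radii of the Jacobi iteration matrices before and after. The test matrix and its size are assumed purely for illustration; the generalized preconditioners of the paper (eliminating more than one entry per row) are not implemented.

```python
# Sketch of the Gunawardena-type preconditioner P = I + S (S holds the negated
# first upper diagonal of A), applied before a Jacobi iteration on a Z-matrix.
# The spectral-radius comparison mirrors the kind of analysis in the abstract;
# the tridiagonal test matrix below is just an assumed example.
import numpy as np

def jacobi_iteration_matrix(A):
    D = np.diag(np.diag(A))
    return np.linalg.solve(D, D - A)          # T_J = I - D^{-1} A

n = 8
A = 2.0 * np.eye(n) - 0.5 * np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)   # Z-matrix

S = np.zeros((n, n))
idx = np.arange(n - 1)
S[idx, idx + 1] = -A[idx, idx + 1]            # eliminate the first upper diagonal
A_pre = (np.eye(n) + S) @ A                   # preconditioned coefficient matrix

rho = lambda T: max(abs(np.linalg.eigvals(T)))
print("Jacobi spectral radius, original      :", rho(jacobi_iteration_matrix(A)))
print("Jacobi spectral radius, preconditioned:", rho(jacobi_iteration_matrix(A_pre)))
```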

  7. Convex Programming and Bootstrap Sensitivity for Optimized Electricity Bill in Healthcare Buildings under a Time-Of-Use Pricing Scheme

    Directory of Open Access Journals (Sweden)

    Rodolfo Gordillo-Orquera

    2018-06-01

    Full Text Available Efficient energy management is strongly dependent on determining the adequate power contracts among the ones offered by different electricity suppliers. This topic takes special relevance in healthcare buildings, where noticeable amounts of energy are required to generate an adequate health environment for patients and staff. In this paper, a convex optimization method is scrutinized to give a straightforward analysis of the optimal power levels to be contracted while minimizing the electricity bill cost in a time-of-use pricing scheme. In addition, a sensitivity analysis is carried out on the constraints in the optimization problems, which are analyzed in terms of both their empirical distribution and their bootstrap-estimated statistical distributions to create a simple-to-use tool for this purpose, the so-called mosaic-distribution. The evaluation of the proposed method was carried out with five-year consumption data on two different kinds of healthcare buildings, a large one given by Hospital Universitario de Fuenlabrada, and a primary care center, Centro de Especialidades el Arroyo, both located at Fuenlabrada (Madrid, Spain). The analysis of the resulting optimization shows that the annual savings achieved vary moderately, ranging from −0.22% to +27.39%, depending on the analyzed year profile and the healthcare building type. The analysis introducing the mosaic-distribution to represent the sensitivity score also provides operative information to evaluate the convenience of implementing energy saving measures. All this information is useful for managers to determine the appropriate power levels for next year's contract renewal and to consider whether to implement demand response mechanisms in healthcare buildings.
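    As a minimal illustration of the bootstrap-sensitivity idea (not the paper's convex optimization or mosaic-distribution), the sketch below resamples a series of monthly peak demands and reports a bootstrap band around a crude contracted-power rule, here simply the 95th percentile of the peaks. All demand figures and the percentile rule are invented for the example.

```python
# Toy bootstrap of a contracted-power decision: the "optimal" power level is
# crudely taken as the 95th percentile of monthly peak demand, and bootstrap
# resampling gives a sensitivity band around it. Demand numbers are invented and
# the percentile rule stands in for the paper's convex optimization.
import numpy as np

rng = np.random.default_rng(42)
monthly_peaks_kw = rng.normal(loc=480.0, scale=60.0, size=60)   # 5 years, assumed

def contracted_power(peaks):
    return np.percentile(peaks, 95)

boot = np.array([contracted_power(rng.choice(monthly_peaks_kw,
                                              size=len(monthly_peaks_kw),
                                              replace=True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"point estimate: {contracted_power(monthly_peaks_kw):.1f} kW, "
      f"95% bootstrap band: [{lo:.1f}, {hi:.1f}] kW")
```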

  8. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    Science.gov (United States)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, low-cost, timely, moderate/low-spatial-resolution MODIS remote sensing data over the North China Plain (NCP) study region were first used to carry out mixed-pixel spectral decomposition and extract a useful regionalized indicator parameter (RIP), namely the fraction/percentage of winter wheat planting area in each pixel, as the regionalized indicator variable (RIV) for spatial sampling. The RIV values were then analyzed spatially to obtain the spatial structure characteristics (spatial correlation and variation) of the NCP, which were further processed to obtain scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, based on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes, together with their optimization and optimal selection, were developed as a scientific basis for improving and optimizing the existing spatial sampling schemes of large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, the optimal local spatial prediction and the gridded extrapolation results were used to implement an adaptive reporting pattern of spatial sampling in accordance with the reporting units, in order to satisfy the actual needs of sampling surveys.

  9. Intel Many Integrated Core (MIC) architecture optimization strategies for a memory-bound Weather Research and Forecasting (WRF) Goddard microphysics scheme

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Goddard cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. WRF is one of the most widely used weather prediction systems in the world, and its development is a collaborative effort involving groups around the globe. The Goddard microphysics scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. Compared to earlier microphysics schemes, the Goddard scheme incorporates a large number of improvements. Thus, we have optimized the code of this important part of WRF. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The Intel MIC is capable of executing a full operating system and entire programs, rather than just kernels as GPUs do. The MIC coprocessor supports all important Intel development tools, so the development environment is a familiar one for a vast number of CPU developers. However, getting maximum performance out of the MIC requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved performance of the original code on the Xeon Phi 7120P by a factor of 4.7x. Furthermore, the same optimizations improved performance on a dual-socket Intel Xeon E5-2670 system by a factor of 2.8x compared to the original code.

  10. Fugitive emission source characterization using a gradient-based optimization scheme and scalar transport adjoint

    Science.gov (United States)

    Brereton, Carol A.; Joynes, Ian M.; Campbell, Lucy J.; Johnson, Matthew R.

    2018-05-01

    Fugitive emissions are important sources of greenhouse gases and lost product in the energy sector that can be difficult to detect, but are often easily mitigated once they are known, located, and quantified. In this paper, a scalar transport adjoint-based optimization method is presented to locate and quantify unknown emission sources from downstream measurements. This emission characterization approach correctly predicted locations to within 5 m and magnitudes to within 13% of experimental release data from Project Prairie Grass. The method was further demonstrated on simulated simultaneous releases in a complex 3-D geometry based on an Alberta gas plant. Reconstructions were performed using both the complex 3-D transient wind field used to generate the simulated release data and using a sequential series of steady-state RANS wind simulations (SSWS) representing 30 s intervals of physical time. Both the detailed transient and the simplified wind field series could be used to correctly locate major sources and predict their emission rates within 10%, while predicting total emission rates from all sources within 24%. This SSWS case would be much easier to implement in a real-world application, and gives rise to the possibility of developing pre-computed databases of both wind and scalar transport adjoints to reduce computational time.

  11. Ultrafast lattice dynamics in photoexcited nanostructures. Femtosecond X-ray diffraction with optimized evaluation schemes

    International Nuclear Information System (INIS)

    Schick, Daniel

    2013-01-01

    Within the course of this thesis, I have investigated the complex interplay between electron and lattice dynamics in nanostructures of perovskite oxides. Femtosecond hard X-ray pulses were utilized to probe the evolution of atomic rearrangement directly, which is driven by ultrafast optical excitation of electrons. The physics of complex materials with a large number of degrees of freedom can be interpreted once the exact fingerprint of ultrafast lattice dynamics in time-resolved X-ray diffraction experiments for a simple model system is well known. The motion of atoms in a crystal can be probed directly and in real-time by femtosecond pulses of hard X-ray radiation in a pump-probe scheme. In order to provide such ultrashort X-ray pulses, I have built up a laser-driven plasma X-ray source. The setup was extended by a stable goniometer, a two-dimensional X-ray detector and a cryogen-free cryostat. The data acquisition routines of the diffractometer for these ultrafast X-ray diffraction experiments were further improved in terms of signal-to-noise ratio and angular resolution. The implementation of a high-speed reciprocal-space mapping technique allowed for a two-dimensional structural analysis with femtosecond temporal resolution. I have studied the ultrafast lattice dynamics, namely the excitation and propagation of coherent phonons, in photoexcited thin films and superlattice structures of the metallic perovskite SrRuO3. Due to the quasi-instantaneous coupling of the lattice to the optically excited electrons in this material a spatially and temporally well-defined thermal stress profile is generated in SrRuO3. This enables understanding the effect of the resulting coherent lattice dynamics in time-resolved X-ray diffraction data in great detail, e.g. the appearance of a transient Bragg peak splitting in both thin films and superlattice structures of SrRuO3. In addition, a comprehensive simulation toolbox to calculate the ultrafast lattice dynamics and the

  12. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)
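    The report's own equations are not reproduced in this record, but the familiar normal-theory version of the idea can be sketched: for false-alarm probability alpha, detection probability 1 − beta, measurement standard deviation sigma and a loss of size delta to be detected, the minimum sample size is n = ((z_{1-alpha} + z_{1-beta}) * sigma / delta)^2. The snippet below evaluates this textbook formula; the numerical inputs are arbitrary.

```python
# Standard normal-theory version of the minimum sample size for detecting a loss
# of size `delta` with false-alarm probability alpha and detection probability
# 1 - beta, assuming measurement standard deviation sigma. This is a textbook
# formula used as an illustration; the report's own derivation is not reproduced.
import math
from scipy.stats import norm

def min_sample_size(delta, sigma, alpha=0.05, beta=0.05):
    z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)
    return math.ceil((z * sigma / delta) ** 2)

print(min_sample_size(delta=2.0, sigma=1.5, alpha=0.05, beta=0.10))
```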

  13. Optimism is universal: exploring the presence and benefits of optimism in a representative sample of the world.

    Science.gov (United States)

    Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D

    2013-10-01

    Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations. © 2012 Wiley Periodicals, Inc.

  14. Determination of Optimal Double Sampling Plan using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sampath Sundaram

    2012-03-01

    Full Text Available Designing a double sampling plan requires identification of sample sizes and acceptance numbers. In this paper a genetic algorithm has been designed for the selection of optimal acceptance numbers and sample sizes for the specified producer's risk and consumer's risk. Implementation of the algorithm has been illustrated numerically for different choices of quantities involved in a double sampling plan.
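    Whatever search engine is used (a genetic algorithm in the paper, a plain grid search below), the core of the problem is evaluating the operating characteristic of a candidate plan (n1, c1, n2, c2) at the producer's and consumer's risk points. The sketch below does exactly that with binomial probabilities and brute-forces a small grid; the AQL/LQL risk points and the n2 = 2*n1 convention are assumptions for illustration, not values from the paper.

```python
# Evaluating and searching double sampling plans. The paper uses a genetic
# algorithm; here the GA is replaced by a small exhaustive search, but the
# operating-characteristic (OC) evaluation is the same. AQL/LQL risk points
# below are assumed for illustration.
from scipy.stats import binom

def accept_prob(p, n1, c1, n2, c2):
    """P(accept): d1 <= c1, or c1 < d1 <= c2 and d1 + d2 <= c2."""
    pa = binom.cdf(c1, n1, p)
    for d1 in range(c1 + 1, c2 + 1):
        pa += binom.pmf(d1, n1, p) * binom.cdf(c2 - d1, n2, p)
    return pa

AQL, alpha = 0.01, 0.05      # producer's risk point (assumed)
LQL, beta = 0.06, 0.10       # consumer's risk point (assumed)

best = None
for n1 in range(20, 201, 10):
    for c1 in range(0, 5):
        for c2 in range(c1 + 1, c1 + 6):
            n2 = 2 * n1      # common convention; a GA would treat n2 freely
            if accept_prob(AQL, n1, c1, n2, c2) >= 1 - alpha and \
               accept_prob(LQL, n1, c1, n2, c2) <= beta:
                # average sample number at AQL (only the second stage is conditional)
                asn = n1 + n2 * (binom.cdf(c2, n1, AQL) - binom.cdf(c1, n1, AQL))
                if best is None or asn < best[0]:
                    best = (asn, n1, c1, n2, c2)
print("best plan (ASN, n1, c1, n2, c2):", best)
```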

  1. SamplingStrata: An R Package for the Optimization of Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Giulio Barcaroli

    2014-11-01

    Full Text Available When designing a sampling survey, constraints are usually set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable from the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, allows one to determine the best stratification for a population frame, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.

  2. Optimizing Combinations of Flavonoids Deriving from Astragali Radix in Activating the Regulatory Element of Erythropoietin by a Feedback System Control Scheme

    Directory of Open Access Journals (Sweden)

    Hui Yu

    2013-01-01

    Full Text Available Identifying a potent drug combination from a herbal mixture is usually quite challenging, due to the large number of possible trials. Using an engineering approach, the feedback system control (FSC) scheme, we identified the potential best combinations of four flavonoids, including formononetin, ononin, calycosin, and calycosin-7-O-β-D-glucoside, deriving from Astragali Radix (AR; Huangqi), which provided the best biological action at minimal doses. Out of more than one thousand possible combinations, only tens of trials were required to optimize the flavonoid combinations that stimulated a maximal transcriptional activity of the hypoxia response element (HRE), a critical regulator for erythropoietin (EPO) transcription, in cultured human embryonic kidney fibroblasts (HEK293T). By using the FSC scheme, 90% of the work and time can be saved, and the optimized flavonoid combinations increased the HRE-mediated transcriptional activity by ~3-fold as compared with individual flavonoids, while the amount of flavonoids was reduced by ~10-fold. Our study suggests that the optimized combination of flavonoids may have a strong effect in activating the regulatory element of erythropoietin at very low dosage, which may be used as a new source of natural hematopoietic agent. The present work also indicates that the FSC scheme is able to serve as an efficient and model-free approach to optimize the drug combination of different ingredients within a herbal decoction.

  3. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, M; Li, R; Xing, L [Stanford UniversitySchool of Medicine, Stanford, CA (United States); Ye, Y [Stanford Univ, Management Science and Engineering, Stanford, Ca (United States); Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to optimize simultaneously the large collection of station parameters and significantly improves

  4. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    International Nuclear Information System (INIS)

    Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S

    2014-01-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to optimize simultaneously the large collection of station parameters and significantly improves

  5. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  6. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
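    A minimal way to see the difference discussed here is to generate both kinds of initial designs and score them with a space-filling criterion; the sketch below uses the minimum pairwise distance (maximin) as that score. It only covers the initial designs, not the subsequent OLHS optimization, and the sample size, dimension and repetition count are arbitrary.

```python
# Compare the space-filling (maximin) criterion of midpoint vs. random Latin
# hypercube initial designs. Only the *initial* designs discussed in the abstract
# are generated; the subsequent OLHS optimization step is not implemented here.
import numpy as np
from scipy.spatial.distance import pdist

def lhs(n, d, midpoint=True, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    u = 0.5 * np.ones((n, d)) if midpoint else rng.random((n, d))
    sample = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)                 # one stratum per row, per column
        sample[:, j] = (perm + u[:, j]) / n
    return sample

rng = np.random.default_rng(7)
n, d, reps = 30, 5, 200
maximin = lambda x: pdist(x).min()
mid = [maximin(lhs(n, d, True, rng)) for _ in range(reps)]
rnd = [maximin(lhs(n, d, False, rng)) for _ in range(reps)]
print("mean maximin distance, midpoint LHS:", np.mean(mid))
print("mean maximin distance, random LHS  :", np.mean(rnd))
```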

  7. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  8. A Velocity-Level Bi-Criteria Optimization Scheme for Coordinated Path Tracking of Dual Robot Manipulators Using Recurrent Neural Network.

    Science.gov (United States)

    Xiao, Lin; Zhang, Yongsheng; Liao, Bolin; Zhang, Zhijun; Ding, Lei; Jin, Long

    2017-01-01

    A dual-robot system is a robotic device composed of two robot arms. To eliminate the joint-angle drift and prevent the occurrence of high joint velocity, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and the repetitive motion), is proposed and investigated for coordinated path tracking of dual robot manipulators. Specifically, to realize the coordinated path tracking of dual robot manipulators, two subschemes are first presented for the left and right robot manipulators. After that, such two subschemes are reformulated as two general quadratic programs (QPs), which can be formulated as one unified QP. A recurrent neural network (RNN) is thus presented to solve effectively the unified QP problem. At last, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and the efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.

  9. Toward a General Theory of Commitment, Renegotiation and Contract Incompleteness : (II) Commitment Problem and Optimal Incentive Schemes in Agency with Bilateral Moral Hazard

    OpenAIRE

    Suzuki, Yutaka

    1998-01-01

    This paper investigates the characteristics of the optimal incentive contracts when the principal is also a productive agent. In this bilateral moral hazard framework, the two requirements should be satisfied in designing an incentive scheme. One is the agent's incentive provision and the other is the principal's incentive provision. Because of the trade off between these two incentive provisions, only the second best is obtainable if the incentive contract should be based only on the total o...

  10. Development zoning scheme of the territory of the projected national park "Orilskyi" in order to optimize the structure of natureusing

    Directory of Open Access Journals (Sweden)

    Zelens'ka L.I.

    2009-08-01

    Full Text Available A planning scheme is presented for the land reserved for the creation of the national park "Orilskyi" within the Shulhivskoyi village council, Petrikov district of Dnipropetrovsk region, based on a functional concept of territory planning. Subzones with protected, recreational and economic regimes are delineated. The floral-faunistic value of the protected territory and the types of rational nature use are substantiated. The results have been submitted to local government institutions for the area planning scheme.

  11. A sampling scheme intended for tandem measurements of sodium transport and microvillous surface area in the coprodaeal epithelium of hens on high- and low-salt diets.

    Science.gov (United States)

    Mayhew, T M; Dantzer, V; Elbrønd, V S; Skadhauge, E

    1990-12-01

    A tissue sampling protocol for combined morphometric and physiological studies on the mucosa of the avian coprodaeum is presented. The morphometric goal is to estimate the surface area due to microvilli at the epithelial cell apex and the proposed scheme is illustrated using material from three White Plymouth Rock hens. The scheme is designed to satisfy sampling requirements for the unbiased estimation of surface areas by vertical sectioning coupled with cycloid test lines and it incorporates a number of useful internal checks. It relies on multi-level sampling with four levels of stereological estimation. At Level I, macroscopic estimates of coprodaeal volume are obtained. Light microscopy is employed at Level II to calculate epithelial volume density. Levels III and IV require low and high power electron microscopy to estimate the surface density of the epithelial apical border and the amplification factor due to microvilli. Worked examples of the calculation steps are provided.

  12. Optimal Physics Parameterization Scheme Combination of the Weather Research and Forecasting Model for Seasonal Precipitation Simulation over Ghana

    Directory of Open Access Journals (Sweden)

    Richard Yao Kuma Agyeman

    2017-01-01

    Full Text Available Seasonal predictions of precipitation, among others, are important to help mitigate the effects of drought and floods on agriculture, hydropower generation, disasters, and many more. This work seeks to obtain a suitable combination of physics schemes of the Weather Research and Forecasting (WRF) model for seasonal precipitation simulation over Ghana. Using the ERA-Interim reanalysis as forcing data, simulation experiments spanning eight months (from April to November) were performed for two different years: a dry year (2001) and a wet year (2008). A double nested approach was used with the outer domain at 50 km resolution covering West Africa and the inner domain covering Ghana at 10 km resolution. The results suggest that the WRF model generally overestimated the observed precipitation by a mean value between 3% and 64% for both years. Most of the scheme combinations overestimated (underestimated) precipitation over the coastal (northern) zones of Ghana for both years but estimated precipitation reasonably well over the forest and transitional zones. On the whole, the combination of the WRF Single-Moment 6-Class Microphysics Scheme, the Grell-Devenyi Ensemble Cumulus Scheme, and the Asymmetric Convective Model Planetary Boundary Layer Scheme simulated the best temporal pattern and temporal variability with the least relative bias for both years and is therefore recommended for Ghana.

  13. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Full Text Available Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which also depends on the robot positioning in the Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process modeled through its components: product, process and resource; and by automatically configuring a sampling-based motion planning problem and the transition-based rapidly-exploring random tree algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, revealing its functionality and outlining future potential for optimal motion generation in robotic machining processes.

  14. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  15. Benefits of incorporating the adaptive dynamic range optimization amplification scheme into an assistive listening device for people with mild or moderate hearing loss.

    Science.gov (United States)

    Chang, Hung-Yue; Luo, Ching-Hsing; Lo, Tun-Shin; Chen, Hsiao-Chuan; Huang, Kuo-You; Liao, Wen-Huei; Su, Mao-Chang; Liu, Shu-Yu; Wang, Nan-Mai

    2017-08-28

    This study investigated whether a self-designed assistive listening device (ALD) that incorporates an adaptive dynamic range optimization (ADRO) amplification strategy can surpass a commercially available monaurally worn linear ALD, the SM100. Both subjective and objective measurements were implemented. Mandarin Hearing-In-Noise Test (MHINT) scores were the objective measurement, whereas participant satisfaction was the subjective measurement. The comparison was performed in a mixed design (i.e., subjects' hearing status being mild or moderate, quiet versus noisy, and linear versus ADRO scheme). The participants were two groups of hearing-impaired subjects, nine mild and eight moderate, respectively. The results of the ADRO system revealed a significant difference in the MHINT sentence reception threshold (SRT) in noisy environments between monaurally aided and unaided conditions, whereas the linear system did not. The benchmark results showed that the ADRO scheme is effectively beneficial to people who experience mild or moderate hearing loss in noisy environments. The satisfaction rating regarding overall speech quality indicated that the participants were satisfied with the speech quality of both the ADRO and linear schemes in quiet environments, and they were more satisfied with ADRO than with the linear scheme in noisy environments.

  16. Implementation of suitable flow injection/sequential-sample separation/preconcentration schemes for determination of trace metal concentrations using detection by electrothermal atomic absorption spectrometry and inductively coupled plasma mass spectrometry

    DEFF Research Database (Denmark)

    Hansen, Elo Harald; Wang, Jianhua

    2002-01-01

    Various preconditioning procedures comprising appropriate separation/preconcentration schemes in order to obtain optimal sensitivity and selectivity characteristics when using electrothermal atomic absorption spectrometry (ETAAS) and inductively coupled plasma mass spectrometry (ICPMS

  17. OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Maja Vrkljan

    2004-12-01

    Full Text Available The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. Analytical FAAS method of determining cobalt, chromium, copper, nickel, lead and zinc content in gabbro sample and geochemical standard AGV-1 has been applied for verification. Dissolution in mixtures of various inorganic acids has been tested, as well as Na2CO3 fusion technique. The results obtained by different methods have been compared and dissolution in the mixture of HNO3 + HF has been recommended as optimal.

  18. Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States

    Science.gov (United States)

    Sousan, Sinan Dhia Jameel

    This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 x 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at respective IMPROVE and STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that
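    The analysis step at the heart of such an OI system can be written in a few lines: x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b). The sketch below evaluates this update for a toy three-cell background and two surface observations; the covariances and values are invented and bear no relation to the CMAQ/MODIS configuration of the thesis.

```python
# Core optimal-interpolation (OI) analysis update used in this kind of
# assimilation: x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b).
# The tiny matrices below are toy values, not the CMAQ/MODIS configuration.
import numpy as np

x_b = np.array([12.0, 15.0, 9.0])           # background PM2.5 at 3 grid cells
B = np.array([[4.0, 1.0, 0.0],              # background error covariance (assumed)
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
H = np.array([[1.0, 0.0, 0.0],              # observation operator: cells 1 and 3
              [0.0, 0.0, 1.0]])
R = np.diag([2.0, 2.0])                     # observation error covariance (assumed)
y = np.array([10.0, 13.0])                  # surface observations

innovation = y - H @ x_b
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # gain matrix
x_a = x_b + K @ innovation
print("analysis:", x_a)
```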

  19. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  20. Optimizing School-Based Health-Promotion Programmes: Lessons from a Qualitative Study of Fluoridated Milk Schemes in the UK

    Science.gov (United States)

    Foster, Geraldine R. K.; Tickle, Martin

    2013-01-01

    Background and objective: Some districts in the United Kingdom (UK), where the level of child dental caries is high and water fluoridation has not been possible, implement school-based fluoridated milk (FM) schemes. However, process variables, such as consent to drink FM and loss of children as they mature, impede the effectiveness of these…

  1. NSGA-II based optimal control scheme of wind thermal power system for improvement of frequency regulation characteristics

    Directory of Open Access Journals (Sweden)

    S. Chaine

    2015-09-01

    Full Text Available This work presents a methodology to optimize the controller parameters of a doubly fed induction generator modeled for frequency regulation in an interconnected two-area wind-integrated thermal power system. The gains of the integral controller of the automatic generation control loop and of the proportional and derivative controllers of the doubly fed induction generator inertial control loop are optimized in a coordinated manner by employing the multi-objective non-dominated sorting genetic algorithm-II. To reduce the number of optimization parameters, a sensitivity analysis is performed, showing that the three controller parameters mentioned above are the most sensitive among all others. The non-dominated sorting genetic algorithm-II showed better optimization efficiency than linear programming, the genetic algorithm, particle swarm optimization, and the cuckoo search algorithm. The designed optimal controller exhibits robust performance even with variations in the penetration level of wind energy, disturbances, parameters and operating conditions of the system.
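    The distinguishing step of NSGA-II relative to the single-objective methods it is compared against is non-dominated sorting of the population. A compact version of that sorting step is sketched below; crowding distance, crossover/mutation and the actual controller-tuning objectives are omitted, and the objective values are random placeholders.

```python
# Compact non-dominated sorting, the core step of NSGA-II. Crowding distance,
# crossover/mutation and the controller-tuning objectives of the paper are
# omitted; the objective values below are random placeholders.
import numpy as np

def non_dominated_sort(F):
    """F: (n, m) array of objective values to minimize. Returns list of fronts."""
    n = len(F)
    dominates = lambda a, b: np.all(F[a] <= F[b]) and np.any(F[a] < F[b])
    dom_count = np.zeros(n, dtype=int)           # how many solutions dominate i
    dominated_by = [[] for _ in range(n)]        # solutions that i dominates
    for i in range(n):
        for j in range(n):
            if i != j and dominates(i, j):
                dominated_by[i].append(j)
            elif i != j and dominates(j, i):
                dom_count[i] += 1
    fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

F = np.random.default_rng(3).random((10, 2))     # two objectives, e.g. overshoot/settling time
print([len(f) for f in non_dominated_sort(F)])
```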

  2. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    International Nuclear Information System (INIS)

    Oliveira, Karina B. de; Oliveira, Bras H. de

    2013-01-01

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected in a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) at a flow rate of 1.0 mL min−1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  3. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected in a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) at a flow rate of 1.0 mL min−1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  4. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and it can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on an error-similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sampling point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps for a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the requirements.
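
    The sampling idea above can be illustrated with a short, self-contained sketch (not the authors' implementation): positional errors are measured on a uniform grid, the error at an arbitrary target is then estimated from nearby grid errors by inverse-distance weighting as a simple stand-in for the error-similarity model, and the residual after compensation is compared for several candidate grid steps. The workspace bounds, the synthetic error field and all function names below are hypothetical.

```python
import numpy as np

def make_grid(bounds, step):
    """Uniform grid of candidate sampling points inside a box workspace."""
    axes = [np.arange(lo, hi + 1e-9, step) for lo, hi in bounds]
    return np.array(np.meshgrid(*axes, indexing="ij")).reshape(len(bounds), -1).T

def idw_estimate(grid_pts, grid_err, targets, power=2, k=8):
    """Estimate the positional error at target points by inverse-distance
    weighting of the k nearest measured grid errors (a simple similarity model)."""
    est = np.empty((len(targets), grid_err.shape[1]))
    for i, t in enumerate(targets):
        d = np.linalg.norm(grid_pts - t, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / np.maximum(d[idx], 1e-9) ** power
        est[i] = (w[:, None] * grid_err[idx]).sum(0) / w.sum()
    return est

# Synthetic demonstration: a real robot's error field would come from
# laser-tracker measurements; here it is a smooth made-up function (mm).
rng = np.random.default_rng(0)
bounds = [(0, 1000), (0, 800), (0, 600)]              # workspace box, mm
true_err = lambda p: 0.001 * np.stack([np.sin(p[:, 0] / 300),
                                       np.cos(p[:, 1] / 250),
                                       np.sin(p[:, 2] / 200)], axis=1) * p[:, :1]

targets = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds], (500, 3))
for step in (400, 200, 100):                          # candidate grid steps, mm
    grid = make_grid(bounds, step)
    residual = true_err(targets) - idw_estimate(grid, true_err(grid), targets)
    print(f"step {step:4d} mm: {len(grid):5d} samples, "
          f"mean residual {np.linalg.norm(residual, axis=1).mean():.3f} mm")
```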

  5. Optimizing headspace sampling temperature and time for analysis of volatile oxidation products in fish oil

    DEFF Research Database (Denmark)

    Rørbæk, Karen; Jensen, Benny

    1997-01-01

    Headspace-gas chromatography (HS-GC), based on adsorption to Tenax GR(R), thermal desorption and GC, has been used for analysis of volatiles in fish oil. To optimize sampling conditions, the effect of heating the fish oil at various temperatures and times was evaluated from anisidine values (AV...

  6. Isolation and identification of phytase-producing strains from soil samples and optimization of production parameters

    Directory of Open Access Journals (Sweden)

    Masoud Mohammadi

    2017-09-01

    Discussion and conclusion: Penicillium sp. isolated from a soil sample near Qazvin was able to produce highly active phytase under optimized environmental conditions, and could be a suitable candidate for commercial production of phytase to be used as a supplement in the poultry feed industry.

  7. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. The theoretical considerations are complemented by a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques and to ensure the accuracy of optimization. However, earlier approaches have drawbacks: the optimization loop involves three phases and relies on empirical parameters. We propose a unified sampling criterion to simplify the algorithm and to reach the global optimum of constrained problems without any empirical parameters. It is able to select points located in the feasible region with high model uncertainty as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is more dominant, the infill sampling criterion or the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model because the sample points are not confined to extremely small regions, as they are in super-EGO. The performance of the proposed method, in terms of the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.

  9. Time optimization of 90Sr measurements: Sequential measurement of multiple samples during ingrowth of 90Y

    International Nuclear Information System (INIS)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-01-01

    The aim of this paper is to contribute to a more rapid determination of a series of samples containing 90Sr by making the Cherenkov measurement of the daughter nuclide 90Y more time efficient. There are many instances when an optimization of the measurement method might be favorable, such as situations requiring rapid results in order to make urgent decisions or, on the other hand, the need to maximize the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work is focused on the measurement of 90Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time will be shorter compared with using the same measurement time for all samples. It was found that by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, when assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm. - Highlights: • An approach roughly a factor of three more efficient than an un-optimized method. • The optimization gives a more efficient use of instrument time. • The efficiency increase ranges from a factor of three to 10, for 10 to 40 samples.
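
    A minimal sketch of the sequential-measurement idea (not the authors' model): each sample's counting time is the shortest that reaches a fixed MDA given how much 90Y has grown in by the time its measurement starts, so samples measured later need less time. The Currie-type MDA expression, counting efficiency, sample volume and separation-to-measurement delay used below are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import brentq

LAMBDA_Y90 = np.log(2) / (64.0 * 3600)       # 90Y decay constant, s^-1 (T1/2 ~ 64 h)

def mda_sr(t_meas, t_ingrowth, bkg_cps=0.8 / 60, eff=0.6, vol_l=0.5):
    """Currie-type MDA (Bq/L) for 90Sr via Cherenkov counting of 90Y,
    scaled by the 90Y ingrowth factor at the start of the measurement."""
    ingrowth = 1.0 - np.exp(-LAMBDA_Y90 * t_ingrowth)
    counts_bkg = bkg_cps * t_meas
    return (2.71 + 4.65 * np.sqrt(counts_bkg)) / (eff * vol_l * t_meas * ingrowth)

def required_time(t_ingrowth, mda_target=1.0):
    """Smallest counting time (s) reaching the target MDA for a given ingrowth."""
    return brentq(lambda t: mda_sr(t, t_ingrowth) - mda_target, 1.0, 48 * 3600)

def schedule(n_samples, wait=2 * 3600, mda_target=1.0):
    """Sequential schedule: sample i starts when sample i-1 finishes, so later
    samples have had more ingrowth and need shorter counting times."""
    clock, times = wait, []          # 'wait' = delay between separation and 1st count
    for _ in range(n_samples):
        t = required_time(clock, mda_target)
        times.append(t)
        clock += t
    return times

times = schedule(10)
print("counting times (h):", [round(t / 3600, 2) for t in times])
print("total counting time (h):", round(sum(times) / 3600, 1))
```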

  10. Optimal sampling plan for clean development mechanism energy efficiency lighting projects

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2013-01-01

    Highlights: • A metering cost minimisation model is built to assist the sampling plan for CDM projects. • The model minimises the total metering cost by the determination of optimal sample size. • The required 90/10 criterion sampling accuracy is maintained. • The proposed metering cost minimisation model is applicable to other CDM projects as well. - Abstract: Clean development mechanism (CDM) project developers are always interested in achieving required measurement accuracies with the least metering cost. In this paper, a metering cost minimisation model is proposed for the sampling plan of a specific CDM energy efficiency lighting project. The problem arises from the particular CDM sampling requirement of 90% confidence and 10% precision for the small-scale CDM energy efficiency projects, which is known as the 90/10 criterion. The 90/10 criterion can be met through solving the metering cost minimisation problem. All the lights in the project are classified into different groups according to uncertainties of the lighting energy consumption, which are characterised by their statistical coefficient of variance (CV). Samples from each group are randomly selected to install power meters. These meters include less expensive ones with less functionality and more expensive ones with greater functionality. The metering cost minimisation model will minimise the total metering cost through the determination of the optimal sample size at each group. The 90/10 criterion is formulated as constraints to the metering cost objective. The optimal solution to the minimisation problem will therefore minimise the metering cost whilst meeting the 90/10 criterion, and this is verified by a case study. Relationships between the optimal metering cost and the population sizes of the groups, CV values and the meter equipment cost are further explored in three simulations. The metering cost minimisation model proposed for lighting systems is applicable to other CDM projects as well.
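
    The allocation problem described above can be approximated with Cochran's classical cost-optimal allocation for stratified sampling, shown below as a hedged stand-in for the paper's metering cost minimisation model. Group sizes, coefficients of variation, mean consumptions and meter prices are made-up illustrative values; the 90/10 criterion is imposed on the stratified estimate of mean lamp consumption.

```python
import numpy as np

def optimal_metering_plan(N, means, cvs, meter_costs, precision=0.10, z=1.645):
    """Cost-optimal sample sizes per lamp group under the CDM 90/10 criterion,
    using Cochran's optimal allocation for stratified sampling with unit costs."""
    N, means, cvs, c = map(np.asarray, (N, means, cvs, meter_costs))
    W = N / N.sum()                      # stratum weights
    S = cvs * means                      # stratum standard deviations
    ybar = (W * means).sum()             # population mean consumption
    V = (precision * ybar / z) ** 2      # target variance of the estimator

    n_total = ((W * S * np.sqrt(c)).sum() * (W * S / np.sqrt(c)).sum()
               / (V + (W * S ** 2).sum() / N.sum()))
    n = np.ceil(n_total * (W * S / np.sqrt(c)) / (W * S / np.sqrt(c)).sum())
    n = np.minimum(n, N)                 # cannot meter more lamps than exist
    return n.astype(int), float((n * c).sum())

# Hypothetical project: three lamp groups with different usage variability,
# metered with cheaper or more capable (more expensive) meters.
n, cost = optimal_metering_plan(N=[5000, 3000, 1000],
                                means=[60.0, 45.0, 30.0],        # kWh/yr
                                cvs=[0.15, 0.35, 0.60],
                                meter_costs=[20.0, 20.0, 80.0])  # $/meter
print("meters per group:", n, " total metering cost: $", cost)
```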

  11. A New Wavelength Optimization and Energy-Saving Scheme Based on Network Coding in Software-Defined WDM-PON Networks

    Science.gov (United States)

    Ren, Danping; Wu, Shanshan; Zhang, Lijing

    2016-09-01

    In view of the global control and flexible monitoring characteristics of software-defined networks (SDN), we propose a new optical access network architecture dedicated to Wavelength Division Multiplexing-Passive Optical Network (WDM-PON) systems based on SDN. Network coding (NC) technology is also applied in this architecture to enhance the utilization of wavelength resources and reduce light source costs. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce the system time delay and energy consumption.

  12. Optimization of the fractionated irradiation scheme considering physical doses to tumor and organ at risk based on dose–volume histograms

    Energy Technology Data Exchange (ETDEWEB)

    Sugano, Yasutaka [Graduate School of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo, Hokkaido 060-0812 (Japan); Mizuta, Masahiro [Laboratory of Advanced Data Science, Information Initiative Center, Hokkaido University, Kita-11, Nishi-5, Kita-ku, Sapporo, Hokkaido 060-0811 (Japan); Takao, Seishin; Shirato, Hiroki; Sutherland, Kenneth L. [Department of Radiation Medicine, Graduate School of Medicine, Hokkaido University, Kita-15, Nishi-5, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Date, Hiroyuki, E-mail: date@hs.hokudai.ac.jp [Faculty of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo, Hokkaido 060-0812 (Japan)

    2015-11-15

    Purpose: Radiotherapy of solid tumors has been performed with various fractionation regimens such as multi- and hypofractionation. However, the ability to optimize the fractionation regimen considering the physical dose distribution remains insufficient. This study aims to optimize the fractionation regimen, in which the authors propose a graphical method for selecting the optimal number of fractions (n) and dose per fraction (d) based on dose–volume histograms for the tumor and normal tissues of organs around the tumor. Methods: Modified linear-quadratic models were employed to estimate the radiation effects on the tumor and an organ at risk (OAR), where the repopulation of the tumor cells and the linearity of the dose-response curve in the high dose range of the surviving fraction were considered. The minimization problem for the damage effect on the OAR was solved under the constraint that the radiation effect on the tumor is fixed, using a graphical method. Here, the damage effect on the OAR was estimated based on the dose–volume histogram. Results: It was found that the optimization of the fractionation scheme incorporating the dose–volume histogram is possible by employing appropriate cell survival models. The graphical method, considering the repopulation of tumor cells and a rectilinear response in the high dose range, enables the derivation of the optimal number of fractions and dose per fraction. For example, in the treatment of prostate cancer, the optimal fractionation was suggested to lie in the range of 8–32 fractions with a daily dose of 2.2–6.3 Gy. Conclusions: It is possible to optimize the number of fractions and dose per fraction based on the physical dose distribution (i.e., dose–volume histogram) by the graphical method considering the effects on the tumor and OARs around the tumor. This method may provide a new guideline for optimizing the fractionation regimen for physics-guided fractionation.
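
    A simplified sketch of this idea, using the standard LQ model in place of the paper's modified model and a made-up OAR dose-volume histogram: for each candidate number of fractions, the dose per fraction is set so the tumor BED stays fixed, the OAR effect is scored from the DVH, and the fraction number with the lowest OAR effect is reported. All radiobiological parameters below are illustrative assumptions.

```python
import numpy as np

def dose_per_fraction(n, bed_target, ab_tumor=10.0, k_rep=0.3, t_kick=21.0):
    """Dose per fraction d such that the tumor BED (with a simple repopulation
    term, Gy/day after a kick-off time) equals bed_target for n fractions
    delivered 5 days per week."""
    t_treat = 7.0 / 5.0 * (n - 1)                     # overall treatment time, days
    c = bed_target + k_rep * max(t_treat - t_kick, 0.0)
    # solve n*d*(1 + d/ab) = c  ->  (n/ab)*d^2 + n*d - c = 0
    return (-n + np.sqrt(n ** 2 + 4.0 * n * c / ab_tumor)) / (2.0 * n / ab_tumor)

def oar_bed(n, d, dvh_rel_dose, dvh_rel_vol, ab_oar=3.0):
    """Volume-weighted OAR BED from a (relative dose, relative volume) DVH."""
    d_oar = dvh_rel_dose * d
    return float(np.sum(dvh_rel_vol * n * d_oar * (1.0 + d_oar / ab_oar)))

# Hypothetical OAR DVH: volume fractions receiving 30-90% of the prescription dose.
rel_dose = np.array([0.3, 0.5, 0.7, 0.9])
rel_vol = np.array([0.40, 0.30, 0.20, 0.10])

bed_target = 72.0                                     # fixed tumor effect (Gy)
results = [(n, dose_per_fraction(n, bed_target)) for n in range(1, 41)]
best = min(results, key=lambda nd: oar_bed(nd[0], nd[1], rel_dose, rel_vol))
for n, d in results[::8]:
    print(f"n={n:2d}  d={d:5.2f} Gy  OAR BED={oar_bed(n, d, rel_dose, rel_vol):6.1f}")
print(f"optimum: n={best[0]}, d={best[1]:.2f} Gy per fraction")
```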

  13. Optimized IMAC-IMAC protocol for phosphopeptide recovery from complex biological samples

    DEFF Research Database (Denmark)

    Ye, Juanying; Zhang, Xumin; Young, Clifford

    2010-01-01

    using Fe(III)-NTA IMAC resin and it proved to be highly selective in the phosphopeptide enrichment of a highly diluted standard sample (1:1000) prior to MALDI MS analysis. We also observed that a higher iron purity led to an increased IMAC enrichment efficiency. The optimized method was then adapted...... to phosphoproteome analyses of cell lysates of high protein complexity. From either 20 microg of mouse sample or 50 microg of Drosophila melanogaster sample, more than 1000 phosphorylation sites were identified in each study using IMAC-IMAC and LC-MS/MS. We demonstrate efficient separation of multiply phosphorylated...... characterization of phosphoproteins in functional phosphoproteomics research projects....

  14. Optimization of CO2 Storage in Saline Aquifers Using Water-Alternating Gas (WAG) Scheme - Case Study for Utsira Formation

    Science.gov (United States)

    Agarwal, R. K.; Zhang, Z.; Zhu, C.

    2013-12-01

    For optimization of CO2 storage and reduced CO2 plume migration in saline aquifers, a genetic algorithm (GA) based optimizer has been developed and combined with the DOE multi-phase flow and heat transfer numerical simulation code TOUGH2. Designated GA-TOUGH2, this combined solver/optimizer has been verified by performing optimization studies on a number of model problems and comparing the results with brute-force optimization, which requires a large number of simulations. Using GA-TOUGH2, an innovative reservoir engineering technique known as water-alternating-gas (WAG) injection has been investigated to determine the optimal WAG operation for enhanced CO2 storage capacity. The topmost layer (layer #9) of the Utsira formation at the Sleipner Project, Norway, is considered as a case study. A cylindrical domain was used that possesses characteristics identical to the detailed 3D Utsira Layer #9 model except for the absence of 3D topography. Topographical details are known to be important in determining CO2 migration at Sleipner, and they are considered in our companion model for the history match of CO2 plume migration at Sleipner. However, the simplified topography used here, without compromising accuracy, is necessary to analyze the effectiveness of the WAG operation on CO2 migration without incurring excessive computational cost. The selected WAG operation can then be simulated with full topography details later. We consider a cylindrical domain with a thickness of 35 m and a horizontal flat caprock. All hydrogeological properties are retained from the detailed 3D Utsira Layer #9 model, the most important being the horizontal-to-vertical permeability ratio of 10. Constant gas injection (CGI) operation with a nine-year average CO2 injection rate of 2.7 kg/s is considered as the baseline case for comparison. The 30-day, 15-day, and 5-day WAG cycle durations are considered for the WAG optimization design. Our computations show that for the simplified Utsira Layer #9 model, the

  15. Optimization of sampling for the determination of the mean Radium-226 concentration in surface soil

    International Nuclear Information System (INIS)

    Williams, L.R.; Leggett, R.W.; Espegren, M.L.; Little, C.A.

    1987-08-01

    This report describes a field experiment that identifies an optimal method for determining compliance with the US Environmental Protection Agency's Ra-226 guidelines for soil. The primary goals were to establish practical levels of accuracy and precision in estimating the mean Ra-226 concentration of surface soil in a small contaminated region; to obtain empirical information on composite vs. individual soil sampling and on random vs. uniformly spaced sampling; and to examine the practicality of using gamma measurements to predict the average surface radium concentration and to estimate the number of soil samples required to obtain a given level of accuracy and precision. Numerous soil samples were collected on each of six sites known to be contaminated with uranium mill tailings. Three types of samples were collected on each site: 10-composite samples, 20-composite samples, and individual or post hole samples; 10-composite sampling is the method of choice because it yields a given level of accuracy and precision for the least cost. Gamma measurements can be used to reduce surface soil sampling on some sites. 2 refs., 5 figs., 7 tabs

  16. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N* in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
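
    The O(N^(1/2)) behaviour can be illustrated numerically with a toy cost/benefit structure that is much simpler than the exponential-family utility treated in the paper: each trial patient carries a fixed cost, and each of the remaining N - n patients suffers an expected loss proportional to the posterior variance of the estimated treatment effect (which shrinks like 1/n). All parameter values below are hypothetical.

```python
import numpy as np

def expected_loss(n, N, cost_per_patient=1.0, effect_var=25.0, harm_per_var=4.0):
    """Toy decision-theoretic criterion: trial cost grows linearly in n, while
    each of the N - n post-trial patients incurs a loss proportional to the
    posterior variance of the estimated effect, effect_var / n."""
    return cost_per_patient * n + (N - n) * harm_per_var * effect_var / n

for N in (1_000, 10_000, 100_000, 1_000_000):
    n_grid = np.arange(1, N)
    n_opt = n_grid[np.argmin(expected_loss(n_grid, N))]
    print(f"N={N:>9,d}  optimal trial size n*={n_opt:6d}  "
          f"n*/sqrt(N)={n_opt / np.sqrt(N):.2f}")
```

The roughly constant n*/sqrt(N) ratio printed for growing N is the square-root scaling the abstract refers to, here arising from the 1/n decay of the post-trial loss.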

  17. Alternative difference analysis scheme combining R-space EXAFS fit with global optimization XANES fit for X-ray transient absorption spectroscopy.

    Science.gov (United States)

    Zhan, Fei; Tao, Ye; Zhao, Haifeng

    2017-07-01

    Time-resolved X-ray absorption spectroscopy (TR-XAS), based on the laser-pump/X-ray-probe method, is powerful in capturing the change of the geometrical and electronic structure of the absorbing atom upon excitation. TR-XAS data analysis is generally performed on the laser-on minus laser-off difference spectrum. Here, a new analysis scheme is presented for the TR-XAS difference fitting in both the extended X-ray absorption fine-structure (EXAFS) and the X-ray absorption near-edge structure (XANES) regions. R-space EXAFS difference fitting could quickly provide the main quantitative structure change of the first shell. The XANES fitting part introduces a global non-derivative optimization algorithm and optimizes the local structure change in a flexible way where both the core XAS calculation package and the search method in the fitting shell are changeable. The scheme was applied to the TR-XAS difference analysis of the Fe(phen)3 spin crossover complex and yielded reliable distance change and excitation population.

  18. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
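
    A compact sketch of the weighted binary matrix sampling idea (a simplification of VISSA, not the authors' code): binary inclusion vectors are drawn with per-variable weights, sub-models are scored on a hold-out split using ordinary least squares in place of the PLS models typically used for NIR calibration, and the weights are updated to the inclusion frequencies within the best-performing sub-models. Data, model type and all parameter values below are illustrative.

```python
import numpy as np

def wbms_select(X, y, n_models=500, top_frac=0.1, n_iter=15, rng=None):
    """Variable selection by iteratively re-weighted binary matrix sampling:
    the inclusion probabilities shrink the variable space at each iteration."""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    n_tr = int(0.7 * n)                              # simple hold-out split
    w = np.full(p, 0.5)
    for _ in range(n_iter):
        B = rng.random((n_models, p)) < w            # weighted binary matrix
        errors = np.full(n_models, np.inf)
        for i, mask in enumerate(B):
            if not mask.any():
                continue
            beta, *_ = np.linalg.lstsq(X[:n_tr, mask], y[:n_tr], rcond=None)
            errors[i] = np.mean((y[n_tr:] - X[n_tr:, mask] @ beta) ** 2)
        best = B[np.argsort(errors)[: int(top_frac * n_models)]]
        w = best.mean(axis=0)                        # new inclusion frequencies
    return np.where(w > 0.5)[0]

# Toy data: only the first 5 of 50 variables carry signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))
y = X[:, :5] @ np.array([3.0, -2.0, 1.5, 1.0, -1.0]) + 0.1 * rng.normal(size=200)
print("selected variables:", wbms_select(X, y, rng=rng))
```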

  19. Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples

    Directory of Open Access Journals (Sweden)

    Hyunok Oh

    2003-05-01

    In multimedia and graphics applications, data samples of nonprimitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model with a given execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and by 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirement.

  20. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    Directory of Open Access Journals (Sweden)

    Arnaud Chapon

    Searching for low-energy β contamination in industrial environments requires Liquid Scintillation Counting. This indirect measurement method demands careful control from sampling to the measurement itself. Thus, in this paper, we focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters and the definition of energy windows, using the maximization of a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except for those relative to some parameters specific to PerkinElmer, most of the results presented here can be extended to other counters. Keywords: Liquid Scintillation Counting (LSC), PerkinElmer, Tri-Carb, Smear, Swipe
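
    The energy-window optimization step can be sketched as a brute-force scan over lower and upper channel limits that maximizes the usual figure of merit FOM = E^2/B. The spectra below are synthetic stand-ins for a measured reference spectrum and a blank, the "efficiency" is taken simply as the fraction of reference counts falling in the window, and channel ranges, live times and count levels are hypothetical.

```python
import numpy as np

def best_window(source_spectrum, background_spectrum, live_time_bkg):
    """Scan all [low, high) channel windows and return the one maximising the
    liquid-scintillation figure of merit FOM = E^2 / B (E in %, B in cpm)."""
    n = len(source_spectrum)
    cum_src = np.concatenate(([0.0], np.cumsum(source_spectrum)))
    cum_bkg = np.concatenate(([0.0], np.cumsum(background_spectrum)))
    total_src = cum_src[-1]
    best = (0.0, 0, n)
    for lo in range(n):
        for hi in range(lo + 1, n + 1):
            eff = 100.0 * (cum_src[hi] - cum_src[lo]) / total_src
            bkg = (cum_bkg[hi] - cum_bkg[lo]) / live_time_bkg   # cpm
            if bkg > 0 and eff ** 2 / bkg > best[0]:
                best = (eff ** 2 / bkg, lo, hi)
    return best

# Synthetic low-energy beta spectrum over 100 channels plus a flat background.
rng = np.random.default_rng(2)
ch = np.arange(100)
source = rng.poisson(2000 * np.exp(-((ch - 15) / 10.0) ** 2))   # counts
background = rng.poisson(np.full(100, 30))                      # counts
fom, lo, hi = best_window(source, background, live_time_bkg=100.0)
print(f"optimal window: channels {lo}-{hi}, FOM = {fom:.0f}")
```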

  1. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  2. Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2015-01-01

    Roč. 52, č. 2 (2015), s. 419-440 ISSN 0021-9002 Grant - others:GA AV ČR(CZ) 171396 Institutional support: RVO:67985556 Keywords : Dominated Convergence theorem for the expected average criterion * Discrepancy function * Kolmogorov inequality * Innovations * Strong sample-path optimality Subject RIV: BC - Control Systems Theory Impact factor: 0.665, year: 2015 http://library.utia.cas.cz/separaty/2015/E/sladky-0449029.pdf

  3. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Roč. 19, č. 30 (2012), s. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  4. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions of hydrological behavior. This trend, however, is accompanied by increased model complexity and larger numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE) has been widely used in uncertainty analysis for hydrological models, combining Monte Carlo sampling with Bayesian estimation. However, the stochastic sampling method of prior parameters adopted by GLUE appears inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
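
    A hedged sketch of the combined idea: a differential-evolution-style search (standing in for the GA/DE/SCE-UA samplers named above) generates parameter sets that concentrate in high-likelihood regions, every evaluated set is archived, and the behavioural sets above a likelihood threshold yield GLUE prediction bounds. The two-parameter "hydrological model", the Nash-Sutcliffe likelihood and all numbers below are toy assumptions.

```python
import numpy as np

def model(params, forcing):
    """Toy 2-parameter rainfall-runoff stand-in: gain a, recession rate k."""
    a, k = params
    q, state = [], 0.0
    for p in forcing:
        state = state * np.exp(-k) + a * p
        q.append(state)
    return np.array(q)

def likelihood(sim, obs):
    """Nash-Sutcliffe efficiency used as an informal GLUE likelihood."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def evolve_samples(obs, forcing, bounds, n_pop=200, n_gen=30, rng=None):
    """Differential-evolution-style sampler that concentrates parameter sets in
    high-likelihood regions; all evaluated sets are archived for GLUE."""
    rng = rng or np.random.default_rng(3)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, (n_pop, 2))
    fit = np.array([likelihood(model(p, forcing), obs) for p in pop])
    archive = []
    for _ in range(n_gen):
        for i in range(n_pop):
            r1, r2, r3 = pop[rng.choice(n_pop, 3, replace=False)]
            trial = np.clip(r1 + 0.8 * (r2 - r3), lo, hi)
            f = likelihood(model(trial, forcing), obs)
            archive.append((trial, f))
            if f > fit[i]:
                pop[i], fit[i] = trial, f
    return archive

rng = np.random.default_rng(4)
forcing = rng.gamma(2.0, 2.0, 200)
obs = model((0.6, 0.15), forcing) + rng.normal(0, 0.5, 200)

archive = evolve_samples(obs, forcing, bounds=[(0.0, 2.0), (0.01, 1.0)])
behavioral = [(p, f) for p, f in archive if f > 0.7]          # GLUE threshold
sims = np.array([model(p, forcing) for p, _ in behavioral])
lower, upper = np.percentile(sims, [5, 95], axis=0)           # uncertainty bands
print(f"{len(behavioral)} behavioral sets; mean 90% band width "
      f"{np.mean(upper - lower):.2f}")
```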

  5. Efficient Power Scheduling in Smart Homes Using Hybrid Grey Wolf Differential Evolution Optimization Technique with Real Time and Critical Peak Pricing Schemes

    Directory of Open Access Journals (Sweden)

    Muqaddas Naz

    2018-02-01

    With the emergence of automated environments, energy demand by consumers is increasing rapidly. More than 80% of total electricity is consumed in the residential sector. This poses the challenging task of maintaining the balance between demand and generation of electric power. In order to meet such challenges, the traditional grid is renovated by integrating two-way communication between the consumer and the generation unit. To reduce electricity cost and peak load demand, demand side management (DSM) is modeled as an optimization problem, and the solution is obtained by applying meta-heuristic techniques with different pricing schemes. In this paper, an optimization technique, the hybrid gray wolf differential evolution (HGWDE), is proposed by merging enhanced differential evolution (EDE) and gray wolf optimization (GWO) schemes using real-time pricing (RTP) and critical peak pricing (CPP). Load shifting is performed from on-peak hours to off-peak hours depending on the electricity cost defined by the utility. However, there is a trade-off between user comfort and cost. To validate the performance of the proposed algorithm, simulations have been carried out in MATLAB. Results illustrate that using RTP, the peak to average ratio (PAR) is reduced to 53.02%, 29.02% and 26.55%, while the electricity bill is reduced to 12.81%, 12.012% and 12.95%, respectively, for the 15-, 30- and 60-min operational time interval (OTI). On the other hand, the PAR and electricity bill are reduced to 47.27%, 22.91%, 22% and 13.04%, 12%, 11.11% using the CPP tariff.

  6. Trends and perspectives of flow injection/sequential injection on-line sample-pretreatment schemes coupled to ETAAS

    DEFF Research Database (Denmark)

    Wang, Jianhua; Hansen, Elo Harald

    2005-01-01

    Flow injection (FI) analysis, the first generation of this technique, was supplemented in the 1990s by its second generation, sequential injection (SI), and most recently by the third generation (i.e., Lab-on-Valve). The dominant role played by FI in automatic, on-line, sample pretreatments in ...

  7. A resting box for outdoor sampling of adult Anopheles arabiensis in rice irrigation schemes of lower Moshi, northern Tanzania

    Directory of Open Access Journals (Sweden)

    Msangi Shandala

    2009-04-01

    Background: Malaria vector sampling is the best method for understanding vector dynamics and infectivity; thus, disease transmission seasonality can be established. There is a need to protect the humans involved in the sampling of disease vectors during surveillance or in control programmes. In this study, human landing catch, two cow-odour-baited resting boxes and an unbaited resting box were evaluated as vector sampling tools in an area with a high proportion of Anopheles arabiensis as the major malaria vector. Methods: Three resting boxes were evaluated against human landing catch. Two were baited with cow odour, while the third was unbaited. The inner parts of the boxes were covered with black cloth material. Experiments were arranged in a latin-square design. Boxes were set in the evening and left undisturbed; mosquitoes were collected at 06:00 am the next morning, while human landing catch was done overnight. Results: A total of 9,558 An. arabiensis mosquitoes were collected; 17.5% (N = 1668) were collected in the resting box baited with cow body odour, 42.5% (N = 4060) in the resting box baited with cow urine, 15.1% (N = 1444) in the unbaited resting box and 24.9% (N = 2386) by the human landing catch technique. In the analysis, house position had no effect on the density of mosquitoes caught (DF = 3, F = 0.753, P = 0.387); the sampling technique had a significant effect on the density of mosquitoes caught (DF = 3, F = 37.944). Conclusion: Odour-baited resting boxes have shown the potential to replace the existing traditional method (human landing catch) for sampling malaria vectors in areas with a high proportion of An. arabiensis as the malaria vector. The use of fermented urine and the longevity of the urine odour still need to be investigated.

  8. Optimal Sample Size Determinations for the Heteroscedastic Two One-Sided Tests of Mean Equivalence: Design Schemes and Software Implementations

    Science.gov (United States)

    Jan, Show-Li; Shieh, Gwowen

    2017-01-01

    Equivalence assessment is becoming an increasingly important topic in many application areas including behavioral and social sciences research. Although there exist more powerful tests, the two one-sided tests (TOST) procedure is a technically transparent and widely accepted method for establishing statistical equivalence. Alternatively, a direct…

  9. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but it usually degrades the quality of the dose distributions in the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly degrading the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected the fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing different sampling rates, we found that including 10% of the interior voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading dose quality.
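
    The sampling step can be sketched as follows (not the authors' implementation): all boundary voxels are kept, interior voxels are clustered by their rows of a dose-influence matrix with k-means, and a fixed fraction is drawn from every cluster so that each influence signature stays represented. The influence matrix, boundary mask, cluster count and sampling fraction below are synthetic placeholders.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def sample_voxels(influence, is_boundary, frac=0.10, n_clusters=20, rng=None):
    """Select all boundary voxels plus a fraction of interior voxels chosen so
    that every cluster of similar influence-matrix rows stays represented."""
    rng = rng or np.random.default_rng(5)
    interior = np.where(~is_boundary)[0]
    # cluster interior voxels by their (normalised) influence signatures
    rows = influence[interior]
    rows = rows / (np.linalg.norm(rows, axis=1, keepdims=True) + 1e-12)
    _, labels = kmeans2(rows, n_clusters, minit="++")
    keep = [np.where(is_boundary)[0]]
    for c in range(n_clusters):
        members = interior[labels == c]
        if len(members) == 0:
            continue
        n_keep = max(1, int(round(frac * len(members))))
        keep.append(rng.choice(members, n_keep, replace=False))
    return np.sort(np.concatenate(keep))

# Synthetic organ with 5000 voxels and 100 beamlets; ~8% of voxels on the boundary.
rng = np.random.default_rng(6)
influence = rng.random((5000, 100)) ** 3          # sparse-ish dose influence
is_boundary = rng.random(5000) < 0.08
selected = sample_voxels(influence, is_boundary)
print(f"kept {len(selected)} of 5000 voxels "
      f"({100 * len(selected) / 5000:.1f}% of constraints)")
```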

  10. Evaluation of sample preparation methods and optimization of nickel determination in vegetable tissues

    Directory of Open Access Journals (Sweden)

    Rodrigo Fernando dos Santos Salazar

    2011-02-01

    Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated by food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for the preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption (ETAAS) in vegetable samples; and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. For both ETAAS and FAAS determinations, the results were validated by analyte addition and recovery tests. The most viable method tested for quantification of this element was HClO4-HNO3 wet digestion. All samples except the carrot tissue collected in Lorena contained Ni levels above those permitted by the Brazilian Ministry of Health. The most disturbing results, requiring more detailed studies, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.

  11. Demonstration and Optimization of BNFL's Pulsed Jet Mixing and RFD Sampling Systems Using NCAW Simulant

    International Nuclear Information System (INIS)

    Bontha, J.R.; Golcar, G.R.; Hannigan, N.

    2000-01-01

    The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes it very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft-ID, 15-ft-high dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loading; minimize/optimize air usage by changing the sequencing of the Pulsed Jet mixers or by altering cycle times; and demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%

  12. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or internal coefficient has a negative influence on the sampling level. The rate of change of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a complete picture of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameters are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  13. On the optimal sampling of bandpass measurement signals through data acquisition systems

    International Nuclear Information System (INIS)

    Angrisani, L; Vadursi, M

    2008-01-01

    Data acquisition systems (DAS) play a fundamental role in many modern measurement solutions. One of the parameters characterizing a DAS is its maximum sample rate, which imposes constraints on the signals that can be digitized alias-free. Bandpass sampling theory singles out separated ranges of admissible sample rates, which can be significantly lower than the carrier frequency. But how should the most convenient sample rate be chosen for the purpose at hand? The paper proposes a method for the automatic selection of the optimal sample rate in measurement applications involving bandpass signals; the effects of sample clock instability and limited resolution are also taken into account. The method allows the user to choose the location of the spectral replicas of the sampled signal in terms of normalized frequency, and the minimum guard band between replicas, thus introducing a feature that no DAS currently available on the market seems to offer. A number of experimental tests on bandpass digitally modulated signals are carried out to verify that the obtained central frequency agrees with the expected one
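
    The admissible-rate computation at the heart of such a method can be sketched with the classical bandpass sampling inequalities, here applied to the band widened by a user-chosen guard band; this is a crude stand-in for the paper's full treatment of replica location, clock instability and resolution. The signal band, guard band and digitizer limit in the example are made-up values.

```python
import math

def bandpass_rate_ranges(f_low, f_high, guard=0.0):
    """Admissible sample-rate intervals [fs_min, fs_max] for alias-free bandpass
    sampling, applied to the band widened by half a guard band on each side."""
    fl, fh = f_low - guard / 2.0, f_high + guard / 2.0
    bw = fh - fl
    ranges = []
    for n in range(1, math.floor(fh / bw) + 1):           # n = replica index
        fs_min = 2.0 * fh / n
        fs_max = math.inf if n == 1 else 2.0 * fl / (n - 1)
        if fs_min <= fs_max:
            ranges.append((n, fs_min, fs_max))
    return ranges

def lowest_feasible_rate(f_low, f_high, max_rate, guard=0.0):
    """Smallest admissible rate not exceeding the digitizer's maximum rate.
    A real DAS would also back off from the interval edges to tolerate clock
    instability, which is ignored here."""
    feasible = [(fs_min, fs_max) for _, fs_min, fs_max in
                bandpass_rate_ranges(f_low, f_high, guard) if fs_min <= max_rate]
    return min(fs_min for fs_min, _ in feasible) if feasible else None

# Example: a 5 MHz-wide signal centred at 70 MHz, 100 MS/s digitizer, 1 MHz guard.
for n, lo, hi in bandpass_rate_ranges(67.5e6, 72.5e6, guard=1e6):
    print(f"n={n:2d}: {lo / 1e6:7.2f} - {hi / 1e6:7.2f} MS/s")
print("chosen rate:", lowest_feasible_rate(67.5e6, 72.5e6, 100e6, guard=1e6) / 1e6, "MS/s")
```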

  14. Optimal sample to tracer ratio for isotope dilution mass spectrometry: the polyisotopic case

    International Nuclear Information System (INIS)

    Laszlo, G.; Ridder, P. de; Goldman, A.; Cappis, J.; Bievre, P. de

    1991-01-01

    The Isotope Dilution Mass Spectrometry (IDMS) measurement technique provides a means for determining the unknown amount of various isotopes of an element in a sample solution of known mass. The sample solution is mixed with an auxiliary solution, or tracer, containing a known amount of the same element with the same isotopes but of different relative abundances or isotopic composition, and the induced change in the isotopic composition is measured by isotope mass spectrometry. The technique involves the measurement of the abundance ratio of each isotope to a (same) reference isotope in the sample solution, in the tracer solution and in the blend of the sample and tracer solutions. These isotope ratio measurements, the known element amount in the tracer and the known mass of the sample solution are used to calculate the unknown amount of one isotope in the sample solution. Subsequently, the unknown amount of the element is determined. The purpose of this paper is to examine the optimization of the ratio of the estimated unknown amount of element in the sample solution to the known amount of element in the tracer solution, in order to minimize the relative uncertainty in the determination of the unknown amount of element
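
    For the simplest two-isotope picture, the optimization discussed above can be illustrated by scanning the error magnification factor of the blend ratio: it is minimal when the blend ratio equals the geometric mean of the sample and spike ratios, which then fixes the sample-to-tracer mixing proportion. The ratios below are hypothetical, and the polyisotopic treatment of the paper is not reproduced.

```python
import numpy as np

def error_magnification(r_blend, r_sample, r_spike):
    """Factor by which a relative error in the measured blend ratio propagates
    into the IDMS result (two-isotope simplification)."""
    return (r_blend * abs(r_spike - r_sample)
            / (abs(r_spike - r_blend) * abs(r_blend - r_sample)))

# Hypothetical isotope ratios (reference isotope / spike isotope).
r_sample, r_spike = 0.02, 50.0
r = np.linspace(0.05, 40.0, 100_000)
m = error_magnification(r, r_sample, r_spike)
print(f"numerical optimum blend ratio: {r[np.argmin(m)]:.3f}")
print(f"geometric mean sqrt(R_sample*R_spike): {np.sqrt(r_sample * r_spike):.3f}")
print(f"minimum error magnification factor: {m.min():.3f}")
```

Once the target blend ratio is chosen this way, the amount of tracer to add follows from the estimated element amount in the sample, which is the sample-to-tracer ratio the abstract refers to.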

  15. [Sampling optimization for tropical invertebrates: an example using dung beetles (Coleoptera: Scarabaeinae) in Venezuela].

    Science.gov (United States)

    Ferrer-Paris, José Rafael; Sánchez-Mercado, Ada; Rodríguez, Jon Paul

    2013-03-01

    The development of efficient sampling protocols is an essential prerequisite for evaluating and identifying priority conservation areas. There are few protocols for fauna inventory and monitoring at wide geographical scales in the tropics, where the complexity of communities and high biodiversity levels make the implementation of efficient protocols more difficult. We propose here a simple strategy to optimize the capture of dung beetles, applied to sampling with baited traps and generalizable to other sampling methods. We analyzed data from eight transects sampled between 2006-2008 with the aim of developing a uniform sampling design that allows species richness, abundance and composition to be estimated confidently at wide geographical scales. We examined four characteristics of any sampling design that affect the effectiveness of the sampling effort: the number of traps, sampling duration, type and proportion of bait, and spatial arrangement of the traps along transects. We used species accumulation curves, rank-abundance plots, indicator species analysis, and multivariate correlograms. We captured 40 337 individuals (115 species/morphospecies of 23 genera). Most species were attracted by both dung and carrion, but two thirds had greater relative abundance in traps baited with human dung. Different aspects of the sampling design influenced each diversity attribute in different ways. To obtain reliable richness estimates, the number of traps was the most important aspect. Accurate abundance estimates were obtained when the sampling period was increased, while the spatial arrangement of traps was determinant in capturing the species composition pattern. An optimum sampling strategy for accurate estimates of richness, abundance and diversity should: (1) set 50-70 traps to maximize the number of species detected, (2) take samples during 48-72 hours and set trap groups along the transect to reliably estimate species abundance, (3) set traps in groups of at least 10 traps to

  16. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    Science.gov (United States)

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  17. The scheme optimization and management innovation for the first containment integrated in-service test of nuclear power plant

    International Nuclear Information System (INIS)

    Wang Haiwei; Yang Gang

    2014-01-01

    The containment integrated test is a large-scale, high-risk and very difficult test in pressurized water reactor nuclear power plants. By simulating the peak pressure inside the containment under design-basis accident conditions, measuring the total leakage rate of the containment at the peak pressure, and implementing structure inspection tests at several pressure levels, the containment's performance can be verified. The containment integrated test is an important witness point supervised by the NNSA. The test results are decisive in determining whether the reactor can be started. The containment integrated test in the 301 overhaul is the first in-service test of Unit 3. Drawing on the experience of the six previous tests at Qinshan Second Nuclear Power Plant and feedback from other plants, the test scheme became more scientific and the organizational management more standardized. This article discusses the containment integrated test in the 301 overhaul and summarizes the experience to provide a reference for future containment integrated tests. (authors)

  18. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    International Nuclear Information System (INIS)

    Carpy, R; Picker, G; Amann, B; Ranebo, H; Vincent-Bonnieu, S; Minster, O; Winter, J; Dettmann, J; Castiglione, L; Höhler, R; Langevin, D

    2011-01-01

    At the end of 2009 and in early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. With the use of this cell, different sample compositions of 'wet foams' have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the ISS Fluid Science Laboratory (FSL). The sample cell supports multiple observation methods, such as Diffusing-Wave and Diffuse Transmission Spectrometry, Time Resolved Correlation Spectroscopy and microscope observation; all of these methods are applied in the cell with a relatively small experiment volume. These units will be on-orbit replaceable sets that will allow the processing of multiple sample compositions (in the range of >40).

  19. Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.

    Science.gov (United States)

    Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh

    2012-02-28

    Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.

  20. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.

    Science.gov (United States)

    Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with the urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. The performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit, owing to its performance on turbid water samples and its reproducibility. Centrifugation speeds, water volumes and the use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed strong linearity over a range of concentrations from 10^6 to 10^0 leptospires/mL and low limits of detection. The optimized protocol for the quantification of pathogenic Leptospira in environmental waters (river, pond and sewage) consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden.
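
    The downstream quantification step can be sketched as follows: a lipL32 standard curve (Cq versus log10 genome equivalents) is fitted, the sample Cq is converted to copies per reaction, and the result is back-calculated to leptospires per mL of the original water sample from the concentration, elution and template volumes. The standard-curve values, volumes and the assumptions of one lipL32 copy per genome and full recovery are illustrative placeholders, not the published protocol's figures.

```python
import numpy as np

def fit_standard_curve(log10_copies, cq):
    """Linear fit Cq = slope*log10(copies) + intercept; also report efficiency."""
    slope, intercept = np.polyfit(log10_copies, cq, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0           # ~1.0 means 100%
    return slope, intercept, efficiency

def leptospires_per_ml(cq_sample, slope, intercept,
                       elution_ul=100.0, template_ul=5.0, water_ml=40.0):
    """Back-calculate Leptospira per mL of the original water sample from the
    qPCR Cq, assuming 1 lipL32 copy per genome and complete DNA recovery."""
    copies_per_reaction = 10 ** ((cq_sample - intercept) / slope)
    copies_per_extract = copies_per_reaction * (elution_ul / template_ul)
    return copies_per_extract / water_ml

# Hypothetical lipL32 standard curve: 10^1..10^6 copies per reaction.
log10_copies = np.arange(1, 7, dtype=float)
cq_standards = np.array([36.1, 32.8, 29.4, 26.1, 22.7, 19.4])
slope, intercept, eff = fit_standard_curve(log10_copies, cq_standards)
print(f"slope={slope:.2f}, intercept={intercept:.1f}, efficiency={100 * eff:.0f}%")
print(f"sample at Cq 30.5 -> {leptospires_per_ml(30.5, slope, intercept):.1f} leptospires/mL")
```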

  1. Optimal quantum control of Bose-Einstein condensates in magnetic microtraps: Comparison of gradient-ascent-pulse-engineering and Krotov optimization schemes

    Science.gov (United States)

    Jäger, Georg; Reich, Daniel M.; Goerz, Michael H.; Koch, Christiane P.; Hohenester, Ulrich

    2014-09-01

    We study optimal quantum control of the dynamics of trapped Bose-Einstein condensates: The targets are to split a condensate, residing initially in a single well, into a double well, without inducing excitation, and to excite a condensate from the ground state to the first-excited state of a single well. The condensate is described in the mean-field approximation of the Gross-Pitaevskii equation. We compare two optimization approaches in terms of their performance and ease of use; namely, gradient-ascent pulse engineering (GRAPE) and Krotov's method. Both approaches are derived from the variational principle but differ in the way the control is updated, additional costs are accounted for, and second-order-derivative information can be included. We find that GRAPE produces smoother control fields and works in a black-box manner, whereas Krotov with a suitably chosen step-size parameter converges faster but can produce sharp features in the control fields.

  2. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, which is referred to as the 'failure probability function' (FPF). It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology
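
    The weighted-sum construction can be sketched in a few lines: samples from one reliability analysis at a nominal design are re-used, with density-ratio weights, to evaluate the failure probability at nearby designs, giving an explicit FPF that a deterministic optimizer can call without further simulation. The Gaussian design variable, the toy limit state and all numbers are illustrative, and the reweighting is only trustworthy near the nominal design.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# One reliability analysis at a nominal design theta0 (theta = mean of X).
theta0, sigma, n = 2.0, 1.0, 200_000
x = rng.normal(theta0, sigma, n)
fails = (7.0 - x ** 2) < 0.0                    # toy limit state g(X) = 7 - X^2

def fpf(theta):
    """Failure probability function: P_f at design theta, expressed as a weighted
    sum over the samples generated once at theta0 (density-ratio weights)."""
    w = norm.pdf(x, theta, sigma) / norm.pdf(x, theta0, sigma)
    return float(np.mean(w * fails))

for theta in (1.6, 2.0, 2.4):
    exact = norm.sf(np.sqrt(7.0), theta, sigma) + norm.cdf(-np.sqrt(7.0), theta, sigma)
    print(f"theta={theta:.1f}:  FPF ~ {fpf(theta):.4f}   exact {exact:.4f}")
```

With the FPF available in closed (weighted) form, the outer design optimization can be run deterministically, re-running a full reliability analysis only once per decoupling iteration, as the abstract describes.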

  3. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Energy Technology Data Exchange (ETDEWEB)

    Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it [Sapienza Università di Roma, Dipartimento di Ingegneria Civile, Edile e Ambientale (Italy); Alfonso, L. [Hydroinformatics Chair Group, UNESCO-IHE, Delft (Netherlands); Di Baldassarre, G. [Department of Earth Sciences, Program for Air, Water and Landscape Sciences, Uppsala University (Sweden)

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  4. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    International Nuclear Information System (INIS)

    Ridolfi, E.; Napolitano, F.; Alfonso, L.; Di Baldassarre, G.

    2016-01-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  5. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

    A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used in anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV; from the DVHs, quantities such as the conformity index COIN and COIN integrals are derived. This is achieved by using piecewise-uniformly distributed sampling points, with a density in each region obtained from a survey of the gradients or the variance of the dose distribution in that region. The shape of the sampling regions is adapted to the patient anatomy and the shape and size of the implant. Applying this method requires a single preprocessing step that takes only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points

  6. Development of an inverse distance weighted active infrared stealth scheme using the repulsive particle swarm optimization algorithm.

    Science.gov (United States)

    Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk

    2018-04-20

    The threat posed by infrared (IR) detection is greater than that posed by other signals such as radar or sonar, because an object detected by an IR sensor cannot easily recognize that it has been detected. Recently, research on actively reducing the IR signature has been conducted, controlling the IR signal by adjusting the surface temperature of the object. In this paper, we propose an active IR stealth algorithm that synchronizes the IR signals from the object and the background around it. The proposed method uses the repulsive particle swarm optimization algorithm, a stochastic optimization method, to estimate the IR stealth surface temperature that synchronizes the IR signals from the object and the surrounding background by driving the inverse distance weighted contrast radiant intensity (CRI) to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the inverse distance weighted active IR stealth technique proposed in this study reduces the contrast radiant intensity between the object and the background by up to 32% compared to the previous method, which uses the CRI determined as the simple signal difference between the object and the background.

  7. Optimal sampling plan for clean development mechanism lighting projects with lamp population decay

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2014-01-01

    Highlights: • A metering cost minimisation model incorporating lamp population decay is built to optimise the sampling plan of CDM lighting projects. • The model minimises the total metering cost and optimises the annual sample size over the crediting period. • The required 90/10 criterion sampling accuracy is satisfied for each CDM monitoring report. - Abstract: This paper proposes a metering cost minimisation model that minimises metering cost under the constraint of the sampling accuracy requirement for clean development mechanism (CDM) energy efficiency (EE) lighting projects. Small-scale (SSC) CDM EE lighting projects usually expect a crediting period of 10 years, during which the lamp population decays over time. The SSC CDM sampling guideline requires that the monitored key parameters for the carbon emission reduction quantification satisfy a sampling accuracy of 90% confidence and 10% precision, known as the 90/10 criterion. For the existing registered CDM lighting projects, sample sizes are decided either by professional judgment or by rule of thumb, without any optimisation. Lighting samples are randomly selected and their energy consumption is monitored continuously by power meters. In this study, the sample size determination problem is formulated as a metering cost minimisation model by incorporating the linear lighting decay model given by the CDM guideline AMS-II.J. The 90/10 criterion is formulated as a set of constraints on the metering cost minimisation problem. Optimal solutions to the problem minimise the metering cost whilst satisfying the 90/10 criterion for each reporting period. The proposed metering cost minimisation model is applicable to other CDM lighting projects with different population decay characteristics as well
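
    A minimal sketch of the 90/10 sampling constraint with a linearly decaying lamp population is shown below. The coefficient of variation, decay rate, population size, and metering cost are illustrative assumptions, not values taken from the paper or from AMS-II.J.

```python
# Illustrative yearly sample sizes under the 90/10 criterion with a linearly
# decaying lamp population (CV, decay rate and costs are assumed values,
# not figures from the paper or from AMS-II.J).
import math

z90 = 1.645             # 90% confidence
precision = 0.10        # 10% relative precision
cv = 0.5                # assumed coefficient of variation of lamp energy use
N0 = 100_000            # lamps installed in year 0
decay = 0.07            # assumed fraction of lamps failing per year
cost_per_meter = 120.0  # assumed metering cost per sampled lamp per year

total_cost = 0.0
for year in range(1, 11):                       # 10-year crediting period
    N = max(int(N0 * (1 - decay * year)), 1)    # surviving population (linear decay)
    n_inf = (z90 * cv / precision) ** 2         # infinite-population sample size
    n = math.ceil(n_inf / (1 + n_inf / N))      # finite-population correction
    total_cost += n * cost_per_meter
    print(f"year {year:2d}: population {N:6d}, required sample {n:3d}")
print(f"total metering cost = {total_cost:,.0f}")
```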

  8. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Tijssen, Rob H.N. [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Senneville, Baudouin D. de [Imaging Division, University Medical Center Utrecht, Utrecht (Netherlands); L' Institut de Mathématiques de Bordeaux, Unité Mixte de Recherche 5251, Centre National de la Recherche Scientifique/University of Bordeaux, Bordeaux (France); Heerkens, Hanne D.; Vulpen, Marco van; Lagendijk, Jan J.W.; Berg, Cornelis A.T. van den [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands)

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  9. Optimal sampling in damage detection of flexural beams by continuous wavelet transform

    International Nuclear Information System (INIS)

    Basu, B; Broderick, B M; Montanari, L; Spagnoli, A

    2015-01-01

    Modern measurement techniques are improving in their capability to capture spatial displacement fields occurring in deformed structures with high precision and in a quasi-continuous manner. This in turn has made vibration-based damage identification methods more effective and reliable for real applications. However, practical measurement and data processing issues still present barriers to the application of these methods in identifying several types of structural damage. This paper deals with spatial continuous wavelet transform (CWT) damage identification methods in beam structures, with the aim of addressing the following key questions: (i) can the cost of damage detection be reduced by down-sampling? (ii) what is the minimum number of sampling intervals required for optimal damage detection? The first three free vibration modes of a cantilever and a simply supported beam with an open edge crack are numerically simulated. A thorough parametric study is carried out by taking into account the key parameters governing the problem, including the level of noise, crack depth and location, and the mechanical and geometrical parameters of the beam. The results are employed to assess the optimal number of sampling intervals for effective damage detection. (paper)
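
    The sketch below illustrates the sampling question posed above: a spatial Mexican-hat CWT is applied to a synthetic first mode shape of a simply supported beam containing a small seeded kink, at several spatial sampling densities, and the location of the largest wavelet response is reported. The beam model, crack severity, and wavelet scale are invented for illustration.

```python
# Toy spatial CWT damage localisation on a synthetic first mode shape of a
# simply supported beam with a small seeded kink at x = 0.3 (all values are
# illustrative assumptions, not the paper's beam model).
import numpy as np

def ricker(n_points, a):
    """Mexican-hat (Ricker) wavelet of scale `a`, sampled at n_points positions."""
    t = np.arange(n_points) - (n_points - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def mode_shape(n_samples, crack_at=0.3, severity=0.01):
    """First bending mode (sin) plus a small triangular dip mimicking a crack."""
    x = np.linspace(0.0, 1.0, n_samples)
    shape = np.sin(np.pi * x)
    shape -= severity * np.maximum(0.0, 1.0 - np.abs(x - crack_at) / 0.02)
    return x, shape

scale = 4                                     # wavelet scale in samples
for n_samples in (400, 100, 50):              # progressively coarser spatial sampling
    x, shape = mode_shape(n_samples)
    w = ricker(min(10 * scale, n_samples), scale)
    coeffs = np.convolve(shape, w, mode="same")
    margin = len(w) // 2 + 1                  # ignore boundary-distorted coefficients
    inner = np.abs(coeffs[margin:-margin])
    k = margin + int(np.argmax(inner))
    print(f"{n_samples:3d} samples -> largest CWT response at x = {x[k]:.2f}")
```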

  10. Design and optimization of an energy degrader with a multi-wedge scheme based on Geant4

    Science.gov (United States)

    Liang, Zhikai; Liu, Kaifeng; Qin, Bin; Chen, Wei; Liu, Xu; Li, Dong; Xiong, Yongqian

    2018-05-01

    A proton therapy facility based on an isochronous superconducting cyclotron is under construction at Huazhong University of Science and Technology (HUST). To meet clinical requirements, an energy degrader is essential in the beamline to modulate the fixed beam energy extracted from the cyclotron. Because of multiple Coulomb scattering in the degrader, the beam emittance and the energy spread are considerably increased during the energy degradation process. Therefore, a set of collimators is designed to restrict the increase in beam emittance after the energy degradation. The energy spread is reduced in the downstream beamline, which is not discussed in this paper. In this paper, the design considerations of the energy degrader and collimators are introduced, and the properties of the degrader material, the degrader structure and the initial beam parameters are discussed using the Geant4 Monte-Carlo toolkit, with the main purpose of improving the overall performance of the degrader through multi-parameter optimization.

  11. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    Science.gov (United States)

    Carpy, R.; Picker, G.; Amann, B.; Ranebo, H.; Vincent-Bonnieu, S.; Minster, O.; Winter, J.; Dettmann, J.; Castiglione, L.; Höhler, R.; Langevin, D.

    2011-12-01

    In late 2009 and early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. Using this cell, different sample compositions of "wet foams" have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the ISS Fluid Science Laboratory. The sample cell supports multiple observation methods, such as diffusing-wave and diffuse-transmission spectrometry, time-resolved correlation spectroscopy [1] and microscope observation; all of these methods are applied in the cell with a relatively small experiment volume.

  12. AMORE-HX: a multidimensional optimization of radial enhanced NMR-sampled hydrogen exchange

    International Nuclear Information System (INIS)

    Gledhill, John M.; Walters, Benjamin T.; Wand, A. Joshua

    2009-01-01

    The Cartesian-sampled three-dimensional HNCO experiment is inherently limited in time resolution and sensitivity for the real-time measurement of protein hydrogen exchange. This is largely overcome by use of the radial HNCO experiment, which employs optimized sampling angles. The significant practical limitation of three-dimensional data, namely the large data storage and processing requirements, is largely overcome by taking advantage of the inherent capability of the 2D-FT to process selected regions of frequency space without artifact or limitation. Decomposition of angle spectra into positive and negative ridge components provides increased resolution and allows statistical averaging of intensity and therefore increased precision. Strategies for averaging ridge cross sections within and between angle spectra are developed to allow further statistical approaches for increasing the precision of the measured hydrogen occupancy. Intensity artifacts potentially introduced by over-pulsing are effectively eliminated by use of the BEST approach

  13. Population pharmacokinetic analysis of clopidogrel in healthy Jordanian subjects with emphasis on optimal sampling strategy.

    Science.gov (United States)

    Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A

    2013-05-01

    Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or, to a lesser extent, an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM(®) to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse-phase high-performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in C(max) (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model; AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.
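
    For illustration, the sketch below simulates concentration-time profiles from a basic two-compartment model with first-order absorption and evaluates them at the optimal sampling times quoted above (0.25, 1, 2, 4, 5, 6 and 12 h). The rate constants and volume are made-up values, and the enterohepatic-recycling component of the paper's final model is not included.

```python
# Minimal two-compartment PK model with first-order absorption, evaluated at
# the optimal sampling times reported in the abstract. Rate constants and
# volume are illustrative assumptions; the EHC term of the final model is omitted.
import numpy as np
from scipy.integrate import solve_ivp

ka, k10, k12, k21, V = 1.2, 0.5, 0.3, 0.2, 50.0   # assumed values (1/h and L)
dose = 75.0                                        # mg, oral

def rhs(t, y):
    a_gut, a_central, a_periph = y
    return [-ka * a_gut,
            ka * a_gut - (k10 + k12) * a_central + k21 * a_periph,
            k12 * a_central - k21 * a_periph]

t_opt = np.array([0.25, 1, 2, 4, 5, 6, 12])        # optimal sampling design (h)
sol = solve_ivp(rhs, (0.0, 24.0), [dose, 0.0, 0.0], t_eval=t_opt, rtol=1e-8)

conc = sol.y[1] / V                                # central-compartment concentration
for t, c in zip(t_opt, conc):
    print(f"t = {t:5.2f} h   C = {c:.3f} mg/L")
```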

  14. Optimization of multi-channel neutron focusing guides for extreme sample environments

    International Nuclear Information System (INIS)

    Di Julio, D D; Lelièvre-Berna, E; Andersen, K H; Bentley, P M; Courtois, P

    2014-01-01

    In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
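
    As a toy illustration of driving a guide figure of merit with one of the population-based algorithms named above, the sketch below uses SciPy's differential evolution on an invented analytic 'gain' function standing in for the Monte-Carlo ray-tracing simulation; parameters and bounds are assumptions.

```python
# Sketch: driving a guide-geometry figure of merit with differential evolution.
# The analytic `neutron_gain` function is a placeholder for the Monte-Carlo
# ray-tracing simulation used in the paper; parameters and bounds are invented.
import numpy as np
from scipy.optimize import differential_evolution

def neutron_gain(params):
    """Toy stand-in for 'flux gain at the sample' as a function of guide geometry:
    params = (channel curvature, supermirror m-value, channel width in cm)."""
    curvature, m_value, width = params
    transport = np.exp(-((curvature - 0.5) ** 2) / 0.1) * (1 - np.exp(-m_value))
    focusing = np.exp(-((width - 1.5) ** 2) / 0.5)
    return transport * focusing * 3.0          # peak gain ~3 by construction

bounds = [(0.0, 2.0),    # curvature (arbitrary units)
          (1.0, 6.0),    # supermirror m-value
          (0.5, 5.0)]    # channel width (cm)

# differential_evolution minimises, so negate the gain.
result = differential_evolution(lambda p: -neutron_gain(p), bounds, seed=3, tol=1e-8)
print("best geometry:", np.round(result.x, 3), " gain:", -result.fun)
```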

  15. Neutron activation analysis for the optimal sampling and extraction of extractable organohalogens in human hair

    International Nuclear Information System (INIS)

    Zhang, H.; Chai, Z.F.; Sun, H.B.; Xu, H.F.

    2005-01-01

    Many persistent organohalogen compounds, such as DDTs and polychlorinated biphenyls, have caused serious environmental pollution problems that now affect all life. Neutron activation analysis (NAA) is a very convenient method for halogen analysis and is also the only method currently available for simultaneously determining organic chlorine, bromine and iodine in one extract. Human hair is a convenient material for evaluating the burden of such compounds in the human body and can be easily collected from people over wide ranges of age, sex, residential area, eating habits and working environment. To effectively extract organohalogen compounds from human hair, in the present work the optimal Soxhlet-extraction times of extractable organohalogen (EOX) and extractable persistent organohalogen (EPOX) from hair of different lengths were studied by NAA. The results indicated that the optimal Soxhlet-extraction time of EOX and EPOX from human hair was 8-11 h, and the highest EOX and EPOX contents were observed in the hair powder extract. The concentrations of both EOX and EPOX in different hair sections were in the order hair powder ≥ 2 mm > 5 mm, which indicates that hair samples milled into powder or cut into very short sections give not only the most homogeneous hair sample but also the best extraction efficiency.

  16. Field Sampling from a Segmented Image

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-06-01

    Full Text Available This paper presents a statistical method for deriving the optimal prospective field sampling scheme on a remote sensing image to represent different categories in the field. The iterated conditional modes algorithm (ICM) is used for segmentation...

  17. Performance of Optimally Merged Multisatellite Precipitation Products Using the Dynamic Bayesian Model Averaging Scheme Over the Tibetan Plateau

    Science.gov (United States)

    Ma, Yingzhao; Hong, Yang; Chen, Yang; Yang, Yuan; Tang, Guoqiang; Yao, Yunjun; Long, Di; Li, Changmin; Han, Zhongying; Liu, Ronghua

    2018-01-01

    Accurate estimation of precipitation from satellites at high spatiotemporal scales over the Tibetan Plateau (TP) remains a challenge. In this study, we proposed a general framework for blending multiple satellite precipitation data sets using the dynamic Bayesian model averaging (BMA) algorithm. The blending experiment was performed at a daily 0.25° grid scale for 2007-2012 among Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42RT and 3B42V7, Climate Prediction Center MORPHing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR). First, the BMA weights were optimized using the expectation-maximization (EM) method for each member on each day at 200 calibration sites and then interpolated to the entire plateau using the ordinary kriging (OK) approach. The merged data were then produced as weighted sums of the individual members over the plateau. The dynamic BMA approach showed better performance, with a smaller root-mean-square error (RMSE) of 6.77 mm/day, a higher correlation coefficient of 0.592, and a closer Euclid value of 0.833, compared to the individual members at 15 validation sites. Moreover, BMA proved to be more robust in terms of seasonality, topography, and other parameters than traditional ensemble methods, including simple model averaging (SMA) and one-outlier-removed (OOR) averaging. Error analysis between BMA and the state-of-the-art IMERG in the summer of 2014 further showed that the performance of BMA was superior with respect to multisatellite precipitation data merging. This study demonstrates that BMA provides a new solution for blending multiple satellite data sets in regions with limited gauges.
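
    The sketch below shows the EM update for BMA weights with Gaussian member kernels on synthetic 'gauge' and 'satellite' series, in the spirit of the weight-optimization step described above; it is a simplified stand-in, not the paper's daily 0.25° implementation.

```python
# EM estimation of Bayesian model averaging (BMA) weights for several
# "satellite" precipitation members against "gauge" observations, assuming
# Gaussian member kernels (synthetic data; a simplified sketch only).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n_days, n_members = 365, 4
truth = rng.gamma(2.0, 3.0, n_days)                       # "gauge" rainfall (mm/day)
bias = np.array([0.5, -1.0, 2.0, 0.0])
noise = np.array([1.0, 2.0, 4.0, 8.0])
members = truth[:, None] + bias + rng.normal(0, noise, (n_days, n_members))

w = np.full(n_members, 1.0 / n_members)                   # initial weights
sigma = np.std(truth - members.mean(axis=1))              # common kernel std
for _ in range(100):                                      # EM iterations
    # E-step: responsibility of member k for day i
    like = norm.pdf(truth[:, None], loc=members, scale=sigma) * w
    z = like / like.sum(axis=1, keepdims=True)
    # M-step: update weights and kernel variance
    w = z.mean(axis=0)
    sigma = np.sqrt((z * (truth[:, None] - members) ** 2).sum() / n_days)

print("BMA weights:", np.round(w, 3))   # the least-noisy member typically dominates
```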

  18. Combined mask and illumination scheme optimization for robust contact patterning on 45nm technology node flash memory devices

    Science.gov (United States)

    Vaglio Pret, Alessandro; Capetti, Gianfranco; Bollin, Maddalena; Cotti, Gina; De Simone, Danilo; Cantù, Pietro; Vaccaro, Alessandro; Soma, Laura

    2008-03-01

    Immersion lithography is the most important technique for extending optical lithography's capabilities and meeting the requirements of the semiconductor roadmap. The introduction of immersion tools has recently allowed the development of the 45nm technology node in single exposure. Nevertheless, even with hyper-high-NA scanners (NA > 1), some levels remain very critical to image with sufficient process performance. For memory devices, the contact mask is certainly the most challenging layer. The aim of this paper is to present the lithographic assessment of a 193nm contact-hole process with a k1 value of ~0.30 using NA 1.20 immersion lithography (the minimum pitch is 100nm). Different issues will be reported, related to mask choices (binary or attenuated phase shift) and illuminator configurations. The first phase of the work will be dedicated to a preliminary experimental screening on a simple test case in order to reduce the variables in the following optimization sections. Based on this analysis we will discard X-Y symmetrical illuminators (Annular, C-Quad) due to poor contrast. The second phase will be dedicated to a full simulation assessment. Different illuminators will be compared, with both mask types and several mask biases. From this study, we will identify some general trends in lithography performance that can be used for fine tuning of the RET settings. The last phase of the work will be dedicated to finding the sensitivity trends for one of the analyzed illuminators. In particular we study the effect of numerical aperture, mask bias in both the X and Y directions, and the poles' sigma ring-width and centre.

  19. Optimization of a Pre-MEKC Separation SPE Procedure for Steroid Molecules in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Ilona Olędzka

    2013-11-01

    Full Text Available Many steroid hormones can be considered as potential biomarkers and their determination in body fluids can create opportunities for the rapid diagnosis of many diseases and disorders of the human body. Most existing methods for the determination of steroids are usually time- and labor-consuming and quite costly. Therefore, the aim of analytical laboratories is to develop new, relatively low-cost and rapid methodologies for their determination in biological samples. Because there is little literature data on the concentrations of steroid hormones in urine samples, we have attempted the electrophoretic determination of these compounds. For this purpose, an extraction procedure for the optimized separation and simultaneous determination of seven steroid hormones in urine samples has been investigated. The isolation of analytes from biological samples was performed by liquid-liquid extraction (LLE) with dichloromethane and compared to solid-phase extraction (SPE) with C18 and hydrophilic-lipophilic balance (HLB) columns. To separate all the analytes, a micellar electrokinetic capillary chromatography (MEKC) technique was employed. For full separation of all the analytes, a running buffer (pH 9.2) composed of 10 mM sodium tetraborate decahydrate (borax), 50 mM sodium dodecyl sulfate (SDS), and 10% methanol was selected. The methodology developed in this work for the determination of steroid hormones meets all the requirements of analytical methods. The applicability of the method has been confirmed for the analysis of urine samples collected from volunteers—both men and women (students, amateur bodybuilders), both using and not using steroid doping. The data obtained during this work can be successfully used for further research on the determination of steroid hormones in urine samples.

  20. Modeling Optimal Cutoffs for the Brazilian Household Food Insecurity Measurement Scale in a Nationwide Representative Sample.

    Science.gov (United States)

    Interlenghi, Gabriela S; Reichenheim, Michael E; Segall-Corrêa, Ana M; Pérez-Escamilla, Rafael; Moraes, Claudia L; Salles-Costa, Rosana

    2017-07-01

    Background: This is the second part of a model-based approach to examine the suitability of the current cutoffs applied to the raw score of the Brazilian Household Food Insecurity Measurement Scale [Escala Brasileira de Insegurança Alimentar (EBIA)]. The approach allows identification of homogeneous groups that correspond to severity levels of food insecurity (FI) and, by extension, discriminant cutoffs able to accurately distinguish these groups. Objective: This study aims to examine whether the model-based approach for identifying optimal cutoffs first implemented in a local sample is replicated in a countrywide representative sample. Methods: Data were derived from the Brazilian National Household Sample Survey of 2013 (n = 116,543 households). Latent class factor analysis (LCFA) models from 2 to 5 classes were applied to the scale's items to identify the number of underlying FI latent classes. Next, identification of optimal cutoffs on the overall raw score was ascertained from these identified classes. Analyses were conducted on the aggregate data and by macroregions. Finally, model-based classifications (latent classes and groupings identified thereafter) were contrasted with the traditionally used classification. Results: LCFA identified 4 homogeneous groups with a very high degree of class separation (entropy = 0.934-0.975). The following cutoffs were identified in the aggregate data for households with children and/or adolescents: between 1 and 2 (1/2), 5 and 6 (5/6), and 10 and 11 (10/11); this class structure emerged consistently in all analyses. Conclusions: Nationwide findings corroborate previous local evidence that households with an overall score of 1 are more akin to those scoring negative on all items. These results may contribute to guide experts' and policymakers' decisions on the most appropriate EBIA cutoffs. © 2017 American Society for Nutrition.

  1. Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation

    International Nuclear Information System (INIS)

    Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A.; Bouquerel, Hélène

    2016-01-01

    Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air to liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved by scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L−1 and 10% for 10 mBq L−1. While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L−1, a conservative experimental estimate is rather 5 mBq L−1, corresponding to 0.14 fg g−1. The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported. - Highlights: • Radium-226 concentration measured with optimized accumulation in a container. • Radon-222 in air measured precisely with scintillation flasks and long counting times. • Method tested by repetition tests, dilution experiments, and successful blind tests. • Estimated conservative detection limit without pre-concentration is 5 mBq L−1. • Method is portable, cost

  2. Evaluation and optimization of DNA extraction and purification procedures for soil and sediment samples.

    Science.gov (United States)

    Miller, D N; Bryant, J E; Madsen, E L; Ghiorse, W C

    1999-11-01

    We compared and statistically evaluated the effectiveness of nine DNA extraction procedures by using frozen and dried samples of two silt loam soils and a silt loam wetland sediment with different organic matter contents. The effects of different chemical extractants (sodium dodecyl sulfate [SDS], chloroform, phenol, Chelex 100, and guanidinium isothiocyanate), different physical disruption methods (bead mill homogenization and freeze-thaw lysis), and lysozyme digestion were evaluated based on the yield and molecular size of the recovered DNA. Pairwise comparisons of the nine extraction procedures revealed that bead mill homogenization with SDS combined with either chloroform or phenol optimized both the amount of DNA extracted and the molecular size of the DNA (maximum size, 16 to 20 kb). Neither lysozyme digestion before SDS treatment nor guanidine isothiocyanate treatment nor addition of Chelex 100 resin improved the DNA yields. Bead mill homogenization in a lysis mixture containing chloroform, SDS, NaCl, and phosphate-Tris buffer (pH 8) was found to be the best physical lysis technique when DNA yield and cell lysis efficiency were used as criteria. The bead mill homogenization conditions were also optimized for speed and duration with two different homogenizers. Recovery of high-molecular-weight DNA was greatest when we used lower speeds and shorter times (30 to 120 s). We evaluated four different DNA purification methods (silica-based DNA binding, agarose gel electrophoresis, ammonium acetate precipitation, and Sephadex G-200 gel filtration) for DNA recovery and removal of PCR inhibitors from crude extracts. Sephadex G-200 spin column purification was found to be the best method for removing PCR-inhibiting substances while minimizing DNA loss during purification. Our results indicate that for these types of samples, optimum DNA recovery requires brief, low-speed bead mill homogenization in the presence of a phosphate-buffered SDS-chloroform mixture, followed

  3. Optimization of a radiochemistry method for plutonium determination in biological samples

    International Nuclear Information System (INIS)

    Cerchetti, Maria L.; Arguelles, Maria G.

    2005-01-01

    Plutonium has been widely used for civilian and military activities. Nevertheless, the methods to control work exposure have not evolved in the same way, remaining one of the major challenges for radiological protection practice. Due to the low acceptable incorporation limit, the usual determination is based on indirect methods in urine samples. Our main objective was to optimize a technique used to monitor internal contamination of workers exposed to plutonium isotopes. Different parameters were modified and their influence on the three steps of the method was evaluated. Those which gave the highest yield and feasibility were selected. The method involves: 1-) sample concentration (coprecipitation); 2-) plutonium purification; and 3-) source preparation by electrodeposition. In the coprecipitation phase, changes in temperature and carrier concentration were evaluated. In the ion-exchange separation, changes in the type of resin, the hydroxylamine elution solution (concentration and volume), column length and column recycling were evaluated. Finally, in the electrodeposition phase, we modified the electrolytic solution, pH and time. Measurements were made by liquid scintillation counting and alpha spectrometry (PIPS). We obtained the following yields: 88% for coprecipitation (at 60 °C with 2 ml of CaHPO4), 71% for ion-exchange (AG 1x8 Cl- 100-200 mesh resin, hydroxylamine 0.1N in HCl 0.2N as eluent, column length between 4.5 and 8 cm), and 93% for electrodeposition (H2SO4-NH4OH, 100 minutes and pH from 2 to 2.8). The expanded uncertainty was 30% (NC 95%), the decision threshold (Lc) was 0.102 Bq/L and the minimum detectable activity was 0.218 Bq/L of urine. We obtained an optimized method to screen workers exposed to plutonium. (author)

  4. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hojin; Li Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing Lei [Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States) and Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Ehwa University, Seoul 158-710 (Korea, Republic of); Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States); Department of Statistics, Stanford University, Stanford, California 94305-4065 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5304 (United States)

    2012-07-15

    Purpose: A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the
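
    To illustrate the objective structure discussed above (data fidelity plus a total-variation penalty on the fluence), the toy sketch below recovers a piecewise-constant 1D 'fluence map' through a made-up blurring operator using a smoothed TV term and a generic L-BFGS solver; it is not the TFOCS first-order solver or the clinical dose calculation used in the paper.

```python
# Toy 1D "fluence map" recovery with a smoothed total-variation penalty,
# illustrating the fidelity + TV objective structure. A generic L-BFGS solver
# is used instead of TFOCS, and the dose operator is a made-up smoothing matrix.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 80
true_fluence = np.zeros(n)
true_fluence[20:35] = 1.0            # piecewise-constant beam segments
true_fluence[50:60] = 0.6

# "Dose" operator: simple blurring matrix standing in for the dose calculation.
A = np.array([[np.exp(-0.5 * ((i - j) / 3.0) ** 2) for j in range(n)] for i in range(n)])
A /= A.sum(axis=1, keepdims=True)
d_prescribed = A @ true_fluence + rng.normal(0, 0.01, n)

lam, eps = 0.05, 1e-6                # TV weight and smoothing parameter

def objective(x):
    resid = A @ x - d_prescribed
    dx = np.diff(x)
    tv = np.sum(np.sqrt(dx ** 2 + eps))          # smoothed |x_{i+1} - x_i|
    return 0.5 * resid @ resid + lam * tv

res = minimize(objective, np.zeros(n), method="L-BFGS-B",
               bounds=[(0.0, None)] * n)          # fluence must be non-negative
print("recovered high-fluence indices:", np.where(res.x > 0.3)[0])
```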

  5. Optimization of Network Topology in Computer-Aided Detection Schemes Using Phased Searching with NEAT in a Time-Scaled Framework.

    Science.gov (United States)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-01-01

    In the field of computer-aided mammographic mass detection, many different features and classifiers have been tested. Frequently, the relevant features and the optimal topology for the artificial neural network (ANN)-based approaches at the classification stage are unknown, and thus determined by trial-and-error experiments. In this study, we analyzed a classifier that evolves ANNs using genetic algorithms (GAs), which combines feature selection with the learning task. The classifier, named "Phased Searching with NEAT in a Time-Scaled Framework", was analyzed using a dataset with 800 malignant and 800 normal tissue regions in a 10-fold cross-validation framework. The classification performance measured by the area under the receiver operating characteristic (ROC) curve was 0.856 ± 0.029. The result was also compared with four other well-established classifiers: fixed-topology ANNs, support vector machines (SVMs), linear discriminant analysis (LDA), and bagged decision trees. The results show that Phased Searching outperformed the LDA and bagged decision tree classifiers, and was only significantly outperformed by the SVM. Furthermore, the Phased Searching method required fewer features and discarded superfluous structure or topology, thus incurring lower feature computation and training and validation time requirements. Analyses performed on the network complexities evolved by Phased Searching indicate that it can evolve optimal network topologies based on its complexification and simplification parameter selection process. From the results, the study also concluded that the three classifiers - SVM, fixed-topology ANN, and Phased Searching with NeuroEvolution of Augmenting Topologies (NEAT) in a Time-Scaled Framework - perform comparably well in our mammographic mass detection scheme.

  6. A simple optimized microwave digestion method for multielement monitoring in mussel samples

    International Nuclear Information System (INIS)

    Saavedra, Y.; Gonzalez, A.; Fernandez, P.; Blanco, J.

    2004-01-01

    With the aim of obtaining a set of common decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using as factors the sample weight, digestion time and acid addition. It was found that the optimal conditions were 0.5 g of freeze-dried and triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. It was possible to carry out the atomic absorption determinations using calibrations with aqueous standards and matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels, and found to be good

  7. Identification of Variable-Number Tandem-Repeat (VNTR) Sequences in Acinetobacter pittii and Development of an Optimized Multiple-Locus VNTR Analysis Typing Scheme.

    Science.gov (United States)

    Hu, Yuan; Li, Bo Qing; Jin, Da Zhi; He, Li Hua; Tao, Xiao Xia; Zhang, Jian Zhong

    2015-12-01

    To develop a multiple-locus variable-number tandem-repeat (VNTR) analysis (MLVA) assay for Acinetobacter pittii typing. Polymorphic VNTRs were searched by Tandem Repeats Finder. The distribution and polymorphism of each VNTR locus were analyzed in all the A. pittii genomes deposited in the NCBI genome database by BLAST and were evaluated with a collection of 20 well-characterized clinical A. pittii strains and one reference strain. The MLVA assay was compared with pulsed-field gel electrophoresis (PFGE) for discriminating A. pittii isolates. Ten VNTR loci were identified upon bioinformatic screening of A. pittii genomes, but only five of them showed full amplifiability and good polymorphism. Therefore, an MLVA assay composed of five VNTR loci was developed. The typeability, reproducibility, stability, discriminatory power, and epidemiological concordance were excellent. Compared with PFGE, the new optimized MLVA typing scheme provided the same and even greater discrimination. Compared with PFGE, MLVA typing is a faster and more standardized alternative for studying the genetic relatedness of A. pittii isolates in disease surveillance and outbreak investigation. Copyright © 2015 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.

  8. Optimized Analytical Method to Determine Gallic and Picric Acids in Pyrotechnic Samples by Using HPLC/UV (Reverse Phase)

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-01-01

    A study on the optimization and development of a chromatographic method for the determination of gallic and picric acids in pyrotechnic samples is presented. To achieve this, both the HPLC analytical conditions with diode-array detection and the extraction step for a selected sample were studied. (Author)

  9. Reaction schemes of immunoanalysis

    International Nuclear Information System (INIS)

    Delaage, M.; Barbet, J.

    1991-01-01

    The authors apply a general theory for multiple equilibria to the reaction schemes of immunoanalysis, competition and sandwich. This approach allows the manufacturer to optimize the system and provide the user with interpolation functions for the standard curve and its first derivative as well, thus giving access to variance [fr

  10. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    Science.gov (United States)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  11. Colour schemes

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation.

  12. A flexible Bayesian assessment for the expected impact of data on prediction confidence for optimal sampling designs

    Science.gov (United States)

    Leube, Philipp; Geiges, Andreas; Nowak, Wolfgang

    2010-05-01

    Incorporating hydrogeological data, such as head and tracer data, into stochastic models of subsurface flow and transport helps to reduce prediction uncertainty. Considering the limited financial resources available for the data acquisition campaign, information needs towards the prediction goal should be satisfied in an efficient and task-specific manner. To find the best among a set of design candidates, an objective function is commonly evaluated that measures the expected impact of data on prediction confidence prior to their collection. An appropriate approach to this task should be stochastically rigorous, master non-linear dependencies between data, parameters and model predictions, and allow for a wide variety of different data types. Existing methods fail to fulfill all these requirements simultaneously. For this reason, we introduce a new method, denoted as CLUE (Cross-bred Likelihood Uncertainty Estimator), that derives the essential distributions and measures of data utility within a generalized, flexible and accurate framework. The method makes use of Bayesian GLUE (Generalized Likelihood Uncertainty Estimator) and extends it to an optimal design method by marginalizing over the yet unknown data values. Operating in a purely Bayesian Monte-Carlo framework, CLUE is a strictly formal information processing scheme free of linearizations. It provides full flexibility associated with the type of measurements (linear, non-linear, direct, indirect) and accounts for almost arbitrary sources of uncertainty (e.g. heterogeneity, geostatistical assumptions, boundary conditions, model concepts) via stochastic simulation and Bayesian model averaging. This helps to minimize the strength and impact of possible subjective prior assumptions that would be hard to defend prior to data collection. Our study focuses on evaluating two different uncertainty measures: (i) expected conditional variance and (ii) expected relative entropy of a given prediction goal. The

  13. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.

  14. Testing of Alignment Parameters for Ancient Samples: Evaluating and Optimizing Mapping Parameters for Ancient Samples Using the TAPAS Tool

    Directory of Open Access Journals (Sweden)

    Ulrike H. Taron

    2018-03-01

    Full Text Available High-throughput sequence data retrieved from ancient or other degraded samples has led to unprecedented insights into the evolutionary history of many species, but the analysis of such sequences also poses specific computational challenges. The most commonly used approach involves mapping sequence reads to a reference genome. However, this process becomes increasingly challenging with an elevated genetic distance between target and reference or with the presence of contaminant sequences with high sequence similarity to the target species. The evaluation and testing of mapping efficiency and stringency are thus paramount for the reliable identification and analysis of ancient sequences. In this paper, we present ‘TAPAS’ (Testing of Alignment Parameters for Ancient Samples), a computational tool that enables the systematic testing of mapping tools for ancient data by simulating sequence data reflecting the properties of an ancient dataset and performing test runs using the mapping software and parameter settings of interest. We showcase TAPAS by using it to assess and improve the mapping strategy for a degraded sample from a banded linsang (Prionodon linsang), for which no closely related reference is currently available. This enables a 1.8-fold increase in the number of mapped reads without sacrificing mapping specificity. The increase in mapped reads effectively reduces the need for additional sequencing, thus making more economical use of time, resources, and sample material.

  15. Determination of total concentration of chemically labeled metabolites as a means of metabolome sample normalization and sample loading optimization in mass spectrometry-based metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2012-12-18

    For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting materials from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting materials for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at the detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with (12)C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with (13)C-dansylation. Equal amounts of the (12)C

  16. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    Science.gov (United States)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized the near-optimal region as the original problem constraints plus a new constraint allowing performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until the desired number of alternatives has been generated. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null-space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within the bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one
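
    A minimal sketch of hit-and-run sampling over a near-optimal region is shown below: the region is a box plus an objective-tolerance constraint around a toy non-convex objective, and the run length along each random direction is found with a simple shrinking-interval (slice-sampling style) step. The objective, bounds, and tolerance are invented.

```python
# Minimal hit-and-run sampler over the near-optimal region
# {x in box : f(x) <= f_opt + tol} of a toy non-convex objective.
import numpy as np

rng = np.random.default_rng(6)
lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])     # box bounds

def f(x):
    """Toy non-convex objective: two unequal basins."""
    return min(np.sum((x - 1.0) ** 2), 0.1 + np.sum((x + 1.0) ** 2))

f_opt, tol = 0.0, 0.25                                     # known optimum, 25% band
near_optimal = lambda x: f(x) <= f_opt + tol

x = np.array([1.0, 1.0])                                   # feasible starting hit point
assert near_optimal(x)
samples = []
for _ in range(2000):
    d = rng.normal(size=2)
    d /= np.linalg.norm(d)                                 # random direction
    # Bounds on the step t so that x + t*d stays inside the box.
    with np.errstate(divide="ignore"):
        t_candidates = np.concatenate([(lo - x) / d, (hi - x) / d])
    t_lo = t_candidates[t_candidates < 0].max()
    t_hi = t_candidates[t_candidates > 0].min()
    # Shrinking-interval search for a run length inside the near-optimal region.
    while True:
        t = rng.uniform(t_lo, t_hi)
        if near_optimal(x + t * d):
            x = x + t * d
            break
        if t < 0:       # shrink the interval toward 0 (x itself is feasible)
            t_lo = t
        else:
            t_hi = t
    samples.append(x.copy())

samples = np.array(samples)
in_second_basin = int(np.sum(samples[:, 0] < 0))
print(f"{len(samples)} alternatives kept; {in_second_basin} fall in the second basin")
```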

  17. The Proteome of Ulcerative Colitis in Colon Biopsies from Adults - Optimized Sample Preparation and Comparison with Healthy Controls.

    Science.gov (United States)

    Schniers, Armin; Anderssen, Endre; Fenton, Christopher Graham; Goll, Rasmus; Pasing, Yvonne; Paulssen, Ruth Hracky; Florholmen, Jon; Hansen, Terkel

    2017-12-01

    The purpose of the study was to optimize the sample preparation and then use the improved sample preparation to identify proteome differences between inflamed ulcerative colitis tissue from untreated adults and healthy controls. To optimize the sample preparation, we studied the effect of adding different detergents to a urea-containing lysis buffer for a Lys-C/trypsin tandem digestion. With the optimized method, we prepared clinical samples from six ulcerative colitis patients and six healthy controls and analysed them by LC-MS/MS. We examined the acquired data to identify differences between the states. We improved the protein extraction and the number of protein identifications by utilizing a urea- and sodium deoxycholate-containing buffer. Comparing ulcerative colitis and healthy tissue, we found 168 of 2366 identified proteins differentially abundant. Inflammatory proteins are more abundant in ulcerative colitis, while proteins related to anion transport and mucus production are less abundant. A high proportion of S100 proteins is differentially abundant, notably with both up-regulated and down-regulated proteins. The optimized sample preparation method will improve future proteomic studies on colon mucosa. The observed protein abundance changes and their enrichment in various groups improve our understanding of ulcerative colitis at the protein level. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Development and optimization of the determination of pharmaceuticals in water samples by SPE and HPLC with diode-array detection.

    Science.gov (United States)

    Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra

    2013-09-01

    This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges in extracting the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, the elution solvent, and the sorbent mass were optimized. In addition to optimization of the SPE procedure, the optimal HPLC column was selected from columns with different stationary phases from different manufacturers. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases except for ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good intra- and interday precision with RSD below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, these methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
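
    For the multi-objective selection step, the core operation is extracting the non-dominated (Pareto-optimal) set from the objective values of sampled conformations. A small, generic sketch; the two objectives and the scores below are placeholders, not RosettaLigand output:

        import numpy as np

        def pareto_front(scores):
            # indices of non-dominated rows, minimizing every column
            scores = np.asarray(scores)
            keep = np.ones(len(scores), dtype=bool)
            for i in range(len(scores)):
                if not keep[i]:
                    continue
                # j dominates i if j is <= in all objectives and < in at least one
                dominates_i = np.all(scores <= scores[i], axis=1) & np.any(scores < scores[i], axis=1)
                if dominates_i.any():
                    keep[i] = False
            return np.where(keep)[0]

        # hypothetical (interface_energy, total_energy) pairs for sampled conformations
        rng = np.random.default_rng(0)
        scores = rng.normal(size=(500, 2))
        print("non-dominated conformations:", pareto_front(scores))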

  20. Low-dose cone-beam CT via raw counts domain low-signal correction schemes: Performance assessment and task-based parameter optimization (Part II. Task-based parameter optimization).

    Science.gov (United States)

    Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong

    2018-05-01

    Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low-radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j correspond to the i-th imaging task and the j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast-high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). Two spatial locations were considered for the analysis, a posterior region-of-interest (ROI) located within the noise streaks region
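
    A schematic of the parameter-selection step, assuming the detectability map has already been measured on a dense grid; the d' surface below is synthetic and the parameter names are only illustrative:

        import numpy as np

        # densely sampled parameter space for a hypothetical two-parameter LSC filter
        p1 = np.linspace(0.0, 1.0, 41)      # e.g. normalized low-signal threshold
        p2 = np.linspace(0.0, 1.0, 41)      # e.g. normalized high-signal threshold
        P1, P2 = np.meshgrid(p1, p2, indexing="ij")

        def measured_dprime(task, roi, P1, P2):
            # placeholder for an experimentally measured d' map of task i at location j
            return np.exp(-((P1 - 0.3 * (task + 1)) ** 2 + (P2 - 0.4 * (roi + 1)) ** 2) / 0.05)

        for task in range(3):               # tasks I-III
            for roi in range(2):            # two spatial locations
                dmap = measured_dprime(task, roi, P1, P2)
                i, j = np.unravel_index(np.argmax(dmap), dmap.shape)
                print(f"task {task + 1}, ROI {roi + 1}: optimal parameters ({p1[i]:.2f}, {p2[j]:.2f})")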

  1. Hyphenation of optimized microfluidic sample preparation with nano liquid chromatography for faster and greener alkaloid analysis

    NARCIS (Netherlands)

    Shen, Y.; Beek, van T.A.; Zuilhof, H.; Chen, B.

    2013-01-01

    A glass liquid–liquid extraction (LLE) microchip with three parallel 3.5 cm long and 100 µm wide interconnecting channels was optimized in terms of more environmentally friendly (greener) solvents and extraction efficiency. In addition, the optimized chip was successfully hyphenated with nano-liquid

  2. The optimal amount and allocation of sampling effort for plant health inspection

    NARCIS (Netherlands)

    Surkov, I.; Oude Lansink, A.G.J.M.; Werf, van der W.

    2009-01-01

    Plant import inspection can prevent the introduction of exotic pests and diseases, thereby averting economic losses. We explore the optimal allocation of a fixed budget, taking into account risk differentials, and the optimal-sized budget to minimise total pest costs. A partial-equilibrium market

  3. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    Energy Technology Data Exchange (ETDEWEB)

    Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)

    2016-01-15

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on preliminary log–log scaling data transforms, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
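
    The workflow can be sketched as follows; scipy's PchipInterpolator (a local, monotonicity-preserving cubic) stands in for the Steffen spline, and the sampled ELF values are made up for illustration:

        import numpy as np
        from scipy.interpolate import PchipInterpolator

        # hypothetical non-uniformly sampled ELF values (energy in eV, dimensionless ELF)
        energy = np.array([0.5, 1.0, 3.0, 10.0, 40.0, 120.0, 500.0, 2000.0])
        elf = np.array([0.02, 0.15, 0.60, 1.80, 0.90, 0.30, 0.08, 0.01])

        # log-log scaling reduces the non-uniformity of the sampled data distribution
        log_e, log_f = np.log10(energy), np.log10(elf)

        # PCHIP is a local, monotonicity-preserving cubic interpolant with continuous first
        # derivatives; it stands in here for the Steffen spline used by the authors
        interp = PchipInterpolator(log_e, log_f)

        query = np.logspace(np.log10(0.5), np.log10(2000.0), 200)
        elf_fit = 10.0 ** interp(np.log10(query))    # back-transform to linear scale
        print(elf_fit[:5])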

  4. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    International Nuclear Information System (INIS)

    Maglevanny, I.I.; Smolar, V.A.

    2016-01-01

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on preliminary log–log scaling data transforms, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.

  5. Image Interpolation Scheme based on SVM and Improved PSO

    Science.gov (United States)

    Jia, X. F.; Zhao, B. T.; Liu, X. X.; Song, H. P.

    2018-01-01

    In order to obtain visually pleasing images, a support vector machine (SVM) based interpolation scheme is proposed, in which an improved particle swarm optimization is applied to optimize the support vector machine parameters. Training samples are constructed from the pixels around the pixel to be interpolated. The support vector machine with optimal parameters is then trained using these training samples. After training, we obtain the interpolation model, which can be employed to estimate the unknown pixel. Experimental results show that the interpolated images achieve improved PSNR compared with traditional interpolation methods, which agrees with the subjective quality.
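
    A compact sketch of the idea, assuming scikit-learn's SVR as the regressor and a plain particle swarm over (C, gamma); the paper's improved PSO variant is not reproduced and the image data are synthetic:

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)

        # synthetic grayscale image standing in for the training image
        img = np.sin(np.linspace(0, 4 * np.pi, 64))[:, None] * np.cos(np.linspace(0, 4 * np.pi, 64))[None, :]

        # training samples: the four axial neighbours predict the centre pixel
        X, y = [], []
        for r in range(1, img.shape[0] - 1, 2):
            for c in range(1, img.shape[1] - 1, 2):
                X.append([img[r - 1, c], img[r + 1, c], img[r, c - 1], img[r, c + 1]])
                y.append(img[r, c])
        X, y = np.array(X), np.array(y)
        half = len(y) // 2
        fitness = lambda C, g: -np.mean((SVR(C=C, gamma=g).fit(X[:half], y[:half]).predict(X[half:]) - y[half:]) ** 2)

        # plain particle swarm over log10(C) and log10(gamma)
        n_particles, n_iter = 12, 20
        lo, hi = np.array([-1.0, -3.0]), np.array([3.0, 1.0])
        pos = rng.uniform(lo, hi, size=(n_particles, 2))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([fitness(10 ** p[0], 10 ** p[1]) for p in pos])
        gbest = pbest[np.argmax(pbest_val)].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([fitness(10 ** p[0], 10 ** p[1]) for p in pos])
            better = vals > pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[np.argmax(pbest_val)].copy()

        print("selected C and gamma:", 10 ** gbest[0], 10 ** gbest[1])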

  6. CSR schemes in agribusiness

    DEFF Research Database (Denmark)

    Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela

    2013-01-01

    Purpose – The rise of CSR followed a demand for CSR standards and guidelines. In a sector already characterized by a large number of standards, the authors seek to ask what CSR schemes apply to agribusiness, and how they can be systematically compared and analysed. Design/methodology/approach – Following a deductive-inductive approach the authors develop a model to compare and analyse CSR schemes based on existing studies and on coding qualitative data on 216 CSR schemes. Findings – The authors confirm that CSR standards and guidelines have entered agribusiness and identify a complex landscape of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit...

  7. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  8. Tradable schemes

    NARCIS (Netherlands)

    J.K. Hoogland (Jiri); C.D.D. Neumann

    2000-01-01

    textabstractIn this article we present a new approach to the numerical valuation of derivative securities. The method is based on our previous work where we formulated the theory of pricing in terms of tradables. The basic idea is to fit a finite difference scheme to exact solutions of the pricing

  9. Transmission characteristics and optimal diagnostic samples to detect an FMDV infection in vaccinated and non-vaccinated sheep

    NARCIS (Netherlands)

    Eble, P.L.; Orsel, K.; Kluitenberg-van Hemert, F.; Dekker, A.

    2015-01-01

    We wanted to quantify transmission of FMDV Asia-1 in sheep and to evaluate which samples would be optimal for detection of an FMDV infection in sheep. For this, we used 6 groups of 4 non-vaccinated and 6 groups of 4 vaccinated sheep. In each group 2 sheep were inoculated and contact exposed to 2

  10. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    International Nuclear Information System (INIS)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra; Kalogeropoulou, Christina; Pratikakis, Ioannis; Costaridou, Lena

    2015-01-01

    Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the
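
    The 128 candidate schemes correspond to the Cartesian product of the component choices listed above; a schematic of how such a search space could be enumerated and screened follows (the evaluate function is only a placeholder for the registration and landmark-distance pipeline):

        from itertools import product

        transforms = ["rigid", "similarity", "affine", "bspline3"]
        costs      = ["SSD", "NCC", "MI", "NMI"]
        optimizers = ["standard_GD", "regular_step_GD", "adaptive_stochastic_GD", "finite_difference_GD"]
        pyramids   = ["recursive", "gaussian_smoothing"]

        schemes = list(product(transforms, costs, optimizers, pyramids))
        print(len(schemes), "candidate registration schemes")   # 4 * 4 * 4 * 2 = 128

        def evaluate(scheme):
            # placeholder for the two-stage evaluation: register an artificially warped follow-up
            # volume with this scheme and return (has_jacobian_singularity,
            # mean_landmark_error_NLP, mean_landmark_error_ILD)
            raise NotImplementedError

        # stage 1 would drop schemes whose deformation field has singularities (non-positive
        # Jacobian determinant); stage 2 would rank the remainder by landmark distance:
        # survivors = [s for s in schemes if not evaluate(s)[0]]
        # ranked = sorted(survivors, key=lambda s: evaluate(s)[1] + evaluate(s)[2])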

  11. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    Energy Technology Data Exchange (ETDEWEB)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra [Department of Medical Physics, School of Medicine,University of Patras, Patras 26504 (Greece); Kalogeropoulou, Christina [Department of Radiology, School of Medicine, University of Patras, Patras 26504 (Greece); Pratikakis, Ioannis [Department of Electrical and Computer Engineering, Democritus University of Thrace, Xanthi 67100 (Greece); Costaridou, Lena, E-mail: costarid@upatras.gr [Department of Medical Physics, School of Medicine, University of Patras, Patras 26504 (Greece)

    2015-08-15

    Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the

  12. Debba China presentation on optimal field sampling for exploration targets and geochemicals

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available Introduction to Remote Sensing; Optimized sampling schemes case studies; Deriving optimal exploration target zones; Optimum sampling scheme for surface geochemical characterization of mine tailings. Optimal Field Sampling for 1. Exploration Target Zones and 2. Geochemical Characterization of Mine Tailings. P. Debba, CSIR, Logistics and Quantitative Methods...

  13. Ionizing radiation as optimization method for aluminum detection from drinking water samples

    International Nuclear Information System (INIS)

    Bazante-Yamguish, Renata; Geraldo, Aurea Beatriz C.; Moura, Eduardo; Manzoli, Jose Eduardo

    2013-01-01

    The presence of organic compounds in water samples is often responsible for metal complexation; depending on the analytic method, the organic fraction may mask the evaluation of the real metal concentration. Pre-treatment of the samples is advised when organic compounds are interfering agents, and sample mineralization may be accomplished by several chemical and/or physical methods. Here, ionizing radiation was used as an advanced oxidation process (AOP) for sample pre-treatment before the analytic determination of total and dissolved aluminum by ICP-OES in drinking water samples from wells and a spring source located in the Billings dam region. Before irradiation, the spring source and well samples showed aluminum levels of 0.020 mg/l and 0.2 mg/l, respectively; after irradiation, both samples showed an 8-fold increase in aluminum concentration. These results are discussed considering other physical and chemical parameters and peculiarities of the sample sources. (author)

  14. Optimal sampling period of the digital control system for the nuclear power plant steam generator water level control

    International Nuclear Information System (INIS)

    Hur, Woo Sung; Seong, Poong Hyun

    1995-01-01

    A great effort has been made to improve nuclear plant control systems by use of digital technologies, and a long-term schedule for the control system upgrade has been prepared with an aim to implementation in next-generation nuclear plants. For a digital control system, it is important to choose the sampling period for analysis and design of the system, because the performance and the stability of a digital control system depend on the value of its sampling period. There is, however, currently no universally used systematic method for determining the sampling period of a digital control system. A traditional way to select the sampling frequency is to use 20 to 30 times the bandwidth of the analog control system that has the same configuration and parameters as the digital one. In this paper, a new method to select the sampling period is suggested which takes into account the performance as well as the stability of the digital control system. Using Irving's steam generator model, the optimal sampling period of an assumed digital control system for steam generator level control is estimated and then verified in the digital control simulation system for the Kori-2 nuclear power plant steam generator level control. Consequently, we conclude that the optimal sampling period of the digital control system for Kori-2 nuclear power plant steam generator level control is 1 second for all power ranges. 7 figs., 3 tabs., 8 refs. (Author)

  15. Optimization of sample preparation variables for wedelolactone from Eclipta alba using Box-Behnken experimental design followed by HPLC identification.

    Science.gov (United States)

    Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S

    2013-07-01

    Coumestan wedelolactone is an important phytocomponent from Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study is to develop and optimize supercritical carbon dioxide assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. The response surface methodology was employed to study the optimization of sample preparation using supercritical carbon dioxide for wedelolactone from E. alba (L.) Hassk. The optimized sample preparation involves the investigation of quantitative effects of sample preparation parameters viz. operating pressure, temperature, modifier concentration and time on yield of wedelolactone using Box-Behnken design. The wedelolactone content was determined using validated HPLC methodology. The experimental data were fitted to second-order polynomial equation using multiple regression analysis and analyzed using the appropriate statistical method. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44% and extraction time, 60 min. Optimum extraction conditions demonstrated wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, which was in good agreement with the predicted values. Temperature and modifier concentration showed significant effect on the wedelolactone yield. The supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
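
    The response-surface step can be sketched as follows: fit the full second-order polynomial to the coded design by multiple regression and locate the optimum within the design region. The design matrix and yields below are placeholders, not the study's data:

        import numpy as np
        from itertools import combinations
        from scipy.optimize import minimize

        # placeholder Box-Behnken-style runs: coded levels for pressure, temperature, modifier %, time
        rng = np.random.default_rng(3)
        X_design = rng.choice([-1.0, 0.0, 1.0], size=(27, 4))
        yields = 12 - ((X_design - [0.4, 0.6, 0.5, 1.0]) ** 2).sum(axis=1) + rng.normal(0, 0.1, 27)

        def quad_features(X):
            # full second-order model: intercept, linear, squared and two-factor interaction terms
            cols = [np.ones(len(X))] + [X[:, i] for i in range(4)] + [X[:, i] ** 2 for i in range(4)]
            cols += [X[:, i] * X[:, j] for i, j in combinations(range(4), 2)]
            return np.column_stack(cols)

        beta, *_ = np.linalg.lstsq(quad_features(X_design), yields, rcond=None)

        # maximize the fitted surface within the coded design region [-1, +1]^4
        res = minimize(lambda x: -(quad_features(x[None, :]) @ beta)[0],
                       x0=np.zeros(4), bounds=[(-1, 1)] * 4)
        print("optimum in coded units:", np.round(res.x, 2))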

  16. Sampling optimization trade-offs for long-term monitoring of gamma dose rates

    NARCIS (Netherlands)

    Melles, S.J.; Heuvelink, G.B.M.; Twenhöfel, C.J.W.; Stöhlker, U.

    2008-01-01

    This paper applies a recently developed optimization method to examine the design of networks that monitor radiation under routine conditions. Annual gamma dose rates were modelled by combining regression with interpolation of the regression residuals using spatially exhaustive predictors and an

  17. Counting, enumerating and sampling of execution plans in a cost-based query optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    1999-01-01

    textabstractTesting an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on

  18. Counting, Enumerating and Sampling of Execution Plans in a Cost-Based Query Optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    2000-01-01

    textabstractTesting an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on the query

  19. Evaluation of dynamically dimensioned search algorithm for optimizing SWAT by altering sampling distributions and searching range

    Science.gov (United States)

    The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...

  20. Relationships between depressive symptoms and perceived social support, self-esteem, & optimism in a sample of rural adolescents.

    Science.gov (United States)

    Weber, Scott; Puskar, Kathryn Rose; Ren, Dianxu

    2010-09-01

    Stress, developmental changes and social adjustment problems can be significant in rural teens. Screening for psychosocial problems by teachers and other school personnel is infrequent but can be a useful health promotion strategy. We used a cross-sectional survey descriptive design to examine the inter-relationships between depressive symptoms and perceived social support, self-esteem, and optimism in a sample of rural school-based adolescents. Depressive symptoms were negatively correlated with peer social support, family social support, self-esteem, and optimism. Findings underscore the importance for teachers and other school staff to provide health education. Results can be used as the basis for education to improve optimism, self-esteem, social supports and, thus, depression symptoms of teens.

  1. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Kirsi Harju

    2015-11-01

    Full Text Available Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods for the mussel samples were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within a z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD).

  2. Optimizing detection of noble gas emission at a former UNE site: sample strategy, collection, and analysis

    Science.gov (United States)

    Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.

    2013-12-01

    Underground nuclear tests may be first detected by seismic or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used for an OSI scenario, on-site sampling conditions, required sampling volumes and establishment of background concentrations of noble gases require development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion site located in welded volcanic tuff. A mixture of SF-6, Xe127 and Ar37 was metered into 4400 m3 of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. Effectiveness of various sampling approaches and the results of tracer gas measurements will be presented.

  3. Optimal sample preparation for nanoparticle metrology (statistical size measurements) using atomic force microscopy

    International Nuclear Information System (INIS)

    Hoo, Christopher M.; Doan, Trang; Starostin, Natasha; West, Paul E.; Mecartney, Martha L.

    2010-01-01

    Optimal deposition procedures are determined for nanoparticle size characterization by atomic force microscopy (AFM). Accurate nanoparticle size distribution analysis with AFM requires non-agglomerated nanoparticles on a flat substrate. The deposition of polystyrene (100 nm), silica (300 and 100 nm), gold (100 nm), and CdSe quantum dot (2-5 nm) nanoparticles by spin coating was optimized for size distribution measurements by AFM. Factors influencing deposition include spin speed, concentration, solvent, and pH. A comparison using spin coating, static evaporation, and a new fluid cell deposition method for depositing nanoparticles is also made. The fluid cell allows for a more uniform and higher density deposition of nanoparticles on a substrate at laminar flow rates, making nanoparticle size analysis via AFM more efficient and also offers the potential for nanoparticle analysis in liquid environments.

  4. Optimizing human semen cryopreservation by reducing test vial volume and repetitive test vial sampling

    DEFF Research Database (Denmark)

    Jensen, Christian F S; Ohl, Dana A; Parker, Walter R

    2015-01-01

    OBJECTIVE: To investigate optimal test vial (TV) volume, utility and reliability of TVs, intermediate temperature exposure (-88°C to -93°C) before cryostorage, cryostorage in nitrogen vapor (VN2) and liquid nitrogen (LN2), and long-term stability of VN2 cryostorage of human semen. DESIGN......: Prospective clinical laboratory study. SETTING: University assisted reproductive technology (ART) laboratory. PATIENT(S): A total of 594 patients undergoing semen analysis and cryopreservation. INTERVENTION(S): Semen analysis, cryopreservation with different intermediate steps and in different volumes (50......-1,000 μL), and long-term storage in LN2 or VN2. MAIN OUTCOME MEASURE(S): Optimal TV volume, prediction of cryosurvival (CS) in ART procedure vials (ARTVs) with pre-freeze semen parameters and TV CS, post-thaw motility after two- or three-step semen cryopreservation and cryostorage in VN2 and LN2. RESULT...

  5. Optimization of Sample Preparation processes of Bone Material for Raman Spectroscopy.

    Science.gov (United States)

    Chikhani, Madelen; Wuhrer, Richard; Green, Hayley

    2018-03-30

    Raman spectroscopy has recently been investigated for use in the calculation of postmortem interval from skeletal material. The fluorescence generated by samples, which affects the interpretation of Raman data, is a major limitation. This study compares the effectiveness of two sample preparation techniques, chemical bleaching and scraping, in the reduction of fluorescence from bone samples during testing with Raman spectroscopy. Visual assessment of Raman spectra obtained at 1064 nm excitation following the preparation protocols indicates an overall reduction in fluorescence. Results demonstrate that scraping is more effective at resolving fluorescence than chemical bleaching. The scraping of skeletonized remains prior to Raman analysis is a less destructive method and allows for the preservation of a bone sample in a state closest to its original form, which is beneficial in forensic investigations. It is recommended that bone scraping supersedes chemical bleaching as the preferred method for sample preparation prior to Raman spectroscopy. © 2018 American Academy of Forensic Sciences.

  6. Optimism and self-esteem are related to sleep. Results from a large community-based sample.

    Science.gov (United States)

    Lemola, Sakari; Räikkönen, Katri; Gomez, Veronica; Allemand, Mathias

    2013-12-01

    There is evidence that positive personality characteristics, such as optimism and self-esteem, are important for health. Less is known about possible determinants of positive personality characteristics. The aim was to test the relationship of optimism and self-esteem with insomnia symptoms and sleep duration. Sleep parameters, optimism, and self-esteem were assessed by self-report in a community-based sample of 1,805 adults aged between 30 and 84 years in the USA. Moderation of the relation between sleep and positive characteristics by gender and age, as well as potential confounding of the association by depressive disorder, was tested. Individuals with insomnia symptoms scored lower on optimism and self-esteem largely independent of age and sex, controlling for symptoms of depression and sleep duration. Short sleep duration was related to lower optimism and self-esteem when compared to individuals sleeping 7-8 h, controlling for depressive symptoms. Long sleep duration (>9 h) was also related to low optimism and self-esteem independent of age and sex. Good and sufficient sleep is associated with positive personality characteristics. This relationship is independent of the association between poor sleep and depression.

  7. Cost-constrained optimal sampling for system identification in pharmacokinetics applications with population priors and nuisance parameters.

    Science.gov (United States)

    Sorzano, Carlos Oscars S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar

    2015-06-01

    Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is performed many times by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed amount of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher's information matrix associated to the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated to system parameters (a minimax criterion). The minimization is performed employing a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient and that it can accommodate any dosing regimen as well as it allows flexible therapeutic strategies. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
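
    A simplified sketch of the approach, assuming a one-compartment model with first-order absorption, a log-normal population prior, numerical sensitivities for the Fisher information, and scipy's differential evolution standing in for the genetic algorithm; all numbers are illustrative:

        import numpy as np
        from scipy.optimize import differential_evolution

        TYPICAL = np.array([1.0, 0.2, 20.0])    # assumed typical ka (1/h), ke (1/h), V (L)
        SIGMA = 0.1                             # assumed measurement SD (mg/L)

        def conc(t, ka, ke, V, dose=100.0):
            # one-compartment model with first-order absorption (illustrative PK model)
            return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

        def sensitivities(times, theta, eps=1e-4):
            # numerical sensitivities of the concentration curve with respect to each parameter
            J = np.empty((len(times), len(theta)))
            for k in range(len(theta)):
                up, dn = theta.copy(), theta.copy()
                up[k] *= 1 + eps
                dn[k] *= 1 - eps
                J[:, k] = (conc(times, *up) - conc(times, *dn)) / (2 * eps * theta[k])
            return J

        rng = np.random.default_rng(0)
        priors = np.exp(rng.normal(np.log(TYPICAL), 0.2, size=(50, 3)))   # Monte Carlo population draws

        def objective(times):
            # maximum relative parameter variance from the average Fisher information (minimax criterion)
            times = np.sort(np.asarray(times))
            fim = sum(sensitivities(times, th).T @ sensitivities(times, th) for th in priors)
            fim /= len(priors) * SIGMA ** 2
            variances = np.diag(np.linalg.inv(fim + 1e-9 * np.eye(3)))
            return np.max(variances / TYPICAL ** 2)

        # fixed sampling cost: four plasma samples within 24 h; differential evolution replaces the GA
        result = differential_evolution(objective, bounds=[(0.1, 24.0)] * 4,
                                        maxiter=30, seed=1, polish=False)
        print("optimized sampling times (h):", np.round(np.sort(result.x), 2))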

  8. A Counterexample on Sample-Path Optimality in Stable Markov Decision Chains with the Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2014-01-01

    Roč. 163, č. 2 (2014), s. 674-684 ISSN 0022-3239 Grant - others:PSF Organization(US) 012/300/02; CONACYT (México) and ASCR (Czech Republic)(MX) 171396 Institutional support: RVO:67985556 Keywords : Strong sample-path optimality * Lyapunov function condition * Stationary policy * Expected average reward criterion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.509, year: 2014 http://library.utia.cas.cz/separaty/2014/E/sladky-0432661.pdf

  9. Optimization Extracting Technology of Cynomorium songaricum Rupr. Saponins by Ultrasonic and Determination of Saponins Content in Samples with Different Source

    OpenAIRE

    Xiaoli Wang; Qingwei Wei; Xinqiang Zhu; Chunmei Wang; Yonggang Wang; Peng Lin; Lin Yang

    2015-01-01

    The extraction process was optimized by single-factor and orthogonal experiments (L9(3^4)). Moreover, the content determination was studied methodologically. The optimum ultrasonic extraction conditions were: ethanol concentration of 75%, ultrasonic power of 420 W, solid-liquid ratio of 1:15, extraction duration of 45 min, extraction temperature of 90°C, and extraction performed twice. The saponin content in Guazhou samples was significantly higher than in samples from Xinjiang and Inner Mongolia. Meanwhile, G...

  10. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. In contrast to the acidic extraction used alone in many existing studies, the results indicated that antibiotics with low pKa values were extracted more efficiently under acidic conditions, whereas antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious for polar compounds than for non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  11. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    Directory of Open Access Journals (Sweden)

    D. Ramyachitra

    2015-09-01

    Full Text Available Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies within the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples. Thus the difficulty is that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN), Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  12. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    Science.gov (United States)

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies within the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples. Thus the difficulty is that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN), Interval Valued Classification (IVC) and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  13. Optimized Clinical Use of RNALater and FFPE Samples for Quantitative Proteomics

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Kastaniegaard, Kenneth; Padurariu, Simona

    2015-01-01

    Introduction and Objectives: The availability of patient samples is essential for clinical proteomic research. Biobanks worldwide store mainly samples stabilized in RNAlater as well as formalin-fixed and paraffin-embedded (FFPE) biopsies. Biobank material is a potential source for clinical...... we compare to FFPE and frozen samples being the control. Methods: From the sigmoideum of two healthy participants, twenty-four biopsies were extracted using endoscopy. The biopsies were stabilized either by being directly frozen, in RNAlater, by FFPE, or incubated for 30 min at room temperature prior to FFPE...... information. Conclusion: We have demonstrated that quantitative proteome analysis and pathway mapping of samples stabilized in RNAlater as well as by FFPE is feasible with minimal impact on the quality of protein quantification and post-translational modifications.

  14. COARSE: Convex Optimization based autonomous control for Asteroid Rendezvous and Sample Exploration, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Sample return missions, by nature, require high levels of spacecraft autonomy. Developments in hardware avionics have led to more capable real-time onboard computing...

  15. Secure RAID Schemes for Distributed Storage

    OpenAIRE

    Huang, Wentao; Bruck, Jehoshua

    2016-01-01

    We propose secure RAID, i.e., low-complexity schemes to store information in a distributed manner that is resilient to node failures and resistant to node eavesdropping. We generalize the concept of systematic encoding to secure RAID and show that systematic schemes have significant advantages in the efficiencies of encoding, decoding and random access. For the practical high rate regime, we construct three XOR-based systematic secure RAID schemes with optimal or almost optimal encoding and ...

  16. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
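
    A greedy stand-in for the iterative selection described above: at each step the candidate cell most dissimilar, in standardized environmental space, from the sites already chosen is added. The MaxEnt modelling itself is not reproduced and the candidate data are synthetic:

        import numpy as np

        rng = np.random.default_rng(42)
        # candidate cells with four standardized environmental factors (temperature, precipitation,
        # elevation, coded vegetation type); synthetic values stand in for the real rasters
        candidates = rng.normal(size=(2000, 4))

        def greedy_dissimilar_sites(candidates, n_sites):
            # iteratively add the cell farthest, in environmental space, from those already chosen
            chosen = [int(np.argmax(np.linalg.norm(candidates - candidates.mean(axis=0), axis=1)))]
            while len(chosen) < n_sites:
                dist_to_chosen = np.min(
                    np.linalg.norm(candidates[:, None, :] - candidates[chosen][None, :, :], axis=2),
                    axis=1)
                chosen.append(int(np.argmax(dist_to_chosen)))
            return chosen

        print("selected candidate indices:", greedy_dissimilar_sites(candidates, n_sites=8))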

  17. The optimally sampled galaxy-wide stellar initial mass function. Observational tests and the publicly available GalIMF code

    Science.gov (United States)

    Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel

    2017-11-01

    Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of the optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. Particularly, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF also results in a relation between the SFR of a galaxy and the mass of its most massive star such that, if there were no binaries, galaxies with SFR first time, we show optimally sampled galaxy-wide IMFs (OSGIMF) that mimic the IGIMF with an additional serrated feature. Finally, a Python module, GalIMF, is provided allowing the calculation of the IGIMF and OSGIMF dependent on the galaxy-wide SFR and metallicity. A copy of the python code model is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126
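
    A much-simplified sketch of deterministic (optimal-sampling-style) discretization for a single-slope IMF: each star takes the mean mass of a segment that carries exactly one star of the number integral. The full IGIMF/GalIMF formulation additionally ties the normalization and the most massive star to the embedded-cluster mass, which is not done here:

        import numpy as np

        def discretize_imf(n_stars, m_low=0.08, m_max=120.0, alpha=2.35):
            # deterministically discretize a single-slope IMF xi(m) ~ m**(-alpha) into n_stars masses:
            # each star occupies an equal share of the number integral and takes its segment's mean mass
            a1, a2 = 1.0 - alpha, 2.0 - alpha
            frac = np.linspace(0.0, 1.0, n_stars + 1)
            edges = (m_low ** a1 + frac * (m_max ** a1 - m_low ** a1)) ** (1.0 / a1)
            number = (edges[1:] ** a1 - edges[:-1] ** a1) / a1     # unnormalized integral of xi
            mass = (edges[1:] ** a2 - edges[:-1] ** a2) / a2       # unnormalized integral of m * xi
            return mass / number

        masses = discretize_imf(n_stars=100)
        print("most massive star: %.2f Msun, total stellar mass: %.1f Msun" % (masses.max(), masses.sum()))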

  18. MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling

    Directory of Open Access Journals (Sweden)

    Kitchen James L

    2012-11-01

    Full Text Available Background Next generation sequencing technologies often require numerous primer designs that require good target coverage that can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers that could be designed around SNPs, thus find the fewest necessary primers and the lowest cost whilst maintaining an acceptable coverage and provide a cost effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. Results After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with single sequence primer design programs Primer3 and Primer-BLAST and achieved a lower primer cost per amplicon base covered of 0.21 and 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. Conclusions MCMC-ODPR is a useful tool for designing primers at various melting temperatures at good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.
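
    The optimization flavour of the method can be illustrated with a toy Metropolis-Hastings search over primer subsets for a set-cover-style cost (primer nucleotides plus a penalty for uncovered targets); the data and scoring are made up and do not reflect MCMC-ODPR's actual model:

        import numpy as np

        rng = np.random.default_rng(7)
        # toy problem: 40 candidate primers covering 120 target sites; covers[i, j] = primer i amplifies site j
        covers = rng.random((40, 120)) < 0.08
        primer_len = rng.integers(18, 26, size=40)       # cost proxy: primer nucleotides

        def score(selected):
            # lower is better: primer nucleotides spent plus a penalty for each uncovered target site
            covered = covers[selected].any(axis=0).sum() if selected.any() else 0
            return primer_len[selected].sum() + 50 * (covers.shape[1] - covered)

        state = rng.random(40) < 0.5
        best, best_score = state.copy(), score(state)
        temperature = 20.0
        for _ in range(20000):
            proposal = state.copy()
            proposal[rng.integers(40)] ^= True           # flip one primer in or out of the design
            delta = score(proposal) - score(state)
            if delta <= 0 or rng.random() < np.exp(-delta / temperature):   # Metropolis acceptance
                state = proposal
                if score(state) < best_score:
                    best, best_score = state.copy(), score(state)

        print("primers kept:", int(best.sum()), "score:", best_score)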

  19. MCMC-ODPR: primer design optimization using Markov Chain Monte Carlo sampling.

    Science.gov (United States)

    Kitchen, James L; Moore, Jonathan D; Palmer, Sarah A; Allaby, Robin G

    2012-11-05

    Next generation sequencing technologies often require numerous primer designs that require good target coverage that can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers that could be designed around SNPs, thus find the fewest necessary primers and the lowest cost whilst maintaining an acceptable coverage and provide a cost effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with single sequence primer design programs Primer3 and Primer-BLAST and achieved a lower primer cost per amplicon base covered of 0.21 and 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. MCMC-ODPR is a useful tool for designing primers at various melting temperatures at good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.

  20. Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock

    DEFF Research Database (Denmark)

    Kirkeby, Carsten; Stockmarr, Anders; Bødker, Rene

    2013-01-01

    BACKGROUND: Estimating the abundance of Culicoides using light traps is influenced by a large variation in abundance in time and place. This study investigates the optimal trapping strategy to estimate the abundance or presence/absence of Culicoides on a field with grazing animals. We used 45 light...... absence of vectors on the field. The variation in the estimated abundance decreased steeply when using up to six traps, and was less pronounced when using more traps, although no clear cutoff was found. CONCLUSIONS: Despite spatial clustering in vector abundance, we found no effect of increasing...... monitoring programmes on fields with grazing animals....

  1. Optimized sample preparation for two-dimensional gel electrophoresis of soluble proteins from chicken bursa of Fabricius

    Directory of Open Access Journals (Sweden)

    Zheng Xiaojuan

    2009-10-01

    Full Text Available Background: Two-dimensional gel electrophoresis (2-DE) is a powerful method to study protein expression and function in living organisms and diseases. This technique, however, has not been applied to the avian bursa of Fabricius (BF), a central immune organ. Here, optimized 2-DE sample preparation methodologies were constructed for chicken BF tissue. Using the optimized protocol, we performed further 2-DE analysis on a soluble protein extract from the BF of chickens infected with virulent avibirnavirus. To demonstrate the quality of the extracted proteins, several selected differentially expressed protein spots were cut from 2-DE gels and identified by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS). Results: An extraction buffer containing 7 M urea, 2 M thiourea, 2% (w/v) 3-[(3-cholamidopropyl)-dimethylammonio]-1-propanesulfonate (CHAPS), 50 mM dithiothreitol (DTT), 0.2% Bio-Lyte 3/10, 1 mM phenylmethylsulfonyl fluoride (PMSF), 20 U/ml Deoxyribonuclease I (DNase I), and 0.25 mg/ml Ribonuclease A (RNase A), combined with sonication and vortexing, yielded the best 2-DE data. Relative to non-frozen immobilized pH gradient (IPG) strips, frozen IPG strips did not result in significant changes in the 2-DE patterns after isoelectric focusing (IEF). When the optimized protocol was used to analyze the spleen and thymus, as well as avibirnavirus-infected bursa, high quality 2-DE protein expression profiles were obtained. 2-DE maps of the BF of chickens infected with virulent avibirnavirus were visibly different and many differentially expressed proteins were found. Conclusion: These results showed that method C, in concert with extraction buffer IV, was the most favorable for preparing samples for IEF and subsequent protein separation and yielded the best quality 2-DE patterns. The optimized protocol is a useful sample preparation method for comparative proteomics analysis of chicken BF tissues.

  2. Optimizing sampling strategy for radiocarbon dating of Holocene fluvial systems in a vertically aggrading setting

    International Nuclear Information System (INIS)

    Toernqvist, T.E.; Dijk, G.J. Van

    1993-01-01

    The authors address the question of how to determine the period of activity (sedimentation) of fossil (Holocene) fluvial systems in vertically aggrading environments. The available database consists of almost 100 14C ages from the Rhine-Meuse delta. Radiocarbon samples from the tops of lithostratigraphically correlative organic beds underneath overbank deposits (sample type 1) yield consistent ages, indicating a synchronous onset of overbank deposition over distances of at least up to 20 km along channel belts. Similarly, 14C ages from the base of organic residual channel fills (sample type 3) generally indicate a clear termination of within-channel sedimentation. In contrast, 14C ages from the base of organic beds overlying overbank deposits (sample type 2), commonly assumed to represent the end of fluvial sedimentation, show a large scatter reaching up to 1000 14C years. It is concluded that a combination of sample types 1 and 3 generally yields a satisfactory delimitation of the period of activity of a fossil fluvial system. 30 refs., 11 figs., 4 tabs

  3. Capital budgeting under relational contracting: optimal ranking and duration criteria for schemes of concession, project-financing and public-private partnership

    OpenAIRE

    Biondi, Yuri

    2009-01-01

    International audience; Project-financing and public-private partnership schemes are joint projects of investment that are generally submitted to investment valuation criteria based on compound discounting. However, the theoretical basis of these criteria is at issue nowadays. According to recent studies on relational contracting economics and behavioral finance, joint projects of investment can be considered as special relational environments where the project's returns improve on alternativ...

  4. Optimal sampling designs for estimation of Plasmodium falciparum clearance rates in patients treated with artemisinin derivatives

    Science.gov (United States)

    2013-01-01

    Background: The emergence of Plasmodium falciparum resistance to artemisinins in Southeast Asia threatens the control of malaria worldwide. The pharmacodynamic hallmark of artemisinin derivatives is rapid parasite clearance (a short parasite half-life); therefore, the in vivo phenotype of slow clearance defines the reduced susceptibility to the drug. Measurement of parasite counts every six hours during the first three days after treatment has been recommended to measure the parasite clearance half-life, but it remains unclear whether simpler sampling intervals and frequencies might also be sufficient to reliably estimate this parameter. Methods: A total of 2,746 parasite density-time profiles were selected from 13 clinical trials in Thailand, Cambodia, Mali, Vietnam, and Kenya. In these studies, parasite densities were measured every six hours until negative after treatment with an artemisinin derivative (alone or in combination with a partner drug). The WWARN Parasite Clearance Estimator (PCE) tool was used to estimate “reference” half-lives from these six-hourly measurements. The effect of four alternative sampling schedules on half-life estimation was investigated, and compared to the reference half-life (time zero, 6, 12, 24 (A1); zero, 6, 18, 24 (A2); zero, 12, 18, 24 (A3) or zero, 12, 24 (A4) hours and then every 12 hours). Statistical bootstrap methods were used to estimate the sampling distribution of half-lives for parasite populations with different geometric mean half-lives. A simulation study was performed to investigate a suite of 16 potential alternative schedules and half-life estimates generated by each of the schedules were compared to the “true” half-life. The candidate schedules in the simulation study included (among others) six-hourly sampling, schedule A1, schedule A4, and a convenience sampling schedule at six, seven, 24, 25, 48 and 49 hours. Results: The median (range) parasite half-life for all clinical studies combined was 3.1 (0
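
    A rough way to see how alternative schedules compare is to simulate log-linear parasite clearance and re-estimate the half-life under each sampling scheme. The Python sketch below is illustrative only: it is not the WWARN PCE tool, the log-linear decay with Gaussian noise on log10 counts is an assumption, and only the 3.1 h median half-life is taken from the record.

```python
# Illustrative comparison of sampling schedules for half-life estimation (not the WWARN
# PCE tool).  Log-linear clearance with Gaussian noise on log10 counts is an assumption;
# only the 3.1 h median half-life is taken from the record above.
import numpy as np

rng = np.random.default_rng(0)

def noisy_log10_counts(times, half_life_h, p0=1e5, noise_sd=0.3):
    """Simulated log10 parasite densities at the requested times."""
    times = np.asarray(times, dtype=float)
    rate = np.log(2) / half_life_h
    return np.log10(p0) - rate * times / np.log(10) + rng.normal(0, noise_sd, size=times.shape)

def estimate_half_life(times, log10_counts):
    """Slope of log10 counts vs time -> clearance rate -> half-life in hours."""
    slope, _ = np.polyfit(times, log10_counts, 1)
    return np.log(2) / (-slope * np.log(10))

schedules = {
    "6-hourly (reference)":          np.arange(0, 49, 6),
    "A1: 0,6,12,24 then 12-hourly":  np.array([0, 6, 12, 24, 36, 48]),
    "A4: 0,12,24 then 12-hourly":    np.array([0, 12, 24, 36, 48]),
}

true_half_life = 3.1  # hours, median reported in the record
for name, t in schedules.items():
    estimates = [estimate_half_life(t, noisy_log10_counts(t, true_half_life)) for _ in range(500)]
    print(f"{name:32s} mean = {np.mean(estimates):.2f} h, sd = {np.std(estimates):.2f} h")
```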

  5. Sterile Reverse Osmosis Water Combined with Friction Are Optimal for Channel and Lever Cavity Sample Collection of Flexible Duodenoscopes

    Directory of Open Access Journals (Sweden)

    Michelle J. Alfa

    2017-11-01

    Full Text Available Introduction: A simulated-use buildup biofilm (BBF) model was used to assess various extraction fluids and friction methods to determine the optimal sample collection method for polytetrafluorethylene channels. In addition, simulated-use testing was performed for the channel and lever cavity of duodenoscopes. Materials and methods: BBF was formed in polytetrafluorethylene channels using Enterococcus faecalis, Escherichia coli, and Pseudomonas aeruginosa. Sterile reverse osmosis (RO) water, and phosphate-buffered saline with and without Tween80, as well as two neutralizing broths (Letheen and Dey–Engley), were each assessed with and without friction. Neutralizer was added immediately after sample collection and samples were concentrated using centrifugation. Simulated-use testing was done using TJF-Q180V and JF-140F Olympus duodenoscopes. Results: Despite variability in the bacterial CFU in the BBF model, none of the extraction fluids tested were significantly better than RO. Borescope examination showed far less residual material when friction was part of the extraction protocol. The RO flush-brush-flush (FBF) extraction provided significantly better recovery of E. coli (p = 0.02) from duodenoscope lever cavities compared to the CDC flush method. Discussion and conclusion: We recommend RO with friction for FBF extraction of the channel and lever cavity of duodenoscopes. Neutralizer and sample concentration optimize recovery of viable bacteria on culture.

  6. Optimizing sampling design to deal with mist-net avoidance in Amazonian birds and bats.

    Directory of Open Access Journals (Sweden)

    João Tiago Marques

    Full Text Available Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day it was no longer advantageous to move the nets frequently. In bird surveys that could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey species present then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas.

  7. Scheme for implementing N-qubit controlled phase gate of photons assisted by quantum-dot-microcavity coupled system: optimal probability of success

    International Nuclear Information System (INIS)

    Cui, Wen-Xue; Hu, Shi; Wang, Hong-Fu; Zhu, Ai-Dong; Zhang, Shou

    2015-01-01

    The direct implementation of multiqubit controlled phase gate of photons is appealing and important for reducing the complexity of the physical realization of linear-optics-based practical quantum computer and quantum algorithms. In this letter we propose a nondestructive scheme for implementing an N-qubit controlled phase gate of photons with a high success probability. The gate can be directly implemented with the self-designed quantum encoder circuits, which are probabilistic optical quantum entangler devices and can be achieved using linear optical elements, single-photon superposition state, and quantum dot coupled to optical microcavity. The calculated results indicate that both the success probabilities of the quantum encoder circuit and the N-qubit controlled phase gate in our scheme are higher than those in the previous schemes. We also consider the effects of the side leakage and cavity loss on the success probability and the fidelity of the quantum encoder circuit for a realistic quantum-dot-microcavity coupled system. (letter)

  8. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2010-01-01

    The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along such direction; this allows for a significant reduction of the variance of the failure probability estimator, with respect, for example, to standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve the computational efficiency by reducing the variance of the failure probability estimator, no evidence has been given yet that accurate and precise failure probability estimates can be obtained with a number of samples reduced to below a few hundreds, which may be required in case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated to the
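
    The core of the Line Sampling estimator can be illustrated on a toy problem in standard normal space. The sketch below assumes the important direction is already given and uses an analytically tractable linear limit state in place of the long-running T-H code; in the application above, both the direction and the performance function would come from the thermal-hydraulic model.

```python
# Minimal Line Sampling sketch in standard normal space (illustrative; not the paper's
# thermal-hydraulic application).  A toy linear limit state replaces the long-running
# T-H code, and the "important direction" alpha is simply assumed rather than estimated.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(1)
dim = 5

def performance(x):
    """Toy limit state: failure when performance(x) <= 0, i.e. when sum(x) exceeds beta*sqrt(dim)."""
    beta = 3.0
    return beta * np.sqrt(dim) - np.sum(x)

# assumed important direction (unit vector, deliberately slightly off the true one)
alpha = np.ones(dim) + 0.1 * rng.standard_normal(dim)
alpha /= np.linalg.norm(alpha)

def line_sampling(n_lines=50):
    contribs = []
    for _ in range(n_lines):
        z = rng.standard_normal(dim)
        z_perp = z - np.dot(z, alpha) * alpha              # part of the sample orthogonal to alpha
        c_star = brentq(lambda c: performance(z_perp + c * alpha), -10.0, 10.0)
        contribs.append(norm.sf(c_star))                   # P(N(0,1) > distance to failure boundary)
    contribs = np.array(contribs)
    return contribs.mean(), contribs.std(ddof=1) / np.sqrt(n_lines)

pf, se = line_sampling()
print(f"failure probability ~ {pf:.2e} +/- {se:.1e} (exact for this toy case: {norm.sf(3.0):.2e})")
```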

  9. Cadmium and lead determination by ICPMS: Method optimization and application in carabao milk samples

    Directory of Open Access Journals (Sweden)

    Riza A. Magbitang

    2012-06-01

    Full Text Available A method utilizing inductively coupled plasma mass spectrometry (ICPMS) as the element-selective detector with microwave-assisted nitric acid digestion as the sample pre-treatment technique was developed for the simultaneous determination of cadmium (Cd) and lead (Pb) in milk samples. The estimated detection limits were 0.09 μg kg-1 and 0.33 μg kg-1 for Cd and Pb, respectively. The method was linear in the concentration range 0.01 to 500 μg kg-1 with correlation coefficients of 0.999 for both analytes. The method was validated using certified reference material BCR 150 and the determined values for Cd and Pb were 18.24 ± 0.18 μg kg-1 and 807.57 ± 7.07 μg kg-1, respectively. Further validation using another certified reference material, NIST 1643e, resulted in determined concentrations of 6.48 ± 0.10 μg L-1 for Cd and 21.96 ± 0.87 μg L-1 for Pb. These determined values agree well with the certified values in the reference materials. The method was applied to processed and raw carabao milk samples collected in Nueva Ecija, Philippines. The Cd levels determined in the samples were in the range 0.11 ± 0.07 to 5.17 ± 0.13 μg kg-1 for the processed milk samples, and 0.11 ± 0.07 to 0.45 ± 0.09 μg kg-1 for the raw milk samples. The concentrations of Pb were in the range 0.49 ± 0.21 to 5.82 ± 0.17 μg kg-1 for the processed milk samples, and 0.72 ± 0.18 to 6.79 ± 0.20 μg kg-1 for the raw milk samples.

  10. Method optimization for non-equilibrium solid phase microextraction sampling of HAPs for GC/MS analysis

    Science.gov (United States)

    Zawadowicz, M. A.; Del Negro, L. A.

    2010-12-01

    Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv-level, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that often is prohibitive to primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of ambient gaseous matrix, which is a cost-effective technique of selective VOC extraction, accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.

  11. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    International Nuclear Information System (INIS)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles

    2014-01-01

    Graphical abstract: -- Highlights: •We optimized sample preparation for MALDI TOF poly(styrene-co-pentafluorostyrene) copolymers. •Influence of matrix choice was investigated. •Influence of matrix/analyte ratio was examined. •Influence of analyte/salt ratio (for Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to the change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  12. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    Energy Technology Data Exchange (ETDEWEB)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles, E-mail: cwilkins@uark.edu

    2014-01-15

    Graphical abstract: -- Highlights: •We optimized sample preparation for MALDI TOF poly(styrene-co-pentafluorostyrene) copolymers. •Influence of matrix choice was investigated. •Influence of matrix/analyte ratio was examined. •Influence of analyte/salt ratio (for Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to the change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  13. Robust, Sensitive, and Automated Phosphopeptide Enrichment Optimized for Low Sample Amounts Applied to Primary Hippocampal Neurons

    NARCIS (Netherlands)

    Post, Harm; Penning, Renske; Fitzpatrick, Martin; Garrigues, L.B.; Wu, W.; Mac Gillavry, H.D.; Hoogenraad, C.C.; Heck, A.J.R.; Altelaar, A.F.M.

    2017-01-01

    Because of the low stoichiometry of protein phosphorylation, targeted enrichment prior to LC–MS/MS analysis is still essential. The trend in phosphoproteome analysis is shifting toward an increasing number of biological replicates per experiment, ideally starting from very low sample amounts,

  14. Optimal sampling strategies to assess inulin clearance in children by the inulin single-injection method

    NARCIS (Netherlands)

    van Rossum, Lyonne K.; Mathot, Ron A. A.; Cransberg, Karlien; Vulto, Arnold G.

    2003-01-01

    Glomerular filtration rate in patients can be determined by estimating the plasma clearance of inulin with the single-injection method. In this method, a single bolus injection of inulin is administered and several blood samples are collected. For practical and convenient application of this method

  15. Optimization of deconvolution software used in the study of spectra of soil samples from Madagascar

    International Nuclear Information System (INIS)

    ANDRIAMADY NARIMANANA, S.F.

    2005-01-01

    The aim of this work is to perform the deconvolution of gamma spectra using the peak deconvolution program. Synthetic spectra, reference materials and ten soil samples with various U-238 activities from three regions of Madagascar were used. This work concerns: soil sample spectra with low activities of about (47 ± 2) Bq kg-1 from Ankatso, soil sample spectra with average activities of about (125 ± 2) Bq kg-1 from Antsirabe, and soil sample spectra with high activities of about (21100 ± 120) Bq kg-1 from Vinaninkarena. Singlet and multiplet peaks with various intensities were found in each soil spectrum. The Interactive Peak Fit (IPF) program in Genie-PC from Canberra Industries allows deconvolution of many multiplet regions: a quartet within 235-242 keV, Pb-214 and Pb-212 within 294-301 keV, Th-232 daughters within 582-584 keV, Ac-228 within 904-911 keV and within 964-970 keV, and Bi-214 within 1401-1408 keV. These peaks were used to quantify the radionuclides considered. However, IPF cannot resolve the Ra-226 peak at 186.1 keV. [fr]

  16. Optimization of fecal cytology in the dog: comparison of three sampling methods.

    Science.gov (United States)

    Frezoulis, Petros S; Angelidou, Elisavet; Diakou, Anastasia; Rallis, Timoleon S; Mylonakis, Mathios E

    2017-09-01

    Dry-mount fecal cytology (FC) is a component of the diagnostic evaluation of gastrointestinal diseases. There is limited information on the possible effect of the sampling method on the cytologic findings of healthy dogs or dogs admitted with diarrhea. We aimed to: (1) establish sampling method-specific expected values of selected cytologic parameters (isolated or clustered epithelial cells, neutrophils, lymphocytes, macrophages, spore-forming rods) in clinically healthy dogs; (2) investigate if the detection of cytologic abnormalities differs among methods in dogs admitted with diarrhea; and (3) investigate if there is any association between FC abnormalities and the anatomic origin (small- or large-bowel diarrhea) or the chronicity of diarrhea. Sampling with digital examination (DE), rectal scraping (RS), and rectal lavage (RL) was prospectively assessed in 37 healthy and 34 diarrheic dogs. The median numbers of isolated ( p = 0.000) or clustered ( p = 0.002) epithelial cells, and of lymphocytes ( p = 0.000), differed among the 3 methods in healthy dogs. In the diarrheic dogs, the RL method was the least sensitive in detecting neutrophils, and isolated or clustered epithelial cells. Cytologic abnormalities were not associated with the origin or the chronicity of diarrhea. Sampling methods differed in their sensitivity to detect abnormalities in FC; DE or RS may be of higher sensitivity compared to RL. Anatomic origin or chronicity of diarrhea do not seem to affect the detection of cytologic abnormalities.

  17. Optimization of sample absorbance for quantitative analysis in the presence of pathlength error in the IR and NIR regions

    International Nuclear Information System (INIS)

    Hirschfeld, T.; Honigs, D.; Hieftje, G.

    1985-01-01

    Optical absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the cases of pathlength error >> photometric error (trivial) and various cases in which the pathlength errors and the photometric error are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable.
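
    A short numerical illustration of the constant-absolute-pathlength-error case follows. It uses standard Beer-Lambert error propagation for a constant transmittance error; the specific error magnitudes are assumptions, not values from the paper.

```python
# Numerical illustration of the constant-absolute-pathlength-error case, using standard
# Beer-Lambert error propagation (sigma_A = 0.434*sigma_T/T for a constant transmittance
# error).  The error magnitudes are assumptions, not values from the paper.
import numpy as np

sigma_T = 0.005            # assumed constant transmittance (photometric) error
rel_path_err_at_A1 = 0.01  # assumed relative pathlength error when A = 1 (constant absolute error)

A = np.linspace(0.05, 3.0, 600)
photometric = 0.434 * sigma_T * 10**A / A   # relative error in c from sigma_T (10**A = 1/T)
pathlength = rel_path_err_at_A1 / A         # constant absolute pathlength error scales as 1/A
total = np.sqrt(photometric**2 + pathlength**2)

A_opt = A[np.argmin(total)]
print(f"optimal absorbance ~ {A_opt:.2f} (vs ~0.43 when only the photometric error matters)")
```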

  18. Centrifugation protocols: tests to determine optimal lithium heparin and citrate plasma sample quality.

    Science.gov (United States)

    Dimeski, Goce; Solano, Connie; Petroff, Mark K; Hynd, Matthew

    2011-05-01

    Currently, no clear guidelines exist for the most appropriate tests to determine sample quality from centrifugation protocols for plasma sample types with both lithium heparin in gel barrier tubes for biochemistry testing and citrate tubes for coagulation testing. Blood was collected from 14 participants in four lithium heparin and one serum tube with gel barrier. The plasma tubes were centrifuged at four different centrifuge settings and analysed for potassium (K(+)), lactate dehydrogenase (LD), glucose and phosphorus (Pi) at zero time, poststorage at six hours at 21 °C and six days at 2-8°C. At the same time, three citrate tubes were collected and centrifuged at three different centrifuge settings and analysed immediately for prothrombin time/international normalized ratio, activated partial thromboplastin time, derived fibrinogen and surface-activated clotting time (SACT). The biochemistry analytes indicate plasma is less stable than serum. Plasma sample quality is higher with longer centrifugation time, and much higher g force. Blood cells present in the plasma lyse with time or are damaged when transferred in the reaction vessels, causing an increase in the K(+), LD and Pi above outlined limits. The cells remain active and consume glucose even in cold storage. The SACT is the only coagulation parameter that was affected by platelets >10 × 10(9)/L in the citrate plasma. In addition to the platelet count, a limited but sensitive number of assays (K(+), LD, glucose and Pi for biochemistry, and SACT for coagulation) can be used to determine appropriate centrifuge settings to consistently obtain the highest quality lithium heparin and citrate plasma samples. The findings will aid laboratories to balance the need to provide the most accurate results in the best turnaround time.

  19. [Optimization of solid-phase extraction for enrichment of toxic organic compounds in water samples].

    Science.gov (United States)

    Zhang, Ming-quan; Li, Feng-min; Wu, Qian-yuan; Hu, Hong-ying

    2013-05-01

    A concentration method for enrichment of toxic organic compounds in water samples has been developed based on combined solid-phase extraction (SPE) to reduce impurities and improve recoveries of target compounds. This SPE method was evaluated at every stage to identify the source of impurities. Based on the analysis of Waters Oasis HLB without water samples, the eluent of the SPE sorbent after dichloromethane and acetone contributed 85% of impurities during the SPE process. In order to reduce the impurities from the SPE sorbent, Soxhlet extraction with dichloromethane followed by acetone and finally methanol was applied to the sorbents for 24 hours, and the results showed that impurities were reduced significantly. In addition to Soxhlet extraction, six types of prevalent SPE sorbents were used to absorb 40 target compounds, the log Kow values of which were within the range of 1.46 to 8.1, and recovery rates were compared. It was noticed and confirmed that Waters Oasis HLB showed the best recovery results for most of the common test samples among all three styrene-divinylbenzene (SDB) polymer sorbents, at 77% on average. Furthermore, Waters SepPak AC-2 provided good recovery results for pesticides among three types of activated carbon sorbents, and the average recovery rates reached 74%. Therefore, Waters Oasis HLB and Waters SepPak AC-2 were combined to obtain a better recovery, and the average recovery rate for the tested 40 compounds of this new SPE method was 87%.

  20. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    International Nuclear Information System (INIS)

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

    We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors, and transfers data via Ethernet. This technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remote controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG sensor-based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control for data acquisition rate optimization. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased by several times without affecting the system operating performance on both local data acquisition and remote process control.

  1. Quality improvement in determination of chemical oxygen demand in samples considered difficult to analyze, through participation in proficiency-testing schemes

    DEFF Research Database (Denmark)

    Raposo, Francisco; Fernández-Cegrí, V.; De la Rubia, M.A.

    2010-01-01

    Chemical oxygen demand (COD) is a critical analytical parameter in waste and wastewater treatment, more specifically in anaerobic digestion, although little is known about the quality of measuring COD of anaerobic digestion samples. Proficiency testing (PT) is a powerful tool that can be used...... to test the performance achievable in the participants' laboratories, so we carried out a second PT of COD determination in samples considered "difficult" to analyze (i.e. solid samples and liquid samples with high concentrations of suspended solids). The results obtained (based on acceptable z...

  2. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    Science.gov (United States)

    Adu-Brimpong, Joel; Coffey, Nathan; Ayers, Colby; Berrigan, David; Yingling, Leah R.; Thomas, Samantha; Mitchell, Valerie; Ahuja, Chaarushi; Rivers, Joshua; Hartz, Jacob; Powell-Wiley, Tiffany M.

    2017-01-01

    Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6). Walk scores® ranged 0–91 (Mean = 46.7 ± 26.3). Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating the impact of specific built environment features on health behaviors and outcomes. PMID:28282878

  3. Evaluation of spot and passive sampling for monitoring, flux estimation and risk assessment of pesticides within the constraints of a typical regulatory monitoring scheme.

    Science.gov (United States)

    Zhang, Zulin; Troldborg, Mads; Yates, Kyari; Osprey, Mark; Kerr, Christine; Hallett, Paul D; Baggaley, Nikki; Rhind, Stewart M; Dawson, Julian J C; Hough, Rupert L

    2016-11-01

    In many agricultural catchments of Europe and North America, pesticides occur at generally low concentrations with significant temporal variation. This poses several challenges for both monitoring and understanding ecological risks/impacts of these chemicals. This study aimed to compare the performance of passive and spot sampling strategies given the constraints of typical regulatory monitoring. Nine pesticides were investigated in a river currently undergoing regulatory monitoring (River Ugie, Scotland). Within this regulatory framework, spot and passive sampling were undertaken to understand spatiotemporal occurrence, mass loads and ecological risks. All the target pesticides were detected in water by both sampling strategies. Chlorotoluron was observed to be the dominant pesticide by both spot (maximum: 111.8 ng/l, mean: 9.35 ng/l) and passive sampling (maximum: 39.24 ng/l, mean: 4.76 ng/l). The annual pesticide loads were estimated to be 2735 g and 1837 g based on the spot and passive sampling data, respectively. The spatiotemporal trend suggested that agricultural activities were the primary source of the compounds, with variability in loads explained in large part by the timing of pesticide applications and rainfall. The risk assessment showed chlorotoluron and chlorpyrifos posed the highest ecological risks, with 23% of the chlorotoluron spot samples and 36% of the chlorpyrifos passive samples resulting in a Risk Quotient greater than 0.1. This suggests that mitigation measures might need to be taken to reduce the input of pesticides into the river. The overall comparison of the two sampling strategies supported the hypothesis that passive sampling tends to integrate the contaminants over a period of exposure and allows quantification of contamination at low concentration. The results suggested that within a regulatory monitoring context passive sampling was more suitable for flux estimation and risk assessment of trace contaminants which cannot be diagnosed by spot

  4. Population Pharmacokinetics of Gemcitabine and dFdU in Pancreatic Cancer Patients Using an Optimal Design, Sparse Sampling Approach.

    Science.gov (United States)

    Serdjebi, Cindy; Gattacceca, Florence; Seitz, Jean-François; Fein, Francine; Gagnière, Johan; François, Eric; Abakar-Mahamat, Abakar; Deplanque, Gael; Rachid, Madani; Lacarelle, Bruno; Ciccolini, Joseph; Dahan, Laetitia

    2017-06-01

    Gemcitabine remains a pillar in pancreatic cancer treatment. However, toxicities are frequently observed. Dose adjustment based on therapeutic drug monitoring might help decrease the occurrence of toxicities. In this context, this work aims at describing the pharmacokinetics (PK) of gemcitabine and its metabolite dFdU in pancreatic cancer patients and at identifying the main sources of their PK variability using a population PK approach, despite a sparsely sampled population and heterogeneous administration and sampling protocols. Data from 38 patients were included in the analysis. The three optimal sampling times were determined using KineticPro and the population PK analysis was performed in Monolix. Available patient characteristics, including cytidine deaminase (CDA) status, were tested as covariates. Correlation between PK parameters and occurrence of severe hematological toxicities was also investigated. A two-compartment model best fitted the gemcitabine and dFdU PK data (volume of distribution and clearance for gemcitabine: V1 = 45 L and CL1 = 4.03 L/min; for dFdU: V2 = 36 L and CL2 = 0.226 L/min). Renal function was found to influence gemcitabine clearance, and body surface area to impact the volume of distribution of dFdU. However, neither CDA status nor the occurrence of toxicities was correlated with PK parameters. Despite sparse sampling and heterogeneous administration and sampling protocols, population and individual PK parameters of gemcitabine and dFdU were successfully estimated using the Monolix population PK software. The estimated parameters were consistent with previously published results. Surprisingly, CDA activity did not influence gemcitabine PK, which was explained by the absence of CDA-deficient patients enrolled in the study. This work suggests that even sparse data are valuable to estimate population and individual PK parameters in patients, which will be usable to individualize the dose for an optimized benefit-to-risk ratio.

  5. Tracking a changing environment: optimal sampling, adaptive memory and overnight effects.

    Science.gov (United States)

    Dunlap, Aimee S; Stephens, David W

    2012-02-01

    Foraging in a variable environment presents a classic problem of decision making with incomplete information. Animals must track the changing environment, remember the best options and make choices accordingly. While several experimental studies have explored the idea that sampling behavior reflects the amount of environmental change, we take the next logical step in asking how change influences memory. We explore the hypothesis that memory length should be tied to the ecological relevance and the value of the information learned, and that environmental change is a key determinant of the value of memory. We use a dynamic programming model to confirm our predictions and then test memory length in a factorial experiment. In our experimental situation we manipulate rates of change in a simple foraging task for blue jays over a 36 h period. After jays experienced an experimentally determined change regime, we tested them at a range of retention intervals, from 1 to 72 h. Manipulated rates of change influenced learning and sampling rates: subjects sampled more and learned more quickly in the high change condition. Tests of retention revealed significant interactions between retention interval and the experienced rate of change. We observed a striking and surprising difference between the high and low change treatments at the 24h retention interval. In agreement with earlier work we find that a circadian retention interval is special, but we find that the extent of this 'specialness' depends on the subject's prior experience of environmental change. Specifically, experienced rates of change seem to influence how subjects balance recent information against past experience in a way that interacts with the passage of time. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Additive operator-difference schemes splitting schemes

    CERN Document Server

    Vabishchevich, Petr N

    2013-01-01

    Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy

  7. Determination of Ergot Alkaloids: Purity and Stability Assessment of Standards and Optimization of Extraction Conditions for Cereal Samples

    DEFF Research Database (Denmark)

    Krska, R.; Berthiller, F.; Schuhmacher, R.

    2008-01-01

    as those that are the most common and physiologically active. The purity of the standards was investigated by means of liquid chromatography with diode array detection, electrospray ionization, and time-of-flight mass spectrometry (LC-DAD-ESI-TOF-MS). All of the standards assessed showed purity levels...... (PSA) before LC/MS/MS. Based on the results obtained from these optimization studies, a mixture of acetonitrile with ammonium carbonate buffer was used as extraction solvent, as recoveries for all analyzed ergot alkaloids were significantly higher than those with the other solvents. Different sample...

  8. Superposition Enhanced Nested Sampling

    Directory of Open Access Journals (Sweden)

    Stefano Martiniani

    2014-08-01

    Full Text Available The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.
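
    For readers unfamiliar with the baseline algorithm, a bare-bones nested sampling sketch is given below. It omits the superposition enhancement that the record describes; the toy Gaussian likelihood, uniform prior box, and rejection-sampling replacement step are assumptions made for brevity.

```python
# Bare-bones nested sampling sketch (illustrative).  The superposition enhancement of the
# record above, which adds moves between known minima, is omitted here; the toy Gaussian
# likelihood, uniform prior box and rejection-sampling replacement are assumptions.
import numpy as np

rng = np.random.default_rng(2)

def log_likelihood(x):
    """Toy 2-D Gaussian likelihood centred at the origin."""
    return -0.5 * np.sum(x**2)

def prior_sample(n, bound=5.0):
    return rng.uniform(-bound, bound, size=(n, 2))

def nested_sampling(n_live=200, n_iter=1500, bound=5.0):
    live = prior_sample(n_live, bound)
    live_logl = np.array([log_likelihood(x) for x in live])
    log_z = -np.inf          # running log-evidence
    log_x = 0.0              # log of the prior volume not yet consumed
    for i in range(n_iter):
        worst = np.argmin(live_logl)
        logl_star = live_logl[worst]
        log_x_new = -(i + 1) / n_live                      # volume shrinks by ~1/n_live per step
        log_w = np.log(np.exp(log_x) - np.exp(log_x_new))  # width of this shell
        log_z = np.logaddexp(log_z, log_w + logl_star)
        log_x = log_x_new
        # replace the worst live point by a new prior draw with L > L* (simple rejection)
        while True:
            candidate = prior_sample(1, bound)[0]
            if log_likelihood(candidate) > logl_star:
                live[worst], live_logl[worst] = candidate, log_likelihood(candidate)
                break
    # add the contribution of the remaining live points
    log_z = np.logaddexp(log_z, log_x + np.log(np.mean(np.exp(live_logl))))
    return log_z

# analytic evidence for this toy problem: (2*pi) / (2*bound)^2 = 2*pi/100
print("log-evidence estimate:", round(nested_sampling(), 3), " analytic:", round(np.log(2 * np.pi / 100), 3))
```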

  9. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    Directory of Open Access Journals (Sweden)

    Joel Adu-Brimpong

    2017-03-01

    Full Text Available Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6). Walk scores® ranged 0–91 (Mean = 46.7 ± 26.3). Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating impact of specific built environment features on health behaviors and outcomes.
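
    The scoring and correlation steps described above reduce to a few array operations. The sketch below uses placeholder random data (not study data) with the study's dimensions (82 neighborhoods, 12 segments, 89 items, 0-2 points per item); a plain correlation of segment subsets against the 12-segment mean stands in for the paper's combinations algorithm, and random numbers stand in for Walk Score values.

```python
# Sketch of the scoring and correlation steps described above, on placeholder data.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_homes, n_segments, n_items = 82, 12, 89

audit = rng.integers(0, 3, size=(n_homes, n_segments, n_items))   # 0, 1 or 2 points per item
segment_scores = audit.sum(axis=2)                  # one quality score per street segment
neighborhood_scores = segment_scores.sum(axis=1)    # overall score per home neighborhood
walk_score = rng.uniform(0, 91, size=n_homes)       # placeholder external walkability index

rho, p = spearmanr(neighborhood_scores, walk_score)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# representativeness check: how well do subsets of k segments track the 12-segment mean?
k = 4
subset_corrs = [np.corrcoef(segment_scores[:, list(c)].mean(axis=1),
                            segment_scores.mean(axis=1))[0, 1]
                for c in combinations(range(n_segments), k)]
print(f"{k}-segment subsets: correlation with the full mean ranges "
      f"{min(subset_corrs):.2f} to {max(subset_corrs):.2f}")
```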

  10. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    Science.gov (United States)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic example and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost
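
    As a concrete entry point to sample-based optimal transport: with two equal-size samples and uniform weights, the discrete L2 problem reduces to a linear assignment. The sketch below is a generic illustration of that reduction, not the thesis' feature-function algorithm or its barycenter iteration.

```python
# Sample-based L2 optimal transport sketch: with equal-size samples and uniform weights,
# the discrete OT problem reduces to a linear assignment between the two point clouds.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(4)

n, d = 200, 2
source = rng.normal(0.0, 1.0, size=(n, d))           # sample from the source distribution
target = rng.normal(3.0, 0.5, size=(n, d))           # sample from the target distribution

cost = cdist(source, target, metric="sqeuclidean")    # pairwise squared L2 costs
rows, cols = linear_sum_assignment(cost)               # optimal one-to-one coupling

print(f"empirical squared-Wasserstein cost per point: {cost[rows, cols].mean():.3f}")

# the induced map sends source[i] to target[cols[i]]; the midpoints of matched pairs give
# a crude equal-weights barycenter sample between the two clouds
midpoints = 0.5 * (source[rows] + target[cols])
print("barycenter sample mean:", midpoints.mean(axis=0).round(2))
```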

  11. Optimized cryo-focused ion beam sample preparation aimed at in situ structural studies of membrane proteins.

    Science.gov (United States)

    Schaffer, Miroslava; Mahamid, Julia; Engel, Benjamin D; Laugks, Tim; Baumeister, Wolfgang; Plitzko, Jürgen M

    2017-02-01

    While cryo-electron tomography (cryo-ET) can reveal biological structures in their native state within the cellular environment, it requires the production of high-quality frozen-hydrated sections that are thinner than 300 nm. Sample requirements are even more stringent for the visualization of membrane-bound protein complexes within dense cellular regions. Focused ion beam (FIB) sample preparation for transmission electron microscopy (TEM) is a well-established technique in material science, but there are only a few examples of biological samples exhibiting sufficient quality for high-resolution in situ investigation by cryo-ET. In this work, we present a comprehensive description of a cryo-sample preparation workflow incorporating additional conductive-coating procedures. These coating steps eliminate the adverse effects of sample charging on imaging with the Volta phase plate, allowing data acquisition with improved contrast. We discuss optimized FIB milling strategies adapted from material science and each critical step required to produce homogeneously thin, non-charging FIB lamellas that make large areas of unperturbed HeLa and Chlamydomonas cells accessible for cryo-ET at molecular resolution. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Optimization of the solvent-based dissolution method to sample volatile organic compound vapors for compound-specific isotope analysis.

    Science.gov (United States)

    Bouchard, Daniel; Wanner, Philipp; Luo, Hong; McLoughlin, Patrick W; Henderson, James K; Pirkle, Robert J; Hunkeler, Daniel

    2017-10-20

    The methodology of the solvent-based dissolution method used to sample gas phase volatile organic compounds (VOC) for compound-specific isotope analysis (CSIA) was optimized to lower the method detection limits for TCE and benzene. The sampling methodology previously evaluated by [1] consists in pulling the air through a solvent to dissolve and accumulate the gaseous VOC. After the sampling process, the solvent can then be treated similarly as groundwater samples to perform routine CSIA by diluting an aliquot of the solvent into water to reach the required concentration of the targeted contaminant. Among solvents tested, tetraethylene glycol dimethyl ether (TGDE) showed the best aptitude for the method. TGDE has a great affinity with TCE and benzene, hence efficiently dissolving the compounds during their transition through the solvent. The method detection limit for TCE (5 ± 1 μg/m3) and benzene (1.7 ± 0.5 μg/m3) is lower when using TGDE compared to methanol, which was previously used (385 μg/m3 for TCE and 130 μg/m3 for benzene) [2]. The method detection limit refers to the minimal gas phase concentration in ambient air required to load sufficient VOC mass into TGDE to perform δ13C analysis. Due to a different analytical procedure, the method detection limit associated with δ37Cl analysis was found to be 156 ± 6 μg/m3 for TCE. Furthermore, the experimental results validated the relationship between the gas phase TCE and the progressive accumulation of dissolved TCE in the solvent during the sampling process. Accordingly, based on the air-solvent partitioning coefficient, the sampling methodology (e.g. sampling rate, sampling duration, amount of solvent) and the final TCE concentration in the solvent, the concentration of TCE in the gas phase prevailing during the sampling event can be determined. Moreover, the possibility to analyse for TCE concentration in the solvent after sampling (or other targeted VOCs) allows the field deployment of the sampling
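
    The back-calculation mentioned at the end of the record can be pictured with a simple mass balance. The sketch below assumes essentially complete retention of TCE by the solvent during sampling and uses placeholder numbers; the published relationship additionally involves the air-solvent partitioning coefficient.

```python
# Back-of-the-envelope mass balance for the back-calculation mentioned above, assuming
# essentially complete retention of TCE by the solvent during sampling.  All numbers
# below are placeholders, not values from the record.
def gas_phase_concentration(c_solvent_ug_per_l, v_solvent_ml, flow_l_per_min, duration_min):
    """Average gas-phase concentration (ug/m3) prevailing during the sampling event."""
    mass_trapped_ug = c_solvent_ug_per_l * (v_solvent_ml / 1000.0)   # ug accumulated in the solvent
    air_volume_m3 = flow_l_per_min * duration_min / 1000.0           # litres of air pulled -> m3
    return mass_trapped_ug / air_volume_m3

# example: 2 ug/L of TCE in 10 mL of TGDE after sampling at 0.2 L/min for 8 hours
print(f"{gas_phase_concentration(2.0, 10.0, 0.2, 8 * 60):.1f} ug/m3")
```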

  13. Optimal sample size for predicting viability of cabbage and radish seeds based on near infrared spectra of single seeds

    DEFF Research Database (Denmark)

    Shetty, Nisha; Min, Tai-Gi; Gislum, René

    2011-01-01

    The effects of the number of seeds in a training sample set on the ability to predict the viability of cabbage or radish seeds are presented and discussed. The supervised classification method extended canonical variates analysis (ECVA) was used to develop a classification model. Calibration sub......-sets of different sizes were chosen randomly with several iterations and using the spectral-based sample selection algorithms DUPLEX and CADEX. An independent test set was used to validate the developed classification models. The results showed that 200 seeds were optimal in a calibration set for both cabbage...... using all 600 seeds in the calibration set. Thus, the number of seeds in the calibration set can be reduced by up to 67% without significant loss of classification accuracy, which will effectively enhance the cost-effectiveness of NIR spectral analysis. Wavelength regions important...
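
    The calibration-set-size question lends itself to a learning-curve style check: fit on random subsets of increasing size and score on an independent test set. In the sketch below, synthetic "spectra" and a linear discriminant classifier stand in for the NIR data and the ECVA model of the record; only the candidate subset sizes echo the study.

```python
# Learning-curve style sketch of the calibration-set-size question above (placeholder data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# placeholder data: 800 seeds (two classes), of which 600 are available for calibration
n_per_class, n_wavelengths = 400, 100
X = np.vstack([rng.normal(0.0, 1.0, size=(n_per_class, n_wavelengths)),    # "viable"
               rng.normal(0.4, 1.0, size=(n_per_class, n_wavelengths))])   # "non-viable"
y = np.array([1] * n_per_class + [0] * n_per_class)
X_cal, X_test, y_cal, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for n_seeds in (50, 100, 200, 400, 600):
    accs = []
    for _ in range(20):                                    # several random subsets per size
        idx = rng.choice(len(X_cal), size=n_seeds, replace=False)
        model = LinearDiscriminantAnalysis().fit(X_cal[idx], y_cal[idx])
        accs.append(model.score(X_test, y_test))
    print(f"{n_seeds:4d} calibration seeds -> mean test accuracy {np.mean(accs):.3f}")
```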

  14. Immunosuppressant therapeutic drug monitoring by LC-MS/MS: workflow optimization through automated processing of whole blood samples.

    Science.gov (United States)

    Marinova, Mariela; Artusi, Carlo; Brugnolo, Laura; Antonelli, Giorgia; Zaninotto, Martina; Plebani, Mario

    2013-11-01

    Although, due to its high specificity and sensitivity, LC-MS/MS is an efficient technique for the routine determination of immunosuppressants in whole blood, it involves time-consuming manual sample preparation. The aim of the present study was therefore to develop an automated sample-preparation protocol for the quantification of sirolimus, everolimus and tacrolimus by LC-MS/MS using a liquid handling platform. Six-level commercially available blood calibrators were used for assay development, while four quality control materials and three blood samples from patients under immunosuppressant treatment were employed for the evaluation of imprecision. Barcode reading, sample re-suspension, transfer of whole blood samples into 96-well plates, addition of internal standard solution, mixing, and protein precipitation were performed with a liquid handling platform. After plate filtration, the deproteinised supernatants were submitted for SPE on-line. The only manual steps in the entire process were de-capping of the tubes, and transfer of the well plates to the HPLC autosampler. Calibration curves were linear throughout the selected ranges. The imprecision and accuracy data for all analytes were highly satisfactory. The agreement between the results obtained with manual and those obtained with automated sample preparation was optimal (n=390, r=0.96). In daily routine (100 patient samples) the typical overall total turnaround time was less than 6h. Our findings indicate that the proposed analytical system is suitable for routine analysis, since it is straightforward and precise. Furthermore, it incurs less manual workload and less risk of error in the quantification of whole blood immunosuppressant concentrations than conventional methods. © 2013.

  15. Optimized Field Sampling and Monitoring of Airborne Hazardous Transport Plumes; A Geostatistical Simulation Approach

    International Nuclear Information System (INIS)

    Chen, DI-WEN

    2001-01-01

    Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often are days post-incident and rely on hazardous manned air-vehicle measurements. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure the conditions during and after plume release. This requires a cooperative, computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives in an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information, collected in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images, based on existing "hard" experimental data and "soft" preliminary transport modeling results of the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information

  16. Optimization of Region-of-Interest Sampling Strategies for Hepatic MRI Proton Density Fat Fraction Quantification

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.

    2017-01-01

    BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population’s mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥ 4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) 0.995, and 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of

  17. Optimization of region-of-interest sampling strategies for hepatic MRI proton density fat fraction quantification.

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B

    2018-04-01

    Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) 0.995, and 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance
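
    The combinatorial subset evaluation described in both versions of this record can be prototyped in a few lines. The sketch below uses placeholder PDFF values and summarises agreement with the nine-ROI average by the worst-case mean absolute difference rather than the ICC and Bland-Altman limits used in the study; only the subset counts (36 two-ROI, 84 three-ROI, and 126 four-ROI strategies) match the abstract.

```python
# Prototype of the combinatorial ROI-subset evaluation described above, on placeholder data.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)

n_patients, n_rois = 391, 9
patient_mean = rng.uniform(1, 44, size=(n_patients, 1))                    # per-patient PDFF (%)
roi_pdff = patient_mean + rng.normal(0, 1.5, size=(n_patients, n_rois))    # segmental spread

reference = roi_pdff.mean(axis=1)                                          # nine-ROI average

for k in (1, 2, 3, 4):
    subsets = list(combinations(range(n_rois), k))
    worst = max(np.abs(roi_pdff[:, list(s)].mean(axis=1) - reference).mean() for s in subsets)
    print(f"{len(subsets):3d} {k}-ROI strategies: worst mean |difference| vs nine-ROI PDFF = {worst:.2f} %")
```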

  18. Labeling schemes for bounded degree graphs

    DEFF Research Database (Denmark)

    Adjiashvili, David; Rotbart, Noy Galil

    2014-01-01

    We investigate adjacency labeling schemes for graphs of bounded degree Δ = O(1). In particular, we present an optimal (up to an additive constant) log n + O(1) adjacency labeling scheme for bounded degree trees. The latter scheme is derived from a labeling scheme for bounded degree outerplanar...... graphs. Our results complement a similar bound recently obtained for bounded depth trees [Fraigniaud and Korman, SODA 2010], and may provide new insights for closing the long standing gap for adjacency in trees [Alstrup and Rauhe, FOCS 2002]. We also provide improved labeling schemes for bounded degree...

  19. Optimization of microwave-assisted extraction with saponification (MAES) for the determination of polybrominated flame retardants in aquaculture samples.

    Science.gov (United States)

    Fajar, N M; Carro, A M; Lorenzo, R A; Fernandez, F; Cela, R

    2008-08-01

    The efficiency of microwave-assisted extraction with saponification (MAES) for the determination of seven polybrominated flame retardants (polybrominated biphenyls, PBBs; and polybrominated diphenyl ethers, PBDEs) in aquaculture samples is described and compared with microwave-assisted extraction (MAE). Chemometric techniques based on experimental designs and desirability functions were used for simultaneous optimization of the operational parameters used in both MAES and MAE processes. MAES, which had not previously been applied to this type of analyte in aquaculture samples, was shown to be superior to MAE in terms of extraction efficiency, extraction time and lipid content extracted from complex matrices (0.7% as against 18.0% for MAE extracts). PBBs and PBDEs were determined by gas chromatography with micro-electron capture detection (GC-μECD). The quantification limits for the analytes were 40-750 pg g(-1) (except for BB-15, which was 1.43 ng g(-1)). Precision for MAES-GC-μECD (%RSD < 11%) was significantly better than for MAE-GC-μECD (%RSD < 20%). The accuracy of both optimized methods was satisfactorily demonstrated by analysis of an appropriate certified reference material (CRM), WMF-01.

  20. Plasma treatment of bulk niobium surface for superconducting rf cavities: Optimization of the experimental conditions on flat samples

    Directory of Open Access Journals (Sweden)

    M. Rašković

    2010-11-01

    Full Text Available Accelerator performance, in particular the average accelerating field and the cavity quality factor, depends on the physical and chemical characteristics of the superconducting radio-frequency (SRF) cavity surface. Plasma based surface modification provides an excellent opportunity to eliminate nonsuperconductive pollutants in the penetration depth region and to remove the mechanically damaged surface layer, which improves the surface roughness. Here we show that the plasma treatment of bulk niobium (Nb) presents an alternative surface preparation method to the commonly used buffered chemical polishing and electropolishing methods. We have optimized the experimental conditions in the microwave glow discharge system and studied their influence on the Nb removal rate on flat samples. We have achieved an etching rate of 1.7 μm/min using only 3% chlorine in the reactive mixture. Combining a fast etching step with a moderate one, we have improved the surface roughness without exposing the sample surface to the environment. We intend to apply the optimized experimental conditions to the preparation of single cell cavities, pursuing the improvement of their rf performance.

  1. Optimal sample size of signs for classification of radiational and oily soils

    International Nuclear Information System (INIS)

    Babayev, M.P.; Iskenderov, S.M.; Aghayev, R.A.

    2012-01-01

    Full text: This article discusses the classification of radiational and oily soils, which should in essence be a compact intelligence system containing maximum information on the classes of soil objects in the accepted feature space. Accumulated experience shows that the set of the most informative soil indicators comprises at most 7-8 indexes. In our opinion, a more correct approach for selecting the most informative (most important) indexes is the trial-and-error method, that is, an experimental method that draws on the broad experience and intuition of a researcher, or group of researchers, engaged for many years in soil science. At this operational stage of the formal apparatus of soil classification, more specifically its section assessing the informativeness of soil indicators, the procedure is, in our opinion, purely mathematical and in some cases does not reflect the true picture. In this case, 21 pairwise correlation coefficients are calculated between the selected soil indicators as a measure of linear association. The size of the correlation set is limited to 6, as increasing it can sharply increase the computational volume. It is pertinent to note that this is the first attempt to construct correlation matrices of the most important indicators of radiational and oily soils

  2. Numerical scheme for optimization of xenon transient processes in a reactor. Problem on fast response without a limitation for phase variables

    International Nuclear Information System (INIS)

    Gerasimov, A.S.

    1975-01-01

    A numerical scheme is suggested for minimizing the duration of the xenon transient process in a reactor without any limitation on the xenon-135 concentration. The problem is solved on a computer using a point model. Pontryagin's maximum principle is used to verify the optimality of the transient process

  3. A boundary-optimized rejection region test for the two-sample binomial problem.

    Science.gov (United States)

    Gabriel, Erin E; Nason, Martha; Fay, Michael P; Follmann, Dean A

    2018-03-30

    Testing the equality of 2 proportions for a control group versus a treatment group is a well-researched statistical problem. In some settings, there may be strong historical data that allow one to reliably expect that the control proportion is one, or nearly so. While one-sample tests or comparisons to historical controls could be used, neither can rigorously control the type I error rate in the event the true control rate changes. In this work, we propose an unconditional exact test that exploits the historical information while controlling the type I error rate. We sequentially construct a rejection region by first maximizing the rejection region in the space where all controls have an event, subject to the constraint that our type I error rate does not exceed α for any true event rate; then with any remaining α we maximize the additional rejection region in the space where one control avoids the event, and so on. When the true control event rate is one, our test is the most powerful nonrandomized test for all points in the alternative space. When the true control event rate is nearly one, we demonstrate that our test has equal or higher mean power, averaging over the alternative space, than a variety of well-known tests. For the comparison of 4 controls and 4 treated subjects, our proposed test has higher power than all comparator tests. We demonstrate the properties of our proposed test by simulation and use our method to design a malaria vaccine trial. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
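
    The central constraint described above, that the type I error rate may not exceed α for any true control event rate, can be checked numerically. The sketch below is not the authors' construction algorithm; it only illustrates, for a hypothetical 4-versus-4 design and a hypothetical rejection region, how the worst-case (supremum) type I error over the shared null event rate can be evaluated with a simple grid search in Python.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def max_type1_error(rejection_region, n_control=4, n_treated=4, grid=2001):
    """Worst-case type I error of a rejection region {(x_control, x_treated)}
    over all shared null event rates p in [0, 1], evaluated on a grid."""
    worst = 0.0
    for i in range(grid):
        p = i / (grid - 1)
        # Under H0 both arms share the same event rate p.
        alpha_p = sum(binom_pmf(xc, n_control, p) * binom_pmf(xt, n_treated, p)
                      for (xc, xt) in rejection_region)
        worst = max(worst, alpha_p)
    return worst

# Hypothetical rejection region: reject when all controls have the event
# and at most one treated subject does.
region = {(4, 0), (4, 1)}
print(f"sup_p type I error = {max_type1_error(region):.4f}")
```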

  4. Optimized pre-thinning procedures of ion-beam thinning for TEM sample preparation by magnetorheological polishing.

    Science.gov (United States)

    Luo, Hu; Yin, Shaohui; Zhang, Guanhua; Liu, Chunhui; Tang, Qingchun; Guo, Meijian

    2017-10-01

    Ion-beam thinning is a well-established sample preparation technique for transmission electron microscopy (TEM), but tedious procedures and labor-intensive pre-thinning can seriously reduce its efficiency. In this work, we present a simple pre-thinning technique that uses magnetorheological (MR) polishing to replace manual lapping and dimpling, and demonstrate the successful preparation of electron-transparent single-crystal silicon samples after MR polishing and single-sided ion milling. Dimples pre-thinned to less than 30 microns and with little mechanical surface damage were repeatedly produced under optimized MR polishing conditions. Samples pre-thinned by both the MR polishing and the traditional technique were ion-beam thinned from the rear side until perforation, and then observed by optical microscopy and TEM. The results show that the specimen pre-thinned by the MR technique was free from dimpling-related defects, which were still present in the sample pre-thinned by the conventional technique. Good high-resolution TEM images could be acquired after MR polishing and one-side ion thinning. MR polishing promises to be an adaptable and efficient method for pre-thinning in the preparation of TEM specimens, especially for brittle ceramics. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. An Optimized DNA Analysis Workflow for the Sampling, Extraction, and Concentration of DNA obtained from Archived Latent Fingerprints.

    Science.gov (United States)

    Solomon, April D; Hytinen, Madison E; McClain, Aryn M; Miller, Marilyn T; Dawson Cruz, Tracey

    2018-01-01

    DNA profiles have been obtained from fingerprints, but there is limited knowledge regarding DNA analysis from archived latent fingerprints-touch DNA "sandwiched" between adhesive and paper. Thus, this study sought to comparatively analyze a variety of collection and analytical methods in an effort to seek an optimized workflow for this specific sample type. Untreated and treated archived latent fingerprints were utilized to compare different biological sampling techniques, swab diluents, DNA extraction systems, DNA concentration practices, and post-amplification purification methods. Archived latent fingerprints disassembled and sampled via direct cutting, followed by DNA extracted using the QIAamp® DNA Investigator Kit, and concentration with Centri-Sep™ columns increased the odds of obtaining an STR profile. Using the recommended DNA workflow, 9 of the 10 samples provided STR profiles, which included 7-100% of the expected STR alleles and two full profiles. Thus, with carefully selected procedures, archived latent fingerprints can be a viable DNA source for criminal investigations including cold/postconviction cases. © 2017 American Academy of Forensic Sciences.

  6. SU-E-T-23: A Novel Two-Step Optimization Scheme for Tandem and Ovoid (T and O) HDR Brachytherapy Treatment for Locally Advanced Cervical Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, M; Todor, D [Virginia Commonwealth University, Richmond, VA (United States); Fields, E [Virginia Commonwealth University, Richmond, Virginia (United States)

    2014-06-01

    Purpose: To present a novel method allowing fast, true volumetric optimization of T and O HDR treatments and to quantify its benefits. Materials and Methods: 27 CT planning datasets and treatment plans from six consecutive cervical cancer patients treated with 4–5 intracavitary T and O insertions were used. Initial treatment plans were created with a goal of covering high risk (HR)-CTV with D90 > 90% and minimizing D2cc to rectum, bladder and sigmoid with manual optimization, approved and delivered. For the second step, each case was re-planned adding a new structure, created from the 100% prescription isodose line of the manually optimized plan to the existent physician delineated HR-CTV, rectum, bladder and sigmoid. New, more rigorous DVH constraints for the critical OARs were used for the optimization. D90 for the HR-CTV and D2cc for OARs were evaluated in both plans. Results: Two-step optimized plans had consistently smaller D2cc's for all three OARs while preserving good D90s for HR-CTV. On plans with “excellent” CTV coverage, average D90 of 96% (range 91–102), sigmoid D2cc was reduced on average by 37% (range 16–73), bladder by 28% (range 20–47) and rectum by 27% (range 15–45). Similar reductions were obtained on plans with “good” coverage, with an average D90 of 93% (range 90–99). For plans with inferior coverage, average D90 of 81%, an increase in coverage to 87% was achieved concurrently with D2cc reductions of 31%, 18% and 11% for sigmoid, bladder and rectum. Conclusions: A two-step DVH-based optimization can be added with minimal planning time increase, but with the potential of dramatic and systematic reductions of D2cc for OARs and in some cases with concurrent increases in target dose coverage. These single-fraction modifications would be magnified over the course of 4–5 intracavitary insertions and may have real clinical implications in terms of decreasing both acute and late toxicity.

  7. Novel synthesis of nanocomposite for the extraction of Sildenafil Citrate (Viagra) from water and urine samples: Process screening and optimization.

    Science.gov (United States)

    Asfaram, Arash; Ghaedi, Mehrorang; Purkait, Mihir Kumar

    2017-09-01

    A sensitive analytical method is investigated to concentrate and determine trace levels of Sildenafil Citrate (SLC) present in water and urine samples. The method is based on a sample treatment using dispersive solid-phase micro-extraction (DSPME) with a laboratory-made Mn@CuS/ZnS nanocomposite loaded on activated carbon (Mn@CuS/ZnS-NCs-AC) as a sorbent for the target analyte. The efficiency was enhanced by ultrasound assistance (UA), giving ultrasound-assisted dispersive nanocomposite solid-phase micro-extraction (UA-DNSPME). Four significant variables affecting SLC recovery, namely pH, eluent volume, sonication time and adsorbent mass, were selected by Plackett-Burman design (PBD) experiments. These selected factors were optimized by a central composite design (CCD) to maximize extraction of SLC. The results showed that the optimum conditions for maximizing extraction of SLC were pH 6.0, 300 μL of eluent (acetonitrile), 10 mg of adsorbent and 6 min sonication time. Under optimized conditions, good linearity for SLC was obtained from 30 to 4000 ng mL(-1) with R2 of 0.99. The limit of detection (LOD) was 2.50 ng mL(-1), and the recoveries at two spiked levels ranged from 97.37 to 103.21% with a relative standard deviation (RSD) of less than 4.50% (n=15). The enhancement factor (EF) was 81.91. The results show that the combination of ultrasound assistance with DNSPME is a suitable method for the determination of SLC in water and urine samples. Copyright © 2017 Elsevier B.V. All rights reserved.
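
    For readers unfamiliar with the central composite design (CCD) mentioned above, the sketch below builds a face-centered CCD in coded units for the four retained factors and maps it to natural units. The factor ranges and the number of center points are illustrative assumptions, not the design table used in the paper.

```python
from itertools import product

# Illustrative factor ranges (low, high); not taken from the paper's design table.
factors = {
    "pH":             (4.0, 8.0),
    "eluent_uL":      (100, 500),
    "sonication_min": (2, 10),
    "adsorbent_mg":   (5, 15),
}

def face_centered_ccd(names, n_center=3):
    """Coded design points of a face-centered central composite design."""
    k = len(names)
    runs = [list(pt) for pt in product([-1, 1], repeat=k)]   # two-level factorial part
    for i in range(k):                                       # axial (star) points on the faces
        for level in (-1, 1):
            pt = [0] * k
            pt[i] = level
            runs.append(pt)
    runs += [[0] * k for _ in range(n_center)]               # center points
    return runs

def decode(coded_run, factors):
    """Convert coded levels (-1, 0, +1) to natural units."""
    out = {}
    for x, (name, (lo, hi)) in zip(coded_run, factors.items()):
        mid, half = (lo + hi) / 2, (hi - lo) / 2
        out[name] = mid + x * half
    return out

design = face_centered_ccd(list(factors))
print(f"{len(design)} runs")          # 2**4 + 2*4 + 3 = 27 runs
print(decode(design[0], factors))
```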

  8. Exploring structural variability in X-ray crystallographic models using protein local optimization by torsion-angle sampling

    International Nuclear Information System (INIS)

    Knight, Jennifer L.; Zhou, Zhiyong; Gallicchio, Emilio; Himmel, Daniel M.; Friesner, Richard A.; Arnold, Eddy; Levy, Ronald M.

    2008-01-01

    Torsion-angle sampling, as implemented in the Protein Local Optimization Program (PLOP), is used to generate multiple structurally variable single-conformer models which are in good agreement with X-ray data. An ensemble-refinement approach to differentiate between positional uncertainty and conformational heterogeneity is proposed. Modeling structural variability is critical for understanding protein function and for modeling reliable targets for in silico docking experiments. Because of the time-intensive nature of manual X-ray crystallographic refinement, automated refinement methods that thoroughly explore conformational space are essential for the systematic construction of structurally variable models. Using five proteins spanning resolutions of 1.0–2.8 Å, it is demonstrated how torsion-angle sampling of backbone and side-chain libraries with filtering against both the chemical energy, using a modern effective potential, and the electron density, coupled with minimization of a reciprocal-space X-ray target function, can generate multiple structurally variable models which fit the X-ray data well. Torsion-angle sampling as implemented in the Protein Local Optimization Program (PLOP) has been used in this work. Models with the lowest R free values are obtained when electrostatic and implicit solvation terms are included in the effective potential. HIV-1 protease, calmodulin and SUMO-conjugating enzyme illustrate how variability in the ensemble of structures captures structural variability that is observed across multiple crystal structures and is linked to functional flexibility at hinge regions and binding interfaces. An ensemble-refinement procedure is proposed to differentiate between variability that is a consequence of physical conformational heterogeneity and that which reflects uncertainty in the atomic coordinates

  9. Acoustic reverse-time migration using GPU card and POSIX thread based on the adaptive optimal finite-difference scheme and the hybrid absorbing boundary condition

    Science.gov (United States)

    Cai, Xiaohui; Liu, Yang; Ren, Zhiming

    2018-06-01

    Reverse-time migration (RTM) is a powerful tool for imaging geologically complex structures such as steep dips and subsalt bodies. However, its implementation is quite computationally expensive. Recently, as a low-cost solution, the graphic processing unit (GPU) was introduced to improve the efficiency of RTM. In this paper, we develop three strategies to improve the implementation of RTM on a GPU card. First, given the high accuracy and efficiency of the adaptive optimal finite-difference (FD) method based on least squares (LS) on the central processing unit (CPU), we study the optimal LS-based FD method on the GPU. Second, we extend the CPU-based hybrid absorbing boundary condition (ABC) to a GPU-based one by addressing two issues that arise when the former is ported to the GPU: it is time-consuming and produces chaotic threads. Third, for large-scale data, a combinatorial strategy of optimal checkpointing and efficient boundary storage is introduced for the trade-off between memory and recomputation. To save the time of communication between host and disk, a portable operating system interface (POSIX) thread is utilized to engage another CPU core at the checkpoints. Applications of the three strategies on the GPU with the compute unified device architecture (CUDA) programming language in RTM demonstrate their efficiency and validity.

  10. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables, we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
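
    As a rough illustration of the subset-selection idea described above, the sketch below ranks variables with a Random Forest's impurity-based importance (scikit-learn) and compares cross-validated accuracy for the full and reduced variable sets. Synthetic data stand in for the MODIS time-series variables, and the half-of-the-variables cutoff is simply an assumption echoing the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a stack of time-series image variables (e.g., MODIS bands/dates).
X, y = make_classification(n_samples=2000, n_features=60, n_informative=12, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0, n_jobs=-1).fit(X, y)

# Rank variables by impurity-based importance and keep roughly half of them.
order = np.argsort(rf.feature_importances_)[::-1]
subset = order[:30]

full_acc = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                           X, y, cv=5).mean()
sub_acc = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                          X[:, subset], y, cv=5).mean()
print(f"full set: {full_acc:.3f}  selected subset: {sub_acc:.3f}")
```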

  11. Optimization and application of octadecyl-modified monolithic silica for solid-phase extraction of drugs in whole blood samples.

    Science.gov (United States)

    Namera, Akira; Saito, Takeshi; Ota, Shigenori; Miyazaki, Shota; Oikawa, Hiroshi; Murata, Kazuhiro; Nagao, Masataka

    2017-09-29

    Monolithic silica in MonoSpin for solid-phase extraction of drugs from whole blood samples was developed to facilitate high-throughput analysis. Monolithic silica of various pore sizes and octadecyl contents were synthesized, and their effects on recovery rates were evaluated. The silica monolith M18-200 (20 μm through-pore size, 10.4 nm mesopore size, and 17.3% carbon content) achieved the best recovery of the target analytes in whole blood samples. The extraction proceeded with centrifugal force at 1000 rpm for 2 min, and the eluate was directly injected into the liquid chromatography-mass spectrometry system without any tedious steps such as evaporation of extraction solvents. Under the optimized condition, low detection limits of 0.5-2.0 ng mL(-1) and calibration ranges up to 1000 ng mL(-1) were obtained. The recoveries of the target drugs in the whole blood were 76-108% with relative standard deviation of less than 14.3%. These results indicate that the developed method based on monolithic silica is convenient, highly efficient, and applicable for detecting drugs in whole blood samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Throughput Estimation Method in Burst ACK Scheme for Optimizing Frame Size and Burst Frame Number Appropriate to SNR-Related Error Rate

    Science.gov (United States)

    Ohteru, Shoko; Kishine, Keiji

    The Burst ACK scheme enhances effective throughput by reducing ACK overhead when a transmitter sequentially sends multiple data frames to a destination. IEEE 802.11e is one such example. The size of the data frame body and the number of burst data frames are important burst transmission parameters that affect throughput. The larger the burst transmission parameters are, the better the throughput under error-free conditions becomes. However, large data frames could reduce throughput under error-prone conditions caused by signal-to-noise ratio (SNR) deterioration. If the throughput can be calculated from the burst transmission parameters and error rate, the appropriate ranges of the burst transmission parameters could be narrowed down, and the necessary buffer size for temporarily storing transmit or received data could be estimated. In this paper, we present a method that features a simple algorithm for estimating the effective throughput from the burst transmission parameters and error rate. The calculated throughput values agree well with the measured ones for actual wireless boards based on the IEEE 802.11-based original MAC protocol. We also calculate throughput values for larger values of the burst transmission parameters outside the assignable values of the wireless boards and find the appropriate values of the burst transmission parameters.
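
    The abstract does not reproduce the estimation formula, so the sketch below is only a generic back-of-the-envelope model: assuming independent bit errors, the frame error rate grows with frame size, and effective throughput is the expected delivered payload divided by the burst airtime. All rate and timing constants are hypothetical placeholders, not IEEE 802.11e values.

```python
def frame_error_rate(ber, frame_bits):
    """Frame error rate assuming independent bit errors."""
    return 1.0 - (1.0 - ber) ** frame_bits

def burst_throughput(ber, frame_bytes, burst_frames,
                     rate_bps=54e6, t_overhead_per_frame=50e-6, t_burst_overhead=200e-6):
    """Estimated effective throughput (bit/s) of one burst-ACK exchange.
    Timing constants are illustrative, not taken from the standard."""
    frame_bits = frame_bytes * 8
    fer = frame_error_rate(ber, frame_bits)
    delivered_bits = burst_frames * frame_bits * (1.0 - fer)        # expected good payload
    airtime = (burst_frames * (frame_bits / rate_bps + t_overhead_per_frame)
               + t_burst_overhead)                                   # per-frame cost + one ACK exchange
    return delivered_bits / airtime

# Larger frames win at low BER but lose once frame errors dominate.
for ber in (1e-7, 1e-5):
    print(ber, [round(burst_throughput(ber, size, 8) / 1e6, 1) for size in (500, 1500, 4000)])
```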

  13. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    Science.gov (United States)

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.
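
    A minimal sketch of the idea of online model comparison driving stimulus selection is given below; it is not the authors' ASAP implementation. Two hypothetical psychometric models are compared, and on each trial the stimulus for which the posterior-weighted model predictions disagree most is presented, after which the model posterior is updated from the observed response.

```python
import numpy as np

# Two hypothetical generative models: probability of a "hit" as a function of stimulus x.
models = [lambda x: 1 / (1 + np.exp(-(x - 2))),      # model A
          lambda x: 1 / (1 + np.exp(-2 * (x - 3)))]  # model B
log_post = np.log([0.5, 0.5])           # prior over models
stimuli = np.linspace(0, 6, 61)
rng = np.random.default_rng(0)
truth = models[1]                        # data actually generated by model B

for trial in range(40):
    post = np.exp(log_post - log_post.max()); post /= post.sum()
    # Pick the stimulus where the posterior-weighted models disagree most.
    preds = np.array([[m(x) for x in stimuli] for m in models])
    disagreement = post[0] * post[1] * np.abs(preds[0] - preds[1])
    x = stimuli[np.argmax(disagreement)]
    y = rng.random() < truth(x)          # observe a binary response
    # Online model comparison: accumulate each model's log-likelihood.
    for i, m in enumerate(models):
        p = m(x)
        log_post[i] += np.log(p if y else 1 - p)

post = np.exp(log_post - log_post.max()); post /= post.sum()
print("posterior over models:", np.round(post, 3))
```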

  14. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers.

    Science.gov (United States)

    Tisdale, Evgenia; Kennedy, Devin; Xu, Xiaodong; Wilkins, Charles

    2014-01-15

    The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers, contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to the change in the composition of the catalyst (the addition of small amount of CuBr2) than is the pentafluorostyrene component distribution. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn; Zhu, Weiliang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn [ACS Key Laboratory of Receptor Research, Drug Discovery and Design Center, Shanghai Institute of Materia Medica, Chinese Academy of Sciences, 555 Zuchongzhi Road, Shanghai 201203 (China); Shi, Jiye, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn [UCB Pharma, 216 Bath Road, Slough SL1 4EN (United Kingdom)

    2015-03-28

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the hope of reducing the temperature (replica) number while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives accurate evaluation of the structural and thermodynamic properties of the conformational transition, which are in good agreement with the standard REMD simulation. Therefore, the hybrid REMD can greatly increase the computational efficiency and thus expand the application of REMD simulation to larger-size protein systems.
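
    For context, the sketch below shows the standard Metropolis acceptance criterion used to swap neighbouring replicas in temperature REMD; it is generic textbook REMD, not the authors' velocity-scaling hybrid solvent scheme. The units and example energies are assumptions.

```python
import math
import random

K_B = 0.0019872041  # kcal/(mol*K), assuming potential energies in kcal/mol

def exchange_probability(E_i, E_j, T_i, T_j):
    """Metropolis acceptance probability for swapping the configurations of
    replicas i and j in standard temperature REMD."""
    delta = (1.0 / (K_B * T_i) - 1.0 / (K_B * T_j)) * (E_i - E_j)
    return min(1.0, math.exp(delta))

def attempt_swap(E_i, E_j, T_i, T_j, rng=random.random):
    return rng() < exchange_probability(E_i, E_j, T_i, T_j)

# Example: neighbouring replicas at 300 K and 320 K with hypothetical potential energies.
print(exchange_probability(-1205.0, -1198.0, 300.0, 320.0))
```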

  16. Intelligent Support System of Steel Technical Preparation in an Arc Furnace: Functional Scheme of Interactive Builder of the Multi Objective Optimization Problem

    Science.gov (United States)

    Logunova, O. S.; Sibileva, N. S.

    2017-12-01

    The purpose of the study is to increase the efficiency of the steelmaking process in a large-capacity arc furnace on the basis of implementing a new decision-making system for the composition of charge materials. The authors proposed an interactive builder for the formation of the optimization problem, taking into account the requirements of the customer, normative documents and stocks of charge materials in the warehouse. To implement the interactive builder, sets of deterministic and stochastic model components are developed, as well as a list of preferences for criteria and constraints.

  17. Design and sampling plan optimization for RT-qPCR experiments in plants: a case study in blueberry

    Directory of Open Access Journals (Sweden)

    Jose V Die

    2016-03-01

    Full Text Available The qPCR assay has become a routine technology in plant biotechnology and agricultural research. It is unlikely to be technically improved, but there are still challenges which center around minimizing the variability in results and transparency when reporting technical data in support of the conclusions of a study. There are a number of aspects of the pre- and post-assay workflow that contribute to variability of results. Here, through the study of the introduction of error in qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. In this study, we found that the stage for which increasing the number of replicates would be the most beneficial depends on the tissue used. For example, we would recommend the use of more RT replicates when working with leaf tissue, while the use of more sampling (RNA extraction) replicates would be recommended when working with stems or fruits to obtain optimal results. The use of more qPCR replicates provides the least benefit as it is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner to produce more consistent and reproducible data.
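
    The budget-constrained sampling-plan script mentioned above is not reproduced here, but the underlying idea can be sketched: for a fully nested workflow (sampling → RT → qPCR), the variance of the experiment-wide mean is the sum of stage variances divided by the numbers of replicates beneath them, and one can enumerate allocations within a budget and keep the minimum-variance plan. The variance components and costs below are hypothetical placeholders, not the paper's estimates.

```python
from itertools import product

# Hypothetical variance components and per-replicate costs (not values from the paper).
var = {"sampling": 0.40, "rt": 0.15, "qpcr": 0.05}    # variance contributed by each stage
cost = {"sampling": 10.0, "rt": 4.0, "qpcr": 1.0}     # cost per replicate at each stage
budget = 120.0

def plan_variance(n_s, n_rt, n_q):
    """Variance of the experiment-wide mean for a fully nested design."""
    return (var["sampling"] / n_s
            + var["rt"] / (n_s * n_rt)
            + var["qpcr"] / (n_s * n_rt * n_q))

def plan_cost(n_s, n_rt, n_q):
    return n_s * (cost["sampling"] + n_rt * (cost["rt"] + n_q * cost["qpcr"]))

best = min(
    (plan for plan in product(range(1, 11), repeat=3) if plan_cost(*plan) <= budget),
    key=lambda plan: plan_variance(*plan),
)
print("best (n_sampling, n_RT, n_qPCR):", best,
      "variance:", round(plan_variance(*best), 4),
      "cost:", plan_cost(*best))
```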

  18. An Evaluation of Different Training Sample Allocation Schemes for Discrete and Continuous Land Cover Classification Using Decision Tree-Based Algorithms

    Directory of Open Access Journals (Sweden)

    René Roland Colditz

    2015-07-01

    Full Text Available Land cover mapping for large regions often employs satellite images of medium to coarse spatial resolution, which complicates mapping of discrete classes. Class memberships, which estimate the proportion of each class for every pixel, have been suggested as an alternative. This paper compares different strategies of training data allocation for discrete and continuous land cover mapping using classification and regression tree algorithms. In addition to measures of discrete and continuous map accuracy, the correct estimation of the area is another important criterion. A subset of the 30 m national land cover dataset of 2006 (NLCD2006) of the United States was used as the reference set to classify NADIR BRDF-adjusted surface reflectance time series of MODIS at 900 m spatial resolution. Results show that sampling of heterogeneous pixels and sample allocation according to the expected area of each class is best for classification trees. Regression trees for continuous land cover mapping should be trained with random allocation, and predictions should be normalized with a linear scaling function to correctly estimate the total area. Among the tested algorithms, random forest classification yields lower errors than boosted trees of C5.0, and Cubist shows higher accuracies than random forest regression.

  19. Optimization of operation schemes in boiling water reactors using neural networks; Optimizacion de esquemas de operacion en reactores de agua en ebullicion usando redes neuronales

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz S, J. J.; Castillo M, A. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Pelta, D. A., E-mail: juanjose.ortiz@inin.gob.mx [Universidad de Granada, Escuela Superior de Ingenierias, Informatica y Telecomunicacion, C/Daniel Saucedo Aranda s/n, 18071 Granada (Spain)

    2012-10-15

    In previous works, the results of a recurrent neural network used to find the best combination of several groups of fuel cells, fuel loads and control bar patterns were presented. These solution groups for each Fuel Management problem had previously been optimized by diverse optimization techniques. The neural network chooses the partial solutions so that their combination corresponds to a good configuration of the reactor according to an objective function. The values of the variables involved in this objective function are obtained by simulating the combination of partial solutions with Simulate-3. In the present work, a multilayer neural network that learned to predict some results of Simulate-3 was used, so that it could substitute for Simulate-3 in the objective function and thereby accelerate the response time of the whole system. The preliminary results shown in this work are encouraging to continue efforts in this direction and to improve the response quality of the system. (Author)
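
    A minimal sketch of the surrogate idea, replacing an expensive simulator call inside an objective function with a trained multilayer-perceptron regressor, is shown below. It uses scikit-learn and a synthetic stand-in for the simulator inputs and outputs; nothing here reflects the actual Simulate-3 variables.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_simulator(x):
    """Synthetic stand-in for a costly core-simulator run (placeholder function)."""
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

# Train the surrogate once on previously simulated configurations.
X_train = rng.uniform(-2, 2, size=(500, 3))
y_train = expensive_simulator(X_train)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

def objective(x):
    """Objective evaluated with the surrogate instead of the full simulator."""
    return surrogate.predict(x.reshape(1, -1))[0]

candidate = np.array([0.3, -1.0, 1.5])
print("surrogate:", round(objective(candidate), 3),
      "simulator:", round(expensive_simulator(candidate.reshape(1, -1))[0], 3))
```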

  20. Description of new mitochondrial genomes (Spodoptera litura, Noctuoidea and Cnaphalocrocis medinalis, Pyraloidea) and phylogenetic reconstruction of Lepidoptera with the comment on optimization schemes.

    Science.gov (United States)

    Wan, Xinlong; Kim, Min Jee; Kim, Iksoo

    2013-11-01

    We newly sequenced mitochondrial genomes of Spodoptera litura and Cnaphalocrocis medinalis belonging to Lepidoptera to obtain further insight into mitochondrial genome evolution in this group and investigated the influence of optimal strategies on phylogenetic reconstruction of Lepidoptera. Estimation of p-distances of each mitochondrial gene for available taxonomic levels has shown the highest value in ND6, whereas the lowest values in COI and COII at the nucleotide level, suggesting different utility of each gene for different hierarchical group when individual genes are utilized for phylogenetic analysis. Phylogenetic analyses mainly yielded the relationships (((((Bombycoidea + Geometroidea) + Noctuoidea) + Pyraloidea) + Papilionoidea) + Tortricoidea), evidencing the polyphyly of Macrolepidoptera. The Noctuoidea concordantly recovered the familial relationships (((Arctiidae + Lymantriidae) + Noctuidae) + Notodontidae). The tests of optimality strategies, such as exclusion of third codon positions, inclusion of rRNA and tRNA genes, data partitioning, RY recoding approach, and recoding nucleotides into amino acids suggested that the majority of the strategies did not substantially alter phylogenetic topologies or nodal supports, except for the sister relationship between Lycaenidae and Pieridae only in the amino acid dataset, which was in contrast to the sister relationship between Lycaenidae and Nymphalidae in Papilionoidea in the remaining datasets.
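
    Since the gene-wise comparison above relies on p-distances, the short sketch below shows that computation: the proportion of differing sites between two aligned sequences, skipping gaps and ambiguous bases. The example fragments are made-up placeholders, not sequences from the new mitogenomes.

```python
def p_distance(seq1, seq2):
    """Proportion of differing sites between two aligned nucleotide sequences,
    ignoring positions with gaps or ambiguous bases."""
    assert len(seq1) == len(seq2), "sequences must be aligned to equal length"
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    if not pairs:
        return float("nan")
    diffs = sum(a != b for a, b in pairs)
    return diffs / len(pairs)

# Toy aligned fragments (placeholders, not real mitochondrial gene sequences).
print(p_distance("ATGACCCTA-GGATTC", "ATGACTTTAAGGACTC"))
```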

  1. A randomized phase 3 study on the optimization of the combination of bevacizumab with FOLFOX/OXXEL in the treatment of patients with metastatic colorectal cancer-OBELICS (Optimization of BEvacizumab scheduLIng within Chemotherapy Scheme).

    Science.gov (United States)

    Avallone, Antonio; Piccirillo, Maria Carmela; Aloj, Luigi; Nasti, Guglielmo; Delrio, Paolo; Izzo, Francesco; Di Gennaro, Elena; Tatangelo, Fabiana; Granata, Vincenza; Cavalcanti, Ernesta; Maiolino, Piera; Bianco, Francesco; Aprea, Pasquale; De Bellis, Mario; Pecori, Biagio; Rosati, Gerardo; Carlomagno, Chiara; Bertolini, Alessandro; Gallo, Ciro; Romano, Carmela; Leone, Alessandra; Caracò, Corradina; de Lutio di Castelguidone, Elisabetta; Daniele, Gennaro; Catalano, Orlando; Botti, Gerardo; Petrillo, Antonella; Romano, Giovanni M; Iaffaioli, Vincenzo R; Lastoria, Secondo; Perrone, Francesco; Budillon, Alfredo

    2016-02-08

    Despite the improvements in diagnosis and treatment, colorectal cancer (CRC) is the second cause of cancer deaths in both sexes. Therefore, research in this field remains of great interest. The approval of bevacizumab, a humanized anti-vascular endothelial growth factor (VEGF) monoclonal antibody, in combination with a fluoropyrimidine-based chemotherapy in the treatment of metastatic CRC has changed the oncology practice in this disease. However, the efficacy of bevacizumab-based treatment has thus far been rather modest. Efforts are ongoing to identify the best way to combine bevacizumab and chemotherapy, and to identify valid predictive biomarkers of benefit to avoid unnecessary and costly therapy in nonresponder patients. The BRANCH study in high-risk locally advanced rectal cancer patients showed that varying the bevacizumab schedule may affect the feasibility and efficacy of chemo-radiotherapy. OBELICS is a multicentre, open-label, randomised phase 3 trial comparing in mCRC patients two treatment arms (1:1): standard concomitant administration of bevacizumab with chemotherapy (mFOLFOX/OXXEL regimen) vs experimental sequential bevacizumab given 4 days before chemotherapy, as first- or second-line treatment. The primary endpoint is the objective response rate (ORR) measured according to RECIST criteria. A sample size of 230 patients was calculated allowing reliable assessment in all plausible first-second line case-mix conditions, with 80% statistical power and a 2-sided alpha error of 0.05. Secondary endpoints are progression-free survival (PFS), overall survival (OS), toxicity and quality of life. The evaluation of the potential predictive role of several circulating biomarkers (circulating endothelial cells and progenitors, VEGF and VEGF-R SNPs, cytokines, microRNAs, free circulating DNA) as well as the value of the early [(18)F]-Fluorodeoxyglucose positron emission tomography (FDG-PET) response, are the objectives of the translational project. Overall this
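
    The protocol's exact design assumptions are not given in the abstract, but the kind of calculation behind the quoted sample size can be sketched with the standard normal-approximation formula for comparing two proportions at two-sided α = 0.05 and 80% power. The response rates below are hypothetical illustrations, not the trial's planning assumptions.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for comparing two proportions
    (normal approximation, equal allocation, two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical objective-response rates for the control and experimental arms.
print(n_per_arm(0.45, 0.63), "patients per arm")
```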

  2. Poster abstract: A decentralized routing scheme based on a zero-sum game to optimize energy in solar powered sensor networks

    KAUST Repository

    Dehwah, Ahmad H.

    2014-04-01

    This poster is aimed at solving the problem of maximizing the energy margin of a solar-powered sensor network at a fixed time horizon, to maximize the network performance during an event to monitor. Using a game theoretic approach, the optimal routing maximizing the energy margin of the network at a given time under solar power forcing can be computed in a decentralized way and solved exactly through dynamic programming with a low overall complexity. We also show that this decentralized algorithm is simple enough to be implemented on practical sensor nodes. Such an algorithm would be very useful whenever the energy margin of a solar-powered sensor network has to be maximized at a specific time. © 2014 IEEE.
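
    The poster's game-theoretic formulation is not reproduced here; the sketch below only conveys the flavor of a dynamic-programming route choice that maximizes the bottleneck (minimum) predicted energy margin along a path to the sink. The topology and margin values are invented for illustration.

```python
from functools import lru_cache

# Hypothetical DAG: node -> list of next-hop neighbours ("S" is the sink).
next_hops = {"A": ["B", "C"], "B": ["D", "S"], "C": ["D"], "D": ["S"], "S": []}
# Hypothetical predicted energy margin of each node at the time horizon.
margin = {"A": 9.0, "B": 3.0, "C": 6.0, "D": 5.0, "S": float("inf")}

@lru_cache(maxsize=None)
def best_bottleneck(node):
    """Largest achievable minimum energy margin over any path from node to the sink."""
    if node == "S":
        return float("inf"), ("S",)
    best_val, best_path = float("-inf"), None
    for nxt in next_hops[node]:
        val, path = best_bottleneck(nxt)
        val = min(val, margin[nxt])          # bottleneck margin along this candidate route
        if val > best_val:
            best_val, best_path = val, (node,) + path
    return best_val, best_path

val, path = best_bottleneck("A")
print("route:", " -> ".join(path), " worst-case margin:", val)
```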

  3. Poster abstract: A decentralized routing scheme based on a zero-sum game to optimize energy in solar powered sensor networks

    KAUST Repository

    Dehwah, Ahmad H.; Tembine, Hamidou; Claudel, Christian G.

    2014-01-01

    This poster is aimed at solving the problem of maximizing the energy margin of a solar-powered sensor network at a fixed time horizon, to maximize the network performance during an event to monitor. Using a game theoretic approach, the optimal routing maximizing the energy margin of the network at a given time under solar power forcing can be computed in a decentralized way and solved exactly through dynamic programming with a low overall complexity. We also show that this decentralized algorithm is simple enough to be implemented on practical sensor nodes. Such an algorithm would be very useful whenever the energy margin of a solar-powered sensor network has to be maximized at a specific time. © 2014 IEEE.

  4. A Genetic Algorithm Based Optimization Scheme To Find The Best Set Of Design Parameters To Enhance The Performance Of An Automobile Radiator

    Directory of Open Access Journals (Sweden)

    G.Chaitanya

    2013-12-01

    Full Text Available The present work aims at maximizing the overall heat transfer rate of an automobile radiator using a Genetic Algorithm approach. The design specifications and empirical data pertaining to a rally car radiator obtained from the literature are considered in the present work. The mathematical function describing the objective for the problem is formulated using the radiator core design equations and heat transfer relations governing the radiator. The overall heat transfer rate obtained from the present optimization technique is found to be 9.48 percent higher than the empirical value reported in the literature. Also, the enhancement in the overall heat transfer rate is achieved with a marginal reduction in the radiator dimensions, indicating a better spacing ratio than the existing design.
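
    A minimal sketch of the genetic-algorithm mechanics (selection, crossover, mutation over bounded design variables) is given below. The objective function is an explicit placeholder; the paper's actual objective is built from the radiator core design equations, which are not reproduced here.

```python
import random

random.seed(1)
# Bounds for three illustrative design variables (e.g., fin pitch, tube rows, core depth).
bounds = [(1.0, 3.0), (2.0, 6.0), (10.0, 40.0)]

def objective(x):
    """Placeholder surrogate for the heat transfer rate; a real study would use the
    radiator core design equations instead."""
    fin, rows, depth = x
    return rows * depth / (1.0 + 0.05 * depth) - 2.0 * (fin - 1.8) ** 2

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in bounds]

def mutate(x, rate=0.2):
    return [min(hi, max(lo, xi + random.gauss(0, 0.1 * (hi - lo)))) if random.random() < rate else xi
            for xi, (lo, hi) in zip(x, bounds)]

def crossover(a, b):
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

pop = [random_individual() for _ in range(40)]
for generation in range(60):
    pop.sort(key=objective, reverse=True)
    parents = pop[:10]                                   # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(30)]

best = max(pop, key=objective)
print("best design:", [round(v, 2) for v in best], "objective:", round(objective(best), 2))
```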

  5. Capacity-achieving CPM schemes

    OpenAIRE

    Perotti, Alberto; Tarable, Alberto; Benedetto, Sergio; Montorsi, Guido

    2008-01-01

    The pragmatic approach to coded continuous-phase modulation (CPM) is proposed as a capacity-achieving low-complexity alternative to the serially-concatenated CPM (SC-CPM) coding scheme. In this paper, we first perform a selection of the best spectrally-efficient CPM modulations to be embedded into SC-CPM schemes. Then, we consider the pragmatic capacity (a.k.a. BICM capacity) of CPM modulations and optimize it through a careful design of the mapping between input bits and CPM waveforms. The s...

  6. Optimizing sample pretreatment for compound-specific stable carbon isotopic analysis of amino sugars in marine sediment

    Science.gov (United States)

    Zhu, R.; Lin, Y.-S.; Lipp, J. S.; Meador, T. B.; Hinrichs, K.-U.

    2014-09-01

    Amino sugars are quantitatively significant constituents of soil and marine sediment, but their sources and turnover in environmental samples remain poorly understood. The stable carbon isotopic composition of amino sugars can provide information on the lifestyles of their source organisms and can be monitored during incubations with labeled substrates to estimate the turnover rates of microbial populations. However, until now, such investigation has been carried out only with soil samples, partly because of the much lower abundance of amino sugars in marine environments. We therefore optimized a procedure for compound-specific isotopic analysis of amino sugars in marine sediment, employing gas chromatography-isotope ratio mass spectrometry. The whole procedure consisted of hydrolysis, neutralization, enrichment, and derivatization of amino sugars. Except for the derivatization step, the protocol introduced negligible isotopic fractionation, and the minimum requirement of amino sugar for isotopic analysis was 20 ng, i.e., equivalent to ~8 ng of amino sugar carbon. Compound-specific stable carbon isotopic analysis of amino sugars obtained from marine sediment extracts indicated that glucosamine and galactosamine were mainly derived from organic detritus, whereas muramic acid showed isotopic imprints from indigenous bacterial activities. The δ13C analysis of amino sugars provides a valuable addition to the biomarker-based characterization of microbial metabolism in the deep marine biosphere, which so far has been lipid oriented and biased towards the detection of archaeal signals.

  7. Optimization of pressurized liquid extraction (PLE) of dioxin-furans and dioxin-like PCBs from environmental samples.

    Science.gov (United States)

    Antunes, Pedro; Viana, Paula; Vinhas, Tereza; Capelo, J L; Rivera, J; Gaspar, Elvira M S M

    2008-05-30

    Pressurized liquid extraction (PLE), applying three extraction cycles, temperature and pressure, improved the efficiency of solvent extraction when compared with classical Soxhlet extraction. Polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs) and dioxin-like PCBs (coplanar polychlorinated biphenyls, Co-PCBs) in two Certified Reference Materials [DX-1 (sediment) and BCR 529 (soil)] and in two contaminated environmental samples (sediment and soil) were extracted by ASE and Soxhlet methods. Unlike data previously reported by other authors, results demonstrated that ASE using n-hexane as solvent and three extraction cycles, 12.4 MPa (1800 psi) and 150 °C achieves recovery results similar to those of classical Soxhlet extraction for PCDFs and Co-PCBs, and better recovery results for PCDDs. ASE extraction, performed in less time and with less solvent, proved to be, under optimized conditions, an excellent extraction technique for the simultaneous analysis of PCDD/PCDFs and Co-PCBs from environmental samples. Such a fast analytical methodology, having the best cost-efficiency ratio, will improve control and provide more information about the occurrence of dioxins and their toxicity levels, and will thereby contribute to the protection of human health.

  8. Optimization of loop-mediated isothermal amplification (LAMP) assays for the detection of Leishmania DNA in human blood samples.

    Science.gov (United States)

    Abbasi, Ibrahim; Kirstein, Oscar D; Hailu, Asrat; Warburg, Alon

    2016-10-01

    Visceral leishmaniasis (VL), one of the most important neglected tropical diseases, is caused by Leishmania donovani, a eukaryotic protozoan parasite of the genus Leishmania; the disease is prevalent mainly in the Indian sub-continent, East Africa and Brazil. VL can be diagnosed by PCR amplifying ITS1 and/or kDNA genes. The current study involved the optimization of loop-mediated isothermal amplification (LAMP) for the detection of Leishmania DNA in human blood or tissue samples. Three LAMP systems were developed; in two of those the primers were designed based on regions of the ITS1 gene shared among different Leishmania species, while the primers for the third LAMP system were derived from a newly identified repeated region in the Leishmania genome. The LAMP tests were shown to be sufficiently sensitive to detect 0.1 pg of DNA from most Leishmania species. The green nucleic acid stain SYTO 16 was used here for the first time to allow real-time monitoring of LAMP amplification. The advantage of real-time LAMP using SYTO 16 over end-point LAMP product detection is discussed. The efficacy of the real-time LAMP tests for detecting Leishmania DNA in dried blood samples from volunteers living in endemic areas was compared with that of qRT-kDNA PCR. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Performance of two liquids scintillation and optimization of a Wallac 1411 counter in the tritium quantification in aqueous samples

    International Nuclear Information System (INIS)

    Contreras de la Cruz, E. de J.; Lopez del Rio, H.; Davila R, J. I.; Mireles G, F.; Pinedo V, J. L.

    2014-10-01

    The optimization of a Wallac 1411 liquid scintillation counter is presented, as well as the performance of the water-miscible liquid scintillation cocktails OptiPhase Hi Safe 3 and Last Gold Ab, for tritium quantification in aqueous samples. The effects of luminescence, quenching, solution pH and the pulse amplitude comparator (Pac) level on the response of both scintillation cocktails in the tritium measurement were evaluated. Quenching and luminescence modify the scintillator response: the former decreases the counting efficiency and increases the minimum detectable activity, while the latter interferes with tritium quantification in the window of interest, although the effect disappears after keeping the samples in darkness for 4 hours. The maximum counting efficiency was 24% for OptiPhase Hi Safe 3 and 31% for Last Gold Ab, decreasing with quenching to values of 8 and 11%, respectively. For a counting time of 6 hours and low quenching, the minimum detectable concentration was 13.4 ± 0.2 Bq/L for OptiPhase Hi Safe 3 and 9.9 ± 0.1 Bq/L for Last Gold Ab. Both scintillators responded appropriately to acidic and basic solutions, with chemiluminescence appearing only in Last Gold Ab at highly basic pH. The Pac setting, which varies between 1 and 256, has no effect on the tritium measurement until values above 90 are applied. (Author)

  10. Self-optimizing robust nonlinear model predictive control

    NARCIS (Netherlands)

    Lazar, M.; Heemels, W.P.M.H.; Jokic, A.; Thoma, M.; Allgöwer, F.; Morari, M.

    2009-01-01

    This paper presents a novel method for designing robust MPC schemes that are self-optimizing in terms of disturbance attenuation. The method employs convex control Lyapunov functions and disturbance bounds to optimize robustness of the closed-loop system on-line, at each sampling instant - a unique

  11. An Optimization Study on Listening Experiments to Improve the Comparability of Annoyance Ratings of Noise Samples from Different Experimental Sample Sets.

    Science.gov (United States)

    Di, Guoqing; Lu, Kuanguang; Shi, Xiaofan

    2018-03-08

    Annoyance ratings obtained from listening experiments are widely used in studies on health effect of environmental noise. In listening experiments, participants usually give the annoyance rating of each noise sample according to its relative annoyance degree among all samples in the experimental sample set if there are no reference sound samples, which leads to poor comparability between experimental results obtained from different experimental sample sets. To solve this problem, this study proposed to add several pink noise samples with certain loudness levels into experimental sample sets as reference sound samples. On this basis, the standard curve between logarithmic mean annoyance and loudness level of pink noise was used to calibrate the experimental results and the calibration procedures were described in detail. Furthermore, as a case study, six different types of noise sample sets were selected to conduct listening experiments using this method to examine the applicability of it. Results showed that the differences in the annoyance ratings of each identical noise sample from different experimental sample sets were markedly decreased after calibration. The determination coefficient ( R ²) of linear fitting functions between psychoacoustic annoyance (PA) and mean annoyance (MA) of noise samples from different experimental sample sets increased obviously after calibration. The case study indicated that the method above is applicable to calibrating annoyance ratings obtained from different types of noise sample sets. After calibration, the comparability of annoyance ratings of noise samples from different experimental sample sets can be distinctly improved.
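
    The published standard curve is not reproduced here; the sketch below only illustrates the calibration idea: fit a linear map from the ratings that the embedded pink-noise references received in a given experiment to the annoyance values the standard curve assigns to those references, then apply the map to every rating from that sample set. All numbers are placeholders.

```python
import numpy as np

# Loudness levels of the embedded pink-noise references and the annoyance values a
# standard curve assigns to them (placeholder values, not the published curve).
ref_loudness = np.array([60.0, 75.0, 90.0])
standard_annoyance = np.array([2.0, 4.5, 7.5])

# Mean ratings the reference samples actually received in one listening experiment.
ref_ratings_in_experiment = np.array([3.1, 5.0, 7.9])

# Fit a linear calibration from this experiment's rating scale to the standard scale.
slope, intercept = np.polyfit(ref_ratings_in_experiment, standard_annoyance, 1)

def calibrate(rating):
    """Map a raw annoyance rating from this sample set onto the common standard scale."""
    return slope * rating + intercept

raw_ratings = np.array([2.4, 4.2, 6.8, 8.5])     # ratings of ordinary noise samples
print(np.round(calibrate(raw_ratings), 2))
```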

  12. Optimization of PMAxx pretreatment to distinguish between human norovirus with intact and altered capsids in shellfish and sewage samples.

    Science.gov (United States)

    Randazzo, Walter; Khezri, Mohammad; Ollivier, Joanna; Le Guyader, Françoise S; Rodríguez-Díaz, Jesús; Aznar, Rosa; Sánchez, Gloria

    2018-02-02

    Shellfish contamination by human noroviruses (HuNoVs) is a serious health and economic problem. Recently an ISO procedure based on RT-qPCR for the quantitative detection of HuNoVs in shellfish has been issued, but these procedures cannot discriminate between inactivated and potentially infectious viruses. The aim of the present study was to optimize a pretreatment using PMAxx to better discriminate between intact and heat-treated HuNoVs in shellfish and sewage. To this end, the optimal conditions (30 min incubation with 100 μM of PMAxx and 0.5% of Triton, and double photoactivation) were applied to mussels, oysters and cockles artificially inoculated with thermally-inactivated (99 °C for 5 min) HuNoV GI and GII. This pretreatment reduced the signal of thermally-inactivated HuNoV GI in cockles and HuNoV GII in mussels by >3 log. Additionally, this pretreatment reduced the signal of thermally-inactivated HuNoV GI and GII between 1 and 1.5 log in oysters. Thermal inactivation of HuNoV GI and GII in PBS, sewage and bioaccumulated oysters was also evaluated by the PMAxx-Triton pretreatment. Results showed significant differences between reductions observed in the control and PMAxx-treated samples in PBS following treatment at 72 and 95 °C for 15 min. In sewage, the RT-qPCR signal of HuNoV GI was completely removed by the PMAxx pretreatment after heating at 72 and 95 °C, while the RT-qPCR signal for HuNoV GII was completely eliminated only at 95 °C. Finally, the PMAxx-Triton pretreatme