WorldWideScience

Sample records for optimal sampling schemes

  1. Sampling scheme optimization from hyperspectral data

    NARCIS (Netherlands)

    Debba, P.

    2006-01-01

    This thesis presents statistical sampling scheme optimization for geo-environmental purposes on the basis of hyperspectral data. It integrates derived products of the hyperspectral remote sensing data into individual sampling schemes. Five different issues are dealt with. First, the optimized

  3. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  4. Field sampling scheme optimization using simulated annealing

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-10-01

    Full Text Available to derive optimal sampling schemes. 2. Hyperspectral remote sensing: In the study of electro-magnetic physics, when energy in the form of light interacts with a material, part of the energy at certain wavelengths is absorbed, transmitted, or emitted...

  5. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available In this presentation, the author discusses a statistical method for deriving optimal spatial sampling schemes, focusing first on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting...

  6. Optimal sampling schemes for vegetation and geological field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2012-07-01

    Full Text Available The presentation, made to the Wits Statistics Department, covered common classification methods used in the field of remote sensing and the use of remote sensing to design optimal sampling schemes for field visits, with applications in vegetation...

  7. An Optimization-Based Sampling Scheme for Phylogenetic Trees

    Science.gov (United States)

    Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell

    Much modern work in phylogenetics depends on statistical sampling approaches to phylogeny construction to estimate probability distributions of possible trees for any given input data set. Our theoretical understanding of sampling approaches to phylogenetics remains far less developed than that for optimization approaches, however, particularly with regard to the number of sampling steps needed to produce accurate samples of tree partition functions. Despite the many advantages in principle of being able to sample trees from sophisticated probabilistic models, we have little theoretical basis for concluding that the prevailing sampling approaches do in fact yield accurate samples from those models within realistic numbers of steps. We propose a novel approach to phylogenetic sampling intended to be both efficient in practice and more amenable to theoretical analysis than the prevailing methods. The method depends on replacing the standard tree rearrangement moves with an alternative Markov model in which one solves a theoretically hard but practically tractable optimization problem on each step of sampling. The resulting method can be applied to a broad range of standard probability models, yielding practical algorithms for efficient sampling and rigorous proofs of accurate sampling for some important special cases. We demonstrate the efficiency and versatility of the method in an analysis of uncertainty in tree inference over varying input sizes. In addition to providing a new practical method for phylogenetic sampling, the technique is likely to prove applicable to many similar problems involving sampling over combinatorial objects weighted by a likelihood model.

  8. An Optimal Dimensionality Sampling Scheme on the Sphere for Antipodal Signals In Diffusion Magnetic Resonance Imaging

    CERN Document Server

    Bates, Alice P; Kennedy, Rodney A

    2015-01-01

    We propose a sampling scheme on the sphere and develop a corresponding spherical harmonic transform (SHT) for the accurate reconstruction of the diffusion signal in diffusion magnetic resonance imaging (dMRI). By exploiting the antipodal symmetry, we design a sampling scheme that requires the optimal number of samples on the sphere, equal to the degrees of freedom required to represent the antipodally symmetric band-limited diffusion signal in the spectral (spherical harmonic) domain. Compared with existing sampling schemes on the sphere that allow for the accurate reconstruction of the diffusion signal, the proposed sampling scheme reduces the number of samples required by a factor of two or more. We analyse the numerical accuracy of the proposed SHT and show through experiments that the proposed sampling allows for the accurate and rotationally invariant computation of the SHT to near machine precision accuracy.
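
    The factor-of-two saving quoted above follows from counting degrees of freedom: antipodal symmetry removes every odd-degree spherical-harmonic coefficient. A minimal Python count, assuming the convention that a band-limit L includes degrees 0 through L (the paper's convention may differ):

```python
def full_dof(L):
    """Number of spherical-harmonic coefficients up to degree L."""
    return sum(2 * l + 1 for l in range(L + 1))          # equals (L + 1) ** 2

def antipodal_dof(L):
    """Coefficients surviving antipodal symmetry: only even degrees contribute."""
    return sum(2 * l + 1 for l in range(0, L + 1, 2))

# e.g. L = 8: 45 antipodal coefficients versus 81 in the full expansion,
# so an optimal-dimensionality scheme needs roughly half as many samples
print(antipodal_dof(8), full_dof(8))
```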

  9. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for physical and chemical property sets and investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimate variance at these locations. The addition of optimal samples for specific regions increased the accuracy by up to 2 % for chemical and 1 % for physical properties. The use of a sample grid and medium-scale variogram, as prior information for the conception of additional sampling schemes, was very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.
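
    A hedged sketch of the spatial simulated annealing loop described above is given below. The objective here is a simple spatial-coverage proxy, not the kriging variance minimized in the study; a kriging-variance routine would be substituted at the marked point, and the grid and point counts are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(points, grid):
    # Placeholder criterion: mean distance from each grid node to its nearest sample.
    # The study minimizes mean kriging variance instead; substitute that routine here.
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2)
    return d.min(axis=1).mean()

def spatial_simulated_annealing(points, grid, n_iter=5000, t0=1.0, cool=0.999, step=25.0):
    cur = best = objective(points, grid)
    t = t0
    for _ in range(n_iter):
        cand = points.copy()
        i = rng.integers(len(cand))
        cand[i] += rng.normal(0.0, step, size=2)        # perturb one candidate sample location
        val = objective(cand, grid)
        if val < cur or rng.random() < np.exp((cur - val) / t):
            points, cur = cand, val                     # accept: always if better, sometimes if worse
            best = min(best, cur)
        t *= cool                                       # geometric cooling schedule
    return points, best

# toy square area discretized on a 50 m grid, with 10 additional points to place
grid = np.stack(np.meshgrid(np.arange(0, 650, 50.0), np.arange(0, 650, 50.0)), -1).reshape(-1, 2)
extra = rng.uniform(0, 650, size=(10, 2))
extra_opt, score = spatial_simulated_annealing(extra, grid)
```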

  10. Sampling scheme optimization for diffuse optical tomography based on data and image space rankings

    Science.gov (United States)

    Sabir, Sohail; Kim, Changhwan; Cho, Sanghoon; Heo, Duchang; Kim, Kee Hyun; Ye, Jong Chul; Cho, Seungryong

    2016-10-01

    We present a methodology for the optimization of sampling schemes in diffuse optical tomography (DOT). The proposed method exploits singular value decomposition (SVD) of the sensitivity matrix, or weight matrix, in DOT. Two mathematical metrics are introduced to assess and determine the optimum source-detector measurement configuration in terms of data correlation and image space resolution. The key idea of the work is to weight each data measurement, or rows in the sensitivity matrix, and similarly to weight each unknown image basis, or columns in the sensitivity matrix, according to their contribution to the rank of the sensitivity matrix, respectively. The proposed metrics offer a perspective on the data sampling and provide an efficient way of optimizing the sampling schemes in DOT. We evaluated various acquisition geometries often used in DOT by use of the proposed metrics. By iteratively selecting an optimal sparse set of data measurements, we showed that one can design a DOT scanning protocol that provides essentially the same image quality at a much reduced sampling.
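
    The ranking idea above, weighting each measurement and each image basis by its contribution to the numerically significant part of the sensitivity matrix, can be sketched with a plain SVD. The scoring below is one plausible choice for illustration; the exact metrics defined in the paper may differ.

```python
import numpy as np

def svd_scores(J):
    """Score rows (measurements) and columns (image bases) of a sensitivity matrix J."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    w = s / s.sum()                  # weight each singular direction by its relative strength
    row_scores = (U ** 2) @ w        # how much each measurement feeds the strong directions
    col_scores = (Vt.T ** 2) @ w     # how well each image basis is covered by them
    return row_scores, col_scores

def select_measurements(J, m):
    """Greedy choice of an informative sparse subset of m measurements."""
    rows, _ = svd_scores(J)
    return np.argsort(rows)[::-1][:m]

# usage: J is the forward-model sensitivity matrix (n_measurements x n_voxels)
J = np.random.default_rng(0).normal(size=(120, 400))
keep = select_measurements(J, 40)
```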

  11. Improved nonparametric estimation of the optimal diagnostic cut-off point associated with the Youden index under different sampling schemes.

    Science.gov (United States)

    Yin, Jingjing; Samawi, Hani; Linder, Daniel

    2016-07-01

    A diagnostic cut-off point of a biomarker measurement is needed for classifying a random subject to be either diseased or healthy. However, the cut-off point is usually unknown and needs to be estimated by some optimization criteria. One important criterion is the Youden index, which has been widely adopted in practice. The Youden index, which is defined as the maximum of (sensitivity + specificity - 1), directly measures the largest total diagnostic accuracy a biomarker can achieve. Therefore, it is desirable to estimate the optimal cut-off point associated with the Youden index. Sometimes, taking the actual measurements of a biomarker is very difficult and expensive, while ranking them without the actual measurement can be relatively easy. In such cases, ranked set sampling can give more precise estimation than simple random sampling, as ranked set samples are more likely to span the full range of the population. In this study, kernel density estimation is utilized to numerically solve for an estimate of the optimal cut-off point. The asymptotic distributions of the kernel estimators based on two sampling schemes are derived analytically and we prove that the estimators based on ranked set sampling are relatively more efficient than those based on simple random sampling and both estimators are asymptotically unbiased. Furthermore, the asymptotic confidence intervals are derived. Intensive simulations are carried out to compare the proposed method using ranked set sampling with simple random sampling, with the proposed method outperforming simple random sampling in all cases. A real data set is analyzed for illustrating the proposed method.
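
    A minimal sketch of the kernel-density route to the Youden-index cut-off for the simple-random-sampling case; it assumes larger marker values indicate disease, and it does not reproduce the ranked-set-sampling estimator or the asymptotic theory derived in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def youden_cutoff(healthy, diseased, grid_size=512):
    """Cut-off c maximizing J(c) = sensitivity(c) + specificity(c) - 1 via KDE."""
    f0, f1 = gaussian_kde(healthy), gaussian_kde(diseased)
    lo = min(healthy.min(), diseased.min())
    hi = max(healthy.max(), diseased.max())
    grid = np.linspace(lo, hi, grid_size)
    sens = np.array([f1.integrate_box_1d(c, np.inf) for c in grid])   # P(marker > c | diseased)
    spec = np.array([f0.integrate_box_1d(-np.inf, c) for c in grid])  # P(marker <= c | healthy)
    J = sens + spec - 1.0
    best = np.argmax(J)
    return grid[best], J[best]

rng = np.random.default_rng(0)
cutoff, J_max = youden_cutoff(rng.normal(0, 1, 200), rng.normal(1.5, 1, 200))
```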

  12. Geochemical sampling scheme optimization on mine wastes based on hyperspectral data

    CSIR Research Space (South Africa)

    Zhao, T

    2008-07-01

    Full Text Available Simulated annealing uses the Weighted Means Shortest Distance (WMSD) criterion between sampling points. The scaled weight function intensively samples areas where an abundance of weathering mine waste occurs. A threshold is defined to constrain the sampling points...

  13. A systematic random sampling scheme optimized to detect the proportion of rare synapses in the neuropil.

    Science.gov (United States)

    da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C

    2009-05-30

    Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.

  14. Optimal staggered-grid finite-difference schemes by combining Taylor-series expansion and sampling approximation for wave equation modeling

    Science.gov (United States)

    Yan, Hongyong; Yang, Lei; Li, Xiang-Yang

    2016-12-01

    High-order staggered-grid finite-difference (SFD) schemes have been universally used to improve the accuracy of wave equation modeling. However, the high-order SFD coefficients on spatial derivatives are usually determined by the Taylor-series expansion (TE) method, which just leads to great accuracy at small wavenumbers for wave equation modeling. Some conventional optimization methods can achieve high accuracy at large wavenumbers, but they hardly guarantee the small numerical dispersion error at small wavenumbers. In this paper, we develop new optimal explicit SFD (ESFD) and implicit SFD (ISFD) schemes for wave equation modeling. We first derive the optimal ESFD and ISFD coefficients for the first-order spatial derivatives by applying the combination of the TE and the sampling approximation to the dispersion relation, and then analyze their numerical accuracy. Finally, we perform elastic wave modeling with the ESFD and ISFD schemes based on the TE method and the optimal method, respectively. When the appropriate number and interval for the sampling points are chosen, these optimal schemes have extremely high accuracy at small wavenumbers, and can also guarantee small numerical dispersion error at large wavenumbers. Numerical accuracy analyses and modeling results demonstrate the optimal ESFD and ISFD schemes can efficiently suppress the numerical dispersion and significantly improve the modeling accuracy compared to the TE-based ESFD and ISFD schemes.
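
    For reference, the 2M-point staggered-grid operator whose coefficients are being determined has the standard form below; Taylor-series coefficients match the wavenumber relation only as k Δx → 0, whereas the optimized coefficients of the paper fit it at sampled wavenumbers across a band.

```latex
\frac{\partial u}{\partial x}\Big|_{x_0}
  \approx \frac{1}{\Delta x}\sum_{m=1}^{M} c_m
  \left[ u\!\left(x_0+\tfrac{(2m-1)\Delta x}{2}\right)
       - u\!\left(x_0-\tfrac{(2m-1)\Delta x}{2}\right) \right],
\qquad
k\,\Delta x \;\approx\; 2\sum_{m=1}^{M} c_m \sin\!\left(\tfrac{(2m-1)\,k\,\Delta x}{2}\right).
```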

  15. Role of over-sampled data in superresolution processing and a progressive up-sampling scheme for optimized implementations of iterative restoration algorithms

    Science.gov (United States)

    Sundareshan, Malur K.; Zegers, Pablo

    1999-07-01

    Super-resolution algorithms are often needed to enhance the resolution of diffraction-limited imagery acquired from certain sensors, particularly those operating in the millimeter-wave range. While several powerful iterative procedures for image superresolution are currently being developed, some practical implementation considerations become important in order to reduce the computational complexity and improve the convergence rate in deploying these algorithms in applications where real-time performance is of critical importance. Issues of particular interest are representation of the acquired imagery data on appropriate sample grids and the availability of oversampled data prior to super-resolution processing. Sampling at the Nyquist rate corresponds to an optimal spacing of detector elements or a scan rate that provides the largest dwell time (for scan-type focal plane imaging arrays), thus ensuring an increased SNR in the acquired image. However, super-resolution processing of this data could produce aliasing of the spectral components, leading not only to inaccurate estimates of the frequencies beyond the sensor cutoff frequency but also corruption of the passband itself, in turn resulting in a restored image that is poorer than the original. Obtaining sampled image data at a rate higher than the Nyquist rate can be accomplished either during data collection by modifying the acquisition hardware or as a post-acquisition signal processing step. If the ultimate goal in obtaining the oversampled image is to perform super-resolution, however, upsampling operations implemented as part of the overall signal processing software can offer several important benefits compared to acquiring oversampled data by hardware methods (such as by increasing number of detector elements in the sensor array or by microscanning). In this paper, we shall give a mathematical characterization of the process of image representation on a sample grid and establish the role of
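
    One common software up-sampling step of the kind discussed above is sinc interpolation by zero-padding the image spectrum. This is a generic technique, not necessarily the scheme advocated in the paper; the sketch assumes an approximately band-limited image and an integer up-sampling factor.

```python
import numpy as np

def upsample_fft(img, factor):
    """Up-sample a 2-D image by zero-padding its centred spectrum (sinc interpolation)."""
    ny, nx = img.shape
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    NY, NX = ny * factor, nx * factor
    padded = np.zeros((NY, NX), dtype=complex)
    y0, x0 = (NY - ny) // 2, (NX - nx) // 2
    padded[y0:y0 + ny, x0:x0 + nx] = spectrum           # keep the original passband, zeros outside
    return np.real(np.fft.ifft2(np.fft.ifftshift(padded))) * factor ** 2

img = np.random.default_rng(0).normal(size=(64, 64))
img4 = upsample_fft(img, 4)                             # 256 x 256 interpolated image
```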

  16. Optimal probabilistic dense coding schemes

    Science.gov (United States)

    Kögler, Roger A.; Neves, Leonardo

    2017-04-01

    Dense coding with non-maximally entangled states has been investigated in many different scenarios. We revisit this problem for protocols adopting the standard encoding scheme. In this case, the set of possible classical messages cannot be perfectly distinguished due to the non-orthogonality of the quantum states carrying them. So far, the decoding process has been approached in two ways: (i) The message is always inferred, but with an associated (minimum) error; (ii) the message is inferred without error, but only sometimes; in case of failure, nothing else is done. Here, we generalize on these approaches and propose novel optimal probabilistic decoding schemes. The first uses quantum-state separation to increase the distinguishability of the messages with an optimal success probability. This scheme is shown to include (i) and (ii) as special cases and continuously interpolate between them, which enables the decoder to trade-off between the level of confidence desired to identify the received messages and the success probability for doing so. The second scheme, called multistage decoding, applies only for qudits (d-level quantum systems with d > 2) and consists of further attempts in the state identification process in case of failure in the first one. We show that this scheme is advantageous over (ii) as it increases the mutual information between the sender and receiver.

  17. On Optimal Designs of Some Censoring Schemes

    Directory of Open Access Journals (Sweden)

    Dr. Adnan Mohammad Awad

    2016-03-01

    Full Text Available The main objective of this paper is to explore the suitability of some entropy-information measures for introducing a new optimality censoring criterion and to apply it to some censoring schemes from some underlying life-time models. In addition, the paper investigates four related issues, namely: the effect of the parameter of the parent distribution on the optimal scheme, the equivalence of schemes based on Shannon and Awad sup-entropy measures, the conjecture that the optimal scheme is a one-stage scheme, and a conjecture by Cramer and Bagh (2011) about Shannon minimum and maximum schemes when the parent distribution is reflected power. Guidelines for designing an optimal censoring plan are reported together with theoretical and numerical results and illustrations.

  18. Optimal design of funded pension schemes

    NARCIS (Netherlands)

    Bovenberg, A.L.; Mehlkopf, R.J.

    2014-01-01

    This article reviews the literature on the optimal design and regulation of funded pension schemes. We first characterize optimal saving and investment over an individual’s life cycle. Within a stylized modeling framework, we explore optimal individual saving and investing behavior. Subsequently, va

  19. Optimizing Decision Tree Attack on CAS Scheme

    Directory of Open Access Journals (Sweden)

    PERKOVIC, T.

    2016-05-01

    Full Text Available In this paper we show a successful side-channel timing attack on a well-known high-complexity cognitive authentication (CAS scheme. We exploit the weakness of CAS scheme that comes from the asymmetry of the virtual interface and graphical layout which results in nonuniform human behavior during the login procedure, leading to detectable variations in user's response times. We optimized a well-known probabilistic decision tree attack on CAS scheme by introducing this timing information into the attack. We show that the developed classifier could be used to significantly reduce the number of login sessions required to break the CAS scheme.

  20. Optimal Sales Schemes for Network Goods

    DEFF Research Database (Denmark)

    Parakhonyak, Alexei; Vikander, Nick

    This paper examines the optimal sequencing of sales in the presence of network externalities. A firm sells a good to a group of consumers whose payoff from buying is increasing in total quantity sold. The firm selects the order to serve consumers so as to maximize expected sales. It can serve all consumers simultaneously, serve them all sequentially, or employ any intermediate scheme. We show that the optimal sales scheme is purely sequential, where each consumer observes all previous sales before choosing whether to buy himself. A sequential scheme maximizes the amount of information available...

  1. Rapid Parameterization Schemes for Aircraft Shape Optimization

    Science.gov (United States)

    Li, Wu

    2012-01-01

    A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.

  2. Accelerated failure time model under general biased sampling scheme.

    Science.gov (United States)

    Kim, Jane Paik; Sit, Tony; Ying, Zhiliang

    2016-07-01

    Right-censored time-to-event data are sometimes observed from a (sub)cohort of patients whose survival times can be subject to outcome-dependent sampling schemes. In this paper, we propose a unified estimation method for semiparametric accelerated failure time models under general biased estimating schemes. The proposed estimator of the regression covariates is developed upon a bias-offsetting weighting scheme and is proved to be consistent and asymptotically normally distributed. Large sample properties for the estimator are also derived. Using rank-based monotone estimating functions for the regression parameters, we find that the estimating equations can be easily solved via convex optimization. The methods are confirmed through simulations and illustrated by application to real datasets on various sampling schemes including length-bias sampling, the case-cohort design and its variants. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Optimal Sales Schemes for Network Goods

    DEFF Research Database (Denmark)

    Parakhonyak, Alexei; Vikander, Nick

    This paper examines the optimal sequencing of sales in the presence of network externalities. A firm sells a good to a group of consumers whose payoff from buying is increasing in total quantity sold. The firm selects the order to serve consumers so as to maximize expected sales. It can serve all consumers simultaneously, serve them all sequentially, or employ any intermediate scheme. We show that the optimal sales scheme is purely sequential, where each consumer observes all previous sales before choosing whether to buy himself. A sequential scheme maximizes the amount of information available to consumers, allowing success to breed success. Failure can also breed failure, but this is made less likely by consumers’ desire to influence one another’s behavior. We show that when consumers differ in the weight they place on the network externality, the firm would like to serve consumers with lower...

  4. Optimization of Train Trip Package Operation Scheme

    Directory of Open Access Journals (Sweden)

    Lu Tong

    2015-01-01

    Full Text Available Train trip package transportation is an advanced form of railway freight transportation, realized by a specialized train which has fixed stations, fixed time, and fixed path. Train trip package transportation has many advantages, such as large volume, long distance, high speed, simple forms of organization, and high margins, so it has become the main mode of railway freight transportation. This paper first analyzes the related factors of train trip package transportation from its organizational forms and characteristics. Then an optimization model for train trip package transportation is established to provide optimum operation schemes. The proposed model is solved by a genetic algorithm. Finally, the paper tests the model using data from 8 regions. The results show that the proposed method is feasible for solving operation scheme issues of train trip packages.

  5. Hybrid optimization schemes for quantum control

    Energy Technology Data Exchange (ETDEWEB)

    Goerz, Michael H.; Koch, Christiane P. [Universitaet Kassel, Theoretische Physik, Kassel (Germany); Whaley, K. Birgitta [University of California, Department of Chemistry, Berkeley, CA (United States)

    2015-12-15

    Optimal control theory is a powerful tool for solving control problems in quantum mechanics, ranging from the control of chemical reactions to the implementation of gates in a quantum computer. Gradient-based optimization methods are able to find high fidelity controls, but require considerable numerical effort and often yield highly complex solutions. We propose here to employ a two-stage optimization scheme to significantly speed up convergence and achieve simpler controls. The control is initially parametrized using only a few free parameters, such that optimization in this pruned search space can be performed with a simplex method. The result, considered now simply as an arbitrary function on a time grid, is the starting point for further optimization with a gradient-based method that can quickly converge to high fidelities. We illustrate the success of this hybrid technique by optimizing a geometric phase gate for two superconducting transmon qubits coupled with a shared transmission line resonator, showing that a combination of Nelder-Mead simplex and Krotov's method yields considerably better results than either one of the two methods alone. (orig.)
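
    A toy sketch of the two-stage idea follows: a Nelder-Mead simplex search over a handful of pulse parameters, whose result seeds a gradient-based refinement of the full time-discretized control. The curve-matching objective below stands in for the gate-fidelity functional optimized in the paper.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 1.0, 200)
target = np.exp(-((t - 0.5) / 0.1) ** 2) * np.sin(30 * t)   # stand-in for the desired response

def cost(u):
    return np.mean((u - target) ** 2)                       # stand-in for 1 - fidelity

def pulse(p):
    a, t0, w, f = p                                         # few-parameter pulse ansatz
    return a * np.exp(-((t - t0) / w) ** 2) * np.sin(f * t)

# Stage 1: simplex search in the pruned parameter space
stage1 = minimize(lambda p: cost(pulse(p)), x0=[0.5, 0.4, 0.2, 20.0], method="Nelder-Mead")

# Stage 2: gradient-based refinement of the full control on the time grid, seeded by stage 1
stage2 = minimize(cost, x0=pulse(stage1.x), method="L-BFGS-B")
print(stage1.fun, stage2.fun)   # the refinement stage typically pushes the error much lower
```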

  6. Efficient Scheme for Optimizing Quantum Fourier Circuits

    Institute of Scientific and Technical Information of China (English)

    JIANG Min; ZHANG Zengke; Tzyh-Jong Tarn

    2008-01-01

    In quantum circuits, importing of additional qubits can reduce the operation time and prevent decoherence induced by the environment. However, excessive qubits may make the quantum system vulnerable. This paper describes how to relax existing qubits without additional qubits to significantly reduce the operation time of the quantum Fourier circuit compared to a circuit without optimization. The results indicate that this scheme makes full use of the qubits relaxation. The concepts can be applied to improve similar quantum circuits and guide the physical implementations of quantum algorithms or devices.

  7. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    Science.gov (United States)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problem. The first one is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second scheme is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third scheme is an optimized compact finite difference scheme modified by us: the 4th order Runge-Kutta time stepping, the 4th order pentadiagonal compact spatial discretization with the maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (Optimized Compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the Optimized Compact scheme and the UNO3-ACM show good resolutions for category 1 and category 2, respectively.

  8. Measurability Aspects of the Compactness Theorem for Sample Compression Schemes

    OpenAIRE

    2012-01-01

    It was proved in 1998 by Ben-David and Litman that a concept space has a sample compression scheme of size d if and only if every finite subspace has a sample compression scheme of size d. In the compactness theorem, measurability of the hypotheses of the created sample compression scheme is not guaranteed; at the same time measurability of the hypotheses is a necessary condition for learnability. In this thesis we discuss when a sample compression scheme, created from compression schemes o...

  9. Optimal coding schemes for conflict-free channel access

    Science.gov (United States)

    Browning, Douglas W.; Thomas, John B.

    1989-10-01

    A method is proposed for conflict-free access of a broadcast channel. The method uses a variable-length coding scheme to determine which user gains access to the channel. For an idle channel, an equation for optimal expected overhead is derived and a coding scheme that produces optimal codes is presented. Algorithms for generating optimal codes for access on a busy channel are discussed. Suboptimal schemes are found that perform in a nearly optimal fashion. The method is shown to be superior in performance to previously developed conflict-free channel access schemes.

  10. Prospective and retrospective spatial sampling scheme to characterize geochemicals in a mine tailings area

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-07-01

    Full Text Available This guest presentation at Nelson Mandela Metropolitan University presents a prospective and retrospective optimal spatial sampling scheme using the spatial distribution of secondary iron-bearing oxides/hydroxides to characterize a mine tailings area....

  11. A simple and optimal ancestry labeling scheme for trees

    DEFF Research Database (Denmark)

    Dahlgaard, Søren; Knudsen, Mathias Bæk Tejs; Rotbart, Noy Galil

    2015-01-01

    of papers. The last, due to Fraigniaud and Korman [STOC '10], presented an asymptotically optimal lg n + 4 lg lg n + O(1) labeling scheme using non-trivial tree-decomposition techniques. By providing a framework generalizing interval-based labeling schemes, we obtain a simple, yet asymptotically optimal...

  12. General and Optimal Scheme for Local Conversion of Pure States

    Institute of Scientific and Technical Information of China (English)

    JIN Rui-Bo; CHEN Li-Bing; WANG Fa-Qiang; LU Yi-Qun

    2008-01-01

    We present general and optimal schemes for local conversion of pure states, via one specific example. First, we give the general solution of the doubly stochastic matrix. Then, we find the general and optimal positive-operator-valued measure (POVM) to realize the local conversion of pure states. Lastly, the physical realization of the POVM is discussed. We show that our scheme has a more general and better effect than other schemes.

  13. Optimal sampling of paid content

    OpenAIRE

    Halbheer, Daniel; Stahl, Florian; Koenigsberg, Oded; Lehmann, Donald R

    2011-01-01

    This paper analyzes optimal sampling and pricing of paid content for publishers of news websites. Publishers offer free content samples both to disclose journalistic quality to consumers and to generate online advertising revenues. We examine sampling where the publisher sets the number of free sample articles and consumers select the articles of their choice. Consumers learn from the free samples in a Bayesian fashion and base their subscription decisions on posterior quality expectations. We...

  14. An Optimal Labeling Scheme for Ancestry Queries

    OpenAIRE

    2009-01-01

    An ancestry labeling scheme assigns labels (bit strings) to the nodes of rooted trees such that ancestry queries between any two nodes in a tree can be answered merely by looking at their corresponding labels. The quality of an ancestry labeling scheme is measured by its label size, that is the maximal number of bits in a label of a tree node. In addition to its theoretical appeal, the design of efficient ancestry labeling schemes is motivated by applications in web search engines. For this p...

  15. Optimal sampling schemes applied in geology

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-05-01

    Full Text Available In this presentation, we discuss the definition of hyperspectral versus multispectral, review some recent applications of hyperspectral image analysis, and summarize image-processing techniques commonly applied to hyperspectral imagery. Spectral Image Basics: To understand...

  16. An Optimal Labeling Scheme for Ancestry Queries

    CERN Document Server

    Fraigniaud, Pierre

    2009-01-01

    An ancestry labeling scheme assigns labels (bit strings) to the nodes of rooted trees such that ancestry queries between any two nodes in a tree can be answered merely by looking at their corresponding labels. The quality of an ancestry labeling scheme is measured by its label size, that is the maximal number of bits in a label of a tree node. In addition to its theoretical appeal, the design of efficient ancestry labeling schemes is motivated by applications in web search engines. For this purpose, even small improvements in the label size are important. In fact, the literature about this topic is interested in the exact label size rather than just its order of magnitude. As a result, following the proposal of a simple interval-based ancestry scheme with label size $2\log_2 n$ bits (Kannan et al., STOC '88), a considerable amount of work was devoted to improve the bound on the size of a label. The current state of the art upper bound is $\log_2 n + O(\sqrt{\log n})$ bits (Abiteboul et al., SODA '02) which is...
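
    For context, the simple interval-based scheme cited above (Kannan et al.) can be sketched in a few lines: label each node with its DFS entry time and the bound of its subtree's entry times, about 2 log2 n bits in total, and answer ancestry with an interval test. This is the baseline the improved labeling schemes compete with.

```python
def interval_labels(children, root):
    """DFS interval labels: u is an ancestor of v (including u == v)
    iff tin[u] <= tin[v] < tout[u]."""
    tin, tout, clock = {}, {}, 0
    stack = [(root, False)]
    while stack:
        node, done = stack.pop()
        if done:
            tout[node] = clock               # all descendants were entered before this point
            continue
        tin[node] = clock
        clock += 1
        stack.append((node, True))
        for child in children.get(node, []):
            stack.append((child, False))
    return tin, tout

def is_ancestor(u, v, tin, tout):
    return tin[u] <= tin[v] < tout[u]

children = {"r": ["a", "b"], "a": ["c", "d"]}
tin, tout = interval_labels(children, "r")
assert is_ancestor("r", "d", tin, tout) and not is_ancestor("b", "d", tin, tout)
```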

  17. β-NMR sample optimization

    CERN Document Server

    Zakoucka, Eva

    2013-01-01

    During my summer student programme I was working on sample optimization for a new β-NMR project at the ISOLDE facility. The β-NMR technique is well-established in solid-state physics and just recently it is being introduced for applications in biochemistry and life sciences. The β-NMR collaboration will be applying for beam time to the INTC committee in September for three nuclei: Cu, Zn and Mg. Sample optimization for Mg was already performed last year during the summer student programme. Therefore sample optimization for Cu and Zn had to be completed as well for the project proposal. My part in the project was to perform thorough literature research on techniques studying Cu and Zn complexes in native conditions, search for relevant binding candidates for Cu and Zn applicable for β-NMR and eventually evaluate selected binding candidates using UV-VIS spectrometry.

  18. Quantum Yield Characterization and Excitation Scheme Optimization of Upconverting Nanoparticles

    DEFF Research Database (Denmark)

    Liu, Haichun; Xu, Can T.; Jensen, Ole Bjarlin

    2014-01-01

    Upconverting nanoparticles suffer from low quantum yield in diffuse optical imaging, especially at low excitation intensities. Here, the power density dependent quantum yield is characterized, and the excitation scheme is optimized based on such characterization...

  20. Optimal on/off scheme for all-optical switching

    DEFF Research Database (Denmark)

    Kristensen, Philip Trøst; Heuck, Mikkel; Mørk, Jesper

    2012-01-01

    We present a two-pulsed on/off scheme based on coherent control for fast switching of the optical energy in a micro cavity and use calculus of variations to optimize the switching in terms of energy.

  1. Variance optimal sampling based estimation of subset sums

    CERN Document Server

    Cohen, Edith; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel

    2008-01-01

    From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present a reservoir sampling scheme providing variance optimal estimation of subset sums. More precisely, if we have seen $n$ items of the stream, then for any subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line sampling scheme tailored for the concrete set of items seen: no off-line scheme based on $k$ samples can perform better than our on-line scheme when it comes to average variance over any subset size. Our scheme has no positive covariances between any pair of item estimates. Also, our scheme can handle each new item of the stream in $O(\log k)$ time, which is optimal even on the word RAM.
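
    The VarOpt scheme itself relies on a careful weight-splitting step that is not reproduced here. As a simpler relative that also yields unbiased subset-sum estimates from a bounded reservoir, here is a hedged sketch of priority sampling; a subset's total weight is estimated by summing the returned per-item estimates over the subset's keys, with unsampled keys contributing zero.

```python
import heapq
import random

def priority_sample(stream, k):
    """Priority sampling: keep at most k weighted items for subset-sum estimation."""
    heap = []                                  # min-heap of (priority, key, weight), size <= k + 1
    for key, w in stream:
        q = w / random.random()                # random priority; heavy items tend to rank high
        heapq.heappush(heap, (q, key, w))
        if len(heap) > k + 1:
            heapq.heappop(heap)                # drop the smallest priority seen so far
    if len(heap) <= k:
        return {key: float(w) for _, key, w in heap}   # few items: weights are kept exactly
    tau = heap[0][0]                           # threshold = (k+1)-th largest priority
    kept = heapq.nlargest(k, heap)
    return {key: max(w, tau) for _, key, w in kept}    # unbiased per-item estimates

stream = [(f"item{i}", 1.0 + (i % 7)) for i in range(10_000)]
estimates = priority_sample(stream, k=64)
approx_total = sum(estimates.values())          # estimate of the whole stream's weight
```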

  2. Optimized multilocus sequence typing (MLST) scheme for Trypanosoma cruzi.

    Directory of Open Access Journals (Sweden)

    Patricio Diosque

    2014-08-01

    Full Text Available Trypanosoma cruzi, the aetiological agent of Chagas disease, possesses extensive genetic diversity. This has led to the development of a plethora of molecular typing methods for the identification of both the known major genetic lineages and for more fine scale characterization of different multilocus genotypes within these major lineages. Whole genome sequencing applied to large sample sizes is not currently viable and multilocus enzyme electrophoresis, the previous gold standard for T. cruzi typing, is laborious and time consuming. In the present work, we present an optimized Multilocus Sequence Typing (MLST) scheme, based on the combined analysis of two recently proposed MLST approaches. Here, thirteen concatenated gene fragments were applied to a panel of T. cruzi reference strains encompassing all known genetic lineages. Concatenation of 13 fragments allowed assignment of all strains to the predicted Discrete Typing Units (DTUs), or near-clades, with the exception of one strain that was an outlier for TcV, due to apparent loss of heterozygosity in one fragment. Monophyly for all DTUs, along with robust bootstrap support, was restored when this fragment was subsequently excluded from the analysis. All possible combinations of loci were assessed against predefined criteria with the objective of selecting the most appropriate combination of between two and twelve fragments, for an optimized MLST scheme. The optimum combination consisted of 7 loci and discriminated between all reference strains in the panel, with the majority supported by robust bootstrap values. Additionally, a reduced panel of just 4 gene fragments displayed high bootstrap values for DTU assignment and discriminated 21 out of 25 genotypes. We propose that the seven-fragment MLST scheme could be used as a gold standard for T. cruzi typing, against which other typing approaches, particularly single locus approaches or systematic PCR assays based on amplicon size, could be compared.

  3. Optimized multilocus sequence typing (MLST) scheme for Trypanosoma cruzi.

    Science.gov (United States)

    Diosque, Patricio; Tomasini, Nicolás; Lauthier, Juan José; Messenger, Louisa Alexandra; Monje Rumi, María Mercedes; Ragone, Paula Gabriela; Alberti-D'Amato, Anahí Maitén; Pérez Brandán, Cecilia; Barnabé, Christian; Tibayrenc, Michel; Lewis, Michael David; Llewellyn, Martin Stephen; Miles, Michael Alexander; Yeo, Matthew

    2014-08-01

    Trypanosoma cruzi, the aetiological agent of Chagas disease, possesses extensive genetic diversity. This has led to the development of a plethora of molecular typing methods for the identification of both the known major genetic lineages and for more fine scale characterization of different multilocus genotypes within these major lineages. Whole genome sequencing applied to large sample sizes is not currently viable and multilocus enzyme electrophoresis, the previous gold standard for T. cruzi typing, is laborious and time consuming. In the present work, we present an optimized Multilocus Sequence Typing (MLST) scheme, based on the combined analysis of two recently proposed MLST approaches. Here, thirteen concatenated gene fragments were applied to a panel of T. cruzi reference strains encompassing all known genetic lineages. Concatenation of 13 fragments allowed assignment of all strains to the predicted Discrete Typing Units (DTUs), or near-clades, with the exception of one strain that was an outlier for TcV, due to apparent loss of heterozygosity in one fragment. Monophyly for all DTUs, along with robust bootstrap support, was restored when this fragment was subsequently excluded from the analysis. All possible combinations of loci were assessed against predefined criteria with the objective of selecting the most appropriate combination of between two and twelve fragments, for an optimized MLST scheme. The optimum combination consisted of 7 loci and discriminated between all reference strains in the panel, with the majority supported by robust bootstrap values. Additionally, a reduced panel of just 4 gene fragments displayed high bootstrap values for DTU assignment and discriminated 21 out of 25 genotypes. We propose that the seven-fragment MLST scheme could be used as a gold standard for T. cruzi typing, against which other typing approaches, particularly single locus approaches or systematic PCR assays based on amplicon size, could be compared.

  4. A new topology optimization scheme for nonlinear structures

    Energy Technology Data Exchange (ETDEWEB)

    Eim, Young Sup; Han, Seog Young [Hanyang University, Seoul (Korea, Republic of)

    2014-07-15

    A new topology optimization algorithm based on artificial bee colony algorithm (ABCA) was developed and applied to geometrically nonlinear structures. A finite element method and the Newton-Raphson technique were adopted for the nonlinear topology optimization. The distribution of material is expressed by the density of each element and a filter scheme was implemented to prevent a checkerboard pattern in the optimized layouts. In the application of ABCA for long structures or structures with small volume constraints, optimized topologies may be obtained differently for the same problem at each trial. The calculation speed is also very slow since topology optimization based on the roulette-wheel method requires many finite element analyses. To improve the calculation speed and stability of ABCA, a rank-based method was used. By optimizing several examples, it was verified that the developed topology scheme based on ABCA is very effective and applicable in geometrically nonlinear topology optimization problems.

  5. Using Sloane Rulers for Optimal Recovery Schemes in Distributed Computing

    Directory of Open Access Journals (Sweden)

    R. Delhi Babu

    2009-12-01

    Full Text Available Clusters and distributed systems offer fault tolerance and high performance through load sharing, and are thus attractive in real-time applications. When all computers are up and running, we would like the load to be evenly distributed among the computers. When one or more computers fail, the load must be redistributed. The redistribution is determined by the recovery scheme. The recovery scheme should keep the load as evenly distributed as possible even when the most unfavorable combinations of computers break down, i.e., we want to optimize the worst-case behavior. In this paper we compare the worst-case behavior of schemes such as Modulo ruler, Golomb ruler, Greedy sequence and Log sequence with worst-case behavior of Sloane sequence. Finally we observe that Sloane scheme performs better than all the other schemes. Keywords: Fault tolerance; High performance computing; Cluster technique; Recovery schemes; Sloane sequence;

  6. Optimized difference schemes for multidimensional hyperbolic partial differential equations

    Directory of Open Access Journals (Sweden)

    Adrian Sescu

    2009-04-01

    Full Text Available In numerical solutions to hyperbolic partial differential equations in multidimensions, in addition to dispersion and dissipation errors, there is a grid-related error (referred to as isotropy error or numerical anisotropy that affects the directional dependence of the wave propagation. Difference schemes are mostly analyzed and optimized in one dimension, wherein the anisotropy correction may not be effective enough. In this work, optimized multidimensional difference schemes with arbitrary order of accuracy are designed to have improved isotropy compared to conventional schemes. The derivation is performed based on Taylor series expansion and Fourier analysis. The schemes are restricted to equally-spaced Cartesian grids, so the generalized curvilinear transformation method and Cartesian grid methods are good candidates.

  7. Optimal Cooperative Relaying Schemes for Improving Wireless Physical Layer Security

    CERN Document Server

    Li, Jiangyuan; Weber, Steven

    2010-01-01

    We consider a cooperative wireless network in the presence of one or more eavesdroppers, and exploit node cooperation for achieving physical (PHY) layer based security. Two different cooperation schemes are considered. In the first scheme, cooperating nodes retransmit a weighted version of the source signal in a decode-and-forward (DF) fashion. In the second scheme, while the source is transmitting, cooperating nodes transmit weighted noise to confound the eavesdropper (cooperative jamming (CJ)). We investigate two objectives, i.e., maximization of achievable secrecy rate subject to a total power constraint, and minimization of total transmit power under a secrecy rate constraint. For the first design objective with a single eavesdropper we obtain expressions for optimal weights under the DF protocol in closed form, and give an algorithm that converges to the optimal solution for the CJ scheme; while for multiple eavesdroppers we give an algorithm for the solution using the DF protocol that is guarantee...

  8. Multiobjective hyper heuristic scheme for system design and optimization

    Science.gov (United States)

    Rafique, Amer Farhan

    2012-11-01

    As system design is becoming more and more multifaceted, integrated, and complex, the traditional single objective optimization trends of optimal design are becoming less and less efficient and effective. Single objective optimization methods present a unique optimal solution whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set independent of the problem instance through a multiobjective scheme. Another objective of the intended approach is to improve the worthiness of outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to provide the system designer with the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics to augment the surety of a global optimum solution. Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds diversity to the population, resulting in accomplishment of the pre-defined goals set in the proposed scheme.

  9. STUDY ON SAMPLING INSPECTION SCHEME TO DIGITAL PRODUCTS IN GIS

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Adopting a principle of “check-accept for the first rank, inspection for the second rank”, this paper briefly discusses the rationale of sampling inspection and the sampling inspection schemes for digital products in GIS. The OC curve is drawn to explain the deficiency of percent sampling inspection. Meanwhile, the method of One Time Limiting Quality of count selection is presented as the inspection scheme for production departments, while the method of One Time After-inspection Mean Percent Defective Upper Limit of count selection is for acceptance departments.
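
    The OC (operating characteristic) curve mentioned above plots the probability of accepting a lot against its true defect rate for a given single sampling plan (inspect n items, accept if at most c are defective). A minimal sketch with illustrative plan values, not values taken from the paper:

```python
from math import comb

def acceptance_probability(n, c, p):
    """P(accept lot) when n items are inspected, at most c defectives are allowed,
    and the lot's true defect rate is p (binomial approximation)."""
    return sum(comb(n, d) * p ** d * (1 - p) ** (n - d) for d in range(c + 1))

# one point of the OC curve per defect rate, for a plan with n = 50 and c = 1
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"defect rate {p:.2f}: P(accept) = {acceptance_probability(50, 1, p):.3f}")
```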

  10. Optimization of stratification scheme for a fishery-independent survey with multiple objectives

    Institute of Scientific and Technical Information of China (English)

    XU Binduo; REN Yiping; CHEN Yong; XUE Ying; ZHANG Chongliang; WAN Rong

    2015-01-01

    Fishery-independent surveys are often used for collecting high quality biological and ecological data to support fisheries management. A careful optimization of fishery-independent survey design is necessary to improve the precision of survey estimates with cost-effective sampling efforts. We developed a simulation approach to evaluate and optimize the stratification scheme for a fishery-independent survey with multiple goals including estimation of abundance indices of individual species and species diversity indices. We compared the performances of the sampling designs with different stratification schemes for different goals over different months. Gains in precision of survey estimates from the stratification schemes were acquired compared to simple random sampling design for most indices. The stratification scheme with five strata performed the best. This study showed that the loss of precision of survey estimates due to the reduction of sampling efforts could be compensated by improved stratification schemes, which would reduce the cost and negative impacts of survey trawling on those species with low abundance in the fishery-independent survey. This study also suggests that optimization of a survey design differed with different survey objectives. A post-survey analysis can improve the stratification scheme of fishery-independent survey designs.
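
    The gain from a good stratification can be illustrated with a small simulation of the same flavour as the study's, using a synthetic population rather than the survey data: the variance of the stratified estimator of the mean is compared against simple random sampling at the same total effort.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic population with 5 strata of different mean abundance (stand-in for survey strata)
strata = [rng.normal(mu, 1.0, size=2000) for mu in (1.0, 3.0, 5.0, 7.0, 9.0)]
population = np.concatenate(strata)

def srs_mean(n):
    return rng.choice(population, size=n, replace=False).mean()

def stratified_mean(n):
    n_h = n // len(strata)                       # proportional allocation (equal-size strata here)
    return np.mean([rng.choice(s, size=n_h, replace=False).mean() for s in strata])

reps, n = 2000, 100
var_srs = np.var([srs_mean(n) for _ in range(reps)])
var_strat = np.var([stratified_mean(n) for _ in range(reps)])
print(var_srs, var_strat)                        # stratification typically cuts the variance sharply
```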

  11. Comparison of kriging interpolation precision between grid sampling scheme and simple random sampling scheme for precision agriculture

    Directory of Open Access Journals (Sweden)

    Jiang Houlong

    2016-01-01

    Full Text Available Sampling methods are important factors that can potentially limit the accuracy of predictions of spatial distribution patterns. A 10 ha tobacco-planted field was selected to compare the accuracy in predicting the spatial distribution of soil properties by using ordinary kriging and cross validation methods between grid sampling and a simple random sampling scheme (SRS). To achieve this objective, we collected soil samples from the topsoil (0-20 cm) in March 2012. Sample numbers of grid sampling and SRS were both 115 points each. Accuracies of spatial interpolation using the two sampling schemes were then evaluated based on validation samples (36 points) and deviations of the estimates. The results suggested that soil pH and nitrate-N (NO3-N) had low variation, whereas all other soil properties exhibited medium variation. Soil pH, organic matter (OM), total nitrogen (TN), cation exchange capacity (CEC), total phosphorus (TP) and available phosphorus (AP) matched the spherical model, whereas the remaining variables fit an exponential model with both sampling methods. The interpolation error of soil pH, TP, and AP was the lowest in SRS. The errors of interpolation for OM, CEC, TN, available potassium (AK) and total potassium (TK) were the lowest for grid sampling. The interpolation precisions of the soil NO3-N showed no significant differences between the two sampling schemes. Considering our data on interpolation precision and the importance of minerals for cultivation of flue-cured tobacco, the grid-sampling scheme should be used in tobacco-planted fields to determine the spatial distribution of soil properties. The grid-sampling method can be applied in a practical and cost-effective manner to facilitate soil sampling in tobacco-planted fields.

  12. Numerical Comparison of Optimal Charging Schemes for Electric Vehicles

    DEFF Research Database (Denmark)

    You, Shi; Hu, Junjie; Pedersen, Anders Bro

    2012-01-01

    The optimal charging schemes for Electric vehicles (EV) generally differ from each other in the choice of charging periods and the possibility of performing vehicle-to-grid (V2G), and have different impacts on EV economics. Regarding these variations, this paper presents a numerical comparison...

  13. Optimal Censoring Scheme Selection Based on Artificial Bee Colony Optimization (ABC) Algorithm

    Directory of Open Access Journals (Sweden)

    K. Kalaivani

    2015-07-01

    Full Text Available Life testing plans are vital for carrying out research on reliability and survival analysis. The inadequacy in the number of testing units or timing limitations prevents the experiment from being continued until all the failures are detected. Hence, censoring grows to be an inherently important and well-organized methodology for estimating the model parameters of underlying distributions. Type I and II censoring schemes are the most widely employed censoring schemes. The chief practical problem associated with the design of life testing experiments is the determination of the optimum censoring scheme. Hence, this study attempts to determine the optimum censoring scheme through the minimization of the total cost spent for the experiment, consuming less termination time and a reasonable number of failures. The ABC algorithm is employed in this study for obtaining the optimal censoring schemes. Entropy and variance serve as the optimality criteria. The proposed method utilizes risk analysis to evaluate the efficiency or reliability of the optimal censoring scheme that is determined. Optimum censoring scheme selection indicates the process of determining the best scheme from among all possible censoring schemes, in accordance with a specific optimality criterion.

  14. Optimal Numerical Schemes for Time Accurate Compressible Large Eddy Simulations: Comparison of Artificial Dissipation and Filtering Schemes

    Science.gov (United States)

    2014-11-01

  15. An optimal probabilistic multiple-access scheme for cognitive radios

    KAUST Repository

    Hamza, Doha R.

    2012-09-01

    We study a time-slotted multiple-access system with a primary user (PU) and a secondary user (SU) sharing the same channel resource. The SU senses the channel at the beginning of the slot. If found free, it transmits with probability 1. If busy, it transmits with a certain access probability that is a function of its queue length and whether it has a new packet arrival. Both users, i.e., the PU and the SU, transmit with a fixed transmission rate by employing a truncated channel inversion power control scheme. We consider the case of erroneous sensing. The goal of the SU is to optimize its transmission scheduling policy to minimize its queueing delay under constraints on its average transmit power and the maximum tolerable primary outage probability caused by the miss detection of the PU. We consider two schemes regarding the secondary's reaction to transmission errors. Under the so-called delay-sensitive (DS) scheme, the packet received in error is removed from the queue to minimize delay, whereas under the delay-tolerant (DT) scheme, the said packet is kept in the buffer and is retransmitted until correct reception. Using the latter scheme, there is a probability of buffer loss that is also constrained to be lower than a certain specified value. We also consider the case when the PU maintains an infinite buffer to store its packets. In the latter case, we modify the SU access scheme to guarantee the stability of the PU queue. We show that the performance significantly changes if the realistic situation of a primary queue is considered. In all cases, although the delay minimization problem is nonconvex, we show that the access policies can be efficiently obtained using linear programming and grid search over one or two parameters. © 1967-2012 IEEE.

  16. Planning Framework for Mesolevel Optimization of Urban Runoff Control Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Qianqian; Blohm, Andrew J.; Liu, Bo

    2017-04-01

    A planning framework is developed to optimize runoff control schemes at scales relevant for regional planning at an early stage. The framework employs less sophisticated modeling approaches to allow a practical application in developing regions with limited data sources and computing capability. The methodology contains three interrelated modules: (1) the geographic information system (GIS)-based hydrological module, which aims at assessing local hydrological constraints and potential for runoff control according to regional land-use descriptions; (2) the grading module, which is built upon the method of fuzzy comprehensive evaluation. It is used to establish a priority ranking system to assist the allocation of runoff control targets at the subdivision level; and (3) the genetic algorithm-based optimization module, which is included to derive Pareto-based optimal solutions for mesolevel allocation with multiple competing objectives. The optimization approach describes the trade-off between different allocation plans and simultaneously ensures that all allocation schemes satisfy the minimum requirement on runoff control. Our results highlight the importance of considering the mesolevel allocation strategy in addition to measures at macrolevels and microlevels in urban runoff management. (C) 2016 American Society of Civil Engineers.

  17. Simultaneous optimization of dose distributions and fractionation schemes in particle radiotherapy

    NARCIS (Netherlands)

    Unkelbach, J.; Zeng, C.; Engelsman, M.

    2013-01-01

    Purpose: The paper considers the fractionation problem in intensity modulated proton therapy (IMPT). Conventionally, IMPT fields are optimized independently of the fractionation scheme. In this work, we discuss the simultaneous optimization of fractionation scheme and pencil beam intensities. Meth

  18. Optimised Spatial Sampling Scheme for Soil Electrical Conductivity Based on Variance Quad-Tree (VQT) Method

    Institute of Scientific and Technical Information of China (English)

    LI Yan; SHI Zhou; WU Ci-fang; LI Feng; LI Hong-yi

    2007-01-01

    The acquisition of precise soil data representative of the entire survey area is a critical issue for many treatments, such as irrigation or fertilization, in precision agriculture. The aim of this study was to investigate the spatial variability of soil bulk electrical conductivity (ECb) in a coastal saline field and to design an optimized spatial sampling scheme for ECb based on a sampling design algorithm, the variance quad-tree (VQT) method. Soil ECb data were collected from the field at 20 m intervals in a regular grid scheme. A smooth contour map of the whole field was obtained by ordinary kriging interpolation; the VQT algorithm was then used to split the smooth contour map into the desired number of strata, and sampling locations can be selected within each stratum in subsequent sampling. The results indicated that the probability of choosing representative sampling sites was increased significantly by using the VQT method, with the number of samples being greatly reduced compared to the grid sampling design while retaining the same prediction accuracy. The advantage of the VQT method is that it samples sparsely in areas where the spatial variability is relatively uniform and more intensively where the variability is large. Thus the sampling efficiency can be improved, facilitating an assessment methodology that can be applied in a rapid, practical and cost-effective manner.
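
    The VQT idea, repeatedly quartering whichever cell of the interpolated map has the largest internal variance until the desired number of strata is reached, can be sketched in a few lines. The code below is a simplified illustration on a synthetic surface; the splitting rule, grid size and choice of one sampling site per stratum are assumptions rather than the published algorithm's exact details.

    ```python
    import numpy as np

    def variance_quadtree(z, n_strata):
        """Split a gridded surface z into roughly n_strata rectangular strata by
        repeatedly quartering the cell with the largest internal variance."""
        cells = [(0, z.shape[0], 0, z.shape[1])]                 # (r0, r1, c0, c1)
        cell_var = lambda c: float(np.var(z[c[0]:c[1], c[2]:c[3]]))
        while len(cells) < n_strata:
            i = int(np.argmax([cell_var(c) for c in cells]))     # most variable cell
            r0, r1, c0, c1 = cells.pop(i)
            rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
            quads = [(r0, rm, c0, cm), (r0, rm, cm, c1),
                     (rm, r1, c0, cm), (rm, r1, cm, c1)]
            cells += [q for q in quads if q[1] > q[0] and q[3] > q[2]]
        return cells

    # Synthetic ECb-like surface: a smooth trend plus a local patch of high variability.
    y, x = np.mgrid[0:64, 0:64]
    z = np.sin(x / 9.0) + 0.8 * np.exp(-((x - 45) ** 2 + (y - 20) ** 2) / 120.0)

    strata = variance_quadtree(z, 16)
    sites = [((r0 + r1) // 2, (c0 + c1) // 2) for r0, r1, c0, c1 in strata]  # one site per stratum
    print(len(strata), "strata ->", len(sites), "sampling sites")
    ```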

  19. Optimal Numerical Schemes for Compressible Large Eddy Simulations

    Science.gov (United States)

    Edoh, Ayaboe; Karagozian, Ann; Sankaran, Venkateswaran; Merkle, Charles

    2014-11-01

    The design of optimal numerical schemes for subgrid scale (SGS) models in LES of reactive flows remains an area of continuing challenge. It has been shown that significant differences in solution can arise due to the choice of the SGS model's numerical scheme and its inherent dissipation properties, which can be exacerbated in combustion computations. This presentation considers the individual roles of artificial dissipation, filtering, secondary conservation (Kinetic Energy Preservation), and collocated versus staggered grid arrangements with respect to the dissipation and dispersion characteristics and their overall impact on the robustness and accuracy for time-dependent simulations of relevance to reacting and non-reacting LES. We utilize von Neumann stability analysis in order to quantify these effects and to determine the relative strengths and weaknesses of the different approaches. Distribution A: Approved for public release, distribution unlimited. Supported by AFOSR (PM: Dr. F. Fahroo).

  20. Optimal design of sampling and mapping schemes in the radiometric exploration of Chipilapa, El Salvador (Geo-statistics); Diseno optimo de esquemas de muestreo y mapeo en la exploracion radiometrica de Chipilapa, El Salvador (Geo-estadistica)

    Energy Technology Data Exchange (ETDEWEB)

    Balcazar G, M.; Flores R, J.H

    1992-01-15

    As part of the radiometric surface exploration carried out in the geothermal field of Chipilapa, El Salvador, the geo-statistical parameters were derived from the variogram calculated from the field data. The maximum correlation distance of the 'radon' samples in the different observation directions (N-S, E-W, NW-SE, NE-SW) was 121 m, which defines the monitoring grid for future prospecting in the same area. From this, an optimization (minimum cost) of the field sample spacing was derived by means of geo-statistical techniques, without losing the detection of the anomaly. (Author)
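
    The underlying computation is an empirical semivariogram, from which the correlation range (121 m above) is read off and used as an upper bound on the sample spacing. A minimal sketch on synthetic 'radon' readings follows; the survey coordinates, the field model and the lag binning are assumptions for illustration only.

    ```python
    import numpy as np

    def empirical_semivariogram(xy, v, lag_edges):
        """Classical (Matheron) semivariogram estimate per lag bin."""
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
        sq = 0.5 * (v[:, None] - v[None, :]) ** 2
        gamma = []
        for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
            mask = (d > lo) & (d <= hi)
            gamma.append(sq[mask].mean() if mask.any() else np.nan)
        return np.array(gamma)

    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 600, size=(200, 2))          # hypothetical survey coordinates (m)
    # Smoothly varying synthetic 'radon' field plus measurement noise.
    v = np.sin(xy[:, 0] / 80.0) + np.cos(xy[:, 1] / 80.0) + 0.2 * rng.standard_normal(200)

    lag_edges = np.arange(0.0, 320.0, 20.0)
    gamma = empirical_semivariogram(xy, v, lag_edges)
    for h, g in zip(lag_edges[1:], gamma):
        print(f"lag <= {h:5.0f} m   semivariance {g:.3f}")
    # The lag at which the semivariance levels off (the range) bounds the widest
    # sample spacing that still resolves the anomaly.
    ```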

  1. Study on a new meteorological sampling scheme developed for the OSCAAR code system

    Energy Technology Data Exchange (ETDEWEB)

    Liu Xinhe; Tomita, Kenichi; Homma, Toshimitsu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2002-03-01

    One important step in Level-3 Probabilistic Safety Assessment is meteorological sequence sampling. Previous studies mainly addressed code systems using the straight-line plume model, and more effort is needed for those using the trajectory puff model, such as the OSCAAR code system. This report describes the development of a new meteorological sampling scheme for the OSCAAR code system that explicitly considers population distribution. The principles set for the development of this new sampling scheme include completeness, appropriate stratification, optimum allocation and practicability. The report discusses the procedures of the new sampling scheme and its application. The calculation results illustrate that, although it is quite difficult to optimize the stratification of meteorological sequences based on a few environmental parameters, the new scheme does gather the most adverse conditions in a single subset of meteorological sequences. The size of this subset may be as small as a few dozen sequences, so that the tail of the complementary cumulative distribution function can remain relatively stable across different trials of the probabilistic consequence assessment code. (author)
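
    The "appropriate stratification, optimum allocation" principles correspond to standard stratified-sampling formulas. The sketch below shows Neyman (optimum) allocation of a fixed number of sampled meteorological sequences across strata; the strata sizes and within-stratum variabilities are invented numbers, not values from the OSCAAR study.

    ```python
    import numpy as np

    def neyman_allocation(stratum_sizes, stratum_stds, n_total):
        """Optimum (Neyman) allocation: sample size per stratum proportional to N_h * S_h."""
        weights = np.asarray(stratum_sizes, float) * np.asarray(stratum_stds, float)
        n = np.floor(n_total * weights / weights.sum()).astype(int)
        n[np.argmax(weights)] += n_total - n.sum()        # assign the rounding remainder
        return n

    # Hypothetical strata of one year of hourly meteorological sequences (e.g. grouped
    # by stability class and rain occurrence), with the within-stratum variability of
    # a consequence surrogate; all numbers are invented for illustration.
    sizes = [5200, 2300, 900, 360]      # sequences per stratum
    stds  = [0.2,  0.5,  1.1, 3.0]      # within-stratum standard deviation (arbitrary units)

    print(neyman_allocation(sizes, stds, n_total=144))
    # The small but highly variable stratum (the adverse tail) receives far more
    # samples than its share of the year, which is the behaviour described above.
    ```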

  2. Noise and nonlinear estimation with optimal schemes in DTI.

    Science.gov (United States)

    Özcan, Alpay

    2010-11-01

    In general, the estimation of the diffusion properties for diffusion tensor experiments (DTI) is accomplished via least squares estimation (LSE). The technique requires applying the logarithm to the measurements, which causes poor propagation of errors. Moreover, the way noise enters the equations invalidates the least squares estimate as the best linear unbiased estimate. Nonlinear estimation (NE), despite its longer computation time, does not possess any of these problems. However, all of the conditions and optimization methods developed in the past are based on the coefficient matrix obtained in an LSE setup. In this article, NE for DTI is analyzed to demonstrate that any result obtained relatively easily in a linear algebra setup about the coefficient matrix can be applied to the more complicated NE framework. The data, obtained using non-optimal and optimized diffusion gradient schemes, are processed with NE. In comparison with LSE, the results show significant improvements, especially for the optimization criterion. However, NE does not resolve the existing conflicts and ambiguities displayed with LSE methods.
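
    The contrast between log-linearized least squares and nonlinear estimation is easiest to see on the mono-exponential signal model rather than the full tensor model. The sketch below, with an assumed noise level and b-value range, fits the same synthetic decay both ways using SciPy; it illustrates the error-propagation point only and is not the article's estimator.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(2)
    b = np.linspace(0, 3000, 16)                       # b-values (s/mm^2)
    S0_true, D_true = 1.0, 0.8e-3                      # ground-truth signal and diffusivity
    noisy = S0_true * np.exp(-b * D_true) + 0.02 * rng.standard_normal(b.size)
    noisy = np.clip(noisy, 1e-6, None)                 # keep the logarithm defined

    # Linearized LSE: fit ln(S) = ln(S0) - b*D.  Taking the log re-weights the
    # noise, which is the error-propagation problem mentioned above.
    A = np.column_stack([np.ones_like(b), -b])
    lnS0_lse, D_lse = np.linalg.lstsq(A, np.log(noisy), rcond=None)[0]

    # Nonlinear estimation: fit the exponential model directly in signal space.
    fit = least_squares(lambda p: p[0] * np.exp(-b * p[1]) - noisy, x0=[noisy[0], 1e-3])
    S0_ne, D_ne = fit.x

    print(f"true D = {D_true:.2e}   LSE D = {D_lse:.2e}   NE D = {D_ne:.2e}")
    ```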

  3. Stability of optimal-wave-front-sample coupling under sample translation and rotation

    CERN Document Server

    Anderson, Benjamin R; Eilers, Hergen

    2015-01-01

    The method of wavefront shaping to control optical properties of opaque media is a promising technique for authentication applications. One of the main challenges of this technique is the sensitivity of the wavefront-sample coupling to translation and/or rotation. To better understand how translation and rotation affect the wavefront-sample coupling we perform experiments in which we first optimize reflection from an opaque surface, to obtain an optimal wavefront, and then translate or rotate the surface and measure the new reflected intensity pattern. By using the correlation between the optimized and translated or rotated patterns we determine how sensitive the wavefront-sample coupling is. These experiments are performed for different spatial-light-modulator (SLM) bin sizes, beam-spot sizes, and nanoparticle concentrations. We find that all three parameters affect the different positional changes, implying that an optimization scheme can be used to maximize the stability of the wavefront-sample coupling. ...

  4. NOTE: Sampling and reconstruction schemes for biomagnetic sensor arrays

    Science.gov (United States)

    Naddeo, Adele; Della Penna, Stefania; Nappi, Ciro; Vardaci, Emanuele; Pizzella, Vittorio

    2002-09-01

    In this paper we generalize the approach of Ahonen et al (1993 IEEE Trans. Biomed. Eng. 40 859-69) to two-dimensional non-uniform sampling. The focus is on two main topics: (1) searching for the optimal sensor configuration on a planar measurement surface; and (2) reconstructing the magnetic field (a continuous function) from a discrete set of data points recorded with a finite number of sensors. A reconstruction formula for Bz is derived in the framework of the multidimensional Papoulis generalized sampling expansion (Papoulis A 1977 IEEE Trans. Circuits Syst. 24 652-4, Cheung K F 1993 Advanced Topics in Shannon Sampling and Interpolation Theory (New York: Springer) pp 85-119) in a particular case. Application of these considerations to the design of biomagnetic sensor arrays is also discussed.

  5. Sampling and reconstruction schemes for biomagnetic sensor arrays.

    Science.gov (United States)

    Naddeo, Adele; Della Penna, Stefania; Nappi, Ciro; Vardaci, Emanuele; Pizzella, Vittorio

    2002-09-21

    In this paper we generalize the approach of Ahonen et al (1993 IEEE Trans. Biomed. Eng. 40 859-69) to two-dimensional non-uniform sampling. The focus is on two main topics: (1) searching for the optimal sensor configuration on a planar measurement surface; and (2) reconstructing the magnetic field (a continuous function) from a discrete set of data points recorded with a finite number of sensors. A reconstruction formula for Bz is derived in the framework of the multidimensional Papoulis generalized sampling expansion (Papoulis A 1977 IEEE Trans. Circuits Syst. 24 652-4, Cheung K F 1993 Advanced Topics in Shannon Sampling and Interpolation Theory (New York: Springer) pp 85-119) in a particular case. Application of these considerations to the design of biomagnetic sensor arrays is also discussed.

  6. Axially perpendicular offset Raman scheme for reproducible measurement of housed samples in a noncircular container under variation of container orientation.

    Science.gov (United States)

    Duy, Pham K; Chang, Kyeol; Sriphong, Lawan; Chung, Hoeil

    2015-03-17

    An axially perpendicular offset (APO) scheme that is able to directly acquire reproducible Raman spectra of samples contained in an oval container under variation of container orientation has been demonstrated. This scheme utilized an axially perpendicular geometry between the laser illumination and the Raman photon detection, namely, irradiation through a sidewall of the container and gathering of the Raman photon just beneath the container. In the case of either backscattering or transmission measurements, Raman sampling volumes for an internal sample vary when the orientation of an oval container changes; therefore, the Raman intensities of acquired spectra are inconsistent. The generated Raman photons traverse the same bottom of the container in the APO scheme; the Raman sampling volumes can be relatively more consistent under the same situation. For evaluation, the backscattering, transmission, and APO schemes were simultaneously employed to measure alcohol gel samples contained in an oval polypropylene container at five different orientations and then the accuracies of the determination of the alcohol concentrations were compared. The APO scheme provided the most reproducible spectra, yielding the best accuracy when the axial offset distance was 10 mm. Monte Carlo simulations were performed to study the characteristics of photon propagation in the APO scheme and to explain the origin of the optimal offset distance that was observed. In addition, the utility of the APO scheme was further demonstrated by analyzing samples in a circular glass container.

  7. ADAPTIVE LIFTING BASED IMAGE COMPRESSION SCHEME WITH PARTICLE SWARM OPTIMIZATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Nishat kanvel

    2010-09-01

    Full Text Available This paper presents an adaptive lifting scheme with a Particle Swarm Optimization technique for image compression. The Particle Swarm Optimization technique is used to improve the accuracy of the prediction function used in the lifting scheme. This scheme is applied to image compression, and parameters such as PSNR, compression ratio and the visual quality of the image are calculated. The proposed scheme is compared with the existing methods.

  8. Diffusion spectrum MRI using body-centered-cubic and half-sphere sampling schemes.

    Science.gov (United States)

    Kuo, Li-Wei; Chiang, Wen-Yang; Yeh, Fang-Cheng; Wedeen, Van Jay; Tseng, Wen-Yih Isaac

    2013-01-15

    The optimum sequence parameters of diffusion spectrum MRI (DSI) on clinical scanners were investigated previously. However, the scan time of approximately 30 min is still too long for patient studies. Additionally, relatively large sampling interval in the diffusion-encoding space may cause aliasing artifact in the probability density function when Fourier transform is undertaken, leading to estimation error in fiber orientations. Therefore, this study proposed a non-Cartesian sampling scheme, body-centered-cubic (BCC), to avoid the aliasing artifact as compared to the conventional Cartesian grid sampling scheme (GRID). Furthermore, the accuracy of DSI with the use of half-sphere sampling schemes, i.e. GRID102 and BCC91, was investigated by comparing to their full-sphere sampling schemes, GRID203 and BCC181, respectively. In results, smaller deviation angle and lower angular dispersion were obtained by using the BCC sampling scheme. The half-sphere sampling schemes yielded angular precision and accuracy comparable to the full-sphere sampling schemes. The optimum b_max was approximately 4750 s/mm² for GRID and 4500 s/mm² for BCC. In conclusion, the BCC sampling scheme could be implemented as a useful alternative to the GRID sampling scheme. Combination of BCC and half-sphere sampling schemes, that is BCC91, may potentially reduce the scan time of DSI from 30 min to approximately 14 min while maintaining its precision and accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.
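
    The difference between the GRID and BCC schemes is simply the lattice from which q-space points inside a (half-)sphere are drawn. The sketch below generates both point sets for an assumed radius in lattice units; the radius and the qz >= 0 half-sphere convention are illustrative assumptions, not the exact GRID203/BCC181 tables.

    ```python
    import numpy as np

    def lattice_in_sphere(radius, offsets):
        """Integer lattice points (plus the given fractional offsets) inside a sphere."""
        r = int(np.ceil(radius))
        base = np.arange(-r, r + 1)
        grid = np.stack(np.meshgrid(base, base, base, indexing="ij"), -1).reshape(-1, 3)
        pts = np.concatenate([grid + np.asarray(off, float) for off in offsets])
        return pts[np.linalg.norm(pts, axis=1) <= radius]

    R = 2.5                                                                # radius in lattice units
    grid_pts = lattice_in_sphere(R, offsets=[(0, 0, 0)])                   # Cartesian GRID
    bcc_pts = lattice_in_sphere(R, offsets=[(0, 0, 0), (0.5, 0.5, 0.5)])   # BCC lattice

    # Half-sphere variants keep only qz >= 0; the other half is recovered from the
    # conjugate symmetry of the diffusion signal.
    grid_half = grid_pts[grid_pts[:, 2] >= 0]
    bcc_half = bcc_pts[bcc_pts[:, 2] >= 0]
    print(len(grid_pts), len(bcc_pts), len(grid_half), len(bcc_half))
    ```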

  9. Optimal fourth-order staggered-grid finite-difference scheme for 3D frequency-domain viscoelastic wave modeling

    Science.gov (United States)

    Li, Y.; Han, B.; Métivier, L.; Brossier, R.

    2016-09-01

    We investigate an optimal fourth-order staggered-grid finite-difference scheme for 3D frequency-domain viscoelastic wave modeling. An anti-lumped mass strategy is incorporated to minimize the numerical dispersion. The optimal finite-difference coefficients and the mass weighting coefficients are obtained by minimizing the misfit between the normalized phase velocities and the unity. An iterative damped least-squares method, the Levenberg-Marquardt algorithm, is utilized for the optimization. Dispersion analysis shows that the optimal fourth-order scheme presents less grid dispersion and anisotropy than the conventional fourth-order scheme with respect to different Poisson's ratios. Moreover, only 3.7 grid-points per minimum shear wavelength are required to keep the error of the group velocities below 1%. The memory cost is then greatly reduced due to a coarser sampling. A parallel iterative method named CARP-CG is used to solve the large ill-conditioned linear system for the frequency-domain modeling. Validations are conducted with respect to both the analytic viscoacoustic and viscoelastic solutions. Compared with the conventional fourth-order scheme, the optimal scheme generates wavefields having smaller error under the same discretization setups. Profiles of the wavefields are presented to confirm better agreement between the optimal results and the analytic solutions.
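
    The coefficient-fitting idea, choosing stencil weights so the numerical wavenumber stays close to the exact one over a target band and solving the fit with a damped least-squares (Levenberg-Marquardt) iteration, can be illustrated in 1-D for a two-coefficient staggered first-derivative stencil. The band limit and stencil below are assumptions for illustration; the paper's scheme is 3-D and also optimizes mass-weighting coefficients.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Staggered-grid first-derivative stencil with two coefficients (grid spacing h = 1):
    #   du/dx|_i ~ c1*(u[i+1/2] - u[i-1/2]) + c2*(u[i+3/2] - u[i-3/2])
    # whose effective wavenumber is k_eff(k) = 2*(c1*sin(k/2) + c2*sin(3*k/2)).
    k = np.linspace(0.01, 2.4, 200)                   # wavenumber band to optimize over

    def k_eff(c):
        return 2.0 * (c[0] * np.sin(k / 2) + c[1] * np.sin(3 * k / 2))

    taylor = np.array([9 / 8, -1 / 24])               # conventional 4th-order coefficients

    # Damped least-squares (Levenberg-Marquardt) fit of the normalized phase error,
    # in the spirit of the coefficient optimization described above.
    optimized = least_squares(lambda c: k_eff(c) / k - 1.0, x0=taylor, method="lm").x

    for name, c in (("Taylor", taylor), ("optimized", optimized)):
        err = np.max(np.abs(k_eff(c) / k - 1.0))
        print(f"{name:10s} c = {np.round(c, 6)}   max |k_eff/k - 1| = {err:.2e}")
    ```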

  10. The same number of optimized parameters scheme for determining intermolecular interaction energies

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Ettenhuber, Patrick; Eriksen, Janus Juul;

    2015-01-01

    We propose the Same Number Of Optimized Parameters (SNOOP) scheme as an alternative to the counterpoise method for treating basis set superposition errors in calculations of intermolecular interaction energies. The key point of the SNOOP scheme is to enforce that the number of optimized wave...

  11. Optimal design of a hybridization scheme with a fuel cell using genetic optimization

    Science.gov (United States)

    Rodriguez, Marco A.

    Fuel cell is one of the most dependable "green power" technologies, readily available for immediate application. It enables direct conversion of hydrogen and other gases into electric energy without any pollution of the environment. However, efficient power generation is a strictly stationary process that cannot operate under a dynamic environment. Consequently, the fuel cell becomes practical only within a specially designed hybridization scheme, capable of power storage and power management functions. The resultant technology could be utilized to its full potential only when both the fuel cell element and the entire hybridization scheme are optimally designed. Design optimization in engineering is among the most complex computational tasks due to its multidimensionality, nonlinearity, discontinuity and presence of constraints in the underlying optimization problem. This research aims at the optimal utilization of fuel cell technology through the use of genetic optimization and advanced computing. This study implements genetic optimization in the definition of optimum hybridization rules for a PEM fuel cell/supercapacitor power system. PEM fuel cells exhibit high energy density but they are not intended for pulsating power draw applications. They work better in steady state operation and thus are often hybridized. In a hybrid system, the fuel cell provides power during steady state operation while capacitors or batteries augment the power of the fuel cell during power surges. Capacitors and batteries can also be recharged when the motor is acting as a generator. Making analogies to driving cycles, three hybrid system operating modes are investigated: 'Flat' mode, 'Uphill' mode, and 'Downhill' mode. In the process of discovering the switching rules for these three modes, we also generate a model of a 30W PEM fuel cell. This study also proposes the optimum design of a 30W PEM fuel cell. The PEM fuel cell model and hybridization's switching rules are postulated

  12. The Optimizing QoS Miss Rate Scheme for the Data Centers

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this paper, we discuss quality of service requirements and the impact of different workloads of each service class on the resource proportion allocation scheme, and formalize the problem of minimizing the quality of service miss rate of service requests using queuing theory. The optimal resource-proportion allocation scheme is obtained by a Lagrangian optimization approach. Our simulation results show that our scheme is efficient.

  13. Designing an Optimal Lightning Protection Scheme for Substations Using Shielding Wires

    Directory of Open Access Journals (Sweden)

    A. Khodadadi

    2017-06-01

    Full Text Available An optimal lightning protection scheme for a substation using shielding wires is investigated in this paper through computer software analysis. An economic approach is utilized by choosing a reasonable trade-off between protection, the number of shielding wires and their heights above the ground. This study is initially applied to a simple two-wire system and then extended to a sample substation. The solution for each problem is computed in MATLAB and a 3-D realization is shown.

  14. Near-optimal labeling schemes for nearest common ancestors

    DEFF Research Database (Denmark)

    Alstrup, Stephen; Bistrup Halvorsen, Esben; Larsen, Kasper Green

    2014-01-01

    and Korman (STOC'10) established that labels in ancestor labeling schemes have size log n + Θ(log log n), our new lower bound separates ancestor and NCA labeling schemes. Our upper bound improves the 10 log n upper bound by Alstrup, Gavoille, Kaplan and Rauhe (TOCS'04), and our theoretical result even...

  16. Analysis and optimization of weighted ensemble sampling

    CERN Document Server

    Aristoff, David

    2016-01-01

    We give a mathematical framework for weighted ensemble (WE) sampling, a binning and resampling technique for efficiently computing probabilities in molecular dynamics. We prove that WE sampling is unbiased in a very general setting that includes adaptive binning. We show that when WE is used for stationary calculations in tandem with a Markov state model (MSM), the MSM can be used to optimize the allocation of replicas in the bins.
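
    The core WE step is binning walkers along a progress coordinate and resampling within each bin to a fixed replica count while conserving the total weight. The sketch below uses a simple multinomial split/merge variant on synthetic one-dimensional walkers; the bin edges, replica count and resampling rule are illustrative assumptions rather than the paper's exact construction.

    ```python
    import numpy as np

    def we_resample(positions, weights, bin_edges, replicas_per_bin, rng):
        """One weighted-ensemble resampling step: in each occupied bin, resample to
        'replicas_per_bin' walkers whose weights sum to the original bin weight."""
        new_pos, new_w = [], []
        bins = np.digitize(positions, bin_edges)
        for b in np.unique(bins):
            idx = np.where(bins == b)[0]
            w_bin = weights[idx].sum()
            picks = rng.choice(idx, size=replicas_per_bin, p=weights[idx] / w_bin)
            new_pos.extend(positions[picks])
            new_w.extend([w_bin / replicas_per_bin] * replicas_per_bin)
        return np.array(new_pos), np.array(new_w)

    rng = np.random.default_rng(3)
    pos = rng.normal(0.0, 1.0, size=200)            # walker coordinates (progress coordinate)
    w = np.full(200, 1.0 / 200)                      # initial weights
    edges = np.linspace(-3.0, 3.0, 13)               # bin boundaries

    pos, w = we_resample(pos, w, edges, replicas_per_bin=4, rng=rng)
    print(len(pos), "walkers, total weight =", round(float(w.sum()), 12))
    ```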

  17. An optimized quantum information splitting scheme with multiple controllers

    Science.gov (United States)

    Jiang, Min

    2016-12-01

    We propose an efficient scheme for splitting multi-qudit information with cooperative control of multiple agents. Each controller is assigned one controlling qudit and can monitor the state sharing of all the multi-qudit information. Compared with existing schemes, our scheme requires less resource consumption and achieves higher communication efficiency. In addition, our proposal involves only generalized Bell-state measurement, single-qudit measurement, one-qudit gates and a unitary-reduction operation, which makes it flexible and achievable for physical implementation.

  18. Optimally Efficient Multi Authority Secret Ballot e-Election Scheme

    Directory of Open Access Journals (Sweden)

    Dr. M. Padmavathamma

    2006-03-01

    Full Text Available An electronic voting scheme is a set of protocols that allow a collection of voters to cast their votes, while enabling a collection of authorities to collect the votes, compute the final tally, and communicate the final tally, which is checked by talliers. This scheme is based on the RSA and factoring assumptions. We apply the protocols of [CDS-88] to the Guillou-Quisquater identification protocol [GQ-88] to construct proofs of validity for ballots.

  19. Thompson Sampling: An Optimal Finite Time Analysis

    CERN Document Server

    Kaufmann, Emilie; Munos, Rémi

    2012-01-01

    The question of the optimality of Thompson Sampling for solving the stochastic multi-armed bandit problem had been open since 1933. In this paper we answer it positively for the case of Bernoulli rewards by providing the first finite-time analysis that matches the asymptotic rate given in the Lai and Robbins lower bound for the cumulative regret. The proof is accompanied by a numerical comparison with other optimal policies, experiments that have been lacking in the literature until now for the Bernoulli case.
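
    For the Bernoulli case analyzed above, Thompson Sampling is only a few lines: sample one success probability per arm from its Beta posterior, play the arm with the largest sample, and update. The arm means and horizon below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    true_p = np.array([0.45, 0.55, 0.60])       # unknown Bernoulli arm means (invented)
    alpha = np.ones(3)                           # Beta(1, 1) prior per arm
    beta = np.ones(3)
    total_reward = 0

    for t in range(5000):
        theta = rng.beta(alpha, beta)            # one posterior sample per arm
        arm = int(np.argmax(theta))              # play the arm that currently looks best
        reward = int(rng.random() < true_p[arm])
        alpha[arm] += reward                     # conjugate posterior update
        beta[arm] += 1 - reward
        total_reward += reward

    print("plays per arm:", (alpha + beta - 2).astype(int))
    print("empirical mean reward:", total_reward / 5000)
    ```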

  20. Subdivision, Sampling, and Initialization Strategies for Simplical Branch and Bound in Global Optimization

    DEFF Research Database (Denmark)

    Clausen, Jens; Zilinskas, A.

    2002-01-01

    We consider the problem of optimizing a Lipschitzian function. The branch and bound technique is a well-known solution method, and the key components for this are the subdivision scheme, the bound calculation scheme, and the initialization. For Lipschitzian optimization, the bound calculations are based on the sampling of function values. We propose a branch and bound algorithm based on regular simplexes. Initially, the domain in question is covered with regular simplexes, and our subdivision scheme maintains this property. The bound calculation becomes both simple and efficient, and we describe two schemes for sampling points of the function: midpoint sampling and vertex sampling. The convergence of the algorithm is proved, and numerical results are presented for the two dimensional case, for which also a special initial covering is presented. (C) 2002 Elsevier Science Ltd. All rights reserved.

  1. Evaluation and Selection of the Optimal Scheme of Industrial Structure Adjustment Based on DEA

    Institute of Scientific and Technical Information of China (English)

    FU Lifang; GE Jiaqi; MENG Jun

    2006-01-01

    In this paper, the DEA (Data Envelopment Analysis) assessment method was used to evaluate the relative efficiency of candidate schemes for agricultural industrial structure adjustment and to select the optimal scheme. Based on the results of the DEA models, we analyzed the scale benefits of each candidate scheme and probed the underlying reasons why some schemes are not DEA-efficient, which clarified the method and approach for improving these candidate schemes. Finally, a new method is proposed to rank the schemes and select the optimal one. The research is significant for guiding the practice of adjusting the agricultural industrial structure.
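
    The relative-efficiency scores behind such an analysis come from solving one small linear program per scheme (the input-oriented CCR envelopment model). The sketch below sets this up with SciPy's linprog; the input/output data for the candidate schemes are invented for illustration only.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, o):
        """Input-oriented CCR efficiency of decision-making unit o.
        X: (m inputs x n units), Y: (s outputs x n units)."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]                     # minimize theta
        A_in = np.hstack([-X[:, [o]], X])               # sum_j lambda_j * x_ij <= theta * x_io
        A_out = np.hstack([np.zeros((s, 1)), -Y])       # sum_j lambda_j * y_rj >= y_ro
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
        return float(res.x[0])

    # Hypothetical adjustment schemes: inputs = (land, labour), output = (revenue).
    X = np.array([[100.0, 120.0, 90.0, 110.0],
                  [ 30.0,  25.0, 40.0,  20.0]])
    Y = np.array([[ 80.0,  95.0, 70.0,  90.0]])

    for j in range(X.shape[1]):
        print(f"scheme {j}: CCR efficiency = {ccr_efficiency(X, Y, j):.3f}")
    ```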

  2. Control charts for location based on different sampling schemes

    NARCIS (Netherlands)

    Mehmood, R.; Riaz, M.; Does, R.J.M.M.

    2013-01-01

    Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature for the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set

  3. Optimization Route of Food Logistics Distribution Based on Genetic and Graph Cluster Scheme Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Chen

    2015-06-01

    Full Text Available This study takes the concept of food logistics distribution as its starting point. By stating the aim of optimizing food logistics distribution routes, analyzing the route optimization model, and interpreting the genetic algorithm, it discusses the optimization of food logistics distribution routes based on a genetic and cluster scheme algorithm.

  4. The scheme optimization on construction diversion with discharge control feature of upstream operational reservoir

    Institute of Scientific and Technical Information of China (English)

    Liu Quan; Hu Zhigen; Fan Wuyi; Ni Jinchu; Li Qinjun

    2012-01-01

    There is a discharge control feature of a construction diversion system with an upstream operational reservoir. A risk evaluation model of construction diversion is established that takes into consideration the risk factors of a construction diversion system with a discharge control feature as well as their composition. The risk factors include the upstream operational reservoir discharge control, the interval flood and branch flood, and the diversion system itself. Then, based on an analysis of the conversion relation between the risk index and the investment index of a diversion scheme, the risk control and conversion principles of the diversion system are put forward, and a feasible diversion scheme model is built. Finally, the risk and economic evaluation and the economic feasibility analysis method for diversion schemes are illustrated by an example of construction diversion scheme optimization under the discharge control condition of an upstream hydropower station. The study is valuable for the establishment and optimization of construction diversion schemes with upstream reservoir discharge control.

  5. Optimization schemes for the inversion of Bouguer gravity anomalies

    Science.gov (United States)

    Zamora, Azucena

    associated with structural changes [16]; therefore, it complements those geophysical methods with the same depth resolution that sample a different physical property (e.g. electromagnetic surveys sampling electric conductivity) or even those with different depth resolution sampling an alternative physical property (e.g. large scale seismic reflection surveys imaging the crust and top upper mantle using seismic velocity fields). In order to improve the resolution of Bouguer gravity anomalies, and reduce their ambiguity and uncertainty for the modeling of the shallow crust, we propose the implementation of primal-dual interior point methods for the optimization of density structure models through the introduction of physical constraints for transitional areas obtained from previously acquired geophysical data sets. This dissertation presents in Chapter 2 an initial forward model implementation for the calculation of Bouguer gravity anomalies in the Porphyry Copper-Molybdenum (Cu-Mo) Copper Flat Mine region located in Sierra County, New Mexico. In Chapter 3, we present a constrained optimization framework (using interior-point methods) for the inversion of 2-D models of Earth structures delineating density contrasts of anomalous bodies in uniform regions and/or boundaries between layers in layered environments. We implement the proposed algorithm using three different synthetic gravitational data sets with varying complexity. Specifically, we improve the 2-dimensional density structure models by getting rid of unacceptable solutions (geologically unfeasible models or those not satisfying the required constraints) given the reduction of the solution space. Chapter 4 shows the results from the implementation of our algorithm for the inversion of gravitational data obtained from the area surrounding the Porphyry Cu-Mo Copper Flat Mine in Sierra County, NM. Information obtained from previous induced polarization surveys and core samples served as physical constraints for the

  6. Control Methods Utilizing Energy Optimizing Schemes in Refrigeration Systems

    DEFF Research Database (Denmark)

    Larsen, L.S; Thybo, C.; Stoustrup, Jakob

    2003-01-01

    The potential energy savings in refrigeration systems using energy optimal control has been proved to be substantial. This however requires an intelligent control that drives the refrigeration systems towards the energy optimal state. This paper proposes an approach for a control which drives the condenser pressure towards an optimal state. The objective of this is to present a feasible method that can be used for energy optimizing control. A simulation model of a simple refrigeration system will be used as basis for testing the control method.

  7. Optimal convergence rate of the explicit finite difference scheme for American option valuation

    Science.gov (United States)

    Hu, Bei; Liang, Jin; Jiang, Lishang

    2009-08-01

    An optimal convergence rate O(Δx) for an explicit finite difference scheme for a variational inequality problem is obtained under the stability condition using purely PDE methods. As a corollary, a binomial tree scheme for an American put option is unconditionally convergent with rate O((Δt)^(1/2)).
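
    The binomial tree scheme whose O((Δt)^(1/2)) convergence is asserted above is a short backward-induction loop. The sketch below uses the Cox-Ross-Rubinstein parameterization, which is an assumption; the paper's result concerns the scheme's convergence rate, not this particular implementation.

    ```python
    import numpy as np

    def american_put_binomial(S0, K, r, sigma, T, n):
        """Cox-Ross-Rubinstein binomial tree for an American put (backward induction)."""
        dt = T / n
        u = np.exp(sigma * np.sqrt(dt))
        d = 1.0 / u
        p = (np.exp(r * dt) - d) / (u - d)             # risk-neutral up probability
        disc = np.exp(-r * dt)

        S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)   # terminal prices
        V = np.maximum(K - S, 0.0)                      # payoff at maturity
        for step in range(n - 1, -1, -1):
            S = S0 * u ** np.arange(step, -1, -1) * d ** np.arange(0, step + 1)
            cont = disc * (p * V[:-1] + (1 - p) * V[1:])                # continuation value
            V = np.maximum(K - S, cont)                 # early-exercise check
        return V[0]

    # The error decays like O(sqrt(dt)), consistent with the convergence rate above.
    for n in (50, 100, 200, 400):
        print(n, round(american_put_binomial(100.0, 100.0, 0.05, 0.2, 1.0, n), 4))
    ```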

  8. Optimal spatial sampling scheme to characterize mine tailings

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-08-01

    Full Text Available Pravesh Debba (CSIR, Built Environment, Logistics and Quantitative Methods, Pretoria, South Africa); Emmanuel John M. Carranza, Alfred Stein and Freek D. van der Meer (International Institute for Geo-Information Science and Earth Observation (ITC), Enschede, The Netherlands). INTRODUCTION: Geochemical characterization of mine waste impoundments...

  9. Control Methods Utilizing Energy Optimizing Schemes in Refrigeration Systems

    DEFF Research Database (Denmark)

    Larsen, L.S; Thybo, C.; Stoustrup, Jakob;

    2003-01-01

    The potential energy savings in refrigeration systems using energy optimal control has been proved to be substantial. This however requires an intelligent control that drives the refrigeration systems towards the energy optimal state. This paper proposes an approach for a control, which drives...

  10. Optimizing Voip Using A Cross Layer Call Admission Control Scheme

    Directory of Open Access Journals (Sweden)

    Mumtaz AL-Mukhtar

    2013-07-01

    Full Text Available Deploying wireless campus networks has become popular in many universities worldwide for the services they provide. However, such networks suffer from different issues, such as low VoIP network capacity, the effect of network congestion on VoIP QoS, and the WLAN multi-rate issue due to the link adaptation technique. In this paper a cross-layer call admission control (CCAC) scheme is proposed to reduce the effects of these problems on VoWLAN, based on monitoring the RTCP RR (Real-Time Control Protocol Receiver Report), which provides the QoS level for VoIP, and on monitoring the MAC layer for any change in the data rate. If the QoS level degrades due to one of the aforementioned reasons, a considerable change in the packet size or the codec type will be the solution. A wireless campus network is simulated using the OPNET 14.5 modeler and many scenarios are modeled to improve this proposed scheme.

  11. Rare-event simulation for tandem queues: A simple and efficient importance sampling scheme

    NARCIS (Netherlands)

    Miretskiy, D.; Scheinhardt, W.; Mandjes, M.

    2009-01-01

    This paper focuses on estimating the rare event of overflow in the downstream queue of a tandem Jackson queue, relying on importance sampling. It is known that in this setting ‘traditional’ state-independent schemes perform poorly. More sophisticated state-dependent schemes yield asymptotic efficien

  12. Optimization of Eosine Analyses in Water Samples

    Science.gov (United States)

    Kola, Liljana

    2010-01-01

    The fluorescence of Eosine enables its use as an artificial tracer in water system studies. The fluorescence intensity of fluorescent dyes in water samples depends on their physical and chemical properties, such as pH, temperature, presence of oxidants, etc. This paper presents the experience of the Center of Applied Nuclear Physics, Tirana, in this field. The problem is addressed in relation to applying Eosine to trace and determine water movements within karstic systems and underground waters. We used standard solutions of Eosine for this study. The method we elaborated for this purpose made it possible to optimize the procedures we use to analyze samples for the presence of Eosine and to measure its content, even at trace levels, by means of a Perkin Elmer LS 55 Luminescence Spectrometer.

  13. Optimized low-order explicit Runge-Kutta schemes for high- order spectral difference method

    KAUST Repository

    Parsani, Matteo

    2012-01-01

    Optimal explicit Runge-Kutta (ERK) schemes with large stable step sizes are developed for method-of-lines discretizations based on the spectral difference (SD) spatial discretization on quadrilateral grids. These methods involve many stages and provide the optimal linearly stable time step for a prescribed SD spectrum and the minimum leading truncation error coefficient, while admitting a low-storage implementation. Using a large number of stages, the new ERK schemes lead to efficiency improvements larger than 60% over standard ERK schemes for 4th- and 5th-order spatial discretization.

  14. DMT-optimal, Low ML-Complexity STBC-Schemes for Asymmetric MIMO Systems

    CERN Document Server

    Srinath, K Pavan

    2012-01-01

    For an $n_t$ transmit, $n_r$ receive antenna ($n_t \times n_r$) MIMO system with quasi-static Rayleigh fading, it was shown by Elia et al. that schemes based on minimal-delay space-time block codes (STBCs) with a symbol rate of $n_t$ complex symbols per channel use (rate-$n_t$) and a non-vanishing determinant (NVD) are diversity-multiplexing gain tradeoff (DMT)-optimal for arbitrary values of $n_r$. Further, explicit linear STBC-schemes (LSTBC-schemes) with the NVD property were also constructed. However, for asymmetric MIMO systems (where $n_r < n_t$), with the exception of the Alamouti code-scheme for the $2 \times 1$ system and rate-1 diagonal STBC-schemes with NVD for an $n_t \times 1$ system, no known minimal-delay, rate-$n_r$ STBC-scheme has been shown to be DMT-optimal. In this paper, we first obtain an enhanced sufficient criterion for an STBC-scheme to be DMT optimal and using this result, we show that for certain asymmetric MIMO systems, many well-known LSTBC-schemes which have low ML-decod...

  15. Optimal Scheme Selection of Agricultural Production Structure Adjustment - Based on DEA Model; Punjab (Pakistan)

    Institute of Scientific and Technical Information of China (English)

    Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan

    2015-01-01

    This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess the comparative efficiency and then on the basis of this among multiple schemes chose the optimal scheme of agricultural production structure adjustment. Based on the results of DEA model, we dissected scale advantages of each discretionary scheme or plan. We examined scale advantages of each discretionary scheme, tested profoundly a definitive purpose behind not-DEA efficient, which elucidated the system and methodology to enhance these discretionary plans. At the end, another method had been proposed to rank and select the optimal scheme. The research was important to guide the practice if the modification of agricultural production industrial structure was carried on.

  16. Sintering Properties and Optimal Blending Schemes of Iron Ores

    Institute of Scientific and Technical Information of China (English)

    Dauter Oliveira; WU Sheng-li; DAI Yu-ming; XU Jian; CHEN Hong

    2012-01-01

    In order to obtain good sintering performance, it is important to understand the sintering properties of iron ores. Sintering properties, including chemical composition, granulation and high-temperature behaviors, were studied for ores from China, Brazil and Australia. Furthermore, several indices were defined to evaluate the sintering properties of iron ores. The results show that, for chemical composition, Brazilian ores present high TFe, low SiO2 and low Al2O3 content. For granulation, the particle diameter ratio of Brazilian ores is high; the particle intermediate fraction of Chinese concentrates is low; and the average particle size and clay type index of Australian ores are high. For high-temperature properties, ores from China, Brazil and Australia present different characteristics. Ores from different origins should be mixed together to obtain good high-temperature properties. According to the analysis of each ore's sintering properties, an ore blending scheme (Chinese concentrates 20% + Brazilian ores 40% + Australian ores 40%) was suggested. Moreover, a sinter pot test using the blended mix was performed, and the results indicated that the ore blending scheme led to good sintering performance and sinter quality.

  17. Resource optimization scheme for multimedia-enabled wireless mesh networks.

    Science.gov (United States)

    Ali, Amjad; Ahmed, Muhammad Ejaz; Piran, Md Jalil; Suh, Doug Young

    2014-08-08

    Wireless mesh networking is a promising technology that can support numerous multimedia applications. Multimedia applications have stringent quality of service (QoS) requirements, i.e., bandwidth, delay, jitter, and packet loss ratio. Enabling such QoS-demanding applications over wireless mesh networks (WMNs) requires QoS-provisioning routing protocols, which lead to the network resource underutilization problem. Moreover, random topology deployment leaves some network resources unused. Therefore, resource optimization is one of the most critical design issues in multi-hop, multi-radio WMNs enabled with multimedia applications. Resource optimization has been studied extensively in the literature for wireless Ad Hoc and sensor networks, but existing studies have not considered resource underutilization issues caused by QoS provisioning routing and random topology deployment. Finding a QoS-provisioned path in wireless mesh networks is an NP-complete problem. In this paper, we propose a novel Integer Linear Programming (ILP) optimization model to reconstruct the optimal connected mesh backbone topology with a minimum number of links and relay nodes which satisfies the given end-to-end QoS demands for multimedia traffic and identification of extra resources, while maintaining redundancy. We further propose a polynomial time heuristic algorithm called Link and Node Removal Considering Residual Capacity and Traffic Demands (LNR-RCTD). Simulation studies prove that our heuristic algorithm provides near-optimal results and saves about 20% of resources from being wasted by QoS provisioning routing and random topology deployment.

  18. Design of multishell sampling schemes with uniform coverage in diffusion MRI.

    Science.gov (United States)

    Caruyer, Emmanuel; Lenglet, Christophe; Sapiro, Guillermo; Deriche, Rachid

    2013-06-01

    In diffusion MRI, a technique known as diffusion spectrum imaging reconstructs the propagator with a discrete Fourier transform, from a Cartesian sampling of the diffusion signal. Alternatively, it is possible to directly reconstruct the orientation distribution function in q-ball imaging, providing so-called high angular resolution diffusion imaging. In between these two techniques, acquisitions on several spheres in q-space offer an interesting trade-off between the angular resolution and the radial information gathered in diffusion MRI. A careful design is central in the success of multishell acquisition and reconstruction techniques. The design of acquisition in multishell is still an open and active field of research, however. In this work, we provide a general method to design multishell acquisition with uniform angular coverage. This method is based on a generalization of electrostatic repulsion to multishell. We evaluate the impact of our method using simulations, on the angular resolution in one and two bundles of fiber configurations. Compared to more commonly used radial sampling, we show that our method improves the angular resolution, as well as fiber crossing discrimination. We propose a novel method to design sampling schemes with optimal angular coverage and show the positive impact on angular resolution in diffusion MRI. Copyright © 2012 Wiley Periodicals, Inc.
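
    On a single shell, the electrostatic-repulsion idea amounts to minimizing an antipodally symmetric Coulomb-like energy over unit vectors; the multishell generalization in the paper adds coupling terms between shells. The sketch below does the single-shell version by projected gradient descent on the energy (moving along the repulsive forces); the step size, iteration count and energy form are illustrative assumptions.

    ```python
    import numpy as np

    def repulsion_directions(n, iters=2000, step=0.005, seed=0):
        """Spread n unit vectors on the sphere by descending an antipodally symmetric
        electrostatic-like energy, the sum of 1/||p_i - p_j|| + 1/||p_i + p_j||."""
        rng = np.random.default_rng(seed)
        p = rng.standard_normal((n, 3))
        p /= np.linalg.norm(p, axis=1, keepdims=True)
        for _ in range(iters):
            force = np.zeros_like(p)
            for sign in (+1.0, -1.0):                      # antipodal symmetry
                diff = p[:, None, :] - sign * p[None, :, :]
                dist = np.linalg.norm(diff, axis=2)
                np.fill_diagonal(dist, np.inf)             # exclude self-interaction
                force += (diff / dist[..., None] ** 3).sum(axis=1)
            p += step * force                               # move along the repulsive force
            p /= np.linalg.norm(p, axis=1, keepdims=True)   # project back onto the sphere
        return p

    dirs = repulsion_directions(30)
    dots = np.abs(dirs @ dirs.T) - np.eye(len(dirs))        # antipodally identified angles
    print("min separation angle (deg):", round(float(np.degrees(np.arccos(dots.max()))), 2))
    ```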

  19. Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks.

    Science.gov (United States)

    Robinson, Y Harold; Rajaram, M

    2015-01-01

    Mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology property of a MANET may degrade the performance of the network, and multipath selection is a challenging task for improving the network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds the optimal loop-free paths that solve the link-disjoint path problem in a MANET. The CTRNN is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, the particle swarm optimization (PSO) method is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek better link-quality nodes in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique.

  20. Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Y. Harold Robinson

    2015-01-01

    Full Text Available Mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology property of a MANET may degrade the performance of the network, and multipath selection is a challenging task for improving the network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds the optimal loop-free paths that solve the link-disjoint path problem in a MANET. The CTRNN is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, the particle swarm optimization (PSO) method is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek better link-quality nodes in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique.
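
    The PSO loop referred to in both records is generic; the sketch below shows a minimal version minimizing a plain test function, with the sphere function standing in for the path-quality objective. Swarm size, inertia and acceleration coefficients are conventional values assumed for illustration, not those used in EMPSO.

    ```python
    import numpy as np

    def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                     lo=-5.0, hi=5.0, seed=0):
        """Minimal particle swarm optimization of f over the box [lo, hi]^dim."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, size=(n_particles, dim))       # particle positions
        v = np.zeros_like(x)                                    # particle velocities
        pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
        gbest = pbest[np.argmin(pbest_val)].copy()              # global best position
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            val = np.apply_along_axis(f, 1, x)
            better = val < pbest_val
            pbest[better], pbest_val[better] = x[better], val[better]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, float(pbest_val.min())

    # The sphere function stands in for the path-quality objective used in EMPSO.
    best_x, best_val = pso_minimize(lambda z: float(np.sum(z ** 2)), dim=4)
    print(np.round(best_x, 4), best_val)
    ```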

  1. Interpolation scheme for fictitious domain techniques and topology optimization of finite strain elastic problems

    DEFF Research Database (Denmark)

    Wang, Fengwen; Lazarov, Boyan Stefanov; Sigmund, Ole

    2014-01-01

    The focus of this paper is on interpolation schemes for fictitious domain and topology optimization approaches with structures undergoing large displacements. Numerical instability in the finite element simulations can often be observed, due to excessive distortion in low stiffness regions. A new...... for a challenging test geometry as well as for topology optimization of minimum compliance and compliant mechanisms. The effect of combining the proposed interpolation scheme with different hyperelastic material models is investigated as well. Numerical results show that the proposed approach alleviates...... the problems in the low stiffness regions and for the simulated cases, results in stable topology optimization of structures undergoing large displacements. © 2014 Elsevier B.V....

  2. An approximation scheme for optimal control of Volterra integral equations

    OpenAIRE

    Belbas, S. A.

    2006-01-01

    We present and analyze a new method for solving optimal control problems for Volterra integral equations, based on approximating the controlled Volterra integral equations by a sequence of systems of controlled ordinary differential equations. The resulting approximating problems can then be solved by dynamic programming methods for ODE controlled systems. Other, straightforward versions of dynamic programming, are not applicable to Volterra integral equations. We also derive the connection b...

  3. Using Correspondent Information for Route Optimization Scheme on Proxy Mobile IPv6

    Directory of Open Access Journals (Sweden)

    Young-Hyun Choi

    2010-08-01

    Full Text Available Proxy Mobile IPv6 outperforms previous mobility protocols that have been standardized by the Internet Engineering Task Force. However, Proxy Mobile IPv6 still involves the triangle routing problem, in which data packets for the mobile node are delivered over inefficient routing paths. To address the triangle routing problem, two different route optimization schemes have been proposed that exclude the inefficient routing paths by creating the shortest routing path. In this paper, we propose a Correspondent Information Route Optimization scheme that solves the problem of the inefficient signaling cost of Dutta's route optimization by using correspondent information for the correspondent binding update process between the mobile access gateways, which is caused by bi-path data communication of mobile entities at different mobile access gateways under the same local mobility anchor. The results of the signaling cost performance evaluation show that the performance of our proposed correspondent information route optimization scheme is better than Liebsch's route optimization scheme by 45% for mobility of the data packet sender and better than Dutta's route optimization scheme by 20% for mobility of the data packet sender.

  4. China summer precipitation simulations using an optimal ensemble of cumulus schemes

    Institute of Scientific and Technical Information of China (English)

    Shuyan LIU; Wei GAO; Min XU; Xueyuan WANG; Xin-Zhong LIANG

    2009-01-01

    RegCM3 (REGional Climate Model) simulations of precipitation in China in 1991 and 1998 are very sensitive to the cumulus parameterization. Among the four schemes available, none has superior skills over the whole of China, but each captures certain observed signals in distinct regions. The Grell scheme with the Fritsch-Chappell closure produces the smallest biases over the North; the Grell scheme with the Arakawa-Schubert closure performs the best over the southeast of 100°E; the Anthes-Kuo scheme is superior over the northeast; and the Emanuel scheme is more realistic over the southwest of 100°E and along the Yangtze River Basin. These differences indicate a strong degree of independence and complementarity between the parameterizations. As such, an ensemble is developed from the four schemes, whose relative contributions or weights are optimized locally to yield overall minimum root-mean-square errors from observed daily precipitation. The skill gain is evaluated by applying the identical distribution of the weights in a different period. It is shown that the ensemble always produces gross biases that are smaller than the individual schemes in both 1991 and 1998. The ensemble, however, cannot eliminate the large rainfall deficits over the southwest of 100°E and along the Yangtze River Basin that are systematic across all schemes. Further improvements can be made by a super-ensemble based on more cumulus schemes and/or multiple models.

  5. An Optimal Control Scheme to Minimize Loads in Wind Farms

    DEFF Research Database (Denmark)

    Soleimanzadeh, Maryam; Wisniewski, Rafal

    2012-01-01

    This work presents a control algorithm for wind farms that optimizes the power production of the farm and helps to increase the lifetime of wind turbine components. The control algorithm is a centralized approach, and it determines the power reference signals for individual wind turbines such that the low-frequency structural loads of the wind turbines are reduced. The controller is relatively easy to implement on a wind farm, and here the results of simulating the controller on a small wind farm are presented.

  7. Optimizing congestion and emissions via tradable credit charge and reward scheme without initial credit allocations

    Science.gov (United States)

    Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang

    2017-01-01

    This paper investigates a revenue-neutral tradable credit charge and reward scheme, without initial credit allocations, that can reassign network traffic flow patterns to optimize congestion and emissions. First, we prove the existence of the proposed schemes and further decentralize the minimum emission flow pattern to user equilibrium. Moreover, we design the solution method of the proposed credit scheme for the minimum emission problem. Second, we investigate the revenue-neutral tradable credit charge and reward scheme without initial credit allocations for the bi-objective case, to obtain the Pareto system optimum flow patterns of congestion and emissions, and show that the corresponding solutions are located in the polyhedron defined by a system of inequalities and equalities. Last, a numerical example based on a simple traffic network is used to obtain the proposed credit schemes and to verify that they are revenue-neutral.

  8. Scheme to Implement Optimal Asymmetric Economical Phase-Covariant Quantum Cloning in Cavity QED

    Institute of Scientific and Technical Information of China (English)

    YANG Chun-Nuan; ZHANG Wen-Hai; HE Jin-Chun; DAI Jie-Lin; HUANG Nian-Ning; YE Liu

    2008-01-01

    We propose an experimentally feasible scheme to implement the optimal asymmetric economical 1 → 2 phase-covariant quantum cloning in two dimensions based on the cavity QED technique. The protocol is very simple and only two atoms are required. Our scheme is insensitive to the cavity field states and cavity decay. During the process, the cavity is only virtually excited, which greatly prolongs the effective decoherence time. Therefore, the scheme may be realized in experiment.

  9. Scheme for Implementation of Optimal Cloning of Arbitrary Single Particle Atomic State into Two Photonic States

    Institute of Scientific and Technical Information of China (English)

    A.K. Kushwaha; SONG Wei; QIN Tao

    2008-01-01

    We present a feasible scheme to implement the 1 → 2 optimal cloning of an arbitrary single-particle atomic state into two photonic states, which is important for applications in long-distance quantum communication. Our scheme also realizes the tele-NOT gate from one atom to a distant atom trapped in another cavity. The scheme is based on adiabatic passage and polarization measurement. It is robust against a number of practical noise sources such as violation of the Lamb-Dicke condition, spontaneous emission, and detection inefficiency.

  10. Cascaded Fresnel holographic image encryption scheme based on a constrained optimization algorithm and Henon map

    Science.gov (United States)

    Su, Yonggang; Tang, Chen; Chen, Xia; Li, Biyuan; Xu, Wenjun; Lei, Zhenkun

    2017-01-01

    We propose an image encryption scheme using chaotic phase masks and cascaded Fresnel transform holography based on a constrained optimization algorithm. In the proposed encryption scheme, the chaotic phase masks are generated by the Henon map, and the initial conditions and parameters of the Henon map serve as the main secret keys during the encryption and decryption process. With the help of multiple chaotic phase masks, the original image can be encrypted into the form of a hologram. The constrained optimization algorithm makes it possible to retrieve the original image from only a single frame of the hologram. The use of chaotic phase masks makes key management and transmission very convenient. In addition, the geometric parameters of the optical system serve as additional keys, which can improve the security level of the proposed scheme. Comprehensive security analysis performed on the proposed encryption scheme demonstrates that it has high resistance against various potential attacks. Moreover, the proposed encryption scheme can also be used to encrypt video information, and simulations performed on a video in AVI format have verified the feasibility of the scheme for video encryption.
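
    As a point of reference, chaotic phase masks of this kind are typically built by iterating the Henon map and mapping the resulting sequence onto phases in [0, 2π). The sketch below, with assumed classical parameters a = 1.4 and b = 0.3 and an illustrative normalization, shows the general idea rather than the authors' exact construction.

```python
# Sketch: a Henon-map-based chaotic phase mask (illustrative, not the
# authors' exact construction). The initial conditions (x0, y0) and the
# parameters (a, b) play the role of secret keys.
import numpy as np

def henon_phase_mask(shape, x0=0.1, y0=0.2, a=1.4, b=0.3, burn_in=1000):
    n = shape[0] * shape[1]
    x, y = x0, y0
    seq = np.empty(n)
    for i in range(burn_in + n):
        x, y = 1.0 - a * x * x + y, b * x      # Henon map iteration
        if i >= burn_in:
            seq[i - burn_in] = x
    # Normalize the chaotic sequence to [0, 1) and turn it into phases
    u = (seq - seq.min()) / (seq.max() - seq.min() + 1e-12)
    return np.exp(1j * 2.0 * np.pi * u).reshape(shape)

mask = henon_phase_mask((64, 64))
print(mask.shape, np.abs(mask).max())   # unit-modulus, phase-only mask
```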

  11. Credit-based accept-zero sampling schemes for the control of outgoing quality

    NARCIS (Netherlands)

    Baillie, David H.; Klaassen, Chris A.J.

    2000-01-01

    A general procedure is presented for switching between accept-zero attributes or variables sampling plans to provide acceptance sampling schemes with a specified limit on the (suitably defined) average outgoing quality (AOQ). The switching procedure is based on credit, defined as the total number of

  12. A Distributed Intrusion Detection Scheme about Communication Optimization in Smart Grid

    Directory of Open Access Journals (Sweden)

    Yunfa Li

    2013-01-01

    We first propose an efficient communication optimization algorithm for the smart grid. Based on the optimization algorithm, we propose an intrusion detection algorithm to detect malicious data and possible cyberattacks. In this scheme, each node acts independently when it processes communication flows or cybersecurity threats, and neither special hardware nor node cooperation is needed. In order to justify the feasibility and availability of this scheme, a series of experiments has been conducted. The results show that it is feasible and efficient to detect malicious data and possible cyberattacks with low computation and communication cost.

  13. Nearly optimal measurement schemes in a noisy Mach-Zehnder interferometer with coherent and squeezed vacuum

    Energy Technology Data Exchange (ETDEWEB)

    Gard, Bryan T.; You, Chenglong; Singh, Robinjeet; Lee, Hwang; Corbitt, Thomas R.; Dowling, Jonathan P. [Louisiana State University, Baton Rouge, LA (United States); Mishra, Devendra K. [Louisiana State University, Baton Rouge, LA (United States); V.S. Mehta College of Science, Physics Department, Bharwari, UP (India)

    2017-12-15

    The use of an interferometer to perform ultra-precise parameter estimation under noisy conditions is a challenging task. Here we discuss nearly optimal measurement schemes for a well-known, sensitive input state: squeezed vacuum together with coherent light. We find that a single-mode intensity measurement, while the simplest and able to beat the shot-noise limit, is outperformed by other measurement schemes in the low-power regime; at high powers, however, intensity measurement is outperformed only by a small factor. Specifically, we confirm that an optimal measurement choice under lossless conditions is the parity measurement. In addition, we also discuss the performance of several other common measurement schemes when considering photon loss, detector efficiency, phase drift, and thermal photon noise. We conclude that, with noise considerations, homodyne detection remains near optimal in both the low- and high-power regimes. Surprisingly, some of the other investigated measurement schemes, including the previously optimal parity measurement, do not remain even near optimal when noise is introduced. (orig.)

  14. Adaptive multi-objective Optimization scheme for cognitive radio resource management

    KAUST Repository

    Alqerm, Ismail

    2014-12-01

    Cognitive Radio is an intelligent Software Defined Radio that is capable of altering its transmission parameters according to predefined objectives and wireless environment conditions. The cognitive engine is the actuator that performs radio parameter configuration by exploiting optimization and machine learning techniques. In this paper, we propose an Adaptive Multi-objective Optimization Scheme (AMOS) for cognitive radio resource management to improve spectrum operation and network performance. The optimization relies on adapting radio transmission parameters to environment conditions using constrained optimization models, called fitness functions, in an iterative manner. These functions include minimizing power consumption, bit error rate, delay and interference, while maximizing throughput and spectral efficiency. Cross-layer optimization is exploited to access environmental parameters from all TCP/IP stack layers. AMOS uses an adaptive genetic algorithm, in terms of its parameters and objective weights, as the vehicle of optimization. The proposed scheme has demonstrated quick response and efficiency in three different scenarios compared to other schemes. In addition, it shows its capability to optimize the performance of the TCP/IP layers as a whole, not only the physical layer.
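
    A minimal sketch of the kind of scalarized fitness such a scheme might hand to a genetic algorithm is shown below; the objective list, normalization, and weights are illustrative assumptions, not AMOS's actual fitness functions.

```python
# Sketch: a weighted-sum fitness for multi-objective radio parameter tuning.
# Objectives to minimize (power, BER, delay, interference) enter with positive
# weights; objectives to maximize (throughput, spectral efficiency) enter
# negated. All names and weights here are illustrative assumptions.
def fitness(metrics, weights):
    power, ber, delay, interference, throughput, spec_eff = metrics
    cost = (weights["power"] * power
            + weights["ber"] * ber
            + weights["delay"] * delay
            + weights["interference"] * interference
            - weights["throughput"] * throughput
            - weights["spec_eff"] * spec_eff)
    return cost   # a GA would minimize this scalarized cost

w = {"power": 0.2, "ber": 0.3, "delay": 0.1,
     "interference": 0.1, "throughput": 0.2, "spec_eff": 0.1}
print(fitness((0.5, 1e-3, 0.02, 0.1, 0.8, 0.6), w))
```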

  15. A numerical scheme for optimal transition paths of stochastic chemical kinetic systems

    Science.gov (United States)

    Liu, Di

    2008-10-01

    We present a new framework for finding the optimal transition paths of metastable stochastic chemical kinetic systems with large system size. The optimal transition paths are identified to be the most probable paths according to the Large Deviation Theory of stochastic processes. Dynamical equations for the optimal transition paths are derived using the variational principle. A modified Minimum Action Method (MAM) is proposed as a numerical scheme to solve the optimal transition paths. Applications to Gene Regulatory Networks such as the toggle switch model and the Lactose Operon Model in Escherichia coli are presented as numerical examples.

  16. Unified Importance Sampling Schemes for Efficient Simulation of Outage Capacity over Generalized Fading Channels

    KAUST Repository

    Rached, Nadhir B.

    2015-11-13

    The outage capacity (OC) is among the most important performance metrics of communication systems operating over fading channels. Of interest in the present paper is the evaluation of the OC at the output of Equal Gain Combining (EGC) and Maximum Ratio Combining (MRC) receivers. In this case, the problem turns out to be that of computing the Cumulative Distribution Function (CDF) of the sum of independent random variables. Since finding a closed-form expression for the CDF of the sum distribution is out of reach for a wide class of commonly used distributions, methods based on Monte Carlo (MC) simulations take pride of place. In order to allow for the estimation of the operating range of small outage probabilities, it is of paramount importance to develop fast and efficient estimation methods, as naive MC simulations would require high computational complexity. Along this line, we propose in this work two unified, yet efficient, hazard rate twisting Importance Sampling (IS) based approaches that efficiently estimate the OC of MRC or EGC diversity techniques over generalized independent fading channels. The first estimator is shown to possess the asymptotic optimality criterion and applies to arbitrary fading models, whereas the second one achieves the well-desired bounded relative error property for the majority of the well-known fading variates. Moreover, the second estimator is shown to achieve the asymptotic optimality property under the particular Log-normal environment. Some selected simulation results are finally provided in order to illustrate the substantial computational gain achieved by the proposed IS schemes over naive MC simulations.
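
    To make the importance-sampling idea concrete, the sketch below estimates a small left-tail probability P(S < a) for a sum of i.i.d. exponential variates using simple exponential twisting toward the origin; it is a generic illustration under assumed distributions, not the paper's hazard-rate twisting estimators for generalized fading channels.

```python
# Sketch: importance sampling for a small left-tail probability of a sum of
# i.i.d. Exp(lam) variates, standing in for a small outage probability.
# Sampling from Exp(lam_t) with lam_t > lam pushes mass toward the rare event;
# the likelihood ratio corrects the bias. Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
N, lam, a = 8, 1.0, 1.0          # 8 branches, unit rate, small threshold
n_samples = 200_000

# Naive Monte Carlo
S = rng.exponential(1.0 / lam, size=(n_samples, N)).sum(axis=1)
p_naive = np.mean(S < a)

# Importance sampling with twisted rate lam_t
lam_t = N / a                    # heuristic: make the twisted mean of S equal a
X = rng.exponential(1.0 / lam_t, size=(n_samples, N))
S_t = X.sum(axis=1)
lr = (lam / lam_t) ** N * np.exp((lam_t - lam) * S_t)   # likelihood ratio
p_is = np.mean((S_t < a) * lr)

print(p_naive, p_is)
```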

  17. DRO: domain-based route optimization scheme for nested mobile networks

    Directory of Open Access Journals (Sweden)

    Chuang Ming-Chin

    2011-01-01

    The network mobility (NEMO) basic support protocol is designed to support NEMO management and to ensure communication continuity between nodes in mobile networks. However, in nested mobile networks, NEMO suffers from the pinball routing problem, which results in long packet transmission delays. To solve the problem, we propose a domain-based route optimization (DRO) scheme that incorporates a domain-based network architecture and ad hoc routing protocols for route optimization. DRO also improves the intra-domain handoff performance, reduces the convergence time during route optimization, and avoids the out-of-sequence packet problem. A detailed performance analysis and simulations were conducted to evaluate the scheme. The results demonstrate that DRO outperforms existing mechanisms in terms of packet transmission delay (i.e., better route optimization), intra-domain handoff latency, convergence time, and packet tunneling overhead.

  18. Optimal Rate Based Image Transmission Scheme in Multi-rate Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mr. Jayachandran.A ,

    2011-06-01

    In image transmission applications over WSNs, energy efficiency and image quality are both important factors for joint optimization. Transmitting large images causes bottlenecks in a WSN due to the limited energy resources and network capacity. Since some sensors have similar viewing directions, the images they capture are likely to exhibit a certain level of correlation among themselves. The proposed optimization scheme allows each image sensor to transmit optimal functions of the overlapping images through appropriate multi-rate routing paths. Moreover, we use unequal segment loss protection with erasure codes of different strength to maximize the expected quality at the destination, and propose a fast algorithm that finds nearly optimal transmission strategies. Simulation results show that the proposed scheme achieves high energy efficiency in WSNs while enhancing image transmission quality.

  19. Optimal Tradable Credits Scheme and Congestion Pricing with the Efficiency Analysis to Congestion

    Directory of Open Access Journals (Sweden)

    Ge Gao

    2015-01-01

    We allow for three traffic scenarios: a tradable credits scheme, congestion pricing, and no traffic measure. The utility functions of the different modes (car, bus, and bicycle) are developed by considering income's impact on travelers' behavior; their purpose is to analyze the demand distribution among modes. A social optimization model is built aiming at maximizing social welfare. The optimal tradable credits scheme (distribution of credits, credit charges, and the credit price), congestion pricing fees, bus frequency, and bus fare are obtained by solving the model. Mode choice behavior under the tradable credits scheme is also studied. Numerical examples are presented to demonstrate the model's applicability and explore the effects of the three schemes on the traffic system's performance. Results show that congestion pricing would yield more social welfare than the other traffic measures; however, the tradable credits scheme gives travelers more consumer surplus than congestion pricing. Travelers' consumer surplus is lowest under congestion pricing, which harms travelers' benefits. Comparing the efficiency of the three scenarios, the tradable credits scheme is considered the best.

  20. Simultaneous optimization of dose distributions and fractionation schemes in particle radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Unkelbach, Jan; Zeng, Chuan [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Engelsman, Martijn [Faculty of Applied Physics, Delft University of Technology/HollandPTC, 2628 CJ Delft (Netherlands)

    2013-09-15

    Purpose: The paper considers the fractionation problem in intensity modulated proton therapy (IMPT). Conventionally, IMPT fields are optimized independently of the fractionation scheme. In this work, we discuss the simultaneous optimization of fractionation scheme and pencil beam intensities. Methods: This is performed by allowing for distinct pencil beam intensities in each fraction, which are optimized using objective and constraint functions based on biologically equivalent dose (BED). The paper presents a model that mimics an IMPT treatment with a single incident beam direction for which the optimal fractionation scheme can be determined despite the nonconvexity of the BED-based treatment planning problem. Results: For this model, it is shown that a small α/β ratio in the tumor gives rise to a hypofractionated treatment, whereas a large α/β ratio gives rise to hyperfractionation. It is further demonstrated that, for intermediate α/β ratios in the tumor, a nonuniform fractionation scheme emerges, in which it is optimal to deliver different dose distributions in subsequent fractions. The intuitive explanation for this phenomenon is as follows: by varying the dose distribution in the tumor between fractions, the same total BED can be achieved with a lower physical dose. If it is possible to achieve this dose variation in the tumor without varying the dose in the normal tissue (which would have an adverse effect), the reduction in physical dose may lead to a net reduction of the normal tissue BED. For proton therapy, this is indeed possible to some degree because the entrance dose is mostly independent of the range of the proton pencil beam. Conclusions: The paper provides conceptual insight into the interdependence of optimal fractionation schemes and the spatial optimization of dose distributions. It demonstrates the emergence of nonuniform fractionation schemes that arise from the standard BED model when IMPT fields and fractionation scheme are optimized.
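
    For reference, the standard BED expression used in such models, for n fractions of dose d, is BED = n·d·(1 + d/(α/β)). The sketch below evaluates it for a few uniform schemes delivering the same total physical dose, using assumed α/β values; it illustrates only the standard linear-quadratic BED model, not the paper's simultaneous optimization.

```python
# Sketch: biologically effective dose (BED) under the standard LQ model,
# BED = n * d * (1 + d / (alpha/beta)), for uniform fractionation schemes
# with the same total physical dose. The alpha/beta values are assumed
# for illustration (small for late-reacting tissue, larger for many tumors).
def bed(n_fractions, dose_per_fraction, alpha_beta):
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

total_dose = 60.0  # Gy, same physical dose for every scheme below
for n in (5, 15, 30):                 # hypo- to conventionally fractionated
    d = total_dose / n
    print(f"n={n:2d}, d={d:4.1f} Gy: "
          f"tumor BED (a/b=10) = {bed(n, d, 10.0):6.1f} Gy, "
          f"normal-tissue BED (a/b=3) = {bed(n, d, 3.0):6.1f} Gy")
```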

  1. Use of different sampling schemes in machine learning-based prediction of hydrological models' uncertainty

    Science.gov (United States)

    Kayastha, Nagendra; Solomatine, Dimitri; Lal Shrestha, Durga; van Griensven, Ann

    2013-04-01

    In recent years, a lot of attention in the hydrologic literature has been given to model parameter uncertainty analysis. The robustness of uncertainty estimation depends on the efficiency of the sampling method used to generate the best-fit responses (outputs) and on its ease of use. This paper aims to investigate: (1) how sampling strategies affect the uncertainty estimates of hydrological models, and (2) how to use this information in machine learning predictors of model uncertainty. Sampling of parameters may employ various algorithms. We compared seven different algorithms, namely Monte Carlo (MC) simulation, generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution Metropolis algorithm (SCEMUA), differential evolution adaptive Metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1]. These methods were applied to estimate the uncertainty of streamflow simulation using the conceptual model HBV and the semi-distributed hydrological model SWAT. The Nzoia catchment in West Kenya is considered as the case study. The results are compared and analysed based on the shape of the posterior distribution of parameters and on the uncertainty results for model outputs. The MLUE method [2] uses the results of Monte Carlo sampling (or any other sampling scheme) to build a machine learning (regression) model U able to predict the uncertainty (quantiles of the pdf) of the outputs of a hydrological model H. Inputs to these models are specially identified representative variables (past event precipitation and flows). The trained machine learning models are then employed to predict the model output uncertainty specific to the new input data. The problem here is that different sampling algorithms result in different data sets used to train such a model U, which leads to several models (and there is no clear evidence which model is the best, since there is no basis for comparison). A solution could be to form a committee of all models U and

  2. Aerodynamic Noise Propagation Simulation using Immersed Boundary Method and Finite Volume Optimized Prefactored Compact Scheme

    Institute of Scientific and Technical Information of China (English)

    Min LIU; Keqi WU

    2008-01-01

    Based on the immersed boundary method (IBM) and the finite volume optimized prefactored compact (FVOPC) scheme, a numerical simulation of noise propagation inside and outside the casing of a cross-flow fan is established. The unsteady linearized Euler equations are solved to directly simulate the aero-acoustic field. In order to validate the FVOPC scheme, a test case, the one-dimensional linear wave propagation problem, is carried out using the FVOPC, DRP and HOC schemes. The result of FVOPC is in good agreement with the analytic solution and is better than the results of the DRP and HOC schemes; FVOPC exhibits less dispersion and dissipation than the DRP and HOC schemes. Then, numerical simulation of noise propagation problems is performed. The noise field of 36 compact rotating noise sources is obtained at a rotating velocity of 1000 r/min. The PML absorbing boundary condition is applied at the far-field sound boundary to suppress numerical reflection, and a wall boundary condition is applied to the casing. The results show that there are reflections on the casing wall and sound wave interference in the field. The FVOPC scheme with the IBM is suitable for noise propagation problems in complex geometries, suppressing dispersion and dissipation while keeping high-order precision.

  3. On Optimization Control Parameters in an Adaptive Error-Control Scheme in Satellite Networks

    Directory of Open Access Journals (Sweden)

    Ranko Vojinović

    2011-09-01

    This paper presents a method for optimization of the control parameters of an adaptive GBN scheme in an error-prone satellite channel. The method is based on a three-state channel model, in which the channel has a variable noise level.

  4. Breeding programmes for smallholder sheep farming systems: II. Optimization of cooperative village breeding schemes

    NARCIS (Netherlands)

    Gizaw, S.; Arendonk, van J.A.M.; Valle-Zarate, A.; Haile, A.; Rischkowsky, B.; Dessie, T.; Mwai, A.O.

    2014-01-01

    A simulation study was conducted to optimize a cooperative village-based sheep breeding scheme for Menz sheep of Ethiopia. Genetic gains and profits were estimated under nine levels of farmers' participation and three scenarios of controlled breeding achieved in the breeding programme, as well as un

  5. Biodiversity optimal sampling: an algorithmic solution

    Directory of Open Access Journals (Sweden)

    Alessandro Ferrarini

    2012-03-01

    Biodiversity sampling is a very serious task. When biodiversity sampling is not representative of the biodiversity spatial pattern, due to too few data or incorrect sampling point locations, successive analyses, models and simulations are inevitably biased. In this work, I propose a new solution to the problem of biodiversity sampling. The proposed approach is suitable for habitats and for plant and animal species; in addition, it is able to answer the two pivotal questions of biodiversity sampling: (1) how many sampling points, and (2) where the sampling points should be located.

  6. Designing sampling schemes for effect monitoring of nutrient leaching from agricultural soils.

    NARCIS (Netherlands)

    Brus, D.J.; Noij, I.G.A.M.

    2008-01-01

    A general methodology for designing sampling schemes for monitoring is illustrated with a case study aimed at estimating the temporal change of the spatial mean P concentration in the topsoil of an agricultural field after implementation of the remediation measure. A before-after control-impact (BAC

  7. Model and algorithm of optimizing alternate traffic restriction scheme in urban traffic network

    Institute of Scientific and Technical Information of China (English)

    徐光明; 史峰; 刘冰; 黄合来

    2014-01-01

    An optimization model and its solution algorithm for alternate traffic restriction (ATR) schemes were introduced in terms of both the restriction districts and the proportion of restricted automobiles. A bi-level programming model was proposed to formulate the ATR scheme optimization problem, aiming at consumer surplus maximization and overload flow minimization in the upper-level model. In the lower-level model, elastic demand, mode choice and multi-class user equilibrium assignment were synthetically optimized. A genetic algorithm involving prolonging codes was constructed, demonstrating high computing efficiency in that it dynamically includes newly appearing overload links in the codes so as to reduce the subsequent searching range. Moreover, practical processing approaches were suggested, which may improve the operability of the model-based solutions.

  8. Study on a new meteorological sampling scheme developed for the OSCAAR code system

    OpenAIRE

    Liu, X.; 富田 賢一; 本間 俊充

    2002-01-01

    One important step in Level 3 Probabilistic Safety Assessment is meteorological sequence sampling; previous studies mainly addressed code systems using a straight-line plume model, and more effort is needed for trajectory puff models such as the OSCAAR code system. This report describes the development of a new meteorological sampling scheme for the OSCAAR code system that explicitly considers population distribution. A group of principles was set forth for the development of...

  9. Tandem polymer solar cells: simulation and optimization through a multiscale scheme

    Directory of Open Access Journals (Sweden)

    Fanan Wei

    2017-01-01

    In this paper, polymer solar cells with a tandem structure were investigated and optimized using a multiscale simulation scheme. In the proposed multiscale simulation, multiple aspects – optical calculation, mesoscale simulation, device scale simulation and optimal power conversion efficiency searching modules – were studied together to give an optimal result. Through the simulation work, dependencies of device performance on the tandem structures were clarified by tuning the thickness, donor/acceptor weight ratio as well as the donor–acceptor distribution in both active layers of the two sub-cells. Finally, employing searching algorithms, we optimized the power conversion efficiency of the tandem polymer solar cells and located the optimal device structure parameters. With the proposed multiscale simulation strategy, poly(3-hexylthiophene)/phenyl-C61-butyric acid methyl ester and (poly[2,6-(4,4-bis-(2-ethylhexyl)-4H-cyclopenta[2,1-b;3,4-b]dithiophene)-alt-4,7-(2,1,3-benzothiadiazole)])/phenyl-C61-butyric acid methyl ester based tandem solar cells were simulated and optimized as an example. Two configurations with different sub-cell sequences in the tandem photovoltaic device were tested and compared. The comparison of the simulation results between the two configurations demonstrated that the balance between the two sub-cells is of critical importance for tandem organic photovoltaics to achieve high performance. Consistency between the optimization results and the reported experimental results proved the effectiveness of the proposed simulation scheme.

  10. Tandem polymer solar cells: simulation and optimization through a multiscale scheme.

    Science.gov (United States)

    Wei, Fanan; Yao, Ligang; Lan, Fei; Li, Guangyong; Liu, Lianqing

    2017-01-01

    In this paper, polymer solar cells with a tandem structure were investigated and optimized using a multiscale simulation scheme. In the proposed multiscale simulation, multiple aspects - optical calculation, mesoscale simulation, device scale simulation and optimal power conversion efficiency searching modules - were studied together to give an optimal result. Through the simulation work, dependencies of device performance on the tandem structures were clarified by tuning the thickness, donor/acceptor weight ratio as well as the donor-acceptor distribution in both active layers of the two sub-cells. Finally, employing searching algorithms, we optimized the power conversion efficiency of the tandem polymer solar cells and located the optimal device structure parameters. With the proposed multiscale simulation strategy, poly(3-hexylthiophene)/phenyl-C61-butyric acid methyl ester and (poly[2,6-(4,4-bis-(2-ethylhexyl)-4H-cyclopenta[2,1-b;3,4-b]dithiophene)-alt-4,7-(2,1,3-benzothiadiazole)])/phenyl-C61-butyric acid methyl ester based tandem solar cells were simulated and optimized as an example. Two configurations with different sub-cell sequences in the tandem photovoltaic device were tested and compared. The comparison of the simulation results between the two configurations demonstrated that the balance between the two sub-cells is of critical importance for tandem organic photovoltaics to achieve high performance. Consistency between the optimization results and the reported experimental results proved the effectiveness of the proposed simulation scheme.

  11. Tandem polymer solar cells: simulation and optimization through a multiscale scheme

    Science.gov (United States)

    Wei, Fanan; Yao, Ligang; Lan, Fei

    2017-01-01

    In this paper, polymer solar cells with a tandem structure were investigated and optimized using a multiscale simulation scheme. In the proposed multiscale simulation, multiple aspects – optical calculation, mesoscale simulation, device scale simulation and optimal power conversion efficiency searching modules – were studied together to give an optimal result. Through the simulation work, dependencies of device performance on the tandem structures were clarified by tuning the thickness, donor/acceptor weight ratio as well as the donor–acceptor distribution in both active layers of the two sub-cells. Finally, employing searching algorithms, we optimized the power conversion efficiency of the tandem polymer solar cells and located the optimal device structure parameters. With the proposed multiscale simulation strategy, poly(3-hexylthiophene)/phenyl-C61-butyric acid methyl ester and (poly[2,6-(4,4-bis-(2-ethylhexyl)-4H-cyclopenta[2,1-b;3,4-b]dithiophene)-alt-4,7-(2,1,3-benzothiadiazole)])/phenyl-C61-butyric acid methyl ester based tandem solar cells were simulated and optimized as an example. Two configurations with different sub-cell sequences in the tandem photovoltaic device were tested and compared. The comparison of the simulation results between the two configurations demonstrated that the balance between the two sub-cells is of critical importance for tandem organic photovoltaics to achieve high performance. Consistency between the optimization results and the reported experimental results proved the effectiveness of the proposed simulation scheme. PMID:28144571

  12. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    Science.gov (United States)

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.

  13. A FAST SEAMLESS HANDOVER SCHEME AND ITS CDT OPTIMIZATION FOR PING-PONG TYPE OF MOVEMENT

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In mobile IPv6 networks, the ping-pong type of movement brings about frequent handovers and thus increases the signaling burden. This letter proposes a fast seamless handover scheme in which the access router keeps the mobile node's old reservation until the offline Count Down Timer (CDT) expires, in order to reduce handover signaling and delay when the mobile node returns within a very short period of time. Based upon a Poisson mobility model, a simple expression for CDT optimization is given for the scheme to achieve the best cost performance of resource reservation.

  14. Scheme to Implement Optimal Symmetric 1 → 2 Universal Quantum Telecloning Through Cavity-Assisted Interaction

    Institute of Scientific and Technical Information of China (English)

    YANG Zhen; ZHANG Wen-Hai; HE Juan; YE Liu

    2008-01-01

    We propose a scheme to implement the optimal symmetric 1 → 2 universal quantum telecloning through cavity-assisted interaction. In our scheme, an arbitrary single atomic state can be telecloned to two single atomic states, with three atoms trapped in three spatially separated cavities. With a particular multiparticle entangled state acting as a quantum information channel and each trapped single atom acting as a quantum network node, thanks to its long-lived internal state, quantum information can be telecloned among the nodes and stored in the nodes.

  15. Determination of Optimal Energy Efficient Separation Schemes based on Driving Forces

    DEFF Research Database (Denmark)

    Bek-Pedersen, Erik; Gani, Rafiqul; Levaux, O.

    2000-01-01

    A new integrated approach for synthesis, design and operation of separation schemes is presented. This integrated approach is based on driving forces that promote the desired separation for different separation techniques. A set of algorithms needed by the integrated approach for sequencing...... and design of distillation columns and for generating hybrid separation schemes are presented. The main feature of these algorithms is that they provide a 'visual' solution that also appears to be near optimal in terms of energy consumption. Several illustrative examples highlighting the application...

  16. An energy-efficient adaptive sampling scheme for wireless sensor networks

    NARCIS (Netherlands)

    Masoum, Alireza; Meratnia, Nirvana; Havinga, Paul J.M.

    2013-01-01

    Wireless sensor networks are new monitoring platforms. To cope with their resource constraints, in terms of energy and bandwidth, spatial and temporal correlation in sensor data can be exploited to find an optimal sampling strategy to reduce number of sampling nodes and/or sampling frequencies while

  17. A Programmable Look-Up Table-Based Interpolator with Nonuniform Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Élvio Carlos Dutra e Silva Júnior

    2012-01-01

    Interpolation is a useful technique for storing complex functions in limited memory space: a few sample values are stored in a memory bank, and the function values in between are calculated by interpolation. This paper presents a programmable Look-Up Table-based interpolator, which uses a reconfigurable nonuniform sampling scheme: the sampled points are not uniformly spaced, and their distribution can be reconfigured to minimize the approximation error on specific portions of the interpolated function's domain. Switching from one set of configuration parameters to another, selected on the fly from a variety of precomputed parameters, and using different sampling schemes allows for the interpolation of a plethora of functions, achieving memory savings and minimum approximation error. As a case study, the proposed interpolator was used as the core of a programmable noise generator: output signals drawn from different Probability Density Functions were produced for testing FPGA implementations of chaotic encryption algorithms. As a result of the proposed method, the interpolation of a specific transformation function of a Gaussian noise generator reduced memory usage to 2.71% compared to the traditional uniform sampling scheme, while keeping the approximation error below a threshold equal to 0.000030518.
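
    A minimal sketch of the nonuniform look-up-table idea is given below: breakpoints are packed more densely where the target function curves sharply, and intermediate values are recovered by piecewise-linear interpolation. The breakpoint placement and the example function are illustrative assumptions, not the paper's hardware design.

```python
# Sketch: look-up-table interpolation with nonuniformly spaced breakpoints.
# More table entries are placed where the function changes quickly (sqrt bends
# sharply near 0), fewer where it is nearly linear; np.interp does the
# piecewise-linear lookup. Illustrative only.
import numpy as np

f = np.sqrt                                      # example function to tabulate
# Nonuniform breakpoints: dense near 0, sparse toward 4
xp = np.concatenate([np.linspace(0.0, 0.5, 12), np.linspace(0.6, 4.0, 8)])
fp = f(xp)                                       # stored table values

x = np.linspace(0.0, 4.0, 1001)                  # query points
approx = np.interp(x, xp, fp)                    # piecewise-linear interpolation
max_err = np.max(np.abs(approx - f(x)))
print(f"table size = {xp.size}, max abs error = {max_err:.4f}")
```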

  18. Integrating hydrograph modeling with real-time flow monitoring to generate hydrograph-specific sampling schemes

    Science.gov (United States)

    Gall, Heather E.; Jafvert, Chad T.; Jenkinson, Byron

    2010-11-01

    Automated sample collection for water quality research and evaluation generally is performed by simple time-paced or flow-weighted sampling protocols. However, samples collected on strict time-paced or flow-weighted schemes may not adequately capture all elements of storm event hydrographs (i.e., rise, peak, and recession). This can result in inadequate information for calculating chemical mass flux over storm events. In this research, an algorithm was developed to guide automated sampling of hydrographs based on storm-specific information. A key element of the new "hydrograph-specific sampling scheme" is the use of a hydrograph recession model for predicting the hydrograph recession curve, during which flow-paced intervals are calculated for scheduling the remaining samples. The algorithm was tested at a tile drained Midwest agricultural site where real-time flow data were processed by a programmable datalogger that in turn activated an automated sampler at the appropriate sampling times to collect a total of twenty samples during each storm event independent of the number of sequential hydrographs generated. The utility of the algorithm was successfully tested with hydrograph data collected at both a tile drain and agricultural ditch, suggesting the potential for general applicability of the method. This sampling methodology is flexible in that the logic can be adapted for use with any hydrograph recession model; however, in this case a power law equation proved to be the most practical model.
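
    The flow-paced recession scheduling described above can be sketched as follows: given a predicted recession limb, the remaining samples are placed at equal increments of cumulative discharge rather than equal time steps. The power-law recession form, its parameters, and the sample count below are illustrative assumptions, not the calibrated model from the study.

```python
# Sketch: schedule the remaining samples at equal cumulative-flow increments
# along a predicted recession limb. The recession form Q(t) = Q_peak*(1+t/k)^-b
# and its parameters are assumed for illustration only.
import numpy as np

def recession_sample_times(q_peak, k, b, n_samples, horizon_h, dt_h=0.01):
    t = np.arange(0.0, horizon_h, dt_h)                 # hours after the peak
    q = q_peak * (1.0 + t / k) ** (-b)                  # predicted discharge
    cum_volume = np.cumsum(q) * dt_h                    # cumulative flow volume
    targets = np.linspace(0, cum_volume[-1], n_samples + 1)[1:]
    return np.interp(targets, cum_volume, t)            # flow-paced sample times

times = recession_sample_times(q_peak=2.0, k=3.0, b=1.5,
                               n_samples=10, horizon_h=48.0)
print(np.round(times, 2))   # samples cluster early, when flow (and flux) is high
```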

  19. Optimizing sampling approaches along ecological gradients

    DEFF Research Database (Denmark)

    Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel

    2016-01-01

    1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying...

  20. A Synchrophasor Based Optimal Voltage Control Scheme with Successive Voltage Stability Margin Improvement

    Directory of Open Access Journals (Sweden)

    Heng-Yi Su

    2016-01-01

    This paper proposes an optimal control scheme based on a synchronized phasor (synchrophasor) for power system secondary voltage control. The framework covers voltage stability monitoring and control. Specifically, a voltage stability margin estimation algorithm is developed and built into the newly designed adaptive secondary voltage control (ASVC) method to achieve more reliable and efficient voltage regulation in power systems. This new approach is applied to improve the voltage profile across the entire power grid by an optimized plan for VAR (reactive power) source allocation; therefore, the voltage stability margin of a power system can be increased to reduce the risk of voltage collapse. An extensive simulation study on the IEEE 30-bus test system is carried out to demonstrate the feasibility and effectiveness of the proposed scheme.

  1. Optimizing sampling approaches along ecological gradients

    DEFF Research Database (Denmark)

    Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel

    2016-01-01

    1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying...... patterns in a statistically solid and reproducible manner, given the normal restrictions in labour, time and money. However, a technical guideline about an adequate sampling design to maximize prediction success under restricted resources is lacking. This study aims at developing such a solid...... and reproducible guideline for sampling along gradients in all fields of ecology and science in general. 2. We conducted simulations with artificial data for five common response types known in ecology, each represented by a simple function (no response, linear, exponential, symmetric unimodal and asymmetric...

  2. Optimization and Sampling for NLP from a Unified Viewpoint

    NARCIS (Netherlands)

    Dymetman, M.; Bouchard, G.; Carter, S.; Bhattacharyya, B.; Ekbal, A.; Saha, S.; Johnson, M.; Molla-Aliod, D.; Dras, M.

    2012-01-01

    The OS* algorithm is a unified approach to exact optimization and sampling, based on incremental refinements of a functional upper bound, which combines ideas of adaptive rejection sampling and of A* optimization search. We first give a detailed description of OS*. We then explain how it can be

  3. Optimization of Dengue Epidemics: A Test Case with Different Discretization Schemes

    Science.gov (United States)

    Rodrigues, Helena Sofia; Monteiro, M. Teresa T.; Torres, Delfim F. M.

    2009-09-01

    The incidence of the Dengue epidemiologic disease has grown in recent decades. In this paper an application of optimal control to Dengue epidemics is presented. The mathematical model includes the dynamics of the Dengue mosquito, the affected persons, the people's motivation to combat the mosquito, and the inherent social costs of the disease, such as the costs of ill individuals, education and sanitary campaigns. The dynamic model comprises a set of nonlinear ordinary differential equations. The problem was discretized through Euler and Runge-Kutta schemes, and solved using nonlinear optimization packages. The computational results as well as the main conclusions are shown.
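
    As a reminder of what such a discretization looks like, the sketch below advances a single nonlinear ODE with both a forward-Euler step and a classical fourth-order Runge-Kutta step; the logistic test equation and step size are illustrative, not the Dengue model itself.

```python
# Sketch: forward Euler vs. classical RK4 on a simple nonlinear ODE
# (logistic growth), standing in for one state equation of the Dengue model.
def f(t, y):
    return y * (1.0 - y)          # logistic right-hand side (illustrative)

def euler_step(t, y, h):
    return y + h * f(t, y)

def rk4_step(t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

h, t, y_e, y_r = 0.5, 0.0, 0.1, 0.1
for _ in range(20):               # integrate to t = 10
    y_e, y_r = euler_step(t, y_e, h), rk4_step(t, y_r, h)
    t += h
print(y_e, y_r)                   # both approach the equilibrium y = 1
```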

  4. AN OPTIMIZED SCHEME FOR FAST HANDOFF IN IP-BASED CDMA WIRELESS COMMUNICATION NETWORKS

    Institute of Scientific and Technical Information of China (English)

    段世平; 徐友云; 宋文涛; 罗汉文

    2002-01-01

    This paper proposes an optimized fast handoff scheme for real-time applications in next-generation IP-based CDMA wireless networks. The idea is to utilize optimized IP multicasting handoff (based on PIM-SM), which is triggered by CDMA layer-2 functionality. An IP-based cellular network model with a WCDMA FDD air interface and IP-based packet traffic is adopted. No special network entities or signaling for handoff are added in our network model. The simulation results show that low delay and a low packet loss rate can be obtained.

  5. Towards a Global Optimization Scheme for Multi-Band Speech Recognition

    OpenAIRE

    Cerisara, Christophe; Haton, Jean-Paul; Fohr, Dominique

    1999-01-01

    Conference paper with proceedings and peer review. In this paper, we deal with a new method to globally optimize a Multi-Band Speech Recognition (MBSR) system. We have tested our algorithm with the TIMIT database and obtained a significant improvement in accuracy over a basic HMM system for clean speech. The goal of this work is not to prove the effectiveness of MBSR, which has already been done, but to improve the training scheme by introducing a global optimization procedure. A consequence of this me...

  6. Optimization of Dengue Epidemics: a test case with different discretization schemes

    CERN Document Server

    Rodrigues, Helena Sofia; Torres, Delfim F M; 10.1063/1.3241345

    2010-01-01

    The incidence of the Dengue epidemiologic disease has grown in recent decades. In this paper an application of optimal control to Dengue epidemics is presented. The mathematical model includes the dynamics of the Dengue mosquito, the affected persons, the people's motivation to combat the mosquito, and the inherent social costs of the disease, such as the costs of ill individuals, education and sanitary campaigns. The dynamic model comprises a set of nonlinear ordinary differential equations. The problem was discretized through Euler and Runge-Kutta schemes, and solved using nonlinear optimization packages. The computational results as well as the main conclusions are shown.

  7. Fuzzy Approach for Selecting Optimal COTS Based Software Products Under Consensus Recovery Block Scheme

    Directory of Open Access Journals (Sweden)

    P. C. Jha

    2011-01-01

    The cost associated with development of a large and complex software system is formidable. In today's customer-driven market, improvement of quality aspects, in terms of the reliability of the product, is also gaining increased importance. But resources are limited and the manager has to maneuver within a tight schedule. In order to meet these challenges, many organizations are making use of Commercial Off-The-Shelf (COTS) software. This paper develops a fuzzy multi-objective optimization modelling approach for selecting the optimal COTS software product among alternatives for each module in the development of a modular software system. The problem is formulated for the consensus recovery block fault-tolerant scheme. In today's ever-changing environment, it is arduous to estimate the precise cost and reliability of software. Therefore, we develop fuzzy multi-objective optimization models for selecting optimal COTS software products. Numerical illustrations are provided to demonstrate the models developed.

  8. Queuing Game Theory Based Optimal Routing Scheme for Heterogeneous Users over Space Information Networks

    Directory of Open Access Journals (Sweden)

    Chao Guo

    2017-01-01

    An optimal routing scheme in space information networks is presented to balance network loads among heterogeneous users. The model is built on queuing game theory, according to the competition among the nodes. A virtual routing platform is in charge of resource allocation and route selection; it uses each user's gain to decide which node the user joins. Owing to the existence of heterogeneous users, an optimal admission fee needs to be obtained to avoid congestion. In our model, the overall welfare of the system is first formulated; the optimal admission fee is then calculated by maximizing this welfare. Meanwhile, the average maximum queue length is derived to set the buffer space of each node. Finally, a routing factor is introduced into the routing algorithm so that the optimal route can be selected by heterogeneous users. As a result, the system welfare reaches its maximum.

  9. Study on the structure optimization scheme design of a double-tube once-through steam generator

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Xinyu; Wu, Shifa; Wang, Pengfei; Zhao, Fuyu [Dept. of Nuclear Science and Technology, Xi' an Jiaotong University, Xi' an (China)

    2016-08-15

    A double-tube once-through steam generator (DOTSG) consisting of an outer straight tube and an inner helical tube is studied in this work. First, the structure of the DOTSG is optimized by considering two different objective functions: the tube length and the total pressure drop are taken as the first and second objective functions, respectively. Because the DOTSG is divided into subcooled, boiling, and superheated sections according to the different secondary fluid states, the pitches in the three sections are defined as the optimization variables. A multi-objective optimization model is established and solved by particle swarm optimization. The optimal pitch is small in the subcooled and superheated regions, and large in the boiling region. Considering the availability of the optimum structure at power levels below 100% full power, we propose a new operating scheme that can fix the boundaries between the three heat-transfer sections. The operating scheme is proposed on the basis of data at full power, and the operating parameters are calculated at low power levels. The primary inlet and outlet temperatures, as well as the flow rate and secondary outlet temperature, are changed according to the operating procedure.

  10. The optimization on flow scheme of helium liquefier with genetic algorithm

    Science.gov (United States)

    Wang, H. R.; Xiong, L. Y.; Peng, N.; Liu, L. Q.

    2017-01-01

    There are several ways to organize the flow scheme of helium liquefiers, such as arranging the expanders in parallel (reverse Brayton stage) or in series (modified Brayton stages). In this paper, the inlet mass flows and temperatures of the expanders in the Collins cycle are optimized using a genetic algorithm (GA). Results show that the maximum liquefaction rate can be obtained when the system operates at the optimal parameters. However, the reliability of the system is poor, owing to the high wheel speed of the first turbine. The study shows that the scheme in which expanders are arranged in series, with heat exchangers between them, has higher operational reliability but lower plant efficiency when working under the same conditions. Considering both liquefaction rate and system stability, another flow scheme is put forward in the hope of resolving this dilemma. The three configurations are compared from different aspects: economic cost, heat exchanger size, system reliability and exergy efficiency. In addition, the effect of the heat capacity ratio on heat transfer efficiency is discussed. A conclusion on choosing the liquefier configuration is given at the end, which is meaningful for the optimal design of helium liquefiers.

  11. Optimized acquisition scheme for multi-projection correlation imaging of breast cancer

    Science.gov (United States)

    Chawla, Amarpreet S.; Samei, Ehsan; Saunders, Robert S.; Lo, Joseph Y.; Singh, Swatee

    2008-03-01

    We report the optimized acquisition scheme of the multi-projection breast Correlation Imaging (CI) technique, which was pioneered in our lab at Duke University. CI is similar to tomosynthesis in its image acquisition scheme; however, instead of analyzing the reconstructed images, the projection images are analyzed directly for pathology. Earlier, we presented an optimized data acquisition scheme for CI using a mathematical observer model. In this article, we present a Computer Aided Detection (CADe)-based optimization methodology. Toward that end, images from 106 subjects recruited for an ongoing clinical trial for tomosynthesis were employed. For each patient, 25 angular projections of each breast were acquired. Projection images were supplemented with a simulated 3 mm 3D lesion. Each projection was first processed by a traditional CADe algorithm at high sensitivity, followed by a reduction of false positives by combining the geometrical correlation information available from the multiple images. Performance of the CI system was determined in terms of free-response receiver operating characteristic (FROC) curves and the area under the ROC curve. For optimization, the components of acquisition, such as the number of projections and their angular span, were systematically changed to investigate which of the many possible combinations maximized sensitivity and specificity. Results indicated that the performance of the CI system may be maximized with 7-11 projections spanning an angular arc of 44.8°, confirming our earlier findings using observer models. These results indicate that an optimized CI system may potentially be an important diagnostic tool for improved breast cancer detection.

  12. Evaluation of alternative macroinvertebrate sampling techniques for use in a new tropical freshwater bioassessment scheme

    Directory of Open Access Journals (Sweden)

    Isabel Eleanor Moore

    2015-06-01

    Aim: The study aimed to determine the effectiveness of benthic macroinvertebrate dredge net sampling procedures as an alternative to kick net sampling in tropical freshwater systems, specifically as an evaluation of the sampling methods used in the Zambian Invertebrate Scoring System (ZISS) river bioassessment scheme. Tropical freshwater ecosystems are sometimes dangerous or inaccessible to sampling teams using traditional kick-sampling methods, so identifying an alternative procedure that produces similar results is necessary in order to collect data from a wide variety of habitats. Methods: Both kick and dredge nets were used to collect macroinvertebrate samples at 16 riverine sites in Zambia, ranging from backwaters and floodplain lagoons to fast-flowing streams and rivers. The data were used to calculate ZISS, diversity (S: number of taxa present) and Average Score Per Taxon (ASPT) scores per site, using the two sampling methods to compare their sampling effectiveness. Environmental parameters, namely pH, conductivity, underwater photosynthetically active radiation (PAR), temperature, alkalinity, flow, and altitude, were also recorded and used in statistical analysis. Invertebrate communities present at the sample sites were determined using multivariate procedures. Results: Analysis of the invertebrate community and environmental data suggested that the testing exercise was undertaken in four distinct macroinvertebrate community types, supporting at least two quite different macroinvertebrate assemblages and showing significant differences in habitat conditions. Significant correlations were found for all three bioassessment score variables between results acquired using the two methods, with dredge sampling normally producing lower scores than the kick net procedures. Linear regression models were produced in order to correct each biological variable score collected by a dredge net to a score similar to that of one collected by kick net.

  13. An optimized watermarking scheme using an encrypted gyrator transform computer generated hologram based on particle swarm optimization.

    Science.gov (United States)

    Li, Jianzhong

    2014-04-21

    In this paper, a novel secure optimal image watermarking scheme using an encrypted gyrator transform computer-generated hologram (CGH) in the contourlet domain is presented. A new encrypted CGH technique, based on the gyrator transform, a random phase mask, three-step phase-shifting interferometry and the Fibonacci transform, is first proposed to produce a hologram of the watermark. With the huge key space of the encrypted CGH, the security strength of the watermarking system is enhanced. To achieve better imperceptibility, an improved quantization embedding algorithm is proposed to embed the encrypted CGH into the low-frequency sub-band of the contourlet-transformed host image. In order to obtain the highest possible robustness without losing imperceptibility, a particle swarm optimization algorithm is employed to search for the optimal embedding parameter of the watermarking system. In comparison with other methods, the proposed watermarking scheme offers better performance in both imperceptibility and robustness. Experimental results demonstrate that the proposed image watermarking is not only secure and invisible, but also robust against a variety of attacks.

  14. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    Science.gov (United States)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01

    Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique, developed for atomic clock stability assessment by David W. Allan [1], can be effectively and gainfully applied to continuous-measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the translation from time series to Allan deviation plot for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimizing and predicting the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits
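
    The core computation behind an Allan deviation plot is short enough to sketch directly: the time series is averaged over bins of increasing length τ, and the Allan variance at each τ is half the mean squared difference of successive bin averages. The synthetic white-noise-plus-drift signal below is an assumed illustration, not Picarro data.

```python
# Sketch: non-overlapping Allan deviation of an evenly sampled time series.
# sigma^2(tau) = 0.5 * <(ybar_{i+1} - ybar_i)^2>, where ybar are averages over
# consecutive bins of length tau = m * tau0. Synthetic data for illustration.
import numpy as np

def allan_deviation(y, tau0, m_values):
    taus, adevs = [], []
    for m in m_values:
        n_bins = len(y) // m
        if n_bins < 2:
            break
        ybar = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(ybar) ** 2)
        taus.append(m * tau0)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)

rng = np.random.default_rng(42)
tau0 = 1.0                                    # seconds between samples
t = np.arange(100_000) * tau0
y = rng.normal(0, 1.0, t.size) + 1e-5 * t     # white noise plus slow drift
taus, adevs = allan_deviation(y, tau0, m_values=2 ** np.arange(1, 15))
for tau, adev in zip(taus, adevs):
    print(f"tau = {tau:8.0f} s  ADEV = {adev:.4g}")
```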

  15. Optimization of separation and detection schemes for DNA with pulsed field slab gel and capillary electrophoresis

    Energy Technology Data Exchange (ETDEWEB)

    McGregor, David A. [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    The purpose of the Human Genome Project is outlined followed by a discussion of electrophoresis in slab gels and capillaries and its application to deoxyribonucleic acid (DNA). Techniques used to modify electroosmotic flow in capillaries are addressed. Several separation and detection schemes for DNA via gel and capillary electrophoresis are described. Emphasis is placed on the elucidation of DNA fragment size in real time and shortening separation times to approximate real time monitoring. The migration of DNA fragment bands through a slab gel can be monitored by UV absorption at 254 nm and imaged by a charge coupled device (CCD) camera. Background correction and immediate viewing of band positions to interactively change the field program in pulsed-field gel electrophoresis are possible throughout the separation. The use of absorption removes the need for staining or radioisotope labeling thereby simplifying sample preparation and reducing hazardous waste generation. This leaves the DNA in its native state and further analysis can be performed without de-staining. The optimization of several parameters considerably reduces total analysis time. DNA from 2 kb to 850 kb can be separated in 3 hours on a 7 cm gel with interactive control of the pulse time, which is 10 times faster than the use of a constant field program. The separation of ΦX174RF DNA-HaeIII fragments is studied in a 0.5% methyl cellulose polymer solution as a function of temperature and applied voltage. The migration times decreased with both increasing temperature and increasing field strength, as expected. The relative migration rates of the fragments do not change with temperature but are affected by the applied field. Conditions were established for the separation of the 271/281 bp fragments, even without the addition of intercalating agents. At 700 V/cm and 20°C, all fragments are separated in less than 4 minutes with an average plate number of 2.5 million per meter.

  16. The cognitive mechanisms of optimal sampling.

    Science.gov (United States)

    Lea, Stephen E G; McLaren, Ian P L; Dow, Susan M; Graft, Donald A

    2012-02-01

    How can animals learn the prey densities available in an environment that changes unpredictably from day to day, and how much effort should they devote to doing so, rather than exploiting what they already know? Using a two-armed bandit situation, we simulated several processes that might explain the trade-off between exploring and exploiting. They included an optimising model, dynamic backward sampling; a dynamic version of the matching law; the Rescorla-Wagner model; a neural network model; and ɛ-greedy and rule of thumb models derived from the study of reinforcement learning in artificial intelligence. Under conditions like those used in published studies of birds' performance under two-armed bandit conditions, all models usually identified the more profitable source of reward, and did so more quickly when the reward probability differential was greater. Only the dynamic programming model switched from exploring to exploiting more quickly when available time in the situation was less. With sessions of equal length presented in blocks, a session-length effect was induced in some of the models by allowing motivational, but not memory, carry-over from one session to the next. The rule of thumb model was the most successful overall, though the neural network model also performed better than the remaining models. Copyright © 2011 Elsevier B.V. All rights reserved.
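
    For concreteness, a minimal sketch of one of the simpler models mentioned, an ɛ-greedy learner on a two-armed bandit (the reward probabilities, trial counts, and incremental-mean value update are illustrative assumptions, not the authors' parameterization):

    ```python
    import numpy as np

    def epsilon_greedy_bandit(p_arms, n_trials, epsilon=0.1, seed=0):
        """Simulate an epsilon-greedy forager on a two-armed bandit.

        p_arms  : reward probabilities of the two patches/arms
        returns : number of choices of the richer arm
        """
        rng = np.random.default_rng(seed)
        counts = np.zeros(2)          # visits per arm
        values = np.zeros(2)          # running estimate of reward probability
        best_arm = int(np.argmax(p_arms))
        best_arm_choices = 0
        for _ in range(n_trials):
            if rng.random() < epsilon:        # explore
                arm = int(rng.integers(2))
            else:                             # exploit the current estimate
                arm = int(np.argmax(values))
            reward = rng.random() < p_arms[arm]
            counts[arm] += 1
            values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean
            best_arm_choices += (arm == best_arm)
        return best_arm_choices

    # A larger probability differential should make the richer arm easier to identify.
    print(epsilon_greedy_bandit([0.8, 0.2], 500))
    print(epsilon_greedy_bandit([0.55, 0.45], 500))
    ```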

  17. A Fairness-Based Access Control Scheme to Optimize IPTV Fast Channel Changing

    Directory of Open Access Journals (Sweden)

    Junyu Lai

    2014-01-01

    Full Text Available IPTV services are typically featured with a longer channel changing delay compared to the conventional TV systems. The major contributor to this lies in the time spent on intraframe (I-frame acquisition during channel changing. Currently, most widely adopted fast channel changing (FCC methods rely on promptly transmitting to the client (conducting the channel changing a retained I-frame of the targeted channel as a separate unicasting stream. However, this I-frame acceleration mechanism has an inherent scalability problem due to the explosions of channel changing requests during commercial breaks. In this paper, we propose a fairness-based admission control (FAC scheme for the original I-frame acceleration mechanism to enhance its scalability by decreasing the bandwidth demands. Based on the channel changing history of every client, the FAC scheme can intelligently decide whether or not to conduct the I-frame acceleration for each channel change request. Comprehensive simulation experiments demonstrate the potential of our proposed FAC scheme to effectively optimize the scalability of the I-frame acceleration mechanism, particularly in commercial breaks. Meanwhile, the FAC scheme only slightly increases the average channel changing delay by temporarily disabling FCC (i.e., I-frame acceleration for the clients who are addicted to frequent channel zapping.

  18. Adaptive control schemes for improving dynamic performance of efficiency-optimized induction motor drives.

    Science.gov (United States)

    Kumar, Navneet; Raj Chelliah, Thanga; Srivastava, S P

    2015-07-01

    Model Based Control (MBC) is one of the energy-optimal controllers used in vector-controlled Induction Motor (IM) drives for controlling the excitation of the motor in accordance with torque and speed. MBC offers energy conservation especially at part-load operation, but it creates ripples in torque and speed during load transition, leading to poor dynamic performance of the drive. This study investigates the opportunity for improving the dynamic performance of a three-phase IM operating with MBC and proposes three control schemes: (i) MBC with a low pass filter, (ii) torque-producing current (iqs) injection in the output of the speed controller, and (iii) Variable Structure Speed Controller (VSSC). The pre- and post-transition operation of MBC is also analyzed. The dynamic performance of a 1-hp, three-phase squirrel-cage IM with a mine-hoist load diagram is tested. Test results are provided for the conventional field-oriented (constant flux) control and MBC (adjustable excitation) with the proposed schemes. The effectiveness of the proposed schemes is also illustrated for parametric variations. The test results and subsequent analysis confirm that the motor dynamics improve significantly with all three proposed schemes in terms of overshoot/undershoot peak amplitude of torque and DC link power, in addition to energy saving during load transitions.

  19. A density-dependent matrix model and its applications in optimizing harvest schemes

    Institute of Scientific and Technical Information of China (English)

    Guofan Shao; WANG Fei; DAI Limin; BAI Jianwei; LI Yingshan

    2006-01-01

    Based on temporal data collected from 36 re-measured plots, transition probabilities of trees from a diameter class to a higher class were analyzed for the broadleaved-Korean pine forest in the Changbai Mountains. It was found that the transition probabilities were related not only to diameter size but also to the total basal area of trees within the diameter class. This paper demonstrates the development of a density-dependent matrix model, DM2, and a series of simulations with it for forest stands with different conditions under different harvest schemes. After validation with independent field data, this model proved to be a suitable tool for optimization analysis of harvest schemes on computers. The optimum harvest scheme(s) can be determined by referring to stand growth, total timber harvested, and size diversity changes over time. Three user-friendly interfaces were built with the forest management decision support system FORESTAR(R) for easy operation of DM2 by forest managers. This paper also summarizes the advantages and disadvantages of DM2.
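
    A toy projection in the spirit of a density-dependent diameter-class matrix model (the class boundaries, the linear form of the density dependence, and the omission of ingrowth and mortality are all simplifying assumptions for illustration, not the structure of DM2 itself):

    ```python
    import numpy as np

    def project_stand(n0, base_up_prob, density_coef, mean_ba, years, harvest=None):
        """Project a stand table with a density-dependent transition matrix.

        n0            : initial number of trees per diameter class (1-D array)
        base_up_prob  : baseline probability of moving up one class, per class
        density_coef  : reduction in up-growth per unit of stand basal area
        mean_ba       : mean basal area per tree in each class (m^2)
        harvest       : optional fraction removed from each class every period
        """
        n = np.asarray(n0, dtype=float)
        k = len(n)
        for _ in range(years):
            total_ba = float(np.dot(n, mean_ba))
            # up-growth probability shrinks as the stand becomes denser
            p_up = np.clip(base_up_prob - density_coef * total_ba, 0.0, 1.0)
            G = np.zeros((k, k))
            for i in range(k):
                if i + 1 < k:
                    G[i + 1, i] = p_up[i]          # move to the next class
                    G[i, i] = 1.0 - p_up[i]        # stay in the class
                else:
                    G[i, i] = 1.0                  # largest class retains its trees
            n = G @ n
            if harvest is not None:
                n = n * (1.0 - np.asarray(harvest))
        return n

    n0 = [120, 80, 40, 15, 5]
    p_up = np.array([0.20, 0.15, 0.10, 0.05, 0.0])
    ba = np.array([0.01, 0.03, 0.07, 0.12, 0.20])
    print(project_stand(n0, p_up, density_coef=0.002, mean_ba=ba, years=10))
    ```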

  20. An Efficient Radial Basis Function Mesh Deformation Scheme within an Adjoint-Based Aerodynamic Optimization Framework

    Science.gov (United States)

    Poirier, Vincent

    Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method; however, at the cost of introducing errors into the parameterization by not recovering the exact displacement of all surface points. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining lift of a wing-body configured Boeing-747 and an Onera-M6 wing. As well, an inverse pressure design is executed on the Onera-M6 wing and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.
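
    A minimal sketch of the core RBF mesh-movement step the abstract builds on: displacements prescribed at a reduced set of surface control points are interpolated to the volume nodes with a compactly supported Wendland C2 basis (the point sets, support radius, and basis choice are illustrative assumptions, not the thesis' implementation):

    ```python
    import numpy as np

    def rbf_mesh_deformation(surf_pts, surf_disp, vol_pts, radius=1.0):
        """Propagate surface displacements to volume mesh nodes with Wendland C2 RBFs.

        surf_pts  : (ns, 3) reduced set of surface control points
        surf_disp : (ns, 3) prescribed displacements at those points
        vol_pts   : (nv, 3) volume mesh nodes to be moved
        """
        def wendland_c2(r):
            x = np.clip(r / radius, 0.0, 1.0)
            return (1.0 - x) ** 4 * (4.0 * x + 1.0)

        # Solve for the RBF coefficients from the control-point interpolation conditions
        d_ss = np.linalg.norm(surf_pts[:, None, :] - surf_pts[None, :, :], axis=-1)
        weights = np.linalg.solve(wendland_c2(d_ss), surf_disp)   # (ns, 3)

        # Evaluate the interpolant at the volume nodes and move them
        d_vs = np.linalg.norm(vol_pts[:, None, :] - surf_pts[None, :, :], axis=-1)
        return vol_pts + wendland_c2(d_vs) @ weights

    surf = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
    disp = np.array([[0.1, 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]])
    vol = np.random.default_rng(1).uniform(0, 1, size=(5, 3))
    print(rbf_mesh_deformation(surf, disp, vol, radius=2.0))
    ```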

  1. ECCM scheme against interrupted sampling repeater jammer based on time-frequency analysis

    Institute of Scientific and Technical Information of China (English)

    Shixian Gong; Xizhang Wei; Xiang Li

    2014-01-01

    The interrupted sampling repeater jamming (ISRJ) is an effective deception jamming method for coherent radar, especially for the wideband linear frequency modulation (LFM) radar. An electronic counter-countermeasure (ECCM) scheme is proposed to remove the ISRJ-based false targets from the pulse compression result of the de-chirping radar. Through the time-frequency (TF) analysis of the radar echo signal, it can be found that the TF characteristics of the ISRJ signal are discontinuous in the pulse duration because the ISRJ jammer needs short durations to receive the radar signal. Based on the discontinuous characteristics, a particular band-pass filter can be generated by two alternative approaches to retain the true target signal and suppress the ISRJ signal. The simulation results prove the validity of the proposed ECCM scheme for the ISRJ.

  2. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    Directory of Open Access Journals (Sweden)

    Huan Chen

    Full Text Available This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose to jointly optimize flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% communication cost compared with the state-of-the-art switch-based scheme.
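
    A toy sketch of the flavour of such an ILP, restricted to the polling-switch-selection part (a set-cover style model); the topology, costs, and the use of the PuLP package are assumptions for illustration, and the paper's full model additionally includes the routing variables:

    ```python
    # Pick polling switches so that every flow is observed by at least one polled
    # switch, minimizing the total polling (communication) cost. Requires PuLP.
    import pulp

    flows = {"f1": ["s1", "s2"], "f2": ["s2", "s3"], "f3": ["s3", "s4"]}  # switches on each path
    poll_cost = {"s1": 4, "s2": 3, "s3": 2, "s4": 5}

    prob = pulp.LpProblem("polling_switch_selection", pulp.LpMinimize)
    x = {s: pulp.LpVariable(f"poll_{s}", cat="Binary") for s in poll_cost}

    prob += pulp.lpSum(poll_cost[s] * x[s] for s in poll_cost)          # communication cost
    for f, path in flows.items():
        prob += pulp.lpSum(x[s] for s in path) >= 1, f"cover_{f}"       # every flow observed

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({s: int(x[s].value()) for s in poll_cost})
    ```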

  3. Hybrid microsystem with functionalized silicon substrate and PDMS sample operating microchannel: A reconfigurable microfluidics scheme

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    A hybrid microsystem with a separately functioned temperature controlling substrate and sample operating fluidic microchannel was developed to demonstrate a reconfigurable microfluidics scheme. The temperature controlling substrate integrated a micro heater and a temperature sensor by using a traditional silicon-based micromechanical system (MEMS) technique, which guaranteed high performance and robust reliability for repeatable usage. The sample operating fluidic microchannel was prepared by the poly(dimethylsiloxane) (PDMS)-based soft lithography technique, which made it cheap enough for disposable applications. The PDMS microchannel chip was attached to the temperature controlling substrate for reconfigurable thermal applications. A thin PDMS film was used to seal the microchannel and bridge the functionalized substrate and the sample inside the channel, which facilitated heat transfer and prevented the sample from contaminating the temperature controlling substrate. As demonstrated by a one-dimensional thermal resistance model, the thin PDMS film was important for the present reconfiguration applications. The thermal performance of this hybrid microsystem was examined, and the experimental results demonstrated that the chip system could work stably over hours with a temperature variation of less than 0.1 °C. Multiple PDMS microchannel chips were tested on one heating substrate sequentially with a maximum intra-chip temperature difference of 1.0 °C. DNA extracted from the serum of a chronic hepatitis B virus (HBV) patient was amplified by this hybrid microsystem, and the gel electrophoresis result indicated that the present reconfigurable microfluidic scheme worked successfully.

  4. An optimized node-disjoint multipath routing scheme in mobile ad hoc

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Liu, Zhiyu

    2016-02-01

    In mobile ad hoc networks (MANETs), link failures occur frequently because of node mobility and the use of unreliable wireless channels for data transmission. A multipath routing strategy can cope with traffic overloads while balancing network resource consumption. In this paper, an optimized node-disjoint multipath routing (ONMR) protocol based on ad hoc on-demand distance vector (AODV) routing is proposed to establish effective multipath routes and enhance network reliability and robustness. The scheme combines the characteristics of the reverse AODV (R-AODV) strategy and an on-demand node-disjoint multipath routing protocol to determine available node-disjoint routes with minimum routing control overhead. Meanwhile, it adds a backup routing strategy to make the process of data salvation more efficient in case of link failure. The results obtained through various simulations show the effectiveness of the proposed scheme in terms of route availability, control overhead and packet delivery ratio.

  5. Determination of Optimal Opening Scheme for Electromagnetic Loop Networks Based on Fuzzy Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Yang Li

    2016-01-01

    Full Text Available Studying optimization and decision-making for opening electromagnetic loop networks plays an important role in the planning and operation of power grids. First, the basic principle of the fuzzy analytic hierarchy process (FAHP) is introduced, and then an improved FAHP-based scheme evaluation method is proposed for decoupling electromagnetic loop networks based on a set of indicators reflecting the performance of the candidate schemes. The proposed method combines the advantages of the analytic hierarchy process (AHP) and fuzzy comprehensive evaluation. On the one hand, AHP effectively combines qualitative and quantitative analysis to ensure the rationality of the evaluation model; on the other hand, the judgment matrix and qualitative indicators are expressed with trapezoidal fuzzy numbers to make decision-making more realistic. The effectiveness of the proposed method is validated by the application results on the real power system of Liaoning province of China.

  6. Convergent and Correct Message Passing Schemes for Optimization Problems over Graphical Models

    CERN Document Server

    Ruozzi, Nicholas

    2010-01-01

    The max-product algorithm, which attempts to compute the most probable assignment (MAP) of a given probability distribution, has recently found applications in quadratic minimization and combinatorial optimization. Unfortunately, the max-product algorithm is not guaranteed to converge and, even if it does, is not guaranteed to produce the MAP assignment. In this work, we provide a simple derivation of a new family of message passing algorithms. We first show how to arrive at this general message passing scheme by "splitting" the factors of our graphical model and then we demonstrate that this construction can be extended beyond integral splitting. We prove that, for any objective function which attains its maximum value over its domain, this new family of message passing algorithms always contains a message passing scheme that guarantees correctness upon convergence to a unique estimate. We then adopt a serial message passing schedule and prove that, under mild assumptions, such a schedule guarantees the conv...

  7. Designing an Optimal Subsidy Scheme to Reduce Emissions for a Competitive Urban Transport Market

    Directory of Open Access Journals (Sweden)

    Feifei Qin

    2015-08-01

    Full Text Available With the purpose of establishing an effective subsidy scheme to reduce Greenhouse Gas (GHG) emissions, this paper proposes a two-stage game for a competitive urban transport market. In the first stage, the authority determines operating subsidies based on social welfare maximization. Observing the predetermined subsidies, two transit operators set fares and frequencies to maximize their own profits at the second stage. The detailed analytical and numerical analyses demonstrate that, of the three proposed subsidy schemes, the joint implementation of trip-based and frequency-related subsidies not only generates the largest welfare gains and makes competitive operators provide equilibrium fares and frequencies which largely resemble first-best optimal levels, but also greatly contributes to reducing GHG emissions on major urban transport corridors.

  8. Optimizing sparse sampling for 2D electronic spectroscopy

    Science.gov (United States)

    Roeding, Sebastian; Klimovich, Nikita; Brixner, Tobias

    2017-02-01

    We present a new data acquisition concept using optimized non-uniform sampling and compressed sensing reconstruction in order to substantially decrease the acquisition times in action-based multidimensional electronic spectroscopy. For this we acquire a regularly sampled reference data set at a fixed population time and use a genetic algorithm to optimize a reduced non-uniform sampling pattern. We then apply the optimal sampling for data acquisition at all other population times. Furthermore, we show how to transform two-dimensional (2D) spectra into a joint 4D time-frequency von Neumann representation. This leads to increased sparsity compared to the Fourier domain and to improved reconstruction. We demonstrate this approach by recovering transient dynamics in the 2D spectrum of a cresyl violet sample using just 25% of the originally sampled data points.

  9. Scheme for the implementation of 1→3 optimal phase-covariant quantum cloning in ion-trap systems

    Institute of Scientific and Technical Information of China (English)

    Yang Rong-Can; Li Hong-Cai; Lin Xiu; Huang Zhi-Ping; Xie Hong

    2008-01-01

    This paper proposes a scheme for the implementation of 1 → 3 optimal phase-covariant quantum cloning with trapped ions. In the present protocol, the required time for the whole procedure is short due to the resonant interaction,which is important in view of decoherence. Furthermore, the scheme is feasible based on current technologies.

  10. Scheme for Implementation of Ancillary-Free 1 → 3 Optimal Phase-Covariant Quantum Cloning with Trapped Ions

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yan-Ping; YANG Rong-Can; LI Ping; LI Hong-Cai; GOU Qing-Quan; LIN Xiu; LIU Wei-Na; HUANG Zhi-Ping; XIE Hong

    2008-01-01

    We propose a simple scheme for the implementation of the ancillary-free 1 → 3 optimal phase-covariant quantum cloning for x-y equatorial qubits in ion-trap system. In the scheme, the vibrational mode is only virtually excited, which is very important in view of decoherence. The present proposal can be realized based on current available technologies.

  11. A note on a fatal error of optimized LFC private information retrieval scheme and its corrected results

    DEFF Research Database (Denmark)

    Tamura, Jim; Kobara, Kazukuni; Fathi, Hanane

    2010-01-01

    A number of lightweight PIR (Private Information Retrieval) schemes have been proposed in recent years. In JWIS2006, Kwon et al. proposed a new scheme (optimized LFCPIR, or OLFCPIR), which aimed at reducing the communication cost of Lipmaa's O(log^2 n) PIR (LFCPIR) to O(log n). However in this paper

  12. A novel optimal sensitivity design scheme for yarn tension sensor using surface acoustic wave device.

    Science.gov (United States)

    Lei, Bingbing; Lu, Wenke; Zhu, Changchun; Liu, Qinghong; Zhang, Haoxin

    2014-08-01

    In this paper, we propose a novel optimal sensitivity design scheme for a yarn tension sensor using a surface acoustic wave (SAW) device. In order to obtain the best sensitivity, a regression model between the size of the SAW yarn tension sensor substrate and the sensitivity of the SAW yarn tension sensor was established using the least-squares method, and the model was validated. Through analyzing the correspondence between the monotonicity of the regression function and the sign of its partial derivatives, the effect of the SAW yarn tension sensor substrate size on the sensitivity of the SAW yarn tension sensor was investigated. Based on the regression model, a linear programming model was established to obtain the optimal sensitivity of the SAW yarn tension sensor. The linear programming result shows that the maximum sensitivity is achieved when the SAW yarn tension sensor substrate length is equal to 15 mm and its width is equal to 3 mm within a fixed interval of the substrate size. An experiment with a SAW yarn tension sensor about 15 mm long and 3 mm wide is presented. Experimental results show that a maximum sensitivity of 1982.39 Hz/g was accomplished, which confirms that the optimal sensitivity design scheme is useful and effective. Copyright © 2014. Published by Elsevier B.V.
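
    A minimal sketch of the two-step idea (regression fit, then a linear program over the admissible substrate sizes). The calibration data are hypothetical, and a purely linear surrogate is used here, so its optimum necessarily lands on a corner of the feasible box; the paper's richer regression model is what yields the reported 15 mm x 3 mm optimum:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical calibration data: substrate length L (mm), width W (mm), sensitivity S (Hz/g)
    L = np.array([9.0, 11.0, 13.0, 15.0, 9.0, 15.0])
    W = np.array([2.0, 2.0, 3.0, 3.0, 4.0, 4.0])
    S = np.array([1500.0, 1650.0, 1830.0, 1950.0, 1600.0, 1880.0])

    # Least-squares fit of an illustrative linear model S = b0 + b1*L + b2*W
    X = np.column_stack([np.ones_like(L), L, W])
    b0, b1, b2 = np.linalg.lstsq(X, S, rcond=None)[0]

    # Maximize the predicted sensitivity over the admissible size interval
    # (linprog minimizes, so the objective is negated)
    res = linprog(c=[-b1, -b2], bounds=[(9, 15), (2, 4)], method="highs")
    L_opt, W_opt = res.x
    print(f"optimal size ~ {L_opt:.1f} mm x {W_opt:.1f} mm, "
          f"predicted sensitivity {b0 + b1 * L_opt + b2 * W_opt:.0f} Hz/g")
    ```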

  13. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    Directory of Open Access Journals (Sweden)

    Yongkai An

    2015-07-01

    Full Text Available This paper introduces a surrogate model to identify an optimal exploitation scheme, with the western Jilin province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county respectively so as to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A contrast between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours whereas the latter needs 25 days. The above results indicate that the surrogate model developed in this study could not only considerably reduce the computational burden of the simulation optimization process, but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately.
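
    A minimal sketch of the surrogate-building workflow described above: Latin Hypercube samples of the decision variables are run through an "expensive" simulator once, and a cheap surrogate is then queried by the optimizer. Gaussian-process regression is used here as a stand-in for regression kriging, and the simulator, bounds and sample size are invented for illustration:

    ```python
    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Hypothetical expensive simulator: drawdown as a function of two pumping rates
    def simulator(q):
        return 0.8 * q[0] ** 0.9 + 1.2 * q[1] ** 0.8 + 0.1 * q[0] * q[1]

    # 1) Latin Hypercube Sampling of the feasible pumping-rate region
    sampler = qmc.LatinHypercube(d=2, seed=42)
    unit = sampler.random(n=40)
    lo, hi = np.array([0.0, 0.0]), np.array([100.0, 100.0])     # illustrative bounds
    Q = qmc.scale(unit, lo, hi)
    y = np.array([simulator(q) for q in Q])

    # 2) Fit the surrogate (a close cousin of regression kriging)
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=30.0),
                                  normalize_y=True).fit(Q, y)

    # 3) The optimizer now queries the cheap surrogate instead of the simulator
    test = np.array([[50.0, 20.0]])
    print(gp.predict(test), simulator(test[0]))
    ```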

  14. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    Science.gov (United States)

    An, Yongkai; Lu, Wenxi; Cheng, Weiguo

    2015-01-01

    This paper introduces a surrogate model to identify an optimal exploitation scheme, with the western Jilin province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county respectively so as to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A contrast between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours whereas the latter needs 25 days. The above results indicate that the surrogate model developed in this study could not only considerably reduce the computational burden of the simulation optimization process, but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008

  15. Fault isolation filter for networked control system with event-triggered sampling scheme.

    Science.gov (United States)

    Li, Shanbin; Sauter, Dominique; Xu, Bugong

    2011-01-01

    In this paper, the sensor data is transmitted only when the absolute value of difference between the current sensor value and the previously transmitted one is greater than the given threshold value. Based on this send-on-delta scheme which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves the resource utilization with graceful fault estimation performance degradation. An illustrative example is given to show the efficiency of the proposed method.
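
    A minimal sketch of the send-on-delta rule described above (the threshold and the synthetic sensor signal are arbitrary choices for illustration):

    ```python
    import numpy as np

    def send_on_delta(samples, delta):
        """Return the indices and values actually transmitted under a send-on-delta
        rule: transmit only when the reading has moved by more than `delta`
        from the last transmitted value."""
        sent_idx, sent_val = [0], [samples[0]]        # the first sample is always sent
        last = samples[0]
        for k, y in enumerate(samples[1:], start=1):
            if abs(y - last) > delta:
                sent_idx.append(k)
                sent_val.append(y)
                last = y
        return sent_idx, sent_val

    rng = np.random.default_rng(3)
    y = np.cumsum(rng.normal(0, 0.05, 200))           # slowly drifting sensor reading
    idx, _ = send_on_delta(y, delta=0.2)
    print(f"transmitted {len(idx)} of {len(y)} samples")
    ```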

  16. Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Bugong Xu

    2011-01-01

    Full Text Available In this paper, the sensor data is transmitted only when the absolute value of difference between the current sensor value and the previously transmitted one is greater than the given threshold value. Based on this send-on-delta scheme which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves the resource utilization with graceful fault estimation performance degradation. An illustrative example is given to show the efficiency of the proposed method.

  17. Optimization of pumping schemes for 160-Gb/s single channel Raman amplified systems

    DEFF Research Database (Denmark)

    Xu, Lin; Rottwitt, Karsten; Peucheret, Christophe;

    2004-01-01

    Three different distributed Raman amplification schemes-backward pumping, bidirectional pumping, and second-order pumping-are evaluated numerically for 160-Gb/s single-channel transmission. The same longest transmission distance of 2500 km is achieved for all three pumping methods with a 105-km...... span composed of superlarge effective area fiber and inverse dispersion fiber. For longest system reach, second-order pumping and backward pumping have larger pump power tolerance than bidirectional pumping, while the optimal span input signal power margin of second-order pumping is the largest...

  18. Evolutional Optimization on Material Ordering and Inventory Control of Supply Chain through Incentive Scheme

    Science.gov (United States)

    Prasertwattana, Kanit; Shimizu, Yoshiaki; Chiadamrong, Navee

    This paper studied the material ordering and inventory control of supply chain systems. The effect of controlling policies is analyzed under three different configurations of the supply chain systems, and the formulated problem has been solved by using an evolutional optimization method known as Differential Evolution (DE). The numerical results show that the coordinating policy with the incentive scheme outperforms the other policies and can improve the performance of the overall system as well as all members under the concept of supply chain management.
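
    A minimal illustration of the optimization machinery named in the abstract: SciPy's Differential Evolution applied to a toy order-up-to-level cost. The cost function, bounds and parameters are invented; the paper's supply-chain model and incentive scheme are far richer:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Toy inventory cost for order-up-to levels at three supply-chain stages:
    # holding cost grows with the level, shortage cost falls with it (illustrative only).
    demand = 40.0
    def total_cost(levels):
        holding = 0.5 * np.sum(levels)
        shortage = np.sum(8.0 * np.maximum(demand - levels, 0.0))
        return holding + shortage

    result = differential_evolution(total_cost, bounds=[(0, 100)] * 3, seed=1, tol=1e-6)
    print(result.x, result.fun)   # levels close to the demand minimize the toy cost
    ```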

  19. Approximate Optimal Control of Affine Nonlinear Continuous-Time Systems Using Event-Sampled Neurodynamic Programming.

    Science.gov (United States)

    Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani

    2017-03-01

    This paper presents an approximate optimal control of nonlinear continuous-time systems in affine form by using adaptive dynamic programming (ADP) with event-sampled state and input vectors. The knowledge of the system dynamics is relaxed by using a neural network (NN) identifier with event-sampled inputs. The value function, which becomes an approximate solution to the Hamilton-Jacobi-Bellman equation, is generated by using an event-sampled NN approximator. Subsequently, the NN identifier and the approximated value function are utilized to obtain the optimal control policy. Both the identifier and value function approximator weights are tuned only at the event-sampled instants, leading to an aperiodic update scheme. A novel adaptive event-sampling condition is designed to determine the sampling instants such that the approximation accuracy and the stability are maintained. A positive lower bound on the minimum inter-sample time is guaranteed to avoid an accumulation point, and the dependence of the inter-sample time upon the NN weight estimates is analyzed. Local ultimate boundedness of the resulting nonlinear impulsive dynamical closed-loop system is shown. Finally, a numerical example is utilized to evaluate the performance of the near-optimal design. The net result is the design of an event-sampled ADP-based controller for nonlinear continuous-time systems.

  20. A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme

    Science.gov (United States)

    Ghoman, Satyajit S.

    The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3 DOE), in context of aircraft wing optimization. M3 DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) combination of a series of structural and aerodynamic analyses. The modularity of M3 DOE allows it to be a part of other inclusive optimization frameworks. The M3 DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization, and cruise range maximization are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3 DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3 DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of

  1. Optimal feedback scheme and universal time scaling for Hamiltonian parameter estimation.

    Science.gov (United States)

    Yuan, Haidong; Fung, Chi-Hang Fred

    2015-09-11

    Time is a valuable resource and it is expected that a longer time period should lead to better precision in Hamiltonian parameter estimation. However, recent studies in quantum metrology have shown that in certain cases more time may even lead to worse estimations, which puts this intuition into question. In this Letter we show that by including feedback controls this intuition can be restored. By deriving asymptotically optimal feedback controls we quantify the maximal improvement feedback controls can provide in Hamiltonian parameter estimation and show a universal time scaling for the precision limit under the optimal feedback scheme. Our study reveals an intriguing connection between noncommutativity in the dynamics and the gain of feedback controls in Hamiltonian parameter estimation.

  2. Optimization technology of 9/7 wavelet lifting scheme on DSP*

    Science.gov (United States)

    Chen, Zhengzhang; Yang, Xiaoyuan; Yang, Rui

    2007-12-01

    Nowadays the wavelet transform has become one of the most effective transforms in the realm of image processing, especially the biorthogonal 9/7 wavelet filters proposed by Daubechies, which have good performance in image compression. This paper studies the implementation and optimization technologies of the 9/7 wavelet lifting scheme on the DSP platform, including carrying out the fixed-point wavelet lifting steps instead of time-consuming floating-point operations, adopting a pipelining technique to improve the iteration procedure, reducing the number of multiplications by simplifying the normalization operation of the two-dimensional wavelet transform, and improving the storage format and sequence of wavelet coefficients to reduce memory consumption. Experimental results have shown that these implementation and optimization technologies can improve the wavelet lifting algorithm's efficiency by more than 30 times, which establishes a technical foundation for successfully developing a real-time remote sensing image compression system in the future.
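
    For reference, a floating-point sketch of one level of the forward CDF 9/7 lifting scheme (the DSP version described above replaces these constants with fixed-point approximations). The four lifting coefficients are standard; the final scaling constant and its placement vary between references (JPEG2000, for example, uses a different normalization), and the boundary handling here is a simple clamped extension:

    ```python
    import numpy as np

    ALPHA, BETA  = -1.586134342, -0.05298011854
    GAMMA, DELTA =  0.8829110762, 0.4435068522
    ZETA = 1.149604398   # one common choice of the final scaling constant

    def fwd_97_lifting(x):
        """One level of the forward CDF 9/7 transform on an even-length 1-D signal."""
        s, d = x[0::2].astype(float).copy(), x[1::2].astype(float).copy()

        def ext(a, i):                      # clamped (symmetric-like) boundary extension
            return a[min(max(i, 0), len(a) - 1)]

        n = len(d)
        for i in range(n):                  # predict 1
            d[i] += ALPHA * (s[i] + ext(s, i + 1))
        for i in range(n):                  # update 1
            s[i] += BETA * (ext(d, i - 1) + d[i])
        for i in range(n):                  # predict 2
            d[i] += GAMMA * (s[i] + ext(s, i + 1))
        for i in range(n):                  # update 2
            s[i] += DELTA * (ext(d, i - 1) + d[i])
        return ZETA * s, d / ZETA           # approximation and detail bands

    approx, detail = fwd_97_lifting(np.arange(16, dtype=float))
    print(approx, detail)
    ```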

  3. Smart AMS : Optimizing the measurement procedure for small radiocarbon samples

    NARCIS (Netherlands)

    Vries de, Hendrik

    2010-01-01

    Abstract In order to improve the measurement efficiency of radiocarbon samples, particularly small samples (< 300 µg C), the measurement procedure was optimized using Smart AMS, which is the name of the new control system of the AMS. The system gives the

  4. Wrapped Progressive Sampling Search for Optimizing Learning Algorithm Parameters

    NARCIS (Netherlands)

    Bosch, Antal van den

    2005-01-01

    We present a heuristic meta-learning search method for finding a set of optimized algorithmic parameters for a range of machine learning algorithms. The method, wrapped progressive sampling, is a combination of classifier wrapping and progressive sampling of training data. A series of experiments

  5. Wrapped Progressive Sampling Search for Optimizing Learning Algorithm Parameters

    NARCIS (Netherlands)

    Bosch, Antal van den

    2005-01-01

    We present a heuristic meta-learning search method for finding a set of optimized algorithmic parameters for a range of machine learning algorithms. The method, wrapped progressive sampling, is a combination of classifier wrapping and progressive sampling of training data. A series of experiments

  6. Effects of autocorrelation and temporal sampling schemes on estimates of trend and spatial correlation

    Energy Technology Data Exchange (ETDEWEB)

    Tiao, G.C.; Daming, Xu; Pedrick, J.H.; Xiaodong, Zhu (Univ. of Chicago, IL (USA)); Reinsel, G.C. (Univ. of Wisconsin, Madison (USA)); Miller, A.J.; DeLuisi, J.J. (National Oceanic and Atmospheric Administration, Boulder, CO (USA)); Mateer, C.L. (Atmospheric Environment Service, Ottawa, Ontario (Canada)); Wuebbles, D.J. (Lawrence Livermore National Lab., CA (USA))

    1990-11-20

    This paper is concerned with temporal data requirements for the assessment of trends and for estimating spatial correlations of atmospheric species. The authors examine statistically three basic issues: (1) the effect of autocorrelations in monthly observations and the effect of the length of data record on the precision of trend estimates, (2) the effect of autocorrelations in the daily data on the sampling frequency requirements with respect to the representativeness of monthly averages for trend estimation, and (3) the effect of temporal sampling schemes on estimating spatial correlations of atmospheric species in neighboring stations. The principal findings are (1) the precision of trend estimates depends critically on the magnitude of auto-correlations in the monthly observations, (2) this precision is insensitive to the temporal sampling rates of daily measurements under systematic sampling, and (3) the estimate of spatial correlation between two neighboring stations is insensitive to temporal sampling rate under systematic sampling, but is sensitive to the time lag between measurements taken at the two stations. These results are based on methodological considerations as well as on empirical analysis of total and profile ozone and rawinsonde temperature data from selected ground stations.
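
    A small Monte Carlo sketch of finding (1): the empirical standard error of an ordinary least squares trend estimate grows with the AR(1) autocorrelation of the monthly noise. The AR(1) model, trend size and record length are illustrative assumptions, not the data or method of the paper:

    ```python
    import numpy as np

    def trend_se(phi, n_months=240, sigma=1.0, n_rep=2000, seed=0):
        """Empirical standard error of an OLS trend slope when the monthly
        noise follows an AR(1) process with autocorrelation phi."""
        rng = np.random.default_rng(seed)
        t = np.arange(n_months)
        X = np.column_stack([np.ones(n_months), t])
        slopes = []
        for _ in range(n_rep):
            e = np.zeros(n_months)
            eps = rng.normal(0, sigma, n_months)
            for i in range(1, n_months):
                e[i] = phi * e[i - 1] + eps[i]
            y = 0.001 * t + e                     # small true trend plus AR(1) noise
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            slopes.append(beta[1])
        return np.std(slopes)

    for phi in (0.0, 0.3, 0.6):                   # stronger autocorrelation, worse precision
        print(phi, trend_se(phi))
    ```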

  7. On Optimal, Minimal BRDF Sampling for Reflectance Acquisition

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Jensen, Henrik Wann; Ramamoorthi, Ravi

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) is critical for rendering, and accurate material representation requires data-driven reflectance models. However, isotropic BRDFs are 3D functions, and measuring the reflectance of a flat sample can require a million incident and outgoing ..., such as the MERL BRDF database. We optimize for the best sampling directions, and explicitly provide the optimal set of incident and outgoing directions in the Rusinkiewicz parameterization for n = {1, 2, 5, 10, 20} samples. Based on the principal components, we describe a method for accurately reconstructing BRDF ... demonstrate how this method can be used to find optimal sampling directions when imaging a sphere of a homogeneous material; in this case, only two images are often adequate for high accuracy.

  8. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  9. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  10. Using remotely-sensed data for optimal field sampling

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-09-01

    Full Text Available Deciding where to carry out a fieldwork sample is an important issue as it avoids subjective judgement and can save on time and costs in the field. Statistical sampling, using data obtained from remote sensing, finds application in a variety of fields. Statistics is the science pertaining to the collection, summary, analysis, interpretation and presentation of data. It is often impractical...

  11. An Energy Efficient Scheme for Data Gathering in Wireless Sensor Networks Using Particle Swarm Optimization

    CERN Document Server

    Chakraborty, Ayon; Mitra, Swarup Kumar; Naskar, M K

    2010-01-01

    Energy efficiency of sensor nodes is a sizzling issue, given the severe resource constraints of sensor nodes and the pervasive nature of sensor networks. With the base station located at variable distances from the nodes in the sensor field, each node dissipates a different amount of energy to transmit data to it. The LEACH [4] and PEGASIS [5] protocols provide elegant solutions to this problem, but may not always result in optimal performance. In this paper we have proposed a novel data gathering protocol for enhancing the network lifetime by optimizing energy dissipation in the nodes. To achieve our design objective we have applied particle swarm optimization (PSO) with Simulated Annealing (SA) to form a sub-optimal data gathering chain and devised a method for selecting an efficient leader for communicating to the base station. In our scheme each node only communicates with a close neighbor and takes turns in being the leader depending on its residual energy and location. This helps to rule out...

  12. Retailer’s optimal ordering policies with cash discount and progressive payment scheme derived algebraically

    Directory of Open Access Journals (Sweden)

    Alok kumar

    2011-10-01

    Full Text Available This study presents optimal ordering policies for a retailer when the supplier offers a cash discount and two progressive payment schemes for paying the purchasing cost. If the retailer pays the outstanding amount before or at the first trade credit period M, the supplier provides a cash discount at rate r_1 and does not charge any interest. If the retailer pays after M but before or at the second trade credit period N offered by the supplier, the supplier provides a cash discount at rate r_2 and charges interest on the unpaid balance at the rate Ic_1. If the retailer pays the balance after N (N > M), then the supplier does not provide any cash discount but charges interest on the unpaid balance at the rate Ic_2. The primary objective of this paper is to minimize the total cost of the inventory system. This paper develops an algebraic approach to determine the optimal cycle time, optimal order quantity and optimal relevant cost. Numerical examples are also presented to illustrate the results of the proposed model and the solution procedure developed.

  13. Optimal linear shrinkage corrections of sample LMMSE and MVDR estimators

    OpenAIRE

    2012-01-01

    This master thesis proposes optimal shrinkage estimators that counteract the performance degradation of the sample LMMSE and sample MVDR methods in the regime where the sample size is small compared to the observation dimension.

  14. A unified numerical scheme for linear-quadratic optimal control problems with joint control and state constraints

    NARCIS (Netherlands)

    Han, Lanshan; Camlibel, M. Kanat; Pang, Jong-Shi; Heemels, W. P. Maurice H.

    2012-01-01

    This paper presents a numerical scheme for solving the continuous-time convex linear-quadratic (LQ) optimal control problem with mixed polyhedral state and control constraints. Unifying a discretization of this optimal control problem as often employed in model predictive control and that obtained

  15. The dependence of optimal fractionation schemes on the spatial dose distribution

    Science.gov (United States)

    Unkelbach, Jan; Craft, David; Salari, Ehsan; Ramakrishnan, Jagdish; Bortfeld, Thomas

    2013-01-01

    We consider the fractionation problem in radiation therapy. Tumor sites in which the dose-limiting organ at risk (OAR) receives a substantially lower dose than the tumor, bear potential for hypofractionation even if the α/β-ratio of the tumor is larger than the α/β-ratio of the OAR. In this work, we analyze the interdependence of the optimal fractionation scheme and the spatial dose distribution in the OAR. In particular, we derive a criterion under which a hypofractionation regimen is indicated for both a parallel and a serial OAR. The approach is based on the concept of the biologically effective dose (BED). For a hypothetical homogeneously irradiated OAR, it has been shown that hypofractionation is suggested by the BED model if the α/β-ratio of the OAR is larger than α/β-ratio of the tumor times the sparing factor, i.e. the ratio of the dose received by the tumor and the OAR. In this work, we generalize this result to inhomogeneous dose distributions in the OAR. For a parallel OAR, we determine the optimal fractionation scheme by minimizing the integral BED in the OAR for a fixed BED in the tumor. For a serial structure, we minimize the maximum BED in the OAR. This leads to analytical expressions for an effective sparing factor for the OAR, which provides a criterion for hypofractionation. The implications of the model are discussed for lung tumor treatments. It is shown that the model supports hypofractionation for small tumors treated with rotation therapy, i.e. highly conformal techniques where a large volume of lung tissue is exposed to low but nonzero dose. For larger tumors, the model suggests hyperfractionation. We further discuss several non-intuitive interdependencies between optimal fractionation and the spatial dose distribution. For instance, lowering the dose in the lung via proton therapy does not necessarily provide a biological rationale for hypofractionation.
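
    For reference, a compact restatement of the BED relations the abstract relies on, written for the homogeneously irradiated OAR case that the paper then generalizes (the notation is assumed rather than quoted from the paper):

    ```latex
    % Biologically effective dose for n fractions of dose d per fraction:
    \[
    \mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right)
    \]
    % For an OAR receiving a fixed fraction (sparing factor) \delta of the tumor
    % dose, the BED model favours hypofractionation when
    \[
    \left(\alpha/\beta\right)_{\mathrm{OAR}} \;>\; \delta\,\left(\alpha/\beta\right)_{\mathrm{tumor}},
    \]
    % and hyperfractionation otherwise; the paper replaces \delta by an effective
    % sparing factor derived from the inhomogeneous OAR dose distribution.
    ```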

  16. Optimization of enrichment processes of pentachlorophenol (PCP) from water samples

    Institute of Scientific and Technical Information of China (English)

    LI Ping; LIU Jun-xin

    2004-01-01

    The method of enriching PCP (pentachlorophenol) from the aquatic environment by solid phase extraction (SPE) was studied. Several factors affecting the recoveries of PCP, including sample pH, eluting solvent, eluting volume and flow rate of the water sample, were optimized by orthogonal array design (OAD). The optimized results were: sample pH 4; eluting solvent, 100% methanol; eluting solvent volume, 2 ml; and flow rate of water sample, 4 ml/min. A comparison is made between the SPE and liquid-liquid extraction (LLE) methods. The recoveries of PCP were in the range of 87.6%-133.6% and 79%-120.3% for SPE and LLE, respectively. Important advantages of SPE compared with LLE include the short extraction time and reduced consumption of organic solvents. SPE can replace LLE for isolating and concentrating PCP from water samples.

  17. A coupled well-balanced and random sampling scheme for computing bubble oscillations*

    Directory of Open Access Journals (Sweden)

    Jung Jonathan

    2012-04-01

    Full Text Available We propose a finite volume scheme to study the oscillations of a spherical bubble of gas in a liquid phase. Spherical symmetry implies a geometric source term in the Euler equations. Our scheme satisfies the well-balanced property. It is based on the VFRoe approach. In order to avoid spurious pressure oscillations, the well-balanced approach is coupled with an ALE (Arbitrary Lagrangian Eulerian) technique at the interface and a random sampling remap.

  18. Optimization of the scheme for natural ecology planning of urban rivers based on ANP (analytic network process) model.

    Science.gov (United States)

    Zhang, Yichuan; Wang, Jiangping

    2015-07-01

    Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for the successful construction of urban rivers. Therefore, relevant studies on the optimization of schemes for the natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were included as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes could be made by the ANP method for the natural ecology planning of urban rivers. This method could be used to provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for the optimization of schemes for urban green space planning and design.

  19. Optimizing the atmospheric sampling sites using fuzzy mathematic methods

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A new approach applying fuzzy mathematic theorems, including the Primary Matrix Element Theorem and the Fisher Classification Method, was established to solve the optimization problem of atmospheric environmental sampling sites. On this basis, an application to the optimization of sampling sites in atmospheric environmental monitoring was discussed. The method was proven to be suitable and effective. The results were accepted and applied by the Environmental Protection Bureau (EPB) of many cities of China. A set of computer software implementing this approach was also compiled and used.

  20. Michaelis-Menten reaction scheme as a unified approach towards the optimal restart problem.

    Science.gov (United States)

    Rotbart, Tal; Reuveni, Shlomi; Urbakh, Michael

    2015-12-01

    We study the effect of restart, and retry, on the mean completion time of a generic process. The need to do so arises in various branches of the sciences and we show that it can naturally be addressed by taking advantage of the classical reaction scheme of Michaelis and Menten. Stopping a process in its midst-only to start it all over again-may prolong, leave unchanged, or even shorten the time taken for its completion. Here we are interested in the optimal restart problem, i.e., in finding a restart rate which brings the mean completion time of a process to a minimum. We derive the governing equation for this problem and show that it is exactly solvable in cases of particular interest. We then continue to discover regimes at which solutions to the problem take on universal, details independent forms which further give rise to optimal scaling laws. The formalism we develop, and the results obtained, can be utilized when optimizing stochastic search processes and randomized computer algorithms. An immediate connection with kinetic proofreading is also noted and discussed.
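
    As context, a standard result from the stochastic-restart literature, consistent with (but not quoted from) the abstract, for the special case of restart events arriving at a constant rate r:

    ```latex
    % Let T be the completion time of the underlying (restart-free) process and
    % \tilde{T}(r) = \langle e^{-rT} \rangle its Laplace transform. With restart
    % at a constant rate r, the mean completion time of the restarted process is
    \[
    \langle T_r \rangle \;=\; \frac{1 - \tilde{T}(r)}{r\,\tilde{T}(r)} .
    \]
    % The optimal restart rate discussed in the abstract is the minimizer of this
    % expression; a minimizer at r = 0 means that restart never helps.
    ```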

  1. Performance Analysis and Optimization of an Adaptive Admission Control Scheme in Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Shunfu Jin

    2013-01-01

    Full Text Available In cognitive radio networks, if all the secondary user (SU packets join the system without any restrictions, the average latency of the SU packets will be greater, especially when the traffic load of the system is higher. For this, we propose an adaptive admission control scheme with a system access probability for the SU packets in this paper. We suppose the system access probability is inversely proportional to the total number of packets in the system and introduce an Adaptive Factor to adjust the system access probability. Accordingly, we build a discrete-time preemptive queueing model with adjustable joining rate. In order to obtain the steady-state distribution of the queueing model exactly, we construct a two-dimensional Markov chain. Moreover, we derive the formulas for the blocking rate, the throughput, and the average latency of the SU packets. Afterwards, we provide numerical results to investigate the influence of the Adaptive Factor on different performance measures. We also give the individually optimal strategy and the socially optimal strategy from the standpoints of the SU packets. Finally, we provide a pricing mechanism to coordinate the two optimal strategies.

  2. Michaelis-Menten reaction scheme as a unified approach towards the optimal restart problem

    Science.gov (United States)

    Rotbart, Tal; Reuveni, Shlomi; Urbakh, Michael

    2015-12-01

    We study the effect of restart, and retry, on the mean completion time of a generic process. The need to do so arises in various branches of the sciences and we show that it can naturally be addressed by taking advantage of the classical reaction scheme of Michaelis and Menten. Stopping a process in its midst—only to start it all over again—may prolong, leave unchanged, or even shorten the time taken for its completion. Here we are interested in the optimal restart problem, i.e., in finding a restart rate which brings the mean completion time of a process to a minimum. We derive the governing equation for this problem and show that it is exactly solvable in cases of particular interest. We then continue to discover regimes at which solutions to the problem take on universal, details independent forms which further give rise to optimal scaling laws. The formalism we develop, and the results obtained, can be utilized when optimizing stochastic search processes and randomized computer algorithms. An immediate connection with kinetic proofreading is also noted and discussed.

  3. CLUSTERING BASED ADAPTIVE IMAGE COMPRESSION SCHEME USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    M.Mohamed Ismail,

    2010-10-01

    Full Text Available This paper presents an image compression scheme that uses the particle swarm optimization (PSO) technique for clustering. PSO is a powerful general-purpose optimization technique built on the concept of fitness. It provides a mechanism by which individuals in the swarm communicate and exchange information, similar to the social behaviour of insects and human beings. Because it mimics this social sharing of information, PSO directs particles to search the solution space more efficiently. PSO resembles a GA in that the population is initialized with random potential solutions, and the adjustment towards the best individual experience (PBEST) and the best social experience (GBEST) is conceptually similar to the crossover operation of the GA. Unlike a GA, however, each potential solution, called a particle, flies through the solution space with a velocity; moreover, the particles and the swarm have memory, which does not exist in the population of a GA. This optimization technique is applied to image compression, and better results are obtained in terms of PSNR, CR and the visual quality of the image when compared to other existing methods.
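    A minimal sketch of the PSO mechanics described above, applied to a clustering objective (within-cluster sum of squared errors), is given below. The toy 2-D data, swarm size, inertia and acceleration coefficients are illustrative assumptions; the paper applies the same update rule to image-compression codebook design rather than to toy points.

```python
import random

def sse(centroids, points):
    """Within-cluster sum of squared errors: each point is assigned to its
    nearest centroid (the objective the particles minimize)."""
    return sum(min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids)
               for p in points)

def pso_cluster(points, k=3, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = 2 * k  # each particle encodes k 2-D centroids, flattened
    lo = min(min(p) for p in points); hi = max(max(p) for p in points)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    decode = lambda x: [(x[2 * i], x[2 * i + 1]) for i in range(k)]
    pbest = [p[:] for p in pos]
    pbest_val = [sse(decode(p), points) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # velocity update pulls toward PBEST and GBEST (the step that is
                # conceptually similar to GA crossover)
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = sse(decode(pos[i]), points)
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return decode(gbest), gbest_val

pts = [(random.gauss(mx, 0.3), random.gauss(my, 0.3))
       for mx, my in ((0, 0), (4, 4), (0, 4)) for _ in range(30)]
print(pso_cluster(pts))
```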

  4. Feedback optimal control of distributed parameter systems by using finite-dimensional approximation schemes.

    Science.gov (United States)

    Alessandri, Angelo; Gaggero, Mauro; Zoppoli, Riccardo

    2012-06-01

    Optimal control for systems described by partial differential equations is investigated by proposing a methodology to design feedback controllers in approximate form. The approximation stems from constraining the control law to take on a fixed structure, where a finite number of free parameters can be suitably chosen. The original infinite-dimensional optimization problem is then reduced to a mathematical programming one of finite dimension that consists in optimizing the parameters. The solution of such a problem is performed by using sequential quadratic programming. Linear combinations of fixed and parameterized basis functions are used as the structure for the control law, thus giving rise to two different finite-dimensional approximation schemes. The proposed paradigm is general since it allows one to treat problems with distributed and boundary controls within the same approximation framework. It can be applied to systems described by either linear or nonlinear elliptic, parabolic, and hyperbolic equations in arbitrary multidimensional domains. Simulation results obtained in two case studies show the potentials of the proposed approach as compared with dynamic programming.

  5. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers several optimization criteria: sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows one to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations, and the acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
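    spsann itself is an R package; purely to illustrate the core spatial simulated annealing loop it wraps, the sketch below re-implements the idea in Python for the MSSD (spatial coverage) criterion. The grid, number of points, cooling schedule and shift schedule are illustrative assumptions, not the package's defaults.

```python
import math
import random

def mssd(sample, grid):
    """Mean squared shortest distance from every prediction-grid node to the
    nearest sample point (the spatial-coverage criterion to be minimized)."""
    return sum(min((gx - sx) ** 2 + (gy - sy) ** 2 for sx, sy in sample)
               for gx, gy in grid) / len(grid)

def optimize_pattern(grid, n_points=20, iters=2000, t0=1.0, cooling=0.999,
                     max_shift0=0.5):
    sample = random.sample(grid, n_points)
    energy = mssd(sample, grid)
    temp, max_shift = t0, max_shift0
    for it in range(iters):
        cand = sample[:]
        i = random.randrange(n_points)
        x, y = cand[i]
        # perturb one point; the maximum shift shrinks as the run progresses
        cand[i] = (x + random.uniform(-max_shift, max_shift),
                   y + random.uniform(-max_shift, max_shift))
        e_new = mssd(cand, grid)
        # Metropolis acceptance: always take improvements, sometimes accept
        # worse states so the search can escape local optima
        if e_new < energy or random.random() < math.exp((energy - e_new) / temp):
            sample, energy = cand, e_new
        temp *= cooling
        max_shift = max_shift0 * (1 - it / iters)
    return sample, energy

grid = [(x / 10, y / 10) for x in range(11) for y in range(11)]
print(optimize_pattern(grid)[1])
```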

  6. Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP

    Science.gov (United States)

    Roshani, E.; Berg, A. A.; Lindsay, J.

    2013-12-01

    The Soil Moisture Active Passive (SMAP) satellite is scheduled for launch in October 2014. Global efforts are underway to establish soil moisture monitoring networks for both pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6-week period for soil moisture and several other parameters, simultaneously with remotely sensed images of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement that provides the most efficient representation of the studied area. In this analysis a method for optimizing sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (the non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the difference between the soil moisture map resulting from kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: A) sampling sites should be accessible to the crew on the ground, B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value, and C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum constraint. The first constraint is included to keep the approach practical. The second and third constraints are considered to guarantee that the collected samples from each soil texture categories

  7. Optimization of the Combined Proton Acceleration Regime with a Target Composition Scheme

    CERN Document Server

    Yao, W P; Zheng, C Y; Liu, Z J; Yan, X Q

    2015-01-01

    A target composition scheme to optimize the combined proton acceleration regime is presented and verified by two-dimensional particle-in-cell (2D PIC) simulations by using an ultra-intense circularly-polarized (CP) laser pulse irradiating an overdense hydrocarbon (CH) target, instead of a pure hydrogen (H) one. The combined acceleration regime is a two-stage proton acceleration scheme combining the radiation pressure dominated acceleration (RPDA) stage and the laser wakefield acceleration (LWFA) stage sequentially together. With an ultra-intense CP laser pulse irradiating an overdense CH target, followed by an underdense tritium plasma gas, protons with higher energies (from about 20 GeV up to about 30 GeV) and lower energy spreads (from about 18% down to about 5% in full-width at half-maximum, or FWHM) are generated, as compared to the use of a pure H target. It is because protons can be more stably pre-accelerated in the first RPDA stage when using CH targets. With the increase of the carbon-to-hy...

  8. A novel resource optimization scheme for multi-cell OFDMA relay network

    Institute of Scientific and Technical Information of China (English)

    Ning DU; Fa-sheng LIU

    2016-01-01

    In cellular networks, users communicate with each other through their respective base stations (BSs). Conventionally, users are assumed to be in different cells, and BSs serve as decode-and-forward (DF) relay nodes for them. In addition to this type of conventional user, we recognize that there are scenarios in which users who want to communicate with each other are located in the same cell. This gives rise to intra-cell communication, in which a BS can behave as a two-way relay to achieve information exchange instead of using a conventional DF relay. We consider a multi-cell orthogonal frequency division multiple access (OFDMA) network that comprises these two types of users and are interested in resource allocation between them. Specifically, we jointly optimize subcarrier assignment, subcarrier pairing, and power allocation to maximize the weighted sum rate. We consider the resource allocation problem at the BSs when the end users' power is fixed, and solve it approximately through Lagrange dual decomposition. Simulation results show that the proposed schemes outperform other existing schemes.
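    In dual-decomposition approaches of this kind, the inner power-allocation step for a fixed subcarrier assignment typically reduces to a water-filling problem. The sketch below shows only that standard sub-step; the channel gains, power budget, and the assumption that the paper's inner step takes exactly this form are illustrative, not taken from the paper.

```python
def water_filling(gains, total_power, tol=1e-9):
    """Allocate `total_power` across subcarriers with channel gains `gains`
    (SNR per unit power) to maximize sum(log2(1 + g_i * p_i)).
    The optimal allocation is p_i = max(level - 1/g_i, 0); bisection finds
    the water level that spends exactly the power budget."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        level = (lo + hi) / 2
        used = sum(max(level - 1.0 / g, 0.0) for g in gains)
        if used > total_power:
            hi = level
        else:
            lo = level
    return [max(lo - 1.0 / g, 0.0) for g in gains]

print(water_filling([2.0, 1.0, 0.25, 0.1], total_power=4.0))
```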

  9. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time- and labor-intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300-meter transects, with clip harvest plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses

  10. 163 years of refinement: the British Geological Survey sample registration scheme

    Science.gov (United States)

    Howe, M. P.

    2011-12-01

    The British Geological Survey manages the largest UK geoscience samples collection, including: - 15,000 onshore boreholes, including over 250 km of drillcore - Vibrocores, gravity cores and grab samples from over 32,000 UK marine sample stations. 640 boreholes - Over 3 million UK fossils, including a "type and stratigraphic" reference collection of 250,000 fossils, 30,000 of which are "type, figured or cited" - Comprehensive microfossil collection, including many borehole samples - 290 km of drillcore and 4.5 million cuttings samples from over 8000 UK continental shelf hydrocarbon wells - Over one million mineralogical and petrological samples, including 200,00 thin sections. The current registration scheme was introduced in 1848 and is similar to that used by Charles Darwin on the Beagle. Every Survey collector or geologist has been issued with a unique prefix code of one or more letters, and these were handwritten on preprinted numbers arranged in books of 1-5,000 and 5,001-10,000. Similar labels are now computer printed. Other prefix codes are used for corporate collections, such as borehole samples, thin sections, microfossils, macrofossil sections, museum reference fossils, display-quality rock samples and fossil casts. Such numbers convey significant information immediately to the curator, without the need to consult detailed registers. The registration numbers have been recorded in a series of over 1,000 registers, complete with metadata including sample ID, locality, horizon, collector and date. Citations are added as appropriate. Parent-child relationships are noted when re-registering subsubsamples. For example, a borehole sample BDA1001 could have been subsampled for a petrological thin section and off-cut (E14159), a fossil thin section (PF365), micropalynological slides (MPA273), one of which included a new holotype (MPK111), and a figured macrofossil (GSE1314). All main corporate collections now have publicly available online databases, such as Palaeo

  11. Discordance of the unified scheme with observed properties of quasars and high-excitation galaxies in the 3CRR sample

    Energy Technology Data Exchange (ETDEWEB)

    Singal, Ashok K., E-mail: asingal@prl.res.in [Astronomy and Astrophysics Division, Physical Research Laboratory, Navrangpura, Ahmedabad 380 009 (India)

    2014-07-01

    We examine the consistency of the unified scheme of Fanaroff-Riley type II radio galaxies and quasars with their observed number and size distributions in the 3CRR sample. We separate the low-excitation galaxies from the high-excitation ones, as the former might not harbor a quasar within and thus may not be partaking in the unified scheme models. In the updated 3CRR sample, at low redshifts (z < 0.5), the relative number and luminosity distributions of high-excitation galaxies and quasars roughly match the expectations from the orientation-based unified scheme model. However, a foreshortening in the observed sizes of quasars, which is a must in the orientation-based model, is not seen with respect to radio galaxies even when the low-excitation galaxies are excluded. This dashes the hope that the unified scheme might still work if one includes only the high-excitation galaxies.

  12. Ad-Hoc vs. Standardized and Optimized Arthropod Diversity Sampling

    Directory of Open Access Journals (Sweden)

    Pedro Cardoso

    2009-09-01

    Full Text Available The use of standardized and optimized protocols has recently been advocated for different arthropod taxa, instead of ad-hoc sampling or sampling with protocols defined on a case-by-case basis. We present a comparison of both sampling approaches applied to spiders in a natural area of Portugal. We tested their efficiency, over-collection of common species, singleton proportions, species abundance distributions, average specimen size, average taxonomic distinctness and the behavior of richness estimators. The standardized protocol revealed three main advantages: (1) higher efficiency; (2) more reliable estimates of true richness; and (3) meaningful comparisons between undersampled areas.

  13. Advanced overlay: sampling and modeling for optimized run-to-run control

    Science.gov (United States)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process or metrology induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to

  14. Accelerated Simplified Swarm Optimization with Exploitation Search Scheme for Data Clustering.

    Directory of Open Access Journals (Sweden)

    Wei-Chang Yeh

    Full Text Available Data clustering is commonly employed in many disciplines. The aim of clustering is to partition a set of data into clusters, in which objects within the same cluster are similar to each other and dissimilar to objects that belong to different clusters. Over the past decade, evolutionary algorithms have commonly been used to solve clustering problems. This study presents a novel algorithm based on simplified swarm optimization, an emerging population-based stochastic optimization approach with the advantages of simplicity, efficiency, and flexibility. The approach combines variable vibrating search (VVS) and rapid centralized strategy (RCS) in dealing with the clustering problem. VVS is an exploitation search scheme that can refine the quality of solutions by searching the extreme points near the global best position. RCS is developed to accelerate the convergence rate of the algorithm by using the arithmetic average. To empirically evaluate the performance of the proposed algorithm, experiments are conducted on 12 benchmark datasets, and the corresponding results are compared with recent works. Results of statistical analysis indicate that the proposed algorithm is competitive in terms of the quality of solutions.

  15. An intelligent hybrid scheme for optimizing parking space: A Tabu metaphor and rough set based approach

    Directory of Open Access Journals (Sweden)

    Soumya Banerjee

    2011-03-01

    Full Text Available Congested roads, high traffic, and parking problems are major concerns for any modern city planning. Congestion of on-street spaces in official neighborhoods may give rise to inappropriate parking areas in office and shopping mall complexes during the peak time of official transactions. This paper proposes an intelligent and optimized scheme to solve the parking space problem for a small city (e.g., Mauritius) using a reactive search technique (Tabu Search) assisted by rough set. Rough set is used for the extraction of uncertain rules that exist in the databases of parking situations. The inclusion of rough set theory captures the accuracy and roughness used to characterize uncertainty of the parking lot. Approximation accuracy is employed to depict the accuracy of a rough classification [1] according to different dynamic parking scenarios. As such, the proposed hybrid metaphor, comprising Tabu Search and rough set, could provide substantial research directions for other similar hard optimization problems.

  16. A Spectrum Handoff Scheme for Optimal Network Selection in NEMO Based Cognitive Radio Vehicular Networks

    Directory of Open Access Journals (Sweden)

    Krishan Kumar

    2017-01-01

    Full Text Available When a mobile network changes its point of attachment in Cognitive Radio (CR) vehicular networks, the Mobile Router (MR) requires spectrum handoff. Network Mobility (NEMO) in CR vehicular networks is concerned with the management of this movement. In future NEMO-based CR vehicular network deployments, multiple radio access networks may coexist in overlapping areas, with different characteristics in terms of multiple attributes. A CR vehicular node may have the capability to make calls for two or more types of non-safety services, such as voice, video, and best effort, simultaneously. Hence, it becomes difficult for the MR to select the optimal network for the spectrum handoff. This can be done by performing spectrum handoff using Multiple Attributes Decision Making (MADM) methods, which is the objective of this paper. MADM methods such as grey relational analysis and cost-based methods are used. The application of MADM methods provides a wider and optimum choice among the available networks with quality of service. Numerical results reveal that the proposed scheme is effective for spectrum handoff decisions for optimal network selection with reduced complexity in NEMO-based CR vehicular networks.

  17. Geminal embedding scheme for optimal atomic basis set construction in correlated calculations

    Energy Technology Data Exchange (ETDEWEB)

    Sorella, S., E-mail: sorella@sissa.it [International School for Advanced Studies (SISSA), Via Beirut 2-4, 34014 Trieste, Italy and INFM Democritos National Simulation Center, Trieste (Italy); Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr [Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France); Mazzola, G., E-mail: gmazzola@phys.ethz.ch [Theoretische Physik, ETH Zurich, 8093 Zurich (Switzerland); Casula, M., E-mail: michele.casula@impmc.upmc.fr [CNRS and Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France)

    2015-12-28

    We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculation of bulk materials, namely, containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.

  18. A Latency and Coverage Optimized Data Collection Scheme for Smart Cities Based on Vehicular Ad-hoc Networks.

    Science.gov (United States)

    Xu, Yixuan; Chen, Xi; Liu, Anfeng; Hu, Chunhua

    2017-04-18

    Using mobile vehicles as "data mules" to collect data generated by a huge number of sensing devices that are widely spread across smart city is considered to be an economical and effective way of obtaining data about smart cities. However, currently most research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data on smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in LCODC scheme consists of not only vehicle to device transmission (V2D), but also vehicle to vehicle transmission (V2V). Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to the easy implementation of our scheme. In detail, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale and real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that with very limited costs, the LCODC scheme enables the average latency to decrease from several hours to around 12 min with respect to schemes where V2V transmission is disabled while the coverage rate is able to reach over 30%.

  19. A Latency and Coverage Optimized Data Collection Scheme for Smart Cities Based on Vehicular Ad-Hoc Networks

    Science.gov (United States)

    Xu, Yixuan; Chen, Xi; Liu, Anfeng; Hu, Chunhua

    2017-01-01

    Using mobile vehicles as “data mules” to collect data generated by a huge number of sensing devices that are widely spread across a smart city is considered to be an economical and effective way of obtaining data about smart cities. However, currently most research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data on smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in the LCODC scheme consists of not only vehicle-to-device transmission (V2D), but also vehicle-to-vehicle transmission (V2V). Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to the easy implementation of our scheme. In detail, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale and real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that with very limited costs, the LCODC scheme enables the average latency to decrease from several hours to around 12 min with respect to schemes where V2V transmission is disabled, while the coverage rate is able to reach over 30%. PMID:28420218

  20. Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research.

    Science.gov (United States)

    Duan, Naihua; Bhaumik, Dulal K; Palinkas, Lawrence A; Hoagwood, Kimberly

    2015-09-01

    Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research.

  1. Optimal allocation of point-count sampling effort

    Science.gov (United States)

    Barker, R.J.; Sauer, J.R.; Link, W.A.

    1993-01-01

    Both unlimited and fixed-radius point counts only provide indices to population size. Because longer count durations lead to counting a higher proportion of individuals at the point, proper design of these surveys must incorporate both count duration and sampling characteristics of population size. Using information about the relationship between proportion of individuals detected at a point and count duration, we present a method of optimizing a point-count survey given a fixed total time for surveying and travelling between count points. The optimization can be based on several quantities that measure precision, accuracy, or power of tests based on counts, including (1) mean-square error of estimated population change; (2) mean-square error of average count; (3) maximum expected total count; or (4) power of a test for differences in average counts. Optimal solutions depend on a function that relates count duration at a point to the proportion of animals detected. We model this function using exponential and Weibull distributions, and use numerical techniques to conduct the optimization. We provide an example of the procedure in which the function is estimated from data of cumulative number of individual birds seen for different count durations for three species of Hawaiian forest birds. In the example, optimal count duration at a point can differ greatly depending on the quantities that are optimized. Optimization of the mean-square error or of tests based on average counts generally requires longer count durations than does estimation of population change. A clear formulation of the goals of the study is a critical step in the optimization process.
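    To make the trade-off concrete, the sketch below optimizes count duration under criterion (3), maximum expected total count, using the exponential detection function mentioned in the abstract. The detection rate, travel time, density and total survey time are illustrative assumptions, not values from the study.

```python
import math

def expected_total_count(duration, total_time, travel_time, density, rate):
    """Expected total number of individuals counted in a survey of fixed total
    length, assuming an exponential detection function p(d) = 1 - exp(-rate*d)
    and `density` individuals available at each point."""
    n_points = total_time / (duration + travel_time)   # points visited in the budget
    p_detect = 1.0 - math.exp(-rate * duration)         # proportion detected per point
    return n_points * density * p_detect

# Grid search over count durations (minutes) for an 8-hour survey day.
best = max((expected_total_count(d / 10, total_time=480, travel_time=5,
                                 density=12, rate=0.3), d / 10)
           for d in range(1, 400))
print(f"expected total count {best[0]:.1f} at duration {best[1]:.1f} min")
```

    Longer counts detect a larger fraction of birds at each point but allow fewer points per day, which is exactly why the optimal duration shifts with the criterion being optimized.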

  2. A sampling scheme to assess persistence and transport characteristics of xenobiotics within an urban river section

    Science.gov (United States)

    Schwientek, Marc; Guillet, Gaelle; Kuch, Bertram; Rügner, Hermann; Grathwohl, Peter

    2014-05-01

    Xenobiotic contaminants such as pharmaceuticals or personal care products typically are continuously introduced into the receiving water bodies via wastewater treatment plant (WWTP) outfalls and, episodically, via combined sewer overflows in the case of precipitation events. Little is known about how these chemicals behave in the environment and how they affect ecosystems and human health. Examples of traditional persistent organic pollutants reveal that they may still be present in the environment even decades after they have been released. In this study a sampling strategy was developed which gives valuable insights into the environmental behaviour of xenobiotic chemicals. The method is based on the Lagrangian sampling scheme, by which a parcel of water is sampled repeatedly as it moves downstream while chemical, physical, and hydrologic processes altering the characteristics of the water mass can be investigated. The Steinlach is a tributary of the River Neckar in Southwest Germany with a catchment area of 140 km². It receives the effluents of a WWTP with 99,000 inhabitant equivalents 4 km upstream of its mouth. The varying flow rate of effluents induces temporal patterns of electrical conductivity in the river water which make it possible to track parcels of water along the subsequent urban river section. These parcels of water were sampled a) close to the outlet of the WWTP and b) 4 km downstream at the confluence with the Neckar. Sampling was repeated at a 15 min interval over a complete diurnal cycle and 2 h composite samples were prepared. A model-based analysis demonstrated, on the one hand, that substances behaved reactively to a varying extent along the studied river section. On the other hand, it revealed that the observed degradation rates are likely dependent on the time of day. Some chemicals were degraded mainly during daytime (e.g. the disinfectant Triclosan or the phosphorous flame retardant TDCP), others as well during nighttime (e.g. the musk fragrance

  3. [Optimized sample preparation for metabolome studies on Streptomyces coelicolor].

    Science.gov (United States)

    Li, Yihong; Li, Shanshan; Ai, Guomin; Wang, Weishan; Zhang, Buchang; Yang, Keqian

    2014-04-01

    Streptomycetes produce many antibiotics and are important model microorganisms for scientific research and antibiotic production. Metabolomics is an emerging technological platform to analyze low molecular weight metabolites in a given organism qualitatively and quantitatively. Compared to other omics platforms, metabolomics has a greater advantage in monitoring metabolic flux distribution and thus identifying key metabolites related to a target metabolic pathway. The present work aims at establishing a rapid, accurate sample preparation protocol for metabolomics analysis in streptomycetes. In the present work, several sample preparation steps, including cell quenching time, cell separation method, conditions for metabolite extraction and metabolite derivatization were optimized. Then, the metabolic profiles of Streptomyces coelicolor during different growth stages were analyzed by GC-MS. The optimal sample preparation conditions were as follows: time of low-temperature quenching 4 min, cell separation by fast filtration, time of freeze-thaw 45 s/3 min and metabolite derivatization at 40 degrees C for 90 min. By using this optimized protocol, 103 metabolites were finally identified from a sample of S. coelicolor, distributed among central metabolic pathways (glycolysis, pentose phosphate pathway and citrate cycle), amino acid, fatty acid and nucleotide metabolic pathways, etc. By comparing the temporal profiles of these metabolites, the amino acid and fatty acid metabolic pathways were found to stay at a high level during stationary phase; therefore, these pathways may play an important role during the transition between the primary and secondary metabolism. An optimized protocol of sample preparation was established and applied for metabolomics analysis of S. coelicolor, and 103 metabolites were identified. The temporal profiles of metabolites reveal that amino acid and fatty acid metabolic pathways may play an important role in the transition from primary to

  4. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify those competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.

  5. Optimization of the combined proton acceleration regime with a target composition scheme

    Energy Technology Data Exchange (ETDEWEB)

    Yao, W. P. [Center for Applied Physics and Technology, HEDPS, State Key Laboratory of Nuclear Physics and Technology, and School of Physics, Peking University, Beijing 100871 (China); Graduate School, China Academy of Engineering Physics, Beijing 100088 (China); Li, B. W., E-mail: li-baiwen@iapcm.ac.cn [Institute of Applied Physics and Computational Mathematics, Beijing 100088 (China); Zheng, C. Y.; Liu, Z. J. [Center for Applied Physics and Technology, HEDPS, State Key Laboratory of Nuclear Physics and Technology, and School of Physics, Peking University, Beijing 100871 (China); Institute of Applied Physics and Computational Mathematics, Beijing 100088 (China); Yan, X. Q. [Center for Applied Physics and Technology, HEDPS, State Key Laboratory of Nuclear Physics and Technology, and School of Physics, Peking University, Beijing 100871 (China); Qiao, B. [Center for Applied Physics and Technology, HEDPS, State Key Laboratory of Nuclear Physics and Technology, and School of Physics, Peking University, Beijing 100871 (China); Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006 (China)

    2016-01-15

    A target composition scheme to optimize the combined proton acceleration regime is presented and verified by two-dimensional particle-in-cell simulations by using an ultra-intense circularly polarized (CP) laser pulse irradiating an overdense hydrocarbon (CH) target, instead of a pure hydrogen (H) one. The combined acceleration regime is a two-stage proton acceleration scheme combining the radiation pressure dominated acceleration (RPDA) stage and the laser wakefield acceleration (LWFA) stage sequentially together. Protons get pre-accelerated in the first stage when an ultra-intense CP laser pulse irradiates an overdense CH target. The wakefield is driven by the laser pulse after penetrating through the overdense CH target and propagating in the underdense tritium plasma gas. With this pre-acceleration stage, protons can now get trapped in the wakefield and accelerated to much higher energy by LWFA. Finally, protons with higher energies (from about 20 GeV up to about 30 GeV) and lower energy spreads (from about 18% down to about 5% in full-width at half-maximum, or FWHM) are generated, as compared to the use of a pure H target. It is because protons can be more stably pre-accelerated in the first RPDA stage when using CH targets. With the increase of the carbon-to-hydrogen density ratio, the energy spread is lower and the maximum proton energy is higher. It also shows that for the same laser intensity around 10²² W cm⁻², using the CH target will lead to a higher proton energy, as compared to the use of a pure H target. Additionally, proton energy can be further increased by employing a longitudinally negative gradient of a background plasma density.

  6. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of the CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
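    The filtering idea behind CGS, training a classifier on previously evaluated designs and evaluating only candidates the classifier labels promising, can be sketched with a much simpler stand-in classifier. In the sketch below the Bernoulli naive Bayes filter, the cheap stand-in objective, the 0.5 posterior threshold, and the median-based labeling are all illustrative assumptions; the actual CGS algorithm uses a Bayesian network classifier and expensive objective functions.

```python
import random

def train_naive_bayes(designs, labels):
    """Per-variable Bernoulli naive Bayes with Laplace smoothing: a simple
    stand-in for the Bayesian network classifier used by CGS."""
    model = {}
    for c in (0, 1):
        rows = [d for d, l in zip(designs, labels) if l == c]
        n = len(rows)
        probs = [(sum(r[j] for r in rows) + 1) / (n + 2)
                 for j in range(len(designs[0]))]
        model[c] = (max(n, 1) / len(designs), probs)
    return model

def posterior_promising(model, design):
    scores = {}
    for c, (prior, probs) in model.items():
        s = prior
        for x, p in zip(design, probs):
            s *= p if x == 1 else (1 - p)
        scores[c] = s
    return scores[1] / (scores[0] + scores[1])

expensive_eval = lambda d: sum(d)   # cheap stand-in for an expensive objective

random.seed(0)
n_vars, archive, labels = 20, [], []
for gen in range(10):
    candidates = [[random.randint(0, 1) for _ in range(n_vars)] for _ in range(60)]
    if archive:
        model = train_naive_bayes(archive, labels)
        # evaluate only the candidates the classifier deems promising
        kept = [c for c in candidates if posterior_promising(model, c) > 0.5]
        candidates = kept or candidates
    evaluated = sorted(((expensive_eval(c), c) for c in candidates), reverse=True)
    median = evaluated[len(evaluated) // 2][0]
    for val, c in evaluated:          # label designs above the median as promising
        archive.append(c)
        labels.append(1 if val > median else 0)
print(max(sum(d) for d in archive))
```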

  7. Optimal tests for the two-sample spherical location problem

    CERN Document Server

    Ley, Christophe; Verdebout, Thomas

    2012-01-01

    We tackle the classical two-sample spherical location problem for directional data by having recourse to the Le Cam methodology, habitually used in classical "linear" multivariate analysis. More precisely we construct locally and asymptotically optimal (in the maximin sense) parametric tests, which we then turn into semi-parametric ones in two distinct ways. First, by using a studentization argument; this leads to so-called pseudo-FvML tests. Second, by resorting to the invariance principle; this leads to efficient rank-based tests. Within each construction, the semi-parametric tests inherit optimality under a given distribution (the FvML in the first case, any rotationally symmetric one in the second) from their parametric counterparts and also improve on the latter by being valid under the whole class of rotationally symmetric distributions. Asymptotic relative efficiencies are calculated and the finite-sample behavior of the proposed tests is investigated by means of a Monte Carlo simulation.

  8. Efficient infill sampling for unconstrained robust optimization problems

    Science.gov (United States)

    Rehman, Samee Ur; Langelaar, Matthijs

    2016-08-01

    A novel infill sampling criterion is proposed for efficient estimation of the global robust optimum of expensive computer simulation based problems. The algorithm is especially geared towards addressing problems that are affected by uncertainties in design variables and problem parameters. The method is based on constructing metamodels using Kriging and adaptively sampling the response surface via a principle of expected improvement adapted for robust optimization. Several numerical examples and an engineering case study are used to demonstrate the ability of the algorithm to estimate the global robust optimum using a limited number of expensive function evaluations.
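    For context, the baseline infill criterion that Kriging-based methods adapt is the standard expected improvement. The sketch below shows only that standard (non-robust) form; the robust-optimization adaptation proposed in the paper is not reproduced here, and the example inputs are arbitrary.

```python
import math

def expected_improvement(mu, sigma, y_best):
    """Standard EI for minimization: expected amount by which a point with
    Kriging prediction N(mu, sigma^2) improves on the best observation y_best,
    EI = (y_best - mu) * Phi(z) + sigma * phi(z) with z = (y_best - mu) / sigma."""
    if sigma <= 0.0:
        return max(y_best - mu, 0.0)
    z = (y_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (y_best - mu) * cdf + sigma * pdf

# Larger predictive uncertainty raises EI even when the predicted mean looks worse.
print(expected_improvement(mu=1.2, sigma=0.1, y_best=1.0))
print(expected_improvement(mu=1.2, sigma=1.0, y_best=1.0))
```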

  9. Simultaneous beam sampling and aperture shape optimization for SPORT

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Ye, Yinyu [Department of Management Science and Engineering, Stanford University, Stanford, California 94305 (United States)

    2015-02-15

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization; it also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally with a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  10. Identification of isomers and control of ionization and dissociation processes using dual-mass-spectrometer scheme and genetic algorithm optimization

    Institute of Scientific and Technical Information of China (English)

    陈洲; 佟秋男; 张丛丛; 胡湛

    2015-01-01

    Identification of acetone and its two isomers, and the control of their ionization and dissociation processes, are performed using a dual-mass-spectrometer scheme. The scheme employs two sets of time-of-flight mass spectrometers to simultaneously acquire the mass spectra of two different molecules under the irradiation of identically shaped femtosecond laser pulses. The optimal laser pulses are found using a closed-loop learning method based on a genetic algorithm. Compared with the mass spectra of the two isomers obtained with the transform-limited pulse, those obtained under the irradiation of the optimal laser pulse show large differences, and the various reaction pathways of the two molecules are selectively controlled. The experimental results demonstrate that the scheme is quite effective and useful in studies of two molecules having common mass peaks, which makes a traditional single mass spectrometer unfeasible.

  11. An optimized staggered variable-grid finite-difference scheme and its application in cross-well acoustic survey

    Institute of Scientific and Technical Information of China (English)

    ZHAO HaiBo; WANG XiuMing

    2008-01-01

    In this paper, an optimized staggered variable-grid finite-difference (FD) method is developed for the velocity-stress elastic wave equations. On the basis of the dispersion-relation-preserving (DRP) approach, a fourth-order finite-difference operator on non-uniform grids is constructed. The proposed algorithm is a continuous variable-grid method: it does not need interpolations of the field variables between regions with fine spacing and coarse spacing. The accuracy of the optimized scheme has been verified against an analytical solution and a regular staggered-grid FD method of eighth-order accuracy in space. Comparisons of the proposed scheme with the variable-grid FD method based on Taylor series expansion are made. It is demonstrated that this optimized scheme has smaller dispersion errors than the scheme based on Taylor series expansion, and can therefore use coarser grids in numerical simulations. Finally, the capability of the optimized FD is demonstrated for a complex cross-well acoustic simulation. The numerical experiment shows that this method greatly saves storage requirements and computational time, and is stable.

  12. A test of an optimal stomatal conductance scheme within the CABLE Land Surface Model

    Directory of Open Access Journals (Sweden)

    M. G. De Kauwe

    2014-10-01

    Full Text Available Stomatal conductance (gs) affects the fluxes of carbon, energy and water between the vegetated land surface and the atmosphere. We test an implementation of an optimal stomatal conductance model within the Community Atmosphere Biosphere Land Exchange (CABLE) land surface model (LSM). In common with many LSMs, CABLE does not differentiate between gs model parameters in relation to plant functional type (PFT), but instead only in relation to photosynthetic pathway. We therefore constrained the key model parameter "g1", which represents a plant's water-use strategy, by PFT based on a global synthesis of stomatal behaviour. As proof of concept, we also demonstrate that the g1 parameter can be estimated using two long-term average (1960–1990) bioclimatic variables: (i) temperature and (ii) an indirect estimate of annual plant water availability. The new stomatal model, in conjunction with the PFT parameterisations, resulted in a large reduction in annual fluxes of transpiration (~30%) compared to the standard CABLE simulations across evergreen needleleaf, tundra and C4 grass regions. Differences in other regions of the globe were typically small. Model performance when compared to upscaled data products was not degraded, though the new stomatal conductance scheme did not noticeably change existing model-data biases. We conclude that optimisation theory can yield a simple and tractable approach to predicting stomatal conductance in LSMs.
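    Optimal stomatal models of this family are commonly written in the Medlyn form, in which g1 scales the sensitivity of conductance to vapour pressure deficit. The sketch below uses that widely cited form as an illustration; whether CABLE's implementation matches exactly this equation and these units is an assumption here, and the numbers are arbitrary.

```python
import math

def stomatal_conductance(A, D, Ca, g1, g0=0.0):
    """Optimal stomatal conductance of the Medlyn form,
        gs = g0 + 1.6 * (1 + g1 / sqrt(D)) * A / Ca,
    with A (photosynthesis) in umol m-2 s-1, D (vapour pressure deficit) in kPa,
    Ca (CO2 concentration) in umol mol-1 and gs in mol m-2 s-1.
    The exact form used inside CABLE is an assumption for this illustration;
    the key point is that g1 encodes the plant's water-use strategy."""
    return g0 + 1.6 * (1.0 + g1 / math.sqrt(D)) * A / Ca

# A conservative (low-g1) PFT transpires less than a profligate (high-g1) one.
print(stomatal_conductance(A=10.0, D=1.5, Ca=400.0, g1=2.0))
print(stomatal_conductance(A=10.0, D=1.5, Ca=400.0, g1=6.0))
```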

  13. Optimized Voting Scheme for Efficient Vanishing Point Detection in General Road Images

    Directory of Open Access Journals (Sweden)

    Vipul H. Mistry

    2016-08-01

    Full Text Available Next-generation automobile industries are aiming for the development of vision-based driver assistance systems and driver-less vehicle systems. In the context of this application, a major challenge lies in efficient road region segmentation from captured image frames. Recent research suggests that the use of a global feature such as the vanishing point makes road detection algorithms more robust and general for all types of roads. The goal of this research work is to reduce the computational complexity involved in the voting process for vanishing point identification. This paper presents an efficient and optimized voter selection strategy to identify the vanishing point in general road images. The major outcomes of this algorithm are a reduction in computational complexity and an improvement in the efficiency of vanishing point detection for all types of road images. The key attributes of the methodology are dominant orientation selection, voter selection based on voter location, and a modified voting scheme combining dominant orientation and a distance-based soft voting process. Results of a number of qualitative and quantitative experiments clearly demonstrate the efficiency of the proposed algorithm.

  14. Breeding programmes for smallholder sheep farming systems: II. Optimization of cooperative village breeding schemes.

    Science.gov (United States)

    Gizaw, S; van Arendonk, J A M; Valle-Zárate, A; Haile, A; Rischkowsky, B; Dessie, T; Mwai, A O

    2014-10-01

    A simulation study was conducted to optimize a cooperative village-based sheep breeding scheme for Menz sheep of Ethiopia. Genetic gains and profits were estimated under nine levels of farmers' participation and three scenarios of controlled breeding achieved in the breeding programme, as well as under three cooperative flock sizes, ewe-to-ram mating ratios and durations of ram use for breeding. Under fully controlled breeding, that is, when there is no gene flow between participating (P) and non-participating (NP) flocks, profits ranged from Birr 36.9 at 90% participation to Birr 21.3 at 10% participation. However, genetic progress was not affected adversely. When there was gene flow from the NP to P flocks, profits declined from Birr 28.6 to Birr -3.7 as participation declined from 90 to 10%. Under the two-way gene flow model (i.e. when P and NP flocks are herded together in communal grazing areas), NP flocks benefited from the genetic gain achieved in the P flocks, but the benefits declined sharply when participation declined beyond 60%. Our results indicate that a cooperative breeding group can be established with as few as 600 breeding ewes mated at a ratio of 45 ewes to one ram, with the rams used for breeding for a period of two years. This study showed that farmer cooperation is crucial to effect genetic improvement under smallholder low-input sheep farming systems.

  15. An Efficient Searching and an Optimized Cache Coherence handling Scheme on DSR Routing Protocol for MANETS

    Directory of Open Access Journals (Sweden)

    Rajneesh Kumar Gujral

    2011-01-01

    Full Text Available Mobile ad hoc networks (MANETs) are self-created and self-organized by a collection of mobile nodes, interconnected by multi-hop wireless paths in a strictly peer-to-peer fashion. DSR (Dynamic Source Routing) is an on-demand routing protocol for wireless ad hoc networks that floods route requests when a route is needed. Route caches in intermediate mobile nodes on DSR are used to reduce the flooding of route requests. But as network size and node mobility increase, the routes cached locally by each mobile node quickly become stale or inefficient. In this paper, for efficient searching, we propose a generic searching algorithm on an associative cache memory organization to speed up the search for single/multiple paths to a destination, if they exist in an intermediate mobile node's cache, with complexity O(n) (where n is the number of bits required to represent the searched field). The other major problem of DSR is that the route maintenance mechanism does not locally repair a broken link, and stale cache information could also result in inconsistencies during the route discovery/reconstruction phase. To deal with this, we propose an optimized cache coherence handling scheme for the on-demand routing protocol (DSR).

  16. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    Science.gov (United States)

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore, the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to three major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artefacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The approach reported in this paper significantly shortens acquisition time and improves the quality of FP reconstructions, and may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.

  17. A whole-path importance-sampling scheme for Feynman path integral calculations of absolute partition functions and free energies.

    Science.gov (United States)

    Mielke, Steven L; Truhlar, Donald G

    2016-01-21

    Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function.
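
    A minimal sketch of the whole-path rejection-sampling idea, reduced to a one-dimensional toy system: free-particle-like paths are proposed around a fixed centroid and accepted with a harmonic weight that is bounded by one. The path model, bead count and force constant are invented for illustration and are not the authors' CH4 implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def free_particle_path(centroid, n_beads, sigma):
        """Propose a free-particle-like path and re-centre it on the centroid."""
        steps = rng.normal(0.0, sigma, n_beads)
        path = centroid + np.cumsum(steps)
        return path - path.mean() + centroid

    def harmonic_weight(path, centroid, kappa):
        """Importance weight favouring paths that stay near the centroid.
        It is bounded by 1, so it can serve directly as an acceptance probability."""
        return np.exp(-kappa * np.sum((path - centroid) ** 2))

    def sample_paths(n_paths, centroid=0.0, n_beads=32, sigma=0.3, kappa=0.05):
        accepted, proposed = [], 0
        while len(accepted) < n_paths:
            proposed += 1
            path = free_particle_path(centroid, n_beads, sigma)
            if rng.random() < harmonic_weight(path, centroid, kappa):
                accepted.append(path)
        return np.array(accepted), proposed

    paths, n_proposed = sample_paths(200)
    print(f"acceptance rate ~ {200 / n_proposed:.3f}")
    ```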

  18. Near-Optimal Random Walk Sampling in Distributed Networks

    CERN Document Server

    Sarma, Atish Das; Pandurangan, Gopal

    2012-01-01

    Performing random walks in networks is a fundamental primitive that has found numerous applications in communication networks such as token management, load balancing, network topology discovery and construction, search, and peer-to-peer membership management. While several such algorithms are ubiquitous, and use numerous random walk samples, the walks themselves have always been performed naively. In this paper, we focus on the problem of performing random walk sampling efficiently in a distributed network. Given bandwidth constraints, the goal is to minimize the number of rounds and messages required to obtain several random walk samples in a continuous online fashion. We present the first round- and message-optimal distributed algorithms, which represent a significant improvement on all previous approaches. The theoretical analysis and comprehensive experimental evaluation of our algorithms show that they perform very well in different types of networks of differing topologies. In particular, our results show h...

  19. Optimal sampling frequency in recording of resistance training exercises.

    Science.gov (United States)

    Bardella, Paolo; Carrasquilla García, Irene; Pozzo, Marco; Tous-Fajardo, Julio; Saez de Villareal, Eduardo; Suarez-Arrones, Luis

    2017-03-01

    The purpose of this study was to analyse the raw lifting speed collected during four different resistance training exercises to assess the optimal sampling frequency. Eight physically active participants performed sets of Squat Jumps, Countermovement Jumps, Squats and Bench Presses at a maximal lifting speed. A linear encoder was used to measure the instantaneous speed at a 200 Hz sampling rate. Subsequently, the power spectrum of the signal was computed by evaluating its Discrete Fourier Transform. The sampling frequency needed to reconstruct the signals with an error of less than 0.1% was f99.9 = 11.615 ± 2.680 Hz for the exercise exhibiting the largest bandwidth, with the absolute highest individual value being 17.467 Hz. There was no difference between sets in any of the exercises. Using the closest integer sampling frequency value (25 Hz) yielded a reconstruction of the signal up to 99.975 ± 0.025% of its total in the worst case. In conclusion, a sampling rate of 25 Hz or above is more than adequate to record raw speed data and compute power during resistance training exercises, even under the most extreme circumstances during explosive exercises. Higher sampling frequencies provide no increase in the recording precision and may instead have adverse effects on the overall data quality.
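
    The frequency threshold used in the study can be illustrated with a short sketch: compute the one-sided power spectrum of a (here synthetic) lifting-speed trace and report the smallest frequency below which 99.9% of the spectral energy lies. The test signal and its parameters are invented for the example.

    ```python
    import numpy as np

    def f_threshold(signal, fs, fraction=0.999):
        """Smallest frequency below which `fraction` of the one-sided spectral
        energy of `signal` (sampled at `fs` Hz) is contained."""
        spectrum = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        cumulative = np.cumsum(spectrum) / np.sum(spectrum)
        return freqs[np.searchsorted(cumulative, fraction)]

    # Synthetic lifting-speed trace: a slow movement plus a small fast component.
    fs = 200.0                                   # Hz, as in the study
    t = np.arange(0, 2.0, 1.0 / fs)
    speed = 0.8 * np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.sin(2 * np.pi * 9.0 * t)
    speed += 0.01 * np.random.default_rng(1).normal(size=t.size)

    print(f"f99.9 ~ {f_threshold(speed, fs):.2f} Hz")
    ```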

  20. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractive to beetles and repellent to true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when

  1. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Science.gov (United States)

    Gossner, Martin M; Struwe, Jan-Frederic; Sturm, Sarah; Max, Simeon; McCutcheon, Michelle; Weisser, Wolfgang W; Zytynska, Sharon E

    2016-01-01

    There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractive to beetles and repellent to true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when genetic analysis

  2. Optimized explicit Runge-Kutta schemes for the spectral difference method applied to wave propagation problems

    CERN Document Server

    Parsani, M; Deconinck, W

    2012-01-01

    Explicit Runge-Kutta schemes with large stable step sizes are developed for integration of high order spectral difference spatial discretization on quadrilateral grids. The new schemes permit an effective time step that is substantially larger than the maximum admissible time step of standard explicit Runge-Kutta schemes available in literature. Furthermore, they have a small principal error norm and admit a low-storage implementation. The advantages of the new schemes are demonstrated through application to the Euler equations and the linearized Euler equations.

  3. Optimized Explicit Runge--Kutta Schemes for the Spectral Difference Method Applied to Wave Propagation Problems

    KAUST Repository

    Parsani, Matteo

    2013-04-10

    Explicit Runge--Kutta schemes with large stable step sizes are developed for integration of high-order spectral difference spatial discretizations on quadrilateral grids. The new schemes permit an effective time step that is substantially larger than the maximum admissible time step of standard explicit Runge--Kutta schemes available in the literature. Furthermore, they have a small principal error norm and admit a low-storage implementation. The advantages of the new schemes are demonstrated through application to the Euler equations and the linearized Euler equations.

  4. Sampling-based Algorithms for Optimal Motion Planning

    CERN Document Server

    Karaman, Sertac

    2011-01-01

    During the last decade, sampling-based path planning algorithms, such as Probabilistic RoadMaps (PRM) and Rapidly-exploring Random Trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g., as a function of the number of samples. The purpose of this paper is to fill this gap, by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g., showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically opti...

  5. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    Science.gov (United States)

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between
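
    A small sketch of the resampling logic, assuming entirely synthetic capture data: species accumulation curves are estimated by repeatedly shuffling capture events, once for the whole night and once for a first-six-hours subsample. The species pools and capture counts are invented and do not reproduce the study's datasets.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def accumulation_curve(captures, n_draws=200):
        """Mean number of species detected as a function of the number of
        capture events, estimated by repeatedly shuffling the capture order."""
        captures = np.asarray(captures)
        curve = np.zeros(len(captures))
        for _ in range(n_draws):
            shuffled = rng.permutation(captures)
            seen, species = np.zeros(len(captures)), set()
            for i, sp in enumerate(shuffled):
                species.add(sp)
                seen[i] = len(species)
            curve += seen
        return curve / n_draws

    # Synthetic night of mist-netting: (hour, species id) for each capture,
    # with a common species pool early and some late-active species added later.
    hours = rng.integers(0, 12, size=300)
    species_ids = np.where(hours < 6,
                           rng.integers(0, 25, size=300),     # early-night pool
                           rng.integers(0, 40, size=300))     # adds late-night species

    whole_night = accumulation_curve(species_ids)
    first_six = accumulation_curve(species_ids[hours < 6])
    print(f"species after full night: {whole_night[-1]:.1f}, "
          f"after first six hours only: {first_six[-1]:.1f}")
    ```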

  6. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.

    Science.gov (United States)

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between

  7. Optimal CCD readout by digital correlated double sampling

    CERN Document Server

    Alessandri, Cristobal; Guzman, Dani; Passalacqua, Ignacio; Alvarez-Fontecilla, Enrique; Guarini, Marcelo

    2015-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-dom...
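
    As a toy illustration of digital correlated double sampling (not the signal-chain model analysed in the paper), the sketch below compares the read noise of a plain N-sample averaging DCDS estimate with classic single-sample CDS on synthetic white-noise data; all numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic oversampled CCD readout: for each pixel the ADC digitizes
    # n_samples values of the reset level and n_samples values of the video
    # level, each corrupted by white read noise.
    n_pixels, n_samples = 10_000, 16
    true_signal = 500.0          # arbitrary units
    read_noise = 5.0             # RMS noise per ADC sample

    reset = rng.normal(0.0, read_noise, (n_pixels, n_samples))
    video = rng.normal(true_signal, read_noise, (n_pixels, n_samples))

    # DCDS with a plain averaging filter: pixel value = mean(video) - mean(reset).
    dcds = video.mean(axis=1) - reset.mean(axis=1)
    # Classic analogue-style CDS uses a single sample of each level.
    single = video[:, 0] - reset[:, 0]

    print(f"noise, 1-sample CDS    : {single.std():.2f}")
    print(f"noise, {n_samples}-sample DCDS : {dcds.std():.2f} "
          f"(about 1/sqrt({n_samples}) of the single-sample value for white noise)")
    ```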

  8. The Use of Satellite Imagery to Guide Field Plot Sampling Scheme for Biomass Estimation in Ghanaian Forest

    Science.gov (United States)

    Sah, B. P.; Hämäläinen, J. M.; Sah, A. K.; Honji, K.; Foli, E. G.; Awudi, C.

    2012-07-01

    Accurate and reliable estimation of biomass in tropical forests has been a challenging task because a large proportion of forests are difficult to access or inaccessible. So, for effective implementation of REDD+ and fair benefit sharing, the proper design of field plot sampling schemes plays a significant role in achieving robust biomass estimation. The existing forest inventory protocols using various field plot sampling schemes, including FAO's regular grid concept of sampling for land cover inventory at national level, are time- and human-resource-intensive. Wall-to-wall LiDAR scanning is, however, a better approach to assess biomass with high precision and spatial resolution, even though this approach suffers from high costs. Considering the above, in this study a sampling design based on a LiDAR strip sampling scheme has been devised for Ghanaian forests to support field plot sampling. Using Top-of-Atmosphere (TOA) reflectance values of satellite data, Land Use classification was carried out in accordance with IPCC definitions, and the resulting classes were further stratified, incorporating existing GIS data of ecological zones in the study area. Employing this result, LiDAR sampling strips were allocated using systematic sampling techniques. The resulting LiDAR strips represented all forest categories, as well as other Land Use classes, with their distribution adequately representing the areal share of each category. In this way, out of the total study area of 15,153 km2, LiDAR scanning was required for only 770 km2 (a sampling intensity of 5.1%). We conclude that this systematic LiDAR sampling design is likely to adequately cover variation in above-ground biomass densities and serve as sufficient a priori data, together with the Land Use classification produced, for designing efficient field plot sampling over the seven ecological zones.

  9. Determining the Bayesian optimal sampling strategy in a hierarchical system.

    Energy Technology Data Exchange (ETDEWEB)

    Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre

    2010-09-01

    Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
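
    A minimal sketch of the pre-posterior ('Bayes risk') comparison of sampling strategies, assuming a toy two-component series system with independent Beta reliability priors; the priors, test budget and Monte Carlo sizes are invented for the example, and the paper's actual algorithm is more general.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Two-component series system, R_system = R1 * R2, independent Beta priors.
    priors = [(8.0, 2.0),    # component 1: fairly well characterised
              (2.0, 1.0)]    # component 2: poorly characterised
    budget = 10              # total number of tests, all spent on one component

    def expected_system_variance(test_component, n_outer=500, n_post=2000):
        """Expected posterior variance of the system reliability (a simple
        stand-in for the Bayes risk) if the whole budget tests one component."""
        risks = np.empty(n_outer)
        for i in range(n_outer):
            # Simulate hypothetical test data from the prior predictive.
            a, b = priors[test_component]
            p_true = rng.beta(a, b)
            k = rng.binomial(budget, p_true)
            # Posterior samples for both components (untested one keeps its prior).
            post = []
            for j, (aj, bj) in enumerate(priors):
                if j == test_component:
                    post.append(rng.beta(aj + k, bj + budget - k, n_post))
                else:
                    post.append(rng.beta(aj, bj, n_post))
            risks[i] = np.var(post[0] * post[1])
        return risks.mean()

    for c in (0, 1):
        print(f"all {budget} tests on component {c + 1}: "
              f"expected posterior variance of R_system ~ {expected_system_variance(c):.5f}")
    ```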

  10. Effects of optimized root water uptake parameterization schemes on water and heat flux simulation in a maize agroecosystem

    Science.gov (United States)

    Cai, Fu; Ming, Huiqing; Mi, Na; Xie, Yanbing; Zhang, Yushu; Li, Rongping

    2017-04-01

    As root water uptake (RWU) is an important link in the water and heat exchange between plants and ambient air, improving its parameterization is key to enhancing the performance of land surface model simulations. Although different types of RWU functions have been adopted in land surface models, there is no evidence as to which scheme is most applicable to maize farmland ecosystems. Based on the 2007-09 data collected at the farmland ecosystem field station in Jinzhou, the RWU function in the Common Land Model (CoLM) was optimized with scheme options in light of factors determining whether roots absorb water from a certain soil layer (Wx) and whether the baseline cumulative root efficiency required for maximum plant transpiration (Wc) is reached. The sensitivity of the parameters of the optimization scheme was investigated, and then the effects of the optimized RWU function on water and heat flux simulation were evaluated. The results indicate that the model simulation was not sensitive to Wx but was significantly impacted by Wc. With the original model, soil humidity was somewhat underestimated for precipitation-free days; soil temperature was simulated with obvious interannual and seasonal differences and remarkable underestimations for the maize late-growth stage; and sensible and latent heat fluxes were overestimated and underestimated, respectively, for years with relatively less precipitation, and both were simulated with high accuracy for years with relatively more precipitation. The optimized RWU process resulted in a significant improvement of CoLM's performance in simulating soil humidity, temperature, sensible heat, and latent heat for dry years. In conclusion, the optimized RWU scheme available for the CoLM model is applicable to the simulation of water and heat flux for maize farmland ecosystems in arid areas.

  11. Optimal experimental scheme for practical BB84 quantum key distribution protocol with weak coherent sources, noises, and high losses

    CERN Document Server

    Cai, Q

    2005-01-01

    This is the first scheme which allows the detection apparatus to obtain the photon number of arriving signals. Moreover, quantum bit error rates (QBERs) of multiphoton pulses can also be determined precisely. Thus, our method is sensitive to the photon-number splitting and resending (PNSR) attack, i.e., the eavesdropper (Eve) replaces one photon of a multiphoton pulse by a false one and forwards the pulse to the receiver, whereas the decoy-state protocols are not. In our scheme, whatever attack Eve mounts will be limited by using the improved decoy-state protocols and by checking the QBERs. Based on our multiphoton-pulse detection apparatus, a quasi-single-photon protocol is presented to improve the security of the communication and the rate of the final key. We argue that our scheme is optimal under today's technology. PACS: 03.67.Dd

  12. Neuro-genetic system for optimization of GMI samples sensitivity.

    Science.gov (United States)

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples' sensitivity is usually empirical and very time-consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities.

  13. Sample size matters: Investigating the optimal sample size for a logistic regression debris flow susceptibility model

    Science.gov (United States)

    Heckmann, Tobias; Gegg, Katharina; Becht, Michael

    2013-04-01

    Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used very frequently compared to heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach has been chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that explain best the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim at finding empirically an "optimal" sample size in order to avoid two problems: First, a sample too large will violate the independent sample assumption as the independent variables are spatially autocorrelated; hence, a variogram analysis leads to a sample size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the independent variables and hence the result of a single model calculation will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size
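
    The Monte Carlo sample-size experiment can be sketched as follows, using synthetic "raster" data and a plain Newton/IRLS logistic fit: for several non-event sample sizes, the non-event cells are drawn repeatedly and the spread of a fitted coefficient is recorded. All data, sizes and variable names are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def fit_logistic(X, y, n_iter=25):
        """Plain Newton/IRLS fit of a logistic regression (intercept included)."""
        A = np.column_stack([np.ones(len(X)), X])
        beta = np.zeros(A.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-A @ beta))
            W = p * (1 - p)
            H = A.T @ (A * W[:, None]) + 1e-6 * np.eye(A.shape[1])
            beta += np.linalg.solve(H, A.T @ (y - p))
        return beta

    # Synthetic "raster": two explanatory variables, 200 event cells (debris-flow
    # initiation points) and a large pool of non-event cells.
    n_events, n_pool = 200, 50_000
    X_event = rng.normal(loc=[1.0, 0.5], scale=1.0, size=(n_events, 2))
    X_pool = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n_pool, 2))

    for n_nonevent in (200, 1000, 5000):
        slopes = []
        for _ in range(200):                      # repeated sampling of non-event cells
            idx = rng.choice(n_pool, n_nonevent, replace=False)
            X = np.vstack([X_event, X_pool[idx]])
            y = np.concatenate([np.ones(n_events), np.zeros(n_nonevent)])
            slopes.append(fit_logistic(X, y)[1])
        print(f"non-event sample size {n_nonevent:5d}: "
              f"slope of variable 1 = {np.mean(slopes):.3f} +/- {np.std(slopes):.3f}")
    ```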

  14. Optimal Compensation for Fund Managers of Uncertain Type: The Information Advantages of Bonus Schemes

    OpenAIRE

    Alexander Stremme

    1999-01-01

    Performance-sensitivity of compensation schemes for portfolio managers is well explained by classic principal-agent theory as a device to provide incentives for managers to exert effort or bear the cost of acquiring information. However, the majority of compensation packages observed in reality display in addition a fair amount of convexity in the form of performance-related bonus schemes. While convex contracts may be explained by principal-agent theory in some rather specific situations, th...

  15. Optimal Runge-Kutta Schemes for High-order Spatial and Temporal Discretizations

    Science.gov (United States)

    2015-06-01

    Mundis, Nathan L. (ERC, Inc.); Edoh, Ayaboe K.; Sankaran, Venkateswaran. Only a fragment of the abstract is recoverable from the report documentation page: "...(the wave number being the parameter) are overlaid on the contour map of the amplification factor in the complex plane for the chosen temporal scheme."

  16. Automatic, optimized interface placement in forward flux sampling simulations

    CERN Document Server

    Kratzer, Kai; Allen, Rosalind J

    2013-01-01

    Forward flux sampling (FFS) provides a convenient and efficient way to simulate rare events in equilibrium or non-equilibrium systems. FFS ratchets the system from an initial state to a final state via a series of interfaces in phase space. The efficiency of FFS depends sensitively on the positions of the interfaces. We present two alternative methods for placing interfaces automatically and adaptively in their optimal locations, on-the-fly as an FFS simulation progresses, without prior knowledge or user intervention. These methods allow the FFS simulation to advance efficiently through bottlenecks in phase space by placing more interfaces where the probability of advancement is lower. The methods are demonstrated both for a single-particle test problem and for the crystallization of Yukawa particles. By removing the need for manual interface placement, our methods both facilitate the setting up of FFS simulations and improve their performance, especially for rare events which involve complex trajectories thr...
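
    A simplified sketch of adaptive interface placement, assuming a toy one-dimensional overdamped system rather than the schemes detailed in the paper: trial trajectories are fired from the current interface and the next interface is placed at the position still reached by a chosen fraction of them, so interfaces bunch up where advancement is unlikely. All parameters are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def max_excursion(x0, drift=-0.6, noise=0.5, n_steps=400, dt=0.01):
        """Maximum coordinate reached by one trial trajectory started at x0 in a
        toy overdamped system whose drift pushes it back towards x = 0."""
        x, x_max = x0, x0
        for _ in range(n_steps):
            x += drift * x * dt + noise * np.sqrt(dt) * rng.normal()
            x_max = max(x_max, x)
        return x_max

    def place_interfaces(lambda_0=0.2, lambda_final=2.0, p_target=0.3, n_trials=300):
        """Place each next interface at the position that a fraction p_target of
        trial trajectories fired from the current interface still reaches."""
        interfaces = [lambda_0]
        while interfaces[-1] < lambda_final:
            peaks = np.array([max_excursion(interfaces[-1]) for _ in range(n_trials)])
            nxt = np.quantile(peaks, 1.0 - p_target)      # reached by ~p_target of trials
            if nxt <= interfaces[-1] + 1e-3:              # stuck in a bottleneck:
                nxt = interfaces[-1] + 1e-3               # take a tiny step instead
            interfaces.append(min(nxt, lambda_final))
        return interfaces

    print("interfaces:", [f"{l:.2f}" for l in place_interfaces()])
    ```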

  17. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely, and subsequently reweighting the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained by taking the position-dependent diffusion coefficient into account, thus placing greater emphasis on regions diffusing slowly. Although some promising examples of applications of this approach exist, the practical usefulness of the method has been hindered by the difficulty in obtaining sufficiently

  18. Conical Intersection Optimization Using Composed Steps Inside the ONIOM(QM:MM) Scheme: CASSCF:UFF Implementation with Microiterations.

    Science.gov (United States)

    Ruiz-Barragan, Sergi; Morokuma, Keiji; Blancafort, Lluís

    2015-04-14

    Three algorithms for optimization of minimum energy conical intersections (MECI) are implemented inside an ONIOM(QM:MM) scheme combined with microiterations. The algorithms follow the composed gradient (CG), composed gradient-composed steps (CG-CS), and double Newton-Raphson-composed step (DNR-CS) schemes developed previously for purely QM optimizations. The CASSCF and UFF methods are employed for the QM and MM calculations, respectively. Conical intersections are essential to describe excited state processes in chemistry, including biological systems or functional molecules, and our approach is suitable for large molecules or systems where the excitation is well localized on a fragment that can be treated at the CASSCF level. The algorithms are tested on a set of 14 large hydrocarbons composed of a medium-sized chromophore (fulvene, benzene, butadiene, and hexatriene) derivatized with alkyl substituents. Thanks to the microiteration technique, the number of steps required to optimize the MECI of the large molecules is similar to the one needed to optimize the unsubstituted chromophores at the QM level. The three tested algorithms have a similar performance, although the CG-CS implementation is the most efficient one on average. The implementation can be straightforwardly applied to ONIOM(QM:QM) schemes, and its potential is further demonstrated locating the MECI of diphenyl dibenzofulvene (DPDBF) in its crystal, which is relevant for the aggregation induced emission (AIE) of this molecule. A cluster of 12 molecules (528 atoms) is relaxed during the MECI optimization, with one molecule treated at the QM level. Our results confirm the mechanistic picture that AIE in DPDBF is due to the packing of the molecules in the crystal. Even when the molecules surrounding the excited molecule are allowed to relax, the rotation of the bulky substituents is hindered, and the conical intersection responsible for radiationless decay in solution is not accessible energetically.

  19. Optimization for Peptide Sample Preparation for Urine Peptidomics

    Energy Technology Data Exchange (ETDEWEB)

    Sigdel, Tara K.; Nicora, Carrie D.; Hsieh, Szu-Chuan; Dai, Hong; Qian, Weijun; Camp, David G.; Sarwal, Minnie M.

    2014-02-25

    when utilizing the conventional SPE method. In conclusion, the mSPE method was found to be superior to the conventional, standard SPE method for urine peptide sample preparation when applying LC-MS peptidomics analysis due to the optimized sample clean up that provided improved experimental inference from the confidently identified peptides.

  20. Programming scheme based optimization of hybrid 4T-2R OxRAM NVSRAM

    Science.gov (United States)

    Majumdar, Swatilekha; Kingra, Sandeep Kaur; Suri, Manan

    2017-09-01

    In this paper, we present a novel single-cycle programming scheme for 4T-2R NVSRAM, exploiting pulse-engineered input signals. OxRAM devices based on a 3 nm thick bi-layer active switching oxide and a 90 nm CMOS technology node were used for all simulations. The cell design is implemented for real-time non-volatility rather than last-bit or power-down non-volatility. A detailed analysis of the proposed single-cycle, parallel RRAM device programming scheme is presented in comparison to the two-cycle sequential RRAM programming used for similar 4T-2R NVSRAM bit-cells. The proposed single-cycle programming scheme coupled with the 4T-2R architecture leads to several benefits, such as the possibility of unconventional transistor sizing, 50% lower latency, 20% improvement in SNM and ∼20× reduced energy requirements, when compared against the two-cycle programming approach.

  1. 40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment; § 761.79(b)(3), § 761.316 Interpreting PCB concentration measurements resulting from this sampling... concentration measured in that sample. If the sample surface concentration is not equal to or lower than the...

  2. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    Science.gov (United States)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair. The approach was inspired by the Lifting Scheme (LS). The novelty of our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for both lossless and lossy coding. Experimental results show improvements in terms of performance and complexity compared to recently proposed methods.
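
    A toy sketch of the modified lifting "predict" step on one image row, assuming a global integer disparity and a global gain/offset for luminance correction (the paper's scheme operates on full images with an optimized prediction): the right view is replaced by its prediction residual, and the transform is exactly invertible.

    ```python
    import numpy as np

    def predict_right(left_row, disparity, gain, offset):
        """Disparity-compensated, luminance-corrected prediction of the right
        view from the left view (integer disparity, global gain/offset)."""
        shifted = np.roll(left_row, -disparity)
        return gain * shifted + offset

    def lifting_forward(left_row, right_row, disparity):
        """Lifting-style predict step for a stereo pair: the left row is kept as
        the approximation band; the right row becomes its prediction residual."""
        shifted = np.roll(left_row, -disparity)
        gain, offset = np.polyfit(shifted, right_row, 1)   # least-squares luminance fit
        detail = right_row - predict_right(left_row, disparity, gain, offset)
        return left_row, detail, (gain, offset)

    def lifting_inverse(left_row, detail, disparity, params):
        gain, offset = params
        return detail + predict_right(left_row, disparity, gain, offset)

    rng = np.random.default_rng(2)
    left = rng.integers(0, 256, 64).astype(float)
    right = 0.9 * np.roll(left, -3) + 12 + rng.normal(0, 2, 64)   # shifted, dimmed view

    approx, detail, params = lifting_forward(left, right, disparity=3)
    print("residual energy vs raw right-view energy:",
          f"{np.sum(detail**2):.1f} / {np.sum((right - right.mean())**2):.1f}")
    print("perfect reconstruction:",
          np.allclose(lifting_inverse(approx, detail, 3, params), right))
    ```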

  3. Importance Sampling Based Decision Trees for Security Assessment and the Corresponding Preventive Control Schemes: the Danish Case Study

    DEFF Research Database (Denmark)

    Liu, Leo; Rather, Zakir Hussain; Chen, Zhe

    2013-01-01

    Decision Trees (DT) based security assessment helps Power System Operators (PSO) by providing them with the most significant system attributes and guiding them in implementing the corresponding emergency control actions to prevent system insecurity and blackouts. DT is obtained offline from time-domain simulation and the process of data mining, and is then implemented online as guidelines for preventive control schemes. An algorithm named Classification and Regression Trees (CART) is used to train the DT, and the key to this approach lies in the accuracy of the DT. This paper proposes contingency oriented DT and adopts a methodology of importance sampling to maximize the information contained in the database so as to increase the accuracy of the DT. Further, this paper also studies the effectiveness of the DT by implementing its corresponding preventive control schemes. These approaches are tested on the detailed model...

  4. Steerable antenna with circular-polarization. 2. Selection of optimal scheme

    Energy Technology Data Exchange (ETDEWEB)

    Abranin, E.P.; Bazelyan, L.L.; Brazhenko, A.I.

    1987-11-01

    In order to study the sporadic radio emission from the Sun, a polarimeter operating at 25 MHz was developed and constructed. It employs the steerable antenna array of the URAN-1 radio telescope. The results of numerical calculations of compensation schemes, intended for the emission (reception) of circularly polarized waves in an arbitrary direction with the help of crossed dipoles, are presented.

  5. CROSS LAYER BASED THROUGHPUT OPTIMIZATION IN COGNITIVE RADIO NETWORKS WITH EFFECTIVE CHANNEL SENSING SCHEMES

    Directory of Open Access Journals (Sweden)

    T.Manimekalai

    2010-06-01

    Full Text Available Cognitive Radio technology is a novel and effective approach to improve utilization of the precious radio spectrum. Spectrum sensing is one of the essential mechanisms for cognitive radio (CR), and various sensing techniques are used by the secondary users to scan the licensed spectrum band of the primary radio (PR) users to determine the spectrum holes. These can be intelligently used by the secondary users, also referred to as CR users, for their own transmission without causing interference to the PR users. In this paper, a MAC protocol with two spectrum sensing schemes, namely a fusion-based arbitrary sensing scheme and an intelligence-based sensing scheme, is analyzed, including the effects of interference. A Rayleigh channel model for PR-PR interference and CR-PR interference is considered. An expression for the aggregate throughput of the cognitive radio network is derived for the two channel sensing schemes. The effects of interference on throughput are studied both by analysis and by simulation. It is found that interference affects the sensing efficiency, which in turn affects the throughput of the cognitive radio users. Rate adaptation techniques are further employed to enhance the cognitive radio network throughput.

  6. Geostatistical sampling optimization and waste characterization of contaminated premises

    Energy Technology Data Exchange (ETDEWEB)

    Desnoyers, Y.; Jeannee, N. [GEOVARIANCES, 49bis avenue Franklin Roosevelt, BP91, Avon, 77212 (France); Chiles, J.P. [Centre de geostatistique, Ecole des Mines de Paris (France); Dubot, D. [CEA DSV/FAR/USLT/SPRE/SAS (France); Lamadie, F. [CEA DEN/VRH/DTEC/SDTC/LTM (France)

    2009-06-15

    At the end of process equipment dismantling, the complete decontamination of nuclear facilities requires a radiological assessment of the building structure residual activity. From this point of view, setting up an appropriate evaluation methodology is of crucial importance. The radiological characterization of contaminated premises can be divided into three steps. First, the most exhaustive facility analysis provides historical and qualitative information. Then, a systematic (exhaustive) control of the emergent signal is commonly performed using in situ measurement methods such as surface controls combined with in situ gamma spectrometry. Finally, in order to assess the contamination depth, samples are collected at several locations within the premises and analyzed. Combined with historical information and emergent signal maps, such data allow the definition of a preliminary waste zoning. The exhaustive control of the emergent signal with surface measurements usually leads to inaccurate estimates, because of several factors: varying position of the measuring device, subtraction of an estimate of the background signal, etc. In order to provide reliable estimates while avoiding supplementary investigation costs, there is therefore a crucial need for sampling optimization methods together with appropriate data processing techniques. The initial activity usually presents a spatial continuity within the premises, with preferential contamination of specific areas or existence of activity gradients. Taking into account this spatial continuity is essential to avoid bias while setting up the sampling plan. In such a case, geostatistics provides methods that integrate the spatial structure of the contamination. After the characterization of this spatial structure, most probable estimates of the surface activity at unsampled locations can be derived using kriging techniques. Variants of these techniques also give access to estimates of the uncertainty associated with the spatial
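
    An ordinary-kriging estimate at an unsampled location can be sketched as below, assuming an exponential variogram and a handful of invented surface-activity measurements; the variogram parameters and data are illustrative only and are not taken from the study.

    ```python
    import numpy as np

    def exp_variogram(h, sill=1.0, range_a=5.0, nugget=0.05):
        """Exponential variogram; gamma(0) = 0 by definition, nugget for h > 0."""
        gamma = nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / range_a))
        return np.where(h > 0, gamma, 0.0)

    def ordinary_kriging(coords, values, target, variogram):
        """Ordinary kriging estimate and variance at a single target location."""
        n = len(coords)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        # Kriging system with a Lagrange multiplier enforcing unit-sum weights.
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = variogram(d)
        A[n, n] = 0.0
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(coords - target, axis=1))
        sol = np.linalg.solve(A, b)
        weights, mu = sol[:n], sol[n]
        return weights @ values, weights @ b[:n] + mu

    # Invented surface-activity measurements (x, y in metres; activity in Bq/cm2).
    coords = np.array([[0, 0], [4, 1], [1, 5], [6, 6], [3, 3]], dtype=float)
    values = np.array([10.0, 14.0, 9.0, 17.0, 12.0])
    est, var = ordinary_kriging(coords, values, np.array([2.0, 2.0]), exp_variogram)
    print(f"kriged activity at (2, 2): {est:.2f} Bq/cm2, kriging variance {var:.3f}")
    ```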

  7. Optimization and validation of sample preparation for metagenomic sequencing of viruses in clinical samples.

    Science.gov (United States)

    Lewandowska, Dagmara W; Zagordi, Osvaldo; Geissberger, Fabienne-Desirée; Kufner, Verena; Schmutz, Stefan; Böni, Jürg; Metzner, Karin J; Trkola, Alexandra; Huber, Michael

    2017-08-08

    Sequence-specific PCR is the most common approach for virus identification in diagnostic laboratories. However, as specific PCR only detects pre-defined targets, novel virus strains or viruses not included in routine test panels will be missed. Recently, advances in high-throughput sequencing allow for virus-sequence-independent identification of entire virus populations in clinical samples, yet standardized protocols are needed to allow broad application in clinical diagnostics. Here, we describe a comprehensive sample preparation protocol for high-throughput metagenomic virus sequencing using random amplification of total nucleic acids from clinical samples. In order to optimize metagenomic sequencing for application in virus diagnostics, we tested different enrichment and amplification procedures on plasma samples spiked with RNA and DNA viruses. A protocol including filtration, nuclease digestion, and random amplification of RNA and DNA in separate reactions provided the best results, allowing reliable recovery of viral genomes and a good correlation of the relative number of sequencing reads with the virus input. We further validated our method by sequencing a multiplexed viral pathogen reagent containing a range of human viruses from different virus families. Our method proved successful in detecting the majority of the included viruses with high read numbers and compared well to other protocols in the field validated against the same reference reagent. Our sequencing protocol works not only with plasma but also with other clinical samples such as urine and throat swabs. The workflow for virus metagenomic sequencing that we established proved successful in detecting a variety of viruses in different clinical samples. Our protocol supplements existing virus-specific detection strategies, providing opportunities to identify atypical and novel viruses commonly not accounted for in routine diagnostic panels.

  8. How old is this bird? The age distribution under some phase sampling schemes.

    Science.gov (United States)

    Hautphenne, Sophie; Massaro, Melanie; Taylor, Peter

    2017-04-03

    In this paper, we use a finite-state continuous-time Markov chain with one absorbing state to model an individual's lifetime. Under this model, the time of death follows a phase-type distribution, and the transient states of the Markov chain are known as phases. We then attempt to provide an answer to the simple question "What is the conditional age distribution of the individual, given its current phase"? We show that the answer depends on how we interpret the question, and in particular, on the phase observation scheme under consideration. We then apply our results to the computation of the age pyramid for the endangered Chatham Island black robin Petroica traversi during the monitoring period 2007-2014.
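
    As a toy illustration of one possible observation scheme (individuals drawn from a stationary population with constant recruitment, which is only one of the schemes discussed in the paper), the sketch below computes the conditional age density given the current phase from a made-up three-phase sub-generator.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Toy 3-phase lifetime model: sub-generator T of a phase-type distribution
    # (phases: juvenile, adult, senescent); the absorbing state is death.
    T = np.array([[-1.2,  1.0,  0.0],
                  [ 0.0, -0.4,  0.3],
                  [ 0.0,  0.0, -0.6]])
    alpha = np.array([1.0, 0.0, 0.0])    # every individual starts as a juvenile

    ages = np.linspace(0.0, 15.0, 601)
    da = ages[1] - ages[0]
    # P(alive at age a and currently in phase j) = (alpha @ expm(T a))_j
    occupancy = np.array([alpha @ expm(T * a) for a in ages])

    for j, name in enumerate(["juvenile", "adult", "senescent"]):
        dens = occupancy[:, j]
        dens = dens / (dens.sum() * da)       # conditional age density given phase j
        mean_age = (ages * dens).sum() * da
        print(f"mean age given currently {name}: {mean_age:.2f} time units")
    ```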

  9. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Satellite System) [2]. As part of our ongoing research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve cap emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  10. OPTIMIZATION OF THE TEMPERATURE CONTROL SCHEME FOR ROLLER COMPACTED CONCRETE DAMS BASED ON FINITE ELEMENT AND SENSITIVITY ANALYSIS METHODS

    Directory of Open Access Journals (Sweden)

    Huawei Zhou

    2016-10-01

    Full Text Available Achieving an effective combination of various temperature control measures is critical for temperature control and crack prevention of concrete dams. This paper presents a procedure for optimizing the temperature control scheme of roller compacted concrete (RCC) dams that couples the finite element method (FEM) with a sensitivity analysis method. In this study, seven temperature control schemes are defined according to variations in three temperature control measures: concrete placement temperature, water-pipe cooling time, and thermal insulation layer thickness. FEM is employed to simulate the equivalent temperature field and temperature stress field obtained under each of the seven designed temperature control schemes for a typical overflow dam monolith based on the actual characteristics of a RCC dam located in southwestern China. A sensitivity analysis is subsequently conducted to investigate the degree of influence each of the three temperature control measures has on the temperature field and temperature tensile stress field of the dam. Results show that the placement temperature has a substantial influence on the maximum temperature and tensile stress of the dam, and that the placement temperature cannot exceed 15 °C. The water-pipe cooling time and thermal insulation layer thickness have little influence on the maximum temperature, but both demonstrate a substantial influence on the maximum tensile stress of the dam. The thermal insulation thickness is significant for reducing the probability of cracking as a result of high thermal stress, and the maximum tensile stress can be controlled under the specification limit with a thermal insulation layer thickness of 10 cm. Finally, an optimized temperature control scheme for crack prevention is obtained based on the analysis results.

  11. A self-optimizing scheme for energy balanced routing in Wireless Sensor Networks using SensorAnt.

    Science.gov (United States)

    Shamsan Saleh, Ahmed M; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A; Ismail, Alyani

    2012-01-01

    Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic which is adopted to enhance the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count and average energy of both route and network. This method also distributes the traffic load of sensor nodes throughout the WSN leading to reduced energy usage, extended network life time and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) in terms of energy consumption, balancing and efficiency.

  12. A Self-Optimizing Scheme for Energy Balanced Routing in Wireless Sensor Networks Using SensorAnt

    Directory of Open Access Journals (Sweden)

    Alyani Ismail

    2012-08-01

    Full Text Available Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes’ energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes’ resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic, which is adopted to enhance the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count and average energy of both route and network. This method also distributes the traffic load of sensor nodes throughout the WSN, leading to reduced energy usage, extended network lifetime and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) in terms of energy consumption, balancing and efficiency.

  13. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal in a Fisherian sense, is given. The solution is investigated by a simulation study. It is shown that if the experimental length T1 is fixed, it may be useful to sample the record at a high sampling rate, since more measurements from the system are then collected; no optimal sampling interval exists. But if the total number of sample points N is fixed, an optimal sampling interval exists. Then it is far worse to use a sampling interval that is too large than one that is too small, since the information losses increase rapidly when the sampling interval increases from the optimal value.

  14. Nonlinear H∞ Optimal Control Scheme for an Underwater Vehicle with Regional Function Formulation

    Directory of Open Access Journals (Sweden)

    Zool H. Ismail

    2013-01-01

    Full Text Available A conventional region control technique cannot meet the demands for an accurate tracking performance in view of its inability to accommodate highly nonlinear system dynamics, imprecise hydrodynamic coefficients, and external disturbances. In this paper, a robust technique is presented for an Autonomous Underwater Vehicle (AUV) with a region tracking function. Within this control scheme, nonlinear H∞ and region-based control schemes are used. A Lyapunov-like function is presented for stability analysis of the proposed control law. Numerical simulations are presented to demonstrate the performance of the proposed tracking control of the AUV. It is shown that the proposed control law is robust against parameter uncertainties, external disturbances, and nonlinearities, and it leads to uniform ultimate boundedness of the region tracking error.

  15. Energy-Efficient Distributed Lifetime Optimizing Scheme for Wireless Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    吕伟杰; 白栋霖

    2016-01-01

    In this paper, a sensing model for the coverage analysis of wireless sensor networks is provided. Using this model and the Monte Carlo method, the ratio of private range to sensing range required to obtain the desired coverage can be derived, considering the scale of the deployment area and the number of sensor nodes. Based on the coverage analysis, an energy-efficient distributed node scheduling scheme is proposed to prolong the network lifetime while maintaining the desired sensing coverage, which does not need the geographic or neighbor information of nodes. The proposed scheme can also handle uneven distribution, and it is robust against node failures. Theoretical and simulation results demonstrate its efficiency and usefulness.
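
    The Monte Carlo coverage analysis can be illustrated with a short sketch, assuming a unit-square deployment area, a Boolean disc sensing model and invented node counts and sensing range; it estimates the covered fraction as a function of the number of deployed nodes.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def coverage_fraction(n_nodes, sensing_range, n_points=20_000, area=1.0):
        """Monte Carlo estimate of the fraction of a square deployment area
        covered by n_nodes randomly placed sensors with a disc sensing model."""
        side = np.sqrt(area)
        nodes = rng.uniform(0, side, (n_nodes, 2))
        probes = rng.uniform(0, side, (n_points, 2))
        d2 = ((probes[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
        return np.mean((d2 <= sensing_range ** 2).any(axis=1))

    target = 0.95
    for n in (20, 40, 60, 80, 100):
        cov = coverage_fraction(n, sensing_range=0.1)
        print(f"{n:3d} nodes, r = 0.1: coverage ~ {cov:.3f}"
              + ("  <- meets 95% target" if cov >= target else ""))
    ```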

  16. On the Optimality of Successive Decoding in Compress-and-Forward Relay Schemes

    CERN Document Server

    Wu, Xiugang

    2010-01-01

    In the classical compress-and-forward relay scheme developed by (Cover and El Gamal, 1979), the decoding process operates in a successive way: the destination first decodes the compressed observation of the relay, and then decodes the original message of the source. Recently, two modified compress-and-forward relay schemes were proposed, and in both of them, the destination jointly decodes the compressed observation of the relay and the original message, instead of successively. Such a modification on the decoding process was motivated by realizing that it is generally easier to decode the compressed observation jointly with the original message, and more importantly, the original message can be decoded even without completely decoding the compressed observation. However, the question remains whether this freedom of choosing a higher compression rate at the relay improves the achievable rate of the original message. It has been shown in (El Gamal and Kim, 2010) that the answer is negative in the single relay ...

  17. A DISTRIBUTED COOPERATIVE RELAYING OPTIMIZATION SCHEME FOR SECONDARY TRANSMISSION IN COGNITIVE RADIO NETWORKS

    Institute of Scientific and Technical Information of China (English)

    Chen Dan; Ji Hong

    2011-01-01

    In Cognitive Radio (CR) networks, cooperative communication has recently been regarded as a key technology for improving the spectral utilization efficiency and ensuring the Quality of Service (QoS) for Primary Users (PUs). In this paper, we propose a distributed joint relay selection and power allocation scheme for cooperative secondary transmission, taking both Instantaneous Channel State Information (I-CSI) and residual energy into consideration, where the secondary source and destination may have different available spectrum. Specifically, we formulate the cognitive relay network as a restless bandit system, where the channel and energy state transition is characterized by a finite-state Markov chain. The proposed policy has the indexability property, which dramatically reduces the computation and implementation complexity. Analytical and simulation results demonstrate that our proposed scheme can efficiently enhance the overall system reward, while guaranteeing a good tradeoff between achievable data rate and average network lifetime.

  18. A scheme for multiple sequence alignment optimization--an improvement based on family representative mechanics features.

    Science.gov (United States)

    Liu, Xin; Zhao, Ya-Pu

    2009-12-21

    As a basic tool of modern biology, sequence alignment can provide useful information on the fold, function, and active sites of a protein. In many cases, increased quality of sequence alignment means better performance. The motivation of the present work is to improve the ability of existing scoring schemes/algorithms by better accounting for residue-residue correlations. Based on a coarse-grained approach, the hydrophobic force between each pair of residues is computed from the protein sequence. This results in the construction of an intramolecular hydrophobic force network that describes the whole set of residue-residue interactions of each protein molecule and characterizes the protein's biological properties in the hydrophobic aspect. Earlier work has suggested that such a network can characterize the top weighted feature regarding hydrophobicity. Moreover, for each homologous protein of a family, the corresponding network shares some common and representative family characters that eventually govern the conservation of biological properties during protein evolution. In the present work, we score such family representative characters of a protein by the deviation of its intramolecular hydrophobic force network from that of the background. Such a score can assist existing scoring schemes/algorithms and boost the ability of multiple sequence alignment, e.g. achieving a prominent increase (approximately 50%) in finding structurally alike residue segments at a low identity level. As its theoretical basis is different, the present scheme can assist most existing algorithms and improve their efficiency remarkably.

  19. Improving perfusion quantification in arterial spin labeling for delayed arrival times by using optimized acquisition schemes

    Energy Technology Data Exchange (ETDEWEB)

    Kramme, Johanna [Fraunhofer MEVIS-Institute for Medical Image Computing, Bremen (Germany); Univ. Bremen (Germany). Faculty of Physics and Electronics; Gregori, Johannes [mediri GmbH, Heidelberg (Germany); Diehl, Volker [Fraunhofer MEVIS-Institute for Medical Image Computing, Bremen (Germany); ZEMODI (Zentrum fuer morderne Diagnostik), Bremen (Germany); Madai, Vince I.; Sobesky, Jan [Charite-Universitaetsmedizin Berlin (Germany). Center for Stroke Research Berlin (CSB); Charite-Universitaetsmedizin Berlin (Germany). Dept. of Neurology; Samson-Himmelstjerna, Frederico C. von [Fraunhofer MEVIS-Institute for Medical Image Computing, Bremen (Germany); Charite-Universitaetsmedizin Berlin (Germany). Center for Stroke Research Berlin (CSB); Charite-Universitaetsmedizin Berlin (Germany). Dept. of Neurology; Lentschig, Markus [ZEMODI (Zentrum fuer morderne Diagnostik), Bremen (Germany); Guenther, Matthias [Fraunhofer MEVIS-Institute for Medical Image Computing, Bremen (Germany); Univ. Bremen (Germany). Faculty of Physics and Electronics; mediri GmbH, Heidelberg (Germany)

    2015-07-01

    The improvement in Arterial Spin Labeling (ASL) perfusion quantification, especially for delayed bolus arrival times (BAT), with an acquisition redistribution scheme mitigating the T1 decay of the label in multi-TI ASL measurements is investigated. A multi-inflow-time (TI) 3D-GRASE sequence is presented which adapts the distribution of acquisitions accordingly, while keeping the scan time constant. The MR sequence increases the number of averages at long TIs and decreases their number at short TIs, thus compensating for the T1 decay of the label. The improvement in perfusion quantification is evaluated in simulations as well as in vivo in healthy volunteers and patients with prolonged BATs due to age or steno-occlusive disease. The improvement in perfusion quantification depends on BAT. At healthy BATs the differences are small, but they become larger for the longer BATs typically found in certain diseases. The relative error of perfusion is improved by up to 30% at BATs > 1500 ms in comparison to the standard acquisition scheme. This adapted acquisition scheme improves the perfusion measurement in comparison to standard multi-TI ASL implementations. It provides relevant benefit in clinical conditions that cause prolonged BATs and is therefore of high clinical relevance for neuroimaging of steno-occlusive diseases.

  20. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Science.gov (United States)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practices (BMPs) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met with the lowest possible BMP implementation cost. Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.

  1. ESTIMATION OF MEAN IN PRESENCE OF MISSING DATA UNDER TWO-PHASE SAMPLING SCHEME

    Directory of Open Access Journals (Sweden)

    Narendra Singh Thakur

    2011-01-01

    Full Text Available To estimate the population mean with imputation, i.e. the technique of substituting missing data, there are a number of techniques available in the literature, such as the Ratio method of imputation, Compromised method of imputation, Mean method of imputation, Ahmed method of imputation, F-T method of imputation, and so on. If the population mean of the auxiliary information is unknown, then these methods are not useful and two-phase sampling is used to obtain the population mean. This paper presents some imputation methods for missing values in two-phase sampling. Two different sampling designs in two-phase sampling are compared under imputed data. The bias and m.s.e. of the suggested estimators are derived in the form of population parameters using the concept of large sample approximation. A numerical study is performed over two populations using the expressions of bias and m.s.e., and efficiency is compared with the Ahmed estimators.
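
    As a rough illustration of one of the techniques named above, the ratio method of imputation replaces a missing study value y_i by R_hat * x_i, where R_hat is estimated from the responding units. The sketch below uses made-up single-phase data and a simple ratio estimator purely for illustration; it is not one of the two-phase estimators compared in the paper.

    ```python
    import numpy as np

    def ratio_imputation_mean(x, y, responding):
        """Ratio method of imputation: missing y_i are replaced by r_hat * x_i,
        where r_hat is the ratio of means over the responding units."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        responding = np.asarray(responding, bool)
        r_hat = y[responding].mean() / x[responding].mean()
        y_imputed = np.where(responding, y, r_hat * x)
        return y_imputed.mean()

    # toy example: 10 units, 3 of them non-responding on y (zeros stand in for missing values)
    x = [12, 15, 9, 20, 14, 11, 18, 16, 10, 13]
    y = [24, 31, 17, 0, 29, 0, 35, 33, 0, 26]
    responding = [True, True, True, False, True, False, True, True, False, True]
    print("Estimated mean of y:", ratio_imputation_mean(x, y, responding))
    ```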

  2. Optimized Mooring Line Simulation Using a Hybrid Method Time Domain Scheme

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Voie, Per Erlend Torbergsen; Høgsberg, Jan;

    2014-01-01

    ... of mooring lines by two orders of magnitude. The present study shows how an ANN trained to perform nonlinear dynamic response simulation can be optimized using a method known as optimal brain damage (OBD) and thereby be used to rank the importance of all analysis inputs. Both the training and the optimization of the ANN are based on one short time-domain simulation sequence generated by a FEM model of the structure. This means that it is possible to evaluate the importance of input parameters based on this single simulation only. The method is tested on a numerical model of mooring lines on a floating offshore...

  3. Joint Optimized CPU and Networking Control Scheme for Improved Energy Efficiency in Video Streaming on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sung-Woong Jo

    2017-01-01

    Full Text Available Video streaming service is one of the most popular applications for mobile users. However, mobile video streaming services consume a lot of energy, resulting in a reduced battery life. This is a critical problem that results in a degraded user’s quality of experience (QoE). Therefore, in this paper, a joint optimization scheme that controls both the central processing unit (CPU) and wireless networking of the video streaming process for improved energy efficiency on mobile devices is proposed. For this purpose, the energy consumption of the network interface and CPU is analyzed, and based on the energy consumption profile a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded in each packet. Simulation results show that the proposed algorithm can effectively improve the energy efficiency when compared with the existing algorithms.

  4. PARAMETRIC OPTIMIZATION OF THE MULTIMODAL DECISION-LEVEL FUSION SCHEME IN AUTOMATIC BIOMETRIC PERSON’S IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    A. V. Timofeev

    2014-05-01

    Full Text Available This paper deals with an original method of structural parametric optimization for a multimodal decision-level fusion scheme, which combines the results of the partial solutions of the classification task obtained from an assembly of monomodal classifiers. As a result, a multimodal fusion classifier with the minimum total error rate is obtained. Properties of the proposed approach are proved rigorously. The suggested method has immediate practical application in automatic multimodal biometric person identification systems and in systems for remote monitoring of extended objects. The proposed solution is easy to implement in real operating systems. The paper presents a simulation study of the effectiveness of this optimized multimodal fusion classifier, carried out on a special bimodal biometric database. Simulation results show the high practical effectiveness of the suggested method.

  5. Simplified Optimal Parenthesization Scheme for Matrix Chain Multiplication Problem using Bottom-up Practice in 2-Tree Structure

    Directory of Open Access Journals (Sweden)

    Biswajit BHOWMIK

    2011-01-01

    Full Text Available Dynamic Programming is one of the sledgehammers of the algorithms craft in optimization. The versatility of the dynamic programming method is best appreciated by exposure to a wide variety of applications. In this paper a modified algorithm is introduced that provides a suitable procedure for determining how to multiply a chain of matrices. The model, an Optimal Parenthesization Scheme using 2-Tree Generation (OPS2TG), acts as one realization of the proposed algorithm. A new approach for breaking the matrix chain and inserting parentheses in it is also designed within this model. The comparison study between the proposed model and the classical approach shows acceptance of the model suggested here.
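
    For reference, the classical bottom-up dynamic program that any matrix chain parenthesization scheme builds on is sketched below; this is the textbook algorithm, not the OPS2TG model itself, and the dimension list is just an example.

    ```python
    def matrix_chain_order(dims):
        """Classic bottom-up DP: dims[i-1] x dims[i] is the shape of matrix i.
        Returns the minimal scalar-multiplication count and a parenthesization."""
        n = len(dims) - 1
        cost = [[0] * (n + 1) for _ in range(n + 1)]
        split = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(2, n + 1):            # chain length
            for i in range(1, n - length + 2):
                j = i + length - 1
                cost[i][j] = float("inf")
                for k in range(i, j):
                    q = cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                    if q < cost[i][j]:
                        cost[i][j], split[i][j] = q, k

        def parens(i, j):
            if i == j:
                return f"A{i}"
            return f"({parens(i, split[i][j])} x {parens(split[i][j] + 1, j)})"

        return cost[1][n], parens(1, n)

    # -> (15125, '((A1 x (A2 x A3)) x ((A4 x A5) x A6))')
    print(matrix_chain_order([30, 35, 15, 5, 10, 20, 25]))
    ```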

  6. ModelCenter-Integrated Reduced Order Multi-fidelity Optimization Scheme for NASA MDAO Framework Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this effort, ZONA Technology, Inc aims at developing an innovative multi-fidelity and multi-disciplinary optimization (MDO) sub-framework that can (i) effectively...

  7. Improved Data Transmission Scheme of Network Coding Based on Access Point Optimization in VANET

    Directory of Open Access Journals (Sweden)

    Zhe Yang

    2014-01-01

    Full Text Available VANETs are a hot spot of intelligent transportation research. For vehicle users, file sharing and content distribution through roadside access points (APs) as well as the vehicular ad hoc network (VANET) have become an important complement to the cellular network. AP deployment is therefore one of the key issues in improving the communication performance of a VANET. In this paper, an access point optimization method is proposed based on the particle swarm optimization algorithm. The transmission performances of the routing protocol with random linear network coding before and after the access point optimization are analyzed. The simulation results show that the optimization model greatly affects the VANET transmission performances based on network coding: it can enhance the delivery rate by 25% and 14% and reduce the average delay of transmission by 38% and 33%.
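
    The particle swarm optimization step can be sketched generically as below; the objective here is a toy function standing in for the paper's AP-placement/coverage objective, and the swarm parameters are assumed defaults rather than values from the cited work.

    ```python
    import random

    def pso(objective, dim, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimizer: each particle keeps a velocity and its own
        best position, and is attracted toward the swarm's best position."""
        lo, hi = bounds
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [objective(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # toy objective: squared distance from (3, 3), standing in for a coverage cost
    print(pso(lambda p: sum((x - 3.0) ** 2 for x in p), dim=2, bounds=(-10.0, 10.0)))
    ```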

  8. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    Energy Technology Data Exchange (ETDEWEB)

    Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.

  9. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single degree of- freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  10. Sample-Path Optimization of Buffer Allocations in a Tandem Queue - Part I : Theoretical Issues

    NARCIS (Netherlands)

    Gürkan, G.; Ozge, A.Y.

    1996-01-01

    This is the first of two papers dealing with the optimal buffer allocation problem in tandem manufacturing lines with unreliable machines. We address the theoretical issues that arise when using sample-path optimization, a simulation-based optimization method, to solve this problem. Sample-path optimiz...

  11. Kernel Density Independence Sampling based Monte Carlo Scheme (KISMCS) for inverse hydrological modeling

    NARCIS (Netherlands)

    Shafiei, M.; Gharari, S.; Pande, S.; Bhulai, S.

    2014-01-01

    Posterior sampling methods are increasingly being used to describe parameter and model predictive uncertainty in hydrologic modelling. This paper proposes an alternative to random walk chains (such as DREAM-zs). We propose a sampler based on independence chains with an embedded feature of standardiz

  12. Continuous quality control of the blood sampling procedure using a structured observation scheme

    DEFF Research Database (Denmark)

    Seemann, Tine Lindberg; Nybo, Mads

    2016-01-01

    blood drawings by 39 phlebotomists were observed in the pilot study, while 84 blood drawings by 34 phlebotomists were observed in the follow-up study. In the pilot study, the three major error items were hand hygiene (42% error), mixing of samples (22%), and order of draw (21%). Minor significant...

  13. Frequency-Selective Signal Sensing with Sub-Nyquist Uniform Sampling Scheme

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2015-01-01

    by the Restricted Isometry Property, which is known from the field of compressed sensing. Then, compressed sensing is used to successfully reconstruct a wanted signal even if some of the uniform samples were randomly lost, e. g. due to ADC saturation. An experiment which tests the proposed method in practice...

  14. Optimal decision-making model of spatial sampling for survey of China's land with remotely sensed data

    Institute of Scientific and Technical Information of China (English)

    LI Lianfa; WANG Jinfeng; LIU Jiyuan

    2005-01-01

    In the remote sensing survey of the country's land, cost and accuracy are a pair of conflicting goals, for which spatial sampling is a preferable solution aimed at an optimal balance between economic input and accuracy of results, in other words the acquisition of higher accuracy at less cost. To counter the drawbacks of previous application models, e.g. the lack of comprehensive and quantitative comparison, an optimal decision-making model of spatial sampling is proposed. This model first acquires the possible accuracy-cost diagrams of multiple schemes through initial spatial exploration, then regresses them and standardizes them into a unified reference frame, and finally produces the relatively optimal sampling scheme by using the discrete decision-making function (built in this paper) and comparing the schemes in combination with the diagrams. According to the test results in the survey of arable land using remotely sensed data, the Sandwich model, when applied in the survey of thin-feature and cultivated land areas with aerial photos, can better realize the goal of the best balance between investment and accuracy. With this case and other cases, it is shown that the optimal decision-making model of spatial sampling is a good choice in the survey of farm areas using remote sensing, with its distinguished benefit of higher precision at less cost or vice versa. In order to apply the model extensively in surveys of natural resources, including arable farm areas, this paper proposes a development prototype using component technology, which could considerably improve the analysis efficiency by embedding program components within the software environment of GIS and RS.

  15. Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions.

    Energy Technology Data Exchange (ETDEWEB)

    Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr. (; .); Giunta, Anthony Andrew

    2006-01-01

    Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictate possible optimization algorithms. Often, a gradient based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multi fidelity models to develop a dynamic and computational time saving optimization algorithm. First, a direct search method is applied to the high fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high fidelity model to that of a computationally cheaper low fidelity model using space mapping techniques. Then, in the low fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and

  16. Evaluation of an Optimal Epidemiological Typing Scheme for Legionella pneumophila with Whole-Genome Sequence Data Using Validation Guidelines.

    Science.gov (United States)

    David, Sophia; Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R; Afshar, Baharak; Underwood, Anthony; Fry, Norman K; Parkhill, Julian; Harrison, Timothy G

    2016-08-01

    Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current "gold standard" typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard "typing panel," previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP-based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila.
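
    The discriminatory power values quoted above are conventionally computed as Simpson's index of diversity (the Hunter–Gaston discriminatory index). A minimal sketch of that calculation on hypothetical type counts is shown below; the counts are invented and do not correspond to the study's panel.

    ```python
    def discriminatory_index(type_counts):
        """Simpson's index of diversity as used for typing schemes:
        D = 1 - sum(n_j * (n_j - 1)) / (N * (N - 1)),
        where n_j is the number of isolates assigned to type j."""
        n_total = sum(type_counts)
        return 1.0 - sum(n * (n - 1) for n in type_counts) / (n_total * (n_total - 1))

    # hypothetical example: 100 isolates split over types of sizes 40, 25, 20, 10 and 5
    print(round(discriminatory_index([40, 25, 20, 10, 5]), 3))   # -> 0.732
    ```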

  17. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    Science.gov (United States)

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.

  18. Schemes of Repeater Optimizing Distribution based on the MLC Application and CBLRD Simulation

    Directory of Open Access Journals (Sweden)

    Qian Qiuye

    2013-07-01

    Full Text Available The wide use of repeaters raises concern about their coordination among the public. Since repeaters may suffer from interaction and limited bearing capacity, designing a reasonable repeater coordination method is of great significance. This study addresses the problem of repeater coordination in a circular flat area with a minimal number of repeaters, using seamless coverage theory and a system simulation method. With 1,000 users, this study models the coverage, obtaining the minimal number of repeaters for different coverage radii based on the widely used regular hexagon coverage theory; a numerical example is given for this case. When the number of users increases to 10,000, this study simulates the signal density across the area, taking into account the repeaters and the different distributions of users, which are divided into uniform, linear, normal, and lognormal distributions. Then, Multi-Layer Coverage (MLC) and Coverage by Link Rate Density (CBLRD) are created as distribution schemes for areas where repeater service demand is large. Moreover, for the distribution of repeaters with barriers, distribution schemes are given considering the transmission of VHF spectra and the distribution of users around the barrier. Additionally, a Spring Comfortable Degree (SCD) is used for evaluation of the results, and developing trends are given to improve the model. Owing to the reasonable assumptions, the proposed repeater distribution is of pivotal reference value.
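
    The regular-hexagon coverage argument can be made concrete with a quick area-based count: if each repeater reliably serves a disc of radius r, tiling with hexagonal cells inscribed in those discs gives a back-of-the-envelope repeater count for a circular area of radius R. The sketch below is only this rough estimate (it ignores edge effects and user distribution) and is not the paper's simulation model; the radii are assumed example values.

    ```python
    import math

    def repeater_count_estimate(area_radius_km, repeater_radius_km):
        """Area-based estimate: each repeater's hexagonal cell (inscribed in its
        coverage disc of radius r) has area 3*sqrt(3)/2 * r^2."""
        region_area = math.pi * area_radius_km ** 2
        cell_area = 1.5 * math.sqrt(3) * repeater_radius_km ** 2
        return math.ceil(region_area / cell_area)

    print(repeater_count_estimate(40.0, 10.0))   # circular area of radius 40 km, 10 km repeaters
    ```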

  19. A Numerical Approach for Solving Optimal Control Problems Using the Boubaker Polynomials Expansion Scheme

    Directory of Open Access Journals (Sweden)

    B. Kafash

    2014-04-01

    Full Text Available In this paper, we present a computational method for solving optimal control problems and the controlled Duffing oscillator. This method is based on state parametrization. In fact, the state variable is approximated by Boubaker polynomials with unknown coefficients. The equation of motion, performance index and boundary conditions are converted into some algebraic equations. Thus, an optimal control problem converts to an optimization problem, which can then be solved easily. By this method, the numerical value of the performance index is obtained. Also, the control and state variables can be approximated as functions of time. Convergence of the algorithms is proved. Numerical results are given for several test examples to demonstrate the applicability and efficiency of the method.

  1. Triangular Geometrized Sampling Heuristics for Fast Optimal Motion Planning

    Directory of Open Access Journals (Sweden)

    Ahmed Hussain Qureshi

    2015-02-01

    Full Text Available Rapidly-exploring Random Tree (RRT)-based algorithms have become increasingly popular due to their lower computational complexity as compared with other path planning algorithms. The recently presented RRT* motion planning algorithm improves upon the original RRT algorithm by providing optimal path solutions. While RRT determines an initial collision-free path fairly quickly, RRT* guarantees almost certain convergence to an optimal, obstacle-free path from the start to the goal points for any given geometrical environment. However, the main limitations of RRT* include its slow processing rate and high memory consumption, due to the large number of iterations required for calculating the optimal path. In order to overcome these limitations, we present another improvement, i.e., the Triangular Geometrized RRT* (TG-RRT*) algorithm, which utilizes triangular geometrical methods to improve the performance of the RRT* algorithm in terms of the processing time and a decreased number of iterations required for an optimal path solution. Simulations comparing the performance results of the improved TG-RRT* with RRT* are presented to demonstrate the overall improvement in performance and optimal path detection.
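
    For orientation, the baseline RRT expansion loop that RRT* and TG-RRT* refine is sketched below for an obstacle-free unit square; it is the plain RRT, not the triangular-geometry variant described above, and the step size, goal bias, and tolerances are assumed values.

    ```python
    import math, random

    def rrt(start, goal, step=0.05, goal_tol=0.05, max_iters=5000):
        """Plain RRT in the unit square with no obstacles: repeatedly sample a point,
        extend the nearest tree node toward it, and stop when the goal is reached."""
        nodes, parent = [start], {0: None}
        for _ in range(max_iters):
            sample = goal if random.random() < 0.05 else (random.random(), random.random())
            i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
            nx, ny = nodes[i]
            d = math.dist((nx, ny), sample)
            t = min(1.0, step / d) if d > 0 else 0.0
            new = (nx + t * (sample[0] - nx), ny + t * (sample[1] - ny))
            parent[len(nodes)] = i
            nodes.append(new)
            if math.dist(new, goal) < goal_tol:
                path, k = [], len(nodes) - 1
                while k is not None:
                    path.append(nodes[k])
                    k = parent[k]
                return path[::-1]
        return None

    print(rrt((0.1, 0.1), (0.9, 0.9)))
    ```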

  2. Multiobjective optimization scheme for industrial synthesis gas sweetening plant in GTL process

    Institute of Scientific and Technical Information of China (English)

    Alireza Behroozsarand; Akbar Zamaniyan

    2011-01-01

    In industrial amine plants the optimized operating conditions are obtained from experience with the events and challenges that normally occur in the working units. For the sake of reducing costs and time consumption, and preventing accidents, the optimization can instead be performed by a computer program. In this paper, simulation and parameter analysis of an amine plant is performed first. The optimization of this unit is studied using the Non-Dominated Sorting Genetic Algorithm-II in order to produce sweet gas with a CO2 mole percentage less than 2.0% and an H2S concentration less than 10 ppm for application in Fischer-Tropsch synthesis. The simulation of the plant in HYSYS v.3.1 software has been linked with MATLAB code for real-parameter NSGA-II to simulate and optimize the amine process. Three scenarios are selected to cover the effect of the (DEA/MDEA) mass composition ratio of the amine solution on the objective functions. Results show that a sour gas temperature and pressure of 33.98 ℃ and 14.96 bar, a DEA/CO2 molar flow ratio of 12.58, a regeneration gas temperature and pressure of 94.92 ℃ and 3.0 bar, a regenerator pressure of 1.53 bar, and a ratio of DEA/MDEA = 20%/10% are the best values for minimizing plant energy consumption, amine circulation rate, and carbon dioxide recovery.

  3. On the optimality of the Ott-Grebogi-Yorke control scheme

    Science.gov (United States)

    Epureanu, Bogdan I.; Dowell, Earl H.

    1998-05-01

    Some of the characteristics of the Ott-Grebogi-Yorke (OGY) control technique are presented as applied to nonlinear flows, as distinct from nonlinear maps. Specifically, we consider the case where the magnitude of the control parameter varies in time within each control cycle in proportion to a given function referred to as a basis function. The choice of the basis function is shown to influence the basin of convergence for a given level of parameter variation in the OGY controller. An algorithm for designing the optimal basis function is presented. The optimal basis function is shown to be defined by a step function with potentially several jumps, thus revealing the intrinsic power of a standard OGY technique that uses a single step function as a basis function. Two numerical applications of the optimal design technique to a Duffing oscillator are also presented to show that the standard OGY technique may be significantly improved by making an optimal choice of a basis function.

  4. Progress Towards Optimally Efficient Schemes for Monte Carlo Thermal Radiation Transport

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R P; Brooks III, E D

    2007-09-26

    In this summary we review the complementary research being undertaken at AWE and LLNL aimed at developing optimally efficient algorithms for Monte Carlo thermal radiation transport based on the difference formulation. We conclude by presenting preliminary results on the application of Newton-Krylov methods for solving the Symbolic Implicit Monte Carlo (SIMC) energy equation.

  5. TEM10 homodyne detection as an optimal small-displacement and tilt-measurement scheme

    DEFF Research Database (Denmark)

    Delaubert, Vincent; Treps, Nikolas; Lassen, Mikael Østergaard

    2006-01-01

    We report an experimental demonstration of optimal measurements of small displacement and tilt of a Gaussian beam - two conjugate variables - involving a homodyne detection with a TEM10 local oscillator. We verify that the standard split detection is only 64% efficient. We also show a displacement...

  6. Optimal Scheme for Search State Space and Scheduling on Multiprocessor Systems

    Science.gov (United States)

    Youness, Hassan A.; Sakanushi, Keishi; Takeuchi, Yoshinori; Salem, Ashraf; Wahdan, Abdel-Moneim; Imai, Masaharu

    A scheduling algorithm aims to minimize the overall execution time of a program by properly allocating and arranging the execution order of the tasks on the core processors such that the precedence constraints among the tasks are preserved. In this paper, we present a new scheduling algorithm that uses geometric analysis of the Task Precedence Graph (TPG) based on the A* search technique, together with a computationally efficient cost function for guiding the search with reduced complexity and pruning techniques, to produce an optimal solution for the allocation/scheduling problem of a parallel application on a parallel multiprocessor architecture. The main goal of this work is to significantly reduce the search space and achieve an optimal or near-optimal solution. We applied the algorithm to general task graph problems treated in most of the related work and obtained optimal schedules with a small number of states. The proposed algorithm reduced the search space by at least 50% relative to exhaustive search. The viability and potential of the proposed algorithm are demonstrated by an illustrative example.

  7. Optimization of time data codification and transmission schemes: Application to Gaia

    CERN Document Server

    Portell, J; Luri, X; Portell, Jordi; Garcia-Berro, Enrique; Luri, Xavier

    2005-01-01

    Gaia is an ambitious space observatory devoted to obtain the largest and most precise astrometric catalogue of astronomical objects from our Galaxy and beyond. On-board processing and transmission of the huge amount of data generated by the instruments is one of its several technological challenges. The measurement time tags are critical for the scientific results of the mission, so they must be measured and transmitted with the highest precision - leading to an important telemetry channel occupation. In this paper we present the optimisation of time data, which has resulted in a useful software tool. We also present how time data is adapted to the Packet Telemetry standard. The several communication layers are illustrated and a method for coding and transmitting the relevant data is described as well. Although our work is focused on Gaia, the timing scheme and the corresponding tools can be applied to any other instrument or mission with similar operational principles.

  8. Optimal placement of FACTS controller scheme for enhancement of power system security in Indian scenario

    Directory of Open Access Journals (Sweden)

    Imran Khan

    2015-09-01

    Full Text Available This paper presents a FACTS operation scheme to enhance power system security. Three main generic types of FACTS devices are introduced. Line overloads are solved by controlling the active power of series compensators, and low voltages are solved by controlling the reactive power of shunt compensators, respectively. In particular, combined series-shunt compensators such as the UPFC are applied to solve both line congestion and low voltages simultaneously. Two kinds of indices that indicate the security level related to line flow and bus voltage are utilized in this paper. They are iteratively minimized to determine operating points of the devices for security enhancement. The sensitivity vectors of the indices are derived to determine the direction of the minimum. The proposed algorithm is verified on the IEEE 14-bus system with FACTS devices in a normal condition and in a line-faulted contingency.

  9. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This

  10. Sampled-Data and Discrete-Time H2 Optimal Control

    NARCIS (Netherlands)

    Trentelman, H.L.; Stoorvogel, A.A.

    1995-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This

  11. An Optimal Integrated Control Scheme for Permanent Magnet Synchronous Generator-Based Wind Turbines under Asymmetrical Grid Fault Conditions

    Directory of Open Access Journals (Sweden)

    Dan Wang

    2016-04-01

    Full Text Available In recent years, the increasing penetration level of wind energy into power systems has brought new issues and challenges. One of the main concerns is the issue of dynamic response capability during outer disturbance conditions, especially the fault-tolerance capability during asymmetrical faults. In order to improve the fault-tolerance and dynamic response capability under asymmetrical grid fault conditions, an optimal integrated control scheme for the grid-side voltage-source converter (VSC) of direct-driven permanent magnet synchronous generator (PMSG)-based wind turbine systems is proposed in this paper. The optimal control strategy includes a main controller and an additional controller. In the main controller, a double-loop controller based on differential flatness-based theory is designed for the grid-side VSC. Two parts are involved in the design process of the flatness-based controller: the reference trajectories generation of flatness output and the implementation of the controller. In the additional control aspect, an auxiliary second harmonic compensation control loop based on an improved calculation method for grid-side instantaneous transmission power is designed by the quasi proportional resonant (Quasi-PR) control principle, which is able to simultaneously restrain the second harmonic components in active power and reactive power injected into the grid without the respective calculation for current control references. Moreover, to reduce the DC-link overvoltage during grid faults, the mathematical model of DC-link voltage is analyzed and a feedforward modified control factor is added to the traditional DC voltage control loop in grid-side VSC. The effectiveness of the optimal control scheme is verified in PSCAD/EMTDC simulation software.

  12. Near-Optimal Detection in MIMO Systems using Gibbs Sampling

    DEFF Research Database (Denmark)

    Hansen, Morten; Hassibi, Babak; Dimakis, Georgios Alexandros

    2009-01-01

    In this paper we study a Markov Chain Monte Carlo (MCMC) Gibbs sampler for solving the integer least-squares problem. In digital communication the problem is equivalent to performing Maximum Likelihood (ML) detection in Multiple-Input Multiple-Output (MIMO) systems. While the use of MCMC methods for such problems has already been proposed, our method is novel in that we optimize the "temperature" parameter so that in steady state, i.e., after the Markov chain has mixed, there is only polynomially (rather than exponentially) small probability of encountering the optimal solution. More precisely, we obtain...
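
    A minimal sketch of this kind of Gibbs sampler, for detecting a ±1 vector in y = Hx + n, is given below. The conditional flip probability follows from the target distribution exp(-||y - Hx||² / (2α²)); here the temperature α is simply fixed to an assumed value, whereas choosing it correctly is exactly the contribution of the cited work.

    ```python
    import numpy as np

    def gibbs_detector(H, y, alpha=1.0, sweeps=200, rng=np.random.default_rng(0)):
        """Coordinate-wise Gibbs sampling over x in {-1,+1}^n targeting
        p(x) ~ exp(-||y - Hx||^2 / (2 alpha^2)); returns the best sample seen."""
        n = H.shape[1]
        x = rng.choice([-1.0, 1.0], size=n)
        best, best_cost = x.copy(), np.sum((y - H @ x) ** 2)
        for _ in range(sweeps):
            for i in range(n):
                x[i] = 1.0
                cost_plus = np.sum((y - H @ x) ** 2)
                x[i] = -1.0
                cost_minus = np.sum((y - H @ x) ** 2)
                p_plus = 1.0 / (1.0 + np.exp((cost_plus - cost_minus) / (2.0 * alpha ** 2)))
                x[i] = 1.0 if rng.random() < p_plus else -1.0
                cost = cost_plus if x[i] > 0 else cost_minus
                if cost < best_cost:
                    best, best_cost = x.copy(), cost
        return best

    # toy 4x4 system with a known transmitted vector
    rng = np.random.default_rng(1)
    H = rng.standard_normal((4, 4))
    x_true = np.array([1.0, -1.0, 1.0, 1.0])
    y = H @ x_true + 0.1 * rng.standard_normal(4)
    print(gibbs_detector(H, y))
    ```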

  13. An optimal implicit staggered-grid finite-difference scheme based on the modified Taylor-series expansion with minimax approximation method for elastic modeling

    Science.gov (United States)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2017-03-01

    Implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, whereas its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes the advantage of the TE method that guarantees great accuracy at small wavenumbers, and keeps the property of the MA method that keeps the numerical errors within a limited bound at the same time. Thus, it leads to great accuracy for numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis is made in comparison with the conventional TE-based ISFD scheme, indicating that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.

  14. Time-optimal path planning in dynamic flows using level set equations: theory and schemes

    Science.gov (United States)

    Lolla, Tapovan; Lermusiaux, Pierre F. J.; Ueckermann, Mattheus P.; Haley, Patrick J.

    2014-10-01

    We develop an accurate partial differential equation-based methodology that predicts the time-optimal paths of autonomous vehicles navigating in any continuous, strong, and dynamic ocean currents, obviating the need for heuristics. The goal is to predict a sequence of steering directions so that vehicles can best utilize or avoid currents to minimize their travel time. Inspired by the level set method, we derive and demonstrate that a modified level set equation governs the time-optimal path in any continuous flow. We show that our algorithm is computationally efficient and apply it to a number of experiments. First, we validate our approach through a simple benchmark application in a Rankine vortex flow for which an analytical solution is available. Next, we apply our methodology to more complex, simulated flow fields such as unsteady double-gyre flows driven by wind stress and flows behind a circular island. These examples show that time-optimal paths for multiple vehicles can be planned even in the presence of complex flows in domains with obstacles. Finally, we present and support through illustrations several remarks that describe specific features of our methodology.
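
    In the notation commonly used for this methodology, the governing relation referred to above is a Hamilton–Jacobi level set equation of roughly the following form, where φ is the reachability front, F the nominal vehicle speed, V(x, t) the flow field, and x_s the start point; the exact statement and additional terms in the cited paper may differ.

    ```latex
    \frac{\partial \phi}{\partial t}
      + F \,\left| \nabla \phi \right|
      + \mathbf{V}(\mathbf{x},t) \cdot \nabla \phi = 0 ,
    \qquad
    \phi(\mathbf{x},0) = \left| \mathbf{x} - \mathbf{x}_{s} \right| .
    ```

    The zero level set of φ then traces the time-optimal reachability front, and the optimal path is recovered by backtracking from the goal through the stored fronts.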

  15. A New Energy Optimal Control Scheme for a Separately Excited DC Motor Based Incremental Motion Drive

    Institute of Scientific and Technical Information of China (English)

    Milan A.Sheta; Vivek Agarwal; Paluri S.V.Nataraj

    2009-01-01

    This paper considers minimization of resistive and frictional power dissipation in a separately excited DC motor based incremental motion drive (IMD). The drive is required to displace a given, fixed load through a definite angle in specified time, with minimum energy dissipation in the motor windings and minimum frictional losses. Accordingly, an energy optimal (EO) control strategy is proposed in which the motor is first accelerated to track a specific speed profile for a pre-determined optimal time period. Thereafter, both armature and field power supplies are disconnected, and the motor decelerates and comes to a halt at the desired displacement point in the desired total displacement time. The optimal time period for the initial acceleration phase is computed so that the motor stores just enough energy to decelerate to the final position at the specified displacement time. The parameters, such as the moment of inertia and coefficient of friction, which depend on the load and other external conditions, have been obtained using a system identification method. Comparison with earlier control techniques is included. The results show that the proposed EO control strategy results in a significant reduction of energy losses compared to the existing ones.

  16. Fuzzy logic scheme for tip-sample distance control for a low cost near field optical microscope

    Directory of Open Access Journals (Sweden)

    J.A. Márquez

    2013-12-01

    Full Text Available The control of the tip-sample distance in a Scanning Near Field Optical Microscope (SNOM) is essential for reliable surface mapping. The control algorithm should be able to maintain the system at a constant distance between the tip and the surface. In this system, nanometric adjustments should be made in order to sense topographies at the same scale with an appropriate resolution. These kinds of devices vary their properties over short periods of time, so a control algorithm capable of handling these changes is required. In this work a fuzzy logic control scheme is proposed in order to manage the changes the device might undergo over time, and to counter the effects of the nonlinearity as well. Two inputs are used to program the rules inside the fuzzy logic controller: the difference between the reference signal and the sample signal (the error), and the speed at which it decreases or increases. A lock-in amplifier is used as data acquisition hardware to sample the high frequency signals used to produce the tuning fork oscillations. Once these variables are read, the control algorithm calculates a voltage output to move the piezoelectric device, approaching the tip-probe to, or retracting it from, the sample analyzed.

  17. Optimization of ITS Construction Scheme for Road Network under the Restriction of Different Transports’ Passenger Person-Kilometers

    Directory of Open Access Journals (Sweden)

    Ming-wei Li

    2017-01-01

    Full Text Available Diversified transport modes and increased personal transportation demands have aggravated urban traffic problems such as traffic congestion and environmental pollution. To cope with traffic problems, advanced transportation technologies are being developed as intelligent transportation systems (ITS). There is a growing trend to coordinate various kinds of transportation modes. However, the effective construction and application of ITS in urban traffic can be affected by many factors, such as transport mode. Therefore, how to reasonably construct ITS with consideration of different transport modes’ characteristics and requirements is an important research challenge. Additionally, both costs and negative effects must be minimized and application efficiency must be optimal in the construction process. To address these requirements, a multiobjective optimization model and a fuzzy optimum-selection model were combined to study the construction scheme based on the optimization results. The empirical analysis of Beijing, China, suggested several considerations for improvements to future road network ITS construction with controlled costs. Finally, guidelines are proposed to facilitate ITS construction, improve ITS application efficiency, and transform and innovate strategies to cope with urban traffic.

  18. Optimal modulation and coding scheme allocation of scalable video multicast over IEEE 802.16e networks

    Directory of Open Access Journals (Sweden)

    Tsai Chia-Tai

    2011-01-01

    Full Text Available With the rapid development of wireless communication technology and the rapid increase in demand for network bandwidth, IEEE 802.16e is an emerging network technique that has been deployed in many metropolises. In addition to the features of high data rate and large coverage, it also enables scalable video multicasting, which is a potentially promising application, over an IEEE 802.16e network. How to optimally assign the modulation and coding scheme (MCS) of the scalable video stream for the mobile subscriber stations to improve spectral efficiency and maximize utility is a crucial task. We formulate this MCS assignment problem as an optimization problem, called the total utility maximization problem (TUMP). This article transforms the TUMP into a precedence constraint knapsack problem, which is an NP-complete problem. Then, a branch and bound method, which is based on two dominance rules and a lower bound, is presented to solve the TUMP. The simulation results show that the proposed branch and bound method can find the optimal solution efficiently.

  19. An Approach for Optimal Feature Subset Selection using a New Term Weighting Scheme and Mutual Information

    Directory of Open Access Journals (Sweden)

    Shine N Das

    2011-01-01

    Full Text Available With the development of the web, large numbers of documents are available on the Internet and they are growing drastically day by day. Hence automatic text categorization becomes more and more important for dealing with massive data. However, the major problem of document categorization is the high dimensionality of the feature space. Measures that decrease the feature dimension without decreasing recognition performance address the problem of optimal feature extraction or selection. Dealing with a reduced, relevant feature set can be more efficient and effective. The objective of feature selection is to find a subset of features that has all the characteristics of the full feature set. Dependency among features is also important for classification. During past years, various metrics have been proposed to measure the dependency among different features. A popular approach to realize dependency is maximal-relevance feature selection: selecting the features with the highest relevance to the target class. The new feature weighting scheme we propose has yielded substantial improvements in dimensionality reduction of the feature space. The experimental results clearly show that this integrated method works far better than the others.
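
    The maximal-relevance idea mentioned above can be sketched as ranking features by their empirical mutual information with the class label. The snippet below does this for discrete features on synthetic data; it is a generic illustration, not the term weighting scheme proposed in the paper, and the synthetic data are invented.

    ```python
    import numpy as np
    from collections import Counter

    def mutual_information(feature, labels):
        """I(X;Y) for two discrete sequences, estimated from empirical frequencies."""
        n = len(labels)
        pxy = Counter(zip(feature, labels))
        px, py = Counter(feature), Counter(labels)
        mi = 0.0
        for (x, y), c in pxy.items():
            p_xy = c / n
            mi += p_xy * np.log2(p_xy / ((px[x] / n) * (py[y] / n)))
        return mi

    def max_relevance_ranking(X, y):
        """Rank feature columns of X by mutual information with the class labels y."""
        scores = [mutual_information(tuple(col), tuple(y)) for col in X.T]
        ranking = sorted(range(X.shape[1]), key=lambda j: scores[j], reverse=True)
        return ranking, scores

    # synthetic example: feature 0 is informative (noisy copy of y), feature 1 is pure noise
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 500)
    X = np.column_stack([y ^ (rng.random(500) < 0.1), rng.integers(0, 2, 500)])
    print(max_relevance_ranking(X, y))
    ```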

  20. Practical splitting methods for the adaptive integration of nonlinear evolution equations. Part I: Construction of optimized schemes and pairs of schemes

    KAUST Repository

    Auzinger, Winfried

    2016-07-28

    We present a number of new contributions to the topic of constructing efficient higher-order splitting methods for the numerical integration of evolution equations. Particular schemes are constructed via setup and solution of polynomial systems for the splitting coefficients. To this end we use and modify a recent approach for generating these systems for a large class of splittings. In particular, various types of pairs of schemes intended for use in adaptive integrators are constructed.
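
    For orientation, a splitting scheme of the kind discussed approximates the flow of u' = A(u) + B(u) over one step h by a composition of the sub-flows of A and B; the polynomial systems mentioned above are the order conditions on the splitting coefficients, the simplest of which (order one) are shown here.

    ```latex
    \mathcal{S}_h \;=\; \prod_{j=1}^{s} e^{\,b_j h B}\, e^{\,a_j h A} \;\approx\; e^{\,h(A+B)},
    \qquad
    \sum_{j=1}^{s} a_j \;=\; \sum_{j=1}^{s} b_j \;=\; 1 \quad \text{(first-order conditions)} .
    ```

    The Lie–Trotter splitting (s = 1, a_1 = b_1 = 1) is first order, while the Strang splitting e^{hA/2} e^{hB} e^{hA/2} also satisfies the additional second-order condition; higher-order schemes require solving larger polynomial systems of this type.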

  1. Design and implementation of an optimal laser pulse front tilting scheme for ultrafast electron diffraction in reflection geometry with high temporal resolution

    Directory of Open Access Journals (Sweden)

    Francesco Pennacchio

    2017-07-01

    Full Text Available Ultrafast electron diffraction is a powerful technique to investigate out-of-equilibrium atomic dynamics in solids with high temporal resolution. When diffraction is performed in reflection geometry, the main limitation is the mismatch in group velocity between the overlapping pump light and the electron probe pulses, which affects the overall temporal resolution of the experiment. A solution already available in the literature involved pulse front tilt of the pump beam at the sample, providing a sub-picosecond time resolution. However, in the reported optical scheme, the tilted pulse is characterized by a temporal chirp of about 1 ps at 1 mm away from the centre of the beam, which limits the investigation of surface dynamics in large crystals. In this paper, we propose an optimal tilting scheme designed for a radio-frequency-compressed ultrafast electron diffraction setup working in reflection geometry with 30 keV electron pulses containing up to 10^5 electrons/pulse. To characterize our scheme, we performed optical cross-correlation measurements, obtaining an average temporal width of the tilted pulse lower than 250 fs. The calibration of the electron-laser temporal overlap was obtained by monitoring the spatial profile of the electron beam when interacting with the plasma optically induced at the apex of a copper needle (plasma lensing effect). Finally, we report the first time-resolved results obtained on graphite, where the electron-phonon coupling dynamics is observed, showing an overall temporal resolution in the sub-500 fs regime. The successful implementation of this configuration opens the way to directly probe structural dynamics of low-dimensional systems in the sub-picosecond regime, with pulsed electrons.

  2. Optimal satellite sampling to resolve global-scale dynamics in the I-T system

    Science.gov (United States)

    Rowland, D. E.; Zesta, E.; Connor, H. K.; Pfaff, R. F., Jr.

    2016-12-01

    The recent Decadal Survey highlighted the need for multipoint measurements of ion-neutral coupling processes to study the pathways by which solar wind energy drives dynamics in the I-T system. The emphasis in the Decadal Survey is on global-scale dynamics and processes, and in particular, mission concepts making use of multiple identical spacecraft in low earth orbit were considered for the GDC and DYNAMIC missions. This presentation will provide quantitative assessments of the optimal spacecraft sampling needed to significantly advance our knowledge of I-T dynamics on the global scale. We will examine storm time and quiet time conditions as simulated by global circulation models, and determine how well various candidate satellite constellations and satellite schemes can quantify the plasma and neutral convection patterns and global-scale distributions of plasma density, neutral density, and composition, and their response to changes in the IMF. While the global circulation models are data-starved, and do not contain all the physics that we might expect to observe with a global-scale constellation mission, they are nonetheless an excellent "starting point" for discussions of the implementation of such a mission. The result will be of great utility for the design of future missions, such as GDC, to study the global-scale dynamics of the I-T system.

  3. Optimization of insulin pump therapy based on high order run-to-run control scheme.

    Science.gov (United States)

    Tuo, Jianyong; Sun, Huiling; Shen, Dong; Wang, Hui; Wang, Youqing

    2015-07-01

    Continuous subcutaneous insulin infusion (CSII) pumps are widely considered a convenient and promising option for type 1 diabetes mellitus (T1DM) subjects, who need exogenous insulin infusion. In standard insulin pump therapy, there are two modes of insulin infusion: basal and bolus insulin. The basal-bolus therapy should be individualized and optimized in order to keep a subject's blood glucose (BG) level within the normal range; however, the optimization procedure is troublesome and perturbs the patients considerably. Therefore, an automatic adjustment method is needed to reduce the burden on the patients, and run-to-run (R2R) control algorithms can handle this task. In this study, two kinds of high order R2R control methods are presented to adjust the basal and bolus insulin simultaneously. For clarity, a second order R2R control algorithm is first derived and studied. Furthermore, considering the differences between weekdays and weekends, a seventh order R2R control algorithm is also proposed and tested. In order to simulate realistic situations, the proposed method has been tested with uncertainties in measurement noise, drifts, meal size, meal timing and snacks. The proposed method can converge even when there are ±60 min random variations in meal timing or ±50% random variations in meal size. The robustness analysis shows that the proposed high order R2R method has excellent robustness and could be a promising candidate to optimize insulin pump therapy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
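    The record does not give its update laws in full, but the flavor of run-to-run adjustment can be sketched as below: a first-order R2R law that nudges the basal rate from one day (run) to the next based on the deviation of measured glucose from target. The gain, target, and toy glucose response are assumptions for illustration only, not the second- or seventh-order algorithms of the record.

```python
# Hedged sketch: first-order run-to-run (R2R) adjustment of a basal insulin rate.
# Gain, target, and the simulated glucose response are illustrative assumptions.
import numpy as np

target_bg = 110.0      # mg/dL, desired average blood glucose (assumed)
gain = 0.005           # U/h change per (mg/dL) of error, an assumed conservative gain
basal = 0.8            # U/h, initial basal rate
rng = np.random.default_rng(0)

def simulated_daily_mean_bg(basal_rate):
    """Toy stand-in for a patient: more basal insulin lowers mean glucose."""
    return 180.0 - 60.0 * basal_rate + rng.normal(0.0, 5.0)

for day in range(1, 15):
    measured = simulated_daily_mean_bg(basal)
    error = measured - target_bg
    basal = basal + gain * error   # R2R update: correct the next run using this run's error
    print(f"day {day:2d}: mean BG {measured:6.1f} mg/dL, next basal {basal:.3f} U/h")
```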

  4. Optimal weighting scheme for suppressing cascades and traffic congestion in complex networks.

    Science.gov (United States)

    Yang, Rui; Wang, Wen-Xu; Lai, Ying-Cheng; Chen, Guanrong

    2009-02-01

    This paper is motivated by the following two related problems in complex networks: (i) control of cascading failures and (ii) mitigation of traffic congestion. Both problems are of significant recent interest as they address, respectively, the security of and efficient information transmission on complex networks. Taking into account typical features of load distribution and weights in real-world networks, we have discovered an optimal solution to both problems. In particular, we provide numerical evidence and theoretical analysis that, by choosing a proper weighting parameter, a maximum level of robustness against cascades and traffic congestion can be achieved, effectively ridding the network of such catastrophic dynamics.
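    To make the idea of tuning a weighting parameter concrete, the sketch below assigns each edge a weight that grows with the degrees of its endpoints, routes traffic along weighted shortest paths, and reports the maximum edge betweenness (a proxy for congestion and cascade risk) as the exponent is varied. The specific weight form and the use of edge betweenness as the load proxy are assumptions for illustration; they are not taken verbatim from the record.

```python
# Hedged sketch: scan a degree-based edge-weight exponent and watch the peak load.
# Weight form w_ij = (k_i * k_j)**theta and betweenness-as-load are illustrative choices.
import networkx as nx

G = nx.barabasi_albert_graph(300, 3, seed=1)   # scale-free test network
deg = dict(G.degree())

for theta in [0.0, 0.5, 1.0, 1.5]:
    for u, v in G.edges():
        G[u][v]["weight"] = (deg[u] * deg[v]) ** theta
    # Edge betweenness under weighted shortest-path routing ~ traffic load per link.
    load = nx.edge_betweenness_centrality(G, weight="weight")
    print(f"theta={theta:3.1f}  max normalized edge load = {max(load.values()):.4f}")
```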

  5. Model-Based Predictive Control Scheme for Cost Optimization and Balancing Services for Supermarket Refrigeration Systems

    DEFF Research Database (Denmark)

    Weerts, Hermanus H. M.; Shafiei, Seyed Ehsan; Stoustrup, Jakob

    2014-01-01

    A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate regulatory power services as well as energy cost optimization of such systems in the smart grid. The nonlinear dynamics present in large-scale refrigeration plants challenge the predictive...... control design. It is shown, however, that taking into account the knowledge of the different time scales in the dynamical subsystems makes a linear formulation of a centralized predictive controller possible. A realistic scenario of regulatory power services in the smart grid is considered and formulated....

  6. Optimal ordering and pricing policy for price sensitive stock–dependent demand under progressive payment scheme

    Directory of Open Access Journals (Sweden)

    Nita H. Shah

    2011-01-01

    Full Text Available The terminal condition that the inventory level be zero at the end of the cycle time, adopted by Soni and Shah (2008, 2009), is not viable when demand is stock-dependent. To rectify this assumption, we extend their model to allow for (1) a non-zero ending inventory; (2) limited floor space; (3) a profit maximization model; (4) selling price as a decision variable; and (5) units in inventory deteriorating at a constant rate. An algorithm is developed to search for the optimal decision policy. The working of the proposed model is supported with a numerical example, and sensitivity analysis is carried out to investigate the critical parameters.

  7. An effective coded excitation scheme based on a predistorted FM signal and an optimized digital filter

    DEFF Research Database (Denmark)

    Misaridis, Thanasis; Jensen, Jørgen Arendt

    1999-01-01

    This paper presents a coded excitation imaging system based on a predistorted FM excitation and a digital compression filter designed for medical ultrasonic applications, in order to preserve both axial resolution and contrast. In radars, optimal Chebyshev windows efficiently weight a nearly...... performed with the program Field II. A commercial scanner (B-K Medical 3535) was modified and interfaced to an arbitrary function generator along with an RF power amplifier (Ritec). Hydrophone measurements in water were done to establish excitation voltage and corresponding intensity levels (I-sptp and I......

  8. A curated public database for multilocus sequence typing (MLST) and analysis of Haemophilus parasuis based on an optimized typing scheme.

    Science.gov (United States)

    Mullins, Michael A; Register, Karen B; Brunelle, Brian W; Aragon, Virginia; Galofré-Mila, Nuria; Bayles, Darrell O; Jolley, Keith A

    2013-03-23

    Haemophilus parasuis causes Glässer's disease and pneumonia in swine. Serotyping is often used to classify isolates but requires reagents that are costly to produce and not standardized or widely available. Sequence-based methods, such as multilocus sequence typing (MLST), offer many advantages over serotyping. An MLST scheme was previously proposed for H. parasuis but genome sequence data only recently available reveals that the primers recommended, based on sequences of related bacteria, are not optimal. Here we report modifications to enhance the original method, including primer redesign to eliminate mismatches with H. parasuis sequences and to avoid regions of high sequence heterogeneity, standardization of primer Tm values and identification of universal PCR conditions that result in robust and reproducible amplification of all targets. The modified typing method was applied to a collection of 127 isolates from North and South America, Europe and Asia. An alignment of the concatenated sequences obtained from seven target housekeeping genes identified 278 variable nucleotide sites that define 116 unique sequence types. A comparison of the original and modified methods using a subset of 86 isolates indicates little difference in overall locus diversity, discriminatory power or in the clustering of strains within Neighbor-Joining trees. Data from the optimized MLST were used to populate a newly created and publicly available H. parasuis database. An accompanying database designed to capture provenance and epidemiological information for each isolate was also created. The modified MLST scheme is highly discriminatory but more robust, reproducible and user-friendly than the original. The MLST database provides a novel resource for investigation of H. parasuis outbreaks and for tracking strain evolution.

  9. Middle cerebral artery bifurcation aneurysms: an anatomic classification scheme for planning optimal surgical strategies.

    Science.gov (United States)

    Washington, Chad W; Ju, Tao; Zipfel, Gregory J; Dacey, Ralph G

    2014-03-01

    Changing landscapes in neurosurgical training and increasing use of endovascular therapy have led to decreasing exposure in open cerebrovascular neurosurgery. To ensure the effective transition of medical students into competent practitioners, new training paradigms must be developed. Using principles of pattern recognition, we created a classification scheme for middle cerebral artery (MCA) bifurcation aneurysms that allows their categorization into a small number of shape pattern groups. Angiographic data from patients with MCA aneurysms between 1995 and 2012 were used to construct 3-dimensional models. Models were then analyzed and compared objectively by assessing the relationship between the aneurysm sac, parent vessel, and branch vessels. Aneurysms were then grouped on the basis of the similarity of their shape patterns in such a way that the in-class similarities were maximized while the total number of categories was minimized. For each category, a proposed clip strategy was developed. From the analysis of 61 MCA bifurcation aneurysms, 4 shape pattern categories were created that allowed the classification of 56 aneurysms (91.8%). The number of aneurysms allotted to each shape cluster was 10 (16.4%) in category 1, 24 (39.3%) in category 2, 7 (11.5%) in category 3, and 15 (24.6%) in category 4. This study demonstrates that through the use of anatomic visual cues, MCA bifurcation aneurysms can be grouped into a small number of shape patterns with an associated clip solution. Implementing these principles within current neurosurgery training paradigms can provide a tool that allows more efficient transition from novice to cerebrovascular expert.

  10. Nonuniform sampling schemes of the Brillouin zone for many-electron perturbation-theory calculations in reduced dimensionality

    Science.gov (United States)

    da Jornada, Felipe H.; Qiu, Diana Y.; Louie, Steven G.

    2017-01-01

    First-principles calculations based on many-electron perturbation theory methods, such as the ab initio GW and GW plus Bethe-Salpeter equation (GW-BSE) approach, are reliable ways to predict quasiparticle and optical properties of materials, respectively. However, these methods involve more care in treating the electron-electron interaction and are considerably more computationally demanding when applied to systems with reduced dimensionality, since the electronic confinement leads to a slower convergence of sums over the Brillouin zone due to a much more complicated screening environment that manifests in the "head" and "neck" elements of the dielectric matrix. Here we present two schemes to sample the Brillouin zone for GW and GW-BSE calculations: the nonuniform neck subsampling method and the clustered sampling interpolation method, which can respectively be used for a family of single-particle problems, such as GW calculations, and for problems involving the scattering of two-particle states, such as when solving the BSE. We tested these methods on several few-layer semiconductors and graphene and show that they perform a much more efficient sampling of the Brillouin zone and yield two to three orders of magnitude reduction in the computer time. These two methods can be readily incorporated into several ab initio packages that compute electronic and optical properties through the GW and GW-BSE approaches.

  11. Optimization of the active absorber scheme for the protection of the Dispersion Suppressor

    CERN Document Server

    Magistris, M; Assmann, R; Bracco, C; Brugger, M; Cerutti, F; Ferrari, A; Redaelli, S; Vlachoudis, V

    2009-01-01

    There are two main types of cold elements in IR7: quadrupole and dipole magnets (MQ and MB). According to predictions, these elements lose their superconducting properties if the spurious power densities reach about 1 and 5 mW/cm³, respectively. In order to protect these fragile components, 5 active absorbers (TCLA) were designed and a systematic study was launched to maximize the shielding efficiency of the absorber system for different configurations (locations and orientations). The TCLAs are identical to the secondary collimators (TCS); the only difference is found in the material of the jaw, which was initially set integrally to Cu (instead of C) and later included a small W insertion. This report summarizes the survey of cold element protection through TCLA insertion optimization.

  12. An optimal adaptive time-stepping scheme for solving reaction-diffusion-chemotaxis systems.

    Science.gov (United States)

    Chiu, Chichia; Yu, Jui-Ling

    2007-04-01

    Reaction-diffusion-chemotaxis systems have proven to be fairly accurate mathematical models for many pattern formation problems in chemistry and biology. These systems are important for computer simulations of patterns, parameter estimations as well as analysis of the biological systems. To solve reaction-diffusion-chemotaxis systems, efficient and reliable numerical algorithms are essential for pattern generations. In this paper, a general reaction-diffusion-chemotaxis system is considered for specific numerical issues of pattern simulations. We propose a fully explicit discretization combined with a variable optimal time step strategy for solving the reaction-diffusion-chemotaxis system. Theorems about stability and convergence of the algorithm are given to show that the algorithm is highly stable and efficient. Numerical experiment results on a model problem are given for comparison with other numerical methods. Simulations on two real biological experiments will also be shown.
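    The record's variable time-step idea can be illustrated with a minimal explicit scheme for a one-dimensional reaction-diffusion problem in which the step size is re-chosen every step from a stability-type bound involving the current reaction term. The bound used below is a simple heuristic assumption, not the optimal step-size formula derived in the paper, and the chemotaxis term is omitted for brevity.

```python
# Hedged sketch: explicit Euler for u_t = D u_xx + f(u) with an adaptive time step.
# The step-size rule combines the diffusion stability limit with a bound on the
# local reaction rate; it is a heuristic stand-in for the paper's optimal strategy.
import numpy as np

n, D = 200, 1.0
dx = 1.0 / n
x = np.linspace(0.0, 1.0, n)
u = np.exp(-100 * (x - 0.5) ** 2)              # initial bump

def f(u):                                      # logistic-type reaction term
    return 5.0 * u * (1.0 - u)

def laplacian(u):                              # second difference, zero-flux ends
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]
    return lap

t, t_end, steps = 0.0, 0.05, 0
while t < t_end:
    dt_diff = 0.5 * dx**2 / D                  # explicit diffusion stability limit
    dt_reac = 0.5 / (np.abs(5.0 * (1.0 - 2 * u)).max() + 1e-12)  # ~1/|f'(u)| bound
    dt = min(dt_diff, dt_reac, t_end - t)      # adapt: take the most restrictive
    u = u + dt * (D * laplacian(u) + f(u))
    t += dt
    steps += 1

print(f"integrated to t={t:.3f} in {steps} adaptive steps")
```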

  13. Determination of Optimal Double Sampling Plan using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sampath Sundaram

    2012-03-01

    Full Text Available Designing a double sampling plan requires identification of sample sizes and acceptance numbers. In this paper a genetic algorithm has been designed for the selection of optimal acceptance numbers and sample sizes for the specified producer's risk and consumer's risk. Implementation of the algorithm has been illustrated numerically for different choices of the quantities involved in a double sampling plan.
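    To show the kind of search involved, the sketch below evaluates the operating characteristic of a double sampling plan exactly (via binomial probabilities) and uses a crude random search over (n1, c1, n2, c2), as a stand-in for the record's genetic algorithm, to find a plan meeting assumed producer's and consumer's risk constraints at minimal total inspection. The quality levels, risks, and search settings are all illustrative assumptions.

```python
# Hedged sketch: search for a double sampling plan (n1, c1, n2, c2) meeting
# producer's/consumer's risk constraints.  AQL, LTPD, risks and the crude
# random search are illustrative assumptions, not the record's exact GA.
from scipy.stats import binom
import random

AQL, LTPD, alpha, beta = 0.01, 0.06, 0.05, 0.10   # assumed quality levels and risks

def p_accept(p, n1, c1, n2, c2):
    """Probability of acceptance of a double sampling plan at defect rate p."""
    pa = binom.cdf(c1, n1, p)                      # accept on the first sample
    for d1 in range(c1 + 1, c2 + 1):               # second sample needed
        pa += binom.pmf(d1, n1, p) * binom.cdf(c2 - d1, n2, p)
    return pa

random.seed(0)
best = None
for _ in range(10000):                             # crude random search over plans
    n1 = random.randint(20, 200)
    n2 = random.randint(20, 200)
    c1 = random.randint(0, 5)
    c2 = random.randint(c1 + 1, c1 + 6)
    ok = (p_accept(AQL, n1, c1, n2, c2) >= 1 - alpha and
          p_accept(LTPD, n1, c1, n2, c2) <= beta)
    if ok and (best is None or n1 + n2 < best[0] + best[1]):
        best = (n1, n2, c1, c2)

print("best plan found (n1, n2, c1, c2):", best)
```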

  1. Optimizing analog-to-digital converters for sampling extracellular potentials.

    Science.gov (United States)

    Artan, N Sertac; Xu, Xiaoxiang; Shi, Wei; Chao, H Jonathan

    2012-01-01

    In neural implants, an analog-to-digital converter (ADC) provides the delicate interface between the analog signals generated by neurological processes and the digital signal processor that is tasked to interpret these signals, for instance for epileptic seizure detection or limb control. In this paper, we propose a low-power ADC architecture for neural implants that process extracellular potentials. The proposed architecture uses the spike detector that is readily available on most of these implants in a closed loop with an ADC. The spike detector determines whether the current input signal is part of a spike or part of the background noise, and this decision is used to adaptively set the instantaneous sampling rate of the ADC. The proposed architecture can reduce the power consumption of a traditional ADC by 62% when sampling extracellular potentials, without any significant impact on spike detection accuracy.
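    The closed-loop idea can be emulated in software: a simple threshold spike detector drives the decimation factor, so the waveform is kept at full rate around detected spikes and heavily decimated otherwise. The threshold, rates, and synthetic signal below are assumptions for illustration only, not the circuit described in the record.

```python
# Hedged sketch: spike-detector-driven adaptive sampling of an extracellular trace.
# Threshold, rates and the synthetic signal are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs = 30_000                                    # full ADC rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
signal = rng.normal(0, 1, t.size)              # background noise
for spike_time in rng.uniform(0, 0.99, 40):    # inject 40 synthetic spikes
    idx = int(spike_time * fs)
    signal[idx:idx + 30] += 8 * np.exp(-np.arange(30) / 10.0)

threshold = 4.0                                # spike detector threshold (noise SDs)
slow_decimation = 16                           # keep 1 of 16 samples when no spike
window = int(0.002 * fs)                       # keep 2 ms at full rate after a spike
keep = np.zeros(t.size, dtype=bool)

i = 0
while i < t.size:
    if abs(signal[i]) > threshold:             # spike detected: sample at full rate
        keep[i:i + window] = True
        i += window
    else:                                      # noise: sample at the reduced rate
        keep[i] = True
        i += slow_decimation

print(f"kept {keep.sum()} of {t.size} samples "
      f"({100 * keep.sum() / t.size:.1f}% of full-rate data)")
```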

  2. Optimized nested Markov chain Monte Carlo sampling: theory

    Energy Technology Data Exchange (ETDEWEB)

    Coe, Joshua D [Los Alamos National Laboratory; Shaw, M Sam [Los Alamos National Laboratory; Sewell, Thomas D [U. MISSOURI

    2009-01-01

    Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
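    A minimal version of the nested idea is sketched below for a scalar toy problem: an inner Metropolis chain runs on a cheap reference potential, and the whole sub-chain is then accepted or rejected using only the difference between the full and reference energies at the chain endpoints. The potentials, temperature, and chain lengths are illustrative assumptions rather than the molecular-fluid models of the record; the acceptance rule shown is the standard form of such composite-move corrections.

```python
# Hedged sketch: nested Markov chain Monte Carlo with a cheap reference potential.
# Composite moves built from an inner chain on e_ref are accepted against the
# "full" energy using only endpoint evaluations.  Toy 1-D potentials, assumed.
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0

def e_full(x):                # expensive "full" potential (stand-in)
    return 0.5 * x**2 + 0.3 * np.sin(3 * x)

def e_ref(x):                 # cheap reference potential
    return 0.5 * x**2

def inner_chain(x, n_steps=20, step=0.5):
    """Metropolis sub-chain sampled from exp(-beta * e_ref)."""
    for _ in range(n_steps):
        y = x + rng.normal(0, step)
        if rng.random() < np.exp(-beta * (e_ref(y) - e_ref(x))):
            x = y
    return x

x, samples, accepted = 0.0, [], 0
for _ in range(2000):
    y = inner_chain(x)        # composite move: endpoint of the reference chain
    # Modified Metropolis criterion: only the full/reference energy *differences*
    # at the two endpoints enter, so e_full is evaluated rarely.
    log_acc = -beta * ((e_full(y) - e_ref(y)) - (e_full(x) - e_ref(x)))
    if np.log(rng.random()) < log_acc:
        x, accepted = y, accepted + 1
    samples.append(x)

print(f"composite acceptance rate: {accepted / 2000:.2f}, <x> = {np.mean(samples):.3f}")
```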

  3. Optimization conditions of samples saponification for tocopherol analysis.

    Science.gov (United States)

    Souza, Aloisio Henrique Pereira; Gohara, Aline Kirie; Rodrigues, Ângela Claudia; Ströher, Gisely Luzia; Silva, Danielle Cristina; Visentainer, Jesuí Vergílio; Souza, Nilson Evelázio; Matsushita, Makoto

    2014-09-01

    A 2² full factorial design (two factors at two levels) with duplicates was performed to investigate the influence of the factors agitation time (2 and 4 h) and the percentage of KOH (60% and 80% w/v) in the saponification of samples for the determination of α, β and γ+δ-tocopherols. The study used samples of peanuts (cultivar armadillo), produced and marketed in Maringá, PR. The factors % KOH and agitation time were significant, and an increase in their values contributed negatively to the responses. The interaction effect was not significant for the response δ-tocopherol, and the contribution of this effect to the other responses was positive, but less than 10%. The ANOVA and response surface analysis showed that the most efficient saponification procedure was obtained using a 60% (w/v) solution of KOH and an agitation time of 2 h.

  4. Strategies to optimize monitoring schemes of recreational waters from Salta, Argentina: a multivariate approach

    Science.gov (United States)

    Gutiérrez-Cacciabue, Dolores; Teich, Ingrid; Poma, Hugo Ramiro; Cruz, Mercedes Cecilia; Balzarini, Mónica; Rajal, Verónica Beatriz

    2014-01-01

    Several recreational surface waters in Salta, Argentina, were selected to assess their quality. Seventy percent of the measurements exceeded at least one of the limits established by international legislation, becoming unsuitable for their intended use. To interpret results of complex data, multivariate techniques were applied. Arenales River, due to the variability observed in the data, was divided in two: upstream and downstream, representing low and high pollution sites, respectively; and Cluster Analysis supported that differentiation. Arenales River downstream and Campo Alegre Reservoir were the most different environments and Vaqueros and La Caldera Rivers were the most similar. Canonical Correlation Analysis allowed exploration of correlations between physicochemical and microbiological variables except in both parts of Arenales River, and Principal Component Analysis allowed finding relationships among the 9 measured variables in all aquatic environments. The variables' loadings showed that Arenales River downstream was impacted by industrial and domestic activities, Arenales River upstream was affected by agricultural activities, Campo Alegre Reservoir was disturbed by anthropogenic and ecological effects, and La Caldera and Vaqueros Rivers were influenced by recreational activities. Discriminant Analysis allowed identification of the subgroup of variables responsible for seasonal and spatial variations. Enterococcus, dissolved oxygen, conductivity, E. coli, pH, and fecal coliforms are sufficient to spatially describe the quality of the aquatic environments. Regarding seasonal variations, dissolved oxygen, conductivity, fecal coliforms, and pH can be used to describe water quality during the dry season, while dissolved oxygen, conductivity, total coliforms, E. coli, and Enterococcus during the wet season. Thus, the use of multivariate techniques allowed optimizing monitoring tasks and minimizing the costs involved. PMID:25190636

  5. Application of a derivative-free global optimization algorithm to the derivation of a new time integration scheme for the simulation of incompressible turbulence

    Science.gov (United States)

    Alimohammadi, Shahrouz; Cavaglieri, Daniele; Beyhaghi, Pooriya; Bewley, Thomas R.

    2016-11-01

    This work applies a recently developed derivative-free optimization algorithm to derive a new mixed implicit-explicit (IMEX) time integration scheme for Computational Fluid Dynamics (CFD) simulations. This algorithm allows imposing a specified order of accuracy for the time integration and other important stability properties in the form of nonlinear constraints within the optimization problem. In this procedure, the coefficients of the IMEX scheme should satisfy a set of constraints simultaneously. Therefore, the optimization process, at each iteration, estimates the location of the optimal coefficients using a set of global surrogates, for both the objective and constraint functions, as well as a model of the uncertainty function of these surrogates based on the concept of Delaunay triangulation. This procedure has been proven to converge to the global minimum of the constrained optimization problem provided the constraints and objective functions are twice differentiable. As a result, a new third-order, low-storage IMEX Runge-Kutta time integration scheme is obtained with remarkably fast convergence. Numerical tests are then performed on turbulent channel flow simulations to validate the theoretical order of accuracy and stability properties of the new scheme.

  6. Study on optimal model of hypothetical work injury insurance scheme%理论工伤保险的优化模型研究

    Institute of Scientific and Technical Information of China (English)

    叶驰宇; 董恒进; 吴媛; 段胜楠; 刘小方; 尤华; 胡慧美; 王林浩; 张菁

    2013-01-01

    Objective To explore an optimal model of a hypothetical work injury insurance scheme, which is in line with the wishes of workers, based on the problems in the implementation of work injury insurance in China, and to provide useful information for relevant policy makers. Methods Multistage cluster sampling was used to select subjects: first, 9 small, medium, and large enterprises were selected from three cities (counties) in Zhejiang Province, China, according to economic development, transportation, and cooperation; then, 31 workshops were randomly selected from the 9 enterprises. Face-to-face interviews were conducted by trained interviewers using a pre-designed questionnaire among all workers in the 31 workshops. Results After optimization of the hypothetical work injury insurance scheme, the willingness to participate in the scheme increased from 73.87% to 80.96%; the average willingness to pay for the scheme increased from 2.21% (51.77 yuan) to 2.38% of monthly wage (54.93 yuan); the median willingness to pay for the scheme increased from 1% to 1.2% of monthly wage, but decreased from 35 yuan to 30 yuan. The optimal model of the hypothetical work injury insurance scheme covers all national and provincial statutory occupational diseases and work accidents, as well as consultations about occupational diseases. The scheme is supposed to be implemented nationwide by the National Social Security Department, without regional differences. The premium is borne by the state, enterprises, and individuals, and an independent insurance fund is kept in a lifetime personal account for each insured individual. The premium is not refunded in any event. Compensation for occupational diseases or work accidents is unrelated to the enterprises of the insured workers but related to the length of insurance. The insurance becomes effective one year after enrollment, while it is put into effect immediately after an occupational disease or accident occurs. Conclusion The optimal model of the hypothetical

  7. Optimizing fish sampling for fish - mercury bioaccumulation factors

    Science.gov (United States)

    Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Knightes, Christopher D.; Journey, Celeste A.; Chasar, Lia C.; Brigham, Mark E.; Bradley, Paul M.

    2015-01-01

    Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements.

  8. Sampling technique is important for optimal isolation of pharyngeal gonorrhoea.

    Science.gov (United States)

    Mitchell, M; Rane, V; Fairley, C K; Whiley, D M; Bradshaw, C S; Bissessor, M; Chen, M Y

    2013-11-01

    Culture is insensitive for the detection of pharyngeal gonorrhoea but isolation is pivotal to antimicrobial resistance surveillance. The aim of this study was to ascertain whether recommendations provided to clinicians (doctors and nurses) on pharyngeal swabbing technique could improve gonorrhoea detection rates and to determine which aspects of swabbing technique are important for optimal isolation. This study was undertaken at the Melbourne Sexual Health Centre, Australia. Detection rates among clinicians for pharyngeal gonorrhoea were compared before (June 2006-May 2009) and after (June 2009-June 2012) recommendations on swabbing technique were provided. Associations between detection rates and reported swabbing technique obtained via a clinician questionnaire were examined. The overall yield from testing before and after provision of the recommendations among 28 clinicians was 1.6% (134/8586) and 1.8% (264/15,046) respectively (p=0.17). Significantly higher detection rates were seen following the recommendations among clinicians who reported a change in their swabbing technique in response to the recommendations (2.1% vs. 1.5%; p=0.004), swabbing a larger surface area (2.0% vs. 1.5%; p=0.02), applying more swab pressure (2.5% vs. 1.5%; p<0.001) and a change in the anatomical sites they swabbed (2.2% vs. 1.5%; p=0.002). The predominant change in sites swabbed was an increase in swabbing of the oropharynx: from a median of 0% to 80% of the time. More thorough swabbing improves the isolation of pharyngeal gonorrhoea using culture. Clinicians should receive training to ensure swabbing is performed with sufficient pressure and that it covers an adequate area that includes the oropharynx.

  9. Optimal purification and sensitive quantification of DNA from fecal samples

    DEFF Research Database (Denmark)

    Jensen, Annette Nygaard; Hoorfar, Jeffrey

    2002-01-01

    Application of reliable, rapid and sensitive methods to laboratory diagnosis of zoonotic infections continues to challenge microbiological laboratories. The recovery of DNA from a swine fecal sample and a bacterial culture extracted by a conventional phenol-chloroform extraction method was compared......, the detection range of X-DNA of a spectrophotometric and a fluorometric (PicoGreen) method was compared. The PicoGreen showed a quantification limit of 1 ng/mL, consistent triplicate measurements, and finally a linear relationship between the concentrations of DNA standards and the fluorescence readings (R²...... = 0.99 and R² = 1.00). In conclusion, silica-membrane columns can provide a more convenient and less hazardous alternative to the conventional phenol-based method. The results have implications for further improvement of sensitive amplification methods for laboratory diagnosis....

  10. Optimizing Endoscopic Ultrasound Guided Tissue Sampling of the Pancreas

    Directory of Open Access Journals (Sweden)

    Pujan Kandel

    2016-03-01

    Full Text Available Endoscopic ultrasound is an important innovation in the field of gastrointestinal endoscopy and allows evaluation of many organs in the vicinity of the gastrointestinal tract. Endoscopic ultrasound-fine needle aspiration has been established to be an important tool in the management of pancreaticobiliary disease and is used for screening, staging, biopsy confirmation, and palliation. The accuracy of endoscopic ultrasound-fine needle aspiration is affected by several factors such as different needle sizes and types and fine needle aspiration techniques. Several comparative studies have been published on various techniques, such as the use of a stylet and suction during fine needle aspiration. Although most studies demonstrate high accuracy across techniques and equipment, various fine needle biopsy histology needles have been studied to compare the advantage of fine needle biopsy over fine needle aspiration. Although fine needle biopsy needles provide better tissue architecture and require fewer numbers of passes, there is no significant evidence of the superiority of fine needle biopsy over fine needle aspiration with regard to diagnostic yield and core tissue procurement. The main aim of this article is to review the various methodologies for improving the practice of endoscopic ultrasound-fine needle aspiration and endoscopic ultrasound-fine needle biopsy tissue sampling for cytological and histological analysis.

  11. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extrema points of the metamodels and the minimum points of a density function. In this way, progressively more accurate metamodels are constructed. The validity and effectiveness of the proposed sampling method are examined on typical numerical examples. PMID:25133206
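    A bare-bones version of sequential metamodel refinement is sketched below: fit a radial-basis-function interpolant to the current samples, add the point where the metamodel predicts a minimum, refit, and repeat. The test function, bounds, and the use of SciPy's RBFInterpolator with a multi-start local search are assumptions for illustration; the record's specific infill criteria (extrema plus density-function minima) are not reproduced.

```python
# Hedged sketch: sequential sampling for an RBF metamodel of an expensive function.
# New samples are placed at the current metamodel minimizer; illustrative only.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive(x):                              # stand-in for a costly simulation
    return (x[..., 0] - 1.0) ** 2 + 3.0 * np.sin(2.0 * x[..., 1]) + x[..., 1] ** 2

rng = np.random.default_rng(0)
bounds = np.array([[-3.0, 3.0], [-3.0, 3.0]])
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10, 2))   # initial design
y = expensive(X)

for it in range(15):
    model = RBFInterpolator(X, y, kernel="thin_plate_spline", smoothing=1e-6)
    # Multi-start local search on the cheap metamodel to locate its minimizer.
    best = None
    for x0 in rng.uniform(bounds[:, 0], bounds[:, 1], size=(20, 2)):
        res = minimize(lambda z: model(z[None, :])[0], x0, bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    x_new = best.x
    X = np.vstack([X, x_new])                  # evaluate the true function there
    y = np.append(y, expensive(x_new[None, :]))

print("best sampled value:", y.min(), "at", X[np.argmin(y)])
```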

  12. Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli.

    Science.gov (United States)

    Westfall, Jacob; Kenny, David A; Judd, Charles M

    2014-10-01

    Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.

  13. An accurate metalloprotein-specific scoring function and molecular docking program devised by a dynamic sampling and iteration optimization strategy.

    Science.gov (United States)

    Bai, Fang; Liao, Sha; Gu, Junfeng; Jiang, Hualiang; Wang, Xicheng; Li, Honglin

    2015-04-27

    Metalloproteins, particularly zinc metalloproteins, are promising therapeutic targets, and recent efforts have focused on the identification of potent and selective inhibitors of these proteins. However, the ability of current drug discovery and design technologies, such as molecular docking and molecular dynamics simulations, to probe metal-ligand interactions remains limited because of their complicated coordination geometries and rough treatment in current force fields. Herein we introduce a robust, multiobjective optimization algorithm-driven metalloprotein-specific docking program named MpSDock, which runs on a scheme similar to consensus scoring consisting of a force-field-based scoring function and a knowledge-based scoring function. For this purpose, in this study, an effective knowledge-based zinc metalloprotein-specific scoring function based on the inverse Boltzmann law was designed and optimized using a dynamic sampling and iteration optimization strategy. This optimization strategy can dynamically sample and regenerate decoy poses used in each iteration step of refining the scoring function, thus dramatically improving both the effectiveness of the exploration of the binding conformational space and the sensitivity of the ranking of the native binding poses. To validate the zinc metalloprotein-specific scoring function and its special built-in docking program, denoted MpSDockZn, an extensive comparison was performed against six universal, popular docking programs: Glide XP mode, Glide SP mode, Gold, AutoDock, AutoDock4Zn, and EADock DSS. The zinc metalloprotein-specific knowledge-based scoring function exhibited prominent performance in accurately describing the geometries and interactions of the coordination bonds between the zinc ions and chelating agents of the ligands. In addition, MpSDockZn had a competitive ability to sample and identify native binding poses with a higher success rate than the other six docking programs.

  14. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    Science.gov (United States)

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
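    For intuition about the trade-off being optimized, the snippet below evaluates the variance of a treatment-effect estimate in a cluster randomized design for different splits of a fixed budget between more clusters and more persons per cluster. The cost figures, ICC, and the simple design-effect variance used here are textbook assumptions, not the bivariate cost-effectiveness model of the record.

```python
# Hedged sketch: budget-constrained trade-off between number of clusters (k) and
# cluster size (m) in a cluster randomized trial.  Uses the textbook design-effect
# variance (1 + (m - 1) * rho) / (k * m); costs and ICC are illustrative assumptions.
budget = 50_000.0      # total budget per treatment arm (assumed)
cost_cluster = 500.0   # cost of recruiting one cluster (assumed)
cost_person = 50.0     # cost of measuring one person (assumed)
rho = 0.05             # intra-cluster correlation of the outcome (assumed)

best = None
for m in range(2, 101):                                   # persons per cluster
    k = int(budget // (cost_cluster + m * cost_person))   # clusters affordable
    if k < 2:
        continue
    variance = (1 + (m - 1) * rho) / (k * m)              # relative variance of effect
    if best is None or variance < best[0]:
        best = (variance, k, m)

variance, k, m = best
print(f"optimal design: {k} clusters of {m} persons (relative variance {variance:.5f})")
```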

  15. Autonomous Optimal Coordination Scheme in Protection System of Power Distribution Network by Using Multi-Agent Concept

    Institute of Scientific and Technical Information of China (English)

    LEE Seung-Jae; KIM Tae-Wan; LEE Gi-Young

    2008-01-01

    A protection system using a multi-agent concept for power distribution networks is proposed. Every digital overcurrent relay (OCR) is developed as an agent by adding its own intelligence, self-tuning and communication ability. The main advantage of the multi-agent concept is that a group of agents work together to achieve a global goal which is beyond the ability of each individual agent. In order to cope with frequent changes in the network operation condition and faults, an OCR agent, proposed in this paper, is able to detect a fault or a change in the network and find its optimal parameters for protection in an autonomous manner, considering information of the whole network obtained by communication with other agents. Through this kind of coordination and information exchange, not only a local but also a global protective scheme is completed. Simulations in a simple distribution network show the effectiveness of the proposed protection system.

  16. Optimal designs of the median run length based double sampling X chart for minimizing the average sample size.

    Directory of Open Access Journals (Sweden)

    Wei Lin Teoh

    Full Text Available Designs of the double sampling (DS) X chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shifts, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and a fair representation of the central tendency, especially for the rightly skewed run length distribution. Since the DS X chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on the average while maintaining the same detection speed as the two former charts. An example involving the added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed.

  17. SamplingStrata: An R Package for the Optimization of Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Giulio Barcaroli

    2014-11-01

    Full Text Available When designing a sampling survey, usually constraints are set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata, by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable by the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of the partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, allows to determine the best stratification for a population frame, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.

  18. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits, ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  19. Optimization of techniques for multiple platform testing in small, precious samples such as human chorionic villus sampling.

    Science.gov (United States)

    Pisarska, Margareta D; Akhlaghpour, Marzieh; Lee, Bora; Barlow, Gillian M; Xu, Ning; Wang, Erica T; Mackey, Aaron J; Farber, Charles R; Rich, Stephen S; Rotter, Jerome I; Chen, Yii-der I; Goodarzi, Mark O; Guller, Seth; Williams, John

    2016-11-01

    Multiple testing to understand global changes in gene expression based on genetic and epigenetic modifications is evolving. Chorionic villi, obtained for prenatal testing, is limited, but can be used to understand ongoing human pregnancies. However, optimal storage, processing and utilization of CVS for multiple platform testing have not been established. Leftover CVS samples were flash-frozen or preserved in RNAlater. Modifications to standard isolation kits were performed to isolate quality DNA and RNA from samples as small as 2-5 mg. RNAlater samples had significantly higher RNA yields and quality and were successfully used in microarray and RNA-sequencing (RNA-seq). RNA-seq libraries generated using 200 versus 800 ng of RNA showed similar biological coefficients of variation. RNAlater samples had lower DNA yields and quality, which improved by heating the elution buffer to 70 °C. Purification of DNA was not necessary for bisulfite-conversion and genome-wide methylation profiling. CVS cells were propagated and continue to express genes found in freshly isolated chorionic villi. CVS samples preserved in RNAlater are superior. Our optimized techniques provide specimens for genetic, epigenetic and gene expression studies from a single small sample which can be used to develop diagnostics and treatments using a systems biology approach in the prenatal period. © 2016 John Wiley & Sons, Ltd.

  20. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, M; Li, R; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Ye, Y [Stanford Univ, Management Science and Engineering, Stanford, CA (United States); Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles along the gradient. The algorithm then continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to optimize simultaneously the large collection of station parameters and significantly improves

  1. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  2. Radiobiological modeling analysis of the optimal fraction scheme in patients with peripheral non-small cell lung cancer undergoing stereotactic body radiotherapy

    OpenAIRE

    Bao-Tian Huang; Jia-Yang Lu; Pei-Xian Lin; Jian-Zhou Chen; De-Rui Li; Chuang-Zhen Chen

    2015-01-01

    This study aimed to determine the optimal fraction scheme (FS) in patients with small peripheral non-small cell lung cancer (NSCLC) undergoing stereotactic body radiotherapy (SBRT) with the 4 × 12 Gy scheme as the reference. CT simulation data for sixteen patients diagnosed with primary NSCLC or metastatic tumor with a single peripheral lesion ≤3 cm were used in this study. Volumetric modulated arc therapy (VMAT) plans were designed based on ten different FS of 1 × 25 Gy, 1 × 30 Gy, 1 × 34 Gy...

  3. Optimized 3D-NMR sampling for resonance assignment of partially unfolded proteins.

    Science.gov (United States)

    Pannetier, Nicolas; Houben, Klaartje; Blanchard, Laurence; Marion, Dominique

    2007-05-01

    Resonance assignment of NMR spectra of unstructured proteins is made difficult by severe overlap due to the lack of secondary structure. Fortunately, this drawback is partially counterbalanced by the narrow line-widths due to the internal flexibility. Alternate sampling schemes can be used to achieve better resolution in less experimental time. Deterministic schemes (such as radial sampling) suffer however from the presence of systematic artifacts. Random acquisition patterns can alleviate this problem by randomizing the artifacts. We show in this communication that quantitative well-resolved spectra can be obtained, provided that the data points are properly weighted before FT. These weights can be evaluated using the concept of Voronoi cells associated with the data points. The introduced artifacts do not affect the direct surrounding of the peaks and thus do not alter the amplitude and frequency of the signals. This procedure is illustrated on a 60-residue viral protein, which lacks any persistent secondary structure and thus exhibits major signal overlap.
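    The weighting idea can be illustrated in one dimension, where the "Voronoi cell" of a non-uniformly sampled time point is simply half the gap to each neighbor; weighting the data by these cell sizes before the Fourier transform compensates for the uneven sampling density. The one-dimensional reduction and the test signal are simplifying assumptions; the record applies the same concept to randomly sampled multidimensional NMR acquisitions.

```python
# Hedged sketch: Voronoi-cell weighting of non-uniformly sampled data before a
# discrete Fourier transform.  One-dimensional illustration with a synthetic signal.
import numpy as np

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 1.0, 200))        # random (non-uniform) sample times
signal = np.cos(2 * np.pi * 25 * t) * np.exp(-2 * t)   # decaying test "FID"

# 1-D Voronoi cell of each point: half the distance to each neighbouring point.
edges = np.concatenate(([t[0]], 0.5 * (t[1:] + t[:-1]), [t[-1]]))
weights = np.diff(edges)                       # cell widths sum to the covered span

freqs = np.arange(0.0, 100.0, 0.5)
# Weighted non-uniform DFT: each sample contributes proportionally to its cell.
spectrum = np.array([np.sum(weights * signal * np.exp(-2j * np.pi * f * t))
                     for f in freqs])

peak = freqs[np.argmax(np.abs(spectrum))]
print(f"strongest spectral component found at ~{peak:.1f} Hz (true value: 25 Hz)")
```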

  4. A novel infinite-time optimal tracking control scheme for a class of discrete-time nonlinear systems via the greedy HDP iteration algorithm.

    Science.gov (United States)

    Zhang, Huaguang; Wei, Qinglai; Luo, Yanhong

    2008-08-01

    In this paper, we aim to solve the infinite-time optimal tracking control problem for a class of discrete-time nonlinear systems using the greedy heuristic dynamic programming (HDP) iteration algorithm. A new type of performance index is defined because the existing performance indexes are very difficult in solving this kind of tracking problem, if not impossible. Via system transformation, the optimal tracking problem is transformed into an optimal regulation problem, and then, the greedy HDP iteration algorithm is introduced to deal with the regulation problem with rigorous convergence analysis. Three neural networks are used to approximate the performance index, compute the optimal control policy, and model the nonlinear system for facilitating the implementation of the greedy HDP iteration algorithm. An example is given to demonstrate the validity of the proposed optimal tracking control scheme.

  5. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    Science.gov (United States)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in samples sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
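    The conclusions about stratified versus simple random sampling can be made concrete with the standard Neyman (optimal) allocation, which distributes a fixed total sample across strata in proportion to stratum size times stratum standard deviation. The stratum sizes and standard deviations below are illustrative assumptions, not values from the 1977 study.

```python
# Hedged sketch: Neyman (optimal) allocation of n samples across soil-moisture strata.
# Stratum sizes and standard deviations are illustrative assumptions.
import numpy as np

strata = ["dry upland", "mid slope", "wet lowland"]
N_h = np.array([400.0, 350.0, 250.0])          # stratum sizes (e.g., grid cells)
S_h = np.array([2.0, 3.5, 6.0])                # within-stratum SD of soil moisture (%)
n_total = 30                                   # total samples the field crew can take

# Neyman allocation: n_h proportional to N_h * S_h.
n_h = np.maximum(1, np.round(n_total * (N_h * S_h) / np.sum(N_h * S_h))).astype(int)

# Variance of the stratified estimate of the field mean (finite-population factor ignored).
W_h = N_h / N_h.sum()
var_stratified = np.sum(W_h**2 * S_h**2 / n_h)

for name, n in zip(strata, n_h):
    print(f"{name:12s}: {n} samples")
print(f"variance of the stratified mean estimate: {var_stratified:.3f}")
```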

  6. Optimal and maximin sample sizes for multicentre cost-effectiveness trials.

    Science.gov (United States)

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2015-10-01

    This paper deals with the optimal sample sizes for a multicentre trial in which the cost-effectiveness of two treatments in terms of net monetary benefit is studied. A bivariate random-effects model, with the treatment-by-centre interaction effect being random and the main effect of centres fixed or random, is assumed to describe both costs and effects. The optimal sample sizes concern the number of centres and the number of individuals per centre in each of the treatment conditions. These numbers maximize the efficiency or power for given research costs or minimize the research costs at a desired level of efficiency or power. Information on model parameters and sampling costs are required to calculate these optimal sample sizes. In case of limited information on relevant model parameters, sample size formulas are derived for so-called maximin sample sizes which guarantee a power level at the lowest study costs. Four different maximin sample sizes are derived based on the signs of the lower bounds of two model parameters, with one case being the worst compared with the others. We numerically evaluate the efficiency of this worst case instead of using the others. Finally, an expression is derived for calculating optimal and maximin sample sizes that yield sufficient power to test the cost-effectiveness of two treatments. © The Author(s) 2015.

  7. An Approximate Optimal Relationship in the Sampling Plan with Inspection Errors

    Institute of Scientific and Technical Information of China (English)

    YANG Ji-ping; QIU Wan-hua; Martin NEWBY

    2001-01-01

    The paper presents and proves an approximate optimal relationship between the sample size n and acceptance number c in sampling plans under imperfect inspection which minimize the Bayesian risk. The conclusion generalizes the result obtained by A. Hald on the assumption that the inspection is perfect.

  8. A normative inference approach for optimal sample sizes in decisions from experience.

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    "Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.

  9. A normative inference approach for optimal sample sizes in decisions from experience

    Directory of Open Access Journals (Sweden)

    Dirk eOstwald

    2015-09-01

    Full Text Available Decisions from experience (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the sampling paradigm, which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which distribution they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the optimal sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical manuscript, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for decisions from experience. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.

  10. An improved adaptive sampling and experiment design method for aerodynamic optimization

    Institute of Scientific and Technical Information of China (English)

    Huang Jiangtao; Gao Zhenghong; Zhou Zhu; Zhao Ke

    2015-01-01

    The experimental design method is key to constructing a highly reliable surrogate model for numerical optimization in large-scale projects. Within the method, the experimental design criterion directly affects the accuracy of the surrogate model and the optimization efficiency. To address the shortcomings of traditional experimental design, an improved adaptive sampling method is proposed in this paper. The surrogate model is first constructed from basic sparse samples. Then the supplementary sampling positions are detected according to specified criteria, which introduce energy-function and curvature sampling criteria based on a radial basis function (RBF) network. The sampling detection criteria consider both the uniformity of the sample distribution and the description of the hypersurface curvature, so as to significantly improve the prediction accuracy of the surrogate model with far fewer samples. For a surrogate model constructed with sparse samples, sample uniformity is an important factor for interpolation accuracy in the initial stage of adaptive sampling and surrogate model training. As uniformity improves, the curvature description of the objective function surface gradually becomes more important. In consideration of these issues, a crowdedness enhancement function and a root mean square error (RMSE) feedback function are introduced in the criterion expression. Thus, a new sampling method called RMSE and crowdedness enhance (RCE) adaptive sampling is established. The validity of the RCE adaptive sampling method is studied first on typical test functions and then on an airfoil/wing aerodynamic optimization design problem with a high-dimensional design space. The results show that the RCE adaptive sampling method not only reduces the required number of samples, but also effectively improves the prediction accuracy of the surrogate model, and thus has broad prospects for application.
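
    The RCE criterion itself is not reproduced in the abstract, so the following is only a minimal sketch of the adaptive-sampling loop it builds on: fit an RBF surrogate to sparse samples, then add the candidate point that balances distance to existing samples (uniformity) against a curvature/error proxy. The test function, weights and proxy terms are assumptions, not the published criterion.

```python
# Minimal sketch of an adaptive sampling loop over an RBF surrogate.
# The scoring rule (distance-to-samples plus a surrogate-gradient proxy) is a
# simplified stand-in for the paper's RCE criterion, not the criterion itself.
import numpy as np

def objective(x):                      # assumed 1-D test function
    return np.sin(3.0 * x) + 0.5 * x**2

def fit_rbf(X, y, eps=2.0):
    """Fit a Gaussian RBF interpolant; returns a callable surrogate."""
    K = np.exp(-(eps * (X[:, None] - X[None, :]))**2)
    w = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)
    return lambda x: np.exp(-(eps * (np.atleast_1d(x)[:, None] - X[None, :]))**2) @ w

X = np.array([-2.0, 0.0, 2.0])         # basic sparse samples
y = objective(X)
candidates = np.linspace(-2, 2, 201)

for _ in range(10):                    # supplementary sampling iterations
    s = fit_rbf(X, y)
    d_min = np.min(np.abs(candidates[:, None] - X[None, :]), axis=1)   # uniformity term
    grad = np.abs(np.gradient(s(candidates), candidates))              # curvature proxy
    score = d_min / d_min.max() + grad / grad.max()
    score = np.where(d_min > 1e-9, score, -np.inf)   # never re-pick an existing sample
    x_new = candidates[np.argmax(score)]
    X, y = np.append(X, x_new), np.append(y, objective(x_new))

print(np.sort(X))                      # samples concentrate where the surface bends
```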

  11. Comparison of DSM-IV diagnostic criteria versus the Broad Categories for the Diagnosis of Eating Disorders scheme in a Japanese sample.

    Science.gov (United States)

    Nakai, Yoshikatsu; Nin, Kazuko; Teramukai, Satoshi; Taniguchi, Ataru; Fukushima, Mitsuo; Wonderlich, Stephen A

    2013-08-01

    The purposes of this study were to compare DSM-IV diagnostic criteria and the Broad Categories for the Diagnosis of Eating Disorders (BCD-ED) scheme in terms of the number of cases of Eating Disorder Not Otherwise Specified (EDNOS) and to test which diagnostic tool better captures the variance of psychiatric symptoms in a Japanese sample. One thousand and twenty-nine women with an eating disorder (ED) participated in this study. Assessment methods included structured clinical interviews and administration of the Eating Attitudes Test and the Eating Disorder Inventory. The BCD-ED scheme dramatically decreased the proportion of DSM-IV EDNOS from 45.1% to 1.5%. However, the categorization of patients with the BCD-ED scheme was less able to capture the variance in psychopathology scales than the DSM-IV, suggesting that the BCD-ED scheme may differentiate ED groups less effectively than the DSM-IV. These results suggest that the BCD-ED scheme may have the potential to eliminate the use of DSM-IV EDNOS, but it may have problems capturing the variance of psychiatric symptoms.

  12. A Velocity-Level Bi-Criteria Optimization Scheme for Coordinated Path Tracking of Dual Robot Manipulators Using Recurrent Neural Network.

    Science.gov (United States)

    Xiao, Lin; Zhang, Yongsheng; Liao, Bolin; Zhang, Zhijun; Ding, Lei; Jin, Long

    2017-01-01

    A dual-robot system is a robotic device composed of two robot arms. To eliminate joint-angle drift and prevent the occurrence of high joint velocities, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and repetitive motion), is proposed and investigated for coordinated path tracking by dual robot manipulators. Specifically, to realize coordinated path tracking, two subschemes are first presented for the left and right robot manipulators. These two subschemes are then reformulated as two general quadratic programs (QPs), which can be combined into one unified QP. A recurrent neural network (RNN) is presented to solve the unified QP problem effectively. Finally, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.
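
    The abstract does not give the exact QP, so the sketch below only illustrates the underlying idea of a velocity-level bi-criteria program for one arm: minimize a weighted sum of the squared joint-velocity norm and a joint-drift (repetitive-motion) term subject to the end-effector velocity constraint, solved here by a direct KKT system rather than the paper's recurrent neural network. The Jacobian, weights, and joint values are placeholders.

```python
# Minimal sketch of a velocity-level bi-criteria QP for one manipulator:
#   minimize  0.5*||qdot||^2 + 0.5*lam*||qdot + k*(q - q0)||^2   (velocity norm + drift)
#   subject to J @ qdot = v                                      (path-tracking constraint)
# Solved with a dense KKT system instead of the paper's recurrent neural network.
import numpy as np

def bi_criteria_qdot(J, v, q, q0, lam=1.0, k=2.0):
    n = J.shape[1]
    H = (1.0 + lam) * np.eye(n)              # Hessian of the combined quadratic cost
    g = lam * k * (q - q0)                   # gradient term from the drift criterion
    m = J.shape[0]
    KKT = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = np.concatenate([-g, v])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]                           # joint velocities (multipliers discarded)

# Hypothetical 3-link planar arm: J is 2x3, v is the desired end-effector velocity.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 0.8, 0.4]])
qdot = bi_criteria_qdot(J, v=np.array([0.1, -0.05]),
                        q=np.array([0.3, -0.2, 0.1]), q0=np.zeros(3))
print(qdot, J @ qdot)                        # J @ qdot reproduces the commanded velocity
```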

  13. Confronting the ironies of optimal design: Nonoptimal sampling designs with desirable properties

    Science.gov (United States)

    Casman, Elizabeth A.; Naiman, Daniel Q.; Chamberlin, Charles E.

    1988-03-01

    Two sampling designs are developed for the improvement of parameter estimate precision in nonlinear regression, one for when there is uncertainty in the parameter values, and the other for when the correct model formulation is unknown. Although based on concepts of optimal design theory, the design criteria emphasize efficiency rather than optimality. The development is illustrated using a Streeter-Phelps dissolved oxygen-biochemical oxygen demand model.

  14. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Full Text Available Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which also depends on the robot's position in Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process, modeled in terms of its components: product, process and resource; and by automatically configuring a sampling-based motion planning problem and the transition-based rapidly-exploring random tree algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, revealing its functionality and outlining future potential for optimal motion generation in robotic machining processes.

  15. OPTIMIZATION OF A COMBINED HYBRID SCHEME WITH A 5-PARAMETER STRESS MODE

    Institute of Scientific and Technical Information of China (English)

    聂玉峰; 尹云辉; 周天孝

    2004-01-01

    Paper [1] proposed that zero energy-error can be used to optimize combined hybrid finite element methods by adjusting the combination factor. In this paper, this optimization method is applied to the plane 4-node quadrilateral combined hybrid scheme CH(0-1), which has 5 stress parameters and the energy-compatibility property. Based on the optimization results, the analysis of the components of the element stiffness matrix, and the conclusions about numerical stability and convergence, this paper deduces that the optimal form of the CH(0-1) element is obtained by setting the combination factor to 1, i.e., basing the element on the Hellinger-Reissner variational principle alone and using bilinear compatible displacement interpolation instead of the enriched-strain Wilson displacement interpolation, owing to the orthogonality of the 5-parameter stress mode with both the strain derived from the Wilson bubble displacements and the weak force balance.

  16. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  17. An optimal scheme for numerical evaluation of Eshelby tensors and its implementation in a MATLAB package for simulating the motion of viscous ellipsoids in slow flows

    Science.gov (United States)

    Qu, Mengmeng; Jiang, Dazhi; Lu, Lucy X.

    2016-11-01

    To address the multiscale deformation and fabric development in Earth's ductile lithosphere, micromechanics-based self-consistent homogenization is commonly used to obtain macroscale rheological properties from properties of constituent elements. The homogenization is heavily based on the solution of an Eshelby viscous inclusion in a linear viscous medium and the extension of the solution to nonlinear viscous materials. The homogenization requires repeated numerical evaluation of Eshelby tensors for constituent elements and becomes ever more computationally challenging as the elements are deformed to more elongate or flattened shapes. In this paper, we develop an optimal scheme for evaluating Eshelby tensors, using a combination of a product Gaussian quadrature and the Lebedev quadrature. We first establish, through numerical experiments, an empirical relationship between the inclusion shape and the computational time it takes to evaluate its Eshelby tensors. We then use the relationship to develop an optimal scheme for selecting the most efficient quadrature to obtain the Eshelby tensors. The optimal scheme is applicable to general homogenizations. In this paper, it is implemented in a MATLAB package for investigating the evolution of solitary rigid or deformable inclusions and the development of shape preferred orientations in multi-inclusion systems during deformation. The MATLAB package, upgrading an earlier effort written in MathCad, can be downloaded online.

  18. The Role of Vertex Consistency in Sampling-based Algorithms for Optimal Motion Planning

    CERN Document Server

    Arslan, Oktay

    2012-01-01

    Motion planning problems have been studied by both the robotics and the controls research communities for a long time, and many algorithms have been developed for their solution. Among them, incremental sampling-based motion planning algorithms, such as the Rapidly-exploring Random Trees (RRTs), and the Probabilistic Road Maps (PRMs) have become very popular recently, owing to their implementation simplicity and their advantages in handling high-dimensional problems. Although these algorithms work very well in practice, the quality of the computed solution is often not good, i.e., the solution can be far from the optimal one. A recent variation of RRT, namely the RRT* algorithm, bypasses this drawback of the traditional RRT algorithm, by ensuring asymptotic optimality as the number of samples tends to infinity. Nonetheless, the convergence rate to the optimal solution may still be slow. This paper presents a new incremental sampling-based motion planning algorithm based on Rapidly-exploring Random Graphs (RRG...

  19. Development zoning scheme of the territory of the projected national park "Orilskyi" in order to optimize the structure of natureusing

    Directory of Open Access Journals (Sweden)

    Zelens'ka L.I.

    2009-08-01

    Full Text Available A planning scheme is presented for the land reserved for the creation of the national park "Orilskyi" within the Shulhivskoyi village council, Petrikov district, Dnipropetrovsk region, based on a functional concept of territory planning. Subzones with protected, recreational and economic regimes are delineated. The floral-faunistic value of the protected territory and the types of rationalization of nature use are substantiated. The results have been introduced into local government institutions for use in the area planning scheme.

  20. OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Maja Vrkljan

    2004-12-01

    Full Text Available The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. An analytical FAAS method for determining the cobalt, chromium, copper, nickel, lead and zinc content of a gabbro sample and the geochemical standard AGV-1 has been applied for verification. Dissolution in mixtures of various inorganic acids has been tested, as well as the Na2CO3 fusion technique. The results obtained by the different methods have been compared, and dissolution in the mixture of HNO3 + HF has been recommended as optimal.

  1. Optimization of Proteomic Sample Preparation Procedures for Comprehensive Protein Characterization of Pathogenic Systems

    Science.gov (United States)

    Mottaz-Brewer, Heather M.; Norbeck, Angela D.; Adkins, Joshua N.; Manes, Nathan P.; Ansong, Charles; Shi, Liang; Rikihisa, Yasuko; Kikuchi, Takane; Wong, Scott W.; Estep, Ryan D.; Heffron, Fred; Pasa-Tolic, Ljiljana; Smith, Richard D.

    2008-01-01

    Mass spectrometry-based proteomics is a powerful analytical tool for investigating pathogens and their interactions within a host. The sensitivity of such analyses provides broad proteome characterization, but the sample-handling procedures must first be optimized to ensure compatibility with the technique and to maximize the dynamic range of detection. The decision-making process for determining optimal growth conditions, preparation methods, sample analysis methods, and data analysis techniques in our laboratory is discussed herein with consideration of the balance in sensitivity, specificity, and biomass losses during analysis of host-pathogen systems. PMID:19183792

  2. Sample Subset Optimization Techniques for Imbalanced and Ensemble Learning Problems in Bioinformatics Applications.

    Science.gov (United States)

    Yang, Pengyi; Yoo, Paul D; Fernando, Juanita; Zhou, Bing B; Zhang, Zili; Zomaya, Albert Y

    2014-03-01

    Data sampling is a widely used technique in a broad range of machine learning problems. Traditional sampling approaches generally rely on random resampling from a given dataset. However, these approaches do not take into consideration additional information, such as sample quality and usefulness. We recently proposed a data sampling technique, called sample subset optimization (SSO). The SSO technique relies on a cross-validation procedure for identifying and selecting the most useful samples as subsets. In this paper, we describe the application of SSO techniques to imbalanced and ensemble learning problems, respectively. For imbalanced learning, the SSO technique is employed as an under-sampling technique for identifying a subset of highly discriminative samples in the majority class. In ensemble learning, the SSO technique is utilized as a generic ensemble technique where multiple optimized subsets of samples from each class are selected for building an ensemble classifier. We demonstrate the utilities and advantages of the proposed techniques on a variety of bioinformatics applications where class imbalance, small sample size, and noisy data are prevalent.

  3. OptisampleTM: Open web-based application to optimize sampling strategies for active surveillance activities at the herd level illustrated using Porcine Respiratory Reproductive Syndrome (PRRS).

    Science.gov (United States)

    Alba, Anna; Morrison, Robert E; Cheeran, Ann; Rovira, Albert; Alvarez, Julio; Perez, Andres M

    2017-01-01

    Porcine reproductive and respiratory syndrome virus (PRRSv) infection causes a devastating economic impact to the swine industry. Active surveillance is routinely conducted in many swine herds to demonstrate freedom from PRRSv infection. The design of efficient active surveillance sampling schemes is challenging because optimum surveillance strategies may differ depending on infection status, herd structure, management, or resources for conducting sampling. Here, we present an open web-based application, named 'OptisampleTM', designed to optimize herd sampling strategies to substantiate freedom of infection considering also costs of testing. In addition to herd size, expected prevalence, test sensitivity, and desired level of confidence, the model takes into account the presumed risk of pathogen introduction between samples, the structure of the herd, and the process to select the samples over time. We illustrate the functionality and capacity of 'OptisampleTM' through its application to active surveillance of PRRSv in hypothetical swine herds under disparate epidemiological situations. Diverse sampling schemes were simulated and compared for each herd to identify effective strategies at low costs. The model results show that to demonstrate freedom from disease, it is important to consider both the epidemiological situation of the herd and the sample selected. The approach illustrated here for PRRSv may be easily extended to other animal disease surveillance systems using the web-based application available at http://stemma.ahc.umn.edu/optisample.

  4. XAFSmass: a program for calculating the optimal mass of XAFS samples

    Science.gov (United States)

    Klementiev, K.; Chernikov, R.

    2016-05-01

    We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements as compared to the old Windows based program XAFSmass: 1) it is truly platform independent, as provided by Python language, 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.

  5. Optimization of Bartonella henselae multilocus sequence typing scheme using single-nucleotide polymorphism analysis of SOLiD sequence data

    Institute of Scientific and Technical Information of China (English)

    ZHAO Fan; Gemma Chaloner; Alistair Darby; SONG Xiu-ping; LI Dong-mei; Richard Birtles; LIU Qi-yong

    2012-01-01

    Background Multi-locus sequence typing (MLST) is widely used to explore the population structure of numerous bacterial pathogens. However, for genotypically restricted pathogens, the sensitivity of MLST is limited by a paucity of variation within the selected loci. For Bartonella henselae (B. henselae), although the currently used MLST scheme has proven useful in defining the overall population structure of the species, its reliability for the accurate delineation of closely related sequence types, between which allelic variation is usually limited to at most one or two nucleotide polymorphisms, is limited. Exploitation of high-throughput sequencing data allows a more informed selection of MLST loci and thus, potentially, a means of enhancing the sensitivity of the schemes they comprise. Methods We carried out SOLiD resequencing on 12 representative B. henselae isolates and explored these data using single nucleotide polymorphism (SNP) analysis. We determined the number and distribution of SNPs in the genes targeted by the established MLST scheme and modified the position of loci within these genes to capture as much genetic variation as possible. Results Using genome-wide SNP data, we found SNPs distributed within each open reading frame (ORF) containing the MLST loci that were not represented by the established B. henselae MLST scheme. We then modified the position of loci in the MLST scheme to better reflect the polymorphism in each ORF as a whole. The use of amended loci in this scheme allowed previously indistinguishable ST1 strains to be differentiated. However, the diversity of B. henselae observed in China remained low. Conclusions Our study demonstrates the use of SNP analysis to facilitate the selection of MLST loci and to augment the currently described scheme for B. henselae. The diversity among B. henselae strains in China is markedly less than that observed in B. henselae populations elsewhere in the world.

  6. Implementation of suitable flow injection/sequential-sample separation/preconcentration schemes for determination of trace metal concentrations using detection by electrothermal atomic absorption spectrometry and inductively coupled plasma mass spectrometry

    DEFF Research Database (Denmark)

    Hansen, Elo Harald; Wang, Jianhua

    2002-01-01

    Various preconditioning procedures encompassing appropriate separation/preconcentration schemes in order to obtain optimal sensitivity and selectivity characteristics when using electrothermal atomic absorption spectrometry (ETAAS) and inductively coupled plasma mass spectrometry (ICPMS) are pres...

  7. Algorithms for integration of stochastic differential equations using parallel optimized sampling in the Stratonovich calculus

    Science.gov (United States)

    Kiesewetter, Simon; Drummond, Peter D.

    2017-03-01

    A variance reduction method for stochastic integration of Fokker-Planck equations is derived. This unifies the cumulant hierarchy and stochastic equation approaches to obtaining moments, giving a performance superior to either. We show that the brute force method of reducing sampling error by just using more trajectories in a sampled stochastic equation is not the best approach. The alternative of using a hierarchy of moment equations is also not optimal, as it may converge to erroneous answers. Instead, through Bayesian conditioning of the stochastic noise on the requirement that moment equations are satisfied, we obtain improved results with reduced sampling errors for a given number of stochastic trajectories. The method used here converges faster in time-step than Ito-Euler algorithms. This parallel optimized sampling (POS) algorithm is illustrated by several examples, including a bistable nonlinear oscillator case where moment hierarchies fail to converge.

  8. Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater.

    Science.gov (United States)

    Zahid, Erum; Hussain, Ijaz; Spöck, Gunter; Faisal, Muhammad; Shabbir, Javid; M AbdEl-Salam, Nasser; Hussain, Tajammal

    Sodium is an integral part of water, and an excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, the spatial distribution of sodium concentration in drinking water is modeled, and optimized sampling designs for selecting sampling locations are calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram models (i.e., exponential, Gaussian, spherical and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides the minimum mean universal kriging variance for both adding and deleting locations during sampling design.
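
    As a rough illustration of the spatial simulated annealing step mentioned above, the sketch below perturbs one sampling location at a time and accepts moves by the Metropolis rule. The design criterion used here (mean distance from a prediction grid to its nearest sample, a crude stand-in for mean kriging variance) and the cooling schedule are assumptions, not the study's exact objective.

```python
# Minimal sketch of spatial simulated annealing (SSA) for sampling-design optimization.
# Criterion: mean distance from prediction-grid points to their nearest sampling location,
# a crude proxy for mean kriging variance; the schedule and bounds are assumptions.
import numpy as np

rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25)), -1).reshape(-1, 2)

def criterion(design):
    d = np.linalg.norm(grid[:, None, :] - design[None, :, :], axis=2)
    return d.min(axis=1).mean()

design = rng.random((15, 2))                   # 15 initial sampling locations in a unit square
current, temp = criterion(design), 0.1
for it in range(2000):
    cand = design.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.normal(0.0, 0.05, 2), 0.0, 1.0)   # move one location
    c = criterion(cand)
    if c < current or rng.random() < np.exp((current - c) / temp):    # Metropolis acceptance
        design, current = cand, c
    temp *= 0.999                                                     # geometric cooling
print(round(current, 4))                       # well spread-out designs give smaller values
```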

  9. An Optimal Spatial Sampling Design for Intra-Urban Population Exposure Assessment.

    Science.gov (United States)

    Kumar, Naresh

    2009-02-01

    This article offers an optimal spatial sampling design that captures maximum variance with the minimum sample size. The proposed sampling design addresses the weaknesses of the sampling design that Kanaroglou et al. (2005) used for identifying 100 sites for capturing population exposure to NO(2) in Toronto, Canada. Their sampling design suffers from a number of weaknesses and fails to capture the spatial variability in NO(2) effectively. The demand surface they used is spatially autocorrelated and weighted by the population size, which leads to the selection of redundant sites. The location-allocation model (LAM) available with the commercial software packages, which they used to identify their sample sites, is not designed to solve spatial sampling problems using spatially autocorrelated data. A computer application (written in C++) that utilizes spatial search algorithm was developed to implement the proposed sampling design. This design was implemented in three different urban environments - namely Cleveland, OH; Delhi, India; and Iowa City, IA - to identify optimal sample sites for monitoring airborne particulates.

  10. Estimation of the optimal statistical quality control sampling time intervals using a residual risk measure.

    Directory of Open Access Journals (Sweden)

    Aristides T Hatjimihail

    Full Text Available BACKGROUND: An open problem in clinical chemistry is the estimation of the optimal sampling time intervals for the application of statistical quality control (QC procedures that are based on the measurement of control materials. This is a probabilistic risk assessment problem that requires reliability analysis of the analytical system, and the estimation of the risk caused by the measurement error. METHODOLOGY/PRINCIPAL FINDINGS: Assuming that the states of the analytical system are the reliability state, the maintenance state, the critical-failure modes and their combinations, we can define risk functions based on the mean time of the states, their measurement error and the medically acceptable measurement error. Consequently, a residual risk measure rr can be defined for each sampling time interval. The rr depends on the state probability vectors of the analytical system, the state transition probability matrices before and after each application of the QC procedure and the state mean time matrices. As optimal sampling time intervals can be defined those minimizing a QC related cost measure while the rr is acceptable. I developed an algorithm that estimates the rr for any QC sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any failure time and measurement error probability density function for each mode. Furthermore, given the acceptable rr, it can estimate the optimal QC sampling time intervals. CONCLUSIONS/SIGNIFICANCE: It is possible to rationally estimate the optimal QC sampling time intervals of an analytical system to sustain an acceptable residual risk with the minimum QC related cost. For the optimization the reliability analysis of the analytical system and the risk analysis of the measurement error are needed.

  11. Implementation of suitable flow injection/sequential-sample separation/preconcentration schemes for determination of trace metal concentrations using detection by electrothermal atomic absorption spectrometry and inductively coupled plasma mass spectrometry

    DEFF Research Database (Denmark)

    Hansen, Elo Harald; Wang, Jianhua

    2002-01-01

    Various preconditioning procedures encompassing appropriate separation/preconcentration schemes in order to obtain optimal sensitivity and selectivity characteristics when using electrothermal atomic absorption spectrometry (ETAAS) and inductively coupled plasma mass spectrometry (ICPMS) are presented. ... prior to detection are effected in a microconduit placed on top of an SI selection valve.

  12. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments to optimize sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min-1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher than in extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  13. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  14. On the Effectiveness of Sampling for Evolutionary Optimization in Noisy Environments.

    Science.gov (United States)

    Qian, Chao; Yu, Yang; Tang, Ke; Jin, Yaochu; Yao, Xin; Zhou, Zhi-Hua

    2016-12-16

    In real-world optimization tasks, the objective (i.e., fitness) function evaluation is often disturbed by noise due to a wide range of uncertainties. Evolutionary algorithms are often employed in noisy optimization, where reducing the negative effect of noise is a crucial issue. Sampling is a popular strategy for dealing with noise: to estimate the fitness of a solution, it evaluates the fitness multiple (k) times independently and then uses the sample average to approximate the true fitness. Obviously, sampling can make the fitness estimation closer to the true value, but it also increases the estimation cost. Previous studies mainly focused on empirical analysis and design of efficient sampling strategies, while the impact of sampling is unclear from a theoretical viewpoint. In this paper, we show via rigorous running time analysis that sampling can speed up noisy evolutionary optimization exponentially. For the (1+1)-EA solving the OneMax and the LeadingOnes problems under prior (e.g., one-bit) or posterior (e.g., additive Gaussian) noise, we prove that, under a high noise level, the running time can be reduced from exponential to polynomial by sampling. The analysis also shows that a gap of one in the value of k for sampling can lead to an exponential difference in the expected running time, cautioning that k must be selected carefully. We further prove, using two illustrative examples, that sampling can be more effective for noise handling than parent populations and threshold selection, two strategies that have been shown to be robust to noise. Finally, we also show that sampling can be ineffective when noise has no negative impact.
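
    To make the sampling strategy analysed above concrete, here is a small sketch of a (1+1)-EA on OneMax with one-bit prior noise, where each fitness evaluation is repeated k times and averaged. The parameters and the always-active noise are illustrative assumptions, not the exact setting of the proofs.

```python
# Minimal sketch: (1+1)-EA on OneMax with one-bit prior noise and k-fold sampling.
# Each noisy evaluation flips one uniformly chosen bit before counting ones;
# sampling averages k independent noisy evaluations. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def noisy_onemax(x):
    y = x.copy()
    y[rng.integers(len(y))] ^= 1          # one-bit prior noise (applied on every evaluation)
    return y.sum()

def sampled_fitness(x, k):
    return np.mean([noisy_onemax(x) for _ in range(k)])

def one_plus_one_ea(n=50, k=10, max_evals=200000):
    x = rng.integers(0, 2, n)
    evals = 0
    while x.sum() < n and evals < max_evals:
        y = x.copy()
        flip = rng.random(n) < 1.0 / n     # standard bit-wise mutation
        y[flip] ^= 1
        if sampled_fitness(y, k) >= sampled_fitness(x, k):
            x = y
        evals += 2 * k
    return evals, int(x.sum())

print(one_plus_one_ea(k=1))               # k = 1: progress often stalls near the optimum
print(one_plus_one_ea(k=10))              # larger k: noise is averaged out, fewer stalls
```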

  15. Optimization and experimental verification of an aerodynamic scheme for a flying wing

    Institute of Scientific and Technical Information of China (English)

    鲍君波; 王钢林; 武哲

    2012-01-01

    The characteristic parameters describing the planform shape of a flying wing, considering both stealth and aerodynamic performance, were proposed, and the constraint relations in scheme optimization were analyzed. A three-dimensional curved-surface model was built using a parameterization method, and the surface grid generation process was packaged to run automatically, so that new surface grids could be generated accurately and rapidly by changing the design parameters, improving the iteration efficiency of the scheme optimization process. The aerodynamic performance was calculated using a numerical method based on the Euler equations, with a viscous correction added in the analysis of the major schemes, and the stealth performance was estimated using a high-frequency approximation method. Taking the cruise condition as the design point, the aerodynamic scheme was optimized under stealth-performance constraints using an analysis-and-modification approach, and the selected scheme was verified by wind tunnel testing. The results show that the selected scheme deserves further research.

  16. Validation of genetic algorithm-based optimal sampling for ocean data assimilation

    Science.gov (United States)

    Heaney, Kevin D.; Lermusiaux, Pierre F. J.; Duda, Timothy F.; Haley, Patrick J.

    2016-08-01

    Regional ocean models are capable of forecasting conditions for usefully long intervals of time (days) provided that initial and ongoing conditions can be measured. In resource-limited circumstances, the placement of sensors in optimal locations is essential. Here, a nonlinear optimization approach to determine optimal adaptive sampling that uses the genetic algorithm (GA) method is presented. The method determines sampling strategies that minimize a user-defined physics-based cost function. The method is evaluated using identical twin experiments, comparing hindcasts from an ensemble of simulations that assimilate data selected using the GA adaptive sampling and other methods. For skill metrics, we employ the reduction of the ensemble root mean square error (RMSE) between the "true" data-assimilative ocean simulation and the different ensembles of data-assimilative hindcasts. A five-glider optimal sampling study is set up for a 400 km × 400 km domain in the Middle Atlantic Bight region, along the New Jersey shelf-break. Results are compared for several ocean and atmospheric forcing conditions.
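
    The GA in the study minimizes a physics-based cost function evaluated through ensemble hindcasts; a compact stand-in is sketched below, where candidate sensor (glider) placements on a grid are evolved to minimize a synthetic field-coverage cost. The cost function, grid, and GA settings are all assumptions for illustration, not the study's configuration.

```python
# Minimal sketch of a genetic algorithm for adaptive sampling / sensor placement.
# The cost below (field error variance weighted by distance to the nearest sensor)
# is a synthetic stand-in for the paper's physics-based, ensemble-derived cost function.
import numpy as np

rng = np.random.default_rng(2)
nx = 20                                             # nx x nx grid; a design = 5 cell indices
xx, yy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, nx))
field_var = (np.exp(-((xx - 0.3)**2 + (yy - 0.7)**2) / 0.05)
             + np.exp(-((xx - 0.8)**2 + (yy - 0.2)**2) / 0.02)).ravel()
cells = np.stack([xx.ravel(), yy.ravel()], axis=1)

def cost(design):
    pts = cells[design]
    d = np.linalg.norm(cells[:, None, :] - pts[None, :, :], axis=2).min(axis=1)
    return float(np.sum(field_var * d))             # "unobserved" variance left by the design

pop = rng.integers(0, nx * nx, size=(40, 5))        # 40 candidate five-sensor designs
for gen in range(100):
    fitness = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]         # truncation selection (minimization)
    a, b = rng.integers(0, 20, 20), rng.integers(0, 20, 20)
    mix = rng.random((20, 5)) < 0.5                 # uniform crossover
    children = np.where(mix, parents[a], parents[b])
    mutate = rng.random(children.shape) < 0.2       # mutation: re-draw some sensor cells
    children[mutate] = rng.integers(0, nx * nx, mutate.sum())
    pop = np.vstack([parents, children])

best = pop[int(np.argmin([cost(ind) for ind in pop]))]
print(cells[best])                                  # coordinates of the best placement found
```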

  17. 'Adaptive Importance Sampling for Performance Evaluation and Parameter Optimization of Communication Systems'

    NARCIS (Netherlands)

    Remondo Bueno, D.; Srinivasan, R.; Nicola, V.F.; van Etten, Wim; Tattje, H.E.P.

    2000-01-01

    We present new adaptive importance sampling techniques based on stochastic Newton recursions. Their applicability to the performance evaluation of communication systems is studied. Besides bit-error rate (BER) estimation, the techniques are used for system parameter optimization. Two system models

  18. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier studies have drawbacks because there are three phases in the optimization loop and several empirical parameters. We propose a unified sampling criterion to simplify the algorithm and to achieve the global optimum of problems with constraints without any empirical parameters. It is able to select points located in the feasible region with high model uncertainty as well as points along the boundary of the constraint at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model because the sample points are not located within extremely small regions as in super-EGO. The performance of the proposed method, such as the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.

  19. An Optimal Data Transmission Scheme in Cognitive Radio Networks (CRN)

    Institute of Scientific and Technical Information of China (English)

    文豪

    2012-01-01

    Effective, fast and reliable data transmission is a hot topic in cognitive radio networks. To overcome the disadvantages of existing schemes, this paper proposes an optimal data transmission scheme. The scheme considers the constraints of transmitter power and node load balancing, and achieves optimal transmission based on spectrum allocation, an on-demand routing policy, and link reliability prediction. Simulation results show that, compared with existing algorithms, this method reduces the number of routing reconstructions and greatly improves system throughput.

  20. SpaGrOW—A Derivative-Free Optimization Scheme for Intermolecular Force Field Parameters Based on Sparse Grid Methods

    Directory of Open Access Journals (Sweden)

    Dirk Reith

    2013-09-01

    Full Text Available Molecular modeling is an important subdomain in the field of computational modeling, regarding both scientific and industrial applications. This is because computer simulations on a molecular level are a virtuous instrument to study the impact of microscopic on macroscopic phenomena. Accurate molecular models are indispensable for such simulations in order to predict physical target observables, like density, pressure, diffusion coefficients or energetic properties, quantitatively over a wide range of temperatures. Thereby, molecular interactions are described mathematically by force fields. The mathematical description includes parameters for both intramolecular and intermolecular interactions. While intramolecular force field parameters can be determined by quantum mechanics, the parameterization of the intermolecular part is often tedious. Recently, an empirical procedure, based on the minimization of a loss function between simulated and experimental physical properties, was published by the authors. Thereby, efficient gradient-based numerical optimization algorithms were used. However, empirical force field optimization is inhibited by the two following central issues appearing in molecular simulations: firstly, they are extremely time-consuming, even on modern and high-performance computer clusters, and secondly, simulation data is affected by statistical noise. The latter provokes the fact that an accurate computation of gradients or Hessians is nearly impossible close to a local or global minimum, mainly because the loss function is flat. Therefore, the question arises of whether to apply a derivative-free method approximating the loss function by an appropriate model function. In this paper, a new Sparse Grid-based Optimization Workflow (SpaGrOW is presented, which accomplishes this task robustly and, at the same time, keeps the number of time-consuming simulations relatively small. This is achieved by an efficient sampling procedure

  1. Optimization of groundwater sampling approach under various hydrogeological conditions using a numerical simulation model

    Science.gov (United States)

    Qi, Shengqi; Hou, Deyi; Luo, Jian

    2017-09-01

    This study presents a numerical model based on field data to simulate groundwater flow in both the aquifer and the well-bore for the low-flow sampling method and the well-volume sampling method. The numerical model was calibrated to match well with field drawdown, and calculated flow regime in the well was used to predict the variation of dissolved oxygen (DO) concentration during the purging period. The model was then used to analyze sampling representativeness and sampling time. Site characteristics, such as aquifer hydraulic conductivity, and sampling choices, such as purging rate and screen length, were found to be significant determinants of sampling representativeness and required sampling time. Results demonstrated that: (1) DO was the most useful water quality indicator in ensuring groundwater sampling representativeness in comparison with turbidity, pH, specific conductance, oxidation reduction potential (ORP) and temperature; (2) it is not necessary to maintain a drawdown of less than 0.1 m when conducting low flow purging. However, a high purging rate in a low permeability aquifer may result in a dramatic decrease in sampling representativeness after an initial peak; (3) the presence of a short screen length may result in greater drawdown and a longer sampling time for low-flow purging. Overall, the present study suggests that this new numerical model is suitable for describing groundwater flow during the sampling process, and can be used to optimize sampling strategies under various hydrogeological conditions.

  2. Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States

    Science.gov (United States)

    Sousan, Sinan Dhia Jameel

    This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 x 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at respective IMPROVE and STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that
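
    For reference, the core optimal interpolation (OI) analysis step that such a data assimilation system applies is the standard update x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b). The sketch below shows it on a toy state vector with assumed background and observation error covariances, not the thesis's CMAQ/MODIS configuration.

```python
# Minimal sketch of an optimal interpolation (OI) analysis update:
#   x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b)
# Toy 5-cell state with 2 observations; B and R are illustrative covariances.
import numpy as np

def oi_update(x_b, B, H, y, R):
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return x_b + K @ (y - H @ x_b)

x_b = np.array([10.0, 12.0, 11.0, 9.0, 8.0])       # background PM2.5-like field (ug/m3)
dist = np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
B = 4.0 * np.exp(-dist / 2.0)                      # spatially correlated background error
H = np.zeros((2, 5)); H[0, 1] = 1.0; H[1, 3] = 1.0 # observations of cells 1 and 3
y = np.array([15.0, 7.0])
R = np.eye(2) * 1.0                                # observation error covariance
print(oi_update(x_b, B, H, y, R))                  # analysis pulled toward the observations
```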

  3. A Simplified Approach for Two-Dimensional Optimal Controlled Sampling Designs

    Directory of Open Access Journals (Sweden)

    Neeraj Tiwari

    2014-01-01

    Full Text Available Controlled sampling is a unique method of sample selection that minimizes the probability of selecting nondesirable combinations of units. Extending the concept of linear programming with an effective distance measure, we propose a simple method for two-dimensional optimal controlled selection that ensures zero probability to nondesired samples. Alternative estimators for population total and its variance have also been suggested. Some numerical examples have been considered to demonstrate the utility of the proposed procedure in comparison to the existing procedures.

  4. Optimizing headspace sampling temperature and time for analysis of volatile oxidation products in fish oil

    DEFF Research Database (Denmark)

    Rørbæk, Karen; Jensen, Benny

    1997-01-01

    Headspace gas chromatography (HS-GC), based on adsorption to Tenax GR(R), thermal desorption and GC, has been used for analysis of volatiles in fish oil. To optimize sampling conditions, the effect of heating the fish oil at various temperatures and times was evaluated from anisidine values (AV) and HS-GC. AV indicated sample degradation at 90 degrees C but only small alterations between 60 and 75 degrees C. HS-GC showed increasing response with temperature and time. Purging at 75 degrees C for 45 min was selected as the preferred sampling condition for oxidized fish oil.

  5. Optimizing School-Based Health-Promotion Programmes: Lessons from a Qualitative Study of Fluoridated Milk Schemes in the UK

    Science.gov (United States)

    Foster, Geraldine R. K.; Tickle, Martin

    2013-01-01

    Background and objective: Some districts in the United Kingdom (UK), where the level of child dental caries is high and water fluoridation has not been possible, implement school-based fluoridated milk (FM) schemes. However, process variables, such as consent to drink FM and loss of children as they mature, impede the effectiveness of these…

  6. Optimized methods for high-throughput analysis of hair samples for American black bears (Ursus americanus)

    Directory of Open Access Journals (Sweden)

    Thea V Kristensen

    2011-06-01

    Full Text Available Noninvasive sampling has revolutionized the study of species that are difficult or dangerous to study using traditional methods. Early studies were often confined to small populations as genotyping large numbers of samples was prohibitively costly and labor intensive. Here we describe optimized protocols designed to reduce the costs and effort required for microsatellite genotyping and sex determination for American black bears (Ursus americanus). We redesigned primers for six microsatellite loci, designed novel primers for the amelogenin gene for genetic determination of sex, and optimized conditions for a nine-locus multiplex PCR. Our high-throughput methods will enable researchers to include larger sample sizes in studies of black bears, providing data in a timely fashion that can be used to inform population management.

  7. An advanced computational scheme for the optimization of 2D radial reflector calculations in pressurized water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Clerc, T., E-mail: thomas.clerc2@gmail.com [Institut de Génie Nucléaire, P.O. Box 6079, Station “Centre-Ville”, Montréal, Qc., Canada H3C 3A7 (Canada); Hébert, A., E-mail: alain.hebert@polymtl.ca [Institut de Génie Nucléaire, P.O. Box 6079, Station “Centre-Ville”, Montréal, Qc., Canada H3C 3A7 (Canada); Leroyer, H.; Argaud, J.P.; Bouriquet, B.; Ponçot, A. [Électricité de France, R and D, SINETICS, 1 Av. du Général de Gaulle, 92141 Clamart (France)

    2014-07-01

    Highlights: • We present a computational scheme for the determination of reflector properties in a PWR. • The approach is based on the minimization of a functional. • We use a data assimilation method or a parametric complementarity principle. • The reference target is a solution obtained with the method of characteristics. • The simplified flux solution is based on diffusion theory or on the simplified Pn method. - Abstract: This paper presents a computational scheme for the determination of equivalent 2D multi-group spatially dependant reflector parameters in a Pressurized Water Reactor (PWR). The proposed strategy is to define a full-core calculation consistent with a reference lattice code calculation such as the Method Of Characteristics (MOC) as implemented in APOLLO2 lattice code. The computational scheme presented here relies on the data assimilation module known as “Assimilation de données et Aide à l’Optimisation (ADAO)” of the SALOME platform developed at Électricité De France (EDF), coupled with the full-core code COCAGNE and with the lattice code APOLLO2. A first code-to-code verification of the computational scheme is made using the OPTEX reflector model developed at École Polytechnique de Montréal (EPM). As a result, we obtain 2D multi-group, spatially dependant reflector parameters, using both diffusion or SP{sub N} operators. We observe important improvements of the power discrepancies distribution over the core when using reflector parameters computed with the proposed computational scheme, and the SP{sub N} operator enables additional improvements.

  8. NSGA-II based optimal control scheme of wind thermal power system for improvement of frequency regulation characteristics

    Directory of Open Access Journals (Sweden)

    S. Chaine

    2015-09-01

    Full Text Available This work presents a methodology to optimize the controller parameters of a doubly fed induction generator modeled for frequency regulation in an interconnected two-area wind-integrated thermal power system. The gains of the integral controller of the automatic generation control loop and of the proportional and derivative controllers of the doubly fed induction generator inertial control loop are optimized in a coordinated manner by employing the multi-objective non-dominated sorting genetic algorithm-II. To reduce the number of optimization parameters, a sensitivity analysis is performed, showing that the above-mentioned three controller parameters are the most sensitive among all others. Non-dominated sorting genetic algorithm-II shows better optimization efficiency than linear programming, the genetic algorithm, particle swarm optimization, and the cuckoo search algorithm. The designed optimal controller exhibits robust performance even with variations in wind energy penetration levels, disturbances, parameters, and operating conditions in the system.
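
    The core of NSGA-II that the study relies on is fast non-dominated sorting followed by crowding-distance selection; a minimal sketch of the non-dominated sorting step (for minimization of two objectives) is given below. It is a generic illustration, not the controller-tuning setup of the paper.

```python
# Minimal sketch of fast non-dominated sorting (the core ranking step of NSGA-II),
# here for minimization of two objectives on a small random population.
import numpy as np

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

def non_dominated_sort(F):
    """Return a list of fronts, each a list of indices into the objective matrix F."""
    n = len(F)
    dominated_by = [[] for _ in range(n)]          # solutions that i dominates
    dom_count = np.zeros(n, dtype=int)             # number of solutions dominating i
    for i in range(n):
        for j in range(n):
            if i != j and dominates(F[i], F[j]):
                dominated_by[i].append(j)
            elif i != j and dominates(F[j], F[i]):
                dom_count[i] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

rng = np.random.default_rng(3)
F = rng.random((12, 2))                            # 12 candidate controllers, 2 objectives
print(non_dominated_sort(F))                       # front 0 is the current Pareto set
```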

  9. Optimization of Stoping Scheme Based on Comprehensive AHP-Fuzzy Evaluation

    Institute of Scientific and Technical Information of China (English)

    钟福生; 陈建宏; 周智勇

    2012-01-01

    Considering that traditional analogy methods can hardly take all indicators into account in stoping scheme optimization, the analytic hierarchy process (AHP) was used to analyze the indicators influencing stoping scheme selection and to set up a comprehensive evaluation index system, from which the weight of each index was obtained. On the basis of fuzzy comprehensive evaluation, the three proposed alternative stoping schemes for an example mine were evaluated comprehensively; the evaluation results were 0.78, 0.79 and 0.87, from which the optimal stoping scheme was determined. The results show that the evaluation agrees with the actual production situation of the test stope.
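
    A compact sketch of the AHP-plus-fuzzy-evaluation machinery described above is given below: index weights are taken from the principal eigenvector of a pairwise-comparison matrix, and each scheme's score is the weighted sum of its fuzzy membership ratings. The comparison matrix and membership values are invented for illustration, not the mine data of the paper.

```python
# Minimal sketch of AHP weighting plus fuzzy comprehensive evaluation.
# The pairwise-comparison matrix A and membership matrix R are illustrative only.
import numpy as np

def ahp_weights(A):
    """Principal-eigenvector weights of a pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(A)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

A = np.array([[1.0, 3.0, 5.0],        # 3 evaluation indexes (e.g., cost, safety, recovery)
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w = ahp_weights(A)

# Membership degrees of each scheme on each index (rows: schemes, cols: indexes).
R = np.array([[0.70, 0.85, 0.80],
              [0.75, 0.80, 0.82],
              [0.90, 0.85, 0.86]])
scores = R @ w                         # weighted-average fuzzy operator
print(np.round(scores, 2), "best scheme:", int(np.argmax(scores)) + 1)
```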

  10. Optimization of sampling and counting times for gamma-ray spectrometric measurements of short-lived gamma-ray emitters in aqueous samples.

    Science.gov (United States)

    Korun, M

    2008-01-01

    A method to determine the optimal sampling and counting regimes for water monitoring is presented. It is assumed that samples are collected at a constant rate. The collection time is followed by a sample preparation time that is proportional to the sample quantity collected, and then by the counting time. In the optimal regime these times are chosen in such a way that the minimum detectable concentration is the lowest. Two cases are presented: the case when the background originates from the spectrometer background, which is constant in time and independent of the sample properties, and the case when the background originates from the radioactivity present in the sample.

  11. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^{1/2}) or O(N*^{1/2}). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
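
    A numerical sketch of the kind of utility maximization involved is given below; it uses a deliberately simplified utility (normal-approximation power, a normal prior on the effect size, and a gain proportional to the positive effect for the N - 2n patients outside the trial), so the prior, gain function and test are our illustrative assumptions rather than the exponential-family utility of the paper. Printing the optimum for several N lets one check how it grows with the population size.

      import numpy as np
      from scipy.stats import norm

      # Stylized decision-theoretic sample size (illustrative assumptions only).
      deltas = np.linspace(-1.0, 1.5, 601)               # candidate true effect sizes
      prior = norm.pdf(deltas, loc=0.2, scale=0.3)       # assumed prior on the effect
      prior /= prior.sum()

      def expected_utility(n_per_arm, N, alpha=0.025):
          # The trial "succeeds" if a one-sided two-sample z-test rejects; the
          # remaining N - 2n patients then gain in proportion to the true effect.
          z_a = norm.ppf(1 - alpha)
          power = norm.cdf(np.sqrt(n_per_arm / 2.0) * deltas - z_a)
          gain = np.maximum(deltas, 0.0)
          return (N - 2 * n_per_arm) * np.sum(prior * power * gain)

      for N in (10**3, 10**4, 10**5):
          grid = np.arange(2, N // 2, 2)
          n_opt = grid[np.argmax([expected_utility(n, N) for n in grid])]
          print(f"N = {N:>6}: optimal n per arm = {n_opt}, n_opt/sqrt(N) = {n_opt/np.sqrt(N):.2f}")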

  12. Time optimization of (90)Sr measurements: Sequential measurement of multiple samples during ingrowth of (90)Y.

    Science.gov (United States)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-04-01

    The aim of this paper is to contribute to a more rapid determination of a series of samples containing (90)Sr by making the Cherenkov measurement of the daughter nuclide (90)Y more time efficient. There are many instances when an optimization of the measurement method is favorable, such as situations requiring rapid results for urgent decisions or, conversely, the need to maximize the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the ingrowth time as well as the individual measurement times for n samples in a series. This work is focused on the measurement of (90)Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time is lower than when using the same measurement time for all samples. It was found that by optimization the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm.
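
    The scheduling idea (each later sample waits longer, so its (90)Y has grown in more and its own count can be shorter) can be sketched as below. The detector efficiency, sample volume, initial delay and the Currie-type expression for the detection limit are all illustrative assumptions, not the values or exact formulation used in the paper.

      import numpy as np
      from scipy.optimize import brentq

      LAMBDA_Y90 = np.log(2) / (64.0 * 3600)   # 90Y decay constant (T1/2 ~ 64 h), 1/s
      EFF = 0.4                                 # assumed Cherenkov counting efficiency
      BKG_CPS = 0.8 / 60.0                      # assumed background, counts per second
      VOL_L = 0.5                               # assumed sample volume, litres
      MDA_TARGET = 1.0                          # required MDA, Bq/L

      def mda(t_count, t_ingrowth):
          # Currie-type detection limit divided by efficiency, counting time,
          # ingrowth factor and volume gives an MDA in Bq/L.
          ingrowth = 1.0 - np.exp(-LAMBDA_Y90 * t_ingrowth)
          ld = 2.71 + 4.65 * np.sqrt(BKG_CPS * t_count)
          return ld / (EFF * t_count * ingrowth * VOL_L)

      def counting_time(t_ingrowth):
          # Shortest counting time that reaches MDA_TARGET for this ingrowth time.
          return brentq(lambda t: mda(t, t_ingrowth) - MDA_TARGET, 1.0, 7 * 24 * 3600)

      t_clock = 4 * 3600.0       # assume strontium separation ended 4 h before counting
      total = 0.0
      for i in range(10):        # ten samples measured one after the other
          tc = counting_time(t_clock)
          print(f"sample {i + 1}: ingrowth {t_clock / 3600:5.1f} h, count {tc / 3600:5.2f} h")
          t_clock += tc
          total += tc
      print(f"total counting time: {total / 3600:.1f} h")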

  13. Average Sample-path Optimality for Continuous-time Markov Decision Processes in Polish Spaces

    Institute of Scientific and Technical Information of China (English)

    Quan-xin ZHU

    2011-01-01

    In this paper we study the average sample-path cost (ASPC) problem for continuous-time Markov decision processes in Polish spaces. To the best of our knowledge, this paper is a first attempt to study the ASPC criterion on continuous-time MDPs with Polish state and action spaces. The corresponding transition rates are allowed to be unbounded, and the cost rates may have neither upper nor lower bounds. Under some mild hypotheses, we prove the existence of ε (ε ≥ 0)-ASPC optimal stationary policies based on two different approaches: one is the “optimality equation” approach and the other is the “two optimality inequalities” approach.

  14. Optimizing Diagnostic Yield for EUS-Guided Sampling of Solid Pancreatic Lesions: A Technical Review

    Science.gov (United States)

    Weston, Brian R.

    2013-01-01

    Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) has a higher diagnostic accuracy for pancreatic cancer than other techniques. This article will review the current advances and considerations for optimizing diagnostic yield for EUS-guided sampling of solid pancreatic lesions. Preprocedural considerations include patient history, confirmation of appropriate indication, review of imaging, method of sedation, experience required by the endoscopist, and access to rapid on-site cytologic evaluation. New EUS imaging techniques that may assist with differential diagnoses include contrast-enhanced harmonic EUS, EUS elastography, and EUS spectrum analysis. FNA techniques vary, and multiple FNA needles are now commercially available; however, neither techniques nor available FNA needles have been definitively compared. The need for suction depends on the lesion, and the need for a stylet is equivocal. No definitive endosonographic finding can predict the optimal number of passes for diagnostic yield. Preparation of good smears and communication with the cytopathologist are essential to optimize yield. PMID:23935542

  15. A New Wavelength Optimization and Energy-Saving Scheme Based on Network Coding in Software-Defined WDM-PON Networks

    Science.gov (United States)

    Ren, Danping; Wu, Shanshan; Zhang, Lijing

    2016-09-01

    In view of the global control and flexible monitoring capabilities of software-defined networks (SDN), we propose a new optical access network architecture dedicated to Wavelength Division Multiplexing-Passive Optical Network (WDM-PON) systems based on SDN. The network coding (NC) technology is also applied in this architecture to enhance the utilization of wavelength resources and reduce the cost of light sources. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce the system time delay and energy consumption.

  16. Optimization of the fractionated irradiation scheme considering physical doses to tumor and organ at risk based on dose–volume histograms

    Energy Technology Data Exchange (ETDEWEB)

    Sugano, Yasutaka [Graduate School of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo, Hokkaido 060-0812 (Japan); Mizuta, Masahiro [Laboratory of Advanced Data Science, Information Initiative Center, Hokkaido University, Kita-11, Nishi-5, Kita-ku, Sapporo, Hokkaido 060-0811 (Japan); Takao, Seishin; Shirato, Hiroki; Sutherland, Kenneth L. [Department of Radiation Medicine, Graduate School of Medicine, Hokkaido University, Kita-15, Nishi-5, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Date, Hiroyuki, E-mail: date@hs.hokudai.ac.jp [Faculty of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo, Hokkaido 060-0812 (Japan)

    2015-11-15

    Purpose: Radiotherapy of solid tumors has been performed with various fractionation regimens such as multi- and hypofractionation. However, the ability to optimize the fractionation regimen considering the physical dose distribution remains insufficient. This study aims to optimize the fractionation regimen, in which the authors propose a graphical method for selecting the optimal number of fractions (n) and dose per fraction (d) based on dose–volume histograms for the tumor and the normal tissues of organs around the tumor. Methods: Modified linear-quadratic models were employed to estimate the radiation effects on the tumor and an organ at risk (OAR), where the repopulation of the tumor cells and the linearity of the dose-response curve in the high-dose range of the surviving fraction were considered. The minimization problem for the damage effect on the OAR was solved under the constraint that the radiation effect on the tumor is fixed, using a graphical method. Here, the damage effect on the OAR was estimated based on the dose–volume histogram. Results: It was found that optimization of the fractionation scheme incorporating the dose–volume histogram is possible by employing appropriate cell survival models. The graphical method considering the repopulation of tumor cells and a rectilinear response in the high-dose range enables one to derive the optimal number of fractions and dose per fraction. For example, in the treatment of prostate cancer, the optimal fractionation was suggested to lie in the range of 8–32 fractions with a daily dose of 2.2–6.3 Gy. Conclusions: It is possible to optimize the number of fractions and dose per fraction based on the physical dose distribution (i.e., the dose–volume histogram) by the graphical method considering the effects on the tumor and the OARs around the tumor. This method may provide a new guideline for optimizing the fractionation regimen for physics-guided fractionation.
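
    A minimal sketch of such a selection of n and d is shown below. It keeps only the standard linear-quadratic effect plus a crude repopulation allowance and collapses the OAR dose–volume histogram to a single dose-scaling factor, so all parameter values (alpha/beta ratios, scaling factor, required tumor effect, repopulation allowance) are illustrative assumptions rather than the paper's model.

      import numpy as np

      ab_tumor, ab_oar = 10.0, 3.0   # assumed alpha/beta ratios (Gy)
      s = 0.6                        # assumed OAR dose per unit tumor dose (one DVH point)
      bed_required = 72.0            # assumed tumor BED needed without repopulation (Gy)
      k_repop = 0.5                  # assumed extra tumor BED per added fraction (Gy)

      best = None
      for n in range(1, 61):
          target = bed_required + k_repop * (n - 1)
          # Solve n*d*(1 + d/ab_tumor) = target for the positive dose per fraction d.
          a, b, c = n / ab_tumor, float(n), -target
          d = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
          oar_bed = n * (s * d) * (1 + (s * d) / ab_oar)
          if best is None or oar_bed < best[2]:
              best = (n, d, oar_bed)

      n, d, oar_bed = best
      print(f"n = {n} fractions, d = {d:.2f} Gy/fraction, OAR BED = {oar_bed:.1f} Gy")

    Without the repopulation allowance the minimum always lands at the largest allowed n; it is the repopulation term (and, in the paper, the high-dose linear response and the full DVH) that produces an interior optimum comparable to the 8–32 fraction range quoted above.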

  17. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on i-band absolute magnitude (M_i), or, for a small subset of our sample, M_i and color (NUV - i). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_i and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M_* ≤ 3 × 10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  18. Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples

    Directory of Open Access Journals (Sweden)

    Hyunok Oh

    2003-05-01

    Full Text Available In multimedia and graphics applications, data samples of nonprimitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model with a given execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store pointers to the global buffer entries. The proposed algorithm reduces buffer memory by 67% for a JPEG encoder and by 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirements.

  19. On-line sample-pre-treatment schemes for trace-level determinations of metals by coupling flow injection or sequential injection with ICP-MS

    DEFF Research Database (Denmark)

    Wang, Jianhua; Hansen, Elo Harald

    2003-01-01

    This review covers flow injection (FI) and sequential injection (SI) schemes for on-line matrix separation and pre-concentration of trace levels of metals with detection by ICP-MS. It highlights some of the frequently applied on-line sample-pretreatment schemes, including solid phase extraction (SPE), on-wall molecular sorption and precipitate/(co)-precipitate retention using a polytetrafluoroethylene (PTFE) knotted reactor (KR), solvent extraction-back extraction, and hydride/vapor generation. It also addresses a novel, robust approach based on SI-LOV bead injection (BI) for on-line separation and pre-concentration of ultra-trace levels of metals on a renewable microcolumn.

  20. An Optimal Watermarking Scheme Based on DWT and SVD

    Institute of Scientific and Technical Information of China (English)

    张红; 马彩文; 董永英; 李艳

    2005-01-01

    Existing SVD-based watermarking algorithms apply singular value decomposition (SVD) directly to the host image or the watermark image and embed the watermark in the singular values of the host image. This paper presents a new DWT-SVD watermarking scheme: a discrete wavelet transform (DWT) is first applied to both the cover image and the watermark, the watermark is then embedded into the singular values of each frequency subband, and the embedding strength varies from subband to subband. The robustness of the scheme was analyzed under attacks such as JPEG compression, image rotation and cropping, and numerical experiments show that it offers good robustness and security.
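
    A minimal embedding sketch of this idea in Python, using PyWavelets and NumPy; the wavelet, embedding strengths and image sizes are our assumptions, and the extraction step is omitted:

      import numpy as np
      import pywt   # PyWavelets, assumed available

      def embed_dwt_svd(cover, watermark, wavelet="haar", alphas=(0.01, 0.05, 0.05, 0.1)):
          # One-level DWT of both images, then add the watermark singular values to
          # the cover singular values of each subband with a subband-dependent strength.
          cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), wavelet)
          wA, (wH, wV, wD) = pywt.dwt2(watermark.astype(float), wavelet)
          marked = []
          for band, wband, alpha in zip((cA, cH, cV, cD), (wA, wH, wV, wD), alphas):
              U, S, Vt = np.linalg.svd(band, full_matrices=False)
              Sw = np.linalg.svd(wband, compute_uv=False)
              marked.append(U @ np.diag(S + alpha * Sw) @ Vt)
          return pywt.idwt2((marked[0], (marked[1], marked[2], marked[3])), wavelet)

      rng = np.random.default_rng(0)
      cover = rng.integers(0, 256, size=(256, 256))          # stand-in for the host image
      watermark = rng.integers(0, 2, size=(256, 256)) * 255  # stand-in for a binary logo
      watermarked = embed_dwt_svd(cover, watermark)
      print(float(np.abs(watermarked - cover).max()))

    Extraction would reverse the last step using the stored orthogonal matrices and the original singular values, which is why SVD-based schemes of this kind typically require side information from the embedding stage.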

  1. Optimal adaptive group sequential design with flexible timing of sample size determination.

    Science.gov (United States)

    Cui, Lu; Zhang, Lanju; Yang, Bo

    2017-04-26

    Flexible sample size designs, including group sequential and sample size re-estimation designs, have been used as alternatives to fixed sample size designs to achieve more robust statistical power and better trial efficiency. In this work, a new representation of the sample size re-estimation design suggested by Cui et al. [5,6] is introduced as an adaptive group sequential design with flexible timing of sample size determination. This generalized adaptive group sequential design allows a one-time sample size determination either before the start of or in the mid-course of a clinical study. The new approach leads to possible design optimization on an expanded space of design parameters. Its equivalence to the sample size re-estimation design proposed by Cui et al. provides further insight into re-estimation designs and helps to address common confusion and misunderstanding. Issues in designing flexible sample size trials, including design objectives, performance evaluation and implementation, are touched upon, with an example to illustrate. Copyright © 2017. Published by Elsevier Inc.

  2. The Design of Space Optimization for a Weighted Control Allocation Scheme

    Institute of Scientific and Technical Information of China (English)

    陈勇; 董新民; 王发威; 赵丽

    2012-01-01

    To increase the utilization rate of the attainable virtual instruction set in static weighted control allocation, a space optimization design scheme that determines the best instruction weights offline is proposed based on an improved particle swarm optimization algorithm. Mathematical models of the weighted pseudo-inverse method and the mixed optimization method are built, a uniform control law is derived from them, and an algorithm for constructing the attainable set is presented. By introducing quantum and genetic factors, the diversity of the particle swarm is enhanced through crossover operations so that the globally optimal weights of the control instructions are obtained quickly. Simulation results show that the designed scheme can maximize the attainable-set space of the weighted control allocation strategy.

  3. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    Energy Technology Data Exchange (ETDEWEB)

    Tiwari, P; Xie, Y; Chen, Y [Washington University in Saint Louis, Saint Louis, Missouri (United States); Deasy, J [Memorial Sloan Kettering Cancer Center, NY, NY (United States)

    2014-06-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected the fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading the dose quality.
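
    A sketch of the sampling step described above (function and parameter names are ours; the abstract does not name the clustering method beyond grouping voxels with similar influence-matrix signatures, so k-means is used here as a plausible stand-in):

      import numpy as np
      from sklearn.cluster import KMeans

      def sample_voxels(influence, boundary_mask, sample_rate=0.10, n_clusters=50, seed=0):
          # Keep every boundary voxel; cluster interior voxels by their rows of the
          # beamlet influence matrix and keep a fixed fraction of each cluster.
          boundary = np.flatnonzero(boundary_mask)
          interior = np.flatnonzero(~boundary_mask)
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
          labels = km.fit_predict(influence[interior])
          rng = np.random.default_rng(seed)
          kept = [boundary]
          for c in range(n_clusters):
              members = interior[labels == c]
              if members.size == 0:
                  continue
              n_keep = max(1, int(round(sample_rate * members.size)))
              kept.append(rng.choice(members, size=n_keep, replace=False))
          return np.concatenate(kept)

      rng = np.random.default_rng(1)
      influence = rng.random((2000, 60))        # toy influence matrix: voxels x beamlets
      boundary = rng.random(2000) < 0.1         # toy boundary-voxel flags
      subset = sample_voxels(influence, boundary)
      print(subset.size, "of", influence.shape[0], "voxels kept for optimization")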

  4. Optimal error estimates and energy conservation identities of the ADI-FDTD scheme on staggered grids for 3D Maxwell's equations

    CERN Document Server

    Gao, Liping

    2011-01-01

    This paper is concerned with the optimal error estimates and energy conservation properties of the alternating direction implicit finite-difference time-domain (ADI-FDTD) method which is a popular scheme for solving the 3D Maxwell equations. Precisely, for the case with a perfectly electric conducting (PEC) boundary condition we establish the optimal second-order error estimates in both space and time in the discrete $H^1$-norm for the ADI-FDTD scheme and prove the approximate divergence preserving property that if the divergence of the initial electric and magnetic fields are zero then the discrete $L^2$-norm of the discrete divergence of the ADI-FDTD solution is approximately zero with the second-order accuracy in both space and time. A key ingredient is two new discrete energy norms which are second-order in time perturbations of two new energy conservation laws for the Maxwell equations introduced in this paper. Furthermore, we prove that, in addition to two known discrete energy identities which are seco...

  5. Characterization and Design of High-Level VHDL I/Q Frequency Downconverter Via Special Sampling Scheme

    Science.gov (United States)

    2006-03-01

    For the secondary case (a 100 MHz signal sampled at 1 GHz), only bit-widths above 8 bits can be used to meet the phase-imbalance requirement, and only those above 7 bits for amplitude imbalance.

  6. Trends and perspectives of flow injection/sequential injection on-line sample-pretreatment schemes coupled to ETAAS

    DEFF Research Database (Denmark)

    Wang, Jianhua; Hansen, Elo Harald

    2005-01-01

    Flow injection (FI) analysis, the first generation of this technique, was supplemented in the 1990s by its second generation, sequential injection (SI), and most recently by the third generation (i.e., Lab-on-Valve). The dominant role played by FI in automatic, on-line sample pretreatments

  7. WISECONDOR: detection of fetal aberrations from shallow sequencing maternal plasma based on a within-sample comparison scheme

    NARCIS (Netherlands)

    Straver, R.; Sistermans, E.A.; Holstege, H.; Visser, A.; Oudejans, C.B.M.; Reinders, M.J.T.

    2013-01-01

    Genetic disorders can be detected by prenatal diagnosis using Chorionic Villus Sampling, but the 1:100 risk of miscarriage restricts its use to fetuses that are suspected to have an aberration. Noninvasive detection of trisomy 21 cases is now possible owing to the upswing of next-generation sequencing.

  8. A two-stage method to determine optimal product sampling considering dynamic potential market.

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external or internal coefficient has a negative influence on the sampling level; the changing rate of the potential market has no significant influence, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, a global sensitivity analysis examines the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when they are known only inaccurately and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level.

  9. Evaluation of sample preparation methods and optimization of nickel determination in vegetable tissues

    Directory of Open Access Journals (Sweden)

    Rodrigo Fernando dos Santos Salazar

    2011-02-01

    Full Text Available Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated by food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption Spectrometry (ETAAS) in vegetable samples; and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. For the analytical technique of determination by ETAAS or FAAS, the results were validated by analyte addition and recovery tests. The most viable method tested for quantification of this element was HClO4-HNO3 wet digestion. All samples but the carrot tissue collected in Lorena contained Ni levels above those permitted by the Brazilian Ministry of Health. The most disturbing results, requiring more detailed studies, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.

  10. Optimized sample preparation of endoscopic collected pancreatic fluid for SDS-PAGE analysis.

    Science.gov (United States)

    Paulo, Joao A; Lee, Linda S; Wu, Bechien; Repas, Kathryn; Banks, Peter A; Conwell, Darwin L; Steen, Hanno

    2010-07-01

    The standardization of methods for human body fluid protein isolation is a critical initial step for proteomic analyses aimed to discover clinically relevant biomarkers. Several caveats have hindered pancreatic fluid proteomics, including the heterogeneity of samples and protein degradation. We aim to optimize sample handling of pancreatic fluid that has been collected using a safe and effective endoscopic collection method (endoscopic pancreatic function test). Using SDS-PAGE protein profiling, we investigate (i) precipitation techniques to maximize protein extraction, (ii) auto-digestion of pancreatic fluid following prolonged exposure to a range of temperatures, (iii) effects of multiple freeze-thaw cycles on protein stability, and (iv) the utility of protease inhibitors. Our experiments revealed that TCA precipitation resulted in the most efficient extraction of protein from pancreatic fluid of the eight methods we investigated. In addition, our data reveal that although auto-digestion of proteins is prevalent at 23 and 37 degrees C, incubation on ice significantly slows such degradation. Similarly, when the sample is maintained on ice, proteolysis is minimal during multiple freeze-thaw cycles. We have also determined the addition of protease inhibitors to be assay-dependent. Our optimized sample preparation strategy can be applied to future proteomic analyses of pancreatic fluid.

  11. A resting box for outdoor sampling of adult Anopheles arabiensis in rice irrigation schemes of lower Moshi, northern Tanzania

    Directory of Open Access Journals (Sweden)

    Msangi Shandala

    2009-04-01

    Full Text Available Background: Malaria vector sampling is the best method for understanding vector dynamics and infectivity, from which disease transmission seasonality can be established. There is a need to protect humans involved in the sampling of disease vectors during surveillance or in control programmes. In this study, human landing catch, two cow-odour-baited resting boxes and an unbaited resting box were evaluated as vector sampling tools in an area where Anopheles arabiensis is the major malaria vector. Methods: Three resting boxes were evaluated against human landing catch. Two were baited with cow odour, while the third was unbaited. The inner parts of the boxes were covered with black cloth material. Experiments were arranged in a Latin-square design. Boxes were set in the evening and left undisturbed; mosquitoes were collected at 06:00 the next morning, while human landing catch was done overnight. Results: A total of 9,558 An. arabiensis mosquitoes were collected: 17.5% (N = 1668) in the resting box baited with cow body odour, 42.5% (N = 4060) in the resting box baited with cow urine, 15.1% (N = 1444) in the unbaited resting box and 24.9% (N = 2386) by the human landing catch technique. House position had no effect on the density of mosquitoes caught (DF = 3, F = 0.753, P = 0.387), whereas the sampling technique had a significant impact on the densities caught (DF = 3, F = 37.944). Conclusion: Odour-baited resting boxes show the potential to replace the existing traditional method (human landing catch) for sampling malaria vectors in areas where An. arabiensis is the major vector. Further evaluation of fermented urine and of the longevity of the urine odour is still needed.

  12. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    Science.gov (United States)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple-metric uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metric performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) it performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is about nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metric uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.

  13. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    Science.gov (United States)

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  14. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil.

    Science.gov (United States)

    Silvestri, Erin E; Feldhake, David; Griffin, Dale; Lisle, John; Nichols, Tonya L; Shah, Sanjiv R; Pemberton, Adin; Schaefer, Frank W

    2016-11-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries. Copyright © 2016. Published by Elsevier B.V.

  15. Dynamics of hepatitis C under optimal therapy and sampling based analysis

    Science.gov (United States)

    Pachpute, Gaurav; Chakrabarty, Siddhartha P.

    2013-08-01

    We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both the models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both the models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by drop in viral load below detection levels) in case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and time required for the viral load to fall below detection level, show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it to be an important factor in deciding individual drug regimens.
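
    A small sketch of the Latin hypercube step (SciPy's qmc module is assumed to be available; the parameter names and ranges are illustrative, not those of the HCV models above):

      import numpy as np
      from scipy.stats import qmc   # available in SciPy >= 1.7

      sampler = qmc.LatinHypercube(d=3, seed=42)
      unit = sampler.random(n=1000)                 # 1000 points in the unit cube
      lower = [0.1, 1e-3, 0.5]                      # e.g. infected-cell death rate, infection rate, clearance
      upper = [1.0, 1e-1, 10.0]
      patients = qmc.scale(unit, lower, upper)      # one virtual patient per row

      # Each row would then be run through the viral-dynamics model under the
      # chosen therapy, recording the time for the viral load to become undetectable.
      print(patients.shape, patients.min(axis=0).round(3), patients.max(axis=0).round(3))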

  16. Optimization of Load Recovery Scheme Considering System Security Factors

    Institute of Scientific and Technical Information of China (English)

    刘文轩; 顾雪平; 王佳裕; 赵宝斌

    2016-01-01

    In the early stage of power system network reconfiguration, the grid structure is relatively weak; restoring part of the load helps maintain the system power balance and ensures secure operation. Combining the secure-operation characteristics of the system with the load pickup pattern, a load recovery model is established and a load pickup method considering system security is presented. The voltage deviation of the restored nodes, the leading-phase operation of units and the power flow entropy are defined as the objective functions; the power flow is adjusted according to the actual substation load pickup to determine the amount of load picked up at each node, so as to satisfy the power balance requirement. An improved particle swarm optimization algorithm is applied to perform multi-objective optimization of the recovery scheme and obtain its Pareto-optimal solution set, and the M-TOPSIS multiple-attribute decision-making method based on fuzzy anti-entropy weights is used to rank the candidate schemes and identify the preferred load pickup plan. Finally, the effectiveness of the proposed method is validated on the New England 10-machine 39-bus system and on a partial system of the southern Hebei grid.

  17. Model reduction algorithms for optimal control and importance sampling of diffusions

    Science.gov (United States)

    Hartmann, Carsten; Schütte, Christof; Zhang, Wei

    2016-08-01

    We propose numerical algorithms for solving optimal control and importance sampling problems based on simplified models. The algorithms combine model reduction techniques for multiscale diffusions and stochastic optimization tools, with the aim of reducing the original, possibly high-dimensional problem to a lower dimensional representation of the dynamics, in which only a few relevant degrees of freedom are controlled or biased. Specifically, we study situations in which either a reaction coordinate onto which the dynamics can be projected is known, or situations in which the dynamics shows strongly localized behavior in the small noise regime. No explicit assumptions about small parameters or scale separation have to be made. We illustrate the approach with simple, but paradigmatic numerical examples.

  18. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Science.gov (United States)

    Ridolfi, E.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.

    2016-06-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.
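
    One minimal reading of the information-theoretic criterion, maximum information content with minimum redundancy, is a greedy selection of cross sections; the sketch below uses histogram-based entropy estimates on simulated water-level series and is our illustration, not the authors' exact algorithm or their treatment of bridge sections.

      import numpy as np

      def entropy(x, bins=10):
          p, _ = np.histogram(x, bins=bins)
          p = p[p > 0] / p.sum()
          return -np.sum(p * np.log2(p))

      def mutual_info(x, y, bins=10):
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = pxy[pxy > 0] / pxy.sum()
          joint = -np.sum(pxy * np.log2(pxy))
          return entropy(x, bins) + entropy(y, bins) - joint

      def greedy_select(levels, k, bins=10):
          # levels: (n_sections, n_times) simulated water levels at candidate sections.
          # Start from the most informative section, then repeatedly add the section
          # with the best entropy-minus-redundancy score.
          n = levels.shape[0]
          selected = [int(np.argmax([entropy(s, bins) for s in levels]))]
          while len(selected) < k:
              scores = np.full(n, -np.inf)
              for i in range(n):
                  if i not in selected:
                      red = sum(mutual_info(levels[i], levels[j], bins) for j in selected)
                      scores[i] = entropy(levels[i], bins) - red
              selected.append(int(np.argmax(scores)))
          return selected

      rng = np.random.default_rng(0)
      hydrograph = np.cumsum(rng.normal(size=365))          # toy flow series
      scale = np.linspace(1.0, 2.0, 40)[:, None]            # 40 candidate sections
      levels = scale * hydrograph + 0.3 * rng.normal(size=(40, 365))
      print(greedy_select(levels, k=6))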

  19. Interference-Limited Device-to-Device Multi-User Cooperation Scheme for Optimization of Edge Networking

    Institute of Scientific and Technical Information of China (English)

    Hong-Cheng Huang; Jie Zhang; Zu-Fan Zhang; Zhong-Yang Xiong

    2016-01-01

    Device-to-device (D2D) communication is an emerging technology for improving cellular networks, which plays an important role in realizing the Internet of Things (IoT). The spectrum efficiency, energy efficiency and throughput of the network can be enhanced by cooperation among multiple D2D users in a self-organized manner. In order to limit the interference of D2D users and offload their energy consumption without degrading communication quality, an interference-limited multi-user cooperation scheme is proposed in this paper to solve the energy and interference problems for multiple D2D users. Multiple D2D users use non-orthogonal spectrum to form clusters in a self-organized manner and are divided into different cooperative units; there is no interference among different cooperative units, which limits the interference experienced by each D2D user within a unit. When the link capacity cannot meet the required user rate, an outage event occurs. In order to evaluate the communication quality, the outage probability of the D2D link is derived by considering the link delay threshold, data rate and interference. Besides the available energy and signal-to-noise ratio (SNR) of each D2D user, the distance between D2D users is considered when selecting the relaying D2D users so as to enhance the signal-to-interference-plus-noise ratio (SINR) of the D2D receiving users. Combining the derived outage probability, the relationships among the average link delay threshold, the energy efficiency and the capacity efficiency are studied. The simulation results show that the interference-limited multi-user D2D cooperation scheme not only helps to offload energy consumption and limit the interference of D2D users, but also enhances the energy efficiency and capacity efficiency.

  20. An S/H circuit with parasitics optimized for IF-sampling

    Science.gov (United States)

    Xuqiang, Zheng; Fule, Li; Zhijun, Wang; Weitao, Li; Wen, Jia; Zhihua, Wang; Shigang, Yue

    2016-06-01

    An IF-sampling S/H circuit is presented, which adopts a flip-around structure, the bottom-plate sampling technique and improved bootstrapped input switches. To achieve high sampling linearity over a wide input frequency range, the floating-well technique is utilized to optimize the input switches. Besides, transistor load linearization and layout improvements are proposed to further reduce and linearize the parasitic capacitance. The S/H circuit has been fabricated in a 0.18-μm CMOS process as the front-end of a 14-bit, 250 MS/s pipeline ADC. For a 30 MHz input, the measured SFDR/SNDR of the ADC is 94.7 dB/68.5 dB, and remains above 84.3 dB/65.4 dB for input frequencies up to 400 MHz. The ADC presents excellent dynamic performance at high input frequency, which is mainly attributed to the parasitics-optimized S/H circuit. Project supported by the Shenzhen Project (No. JSGG20150512162029307).

  1. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Tijssen, Rob H.N. [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Senneville, Baudouin D. de [Imaging Division, University Medical Center Utrecht, Utrecht (Netherlands); L' Institut de Mathématiques de Bordeaux, Unité Mixte de Recherche 5251, Centre National de la Recherche Scientifique/University of Bordeaux, Bordeaux (France); Heerkens, Hanne D.; Vulpen, Marco van; Lagendijk, Jan J.W.; Berg, Cornelis A.T. van den [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands)

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  2. A 4D-Var inversion system based on the icosahedral grid model (NICAM-TM 4D-Var v1.0) - Part 2: Optimization scheme and identical twin experiment of atmospheric CO2 inversion

    Science.gov (United States)

    Niwa, Yosuke; Fujii, Yosuke; Sawa, Yousuke; Iida, Yosuke; Ito, Akihiko; Satoh, Masaki; Imasu, Ryoichi; Tsuboi, Kazuhiro; Matsueda, Hidekazu; Saigusa, Nobuko

    2017-06-01

    A four-dimensional variational method (4D-Var) is a popular technique for source/sink inversions of atmospheric constituents, but it is not without problems. Using an icosahedral grid transport model and the 4D-Var method, a new atmospheric greenhouse gas (GHG) inversion system has been developed. The system combines offline forward and adjoint models with a quasi-Newton optimization scheme. The new approach is then used to conduct identical twin experiments to investigate optimal system settings for an atmospheric CO2 inversion problem, and to demonstrate the validity of the new inversion system. In this paper, the inversion problem is simplified by assuming the prior flux errors to be reasonably well known and by designing the prior error correlations with a simple function as a first step. It is found that a system of forward and adjoint models with smaller model errors but with nonlinearity has comparable optimization performance to that of another system that conserves linearity with an exact adjoint relationship. Furthermore, the effectiveness of the prior error correlations is demonstrated, as the global error is reduced by about 15 % by adding prior error correlations that are simply designed when 65 weekly flask sampling observations at ground-based stations are used. With the optimal setting, the new inversion system successfully reproduces the spatiotemporal variations of the surface fluxes, from regional (such as biomass burning) to global scales. The optimization algorithm introduced in the new system does not require decomposition of a matrix that establishes the correlation among the prior flux errors. This enables us to design the prior error covariance matrix more freely.
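
    The cost function minimized by the quasi-Newton scheme has the standard 4D-Var form, written here in generic notation rather than the specific NICAM-TM discretization:

      J(x) = \tfrac{1}{2}(x - x_b)^{T} B^{-1} (x - x_b) + \tfrac{1}{2}\sum_{i}\big(H_i(x) - y_i\big)^{T} R_i^{-1}\big(H_i(x) - y_i\big),

    where x is the vector of surface fluxes being optimized, x_b the prior fluxes, B the prior-error covariance whose correlation structure is discussed above, y_i the CO2 observations in time window i (here the weekly flask samples), H_i the forward transport-model operator, and R_i the observation-error covariance; the gradient ∇J required by the optimizer is supplied by the adjoint model, and the quasi-Newton algorithm referred to above avoids decomposing B.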

  2. A 4D-Var inversion system based on the icosahedral grid model (NICAM-TM 4D-Var v1.0) – Part 2: Optimization scheme and identical twin experiment of atmospheric CO2 inversion

    Directory of Open Access Journals (Sweden)

    Y. Niwa

    2017-06-01

    Full Text Available A four-dimensional variational method (4D-Var) is a popular technique for source/sink inversions of atmospheric constituents, but it is not without problems. Using an icosahedral grid transport model and the 4D-Var method, a new atmospheric greenhouse gas (GHG) inversion system has been developed. The system combines offline forward and adjoint models with a quasi-Newton optimization scheme. The new approach is then used to conduct identical twin experiments to investigate optimal system settings for an atmospheric CO2 inversion problem, and to demonstrate the validity of the new inversion system. In this paper, the inversion problem is simplified by assuming the prior flux errors to be reasonably well known and by designing the prior error correlations with a simple function as a first step. It is found that a system of forward and adjoint models with smaller model errors but with nonlinearity has comparable optimization performance to that of another system that conserves linearity with an exact adjoint relationship. Furthermore, the effectiveness of the prior error correlations is demonstrated, as the global error is reduced by about 15 % by adding prior error correlations that are simply designed when 65 weekly flask sampling observations at ground-based stations are used. With the optimal setting, the new inversion system successfully reproduces the spatiotemporal variations of the surface fluxes, from regional (such as biomass burning) to global scales. The optimization algorithm introduced in the new system does not require decomposition of a matrix that establishes the correlation among the prior flux errors. This enables us to design the prior error covariance matrix more freely.

  4. Analysis and Optimization of Bulk DNA Sampling with Binary Scoring for Germplasm Characterization

    Science.gov (United States)

    Reyes-Valdés, M. Humberto; Santacruz-Varela, Amalio; Martínez, Octavio; Simpson, June; Hayano-Kanashiro, Corina; Cortés-Romero, Celso

    2013-01-01

    The strategy of bulk DNA sampling has been a valuable method for studying large numbers of individuals through genetic markers. The application of this strategy for discrimination among germplasm sources was analyzed through information theory, considering the case of polymorphic alleles scored binarily for their presence or absence in DNA pools. We defined the informativeness of a set of marker loci in bulks as the mutual information between genotype and population identity, composed by two terms: diversity and noise. The first term is the entropy of bulk genotypes, whereas the noise term is measured through the conditional entropy of bulk genotypes given germplasm sources. Thus, optimizing marker information implies increasing diversity and reducing noise. Simple formulas were devised to estimate marker information per allele from a set of estimated allele frequencies across populations. As an example, they allowed optimization of bulk size for SSR genotyping in maize, from allele frequencies estimated in a sample of 56 maize populations. It was found that a sample of 30 plants from a random mating population is adequate for maize germplasm SSR characterization. We analyzed the use of divided bulks to overcome the allele dilution problem in DNA pools, and concluded that samples of 30 plants divided into three bulks of 10 plants are efficient to characterize maize germplasm sources through SSR with a good control of the dilution problem. We estimated the informativeness of 30 SSR loci from the estimated allele frequencies in maize populations, and found a wide variation of marker informativeness, which positively correlated with the number of alleles per locus. PMID:24260321
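
    The diversity-minus-noise decomposition can be made concrete with a short calculation; the sketch below assumes equally likely source populations, diploid plants, and perfect detection of any allele present in a pool, which are our simplifying assumptions for illustration.

      import numpy as np

      def binary_entropy(q):
          q = np.clip(q, 1e-12, 1 - 1e-12)
          return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

      def allele_information(freqs, bulk_size):
          # freqs: allele frequency in each (equally likely) source population.
          # An allele is scored "present" in a bulk of `bulk_size` diploid plants
          # unless all 2*bulk_size sampled gene copies lack it.
          q = 1.0 - (1.0 - np.asarray(freqs, dtype=float)) ** (2 * bulk_size)
          diversity = binary_entropy(q.mean())     # entropy of the bulk score
          noise = binary_entropy(q).mean()         # conditional entropy given the source
          return diversity - noise                 # mutual information, in bits

      freqs = [0.02, 0.4, 0.9]          # illustrative frequencies in three populations
      for m in (1, 5, 10, 30):
          print(m, round(float(allele_information(freqs, m)), 3))

    As the bulk grows, nearly every pool ends up containing the allele, so presence/absence scoring loses discriminating power; this is the binary-scoring counterpart of the dilution effect that motivates dividing a sample into several smaller bulks.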

  5. Analysis and optimization of bulk DNA sampling with binary scoring for germplasm characterization.

    Directory of Open Access Journals (Sweden)

    M Humberto Reyes-Valdés

    Full Text Available The strategy of bulk DNA sampling has been a valuable method for studying large numbers of individuals through genetic markers. The application of this strategy for discrimination among germplasm sources was analyzed through information theory, considering the case of polymorphic alleles scored binarily for their presence or absence in DNA pools. We defined the informativeness of a set of marker loci in bulks as the mutual information between genotype and population identity, composed by two terms: diversity and noise. The first term is the entropy of bulk genotypes, whereas the noise term is measured through the conditional entropy of bulk genotypes given germplasm sources. Thus, optimizing marker information implies increasing diversity and reducing noise. Simple formulas were devised to estimate marker information per allele from a set of estimated allele frequencies across populations. As an example, they allowed optimization of bulk size for SSR genotyping in maize, from allele frequencies estimated in a sample of 56 maize populations. It was found that a sample of 30 plants from a random mating population is adequate for maize germplasm SSR characterization. We analyzed the use of divided bulks to overcome the allele dilution problem in DNA pools, and concluded that samples of 30 plants divided into three bulks of 10 plants are efficient to characterize maize germplasm sources through SSR with a good control of the dilution problem. We estimated the informativeness of 30 SSR loci from the estimated allele frequencies in maize populations, and found a wide variation of marker informativeness, which positively correlated with the number of alleles per locus.

  6. A texture-based rolling bearing fault diagnosis scheme using adaptive optimal kernel time frequency representation and uniform local binary patterns

    Science.gov (United States)

    Chen, Haizhou; Wang, Jiaxu; Li, Junyang; Tang, Baoping

    2017-03-01

    This paper presents a new scheme for rolling bearing fault diagnosis using texture features extracted from time-frequency representations (TFRs) of the signal. To derive the proposed texture features, adaptive optimal kernel time-frequency representation (AOK-TFR) is first applied to extract TFRs of the signal, which essentially describe the energy distribution characteristics of the signal over the time and frequency domains. Since the AOK-TFR uses a signal-dependent radially Gaussian kernel that adapts over time, it can exactly track minor variations in the signal and provide excellent time-frequency concentration in a noisy environment. Simulation experiments are performed in comparison with common time-frequency analysis methods under different noise conditions. Secondly, the uniform local binary pattern (uLBP), a computationally simple and noise-resistant texture analysis method, is used to calculate histograms from the TFRs to characterize rolling bearing fault information. Finally, the obtained histogram feature vectors are input into a multi-SVM classifier for pattern recognition. We validate the effectiveness of the proposed scheme in several experiments, and comparative results demonstrate that the new fault diagnosis technique performs better than most state-of-the-art techniques; moreover, the proposed algorithm possesses the adaptivity and noise resistance that could be very useful in real industrial applications.
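
    A sketch of the texture-feature step only (the AOK-TFR computation and the multi-SVM classifier are omitted; scikit-image is assumed to be available for the uniform LBP operator):

      import numpy as np
      from skimage.feature import local_binary_pattern   # scikit-image, assumed available

      def ulbp_histogram(tfr, P=8, R=1):
          # tfr: 2-D time-frequency image of one vibration segment. With the
          # "uniform" mapping there are P + 2 distinct codes, so the feature
          # vector has P + 2 bins.
          scaled = np.uint8(255 * (tfr - tfr.min()) / (np.ptp(tfr) + 1e-12))
          codes = local_binary_pattern(scaled, P, R, method="uniform")
          hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
          return hist

      # Toy usage: a random "TFR" stands in for the real adaptive-kernel distribution.
      rng = np.random.default_rng(0)
      feature = ulbp_histogram(rng.random((64, 256)))
      print(feature.shape, float(feature.sum()))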

  7. Influence of sampling, storage, processing and optimal experimental conditions on adenylate energy charge in penaeid shrimp

    Directory of Open Access Journals (Sweden)

    Robles-Romo Arlett

    2014-01-01

    Full Text Available Adenylate energy charge (AEC) has been used as a practical index of the physiological status and health in several disciplines, such as ecotoxicology and aquaculture. This study standardizes several procedures for AEC determination in penaeid shrimp that are very sensitive to sampling. We concluded that shrimp can be frozen in liquid nitrogen and then stored at -76°C for up to two years for further analysis, or freshly dissected and immediately homogenized in acid. Other cooling procedures, such as immersion in cold water or placing shrimp on ice for 15 min, resulted in 50% and 73% decreases in ATP levels, and 9-fold and 10-fold increases in IMP levels, respectively. Optimal values of AEC (0.9) were obtained in shrimp recently transferred from ponds to indoor conditions, but decreased to 0.77 after one month in indoor tanks when stocked at high densities; the AEC re-established to 0.85 when the shrimps were transferred to optimal conditions (lower density and dark tanks). While the levels of arginine phosphate followed the same pattern, its levels did not fully re-establish. Comparison of different devices for sample homogenization indicated that a cryogenic ball mill mixer is the more suitable procedure.

  8. Optimization of a miniaturized DBD plasma chip for mercury detection in water samples.

    Science.gov (United States)

    Abdul-Majeed, Wameath S; Parada, Jaime H Lozano; Zimmerman, William B

    2011-11-01

    In this work, an optimization study was conducted to investigate the performance of a custom-designed miniaturized dielectric barrier discharge (DBD) microplasma chip to be utilized as a radiation source for mercury determination in water samples. The experimental work was implemented by using experimental design, and the results were assessed by applying statistical techniques. The proposed DBD chip was designed and fabricated in a simple way by using a few microscope glass slides aligned together and held by a Perspex chip holder, which proved useful for miniaturization purposes. Argon gas at 75-180 mL/min was used in the experiments as a discharge gas, while AC power in the range 75-175 W at 38 kHz was supplied to the load from a custom-made power source. A UV-visible spectrometer was used, and the spectroscopic parameters were optimized thoroughly and applied in the later analysis. Plasma characteristics were determined theoretically by analysing the recorded spectroscopic data. The estimated electron temperature (T(e) = 0.849 eV) was found to be higher than the excitation temperature (T(exc) = 0.55 eV) and the rotational temperature (T(rot) = 0.064 eV), which indicates non-thermal plasma is generated in the proposed chip. Mercury cold vapour generation experiments were conducted according to experimental plan by examining four parameters (HCl and SnCl(2) concentrations, argon flow rate, and the applied power) and considering the recorded intensity for the mercury line (253.65 nm) as the objective function. Furthermore, an optimization technique and statistical approaches were applied to investigate the individual and interaction effects of the tested parameters on the system performance. The calculated analytical figures of merit (LOD = 2.8 μg/L and RSD = 3.5%) indicates a reasonable precision system to be adopted as a basis for a miniaturized portable device for mercury detection in water samples.

  9. Optimization of a Pre-MEKC Separation SPE Procedure for Steroid Molecules in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Ilona Olędzka

    2013-11-01

    Full Text Available Many steroid hormones can be considered as potential biomarkers and their determination in body fluids can create opportunities for the rapid diagnosis of many diseases and disorders of the human body. Most existing methods for the determination of steroids are usually time- and labor-consuming and quite costly. Therefore, the aim of analytical laboratories is to develop a new, relatively low-cost and rapid methodology for their determination in biological samples. Because there is little literature data on concentrations of steroid hormones in urine samples, we have made attempts at the electrophoretic determination of these compounds. For this purpose, an extraction procedure for the optimized separation and simultaneous determination of seven steroid hormones in urine samples has been investigated. The isolation of analytes from biological samples was performed by liquid-liquid extraction (LLE) with dichloromethane and compared to solid phase extraction (SPE) with C18 and hydrophilic-lipophilic balance (HLB) columns. To separate all the analytes, a micellar electrokinetic capillary chromatography (MEKC) technique was employed. For full separation of all the analytes, a running buffer (pH 9.2) composed of 10 mM sodium tetraborate decahydrate (borax), 50 mM sodium dodecyl sulfate (SDS), and 10% methanol was selected. The methodology developed in this work for the determination of steroid hormones meets all the requirements of analytical methods. The applicability of the method has been confirmed for the analysis of urine samples collected from volunteers, both men and women (students and amateur bodybuilders), using and not using steroid doping. The data obtained during this work can be successfully used for further research on the determination of steroid hormones in urine samples.

  10. Automation of sample preparation for mass cytometry barcoding in support of clinical research: protocol optimization.

    Science.gov (United States)

    Nassar, Ala F; Wisnewski, Adam V; Raddassi, Khadir

    2017-03-01

    Analysis of multiplexed assays is highly important for clinical diagnostics and other analytical applications. Mass cytometry enables multi-dimensional, single-cell analysis of cell type and state. In mass cytometry, the rare earth metals used as reporters on antibodies allow determination of marker expression in individual cells. Barcode-based bioassays for CyTOF are able to encode and decode different experimental conditions or samples within the same experiment, facilitating progress in producing straightforward and consistent results. Herein, an integrated protocol for automated sample preparation for barcoding, used in conjunction with mass cytometry for clinical bioanalysis samples, is described, and we offer results of our work on barcoding protocol optimization. In addition, we present some points to consider in order to minimize the variability of quantitative mass cytometry measurements, for example the importance of having multiple populations during titration of the antibodies, and the effect of storage and shipping of labelled samples on the stability of staining for CyTOF analysis. Data quality is not affected when labelled samples are stored either frozen or at 4 °C and used within 10 days; we observed that cell loss is greater if cells are washed with deionized water prior to shipment or are shipped at a lower concentration. Once the labelled samples for CyTOF are suspended in deionized water, the analysis should be performed expeditiously, preferably within the first hour. Damage can be minimized if the cells are resuspended in phosphate-buffered saline (PBS) rather than deionized water while waiting for data acquisition.

  11. Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation.

    Science.gov (United States)

    Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A; Bouquerel, Hélène

    2016-06-01

    Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air to liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved with scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L(-1) and 10% for 10 mBq L(-1). While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L(-1), a conservative experimental estimate is closer to 5 mBq L(-1), corresponding to 0.14 fg g(-1). The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot easily be measured by other methods. Counting of scintillation flasks can be done in remote locations without an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported.

  12. A Survey on an Ice Thermal Storage System and a Study on an Operation Scheme for a Performance Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Kim, In Soo [Korea Energy Management Corporation, Yongin (Korea)

    2001-02-01

    Because an ice thermal storage system is more complex than a conventional chiller for air conditioning, it cannot deliver good air-conditioning performance at low cost in the early stage of installation unless the system is correctly understood and properly maintained throughout the entire process, from system selection to installation and operation. However, most ice thermal storage systems do not run operating programs tailored to the features of the system, so most chillers operate at efficiencies more than 30% below their rated efficiency, and systematic management based on the thermal storage rate is also lacking. To optimize the efficiency of an ice thermal storage system, management capability should be improved by training field managers, and design and operation professionals should manage the system regularly and continuously through thorough after-sales service. To maximize the reduction in electricity demand obtained by optimizing the system, parallel operation of the ice thermal storage system during peak hours should be managed by discounting rates and widening the rate differential between late-night and daytime hours. In addition, to prevent inefficient operation caused by negligent management of most ice thermal storage systems, and to promote their adoption as electricity-saving equipment, systematic measures that link cooling efficiency (RT/m{sup 3}.hr) to rebates, regular training for system managers, and technical support should be provided. 17 figs., 7 tabs.

  13. The accuracy of seven mathematical functions in modeling dairy cattle lactation curves based on test-day records from varying sample schemes.

    Science.gov (United States)

    Silvestre, A M; Petim-Batista, F; Colaço, J

    2006-05-01

    Daily milk yield over the course of the lactation follows a curvilinear pattern, so a suitable function is required to model this curve. In this study, 7 functions (Wood, Wilmink, Ali and Schaeffer, cubic splines, and 3 Legendre polynomials) were used to model the lactation curve at the phenotypic level, using both daily observations and data from commonly used recording schemes. The number of observations per lactation varied from 4 to 11. Several criteria based on the analysis of the real error were used to compare models. The models showed few discrepancies in the comparison criteria when daily or 4-weekly data (with the first test at 8 days in milk) were used per lactation. The performance of the Wood, Wilmink, and Ali and Schaeffer models was strongly affected by the reduction in sample size. The results of this work support the idea that the performance of these models depends on the sample properties, but also show considerable variation within the sampling groups.
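
    As an illustration of how one of these parametric functions can be fitted to test-day records, the sketch below fits the Wood gamma-type curve y(t) = a*t^b*exp(-c*t) to synthetic 4-weekly data with ordinary nonlinear least squares; the data values and starting guesses are assumptions for illustration, not records from the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def wood(t, a, b, c):
          """Wood lactation curve: gamma-type function of days in milk."""
          return a * t**b * np.exp(-c * t)

      # Hypothetical 4-weekly test-day records (days in milk, kg milk/day)
      dim = np.array([8, 36, 64, 92, 120, 148, 176, 204, 232, 260, 288])
      milk = np.array([24.1, 29.5, 31.2, 30.0, 28.3, 26.1, 24.0, 22.2, 20.1, 18.4, 16.9])

      params, _ = curve_fit(wood, dim, milk, p0=[20.0, 0.2, 0.004])
      a, b, c = params
      peak_day = b / c  # day of peak yield for the Wood curve
      print(f"a={a:.2f}, b={b:.3f}, c={c:.5f}, peak at {peak_day:.0f} days in milk")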

  14. Optimization of ocean shipping schemes for LCL (co-loaded) crude oil

    Institute of Scientific and Technical Information of China (English)

    周晓玲; 王震; 肖文涛

    2016-01-01

    As an extension and development of oil and gas storage and transportation, optimization of crude-carrier co-loading schemes plays an important role in improving profit and reducing loss in crude oil logistics, and is of great importance in upgrading the overall competitive strength of oil and petrochemical companies in China. The objective function for optimizing long-distance crude oil transportation under the current operating mode was analyzed, and seven classes of constraints, such as the balance of supply and demand, were formulated to bound the feasible operating region. An optimization model of the crude-carrier co-loading scheme was established and solved with an improved differential evolution algorithm. The equality constraints were satisfied by a paired-chromosome encoding, some constraints were simplified in the form of additional distance, and low-probability crossover between paired chromosomes was used to reduce the probability of premature convergence. Finally, to recycle potentially good genes and speed up the optimization, chromosomes with high "semi-fitness" were selected for individual evolution during execution of the differential evolution algorithm. Compared with manually constructed plans, the improved differential evolution algorithm not only significantly reduces the cost of long-distance crude oil co-loading and transportation, but also saves time in optimizing the co-loading schemes.
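
    The improved differential evolution described above is a population-based global search. As a hedged sketch (not the authors' algorithm or data), the snippet below applies SciPy's stock differential evolution to a toy co-loading allocation in which a penalty term stands in for the capacity and supply-demand balance constraints; the costs, cargo sizes and carrier capacities are illustrative assumptions.

      import numpy as np
      from scipy.optimize import differential_evolution

      # Toy data: freight cost per tonne for assigning 3 cargoes to 2 carriers (illustrative)
      cost = np.array([[12.0, 14.5],
                       [11.0, 13.0],
                       [15.5, 12.5]])
      cargo = np.array([80.0, 120.0, 60.0])    # tonnes to ship per cargo
      capacity = np.array([150.0, 140.0])      # carrier capacities (tonnes)

      def objective(x):
          """x[i] in [0, 1] is the fraction of cargo i loaded on carrier 0 (rest on carrier 1)."""
          load0 = cargo * x
          load1 = cargo * (1.0 - x)
          freight = np.sum(load0 * cost[:, 0] + load1 * cost[:, 1])
          # Penalty standing in for the capacity / supply-demand balance constraints
          over = max(load0.sum() - capacity[0], 0.0) + max(load1.sum() - capacity[1], 0.0)
          return freight + 1e3 * over

      result = differential_evolution(objective, bounds=[(0.0, 1.0)] * 3, seed=0, tol=1e-8)
      print(result.x, result.fun)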

  15. A New Deferred Sentencing Scheme

    Directory of Open Access Journals (Sweden)

    N. K. Chakravarti

    1968-10-01

    Full Text Available A new deferred sentencing scheme resembling a double sampling scheme is suggested from an operational and administrative viewpoint. It is recommended particularly when the inspection is destructive. The OC curves of the scheme for two sample sizes, 5 and 10, are given.
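
    For orientation, the operating characteristic (OC) curve of a classical double sampling plan can be computed from binomial probabilities as sketched below; the plan parameters (first and second sample sizes and acceptance numbers) are illustrative and are not the plan tabulated in the paper.

      import numpy as np
      from scipy.stats import binom

      def oc_double_sampling(p, n1, c1, n2, c2):
          """Probability of acceptance of a double sampling plan at lot fraction defective p.
          Accept if d1 <= c1; take a second sample if c1 < d1 <= c2; then accept if d1 + d2 <= c2."""
          pa = binom.cdf(c1, n1, p)
          for d1 in range(c1 + 1, c2 + 1):
              pa += binom.pmf(d1, n1, p) * binom.cdf(c2 - d1, n2, p)
          return pa

      # Illustrative plan: first sample of 5, second sample of 10, acceptance numbers c1=0, c2=2
      for p in [0.02, 0.05, 0.10, 0.20, 0.30]:
          print(f"p={p:.2f}  Pa={oc_double_sampling(p, 5, 0, 10, 2):.3f}")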

  16. Optimizing EUS-guided liver biopsy sampling: comprehensive assessment of needle types and tissue acquisition techniques.

    Science.gov (United States)

    Schulman, Allison R; Thompson, Christopher C; Odze, Robert; Chan, Walter W; Ryou, Marvin

    2017-02-01

    EUS-guided liver biopsy sampling using FNA and, more recently, fine-needle biopsy (FNB) needles has been reported with discrepant diagnostic accuracy, in part due to differences in methodology. We aimed to compare liver histologic yields of 4 EUS-based needles and 2 percutaneous needles to identify the optimal number of needle passes and suction. Six needle types were tested on human cadaveric tissue: one 19G FNA needle, one existing 19G FNB needle, one novel 19G FNB needle, one 22G FNB needle, and two 18G percutaneous needles (18G1 and 18G2). Two needle excursion patterns (1 vs 3 fanning passes) were performed on all EUS needles. The primary outcome was the number of portal tracts. Secondary outcomes were degree of fragmentation and specimen adequacy. Pairwise comparisons were performed using t tests with a 2-sided significance threshold. Samplings (48 per needle type) were performed. The novel 19G FNB needle had significantly increased mean portal tracts compared with all needle types. The 22G FNB needle had significantly increased portal tracts compared with the 18G1 needle (3.8 vs 2.5). Investigations are underway to determine whether these results can be replicated in a clinical setting. Copyright © 2017 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.

  17. Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate

    Directory of Open Access Journals (Sweden)

    Davide Brunelli

    2015-07-01

    Full Text Available Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of compressive sensing (CS) when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring.

  18. Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate.

    Science.gov (United States)

    Brunelli, Davide; Caione, Carlo

    2015-07-10

    Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of compressive sensing (CS) when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring.
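
    To make the sub-Nyquist recovery step concrete, the sketch below reconstructs a synthetic sparse signal from random linear measurements with a plain iterative shrinkage-thresholding (ISTA) solver; the signal length, measurement count and regularization weight are assumptions for illustration, not the algorithms or dataset evaluated in this work.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m, k = 256, 64, 5                    # signal length, measurements (sub-Nyquist), sparsity
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

      Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # dense random sampling matrix
      y = Phi @ x_true                                  # compressed measurements

      def ista(Phi, y, lam=0.01, n_iter=500):
          """Iterative shrinkage-thresholding for min 0.5*||y - Phi x||^2 + lam*||x||_1."""
          L = np.linalg.norm(Phi, 2) ** 2               # Lipschitz constant of the gradient
          x = np.zeros(Phi.shape[1])
          for _ in range(n_iter):
              z = x - Phi.T @ (Phi @ x - y) / L         # gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
          return x

      x_hat = ista(Phi, y)
      print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))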

  19. An efficient self-optimized sampling method for rare events in nonequilibrium systems

    Institute of Scientific and Technical Information of China (English)

    JIANG HuiJun; PU MingFeng; HOU ZhongHuai

    2014-01-01

    Rare events such as nucleation processes are of ubiquitous importance in real systems. The most popular method for nonequilibrium systems, forward flux sampling (FFS), samples rare events by using interfaces to partition the whole transition process into a sequence of steps along an order parameter connecting the initial and final states. FFS usually suffers from two main difficulties: low computational efficiency due to bad interface locations, and inapplicability when the system is trapped in unknown intermediate metastable states. In the present work, we propose an approach to overcome these difficulties by self-adaptively locating the interfaces on the fly in an optimized manner. Contrary to conventional FFS, which sets the interfaces at equal distances along the order parameter, our approach determines the interfaces with equal transition probability, which is shown to satisfy the optimization condition. This is done by first running long local trajectories starting from the current interface i to get the conditional probability distribution Pc(>i|i), and then determining interface i+1 by setting Pc(i+1|i) equal to a given value p0. With these optimized interfaces, FFS can be run in a much more efficient way. In addition, our approach can conveniently find intermediate metastable states by monitoring special long trajectories that neither end at the initial state nor reach the next interface; the number of such trajectories increases sharply from zero if metastable states are encountered. We apply our approach to a two-state model system and a two-dimensional lattice gas Ising model. Our approach is shown to be much more efficient than the conventional FFS method without losing accuracy, and it also reproduces well the two-step nucleation scenario of the Ising model, with easy identification of the intermediate metastable state.
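
    The interface-placement rule (choose interface i+1 so that an estimated fraction p0 of trial trajectories launched from interface i reach it) can be sketched for a toy one-dimensional overdamped dynamics in a double-well potential, as below. Unlike full FFS, the trial runs here are simply truncated after a fixed number of steps and only their maximum order parameter is recorded; the potential, time step, trial count and p0 are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      def trial_max_lambda(lam_start, n_steps=2000, dt=1e-3, beta=3.0):
          """One toy overdamped trajectory in V(x) = (x^2 - 1)^2; return the largest
          order parameter (here simply x) reached along the way."""
          x = x_max = lam_start
          for _ in range(n_steps):
              force = -4.0 * x * (x**2 - 1.0)           # -dV/dx
              x += force * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
              x_max = max(x_max, x)
          return x_max

      def next_interface(lam_i, p0=0.3, n_trials=200):
          """Place interface i+1 so that a fraction p0 of trial runs from interface i reach it."""
          maxima = np.array([trial_max_lambda(lam_i) for _ in range(n_trials)])
          return np.quantile(maxima, 1.0 - p0)

      lam = -0.9                                        # start near the initial basin
      for i in range(4):
          lam = next_interface(lam)
          print(f"interface {i+1} placed at lambda = {lam:.3f}")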

  20. Dynamic simulation tools for the analysis and optimization of novel collection, filtration and sample preparation systems

    Energy Technology Data Exchange (ETDEWEB)

    Clague, D; Weisgraber, T; Rockway, J; McBride, K

    2006-02-12

    The focus of the research effort described here is to develop novel simulation tools to address design and optimization needs in the general class of problems that involve species and fluid (liquid and gas phase) transport through sieving media. This was primarily motivated by the heightened attention on chem/bio early detection systems, which, among other needs, require high-efficiency filtration, collection and sample preparation systems. Hence, the stated goal was to develop the computational analysis tools necessary to optimize these critical operations. This new capability is designed to characterize system efficiencies based on the details of the microstructure and environmental effects. To accomplish this, new lattice Boltzmann simulation capabilities were developed to include detailed microstructure descriptions, the relevant surface forces that mediate species capture and release, and temperature effects for both liquid and gas phase systems. While developing the capability, actual demonstration and model systems (and subsystems) of national and programmatic interest were targeted to demonstrate the capability. As a result, where possible, experimental verification of the computational capability was performed either directly using digital particle image velocimetry or against published results.

  1. An optimal survey geometry of weak lensing survey: minimizing super-sample covariance

    CERN Document Server

    Takahashi, Ryuichi; Takada, Masahiro; Kayo, Issha

    2014-01-01

    Upcoming wide-area weak lensing surveys are expensive both in time and cost and require an optimal survey design in order to attain maximum scientific returns from a fixed amount of available telescope time. The super-sample covariance (SSC), which arises from unobservable modes that are larger than the survey size, significantly degrades the statistical precision of weak lensing power spectrum measurement even for a wide-area survey. Using the 1000 mock realizations of the log-normal model, which approximates the weak lensing field for a $\\Lambda$-dominated cold dark matter model, we study an optimal survey geometry to minimize the impact of SSC contamination. For a continuous survey geometry with a fixed survey area, a more elongated geometry such as a rectangular shape of 1:400 side-length ratio reduces the SSC effect and allows for a factor 2 improvement in the cumulative signal-to-noise ratio ($S/N$) of power spectrum measurement up to $\\ell_{\\rm max}\\simeq $ a few $10^3$, compared to compact geometries ...

  2. Fitting in a complex chi^2 landscape using an optimized hypersurface sampling

    CERN Document Server

    Pardo, L C; Busch, S; Moulin, J -F; Tamarit, J Ll

    2011-01-01

    Fitting a data set with a parametrized model can be seen geometrically as finding the global minimum of the chi^2 hypersurface, depending on a set of parameters {P_i}. This is usually done using the Levenberg-Marquardt algorithm. The main drawback of this algorithm is that, despite its fast convergence, it can get stuck if the parameters are not initialized close to the final solution. We propose a modification of the Metropolis algorithm introducing a parameter step tuning that optimizes the sampling of parameter space. The ability of the parameter tuning algorithm, together with simulated annealing, to find the global chi^2 hypersurface minimum, jumping across chi^2({P_i}) barriers when necessary, is demonstrated with synthetic functions and with real data.
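
    A minimal sketch of this idea, assuming synthetic data and a two-parameter exponential model (neither taken from the paper), is a Metropolis chi^2 minimizer whose per-parameter step sizes are tuned on the fly toward a fixed acceptance rate while a simulated-annealing temperature is lowered:

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic data from the model y = A * exp(-x / tau) plus noise
      x = np.linspace(0.0, 10.0, 50)
      y = 3.0 * np.exp(-x / 2.5) + 0.05 * rng.standard_normal(x.size)

      def chi2(p):
          A, tau = p
          return np.sum((y - A * np.exp(-x / tau)) ** 2) / 0.05**2

      def anneal(p0, n_steps=20000, T0=10.0):
          p = np.array(p0, float)
          f = chi2(p)
          best, f_best = p.copy(), f
          step = np.array([0.5, 0.5])          # per-parameter step sizes, tuned on the fly
          for i in range(n_steps):
              T = T0 * 0.999 ** i              # geometric cooling schedule
              j = i % 2                        # update one parameter at a time
              trial = p.copy()
              trial[j] = abs(trial[j] + step[j] * rng.standard_normal()) + 1e-12  # keep A, tau > 0
              f_trial = chi2(trial)
              accept = f_trial < f or rng.random() < np.exp(-(f_trial - f) / T)
              if accept:
                  p, f = trial, f_trial
                  if f < f_best:
                      best, f_best = p.copy(), f
              # Grow the step on acceptance, shrink it on rejection; the ratio of the two
              # factors sets the equilibrium acceptance rate (roughly one third here).
              step[j] *= 1.02 if accept else 0.99
          return best, f_best

      print(anneal([1.0, 1.0]))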

  3. Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models

    Science.gov (United States)

    Thon, Ingo

    One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximative methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution which is normally prohibitively slow.

  4. Spatially-Optimized Sequential Sampling Plan for Cabbage Aphids Brevicoryne brassicae L. (Hemiptera: Aphididae) in Canola Fields.

    Science.gov (United States)

    Severtson, Dustin; Flower, Ken; Nansen, Christian

    2016-08-01

    The cabbage aphid is a significant pest worldwide in brassica crops, including canola. This pest has shown considerable ability to develop resistance to insecticides, so these should only be applied on a "when and where needed" basis. Thus, optimized sampling plans to accurately assess cabbage aphid densities are critically important to determine the potential need for pesticide applications. In this study, we developed a spatially optimized binomial sequential sampling plan for cabbage aphids in canola fields. Based on five sampled canola fields, sampling plans were developed using 0.1, 0.2, and 0.3 proportions of plants infested as action thresholds. Average sample numbers required to make a decision ranged from 10 to 25 plants. Decreasing acceptable error from 10 to 5% was not considered practically feasible, as it substantially increased the number of samples required to reach a decision. We determined the relationship between the proportions of canola plants infested and cabbage aphid densities per plant, and proposed a spatially optimized sequential sampling plan for cabbage aphids in canola fields, in which spatial features (i.e., edge effects) and optimization of sampling effort (i.e., sequential sampling) are combined. Two forms of stratification were performed to reduce spatial variability caused by edge effects and large field sizes. Spatially optimized sampling, starting at the edge of fields, reduced spatial variability and therefore increased the accuracy of infested plant density estimates. The proposed spatially optimized sampling plan may be used to spatially target insecticide applications, resulting in cost savings, insecticide resistance mitigation, conservation of natural enemies, and reduced environmental impact.
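
    One way to see how a binomial sequential plan reaches a spray/no-spray decision is Wald's sequential probability ratio test on the proportion of infested plants; the sketch below is a generic SPRT with illustrative tolerances bracketing a 0.2 action threshold, not the published plan's stop lines.

      import math

      def sprt_decision(observations, p0=0.15, p1=0.25, alpha=0.10, beta=0.10):
          """Wald SPRT on a sequence of 0/1 plant inspections (1 = infested).
          p0 and p1 bracket the action threshold; returns a decision or 'continue'."""
          upper = math.log((1 - beta) / alpha)     # accept-H1 boundary on the log-likelihood ratio
          lower = math.log(beta / (1 - alpha))     # accept-H0 boundary
          llr = 0.0
          for n, infested in enumerate(observations, start=1):
              llr += math.log(p1 / p0) if infested else math.log((1 - p1) / (1 - p0))
              if llr >= upper:
                  return f"above threshold after {n} plants -> consider treatment"
              if llr <= lower:
                  return f"below threshold after {n} plants -> no treatment"
          return "continue sampling"

      # Illustrative walk along a field-edge transect (1 = infested plant)
      print(sprt_decision([0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1]))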

  5. Data-Driven Sampling Matrix Boolean Optimization for Energy-Efficient Biomedical Signal Acquisition by Compressive Sensing.

    Science.gov (United States)

    Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao

    2017-04-01

    Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.

  6. Optimization of Municipal Engineering Construction Schemes Based on Lattice-Order Theory Combined with Optimal Combination Weighting

    Institute of Scientific and Technical Information of China (English)

    朱玮; 吴凤平

    2015-01-01

    Evaluation of municipal engineering construction schemes involves fuzziness, multiple objectives and uncertainty. To overcome the drawback of relying on a single subjective or objective weighting method, a method for optimizing municipal engineering construction schemes based on lattice-order theory combined with optimal combination weighting is proposed. Fuzzy theory and entropy theory are first used to determine the subjective and objective weights of the decision indicators, and an optimization model is established to calculate the combination weights. On this basis, to address the incommensurability and conflicts among the multiple objectives in scheme selection, lattice-order theory is introduced and a lattice-order decision model based on the optimal combination weights is constructed; the weighted Kaufmann distance (or closeness) to the positive and negative ideal solutions is used as the comprehensive criterion for ranking the alternative schemes. Finally, a case study of the reconstruction of a city's main road is analysed, and the results show that the method selects the municipal engineering construction scheme reasonably and effectively.
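
    The entropy-theory step, which turns a decision matrix into objective criterion weights, can be sketched as follows; the decision matrix, the assumption that all criteria are benefit-type, and the simple weighted score used for ranking are illustrative and do not reproduce the paper's combined subjective-objective lattice-order model.

      import numpy as np

      def entropy_weights(X):
          """Objective weights for a decision matrix X (alternatives x criteria),
          assuming all criteria are benefit-type and strictly positive."""
          P = X / X.sum(axis=0)                          # column-normalised proportions
          m = X.shape[0]
          E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy of each criterion
          d = 1.0 - E                                    # degree of divergence
          return d / d.sum()

      # Illustrative decision matrix: 4 construction schemes scored on 3 criteria
      X = np.array([[0.80, 0.70, 0.90],
                    [0.60, 0.85, 0.75],
                    [0.90, 0.60, 0.70],
                    [0.70, 0.75, 0.80]])
      w = entropy_weights(X)
      scores = (X / X.max(axis=0)) @ w                   # simple weighted score for ranking
      print("weights:", w.round(3), "ranking (best first):", np.argsort(-scores) + 1)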

  7. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT).

    Science.gov (United States)

    Kim, Hojin; Li, Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing, Lei

    2012-07-01

    A new treatment scheme coined dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical application of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1 solver, Templates for First-Order Conic Solvers (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage than conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments, and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the same in both cases.

  8. A Scheme for Optimizing Campus VOD System Performance

    Institute of Scientific and Technical Information of China (English)

    黄方亮; 俞磊; 黄炎; 吴开军

    2015-01-01

    Campus network video on demand (VOD) systems are an important part of digital campus construction; they can make up for the shortcomings of traditional teaching methods and provide a wide range of digital media services. A VOD system based on the Java platform, designed in earlier work, was improved during this research, and several factors that influence the quality of service were identified during operation and maintenance. An optimized scheme for campus VOD system performance is therefore proposed and verified by experiment. The results show that integrating Ubuntu with the Helix server improves the running efficiency of the VOD system and gives users a smoother experience, and the scheme can be generalized to practical applications.

  9. Optimizing stream water mercury sampling for calculation of fish bioaccumulation factors

    Science.gov (United States)

    Riva-Murray, Karen; Bradley, Paul M.; Journey, Celeste A.; Brigham, Mark E.; Scudder Eikenberry, Barbara C.; Knightes, Christopher; Button, Daniel T.

    2013-01-01

    Mercury (Hg) bioaccumulation factors (BAFs) for game fishes are widely employed for monitoring, assessment, and regulatory purposes. Mercury BAFs are calculated as the fish Hg concentration (Hgfish) divided by the water Hg concentration (Hgwater) and, consequently, are sensitive to sampling and analysis artifacts for fish and water. We evaluated the influence of water sample timing, filtration, and mercury species on the modeled relation between game fish and water mercury concentrations across 11 streams and rivers in five states in order to identify optimum Hgwater sampling approaches. Each model included fish trophic position, to account for a wide range of species collected among sites, and flow-weighted Hgwater estimates. Models were evaluated for parsimony, using Akaike’s Information Criterion. Better models included filtered water methylmercury (FMeHg) or unfiltered water methylmercury (UMeHg), whereas filtered total mercury did not meet parsimony requirements. Models including mean annual FMeHg were superior to those with mean FMeHg calculated over shorter time periods throughout the year. FMeHg models including metrics of high concentrations (80th percentile and above) observed during the year performed better, in general. These higher concentrations occurred most often during the growing season at all sites. Streamflow was significantly related to the probability of achieving higher concentrations during the growing season at six sites, but the direction of influence varied among sites. These findings indicate that streamwater Hg collection can be optimized by evaluating site-specific FMeHg - UMeHg relations, intra-annual temporal variation in their concentrations, and streamflow-Hg dynamics.

  10. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    Science.gov (United States)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the `dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  11. Organ sample generator for expected treatment dose construction and adaptive inverse planning optimization

    Energy Technology Data Exchange (ETDEWEB)

    Nie Xiaobo; Liang Jian; Yan Di [Department of Radiation Oncology, Beaumont Health System, Royal Oak, Michigan 48073 (United States)

    2012-12-15

    Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient-specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient-specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days, depending on the characteristics of organ variation as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the functional form of the coefficients. Eleven head and neck (H&N) cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during H&N cancer radiotherapy can be represented using the first 3 to 4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for h
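
    A numpy sketch of the core generator idea (PCA of the observed daily shapes, then random draws of the retained PCA coefficients) is given below using synthetic "daily organ" vectors; the dimensions, the stationary Gaussian coefficient model and the fixed number of retained eigenvectors are simplifying assumptions, and the time-varying LSR part of the published method is omitted.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic "daily organ shapes": 30 treatment days x 300 surface-point coordinates
      n_days, n_coords = 30, 300
      mean_shape = rng.standard_normal(n_coords)
      modes_true = rng.standard_normal((3, n_coords))            # 3 underlying deformation modes
      coeffs = rng.standard_normal((n_days, 3)) * np.array([3.0, 1.5, 0.5])
      daily_shapes = mean_shape + coeffs @ modes_true + 0.05 * rng.standard_normal((n_days, n_coords))

      # PCA of the observed variation
      mu = daily_shapes.mean(axis=0)
      U, S, Vt = np.linalg.svd(daily_shapes - mu, full_matrices=False)
      k = 3                                                      # keep the first few eigenvectors
      eigvecs = Vt[:k]
      sigma = S[:k] / np.sqrt(n_days - 1)                        # std of each PCA coefficient

      def sample_organ(n_samples=5):
          """Draw random organ-shape samples from the fitted variation PDF
          (independent Gaussian coefficients along the retained eigenvectors)."""
          c = rng.standard_normal((n_samples, k)) * sigma
          return mu + c @ eigvecs

      print(sample_organ().shape)    # (5, 300): five random organ realisations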

  12. Optimization of Network Topology in Computer-Aided Detection Schemes Using Phased Searching with NEAT in a Time-Scaled Framework.

    Science.gov (United States)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-01-01

    In the field of computer-aided mammographic mass detection, many different features and classifiers have been tested. Frequently, the relevant features and optimal topology for the artificial neural network (ANN)-based approaches at the classification stage are unknown, and thus determined by trial-and-error experiments. In this study, we analyzed a classifier that evolves ANNs using genetic algorithms (GAs), which combines feature selection with the learning task. The classifier named "Phased Searching with NEAT in a Time-Scaled Framework" was analyzed using a dataset with 800 malignant and 800 normal tissue regions in a 10-fold cross-validation framework. The classification performance measured by the area under a receiver operating characteristic (ROC) curve was 0.856 ± 0.029. The result was also compared with four other well-established classifiers that include fixed-topology ANNs, support vector machines (SVMs), linear discriminant analysis (LDA), and bagged decision trees. The results show that Phased Searching outperformed the LDA and bagged decision tree classifiers, and was only significantly outperformed by SVM. Furthermore, the Phased Searching method required fewer features and discarded superfluous structure or topology, thus incurring a lower feature computational and training and validation time requirement. Analyses performed on the network complexities evolved by Phased Searching indicate that it can evolve optimal network topologies based on its complexification and simplification parameter selection process. From the results, the study also concluded that the three classifiers - SVM, fixed-topology ANN, and Phased Searching with NeuroEvolution of Augmenting Topologies (NEAT) in a Time-Scaled Framework - are performing comparably well in our mammographic mass detection scheme.

  13. The scheme of combined application of optimization and simulation models for formation of an optimum structure of an automated control system of space systems

    Science.gov (United States)

    Chernigovskiy, A. S.; Tsarev, R. Yu; Nikiforov, A. Yu; Zelenkov, P. V.

    2016-11-01

    With the development of automated control systems for space systems, new classes of spacecraft have appeared that require improvement of their structure and expansion of their functions. Designing the automated control system of a space system involves various tasks, such as determining the location of elements and subsystems in space, hardware selection, and the distribution of the set of functions performed by the system units, all under constraints on the quality of control and the connectivity of components. The problem of synthesizing the structure of an automated control system for space systems is formalized using discrete variables at various levels of system detail. A sequence of tasks and stages in forming the structure of the automated control system of space systems is developed. The authors have developed and proposed a scheme for the combined use of optimization and simulation models to ensure a rational distribution of functions between the automated control system complex and the remaining system units. The proposed approach allows reasonable hardware selection, taking into account the different requirements for the operation of automated control systems of space systems.

  14. Optimal Nationwide Traveling Scheme Based on a Simulated Annealing Algorithm

    Institute of Scientific and Technical Information of China (English)

    吕鹏举; 原杰; 吕菁华

    2011-01-01

    An optimal itinerary for travelling through all provincial capitals, the municipalities, Hong Kong, Macao and Taipei is designed. The practical problems of finding the shortest path and the lowest cost for travelling to these places are analyzed. Taking into account the relationships among cost, route, duration and means of transportation, a model is established with the objective of minimizing path length, cost and time, and a simulated annealing algorithm is adopted to solve it. A travel path that saves both money and time is obtained from this comprehensive consideration. The results show the correctness and practical value of the travel scheme.
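
    Simulated annealing for this kind of routing problem is essentially annealing applied to a travelling-salesman tour; the sketch below runs it on a small set of random city coordinates with 2-opt style segment reversals as the move set, so the coordinates, cooling schedule and iteration budget are illustrative assumptions rather than the paper's data.

      import numpy as np

      rng = np.random.default_rng(4)
      n_cities = 20
      coords = rng.random((n_cities, 2)) * 100.0                 # illustrative city coordinates
      dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

      def tour_length(tour):
          return dist[tour, np.roll(tour, -1)].sum()

      def simulated_annealing(n_iter=50000, T0=50.0, cooling=0.9995):
          tour = rng.permutation(n_cities)
          best, best_len = tour.copy(), tour_length(tour)
          cur_len, T = best_len, T0
          for _ in range(n_iter):
              i, j = sorted(rng.choice(n_cities, 2, replace=False))
              cand = tour.copy()
              cand[i:j + 1] = cand[i:j + 1][::-1]                # 2-opt style segment reversal
              cand_len = tour_length(cand)
              if cand_len < cur_len or rng.random() < np.exp((cur_len - cand_len) / T):
                  tour, cur_len = cand, cand_len
                  if cur_len < best_len:
                      best, best_len = tour.copy(), cur_len
              T *= cooling
          return best, best_len

      tour, length = simulated_annealing()
      print(f"best tour length: {length:.1f}")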

  15. Quality analysis of salmon calcitonin in a polymeric bioadhesive pharmaceutical formulation: sample preparation optimization by DOE.

    Science.gov (United States)

    D'Hondt, Matthias; Van Dorpe, Sylvia; Mehuys, Els; Deforce, Dieter; DeSpiegeleer, Bart

    2010-12-01

    A sensitive and selective HPLC method for the assay and degradation of salmon calcitonin, a 32-amino acid peptide drug, formulated at low concentrations (400 ppm m/m) in a bioadhesive nasal powder containing polymers, was developed and validated. The sample preparation step was optimized using Plackett-Burman and Onion experimental designs. The response functions evaluated were calcitonin recovery and analytical stability. The best results were obtained by treating the sample with 0.45% (v/v) trifluoroacetic acid at 60 degrees C for 40 min. These extraction conditions did not yield any observable degradation, while a maximum recovery for salmon calcitonin of 99.6% was obtained. The HPLC-UV/MS methods used a reversed-phase C(18) Vydac Everest column, with a gradient system based on aqueous acid and acetonitrile. UV detection, using trifluoroacetic acid in the mobile phase, was used for the assay of calcitonin and related degradants. Electrospray ionization (ESI) ion trap mass spectrometry, using formic acid in the mobile phase, was implemented for the confirmatory identification of degradation products. Validation results showed that the methodology was fit for the intended use, with accuracy of 97.4+/-4.3% for the assay and detection limits for degradants ranging between 0.5 and 2.4%. Pilot stability tests of the bioadhesive powder under different storage conditions showed a temperature-dependent decrease in salmon calcitonin assay value, with no equivalent increase in degradation products, explained by the chemical interaction between salmon calcitonin and the carbomer polymer.

  16. Towards an optimal sampling of peculiar velocity surveys for Wiener Filter reconstructions

    Science.gov (United States)

    Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan

    2017-06-01

    The Wiener Filter (WF) technique enables the reconstruction of density and velocity fields from observed radial peculiar velocities. This paper aims at identifying the optimal design of peculiar velocity surveys within the WF framework. The prime goal is to test the dependence of the reconstruction quality on the distribution and nature of data points. Mock data sets, extending to 250 h-1 Mpc, are drawn from a constrained simulation that mimics the local Universe to produce realistic mock catalogues. Reconstructed fields obtained with these mocks are compared to the reference simulation. Comparisons, including residual distributions, cell-to-cell and bulk velocities, imply that the presence of field data points is essential to properly measure the flows. The fields reconstructed from mocks that consist only of galaxy cluster data points exhibit poor-quality bulk velocities. In addition, the reconstruction quality depends strongly on the grouping of individual data points into single points to suppress virial motions in high-density regions. Conversely, the presence of a Zone of Avoidance hardly affects the reconstruction. For a given number of data points, a uniform sample does not score any better than a sample with decreasing number of data points with the distance. The best reconstructions are obtained with a grouped survey containing field galaxies: assuming no error, they differ from the simulated field by less than 100 km s-1 up to the extreme edge of the catalogues or up to a distance of three times the mean distance of data points for non-uniform catalogues. The overall conclusions hold when errors are added.

  17. Inter-pulse delay optimization in dual-pulse laser induced breakdown vacuum ultraviolet spectroscopy of a steel sample in ambient gases at low pressure

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, X., E-mail: xi.jiang2@mail.dcu.ie [School of Physical Sciences, Dublin City University, Dublin (Ireland); National Centre for Plasma Science and Technology, Dublin City University, Dublin (Ireland); Hayden, P. [School of Physical Sciences, Dublin City University, Dublin (Ireland); National Centre for Plasma Science and Technology, Dublin City University, Dublin (Ireland); Laasch, R. [Institut fuer Experimentalphysik, Universitat Hamburg, Luruper Chaussee 149, 22761 Hamburg (Germany); Costello, J.T.; Kennedy, E.T. [School of Physical Sciences, Dublin City University, Dublin (Ireland); National Centre for Plasma Science and Technology, Dublin City University, Dublin (Ireland)

    2013-08-01

    Time-integrated spatially-resolved Laser Induced Breakdown Spectroscopy (LIBS) has been used to investigate spectral emissions from laser-induced plasmas generated on steel targets. Instead of detecting spectral lines in the visible/near ultraviolet (UV), as investigated in conventional LIBS, this work explored the use of spectral lines emitted by ions in the shorter wavelength vacuum ultraviolet (VUV) spectral region. Single-pulse (SP) and dual-pulse LIBS (DP-LIBS) experiments were performed on standardized steel samples. In the case of the double-pulse scheme, two synchronized lasers were used, an ablation laser (200 mJ/15 ns), and a reheating laser (665 mJ/6 ns) in a collinear beam geometry. Spatially resolved and temporally integrated laser induced plasma VUV emission in the DP scheme and its dependence on inter-pulse delay time were studied. The VUV spectral line intensities were found to be enhanced in the DP mode and were significantly affected by the inter-pulse delay time. Additionally, the influence of ambient conditions was investigated by employing low pressure nitrogen, argon or helium as buffer gases in the ablation chamber. The results clearly demonstrate the existence of a sharp ubiquitous emission intensity peak at 100 ns and a wider peak, in the multi-microsecond range of inter-pulse time delay, dependent on the ambient gas conditions. - Highlights: • First dual-pulse and ambient gas deep VUV LIBS plasma emission study • Optimization of inter-pulse delay time for vacuum and ambient gas environments • A sharp intensity peak implies optimal inter-pulse delay of 100 ns for all conditions. • A broad peak appears in the microsecond delay range, but only in ambient gases. • Pressure dependence implies a different enhancement process.

  18. An Optimized Scheme for FMIPv6 with π-calculus Verification

    Institute of Scientific and Technical Information of China (English)

    李向丽; 王晓燕; 王正斌; 屈智巍

    2012-01-01

    To address the long handover delay and high packet loss rate of FMIPv6, an improved scheme named PI-FMIPv6 is proposed. An information learning mechanism, a proxy binding mechanism and a tunnel timer are introduced so that configuration of the new care-of address (NCoA), duplicate address detection (DAD) and binding update (BU) are completed in advance and the tunnel lifetime is set appropriately, which optimizes the FMIPv6 handover procedure. π-calculus is used to define and derive a mathematical model of PI-FMIPv6, demonstrating that the optimized scheme is well-defined and rigorous. Furthermore, NS-2 simulation results show that PI-FMIPv6 reduces the handover delay by at least 60.7% and the packet loss rate by at least 61.5% compared with FMIPv6, which verifies that PI-FMIPv6 outperforms FMIPv6 and better meets the requirements of real-time services.

  19. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treatment

  20. Colour schemes

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation....

  1. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, the proposed methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between the MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
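
    The replica-selection step relies on extracting a non-dominated (Pareto) front from per-replica objective values; a small, self-contained helper for the min-min case is sketched below, with the two objectives and the score values chosen purely for illustration.

      import numpy as np

      def pareto_front(points):
          """Indices of non-dominated points when all objectives are minimised.
          A point is dominated if another point is <= in every objective and < in at least one."""
          pts = np.asarray(points, dtype=float)
          keep = np.ones(len(pts), dtype=bool)
          for i in range(len(pts)):
              dominated = np.all(pts <= pts[i], axis=1) & np.any(pts < pts[i], axis=1)
              if dominated.any():
                  keep[i] = False
          return np.flatnonzero(keep)

      # Illustrative (interface-energy, total-energy) pairs for a set of docking replicas
      scores = np.array([[-12.0, -6.5], [-10.5, -7.2], [-13.1, -5.0],
                         [-9.8,  -6.9], [-12.8, -6.6], [-11.0, -4.0]])
      front = pareto_front(scores)
      print("non-dominated replicas:", front)
      print(scores[front])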

  2. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log-log mesh optimization and local monotonicity preserving Steffen spline

    Science.gov (United States)

    Maglevanny, I. I.; Smolar, V. A.

    2016-01-01

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources so that so-called "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
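
    Steffen's interpolant is not available in SciPy, so the sketch below uses PCHIP, another local monotonicity-preserving cubic, as a stand-in to illustrate the log-log workflow; the sampled energy-loss values are invented for the example and are not data from the paper.

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      # Illustrative sampled energy-loss function: energy (eV) vs ELF, unevenly spaced
      energy = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 25.0, 60.0, 150.0, 400.0, 1000.0])
      elf = np.array([0.02, 0.05, 0.20, 0.80, 1.50, 0.60, 0.15, 0.040, 0.010, 0.002])

      # Log-log transform reduces the non-uniformity of the sampling before interpolation
      log_e, log_f = np.log10(energy), np.log10(elf)

      # PCHIP: local, monotonicity-preserving piecewise cubic (stand-in for a Steffen spline)
      interp = PchipInterpolator(log_e, log_f)

      def elf_of(e):
          """Evaluate the fitted ELF at arbitrary energies within the sampled range."""
          return 10.0 ** interp(np.log10(e))

      print(elf_of(np.array([0.7, 3.0, 40.0, 300.0])))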

  3. Hyphenation of optimized microfluidic sample preparation with nano liquid chromatography for faster and greener alkaloid analysis

    NARCIS (Netherlands)

    Shen, Y.; Beek, van T.A.; Zuilhof, H.; Chen, B.

    2013-01-01

    A glass liquid–liquid extraction (LLE) microchip with three parallel 3.5 cm long and 100 µm wide interconnecting channels was optimized in terms of more environmentally friendly (greener) solvents and extraction efficiency. In addition, the optimized chip was successfully hyphenated with nano-liquid chromatography.

  4. Hyphenation of optimized microfluidic sample preparation with nano liquid chromatography for faster and greener alkaloid analysis

    NARCIS (Netherlands)

    Shen, Y.; Beek, van T.A.; Zuilhof, H.; Chen, B.

    2013-01-01

    A glass liquid–liquid extraction (LLE) microchip with three parallel 3.5 cm long and 100 µm wide interconnecting channels was optimized in terms of more environmentally friendly (greener) solvents and extraction efficiency. In addition, the optimized chip was successfully hyphenated with nano-liquid chromatography.

  5. Linear multi-secret sharing schemes

    Institute of Scientific and Technical Information of China (English)

    XIAO Liangliang; LIU Mulan

    2005-01-01

    In this paper, linear multi-secret sharing schemes are studied by using monotone span programs. A relation between computing monotone Boolean functions by using monotone span programs and realizing multi-access structures by using linear multi-secret sharing schemes is shown. Furthermore, the concept of an optimal linear multi-secret sharing scheme is presented, and several schemes are proved to be optimal.

  6. Optimization of Bid Quotation Schemes Based on Entropy Weight and a Multi-objective Optimization Model

    Institute of Scientific and Technical Information of China (English)

    南铁雷

    2016-01-01

    With the implementation of the bidding system and bill-of-quantities pricing, competition in China's water conservancy construction market is becoming increasingly fierce. To maximize profit while still winning the contract, bidders often use the unbalanced bidding method. This not only damages the economic interests of the employer but also disrupts the normal order of bidding, so the employer needs to screen the bidding schemes at the evaluation stage. In this paper, entropy weight theory is introduced into a multi-objective optimization mathematical model: the entropy weight method is used to determine the index weights objectively, and the relative Hamming distance between each bidding scheme and the ideal point is calculated to determine the degree of imbalance of each bid, so that the best bidding scheme can be selected. A case study shows that the method is accurate and effective and can provide a reference and basis for the tenderee's bid evaluation.

  7. Optimized Optical Rectification and Electro-optic Sampling in ZnTe Crystals with Chirped Femtosecond Laser Pulses

    DEFF Research Database (Denmark)

    Erschens, Dines Nøddegaard; Turchinovich, Dmitry; Jepsen, Peter Uhd

    2011-01-01

    We report on optimization of the intensity of THz signals generated and detected by optical rectification and electro-optic sampling in dispersive, nonlinear media. Addition of a negative prechirp to the femtosecond laser pulses used in the THz generation and detection processes in 1-mm thick ZnT...

  8. Debba China presentation on optimal field sampling for exploration targets and geochemicals

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available Introduction to Remote Sensing; optimized sampling schemes case studies; deriving optimal exploration target zones; optimum sampling scheme for surface geochemical characterization of mine tailings. Optimal Field Sampling for 1. Exploration Target Zones and 2. Geochemical Characterization of Mine Tailings. P. Debba, CSIR, Logistics and Quantitative Methods...

  9. Improving the execution performance of FreeSurfer: a new scheduled pipeline scheme for optimizing the use of CPU and GPU resources.

    Science.gov (United States)

    Delgado, J; Moure, J C; Vives-Gilabert, Y; Delfino, M; Espinosa, A; Gómez-Ansón, B

    2014-07-01

    A scheme to significantly speed up the processing of MRI with FreeSurfer (FS) is presented. The scheme is aimed at maximizing the productivity (number of subjects processed per unit time) for the use case of research projects with datasets involving many acquisitions. The scheme combines the already existing GPU-accelerated version of the FS workflow with a task-level parallel scheme supervised by a resource scheduler. This allows for an optimum utilization of the computational power of a given hardware platform while avoiding problems with shortages of platform resources. The scheme can be executed on a wide variety of platforms, as its implementation only involves the script that orchestrates the execution of the workflow components and the FS code itself requires no modifications. The scheme has been implemented and tested on a commodity platform within the reach of most research groups (a personal computer with four cores and an NVIDIA GeForce 480 GTX graphics card). Using the scheduled task-level parallel scheme, a productivity above 0.6 subjects per hour is achieved on the test platform, corresponding to a speedup of over six times compared to the default CPU-only serial FS workflow.
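
    The orchestration script itself is not reproduced in the record. A minimal sketch of the underlying idea, running CPU-bound stages for several subjects in parallel while a semaphore serializes the single GPU-bound stage, could look like the following; the subject list and commands are placeholders, not the actual FreeSurfer invocations.

```python
# Hedged sketch of a task-level parallel scheme like the one described above:
# CPU-bound recon stages for many subjects run concurrently, while a semaphore
# serializes access to the single GPU. Subject IDs and commands are placeholders,
# not the actual FreeSurfer invocations from the paper.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from threading import Semaphore

CPU_SLOTS = 4          # e.g. one worker per physical core
gpu = Semaphore(1)     # only one GPU-accelerated stage at a time

def process_subject(subject_id: str) -> None:
    # CPU-bound stage (placeholder command).
    subprocess.run(["echo", f"cpu stage for {subject_id}"], check=True)
    # GPU-bound stage: wait for the GPU to be free.
    with gpu:
        subprocess.run(["echo", f"gpu stage for {subject_id}"], check=True)
    # Final CPU-bound stage.
    subprocess.run(["echo", f"finalize {subject_id}"], check=True)

subjects = [f"subj{i:03d}" for i in range(1, 9)]
with ThreadPoolExecutor(max_workers=CPU_SLOTS) as pool:
    list(pool.map(process_subject, subjects))
```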

  10. Parametric optimization of selective laser melting for forming Ti6Al4V samples by Taguchi method

    Science.gov (United States)

    Sun, Jianfeng; Yang, Yongqiang; Wang, Di

    2013-07-01

    In this study, a selective laser melting experiment was carried out with Ti6Al4V alloy powders. To produce samples of maximum density, the selective laser melting parameters laser power, scanning speed, powder thickness, hatching space and scanning strategy were carefully selected. The Taguchi method, a statistical design-of-experiments technique, was used to optimize the selected parameters. The results were analyzed with analysis of variance (ANOVA) and signal-to-noise (S/N) ratios in Design-Expert software to identify the optimal parameters, and a regression model was established. The regression equation revealed a linear relationship among density, laser power, scanning speed, powder thickness and scanning strategy. From the experiments, a sample with a density higher than 95% was obtained. Its microstructure was mainly composed of acicular martensite, α phase and β phase, and the micro-hardness was 492 HV0.2.
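
    As a small illustration of the Taguchi analysis mentioned above, the sketch below computes "larger is better" signal-to-noise ratios for replicate density measurements and averages them per factor level; the layout and numbers are invented, not the paper's experimental data.

```python
# Hedged sketch of the Taguchi "larger-is-better" signal-to-noise ratio used to
# rank parameter levels by their effect on part density. The orthogonal-array
# layout and density values are invented for illustration, not the paper's data.
import numpy as np

# Each key is one run (laser power level, scan speed level), each value holds
# replicate density measurements (%) for that run.
runs = {
    (1, 1): [93.1, 92.8],
    (1, 2): [94.5, 94.9],
    (2, 1): [95.6, 95.2],
    (2, 2): [97.0, 96.5],
}

def sn_larger_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

sn = {run: sn_larger_is_better(y) for run, y in runs.items()}

# Mean S/N per level of the first factor (laser power): higher is better.
for level in (1, 2):
    mean_sn = np.mean([v for (p, _), v in sn.items() if p == level])
    print(f"laser power level {level}: mean S/N = {mean_sn:.2f} dB")
```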

  11. CSR schemes in agribusiness

    DEFF Research Database (Denmark)

    Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela

    2013-01-01

    Purpose – The rise of CSR followed a demand for CSR standards and guidelines. In a sector already characterized by a large number of standards, the authors seek to ask what CSR schemes apply to agribusiness, and how they can be systematically compared and analysed. Design/methodology/approach – Following a deductive-inductive approach the authors develop a model to compare and analyse CSR schemes based on existing studies and on coding qualitative data on 216 CSR schemes. Findings – The authors confirm that CSR standards and guidelines have entered agribusiness and identify a complex landscape of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit...

  12. Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate

    National Research Council Canada - National Science Library

    Brunelli, Davide; Caione, Carlo

    2015-01-01

    .... Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate...
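
    The abstract is truncated before the method details, so the sketch below only illustrates the general compressed-sensing idea it refers to: recovering a sparse signal from fewer-than-Nyquist random measurements, here with a basic orthogonal matching pursuit on synthetic data rather than the paper's dataset or encoder.

```python
# Hedged sketch of compressive-sensing reconstruction: recover a sparse signal
# from m < n random linear measurements with orthogonal matching pursuit (OMP).
# Random synthetic data only; this is not the dataset or encoder from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                      # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)                   # sensing matrix
y = A @ x                                                       # sub-Nyquist samples

def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print("max reconstruction error:", np.max(np.abs(x - x_hat)))
```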

  13. Characterizing the optimal flux space of genome-scale metabolic reconstructions through modified latin-hypercube sampling.

    Science.gov (United States)

    Chaudhary, Neha; Tøndel, Kristin; Bhatnagar, Rakesh; dos Santos, Vítor A P Martins; Puchałka, Jacek

    2016-03-01

    Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential novel drug targets and biotechnologically relevant pathways. The abundance of alternate flux profiles has led to the evolution of methods to explore the complete solution space, aiming to increase the accuracy of predictions. Herein we present a novel, generic algorithm to characterize the entire flux space of a GSMR that, upon application of FBA, leads to the optimal value of the objective (the optimal flux space). Our method employs modified Latin-Hypercube Sampling (LHS) to effectively border the optimal space, followed by Principal Component Analysis (PCA) to identify and explain the major sources of variability within it. The approach was validated against an elementary mode analysis of a smaller network of Saccharomyces cerevisiae and applied to the GSMR of Pseudomonas aeruginosa PAO1 (iMO1086). It surpasses the commonly used Monte Carlo Sampling (MCS) by providing more uniform coverage of a much larger network with fewer samples. The results show that, although many fluxes are identified as variable once the objective value is fixed, the majority of the variability can be reduced to several main patterns arising from a few alternative pathways. In iMO1086, the initial variability of 211 reactions could almost entirely be explained by 7 alternative pathway groups. These findings imply that the possibilities to reroute greater portions of flux may be limited within bacterial metabolic networks. Furthermore, the optimal flux space is subject to change with environmental conditions. Our method may be a useful device to validate the predictions made by FBA-based tools, by describing the optimal flux space associated with those predictions, and thus to improve them.
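
    The authors' modified LHS algorithm is not spelled out in the record, but its two named ingredients, Latin-hypercube sampling and PCA, can be sketched generically. The example below samples a simple box-bounded flux space and reports how many principal components capture most of the variability; the bounds are illustrative, not a real metabolic model.

```python
# Hedged sketch of the two ingredients named above: Latin-hypercube sampling of a
# bounded flux space followed by PCA to summarize the main directions of
# variability. Box bounds stand in for the FBA-constrained optimal space; this is
# not the authors' modified-LHS algorithm or a real metabolic model.
import numpy as np
from scipy.stats import qmc
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_reactions, n_samples = 20, 500

# Illustrative per-reaction flux bounds (lower, upper).
lower = rng.uniform(-10, 0, n_reactions)
upper = lower + rng.uniform(1, 10, n_reactions)

# Latin-hypercube sample of the box, scaled to the flux bounds.
sampler = qmc.LatinHypercube(d=n_reactions, seed=1)
fluxes = qmc.scale(sampler.random(n_samples), lower, upper)

# PCA: how many components explain most of the flux variability?
pca = PCA().fit(fluxes)
explained = np.cumsum(pca.explained_variance_ratio_)
print("components needed for 90% of variance:", int(np.searchsorted(explained, 0.9) + 1))
```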

  14. Application of Fuzzy Consistent Matrix Theory to the Optimization of Architectural Design Schemes

    Institute of Scientific and Technical Information of China (English)

    邢彦富; 孔娅; 方兴

    2011-01-01

    The optimal selection of an architectural design scheme is a complex decision process. Using fuzzy consistent matrix theory, a mathematical model for the optimal selection of architectural design schemes is established and the candidate schemes are evaluated comprehensively against multiple factors. This overcomes the coarseness of the influencing factors in architectural design scheme selection and the arbitrariness of subjective human judgment, so that good economic and social benefits can be obtained. A worked example shows that selecting architectural design schemes with this method is feasible and effective.
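
    A common textbook version of the fuzzy consistent matrix procedure referred to above is sketched below: a fuzzy complementary pairwise comparison matrix is turned into a fuzzy consistent matrix, criterion weights are read off from its row sums, and candidate schemes are ranked by weighted score. The comparison values and scores are illustrative, and the exact model in the paper may differ.

```python
# Hedged sketch of the fuzzy consistent matrix procedure commonly used for this
# kind of multi-factor scheme evaluation. The comparison values are illustrative,
# and this is a textbook variant rather than the exact model in the paper.
import numpy as np

# Pairwise comparisons of 3 criteria: f[i, j] in [0, 1], f[i, j] + f[j, i] = 1.
F = np.array([[0.5, 0.6, 0.7],
              [0.4, 0.5, 0.6],
              [0.3, 0.4, 0.5]])

n = F.shape[0]
row_sum = F.sum(axis=1)

# Fuzzy consistent matrix: r[i, j] = (row_sum[i] - row_sum[j]) / (2n) + 0.5
R = (row_sum[:, None] - row_sum[None, :]) / (2 * n) + 0.5

# Weights (with the usual choice a = (n - 1) / 2); they sum to 1.
w = (2 * R.sum(axis=1) - 1) / (n * (n - 1))
print("criterion weights:", w)

# Rank candidate schemes by their weighted scores on the 3 criteria (toy scores).
scores = np.array([[0.8, 0.6, 0.7],
                   [0.7, 0.9, 0.5]])
print("scheme ranking (best first):", np.argsort(scores @ w)[::-1] + 1)
```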

  15. Synthetic Judgement for Filling Scheme Optimization Based on AHP and TOPSIS Methods

    Institute of Scientific and Technical Information of China (English)

    王新民; 樊彪; 张德明; 李帅

    2016-01-01

    To comprehensively evaluate and select among candidate backfilling schemes, a synthetic assessment index system was established by combining the analytic hierarchy process (AHP) with the TOPSIS method. For four candidate filling schemes at a metal mine, the economic, technical and safety factors affecting the evaluation indexes (difficulty of the filling process, degree of roof-contacted filling, settlement of the filling body, strength of the filling body, etc.) were transformed into a multi-factor decision matrix. The weights of the influence factors were obtained with AHP, an AHP-TOPSIS multi-factor decision model was then built on the basic theory of TOPSIS, and the synthetic superiority degrees of the four filling schemes were computed. The results show superiority degrees of 36.2%, 85.2%, 57.6% and 32.0%, respectively, identifying the second scheme (consolidated fill) as the optimal one. The selected scheme is consistent with actual mining practice and gives good filling results. Practical application shows that the decision model provides a new approach to filling scheme selection.
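
    As an illustration of the AHP-TOPSIS combination described above, the sketch below ranks four hypothetical schemes with TOPSIS using externally supplied (placeholder) AHP weights; a real application would derive those weights from pairwise comparison matrices with a consistency check, and the numbers here are not the mine's data.

```python
# Hedged sketch of an AHP-weighted TOPSIS ranking like the one described above.
# The decision matrix and AHP weights are invented for illustration.
import numpy as np

# rows = 4 filling schemes, columns = criteria (all treated as "larger is better")
X = np.array([[0.6, 0.7, 0.5, 0.8],
              [0.9, 0.8, 0.9, 0.7],
              [0.7, 0.6, 0.8, 0.6],
              [0.5, 0.9, 0.4, 0.5]])
w = np.array([0.35, 0.25, 0.25, 0.15])   # AHP-derived weights (placeholder)

# 1. Vector-normalize columns, then apply the weights.
V = w * X / np.linalg.norm(X, axis=0)

# 2. Ideal and anti-ideal solutions.
best, worst = V.max(axis=0), V.min(axis=0)

# 3. Closeness coefficient: distance to the anti-ideal relative to both distances.
d_best = np.linalg.norm(V - best, axis=1)
d_worst = np.linalg.norm(V - worst, axis=1)
closeness = d_worst / (d_best + d_worst)

print("closeness coefficients:", np.round(closeness, 3))
print("best scheme:", int(closeness.argmax()) + 1)
```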

  16. Non-uniform sampling in EPR--optimizing data acquisition for HYSCORE spectroscopy.

    Science.gov (United States)

    Nakka, K K; Tesiram, Y A; Brereton, I M; Mobli, M; Harmer, J R

    2014-08-21

    Non-uniform sampling combined with maximum entropy reconstruction is a powerful technique used in multi-dimensional NMR spectroscopy to reduce sample measurement time. We adapted this technique to the pulse EPR experiment hyperfine sublevel correlation (HYSCORE) and show that experiment times can be shortened by approximately an order of magnitude compared with conventional linear sampling, with negligible loss of information.
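
    The record does not include the sampling schedule or reconstruction code; as a generic illustration of non-uniform sampling, the sketch below draws a random subset of a 2-D acquisition grid biased toward short evolution times, which is one common way such schedules are built.

```python
# Hedged sketch of one common way to build a non-uniform sampling (NUS) schedule:
# keep a random subset of the 2-D grid points, biased toward early evolution
# times where the signal is strongest. This is a generic illustration, not the
# sampling schedule or reconstruction code used in the paper.
import numpy as np

rng = np.random.default_rng(7)
n1 = n2 = 128          # full (Nyquist) grid in both time dimensions
fraction = 0.10        # keep only 10% of the grid points
tau = 0.4              # decay constant of the exponential sampling bias

t1, t2 = np.meshgrid(np.arange(n1), np.arange(n2), indexing="ij")
weights = np.exp(-tau * (t1 / n1 + t2 / n2)).ravel()
weights /= weights.sum()

n_keep = int(fraction * n1 * n2)
chosen = rng.choice(n1 * n2, size=n_keep, replace=False, p=weights)
schedule = np.column_stack(np.unravel_index(chosen, (n1, n2)))

print(f"measuring {n_keep} of {n1 * n2} grid points")
print(schedule[:5])    # first few (t1, t2) indices to acquire
```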

  17. Some Numerical Quadrature Schemes of a Non-conforming Quadrilateral Finite Element

    Institute of Scientific and Technical Information of China (English)

    Xiao-fei GUAN; Ming-xia LI; Shao-chun CHEN

    2012-01-01

    Numerical quadrature schemes of a non-conforming finite element method for general second order elliptic problems in two dimensional (2-D) and three dimensional (3-D) space are discussed in this paper. We present and analyze some optimal numerical quadrature schemes. One of the schemes contains only three sampling points, which greatly improves the efficiency of numerical computations. The optimal error estimates are derived by using some traditional approaches and techniques. Lastly, some numerical results are provided to verify our theoretical analysis.
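
    The paper's three-point scheme is not reproduced in the abstract, so the sketch below only illustrates element-level numerical quadrature in general, using a standard 2x2 tensor-product Gauss rule on the reference square; it is not the scheme analyzed by the authors.

```python
# Hedged sketch of numerical quadrature on the reference quadrilateral: a 2x2
# tensor-product Gauss rule on [-1, 1]^2, given as a generic illustration of the
# kind of element-level quadrature discussed above. It is not the three-point
# scheme analyzed in the paper.
import numpy as np

g = 1.0 / np.sqrt(3.0)                      # 1-D Gauss points for the 2-point rule
points = [(x, y) for x in (-g, g) for y in (-g, g)]
weights = [1.0] * 4                          # each 1-D weight is 1

def quad(f):
    """Approximate the integral of f over the reference square [-1, 1]^2."""
    return sum(w * f(x, y) for (x, y), w in zip(points, weights))

# The rule integrates polynomials up to degree 3 in each variable exactly:
print(quad(lambda x, y: x**2 * y**2), 4.0 / 9.0)   # exact value 4/9
print(quad(lambda x, y: 1.0), 4.0)                 # area of the square
```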

  18. Optimizing human semen cryopreservation by reducing test vial volume and repetitive test vial sampling

    DEFF Research Database (Denmark)

    Jensen, Christian F S; Ohl, Dana A; Parker, Walter R

    2015-01-01

    OBJECTIVE: To investigate optimal test vial (TV) volume, utility and reliability of TVs, intermediate temperature exposure (-88°C to -93°C) before cryostorage, cryostorage in nitrogen vapor (VN2) and liquid nitrogen (LN2), and long-term stability of VN2 cryostorage of human semen. DESIGN: Prospec...

  19. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Kirsi Harju

    2015-11-01

    Full Text Available Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods for the mussel samples were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards into toxic mussel extracts. The results were within a z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD).

  20. Counting, enumerating and sampling of execution plans in a cost-based query optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    1999-01-01

    Testing an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan depends not only on