WorldWideScience

Sample records for optimal sampling schemes

  1. Optimal sampling schemes applied in geology

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-05-01

    Full Text Available The full text is a slide presentation: Debba (CSIR), Optimal Sampling Schemes applied in Geology, UP 2010. Outline: 1 Introduction to hyperspectral remote sensing; 2 Objective of Study 1; 3 Study Area; 4 Data used; 5 Methodology; 6 Results; 7 Background and Research Question for Study 2; 8 Study Area and Data; 9 Methodology; 10 Results; 11 Conclusions.

  2. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  3. Optimal sampling schemes for vegetation and geological field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2012-07-01

    Full Text Available The presentation made to Wits Statistics Department was on common classification methods used in the field of remote sensing, and the use of remote sensing to design optimal sampling schemes for field visits with applications in vegetation...

  4. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available ... sampling schemes case studies: optimized field sampling representing the overall distribution of a particular mineral; deriving optimal exploration target zones. CONTINUUM REMOVAL for vegetation [13, 27, 46]. The convex hull transform is a method of normalizing spectra [16, 41]. The convex hull technique is analogous to fitting a rubber band over a spectrum to form a continuum. Figure 5 shows the concept of the convex hull transform. The difference between the hull and the original spectrum...
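
    The "rubber band" idea in this abstract lends itself to a short illustration. Below is a minimal Python sketch of continuum removal, assuming a 1-D reflectance spectrum sorted by wavelength; the upper convex hull stands in for the rubber band, and the spectrum and all parameters are made up for illustration.

```python
import numpy as np

def continuum_removed(wavelengths, reflectance):
    """Continuum removal: divide a spectrum by its upper convex hull."""
    pts = list(zip(wavelengths, reflectance))
    # Build the upper convex hull with the monotone-chain algorithm.
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Pop points that would make the hull bend upward (non-convex).
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    # Interpolate the hull (the "rubber band") at every wavelength, then divide.
    continuum = np.interp(wavelengths, hx, hy)
    return reflectance / continuum

# Example: a smooth baseline with one absorption feature near 2200 nm.
wl = np.linspace(2000.0, 2400.0, 200)
spec = 1.0 - 0.3 * np.exp(-((wl - 2200.0) / 30.0) ** 2)
cr = continuum_removed(wl, spec)   # 1.0 on the hull, < 1 inside the feature
```

    Dividing by the interpolated hull yields values of 1 along the continuum and dips below 1 inside absorption features, which is the normalized form that band-depth analyses operate on.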

  5. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high cost of sampling schemes optimized with additional sampling points for each individual physical and chemical soil property prevents their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for sets of physical and chemical properties and to investigate its effect on the quality of soil sampling. Soil was sampled in a 42-ha area at 206 geo-referenced points arranged in a regular grid with 50-m spacing, at a depth of 0.00-0.20 m. To obtain an optimal sampling scheme for all physical and chemical properties, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize the kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimation variance at these locations. The addition of optimal samples in specific regions increased the accuracy by up to 2% for chemical and 1% for physical properties. The use of a sample grid and a medium-scale variogram as prior information for designing additional sampling schemes proved very promising for locating these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.
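
    The quantity that the SSA method minimizes here, the kriging variance, depends only on the sample geometry and the variogram, not on the measured values, which is why it can drive sampling design. A minimal numpy sketch of ordinary-kriging variance under an assumed spherical variogram is given below; all locations and variogram parameters are hypothetical.

```python
import numpy as np

def spherical_gamma(h, nugget=0.0, sill=1.0, rng_a=300.0):
    """Isotropic spherical variogram (assumed, illustrative parameterization)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3)
    return np.where(h >= rng_a, sill, np.where(h == 0.0, 0.0, g))

def ok_variance(x0, X, gamma=spherical_gamma):
    """Ordinary-kriging estimation variance at x0 for sample locations X."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0                       # Lagrange multiplier row/column
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(X - x0, axis=1))
    w = np.linalg.solve(A, b)
    return float(w @ b)                 # sigma^2 = sum(lambda_i * gamma_i) + mu

# Mean OK variance over a prediction grid: the objective that well-placed
# additional samples are meant to reduce.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1000.0, size=(20, 2))      # hypothetical 20 sample points
gx, gy = np.meshgrid(np.linspace(0, 1000, 25), np.linspace(0, 1000, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])
mean_var = np.mean([ok_variance(p, X) for p in grid])
```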

  6. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    Science.gov (United States)

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points while maintaining, or even increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as a weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The...
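
    Of the three criteria, the MMSD is the easiest to illustrate. The following sketch couples it to a bare-bones spatial simulated annealing loop on the unit square; the design size, cooling law and step size are made-up illustration values, not those of the MSANOS software.

```python
import numpy as np

rng = np.random.default_rng(1)

def mmsd(design, eval_pts):
    """Mean of the shortest distances from evaluation points to the design."""
    d = np.linalg.norm(eval_pts[:, None, :] - design[None, :, :], axis=-1)
    return d.min(axis=1).mean()

# Field discretized into evaluation points on the unit square.
gx, gy = np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30))
eval_pts = np.column_stack([gx.ravel(), gy.ravel()])
design = rng.uniform(size=(15, 2))               # initial sampling scheme

T, cooling, step = 0.05, 0.995, 0.1              # assumed annealing settings
obj = mmsd(design, eval_pts)
best, best_obj = design.copy(), obj
for _ in range(5000):
    cand = design.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.normal(0.0, step, 2), 0.0, 1.0)
    cand_obj = mmsd(cand, eval_pts)
    # Metropolis rule: accept improvements, sometimes accept worse moves.
    if cand_obj < obj or rng.random() < np.exp((obj - cand_obj) / T):
        design, obj = cand, cand_obj
        if obj < best_obj:
            best, best_obj = design.copy(), obj
    T *= cooling
```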

  7. Interpolation-free scanning and sampling scheme for tomographic reconstructions

    International Nuclear Information System (INIS)

    Donohue, K.D.; Saniie, J.

    1987-01-01

    In this paper a sampling scheme is developed for computed tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation

  8. Optimal updating magnitude in adaptive flat-distribution sampling.

    Science.gov (United States)

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
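
    As a concrete illustration of the single-bin updating scheme and the inverse-time schedule discussed above, here is a toy flat-distribution sampler in Python. The target distribution, flatness test and the lnf = M/t switch are simplifying assumptions in the spirit of the 1/t Wang-Landau variant, not the authors' exact prescription.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy system: M discrete states with a known unnormalized target density.
M = 20
log_p = -0.5 * ((np.arange(M) - 5.0) / 2.0) ** 2   # made-up target shape
bias = np.zeros(M)        # adaptive bias V(s); converges to log_p + const
hist = np.zeros(M)
lnf = 1.0                 # initial updating magnitude
one_t_stage = False       # becomes True once the 1/t schedule takes over
s = 0

for t in range(1, 200001):
    # Metropolis move on the biased distribution p(s) * exp(-V(s)).
    s_new = (s + rng.choice([-1, 1])) % M
    dlog = (log_p[s_new] - bias[s_new]) - (log_p[s] - bias[s])
    if np.log(rng.random()) < dlog:
        s = s_new
    bias[s] += lnf        # single-bin (Wang-Landau style) update
    hist[s] += 1
    if one_t_stage:
        lnf = M / t       # inverse-time schedule
    elif hist.min() > 0 and hist.min() > 0.8 * hist.mean():
        lnf *= 0.5        # standard WL stage: halve on (rough) flatness
        hist[:] = 0
        if lnf <= M / t:  # switch permanently once halving meets the 1/t curve
            one_t_stage = True

# bias - bias.mean() now approximates log_p - log_p.mean().
```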

  9. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and a comparison between the two models was then carried out. The results revealed that the proposed approach was practicable for optimizing the soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining the sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency.

  10. Effects of sparse sampling schemes on image quality in low-dose CT

    International Nuclear Information System (INIS)

    Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena

    2013-01-01

    Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT) that can potentially reduce the health risk related to radiation dose. Particularly, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promise in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data properties, and compare the effects of the sampling schemes on the image quality. Methods: Data properties of several sampling schemes are analyzed with respect to the CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse sampling schemes, and simulated those schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT data set was used as a reference, and sparse sampling has been realized numerically based on the CBCT data. Results: It is found that both sampling density and data incoherence affect the image quality in the CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases have shown promising results. These sampling schemes produced images with similar image quality compared to the reference image, and their structure similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction. Conclusions: The authors found that in CS-based image reconstructions both sampling density and data incoherence affect the image quality, and suggest that a sampling scheme should be devised and optimized by use of these indicators. With this strategic...

  11. An Optimization Scheme for ProdMod

    International Nuclear Information System (INIS)

    Gregory, M.V.

    1999-01-01

    A general-purpose dynamic optimization scheme has been devised in conjunction with the ProdMod simulator. The optimization scheme is suitable for the Savannah River Site (SRS) High Level Waste (HLW) complex operations, and able to handle different types of optimizations such as linear, nonlinear, etc. The optimization is performed in a stand-alone FORTRAN-based optimization driver, which is interfaced with the ProdMod simulator for the flow of information between the two

  12. Optimal Face-Iris Multimodal Fusion Scheme

    Directory of Open Access Journals (Sweden)

    Omid Sharifi

    2016-06-01

    Full Text Available Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score level, feature level and decision level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, the CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, the Backtracking Search Algorithm (BSA), a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of schemes by reducing the number of features and selecting the optimized weights for feature level and score level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of proposed fusion schemes over unimodal and multimodal fusion methods.

  13. Optimal Sales Schemes for Network Goods

    DEFF Research Database (Denmark)

    Parakhonyak, Alexei; Vikander, Nick

    ... consumers simultaneously, serve them all sequentially, or employ any intermediate scheme. We show that the optimal sales scheme is purely sequential, where each consumer observes all previous sales before choosing whether to buy himself. A sequential scheme maximizes the amount of information available...

  14. OLT-centralized sampling frequency offset compensation scheme for OFDM-PON.

    Science.gov (United States)

    Chen, Ming; Zhou, Hui; Zheng, Zhiwei; Deng, Rui; Chen, Qinghui; Peng, Miao; Liu, Cuiwei; He, Jing; Chen, Lin; Tang, Xionggui

    2017-08-07

    We propose an optical line terminal (OLT)-centralized sampling frequency offset (SFO) compensation scheme for adaptively-modulated OFDM-PON systems. By using the proposed SFO scheme, the phase rotation and inter-symbol interference (ISI) caused by SFOs between OLT and multiple optical network units (ONUs) can be centrally compensated in the OLT, which reduces the complexity of ONUs. Firstly, the optimal fast Fourier transform (FFT) size is identified in the intensity-modulated and direct-detection (IMDD) OFDM system in the presence of SFO. Then, the proposed SFO compensation scheme including phase rotation modulation (PRM) and length-adaptive OFDM frame has been experimentally demonstrated in the downlink transmission of an adaptively modulated optical OFDM with the optimal FFT size. The experimental results show that up to ± 300 ppm SFO can be successfully compensated without introducing any receiver performance penalties.

  15. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    Science.gov (United States)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problem. The first one is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second scheme is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third scheme is an optimized compact finite difference scheme modified by us: 4th-order Runge-Kutta time stepping and a 4th-order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (Optimized Compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the Optimized Compact scheme and the UNO3-ACM scheme show good resolution for category 1 and category 2, respectively.

  16. Failure Probability Estimation Using Asymptotic Sampling and Its Dependence upon the Selected Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Martinásková Magdalena

    2017-12-01

    Full Text Available The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol' sequences. All results are compared with the exact failure probability value.
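
    A sketch of the AS idea itself may help the reader: the standard deviation of the Gaussian vector is inflated by 1/f for several scale factors f < 1, the (easier) scaled failure probabilities are estimated by plain Monte Carlo, and the reliability index is extrapolated back to f = 1. The limit-state function and the support-point model beta(f) = A*f + B/f below are assumptions for illustration, not the article's test functions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def g(u):
    """Hypothetical limit state in standard normal space; g < 0 means failure."""
    return 4.0 - u.sum(axis=1) / np.sqrt(u.shape[1])

dim, n_mc = 10, 20000
fs = np.array([0.4, 0.5, 0.6, 0.7, 0.8])        # scale factors f < 1
betas = []
for f in fs:
    u = rng.normal(0.0, 1.0 / f, size=(n_mc, dim))   # std inflated by 1/f
    pf = np.mean(g(u) < 0.0)                         # now estimable by plain MC
    betas.append(-norm.ppf(pf))                      # reliability index beta(f)
betas = np.array(betas)

# Fit the support-point model beta(f) = A*f + B/f and extrapolate to f = 1.
Amat = np.column_stack([fs, 1.0 / fs])
A, B = np.linalg.lstsq(Amat, betas, rcond=None)[0]
beta_1 = A + B                       # extrapolated reliability index
pf_est = float(norm.cdf(-beta_1))    # estimated (rare) failure probability
```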

  17. Performance comparison of renewable incentive schemes using optimal control

    International Nuclear Information System (INIS)

    Oak, Neeraj; Lawson, Daniel; Champneys, Alan

    2014-01-01

    Many governments worldwide have instituted incentive schemes for renewable electricity producers in order to meet carbon emissions targets. These schemes aim to boost investment and hence growth in renewable energy industries. This paper examines four such schemes: premium feed-in tariffs, fixed feed-in tariffs, feed-in tariffs with contract for difference and the renewable obligations scheme. A generalised mathematical model of industry growth is presented and fitted with data from the UK onshore wind industry. The model responds to subsidy from each of the four incentive schemes. A utility or 'fitness' function that maximises installed capacity at some fixed time in the future while minimising the total cost of subsidy is postulated. Using this function, the optimal strategy for the provision and timing of subsidy under each scheme is calculated. Finally, a comparison of the performance of each scheme, given that each uses its optimal control strategy, is presented. This model indicates that the premium feed-in tariff and the renewable obligation scheme produce the joint best results.
    Highlights:
    • Stochastic differential equation model of renewable energy industry growth and prices, using UK onshore wind data 1992–2010.
    • Cost of production reduces as cumulative installed capacity of wind energy increases, consistent with the theory of learning.
    • Studies the effect of subsidy using feed-in tariff schemes, and the 'renewable obligations' scheme.
    • We determine the optimal timing and quantity of subsidy required to maximise industry growth and minimise costs.
    • The premium feed-in tariff scheme and the renewable obligations scheme produce the best results under optimal control

  18. Multiobjective hyper heuristic scheme for system design and optimization

    Science.gov (United States)

    Rafique, Amer Farhan

    2012-11-01

    As system design is becoming more and more multifaceted, integrated, and complex, the traditional single-objective approach to optimal design is becoming less and less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. Another objective of the intended approach is to improve the worthiness of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to provide the system designer with the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and so increase the certainty of reaching a global optimum solution. Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds diversity to the population, resulting in accomplishment of the pre-defined goals set in the proposed scheme.

  19. Optimization of refueling-shuffling scheme in PWR core by random search strategy

    International Nuclear Information System (INIS)

    Wu Yuan

    1991-11-01

    A random method for the simulated optimization of refueling management in a pressurized water reactor (PWR) core is described. The main purpose of the optimization was to select the 'best' refueling arrangement scheme that would produce maximum economic benefits under certain imposed conditions. To fulfill this goal, an effective optimization strategy, a two-stage random search method, was developed. First, the search is made in a manner similar to the stratified sampling technique, and a local optimum can be reached by comparison of successive results. Then, further random searches are carried out between different strata to try to find the global optimum. In general, the method can be used as a practical tool for conventional fuel management schemes, but it can also be used in studies on the optimization of low-leakage fuel management. Some calculations were done for a typical PWR core on a CYBER-180/830 computer. The results show that the proposed method can reach a satisfactory solution at reasonably low computational cost

  20. Study on a new meteorological sampling scheme developed for the OSCAAR code system

    International Nuclear Information System (INIS)

    Liu Xinhe; Tomita, Kenichi; Homma, Toshimitsu

    2002-03-01

    One important step in Level-3 Probabilistic Safety Assessment is meteorological sequence sampling. Previous studies mainly addressed code systems using the straight-line plume model, and more effort is needed for those using a trajectory puff model, such as the OSCAAR code system. This report describes the development of a new meteorological sampling scheme for the OSCAAR code system that explicitly considers the population distribution. The principles set for the development of this new sampling scheme include completeness, appropriate stratification, optimum allocation, practicability and so on. In this report, the procedures of the new sampling scheme and its application are discussed. The calculation results illustrate that although it is quite difficult to optimize the stratification of meteorological sequences based on a few environmental parameters, the new scheme does gather the most adverse conditions in a single subset of meteorological sequences. The size of this subset may be as small as a few dozen, so that the tail of a complementary cumulative distribution function can remain relatively static in different trials of the probabilistic consequence assessment code. (author)

  1. Investigation of optimal photoionization schemes for Sm by multi-step resonance ionization

    International Nuclear Information System (INIS)

    Cha, H.; Song, K.; Lee, J.

    1997-01-01

    Excited states of Sm atoms are investigated by using multi-color resonance-enhanced multiphoton ionization spectroscopy. Among the ionization signals, the one observed at 577.86 nm corresponds to the most efficient excited state if a 1-color 3-photon scheme is applied, while the level observed at 587.42 nm is the most efficient first excited state if one uses a 2-color scheme. For the 2-color scheme, a level located at 573.50 nm from this first excited state is one of the best second excited states for the optimal photoionization scheme. Based on this ionization scheme, various concentrations of standard solutions of samarium are determined. The minimum amount of sample that can be detected by the 2-color scheme is determined to be 200 fg. The detection sensitivity is limited mainly by the pollution of the graphite atomizer. copyright 1997 American Institute of Physics

  2. Optimal on/off scheme for all-optical switching

    DEFF Research Database (Denmark)

    Kristensen, Philip Trøst; Heuck, Mikkel; Mørk, Jesper

    2012-01-01

    We present a two-pulsed on/off scheme based on coherent control for fast switching of the optical energy in a micro cavity and use calculus of variations to optimize the switching in terms of energy.

  3. Optimized difference schemes for multidimensional hyperbolic partial differential equations

    Directory of Open Access Journals (Sweden)

    Adrian Sescu

    2009-04-01

    Full Text Available In numerical solutions to hyperbolic partial differential equations in multidimensions, in addition to dispersion and dissipation errors, there is a grid-related error (referred to as isotropy error or numerical anisotropy that affects the directional dependence of the wave propagation. Difference schemes are mostly analyzed and optimized in one dimension, wherein the anisotropy correction may not be effective enough. In this work, optimized multidimensional difference schemes with arbitrary order of accuracy are designed to have improved isotropy compared to conventional schemes. The derivation is performed based on Taylor series expansion and Fourier analysis. The schemes are restricted to equally-spaced Cartesian grids, so the generalized curvilinear transformation method and Cartesian grid methods are good candidates.

  4. Optimization of reliability centered predictive maintenance scheme for inertial navigation system

    International Nuclear Information System (INIS)

    Jiang, Xiuhong; Duan, Fuhai; Tian, Heng; Wei, Xuedong

    2015-01-01

    The goal of this study is to propose a reliability-centered predictive maintenance scheme for a complex-structure Inertial Navigation System (INS) with several redundant components. GO Methodology is applied to build the INS reliability analysis model, the GO chart. Components' Remaining Useful Life (RUL) and system reliability are updated dynamically based on the combination of the components' lifetime distribution functions, stress samples, and the system GO chart. Considering the redundant design in the INS, maintenance time is based not only on component RUL, but also (and mainly) on when system reliability fails to meet the set threshold. The definition of component maintenance priority balances three factors: a component's importance to the system, its risk degree, and its detection difficulty. A Maintenance Priority Number (MPN) is introduced, which provides quantitative maintenance priorities for all components. A maintenance unit time cost model is built based on component MPNs, the component RUL predictive model and maintenance intervals for the optimization of the maintenance scope. The proposed scheme can serve as a reference for INS maintenance. Finally, three numerical examples prove that the proposed predictive maintenance scheme is feasible and effective.
    Highlights:
    • A dynamic PdM with a rolling horizon is proposed for INS with redundant components.
    • GO Methodology is applied to build the system reliability analysis model.
    • A concept of MPN is proposed to quantify the maintenance sequence of components.
    • An optimization model is built to select the optimal group of maintenance components.
    • The optimization goal is minimizing the cost of maintaining system reliability

  5. Adaptive multi-objective Optimization scheme for cognitive radio resource management

    KAUST Repository

    Alqerm, Ismail

    2014-12-01

    Cognitive Radio is an intelligent Software-Defined Radio that is capable of altering its transmission parameters according to predefined objectives and wireless environment conditions. The cognitive engine is the actuator that performs radio parameter configuration by exploiting optimization and machine learning techniques. In this paper, we propose an Adaptive Multi-objective Optimization Scheme (AMOS) for cognitive radio resource management to improve spectrum operation and network performance. The optimization relies on adapting radio transmission parameters to environment conditions, using constrained optimization models called fitness functions in an iterative manner. These functions include minimizing power consumption, bit error rate, delay and interference, and maximizing throughput and spectral efficiency. Cross-layer optimization is exploited to access environmental parameters from all TCP/IP stack layers. AMOS uses an adaptive Genetic Algorithm, in terms of its parameters and objective weights, as the vehicle of optimization. The proposed scheme has demonstrated quick response and efficiency in three different scenarios compared to other schemes. In addition, it shows its capability to optimize the performance of the TCP/IP layers as a whole, not only the physical layer.

  6. Optimization of a middle atmosphere diagnostic scheme

    Science.gov (United States)

    Akmaev, Rashid A.

    1997-06-01

    A new assimilative diagnostic scheme based on the use of a spectral model was recently tested on the CIRA-86 empirical model. It reproduced the observed climatology with an annual global rms temperature deviation of 3.2 K in the 15-110 km layer. The most important new component of the scheme is that the zonal forcing necessary to maintain the observed climatology is diagnosed from empirical data and subsequently substituted into the simulation model at the prognostic stage of the calculation in an annual cycle mode. The simulation results are then quantitatively compared with the empirical model, and the above-mentioned rms temperature deviation provides an objective measure of the 'distance' between the two climatologies. This quantitative criterion makes it possible to apply standard optimization procedures to the whole diagnostic scheme and/or the model itself. The estimates of the zonal drag have been improved in this study by introducing a nudging (Newtonian-cooling) term into the thermodynamic equation at the diagnostic stage. A proper optimal adjustment of the strength of this term makes it possible to further reduce the rms temperature deviation of simulations down to approximately 2.7 K. These results suggest that direct optimization can successfully be applied to atmospheric model parameter identification problems of moderate dimensionality.

  7. Laplace-Fourier-domain dispersion analysis of an average derivative optimal scheme for scalar-wave equation

    Science.gov (United States)

    Chen, Jing-Bo

    2014-06-01

    By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from the original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that within the relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments demonstrate the theoretical analysis.

  8. Adaptive multi-objective Optimization scheme for cognitive radio resource management

    KAUST Repository

    Alqerm, Ismail; Shihada, Basem

    2014-01-01

    ... configuration by exploiting optimization and machine learning techniques. In this paper, we propose an Adaptive Multi-objective Optimization Scheme (AMOS) for cognitive radio resource management to improve spectrum operation and network performance...

  9. Optimized spectroscopic scheme for enhanced precision CO measurements with applications to urban source attribution

    Science.gov (United States)

    Nottrott, A.; Hoffnagle, J.; Farinas, A.; Rella, C.

    2014-12-01

    Carbon monoxide (CO) is an urban pollutant generated by internal combustion engines which contributes to the formation of ground level ozone (smog). CO is also an excellent tracer for emissions from mobile combustion sources. In this work we present an optimized spectroscopic sampling scheme that enables enhanced precision CO measurements. The scheme was implemented on the Picarro G2401 Cavity Ring-Down Spectroscopy (CRDS) analyzer which measures CO2, CO, CH4 and H2O at 0.2 Hz. The optimized scheme improved the raw precision of CO measurements by 40% from 5 ppb to 3 ppb. Correlations of measured CO2, CO, CH4 and H2O from an urban tower were partitioned by wind direction and combined with a concentration footprint model for source attribution. The application of a concentration footprint for source attribution has several advantages. The upwind extent of the concentration footprint for a given sensor is much larger than the flux footprint. Measurements of mean concentration at the sensor location can be used to estimate source strength from a concentration footprint, while measurements of the vertical concentration flux are necessary to determine source strength from the flux footprint. Direct measurement of vertical concentration flux requires high frequency temporal sampling and increases the cost and complexity of the measurement system.

  10. Minimizing transient influence in WHPA delineation: An optimization approach for optimal pumping rate schemes

    Science.gov (United States)

    Rodriguez-Pretelin, A.; Nowak, W.

    2017-12-01

    For most groundwater protection management programs, Wellhead Protection Areas (WHPAs) have served as the primary protection measure. In their delineation, the influence of time-varying groundwater flow conditions is often underestimated because steady-state assumptions are commonly made. However, it has been demonstrated that temporal variations lead to significant changes in the required size and shape of WHPAs. Apart from natural transient groundwater drivers (e.g., changes in the regional angle of flow direction and seasonal natural groundwater recharge), anthropogenic causes such as transient pumping rates are among the most influential factors that require larger WHPAs. We hypothesize that WHPA programs that integrate adaptive and optimized pumping-injection management schemes can counter transient effects and thus reduce the additional areal demand of well protection under transient conditions. The main goal of this study is to present a novel management framework that optimizes pumping schemes dynamically, in order to minimize the impact of transient conditions on WHPA delineation. For optimizing pumping schemes, we consider three objectives: 1) to minimize the risk of pumping water from outside a given WHPA, 2) to maximize the groundwater supply and 3) to minimize the involved operating costs. We solve transient groundwater flow with an available transient groundwater model and Lagrangian particle tracking. The optimization problem is formulated as a dynamic programming problem. Two different optimization approaches are explored: the first approach aims for single-objective optimization under objective (1) only; the second approach performs multiobjective optimization under all three objectives, where compromise pumping rates are selected from the current Pareto front. Finally, we look for WHPA outlines that are as small as possible, yet allow the optimization problem to find the most suitable solutions.

  11. Optimization of bitumen-based upgrading and refining schemes

    Energy Technology Data Exchange (ETDEWEB)

    Munteanu, M.; Chen, J. [National Centre for Upgrading Technology, Devon, AB (Canada); Natural Resources Canada, Devon, AB (Canada). CanmetENERGY

    2009-07-01

    This poster highlighted the results of a study in which the entire refining scheme for Canadian bitumen feedstocks was modelled and simulated under different process configurations, operating conditions and product structures. The aim of the study was to optimize the economic benefits, product quality and energy use under a range of operational scenarios. Optimal refining schemes were proposed, along with process conditions for existing refinery configurations and objectives. The goal was to provide guidelines and information for upgrading and refining process design and retrofitting. Critical steps were identified with regard to the upgrading process. It was concluded that the information obtained from this study would lead to significant improvement in process performance and operations, and in reducing the capital cost for building new upgraders and refineries. The simulation results provided valuable information for increasing the marketability of bitumen, reducing greenhouse gas emissions and other environmental impacts associated with bitumen upgrading and refining. tabs., figs.

  12. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low-complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
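
    The block-level optimum described above has a closed form once the quadratic model is fitted. A small Python sketch, with entirely made-up model coefficients and bit budget:

```python
import numpy as np

# Hypothetical fitted block model: PSNR(b) = a*b^2 + c*b + d with a < 0,
# so the quadratic has a maximum (all coefficients are made up).
a, c, d = -0.35, 6.2, 18.0

b_star = -c / (2.0 * a)                        # dPSNR/db = 2*a*b + c = 0
b_star = int(np.clip(round(b_star), 1, 12))    # bit-depths are small integers

bit_budget = 4096                  # assumed bit budget allocated to this block
n_samples = bit_budget // b_star   # CS samples that fit at the chosen depth
```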

  13. DRO: domain-based route optimization scheme for nested mobile networks

    Directory of Open Access Journals (Sweden)

    Chuang Ming-Chin

    2011-01-01

    Full Text Available Abstract The network mobility (NEMO) basic support protocol is designed to support NEMO management, and to ensure communication continuity between nodes in mobile networks. However, in nested mobile networks, NEMO suffers from the pinball routing problem, which results in long packet transmission delays. To solve the problem, we propose a domain-based route optimization (DRO) scheme that incorporates a domain-based network architecture and ad hoc routing protocols for route optimization. DRO also improves the intra-domain handoff performance, reduces the convergence time during route optimization, and avoids the out-of-sequence packet problem. A detailed performance analysis and simulations were conducted to evaluate the scheme. The results demonstrate that DRO outperforms existing mechanisms in terms of packet transmission delay (i.e., better route optimization), intra-domain handoff latency, convergence time, and packet tunneling overhead.

  14. The same number of optimized parameters scheme for determining intermolecular interaction energies

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Ettenhuber, Patrick; Eriksen, Janus Juul

    2015-01-01

    We propose the Same Number Of Optimized Parameters (SNOOP) scheme as an alternative to the counterpoise method for treating basis set superposition errors in calculations of intermolecular interaction energies. The key point of the SNOOP scheme is to enforce that the number of optimized wave... as numerically. Numerical results for second-order Møller-Plesset perturbation theory (MP2) and coupled-cluster with single, double, and approximate triple excitations (CCSD(T)) show that the SNOOP scheme in general outperforms the uncorrected and counterpoise approaches. Furthermore, we show that SNOOP...

  15. Axially perpendicular offset Raman scheme for reproducible measurement of housed samples in a noncircular container under variation of container orientation.

    Science.gov (United States)

    Duy, Pham K; Chang, Kyeol; Sriphong, Lawan; Chung, Hoeil

    2015-03-17

    An axially perpendicular offset (APO) scheme that is able to directly acquire reproducible Raman spectra of samples contained in an oval container under variation of container orientation has been demonstrated. This scheme utilized an axially perpendicular geometry between the laser illumination and the Raman photon detection, namely, irradiation through a sidewall of the container and gathering of the Raman photon just beneath the container. In the case of either backscattering or transmission measurements, Raman sampling volumes for an internal sample vary when the orientation of an oval container changes; therefore, the Raman intensities of acquired spectra are inconsistent. The generated Raman photons traverse the same bottom of the container in the APO scheme; the Raman sampling volumes can be relatively more consistent under the same situation. For evaluation, the backscattering, transmission, and APO schemes were simultaneously employed to measure alcohol gel samples contained in an oval polypropylene container at five different orientations and then the accuracies of the determination of the alcohol concentrations were compared. The APO scheme provided the most reproducible spectra, yielding the best accuracy when the axial offset distance was 10 mm. Monte Carlo simulations were performed to study the characteristics of photon propagation in the APO scheme and to explain the origin of the optimal offset distance that was observed. In addition, the utility of the APO scheme was further demonstrated by analyzing samples in a circular glass container.

  16. Evolutionary Algorithm for Optimal Vaccination Scheme

    International Nuclear Information System (INIS)

    Parousis-Orthodoxou, K J; Vlachos, D S

    2014-01-01

    The following work uses the dynamic capabilities of an evolutionary algorithm in order to obtain an optimal immunization strategy in a user-specified network. The produced algorithm uses a basic genetic algorithm with crossover and mutation techniques in order to locate certain nodes in the input network. These nodes will be immunized in an SIR epidemic spreading process, and the performance of each immunization scheme will be evaluated by the level of containment that it provides for the spreading of the disease
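
    A compact sketch of this idea follows: a genetic algorithm searches for a set of k nodes whose immunization minimizes the mean final size of a discrete-time SIR outbreak. The network model, epidemic parameters and GA settings are all illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Random contact network (Erdos-Renyi), purely illustrative.
N, p_edge = 200, 0.03
A = rng.random((N, N)) < p_edge
A = np.triu(A, 1)
A = A | A.T

def outbreak_size(immunized, beta=0.1, n_seeds=3, steps=50):
    """Discrete-time SIR: final epidemic size with the given nodes immune."""
    state = np.zeros(N, dtype=int)              # 0=S, 1=I, 2=R
    state[immunized] = 2                        # immunization = start recovered
    seeds = rng.choice(np.where(state == 0)[0], n_seeds, replace=False)
    state[seeds] = 1
    for _ in range(steps):
        infected = state == 1
        if not infected.any():
            break
        # Each susceptible contact of an infected node is infected w.p. beta.
        pressure = A[:, infected].sum(axis=1)
        new_inf = (state == 0) & (rng.random(N) < 1 - (1 - beta) ** pressure)
        state[infected] = 2                     # recover after one step
        state[new_inf] = 1
    return (state == 2).sum() - len(immunized)

def fitness(genome):
    """Smaller expected outbreak = fitter immunization set."""
    return -np.mean([outbreak_size(genome) for _ in range(5)])

k, pop_size = 10, 30
pop = [rng.choice(N, size=k, replace=False) for _ in range(pop_size)]
for gen in range(20):
    pop.sort(key=fitness, reverse=True)          # best first
    pop = pop[:pop_size // 2]                    # truncation selection
    while len(pop) < pop_size:
        a, b = rng.choice(pop_size // 2, size=2, replace=False)
        child = np.concatenate([pop[a][:k // 2], pop[b][k // 2:]])  # crossover
        if rng.random() < 0.2:                   # mutation: swap in a new node
            child[rng.integers(k)] = rng.integers(N)
        child = np.unique(child)
        while len(child) < k:                    # repair duplicate nodes
            child = np.unique(np.append(child, rng.integers(N)))
        pop.append(child)
best = max(pop, key=fitness)                     # immunization set to deploy
```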

  17. Optimized low-order explicit Runge-Kutta schemes for high- order spectral difference method

    KAUST Repository

    Parsani, Matteo

    2012-01-01

    Optimal explicit Runge-Kutta (ERK) schemes with large stable step sizes are developed for method-of-lines discretizations based on the spectral difference (SD) spatial discretization on quadrilateral grids. These methods involve many stages and provide the optimal linearly stable time step for a prescribed SD spectrum and the minimum leading truncation error coefficient, while admitting a low-storage implementation. Using a large number of stages, the new ERK schemes lead to efficiency improvements larger than 60% over standard ERK schemes for 4th- and 5th-order spatial discretization.

  18. Optimal Scheme Selection of Agricultural Production Structure Adjustment - Based on DEA Model; Punjab (Pakistan)

    Institute of Scientific and Technical Information of China (English)

    Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan

    2015-01-01

    This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess the comparative efficiency of multiple candidate schemes and, on that basis, to choose the optimal scheme of agricultural production structure adjustment. Based on the results of the DEA model, we analyzed the scale advantages of each candidate scheme and examined in depth the underlying reasons why some schemes were not DEA-efficient, which clarified the approach and methodology for improving these candidate schemes. Finally, another method is proposed to rank the schemes and select the optimal one. The research is important for guiding practice when the adjustment of the agricultural production industrial structure is carried out.

  19. Flexible aluminum tubes and a least square multi-objective non-linear optimization scheme

    International Nuclear Information System (INIS)

    Endelt, Benny; Nielsen, Karl Brian; Olsen, Soeren

    2004-01-01

    The automotive industry currently uses rubber hoses as the media carrier between, e.g., the radiator and the engine, and the basic idea is to replace the rubber hoses with flexible aluminum tubes. Good quality is defined through several quality measurements; in the current case the key objective is to produce a flexible convolution through optimization of the tool geometry, but the process should also be stable, and the process stability is evaluated through Forming Limit Diagrams. Typically the defined objectives are conflicting, so the optimized configuration represents a trade-off between the individual objectives, in this case flexibility versus process stability. The optimization problem is solved by iteratively minimizing the objective function. A second-order least-squares scheme is used for the approximation of the quadratic model, and the change in the design parameters is evaluated through the trust-region scheme, with box constraints introduced within the trust-region framework. Furthermore, the objective function is minimized by applying the non-monotone scheme, and the trust-region subproblem is solved by applying the Cholesky factorization scheme. An optimal bell-shaped geometry is identified and the design is verified experimentally

  20. An Optimization Scheme for Water Pump Control in Smart Fish Farm with Efficient Energy Consumption

    Directory of Open Access Journals (Sweden)

    Israr Ullah

    2018-06-01

    Full Text Available Healthy fish production requires intensive care, and ensuring a stable and healthy production environment inside the farm tank is a challenging task. An Internet of Things (IoT)-based automated system that can continuously monitor the fish tanks with optimal resource utilization is highly desirable. Significant cost reduction can be achieved if farm equipment and water pumps are operated only when required, using optimization schemes. In this paper, we present a general system design for smart fish farms. We have developed an optimization scheme for water pump control that maintains the desired water level in the fish tank with efficient energy consumption through appropriate selection of the pumping flow rate and tank filling level. The proposed optimization scheme attempts to achieve a trade-off between pumping duration and flow rate through selection of an optimized water level. A Kalman filter algorithm is applied to remove error in the sensor readings. We observed through simulation results that the optimization scheme achieves a significant reduction in energy consumption as compared to the two alternative schemes, i.e., pumping with maximum and minimum flow rates. The proposed system can help in collecting data about the farm for long-term analysis and better decision making in the future for efficient resource utilization and overall profit maximization.
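
    Two ingredients of this scheme are easy to sketch: a scalar Kalman filter for the noisy level sensor, and the energy trade-off between pumping duration and flow rate. The pump power model below is a made-up toy chosen only so that an interior optimum exists; none of the constants come from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def kalman_level(measurements, q=1e-4, r=0.05 ** 2):
    """Scalar Kalman filter for a slowly varying level; q and r are assumed
    process and sensor noise variances."""
    x, p = measurements[0], 1.0
    out = []
    for z in measurements:
        p = p + q                    # predict (level modeled as near-constant)
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # correct with the new reading
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

def energy(flow, volume, p_fixed=50.0, k_pump=2.0, alpha=1.7):
    """Toy energy model: fixed overhead while running plus superlinear
    pumping power, giving an interior optimum in the flow rate."""
    duration = volume / flow
    return (p_fixed + k_pump * flow ** alpha) * duration

flows = np.linspace(5.0, 60.0, 200)              # candidate flow rates
best_flow = flows[np.argmin(energy(flows, volume=500.0))]

# Smooth a simulated noisy level trace before applying the on/off threshold.
true_level = np.linspace(0.8, 0.5, 300)          # tank slowly draining
readings = true_level + rng.normal(0.0, 0.05, 300)
smoothed = kalman_level(readings)
pump_on = smoothed[-1] < 0.55                    # switch on below the setpoint
```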

  1. Nearly optimal measurement schemes in a noisy Mach-Zehnder interferometer with coherent and squeezed vacuum

    Energy Technology Data Exchange (ETDEWEB)

    Gard, Bryan T.; You, Chenglong; Singh, Robinjeet; Lee, Hwang; Corbitt, Thomas R.; Dowling, Jonathan P. [Louisiana State University, Baton Rouge, LA (United States); Mishra, Devendra K. [Louisiana State University, Baton Rouge, LA (United States); V.S. Mehta College of Science, Physics Department, Bharwari, UP (India)

    2017-12-15

    The use of an interferometer to perform an ultra-precise parameter estimation under noisy conditions is a challenging task. Here we discuss nearly optimal measurement schemes for a well-known, sensitive input state: squeezed vacuum and coherent light. We find that a single-mode intensity measurement, while the simplest and able to beat the shot-noise limit, is outperformed by other measurement schemes in the low-power regime. However, at high powers, intensity measurement is only outperformed by a small factor. Specifically, we confirm that an optimal measurement choice under lossless conditions is the parity measurement. In addition, we also discuss the performance of several other common measurement schemes when considering photon loss, detector efficiency, phase drift, and thermal photon noise. We conclude that, with noise considerations, homodyne remains near optimal in both the low- and high-power regimes. Surprisingly, some of the remaining investigated measurement schemes, including the previously optimal parity measurement, do not remain even near optimal when noise is introduced. (orig.)

  2. A Distributed Intrusion Detection Scheme about Communication Optimization in Smart Grid

    Directory of Open Access Journals (Sweden)

    Yunfa Li

    2013-01-01

    Full Text Available We first propose an efficient communication optimization algorithm for the smart grid. Based on the optimization algorithm, we propose an intrusion detection algorithm to detect malicious data and possible cyberattacks. In this scheme, each node acts independently when it processes communication flows or cybersecurity threats, and neither special hardware nor node cooperation is needed. In order to justify the feasibility and availability of this scheme, a series of experiments have been done. The results show that it is feasible and efficient to detect malicious data and possible cyberattacks with low computation and communication cost.

  3. Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks.

    Science.gov (United States)

    Robinson, Y Harold; Rajaram, M

    2015-01-01

    A mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade the performance of the network, and multipath selection is a challenging task for improving the network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous-time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds the optimal loop-free paths that form link-disjoint paths in a MANET, and is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, the particle swarm optimization (PSO) method is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek better link-quality nodes in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique.
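
    The PSO component can be illustrated independently of the CTRNN. Below is a canonical global-best PSO loop in Python minimizing a stand-in route-cost function; the cost weights and PSO hyperparameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def route_cost(w):
    """Hypothetical stand-in for the routing objective: combines transmission
    cost, energy factor and traffic ratio through weights w (all made up)."""
    tx_cost, energy, traffic = 3.0, 5.0, 2.0
    penalty = np.sum((w - 0.5) ** 2)          # keeps the toy problem bounded
    return w[0] * tx_cost + w[1] * energy + w[2] * traffic + 10.0 * penalty

# Canonical global-best PSO.
n_particles, dim, iters = 20, 3, 100
w_inertia, c1, c2 = 0.7, 1.5, 1.5
x = rng.uniform(0.0, 1.0, (n_particles, dim))     # positions
v = np.zeros_like(x)                              # velocities
pbest = x.copy()
pbest_val = np.array([route_cost(p) for p in x])
g = pbest[np.argmin(pbest_val)].copy()            # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w_inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, 0.0, 1.0)
    vals = np.array([route_cost(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    g = pbest[np.argmin(pbest_val)].copy()
# g now holds the best weighting found for ranking candidate routes.
```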

  4. Optimal Tradable Credits Scheme and Congestion Pricing with the Efficiency Analysis to Congestion

    Directory of Open Access Journals (Sweden)

    Ge Gao

    2015-01-01

    Full Text Available We allow for three traffic scenarios: the tradable credits scheme, congestion pricing, and no traffic measure. The utility functions of the different modes (car, bus, and bicycle) are developed by considering the impact of income on travelers' behaviors. Their purpose is to analyze the demand distribution across the modes. A social optimization model is built aiming at maximizing social welfare. The optimal tradable credits scheme (distribution of credits, credit charges, and the credit price), congestion pricing fees, bus frequency, and bus fare are obtained by solving the model. Mode choice behavior under the tradable credits scheme is also studied. Numerical examples are presented to demonstrate the model's availability and explore the effects of the three schemes on the traffic system's performance. Results show congestion pricing would earn more social welfare than the other traffic measures. However, the tradable credits scheme gives travelers more consumer surplus than congestion pricing. Travelers' consumer surplus with congestion pricing is the minimum, which harms travelers' benefits. The tradable credits scheme is considered the best scenario when comparing the three scenarios' efficiency.

  5. Planning Framework for Mesolevel Optimization of Urban Runoff Control Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Qianqian; Blohm, Andrew; Liu, Bo

    2017-04-01

    A planning framework is developed to optimize runoff control schemes at scales relevant for regional planning at an early stage. The framework employs less sophisticated modeling approaches to allow practical application in developing regions with limited data sources and computing capability. The methodology contains three interrelated modules: (1) the geographic information system (GIS)-based hydrological module, which aims at assessing local hydrological constraints and potential for runoff control according to regional land-use descriptions; (2) the grading module, which is built upon the method of fuzzy comprehensive evaluation and is used to establish a priority ranking system to assist the allocation of runoff control targets at the subdivision level; and (3) the genetic algorithm-based optimization module, which is included to derive Pareto-based optimal solutions for mesolevel allocation with multiple competing objectives. The optimization approach describes the trade-off between different allocation plans and simultaneously ensures that all allocation schemes satisfy the minimum requirement on runoff control. Our results highlight the importance of considering the mesolevel allocation strategy in addition to measures at macrolevels and microlevels in urban runoff management. (C) 2016 American Society of Civil Engineers.

  6. Optimal design of a hybridization scheme with a fuel cell using genetic optimization

    Science.gov (United States)

    Rodriguez, Marco A.

    Fuel cell is one of the most dependable "green power" technologies, readily available for immediate application. It enables direct conversion of hydrogen and other gases into electric energy without any pollution of the environment. However, efficient power generation is a strictly stationary process that cannot operate under a dynamic environment. Consequently, the fuel cell becomes practical only within a specially designed hybridization scheme, capable of power storage and power management functions. The resultant technology can be utilized to its full potential only when both the fuel cell element and the entire hybridization scheme are optimally designed. Design optimization in engineering is among the most complex computational tasks due to its multidimensionality, nonlinearity, discontinuity and presence of constraints in the underlying optimization problem. This research aims at the optimal utilization of the fuel cell technology through the use of genetic optimization and advanced computing. This study implements genetic optimization in the definition of optimum hybridization rules for a PEM fuel cell/supercapacitor power system. PEM fuel cells exhibit high energy density, but they are not intended for pulsating power draw applications. They work better in steady state operation and thus are often hybridized. In a hybrid system, the fuel cell provides power during steady state operation while capacitors or batteries augment the power of the fuel cell during power surges. Capacitors and batteries can also be recharged when the motor is acting as a generator. Making analogies to driving cycles, three hybrid system operating modes are investigated: 'Flat' mode, 'Uphill' mode, and 'Downhill' mode. In the process of discovering the switching rules for these three modes, we also generate a model of a 30W PEM fuel cell. This study also proposes the optimum design of a 30W PEM fuel cell. The PEM fuel cell model and the hybridization's switching rules are postulated.

  7. Optimization of the two-sample rank Neyman-Pearson detector

    Science.gov (United States)

    Akimov, P. S.; Barashkov, V. M.

    1984-10-01

    The development of optimal algorithms concerned with rank considerations in the case of finite sample sizes involves considerable mathematical difficulties. The present investigation provides results related to the design and analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ..., xn in the experimental communications channel. The rank of an observation is computed on the basis of relations between x and the variable y, representing interference. Attention is given to conditions in the absence of a signal, the probability of detecting an arriving signal, details regarding the utilization of the Neyman-Pearson criterion, the scheme of an optimal rank, multichannel, incoherent detector, and an analysis of the detector.
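
    As a rough illustration of the two-sample rank idea, the sketch below forms a rank-sum statistic of n observations against m reference (interference) readings and calibrates the Neyman-Pearson threshold by Monte Carlo under the noise-only hypothesis; the Gaussian noise model, signal level, and sample sizes are toy assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def rank_statistic(x, y):
    """Sum over observations of the rank of each x_i within the
    reference (noise) sample y -- a two-sample rank statistic."""
    return sum(int(np.sum(y < xi)) for xi in x)

# Calibrate the threshold under H0 (noise only) by Monte Carlo so that
# P(T > threshold | H0) <= alpha, the Neyman-Pearson false-alarm level.
n, m, alpha = 16, 64, 0.01
null_stats = np.array([
    rank_statistic(rng.normal(size=n), rng.normal(size=m))
    for _ in range(20000)
])
threshold = np.quantile(null_stats, 1 - alpha)

# Detection probability for a weak signal added to the observations.
signal_stats = np.array([
    rank_statistic(rng.normal(loc=0.5, size=n), rng.normal(size=m))
    for _ in range(5000)
])
print("P_D ~", float(np.mean(signal_stats > threshold)))
```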

  8. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    Directory of Open Access Journals (Sweden)

    Yongkai An

    2015-07-01

    Full Text Available This paper introduces a surrogate model to identify an optimal exploitation scheme; western Jilin Province was selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu County and Qian Gorlos County so as to supply water to Daan County. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region of the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A contrast between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours, whereas the latter needs 25 days. The above results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process, but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately.
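
    The LHS step used to build the surrogate's training set is easy to sketch: draw one point per equal-probability stratum in each dimension and shuffle the strata across dimensions. The well-rate bounds below are illustrative assumptions, not the study's actual decision variables.

```python
import numpy as np

def latin_hypercube(n, bounds, seed=0):
    """Latin hypercube sample of n points in len(bounds) dimensions.
    bounds is a list of (low, high) pairs for the input variables
    (here: hypothetical pumping rates of the exploitation wells)."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    pts = np.empty((n, d))
    for j, (lo, hi) in enumerate(bounds):
        # one point per equiprobable stratum, strata shuffled per dimension
        strata = rng.permutation(n)
        pts[:, j] = lo + (hi - lo) * (strata + rng.random(n)) / n
    return pts

# e.g. four wells, each pumping 0-5000 m^3/day (illustrative bounds)
design = latin_hypercube(50, [(0.0, 5000.0)] * 4)
```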

  9. 'Massfunktionen' as limit conditions of an optimization scheme for the telecobalt therapy

    International Nuclear Information System (INIS)

    Kirsch, M.; Forth, E.; Schumann, E.

    1978-01-01

    The basic ideas of the 'Score-Funktionen-Modell' of Hope and his collaborators are used for the establishment of the first stage of an optimization scheme for telecobalt therapy. The new 'Massfunktionen' for telecobalt therapy are limit conditions for the criterion of the optimum, i.e., the dose distribution in a body section. The 'Massfunktionen' are an analytic registration of parameters of the dose distribution, such as dose homogeneity in the focal region and sparing of the subcutaneous tissues, the radiosensitive organs and the healthy surroundings of the tumor. The functions are derived from the dose conditions in the irradiated body section. At the current stage of development of the optimization scheme, these functions make it possible to decide whether an irradiation scheme is acceptable or not. (orig.)

  10. Numerical Comparison of Optimal Charging Schemes for Electric Vehicles

    DEFF Research Database (Denmark)

    You, Shi; Hu, Junjie; Pedersen, Anders Bro

    2012-01-01

    of four different charging schemes, namely night charging, night charging with V2G, 24 hour charging and 24 hour charging with V2G, on the basis of real driving data and electricity price of Denmark in 2003. For all schemes, optimal charging plans with 5 minute resolution are derived through the solving...... of a mixed integer programming problem which aims to minimize the charging cost and meanwhile takes into account the users' driving needs and the practical limitations of the EV battery. In the post processing stage, the rainflow counting algorithm is implemented to assess the lifetime usage of a lithium...
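
    The scheduling step can be illustrated with a pared-down linear program (the paper's formulation is a mixed-integer program that also models driving needs and battery limitations): choose 5-minute charging powers that deliver a required energy at minimum cost. The synthetic price series, charger limit, energy need, and availability window below are all assumptions for the sketch.

```python
import numpy as np
from scipy.optimize import linprog

# 5-min slots over 24 h; price[t] is a synthetic stand-in for the
# 2003 Danish electricity price used in the paper.
T, dt = 288, 5 / 60
rng = np.random.default_rng(0)
price = 0.3 + 0.1 * np.sin(np.arange(T) * 2 * np.pi / T) + 0.02 * rng.random(T)

p_max = 3.7          # kW charger limit (assumed)
e_need = 10.0        # kWh required for the next trips (assumed)
away = np.zeros(T, dtype=bool)
away[96:204] = True  # vehicle unavailable 08:00-17:00 (assumed)

# minimize sum_t price[t] * p[t] * dt  subject to  sum_t p[t] * dt = e_need
res = linprog(
    c=price * dt,
    A_eq=np.full((1, T), dt), b_eq=[e_need],
    bounds=[(0.0, 0.0 if away[t] else p_max) for t in range(T)],
    method="highs",
)
plan = res.x  # kW per 5-min slot; charging concentrates in cheap hours
```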

  11. A staggered-grid finite-difference scheme optimized in the time–space domain for modeling scalar-wave propagation in geophysical problems

    International Nuclear Information System (INIS)

    Tan, Sirui; Huang, Lianjie

    2014-01-01

    For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial but not temporal dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil while achieving similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent for similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
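
    The core idea, fitting the operator's Fourier symbol over a band of wavenumbers instead of matching Taylor terms at zero wavenumber, can be sketched as a small least-squares problem. The sketch below optimizes a standard symmetric second-derivative stencil for spatial dispersion only (the paper's scheme uses a different stencil and also controls temporal dispersion); the half-width and wavenumber band are illustrative.

```python
import numpy as np

# Symbol of a symmetric 2nd-derivative stencil applied to exp(i k x):
#   a_0 + 2 * sum_m a_m cos(m k h)  should approximate  -(k h)^2.
M = 4                                      # stencil half-width (assumed)
kh = np.linspace(0.01, 0.8 * np.pi, 400)   # wavenumber band to cover
A = np.column_stack([np.ones_like(kh)] +
                    [2 * np.cos(m * kh) for m in range(1, M + 1)])
target = -kh ** 2                          # exact symbol of d^2/dx^2
coef, *_ = np.linalg.lstsq(A, target, rcond=None)

# relative dispersion error of the fitted operator across the band
err = (A @ coef - target) / np.maximum(kh ** 2, 1e-12)
print("max relative error:", float(np.abs(err).max()))
```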

  12. Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging.

    Science.gov (United States)

    Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin

    2018-04-01

    Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling scheme may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated, covering the sampling starting point, sampling sparsity, and sampling uniformity. In the investigation of the influence of the sampling starting point, we further distinguish two cases according to whether the missing timing sequence between the probe injection and the sampling starting time is considered. Results show that the mean value of BP exhibits an obvious growth trend with increasing delay of the sampling starting point, and has a strong correlation with the sampling sparsity; the growth trend is much more pronounced if the missing timing sequence is discarded. The standard deviation of BP is inversely related to the sampling sparsity, and independent of the sampling uniformity and the delay of the sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results with simpler operations.

  13. Space-partition method for the variance-based sensitivity analysis: Optimal partition scheme and comparative study

    International Nuclear Information System (INIS)

    Zhai, Qingqing; Yang, Jun; Zhao, Yu

    2014-01-01

    Variance-based sensitivity analysis has been widely studied and has established itself among practitioners. Monte Carlo simulation methods are well developed for the calculation of variance-based sensitivity indices, but they do not make full use of each model run. Recently, several works described a scatter-plot partitioning method to estimate the variance-based sensitivity indices from given data, where a single bunch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method for the estimation of variance-based sensitivity indices, and investigates its convergence and other performance characteristics. Since the method depends heavily on the partition scheme, the influence of the partition scheme is discussed and an optimal partition scheme is proposed based on minimizing the estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality for higher-order sensitivity indices. The proposed space-partition method is compared with the more traditional method, and test cases show that it outperforms the traditional one.
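
    The scatter-plot partitioning idea can be sketched compactly: sort the output by one input, cut the axis into equal-count slices, and compare the variance of the slice means with the total variance. The Ishigami-style test function and the partition count below are illustrative choices, not those of the paper's comparative study.

```python
import numpy as np

def first_order_index(x, y, n_bins=20):
    """Scatter-plot partition estimate of the first-order sensitivity
    index S_i = Var(E[Y|X_i]) / Var(Y) from a single sample bunch."""
    y_sorted = y[np.argsort(x)]
    # equal-count partition of the X_i axis
    slices = np.array_split(y_sorted, n_bins)
    cond_means = np.array([s.mean() for s in slices])
    weights = np.array([len(s) for s in slices]) / len(y)
    var_cond_mean = np.sum(weights * (cond_means - y.mean()) ** 2)
    return float(var_cond_mean / y.var())

# Ishigami-style toy model: X2 should dominate, X3 alone is near zero.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, (100_000, 3))
Y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 \
    + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])
for i in range(3):
    print(f"S_{i + 1} ~ {first_order_index(X[:, i], Y):.3f}")
```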

  14. The Optimal Configuration Scheme of the Virtual Power Plant Considering Benefits and Risks of Investors

    Directory of Open Access Journals (Sweden)

    Jingmin Wang

    2017-07-01

    Full Text Available A virtual power plant (VPP) is a special virtual unit that integrates various distributed energy resources (DERs) distributed across the generation and consumption sides. The optimal configuration scheme of the VPP needs to break geographical restrictions to make full use of DERs, considering the uncertainties. First, the components of the DERs and the structure of the VPP are briefly introduced. Next, the cubic exponential smoothing method is adopted to predict the VPP load requirement. Finally, the optimal configuration of the DER capacities inside the VPP is calculated by using portfolio theory and genetic algorithms (GA). The results show that the configuration scheme can optimize the DER capacities under uncertainty, guarantee the economic benefits of investors, and fully utilize the DERs. Therefore, this paper provides a feasible reference for the optimal configuration scheme of the VPP from the perspective of investors.

  15. Revisiting Intel Xeon Phi optimization of Thompson cloud microphysics scheme in Weather Research and Forecasting (WRF) model

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2015-10-01

    The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.

  16. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    Science.gov (United States)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, MODIS remote sensing data, featuring low cost, high timeliness, and moderate/low spatial resolution, for the North China Plain (NCP) study region were first used to carry out mixed-pixel spectral decomposition to extract a useful regionalized indicator parameter from the initially selected indicators, namely the fraction/percentage of winter wheat planting area in each pixel, used as a regionalized indicator variable (RIV) for spatial sampling. Then, the RIV values were spatially analyzed to obtain the spatial structure characteristics (i.e., spatial correlation and variation) of the NCP, which were further processed into scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, based on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes, together with their optimization and optimal selection, were developed as a scientific basis for improving and optimizing existing spatial sampling schemes for large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, the optimal local spatial prediction and the gridded system of extrapolation results implement an adaptive reporting pattern of spatial sampling in accordance with report-covering units, in order to satisfy the actual needs of sampling surveys.

  17. Optimization of reference library used in content-based medical image retrieval scheme

    International Nuclear Information System (INIS)

    Park, Sang Cheol; Sukthankar, Rahul; Mummert, Lily; Satyanarayanan, Mahadev; Zheng Bin

    2007-01-01

    Building an optimal image reference library is a critical step in developing the interactive computer-aided detection and diagnosis (I-CAD) systems of medical images using content-based image retrieval (CBIR) schemes. In this study, the authors conducted two experiments to investigate (1) the relationship between I-CAD performance and size of reference library and (2) a new reference selection strategy to optimize the library and improve I-CAD performance. The authors assembled a reference library that includes 3153 regions of interest (ROI) depicting either malignant masses (1592) or CAD-cued false-positive regions (1561) and an independent testing data set including 200 masses and 200 false-positive regions. A CBIR scheme using a distance-weighted K-nearest neighbor algorithm is applied to retrieve references that are considered similar to the testing sample from the library. The area under the receiver operating characteristic curve (Az) is used as an index to evaluate the I-CAD performance. In the first experiment, the authors systematically increased reference library size and tested I-CAD performance. The result indicates that scheme performance improves initially from Az = 0.715 to 0.874 and then plateaus when the library size reaches approximately half of its maximum capacity. In the second experiment, based on the hypothesis that a ROI should be removed if it performs poorly compared to a group of similar ROIs in a large and diverse reference library, the authors applied a new strategy to identify 'poorly effective' references. By removing 174 identified ROIs from the reference library, I-CAD performance significantly increases to Az = 0.914 (p < 0.01). The study demonstrates that increasing reference library size and removing poorly effective references can significantly improve I-CAD performance.
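
    A distance-weighted K-nearest-neighbor retrieval score of the kind described can be sketched as follows; the Euclidean feature metric, the value of k, and the synthetic feature vectors are assumptions for illustration, since the abstract does not specify the scheme's actual ROI features or similarity measure.

```python
import numpy as np

def dw_knn_score(query, library, labels, k=15, eps=1e-12):
    """Distance-weighted KNN malignancy score: retrieve the k most
    similar reference ROIs (Euclidean distance in feature space,
    assumed here) and weight each vote by inverse distance.
    labels: 1 = malignant mass, 0 = CAD-cued false positive."""
    d = np.linalg.norm(library - query, axis=1)
    nn = np.argsort(d)[:k]                  # indices of retrieved references
    w = 1.0 / (d[nn] + eps)                 # inverse-distance weights
    return float(np.sum(w * labels[nn]) / np.sum(w))

# synthetic 10-feature library standing in for the 3153-ROI database
rng = np.random.default_rng(0)
library = rng.normal(size=(3153, 10))
labels = rng.integers(0, 2, size=3153)
print(dw_knn_score(rng.normal(size=10), library, labels))
```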

  18. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify the competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.

  19. Optimization of the scheme for natural ecology planning of urban rivers based on ANP (analytic network process) model.

    Science.gov (United States)

    Zhang, Yichuan; Wang, Jiangping

    2015-07-01

    Rivers serve as a highly valued component in ecosystems and urban infrastructures. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for the successful construction of urban rivers; therefore, relevant studies on the optimization of schemes for the natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were included as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes can be made by the ANP method for the natural ecology planning of urban rivers. This method can be used to provide references for the sustainable development and construction of urban rivers. The ANP method is also suitable for the optimization of schemes for urban green space planning and design.

  20. Modeling of frequency-domain scalar wave equation with the average-derivative optimal scheme based on a multigrid-preconditioned iterative solver

    Science.gov (United States)

    Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue

    2018-01-01

    An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage savings for the system of linear equations and of flexibility for arbitrary directional sampling intervals. However, using a LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BI-CGSTAB iterative solver fit for the average-derivative optimal scheme. The choice of preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computations with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that an unequal directional sampling interval will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of the directional sampling interval in the discretization.

  1. Energy-Efficient Optimization for HARQ Schemes over Time-Correlated Fading Channels

    KAUST Repository

    Shi, Zheng

    2018-03-19

    Energy efficiency of three common hybrid automatic repeat request (HARQ) schemes, including Type I HARQ, HARQ with chase combining (HARQ-CC) and HARQ with incremental redundancy (HARQ-IR), is analyzed, and joint power allocation and rate selection to maximize the energy efficiency is investigated in this paper. Unlike prior literature, time-correlated fading channels are considered, and two widely used quality of service (QoS) constraints, i.e., outage and goodput constraints, are also considered in the optimization, which further differentiates this work from prior ones. Using a unified expression of asymptotic outage probabilities, optimal transmission powers and the optimal rate are derived in closed form to maximize the energy efficiency while satisfying the QoS constraints. These closed-form solutions then enable a thorough analysis of the maximal energy efficiencies of various HARQ schemes. It is revealed that with a low outage constraint, the maximal energy efficiency achieved by Type I HARQ is $\frac{1}{4\ln 2}$ bits/J, while HARQ-CC and HARQ-IR can achieve the same maximal energy efficiency of $\frac{\kappa_\infty}{4\ln 2}$ bits/J, where $\kappa_\infty = 1.6617$. Moreover, time correlation in the fading channels has a negative impact on the energy efficiency, while a large maximal allowable number of transmissions is favorable for the improvement of energy efficiency. The effectiveness of the energy-efficient optimization is verified by extensive simulations, and the results also show that HARQ-CC achieves the best tradeoff between energy efficiency and spectral efficiency among the three HARQ schemes.

  2. Further optimization of a parallel double-effect organosilicon distillation scheme through exergy analysis

    International Nuclear Information System (INIS)

    Sun, Jinsheng; Dai, Leilei; Shi, Ming; Gao, Hong; Cao, Xijia; Liu, Guangxin

    2014-01-01

    In our previous work, a significant improvement in organosilicon monomer distillation using parallel double-effect heat integration between a heavies removal column and six other columns, as well as heat integration between the methyltrichlorosilane and dimethylchlorosilane columns, reduced the total exergy loss of the currently running counterpart by 40.41%. Further research on this optimized scheme demonstrated that it was necessary to reduce the higher operating pressure of the methyltrichlorosilane column, which is required for heat integration between the methyltrichlorosilane and dimethylchlorosilane columns. Therefore, in this contribution, a challenger scheme is presented in which heat pumps are introduced separately from the originally heat-coupled methyltrichlorosilane and dimethylchlorosilane columns of the above-mentioned optimized scheme, which serves as the prototype for this work. Both schemes are simulated using the same purity requirements used in running industrial units. The thermodynamic properties from the simulation are used to calculate the energy consumption and exergy loss of the two schemes. The results show that the heat pump option further reduces the flowsheet energy consumption and exergy loss by 27.35% and 10.98%, respectively, relative to the prototype scheme. These results indicate that heat pumps are superior to heat integration in the context of energy savings during organosilicon monomer distillation. - Highlights: • Combines parallel double-effect and heat pump distillation in organosilicon distillation. • Compares the double-effect option with the heat pump option in terms of energy savings. • Further cuts the flowsheet energy consumption and exergy loss by 27.35% and 10.98%, respectively.

  3. Optimal Resource Allocation for NOMA-TDMA Scheme with α-Fairness in Industrial Internet of Things.

    Science.gov (United States)

    Sun, Yanjing; Guo, Yiyu; Li, Song; Wu, Dapeng; Wang, Bin

    2018-05-15

    In this paper, a joint non-orthogonal multiple access and time division multiple access (NOMA-TDMA) scheme is proposed for the Industrial Internet of Things (IIoT), which allows multiple sensors to transmit in the same time-frequency resource block using NOMA. The user scheduling, time slot allocation, and power control are jointly optimized in order to maximize the system α-fair utility under a transmit power constraint and a minimum rate constraint. The optimization problem is nonconvex because of the fractional objective function and the nonconvex constraints. To deal with the original problem, we first convert the objective function into a difference of two convex functions (D.C.) form, and then propose a NOMA-TDMA-DC algorithm to find the global optimum. Numerical results show that the NOMA-TDMA scheme significantly outperforms the traditional orthogonal multiple access scheme in terms of both spectral efficiency and user fairness.

  4. Subdivision, Sampling, and Initialization Strategies for Simplical Branch and Bound in Global Optimization

    DEFF Research Database (Denmark)

    Clausen, Jens; Zilinskas, A.

    2002-01-01

    We consider the problem of optimizing a Lipschitzian function. The branch and bound technique is a well-known solution method, and the key components for this are the subdivision scheme, the bound calculation scheme, and the initialization. For Lipschitzian optimization, the bound calculations are...

  5. Optimized helper data scheme for biometric verification under zero leakage constraint

    NARCIS (Netherlands)

    Groot, de J.A.; Linnartz, J.P.M.G.

    2012-01-01

    In biometric verification, special measures are needed to prevent a dishonest verifier from stealing privacy-sensitive information about the prover from the template database. We introduce an improved version of the zero leakage quantization scheme, which optimizes detection performance in terms of...

  6. A numerical scheme for optimal transition paths of stochastic chemical kinetic systems

    International Nuclear Information System (INIS)

    Liu Di

    2008-01-01

    We present a new framework for finding the optimal transition paths of metastable stochastic chemical kinetic systems with large system size. The optimal transition paths are identified to be the most probable paths according to the Large Deviation Theory of stochastic processes. Dynamical equations for the optimal transition paths are derived using the variational principle. A modified Minimum Action Method (MAM) is proposed as a numerical scheme to solve the optimal transition paths. Applications to Gene Regulatory Networks such as the toggle switch model and the Lactose Operon Model in Escherichia coli are presented as numerical examples

  7. An optimal probabilistic multiple-access scheme for cognitive radios

    KAUST Repository

    Hamza, Doha R.; Aïssa, Sonia

    2012-01-01

    We study a time-slotted multiple-access system with a primary user (PU) and a secondary user (SU) sharing the same channel resource. The SU senses the channel at the beginning of the slot. If found free, it transmits with probability 1. If busy, it transmits with a certain access probability that is a function of its queue length and whether it has a new packet arrival. Both users, i.e., the PU and the SU, transmit with a fixed transmission rate by employing a truncated channel inversion power control scheme. We consider the case of erroneous sensing. The goal of the SU is to optimize its transmission scheduling policy to minimize its queueing delay under constraints on its average transmit power and the maximum tolerable primary outage probability caused by the miss detection of the PU. We consider two schemes regarding the secondary's reaction to transmission errors. Under the so-called delay-sensitive (DS) scheme, the packet received in error is removed from the queue to minimize delay, whereas under the delay-tolerant (DT) scheme, the said packet is kept in the buffer and is retransmitted until correct reception. Using the latter scheme, there is a probability of buffer loss that is also constrained to be lower than a certain specified value. We also consider the case when the PU maintains an infinite buffer to store its packets. In the latter case, we modify the SU access scheme to guarantee the stability of the PU queue. We show that the performance significantly changes if the realistic situation of a primary queue is considered. In all cases, although the delay minimization problem is nonconvex, we show that the access policies can be efficiently obtained using linear programming and grid search over one or two parameters. © 1967-2012 IEEE.

  9. Prospective and retrospective spatial sampling scheme to characterize geochemicals in a mine tailings area

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-07-01

    Full Text Available This study demonstrates that designing sampling schemes using simulated annealing results in much better selection of samples from an existing scheme in terms of prediction accuracy. The presentation to the SASA Eastern Cape Chapter as an invited...

  10. Optimization Route of Food Logistics Distribution Based on Genetic and Graph Cluster Scheme Algorithm

    OpenAIRE

    Jing Chen

    2015-01-01

    Taking the concept of food logistics distribution as its point of departure, this study aims at the optimization of food logistics distribution routes: it analyzes the optimization model of the food logistics route and, with an interpretation of the genetic algorithm, discusses the optimization of food logistics distribution routes based on the genetic and graph cluster scheme algorithm.
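
    As an illustration of the genetic-algorithm side of such route optimization, below is a toy permutation-coded GA for a single-vehicle delivery tour (order crossover, swap mutation, tournament selection, elitism); the graph-cluster stage of the study is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def route_length(route, dist):
    """Length of the closed tour visiting all customers once."""
    return float(sum(dist[route[i], route[(i + 1) % len(route)]]
                     for i in range(len(route))))

def order_crossover(p1, p2):
    """OX crossover: copy a slice from p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(rng.integers(n, size=2))
    seg = set(p1[a:b + 1].tolist())
    child = np.empty(n, dtype=int)
    child[a:b + 1] = p1[a:b + 1]
    fill = [g for g in p2 if g not in seg]
    idx = [i for i in range(n) if i < a or i > b]
    child[idx] = fill
    return child

def ga_route(dist, pop_size=100, gens=300, p_mut=0.3):
    n = dist.shape[0]
    pop = [rng.permutation(n) for _ in range(pop_size)]
    for _ in range(gens):
        fit = np.array([route_length(r, dist) for r in pop])
        new_pop = [pop[int(fit.argmin())].copy()]            # elitism
        while len(new_pop) < pop_size:
            i1, i2, i3, i4 = rng.integers(pop_size, size=4)  # 2 tournaments
            p1 = pop[i1 if fit[i1] < fit[i2] else i2]
            p2 = pop[i3 if fit[i3] < fit[i4] else i4]
            child = order_crossover(p1, p2)
            if rng.random() < p_mut:                         # swap mutation
                a, b = rng.integers(n, size=2)
                child[a], child[b] = child[b], child[a]
            new_pop.append(child)
        pop = new_pop
    fit = np.array([route_length(r, dist) for r in pop])
    return pop[int(fit.argmin())], float(fit.min())

sites = rng.random((15, 2))                  # customer coordinates (toy)
dist = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
best, length = ga_route(dist)
```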

  11. A Novel Scheme for Optimal Control of a Nonlinear Delay Differential Equations Model to Determine Effective and Optimal Administrating Chemotherapy Agents in Breast Cancer.

    Science.gov (United States)

    Ramezanpour, H R; Setayeshi, S; Akbari, M E

    2011-01-01

    Determining an optimal and effective scheme for administering chemotherapy agents in breast cancer is the main goal of this scientific research. The most important issue here is the amount of drug or radiation administered in chemotherapy and radiotherapy for increasing the patient's survival, because the therapy not only kills the tumor cells, but also kills some of the healthy tissue and causes serious damage. In this paper we investigate the effect of optimal drug scheduling for a breast cancer model consisting of nonlinear ordinary differential time-delay equations. A mathematical model of breast cancer tumors is discussed and optimal control theory is then applied to find the optimal drug adjustment as an input control of the system. Finally, we use the Sensitivity Approach (SA) to solve the optimal control problem. The goal is to determine an optimal and effective scheme for administering the chemotherapy agent, so that the tumor is eradicated while the immune system remains above a suitable level. Simulation results confirm the effectiveness of our proposed procedure. In this paper a new scheme is proposed to design a therapy protocol for chemotherapy in breast cancer: in contrast to traditional pulsed drug delivery, a continuous process is offered and optimized according to optimal control theory for time-delay systems.

  12. An optimal guarding scheme for thermal conductivity measurement using a guarded cut-bar technique, part 1 experimental study

    International Nuclear Information System (INIS)

    Xing, Changhu

    2014-01-01

    In the guarded cut-bar technique, a guard surrounding the measured sample and reference (meter) bars is temperature controlled to carefully regulate heat losses from the sample and reference bars. Guarding is typically carried out by matching the temperature profiles between the guard and the test stack of sample and meter bars. Problems arise in matching the profiles, especially when the thermal conductivities of the meter bars and of the sample differ, as is usually the case. In a previous numerical study, the applied guarding condition (guard temperature profile) was found to be an important factor in measurement accuracy. In contrast to the linear-matched or isothermal schemes recommended in the literature, the optimal guarding condition depends on the system geometry and the thermal conductivity ratio of sample to meter bar. To validate the numerical results, an experimental study was performed to investigate the resulting error under different guarding conditions, using stainless steel 304 as both the sample and the meter bars. The optimal guarding condition was further verified on a certified reference material, Pyroceram 9606, and on 99.95% pure iron, whose thermal conductivities are much smaller and much larger, respectively, than that of the stainless steel meter bars. Additionally, measurements were performed using three different inert gases to show the effect of the insulation's effective thermal conductivity on measurement error, revealing that the low-conductivity argon gas gives the lowest error sensitivity when deviating from the optimal condition. The results of this study provide a general guideline for this specific measurement method and for methods requiring optimal guarding or insulation.

  13. The optimal sampling of outsourcing product

    International Nuclear Information System (INIS)

    Yang Chao; Pei Jiacheng

    2014-01-01

    In order to improve quality and reduce cost, c = 0 sampling plans have been introduced for the inspection of outsourced product. According to the current quality level (p = 0.4%), we determined the optimal sampling plan, which is: Ac = 0; if N ≤ 3000, n = 55; if 3001 ≤ N ≤ 10000, n = 86; if N ≥ 10001, n = 108. Through analyzing the OC curve, we came to the conclusion that when N ≤ 3000, the protective ability of the optimal sampling plan for product quality is stronger than that of the current sampling. Corresponding to the same consumer's risk, the product quality under the optimal sampling plan is superior to that under the current sampling. (authors)
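
    For a c = 0 plan the OC curve has a simple closed form under the binomial approximation, P_a(p) = (1 − p)^n, so the record's three plans can be compared directly; the short sketch below plots them (the plotting details are incidental choices).

```python
import numpy as np
import matplotlib.pyplot as plt

def oc_curve(n, p):
    """OC curve of a c = 0 attribute plan: the lot is accepted only if
    the sample of n items contains zero defectives (binomial
    approximation, reasonable when n is small relative to lot size N)."""
    return (1.0 - p) ** n

p = np.linspace(0.0, 0.05, 200)
for n in (55, 86, 108):        # the record's plans for the three lot ranges
    plt.plot(p, oc_curve(n, p), label=f"n = {n}, Ac = 0")
plt.axvline(0.004, ls="--", label="current quality level p = 0.4%")
plt.xlabel("lot fraction defective p")
plt.ylabel("probability of acceptance")
plt.legend()
plt.show()
```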

  14. β-NMR sample optimization

    CERN Document Server

    Zakoucka, Eva

    2013-01-01

    During my summer student programme I was working on sample optimization for a new β-NMR project at the ISOLDE facility. The β-NMR technique is well-established in solid-state physics and just recently it is being introduced for applications in biochemistry and life sciences. The β-NMR collaboration will be applying for beam time to the INTC committee in September for three nuclei: Cu, Zn and Mg. Sample optimization for Mg was already performed last year during the summer student programme. Therefore sample optimization for Cu and Zn had to be completed as well for the project proposal. My part in the project was to perform thorough literature research on techniques studying Cu and Zn complexes in native conditions, search for relevant binding candidates for Cu and Zn applicable for β-NMR and eventually evaluate selected binding candidates using UV-VIS spectrometry.

  15. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    Science.gov (United States)

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to three major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shorten the acquisition time and improve the quality of FP reconstructions. They may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
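
    One simple way to realize such signal-structure-aware non-uniform Fourier sampling is to place concentric LED rings whose spacing grows with illumination angle, concentrating samples at low spatial frequencies; the geometry below is a schematic stand-in, not the dimensions of the authors' 3D-printed illuminator.

```python
import numpy as np

def nonuniform_led_rings(n_rings=5, base_count=8, gamma=2.0, r_max=1.0):
    """Schematic non-uniform sampling pattern: concentric LED rings with
    radial spacing that grows with radius, so samples concentrate where
    biological specimens hold most signal energy (low frequencies).
    gamma > 1 stretches the outer rings apart; gamma = 1 is uniform."""
    positions = [(0.0, 0.0)]                    # central (on-axis) LED
    for k in range(1, n_rings + 1):
        r = r_max * (k / n_rings) ** gamma      # radii denser near center
        m = base_count * k                      # LEDs on this ring
        for theta in np.linspace(0, 2 * np.pi, m, endpoint=False):
            positions.append((r * np.cos(theta), r * np.sin(theta)))
    return np.array(positions)

leds = nonuniform_led_rings()
print(len(leds), "LED positions")   # far fewer than a full uniform grid
```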

  16. Study on the structure optimization scheme design of a double-tube once-through steam generator

    International Nuclear Information System (INIS)

    Wei, Xinyu; Wu, Shifa; Wang, Pengfei; Zhao, Fuyu

    2016-01-01

    A double-tube once-through steam generator (DOTSG) consisting of an outer straight tube and an inner helical tube is studied in this work. First, the structure of the DOTSG is optimized by considering two different objective functions: the tube length and the total pressure drop are taken as the first and second objective functions, respectively. Because the DOTSG is divided into subcooled, boiling, and superheated sections according to the different secondary fluid states, the pitches in the three sections are defined as the optimization variables. A multi-objective optimization model is established and solved by particle swarm optimization. The optimized pitch is small in the subcooled and superheated regions, and large in the boiling region. Considering the availability of the optimum structure at power levels below 100% full power, we propose a new operating scheme that can fix the boundaries between the three heat-transfer sections. The operating scheme is proposed on the basis of full-power data, and the operating parameters are calculated at low power levels. The primary inlet and outlet temperatures, as well as the flow rate and the secondary outlet temperature, are changed according to the operating procedure.

  18. Towards an optimal sampling strategy for assessing genetic variation within and among white clover (Trifolium repens L.) cultivars using AFLP

    Directory of Open Access Journals (Sweden)

    Khosro Mehdi Khanlou

    2011-01-01

    Full Text Available Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants, such as the aforementioned, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using fewer than 15 plants per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of the total intra-cultivar genetic variation was covered. Based on AMOVA, a 20-plant sample per cultivar was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.

  19. Intel Many Integrated Core (MIC) architecture optimization strategies for a memory-bound Weather Research and Forecasting (WRF) Goddard microphysics scheme

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Goddard cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. WRF is a widely used weather prediction system, whose development is carried out collaboratively around the globe. The Goddard microphysics scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Goddard scheme incorporates a large number of improvements. Thus, we have optimized the code of this important part of WRF. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The Intel MIC is capable of executing a full operating system and entire programs, rather than just kernels as GPUs do. The MIC coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the original code on the Xeon Phi 7120P by a factor of 4.7x. Furthermore, the same optimizations improved the performance on a dual-socket Intel Xeon E5-2670 system by a factor of 2.8x compared to the original code.

  20. Optimization of Compton-suppression and summing schemes for the TIGRESS HPGe detector array

    Science.gov (United States)

    Schumaker, M. A.; Svensson, C. E.; Andreoiu, C.; Andreyev, A.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Boston, A. J.; Chakrawarthy, R. S.; Churchman, R.; Drake, T. E.; Finlay, P.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Jones, B.; Maharaj, R.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Scraggs, H. C.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Watters, L. M.

    2007-04-01

    Methods of optimizing the performance of an array of Compton-suppressed, segmented HPGe clover detectors have been developed which rely on the physical position sensitivity of both the HPGe crystals and the Compton-suppression shields. These relatively simple analysis procedures promise to improve the precision of experiments with the TRIUMF-ISAC Gamma-Ray Escape-Suppressed Spectrometer (TIGRESS). Suppression schemes will improve the efficiency and peak-to-total ratio of TIGRESS for high γ-ray multiplicity events by taking advantage of the 20-fold segmentation of the Compton-suppression shields, while the use of different summing schemes will improve results for a wide range of experimental conditions. The benefits of these methods are compared for many γ-ray energies and multiplicities using a GEANT4 simulation, and the optimal physical configuration of the TIGRESS array under each set of conditions is determined.

  1. Optimal Performance of a Nonlinear Gantry Crane System via Priority-based Fitness Scheme in Binary PSO Algorithm

    International Nuclear Information System (INIS)

    Jaafar, Hazriq Izzuan; Ali, Nursabillilah Mohd; Selamat, Nur Asmiza; Kassim, Anuar Mohamed; Mohamed, Z; Abidin, Amar Faiz Zainal; Jamian, J J

    2013-01-01

    This paper presents the development of optimal PID and PD controllers for controlling a nonlinear gantry crane system. A Binary Particle Swarm Optimization (BPSO) algorithm using a priority-based fitness scheme is adopted to obtain the five optimal controller gains. The optimal gains are tested on a control structure that combines PID and PD controllers to examine system responses, including trolley displacement and payload oscillation. The dynamic model of the gantry crane system is derived using the Lagrange equation. Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady state error (SSE) and overshoot (OS). The results demonstrate that the implementation of the priority-based fitness scheme in BPSO is effective and able to move the trolley as fast as possible to the various desired positions.

  2. Optimal design of sampling and mapping schemes in the radiometric exploration of Chipilapa, El Salvador (Geo-statistics)

    International Nuclear Information System (INIS)

    Balcazar G, M.; Flores R, J.H.

    1992-01-01

    As part of the radiometric surface exploration carried out in the Chipilapa geothermal field, El Salvador, the geo-statistical parameters were considered starting from the variogram calculated from the field data. The maximum correlation distance of the radon samples in the different observation directions (N-S, E-W, NW-SE, NE-SW) was 121 m, which defines the monitoring grid for future prospecting in the same area. From this, an optimization (minimum cost) of the spacing of the field samples was derived by means of geo-statistical techniques, without losing the detection of the anomaly. (Author)
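
    The variogram reasoning in this record can be made concrete with the classical (Matheron) empirical semivariogram; the correlation distance is then read off as the lag at which the curve levels out. The synthetic radon-like readings and lag bins below are assumptions for illustration.

```python
import numpy as np

def empirical_variogram(coords, values, lags, tol):
    """Classical (Matheron) semivariogram estimate:
    gamma(h) = half the mean squared difference over point pairs
    whose separation falls within tol of each lag h."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    dv2 = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)        # each pair counted once
    gamma = []
    for h in lags:
        mask = np.abs(d[iu] - h) <= tol
        gamma.append(0.5 * dv2[iu][mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# synthetic radon-like readings on a survey area (stand-in for field data)
rng = np.random.default_rng(0)
coords = rng.uniform(0, 500, (200, 2))            # metres
values = np.sin(coords[:, 0] / 120) + 0.3 * rng.normal(size=200)
print(empirical_variogram(coords, values,
                          lags=np.arange(20, 300, 40), tol=20))
```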

  3. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  5. Optimal Interpolation scheme to generate reference crop evapotranspiration

    Science.gov (United States)

    Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco

    2018-05-01

    We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, the forcing meteorological variables, and their respective error variances for the Iberian Peninsula for the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically-based climate model. We used five OI schemes to generate grids of the five observed climate variables needed to compute ETo with the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids is less sensitive to variations in the density and distribution of the observational network than that of grids generated by other interpolation methods. This is because our implementation of the OI method uses a physically-based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions and provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network substantially reduce the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between the quantity and the quality of observations.
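
    The OI analysis step itself is compact: with background-error covariance B, observation-error covariance R, and observation operator H, the analysis is x_a = x_b + BH^T(HBH^T + R)^{-1}(y − Hx_b). The 1-D toy below uses assumed covariances and station locations, not the study's model-based background.

```python
import numpy as np

def oi_analysis(xb, B, H, y, R):
    """One Optimal Interpolation update: blend the background field xb
    with observations y, weighting by the background (B) and observation
    (R) error covariances; also returns the analysis-error covariance."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    xa = xb + K @ (y - H @ xb)
    A = (np.eye(len(xb)) - K @ H) @ B              # analysis error cov.
    return xa, A

# 1-D toy: 50 grid cells of a background field, 5 station observations
n = 50
grid = np.arange(n, dtype=float)
B = np.exp(-np.abs(grid[:, None] - grid[None, :]) / 10.0)  # correlated bkg
xb = np.zeros(n)                                   # background (assumed)
obs_idx = [5, 14, 23, 35, 44]                      # station cells (assumed)
H = np.zeros((5, n))
H[range(5), obs_idx] = 1.0                         # point-sampling operator
y = np.array([1.2, 0.8, 1.5, 0.3, 0.9])            # observed values
R = 0.1 * np.eye(5)                                # obs error variance
xa, A = oi_analysis(xb, B, H, y, R)                # smooth analysis field
```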

  6. Optimization study on multiple train formation scheme of urban rail transit

    Science.gov (United States)

    Xia, Xiaomei; Ding, Yong; Wen, Xin

    2018-05-01

    The new organization method, represented by the mixed operation of trains with different formations (marshalling), can adapt to unevenly distributed passenger flow, but research on this approach is still limited. This paper introduces the passenger sharing rate and a congestion penalty coefficient for different train formations. On this basis, it establishes an optimization model with minimum passenger cost and operation cost as objectives, and operation frequency and passenger demand as constraints. The ideal point method is used to solve this model. Compared with the fixed-marshalling operation model, the proposed scheme reduces the overall cost by 9.24% and 4.43%, respectively. This result not only validates the model, but also illustrates the advantages of the multiple train formation scheme.
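
    The ideal point method mentioned above can be illustrated compactly: each objective is first minimized on its own to locate the ideal point, and the compromise solution then minimizes the distance to that point. The sketch below uses two hypothetical quadratic objectives standing in for passenger cost and operation cost:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2       # stand-in for passenger cost
    f2 = lambda x: x[0] ** 2 + (x[1] - 2) ** 2       # stand-in for operation cost
    x0 = np.zeros(2)

    f1_star = minimize(f1, x0).fun                   # ideal value of objective 1
    f2_star = minimize(f2, x0).fun                   # ideal value of objective 2

    # compromise solution: closest feasible point to the ideal point (f1*, f2*)
    dist = lambda x: np.hypot(f1(x) - f1_star, f2(x) - f2_star)
    print("compromise solution:", minimize(dist, x0).x)
    ```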

  7. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., the smoothness parameter(s) s and the energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE), or parallel tempering, is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter-choice problem can be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation of 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.
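
    For readers unfamiliar with EDS, the reference-state potential that the smoothness parameter s acts on has a simple log-sum-exp form; the sketch below (illustrative values and units) evaluates it for two end-states:

    ```python
    import numpy as np

    # minimal sketch (assumed notation): the EDS reference potential enveloping
    # N end-state energies E_i with smoothness parameter s and energy offsets dE_i,
    #   V_R = -(1 / (s * beta)) * ln( sum_i exp(-s * beta * (E_i - dE_i)) )
    def eds_reference_energy(E, dE, s, beta):
        a = -s * beta * (np.asarray(E) - np.asarray(dE))
        return -np.logaddexp.reduce(a) / (s * beta)   # log-sum-exp for stability

    # illustrative numbers only: two end-states at T ~ 300 K (beta in mol/kJ)
    print(eds_reference_energy(E=[-10.0, -12.5], dE=[0.0, -2.0], s=0.3, beta=1.0 / 2.49))
    ```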

  8. Optimization Model for Machinery Selection of Multi-Crop Farms in Elsuki Agricultural Scheme

    Directory of Open Access Journals (Sweden)

    Mysara Ahmed Mohamed

    2017-07-01

    Full Text Available An optimization model for farm machinery was developed to aid decision-makers and farm machinery managers in determining the optimal number of tractors, scheduling agricultural operations, and minimizing total machinery costs. For model verification, validation, and application, input data were collected from primary and secondary sources at the Elsuki agricultural scheme for two seasons, namely 2011-2012 and 2013-2014. Model verification was performed by comparing the number of tractors at the Elsuki agricultural scheme for season 2011-2012 with the number estimated by the model. The model succeeded in reducing both the number of tractors and total operation cost by 23%. The effect of the optimization model on the elements of direct cost saving indicated that the highest cost saving is reached with depreciation, repair and maintenance (23%) and the minimum cost saving is attained with fuel cost (22%). Sensitivity analysis in terms of changes in model inputs (cultivated area and total operation costs) showed that: increasing total operation cost by 10% decreased the total number of tractors after optimization by 23%, and total operation cost also decreased by 23%; increasing the cultivated area by 10% decreased the total number of tractors after optimization by 12%, and total operation cost also decreased by 12%, from 16,669,206 SDG (1,111,280 $) to 14,636,376 SDG (975,758 $). Varying both inputs (area and total operation cost) together decreased the maximum number of tractors by 12%, and total operation cost also decreased by 12%. It is recommended to apply the optimization model as a prerequisite for improving machinery management during the implementation of machinery scheduling.
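
    The core of such a machinery-selection model is a small integer linear program; the sketch below (hypothetical costs and capacities, not Elsuki data) chooses the number of tractors of each type to cover a seasonal workload at minimum cost:

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint

    cost = np.array([12000.0, 18000.0])     # hypothetical seasonal cost per tractor type
    capacity = np.array([90.0, 150.0])      # hectares one tractor of each type can work
    area_required = 600.0                   # hectares to cover in the season

    # minimize total cost subject to covering the area, with integer tractor counts
    res = milp(c=cost,
               constraints=LinearConstraint(capacity[None, :], lb=area_required),
               integrality=np.ones(2))      # decision variables are non-negative integers
    print("tractors per type:", res.x, "total cost:", res.fun)
    ```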

  9. Experimental research of UWB over fiber system employing 128-QAM and ISFA-optimized scheme

    Science.gov (United States)

    He, Jing; Xiang, Changqing; Long, Fengting; Chen, Zuo

    2018-05-01

    In this paper, an optimized intra-symbol frequency-domain averaging (ISFA) scheme is proposed and experimentally demonstrated in an intensity-modulation and direct-detection (IMDD) multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system. According to the channel responses of the three MB-OFDM UWB sub-bands, the optimal ISFA window size for each sub-band is investigated. After 60-km standard single-mode fiber (SSMF) transmission, the experimental results show that, at the bit error rate (BER) of 3.8 × 10⁻³, the receiver sensitivity of 128-quadrature amplitude modulation (QAM) can be improved by 1.9 dB using the proposed enhanced ISFA scheme combined with training sequence (TS)-based channel estimation, compared with conventional TS-based channel estimation. Moreover, the spectral efficiency (SE) is up to 5.39 bit/s/Hz.
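
    ISFA itself is a simple operation: the per-subcarrier least-squares channel estimates are averaged over a window of adjacent subcarriers, trading frequency resolution for noise suppression. A minimal sketch, assuming a 1D array of complex channel estimates and a window half-width w:

    ```python
    import numpy as np

    def isfa(H_ls, w):
        # average each subcarrier's estimate over its 2*w+1 neighbours;
        # the divisor accounts for shorter windows at the band edges
        k = np.ones(2 * w + 1)
        num = np.convolve(H_ls, k, mode="same")
        den = np.convolve(np.ones(len(H_ls)), k, mode="same")
        return num / den

    rng = np.random.default_rng(0)
    H_true = np.exp(1j * 0.01 * np.arange(128))        # slowly varying channel
    H_ls = H_true + 0.1 * (rng.standard_normal(128) + 1j * rng.standard_normal(128))
    print("MSE before/after:",
          np.mean(abs(H_ls - H_true) ** 2), np.mean(abs(isfa(H_ls, 3) - H_true) ** 2))
    ```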

  10. An Optimally Stable and Accurate Second-Order SSP Runge-Kutta IMEX Scheme for Atmospheric Applications

    Science.gov (United States)

    Rokhzadi, Arman; Mohammadian, Abdolmajid; Charron, Martin

    2018-01-01

    The objective of this paper is to develop an optimized implicit-explicit (IMEX) Runge-Kutta scheme for atmospheric applications, focusing on stability and accuracy. Following common terminology, the proposed method is called IMEX-SSP2(2,3,2), as it has second-order accuracy and is composed of diagonally implicit two-stage and explicit three-stage parts. The scheme enjoys the Strong Stability Preserving (SSP) property for both parts. This new scheme is applied to the nonhydrostatic compressible Boussinesq equations in two different arrangements: (i) semi-implicit and (ii) Horizontally Explicit-Vertically Implicit (HEVI) forms. The new scheme preserves the SSP property for larger regions of absolute monotonicity compared to the well-studied scheme in the same class. In addition, numerical tests confirm that IMEX-SSP2(2,3,2) improves the maximum stable time step as well as the level of accuracy and computational cost compared to other schemes in the same class. It is demonstrated that the A-stability property, together with the "second-stage order" and stiffly accurate conditions, leads the proposed scheme to better performance than existing schemes for the applications examined herein.
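
    The IMEX idea underlying such schemes can be shown on a scalar toy problem: the stiff linear term is treated implicitly and the non-stiff term explicitly. The sketch below is first-order IMEX Euler for illustration only; it is not the paper's IMEX-SSP2(2,3,2) tableau:

    ```python
    import numpy as np

    lam = -1000.0                # stiff linear term, treated implicitly
    f_ex = lambda y: np.sin(y)   # non-stiff term, treated explicitly

    def imex_euler_step(y, dt):
        # y_new = y + dt*f_ex(y) + dt*lam*y_new  =>  linear implicit solve
        return (y + dt * f_ex(y)) / (1.0 - dt * lam)

    y, dt = 1.0, 0.01            # dt far above the explicit stability limit ~ 2/|lam|
    for _ in range(200):
        y = imex_euler_step(y, dt)
    print(y)                     # relaxes toward the root of lam*y + sin(y) = 0
    ```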

  11. Effects of changes in Italian bioenergy promotion schemes for agricultural biogas projects: Insights from a regional optimization model

    International Nuclear Information System (INIS)

    Chinese, D.; Patrizio, P.; Nardin, G.

    2014-01-01

    Italy has witnessed an extraordinary growth in biogas generation from livestock effluents and agricultural activities in the last few years, as well as a severe isomorphic process leading to a market dominance of 999 kW power plants owned by "entrepreneurial farms". Under the pressure of the economic crisis in the country, the Italian government has restructured renewable energy support schemes, introducing a new program in 2013. In this paper, the effects of the previous and current support schemes on the optimal plant size, feedstock mix and profitability were investigated by introducing a spatially explicit biogas supply chain optimization model, which accounts for different incentive structures. By applying the model to a regional case study, the homogenization observed to date is recognized as a result of the former incentive structures. Considerable reductions in the local economic potential for agricultural biogas power plants without external heat use are estimated. New plants are likely to be manure-based and, due to the lower energy density of such feedstock, wider supply chains are expected although the optimal plant size will be smaller. The new support scheme will therefore most likely eliminate past distortions but also slow down investments in agricultural biogas plants. - Highlights: • We review the evolution of agricultural biogas support schemes in Italy over the last 20 years. • A biogas supply chain optimization model which accounts for feed-in tariffs is introduced. • The model is applied to a regional case study under the two most recent support schemes. • Incentives in force until 2013 caused homogenization towards maize-based 999 kWel plants. • Wider, manure-based supply chains feeding smaller plants are expected under future incentives

  12. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme.

    Science.gov (United States)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-08-01

    Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes for medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme for mammographic masses. An image dataset including 1600 regions of interest (ROIs), in which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions), was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories, including shape, texture, contrast, isodensity, spiculation, local topological features, as well as features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross validation method. Among these four methods, SFFS had the highest efficacy: it takes only 3%-5% of the computational time of the GA approach and yields the highest performance level, with an area under the receiver operating characteristic curve (AUC) of 0.864 ± 0.034. The results also demonstrated that, except when using GA, including the new texture features computed from the dilated mass segments improved the AUC results of the ANNs optimized
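
    Wrapper-style sequential selection of the kind compared in the study is easy to reproduce on synthetic data; the sketch below runs plain SFS (scikit-learn's SequentialFeatureSelector; the floating variant SFFS additionally allows backward steps) around a small neural-network classifier standing in for the study's ANN:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neural_network import MLPClassifier

    # synthetic stand-in data, not the mammographic feature pool from the study
    X, y = make_classification(n_samples=300, n_features=30, n_informative=5,
                               random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)

    # greedily add the feature that most improves cross-validated accuracy
    sfs = SequentialFeatureSelector(clf, n_features_to_select=5,
                                    direction="forward", cv=3)
    sfs.fit(X, y)
    print("selected feature indices:", np.flatnonzero(sfs.get_support()))
    ```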

  13. Sampling optimization for printer characterization by direct search.

    Science.gov (United States)

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.

  14. A Fairness-Based Access Control Scheme to Optimize IPTV Fast Channel Changing

    Directory of Open Access Journals (Sweden)

    Junyu Lai

    2014-01-01

    Full Text Available IPTV services typically feature a longer channel changing delay compared to conventional TV systems. The major contributor to this delay is the time spent on intraframe (I-frame) acquisition during channel changing. Currently, most widely adopted fast channel changing (FCC) methods rely on promptly transmitting to the client (conducting the channel change) a retained I-frame of the targeted channel as a separate unicast stream. However, this I-frame acceleration mechanism has an inherent scalability problem due to the explosion of channel changing requests during commercial breaks. In this paper, we propose a fairness-based admission control (FAC) scheme for the original I-frame acceleration mechanism to enhance its scalability by decreasing the bandwidth demands. Based on the channel changing history of every client, the FAC scheme can intelligently decide whether or not to conduct the I-frame acceleration for each channel change request. Comprehensive simulation experiments demonstrate the potential of our proposed FAC scheme to effectively optimize the scalability of the I-frame acceleration mechanism, particularly during commercial breaks. Meanwhile, the FAC scheme only slightly increases the average channel changing delay by temporarily disabling FCC (i.e., I-frame acceleration) for clients who are addicted to frequent channel zapping.
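
    A minimal sketch of such an admission decision, with hypothetical parameters (a sliding window and a per-client zap budget standing in for the paper's fairness criterion):

    ```python
    import time
    from collections import defaultdict, deque

    WINDOW, MAX_ZAPS = 10.0, 5          # hypothetical fairness parameters
    history = defaultdict(deque)        # client id -> timestamps of recent zaps

    def admit_fcc(client, now=None):
        now = time.monotonic() if now is None else now
        q = history[client]
        while q and now - q[0] > WINDOW:
            q.popleft()                 # drop zaps outside the sliding window
        q.append(now)
        # heavy zappers exceed the budget and are served without I-frame acceleration
        return len(q) <= MAX_ZAPS

    print(admit_fcc("client-1"))        # True until the client exceeds MAX_ZAPS
    ```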

  15. Linear triangular optimization technique and pricing scheme in residential energy management systems

    Science.gov (United States)

    Anees, Amir; Hussain, Iqtadar; AlKhaldi, Ali Hussain; Aslam, Muhammad

    2018-06-01

    This paper presents a new linear optimization algorithm for the power scheduling of electric appliances. The proposed system is applied in a smart home community, in which a community controller acts as a virtual distribution company for the end consumers. We also present a pricing scheme between the community controller and its residential users based on real-time pricing and inclining block rates. The results of the proposed optimization algorithm demonstrate that, by applying the proposed technique, end users can not only minimise their consumption cost, but also reduce the peak-to-average power ratio, which is beneficial for the utilities as well.

  16. Tank waste remediation system optimized processing strategy with an altered treatment scheme

    International Nuclear Information System (INIS)

    Slaathaug, E.J.

    1996-03-01

    This report provides an alternative strategy evolved from the current Hanford Site Tank Waste Remediation System (TWRS) programmatic baseline for accomplishing the treatment and disposal of the Hanford Site tank wastes. This optimized processing strategy with an altered treatment scheme performs the major elements of the TWRS Program, but modifies the deployment of selected treatment technologies to reduce the program cost. The present program for development of waste retrieval, pretreatment, and vitrification technologies continues, but the optimized processing strategy reuses a single facility to accomplish the separations/low-activity waste (LAW) vitrification and the high-level waste (HLW) vitrification processes sequentially, thereby eliminating the need for a separate HLW vitrification facility

  17. Best Hiding Capacity Scheme for Variable Length Messages Using Particle Swarm Optimization

    Science.gov (United States)

    Bajaj, Ruchika; Bedi, Punam; Pal, S. K.

    Steganography is an art of hiding information in such a way that it prevents the detection of hidden messages. Besides the security of the data, the quantity of data that can be hidden in a single cover medium is also very important. We present a secure data hiding scheme with high embedding capacity for messages of variable length, based on Particle Swarm Optimization. This technique finds the best pixel positions in the cover image for hiding the secret data. In the proposed scheme, k bits of the secret message are substituted into the k least significant bits of an image pixel, where k varies from 1 to 4 depending on the message length. The proposed scheme is tested and its results compared with simple LSB substitution and uniform 4-bit LSB hiding (with PSO) for the test images Nature, Baboon, Lena and Kitty. The experimental study confirms that the proposed method achieves high data hiding capacity while maintaining imperceptibility and minimizing the distortion between the cover image and the obtained stego image.
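
    The k-bit substitution step is easy to make concrete; the sketch below embeds message bits into the k least significant bits of sequentially chosen pixels (the PSO search for the best pixel positions is omitted, and the pixel order is an assumption for illustration):

    ```python
    import numpy as np

    def embed(pixels, bits, k):
        out = pixels.copy()
        for i, chunk in enumerate(np.reshape(bits, (-1, k))):
            value = int("".join(map(str, chunk)), 2)   # k message bits -> integer
            out[i] = (out[i] >> k << k) | value        # clear k LSBs, then set them
        return out

    cover = np.array([200, 117, 53, 89], dtype=np.uint8)
    print(embed(cover, bits=[1, 0, 1, 1, 0, 1, 0, 0], k=2))  # -> [202 119 53 88]
    ```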

  18. Image Interpolation Scheme based on SVM and Improved PSO

    Science.gov (United States)

    Jia, X. F.; Zhao, B. T.; Liu, X. X.; Song, H. P.

    2018-01-01

    In order to obtain visually pleasing images, a support vector machine (SVM) based interpolation scheme is proposed, in which an improved particle swarm optimization is applied to optimize the support vector machine parameters. Training samples are constructed from the pixels around the pixel to be interpolated. Then the support vector machine with optimal parameters is trained using these training samples. After training, we obtain the interpolation model, which can be employed to estimate the unknown pixel. Experimental results show that the interpolated images achieve improved PSNR compared with traditional interpolation methods, which agrees with their subjective quality.
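
    A minimal sketch of the learning step on a 1D stand-in signal (neighbouring samples as features, centre sample as target; the PSO tuning of the SVM parameters C and gamma is omitted):

    ```python
    import numpy as np
    from sklearn.svm import SVR

    signal = np.sin(np.linspace(0, 4, 200))            # toy 1D analogue of image rows

    # features: four surrounding samples; target: the in-between centre sample
    X = np.column_stack([signal[:-4], signal[1:-3], signal[3:-1], signal[4:]])
    y = signal[2:-2]

    model = SVR(C=10.0, gamma=1.0).fit(X, y)           # parameters chosen by hand here
    print("interpolation MSE:", np.mean((model.predict(X) - y) ** 2))
    ```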

  19. A self-optimizing scheme for energy balanced routing in Wireless Sensor Networks using SensorAnt.

    Science.gov (United States)

    Shamsan Saleh, Ahmed M; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A; Ismail, Alyani

    2012-01-01

    Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic, which is adopted to enhance the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count, and the average energy of both the route and the network. The method also distributes the traffic load of sensor nodes throughout the WSN, leading to reduced energy usage, extended network lifetime and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) scheme in terms of energy consumption, balancing and efficiency.
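
    The probabilistic next-hop rule at the heart of such ACO routing can be sketched as follows, with a hypothetical multi-criteria quality function combining pheromone, residual energy, and hop count (the exact weighting in SensorAnt may differ):

    ```python
    import random

    def choose_next_hop(neighbors, alpha=1.0, beta=2.0):
        # neighbors: dicts with 'pheromone', 'residual_energy', 'hops_to_sink';
        # weight = pheromone^alpha * (energy-aware heuristic)^beta
        weights = [n["pheromone"] ** alpha *
                   (n["residual_energy"] / n["hops_to_sink"]) ** beta
                   for n in neighbors]
        return random.choices(neighbors, weights=weights, k=1)[0]

    print(choose_next_hop([
        {"id": "A", "pheromone": 0.8, "residual_energy": 0.9, "hops_to_sink": 3},
        {"id": "B", "pheromone": 1.2, "residual_energy": 0.4, "hops_to_sink": 2},
    ]))
    ```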

  20. Optimal placement of combined heat and power scheme (cogeneration): application to an ethylbenzene plant

    International Nuclear Information System (INIS)

    Zainuddin Abd Manan; Lim Fang Yee

    2001-01-01

    Combined heat and power (CHP), also known as cogeneration, is widely accepted as a highly efficient energy-saving measure, particularly in medium- to large-scale chemical process plants. To date, CHP application is well established in the developed countries. The advantage of a CHP scheme for a chemical plant is two-fold: (i) it drastically cuts the electricity bill through on-site power generation, and (ii) it saves on fuel bills through recovery of the high-quality waste heat from power generation for process heating. In order to be effective, a CHP scheme must be placed at the right temperature level in the context of the overall process. Failure to do so might render a CHP venture worthless. This paper discusses the procedure for an effective implementation of a CHP scheme. An ethylbenzene process is used as a case study. A key visualization tool known as the grand composite curve is used to provide an overall picture of the process heat source and heat sink profiles. The grand composite curve, which is generated from the first principles of Pinch Analysis, enables the CHP scheme to be optimally placed within the overall process scenario. (Author)

  1. The effect of sampling scheme in the survey of atmospheric deposition of heavy metals in Albania by using moss biomonitoring.

    Science.gov (United States)

    Qarri, Flora; Lazo, Pranvera; Bekteshi, Lirim; Stafilov, Trajce; Frontasyeva, Marina; Harmens, Harry

    2015-02-01

    The atmospheric deposition of heavy metals in Albania was investigated by using a carpet-forming moss species (Hypnum cupressiforme) as bioindicator. Sampling was done in the dry seasons of autumn 2010 and summer 2011. Two different sampling schemes are discussed in this paper: a random sampling scheme with 62 sampling sites distributed over the whole territory of Albania, and a systematic sampling scheme with 44 sampling sites distributed over the same territory. Unwashed, dried samples were totally digested by using microwave digestion, and the concentrations of metal elements were determined by inductively coupled plasma atomic emission spectroscopy (ICP-AES) and AAS (Cd and As). Twelve elements, comprising the conservative elements Al and Fe and the trace elements As, Cd, Cr, Cu, Ni, Mn, Pb, V, Zn, and Li, were measured in the moss samples; Li is included as a typical lithogenic element. The results reflect local emission points. The median concentrations and statistical parameters of the elements were discussed by comparing the two sampling schemes, and the results of both schemes were compared with the results of other European countries. Different levels of contamination, evaluated by the respective contamination factor (CF) of each element, are obtained for the two sampling schemes, while the local emitters identified (iron-chromium metallurgy, the cement industry, an oil refinery, the mining industry, and transport) are the same for both sampling schemes. In addition, natural sources, i.e. the accumulation of these metals in mosses from metal-enriched soil associated with wind-blown soil, were identified as another local emission factor.

  2. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large-scale sample surveys: the biological sample survey of commercial landings (BSCL), the experimental fishing sample survey (EFSS), and the commercial landings and effort sample survey (CLES).

  3. An energy-efficient adaptive sampling scheme for wireless sensor networks

    NARCIS (Netherlands)

    Masoum, Alireza; Meratnia, Nirvana; Havinga, Paul J.M.

    2013-01-01

    Wireless sensor networks are new monitoring platforms. To cope with their resource constraints, in terms of energy and bandwidth, spatial and temporal correlation in sensor data can be exploited to find an optimal sampling strategy that reduces the number of sampling nodes and/or sampling frequencies while

  4. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    International Nuclear Information System (INIS)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra; Kalogeropoulou, Christina; Pratikakis, Ioannis; Costaridou, Lena

    2015-01-01

    Purpose: The primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, the evaluation methodology is based on the distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near-optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near-optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors of 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the

  5. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    Energy Technology Data Exchange (ETDEWEB)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra [Department of Medical Physics, School of Medicine, University of Patras, Patras 26504 (Greece); Kalogeropoulou, Christina [Department of Radiology, School of Medicine, University of Patras, Patras 26504 (Greece); Pratikakis, Ioannis [Department of Electrical and Computer Engineering, Democritus University of Thrace, Xanthi 67100 (Greece); Costaridou, Lena, E-mail: costarid@upatras.gr [Department of Medical Physics, School of Medicine, University of Patras, Patras 26504 (Greece)

    2015-08-15

    Purpose: The primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, the evaluation methodology is based on the distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near-optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near-optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors of 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the

  6. Design and experimental realization of an optimal scheme for teleportation of an n-qubit quantum state

    Science.gov (United States)

    Sisodia, Mitali; Shukla, Abhishek; Thapliyal, Kishore; Pathak, Anirban

    2017-12-01

    An explicit scheme (quantum circuit) is designed for the teleportation of an n-qubit quantum state. It is established that the proposed scheme requires an optimal amount of quantum resources, whereas larger amounts of quantum resources have been used in a large number of recently reported teleportation schemes for quantum states that can be viewed as special cases of the general n-qubit state considered here. A trade-off between our knowledge about the quantum state to be teleported and the amount of quantum resources required for the same is observed. A proof-of-principle experimental realization of the proposed scheme (for a 2-qubit state) is also performed using the 5-qubit superconducting IBM quantum computer. The experimental results show that the state has been teleported with high fidelity. The relevance of the proposed teleportation scheme is also discussed in the context of controlled, bidirectional, and bidirectional controlled state teleportation.

  7. A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs

    Science.gov (United States)

    Xu, Xin; Yuan, Minjiao; Liu, Xiao; Cai, Zhiping; Wang, Tian

    2018-01-01

    In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs as a way to improve reliability over error-prone and unreliable wireless communication links, where the transmission power is assumed to be identical across the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss-and-delay-sensitive WSNs. The main contribution of the COOR scheme is to make full use of the remaining energy in the network to increase the transmission power of most nodes, which provides higher communication reliability or a longer transmission distance. Two optimization strategies of the COOR scheme, referred to as COOR(R) and COOR(P), are proposed to improve network performance. When increasing the transmission power, the COOR(R) strategy selects a next-hop candidate node that has higher communication reliability at the same distance, in comparison with traditional opportunistic routing. Since the reliability of data transmission is improved, the delay of the data reaching the sink is reduced by shortening the time of communication between candidate nodes. On the other hand, the COOR(P) strategy prefers a node that has the same communication reliability at a longer distance. As a result, network performance can be improved for the following reasons: (a) the delay is reduced, as fewer hops are needed for a packet to reach the sink when transmission distances are longer; (b) the reliability can be improved, since the end-to-end reliability is the product of the reliability of every hop of the routing path

  8. A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs.

    Science.gov (United States)

    Xu, Xin; Yuan, Minjiao; Liu, Xiao; Liu, Anfeng; Xiong, Neal N; Cai, Zhiping; Wang, Tian

    2018-05-03

    In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs as a way to improve reliability over error-prone and unreliable wireless communication links, where the transmission power is assumed to be identical across the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss-and-delay-sensitive WSNs. The main contribution of the COOR scheme is to make full use of the remaining energy in the network to increase the transmission power of most nodes, which provides higher communication reliability or a longer transmission distance. Two optimization strategies of the COOR scheme, referred to as COOR(R) and COOR(P), are proposed to improve network performance. When increasing the transmission power, the COOR(R) strategy selects a next-hop candidate node that has higher communication reliability at the same distance, in comparison with traditional opportunistic routing. Since the reliability of data transmission is improved, the delay of the data reaching the sink is reduced by shortening the time of communication between candidate nodes. On the other hand, the COOR(P) strategy prefers a node that has the same communication reliability at a longer distance. As a result, network performance can be improved for the following reasons: (a) the delay is reduced, as fewer hops are needed for a packet to reach the sink when transmission distances are longer; (b) the reliability can be improved, since the end-to-end reliability is the product of the reliability of every hop of the routing path

  9. A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs

    Directory of Open Access Journals (Sweden)

    Xin Xu

    2018-05-01

    Full Text Available In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications in WSNs is a challenging issue. Additionally, with specific thresholds in practical applications, loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs as a way to improve reliability over error-prone and unreliable wireless communication links, where the transmission power is assumed to be identical across the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss-and-delay-sensitive WSNs. The main contribution of the COOR scheme is to make full use of the remaining energy in the network to increase the transmission power of most nodes, which provides higher communication reliability or a longer transmission distance. Two optimization strategies of the COOR scheme, referred to as COOR(R) and COOR(P), are proposed to improve network performance. When increasing the transmission power, the COOR(R) strategy selects a next-hop candidate node that has higher communication reliability at the same distance, in comparison with traditional opportunistic routing. Since the reliability of data transmission is improved, the delay of the data reaching the sink is reduced by shortening the time of communication between candidate nodes. On the other hand, the COOR(P) strategy prefers a node that has the same communication reliability at a longer distance. As a result, network performance can be improved for the following reasons: (a) the delay is reduced, as fewer hops are needed for a packet to reach the sink when transmission distances are longer; (b) the reliability can be improved, since the end-to-end reliability is the product of the reliability of every hop of the routing path.

  10. Luminosity optimization schemes in Compton experiments based on Fabry-Perot optical resonators

    Directory of Open Access Journals (Sweden)

    Alessandro Variola

    2011-03-01

    Full Text Available The luminosity of Compton x-ray and γ sources depends on the average current in the electron bunches, the energy of the laser pulses, and the geometry of the particle bunch to laser pulse collisions. To obtain high-power photon pulses, the laser pulses can be stacked in a passive optical resonator (Fabry-Perot cavity), especially when a high average flux is required. In this case, however, owing to the presence of the optical cavity mirrors, the electron bunches have to collide with the laser pulses at an angle, with a consequent luminosity decrease. In this article a crab-crossing scheme is proposed for Compton sources based on a laser amplified in a Fabry-Perot resonator, to eliminate the luminosity losses caused by the crossing angle, taking into account that in laser-electron collisions only the electron bunches can be tilted at the collision point. We report an analytical study of the crab-crossing scheme for Compton gamma sources. The analytical expression for the total yield of photons generated in Compton sources with the crab-crossing collision scheme is derived. The optimal crabbing angle of the bunch was found to be equal to half of the crossing angle. At this crabbing angle, the maximal yield of scattered laser photons is attained, thanks to the maximization of the time spent by the laser pulse inside the electron bunch during the collision. Estimates for some Compton source projects are presented. Furthermore, some optical cavity configurations are analyzed and their luminosity calculated. As illustrated, the four-mirror two- or three-dimensional scheme is the most appropriate for Compton sources.

  11. SU-F-T-497: Spatiotemporally Optimal, Personalized Prescription Scheme for Glioblastoma Patients Using the Proliferation and Invasion Glioma Model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, M; Rockhill, J; Phillips, M [University Washington, Seattle, WA (United States)

    2016-06-15

    Purpose: To investigate a spatiotemporally optimal radiotherapy prescription scheme and its potential benefit for glioblastoma (GBM) patients using the proliferation and invasion (PI) glioma model. Methods: The standard prescription for GBM was assumed to deliver 46 Gy in 23 fractions to GTV1 + 2 cm margin and an additional 14 Gy in 7 fractions to GTV2 + 2 cm margin. We simulated tumor proliferation and invasion in 2D according to the PI glioma model, with a moving velocity of 0.029 (slow-move), 0.079 (average-move), and 0.13 (fast-move) mm/day for GTV2 with a radius of 1 and 2 cm. For each tumor, the margin around GTV1 and GTV2 was varied over 0-6 cm and 1-3 cm, respectively. The total dose to GTV1 was constrained such that the equivalent uniform dose (EUD) to normal brain equaled the EUD under the standard prescription. A non-stationary dose policy, in which the fractional dose varies, was investigated to estimate the temporal effect of the radiation dose. The efficacy of an optimal prescription scheme was evaluated by the tumor cell-surviving fraction (SF), EUD, and expected survival time. Results: The optimal prescription for slow-move tumors was to use 3.0 (small) to 3.5 (large) cm margins for GTV1 and a 1.5 cm margin for GTV2. For average- and fast-move tumors, it was optimal to use a 6.0 cm margin for GTV1, suggesting that whole-brain therapy is optimal, and then 1.5 cm (average-move) and 1.5-3.0 cm (fast-move, small-large) margins for GTV2. It was optimal to deliver the boost sequentially using a linearly decreasing fractional dose for all tumors. The optimal prescription reduced the tumor SF to 0.001-0.465% of that obtained with the standard prescription, and increased tumor EUD by 25.3-49.3% and the estimated survival time by 7.6-22.2 months. Conclusion: It is feasible to optimize a prescription scheme depending on individual tumor characteristics. A personalized prescription scheme could potentially increase tumor EUD and the expected survival time significantly without increasing the EUD to normal brain.
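
    The PI model referenced above is the Fisher-Kolmogorov reaction-diffusion equation; a toy 1D explicit-Euler step (illustrative parameters, not the clinical values used in the study) looks like this:

    ```python
    import numpy as np

    # du/dt = D * u_xx + rho * u * (1 - u): diffusion (invasion) + logistic growth
    D, rho, dx, dt = 0.1, 0.05, 1.0, 0.1      # illustrative parameters only
    u = np.zeros(100)
    u[45:55] = 0.5                            # initial normalized tumour cell density

    for _ in range(1000):                     # explicit Euler in time, periodic in space
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        u += dt * (D * lap + rho * u * (1 - u))

    print("tumour extent proxy (cells above 10% density):", (u > 0.1).sum())
    ```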

  12. Optimal Retrofit Scheme for Highway Network under Seismic Hazards

    Directory of Open Access Journals (Sweden)

    Yongxi Huang

    2014-06-01

    Full Text Available Many older highway bridges in the United States (US) are inadequate for seismic loads and could be severely damaged or could collapse in a relatively small earthquake. According to the most recent American Society of Civil Engineers' infrastructure report card, one-third of the bridges in the US are rated as structurally deficient, and many of these structurally deficient bridges are located in seismic zones. To improve this situation, at-risk bridges must be identified and evaluated, and effective retrofitting programs should be in place to reduce their seismic vulnerabilities. In this study, a new retrofit strategy decision scheme for highway bridges under seismic hazards is developed, seamlessly integrating scenario-based seismic analysis of the bridges and the traffic network into the proposed optimization modeling framework. A full spectrum of bridge retrofit strategies is considered, based on explicit structural assessment for each seismic damage state. As an empirical case study, the proposed retrofit strategy decision scheme is utilized to evaluate the bridge network in one of the active seismic zones in the US: Charleston, South Carolina. The developed modeling framework, on average, helps increase network throughput traffic capacity by 45% at a cost increase of only $15 million for the Mw 5.5 event, and increases the capacity fourfold at a cost of only $32 million for the Mw 7.0 event.

  13. Identification of isomers and control of ionization and dissociation processes using dual-mass-spectrometer scheme and genetic algorithm optimization

    International Nuclear Information System (INIS)

    Chen Zhou; Qiu-Nan Tong; Zhang Cong-Cong; Hu Zhan

    2015-01-01

    Identification of acetone and its two isomers, and the control of their ionization and dissociation processes, are performed using a dual-mass-spectrometer scheme. The scheme employs two sets of time-of-flight mass spectrometers to simultaneously acquire the mass spectra of two different molecules under the irradiation of identically shaped femtosecond laser pulses. The optimal laser pulses are found using a closed-loop learning method based on a genetic algorithm. Compared with the mass spectra of the two isomers obtained with the transform-limited pulse, those obtained under the irradiation of the optimal laser pulse show large differences, and the various reaction pathways of the two molecules are selectively controlled. The experimental results demonstrate that the scheme is effective and useful in studies of two molecules having common mass peaks, for which a traditional single mass spectrometer is unfeasible. (paper)

  14. Methodology for optimization of process integration schemes in a biorefinery under uncertainty

    International Nuclear Information System (INIS)

    González-Cortés, Meilyn; Martínez-Martínez, Yenisleidys; Albernas-Carvajal, Yailet; Pedraza-Garciga, Julio; Morales-Zamora, Marlen (Departamento de Ingeniería Química, Facultad de Química y Farmacia, Universidad Central Marta Abreu de las Villas, Cuba)

    2017-01-01

    Uncertainty has a great impact on investment decisions, on the operability of plants, and on the feasibility of integration opportunities in chemical processes. This paper presents the steps for optimizing process investment in process integration under conditions of uncertainty. The potential of sugarcane biomass for integration with several plants in a biorefinery scheme for obtaining chemical products and thermal and electric energy is shown. Among the factories with potential for this integration are pulp and paper and sugar factories, along with other derivative processes. These factories share common resources and also produce a variety of products that can be exchanged between them, so that certain products generated in one of them can serve as raw material in another plant. The methodology developed guides the user toward feasible investment projects under uncertainty. The objective function considered was the maximization of the net present value over the different scenarios generated from the integration scheme. (author)

  15. An optimal implicit staggered-grid finite-difference scheme based on the modified Taylor-series expansion with minimax approximation method for elastic modeling

    Science.gov (United States)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2017-03-01

    Implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, whereas its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes the advantage of the TE method that guarantees great accuracy at small wavenumbers, and keeps the property of the MA method that keeps the numerical errors within a limited bound at the same time. Thus, it leads to great accuracy for numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis is made in comparison with the conventional TE-based ISFD scheme, indicating that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.
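
    For contrast with the paper's minimax-optimized implicit scheme, the conventional TE route for an explicit staggered-grid first-derivative stencil reduces to a small linear solve; the sketch below reproduces the standard 8th-order coefficients:

    ```python
    import numpy as np

    # f'(x) ~ (1/h) * sum_m a_m * [f(x + (m-1/2)h) - f(x - (m-1/2)h)], m = 1..M;
    # matching Taylor terms gives: sum_m a_m * c_m^(2j+1) = 0.5 if j == 0 else 0,
    # where c_m = m - 1/2 are the staggered offsets.
    M = 4                                   # half-stencil length (8th order)
    c = np.arange(1, M + 1) - 0.5
    A = np.array([c ** (2 * j + 1) for j in range(M)])
    b = np.zeros(M)
    b[0] = 0.5
    a = np.linalg.solve(A, b)
    print(a)   # ~ [1.19628906, -0.07975261, 0.00957031, -0.00069754]
    ```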

  16. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as in the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models with only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of the parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, which usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
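
    The underlying design question can be illustrated with a Fisher-information proxy: choose the sampling times that maximize the information about the model parameters. The sketch below does this by brute force for a toy exponential-decay model (not the paper's quantum-inspired evolutionary algorithm):

    ```python
    import numpy as np
    from itertools import combinations

    # toy model y(t) = a * exp(-b * t); pick 4 of 25 candidate times maximizing
    # log det of the Fisher information matrix (a D-optimal design criterion)
    a, b = 1.0, 0.5
    times = np.linspace(0.1, 10, 25)

    def fim_logdet(ts):
        S = np.column_stack([np.exp(-b * ts),               # dy/da
                             -a * ts * np.exp(-b * ts)])    # dy/db
        return np.linalg.slogdet(S.T @ S)[1]

    best = max(combinations(times, 4), key=lambda ts: fim_logdet(np.array(ts)))
    print("chosen time points:", best)
    ```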

  17. Optimal stochastic reactive power scheduling in a microgrid considering voltage droop scheme of DGs and uncertainty of wind farms

    International Nuclear Information System (INIS)

    Khorramdel, Benyamin; Raoofat, Mahdi

    2012-01-01

    Distributed Generators (DGs) in a microgrid may operate under three different reactive power control strategies: PV, PQ, and voltage droop schemes. This paper proposes a new stochastic programming approach for reactive power scheduling of a microgrid, considering the uncertainty of wind farms. The proposed algorithm first finds the expected optimal operating point of each DG in the V-Q plane, treating the wind speed as a probabilistic variable. A multi-objective function with the goals of loss minimization, reactive power reserve maximization, and voltage security margin maximization is optimized using four-stage multi-objective nonlinear programming. Then, using Monte Carlo simulation enhanced by a scenario reduction technique, the proposed algorithm simulates actual conditions and finds the optimal operating strategy of the DGs. Also, if any DGs are scheduled to operate in the voltage droop scheme, the optimum droop is determined. In the second part of the research, to enhance the optimality of the results, a PSO algorithm is used for the multi-objective optimization problem. Numerical examples on the IEEE 34-bus test system including two wind turbines are studied. The results show the benefits of the voltage droop scheme for mitigating the impacts of the uncertainty of wind, as well as the preference for the PSO method in the proposed approach. -- Highlights: ► Reactive power scheduling in a microgrid considering loss and voltage security. ► The stochastic nature of wind farms affects reactive power scheduling and is considered. ► Advantages of using the voltage droop characteristics of DGs for voltage security are shown. ► Power loss, voltage security and VAR reserve are the three goals of a multi-objective optimization. ► The Monte Carlo method with scenario reduction is used to determine the optimal control strategy of DGs.
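
    The voltage droop characteristic that the scheduling builds on is a one-line relationship; a minimal sketch with assumed per-unit conventions (the droop slope k is the quantity such a scheduler would optimize):

    ```python
    # Q = Q_set + (V_ref - V) / k, clipped to the unit's reactive power limits
    def droop_q(v, v_ref=1.0, q_set=0.0, k=0.04, q_min=-0.4, q_max=0.4):
        return min(max(q_set + (v_ref - v) / k, q_min), q_max)

    print(droop_q(0.99))   # low voltage -> inject 0.25 p.u. of reactive power
    ```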

  18. Evaluation of alternative macroinvertebrate sampling techniques for use in a new tropical freshwater bioassessment scheme

    OpenAIRE

    Isabel Eleanor Moore; Kevin Joseph Murphy

    2015-01-01

    Aim: The study aimed to determine the effectiveness of benthic macroinvertebrate dredge net sampling procedures as an alternative method to kick net sampling in tropical freshwater systems, specifically as an evaluation of sampling methods used in the Zambian Invertebrate Scoring System (ZISS) river bioassessment scheme. Tropical freshwater ecosystems are sometimes dangerous or inaccessible to sampling teams using traditional kick-sampling methods, so identifying an alternative procedure that...

  19. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  20. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    Directory of Open Access Journals (Sweden)

    Huan Chen

    Full Text Available This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.

  1. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    Science.gov (United States)

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.

  2. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminace correction and optimized prediction

    Science.gov (United States)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing a natural, real scene as we see it in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, since more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the redundancy in the stereo image pair. The approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step has been replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvements in terms of performance and complexity compared to recently proposed methods.
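
    A generic lifting step (a Haar-like predict and update, shown here for a 1D signal of even length) makes the structure clear; in the paper's scheme the predict stage is replaced by disparity compensation plus luminance correction:

    ```python
    import numpy as np

    def lifting_forward(x):
        # split into even/odd samples, predict odds from evens, then update evens
        even, odd = x[0::2].astype(float), x[1::2].astype(float)
        detail = odd - even            # predict step: residual of the prediction
        approx = even + detail / 2     # update step: preserves the signal mean
        return approx, detail

    approx, detail = lifting_forward(np.array([10, 12, 14, 13, 11, 10]))
    print(approx, detail)              # coarse signal and (compressible) residuals
    ```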

  3. A Latency and Coverage Optimized Data Collection Scheme for Smart Cities Based on Vehicular Ad-hoc Networks.

    Science.gov (United States)

    Xu, Yixuan; Chen, Xi; Liu, Anfeng; Hu, Chunhua

    2017-04-18

    Using mobile vehicles as "data mules" to collect data generated by the huge number of sensing devices spread across a smart city is considered an economical and effective way of obtaining data about smart cities. However, most current research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data in smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in the LCODC scheme consists not only of vehicle-to-device (V2D) transmission but also of vehicle-to-vehicle (V2V) transmission. Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to easy implementation of our scheme. In particular, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale, real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that, at very limited cost, the LCODC scheme reduces the average latency from several hours to around 12 min with respect to schemes where V2V transmission is disabled, while the coverage rate reaches over 30%.

  4. A Latency and Coverage Optimized Data Collection Scheme for Smart Cities Based on Vehicular Ad-hoc Networks

    Directory of Open Access Journals (Sweden)

    Yixuan Xu

    2017-04-01

    Full Text Available Using mobile vehicles as “data mules” to collect data generated by the huge number of sensing devices spread across a smart city is considered an economical and effective way of obtaining data about smart cities. However, most current research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data in smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in the LCODC scheme consists not only of vehicle-to-device (V2D) transmission but also of vehicle-to-vehicle (V2V) transmission. Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to easy implementation of our scheme. In particular, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale, real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that, at very limited cost, the LCODC scheme reduces the average latency from several hours to around 12 min with respect to schemes where V2V transmission is disabled, while the coverage rate reaches over 30%.

  5. A Latency and Coverage Optimized Data Collection Scheme for Smart Cities Based on Vehicular Ad-Hoc Networks

    Science.gov (United States)

    Xu, Yixuan; Chen, Xi; Liu, Anfeng; Hu, Chunhua

    2017-01-01

    Using mobile vehicles as “data mules” to collect data generated by the huge number of sensing devices spread across a smart city is considered an economical and effective way of obtaining data about smart cities. However, most current research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data in smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in the LCODC scheme consists not only of vehicle-to-device (V2D) transmission but also of vehicle-to-vehicle (V2V) transmission. Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to easy implementation of our scheme. In particular, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale, real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that, at very limited cost, the LCODC scheme reduces the average latency from several hours to around 12 min with respect to schemes where V2V transmission is disabled, while the coverage rate reaches over 30%. PMID:28420218

  6. Evaluation of alternative macroinvertebrate sampling techniques for use in a new tropical freshwater bioassessment scheme

    Directory of Open Access Journals (Sweden)

    Isabel Eleanor Moore

    2015-06-01

    Full Text Available Aim: The study aimed to determine the effectiveness of benthic macroinvertebrate dredge net sampling procedures as an alternative to kick net sampling in tropical freshwater systems, specifically as an evaluation of sampling methods used in the Zambian Invertebrate Scoring System (ZISS) river bioassessment scheme. Tropical freshwater ecosystems are sometimes dangerous or inaccessible to sampling teams using traditional kick-sampling methods, so identifying an alternative procedure that produces similar results is necessary in order to collect data from a wide variety of habitats. Methods: Both kick and dredge nets were used to collect macroinvertebrate samples at 16 riverine sites in Zambia, ranging from backwaters and floodplain lagoons to fast-flowing streams and rivers. The data were used to calculate ZISS, diversity (S: number of taxa present), and Average Score Per Taxon (ASPT) scores per site, using the two sampling methods to compare their sampling effectiveness. Environmental parameters, namely pH, conductivity, underwater photosynthetically active radiation (PAR), temperature, alkalinity, flow, and altitude, were also recorded and used in statistical analysis. Invertebrate communities present at the sample sites were determined using multivariate procedures. Results: Analysis of the invertebrate community and environmental data suggested that the testing exercise was undertaken in four distinct macroinvertebrate community types, supporting at least two quite different macroinvertebrate assemblages, and showing significant differences in habitat conditions. Significant correlations were found for all three bioassessment score variables between results acquired using the two methods, with dredge sampling normally producing lower scores than the kick net procedures. Linear regression models were produced in order to correct each biological variable score collected by a dredge net to a score similar to that of one collected by kick net.
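
    The correction step described at the end, mapping dredge-net scores onto the kick-net scale with a fitted linear model, is ordinary least-squares regression. A minimal sketch with invented paired scores (not the study's data):

      import numpy as np

      # Hypothetical paired site scores (dredge vs. kick net).
      dredge = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9])
      kick = np.array([5.0, 5.9, 4.6, 6.8, 6.1, 5.7])

      slope, intercept = np.polyfit(dredge, kick, 1)  # ordinary least squares
      corrected = slope * dredge + intercept          # dredge scores on kick scale
      print(f"kick ~= {slope:.2f} * dredge + {intercept:.2f}")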

  7. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem.
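
    The reduction rests on sampling the continuous-time plant with a zero-order hold, which yields an equivalent discrete-time state-space model. A standard discretization sketch (the block-matrix-exponential trick, not the paper's full H2 machinery):

      import numpy as np
      from scipy.linalg import expm

      def zoh_discretize(A, B, h):
          """Zero-order-hold discretization of x' = Ax + Bu with period h,
          giving x[k+1] = Ad x[k] + Bd u[k]."""
          n, m = A.shape[0], B.shape[1]
          M = np.zeros((n + m, n + m))
          M[:n, :n], M[:n, n:] = A, B
          E = expm(M * h)
          return E[:n, :n], E[:n, n:]  # Ad, Bd

      A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # toy stable plant
      B = np.array([[0.0], [1.0]])
      Ad, Bd = zoh_discretize(A, B, h=0.1)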

  8. Optimal control, investment and utilization schemes for energy storage under uncertainty

    Science.gov (United States)

    Mirhosseini, Niloufar Sadat

    Energy storage has the potential to offer new means for added flexibility on electricity systems. This flexibility can be used in a number of ways, including adding value towards asset management, power quality and reliability, integration of renewable resources, and energy bill savings for end users. However, uncertainty about system states and volatility in system dynamics can complicate the questions of when to invest in energy storage and how best to manage and utilize it. This work proposes models to address different problems associated with energy storage within a microgrid, including optimal control, investment, and utilization. Electric load, renewable resources output, storage technology cost, and electricity day-ahead and spot prices are the factors that bring uncertainty to the problem. A number of analytical methodologies have been adopted to develop the aforementioned models. Model Predictive Control and discretized dynamic programming, along with a new decomposition algorithm, are used to develop optimal control schemes for energy storage for two different levels of renewable penetration. Real option theory and Monte Carlo simulation, coupled with an optimal control approach, are used to obtain optimal incremental investment decisions, considering multiple sources of uncertainty. Two-stage stochastic programming is used to develop a novel and holistic methodology, including utilization of energy storage within a microgrid, in order to optimally interact with the energy market. Energy storage can contribute in terms of value generation and risk reduction for the microgrid. The integration of the models developed here is the basis for a framework which extends from long-term investments in storage capacity to short-term operational control (charge/discharge) of storage within a microgrid. In particular, the following practical goals are achieved: (i) optimal investment on storage capacity over time to maximize savings during normal and emergency

  9. Design of an optimization algorithm for clinical use

    International Nuclear Information System (INIS)

    Gustafsson, Anders

    1995-01-01

    Radiation therapy optimization has received much attention in the past few years. In combination with biological objective functions, the different optimization schemes have shown a potential to considerably increase the treatment outcome. With improved radiobiological models and increased computer capacity, radiation therapy optimization has now reached a stage where implementation in a clinical treatment planning system is realistic. A radiation therapy optimization method has been investigated with respect to its feasibility as a tool in a clinical 3D treatment planning system. The optimization algorithm is a constrained iterative gradient method. Photon dose calculation is performed using the clinically validated pencil-beam based algorithm of the clinical treatment planning system. Dose calculation within the optimization scheme is very time consuming and measures are required to decrease the calculation time. Different methods for more effective dose calculation within the optimization scheme have been investigated. The optimization results for adaptive sampling of calculation points and secondary-effect approximations in the dose calculation algorithm are compared with the optimization result for accurate dose calculation in all voxels of interest.

  10. A note on a fatal error of optimized LFC private information retrieval scheme and its corrected results

    DEFF Research Database (Denmark)

    Tamura, Jim; Kobara, Kazukuni; Fathi, Hanane

    2010-01-01

    A number of lightweight PIR (Private Information Retrieval) schemes have been proposed in recent years. In JWIS2006, Kwon et al. proposed a new scheme (optimized LFCPIR, or OLFCPIR), which aimed at reducing the communication cost of Lipmaa's O(log^2 n) PIR (LFCPIR) to O(log n). In this paper, however, we point out a fatal overflow error contained in OLFCPIR and show how the error can be corrected. Finally, we compare with LFCPIR to show that the communication cost of our corrected OLFCPIR is asymptotically the same as that of the previous LFCPIR.

  11. A quadratic form of the Coulomb operator and an optimization scheme for the extended Kohn-Sham models

    International Nuclear Information System (INIS)

    Kusakabe, Koichi

    2009-01-01

    To construct an optimization scheme for an extension of the Kohn-Sham approach, I introduce an operator form of the Coulomb interaction. This form is the sum of quadratic form pairs, which can be redefined in a self-consistent calculation of a multi-reference density functional theory. A detailed derivation of the form is given. A fluctuation term introduced in the extended Kohn-Sham scheme is expressed in this form for regularization. The present procedure also provides an exact derivation of effective negative interactions in charge fluctuation channels. Relevance to high-temperature superconductors is discussed.

  12. Secure RAID Schemes for Distributed Storage

    OpenAIRE

    Huang, Wentao; Bruck, Jehoshua

    2016-01-01

    We propose secure RAID, i.e., low-complexity schemes to store information in a distributed manner that is resilient to node failures and resistant to node eavesdropping. We generalize the concept of systematic encoding to secure RAID and show that systematic schemes have significant advantages in the efficiencies of encoding, decoding and random access. For the practical high rate regime, we construct three XOR-based systematic secure RAID schemes with optimal or almost optimal encoding and ...

  13. Optimal powering schemes for legged robotics

    Science.gov (United States)

    Muench, Paul; Bednarz, David; Czerniak, Gregory P.; Cheok, Ka C.

    2010-04-01

    Legged robots have tremendous mobility, but they can also be very inefficient. These inefficiencies can be due to suboptimal control schemes, among other things. If your goal is to get from point A to point B in the least amount of time, your control scheme will differ from the one you would use to get there with the least amount of energy. In this paper, we seek a balance between these extremes by looking at both efficiency and speed. We model a walking robot as a rimless wheel and, using Pontryagin's Maximum Principle (PMP), we find an "on-off" control for the model and describe the switching curve between these control extremes.

  14. Properties of the DREAM scheme and its optimization for application to proteins

    International Nuclear Information System (INIS)

    Westfeld, Thomas; Verel, René; Ernst, Matthias; Böckmann, Anja; Meier, Beat H.

    2012-01-01

    The DREAM scheme is an efficient adiabatic homonuclear polarization-transfer method suitable for multi-dimensional experiments in biomolecular solid-state NMR. The bandwidth and dynamics of the polarization transfer in the DREAM experiment depend on a number of experimental and spin-system parameters. In order to obtain optimal results, the dependence of the cross-peak intensity on these parameters needs to be understood and carefully controlled. We introduce a simplified model to semi-quantitatively describe the polarization-transfer patterns for the relevant spin systems. Numerical simulations for all natural amino acids (except tryptophane) show the dependence of the cross-peak intensities as a function of the radio-frequency-carrier position. This dependency can be used as a guide to select the desired conditions in protein spectroscopy. Practical guidelines are given on how to set up a DREAM experiment for optimized Cα/Cβ transfer, which is important in sequential assignment experiments.

  15. Parameter optimization of a computer-aided diagnosis scheme for the segmentation of microcalcification clusters in mammograms

    International Nuclear Information System (INIS)

    Gavrielides, Marios A.; Lo, Joseph Y.; Floyd, Carey E. Jr.

    2002-01-01

    Our purpose in this study is to develop a parameter optimization technique for the segmentation of suspicious microcalcification clusters in digitized mammograms. In previous work, a computer-aided diagnosis (CAD) scheme was developed that used local histogram analysis of overlapping subimages and a fuzzy rule-based classifier to segment individual microcalcifications, and clustering analysis to reduce the number of false positive clusters. The performance of this previous CAD scheme depended on a large number of parameters, such as the intervals used to calculate fuzzy membership values, and on the combination of membership values used by each decision rule. These parameters were optimized empirically based on the performance of the algorithm on the training set. To overcome the limitations of manual training and rule generation, the segmentation algorithm was modified to incorporate automatic parameter optimization. For the segmentation of individual microcalcifications, the new algorithm used a neural network with fuzzy-scaled inputs. The fuzzy-scaled inputs were created by processing the histogram features with a family of membership functions, the parameters of which were automatically extracted from the distribution of the feature values. The neural network was trained to classify feature vectors as either positive or negative. Individual microcalcifications were segmented from positive subimages. After clustering, another neural network was trained to eliminate false positive clusters. A database of 98 images provided training and testing sets to optimize the parameters and evaluate the CAD scheme, respectively. The performance of the algorithm was evaluated with FROC analysis. At a sensitivity rate of 93.2%, there was an average of 0.8 false positive clusters per image. The results are very comparable with those obtained using our previously published rule-based method. However, the new algorithm is more suited to generalize its

  16. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    Full Text Available The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  17. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  18. An optimal scheme for top quark mass measurement near the $t\bar{t}$ threshold at future $e^{+}e^{-}$ colliders

    Science.gov (United States)

    Chen, Wei-Guo; Wan, Xia; Wang, You-Kai

    2018-05-01

    A top quark mass measurement scheme near the $t\bar{t}$ production threshold in future $e^{+}e^{-}$ colliders, e.g. the Circular Electron Positron Collider (CEPC), is simulated. A $\chi^2$ fitting method is adopted to determine the number of energy points to be taken and their locations. Our results show that the optimal energy point is located near the largest slope of the cross section vs. beam energy curve, and the most efficient scheme is to concentrate all luminosity on this single energy point in the case of one-parameter top mass fitting. This suggests that the so-called data-driven method could be the best choice for future real experimental measurements. Conveniently, the top mass statistical uncertainty can also be calculated directly from the error matrix, even without any sampling and fitting. The agreement of the above two optimization methods has been checked. Our conclusion is that by taking 50 fb^-1 of total effective integrated luminosity data, the statistical uncertainty of the top potential-subtracted mass can be suppressed to about 7 MeV, and the total uncertainty is about 30 MeV. This precision will help to identify the stability of the electroweak vacuum at the Planck scale. Supported by National Science Foundation of China (11405102) and the Fundamental Research Funds for the Central Universities of China (GK201603027, GK201803019)
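
    The statistical machinery referred to, a one-parameter chi^2 fit to the threshold scan whose uncertainty can be read off the curvature (error matrix), can be sketched generically. The cross-section model and all numbers below are invented placeholders, not CEPC physics.

      import numpy as np

      # Toy threshold scan: the cross section rises smoothly near E = 2m.
      def xsec(E, m):
          return 0.5 / (1.0 + np.exp(-(E - 2.0 * m) / 0.4))  # arbitrary units

      rng = np.random.default_rng(0)
      m_true, E = 171.0, np.array([342.0])  # all luminosity on one energy point
      lumi = 50.0e3                         # placeholder integrated luminosity
      n_obs = rng.poisson(xsec(E, m_true) * lumi)

      # One-parameter chi^2 scan over the mass hypothesis.
      masses = np.linspace(170.5, 171.5, 2001)
      chi2 = np.array([np.sum((n_obs - xsec(E, m) * lumi) ** 2
                              / (xsec(E, m) * lumi)) for m in masses])
      i = chi2.argmin()
      # Statistical error from the curvature: chi2 rises by 1 at +/- 1 sigma.
      curv = np.gradient(np.gradient(chi2, masses), masses)[i]
      print(f"m_fit = {masses[i]:.3f} GeV, sigma ~ {np.sqrt(2.0 / curv):.3f} GeV")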

  19. Geochemical sampling scheme optimization on mine wastes based on hyperspectral data

    CSIR Research Space (South Africa)

    Zhao, T

    2008-07-01

    Full Text Available decontamination, for example, acid-generating minerals. Acid rock drainage can adversely affect the quality of drinking water and the health of riparian ecosystems. To assess or monitor the environmental impact of mining, sampling of mine waste is required...

  20. Accelerated Simplified Swarm Optimization with Exploitation Search Scheme for Data Clustering.

    Directory of Open Access Journals (Sweden)

    Wei-Chang Yeh

    Full Text Available Data clustering is commonly employed in many disciplines. The aim of clustering is to partition a set of data into clusters, in which objects within the same cluster are similar and dissimilar to other objects that belong to different clusters. Over the past decade, evolutionary algorithms have been commonly used to solve clustering problems. This study presents a novel algorithm based on simplified swarm optimization, an emerging population-based stochastic optimization approach with the advantages of simplicity, efficiency, and flexibility. The approach combines variable vibrating search (VVS) and rapid centralized strategy (RCS) in dealing with the clustering problem. VVS is an exploitation search scheme that can refine the quality of solutions by searching the extreme points near the global best position. RCS is developed to accelerate the convergence rate of the algorithm by using the arithmetic average. To empirically evaluate the performance of the proposed algorithm, experiments are conducted on 12 benchmark datasets, and the corresponding results are compared with recent works. Results of statistical analysis indicate that the proposed algorithm is competitive in terms of the quality of solutions.

  1. Self-optimizing robust nonlinear model predictive control

    NARCIS (Netherlands)

    Lazar, M.; Heemels, W.P.M.H.; Jokic, A.; Thoma, M.; Allgöwer, F.; Morari, M.

    2009-01-01

    This paper presents a novel method for designing robust MPC schemes that are self-optimizing in terms of disturbance attenuation. The method employs convex control Lyapunov functions and disturbance bounds to optimize robustness of the closed-loop system on-line, at each sampling instant - a unique

  2. Low-complexity joint symbol synchronization and sampling frequency offset estimation scheme for optical IMDD OFDM systems.

    Science.gov (United States)

    Zhang, Zhen; Zhang, Qianwu; Chen, Jian; Li, Yingchun; Song, Yingxiong

    2016-06-13

    A low-complexity joint symbol synchronization and SFO estimation scheme for asynchronous optical IMDD OFDM systems based on only one training symbol is proposed. Numerical simulations and experimental demonstrations are also undertaken to evaluate the performance of the proposed scheme. The experimental results show that robust and precise symbol synchronization and SFO estimation can be achieved simultaneously at received optical power as low as -20 dBm in asynchronous OOFDM systems. The SFO estimation accuracy in MSE can be lower than 1 × 10^-11 for SFOs ranging from -60 ppm to 60 ppm after 25 km SSMF transmission. Optimal system performance can be maintained as long as the cumulative number of frames employed for calculation is less than 50 under the above-mentioned conditions. Meanwhile, the proposed joint scheme has a low level of operational complexity compared with existing methods when symbol synchronization and SFO estimation are considered together. The above-mentioned results can serve as an important reference in practical system designs.

  3. Adaptive Digital Watermarking Scheme Based on Support Vector Machines and Optimized Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoyi Zhou

    2018-01-01

    Full Text Available Digital watermarking is an effective solution to the problem of copyright protection, thus maintaining the security of digital products in the network. An improved scheme to increase the robustness of embedded information on the basis of the discrete cosine transform (DCT) domain is proposed in this study. The embedding process consists of two main procedures. Firstly, the embedding intensity is adaptively strengthened with support vector machines (SVMs) by training on 1600 image blocks of different texture and luminance. Secondly, the embedding position is selected with an optimized genetic algorithm (GA). To optimize the GA, the best individual in each generation goes directly into the next generation, and the second-best individual participates in the crossover and mutation process. The transparency reaches 40.5 when the GA's generation number is 200. A case study was conducted on a 256 × 256 standard Lena image with the proposed method. After various attacks (such as cropping, JPEG compression, Gaussian low-pass filtering (3, 0.5), histogram equalization, and contrast increasing (0.5, 0.6)) on the watermarked image, the extracted watermark was compared with the original one. Results demonstrate that the watermark can be effectively recovered after these attacks. Even though the algorithm is weak against rotation attacks, it provides high imperceptibility and robustness, and hence is a successful candidate for implementing a novel image watermarking scheme meeting real-time requirements.

  4. Optimizing Combinations of Flavonoids Deriving from Astragali Radix in Activating the Regulatory Element of Erythropoietin by a Feedback System Control Scheme

    Directory of Open Access Journals (Sweden)

    Hui Yu

    2013-01-01

    Full Text Available Identifying a potent drug combination from a herbal mixture is usually quite challenging, due to the large number of possible trials. Using an engineering approach based on the feedback system control (FSC) scheme, we identified the potential best combinations of four flavonoids, including formononetin, ononin, calycosin, and calycosin-7-O-β-D-glucoside derived from Astragali Radix (AR; Huangqi), which provided the best biological action at minimal doses. Out of more than one thousand possible combinations, only tens of trials were required to optimize the flavonoid combinations that stimulated a maximal transcriptional activity of the hypoxia response element (HRE), a critical regulator of erythropoietin (EPO) transcription, in cultured human embryonic kidney fibroblasts (HEK293T). By using the FSC scheme, 90% of the work and time can be saved, and the optimized flavonoid combinations increased the HRE-mediated transcriptional activity by ~3-fold as compared with individual flavonoids, while the amount of flavonoids was reduced by ~10-fold. Our study suggests that the optimized combination of flavonoids may have a strong effect in activating the regulatory element of erythropoietin at very low dosage, which may be used as a new source of natural hematopoietic agent. The present work also indicates that the FSC scheme is able to serve as an efficient and model-free approach to optimize the drug combination of different ingredients within a herbal decoction.

  5. Near-optimal labeling schemes for nearest common ancestors

    DEFF Research Database (Denmark)

    Alstrup, Stephen; Bistrup Halvorsen, Esben; Larsen, Kasper Green

    2014-01-01

    and Korman (STOC'10) established that labels in ancestor labeling schemes have size log n + Θ(log log n), our new lower bound separates ancestor and NCA labeling schemes. Our upper bound improves the 10 log n upper bound by Alstrup, Gavoille, Kaplan and Rauhe (TOCS'04), and our theoretical result even...

  6. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed through optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
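
    The modularity index at the heart of the segmentation can be computed directly on a graph representation of the WDN. A minimal sketch with networkx on a toy pipe network (the nodes, weights and partition are invented):

      import networkx as nx
      from networkx.algorithms.community import modularity

      # Toy network: junctions as nodes, pipe weights assigned per the task.
      G = nx.Graph()
      G.add_weighted_edges_from([(1, 2, 1.0), (2, 3, 1.0), (3, 1, 0.5),
                                 (4, 5, 1.0), (5, 6, 1.0), (6, 4, 0.5),
                                 (3, 4, 0.1)])  # weak link = candidate cut
      partition = [{1, 2, 3}, {4, 5, 6}]        # two candidate modules (DMAs)
      print(modularity(G, partition, weight="weight"))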

  7. Field Sampling from a Segmented Image

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-06-01

    Full Text Available This paper presents a statistical method for deriving the optimal prospective field sampling scheme on a remote sensing image to represent different categories in the field. The iterated conditional modes algorithm (ICM) is used for segmentation...

  8. Exploring synergistic benefits of Water-Food-Energy Nexus through multi-objective reservoir optimization schemes.

    Science.gov (United States)

    Uen, Tinn-Shuan; Chang, Fi-John; Zhou, Yanlai; Tsai, Wen-Ping

    2018-08-15

    This study proposed a holistic three-fold scheme that synergistically optimizes the benefits of the Water-Food-Energy (WFE) Nexus by integrating the short/long-term joint operation of a multi-objective reservoir with irrigation ponds in response to urbanization. The three-fold scheme was implemented step by step: (1) optimizing short-term (daily scale) reservoir operation for maximizing hydropower output and final reservoir storage during typhoon seasons; (2) simulating long-term (ten-day scale) water shortage rates in consideration of the availability of irrigation ponds for both agricultural and public sectors during non-typhoon seasons; and (3) promoting the synergistic benefits of the WFE Nexus in a year-round perspective by integrating the short-term optimization and long-term simulation of reservoir operations. The pivotal Shihmen Reservoir and 745 irrigation ponds located in Taoyuan City of Taiwan, together with the surrounding urban areas, formed the study case. The results indicated that the optimal short-term reservoir operation obtained from the non-dominated sorting genetic algorithm II (NSGA-II) could largely increase hydropower output while only slightly affecting water supply. The simulation results of the reservoir coupled with irrigation ponds indicated that such joint operation could significantly reduce agricultural and public water shortage rates by 22.2% and 23.7% on average, respectively, as compared to those of reservoir operation excluding irrigation ponds. The results of year-round short/long-term joint operation showed that water shortage rates could be reduced by 10% at most, the food production rate could be increased by up to 47%, and the hydropower benefit could increase by up to 9.33 million USD per year in a wet year. Consequently, the proposed methodology could be a viable approach to promoting the synergistic benefits of the WFE Nexus, and the results provided unique insights for stakeholders and policymakers to pursue

  9. Optimized scheme in coal-fired boiler combustion based on information entropy and modified K-prototypes algorithm

    Science.gov (United States)

    Gu, Hui; Zhu, Hongxia; Cui, Yanfeng; Si, Fengqi; Xue, Rui; Xi, Han; Zhang, Jiayu

    2018-06-01

    An integrated combustion optimization scheme is proposed that jointly considers coal-fired boiler combustion efficiency and outlet NOx emissions. Continuous attribute discretization and reduction techniques are handled as optimization preparation by the E-Cluster and C_RED methods, in which the number of segments need not be provided in advance and can be continuously adapted to the characteristics of the data. In order to obtain multi-objective results with a clustering method for mixed data, a modified K-prototypes algorithm is then proposed. The algorithm proceeds in two stages: a K-prototypes algorithm with self-adaptation of the cluster number, followed by clustering for multi-objective optimization. Field tests were carried out at a 660 MW coal-fired boiler to provide real data as a case study for controllable attribute discretization and reduction in the boiler system, and for obtaining optimization parameters under the multi-objective rule [max η_b, min y_NOx].
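
    K-prototypes handles mixed data by combining a Euclidean distance on the numeric attributes with a weighted matching distance on the categorical ones. A minimal sketch of that dissimilarity (gamma and the two operating records are illustrative assumptions):

      import numpy as np

      def kprototypes_distance(a_num, a_cat, b_num, b_cat, gamma=1.0):
          """Squared Euclidean distance on numeric parts plus gamma times
          the number of mismatched categorical parts."""
          d_num = np.sum((np.asarray(a_num) - np.asarray(b_num)) ** 2)
          d_cat = sum(u != v for u, v in zip(a_cat, b_cat))
          return d_num + gamma * d_cat

      # Two hypothetical records: (load MW, O2 %) plus (coal type, mill pattern).
      print(kprototypes_distance([660.0, 3.1], ["A", "p1"],
                                 [640.0, 3.4], ["A", "p2"]))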

  10. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  11. Cost-based droop scheme for DC microgrid

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Wang, Peng; Loh, Poh Chiang

    2014-01-01

    DC microgrids are gaining interest due to the higher efficiencies of DC distribution compared with AC. The benefits of DC systems have been widely researched for data centers, IT facilities and residential applications. The research focus, however, has been more on system architecture and optimal voltage level, less on optimized operation and control of generation sources. The latter theme is pursued in this paper, where a cost-based droop scheme is proposed for distributed generators (DGs) in DC microgrids. Unlike the traditional proportional power sharing based droop scheme, the proposed scheme ...-connected operation. Most importantly, the proposed scheme can reduce the overall total generation cost in DC microgrids without a centralized controller and communication links. The performance of the proposed scheme has been verified under different load conditions.
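
    A conventional DC droop law sets the converter reference as V_ref = V0 - k*P; the cost-based variant described here shapes the droop gain with each DG's generation cost, so that cheaper units are loaded more heavily. A hedged sketch (the linear cost-to-gain mapping is an assumed illustration, not the paper's exact law):

      def droop_voltage(v_nominal, p_out, cost_per_kwh, k_base=0.02):
          """Cost-based droop: a higher marginal cost gives a steeper slope,
          so the cheaper DG naturally picks up more of the common load.
          The linear cost-to-gain mapping is an illustrative assumption."""
          k = k_base * cost_per_kwh
          return v_nominal - k * p_out

      # Two DGs sharing a 400 V DC bus (hypothetical numbers).
      for name, cost in [("cheap_DG", 0.05), ("costly_DG", 0.20)]:
          print(name, droop_voltage(400.0, p_out=10.0, cost_per_kwh=cost))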

  12. A new configurational bias scheme for sampling supramolecular structures

    Energy Technology Data Exchange (ETDEWEB)

    De Gernier, Robin; Mognetti, Bortolo M., E-mail: bmognett@ulb.ac.be [Center for Nonlinear Phenomena and Complex Systems, Université Libre de Bruxelles, Code Postal 231, Campus Plaine, B-1050 Brussels (Belgium); Curk, Tine [Department of Chemistry, University of Cambridge, Cambridge CB2 1EW (United Kingdom); Dubacheva, Galina V. [Biosurfaces Unit, CIC biomaGUNE, Paseo Miramon 182, 20009 Donostia - San Sebastian (Spain); Richter, Ralf P. [Biosurfaces Unit, CIC biomaGUNE, Paseo Miramon 182, 20009 Donostia - San Sebastian (Spain); Université Grenoble Alpes, DCM, 38000 Grenoble (France); CNRS, DCM, 38000 Grenoble (France); Max Planck Institute for Intelligent Systems, 70569 Stuttgart (Germany)

    2014-12-28

    We present a new simulation scheme which allows efficient sampling of reconfigurable supramolecular structures made of polymeric constructs functionalized by reactive binding sites. The algorithm is based on the configurational bias scheme of Siepmann and Frenkel and is powered by the possibility of changing the topology of the supramolecular network by a non-local Monte Carlo algorithm. Such a plan is accomplished by a multi-scale modelling that merges coarse-grained simulations, describing the typical polymer conformations, with experimental results accounting for the free energy terms involved in the reactions of the active sites. We test the new algorithm on a system of DNA-coated colloids, for which we compute the hybridisation free energy cost associated with the binding of tethered single-stranded DNAs terminated by short sequences of complementary nucleotides. In order to demonstrate the versatility of our method, we also consider polymers functionalized by receptors that bind a surface decorated by ligands. In particular, we compute the density of states of adsorbed polymers as a function of the number of ligand–receptor complexes formed. Such a quantity can be used to study the conformational properties of adsorbed polymers, which is useful when engineering adsorption with tailored properties. We successfully compare the results with the predictions of a mean field theory. We believe that the proposed method will be a useful tool to investigate supramolecular structures resulting from direct interactions between functionalized polymers, for which efficient numerical methodologies of investigation are still lacking.

  13. Optimal relaxed causal sampler using sampled-date system theory

    NARCIS (Netherlands)

    Shekhawat, Hanumant; Meinsma, Gjerrit

    This paper studies the design of an optimal relaxed causal sampler using sampled data system theory. A lifted frequency domain approach is used to obtain the existence conditions and the optimal sampler. A state space formulation of the results is also provided. The resulting optimal relaxed causal

  14. Adjoint optimization scheme for lower hybrid current rampup and profile control in Tokamak

    International Nuclear Information System (INIS)

    Litaudon, X.; Moreau, D.; Bizarro, J.P.; Hoang, G.T.; Kupfer, K.; Peysson, Y.; Shkarofsky, I.P.; Bonoli, P.

    1992-12-01

    The purpose of this work is to take into account and study the effect of the electric field profiles on the Lower Hybrid (LH) current drive efficiency during transient phases such as rampup. As a complement to the full ray-tracing / Fokker-Planck studies, and for the purpose of optimization studies, we developed a simplified 1-D model based on the adjoint Karney-Fisch numerical results. This approach allows us to estimate the LH power deposition profile which would be required for ramping up the current at a prescribed rate, with a prescribed total current density profile (q-profile) and surface loop voltage. For rampup optimization studies, we can therefore scan the whole parameter space and eliminate a posteriori those scenarios which correspond to unrealistic deposition profiles. We thus obtain the time evolution of the LH power, minor radius of the plasma, volt-second consumption and total energy dissipated. Optimization can thus be performed with respect to any of these criteria. This scheme is illustrated by numerical simulations performed with TORE-SUPRA and NET/ITER parameters. We conclude with a derivation of a simple and general scaling law for the flux consumption during the rampup phase.

  15. Evolutional Optimization on Material Ordering and Inventory Control of Supply Chain through Incentive Scheme

    Science.gov (United States)

    Prasertwattana, Kanit; Shimizu, Yoshiaki; Chiadamrong, Navee

    This paper studies the material ordering and inventory control of supply chain systems. The effect of control policies is analyzed under three different configurations of the supply chain, and the formulated problem is solved using an evolutionary optimization method known as Differential Evolution (DE). The numerical results show that the coordinating policy with the incentive scheme outperforms the other policies and can improve the performance of the overall system, as well as of all members, under the concept of supply chain management.
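
    Differential Evolution itself is available off the shelf; a minimal sketch of applying it to a toy per-stage ordering-cost function (an EOQ-like placeholder, not the paper's supply-chain model):

      import numpy as np
      from scipy.optimize import differential_evolution

      def ordering_cost(q):
          """Placeholder cost per stage: setup cost falls with order size,
          holding cost grows with it (an EOQ-like trade-off)."""
          demand, setup, holding = 1000.0, 50.0, 0.2
          return np.sum(setup * demand / q + holding * q / 2.0)

      # One order quantity per stage of a hypothetical 3-stage chain.
      result = differential_evolution(ordering_cost, bounds=[(1, 2000)] * 3,
                                      seed=0)
      print(result.x, result.fun)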

  16. Sample Adaptive Offset Optimization in HEVC

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2014-11-01

    Full Text Available As the next-generation video coding standard, High Efficiency Video Coding (HEVC) adopted many useful tools to improve coding efficiency. Sample Adaptive Offset (SAO) is a technique that reduces sample distortion by providing offsets to pixels in the in-loop filter. In SAO, pixels in a Largest Coding Unit (LCU) are classified into several categories, and then categories and offsets are assigned based on Rate-Distortion Optimization (RDO) of the reconstructed pixels in the LCU. All pixels in an LCU are operated on by the same SAO process; however, transform and inverse transform make the distortion of pixels at Transform Unit (TU) edges larger than the distortion inside the TU, even after deblocking filtering (DF) and SAO. The categories of SAO can also be refined, since they are not appropriate for many cases. This paper proposes a TU edge offset mode and a category refinement for SAO in HEVC. Experimental results show that these two optimizations achieve -0.13 and -0.2 gain, respectively, compared with the SAO in HEVC. The proposed algorithm using both optimizations achieves a -0.23 BD-rate gain compared with the SAO in HEVC, which is a 47% improvement with nearly no increase in coding time.
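
    For a given pixel category, the distortion-minimizing SAO offset has a closed form: the mean of the original-minus-reconstructed differences in that category, rounded and clipped to the legal range. A minimal sketch of that rate-distortion building block (toy pixel values):

      import numpy as np

      def best_sao_offset(orig, recon, clip=7):
          """Least-squares SAO offset for one pixel category."""
          diff = orig.astype(float) - recon.astype(float)
          return int(np.clip(np.round(diff.mean()), -clip, clip))

      orig = np.array([104, 100, 98, 103], dtype=np.int16)   # toy category
      recon = np.array([101, 97, 96, 100], dtype=np.int16)
      print(best_sao_offset(orig, recon))  # -> 3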

  17. Determination of Optimal Opening Scheme for Electromagnetic Loop Networks Based on Fuzzy Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Yang Li

    2016-01-01

    Full Text Available Studying optimization and decision-making for opening electromagnetic loop networks plays an important role in the planning and operation of power grids. First, the basic principle of the fuzzy analytic hierarchy process (FAHP) is introduced, and then an improved FAHP-based scheme evaluation method is proposed for decoupling electromagnetic loop networks, based on a set of indicators reflecting the performance of the candidate schemes. The proposed method combines the advantages of the analytic hierarchy process (AHP) and fuzzy comprehensive evaluation. On the one hand, AHP effectively combines qualitative and quantitative analysis to ensure the rationality of the evaluation model; on the other hand, the judgment matrix and qualitative indicators are expressed with trapezoidal fuzzy numbers to make decision-making more realistic. The effectiveness of the proposed method is validated by application results on the real power system of Liaoning province, China.

  18. Unified Importance Sampling Schemes for Efficient Simulation of Outage Capacity over Generalized Fading Channels

    KAUST Repository

    Rached, Nadhir B.; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    The outage capacity (OC) is among the most important performance metrics of communication systems operating over fading channels. Of interest in the present paper is the evaluation of the OC at the output of Equal Gain Combining (EGC) and Maximum Ratio Combining (MRC) receivers. In this case, the problem turns out to be that of computing the Cumulative Distribution Function (CDF) of a sum of independent random variables. Since finding a closed-form expression for the CDF of the sum distribution is out of reach for a wide class of commonly used distributions, methods based on Monte Carlo (MC) simulations take pride of place. In order to allow for the estimation of the operating range of small outage probabilities, it is of paramount importance to develop fast and efficient estimation methods, as naive MC simulations would require high computational complexity. Along this line, we propose in this work two unified, yet efficient, hazard rate twisting Importance Sampling (IS) based approaches that efficiently estimate the OC of MRC or EGC diversity techniques over generalized independent fading channels. The first estimator is shown to possess the asymptotic optimality criterion and applies to arbitrary fading models, whereas the second one achieves the well-desired bounded relative error property for the majority of well-known fading variates. Moreover, the second estimator is shown to achieve the asymptotic optimality property under the particular Log-normal environment. Selected simulation results are finally provided in order to illustrate the substantial computational gain achieved by the proposed IS schemes over naive MC simulations.
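
    The flavor of twisting-based importance sampling can be conveyed on a toy case: estimating a small left-tail probability of a sum of exponential channel gains by exponentially tilting the sampling density and unbiasing with the likelihood ratio. This generic sketch is not the paper's hazard-rate-twisting estimator.

      import numpy as np

      rng = np.random.default_rng(0)
      N, gamma, n_mc = 4, 0.1, 100_000  # sum of N Exp(1) gains, outage threshold

      # Naive MC: P(sum < gamma) is tiny, so hits are rare or absent.
      x = rng.exponential(1.0, size=(n_mc, N))
      print("naive MC:", np.mean(x.sum(axis=1) < gamma))

      # Tilted IS: sample Exp(rate=lam) to push mass toward the rare event,
      # then reweight with the likelihood ratio exp((lam-1)*sum(x)) / lam^N.
      lam = N / gamma  # heuristic tilt aimed at the threshold
      y = rng.exponential(1.0 / lam, size=(n_mc, N))
      w = np.exp((lam - 1.0) * y.sum(axis=1)) / lam**N
      print("tilted IS:", np.mean(w * (y.sum(axis=1) < gamma)))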

  19. Unified Importance Sampling Schemes for Efficient Simulation of Outage Capacity over Generalized Fading Channels

    KAUST Repository

    Rached, Nadhir B.

    2015-11-13

    The outage capacity (OC) is among the most important performance metrics of communication systems operating over fading channels. Of interest in the present paper is the evaluation of the OC at the output of Equal Gain Combining (EGC) and Maximum Ratio Combining (MRC) receivers. In this case, the problem turns out to be that of computing the Cumulative Distribution Function (CDF) of a sum of independent random variables. Since finding a closed-form expression for the CDF of the sum distribution is out of reach for a wide class of commonly used distributions, methods based on Monte Carlo (MC) simulations take pride of place. In order to allow for the estimation of the operating range of small outage probabilities, it is of paramount importance to develop fast and efficient estimation methods, as naive MC simulations would require high computational complexity. Along this line, we propose in this work two unified, yet efficient, hazard rate twisting Importance Sampling (IS) based approaches that efficiently estimate the OC of MRC or EGC diversity techniques over generalized independent fading channels. The first estimator is shown to possess the asymptotic optimality criterion and applies to arbitrary fading models, whereas the second one achieves the well-desired bounded relative error property for the majority of well-known fading variates. Moreover, the second estimator is shown to achieve the asymptotic optimality property under the particular Log-normal environment. Selected simulation results are finally provided in order to illustrate the substantial computational gain achieved by the proposed IS schemes over naive MC simulations.

  20. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of the literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models.
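
    The distinction between the two initial designs is simple to state in code: both stratify each axis into n intervals, random LHS draws a uniform point inside each interval, while midpoint LHS takes the interval centers. A minimal sketch:

      import numpy as np

      def lhs(n, dim, midpoint=False, rng=None):
          """Basic Latin hypercube in [0,1)^dim, one stratum per axis per
          sample; midpoint=True gives midpoint LHS, else random LHS."""
          rng = rng or np.random.default_rng()
          u = 0.5 if midpoint else rng.random((n, dim))
          strata = np.stack([rng.permutation(n) for _ in range(dim)], axis=1)
          return (strata + u) / n

      print(lhs(5, 2, midpoint=True, rng=np.random.default_rng(1)))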

  1. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use for solving optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted-sum method. A graphical display allows one to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry
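
    The core loop of spatial simulated annealing is compact: jitter one sample point, re-evaluate the objective (here MSSD), and accept worse configurations with a probability that decays over the run. spsann itself is an R package; the sketch below is an illustrative Python rendering, not its implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      grid = rng.random((1000, 2))  # prediction locations in the unit square
      pts = rng.random((20, 2))     # initial sample pattern

      def mssd(p):
          """Mean squared shortest distance from grid nodes to sample points."""
          d2 = ((grid[:, None, :] - p[None, :, :]) ** 2).sum(-1)
          return d2.min(axis=1).mean()

      energy, temp, n_iter = mssd(pts), 1e-3, 3000
      for it in range(n_iter):
          trial = pts.copy()
          i = rng.integers(len(pts))      # perturb one point at a time
          step = 0.5 * (1 - it / n_iter)  # max distance shrinks linearly
          trial[i] = np.clip(trial[i] + rng.uniform(-step, step, 2), 0, 1)
          e_new = mssd(trial)
          # Metropolis rule: always take improvements, sometimes worse states.
          if e_new < energy or rng.random() < np.exp((energy - e_new) / temp):
              pts, energy = trial, e_new
          temp *= 0.999                   # acceptance probability decays
      print(energy)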

  2. On Optimal, Minimal BRDF Sampling for Reflectance Acquisition

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Jensen, Henrik Wann; Ramamoorthi, Ravi

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) is critical for rendering, and accurate material representation requires data-driven reflectance models. However, isotropic BRDFs are 3D functions, and measuring the reflectance of a flat sample can require a million incident and outgoing direction pairs, making the use of measured BRDFs impractical. In this paper, we address the problem of reconstructing a measured BRDF from a limited number of samples. We present a novel mapping of the BRDF space, allowing for extraction of descriptive principal components from measured databases, such as the MERL BRDF database. We optimize for the best sampling directions, and explicitly provide the optimal set of incident and outgoing directions in the Rusinkiewicz parameterization for n = {1, 2, 5, 10, 20} samples. Based on the principal components, we describe a method for accurately reconstructing BRDFs.

  3. Model-Based Predictive Control Scheme for Cost Optimization and Balancing Services for Supermarket Refrigeration Systems

    DEFF Research Database (Denmark)

    Weerts, Hermanus H. M.; Shafiei, Seyed Ehsan; Stoustrup, Jakob

    2014-01-01

    A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate regulatory power services as well as energy cost optimization of such systems in the smart grid. The nonlinear dynamics present in large-scale refrigeration plants challenge the predictive control design. It is shown, however, that taking into account the knowledge of the different time scales in the dynamical subsystems makes a linear formulation of a centralized predictive controller possible. A realistic scenario of regulatory power services in the smart grid is considered and formulated within the same objective as the cost optimization. A simulation benchmark validated against real data and including the significant dynamics of the system is employed to show the effectiveness of the proposed control scheme.

  4. A unified thermostat scheme for efficient configurational sampling for classical/quantum canonical ensembles via molecular dynamics

    Science.gov (United States)

    Zhang, Zhijun; Liu, Xinzijian; Chen, Zifei; Zheng, Haifeng; Yan, Kangyu; Liu, Jian

    2017-07-01

    We show a unified second-order scheme for constructing simple, robust, and accurate algorithms for typical thermostats for configurational sampling for the canonical ensemble. When Langevin dynamics is used, the scheme leads to the BAOAB algorithm that has been recently investigated. We show that the scheme is also useful for other types of thermostats, such as the Andersen thermostat and Nosé-Hoover chain, regardless of whether the thermostat is deterministic or stochastic. In addition to analytical analysis, two 1-dimensional models and three typical real molecular systems that range from the gas phase, clusters, to the condensed phase are used in numerical examples for demonstration. Accuracy may be increased by an order of magnitude for estimating coordinate-dependent properties in molecular dynamics (when the same time interval is used), irrespective of which type of thermostat is applied. The scheme is especially useful for path integral molecular dynamics because it consistently improves the efficiency for evaluating all thermodynamic properties for any type of thermostat.
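
    For reference, the BAOAB splitting mentioned above is easy to state concretely. A minimal sketch of one BAOAB step for Langevin dynamics on a 1-D harmonic potential follows (illustrative parameters, not the authors' code): B is a half-step momentum kick, A a half-step position drift, and O the exact Ornstein-Uhlenbeck update of the friction/noise part. The configurational average <q^2> should approach kT/k.

        import numpy as np

        rng = np.random.default_rng(1)

        # Harmonic oscillator: U(q) = 0.5 * k * q^2, so force(q) = -k * q.
        k, m, kT, gamma, dt = 1.0, 1.0, 1.0, 2.0, 0.05
        force = lambda q: -k * q

        def baoab_step(q, p):
            p += 0.5 * dt * force(q)              # B: half momentum kick
            q += 0.5 * dt * p / m                 # A: half position drift
            c1 = np.exp(-gamma * dt)              # O: exact OU (friction + noise)
            c2 = np.sqrt(kT * m * (1.0 - c1 ** 2))
            p = c1 * p + c2 * rng.normal()
            q += 0.5 * dt * p / m                 # A: half position drift
            p += 0.5 * dt * force(q)              # B: half momentum kick
            return q, p

        q, p = 1.0, 0.0
        qs = []
        for step in range(200000):
            q, p = baoab_step(q, p)
            qs.append(q)

        # Configurational sampling check: <q^2> should approach kT/k = 1.
        print(f"<q^2> = {np.mean(np.square(qs[10000:])):.3f} (exact 1.000)")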

  5. Continuous quality control of the blood sampling procedure using a structured observation scheme

    DEFF Research Database (Denmark)

    Seemann, T. L.; Nybo, M.

    2015-01-01

    Background: An important preanalytical factor is the blood sampling procedure and its adherence to the guidelines, i.e. CLSI and ISO 15189, in order to ensure a consistent quality of blood collection. It is therefore critically important to introduce quality control on this part of the process....... As suggested by the EFLM working group on the preanalytical phase, we introduced continuous quality control of the blood sampling procedure using a structured observation scheme to monitor the quality of blood sampling performed on an everyday basis. Materials and methods: Based on our own routines the EFLM....... Conclusion: It is possible to establish continuous quality control of blood sampling. It has been well accepted by the staff, and we have already been able to identify critical areas in the sampling process. We find that continuous auditing increases focus on the quality of blood collection, which ensures...

  6. How to decide the optimal scheme and the optimal time for construction

    International Nuclear Information System (INIS)

    Gjermundsen, T.; Dalsnes, B.; Jensen, T.

    1991-01-01

    Since development in Norway began some 105 years ago, the mean annual generation has reached approximately 110 TWh. This means that there is a large potential for uprating and refurbishing (U/R). A project undertaken by the Norwegian Water Resources and Energy Administration (NVE) has identified energy resources of about 10 TWh annual generation obtainable by means of U/R. One problem in harnessing the potential owned by small and medium-sized electricity boards is the lack of simple tools to help make the right decisions. The paper describes a simple model for finding the best scheme and the optimal time to start, based on the principle of present value. The main inputs are: production, price, annual maintenance costs, the remaining lifetime, and the social rate of return. The model calculates the present value of U/R/N for different starting times. In addition, the present value of the existing plant is calculated. Several alternatives can be considered; the best one is the one which gives the highest present value relative to the value of the existing plant. The internal rate of return is also calculated. To illustrate the sensitivity, a star diagram is shown. The model gives the opportunity to include environmental charges and the value of effect (peak power). (Author)

  7. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hojin; Li Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing Lei [Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States) and Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Ehwa University, Seoul 158-710 (Korea, Republic of); Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States); Department of Statistics, Stanford University, Stanford, California 94305-4065 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5304 (United States)

    2012-07-15

    Purpose: A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the

  8. Hierarchical Control for Optimal and Distributed Operation of Microgrid Systems

    DEFF Research Database (Denmark)

    Meng, Lexuan

    manages the power flow with external grids, while the economic and optimal operation of MGs is not guaranteed by applying the existing schemes. Accordingly, this project dedicates to the study of real-time optimization methods for MGs, including the review of optimization algorithms, system level...... mathematical modeling, and the implementation of real-time optimization into existing hierarchical control schemes. Efficiency enhancement in DC MGs and optimal unbalance compensation in AC MGs are taken as the optimization objectives in this project. Necessary system dynamic modeling and stability analysis......, a discrete-time domain modeling method is proposed to establish an accurate system level model. Taking into account the different sampling times of real world plant, digital controller and communication devices, the system is modeled with these three parts separately, and with full consideration...

  9. Labeling schemes for bounded degree graphs

    DEFF Research Database (Denmark)

    Adjiashvili, David; Rotbart, Noy Galil

    2014-01-01

    We investigate adjacency labeling schemes for graphs of bounded degree Δ = O(1). In particular, we present an optimal (up to an additive constant) log n + O(1) adjacency labeling scheme for bounded degree trees. The latter scheme is derived from a labeling scheme for bounded degree outerplanar...... graphs. Our results complement a similar bound recently obtained for bounded depth trees [Fraigniaud and Korman, SODA 2010], and may provide new insights for closing the long standing gap for adjacency in trees [Alstrup and Rauhe, FOCS 2002]. We also provide improved labeling schemes for bounded degree...

  10. A Velocity-Level Bi-Criteria Optimization Scheme for Coordinated Path Tracking of Dual Robot Manipulators Using Recurrent Neural Network.

    Science.gov (United States)

    Xiao, Lin; Zhang, Yongsheng; Liao, Bolin; Zhang, Zhijun; Ding, Lei; Jin, Long

    2017-01-01

    A dual-robot system is a robotic device composed of two robot arms. To eliminate the joint-angle drift and prevent the occurrence of high joint velocity, a velocity-level bi-criteria optimization scheme, which includes two criteria (i.e., the minimum velocity norm and the repetitive motion), is proposed and investigated for coordinated path tracking of dual robot manipulators. Specifically, to realize the coordinated path tracking of dual robot manipulators, two subschemes are first presented for the left and right robot manipulators. After that, such two subschemes are reformulated as two general quadratic programs (QPs), which can be formulated as one unified QP. A recurrent neural network (RNN) is thus presented to solve effectively the unified QP problem. At last, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and the efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.

  11. Optimization of trigeneration systems by Mathematical Programming: Influence of plant scheme and boundary conditions

    International Nuclear Information System (INIS)

    Piacentino, A.; Gallea, R.; Cardona, F.; Lo Brano, V.; Ciulla, G.; Catrini, P.

    2015-01-01

    Highlights: • Lay-out, design and operation of a trigeneration plant are optimized for a hotel building. • The temporal basis used for the optimization is properly selected. • The influence of the plant scheme on the optimal results is discussed. • Sensitivity analysis is performed for different levels of tax exemption on fuel. • The dynamic behavior of the cogeneration unit influences its optimal operation strategy. - Abstract: The large potential for energy saving by cogeneration and trigeneration in the building sector is scarcely exploited due to a number of obstacles to making the investments attractive. The analyst often encounters difficulties in identifying optimal design and operation strategies, since a number of factors, either endogenous (i.e. related to the energy load profiles) or exogenous (i.e. related to external conditions like energy prices and support mechanisms), influence the economic viability. In this paper a decision tool is adopted, which represents an upgrade of software analyzed in previous papers; the tool simultaneously optimizes the plant lay-out, the sizes of the main components and their operation strategy. For a specific building in the hotel sector, a preliminary analysis is performed to identify the most promising plant configuration, in terms of the type of cogeneration unit (either a microturbine or a diesel oil/natural gas-fueled reciprocating engine) and absorption chiller. Then, sensitivity analyses are carried out to investigate the effects induced by: (a) tax exemption for the fuel consumed in “efficient cogeneration” mode, and (b) the dynamic behavior of the prime mover and its consequent capability to rapidly adjust its load level to follow the energy loads

  12. Using remotely-sensed data for optimal field sampling

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-09-01

    Full Text Available: Using Remotely-Sensed Data for Optimal Field Sampling, by Dr Pravesh Debba. Statistics is the science pertaining to the collection, summary, analysis, interpretation and presentation of data. It is often impractical... studies are: where to sample, what to sample and how many samples to obtain. Conventional sampling techniques are not always suitable in environmental studies, and scientists have explored the use of remotely-sensed data as ancillary information to aid...

  13. Performance Analysis and Optimization of an Adaptive Admission Control Scheme in Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Shunfu Jin

    2013-01-01

    Full Text Available In cognitive radio networks, if all the secondary user (SU) packets join the system without any restriction, the average latency of the SU packets will be greater, especially when the traffic load of the system is high. To address this, we propose an adaptive admission control scheme with a system access probability for the SU packets. We suppose the system access probability is inversely proportional to the total number of packets in the system and introduce an Adaptive Factor to adjust the system access probability. Accordingly, we build a discrete-time preemptive queueing model with an adjustable joining rate. In order to obtain the steady-state distribution of the queueing model exactly, we construct a two-dimensional Markov chain. Moreover, we derive formulas for the blocking rate, the throughput, and the average latency of the SU packets. Afterwards, we provide numerical results to investigate the influence of the Adaptive Factor on different performance measures. We also give the individually optimal strategy and the socially optimal strategy from the standpoint of the SU packets. Finally, we provide a pricing mechanism to coordinate the two optimal strategies.
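
    As a rough illustration of the admission rule described above, the sketch below simulates a discrete-time queue in which each arriving SU packet is admitted with probability inversely proportional to the current number of packets in the system, scaled by an Adaptive Factor. The arrival and service probabilities are illustrative assumptions, and the paper's preemption mechanism and two-dimensional Markov chain analysis are omitted.

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate(adaptive_factor, p_arrival=0.6, p_service=0.5, steps=200000):
            """Discrete-time queue with state-dependent admission of SU packets."""
            n, admitted, blocked, n_sum = 0, 0, 0, 0
            for _ in range(steps):
                if rng.random() < p_arrival:
                    # Access probability shrinks as the system fills up.
                    p_access = min(1.0, adaptive_factor / (n + 1))
                    if rng.random() < p_access:
                        n += 1
                        admitted += 1
                    else:
                        blocked += 1
                if n > 0 and rng.random() < p_service:
                    n -= 1
                n_sum += n
            throughput = admitted / steps
            mean_n = n_sum / steps
            # Little's law: mean latency = mean number in system / throughput.
            return blocked / (admitted + blocked), throughput, mean_n / throughput

        for f in (0.5, 1.0, 2.0):
            b, t, w = simulate(f)
            print(f"factor={f:3.1f}  blocking={b:.3f}  throughput={t:.3f}  latency={w:.2f}")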

  14. Optimized variational analysis scheme of single Doppler radar wind data

    Science.gov (United States)

    Sasaki, Yoshi K.; Allen, Steve; Mizuno, Koki; Whitehead, Victor; Wilk, Kenneth E.

    1989-01-01

    A computer scheme for extracting singularities has been developed and applied to single Doppler radar wind data. The scheme is planned for use in real-time wind and singularity analysis and forecasting. The method, known as Doppler Operational Variational Extraction of Singularities, is outlined, focusing on the principle of local symmetry. Results are presented from the application of the scheme to a storm-generated gust front in Oklahoma on May 28, 1987.

  15. Field sampling scheme optimization using simulated annealing

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-10-01

    Full Text Available: silica (quartz, chalcedony, and opal) → alunite → kaolinite → illite → smectite → chlorite. Associated with this mineral alteration are high sulphidation gold deposits and low sulphidation base metal deposits. Gold mineralization is located... of vuggy (porous) quartz, opal and gray and black chalcedony veins. Vuggy quartz (porous quartz) is formed from extreme leaching of the host rock. It hosts high sulphidation gold mineralization and is evidence for a hypogene event. Alteration...

  16. OPTIMIZATION OF THE TEMPERATURE CONTROL SCHEME FOR ROLLER COMPACTED CONCRETE DAMS BASED ON FINITE ELEMENT AND SENSITIVITY ANALYSIS METHODS

    Directory of Open Access Journals (Sweden)

    Huawei Zhou

    2016-10-01

    Full Text Available Achieving an effective combination of various temperature control measures is critical for temperature control and crack prevention of concrete dams. This paper presents a procedure for optimizing the temperature control scheme of roller compacted concrete (RCC dams that couples the finite element method (FEM with a sensitivity analysis method. In this study, seven temperature control schemes are defined according to variations in three temperature control measures: concrete placement temperature, water-pipe cooling time, and thermal insulation layer thickness. FEM is employed to simulate the equivalent temperature field and temperature stress field obtained under each of the seven designed temperature control schemes for a typical overflow dam monolith based on the actual characteristics of a RCC dam located in southwestern China. A sensitivity analysis is subsequently conducted to investigate the degree of influence each of the three temperature control measures has on the temperature field and temperature tensile stress field of the dam. Results show that the placement temperature has a substantial influence on the maximum temperature and tensile stress of the dam, and that the placement temperature cannot exceed 15 °C. The water-pipe cooling time and thermal insulation layer thickness have little influence on the maximum temperature, but both demonstrate a substantial influence on the maximum tensile stress of the dam. The thermal insulation thickness is significant for reducing the probability of cracking as a result of high thermal stress, and the maximum tensile stress can be controlled under the specification limit with a thermal insulation layer thickness of 10 cm. Finally, an optimized temperature control scheme for crack prevention is obtained based on the analysis results.

  17. A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme

    Science.gov (United States)

    Ghoman, Satyajit S.

    The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable-fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3 DOE), in the context of aircraft wing optimization. M3 DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3 DOE allows it to be a part of other inclusive optimization frameworks. M3 DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3 DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3 DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of the candidate population is updated iteratively using the evolutionary algorithm technique of

  18. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Science.gov (United States)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, the practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met at the lowest possible BMP implementation cost. The Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher than, but approximately equal to, that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.

  19. Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP

    Science.gov (United States)

    Roshani, E.; Berg, A. A.; Lindsay, J.

    2013-12-01

    The Soil Moisture Active Passive (SMAP) satellite is scheduled for launch in October 2014. Global efforts are underway to establish soil moisture monitoring networks for both pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6 week period for soil moisture and several other parameters, simultaneous to remotely sensed images of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement providing the most efficient representation of the studied area. In this analysis, a method for optimizing sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (the non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, a multi-platform open-source GIS. The optimization framework is subject to three constraints: (A) sampling sites should be accessible to the crew on the ground; (B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value; and (C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum constraint. The first constraint is included to keep the approach practical. The second and third constraints guarantee that the collected samples from each soil texture category

  20. A radial sampling strategy for uniform k-space coverage with retrospective respiratory gating in 3D ultrashort-echo-time lung imaging.

    Science.gov (United States)

    Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon

    2016-05-01

    The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively, respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with the conventional retrospective respiratory gating while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress the image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across the k-space. Copyright © 2016 John Wiley & Sons, Ltd.

  1. New Imaging Operation Scheme at VLTI

    Science.gov (United States)

    Haubois, Xavier

    2018-04-01

    After PIONIER and GRAVITY, MATISSE will soon complete the set of 4 telescope beam combiners at VLTI. Together with recent developments in the image reconstruction algorithms, the VLTI aims to develop its operation scheme to allow optimized and adaptive UV plane coverage. The combination of spectro-imaging instruments, optimized operation framework and image reconstruction algorithms should lead to an increase of the reliability and quantity of the interferometric images. In this contribution, I will present the status of this new scheme as well as possible synergies with other instruments.

  2. Evaluation of sampling schemes for in-service inspection of steam generator tubing

    International Nuclear Information System (INIS)

    Hanlen, R.C.

    1990-03-01

    This report is a follow-on of work initially sponsored by the US Nuclear Regulatory Commission (Bowen et al. 1989). The work presented here is funded by the Electric Power Research Institute (EPRI) and is jointly sponsored by EPRI and the US Nuclear Regulatory Commission (NRC). The goal of this research was to evaluate fourteen sampling schemes or plans. The main criterion used for evaluating plan performance was the effectiveness for sampling, detecting and plugging defective tubes. The performance criterion was evaluated across several choices of distributions of degraded/defective tubes, probability of detection (POD) curves, and eddy-current sizing models. Conclusions from this study depend upon the tube defect distributions, sample size, and expansion rules considered. When degraded/defective tubes form "clusters" (i.e., maps 6A, 8A and 13A), the smaller sample sizes provide a capability of detecting and sizing defective tubes that approaches 100% inspection. When there is little or no clustering (i.e., maps 1A, 20 and 21), sample efficiency is approximately equal to the initial sample size taken. There is an indication (though not statistically significant) that the systematic sampling plans are better than the random sampling plans for equivalent initial sample size. There was no indication of an effect due to modifying the threshold value for the second-stage expansion; the lack of an indication is likely due to the specific tube flaw sizes considered for the six tube maps. 1 ref., 11 figs., 19 tabs

  3. Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.

    Science.gov (United States)

    Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua

    2016-09-05

    In a practical continuous-variable quantum key distribution system, the finite sampling bandwidth of the analog-to-digital converter employed at the receiver's side may lead to inaccurate results of pulse peak sampling, and hence to errors in parameter estimation. As a result, system performance decreases and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme which consists of two parts, i.e., a dynamic delay adjusting module and a statistical power feedback-control algorithm. The proposed scheme may dramatically improve the precision of pulse peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated by monitoring the change of the statistical power of the sampled data. This helps to resist some practical attacks, such as the well-known local oscillator calibration attack.

  4. Alternative difference analysis scheme combining R-space EXAFS fit with global optimization XANES fit for X-ray transient absorption spectroscopy.

    Science.gov (United States)

    Zhan, Fei; Tao, Ye; Zhao, Haifeng

    2017-07-01

    Time-resolved X-ray absorption spectroscopy (TR-XAS), based on the laser-pump/X-ray-probe method, is powerful in capturing the change of the geometrical and electronic structure of the absorbing atom upon excitation. TR-XAS data analysis is generally performed on the laser-on minus laser-off difference spectrum. Here, a new analysis scheme is presented for TR-XAS difference fitting in both the extended X-ray absorption fine-structure (EXAFS) and the X-ray absorption near-edge structure (XANES) regions. R-space EXAFS difference fitting can quickly provide the main quantitative structure change of the first shell. The XANES fitting part introduces a global non-derivative optimization algorithm and optimizes the local structure change in a flexible way, where both the core XAS calculation package and the search method in the fitting shell are interchangeable. The scheme was applied to the TR-XAS difference analysis of the Fe(phen)3 spin-crossover complex and yielded reliable distance changes and excitation populations.

  5. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on the error-similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps for a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.
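
    A toy version of the grid-based idea, under the error-similarity assumption that positional errors at nearby poses are alike: measure the error at uniform grid nodes, then predict (and compensate) the error at an arbitrary target pose from the surrounding measurements. The inverse-distance weighting, the synthetic error field, and the grid step below are illustrative assumptions, not the paper's method.

        import numpy as np

        rng = np.random.default_rng(4)

        def true_error(p):
            """Hypothetical smooth positional error field of the robot (mm)."""
            return 0.5 * np.sin(0.8 * p[..., 0]) + 0.3 * np.cos(0.5 * p[..., 1])

        # Measure errors at the nodes of a uniform sampling grid (workspace in m).
        step = 0.5
        gx, gy = np.meshgrid(np.arange(0, 3 + step, step), np.arange(0, 2 + step, step))
        nodes = np.stack([gx.ravel(), gy.ravel()], axis=1)
        measured = true_error(nodes)

        def predict_error(target, power=2.0):
            """Inverse-distance-weighted prediction from the grid measurements."""
            d = np.linalg.norm(nodes - target, axis=1)
            if d.min() < 1e-9:                    # target sits exactly on a node
                return measured[d.argmin()]
            w = 1.0 / d ** power
            return w @ measured / w.sum()

        target = np.array([1.23, 0.87])
        pred = predict_error(target)
        print(f"predicted error {pred:+.3f} mm, actual {true_error(target):+.3f} mm")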

  6. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS were developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
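
    The filtering idea can be sketched with a naive Bayes classifier, a simple special case of the Bayesian network classifiers the report refers to. In the Python sketch below (all problem details are illustrative stand-ins), the classifier is trained on previously evaluated binary designs labeled good/bad, and new candidates are sent to the expensive objective only when their posterior probability of being good clears a threshold.

        import numpy as np

        rng = np.random.default_rng(3)
        n_vars = 12

        def expensive_objective(x):
            # Stand-in for a costly simulation: weighted ones-count (illustrative).
            return -(x @ np.arange(1, n_vars + 1))

        def fit_naive_bayes(X, good):
            # Per-variable Bernoulli likelihoods with Laplace smoothing.
            p1 = {c: (X[good == c].sum(0) + 1) / (np.sum(good == c) + 2) for c in (0, 1)}
            prior = np.array([np.mean(good == 0), np.mean(good == 1)])
            return p1, prior

        def posterior_good(x, p1, prior):
            like = [np.prod(np.where(x, p1[c], 1 - p1[c])) for c in (0, 1)]
            joint = prior * like
            return joint[1] / joint.sum()

        # Seed the classifier with a small evaluated population (minimization).
        X = rng.integers(0, 2, size=(60, n_vars))
        f = np.array([expensive_objective(x) for x in X])
        good = (f <= np.median(f)).astype(int)
        p1, prior = fit_naive_bayes(X, good)

        # Screen a large candidate batch; evaluate only the promising ones.
        cands = rng.integers(0, 2, size=(1000, n_vars))
        keep = [x for x in cands if posterior_good(x, p1, prior) > 0.5]
        print(f"evaluating {len(keep)} of {len(cands)} candidates")
        best = min(keep, key=expensive_objective)
        print("best screened design:", best, "f =", expensive_objective(best))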

  7. Data Assimilation with Optimal Maps

    Science.gov (United States)

    El Moselhy, T.; Marzouk, Y.

    2012-12-01

    Tarek El Moselhy and Youssef Marzouk Massachusetts Institute of Technology We present a new approach to Bayesian inference that entirely avoids Markov chain simulation and sequential importance resampling, by constructing a map that pushes forward the prior measure to the posterior measure. Existence and uniqueness of a suitable measure-preserving map is established by formulating the problem in the context of optimal transport theory. The map is written as a multivariate polynomial expansion and computed efficiently through the solution of a stochastic optimization problem. While our previous work [1] focused on static Bayesian inference problems, we now extend the map-based approach to sequential data assimilation, i.e., nonlinear filtering and smoothing. One scheme involves pushing forward a fixed reference measure to each filtered state distribution, while an alternative scheme computes maps that push forward the filtering distribution from one stage to the other. We compare the performance of these schemes and extend the former to problems of smoothing, using a map implementation of the forward-backward smoothing formula. Advantages of a map-based representation of the filtering and smoothing distributions include analytical expressions for posterior moments and the ability to generate arbitrary numbers of independent uniformly-weighted posterior samples without additional evaluations of the dynamical model. Perhaps the main advantage, however, is that the map approach inherently avoids issues of sample impoverishment, since it explicitly represents the posterior as the pushforward of a reference measure, rather than with a particular set of samples. The computational complexity of our algorithm is comparable to state-of-the-art particle filters. Moreover, the accuracy of the approach is controlled via the convergence criterion of the underlying optimization problem. We demonstrate the efficiency and accuracy of the map approach via data assimilation in

  8. Intel Xeon Phi accelerated Weather Research and Forecasting (WRF) Goddard microphysics scheme

    Science.gov (United States)

    Mielikainen, J.; Huang, B.; Huang, A. H.-L.

    2014-12-01

    The Weather Research and Forecasting (WRF) model is a numerical weather prediction system designed to serve both atmospheric research and operational forecasting needs. WRF development is done in collaboration around the globe, and the model is used by academic atmospheric scientists, weather forecasters at operational centers, and others. The WRF contains several physics components, of which the most time-consuming is the microphysics. One microphysics scheme is the Goddard cloud microphysics scheme, a sophisticated scheme in the WRF model. The Goddard microphysics scheme is very suitable for massively parallel computation, as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Goddard scheme incorporates a large number of improvements. We have therefore optimized the Goddard scheme code. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The Intel MIC is capable of executing a full operating system and entire programs, rather than just kernels as a GPU does, and it supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the Goddard microphysics scheme on a Xeon Phi 7120P by a factor of 4.7x. In addition, the optimizations reduced the Goddard microphysics scheme's share of the total WRF processing time from 20.0% to 7.5%. Furthermore, the same optimizations

  9. Geminal embedding scheme for optimal atomic basis set construction in correlated calculations

    Energy Technology Data Exchange (ETDEWEB)

    Sorella, S., E-mail: sorella@sissa.it [International School for Advanced Studies (SISSA), Via Beirut 2-4, 34014 Trieste, Italy and INFM Democritos National Simulation Center, Trieste (Italy); Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr [Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France); Mazzola, G., E-mail: gmazzola@phys.ethz.ch [Theoretische Physik, ETH Zurich, 8093 Zurich (Switzerland); Casula, M., E-mail: michele.casula@impmc.upmc.fr [CNRS and Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France)

    2015-12-28

    We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculation of bulk materials, namely, containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.

  10. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function; more accurate metamodels are thereby constructed by the procedure above. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
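
    A bare-bones version of such a sequential loop is sketched below, with Gaussian RBF interpolation written out directly; the target function, the kernel shape parameter, and the way the two kinds of infill points are located (metamodel minimizer and largest sampling gap) are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(5)
        f = lambda x: np.sin(3 * x) + 0.5 * x ** 2     # expensive function stand-in
        eps = 2.0                                      # Gaussian RBF shape parameter

        def rbf_fit(X, y):
            K = np.exp(-(eps * (X[:, None] - X[None, :])) ** 2)
            return np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)

        def rbf_eval(Xq, X, w):
            return np.exp(-(eps * (Xq[:, None] - X[None, :])) ** 2) @ w

        X = rng.uniform(-2, 2, 5)                      # small initial design
        grid = np.linspace(-2, 2, 401)
        for it in range(10):
            w = rbf_fit(X, f(X))
            yhat = rbf_eval(grid, X, w)
            # New point 1: extremum (here: minimizer) of the current metamodel.
            x_ext = grid[np.argmin(yhat)]
            # New point 2: minimum of sample density, i.e. the largest gap in X.
            x_gap = grid[np.argmax(np.min(np.abs(grid[:, None] - X[None, :]), axis=1))]
            X = np.append(X, [x_ext, x_gap])

        w = rbf_fit(X, f(X))
        err = np.max(np.abs(rbf_eval(grid, X, w) - f(grid)))
        print(f"{len(X)} samples, max abs metamodel error: {err:.4f}")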

  11. Critical evaluation of sample pretreatment techniques.

    Science.gov (United States)

    Hyötyläinen, Tuulia

    2009-06-01

    Sample preparation before chromatographic separation is the most time-consuming and error-prone part of the analytical procedure. Therefore, selecting and optimizing an appropriate sample preparation scheme is a key factor in the final success of the analysis, and the judicious choice of an appropriate procedure greatly influences the reliability and accuracy of a given analysis. The main objective of this review is to critically evaluate the applicability, disadvantages, and advantages of various sample preparation techniques. Particular emphasis is placed on extraction techniques suitable for both liquid and solid samples.

  12. Optimism is universal: exploring the presence and benefits of optimism in a representative sample of the world.

    Science.gov (United States)

    Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D

    2013-10-01

    Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations. © 2012 Wiley Periodicals, Inc.

  13. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  14. Optimization of selective inversion recovery magnetization transfer imaging for macromolecular content mapping in the human brain.

    Science.gov (United States)

    Dortch, Richard D; Bagnato, Francesca; Gochberg, Daniel F; Gore, John C; Smith, Seth A

    2018-03-24

    To optimize a selective inversion recovery (SIR) sequence for macromolecular content mapping in the human brain at 3.0T. SIR is a quantitative magnetization transfer (qMT) method that uses a low-power, on-resonance inversion pulse. This results in a biexponential recovery of the free water signal that can be sampled at various inversion/predelay times (tI/tD) to estimate a subset of qMT parameters, including the macromolecular-to-free pool-size ratio (PSR), the R1 of free water (R1f), and the rate of MT exchange (kmf). The adoption of SIR has been limited by long acquisition times (≈4 min/slice). Here, we use Cramér-Rao lower bound theory and data reduction strategies to select optimal tI/tD combinations that reduce imaging times. The schemes were experimentally validated in phantoms, and tested in healthy volunteers (N = 4) and a multiple sclerosis patient. Two optimal sampling schemes were determined: (i) a 5-point scheme (kmf estimated) and (ii) a 4-point scheme (kmf assumed). In phantoms, the 5/4-point schemes yielded parameter estimates with similar SNRs to our previous 16-point scheme, but with 4.1/6.1-fold shorter scan times. Pair-wise comparisons between schemes did not detect significant differences for any scheme/parameter. In humans, parameter values were consistent with published values, and similar levels of precision were obtained from all schemes. Furthermore, fixing kmf reduced the sensitivity of PSR to partial-volume averaging, yielding more consistent estimates throughout the brain. qMT parameters can be robustly estimated in ≤1 min/slice (without independent measures of ΔB0, B1+, and T1) when optimized tI/tD combinations are selected. © 2018 International Society for Magnetic Resonance in Medicine.
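
    The Cramér-Rao machinery behind this kind of scheme selection is compact: for a signal model S(t; θ), build the Fisher information matrix J^T J / σ^2 from the model Jacobian at the candidate sampling times, and compare the implied lower bounds on parameter standard deviations across candidate schemes. The biexponential model, parameter values, and scalarization in the sketch below are illustrative stand-ins, not the paper's full SIR signal equation.

        import numpy as np
        from itertools import combinations

        # Illustrative biexponential recovery model S(t; a1, r1, a2, r2).
        theta0 = np.array([0.15, 8.0, 0.85, 1.2])   # amplitudes and rates (assumed)

        def model(t, th):
            a1, r1, a2, r2 = th
            return 1.0 - a1 * np.exp(-r1 * t) - a2 * np.exp(-r2 * t)

        def crlb(times, th, sigma=0.01, h=1e-6):
            # Numerical Jacobian of the model w.r.t. parameters at the true values.
            J = np.empty((len(times), len(th)))
            for j in range(len(th)):
                d = np.zeros(len(th)); d[j] = h
                J[:, j] = (model(times, th + d) - model(times, th - d)) / (2 * h)
            fisher = J.T @ J / sigma ** 2
            return np.sqrt(np.diag(np.linalg.inv(fisher)))  # per-parameter std bounds

        # Exhaustively score all 5-point subsets of a candidate time grid,
        # using the sum of the bounds as a simple scalarization.
        grid = np.geomspace(0.01, 5.0, 12)
        best = min(combinations(grid, 5), key=lambda ts: crlb(np.array(ts), theta0).sum())
        print("best 5-point scheme:", np.round(best, 3))
        print("CRLB stds:", np.round(crlb(np.array(best), theta0), 4))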

  15. Adaptive sampling of AEM transients

    Science.gov (United States)

    Di Massa, Domenico; Florio, Giovanni; Viezzoli, Andrea

    2016-02-01

    This paper focuses on the sampling of the electromagnetic transient as acquired by airborne time-domain electromagnetic (TDEM) systems. Typically, the electromagnetic transient is sampled using a fixed number of gates whose width grows logarithmically (log-gating). Log-gating has two main benefits: improving the signal-to-noise (S/N) ratio at late times, when the electromagnetic signal has amplitudes equal to or lower than the natural background noise, and ensuring good resolution at early times. However, as a result of its fixed time gates, conventional log-gating does not account for geological variations in the surveyed area, nor for the possibly varying characteristics of the measured signal. We show, using synthetic models, how a different, flexible sampling scheme can increase the resolution of resistivity models. We propose a new sampling method, which adapts the gating based on the slope variations in the electromagnetic (EM) transient. The use of such an alternative sampling scheme aims to obtain more accurate inverse models by extracting the geoelectrical information from the measured data in an optimal way.
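
    One simple proxy for such slope-adaptive gating is to place gate boundaries at equal increments of the cumulative change of the log-signal, so that gates crowd where the transient varies fastest. The synthetic two-exponential transient and the gate count in the sketch below are illustrative assumptions, not the paper's exact criterion.

        import numpy as np

        # Synthetic TDEM transient: fast and slow decays (illustrative).
        t = np.geomspace(1e-5, 1e-2, 2000)
        signal = 1.0 * np.exp(-t / 3e-4) + 0.05 * np.exp(-t / 3e-3)

        # Cumulative absolute change of the log-signal along the time axis.
        logs = np.log10(signal)
        change = np.concatenate([[0.0], np.cumsum(np.abs(np.diff(logs)))])

        # Adaptive gates: boundaries at equal increments of cumulative change,
        # so gates concentrate where the decay curve changes fastest.
        n_gates = 20
        targets = np.linspace(0.0, change[-1], n_gates + 1)
        edges = np.interp(targets, change, t)

        widths = np.diff(edges)
        print("first 3 gate widths :", widths[:3])
        print("last 3 gate widths  :", widths[-3:])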

  16. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    Science.gov (United States)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least-squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, since each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
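
    For reference, cubic B-spline interpolation at sub-voxel positions is available off the shelf: the sketch below shifts a synthetic volume by a fractional-voxel offset using scipy's prefiltered B-spline evaluation. This illustrates only the interpolation step, not the paper's optimized recursive filter or the digital volume correlation matching itself.

        import numpy as np
        from scipy.ndimage import map_coordinates

        # Synthetic speckle-like volume.
        rng = np.random.default_rng(9)
        vol = rng.normal(size=(32, 32, 32))

        # Query the volume on a grid shifted by a sub-voxel offset (0.3 voxels in z).
        zz, yy, xx = np.meshgrid(np.arange(32), np.arange(32), np.arange(32),
                                 indexing="ij")
        coords = np.stack([zz + 0.3, yy, xx]).astype(float)

        # order=3 evaluates a cubic B-spline; prefilter=True applies the recursive
        # prefilter that computes B-spline coefficients from the voxel values.
        shifted = map_coordinates(vol, coords, order=3, prefilter=True,
                                  mode="nearest")

        print("interpolated volume shape:", shifted.shape)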

  17. Relevance of sampling schemes in light of Ruelle's linear response theory

    International Nuclear Information System (INIS)

    Lucarini, Valerio; Wouters, Jeroen; Faranda, Davide; Kuna, Tobias

    2012-01-01

    We reconsider the theory of the linear response of non-equilibrium steady states to perturbations. We first show that using a general functional decomposition for space–time dependent forcings, we can define elementary susceptibilities that allow us to construct the linear response of the system to general perturbations. Starting from the definition of SRB measure, we then study the consequence of taking different sampling schemes for analysing the response of the system. We show that only a specific choice of the time horizon for evaluating the response of the system to a general time-dependent perturbation allows us to obtain the formula first presented by Ruelle. We also discuss the special case of periodic perturbations, showing that when they are taken into consideration the sampling can be fine-tuned to make the definition of the correct time horizon immaterial. Finally, we discuss the implications of our results in terms of strategies for analysing the outputs of numerical experiments by providing a critical review of a formula proposed by Reick

  18. Capacity-achieving CPM schemes

    OpenAIRE

    Perotti, Alberto; Tarable, Alberto; Benedetto, Sergio; Montorsi, Guido

    2008-01-01

    The pragmatic approach to coded continuous-phase modulation (CPM) is proposed as a capacity-achieving low-complexity alternative to the serially-concatenated CPM (SC-CPM) coding scheme. In this paper, we first perform a selection of the best spectrally-efficient CPM modulations to be embedded into SC-CPM schemes. Then, we consider the pragmatic capacity (a.k.a. BICM capacity) of CPM modulations and optimize it through a careful design of the mapping between input bits and CPM waveforms. The s...

  19. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    Science.gov (United States)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations on investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best strategy among a set of investigation strategies: they optimize the expected impact of data on prediction confidence, or related objectives, prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods, which confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically

  20. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed, based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model; a typical function is fitted to validate the improvement, by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The comparison results show that the proposed method is more efficient due to its use of a small number of sample points.
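
    A minimal sketch of the first stage is given below, with the Kriging predictor reduced to simple Gaussian-kernel interpolation and its single shape parameter tuned by a bare-bones PSO that minimizes the leave-one-out error; the adaptive importance sampling stage is omitted and all numeric settings are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(11)
        f = lambda x: np.sin(2 * x) + 0.3 * x          # function to be surrogated
        X = rng.uniform(-3, 3, 15)
        y = f(X)

        def loo_error(theta):
            """Leave-one-out error of a Gaussian-kernel interpolation predictor."""
            K = np.exp(-theta * (X[:, None] - X[None, :]) ** 2) + 1e-8 * np.eye(len(X))
            err = 0.0
            for i in range(len(X)):
                m = np.arange(len(X)) != i
                w = np.linalg.solve(K[np.ix_(m, m)], y[m])
                err += (K[i, m] @ w - y[i]) ** 2
            return err

        # Minimal PSO over the single kernel parameter theta.
        n_p, iters = 12, 40
        pos = rng.uniform(0.01, 10.0, n_p)
        vel = np.zeros(n_p)
        pbest, pbest_f = pos.copy(), np.array([loo_error(p) for p in pos])
        gbest = pbest[pbest_f.argmin()]
        for _ in range(iters):
            vel = (0.7 * vel + 1.5 * rng.random(n_p) * (pbest - pos)
                             + 1.5 * rng.random(n_p) * (gbest - pos))
            pos = np.clip(pos + vel, 1e-3, 20.0)
            fvals = np.array([loo_error(p) for p in pos])
            improved = fvals < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], fvals[improved]
            gbest = pbest[pbest_f.argmin()]

        print(f"PSO-selected theta: {gbest:.3f}, LOO error: {loo_error(gbest):.4f}")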

  1. An effective coded excitation scheme based on a predistorted FM signal and an optimized digital filter

    DEFF Research Database (Denmark)

    Misaridis, Thanasis; Jensen, Jørgen Arendt

    1999-01-01

    This paper presents a coded excitation imaging system based on a predistorted FM excitation and a digital compression filter designed for medical ultrasonic applications, in order to preserve both axial resolution and contrast. In radars, optimal Chebyshev windows efficiently weight a nearly...... as with pulse excitation (about 1.5 lambda), depending on the filter design criteria. The axial sidelobes are below -40 dB, which is the noise level of the measuring imaging system. The proposed excitation/compression scheme shows good overall performance and stability to the frequency shift due to attenuation...... be removed by weighting. We show that by using a predistorted chirp with amplitude or phase shaping for amplitude ripple reduction and a correlation filter that accounts for the transducer's natural frequency weighting, output sidelobe levels of -35 to -40 dB are directly obtained. When an optimized filter...

  2. A classification scheme for risk assessment methods.

    Energy Technology Data Exchange (ETDEWEB)

    Stamp, Jason Edwin; Campbell, Philip LaRoche

    2004-08-01

    This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects: level of detail, and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. Each cell in the table represents a different arrangement of strengths and weaknesses; those arrangements shift gradually as one moves through the table, each cell optimal for a particular situation. The intention of this report is to enable informed use of the methods, so that the method chosen is optimal for the situation at hand. This report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated, we use the word 'method' in this report to refer to a 'risk assessment method', though often we use the full phrase. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In

  3. Optimized preparation of urine samples for two-dimensional electrophoresis and initial application to patient samples

    DEFF Research Database (Denmark)

    Lafitte, Daniel; Dussol, Bertrand; Andersen, Søren

    2002-01-01

    OBJECTIVE: We optimized the preparation of urinary samples to obtain a comprehensive map of the urinary proteins of healthy subjects, and then compared this map with those obtained from patient samples to show that the pattern was specific to their kidney disease. DESIGN AND METHODS: The urinary...

  4. A correction scheme for thermal conductivity measurement using the comparative cut-bar technique based on 3D numerical simulation

    International Nuclear Information System (INIS)

    Xing, Changhu; Folsom, Charles; Jensen, Colby; Ban, Heng; Marshall, Douglas W

    2014-01-01

    As an important factor affecting the accuracy of thermal conductivity measurement, systematic (bias) error in the guarded comparative axial heat flow (cut-bar) method has mostly been neglected in previous research. This bias is primarily due to the thermal conductivity mismatch between the sample and the meter bars (reference), which is common for a sample of unknown thermal conductivity. A correction scheme, based on finite element simulation of the measurement system, is proposed to reduce the magnitude of the overall measurement uncertainty. The scheme was experimentally validated by applying corrections to four types of sample measurements in which the specimen thermal conductivity is much smaller than, slightly smaller than, equal to, and much larger than that of the meter bar. As an alternative to the optimum guarding technique proposed previously, the correction scheme can be used to minimize the uncertainty contribution from the measurement system under non-optimal guarding conditions. It is especially necessary for large thermal conductivity mismatches between the sample and meter bars. (paper)

  5. Optimal experiment design in a filtering context with application to sampled network data

    OpenAIRE

    Singhal, Harsh; Michailidis, George

    2010-01-01

    We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...
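    The E-optimal criterion can be illustrated in miniature: pick measurement weights that maximize the smallest eigenvalue of the resulting information matrix. The brute-force grid search below is a hedged stand-in for the paper's second-order cone program, with hypothetical regressors.

      import itertools
      import numpy as np

      A = np.array([[1.0, 0.0],                  # hypothetical measurement directions
                    [0.0, 1.0],
                    [1.0, 1.0]])

      best_w, best_val = None, -np.inf
      # enumerate weight allocations w on a coarse simplex grid
      for w in itertools.product(np.linspace(0, 1, 21), repeat=A.shape[0]):
          if abs(sum(w) - 1.0) > 1e-9:
              continue
          M = sum(wi * np.outer(a, a) for wi, a in zip(w, A))
          val = np.linalg.eigvalsh(M)[0]         # smallest eigenvalue of info matrix
          if val > best_val:
              best_w, best_val = w, val
      print("E-optimal allocation:", best_w, "min eigenvalue:", best_val)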

  6. Optimization of protein samples for NMR using thermal shift assays

    International Nuclear Information System (INIS)

    Kozak, Sandra; Lercher, Lukas; Karanth, Megha N.; Meijers, Rob; Carlomagno, Teresa; Boivin, Stephane

    2016-01-01

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economical and time-efficient method to find optimal conditions for NMR structural studies.

  7. Optimization of protein samples for NMR using thermal shift assays

    Energy Technology Data Exchange (ETDEWEB)

    Kozak, Sandra [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Lercher, Lukas; Karanth, Megha N. [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Meijers, Rob [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Carlomagno, Teresa, E-mail: teresa.carlomagno@oci.uni-hannover.de [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Boivin, Stephane, E-mail: sboivin77@hotmail.com, E-mail: s.boivin@embl-hamburg.de [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany)

    2016-04-15

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economical and time-efficient method to find optimal conditions for NMR structural studies.

  8. Estimates and sampling schemes for the instrumentation of accountability systems

    International Nuclear Information System (INIS)

    Jewell, W.S.; Kwiatkowski, J.W.

    1976-10-01

    The problem of estimating a physical quantity from a set of measurements is considered, where the measurements are made on samples with a hierarchical error structure, and where within-group error variances may vary from group to group at each level of the structure; minimum mean-squared-error estimators are developed, including the case where the physical quantity is a random variable with known prior mean and variance. Estimators for the error variances are also given, and optimization of the experimental design is considered.
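    A minimal sketch of the flavor of such estimators, with hypothetical numbers: a minimum-variance linear combination of group measurements with unequal error variances, plus the shrinkage that applies when a prior mean and variance are known.

      import numpy as np

      means = np.array([10.2, 9.8, 10.5])      # group sample means
      varis = np.array([0.04, 0.09, 0.25])     # within-group error variances

      w = (1.0 / varis) / np.sum(1.0 / varis)  # inverse-variance weights
      est = w @ means                          # minimum-variance pooled estimate
      est_var = 1.0 / np.sum(1.0 / varis)
      print("pooled estimate:", est, "variance:", est_var)

      # if the quantity is random with known prior mean/variance, shrink toward it
      prior_mean, prior_var = 10.0, 0.5
      post_var = 1.0 / (1.0 / prior_var + 1.0 / est_var)
      post_mean = post_var * (prior_mean / prior_var + est / est_var)
      print("posterior estimate:", post_mean, "variance:", post_var)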

  9. Wind power and market integration, comparative study of financing schemes

    International Nuclear Information System (INIS)

    2013-10-01

    The financing scheme of renewable energies is a key factor in their pace of development and cost. While some countries, like France, Germany and Spain, have chosen a feed-in tariff (FiT) scheme, there are in fact four possible financing schemes: FiT, ex-post premium, ex-ante premium, and quotas (green certificates). Market convergence is then expected to meet two main objectives: controlling the market distortions related to wind energy development, and optimizing wind energy production with respect to market signals. The authors analyse the underlying economic challenges and the ability of the financing schemes to meet these objectives within a short-term horizon (2015). They present the different financing schemes and analyse the impact of three key economic factors: market distortion, production optimization, and financing costs.

  10. Monte Carlo importance sampling optimization for system reliability applications

    International Nuclear Information System (INIS)

    Campioni, Luca; Vestrucci, Paolo

    2004-01-01

    This paper focuses on the reliability analysis of multicomponent systems by the importance sampling technique and, in particular, tackles the optimization aspect. A methodology based on minimizing the variance at the component level is proposed for the class of systems consisting of independent components. The claim is that, by means of such a methodology, the optimal biasing can be achieved without resorting to the typical trial-and-error approach.
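    For intuition, the sketch below applies component-level biasing to a hypothetical 2-out-of-3 system with made-up failure probabilities; it illustrates the likelihood-ratio bookkeeping, not the paper's variance-minimizing methodology.

      import numpy as np

      rng = np.random.default_rng(1)
      p = np.array([1e-3, 2e-3, 5e-4])   # true component failure probabilities
      q = np.array([0.1, 0.1, 0.1])      # biased sampling probabilities
      N = 100_000

      fails = rng.random((N, p.size)) < q          # sample component states under q
      # likelihood ratio prod_i p_i^x (1-p_i)^(1-x) / q_i^x (1-q_i)^(1-x)
      lr = np.prod(np.where(fails, p / q, (1 - p) / (1 - q)), axis=1)
      system_down = fails.sum(axis=1) >= 2         # 2-out-of-3 failure rule
      est = np.mean(system_down * lr)
      se = np.std(system_down * lr) / np.sqrt(N)
      print(f"IS failure-probability estimate: {est:.3e} +/- {se:.1e}")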

  11. Optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme for Intel Many Integrated Core (MIC) architecture

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results on optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel MIC hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of the Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the original code on a Xeon Phi 7120P by a factor of 1.3x.

  12. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses

  13. Representing major soil variability at regional scale by constrained Latin Hypercube Sampling of remote sensing data

    NARCIS (Netherlands)

    Mulder, V.L.; Bruin, de S.; Schaepman, M.E.

    2013-01-01

    This paper presents a sparse, remote sensing-based sampling approach making use of conditioned Latin Hypercube Sampling (cLHS) to assess variability in soil properties at regional scale. The method optimizes the sampling scheme for a defined spatial population based on selected covariates, which are
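    A hedged sketch of the underlying idea follows; the full cLHS procedure optimizes the selection by simulated annealing, whereas here a Latin hypercube is simply drawn in covariate space and the nearest available location is taken for each point, with synthetic stand-ins for the remote sensing covariates.

      import numpy as np
      from scipy.stats import qmc

      rng = np.random.default_rng(2)
      covariates = rng.random((5000, 3))       # 5000 candidate locations x 3 bands

      n_samples = 20
      lhs = qmc.LatinHypercube(d=3, seed=2).random(n_samples)

      chosen = []
      for target in lhs:
          d = np.linalg.norm(covariates - target, axis=1)
          d[chosen] = np.inf                   # do not reuse a location
          chosen.append(int(np.argmin(d)))    # nearest pixel in covariate space
      print("selected location indices:", chosen)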

  14. A 64-channel readout ASIC for nanowire biosensor array with electrical calibration scheme.

    Science.gov (United States)

    Chai, Kevin T C; Choe, Kunil; Bernal, Olivier D; Gopalakrishnan, Pradeep K; Zhang, Guo-Jun; Kang, Tae Goo; Je, Minkyu

    2010-01-01

    A 1.8-mW, 18.5-mm² 64-channel current readout ASIC was implemented in 0.18-µm CMOS, together with a new calibration scheme, for silicon nanowire biosensor arrays. The ASIC consists of 64 channels of dedicated readout and conditioning circuits which incorporate a correlated double sampling scheme to reduce the effect of 1/f noise and offset from the analog front-end. The ASIC provides a 10-bit digital output with a sampling rate of 300 S/s whilst achieving a minimum resolution of 7 pA rms. A new electrical calibration method was introduced to mitigate the issue of large variations in the nano-scale sensor device parameters and to optimize the sensor sensitivity. The experimental results show that the proposed calibration technique improved the sensitivity by 2 to 10 times and reduced the variation between datasets by 9 times.

  15. Design of Infusion Schemes for Neuroreceptor Imaging

    DEFF Research Database (Denmark)

    Feng, Ling; Svarer, Claus; Madsen, Karine

    2016-01-01

    for bolus infusion (BI) or programmed infusion (PI) experiments. Steady-state quantitative measurements can be made with one short scan and venous blood samples. The GABAA receptor ligand [(11)C]Flumazenil (FMZ) was chosen for this purpose, as it lacks a suitable reference region. Methods. Five bolus [(11)C...... state was attained within 40 min, which was 8 min earlier than the optimal BI (B/I ratio = 55 min). Conclusions. The system can design both BI and PI schemes to attain steady state rapidly. For example, subjects can be [(11)C]FMZ-PET scanned after 40 min of tracer infusion for 40 min with venous...

  16. Design and implementation of an optimal laser pulse front tilting scheme for ultrafast electron diffraction in reflection geometry with high temporal resolution

    Directory of Open Access Journals (Sweden)

    Francesco Pennacchio

    2017-07-01

    Full Text Available Ultrafast electron diffraction is a powerful technique to investigate out-of-equilibrium atomic dynamics in solids with high temporal resolution. When diffraction is performed in reflection geometry, the main limitation is the mismatch in group velocity between the overlapping pump light and the electron probe pulses, which affects the overall temporal resolution of the experiment. A solution already available in the literature involves pulse front tilt of the pump beam at the sample, providing sub-picosecond time resolution. However, in the reported optical scheme, the tilted pulse is characterized by a temporal chirp of about 1 ps at 1 mm away from the centre of the beam, which limits the investigation of surface dynamics in large crystals. In this paper, we propose an optimal tilting scheme designed for a radio-frequency-compressed ultrafast electron diffraction setup working in reflection geometry with 30 keV electron pulses containing up to 10^5 electrons/pulse. To characterize our scheme, we performed optical cross-correlation measurements, obtaining an average temporal width of the tilted pulse lower than 250 fs. The calibration of the electron-laser temporal overlap was obtained by monitoring the spatial profile of the electron beam when interacting with the plasma optically induced at the apex of a copper needle (plasma lensing effect). Finally, we report the first time-resolved results obtained on graphite, where the electron-phonon coupling dynamics is observed, showing an overall temporal resolution in the sub-500 fs regime. The successful implementation of this configuration opens the way to directly probing the structural dynamics of low-dimensional systems in the sub-picosecond regime with pulsed electrons.

  17. Sensor scheme design for active structural acoustic control

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    Efficient sensing schemes for the active reduction of sound radiation from plates are presented based on error signals derived from spatially weighted plate velocity or near-field pressure. The schemes result in near-optimal reductions as compared to weighting procedures derived from eigenvector or

  18. Multiuser switched diversity scheduling schemes

    KAUST Repository

    Shaqfeh, Mohammad; Alnuweiri, Hussein M.; Alouini, Mohamed-Slim

    2012-01-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We propose also a novel proportional fair multiuser switched-based scheduling scheme and we demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz from the ultimate network capacity of full feedback systems in Rayleigh fading conditions. © 2012 IEEE.

  19. Multiuser switched diversity scheduling schemes

    KAUST Repository

    Shaqfeh, Mohammad

    2012-09-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We propose also a novel proportional fair multiuser switched-based scheduling scheme and we demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz from the ultimate network capacity of full feedback systems in Rayleigh fading conditions. © 2012 IEEE.

  20. An Optimal Integrated Control Scheme for Permanent Magnet Synchronous Generator-Based Wind Turbines under Asymmetrical Grid Fault Conditions

    Directory of Open Access Journals (Sweden)

    Dan Wang

    2016-04-01

    Full Text Available In recent years, the increasing penetration of wind energy into power systems has brought new issues and challenges. One of the main concerns is dynamic response capability during external disturbances, especially fault-tolerance capability during asymmetrical faults. In order to improve the fault tolerance and dynamic response capability under asymmetrical grid fault conditions, an optimal integrated control scheme for the grid-side voltage-source converter (VSC) of direct-driven permanent magnet synchronous generator (PMSG)-based wind turbine systems is proposed in this paper. The optimal control strategy includes a main controller and an additional controller. In the main controller, a double-loop controller based on differential flatness theory is designed for the grid-side VSC. Two parts are involved in the design of the flatness-based controller: the generation of reference trajectories for the flatness output, and the implementation of the controller. For the additional control, an auxiliary second-harmonic compensation control loop, based on an improved calculation method for the grid-side instantaneous transmitted power, is designed on the quasi-proportional-resonant (quasi-PR) control principle; it can simultaneously suppress the second-harmonic components in the active and reactive power injected into the grid without separate calculation of the current control references. Moreover, to reduce the DC-link overvoltage during grid faults, the mathematical model of the DC-link voltage is analyzed and a feedforward modifying factor is added to the traditional DC voltage control loop of the grid-side VSC. The effectiveness of the optimal control scheme is verified in PSCAD/EMTDC simulations.

  1. Optimal sampling plan for clean development mechanism energy efficiency lighting projects

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2013-01-01

    Highlights: • A metering cost minimisation model is built to assist the sampling plan for CDM projects. • The model minimises the total metering cost by the determination of optimal sample size. • The required 90/10 criterion sampling accuracy is maintained. • The proposed metering cost minimisation model is applicable to other CDM projects as well. - Abstract: Clean development mechanism (CDM) project developers are always interested in achieving required measurement accuracies with the least metering cost. In this paper, a metering cost minimisation model is proposed for the sampling plan of a specific CDM energy efficiency lighting project. The problem arises from the particular CDM sampling requirement of 90% confidence and 10% precision for the small-scale CDM energy efficiency projects, which is known as the 90/10 criterion. The 90/10 criterion can be met through solving the metering cost minimisation problem. All the lights in the project are classified into different groups according to uncertainties of the lighting energy consumption, which are characterised by their statistical coefficient of variance (CV). Samples from each group are randomly selected to install power meters. These meters include less expensive ones with less functionality and more expensive ones with greater functionality. The metering cost minimisation model will minimise the total metering cost through the determination of the optimal sample size at each group. The 90/10 criterion is formulated as constraints to the metering cost objective. The optimal solution to the minimisation problem will therefore minimise the metering cost whilst meeting the 90/10 criterion, and this is verified by a case study. Relationships between the optimal metering cost and the population sizes of the groups, CV values and the meter equipment cost are further explored in three simulations. The metering cost minimisation model proposed for lighting systems is applicable to other CDM projects as
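    The arithmetic behind the 90/10 criterion can be sketched as follows: a textbook sample-size calculation with hypothetical group populations, CV values and meter prices, not the paper's full cost-minimisation model.

      import numpy as np

      z, p = 1.645, 0.10                       # 90% confidence, 10% precision
      groups = {                               # name: (population, CV, meter cost)
          "residential": (20000, 0.5, 50.0),
          "commercial":  (5000,  0.3, 120.0),
      }
      total_cost = 0.0
      for name, (N, cv, cost) in groups.items():
          n0 = (z * cv / p) ** 2               # infinite-population sample size
          n = int(np.ceil(n0 * N / (N + n0)))  # finite-population correction
          total_cost += n * cost
          print(f"{name}: n = {n}")
      print("total metering cost:", total_cost)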

  2. A simple language to script and simulate breeding schemes: the breeding scheme language

    Science.gov (United States)

    It is difficult for plant breeders to determine an optimal breeding strategy given that the problem involves many factors, such as target trait genetic architecture and breeding resource availability. There are many possible breeding schemes for each breeding program. Although simulation study may b...

  3. An intelligent hybrid scheme for optimizing parking space: A Tabu metaphor and rough set based approach

    Directory of Open Access Journals (Sweden)

    Soumya Banerjee

    2011-03-01

    Full Text Available Congested roads, high traffic, and parking problems are major concerns for any modern city planning. Congestion of on-street spaces in official neighborhoods may give rise to inappropriate parking in office and shopping mall complexes during peak hours of official transactions. This paper proposes an intelligent and optimized scheme to solve the parking space problem for a small city (e.g., Mauritius) using a reactive search technique (Tabu Search) assisted by rough sets. Rough sets are used to extract the uncertain rules that exist in databases of parking situations. The inclusion of rough set theory provides the accuracy and roughness measures used to characterize the uncertainty of a parking lot. Approximation accuracy is employed to describe the accuracy of a rough classification [1] under different dynamic parking scenarios. As such, the proposed hybrid metaphor, comprising Tabu Search and rough sets, could provide substantial research directions for other similarly hard optimization problems.

  4. Judgement of Design Scheme Based on Flexible Constraint in ICAD

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The concept of the flexible constraint is proposed in this paper. The solution of a flexible constraint lies within a specified range and may differ between instances of the same design scheme. The paper focuses on how to evaluate and optimize a design scheme with flexible constraints, based on a satisfaction degree function defined on those constraints. The concept of the flexible constraint is used to resolve constraint conflicts and to optimize designs in complicated constraint-based assembly design with the PFM parametric assembly design system. A gear-box design example is used to verify the optimization method.

  5. Formal Model of Certificate Omission Schemes in VANET

    NARCIS (Netherlands)

    Feiri, Michael; Petit, Jonathan; Kargl, Frank

    2014-01-01

    The benefits of certificate omission schemes in VANET have so far been demonstrated by simulation. However, the research community lacks a formal model that would allow implementers and policy makers to select the optimal parameters for such schemes. In this paper, we lay the foundations of the

  6. On Richardson extrapolation for low-dissipation low-dispersion diagonally implicit Runge-Kutta schemes

    Science.gov (United States)

    Havasi, Ágnes; Kazemi, Ehsan

    2018-04-01

    In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but which also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes with extrapolation methods to obtain a high order of accuracy preserves their qualitative properties with respect to dissipation, dispersion and stability. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal with respect to minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
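    The order-raising mechanism itself is easy to demonstrate. The sketch below is a generic textbook example, not the paper's optimized schemes: it combines third-order Runge-Kutta solutions computed with steps h and h/2 as (2^p * y_{h/2} - y_h) / (2^p - 1) with p = 3, yielding fourth-order accuracy on the test equation y' = -y.

      import numpy as np

      def rk3_step(f, t, y, h):
          # Kutta's third-order method
          k1 = f(t, y)
          k2 = f(t + h / 2, y + h / 2 * k1)
          k3 = f(t + h, y - h * k1 + 2 * h * k2)
          return y + h / 6 * (k1 + 4 * k2 + k3)

      def integrate(f, y0, t_end, h):
          n = int(round(t_end / h))
          t, y = 0.0, y0
          for _ in range(n):
              y = rk3_step(f, t, y, h)
              t += h
          return y

      f = lambda t, y: -y                      # exact solution: exp(-t)
      for h in (0.2, 0.1, 0.05):
          yh = integrate(f, 1.0, 1.0, h)
          yh2 = integrate(f, 1.0, 1.0, h / 2)
          yr = (8 * yh2 - yh) / 7              # Richardson combination, p = 3
          print(f"h={h}: err RK3={abs(yh2 - np.exp(-1)):.2e}, "
                f"err extrapolated={abs(yr - np.exp(-1)):.2e}")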

  7. Optimal spatial sampling scheme to characterize mine tailings

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-08-01

    Full Text Available The location and the covariates, used as external drift, were employed to estimate the heavy metal concentration:

        E[Z(x)] = b0 + b1·xu + b2·xv + b3·GOE(x) + b4·JAR(x) + b5·FER(x) + b6·HEM(x) + b7·KAO(x) + b8·COP(x) ,   (8)

    an instance of the general drift model

        E[Z(x)] = b0 + ∑_{i=1} bi·yi(x) = ∑_{i=0} bi·yi(x) ,   (4)

    where y0(x) = 1. The method of merging both sources of information uses {yi(x)} as an external drift function for the estimation of Z(x). The drift of Z(x) is defined externally through...

  8. A high-precision sampling scheme to assess persistence and transport characteristics of micropollutants in rivers.

    Science.gov (United States)

    Schwientek, Marc; Guillet, Gaëlle; Rügner, Hermann; Kuch, Bertram; Grathwohl, Peter

    2016-01-01

    Increasing numbers of organic micropollutants are emitted into rivers via municipal wastewater. Due to their persistence, many pollutants pass wastewater treatment plants without substantial removal. The transport and fate of pollutants in receiving waters, and their export to downstream ecosystems, are not well understood; in particular, a better knowledge of the processes governing their environmental behavior is needed. Although many data are available on the ubiquitous presence of micropollutants in rivers, accurate data on transport and removal rates are lacking. In this paper, a mass balance approach is presented which is based on the Lagrangian sampling scheme but extended to account for precise transport velocities and mixing along river stretches. The calculated mass balances allow accurate quantification of pollutant reactivity along river segments. This is demonstrated for representative members of important groups of micropollutants, e.g. pharmaceuticals, musk fragrances, flame retardants, and pesticides. A model-aided analysis of the measured data series gives insight into the temporal dynamics of removal processes. The occurrence of different removal mechanisms such as photooxidation, microbial degradation, and volatilization is discussed. The results demonstrate that removal processes are highly variable in time and space, and this has to be considered in future studies. The high-precision sampling scheme presented could be a powerful tool for quantifying removal processes under different boundary conditions and in river segments with contrasting properties.

  9. Maintenance Optimization of High Voltage Substation Model

    Directory of Open Access Journals (Sweden)

    Radim Bris

    2008-01-01

    Full Text Available A real system from practice is selected for optimization in this paper. We describe the scheme of a high-voltage (HV) substation in its different working states. A model scheme of the 22 kV HV substation is presented and serves as the input model for the maintenance optimization. The input reliability and cost parameters of all components are given: the preventive and corrective maintenance costs, the current maintenance period (the quantity being optimized), the failure rate, and the mean time to repair (MTTR).

  10. Green frame aggregation scheme for Wi-Fi networks

    KAUST Repository

    Alaslani, Maha S.; Showail, Ahmad; Shihada, Basem

    2015-01-01

    This work proposes a Green Frame Aggregation (GFA) scheduling scheme that optimizes the aggregate size based on channel quality in order to minimize the consumed energy. GFA selects an optimal sub-frame size that satisfies the loss constraint for real-time applications as well as the energy

  11. OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Maja Vrkljan

    2004-12-01

    Full Text Available The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. An analytical FAAS method for determining the cobalt, chromium, copper, nickel, lead and zinc content of a gabbro sample and the geochemical standard AGV-1 was applied for verification. Dissolution in mixtures of various inorganic acids was tested, as well as the Na2CO3 fusion technique. The results obtained by the different methods were compared, and dissolution in the HNO3 + HF mixture is recommended as optimal.

  12. Reaction schemes of immunoanalysis

    International Nuclear Information System (INIS)

    Delaage, M.; Barbet, J.

    1991-01-01

    The authors apply a general theory of multiple equilibria to the reaction schemes of immunoanalysis: competition and sandwich. This approach allows the manufacturer to optimize the system and to provide the user with interpolation functions for the standard curve, as well as its first derivative, thus giving access to the variance [fr]

  13. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  14. A Traffic Restriction Scheme for Enhancing Carpooling

    Directory of Open Access Journals (Sweden)

    Dong Ding

    2017-01-01

    Full Text Available For the purpose of alleviating traffic congestion, this paper proposes a scheme to encourage travelers to carpool by means of traffic restriction. Using a variational inequality, we describe travelers' mode choice (solo driving or carpooling) and route choice under the user equilibrium principle in the context of fixed demand, and examine the performance of a simple network with various restriction links, restriction proportions, and carpooling costs. The optimal traffic restriction scheme, aiming at minimal total travel cost, is then designed through a bilevel program and applied to a Sioux Falls network example with a genetic algorithm. For various requirements, optimal restriction regions and proportions of restricted automobiles are obtained. The results show that a traffic restriction scheme can enhance carpooling and alleviate congestion. However, higher carpooling demand is not always helpful to the network as a whole: the topology of the network, the OD demand, and the carpooling cost all influence the performance of the traffic system.

  15. Development of an Optimal Power Control Scheme for Wave-Offshore Hybrid Generation Systems

    Directory of Open Access Journals (Sweden)

    Seungmin Jung

    2015-08-01

    Full Text Available Integration of various distribution systems to improve renewable energy utilization has been receiving attention in the power system industry. The wave-offshore hybrid generation system (HGS), which has a capacity of over 10 MW, was recently developed by adopting several voltage source converters (VSCs), but a control method for the adopted power conversion systems had not yet been configured despite the unique characteristics of the structure. This paper deals with a reactive power assignment method for the developed hybrid system to improve the power transfer efficiency of the entire system. Through the development and application of an optimization algorithm utilizing the real-time active power profiles of each generator, the feasibility of reducing power transmission losses was confirmed. To assess the practical effect of the proposed control scheme, real system information from the demonstration process was applied in case studies, and the improvement in the loss rate was evaluated.

  16. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  17. A New Adaptive Hungarian Mating Scheme in Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Chanju Jung

    2016-01-01

    Full Text Available In genetic algorithms, the selection or mating scheme is one of the important operations. In this paper, we suggest an adaptive mating scheme using the previously suggested Hungarian mating schemes. The Hungarian mating schemes consist of maximizing the sum of mating distances, minimizing the sum, and random matching. We propose an algorithm to select one of these Hungarian mating schemes: every mated pair of solutions votes for the next generation's mating scheme, considering both the distance between parents and the distance between parent and offspring. Two well-known combinatorial optimization problems, the traveling salesperson problem and the graph bisection problem, are used as the test bed for our method. Our adaptive strategy showed better results than not only the pure and previously proposed hybrid schemes but also the existing distance-based mating schemes.
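    A minimal sketch of the "maximize the sum of mating distances" pairing via the Hungarian algorithm (scipy's linear_sum_assignment); the genomes and the Hamming distance metric are hypothetical stand-ins.

      import numpy as np
      from scipy.optimize import linear_sum_assignment
      from scipy.spatial.distance import cdist

      rng = np.random.default_rng(3)
      pool_a = rng.integers(0, 2, size=(6, 20))   # 6 parents, 20-bit genomes
      pool_b = rng.integers(0, 2, size=(6, 20))

      D = cdist(pool_a, pool_b, metric="hamming") # pairwise mating distances
      rows, cols = linear_sum_assignment(D, maximize=True)
      print("pairs:", list(zip(rows, cols)),
            "total distance:", D[rows, cols].sum())
      # maximize=False gives the distance-minimizing scheme; random matching
      # is the third scheme among which the adaptive strategy lets pairs vote.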

  18. A New Wavelength Optimization and Energy-Saving Scheme Based on Network Coding in Software-Defined WDM-PON Networks

    Science.gov (United States)

    Ren, Danping; Wu, Shanshan; Zhang, Lijing

    2016-09-01

    In view of the global control and flexible monitoring characteristics of software-defined networks (SDN), we propose a new optical access network architecture dedicated to Wavelength Division Multiplexing-Passive Optical Network (WDM-PON) systems based on SDN. Network coding (NC) technology is also applied in this architecture to enhance the utilization of wavelength resources and reduce the cost of light sources. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce the system delay and energy consumption.

  19. A Spectrum Handoff Scheme for Optimal Network Selection in NEMO Based Cognitive Radio Vehicular Networks

    Directory of Open Access Journals (Sweden)

    Krishan Kumar

    2017-01-01

    Full Text Available When a mobile network changes its points of attachment in Cognitive Radio (CR) vehicular networks, the Mobile Router (MR) requires spectrum handoff. Network Mobility (NEMO) in CR vehicular networks is concerned with the management of this movement. In future NEMO-based CR vehicular network deployments, multiple radio access networks may coexist in overlapping areas, with different characteristics in terms of multiple attributes. A CR vehicular node may have the capability to make calls for two or more types of non-safety services, such as voice, video, and best effort, simultaneously. Hence, it becomes difficult for the MR to select the optimal network for spectrum handoff. This can be done by performing spectrum handoff using Multiple Attribute Decision Making (MADM) methods, which is the objective of this paper. MADM methods such as grey relational analysis and cost-based methods are used. The application of MADM methods provides a wider and optimal choice among the available networks with quality of service. Numerical results reveal that the proposed scheme is effective for spectrum handoff decisions and optimal network selection, with reduced complexity, in NEMO-based CR vehicular networks.
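    A hedged sketch of grey relational analysis for ranking candidate networks; the attribute values and the benefit/cost designations below are hypothetical.

      import numpy as np

      # rows: candidate networks; columns: bandwidth, delay, cost-per-MB
      X = np.array([[10.0, 50.0, 0.5],
                    [ 5.0, 20.0, 0.2],
                    [ 8.0, 80.0, 0.1]])
      benefit = np.array([True, False, False])   # larger-is-better flags

      # normalize to [0, 1] so the ideal (reference) series is all ones
      span = X.max(axis=0) - X.min(axis=0)
      N = np.where(benefit, (X - X.min(axis=0)) / span,
                            (X.max(axis=0) - X) / span)

      delta = np.abs(1.0 - N)                    # distance to reference series
      zeta = 0.5                                 # distinguishing coefficient
      coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
      grade = coef.mean(axis=1)                  # grey relational grade
      print("ranking (best first):", np.argsort(-grade))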

  20. Towards Efficient Energy Management of Smart Buildings Exploiting Heuristic Optimization with Real Time and Critical Peak Pricing Schemes

    Directory of Open Access Journals (Sweden)

    Sheraz Aslam

    2017-12-01

    Full Text Available The smart grid plays a vital role in decreasing electricity cost through Demand Side Management (DSM). Smart homes, a part of the smart grid, contribute greatly to minimizing electricity consumption cost by scheduling home appliances. However, user waiting time increases due to the scheduling of home appliances. This scheduling problem is the motivation to find an optimal solution that could minimize the electricity cost and Peak to Average Ratio (PAR) with minimum user waiting time. There are many studies on Home Energy Management (HEM) for cost minimization and peak load reduction. However, none of these systems gave sufficient attention to tackling multiple parameters (i.e., electricity cost and peak load reduction) at the same time while keeping user waiting time minimal for residential consumers with multiple homes. Hence, in this work, we propose an efficient HEM scheme using the well-known meta-heuristic Genetic Algorithm (GA), the recently developed Cuckoo Search Optimization Algorithm (CSOA) and the Crow Search Algorithm (CSA), which can be used for electricity cost and peak load alleviation with minimum user waiting time. The integration of a smart Electricity Storage System (ESS) is also taken into account for more efficient operation of the Home Energy Management System (HEMS). Furthermore, we use the real-time electricity consumption pattern for every residence, i.e., every home has its own living pattern. The proposed scheme is implemented in a smart building comprising thirty smart homes (apartments). Real-Time Pricing (RTP) and Critical Peak Pricing (CPP) signals are examined in terms of electricity cost estimation for both a single smart home and a smart building. In addition, feasible regions are presented for single and multiple smart homes, showing the relationship among electricity cost, electricity consumption and user waiting time. Experimental results demonstrate the effectiveness of our proposed scheme for single and multiple smart

  1. Resolution optimization with irregularly sampled Fourier data

    International Nuclear Information System (INIS)

    Ferrara, Matthew; Parker, Jason T; Cheney, Margaret

    2013-01-01

    Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications. (paper)

  2. Ad-Hoc vs. Standardized and Optimized Arthropod Diversity Sampling

    Directory of Open Access Journals (Sweden)

    Pedro Cardoso

    2009-09-01

    Full Text Available The use of standardized and optimized protocols has recently been advocated for different arthropod taxa, instead of ad-hoc sampling or sampling with protocols defined on a case-by-case basis. We present a comparison of both sampling approaches applied to spiders in a natural area of Portugal. Tests were made of their efficiency, over-collection of common species, singleton proportions, species abundance distributions, average specimen size, average taxonomic distinctness, and the behavior of richness estimators. The standardized protocol revealed three main advantages: (1) higher efficiency; (2) more reliable estimates of true richness; and (3) meaningful comparisons between undersampled areas.

  3. Joint Optimized CPU and Networking Control Scheme for Improved Energy Efficiency in Video Streaming on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sung-Woong Jo

    2017-01-01

    Full Text Available Video streaming is one of the most popular applications for mobile users. However, mobile video streaming consumes a lot of energy, resulting in reduced battery life. This is a critical problem that degrades the user's quality of experience (QoE). Therefore, in this paper, a joint optimization scheme that controls both the central processing unit (CPU) and the wireless networking of the video streaming process is proposed for improved energy efficiency on mobile devices. For this purpose, the energy consumption of the network interface and CPU is analyzed, and based on the energy consumption profile a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded in each packet. Simulation results show that the proposed algorithm can effectively improve energy efficiency compared with existing algorithms.

  4. Superposition Enhanced Nested Sampling

    Directory of Open Access Journals (Sweden)

    Stefano Martiniani

    2014-08-01

    Full Text Available The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.
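    A minimal nested sampling sketch (the plain version, without the superposition enhancement described above): estimating the evidence Z = ∫ L(x) π(x) dx for a one-dimensional Gaussian likelihood and a uniform prior on [-5, 5], where the exact answer is close to 1/10.

      import numpy as np

      rng = np.random.default_rng(4)
      logL = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

      K, n_iter = 100, 600                     # live points, iterations
      live = rng.uniform(-5, 5, K)
      live_logL = logL(live)
      logZ, logX = -np.inf, 0.0                # log evidence, log prior volume
      for _ in range(n_iter):
          worst = np.argmin(live_logL)
          logX_new = logX - 1.0 / K            # expected prior-volume shrinkage
          logw = np.log(np.exp(logX) - np.exp(logX_new)) + live_logL[worst]
          logZ = np.logaddexp(logZ, logw)
          logX = logX_new
          # replace the worst point with a prior draw above the likelihood bound
          while True:
              xnew = rng.uniform(-5, 5)
              if logL(xnew) > live_logL[worst]:
                  live[worst], live_logL[worst] = xnew, logL(xnew)
                  break
      # termination correction: remaining live-point contribution
      logZ = np.logaddexp(logZ, logX + np.log(np.mean(np.exp(live_logL))))
      print("log Z estimate:", logZ, "exact:", -np.log(10.0))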

  5. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected the fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the preset sampling rate, as sketched below. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing different sampling rates, we found that including 10% of the interior voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2-3 with a negligible loss of accuracy of, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading dose quality
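    A hedged sketch of the clustered voxel-sampling idea, with synthetic influence-matrix signatures and hypothetical boundary flags standing in for real treatment-plan data.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(6)
      signatures = rng.random((3000, 8))       # 3000 voxels x 8 beam features
      is_boundary = rng.random(3000) < 0.15    # hypothetical boundary flags

      k, rate = 50, 0.10                       # clusters, interior sampling rate
      labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
          signatures[~is_boundary])
      interior_idx = np.flatnonzero(~is_boundary)
      keep = list(np.flatnonzero(is_boundary)) # all boundary voxels kept
      for c in range(k):
          members = interior_idx[labels == c]
          if members.size == 0:
              continue
          n_keep = max(1, int(rate * members.size))
          keep.extend(rng.choice(members, size=n_keep, replace=False))
      print("voxels kept:", len(keep), "of", 3000)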

  6. Topology optimization based on the harmony search method

    International Nuclear Information System (INIS)

    Lee, Seung-Min; Han, Seog-Young

    2017-01-01

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
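    A toy sketch of the harmony search mechanics referred to above (the HMCR, PAR and BW parameters), minimizing a simple sphere function rather than a compliance objective; all settings are illustrative.

      import numpy as np

      rng = np.random.default_rng(5)
      f = lambda x: np.sum(x**2)                  # objective to minimize
      dim, lo, hi = 5, -5.0, 5.0
      HMS, HMCR, PAR, BW = 20, 0.9, 0.3, 0.2      # memory size, rates, bandwidth

      HM = rng.uniform(lo, hi, (HMS, dim))        # harmony memory
      cost = np.array([f(h) for h in HM])
      for _ in range(5000):
          new = np.empty(dim)
          for j in range(dim):
              if rng.random() < HMCR:             # draw from memory...
                  new[j] = HM[rng.integers(HMS), j]
                  if rng.random() < PAR:          # ...then maybe pitch-adjust
                      new[j] += BW * rng.uniform(-1, 1)
              else:                               # or improvise randomly
                  new[j] = rng.uniform(lo, hi)
          new = np.clip(new, lo, hi)
          worst = np.argmax(cost)
          if f(new) < cost[worst]:                # replace worst harmony
              HM[worst], cost[worst] = new, f(new)
      print("best cost:", cost.min())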

  7. Topology optimization based on the harmony search method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)

    2017-06-15

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.

  8. Sampling soils for 137Cs using various field-sampling volumes

    International Nuclear Information System (INIS)

    Nyhan, J.W.; Schofield, T.G.; White, G.C.; Trujillo, G.

    1981-10-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500-, and 12 500-cm³ field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of the spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of the soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, while CV values for Trinity soils ranged from 0.38 to 0.57. Spatial variance components of the 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates, and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2 to 4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.

  9. A lightweight target-tracking scheme using wireless sensor network

    International Nuclear Information System (INIS)

    Kuang, Xing-hong; Shao, Hui-he; Feng, Rui

    2008-01-01

    This paper describes a lightweight target-tracking scheme using a wireless sensor network, in which randomly distributed sensor nodes take responsibility for tracking the moving target based on acoustic sensing signals. At every localization interval, a backoff timer algorithm is performed to elect the leader node and determine the transmission order of the localization nodes. An adaptive active-region-size algorithm based on node density is proposed to select the optimal nodes taking part in localization. An improved particle filter algorithm performed by the leader node estimates the target state based on the selected nodes' acoustic energy measurements. Refinements such as an optimal linear combination algorithm, residual resampling, and the Markov chain Monte Carlo method are introduced in the scheme to improve the tracking performance. Simulation results validate the efficiency of the proposed tracking scheme.

  10. Optimal CCD readout by digital correlated double sampling

    Science.gov (United States)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
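    A simplified simulation of the DCDS principle: oversample the reset and signal levels of each pixel, average each window digitally, and subtract, so that kTC-like reset offsets cancel in the difference. All noise figures and window sizes below are illustrative, and the plain window average is just one possible digital filter.

      import numpy as np

      rng = np.random.default_rng(7)
      n_pix, n_samp = 10_000, 16          # pixels, samples per window
      signal = 100.0                      # true signal level (ADU)

      reset_offset = 20.0 * rng.standard_normal((n_pix, 1))   # per-pixel reset noise
      white = lambda: 2.0 * rng.standard_normal((n_pix, n_samp))

      reset_win = reset_offset + white()              # samples before transfer
      signal_win = reset_offset + signal + white()    # samples after transfer

      dcds = signal_win.mean(axis=1) - reset_win.mean(axis=1)
      print("mean:", dcds.mean(), "read noise:", dcds.std())
      # the 20-ADU reset offset cancels; white noise shrinks by sqrt(n_samp)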

  11. The SS-SCR Scheme for Dynamic Spectrum Access

    Directory of Open Access Journals (Sweden)

    Vinay Thumar

    2012-01-01

    Full Text Available We integrate the two models of Cognitive Radio (CR, namely, the conventional Sense-and-Scavenge (SS Model and Symbiotic Cooperative Relaying (SCR. The resultant scheme, called SS-SCR, improves the efficiency of spectrum usage and reliability of the transmission links. SS-SCR is enabled by a suitable cross-layer optimization problem in a multihop multichannel CR network. Its performance is compared for different PU activity patterns with those schemes which consider SS and SCR separately and perform disjoint resource allocation. Simulation results depict the effectiveness of the proposed SS-SCR scheme. We also indicate the usefulness of cloud computing for a practical deployment of the scheme.

  12. Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2014-05-01

    The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow, whereas subgrid-scale parameterizations estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation), which have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) approach, which unifies turbulence and moist convection components, produces better results than the other PBL schemes. For that reason, the TEMF scheme was chosen as the PBL scheme to optimize for the Intel Many Integrated Core (MIC) architecture, which ushers in a new era of supercomputing speed, performance, and compatibility, and allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations performed were quite generic in nature: the code was vectorized to utilize the vector units inside each CPU, and memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimization improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.

  13. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which distribution they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
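
    The flavour of such a normative benchmark can be conveyed with a small Monte Carlo sketch: for each candidate sample size n, estimate the probability of picking the truly better of two payoff distributions after n draws from each, and subtract a per-draw cost. The distributions and cost here are illustrative assumptions, not those of the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expected_gain(n, n_sim=20000, cost_per_draw=0.005):
        """Choose the option with the higher sample mean after n draws each;
        option A ~ N(0.6, 1) is truly better than option B ~ N(0.5, 1)."""
        a = rng.normal(0.6, 1.0, (n_sim, n)).mean(axis=1)
        b = rng.normal(0.5, 1.0, (n_sim, n)).mean(axis=1)
        p_correct = (a > b).mean()
        payoff = p_correct * 0.6 + (1 - p_correct) * 0.5
        return payoff - 2 * n * cost_per_draw   # sampling both options costs

    optimal_n = max(range(1, 51), key=expected_gain)
    ```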

  14. Implementation of a compressive sampling scheme for wireless sensors to achieve energy efficiency in a structural health monitoring system

    Science.gov (United States)

    O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.

    2013-04-01

    Wireless sensors have emerged to offer low-cost sensors with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost, as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (>Nyquist) uniform sampling and storage of the entire target signal followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and operation on the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, as exploited by several compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often moving evaluation of CS techniques to simulation and post-processing. The research presented here describes implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that analysis on compressed data remains accurate.
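
    On the reconstruction side, any standard sparse solver can be used; a minimal orthogonal matching pursuit sketch (the measurement matrix, sparsity level and basis are assumptions for illustration, not details of the Narada deployment):

    ```python
    import numpy as np

    def omp(Phi, y, k):
        """Orthogonal matching pursuit: recover a k-sparse coefficient
        vector x from compressed measurements y = Phi @ x."""
        residual, support = y.copy(), []
        for _ in range(k):
            corr = np.abs(Phi.T @ residual)
            corr[support] = 0                        # don't re-pick chosen atoms
            support.append(int(np.argmax(corr)))
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x = np.zeros(Phi.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    Phi = rng.normal(size=(40, 128)) / np.sqrt(40)   # random measurement matrix
    x_true = np.zeros(128)
    x_true[[5, 60, 99]] = [1.0, -0.5, 2.0]           # 3-sparse signal
    x_hat = omp(Phi, Phi @ x_true, k=3)              # recovers x_true (w.h.p.)
    ```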

  15. Finding an Optimal Thermo-Mechanical Processing Scheme for a Gum-Type Ti-Nb-Zr-Fe-O Alloy

    Science.gov (United States)

    Nocivin, Anna; Cojocaru, Vasile Danut; Raducanu, Doina; Cinca, Ion; Angelescu, Maria Lucia; Dan, Ioan; Serban, Nicolae; Cojocaru, Mirela

    2017-09-01

    A gum-type alloy was subjected to a thermo-mechanical processing scheme to establish a suitable process for obtaining superior structural and behavioural characteristics. Three processes were proposed: a homogenization treatment, a cold-rolling process and a solution treatment with three heating temperatures: 1073 K (800 °C), 1173 K (900 °C) and 1273 K (1000 °C). The results of all three proposed processes were analyzed using X-ray diffraction and scanning electron microscopy imaging to establish and compare the structural modifications. The behavioural characterization was completed with micro-hardness and tensile strength tests. The optimal results were obtained for the solution treatment at 1073 K.

  16. A systematic random sampling scheme optimized to detect the proportion of rare synapses in the neuropil.

    Science.gov (United States)

    da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C

    2009-05-30

    Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
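
    A back-of-the-envelope binomial calculation shows why such a large sample is needed for rare synapses: the number of synapses that must be examined grows roughly as 1/p for a fixed relative standard error (the target values below are examples, not the paper's):

    ```python
    import math

    def synapses_needed(p, rel_se):
        """Synapses to examine so that the standard error of an estimated
        proportion p equals rel_se * p (binomial approximation)."""
        return math.ceil((1 - p) / (p * rel_se ** 2))

    # labeled synapses at ~0.2% of the neuropil, 25% relative standard error:
    n = synapses_needed(0.002, 0.25)   # -> 7984 synapses
    ```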

  17. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  18. Benefits of incorporating the adaptive dynamic range optimization amplification scheme into an assistive listening device for people with mild or moderate hearing loss.

    Science.gov (United States)

    Chang, Hung-Yue; Luo, Ching-Hsing; Lo, Tun-Shin; Chen, Hsiao-Chuan; Huang, Kuo-You; Liao, Wen-Huei; Su, Mao-Chang; Liu, Shu-Yu; Wang, Nan-Mai

    2017-08-28

    This study investigated whether a self-designed assistive listening device (ALD) that incorporates an adaptive dynamic range optimization (ADRO) amplification strategy can surpass a commercially available monaurally worn linear ALD, the SM100. Both subjective and objective measurements were implemented. Mandarin Hearing-In-Noise Test (MHINT) scores were the objective measurement, whereas participant satisfaction was the subjective measurement. The comparison was performed in a mixed design (i.e., subjects' hearing status being mild or moderate, quiet versus noisy environments, and linear versus ADRO scheme). The participants were two groups of hearing-impaired subjects: nine with mild and eight with moderate hearing loss. The results of the ADRO system revealed a significant difference in the MHINT sentence reception threshold (SRT) in noisy environments between monaurally aided and unaided conditions, whereas the linear system did not. The benchmark results showed that the ADRO scheme is effectively beneficial to people with mild or moderate hearing loss in noisy environments. The satisfaction ratings regarding overall speech quality indicated that the participants were satisfied with the speech quality of both the ADRO and linear schemes in quiet environments, and that they were more satisfied with ADRO than with the linear scheme in noisy environments.

  19. Event-Triggered Distributed Approximate Optimal State and Output Control of Affine Nonlinear Interconnected Systems.

    Science.gov (United States)

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-06-08

    This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input-affine nonlinear subsystems using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of the cost functions of the individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy forward in time, neural networks are employed to reconstruct the unknown optimal value function at each subsystem online. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time of the learning algorithm. The development is based on the observation that, in event-triggered feedback, the sampling instants are dynamic, which results in variable inter-event times. To relax the requirement of entire state measurements, an extended nonlinear observer is designed at each subsystem to recover the system internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and that the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.

  20. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier approaches have drawbacks: the optimization loop consists of three phases and relies on empirical parameters. We propose a united sampling criterion that simplifies the algorithm and achieves the global optimum of constrained problems without any empirical parameters. The criterion selects points located in the feasible region with high model uncertainty as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model because, unlike super-EGO, the sample points are not concentrated in extremely small regions. The performance of the proposed method, such as the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
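
    For contrast with the united criterion proposed here, the classic EGO-style infill criterion (expected improvement) that such methods build on fits in a few lines; this is the textbook formula for minimization, not the authors' criterion:

    ```python
    import math

    def expected_improvement(mu, sigma, y_best):
        """EI at a candidate point where the surrogate predicts mean `mu`
        and standard deviation `sigma`, given incumbent minimum `y_best`."""
        if sigma <= 0.0:
            return 0.0
        z = (y_best - mu) / sigma
        cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        return (y_best - mu) * cdf + sigma * pdf
    ```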

  1. Evaluation of an Optimal Epidemiological Typing Scheme for Legionella pneumophila with Whole-Genome Sequence Data Using Validation Guidelines.

    Science.gov (United States)

    David, Sophia; Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R; Afshar, Baharak; Underwood, Anthony; Fry, Norman K; Parkhill, Julian; Harrison, Timothy G

    2016-08-01

    Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current "gold standard" typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard "typing panel," previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination ranged from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. Copyright © 2016 David et al.
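
    The index of discrimination used above is Simpson's index of diversity applied to typing results (Hunter and Gaston); a minimal sketch:

    ```python
    from collections import Counter

    def discrimination_index(types):
        """Probability that two isolates drawn at random (without
        replacement) from the panel receive different type designations."""
        n = len(types)
        counts = Counter(types).values()
        return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

    discrimination_index(["ST1", "ST1", "ST47", "ST62"])   # -> 0.833...
    ```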

  2. Energy mesh optimization for multi-level calculation schemes

    International Nuclear Information System (INIS)

    Mosca, P.; Taofiki, A.; Bellier, P.; Prevost, A.

    2011-01-01

    The industrial calculations of third-generation nuclear reactors are based on sophisticated strategies of homogenization and collapsing at different spatial and energetic levels. An important issue in ensuring the quality of these calculation models is the choice of the collapsing energy mesh. In this work, we show a new approach to generate optimized energy meshes starting from the SHEM 281-group library. The optimization model is applied to 1D cylindrical cells and consists of finding an energy mesh which minimizes the errors between two successive collision probability calculations. The former is realized over the fine SHEM mesh with Livolant-Jeanpierre self-shielded cross sections and the latter is performed with collapsed cross sections over the energy mesh being optimized. The optimization is done by the particle swarm algorithm implemented in the code AEMC, and multigroup flux solutions are obtained from standard APOLLO2 solvers. By this new approach, a set of new optimized meshes ranging from 10 to 50 groups has been defined for PWR and BWR calculations. This set will allow users to adapt the energy detail of the solution to the complexity of the calculation (assembly, multi-assembly, two-dimensional whole core). Some preliminary verifications, in which the accuracy of the new meshes is measured against a direct 281-group calculation, show that the 30-group optimized mesh offers a good compromise between simulation time and accuracy for a standard 17 × 17 UO2 assembly with and without control rods. (author)

  3. HYBRID SYSTEM BASED FUZZY-PID CONTROL SCHEMES FOR UNPREDICTABLE PROCESS

    Directory of Open Access Journals (Sweden)

    M.K. Tan

    2011-07-01

    Full Text Available In general, the primary aim of the polymerization industry is to enhance process operation in order to obtain a high-quality, high-purity product. However, a sudden, large amount of heat is released rapidly during the mixing of the two reactants, phenol and formalin, due to the exothermic behavior of the reaction. The unpredictable heat causes deviations of the process temperature and hence affects product quality. Therefore, it is vital to control the process temperature during polymerization. In modern industry, fuzzy logic is commonly used to auto-tune PID controllers to control the process temperature. However, this method needs an experienced operator to fine-tune the fuzzy membership functions and universe of discourse via a trial-and-error approach. Hence, the settings of the fuzzy inference system might not be accurate due to human error. Besides that, control of the process can be challenging due to rapid changes in the plant parameters, which increase the process complexity. This paper proposes an optimization scheme using a hybrid of Q-learning (QL) and a genetic algorithm (GA) to optimize the fuzzy membership functions, in order to allow the conventional fuzzy-PID controller to control the process temperature more effectively. The performance of the proposed optimization scheme is compared with the existing fuzzy-PID scheme. The results show that the proposed optimization scheme is able to control the process temperature more effectively even if a disturbance is introduced.

  4. The method of Sample Management in Neutron Activation Analysis Laboratory-Serpong

    International Nuclear Information System (INIS)

    Elisabeth-Ratnawati

    2005-01-01

    In a testing laboratory using the neutron activation analysis method, sample preparation is the main factor and cannot be neglected. Errors in sample preparation can yield results with lower accuracy. This article explains the scheme of sample management, i.e., administration of sample receipt, sample separation, preparation of liquid and solid samples, sample grouping, irradiation, sample counting, and holding of samples after irradiation. If sample management is properly applied according to the Standard Operating Procedure, each sample has good traceability. Optimizing sample management requires trained, skilled personnel and good facilities. (author)

  5. Using Linked Survey Paradata to Improve Sampling Strategies in the Medical Expenditure Panel Survey

    Directory of Open Access Journals (Sweden)

    Mirel Lisa B.

    2017-06-01

    Full Text Available Using paradata from a prior survey that is linked to a new survey can help a survey organization develop more effective sampling strategies. One example of this type of linkage or subsampling is between the National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS). MEPS is a nationally representative sample of the U.S. civilian, noninstitutionalized population based on a complex multi-stage sample design. Each year a new sample is drawn as a subsample of households from the prior year’s NHIS. The main objective of this article is to examine how paradata from a prior survey can be used in developing a sampling scheme in a subsequent survey. A framework for optimal allocation of the sample in substrata formed for this purpose is presented and evaluated for the relative effectiveness of alternative substratification schemes. The framework is applied, using real MEPS data, to illustrate how utilizing paradata from the linked survey offers the possibility of making improvements to the sampling scheme for the subsequent survey. The improvements aim to reduce data collection costs while maintaining or increasing effective responding sample sizes and response rates for a harder-to-reach population.

  6. Numerical simulation and optimized design of cased telescoped ammunition interior ballistic

    Directory of Open Access Journals (Sweden)

    Jia-gang Wang

    2018-04-01

    Full Text Available In order to achieve an optimized design of cased telescoped ammunition (CTA) interior ballistics, a genetic algorithm coupled with a CTA interior ballistic model was introduced into the design process. Given the interior ballistic characteristics of a CTA gun, the goal of the design is to obtain a projectile muzzle velocity as large as possible. The optimal design is carried out using a genetic algorithm, with a constraint on peak pressure, by varying the chamber volume and gunpowder charge density. A numerical simulation of interior ballistics based on a 35 mm CTA firing experimental scheme was conducted and the genetic algorithm was then used for numerical optimization. The projectile muzzle velocity of the optimized scheme increased from 1168 m/s for the initial experimental scheme to 1182 m/s. Four optimization schemes were then obtained from several independent optimization runs; the schemes were compared with each other and the differences between them are small, with nearly identical peak pressures and muzzle velocities. The results show that the genetic algorithm is effective in the optimal design of CTA interior ballistics. This work lays the foundation for further CTA interior ballistic design. Keywords: Cased telescoped ammunition, Interior ballistics, Gunpowder, Optimization genetic algorithm
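
    A hedged sketch of the overall approach, with a deliberately fake surrogate standing in for the interior ballistic simulation (all functions, bounds and constants below are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy stand-ins: muzzle velocity and peak pressure as smooth functions
    # of normalized charge density x[0] and chamber volume x[1].
    def velocity(x):  return 900 + 400 * x[0] - 150 * (x[1] - 0.6) ** 2
    def pressure(x):  return 250 + 320 * x[0] ** 2
    P_MAX = 420.0                                        # peak-pressure limit

    def fitness(x):
        penalty = 50.0 * max(0.0, pressure(x) - P_MAX)   # punish overpressure
        return velocity(x) - penalty                     # maximize velocity

    pop = rng.uniform(0.0, 1.0, (40, 2))
    for _ in range(100):
        scores = np.array([fitness(x) for x in pop])
        parents = pop[np.argsort(scores)[-20:]]          # selection
        kids = (parents[rng.integers(0, 20, 20)] +
                parents[rng.integers(0, 20, 20)]) / 2    # crossover
        kids += rng.normal(0.0, 0.05, kids.shape)        # mutation
        pop = np.clip(np.vstack([parents, kids]), 0.0, 1.0)
    best = pop[np.argmax([fitness(x) for x in pop])]
    ```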

  7. Implementation of suitable flow injection/sequential-sample separation/preconcentration schemes for determination of trace metal concentrations using detection by electrothermal atomic absorption spectrometry and inductively coupled plasma mass spectrometry

    DEFF Research Database (Denmark)

    Hansen, Elo Harald; Wang, Jianhua

    2002-01-01

    Various preconditioning procedures comprising appropriate separation/preconcentration schemes in order to obtain optimal sensitivity and selectivity characteristics when using electrothermal atomic absorption spectrometry (ETAAS) and inductively coupled plasma mass spectrometry (ICPMS) …

  8. Optimized Quasi-Interpolators for Image Reconstruction.

    Science.gov (United States)

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

  9. Time-and-ID-Based Proxy Reencryption Scheme

    Directory of Open Access Journals (Sweden)

    Kambombo Mtonga

    2014-01-01

    Full Text Available A time- and ID-based proxy reencryption scheme is proposed in this paper, in which type-based proxy reencryption enables the delegator to implement fine-grained policies with one key pair without any additional trust in the proxy. However, in some applications, the time at which the data was sampled or collected is very critical. In such applications, for example healthcare and criminal investigations, the delegatee may be interested in only some of the messages of certain types sampled within some time bound, rather than the entire subset. Hence, in order to cater for such situations, we propose in this paper a time-and-identity-based proxy reencryption scheme that takes into account the time at which the data was collected, in addition to its type, as a factor when categorizing data. Our scheme is based on the Boneh and Boyen identity-based scheme (BB-IBE) and Matsuo’s proxy reencryption scheme for identity-based encryption (IBE to IBE). We prove that our scheme is semantically secure in the standard model.

  10. Energy Aware Routing Schemes in Solar PoweredWireless Sensor Networks

    KAUST Repository

    Dehwah, Ahmad H.

    2016-10-01

    Wireless sensor networks enable inexpensive distributed monitoring systems that are the backbone of smart cities. In this dissertation, we are interested in wireless sensor networks for traffic monitoring and emergency flood detection to improve the safety of future cities. To achieve real-time traffic monitoring and emergency flood detection, the system has to be continually operational. Accordingly, an energy source is needed to ensure energy availability at all times. The sun provides the most inexpensive source of energy, and therefore energy is provided here by a solar panel working in conjunction with a rechargeable battery. Unlike batteries, solar energy fluctuates spatially and temporally due to panel orientation, seasonal variation and node location, particularly in cities where buildings cast shadows. In particular, it becomes scarce whenever floods are likely to occur, as the weather tends to be cloudy at such times, when the emergency detection system is most needed. These considerations lead to the need to optimize the energy of the sensor network, to maximize its sensing performance. In this dissertation, we address the challenges associated with long-term outdoor deployments and provide solutions to overcome some of these challenges. We then introduce the energy optimization problem as a distributed greedy approach. Motivated by the flood-sensing application, our objective is to maximize the energy margin in the solar-powered network at the onset of a heavy rain event, to maximize the network lifetime. The decentralized scheme achieves this by optimizing the energy over a time horizon T, taking into account the available and predicted energy over the entire routing path. A good energy forecasting scheme can significantly enhance energy optimization in WSNs. Thus, this dissertation proposes a new energy forecasting scheme that is compatible with the platform’s capabilities. This proposed …

  11. CSR schemes in agribusiness

    DEFF Research Database (Denmark)

    Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela

    2013-01-01

    Purpose – The rise of CSR followed a demand for CSR standards and guidelines. In a sector already characterized by a large number of standards, the authors seek to ask what CSR schemes apply to agribusiness, and how they can be systematically compared and analysed. Design/methodology/approach – Following a deductive-inductive approach the authors develop a model to compare and analyse CSR schemes based on existing studies and on coding qualitative data on 216 CSR schemes. Findings – The authors confirm that CSR standards and guidelines have entered agribusiness and identify a complex landscape of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit …

  12. Multi-Hierarchical Gray Correlation Analysis Applied in the Selection of Green Building Design Scheme

    Science.gov (United States)

    Wang, Li; Li, Chuanghong

    2018-02-01

    As a sustainable form of ecological structure, green building is nowadays attracting increasingly widespread concern and advocacy. In the survey and design phase of a construction project, evaluating and selecting the green building design scheme against a scientific and reasonable evaluation index system can effectively improve the ecological benefits of green building projects. Based on the new Green Building Evaluation Standard, which came into effect on January 1, 2015, an evaluation index system for green building design schemes is constructed, taking into account the evaluation contents related to the design scheme. Experts experienced in construction scheme optimization scored the indices, and the weight of each evaluation index was determined through the AHP method. The grey relational degree between each candidate scheme and the ideal scheme was calculated using a multi-level grey relational analysis model, and the optimal scheme was then determined. The feasibility and practicability of the evaluation method are verified with examples.
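
    A minimal sketch of the grey relational grade computation for benefit-type criteria (the decision matrix, weights and resolution coefficient are illustrative; the paper's index system is far richer):

    ```python
    import numpy as np

    def grey_relational_grades(schemes, weights, rho=0.5):
        """Grey relational grade of each design scheme against the ideal
        scheme (column-wise best after min-max normalization); assumes
        every criterion varies across schemes."""
        x = (schemes - schemes.min(0)) / (schemes.max(0) - schemes.min(0))
        delta = np.abs(x - x.max(axis=0))            # distance to ideal scheme
        coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        return coeff @ weights                       # weighted grade per scheme

    # rows = candidate schemes, columns = benefit-type criterion scores
    grades = grey_relational_grades(np.array([[0.8, 0.6, 0.9],
                                              [0.7, 0.9, 0.8]]),
                                    np.array([0.5, 0.3, 0.2]))
    ```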

  13. A hybrid PI control scheme for airship hovering

    International Nuclear Information System (INIS)

    Ashraf, Z.; Choudhry, M.A.; Hanif, A.

    2012-01-01

    Airships offer many attractive applications in the aerospace industry, including transportation of heavy payloads, tourism, emergency management, communication, and hover- and vision-based applications. Hovering control of an airship has many uses in different engineering fields. However, it is difficult to sustain the hover condition while maintaining controllability. Different solutions have been proposed in the literature, but most of them are difficult to analyse and implement. In this paper, we present a simple and efficient method to design a multi-input multi-output hybrid PI control scheme for an airship. It can maintain the stability of the plant by rejecting disturbance inputs to ensure robustness. A control scheme based on feedback theory is proposed that uses principles of optimality with integral action for hovering applications. Simulations are carried out in MATLAB to examine the proposed control scheme for hovering in different wind conditions. A comparison of the technique with an existing scheme demonstrates the effectiveness of the proposed control scheme. (author)

  14. Efficient JPEG 2000 Image Compression Scheme for Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Halim Sghaier

    2011-08-01

    Full Text Available When using wireless sensor networks for real-time data transmission, some critical points should be considered. Restricted computational power, reduced memory, narrow bandwidth and limited energy supply place strong constraints on sensor nodes. Therefore, maximizing network lifetime and minimizing energy consumption are constant optimization goals. To overcome the computation and energy limitations of individual sensor nodes during image transmission, an energy-efficient image transport scheme is proposed, taking advantage of the JPEG 2000 still image compression standard and using MATLAB and C code from JasPer. JPEG 2000 provides a practical set of features not necessarily available in previous standards. These features were achieved using two techniques: the discrete wavelet transform (DWT) and embedded block coding with optimized truncation (EBCOT). The performance of the proposed image transport scheme is investigated with respect to image quality and energy consumption. Simulation results are presented and show that the proposed scheme optimizes network lifetime and significantly reduces the amount of required memory by analyzing the functional influence of each parameter of this distributed image compression algorithm.

  15. A simple and optimal ancestry labeling scheme for trees

    DEFF Research Database (Denmark)

    Dahlgaard, Søren; Knudsen, Mathias Bæk Tejs; Rotbart, Noy Galil

    2015-01-01

    We present an ancestry labeling scheme for trees with label size lg n + 2 lg lg n + 3. The problem was first presented by Kannan et al. [STOC '88] along with a simple 2 lg n solution. Motivated by applications to XML files, the label size was improved incrementally over the course of more than 20 years by a series

  16. Quantum noise in laser-interferometer gravitational-wave detectors with a heterodyne readout scheme

    International Nuclear Information System (INIS)

    Buonanno, Alessandra; Chen Yanbei; Mavalvala, Nergis

    2003-01-01

    We analyze and discuss the quantum noise in signal-recycled laser interferometer gravitational-wave detectors, such as Advanced LIGO, using a heterodyne readout scheme and taking into account the optomechanical dynamics. Contrary to homodyne detection, a heterodyne readout scheme can simultaneously measure more than one quadrature of the output field, providing an additional way of optimizing the interferometer sensitivity, but at the price of additional noise. Our analysis provides the framework needed to evaluate whether a homodyne or heterodyne readout scheme is preferable for second-generation interferometers from an astrophysical point of view. As a more theoretical outcome of our analysis, we show that as a consequence of the Heisenberg uncertainty principle the heterodyne scheme cannot convert conventional interferometers into (broadband) quantum non-demolition interferometers

  17. Control Methods Utilizing Energy Optimizing Schemes in Refrigeration Systems

    DEFF Research Database (Denmark)

    Larsen, L.S; Thybo, C.; Stoustrup, Jakob

    2003-01-01

    The potential energy savings in refrigeration systems using energy optimal control have been proved to be substantial. This, however, requires an intelligent control that drives the refrigeration system towards the energy optimal state. This paper proposes an approach for a control scheme which drives the condenser pressure towards an optimal state. The objective is to present a feasible method that can be used for energy optimizing control. A simulation model of a simple refrigeration system is used as the basis for testing the control method.

  18. Distributed Optimal Consensus Control for Nonlinear Multiagent System With Unknown Dynamic.

    Science.gov (United States)

    Zhang, Jilie; Zhang, Huaguang; Feng, Tao

    2017-08-01

    This paper focuses on distributed optimal cooperative control for continuous-time nonlinear multiagent systems (MASs) with completely unknown dynamics via adaptive dynamic programming (ADP) technology. By introducing predesigned extra compensators, the augmented neighborhood error systems are derived, which successfully circumvents the system knowledge requirement for ADP. It is revealed that the optimal consensus protocols actually work as the solutions of the MAS differential game. A policy iteration algorithm is adopted, and it is theoretically proved that the iterative value function sequence strictly converges to the solution of the coupled Hamilton-Jacobi-Bellman equation. Based on this point, a novel online iterative scheme is proposed, which runs on the data sampled from the augmented system and the gradient of the value function. Neural networks are employed to implement the algorithm, and the weights are updated, in the least-squares sense, to the ideal values, which yields approximated optimal consensus protocols. Finally, a numerical example is given to illustrate the effectiveness of the proposed scheme.

  19. An evaluation of sampling and full enumeration strategies for Fisher Jenks classification in big data settings

    Science.gov (United States)

    Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.

    2017-01-01

    Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. Spatial autocorrelation, the number of desired classes, and the form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between the improved speed of the sampling approaches and the loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.

  20. Decoupled Scheme for Time-Dependent Natural Convection Problem II: Time Semidiscreteness

    Directory of Open Access Journals (Sweden)

    Tong Zhang

    2014-01-01

    stability and the corresponding optimal error estimates are presented. Furthermore, a decoupled numerical scheme is proposed by decoupling the nonlinear terms via temporal extrapolation; optimal error estimates are established. Finally, some numerical results are provided to verify the performances of the developed algorithms. Compared with the coupled numerical scheme, the decoupled algorithm not only keeps good accuracy but also saves a lot of computational cost. Both theoretical analysis and numerical experiments show the efficiency and effectiveness of the decoupled method for time-dependent natural convection problem.

  1. Efficient Power Scheduling in Smart Homes Using Hybrid Grey Wolf Differential Evolution Optimization Technique with Real Time and Critical Peak Pricing Schemes

    Directory of Open Access Journals (Sweden)

    Muqaddas Naz

    2018-02-01

    Full Text Available With the emergence of automated environments, energy demand by consumers is increasing rapidly. More than 80% of total electricity is being consumed in the residential sector. This brings the challenging task of maintaining the balance between the demand and generation of electric power. In order to meet such challenges, the traditional grid is renovated by integrating two-way communication between the consumer and the generation unit. To reduce electricity cost and peak load demand, demand side management (DSM) is modeled as an optimization problem, and the solution is obtained by applying meta-heuristic techniques with different pricing schemes. In this paper, an optimization technique, the hybrid grey wolf differential evolution (HGWDE), is proposed by merging enhanced differential evolution (EDE) and grey wolf optimization (GWO) schemes, using real-time pricing (RTP) and critical peak pricing (CPP). Load shifting is performed from on-peak hours to off-peak hours depending on the electricity cost defined by the utility. However, there is a trade-off between user comfort and cost. To validate the performance of the proposed algorithm, simulations have been carried out in MATLAB. Results illustrate that using RTP, the peak-to-average ratio (PAR) is reduced to 53.02%, 29.02% and 26.55%, while the electricity bill is reduced to 12.81%, 12.012% and 12.95%, respectively, for the 15-, 30- and 60-min operational time intervals (OTI). On the other hand, using the CPP tariff, the PAR and electricity bill are reduced to 47.27%, 22.91%, 22% and 13.04%, 12%, 11.11%.

  2. QoE Guarantee Scheme Based on Cooperative Cognitive Cloud and Opportunistic Weight Particle Swarm

    Directory of Open Access Journals (Sweden)

    Weihang Shi

    2015-01-01

    Full Text Available It is well known that Internet applications of cloud services may be seriously affected by the inefficiency of cloud computing and inaccurate evaluation of quality of experience (QoE). In this paper, a QoE guarantee mechanism is proposed, based on a construction algorithm for a cooperative cognitive cloud platform and an opportunity-weight particle swarm clustering optimization algorithm. Through cooperation between the users sending requests and their cognitive neighbor users, the mechanism combines the cooperating sub-cloud platforms and constructs the optimal cloud platform for the different services. At the same time, the particle swarm optimization algorithm is dynamically enhanced according to the various opportunity request weights, which optimizes the cooperative cognitive cloud platform. Finally, the QoE guarantee scheme is built from the opportunity-weight particle swarm optimization algorithm and the collaborative cognitive cloud platform. The experimental results show that the proposed mechanism is superior to the QoE guarantee scheme based on a cooperative cloud alone and to the QoE guarantee scheme based on particle swarm optimization alone, with advantages in optimization fitness, cloud computing service execution efficiency, and throughput performance.

  3. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.
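
    The co-prime condition can be illustrated with a noise-free toy: sampling every P-th Nyquist instant across P repetitions visits each position within the symbol exactly once, so the Nyquist-rate symbol can be re-interleaved (a sketch, not the paper's receiver):

    ```python
    import numpy as np
    from math import gcd

    def equivalent_time_sample(symbol, P):
        """Recover one Nyquist-rate symbol of length N from P repetitions
        sampled P times below the Nyquist rate; requires gcd(P, N) == 1."""
        N = len(symbol)
        assert gcd(P, N) == 1, "repetitions must be co-prime with symbol length"
        stream = np.tile(symbol, P)          # P identical (noise-free) repetitions
        low_rate = stream[::P]               # sub-Nyquist samples, N in total
        recovered = np.empty(N)
        recovered[(np.arange(N) * P) % N] = low_rate   # re-interleave
        return recovered

    x = np.sin(2 * np.pi * np.arange(7) / 7)
    assert np.allclose(equivalent_time_sample(x, 3), x)
    ```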


  5. Quantum dynamics calculations using symmetrized, orthogonal Weyl-Heisenberg wavelets with a phase space truncation scheme. II. Construction and optimization

    International Nuclear Information System (INIS)

    Poirier, Bill; Salam, A.

    2004-01-01

    In this paper, we extend and elaborate upon a wavelet method first presented in a previous publication [B. Poirier, J. Theo. Comput. Chem. 2, 65 (2003)]. In particular, we focus on construction and optimization of the wavelet functions, from theoretical and numerical viewpoints, and also examine their localization properties. The wavelets used are modified Wilson-Daubechies wavelets, which in conjunction with a simple phase space truncation scheme, enable one to solve the multidimensional Schroedinger equation. This approach is ideally suited to rovibrational spectroscopy applications, but can be used in any context where differential equations are involved

  6. A novel two-level dynamic parallel data scheme for large 3-D SN calculations

    International Nuclear Information System (INIS)

    Sjoden, G.E.; Shedlock, D.; Haghighat, A.; Yi, C.

    2005-01-01

    We introduce a new dynamic parallel memory optimization scheme for executing large-scale 3-D discrete ordinates (Sn) simulations on distributed memory parallel computers. In order for parallel transport codes to be truly scalable, they must use parallel data storage, where only the variables that are locally computed are locally stored. Even with parallel data storage for the angular variables, cumulative storage requirements for large discrete ordinates calculations can be prohibitive. To address this problem, Memory Tuning has been implemented in the PENTRAN 3-D parallel discrete ordinates code as an optimized, two-level ('large' array, 'small' array) parallel data storage scheme. Memory Tuning can be described as the process of parallel data memory optimization. Memory Tuning dynamically minimizes the amount of required parallel data in allocated memory on each processor using a statistical sampling algorithm. This algorithm is based on the integral average and standard deviation of the number of fine meshes contained in each coarse mesh in the global problem. Because PENTRAN only stores the locally computed problem phase space, optimal two-level memory assignments can be unique on each node, depending upon the parallel decomposition used (hybrid combinations of angular, energy, or spatial). As demonstrated in the two large discrete ordinates models presented (a storage cask and an OECD MOX Benchmark), Memory Tuning can save a substantial amount of memory per parallel processor, allowing one to accomplish very large-scale Sn computations. (authors)

  7. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
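
    Under simple random sampling within strata, the optimal allocation discussed here reduces to the classical Neyman allocation; a minimal sketch (the stratum sizes and standard deviations are illustrative, and rounding may shift the total by one or two units):

    ```python
    import numpy as np

    def neyman_allocation(N_h, S_h, n):
        """Allocate a total sample of size n to strata with population
        sizes N_h and within-stratum standard deviations S_h so that the
        variance of the stratified mean estimator is minimized."""
        w = np.asarray(N_h, dtype=float) * np.asarray(S_h, dtype=float)
        return np.round(n * w / w.sum()).astype(int)

    # more sample goes where the stratum is large and variable:
    neyman_allocation([5000, 3000, 2000], [10.0, 4.0, 2.0], 500)  # -> [379, 91, 30]
    ```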

  8. Improving multivariate Horner schemes with Monte Carlo tree search

    Science.gov (United States)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.

    2013-11-01

    Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
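
    To make the search space concrete: a multivariate Horner scheme is fixed once an ordering of the variables is chosen, and the evaluation cost depends on that ordering, which is what the greedy and MCTS strategies optimize. A sketch of Horner evaluation for a given order (the sparse-dictionary representation is an illustrative choice, not the paper's data structure):

    ```python
    def horner_eval(poly, order, values):
        """Evaluate a multivariate polynomial given as {exponent_tuple: coeff}
        by recursively factoring out the variables in `order`."""
        if not order:
            return sum(poly.values())                # constant remainder
        v, rest = order[0], order[1:]
        by_deg = {}                                  # group terms by exponent of v
        for exps, c in poly.items():
            by_deg.setdefault(exps[v], {})[exps] = c
        degs = sorted(by_deg, reverse=True)
        result, prev = 0.0, degs[0]
        for d in degs:                               # (((q_k x + ...) x + ...) ...)
            result *= values[v] ** (prev - d)
            result += horner_eval(by_deg[d], rest, values)
            prev = d
        return result * values[v] ** prev

    # p(x, y) = 3x²y + x² + 2xy + 5, factoring x first, at x = 2, y = 3:
    p = {(2, 1): 3, (2, 0): 1, (1, 1): 2, (0, 0): 5}
    assert horner_eval(p, (0, 1), {0: 2.0, 1: 3.0}) == 57
    ```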

  9. Coordinated Voltage Control Scheme for VSC-HVDC Connected Wind Power Plants

    DEFF Research Database (Denmark)

    Guo, Yifei; Gao, Houlei; Wu, Qiuwei

    2017-01-01

    This paper proposes a coordinated voltage control scheme based on model predictive control (MPC) for voltage source converter‐based high voltage direct current (VSC‐HVDC) connected wind power plants (WPPs). In the proposed scheme, voltage regulation capabilities of VSC and WTGs are fully utilized...... and optimally coordinated. Two control modes, namely operation optimization mode and corrective mode, are designed to coordinate voltage control and economic operation of the system. In the first mode, the control objective includes the bus voltages, power losses and dynamic Var reserves of wind turbine...

  10. An evaluation of soil sampling for 137Cs using various field-sampling volumes.

    Science.gov (United States)

    Nyhan, J W; White, G C; Schofield, T G; Trujillo, G

    1983-05-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500- and 12,500-cm3 field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils were observed from 0.38 to 0.57. Spatial variance components of 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2-4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.

  11. An efficient numerical scheme for the simulation of parallel-plate active magnetic regenerators

    DEFF Research Database (Denmark)

    Torregrosa-Jaime, Bárbara; Corberán, José M.; Payá, Jorge

    2015-01-01

    A one-dimensional model of a parallel-plate active magnetic regenerator (AMR) is presented in this work. The model is based on an efficient numerical scheme which has been developed after analysing the heat transfer mechanisms in the regenerator bed. The new finite difference scheme optimally com… Compared to the fully implicit scheme, the proposed scheme achieves more accurate results, prevents numerical errors and requires less computational effort. In AMR simulations the new scheme can reduce the computational time by 88%.

  12. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2013-01-01

    Full Text Available Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor’s method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.

  13. Pareto optimization in algebraic dynamic programming.

    Science.gov (United States)

    Saule, Cédric; Giegerich, Robert

    2015-01-01

    Pareto optimization combines independent objectives by computing the Pareto front of its search space, defined as the set of all solutions for which no other candidate solution scores better under all objectives. This gives, in a precise sense, better information than an artificial amalgamation of different scores into a single objective, but is more costly to compute. Pareto optimization naturally occurs with genetic algorithms, albeit in a heuristic fashion. Non-heuristic Pareto optimization so far has been used only with a few applications in bioinformatics. We study exact Pareto optimization for two objectives in a dynamic programming framework. We define a binary Pareto product operator ∗ on arbitrary scoring schemes. Independent of a particular algorithm, we prove that for two scoring schemes A and B used in dynamic programming, the scoring scheme A ∗ B correctly performs Pareto optimization over the same search space. We study different implementations of the Pareto operator with respect to their asymptotic and empirical efficiency. Without artificial amalgamation of objectives, and with no heuristics involved, Pareto optimization is faster than computing the same number of answers separately for each objective. For RNA structure prediction under the minimum free energy versus the maximum expected accuracy model, we show that the empirical size of the Pareto front remains within reasonable bounds. Pareto optimization lends itself to the comparative investigation of the behavior of two alternative scoring schemes for the same purpose. For the above scoring schemes, we observe that the Pareto front can be seen as a composition of a few macrostates, each consisting of several microstates that differ in the same limited way. We also study the relationship between abstract shape analysis and the Pareto front, and find that they extract information of a different nature from the folding space and can be meaningfully combined.
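
    The core of the Pareto product is the non-dominated filter applied to the combined score pairs in each dynamic programming cell. A minimal sketch of that filter (Python; both objectives are maximized here, an assumption made for concreteness) shows the sort-and-sweep implementation underlying the efficient variants studied:

        def pareto_front(pairs):
            """Non-dominated (score_A, score_B) pairs, both objectives maximized."""
            front = []
            # Sort by A descending (ties: B descending); a pair then survives
            # exactly when its B strictly improves on every pair kept so far.
            for a, b in sorted(pairs, key=lambda s: (-s[0], -s[1])):
                if not front or b > front[-1][1]:
                    front.append((a, b))
            return front

        # pareto_front([(3, 1), (2, 5), (3, 0), (1, 9)]) -> [(3, 1), (2, 5), (1, 9)]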

  14. Operation Modes and Control Schemes for Internet-Based Teleoperation System with Time Delay

    Institute of Scientific and Technical Information of China (English)

    曾庆军; 宋爱国

    2003-01-01

    Teleoperation systems play an important role in executing tasks in hazardous environments. As computer networks such as the Internet are used as the communication channel of a teleoperation system, varying time delay makes the overall system unstable and reduces transparency. This paper proposes twelve operation modes with different control schemes for teleoperation over the Internet with time delay. An optimal operation mode with its control scheme is specified for teleoperation with time delay, based on the trade-off between passivity and transparency properties. The validity of the proposed optimal mode and control scheme is experimentally confirmed using a simple one-DOF master-slave manipulator system.

  15. An Antenna Diversity Scheme for Digital Front-End with OFDM Technology

    Institute of Scientific and Technical Information of China (English)

    Fa-Long Luol; Ward Williams; Bruce Gladstone

    2011-01-01

    In this paper, we propose a new antenna diversity scheme for OFDM-based wireless communication and digital broadcasting applications. Compared with existing schemes, such as post-fast Fourier transform (FFT), pre-FFT, and polyphase-based filter-bank, the proposed scheme performs optimally and has very low computational complexity. It offers a better compromise between performance, power consumption, and complexity in real-time implementation of the receivers of broadband communication and digital broadcasting systems.

  16. An Energy-Efficient Scheme for Multirelay Cooperative Networks with Energy Harvesting

    Directory of Open Access Journals (Sweden)

    Dingcheng Yang

    2016-01-01

    Full Text Available This study investigates an energy-efficient scheme in multirelay cooperative networks with energy harvesting, where multiple sessions need to communicate with each other via a relay node. A two-step optimal method is proposed which maximizes the system energy efficiency while taking into account the receiver circuit energy consumption. Firstly, the optimal power allocation for relay nodes is determined to maximize the system throughput, based on the directional water-filling algorithm. Secondly, using quantum particle swarm optimization (QPSO), a joint relay node selection and session grouping optimization is proposed. With this algorithm, sessions can be classified into multiple groups that are assisted by the specific relay node with the maximum energy efficiency, which leads to better global searching ability and efficiency. Simulation results show that the proposed scheme can improve energy efficiency effectively compared with direct transmission and opportunistic relay-selected cooperative transmission.

  17. An Efficient Offloading Scheme For MEC System Considering Delay and Energy Consumption

    Science.gov (United States)

    Sun, Yanhua; Hao, Zhe; Zhang, Yanhua

    2018-01-01

    With the increasing number of mobile devices, mobile edge computing (MEC), which provides cloud computing capabilities proximate to mobile devices in 5G networks, has been envisioned as a promising paradigm to enhance the user experience. In this paper, we investigate a joint consideration of delay and energy consumption offloading scheme (JCDE) for an MEC system in 5G heterogeneous networks. An optimization problem is formulated to minimize the delay as well as the energy consumption of the offloading system, in which the delay and energy consumption of transmitting and computing tasks are taken into account. We adopt an iterative greedy algorithm to solve the optimization problem. Furthermore, simulations were carried out to validate the utility and effectiveness of the proposed scheme, and the effect of parameter variations on the system is analysed as well. Numerical results demonstrate the delay and energy efficiency improvements of our scheme compared with a previously published scheme.
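
    The per-task decision at the heart of such offloading schemes weighs transmission delay and energy against local computation. The sketch below (Python) is not the paper's JCDE algorithm; it is a simplified greedy cost comparison under an assumed cost model, and every numeric parameter is an illustrative assumption:

        def greedy_offload(tasks, w_delay=0.5, w_energy=0.5):
            """Choose local vs. edge execution per task = (cpu_cycles, input_bits)."""
            F_LOCAL, F_EDGE = 1e9, 1e10  # device / edge CPU speeds, cycles/s (assumed)
            RATE = 2e7                   # uplink rate, bits/s (assumed)
            KAPPA, P_TX = 1e-27, 0.5     # chip energy coefficient, tx power (assumed)
            plan = []
            for cycles, bits in tasks:
                d_local = cycles / F_LOCAL
                e_local = KAPPA * F_LOCAL ** 2 * cycles
                d_off = bits / RATE + cycles / F_EDGE  # upload, then edge compute
                e_off = P_TX * bits / RATE             # device only pays for transmission
                cost_local = w_delay * d_local + w_energy * e_local
                cost_off = w_delay * d_off + w_energy * e_off
                plan.append("offload" if cost_off < cost_local else "local")
            return plan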

  18. Optimized Explicit Runge--Kutta Schemes for the Spectral Difference Method Applied to Wave Propagation Problems

    KAUST Repository

    Parsani, Matteo

    2013-04-10

    Explicit Runge--Kutta schemes with large stable step sizes are developed for integration of high-order spectral difference spatial discretizations on quadrilateral grids. The new schemes permit an effective time step that is substantially larger than the maximum admissible time step of standard explicit Runge--Kutta schemes available in the literature. Furthermore, they have a small principal error norm and admit a low-storage implementation. The advantages of the new schemes are demonstrated through application to the Euler equations and the linearized Euler equations.
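
    The low-storage property means each stage update touches only two registers, regardless of the number of stages. The sketch below (Python) shows the generic Williamson 2N-storage loop; the coefficients are the classic three-stage, third-order Williamson set, standing in for the paper's optimized spectral-difference coefficients, which are not reproduced here:

        import numpy as np

        # Classic Williamson RK3 low-storage coefficients (illustrative stand-in).
        A = (0.0, -5.0 / 9.0, -153.0 / 128.0)
        B = (1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0)
        C = (0.0, 1.0 / 3.0, 3.0 / 4.0)

        def low_storage_rk_step(f, u, t, dt):
            """Advance u' = f(t, u) by one step using two storage registers."""
            du = np.zeros_like(u)
            for a, b, c in zip(A, B, C):
                du = a * du + dt * f(t + c * dt, u)  # accumulation register
                u = u + b * du                       # solution register
            return u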

  19. Optimized Explicit Runge--Kutta Schemes for the Spectral Difference Method Applied to Wave Propagation Problems

    KAUST Repository

    Parsani, Matteo; Ketcheson, David I.; Deconinck, W.

    2013-01-01

    Explicit Runge--Kutta schemes with large stable step sizes are developed for integration of high-order spectral difference spatial discretizations on quadrilateral grids. The new schemes permit an effective time step that is substantially larger than the maximum admissible time step of standard explicit Runge--Kutta schemes available in the literature. Furthermore, they have a small principal error norm and admit a low-storage implementation. The advantages of the new schemes are demonstrated through application to the Euler equations and the linearized Euler equations.

  20. Designing incentive schemes for promoting energy-efficient appliances: A new methodology and a case study for Spain

    International Nuclear Information System (INIS)

    Galarraga, Ibon; Abadie, Luis M.; Kallbekken, Steffen

    2016-01-01

    The energy-efficiency gap has been high on research and policy agendas for several decades. Incentive schemes such as subsidies, taxes and bonus-malus schemes are widely used to promote energy-efficient appliances. Most research, however, considers instruments in isolation, and only rarely in the context of political constraints on instrument use, or for alternative policy goals. This paper presents a methodology for the optimal design of incentive schemes based on the minimisation of Dead Weight Loss for different policy goals and policy restrictions. The use of the methodology is illustrated by designing optimal combinations of taxes and subsidies in Spain for three types of appliance: dishwashers, refrigerators and washing machines. The optimal policies are designed subject to different policy goals such as achieving a fixed reduction in emissions or a certain increased market share for efficient appliances, and for policy constraints such as budget neutrality. The methodology developed here can also be used to evaluate past and current incentive schemes. - Highlights: • A new methodology for the optimal design of incentive schemes is presented. • This is done by minimising the Dead Weight Loss for different goals and restrictions. • Efficient bonus-malus schemes can be designed with this method.

  1. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions and representations of hydrological behavior. This trend, however, has been accompanied by increasing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms, which rely on iterative evolution, show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain the parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
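
    Whichever sampler supplies the parameter sets, the GLUE step itself reduces to behavioral filtering and likelihood weighting. A minimal sketch (Python), using the Nash-Sutcliffe efficiency as the informal likelihood, a common GLUE choice assumed here rather than taken from the abstract:

        import numpy as np

        def glue_bounds(sim, obs, threshold=0.3, q=(0.05, 0.95)):
            """sim: (n_sets, n_times) array, one simulated series per parameter set.
            Assumes at least one parameter set exceeds the behavioral threshold."""
            # Nash-Sutcliffe efficiency per parameter set as informal likelihood.
            nse = 1.0 - np.sum((sim - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
            keep = nse > threshold              # behavioral parameter sets only
            w = nse[keep] / nse[keep].sum()     # normalized likelihood weights

            def wquantile(x, p):                # weighted quantile at one time step
                order = np.argsort(x)
                return x[order][np.searchsorted(np.cumsum(w[order]), p)]

            cols = sim[keep].T
            return (np.array([wquantile(c, q[0]) for c in cols]),
                    np.array([wquantile(c, q[1]) for c in cols]))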

  2. Binary cuckoo search based optimal PMU placement scheme for ...

    African Journals Online (AJOL)


  3. Hydraulic design and optimization of a modular pump-turbine runner

    International Nuclear Information System (INIS)

    Schleicher, W.C.; Oztekin, A.

    2015-01-01

    Highlights: • A modular pumped-storage scheme using elevated water storage towers is investigated. • The pumped-storage scheme also aids in the wastewater treatment process. • A preliminary hydraulic pump-turbine runner design is created based on existing literature. • The preliminary design is optimized using a response surface optimization methodology. • The performance and flow fields of the preliminary and optimized designs are compared. - Abstract: A novel modular pumped-storage scheme is investigated that uses elevated water storage towers and cement pools as the upper and lower reservoirs. The scheme serves a second purpose as part of the wastewater treatment process, providing multiple benefits besides energy storage. A small pumped-storage scheme has been shown to be a competitive energy storage solution for micro renewable energy grids; however, pumped-storage schemes have not previously been implemented on scales smaller than megawatts. Off-the-shelf runner designs are not available for modular pumped-storage schemes, so a custom runner design is sought. A preliminary hydraulic design for a pump-turbine runner is examined and optimized for increased pumping hydraulic efficiency using a response surface optimization methodology. The hydraulic pumping efficiency improved by 1.06% at the best efficiency point, while the turbine hydraulic efficiency decreased by 0.70% at the turbine best efficiency point. The round-trip efficiency of the system was estimated to be about 78%, which is comparable to larger pumped-storage schemes currently in operation

  4. Simultaneous beam sampling and aperture shape optimization for SPORT.

    Science.gov (United States)

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization; it also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm then continues to improve the selected stations locally and globally by a pattern search algorithm that explores the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case.
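
    Of the three components, pattern search is the only one that needs no gradient information. A generic compass-search sketch (Python) conveys the idea of polling coordinate directions and shrinking the mesh on failure; this is a textbook version, not the authors' SPORT implementation:

        import numpy as np

        def compass_search(f, x, step=1.0, tol=1e-3):
            """Minimize f by polling +/- each coordinate; shrink the mesh on failure."""
            fx = f(x)
            while step > tol:
                for d in np.vstack((np.eye(len(x)), -np.eye(len(x)))):
                    trial = x + step * d
                    ft = f(trial)
                    if ft < fx:          # improving poll point: accept and re-poll
                        x, fx = trial, ft
                        break
                else:                    # no direction improved: refine the mesh
                    step *= 0.5
            return x, fx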

  5. Simultaneous beam sampling and aperture shape optimization for SPORT

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Ye, Yinyu [Department of Management Science and Engineering, Stanford University, Stanford, California 94305 (United States)

    2015-02-15

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization; it also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm then continues to improve the selected stations locally and globally by a pattern search algorithm that explores the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case.

  6. Simultaneous beam sampling and aperture shape optimization for SPORT

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu

    2015-01-01

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization; it also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm then continues to improve the selected stations locally and globally by a pattern search algorithm that explores the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case.

  7. Optimally cloned binary coherent states

    DEFF Research Database (Denmark)

    Mueller, C. R.; Leuchs, G.; Marquardt, Ch

    2017-01-01

    their quantum-optimal clones. We analyze the Wigner function and the cumulants of the clones, and we conclude that optimal cloning of binary coherent states requires a nonlinearity above second order. We propose several practical and near-optimal cloning schemes and compare their cloning fidelity to the optimal...

  8. Optimization of the fractionated irradiation scheme considering physical doses to tumor and organ at risk based on dose–volume histograms

    Energy Technology Data Exchange (ETDEWEB)

    Sugano, Yasutaka [Graduate School of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo, Hokkaido 060-0812 (Japan); Mizuta, Masahiro [Laboratory of Advanced Data Science, Information Initiative Center, Hokkaido University, Kita-11, Nishi-5, Kita-ku, Sapporo, Hokkaido 060-0811 (Japan); Takao, Seishin; Shirato, Hiroki; Sutherland, Kenneth L. [Department of Radiation Medicine, Graduate School of Medicine, Hokkaido University, Kita-15, Nishi-5, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Date, Hiroyuki, E-mail: date@hs.hokudai.ac.jp [Faculty of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo, Hokkaido 060-0812 (Japan)

    2015-11-15

    Purpose: Radiotherapy of solid tumors has been performed with various fractionation regimens such as multi- and hypofractionation. However, the ability to optimize the fractionation regimen considering the physical dose distribution remains insufficient. This study aims to optimize the fractionation regimen: the authors propose a graphical method for selecting the optimal number of fractions (n) and dose per fraction (d) based on dose–volume histograms for the tumor and the normal tissues of organs around the tumor. Methods: Modified linear-quadratic models were employed to estimate the radiation effects on the tumor and an organ at risk (OAR), in which the repopulation of the tumor cells and the linearity of the dose-response curve in the high-dose range of the surviving fraction were considered. The minimization problem for the damage effect on the OAR was solved by a graphical method under the constraint that the radiation effect on the tumor is fixed, with the damage effect on the OAR estimated from the dose–volume histogram. Results: It was found that optimization of the fractionation scheme incorporating the dose–volume histogram is possible by employing appropriate cell-survival models. The graphical method, considering the repopulation of tumor cells and a rectilinear response in the high-dose range, enables one to derive the optimal number of fractions and dose per fraction. For example, in the treatment of prostate cancer, the optimal fractionation was suggested to lie in the range of 8–32 fractions with a daily dose of 2.2–6.3 Gy. Conclusions: It is possible to optimize the number of fractions and dose per fraction based on the physical dose distribution (i.e., the dose–volume histogram) by the graphical method considering the effects on the tumor and the OARs around the tumor. This method may provide a new guideline for optimizing the fractionation regimen for physics-guided fractionation.
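
    The arithmetic underneath is the linear-quadratic biologically effective dose, BED = nd(1 + d/(alpha/beta)). The sketch below (Python) fixes the tumor BED, solves the resulting quadratic for d at each candidate n, and scores the OAR. The alpha/beta values and the single OAR dose fraction are illustrative assumptions, and tumor-cell repopulation, the ingredient that gives the paper a finite optimal n, is deliberately omitted:

        import numpy as np

        def best_schedule(bed_tumor, ab_tumor=10.0, ab_oar=3.0, oar_frac=0.5, n_max=40):
            """For each n, solve n*d*(1 + d/ab_tumor) = bed_tumor for d; score the OAR."""
            results = []
            for n in range(1, n_max + 1):
                # (n/ab_tumor)*d^2 + n*d - bed_tumor = 0, positive root:
                a, b, c = n / ab_tumor, float(n), -bed_tumor
                d = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
                d_oar = oar_frac * d  # OAR receives a fixed fraction of the dose (assumed)
                results.append((n, d, n * d_oar * (1 + d_oar / ab_oar)))
            return min(results, key=lambda r: r[2])  # (n, d, OAR BED) with least damage

    In this stripped-down setting more fractions almost always spare the OAR further, so it is the repopulation term that bounds n and produces a finite window like the 8–32 fractions reported above.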

  9. Practical splitting methods for the adaptive integration of nonlinear evolution equations. Part I: Construction of optimized schemes and pairs of schemes

    KAUST Repository

    Auzinger, Winfried; Hofstätter, Harald; Ketcheson, David I.; Koch, Othmar

    2016-01-01

    We present a number of new contributions to the topic of constructing efficient higher-order splitting methods for the numerical integration of evolution equations. Particular schemes are constructed via setup and solution of polynomial systems for the splitting coefficients. To this end we use and modify a recent approach for generating these systems for a large class of splittings. In particular, various types of pairs of schemes intended for use in adaptive integrators are constructed.
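
    As a concrete instance of the construction, a splitting step composes the flows of the two sub-problems with coefficient pairs that satisfy the polynomial order conditions. A minimal sketch (Python); the classic Strang coefficients serve as the example pair and are not one of the new schemes of the paper:

        def splitting_step(phi_A, phi_B, u, dt, a, b):
            """One composition step: apply phi_A(a_j*dt), then phi_B(b_j*dt), for all j.
            phi_A(u, h) and phi_B(u, h) advance the two sub-flows by time h;
            a zero coefficient is assumed to leave the state unchanged."""
            for aj, bj in zip(a, b):
                u = phi_A(u, aj * dt)
                u = phi_B(u, bj * dt)
            return u

        # Strang splitting (order 2): a = (0.5, 0.5), b = (1.0, 0.0).
        # u_next = splitting_step(phi_A, phi_B, u, dt, (0.5, 0.5), (1.0, 0.0))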

  10. Practical splitting methods for the adaptive integration of nonlinear evolution equations. Part I: Construction of optimized schemes and pairs of schemes

    KAUST Repository

    Auzinger, Winfried

    2016-07-28

    We present a number of new contributions to the topic of constructing efficient higher-order splitting methods for the numerical integration of evolution equations. Particular schemes are constructed via setup and solution of polynomial systems for the splitting coefficients. To this end we use and modify a recent approach for generating these systems for a large class of splittings. In particular, various types of pairs of schemes intended for use in adaptive integrators are constructed.

  11. Optimal calculational schemes for solving multigroup photon transport problem

    International Nuclear Information System (INIS)

    Dubinin, A.A.; Kurachenko, Yu.A.

    1987-01-01

    A scheme for a complex algorithm for solving the multigroup radiation transport equation is suggested. The algorithm is based on the method of successive collisions, the method of forward scattering, and the spherical harmonics method, and is realized in the FORAP program (FORTRAN, BESM-6 computer). As an example, the results of calculating reactor photon transport in water are presented. The algorithm, suitably modified, may also be used for solving neutron transport problems

  12. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    Energy Technology Data Exchange (ETDEWEB)

    Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)

    2016-01-15

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges: the sampled ELFs are usually very heterogeneous, can originate from various sources so that “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on such data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log–log scaling transform of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations; local extrema can occur only at grid points where they are given by the data, not between two adjacent grid points. The proposed technique gives the most accurate results, and its computational time is short. This simple method is thus feasible for addressing practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
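
    The recipe, transform to log-log axes, interpolate with a local monotone spline, and transform back, can be sketched compactly. SciPy does not expose a Steffen spline, so the monotonicity-preserving PCHIP interpolant stands in for it below (Python); both are local, C1 and oscillation-free, but they are not identical methods:

        import numpy as np
        from scipy.interpolate import PchipInterpolator

        def elf_interpolator(energy, elf):
            """ELF interpolator built on log-log axes.
            Assumes strictly increasing energies and strictly positive ELF samples."""
            p = PchipInterpolator(np.log(energy), np.log(elf))
            return lambda e: np.exp(p(np.log(e)))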

  13. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    International Nuclear Information System (INIS)

    Maglevanny, I.I.; Smolar, V.A.

    2016-01-01

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges: the sampled ELFs are usually very heterogeneous, can originate from various sources so that “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on such data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log–log scaling transform of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations; local extrema can occur only at grid points where they are given by the data, not between two adjacent grid points. The proposed technique gives the most accurate results, and its computational time is short. This simple method is thus feasible for addressing practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.

  14. A Suboptimal Power-Saving Transmission Scheme in Multiple Component Carrier Networks

    Science.gov (United States)

    Chung, Yao-Liang; Tsai, Zsehong

    Power consumption due to transmissions in base stations (BSs) has been a major contributor to communication-related CO2 emissions. A power optimization model is developed in this study with respect to radio resource allocation and activation in a multiple Component Carrier (CC) environment. We formulate and solve the power-minimization problem of the BS transceivers for multiple-CC networks with carrier aggregation, while maintaining the overall system and respective users' utilities above minimum levels. The optimized power consumption based on this model can be viewed as a lower bound for that of other algorithms employed in practice. A suboptimal scheme with low computational complexity is proposed. Numerical results show that the power consumption of our scheme is much lower than that of the conventional approach in which all CCs are always active, when both schemes maintain the same required utilities.

  15. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Full Text Available Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which also depends on the robot's position in Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process, modeled in terms of its components: product, process and resource; and by automatically configuring a sample-based motion problem and the transition-based rapidly-exploring random tree algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, demonstrating its functionality and outlining future potential for optimal motion generation in robotic machining processes.

  16. The Performance-based Funding Scheme of Universities

    Directory of Open Access Journals (Sweden)

    Juha KETTUNEN

    2016-05-01

    Full Text Available The purpose of this study is to analyse the effectiveness of the performance-based funding scheme of the Finnish universities that was adopted at the beginning of 2013. The political decision-makers expect that the funding scheme will create incentives for the universities to improve performance, but these funding schemes have largely failed in many other countries, primarily because public funding is only a small share of the total funding of universities. This study is interesting because Finnish universities have no tuition fees, unlike in many other countries, and the state allocates funding based on the objectives achieved. The empirical evidence of the graduation rates indicates that graduation rates increased when a new scheme was adopted, especially among male students, who have more room for improvement than female students. The new performance-based funding scheme allocates the funding according to the output-based indicators and limits the scope of strategic planning and the autonomy of the university. The performance-based funding scheme is transformed to the strategy map of the balanced scorecard. The new funding scheme steers universities in many respects but leaves the research and teaching skills to the discretion of the universities. The new scheme has also diminished the importance of the performance agreements between the university and the Ministry. The scheme increases the incentives for universities to improve the processes and structures in order to attain as much public funding as possible. It is optimal for the central administration of the university to allocate resources to faculties and other organisational units following the criteria of the performance-based funding scheme. The new funding scheme has made the universities compete with each other, because the total funding to the universities is allocated to each university according to the funding scheme. There is a tendency that the funding schemes are occasionally

  17. Feasibility of Stochastic Voltage/VAr Optimization Considering Renewable Energy Resources for Smart Grid

    Science.gov (United States)

    Momoh, James A.; Salkuti, Surender Reddy

    2016-06-01

    This paper proposes a stochastic optimization technique for solving the Voltage/VAr control problem including load demand and Renewable Energy Resource (RER) variation. RERs introduce stochastic behavior into the system inputs. Voltage/VAr control is a prime tool for handling power system complexity and reliability, and hence a fundamental requirement for all utility companies. A robust and efficient Voltage/VAr optimization technique is needed to meet the peak demand and to reduce system losses. Voltages beyond the limits may damage costly substation devices as well as equipment at the consumer end. RERs, especially, introduce more disturbances, and some RERs are not even capable of meeting the VAr demand. Therefore, there is a strong need for Voltage/VAr control in an RER environment. This paper aims at the development of an optimal scheme for Voltage/VAr control involving RERs. Latin Hypercube Sampling (LHS) is used to cover the full range of variables while maximally satisfying the marginal distributions. A backward scenario reduction technique is used to reduce the number of scenarios effectively while maximally retaining the fitting accuracy of the samples. The developed optimization scheme is tested on the IEEE 24-bus Reliability Test System (RTS), considering load demand and RER variation.
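
    A minimal sketch of the LHS step (Python/NumPy): each dimension is cut into n equal-probability strata, one point is drawn per stratum, and the strata are shuffled independently across dimensions. The two-variable load/RER bounds in the usage line are illustrative assumptions:

        import numpy as np

        def latin_hypercube(n, bounds, seed=0):
            """n samples over the box bounds = [(lo, hi), ...], one per stratum and dim."""
            rng = np.random.default_rng(seed)
            bounds = np.asarray(bounds, dtype=float)
            dim = len(bounds)
            # Shuffle stratum indices per dimension, jitter within each stratum,
            # then map the unit hypercube onto the requested box.
            strata = rng.permuted(np.tile(np.arange(n), (dim, 1)), axis=1).T
            u = (strata + rng.random((n, dim))) / n
            return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

        # e.g. 200 scenarios of (load factor, RER availability), bounds assumed:
        scenarios = latin_hypercube(200, [(0.8, 1.2), (0.0, 1.0)])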

  18. Design of Infusion Schemes for Neuroreceptor Imaging: Application to [11C]Flumazenil-PET Steady-State Study

    Directory of Open Access Journals (Sweden)

    Ling Feng

    2016-01-01

    Full Text Available This study aims at developing a simulation system that predicts the optimal study design for rapidly attaining tracer steady-state conditions in brain and blood. Tracer kinetics was determined from bolus studies and used to construct the system. Subsequently, the system was used to design inputs for bolus infusion (BI) or programmed infusion (PI) experiments. Steady-state quantitative measurements can be made with one short scan and venous blood samples. The GABAA receptor ligand [11C]flumazenil (FMZ) was chosen for this purpose, as it lacks a suitable reference region. Methods. Five bolus [11C]FMZ-PET scans were conducted, based on which population-based PI and BI schemes were designed and tested in five additional healthy subjects. The design of the PI was assisted by an offline feedback controller. Results. The system could reproduce the measurements in blood and brain. With PI, [11C]FMZ steady state was attained within 40 min, which was 8 min earlier than with the optimal BI (B/I ratio = 55 min). Conclusions. The system can design both BI and PI schemes to attain steady state rapidly. For example, subjects can be [11C]FMZ-PET scanned after 40 min of tracer infusion for 40 min with venous sampling and a straightforward quantification. This simulation toolbox is available for other PET tracers.

  19. Additive operator-difference schemes: splitting schemes

    CERN Document Server

    Vabishchevich, Petr N

    2013-01-01

    Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also, regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy...

  20. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    Science.gov (United States)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the `dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  1. Security problem on arbitrated quantum signature schemes

    International Nuclear Information System (INIS)

    Choi, Jeong Woon; Chang, Ku-Young; Hong, Dowon

    2011-01-01

    Many arbitrated quantum signature schemes implemented with the help of a trusted third party have been developed up to now. In order to guarantee unconditional security, most of them take advantage of the optimal quantum one-time encryption based on Pauli operators. However, in this paper we point out that the previous schemes provide security only against a total break attack and show in fact that there exists an existential forgery attack that can validly modify the transmitted pair of message and signature. In addition, we also provide a simple method to recover security against the proposed attack.

  2. Security problem on arbitrated quantum signature schemes

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jeong Woon [Emerging Technology R and D Center, SK Telecom, Kyunggi 463-784 (Korea, Republic of); Chang, Ku-Young; Hong, Dowon [Cryptography Research Team, Electronics and Telecommunications Research Institute, Daejeon 305-700 (Korea, Republic of)

    2011-12-15

    Many arbitrated quantum signature schemes implemented with the help of a trusted third party have been developed up to now. In order to guarantee unconditional security, most of them take advantage of the optimal quantum one-time encryption based on Pauli operators. However, in this paper we point out that the previous schemes provide security only against a total break attack and show in fact that there exists an existential forgery attack that can validly modify the transmitted pair of message and signature. In addition, we also provide a simple method to recover security against the proposed attack.

  3. Systematic ultrasound-guided saturation and template biopsy of the prostate: indications and advantages of extended sampling.

    Science.gov (United States)

    Isbarn, Hendrik; Briganti, Alberto; De Visschere, Pieter J L; Fütterer, Jurgen J; Ghadjar, Pirus; Giannarini, Gianluca; Ost, Piet; Ploussard, Guillaume; Sooriakumaran, Prasanna; Surcel, Christian I; van Oort, Inge M; Yossepowitch, Ofer; van den Bergh, Roderick C N

    2015-04-01

    Prostate biopsy (PB) is the gold standard for the diagnosis of prostate cancer (PCa). However, the optimal number of biopsy cores remains debatable. We sought to compare contemporary standard (10-12 cores) vs. saturation (≥18 cores) schemes on initial as well as repeat PB. A non-systematic review of the literature was performed from 2000 through 2013. Studies of highest evidence (randomized controlled trials, prospective non-randomized studies, and retrospective reports of high quality) comparing standard vs. saturation schemes on initial and repeat PB were evaluated. Outcome measures were overall PCa detection rate, detection rate of insignificant PCa, and procedure-associated morbidity. On initial PB, there is growing evidence that a saturation scheme is associated with a higher PCa detection rate compared to a standard one in men with lower PSA levels, larger prostates (>40 cc), or lower PSA density values. Transperineal template saturation sampling is associated with a high rate of acute urinary retention, whereas other severe adverse events, such as sepsis, appear not to occur more frequently with saturation schemes. Current evidence suggests that saturation schemes are associated with a higher PCa detection rate compared to standard ones on initial PB in men with lower PSA levels or larger prostates, and on repeat PB. Since most data are derived from retrospective studies, other endpoints such as detection rate of insignificant disease - especially on repeat PB - show broad variations throughout the literature and must, thus, be interpreted with caution. Future prospective controlled trials should be conducted to compare extended templates with newer techniques, such as image-guided sampling, in order to optimize the PCa diagnostic strategy.

  4. A Fault-tolerable Control Scheme for an Open-frame Underwater Vehicle

    Directory of Open Access Journals (Sweden)

    Huang Hai

    2014-05-01

    Full Text Available Open-frame is one of the major structural types for Remotely Operated Vehicles (ROVs) because it is easy to place sensors and operational equipment onboard. Firstly, this paper designs a Petri-based recurrent fuzzy neural network (PRFNN) to improve robustness in response to the nonlinear characteristics and strong disturbances of an open-frame underwater vehicle. A threshold has been set in the third layer to reduce the amount of calculation and regulate the training process. Convergence of the whole network is guaranteed with the selection of learning rate parameters. Secondly, a fault tolerance control (FTC) scheme is established with the optimal allocation of thrust, in which infinity-norm optimization has been combined with 2-norm optimization to construct a bi-criteria primal-dual neural network FTC scheme. In the experiments and simulations, PRFNN outperformed fuzzy neural networks in motion control, while bi-criteria optimization outperformed 2-norm optimization in FTC, which demonstrates that the FTC controller can improve computational efficiency, reduce control errors, and implement fault-tolerant thrust allocation.

  5. Optimization strategies for discrete multi-material stiffness optimization

    DEFF Research Database (Denmark)

    Hvejsel, Christian Frier; Lund, Erik; Stolpe, Mathias

    2011-01-01

    Designs of composite laminated lay-ups are formulated as discrete multi-material selection problems. The design problem can be modeled as a non-convex mixed-integer optimization problem. Such problems are in general only solvable to global optimality for small to moderate sized problems. To attack...... which numerically confirm the sought properties of the new scheme in terms of convergence to a discrete solution.

  6. Sliding Mode Extremum Seeking Control Scheme Based on PSO for Maximum Power Point Tracking in Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Her-Terng Yau

    2013-01-01

    Full Text Available An extremum seeking control (ESC) scheme is proposed for maximum power point tracking (MPPT) in photovoltaic power generation systems. The robustness of the proposed scheme toward irradiance changes is enhanced by implementing the ESC scheme using a sliding mode control (SMC) law. In the proposed approach, the chattering phenomenon caused by high frequency switching is suppressed by means of a sliding layer concept. Moreover, in implementing the proposed controller, the optimal value of the gain constant is determined using a particle swarm optimization (PSO) algorithm. The experimental and simulation results show that the proposed PSO-based sliding mode ESC (SMESC) control scheme yields a better transient response, steady-state stability, and robustness than traditional MPPT schemes based on gradient detection methods.

  7. Toward a General Theory of Commitment, Renegotiation and Contract Incompleteness : (II) Commitment Problem and Optimal Incentive Schemes in Agency with Bilateral Moral Hazard

    OpenAIRE

    Suzuki, Yutaka

    1998-01-01

    This paper investigates the characteristics of the optimal incentive contracts when the principal is also a productive agent. In this bilateral moral hazard framework, two requirements should be satisfied in designing an incentive scheme: one is the agent's incentive provision and the other is the principal's incentive provision. Because of the trade-off between these two incentive provisions, only the second best is obtainable if the incentive contract is based only on the total o...

  8. How update schemes influence crowd simulations

    International Nuclear Information System (INIS)

    Seitz, Michael J; Köster, Gerta

    2014-01-01

    Time discretization is a key modeling aspect of dynamic computer simulations. In current pedestrian motion models based on discrete events, e.g. cellular automata and the Optimal Steps Model, fixed-order sequential updates and shuffle updates are prevalent. We propose to use event-driven updates that process events in the order they occur, and thus better match natural movement. In addition, we present a parallel update with collision detection and resolution for situations where computational speed is crucial. Two simulation studies serve to demonstrate the practical impact of the choice of update scheme. Not only do density-speed relations differ, but there is a statistically significant effect on evacuation times. Fixed-order sequential and random shuffle updates with a short update period come close to event-driven updates. The parallel update scheme overestimates evacuation times. All schemes can be employed for arbitrary simulation models with discrete events, such as car traffic or animal behavior. (paper)

  9. Performance of laboratories analysing welding fume on filter samples: results from the WASP proficiency testing scheme.

    Science.gov (United States)

    Stacey, Peter; Butler, Owen

    2008-06-01

    This paper emphasizes the need for occupational hygiene professionals to require evidence of the quality of welding fume data from analytical laboratories. The measurement of metals in welding fume using atomic spectrometric techniques is a complex analysis often requiring specialist digestion procedures. The results from a trial programme testing the proficiency of laboratories in the Workplace Analysis Scheme for Proficiency (WASP) to measure potentially harmful metals in several different types of welding fume showed that most laboratories underestimated the mass of analyte on the filters. The average recovery was 70-80% of the target value, and more than 20% of reported recoveries for some of the more difficult welding fume matrices fell well below the target value on the welding fume trial filter samples. Consistent rather than erratic error predominated, suggesting that the main analytical factor contributing to the differences between the target values and results was the effectiveness of the sample preparation procedures used by participating laboratories. It is concluded that, with practice and regular participation in WASP, performance can improve over time.

  10. Sequential sampling: a novel method in farm animal welfare assessment.

    Science.gov (United States)

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall
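
    The flavor of the 'basic' two-stage scheme can be sketched as follows (Python). The interface score_cows(k), returning the number of lame cows in a fresh sample of k animals, and the normal-approximation stopping margin are both illustrative assumptions; the paper's actual stopping rule comes from its diagnostic-testing framework:

        def classify_farm(score_cows, n_full, threshold):
            """Two-stage sequential pass/fail decision on lameness prevalence."""
            half = n_full // 2
            lame = score_cows(half)                     # stage 1: half the sample
            p = lame / half
            margin = 2.0 * (p * (1 - p) / half) ** 0.5  # rough 95% margin (assumed rule)
            if abs(p - threshold) > margin:             # interim result is decisive: stop
                return "fail" if p > threshold else "pass"
            lame += score_cows(n_full - half)           # stage 2: sample the rest
            return "fail" if lame / n_full > threshold else "pass"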

  11. On Secure NOMA Systems with Transmit Antenna Selection Schemes

    KAUST Repository

    Lei, Hongjiang; Zhang, Jianming; Park, Kihong; Xu, Peng; Ansari, Imran Shafique; Pan, Gaofeng; Alomair, Basel; Alouini, Mohamed-Slim

    2017-01-01

    This paper investigates the secrecy performance of a two-user downlink non-orthogonal multiple access system. Both single-input single-output and multiple-input single-output systems with different transmit antenna selection (TAS) strategies are considered. Depending on whether the base station has the global channel state information of both the main and wiretap channels, the exact closed-form expressions for the secrecy outage probability (SOP) with suboptimal antenna selection and optimal antenna selection schemes are obtained and compared with the traditional space-time transmission scheme. To obtain further insights, an asymptotic analysis of the SOP in the high average channel power gain regime is presented, and it is found that the secrecy diversity order for all the TAS schemes with fixed power allocation is zero. Furthermore, an effective power allocation scheme is proposed to obtain a nonzero diversity order with all the TAS schemes. Monte-Carlo simulations are performed to verify the proposed analytical results.

  12. On Secure NOMA Systems with Transmit Antenna Selection Schemes

    KAUST Repository

    Lei, Hongjiang

    2017-08-09

    This paper investigates the secrecy performance of a two-user downlink non-orthogonal multiple access system. Both single-input single-output and multiple-input single-output systems with different transmit antenna selection (TAS) strategies are considered. Depending on whether the base station has the global channel state information of both the main and wiretap channels, the exact closed-form expressions for the secrecy outage probability (SOP) with suboptimal antenna selection and optimal antenna selection schemes are obtained and compared with the traditional space-time transmission scheme. To obtain further insights, an asymptotic analysis of the SOP in the high average channel power gain regime is presented, and it is found that the secrecy diversity order for all the TAS schemes with fixed power allocation is zero. Furthermore, an effective power allocation scheme is proposed to obtain a nonzero diversity order with all the TAS schemes. Monte-Carlo simulations are performed to verify the proposed analytical results.

  13. A user-driven treadmill control scheme for simulating overground locomotion.

    Science.gov (United States)

    Kim, Jonghyun; Stanley, Christopher J; Curatalo, Lindsey A; Park, Hyung-Soon

    2012-01-01

    Treadmill-based locomotor training should simulate overground walking as closely as possible for optimal skill transfer. The constant speed of a standard treadmill encourages automaticity rather than engagement and fails to simulate the variable speeds encountered during real-world walking. To address this limitation, this paper proposes a user-driven treadmill velocity control scheme that allows the user to experience natural fluctuations in walking velocity with minimal unwanted inertial force due to acceleration/deceleration of the treadmill belt. A smart estimation limiter in the scheme effectively attenuates the inertial force during velocity changes. The proposed scheme requires measurement of pelvic and swing foot motions, and is developed for a treadmill of typical belt length (1.5 m). The proposed scheme is quantitatively evaluated here with four healthy subjects by comparing it with the most advanced control scheme identified in the literature.

  14. Adaptive protection coordination scheme for distribution network with distributed generation using ABC

    Directory of Open Access Journals (Sweden)

    A.M. Ibrahim

    2016-09-01

    Full Text Available This paper presents an adaptive protection coordination scheme for optimal coordination of DOCRs in interconnected power networks under the impact of distributed generation (DG); the coordination technique used is the Artificial Bee Colony (ABC) algorithm. The scheme adapts to system changes: new relay settings are obtained as the generation level or system topology changes. The developed adaptive scheme is applied to the IEEE 30-bus test system for both single- and multi-DG cases, and the results are shown and discussed.

  15. Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.

    Science.gov (United States)

    Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem

    2018-01-01

    In this paper, a hybrid heuristic scheme based on two different basis functions, i.e., log-sigmoid and Bernstein polynomial, with unknown parameters is used for solving nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and to obtain the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared with, and found in sharp agreement with, both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique, which demonstrates the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is also conducted to investigate the stability and reliability of the presented scheme.
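
    The mechanics of the transform, building a trial solution from a parameterized basis, turning the ODE residual into a fitness function, and handing it to a global optimizer, can be sketched on a toy problem (Python/SciPy). The test equation y' = -y^2 with y(0) = 1 (exact solution 1/(1+x)) and the use of differential evolution in place of the paper's GA-IPA hybrid are both assumptions made for brevity:

        import numpy as np
        from scipy.optimize import differential_evolution

        M = 3  # number of log-sigmoid basis terms

        def trial(x, p):
            """Trial solution y(x) = 1 + x * sum_i w_i*sigmoid(a_i*x + b_i);
            the form enforces the condition y(0) = 1 by construction."""
            w, a, b = p[:M], p[M:2 * M], p[2 * M:]
            s = 1.0 / (1.0 + np.exp(-(np.outer(x, a) + b)))
            return 1.0 + x * (s @ w)

        def fitness(p, x=np.linspace(0.0, 1.0, 25), h=1e-5):
            """Mean squared residual of y' = -y^2 at the collocation points."""
            dy = (trial(x + h, p) - trial(x - h, p)) / (2.0 * h)
            return np.mean((dy + trial(x, p) ** 2) ** 2)

        best = differential_evolution(fitness, bounds=[(-5.0, 5.0)] * (3 * M), seed=1)
        # trial(x, best.x) now approximates the exact solution 1/(1 + x).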

  16. Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.

    Directory of Open Access Journals (Sweden)

    Azmat Ullah

    Full Text Available In this paper, a hybrid heuristic scheme based on two different basis functions, i.e. log sigmoid and Bernstein polynomial, with unknown parameters is used for solving nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and to achieve the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared with both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique and are found to be in close agreement, which demonstrates the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is also conducted to investigate the stability and reliability of the presented scheme.

  17. Matching soil salinization and cropping systems in communally managed irrigation schemes

    Science.gov (United States)

    Malota, Mphatso; Mchenga, Joshua

    2018-03-01

    Occurrence of soil salinization in irrigation schemes can be a good indicator for introducing highly salt-tolerant crops. This study assessed the level of soil salinization in the communally managed 233 ha Nkhate irrigation scheme in the Lower Shire Valley region of Malawi. Soil samples were collected within the 0-0.4 m soil depth from eight randomly selected irrigation blocks. Irrigation water samples were also collected from five randomly selected locations along the Nkhate River, which supplies irrigation water to the scheme. Salinity of both the soil and the irrigation water samples was determined using an electrical conductivity (EC) meter. Analysis of the results indicated that the irrigation water was suitable for irrigation purposes even for crops with very low salinity tolerance. However, root-zone soil salinity profiles showed that leaching of salts was not adequate and that the leaching requirement for the scheme needs to be re-examined and consistently adhered to during irrigation operation. The study concluded that the cropping system at the scheme needs to be adjusted to match the prevailing soil and irrigation water salinity levels.

  18. Optimized IMAC-IMAC protocol for phosphopeptide recovery from complex biological samples

    DEFF Research Database (Denmark)

    Ye, Juanying; Zhang, Xumin; Young, Clifford

    2010-01-01

    using Fe(III)-NTA IMAC resin and it proved to be highly selective in the phosphopeptide enrichment of a highly diluted standard sample (1:1000) prior to MALDI MS analysis. We also observed that a higher iron purity led to an increased IMAC enrichment efficiency. The optimized method was then adapted...... to phosphoproteome analyses of cell lysates of high protein complexity. From either 20 microg of mouse sample or 50 microg of Drosophila melanogaster sample, more than 1000 phosphorylation sites were identified in each study using IMAC-IMAC and LC-MS/MS. We demonstrate efficient separation of multiply phosphorylated...... characterization of phosphoproteins in functional phosphoproteomics research projects....

  19. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to that achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  20. Sparse Learning with Stochastic Composite Optimization.

    Science.gov (United States)

    Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei

    2017-06-01

    In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning, which aims to learn a sparse solution from a composite function. Most recent SCO algorithms have already reached the optimal expected convergence rate O(1/(λT)), but they often fail to deliver sparse solutions at the end, either due to the limited sparsity regularization during stochastic optimization (SO) or due to limitations in online-to-batch conversion. Even when the objective function is strongly convex, their high-probability bounds can only attain O(√(log(1/δ)/T)), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme by adding a novel, powerful sparse online-to-batch conversion to general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to prove its effectiveness. Both the theoretical analysis and the experimental results show that our methods outperform existing methods in sparse learning ability while improving the high-probability bound to approximately O(log(log(T)/δ)/(λT)).
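
    For context, a single composite (proximal) update of the kind such SCO algorithms build on is sketched below for l1-regularised least squares; the step-size schedule and problem sizes are illustrative, and the paper's actual contribution, the two-phase scheme with a sparse online-to-batch conversion, is not reproduced here.

```python
# Generic stochastic composite-optimization step for sparse learning:
# a stochastic gradient step on the smooth loss, followed by the proximal
# operator of the l1 term (soft-thresholding), which produces exact zeros.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n, d, lam = 1000, 50, 0.5
A = rng.standard_normal((n, d))
w_true = np.zeros(d); w_true[:5] = 1.0                 # sparse ground truth
b = A @ w_true + 0.01 * rng.standard_normal(n)

w = np.zeros(d)
for t in range(1, 20001):
    i = rng.integers(n)                                # draw one data point
    grad = (A[i] @ w - b[i]) * A[i]                    # stochastic gradient (smooth part)
    eta = 0.02 / np.sqrt(t)                            # illustrative step size
    w = soft_threshold(w - eta * grad, eta * lam)      # composite (prox) step
print("nonzero coefficients:", np.count_nonzero(w))
```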

  1. An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad; Valstar, Johan R.; Hoteit, Ibrahim

    2014-01-01

    Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% relative to the standard EnKF scheme. © 2014 Elsevier Ltd.
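
    A toy sketch of the hybrid-covariance idea is given below (not the authors' code): the analysis blends the flow-dependent ensemble covariance with a static background covariance, which is what tempers the under-sampling of small ensembles; all matrices and weights here are illustrative.

```python
# Hybrid EnKF-OI covariance blending, schematic only.
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 20, 5, 10                        # state dim, obs dim, ensemble size
H = np.zeros((m, n))
H[np.arange(m), np.arange(0, n, n // m)] = 1.0   # observe every 4th cell (assumed)
R = 0.1 * np.eye(m)                        # observation-error covariance (assumed)
B = np.eye(n)                              # static OI background covariance (assumed)
alpha = 0.3                                # hybrid blending weight (tuning parameter)

X = rng.normal(size=(n, N))                # forecast ensemble (toy)
y = rng.normal(size=m)                     # observations (toy)

A = X - X.mean(axis=1, keepdims=True)
P_ens = A @ A.T / (N - 1)                  # rank-deficient when N << n
P_hyb = (1 - alpha) * P_ens + alpha * B    # hybrid EnKF-OI covariance

K = P_hyb @ H.T @ np.linalg.inv(H @ P_hyb @ H.T + R)
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T  # perturbed obs
Xa = X + K @ (Y - H @ X)                   # stochastic EnKF-style update
print("mean analysis spread:", Xa.std(axis=1).mean())
```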

  2. An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad

    2014-09-01

    Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% relative to the standard EnKF scheme. © 2014 Elsevier Ltd.

  3. Programming scheme based optimization of hybrid 4T-2R OxRAM NVSRAM

    Science.gov (United States)

    Majumdar, Swatilekha; Kingra, Sandeep Kaur; Suri, Manan

    2017-09-01

    In this paper, we present a novel single-cycle programming scheme for 4T-2R NVSRAM, exploiting pulse-engineered input signals. OxRAM devices based on a 3 nm thick bi-layer active switching oxide and a 90 nm CMOS technology node were used for all simulations. The cell design is implemented for real-time non-volatility rather than last-bit or power-down non-volatility. A detailed analysis of the proposed single-cycle, parallel RRAM device programming scheme is presented in comparison with the two-cycle sequential RRAM programming used for similar 4T-2R NVSRAM bit-cells. The proposed single-cycle programming scheme coupled with the 4T-2R architecture leads to several benefits, such as the possibility of unconventional transistor sizing, 50% lower latency, a 20% improvement in SNM and ∼20× lower energy requirements, compared with the two-cycle programming approach.

  4. Green frame aggregation scheme for Wi-Fi networks

    KAUST Repository

    Alaslani, Maha S.

    2015-07-01

    Frame aggregation is a major enhancement in the IEEE 802.11 family to boost network performance. Increasing awareness of energy efficiency motivates a rethink of frame aggregation design. In this paper, we propose a novel Green Frame Aggregation (GFA) scheduling scheme that optimizes the aggregate size based on channel quality in order to minimize the consumed energy. GFA selects an optimal sub-frame size that satisfies the loss constraint for real-time applications as well as the energy budget of the ideal channel. This scheme is implemented and evaluated using a testbed deployment. The experimental analysis shows that GFA outperforms the conventional frame aggregation methodology in terms of energy efficiency by about 6x in the presence of severe interference conditions. Moreover, GFA outperforms the static frame sizing method in terms of network goodput while maintaining the same end-to-end latency.

  5. Green-Frag: Energy-Efficient Frame Fragmentation Scheme for Wireless Sensor Networks

    KAUST Repository

    Daghistani, Anas H.

    2013-05-15

    Power management is an active area of research in wireless sensor networks (WSNs). Efficient power management is necessary because WSNs are battery-operated devices that can be deployed in mission-critical applications. From the communications perspective, one main approach to reducing energy is to maximize throughput so that the data can be transmitted in a short amount of time. Frame fragmentation techniques aim to achieve higher throughput by reducing retransmissions. Using experiments on a WSN testbed, we show that frame fragmentation helps to reduce energy consumption. We then study and compare recent frame fragmentation schemes to find the most energy-efficient scheme. Our main contribution is to propose a new frame fragmentation scheme, originating from the chosen scheme, that is optimized to be energy efficient. This new energy-efficient frame fragmentation protocol is called Green-Frag. Green-Frag uses an algorithm that gives sensor nodes the ability to transmit data with optimal transmit power and optimal frame structure based on environmental conditions. Green-Frag takes into consideration the channel conditions, interference patterns and level, as well as the distance between sender and receiver. The thesis discusses various design and implementation considerations for Green-Frag. Also, it shows empirical results of comparing Green-Frag with other frame fragmentation protocols in terms of energy efficiency. Performance results show that Green-Frag is capable of choosing the best transmit power according to the channel conditions. Subsequently, Green-Frag achieves the least energy consumption in all environmental conditions.

  6. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractant for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when

  7. Modelling of Rabies Transmission Dynamics Using Optimal Control Analysis

    Directory of Open Access Journals (Sweden)

    Joshua Kiddy K. Asamoah

    2017-01-01

    Full Text Available We examine an optimal way of eradicating rabies transmission from dogs into the human population, using preexposure prophylaxis (vaccination) and postexposure prophylaxis (treatment) due to public education. We obtain the disease-free equilibrium, the endemic equilibrium, the stability, and the sensitivity analysis of the optimal control model. Using Latin hypercube sampling (LHS), the forward-backward sweep scheme and the fourth-order Runge-Kutta numerical method, we predict that the Global Alliance for Rabies Control's aim of working to eliminate deaths from canine rabies by 2030 is attainable through mass vaccination of susceptible dogs and continuous use of pre- and postexposure prophylaxis in humans.
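
    A hedged sketch of the forward-backward sweep machinery named in the abstract is given below, applied to a toy scalar problem rather than the rabies model: minimize the integral of (x² + u²) subject to x' = -x + u, x(0) = 1, for which Pontryagin's principle gives u* = -λ/2 and the adjoint λ' = -2x + λ with λ(T) = 0.

```python
# Forward-backward sweep with RK4 on a toy optimal-control problem
# (illustrative only; not the paper's rabies model).
import numpy as np

T, N = 1.0, 200
tgrid = np.linspace(0, T, N + 1); h = T / N

def rk4_step(f, y, t, h):
    k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
    k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

u = np.zeros(N + 1)
for sweep in range(50):
    # forward sweep: x' = -x + u, x(0) = 1
    x = np.empty(N + 1); x[0] = 1.0
    for i in range(N):
        x[i+1] = rk4_step(lambda t, y: -y + u[i], x[i], tgrid[i], h)
    # backward sweep: lambda' = -2x + lambda, lambda(T) = 0
    lam = np.empty(N + 1); lam[-1] = 0.0
    for i in range(N, 0, -1):
        lam[i-1] = rk4_step(lambda t, y: -2*x[i] + y, lam[i], tgrid[i], -h)
    u_new = -lam / 2.0                      # optimality condition u* = -lambda/2
    if np.max(np.abs(u_new - u)) < 1e-8:
        break
    u = 0.5*u + 0.5*u_new                   # relaxation for convergence
print("sweeps used:", sweep + 1, " u(0) =", u[0])
```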

  8. SU-E-T-175: Clinical Evaluations of Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Y; Li, Y; Tian, Z; Gu, X; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). It is the objective of this study to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used and the number of particles sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme in four lung cancer IMRT cases. For each case, the original plan dose, the plan dose re-computed by MC, and the dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plan achieved satisfactory PTV dose coverage, after re-computing doses using the MC method, it was found that the PTV D95% was reduced by 4.60%–6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plan, while the critical OAR coverages were maintained at clinically acceptable levels. Regarding computation time, it took on average 144 sec per case using only one GPU card, including both MC-based beamlet dose calculation and treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.

  9. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.

    Science.gov (United States)

    Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with the urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated for extracting DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. The performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and its reproducibility. Centrifugation speeds, water volumes and the use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed strong linearity over a range of concentrations from 10⁶ to 10⁰ leptospires/mL, with low limits of detection. The resulting optimized protocol for quantification of pathogenic Leptospira in environmental waters (river, pond and sewage) consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden.

  10. A Suboptimal Scheme for Multi-User Scheduling in Gaussian Broadcast Channels

    KAUST Repository

    Zafar, Ammar; Alouini, Mohamed-Slim; Shaqfeh, Mohammad

    2014-01-01

    This work proposes a suboptimal multi-user scheduling scheme for Gaussian broadcast channels which improves upon classical single-user selection, while considerably reducing complexity compared to optimal superposition coding with successive interference cancellation. The proposed scheme combines the two users with the maximum weighted instantaneous rate using superposition coding. The instantaneous rate and power allocation are derived in closed form, while the long-term rate of each user is derived in integral form for all channel distributions. Numerical results are then provided to characterize the expected gains of the proposed scheme.
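
    An illustrative sketch of the pairing rule follows: among all user pairs, pick the one maximizing the weighted instantaneous sum rate under superposition coding with successive interference cancellation (SIC). The paper derives the power split in closed form; here a grid search over the split stands in for it, and the gains and weights are made-up values.

```python
# Two-user superposition-coding scheduler, schematic version.
import itertools
import numpy as np

rng = np.random.default_rng(1)
g = rng.exponential(size=6)            # instantaneous channel gains (toy)
w = np.ones(6)                         # scheduler weights (assumed equal)
P = 10.0                               # total transmit power

def pair_rate(i, j, a):
    """Weighted sum rate when the stronger user gets fraction a of P."""
    s, k = (i, j) if g[i] >= g[j] else (j, i)
    r_s = np.log2(1 + a * P * g[s])                      # decoded after SIC
    r_w = np.log2(1 + (1 - a) * P * g[k] / (a * P * g[k] + 1))
    return w[s] * r_s + w[k] * r_w

best = max(((i, j, a) for i, j in itertools.combinations(range(6), 2)
            for a in np.linspace(0.01, 0.99, 99)),
           key=lambda c: pair_rate(*c))
print("scheduled pair %s, power split a = %.2f" % (best[:2], best[2]))
```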

  11. A Suboptimal Scheme for Multi-User Scheduling in Gaussian Broadcast Channels

    KAUST Repository

    Zafar, Ammar

    2014-05-28

    This work proposes a suboptimal multi-user scheduling scheme for Gaussian broadcast channels which improves upon classical single-user selection, while considerably reducing complexity compared to optimal superposition coding with successive interference cancellation. The proposed scheme combines the two users with the maximum weighted instantaneous rate using superposition coding. The instantaneous rate and power allocation are derived in closed form, while the long-term rate of each user is derived in integral form for all channel distributions. Numerical results are then provided to characterize the expected gains of the proposed scheme.

  12. Development of Fault Detection and Diagnosis Schemes for Industrial Refrigeration Systems

    DEFF Research Database (Denmark)

    Thybo, C.; Izadi-Zamanabadi, Roozbeh

    2004-01-01

    The success of a fault detection and diagnosis (FDD) scheme does not depend on developing an advanced detection scheme alone. To enable successful deployment in industrial applications, economically optimal development of FDD schemes is required. This paper reviews and discusses the experience gained...... by employing a combination of various techniques, methods, and algorithms proposed by academia on an industrial application. The main focus is on sharing the "lessons learned" from developing and employing fault-tolerant functionalities for a controlled process in order to meet...... the industrial needs while satisfying economically motivated constraints....

  13. Designing a Profit-Maximizing Critical Peak Pricing Scheme Considering the Payback Phenomenon

    Directory of Open Access Journals (Sweden)

    Sung Chan Park

    2015-10-01

    Full Text Available Critical peak pricing (CPP) is a demand response program that can be used to maximize profits for a load serving entity in a deregulated market environment. Like other such programs, however, CPP is not free from the payback phenomenon: a rise in consumption after a critical event. This payback has a negative effect on profits and thus must be appropriately considered when designing a CPP scheme. However, few studies have examined CPP scheme design considering payback. This study thus characterizes payback using three parameters (duration, amount, and pattern) and examines payback effects on the optimal schedule of critical events and on the optimal peak rate for two specific payback patterns. This analysis is verified through numerical simulations. The results demonstrate the need to properly consider payback parameters when designing a profit-maximizing CPP scheme.

  14. Over-Sampling Codebook-Based Hybrid Minimum Sum-Mean-Square-Error Precoding for Millimeter-Wave 3D-MIMO

    KAUST Repository

    Mao, Jiening

    2018-05-23

    Hybrid precoding design is challenging for millimeter-wave (mmWave) massive MIMO. Most prior hybrid precoding schemes are designed to maximize the sum spectral efficiency (SSE), while seldom investigating the bit-error rate (BER). Therefore, this letter designs an over-sampling codebook (OSC)-based hybrid minimum sum-mean-square-error (min-SMSE) precoding scheme to optimize the BER. Specifically, given the effective baseband channel consisting of the real channel and analog precoding, we first design the digital precoder/combiner based on the min-SMSE criterion to optimize the BER. To further reduce the SMSE between the transmit and receive signals, we propose an OSC-based joint analog precoder/combiner (JAPC) design. Simulation results show that the proposed scheme achieves better performance than its conventional counterparts.

  15. Over-Sampling Codebook-Based Hybrid Minimum Sum-Mean-Square-Error Precoding for Millimeter-Wave 3D-MIMO

    KAUST Repository

    Mao, Jiening; Gao, Zhen; Wu, Yongpeng; Alouini, Mohamed-Slim

    2018-01-01

    Hybrid precoding design is challenging for millimeter-wave (mmWave) massive MIMO. Most prior hybrid precoding schemes are designed to maximize the sum spectral efficiency (SSE), while seldom investigating the bit-error rate (BER). Therefore, this letter designs an over-sampling codebook (OSC)-based hybrid minimum sum-mean-square-error (min-SMSE) precoding scheme to optimize the BER. Specifically, given the effective baseband channel consisting of the real channel and analog precoding, we first design the digital precoder/combiner based on the min-SMSE criterion to optimize the BER. To further reduce the SMSE between the transmit and receive signals, we propose an OSC-based joint analog precoder/combiner (JAPC) design. Simulation results show that the proposed scheme achieves better performance than its conventional counterparts.

  16. Convex Programming and Bootstrap Sensitivity for Optimized Electricity Bill in Healthcare Buildings under a Time-Of-Use Pricing Scheme

    Directory of Open Access Journals (Sweden)

    Rodolfo Gordillo-Orquera

    2018-06-01

    Full Text Available Efficient energy management is strongly dependent on determining adequate power contracts among the ones offered by different electricity suppliers. This topic takes on special relevance in healthcare buildings, where noticeable amounts of energy are required to generate an adequate health environment for patients and staff. In this paper, a convex optimization method is scrutinized to give a straightforward analysis of the optimal power levels to be contracted while minimizing the electricity bill cost in a time-of-use pricing scheme. In addition, a sensitivity analysis is carried out on the constraints in the optimization problems, which are analyzed in terms of both their empirical distribution and their bootstrap-estimated statistical distributions to create a simple-to-use tool for this purpose, the so-called mosaic-distribution. The evaluation of the proposed method was carried out with five-year consumption data for two different kinds of healthcare buildings: a large one, Hospital Universitario de Fuenlabrada, and a primary care center, Centro de Especialidades el Arroyo, both located in Fuenlabrada (Madrid, Spain). The analysis of the resulting optimization shows that the annual savings achieved vary moderately, ranging from −0.22% to +27.39%, depending on the analyzed year profile and the healthcare building type. The analysis introducing the mosaic-distribution to represent the sensitivity score also provides operative information to evaluate the convenience of implementing energy saving measures. All this information is useful for managers to determine the appropriate power levels for next year's contract renewal and to consider whether to implement demand response mechanisms in healthcare buildings.

  17. Importance Sampling Based Decision Trees for Security Assessment and the Corresponding Preventive Control Schemes: the Danish Case Study

    DEFF Research Database (Denmark)

    Liu, Leo; Rather, Zakir Hussain; Chen, Zhe

    2013-01-01

    Decision Trees (DT) based security assessment helps Power System Operators (PSO) by providing them with the most significant system attributes and guiding them in implementing the corresponding emergency control actions to prevent system insecurity and blackouts. DT is obtained offline from time...... and adopts a methodology of importance sampling to maximize the information contained in the database so as to increase the accuracy of DT. Further, this paper also studies the effectiveness of DT by implementing its corresponding preventive control schemes. These approaches are tested on the detailed model...

  18. Time optimization of 90Sr measurements: Sequential measurement of multiple samples during ingrowth of 90Y

    International Nuclear Information System (INIS)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-01-01

    The aim of this paper is to contribute to a more rapid determination of a series of samples containing 90Sr by making the Cherenkov measurement of the daughter nuclide 90Y more time efficient. There are many instances when optimization of the measurement method might be favorable, such as situations requiring rapid results in order to make urgent decisions or, on the other hand, maximizing the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work is focused on the measurement of 90Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time is less than when using the same measurement time for all samples. It was found that by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, when assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm. - Highlights: • An approach roughly a factor of three more efficient than an un-optimized method. • The optimization gives a more efficient use of instrument time. • The efficiency increase ranges from a factor of three to 10, for 10 to 40 samples.
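
    The sketch below illustrates the underlying idea with made-up efficiency, background and volume figures and a Currie-style detection limit: because 90Y keeps growing in while earlier samples are being counted, each subsequent sample needs a shorter counting time to reach the same MDA.

```python
# Sequential 90Y counting-time optimization, schematic version.
import numpy as np
from scipy.optimize import brentq

LAM_Y90 = np.log(2) / (64.0 * 3600)        # 90Y decay constant [1/s]
EFF, RB, VOL = 0.7, 0.8 / 60, 0.5          # counting eff., background [cps], volume [L] (assumed)
MDA_TARGET = 1.0                           # required MDA [Bq/L]

def mda(t_count, t_ingrowth):
    """Currie-style MDA after t_ingrowth seconds of 90Y ingrowth."""
    f = 1.0 - np.exp(-LAM_Y90 * t_ingrowth)       # 90Y ingrowth factor
    ld = 2.71 + 4.65 * np.sqrt(RB * t_count)      # detectable counts (Currie)
    return ld / (EFF * t_count * f * VOL)

t_elapsed, times = 4 * 3600.0, []          # strontium separated 4 h ago (assumed)
for k in range(10):                        # 10 samples counted sequentially
    t_k = brentq(lambda t: mda(t, t_elapsed) - MDA_TARGET, 1.0, 1e7)
    times.append(t_k)
    t_elapsed += t_k                       # ingrowth continues while counting
print("total counting time: %.2f h (fixed-time scheme: %.2f h)"
      % (sum(times) / 3600, 10 * times[0] / 3600))
```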

  19. Electricity Consumption Forecasting Scheme via Improved LSSVM with Maximum Correntropy Criterion

    Directory of Open Access Journals (Sweden)

    Jiandong Duan

    2018-02-01

    Full Text Available In recent years, with the deepening of China's electricity sales-side reform and the gradual opening up of the electricity market, the forecasting of electricity consumption (FoEC) has become an extremely important technique for the electricity market. At present, how to forecast electricity consumption accurately and evaluate the results scientifically are still key research topics. In this paper, we propose a novel prediction scheme based on the least-squares support vector machine (LSSVM) model with a maximum correntropy criterion (MCC) to forecast the electricity consumption (EC). Firstly, the electricity characteristics of various industries are analyzed to determine the factors that mainly affect changes in electricity consumption, such as the gross domestic product (GDP), temperature, and so on. Secondly, given the small sample of available data, the LSSVM model is employed as the prediction model. In order to optimize the parameters of the LSSVM model, we further use the local similarity function MCC as the evaluation criterion. Thirdly, we employ K-fold cross-validation and grid searching methods to improve the learning ability. In the experiments, we have used the EC data of Shaanxi Province in China to evaluate the proposed prediction scheme, and the results show that the proposed scheme outperforms the method based on the traditional LSSVM model.
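
    For reference, the LSSVM regression core that such a scheme builds on reduces to solving one linear system; the sketch below shows that core with illustrative hyper-parameters (the paper's MCC-based evaluation and cross-validated tuning are not reproduced here).

```python
# Least-squares SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))                 # RBF kernel matrix
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0; A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                           # bias b, dual weights alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma=1.0):
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ alpha + b

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (60, 2))            # e.g. scaled GDP and temperature (toy)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(60)
b, alpha = lssvm_fit(X, y)
rmse = np.sqrt(np.mean((lssvm_predict(X, b, alpha, X) - y) ** 2))
print("train RMSE:", rmse)
```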

  20. A reduced feedback proportional fair multiuser scheduling scheme

    KAUST Repository

    Shaqfeh, Mohammad

    2011-12-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed and ordered scheduling mechanism. A slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we propose a novel proportional fair multiuser switched-diversity scheduling scheme and we demonstrate that it can be optimized using a practical and distributed method to obtain the per-user feedback thresholds. We demonstrate by numerical examples that our reduced feedback proportional fair scheduler operates within 0.3 bits/sec/Hz from the achievable rates by the conventional full feedback proportional fair scheduler in Rayleigh fading conditions. © 2011 IEEE.

  1. Advanced Process Control Application and Optimization in Industrial Facilities

    Directory of Open Access Journals (Sweden)

    Howes S.

    2015-01-01

    Full Text Available This paper describes the application of a new method and tool for system identification and PID tuning/advanced process control (APC) optimization using the new 3G (geometric, gradient, gravity) optimization method. It helps to design and implement control schemes directly inside the distributed control system (DCS) or programmable logic controller (PLC). The algorithm also helps to identify process dynamics in closed-loop mode, optimizes controller parameters, and supports the development of adaptive control and model-based control (MBC). The application of the new 3G algorithm for designing and implementing APC schemes is presented. Optimization of primary and advanced control schemes stabilizes the process and allows the plant to run closer to process, equipment and economic constraints. This increases production rates, minimizes operating costs and improves product quality.

  2. Green Frame Aggregation Scheme for IEEE 802.11n Networks

    KAUST Repository

    Alaslani, Maha S.

    2015-01-01

    In this thesis, a novel Green Frame Aggregation (GFA) scheduling scheme is proposed and evaluated. GFA optimizes the aggregate size based on channel quality in order to minimize the consumed energy.

  3. Optimization of multi-channel neutron focusing guides for extreme sample environments

    International Nuclear Information System (INIS)

    Di Julio, D D; Lelièvre-Berna, E; Andersen, K H; Bentley, P M; Courtois, P

    2014-01-01

    In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.

  4. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    Science.gov (United States)

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

    Storage is important for flood mitigation and non-point source pollution control. However, seeking a cost-effective design scheme for storage tanks is very complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and the optimization is fast when based on the preliminary scheme. The optimized scheme is better than the preliminary scheme at reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Multi-Objective Climb Path Optimization for Aircraft/Engine Integration Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Aristeidis Antonakis

    2017-04-01

    Full Text Available In this article, a new multi-objective approach to the aircraft climb path optimization problem, based on the Particle Swarm Optimization (PSO) algorithm, is introduced for use in aircraft–engine integration studies. It combines simulation with a traditional energy approach and incorporates, among others, a proposed path-tracking scheme for guidance in the Altitude–Mach plane. The adoption of a population-based solver serves to simplify case setup, allowing direct interfaces between the optimizer and aircraft/engine performance codes. A two-level optimization scheme is employed and is shown to improve search performance compared to the basic PSO algorithm. The effectiveness of the proposed methodology is demonstrated in a hypothetical engine upgrade scenario for the F-4 aircraft considering the replacement of the aircraft's J79 engine with the EJ200; a clear advantage of the EJ200-equipped configuration is unveiled, resulting, on average, in 15% faster climbs with 20% less fuel.
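
    For reference, a bare-bones particle swarm optimizer of the kind the article builds on is sketched below (the paper's two-level scheme and path-tracking guidance layer are not reproduced); it is shown minimizing the Rosenbrock test function rather than a climb-path objective.

```python
# Basic PSO with inertia and cognitive/social terms, illustrative only.
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                                   # velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()]                               # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

rosen = lambda p: (1 - p[0])**2 + 100*(p[1] - p[0]**2)**2
best, val = pso(rosen, np.array([[-2.0, 2.0], [-2.0, 2.0]]))
print("minimum found at", best, "with f =", val)
```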

  6. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely......, and subsequently reweight the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...

  7. Optimization of the data taking strategy for a high precision τ mass measurement

    International Nuclear Information System (INIS)

    Wang, Y.K.; Mo, X.H.; Yuan, C.Z.; Liu, J.P.

    2007-01-01

    To achieve a high-precision τ mass (m_τ) measurement at the forthcoming high-luminosity experiment, Monte Carlo simulation and sampling techniques are adopted to simulate various data-taking cases, from which the optimal scheme is determined. The study indicates that when m_τ is the sole parameter to be fit, the optimal energy for data taking is located near the τ⁺τ⁻ production threshold, in the vicinity of the largest derivative of the cross-section with respect to energy; one point at the optimal position with a luminosity of around 63 pb⁻¹ is sufficient for reaching a statistical precision of 0.1 MeV/c² or better.

  8. A Novel Two-Stage Dynamic Spectrum Sharing Scheme in Cognitive Radio Networks

    Institute of Scientific and Technical Information of China (English)

    Guodong Zhang; Wei Heng; Tian Liang; Chao Meng; Jinming Hu

    2016-01-01

    In order to enhance the efficiency of spectrum utilization and reduce communication overhead in the spectrum sharing process, we propose a two-stage dynamic spectrum sharing scheme in which cooperative and noncooperative modes are analyzed in both stages. In particular, the existence and uniqueness of Nash Equilibrium (NE) strategies for the noncooperative mode are proved. In addition, a distributed iterative algorithm is proposed to obtain the optimal solutions of the scheme. Simulation studies are carried out to show the performance comparison between the two modes as well as the system revenue improvement of the proposed scheme compared with a conventional scheme without a virtual price control factor.

  9. Analysis of sensitivity to different parameterization schemes for a subtropical cyclone

    Science.gov (United States)

    Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.

    2018-05-01

    A sensitivity analysis of diverse WRF model physical parameterization schemes is carried out over the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed with the WRF model to analyze the sensitivity of the development and intensification of the STC to its various parameterization schemes. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus schemes had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes had stabilized. Furthermore, to determine the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space was carried out. Consequently, the combinations including the Tiedtke cumulus schemes were again the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.

  10. A short numerical study on the optimization methods influence on topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Sigmund, Ole; Stolpe, Mathias

    2017-01-01

    Structural topology optimization problems are commonly defined using continuous design variables combined with material interpolation schemes. One of the challenges for density-based topology optimization observed in the review article (Sigmund and Maute, Struct Multidiscip Optim 48(6):1031–1055, 2013) is the slow convergence that is often encountered in practice when an almost solid-and-void design is found. The purpose of this forum article is to present some preliminary observations on how designs evolve during the optimization process for different choices of optimization methods...

  11. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    Science.gov (United States)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01

    Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contributions of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous-measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on and translate prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the translation from time series to Allan deviation plot for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the useful application of the Allan deviation to optimizing and predicting the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966. [2] P. Werle, R. Mücke, F. Slemr, "The Limits
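
    The computation itself is simple; the sketch below applies the basic (non-overlapping) Allan-deviation estimator to a synthetic CO2-like series of white noise plus linear drift (numbers purely illustrative). The minimum of the resulting curve indicates the averaging time beyond which drift dominates, which is what makes the technique useful for choosing calibration intervals.

```python
# Non-overlapping Allan deviation of a continuous measurement series.
import numpy as np

def allan_deviation(y, dt, taus):
    """Allan deviation of series y (sampled every dt seconds) at each tau."""
    out = []
    for tau in taus:
        m = int(round(tau / dt))            # samples per averaging bin
        nbins = len(y) // m
        ybar = y[:nbins*m].reshape(nbins, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(ybar)**2)))
    return np.array(out)

rng = np.random.default_rng(0)
dt, n = 1.0, 100_000
y = 400 + 0.05*rng.standard_normal(n) + 1e-6*np.arange(n)  # ppm: noise + drift
for tau, ad in zip(np.logspace(0, 4, 9),
                   allan_deviation(y, dt, np.logspace(0, 4, 9))):
    print(f"tau = {tau:8.0f} s   sigma_A = {ad:.5f} ppm")
```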

  12. Quantitative schemes in energy dispersive X-ray fluorescence implemented in AXIL

    International Nuclear Information System (INIS)

    Tchantchane, A.; Benamar, M.A.; Tobbeche, S.

    1995-01-01

    E.D.X.R.F. (Energy Dispersive X-ray Fluorescence) has long been used for the quantitative analysis of many types of samples, including environmental samples. The software package AXIL (Analysis of X-ray spectra by Iterative Least squares) is extensively used for spectrum analysis and the quantification of X-ray spectra. It includes several quantitative schemes for evaluating element concentrations. We present the general theory behind each scheme implemented in the software package and assess the performance of each of these quantitative schemes. We have also investigated their performance relative to the uncertainties in the experimental parameters and the sample description.

  13. Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve overall good channel estimation performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that a large reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
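
    A toy numerical demonstration of the co-prime interleaving property described above follows; the pulse shape and all parameters are made up. The ADC runs P times slower than the desired rate, the pulse repeats every D desired-rate periods with gcd(D, P) = 1, and after D slow samples every sub-sample phase of the waveform has been visited exactly once.

```python
# Equivalent-time sampling: reconstruct a repeating waveform at the desired
# rate from an ADC running P times slower (ideal, drift-free case).
import math
import numpy as np

P, D = 4, 25                       # slow-down factor and pulse period (co-prime)
assert math.gcd(P, D) == 1

n = np.arange(D)
waveform = np.exp(-0.5 * ((n - 8.0) / 1.5) ** 2)   # toy "channel response", period D

stream = np.tile(waveform, P)      # D*P desired-rate samples of repeated pulses
slow = stream[::P]                 # what the slow ADC actually records (D samples)

phase = (np.arange(D) * P) % D     # desired-rate phase hit by each slow sample
reconstructed = np.empty(D)
reconstructed[phase] = slow        # re-interleave to the desired rate

print("max reconstruction error:", np.abs(reconstructed - waveform).max())
```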

  14. A hybrid iterative scheme for optimal control problems governed by ...

    African Journals Online (AJOL)

    MRT

    KEY WORDS: Optimal control problem; Fredholm integral equation; … control problems governed by Fredholm integral and integro-differential equations is given in (Brunner and Yan, …). The exact optimal trajectory and control functions are …

  15. Optimal sampling in damage detection of flexural beams by continuous wavelet transform

    International Nuclear Information System (INIS)

    Basu, B; Broderick, B M; Montanari, L; Spagnoli, A

    2015-01-01

    Modern measurement techniques are increasingly capable of capturing spatial displacement fields occurring in deformed structures with high precision and in a quasi-continuous manner. This in turn has made the use of vibration-based damage identification methods more effective and reliable for real applications. However, practical measurement and data processing issues still present barriers to the application of these methods in identifying several types of structural damage. This paper deals with spatial Continuous Wavelet Transform (CWT) damage identification methods in beam structures, with the aim of addressing the following key questions: (i) can the cost of damage detection be reduced by down-sampling? (ii) what is the minimum number of sampling intervals required for optimal damage detection? The first three free vibration modes of a cantilever and a simply supported beam with an edge open crack are numerically simulated. A thorough parametric study is carried out by taking into account the key parameters governing the problem, including the level of noise, crack depth and location, and the mechanical and geometrical parameters of the beam. The results are employed to assess the optimal number of sampling intervals for effective damage detection.
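
    A minimal sketch of the spatial-CWT idea is shown below on a simulated mode shape: a crack-induced slope discontinuity (exaggerated here for clarity) produces a local maximum of the fine-scale wavelet coefficients at the damage location. It uses PyWavelets; the mode, crack size and scales are illustrative, not taken from the paper.

```python
# Spatial CWT damage localisation on a simulated beam mode shape.
import numpy as np
import pywt

N = 1024                                  # sampling intervals along the beam
x = np.linspace(0.0, 1.0, N)
mode = np.sin(np.pi * x)                  # first mode of a simply supported beam
mode += 0.1 * np.where(x > 0.3, x - 0.3, 0.0)   # exaggerated slope change at x = 0.3
mode += 1e-5 * np.random.default_rng(0).standard_normal(N)  # measurement noise

coeffs, _ = pywt.cwt(mode, scales=np.arange(1, 4), wavelet="mexh")
detect = np.abs(coeffs).mean(axis=0)      # average magnitude over fine scales
detect[:32] = detect[-32:] = 0.0          # mask boundary effects
print("estimated crack location: x = %.3f" % x[detect.argmax()])
```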

  16. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, M; Li, R; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Ye, Y [Stanford Univ, Management Science and Engineering, Stanford, CA (United States); Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: The combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the plan quality.

  17. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    International Nuclear Information System (INIS)

    Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S

    2014-01-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: The combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the plan quality.

  18. Engineering application of in-core fuel management optimization code with CSA algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhihong; Hu, Yongming [INET, Tsinghua university, Beijing 100084 (China)

    2009-06-15

    PWR in-core loading (reloading) pattern optimization is a complex combinatorial problem. An excellent fuel management optimization code can greatly improve the efficiency of core reloading design and bring economic and safety benefits. Today many optimization codes based on experience or search algorithms (such as SA, GA, ANN, ACO) have been developed, but how to improve their search efficiency and engineering usability still needs further research. CSA (Characteristic Statistic Algorithm) is a highly efficient global optimization algorithm developed by our team. The performance of CSA has been proved on many problems (such as Traveling Salesman Problems). The idea of CSA is to guide the search direction by the statistical distribution of characteristic values. This algorithm is quite suitable for fuel management optimization. An optimization code with CSA has been developed and used on many core models. The research in this paper improves the engineering usability of the CSA code according to actual engineering requirements. Many new improvements have been completed in this code, such as: 1. Considering the asymmetry of burn-up within one assembly, the rotation of each assembly is included as a new optimization variable in this code. 2. The worth of control rods must satisfy the given constraint, so corresponding modifications are added to the optimization code. 3. To deal with the combination of alternate cycles, multi-cycle optimization is considered in this code. 4. To confirm the accuracy of the optimization results, extensive verification of the physics calculation module in this code has been carried out, and the parameters of the optimization schemes are checked by the SCIENCE code. The improved optimization code with CSA has been used for the Qinshan nuclear plant in China. The reloading of cycles 7, 8 and 9 (12 months, no burnable poisons) and the 18-month equilibrium cycle (with burnable poisons) reloading are optimized. Finally, many optimized schemes are found by the CSA code.

  19. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, the extended methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. Using the Pareto front to select lower-energy conformations as representative replicas facilitates convergence over the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower-energy conformations, for use as structure templates in the REMC sampling method. These methods were validated through a thorough analysis of a benchmark data set containing 16 test cases. An in-depth comparison between the MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods illustrates the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
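
    The replica-selection idea can be pictured with a plain Pareto-dominance filter over per-replica objective scores. The sketch below is a generic minimal illustration (the two toy "energy" columns are placeholders; RosettaLigand scoring and the exchange moves themselves are not reproduced):

    ```python
    import numpy as np

    def pareto_front(scores):
        """Indices of non-dominated rows when every column is minimized."""
        scores = np.asarray(scores, dtype=float)
        n = scores.shape[0]
        front = []
        for i in range(n):
            dominated = any(
                np.all(scores[j] <= scores[i]) and np.any(scores[j] < scores[i])
                for j in range(n) if j != i
            )
            if not dominated:
                front.append(i)
        return front

    # Toy usage: four replicas scored on two energy terms; the front members
    # would be the candidates favored when selecting replicas for exchange.
    replica_scores = [[1.0, 4.0], [2.0, 2.0], [3.0, 3.5], [4.0, 1.0]]
    print(pareto_front(replica_scores))   # -> [0, 1, 3]
    ```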

  20. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments to optimize sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected in a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min⁻¹ and detection at 330 nm. Under these conditions, RA concentrations were 50% higher compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  1. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    International Nuclear Information System (INIS)

    Oliveira, Karina B. de; Oliveira, Bras H. de

    2013-01-01

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments to optimize sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected in a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min⁻¹ and detection at 330 nm. Under these conditions, RA concentrations were 50% higher compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  2. A sampling scheme intended for tandem measurements of sodium transport and microvillous surface area in the coprodaeal epithelium of hens on high- and low-salt diets.

    Science.gov (United States)

    Mayhew, T M; Dantzer, V; Elbrønd, V S; Skadhauge, E

    1990-12-01

    A tissue sampling protocol for combined morphometric and physiological studies on the mucosa of the avian coprodaeum is presented. The morphometric goal is to estimate the surface area due to microvilli at the epithelial cell apex and the proposed scheme is illustrated using material from three White Plymouth Rock hens. The scheme is designed to satisfy sampling requirements for the unbiased estimation of surface areas by vertical sectioning coupled with cycloid test lines and it incorporates a number of useful internal checks. It relies on multi-level sampling with four levels of stereological estimation. At Level I, macroscopic estimates of coprodaeal volume are obtained. Light microscopy is employed at Level II to calculate epithelial volume density. Levels III and IV require low and high power electron microscopy to estimate the surface density of the epithelial apical border and the amplification factor due to microvilli. Worked examples of the calculation steps are provided.

  3. An Extended Multilocus Sequence Typing (MLST) Scheme for Rapid Direct Typing of Leptospira from Clinical Samples.

    Directory of Open Access Journals (Sweden)

    Sabrina Weiss

    2016-09-01

    Full Text Available Rapid typing of Leptospira is currently impaired by the need for time-consuming culture of leptospires. The objective of this study was to develop an assay that provides multilocus sequence typing (MLST) data directly from patient specimens while minimising costs for subsequent sequencing. An existing PCR-based MLST scheme was modified by designing nested primers including anchors to facilitate subsequent sequencing. The assay was applied to various specimen types from patients diagnosed with leptospirosis between 2014 and 2015 in the United Kingdom (UK) and the Lao People's Democratic Republic (Lao PDR). Of 44 clinical samples (23 serum, 6 whole blood, 3 buffy coat, 12 urine) PCR positive for pathogenic Leptospira spp., at least one allele was amplified in 22 samples (50%) and used for phylogenetic inference. Full allelic profiles were obtained from ten specimens, representing all sample types (23%). No nonspecific amplicons were observed in any of the samples. Of the twelve PCR-positive urine specimens, three gave full allelic profiles (25%) and two a partial profile. Phylogenetic analysis allowed for species assignment. The predominant species detected was L. interrogans (10/14 and 7/8 from the UK and Lao PDR, respectively). All other species were detected in samples from only one country (Lao PDR: L. borgpetersenii [1/8]; UK: L. kirschneri [1/14], L. santarosai [1/14], L. weilii [2/14]). Typing information for pathogenic Leptospira spp. was obtained directly from a variety of clinical samples using a modified MLST assay. This assay negates the need for time-consuming culture of Leptospira prior to typing and will be of use both in surveillance, as single alleles enable species determination, and in outbreaks, for the rapid identification of clusters.

  4. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single- and two-arm clinical trials in the general case of a clinical trial with a primary endpoint whose distribution is of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with Bernoulli- and Poisson-distributed responses, showing that the asymptotic approximations can be reasonably accurate even for relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
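
    The key scaling result is easy to picture numerically. The snippet below only illustrates the O(N^(1/2)) growth with a made-up proportionality constant; the actual constant depends on the utility function and endpoint distribution derived in the paper:

    ```python
    import numpy as np

    c = 2.0   # hypothetical constant, not derived from the paper
    for N in (100, 10_000, 1_000_000):
        n_opt = c * np.sqrt(N)
        print(f"population N={N:>9,} -> optimal trial size ~ {n_opt:6,.0f} "
              f"({100 * n_opt / N:.2f}% of N)")
    ```

    As the printout shows, the optimal trial enrolls a shrinking fraction of the population as N grows.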

  5. A Temporal Domain Decomposition Algorithmic Scheme for Large-Scale Dynamic Traffic Assignment

    Directory of Open Access Journals (Sweden)

    Eric J. Nava

    2012-03-01

    This paper presents a temporal decomposition scheme for large spatial- and temporal-scale dynamic traffic assignment, in which the entire analysis period is divided into epochs. Vehicle assignment is performed sequentially in each epoch, improving the model's scalability and confining the peak run-time memory requirement regardless of the total analysis period. A proposed self-tuning scheme adaptively searches for the run-time-optimal epoch setting during iterations, regardless of the characteristics of the modeled network. Extensive numerical experiments confirm the promising performance of the proposed algorithmic schemes.

  6. Energy-Efficient Optimization for HARQ Schemes over Time-Correlated Fading Channels

    KAUST Repository

    Shi, Zheng; Ma, Shaodan; Yang, Guanghua; Alouini, Mohamed-Slim

    2018-01-01

    in the optimization, which further differentiates this work from prior ones. Using a unified expression of asymptotic outage probabilities, optimal transmission powers and optimal rate are derived in closed-forms to maximize the energy efficiency while satisfying

  7. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

    A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, for use in anatomy-based brachytherapy optimization. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. From the DVHs, quantities such as the conformity index (COIN) and COIN integrals are derived. This is achieved by using piecewise-uniform sampling points, with the density in each region obtained from a survey of the gradients or the variance of the dose distribution there. The shape of the sampling regions is adapted to the patient anatomy and to the shape and size of the implant. Applying this method requires a single preprocessing step that takes only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained with 5-10 times fewer sampling points than uniformly distributed sampling requires.
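
    The allocation step can be sketched as a variance-proportional (Neyman-style) split of the sampling budget. The sketch below is an illustration under assumed inputs (the surveyed region variances are made up, and the paper's gradient survey and region shaping are not reproduced):

    ```python
    import numpy as np

    def allocate_points(region_variances, total_points):
        """Assign more sampling points to regions where the surveyed dose
        variance is larger, mimicking a density-adapted stratified scheme."""
        s = np.sqrt(np.asarray(region_variances, dtype=float))
        return np.maximum(1, np.round(total_points * s / s.sum())).astype(int)

    # Toy usage: three regions; the high-gradient region near the implant
    # receives most of the 10,000-point budget.
    print(allocate_points([9.0, 1.0, 0.25], 10_000))   # -> [6667 2222 1111]
    ```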

  8. Assessment of irrigation schemes in Turkey based on management ...

    African Journals Online (AJOL)

    This suggests that the WUAs-operated schemes are not optimally managed, possibly due to factors such as inappropriate crop pattern and intensity, irrigation infrastructure, lack of an effective monitoring and evaluation system, insufficient awareness among managers and farmers, or unstable administrative structure.

  9. Energy group structure determination using particle swarm optimization

    International Nuclear Information System (INIS)

    Yi, Ce; Sjoden, Glenn

    2013-01-01

    Highlights: ► Particle swarm optimization is applied to determine the broad group structure. ► A graph representation of the broad group structure problem is introduced. ► The approach is tested on a fuel-pin model. - Abstract: Multi-group theory is widely applied for the energy-domain discretization when solving the linear Boltzmann equation. To reduce the computational cost, fine-group cross section libraries are often down-sampled into broad-group cross section libraries. Cross section data collapsing generally involves two steps: first, the broad group structure has to be determined; second, a weighting scheme is used to evaluate the broad-group cross section library based on the fine-group cross section data and the broad group structure. A common scheme is to average the fine-group cross sections weighted by the fine-group flux. Cross section collapsing techniques have been intensively researched. However, most studies use a pre-determined group structure, often based on experience, to divide the neutron energy spectrum into thermal, epithermal, fast, etc. energy ranges. In this paper, a swarm intelligence algorithm, particle swarm optimization (PSO), is applied to optimize the broad group structure. A graph representation of the broad group structure determination problem is introduced, and the swarm intelligence algorithm is used to solve the graph model. The effectiveness of the approach is demonstrated using a fuel-pin model.
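
    The idea of searching over group boundaries with PSO can be pictured with a plain continuous PSO whose particles encode candidate boundary positions. This is a generic sketch, not the paper's graph formulation: the objective below (within-group flux variance) is only a stand-in for a proper flux-weighted cross section collapsing error.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def collapse_error(boundaries, fine_flux):
        """Placeholder objective: total within-group flux variance."""
        groups = np.split(fine_flux, boundaries)
        return sum(g.var() * len(g) for g in groups if len(g))

    def pso_group_structure(fine_flux, n_broad, n_particles=20, iters=100):
        n_fine, dim = len(fine_flux), n_broad - 1
        x = rng.uniform(1, n_fine - 1, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.full(n_particles, np.inf)
        gbest, gbest_f = x[0].copy(), np.inf
        for _ in range(iters):
            for i in range(n_particles):
                b = np.unique(np.clip(np.sort(x[i]).astype(int), 1, n_fine - 1))
                f = collapse_error(b, fine_flux)
                if f < pbest_f[i]:
                    pbest[i], pbest_f[i] = x[i].copy(), f
                if f < gbest_f:
                    gbest, gbest_f = x[i].copy(), f
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + v, 1, n_fine - 1)
        return np.unique(np.sort(gbest).astype(int)), gbest_f

    # Toy usage: collapse 100 fine groups into 4 broad groups.
    fine_flux = np.exp(-np.linspace(0, 5, 100)) + 0.05 * rng.random(100)
    print(pso_group_structure(fine_flux, n_broad=4))
    ```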

  10. An adaptive robust optimization scheme for water-flooding optimization in oil reservoirs using residual analysis

    NARCIS (Netherlands)

    Siraj, M.M.; Van den Hof, P.M.J.; Jansen, J.D.

    2017-01-01

    Model-based dynamic optimization of the water-flooding process in oil reservoirs is a computationally complex problem and suffers from high levels of uncertainty. A traditional way of quantifying uncertainty in robust water-flooding optimization is by considering an ensemble of uncertain model

  11. Two-Way Multiple Relays Channel: Achievable Rate Region and Optimal Resources

    Directory of Open Access Journals (Sweden)

    Zouhair Al-Qudah

    2016-01-01

    Full Text Available This paper considers a communication model in which two users exchange their information with the help of multiple parallel relay nodes. To avoid interference at these common relay nodes, the two users are required to transmit over different frequency bands. For this scenario, the achievable rate region is first derived. Next, an optimization scheme is described to choose the best relays for each user. Then, two power allocation optimization schemes are investigated to assign the proper average power to each node. Finally, the two optimization schemes are compared through numerical examples.

  12. Optimized Power Allocation and Relay Location Selection in Cooperative Relay Networks

    Directory of Open Access Journals (Sweden)

    Jianrong Bao

    2017-01-01

    Full Text Available An incremental selection hybrid decode-amplify forward (ISHDAF) scheme for two-hop single-relay systems and a relay selection strategy based on the hybrid decode-amplify-and-forward (HDAF) scheme for multirelay systems are proposed, along with an optimized power allocation, for the Internet of Things (IoT). With the total power as the constraint and the outage probability as the objective function, the proposed scheme achieves better power efficiency than equal power allocation. Through the ISHDAF scheme and the HDAF relay selection strategy, an optimized power allocation for both the source and relay nodes is obtained, as well as an effective reduction of the outage probability. In addition, the optimal relay location for maximizing the gain of the proposed algorithm is investigated and designed. Simulation results show that, in both single-relay and multirelay selection systems, the proposed scheme yields outage probability gains over the alternatives. Compared with equal power allocation, the optimized allocation obtains gains of nearly 0.1695 in the ISHDAF single-relay network at a total power of 2 dB, and of about 0.083 in the HDAF relay selection system with 2 relays at a total power of 2 dB.

  13. Germinal Center Optimization Applied to Neural Inverse Optimal Control for an All-Terrain Tracked Robot

    Directory of Open Access Journals (Sweden)

    Carlos Villaseñor

    2017-12-01

    Full Text Available Nowadays, there are several meta-heuristic algorithms that offer solutions for multivariate optimization problems. These algorithms use a population of candidate solutions to explore the search space, where leadership plays a big role in the exploration-exploitation equilibrium. In this work, we propose to use the Germinal Center Optimization algorithm (GCO), which implements temporal leadership by modeling a non-uniform competition-based distribution for particle selection. GCO is used to find an optimal set of parameters for a neural inverse optimal control applied to an all-terrain tracked robot. In the Neural Inverse Optimal Control (NIOC) scheme, a neural identifier, based on a Recurrent High-Order Neural Network (RHONN) trained with an extended Kalman filter algorithm, is used to obtain a model of the system; then a control law is designed using this model with the inverse optimal control approach. The RHONN identifier is developed without knowledge of the plant model or its parameters, while the inverse optimal control is designed for tracking velocity references. Applicability of the proposed scheme is illustrated using simulation results as well as real-time experimental results with an all-terrain tracked robot.

  14. A Secure and Privacy-Preserving Navigation Scheme Using Spatial Crowdsourcing in Fog-Based VANETs

    Science.gov (United States)

    Wang, Lingling; Liu, Guozhu; Sun, Lijun

    2017-01-01

    Fog-based VANETs (vehicular ad hoc networks) are a new paradigm of vehicular ad hoc networks with the advantages of both vehicular cloud and fog computing. Real-time navigation schemes based on fog-based VANETs can efficiently improve navigation performance. In this paper, we propose a secure and privacy-preserving navigation scheme using vehicular spatial crowdsourcing based on fog-based VANETs. Fog nodes are used to generate and release the crowdsourcing tasks and to cooperatively find the optimal route according to the real-time traffic information collected by vehicles in their coverage areas. Meanwhile, the vehicle performing the crowdsourcing task receives a reasonable reward. The querying vehicle retrieves the navigation results from each fog node successively when entering its coverage area, and follows the optimal route to the next fog node until it reaches the desired destination. Our scheme fulfills the security and privacy requirements of authentication, confidentiality and conditional privacy preservation. Several cryptographic primitives, including the ElGamal encryption algorithm, AES, randomized anonymous credentials and group signatures, are adopted to achieve this goal. Finally, we analyze the security and the efficiency of the proposed scheme. PMID:28338620

  15. A Secure and Privacy-Preserving Navigation Scheme Using Spatial Crowdsourcing in Fog-Based VANETs.

    Science.gov (United States)

    Wang, Lingling; Liu, Guozhu; Sun, Lijun

    2017-03-24

    Fog-based VANETs (vehicular ad hoc networks) are a new paradigm of vehicular ad hoc networks with the advantages of both vehicular cloud and fog computing. Real-time navigation schemes based on fog-based VANETs can efficiently improve navigation performance. In this paper, we propose a secure and privacy-preserving navigation scheme using vehicular spatial crowdsourcing based on fog-based VANETs. Fog nodes are used to generate and release the crowdsourcing tasks and to cooperatively find the optimal route according to the real-time traffic information collected by vehicles in their coverage areas. Meanwhile, the vehicle performing the crowdsourcing task receives a reasonable reward. The querying vehicle retrieves the navigation results from each fog node successively when entering its coverage area, and follows the optimal route to the next fog node until it reaches the desired destination. Our scheme fulfills the security and privacy requirements of authentication, confidentiality and conditional privacy preservation. Several cryptographic primitives, including the ElGamal encryption algorithm, AES, randomized anonymous credentials and group signatures, are adopted to achieve this goal. Finally, we analyze the security and the efficiency of the proposed scheme.

  16. Respiratory motion sampling in 4DCT reconstruction for radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Chi Yuwei; Liang Jian; Qin Xu; Yan Di [Department of Radiation Oncology, Columbia University, New York, New York 10032 (United States); Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, Michigan 48073 (United States)

    2012-04-15

    Purpose: Phase-based and amplitude-based sorting techniques are commonly used in four-dimensional CT (4DCT) reconstruction. However, the effect of these sorting techniques on 4D dose calculation has not been explored. In this study, the authors investigated a candidate 4DCT sorting technique by comparing its 4D dose calculation accuracy with that of the phase-based and amplitude-based sorting techniques. Methods: An optimization model was formed using the organ motion probability density function (PDF) in the 4D dose convolution. The objective function for the optimization was defined as the maximum difference between the expected 4D dose in the organ of interest and the 4D dose calculated using a 4DCT sorted by a candidate sampling method. Sorting samples, as optimization variables, were selected on the respiratory motion PDF assessed during the CT scanning. Breathing curves obtained from patients' 4DCT scans, as well as 3D dose distributions from treatment planning, were used in the study. Given the objective function, a residual error analysis was performed, and k-means clustering was found to be an effective sampling scheme for improving the 4D dose calculation accuracy, independent of the patient-specific dose distribution. Results: Patient data analysis demonstrated that k-means sampling was superior to the conventional phase-based and amplitude-based sorting and comparable to the optimal sampling results. For phase-based sorting, the residual error in the 4D dose calculation may not be reduced to an acceptable accuracy beyond a certain number of phases, while for amplitude-based sorting, k-means sampling, and the optimal sampling, the residual error decreased rapidly as the number of 4DCT phases increased to 6. Conclusion: An innovative phase sorting method (the k-means method) is presented in this study. The method depends only on the tumor motion PDF. It could provide a way to refine phase sorting in 4DCT reconstruction and is effective.
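
    The k-means idea reduces to clustering the breathing-amplitude samples and reconstructing one phase per cluster center. Below is a minimal 1-D sketch with a synthetic breathing trace (the trace, cluster count and initialization are assumptions; the paper's PDF estimation and dose-based objective are not reproduced):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def kmeans_1d(samples, k, iters=50):
        """Plain 1-D k-means; the centers give the amplitude levels at which
        4DCT phases would be reconstructed."""
        samples = np.asarray(samples, dtype=float)
        centers = np.quantile(samples, np.linspace(0.05, 0.95, k))
        for _ in range(iters):
            labels = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = samples[labels == j].mean()
        return np.sort(centers)

    # Synthetic breathing trace: the amplitude dwells near exhale, so k-means
    # places sampling levels more densely there than uniform phase sorting.
    t = np.linspace(0.0, 60.0, 6000)
    amplitude = (1 - np.cos(2 * np.pi * t / 4.0)) ** 2 / 4 + 0.02 * rng.random(t.size)
    print(kmeans_1d(amplitude, k=6))
    ```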

  17. Designing an Efficient Retransmission Scheme for Wireless LANs: Theory and Implementation

    OpenAIRE

    Koutsonikolas, Dimitrios; Wang, Chih-Chun; Hu, Y Charlie; Shroff, Ness

    2010-01-01

    Network coding is known to benefit the downlink retransmissions by the AP in a wireless LAN by exploiting overhearing at the client nodes. However, designing an efficient and practical retransmission scheme remains a challenge. We present an (asymptotically) optimal scheme, ECR, for reducing the downlink retransmissions by the AP in a wireless LAN by exploiting overhearing at the client nodes. The design of ECR consists of three components: batch-based operations, a systematic pha...

  18. Review, modeling, Heat Integration, and improved schemes of Rectisol®-based processes for CO2 capture

    International Nuclear Information System (INIS)

    Gatti, Manuele; Martelli, Emanuele; Marechal, François; Consonni, Stefano

    2014-01-01

    The paper evaluates the thermodynamic performance and the energy integration of alternative schemes of a methanol-absorption-based acid gas removal process designed for CO2 Capture and Storage. More precisely, this work focuses on the Rectisol® process specifically designed for the selective removal of H2S and CO2 from syngas produced by coal gasification. The study addresses the following issues: (i) review the Rectisol® schemes proposed by engineers and researchers with the purpose of determining the best one for CO2 Capture and Storage; (ii) calibrate the PC-SAFT equation of state for CH3OH–CO2–H2S–H2–CO mixtures at conditions relevant to the Rectisol® process; (iii) evaluate the thermodynamic performance and optimize the energy integration of a “Reference” scheme derived from those available in the literature; (iv) identify and assess alternative Rectisol® schemes with optimized performance for CO2 Capture and Storage and Heat Integration with utilities. On the basis of the analysis of the Composite Curves of the integrated process, we propose some possible improvements at the level of the process configuration, such as the introduction of mechanical vapor recompression and the development of a two-stage regeneration arrangement. - Highlights: • Comprehensive review of the Rectisol® process configurations and applications. • Calibration of the PC-SAFT equation of state for Rectisol®-relevant mixtures. • Detailed process simulation, optimized Heat Integration, and utility design. • Development of alternative Rectisol® schemes optimized for CO2 Capture

  19. The Quasar Fraction in Low-Frequency Selected Complete Samples and Implications for Unified Schemes

    Science.gov (United States)

    Willott, Chris J.; Rawlings, Steve; Blundell, Katherine M.; Lacy, Mark

    2000-01-01

    Low-frequency radio surveys are ideal for selecting orientation-independent samples of extragalactic sources because the sample members are selected by virtue of their isotropic steep-spectrum extended emission. We use the new 7C Redshift Survey along with the brighter 3CRR and 6C samples to investigate the fraction of objects with observed broad emission lines - the 'quasar fraction' - as a function of redshift and of radio and narrow emission line luminosity. We find that the quasar fraction is more strongly dependent upon luminosity (both narrow line and radio) than it is on redshift. Above a narrow [OII] emission line luminosity of log10(L[OII]/W) ≳ 35 [or radio luminosity log10(L151/W Hz⁻¹ sr⁻¹) ≳ 26.5], the quasar fraction is virtually independent of redshift and luminosity; this is consistent with a simple unified scheme with an obscuring torus with a half-opening angle θ_trans ≈ 53°. For objects with less luminous narrow lines, the quasar fraction is lower. We show that this is not due to the difficulty of detecting lower-luminosity broad emission lines in a less luminous, but otherwise similar, quasar population. We discuss evidence which supports at least two probable physical causes for the drop in quasar fraction at low luminosity: (i) a gradual decrease in θ_trans and/or a gradual increase in the fraction of lightly-reddened lines of sight with decreasing quasar luminosity; and (ii) the emergence of a distinct second population of low-luminosity radio sources which, like M87, lack a well-fed quasar nucleus and may well lack a thick obscuring torus.

  20. PSO-tuned PID controller for coupled tank system via priority-based fitness scheme

    Science.gov (United States)

    Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal

    2015-05-01

    The industrial applications of the Coupled Tank System (CTS) are widespread, especially in the chemical process industries. The overall process requires liquids to be pumped, stored in a tank and pumped again to another tank. Nevertheless, the liquid level in each tank needs to be controlled and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for controlling the desired liquid level of the CTS. Two variants of the Particle Swarm Optimization (PSO) algorithm are tested for optimizing the PID controller parameters: standard Particle Swarm Optimization (PSO) and the Priority-based Fitness scheme in Particle Swarm Optimization (PFPSO). Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE) and overshoot (OS). It is demonstrated that implementing PSO via the Priority-based Fitness scheme (PFPSO) is a promising technique for controlling the desired liquid level, improving the system performance compared with standard PSO.
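
    A priority-based fitness comparison can be sketched as a lexicographic ordering of the closed-loop metrics. The ordering (overshoot, then steady-state error, then settling time) and the numbers below are assumptions for illustration; the actual PFPSO weighting and the CTS model are not reproduced:

    ```python
    from functools import cmp_to_key

    def priority_compare(a, b, tol=1e-3):
        """a, b are (overshoot, sse, settling_time) tuples; an earlier metric
        decides the comparison unless the two candidates tie within tol."""
        for ai, bi in zip(a, b):
            if abs(ai - bi) > tol:
                return -1 if ai < bi else 1
        return 0

    candidates = [
        (0.12, 0.010, 8.5),   # made-up metrics for three PID gain sets
        (0.05, 0.020, 9.0),
        (0.05, 0.015, 9.5),
    ]
    best = sorted(candidates, key=cmp_to_key(priority_compare))[0]
    print(best)   # -> (0.05, 0.015, 9.5): overshoot decides first, then SSE
    ```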

  1. Multiobjective design of aquifer monitoring networks for optimal spatial prediction and geostatistical parameter estimation

    Science.gov (United States)

    Alzraiee, Ayman H.; Bau, Domenico A.; Garcia, Luis A.

    2013-06-01

    Effective sampling of hydrogeological systems is essential in guiding groundwater management practices. Optimal sampling of groundwater systems has previously been formulated based on the assumption that heterogeneous subsurface properties can be modeled using a geostatistical approach. Therefore, the monitoring schemes have been developed to concurrently minimize the uncertainty in the spatial distribution of systems' states and parameters, such as the hydraulic conductivity K and the hydraulic head H, and the uncertainty in the geostatistical model of system parameters using a single objective function that aggregates all objectives. However, it has been shown that the aggregation of possibly conflicting objective functions is sensitive to the adopted aggregation scheme and may lead to distorted results. In addition, the uncertainties in geostatistical parameters affect the uncertainty in the spatial prediction of K and H according to a complex nonlinear relationship, which has often been ineffectively evaluated using a first-order approximation. In this study, we propose a multiobjective optimization framework to assist the design of monitoring networks of K and H with the goal of optimizing their spatial predictions and estimating the geostatistical parameters of the K field. The framework stems from the combination of a data assimilation (DA) algorithm and a multiobjective evolutionary algorithm (MOEA). The DA algorithm is based on the ensemble Kalman filter, a Monte-Carlo-based Bayesian update scheme for nonlinear systems, which is employed to approximate the posterior uncertainty in K, H, and the geostatistical parameters of K obtained by collecting new measurements. Multiple MOEA experiments are used to investigate the trade-off among design objectives and identify the corresponding monitoring schemes. The methodology is applied to design a sampling network for a shallow unconfined groundwater system located in Rocky Ford, Colorado. Results indicate that

  2. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Roč. 19, č. 30 (2012), s. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  3. An Improved Evolutionary Programming with Voting and Elitist Dispersal Scheme

    Science.gov (United States)

    Maity, Sayan; Gunjan, Kumar; Das, Swagatam

    Although initially conceived for evolving finite state machines, Evolutionary Programming (EP), in its present form, is largely used as a powerful real-parameter optimizer. For function optimization, EP mainly relies on its mutation operators. Over the past few years several mutation operators have been proposed to improve the performance of EP on a wide variety of numerical benchmarks. However, unlike real-coded GAs, there has been no fitness-induced bias in parent selection for mutation in EP; that is, the i-th population member is selected deterministically for mutation to create the i-th offspring in each generation. In this article we present an improved EP variant called Evolutionary Programming with Voting and Elitist Dispersal (EPVE). The scheme encompasses a voting process which not only gives importance to the best solutions but also considers those solutions which are converging fast. The Elitist Dispersal Scheme maintains elitism by keeping the potential solutions intact while the other solutions are perturbed so that they can escape local minima. By applying these two techniques we are able to explore regions that have not been explored so far and that may contain optima. Comparison with the recent and best-known versions of EP over 25 benchmark functions from the CEC (Congress on Evolutionary Computation) 2005 test suite for real-parameter optimization reflects the superiority of the new scheme in terms of final accuracy, speed, and robustness.

  4. Comparison of pulsed three-dimensional CEST acquisition schemes at 7 tesla : steady state versus pseudosteady state

    NARCIS (Netherlands)

    Khlebnikov, Vitaly; Geades, Nicolas; Klomp, DWJ; Hoogduin, Hans; Gowland, Penny; Mougin, Olivier

    PURPOSE: To compare two pulsed, volumetric chemical exchange saturation transfer (CEST) acquisition schemes: steady state (SS) and pseudosteady state (PS) for the same brain coverage, spatial/spectral resolution and scan time. METHODS: Both schemes were optimized for maximum sensitivity to amide

  5. Optimum wireless sensor deployment scheme for structural health monitoring: a simulation study

    International Nuclear Information System (INIS)

    Liu, Chengyin; Fang, Kun; Teng, Jun

    2015-01-01

    With the rapid advancements in smart sensing technology and wireless communication technology, the wireless sensor network (WSN) offers an alternative solution to structural health monitoring (SHM). In WSNs, dense deployment of wireless nodes aids the identification of structural dynamic characteristics, while data transmission is a significant issue since wireless channels typically have a lower bandwidth and a limited power supply. This paper provides a wireless sensor deployment optimization scheme for SHM, in terms of both energy consumption and modal identification accuracy. A spherical energy model is established to formulate the energy consumption within a WSN. The optimal number of sensors and their locations are obtained through solving a multi-objective function with weighting factors on energy consumption and modal identification accuracy using a genetic algorithm (GA). Simulation and comparison results with traditional sensor deployment methods demonstrate the efficiency of the proposed optimization scheme. (paper)

  6. A Domestic Microgrid with Optimized Home Energy Management System

    Directory of Open Access Journals (Sweden)

    Zafar Iqbal

    2018-04-01

    Full Text Available A microgrid is a community-based power generation and distribution system that interconnects smart homes with renewable energy sources (RESs). A microgrid efficiently and economically generates power for electricity consumers and operates in both islanded and grid-connected modes. In this study, we propose optimization schemes for reducing electricity cost and minimizing the peak-to-average ratio (PAR) with maximum user comfort (UC) in a smart home. We consider a grid-connected microgrid for electricity generation which consists of a wind turbine and a photovoltaic (PV) panel. First, the problem is mathematically formulated as a multiple knapsack problem (MKP) and then solved by existing heuristic techniques: grey wolf optimization (GWO), binary particle swarm optimization (BPSO), the genetic algorithm (GA) and wind-driven optimization (WDO). Furthermore, we also propose three hybrid schemes for electricity cost and PAR reduction: (1) a hybrid of GA and WDO named WDGA; (2) a hybrid of WDO and GWO named WDGWO; and (3) WBPSO, a hybrid of BPSO and WDO. In addition, a battery bank system (BBS) is integrated to make our proposed schemes more cost-efficient and reliable, and to ensure stable grid operation. Finally, simulations were performed to verify our proposed schemes. Results show that our proposed schemes efficiently minimize the electricity cost and PAR. Moreover, our proposed techniques, WDGA, WDGWO and WBPSO, outperform the existing heuristic techniques.

  7. Experimental Study on Intelligent Control Scheme for Fan Coil Air-Conditioning System

    Directory of Open Access Journals (Sweden)

    Yanfeng Li

    2013-01-01

    Full Text Available An intelligent control scheme for fan coil air-conditioning systems has been put forward to overcome the shortcomings of the traditional proportional-integral-derivative (PID) control scheme, namely its poor disturbance rejection and large inertia. An intelligent control test rig for a fan coil air-conditioning system has been built, and MATLAB/Simulink dynamics simulation software has been adopted to implement the intelligent control scheme. Software for data exchange has been developed to connect the intelligent control system with the building automation (BA) system. Experimental tests have been conducted to investigate the effectiveness of different control schemes, including traditional PID control, fuzzy control, and fuzzy-PID control, for the fan coil air-conditioning system. The effects of the control schemes have been compared and analyzed for robustness, static and dynamic behavior, and economy. The results show that the developed data-exchange interface software allows the BA system to incorporate the intelligent control scheme more effectively. Among the proposed control strategies, the fuzzy-PID control scheme, which has the advantages of both the traditional PID and fuzzy schemes, is the optimal control scheme for the fan coil air-conditioning system.

  8. Cognitive Aware Interference Mitigation Scheme for LTE Femtocells

    KAUST Repository

    Alqerm, Ismail

    2015-04-21

    Femtocell deployment in today’s cellular networks came into practice to fulfill the increasing demand for data services. However, interference to other femtocell and macro-cell users remains an unresolved challenge. In this paper, we propose an interference mitigation scheme to control the cross-tier interference caused by femtocells to the macro users and the co-tier interference among femtocells. Cognitive radio spectrum sensing capability is utilized to determine the non-occupied channels or the ones that cause minimal interference to the macro users. An awareness-based channel allocation scheme is developed with the assistance of a graph-coloring algorithm to assign channels to the femtocell base stations with power optimization, minimal interference, maximum throughput, and maximum spectrum efficiency. In addition, the scheme exploits negotiation capability to match the traffic load and QoS with the channel capacity, and to maintain efficient utilization of the available channels.

  9. Evaluation of sample preparation methods and optimization of nickel determination in vegetable tissues

    Directory of Open Access Journals (Sweden)

    Rodrigo Fernando dos Santos Salazar

    2011-02-01

    Full Text Available Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated through food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for the preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption (ETAAS) in vegetable samples; and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. For determination by ETAAS or FAAS, the results were validated by analyte addition and recovery tests. The most viable method tested for the quantification of this element was HClO4-HNO3 wet digestion. All samples except the carrot tissue collected in Lorena contained Ni levels above those permitted by the Brazilian Ministry of Health. The most disturbing results, requiring more detailed study, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.

  10. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
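
    The cutpoint method mentioned above is easy to sketch: a table of precomputed cutpoints lets each lookup jump close to the right CDF bin before a short sequential scan. This is a generic sketch of the technique (Chen-Asau style) with an example distribution, not the code evaluated in the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def build_cutpoints(probs, m):
        cdf = np.cumsum(probs)
        cdf[-1] = 1.0                                  # guard against rounding
        cut = np.searchsorted(cdf, np.arange(m) / m)   # first bin with cdf >= j/m
        return cut, cdf

    def sample(cut, cdf, n):
        u = rng.random(n)
        start = cut[(u * len(cut)).astype(int)]        # jump close to the answer
        out = np.empty(n, dtype=int)
        for k in range(n):
            i = start[k]
            while cdf[i] < u[k]:                       # short sequential scan
                i += 1
            out[k] = i
        return out

    probs = np.array([0.05, 0.20, 0.50, 0.15, 0.10])   # example discrete pdf
    cut, cdf = build_cutpoints(probs, m=64)
    counts = np.bincount(sample(cut, cdf, 100_000), minlength=len(probs))
    print(counts / counts.sum())                       # ~ probs
    ```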

  11. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    Science.gov (United States)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme (parametric uncertainty). Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme (structural uncertainty). Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.

  12. Optimization and Control of Cyber-Physical Vehicle Systems

    Directory of Open Access Journals (Sweden)

    Justin M. Bradley

    2015-09-01

    Full Text Available A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined.

  13. Optimization and Control of Cyber-Physical Vehicle Systems.

    Science.gov (United States)

    Bradley, Justin M; Atkins, Ella M

    2015-09-11

    A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined.

  14. Gradual and Cumulative Improvements to the Classical Differential Evolution Scheme through Experiments

    Directory of Open Access Journals (Sweden)

    Anescu George

    2016-12-01

    Full Text Available The paper presents the experimental results of tests conducted with the purpose of gradually and cumulatively improving the classical DE scheme in both efficiency and success rate. The modifications consisted of randomizing the scaling factor (a simple jitter scheme), a more efficient Random Greedy Selection scheme, an adaptive scheme for the crossover probability, and a resetting mechanism for the agents. After each modification step, experiments were conducted on a set of 11 scalable, multimodal, continuous optimization functions in order to analyze the improvements and decide the next improvement direction. Finally, the initial classical scheme and the constructed Fast Self-Adaptive DE (FSA-DE) variant were compared with the purpose of testing how their performance degrades as the search space dimension increases. The experimental results demonstrated the superiority of the proposed FSA-DE variant.
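
    The first modification (scale-factor jitter) is simple to sketch on top of classical DE/rand/1/bin. The sketch below shows only that step under assumed settings (fixed crossover probability 0.9; the Random Greedy Selection, crossover adaptation and agent resetting of FSA-DE are omitted):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def de_jitter(f, lo, hi, pop=30, iters=200, F0=0.5, jitter=0.3):
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        dim = len(lo)
        x = rng.uniform(lo, hi, (pop, dim))
        fx = np.array([f(xi) for xi in x])
        for _ in range(iters):
            for i in range(pop):
                a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
                F = F0 + jitter * (rng.random() - 0.5)   # jittered scale factor
                mutant = np.clip(x[a] + F * (x[b] - x[c]), lo, hi)
                cross = rng.random(dim) < 0.9
                cross[rng.integers(dim)] = True          # keep at least one mutant gene
                trial = np.where(cross, mutant, x[i])
                f_trial = f(trial)
                if f_trial <= fx[i]:                     # one-to-one greedy selection
                    x[i], fx[i] = trial, f_trial
        best = np.argmin(fx)
        return x[best], fx[best]

    # Toy usage on a multimodal benchmark (5-D Rastrigin).
    rastrigin = lambda v: 10 * v.size + np.sum(v * v - 10 * np.cos(2 * np.pi * v))
    print(de_jitter(rastrigin, np.full(5, -5.12), np.full(5, 5.12)))
    ```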

  15. REMINDER: Saved Leave Scheme (SLS)

    CERN Multimedia

    2003-01-01

    Transfer of leave to saved leave accounts Under the provisions of the voluntary saved leave scheme (SLS), a maximum total of 10 days'* annual and compensatory leave (excluding saved leave accumulated in accordance with the provisions of Administrative Circular No 22B) can be transferred to the saved leave account at the end of the leave year (30 September). We remind you that unused leave of all those taking part in the saved leave scheme at the closure of the leave year accounts is transferred automatically to the saved leave account on that date. Therefore, staff members have no administrative steps to take. In addition, the transfer, which eliminates the risk of omitting to request leave transfers and rules out calculation errors in transfer requests, will be clearly shown in the list of leave transactions that can be consulted in EDH from October 2003 onwards. Furthermore, this automatic leave transfer optimizes staff members' chances of benefiting from a saved leave bonus provided that they ar...

  16. Accelerated Enveloping Distribution Sampling: Enabling Sampling of Multiple End States while Preserving Local Energy Minima.

    Science.gov (United States)

    Perthold, Jan Walther; Oostenbrink, Chris

    2018-05-17

    Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.
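
    For orientation, the standard (non-accelerated) EDS reference-state energy has a closed form that is easy to evaluate. The snippet below computes it with a stable log-sum-exp; the energies, offsets and smoothing parameter are placeholders, and the paper's modified functional form that preserves local energy minima is not reproduced:

    ```python
    import numpy as np

    def eds_reference_energy(energies, offsets, beta, s):
        """E_R = -(1/(beta*s)) * ln( sum_i exp(-beta*s*(E_i - E_i^R)) ),
        the conventional EDS reference Hamiltonian for end-state energies
        E_i with energy offsets E_i^R and smoothing parameter s."""
        a = -beta * s * (np.asarray(energies, float) - np.asarray(offsets, float))
        m = a.max()
        return -(m + np.log(np.exp(a - m).sum())) / (beta * s)

    # Toy usage: two end-state energies (kJ/mol) for a single configuration.
    beta = 1.0 / 2.494   # 1/kT at ~300 K, in kJ/mol
    print(eds_reference_energy([12.0, -3.0], offsets=[0.0, 5.0], beta=beta, s=0.3))
    ```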

  17. Quantum money with nearly optimal error tolerance

    Science.gov (United States)

    Amiri, Ryan; Arrazola, Juan Miguel

    2017-06-01

    We present a family of quantum money schemes with classical verification which display a number of benefits over previous proposals. Our schemes are based on hidden matching quantum retrieval games and tolerate noise up to 23%, which we conjecture reaches 25% asymptotically as the dimension of the underlying hidden matching states is increased. Furthermore, we prove that 25% is the maximum tolerable noise for a wide class of quantum money schemes with classical verification, meaning our schemes are almost optimally noise tolerant. We use methods in semidefinite programming to prove security in a substantially different manner from previous proposals, leading to two main advantages: first, coin verification involves only a constant number of states (with respect to coin size), thereby allowing for smaller coins; second, the reusability of coins within our scheme grows linearly with the size of the coin, which is known to be optimal. Lastly, we suggest methods by which the coins in our protocol could be implemented using weak coherent states and verified using existing experimental techniques, even in the presence of detector inefficiencies.

  18. Neuro-genetic system for optimization of GMI samples sensitivity.

    Science.gov (United States)

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are widely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices with huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, and the DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well modeled in quantitative terms, so the search for the set of parameters that optimizes sample sensitivity is usually empirical and very time consuming. This paper addresses this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) neural network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. A Cost-Based Adaptive Handover Hysteresis Scheme to Minimize the Handover Failure Rate in 3GPP LTE System

    Directory of Open Access Journals (Sweden)

    Gil Gye-Tae

    2010-01-01

    Full Text Available We deal with a cost-based adaptive handover hysteresis scheme for horizontal handover decision strategies, as one of the self-optimization techniques that can minimize the handover failure rate (HFR) in the 3rd Generation Partnership Project (3GPP) Long-Term Evolution (LTE) system based on network-controlled hard handover. In particular, for real-time operation, we propose an adaptive hysteresis scheme with a simplified cost function considering dominant factors closely related to HFR performance, such as the load difference between the target and serving cells, the velocity of the user equipment (UE), and the service type. With the proposed scheme, a proper hysteresis value based on these dominant factors is easily obtained, so that handover parameter optimization for minimizing the HFR can be effectively achieved. Simulation results show that the proposed scheme achieves better HFR performance than the conventional schemes.

  20. Topology optimization of Halbach magnet arrays using isoparametric projection

    International Nuclear Information System (INIS)

    Lee, Jaewook; Nomura, Tsuyoshi; Dede, Ercan M.

    2017-01-01

    Highlights:
    • Design method of Halbach magnet array is proposed using topology optimization.
    • Magnet strength and direction are simultaneously optimized by isoparametric projection.
    • For manufacturing feasibility of magnet, penalization and extrusion schemes are proposed.
    • Design results of circular shaped Halbach arrays are provided.
    • Halbach arrays in linear actuator are optimized to maximize magnetic force.

    Abstract: Topology optimization using isoparametric projection for the design of permanent magnet patterns in Halbach arrays is proposed. Based on isoparametric shape functions used in the finite element analysis, the permanent magnet strength and magnetization directions in a Halbach array are simultaneously optimized for a given design goal. To achieve fabrication feasibility of a designed Halbach magnet array, two design schemes are combined with the isoparametric projection method. First, a penalization scheme is proposed for designing the permanent magnets to have discrete magnetization direction angles. Second, an extrusion scheme is proposed for the shape regularization of the permanent magnet segments. As a result, the method systematically finds the optimal permanent magnet patterns of a Halbach array considering manufacturing feasibility. In two numerical examples, a circular shaped permanent magnet Halbach array is designed to minimize the magnitude of the magnetic flux density and to maximize the upward direction magnetic flux density inside the magnet array. Logical extension of the method to the design of permanent magnet arrays in linear actuators is provided, where the design goal is to maximize the actuator magnetic force.

  2. A new channel allocation scheme and performance optimizing for mobile multimedia wireless networks

    Institute of Scientific and Technical Information of China (English)

    ZHAO Fang-ming; JIANG Ling-ge; MA Ming-da

    2008-01-01

    A multimedia channel allocation scheme is proposed and studied in terms of connection-level QoS. A new traffic model based on a multidimensional Markov chain is developed, considering the traffic characteristics of two special periods of time. Pre-emptive priority strategies are used to classify real-time services and non-real-time services: real-time service is given higher priority and is allowed to pre-empt channels used by non-real-time service. Considering the mobility of persons during a day, which affects the mobile user density, simulations were conducted involving the two pre-emptive priority strategies. The results of several comparisons show the feasibility of the proposed scheme.

  3. The Proteome of Ulcerative Colitis in Colon Biopsies from Adults - Optimized Sample Preparation and Comparison with Healthy Controls.

    Science.gov (United States)

    Schniers, Armin; Anderssen, Endre; Fenton, Christopher Graham; Goll, Rasmus; Pasing, Yvonne; Paulssen, Ruth Hracky; Florholmen, Jon; Hansen, Terkel

    2017-12-01

    The purpose of the study was to optimize the sample preparation and to use the improved preparation to identify proteome differences between inflamed ulcerative colitis tissue from untreated adults and healthy controls. To optimize the sample preparation, we studied the effect of adding different detergents to a urea-containing lysis buffer for a Lys-C/trypsin tandem digestion. With the optimized method, we prepared clinical samples from six ulcerative colitis patients and six healthy controls, analysed them by LC-MS/MS, and examined the acquired data to identify differences between the states. We improved the protein extraction and the number of protein identifications by utilizing a urea- and sodium deoxycholate-containing buffer. Comparing ulcerative colitis and healthy tissue, we found 168 of 2366 identified proteins differentially abundant. Inflammatory proteins are more abundant in ulcerative colitis, while proteins related to anion transport and mucus production are less abundant. A high proportion of S100 proteins is differentially abundant, notably with both up-regulated and down-regulated proteins. The optimized sample preparation method will improve future proteomic studies on colon mucosa. The observed protein abundance changes and their enrichment in various groups improve our understanding of ulcerative colitis at the protein level. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Microbiological assessment along the fish production chain of the Norwegian pelagic fisheries sector--Results from a spot sampling programme.

    Science.gov (United States)

    Svanevik, Cecilie Smith; Roiha, Irja Sunde; Levsen, Arne; Lunestad, Bjørn Tore

    2015-10-01

    Microbes play an important role in the degradation of fish products, so better knowledge of the microbiological conditions throughout the fish production chain may help to optimise product quality and resource utilisation. This paper presents the results of a ten-year spot sampling programme (2005-2014) of the commercially most important pelagic fish species harvested in Norway. Fish, surface and storage water samples were collected from fishing vessels and processing factories. In total, 1,181 samples were assessed with respect to microbiological quality, hygiene and food safety. We introduce a quality and safety assessment scheme for fresh pelagic fish recommending limits for heterotrophic plate counts (HPC), thermotolerant coliforms, enterococci and Listeria monocytogenes. According to the scheme, sub-optimal conditions with respect to quality were found in 25 of 41 samplings, whereas samples were not in compliance concerning hygiene and food safety in 21 and 9 samplings, respectively. The present study has revealed that the quality of pelagic fish can be optimised by improving the hygiene conditions at some critical points in an early phase of the production chain. Thus, the proposed assessment scheme may provide a useful tool for the industry to optimise quality and maintain consumer safety of pelagic fishery products. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. An optimized encoding method for secure key distribution by swapping quantum entanglement and its extension

    International Nuclear Information System (INIS)

    Gao Gan

    2015-01-01

    Song [Song D 2004 Phys. Rev. A 69 034301] first proposed two key distribution schemes with the symmetry feature. We find that, in these schemes, the private channels through which Alice and Bob publicly announce the initial Bell state or the measurement result are not needed for discovering the keys, and that Song's encoding methods are not optimal. Here, an optimized encoding method is given, improving the efficiencies of Song's schemes by a factor of 7/3. Interestingly, this optimized encoding method can be extended to a key distribution scheme composed of generalized Bell states. (paper)

  6. [Multi-mathematical modelings for compatibility optimization of Jiangzhi granules].

    Science.gov (United States)

    Yang, Ming; Zhang, Li; Ge, Yingli; Lu, Yanliu; Ji, Guang

    2011-12-01

    To investigate a method of multi-activity-index evaluation and combination optimization of multiple components for Chinese herbal formulas. Following a scheme of uniform experimental design, efficacy experiments, multi-index evaluation, least absolute shrinkage and selection operator (LASSO) modeling, an evolutionary optimization algorithm, and validation experiments, we optimized the combination of Jiangzhi granules based on the activity indexes of serum ALT, AST, TG, TC, HDL and LDL levels, the TG level of liver tissue, and the ratio of liver weight to body weight. The analytic hierarchy process (AHP) combined with criteria importance through intercriteria correlation (CRITIC) was more reasonable and objective for multi-activity-index evaluation, as it reflected both the ordering of the activity indexes and the objective sample data. The LASSO model accurately captured the relationship between different combinations of Jiangzhi granules and the comprehensive activity indexes. After the validation experiment, the optimized combination of Jiangzhi granules showed better comprehensive activity index values than the original formula. AHP combined with CRITIC can be used for multi-activity-index evaluation together with LASSO modeling, and the approach is suitable for combination optimization of Chinese herbal formulas.
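
    The modeling-then-search idea can be illustrated with scikit-learn's LassoCV on synthetic dose/response data, as sketched below; the dose matrix, the composite index and the coarse grid search standing in for the evolutionary optimizer are all placeholders, not the study's data or algorithm.

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(1)

    # Synthetic stand-in: dose levels of four components (a uniform design
    # would supply these) and a composite activity index as produced by
    # AHP/CRITIC weighting of the individual efficacy measurements.
    doses = rng.uniform(0.0, 1.0, size=(30, 4))
    index = doses @ np.array([1.5, 0.8, 0.0, -0.4]) + rng.normal(0, 0.05, 30)

    model = LassoCV(cv=5).fit(doses, index)

    # Brute-force search of the fitted model over a coarse dose grid stands
    # in for the evolutionary optimization step of the paper.
    grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 11)] * 4), -1).reshape(-1, 4)
    best = grid[np.argmax(model.predict(grid))]
    print("LASSO coefficients:", model.coef_.round(3))
    print("best combination on the grid:", best)
    ```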

  7. On Converting Secret Sharing Scheme to Visual Secret Sharing Scheme

    Directory of Open Access Journals (Sweden)

    Wang Daoshun

    2010-01-01

    Traditional Secret Sharing (SS) schemes reconstruct the secret exactly as the original but involve complex computation. Visual Secret Sharing (VSS) schemes decode the secret without computation, but each share is m times as big as the original and the quality of the reconstructed secret image is reduced. Probabilistic visual secret sharing (Prob. VSS) schemes for a binary image use only one subpixel to share the secret image; however, the probability of white pixels in a white area is higher than that in a black area of the reconstructed secret image. SS schemes, VSS schemes, and Prob. VSS schemes have various construction methods and advantages. This paper first presents an approach to convert (transform) an SS scheme to a VSS scheme for greyscale images. The generation of the shadow images (shares) is based on the Boolean XOR operation. The secret image can be reconstructed directly by performing the Boolean OR operation, as in most conventional VSS schemes. Its pixel expansion is significantly smaller than that of VSS schemes. The quality of the reconstructed images, measured by average contrast, is the same as for VSS schemes. A novel matrix-concatenation approach is then used to extend the greyscale SS scheme to the more general case of a greyscale VSS scheme.
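
    A minimal sketch of XOR-based (n, n) share generation for a greyscale image, in which XOR-ing all shares restores the secret exactly; the paper's shadow-image construction and its OR-based stacking involve additional structure that is not reproduced here.

    ```python
    import numpy as np

    def make_shares(secret, n, rng=None):
        """Split a greyscale image into n shares whose XOR restores it."""
        rng = rng or np.random.default_rng(2)
        shares = [rng.integers(0, 256, secret.shape, dtype=np.uint8)
                  for _ in range(n - 1)]
        last = secret.copy()
        for s in shares:
            last ^= s                    # accumulate the random shares
        shares.append(last)
        return shares

    def reconstruct(shares):
        out = np.zeros_like(shares[0])
        for s in shares:
            out ^= s
        return out

    secret = np.arange(64, dtype=np.uint8).reshape(8, 8)
    shares = make_shares(secret, n=3)
    assert np.array_equal(reconstruct(shares), secret)
    print("secret recovered from", len(shares), "shares")
    ```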

  8. Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.

    Science.gov (United States)

    Hu, Sudeng; Wang, Hanli; Kwong, Sam

    2012-04-01

    In this paper, we investigate the issues of smooth quality and smooth bit rate during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (QP) clip scheme is proposed to optimize quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflowing and underflowing. Third, based on the safe range of the generated bits, an optimal QP clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed QP clip scheme achieves excellent performance in quality smoothness and buffer regulation.

  9. On the optimal sampling of bandpass measurement signals through data acquisition systems

    International Nuclear Information System (INIS)

    Angrisani, L; Vadursi, M

    2008-01-01

    Data acquisition systems (DAS) play a fundamental role in many modern measurement solutions. One of the parameters characterizing a DAS is its maximum sample rate, which imposes constraints on the signals that can be digitized free of aliasing. Bandpass sampling theory singles out separated ranges of admissible sample rates, which can be significantly lower than the carrier frequency. But how should the most convenient sample rate be chosen for the purpose at hand? The paper proposes a method for the automatic selection of the optimal sample rate in measurement applications involving bandpass signals; the effects of sample clock instability and limited resolution are also taken into account. The method allows the user to choose the location of the spectral replicas of the sampled signal in terms of normalized frequency, as well as the minimum guard band between replicas, thus introducing a feature that no DAS currently available on the market seems to offer. A number of experimental tests on bandpass digitally modulated signals are carried out to assess the agreement of the obtained central frequency with the expected one.
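
    The separated ranges of admissible sample rates mentioned above follow from standard bandpass sampling theory: for a signal occupying [fL, fH], alias-free uniform rates satisfy 2*fH/n <= fs <= 2*fL/(n-1) for integer n. The sketch below enumerates these ranges; the paper's additional criteria (clock instability, resolution, guard bands) are not modeled.

    ```python
    def bandpass_rates(f_low, f_high):
        """Alias-free sample-rate ranges (Hz) for a signal in [f_low, f_high],
        lowest admissible rates first."""
        bandwidth = f_high - f_low
        n_max = int(f_high // bandwidth)   # largest usable band index
        ranges = []
        for n in range(n_max, 0, -1):
            lo = 2.0 * f_high / n
            hi = 2.0 * f_low / (n - 1) if n > 1 else float("inf")
            if lo <= hi:
                ranges.append((lo, hi))
        return ranges

    # A 1 MHz wide signal centred at 10.5 MHz can be sampled alias-free far
    # below the 22 MHz that lowpass (Nyquist) reasoning would suggest.
    for lo, hi in bandpass_rates(10e6, 11e6):
        print(f"{lo / 1e6:.3f} MHz  to  {hi / 1e6:.3f} MHz")
    ```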

  10. Basic Principles of Financial Planning in Ex-ante Deposit Insurance Schemes

    Directory of Open Access Journals (Sweden)

    Đurđica Ognjenović

    2006-12-01

    The paper explores the main principles of financial planning in ex-ante deposit insurance schemes from a theoretical perspective and in terms of the EU Directive on deposit-guarantee schemes. The paper then assesses how these principles and standards are used in financial planning in deposit insurance schemes around the world for annual budgeting, strategic planning and optimization of available financial resources. After reviewing the available references and different practices, the conclusion is that there are no clear internationally accepted principles for deposit insurers' financial planning, except some broad and general guidelines. Practices in the industry differ significantly. Given that deposit insurance is in effect a monopolistic business, the lack of clear principles and of proper financial planning may lead to inadequacy of ex-ante funds and negligence on the side of the management of deposit insurance schemes.

  11. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA), based on the idea of model population analysis (MPA), is proposed for variable selection. Unlike most existing optimization methods for variable selection, VISSA statistically evaluates the performance of the variable space at each step of the optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure: first, the variable space shrinks at each step; second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied by most existing methods, is the core of the VISSA strategy. Compared with promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available at https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
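
    A compact sketch of the weighted binary matrix sampling loop as the abstract describes it: per-variable inclusion weights generate random sub-models, the best sub-models update the weights, and the variable space shrinks. The model type, population sizes and stopping rule below are illustrative choices, not those of the paper.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)

    # Synthetic calibration data: only the first 5 of 50 variables matter.
    X = rng.normal(size=(100, 50))
    y = X[:, :5] @ np.array([3.0, -2.0, 1.5, 1.0, -1.0]) + rng.normal(0, 0.1, 100)

    w = np.full(50, 0.5)                       # inclusion weights
    for _ in range(15):
        M = rng.random((200, 50)) < w          # weighted binary sampling matrix
        M[:, w > 0.99] = True                  # keep locked-in variables
        scores = np.array([
            cross_val_score(LinearRegression(), X[:, m], y, cv=3).mean()
            if m.any() else -np.inf for m in M])
        top = M[np.argsort(scores)[-20:]]      # best 10% of the sub-models
        w = top.mean(axis=0)                   # new weights shrink the space

    print("selected variables:", np.where(w > 0.5)[0])
    ```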

  12. Cognitive Aware Interference Mitigation Scheme for OFDMA Femtocells

    KAUST Repository

    Alqerm, Ismail

    2015-04-09

    Femtocell deployment in today's cellular networks came into practice to fulfill the increasing demand for data services and to extend coverage in indoor areas. However, interference to other femtocell and macrocell users remains an unresolved challenge. In this paper, we propose an interference mitigation scheme to control the cross-tier interference caused by femtocells to macro users and the co-tier interference among femtocells. The spectrum sensing capability of cognitive radio is utilized to determine the channels that are unoccupied or that cause minimal interference to the macro users. An awareness-based channel allocation scheme is developed with the assistance of a graph-coloring algorithm to assign channels to the femtocell base stations with power optimization, minimal interference, maximum throughput, and maximum spectrum efficiency. In addition, the scheme exploits negotiation capability to match traffic load and QoS with the channel, and to maintain efficient utilization of the available channels.
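
    A greedy graph-coloring pass over a toy interference graph illustrates the channel assignment step; the paper's scheme additionally folds in spectrum sensing, power optimization and negotiation, which are omitted from this sketch.

    ```python
    def assign_channels(interference, n_channels):
        """Greedy coloring: keys are femtocells, values are neighbor sets.
        Returns a channel per femtocell, or -1 if none is conflict-free."""
        # Color the most constrained (highest degree) cells first.
        order = sorted(interference, key=lambda c: len(interference[c]),
                       reverse=True)
        channel = {}
        for cell in order:
            used = {channel[nb] for nb in interference[cell] if nb in channel}
            free = [ch for ch in range(n_channels) if ch not in used]
            channel[cell] = free[0] if free else -1
        return channel

    # Toy interference graph: an edge means two femtocells would interfere.
    graph = {"F1": {"F2", "F3"}, "F2": {"F1", "F3"},
             "F3": {"F1", "F2", "F4"}, "F4": {"F3"}}
    print(assign_channels(graph, n_channels=3))
    ```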

  13. Nonlinear H∞ Optimal Control Scheme for an Underwater Vehicle with Regional Function Formulation

    Directory of Open Access Journals (Sweden)

    Zool H. Ismail

    2013-01-01

    A conventional region control technique cannot meet the demands of accurate tracking performance in view of its inability to accommodate highly nonlinear system dynamics, imprecise hydrodynamic coefficients, and external disturbances. In this paper, a robust technique is presented for an Autonomous Underwater Vehicle (AUV) with a region tracking function. Within this control scheme, nonlinear H∞ and region-based control schemes are used. A Lyapunov-like function is presented for the stability analysis of the proposed control law. Numerical simulations are presented to demonstrate the performance of the proposed tracking control of the AUV. It is shown that the proposed control law is robust against parameter uncertainties, external disturbances, and nonlinearities, and that it leads to uniform ultimate boundedness of the region tracking error.

  14. Evaluation of world's largest social welfare scheme: An assessment using non-parametric approach.

    Science.gov (United States)

    Singh, Sanjeet

    2016-08-01

    The Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) is the world's largest social welfare scheme in India for poverty alleviation through rural employment generation. This paper aims to evaluate and rank the performance of the states in India under the MGNREGA scheme. A non-parametric approach, Data Envelopment Analysis (DEA), is used to calculate the overall technical, pure technical, and scale efficiencies of states in India. The sample data are drawn from the annual official reports published by the Ministry of Rural Development, Government of India. Based on three selected input parameters (expenditure indicators) and five output parameters (employment generation indicators), I apply both input- and output-oriented DEA models to estimate how well the states utilized their resources and generated outputs during the financial year 2013-14. The relative performance evaluation has been made under the assumption of constant returns to scale and also under variable returns to scale to assess the impact of scale on performance. The results indicate that the main sources of inefficiency are both the technical and the managerial practices adopted. 11 states are overall technically efficient and operate at the optimum scale, whereas 18 states are pure technical or managerially efficient. It has been found that for some states it is necessary to alter the scheme size to perform on par with the best performing states. For inefficient states, optimal input and output targets, along with the resource savings and output gains, are calculated. Analysis shows that if all inefficient states operated at optimal input and output levels, on average 17.89% of total expenditure, a total amount of $780 million, could have been saved in a single year. Most of the inefficient states perform poorly when it comes to the participation of women and disadvantaged sections (SC&ST) in the scheme. In order to catch up with the performance of the best performing states, inefficient states on average need to enhance
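
    An input-oriented, constant-returns-to-scale DEA efficiency score reduces to a small linear program, as sketched below with scipy; the numbers are toy data, not the MGNREGA expenditure and employment indicators.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, k):
        """Input-oriented CCR efficiency of unit k under constant returns.
        X: (units, inputs), Y: (units, outputs)."""
        n = X.shape[0]
        c = np.r_[1.0, np.zeros(n)]            # minimize theta
        # Inputs:  sum_j lam_j * x_ji <= theta * x_ki
        A_in = np.c_[-X[k][:, None], X.T]
        # Outputs: sum_j lam_j * y_jr >= y_kr
        A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[1]), -Y[k]],
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.fun

    # Toy data: one expenditure input and one employment output per state.
    X = np.array([[10.0], [20.0], [30.0], [25.0]])
    Y = np.array([[100.0], [150.0], [330.0], [200.0]])
    for k in range(len(X)):
        print(f"state {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")
    ```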

  15. Optimally Joint Subcarrier Matching and Power Allocation in OFDM Multihop System

    Directory of Open Access Journals (Sweden)

    Shuyuan Yang

    2008-04-01

    Orthogonal frequency division multiplexing (OFDM) multihop systems are a promising way to increase capacity and coverage. In this paper, we propose an optimally joint subcarrier matching and power allocation scheme to maximize the total channel capacity under a total system power constraint. First, the problem is formulated as a mixed binary integer programming problem, for which finding the global optimum is computationally prohibitive. Second, by making use of the equivalent channel power gain of any matched subcarrier pair, a low-complexity scheme is proposed: the optimal subcarrier matching pairs subcarriers in the order of their channel power gains, and the optimal power allocation among the matched subcarrier pairs is water-filling. An analytical argument proves that the two steps achieve the optimally joint subcarrier matching and power allocation. Simulation results show that the proposed scheme achieves the largest total channel capacity compared to the other schemes, where there is no subcarrier matching or no power allocation.
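
    A sketch of the two-step scheme: sort both hops' channel gains, pair them in order, then water-fill the total power over the pairs' equivalent gains. The equivalent-gain formula g1*g2/(g1+g2) and the factor 1/2 for the two-hop transmission are my assumptions for a dual-hop link; the abstract does not spell them out.

    ```python
    import numpy as np

    def match_and_waterfill(g1, g2, P_total, noise=1.0):
        """Pair subcarriers by sorted gains, then water-fill over the pairs."""
        i1, i2 = np.argsort(g1)[::-1], np.argsort(g2)[::-1]
        g_eq = g1[i1] * g2[i2] / (g1[i1] + g2[i2])   # assumed dual-hop gain
        # Water-filling: p_k = max(0, mu - noise/g_k) with sum p_k = P_total.
        thresh = np.sort(noise / g_eq)
        for m in range(len(g_eq), 0, -1):            # try m active pairs
            mu = (P_total + thresh[:m].sum()) / m
            if mu > thresh[m - 1]:
                break
        p = np.maximum(0.0, mu - noise / g_eq)
        cap = 0.5 * np.log2(1.0 + p * g_eq / noise).sum()   # 1/2: two hops
        return list(zip(i1, i2)), p, cap

    g1 = np.array([0.2, 1.5, 0.7, 1.0])
    g2 = np.array([0.9, 0.3, 1.2, 0.6])
    pairs, power, capacity = match_and_waterfill(g1, g2, P_total=4.0)
    print(pairs, power.round(3), round(capacity, 3))
    ```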

  17. SMR-Based Adaptive Mobility Management Scheme in Hierarchical SIP Networks

    Directory of Open Access Journals (Sweden)

    KwangHee Choi

    2014-10-01

    In hierarchical SIP networks, paging is performed to reduce the location update signaling cost for mobility management. However, the cost efficiency largely depends on each mobile node's session-to-mobility ratio (SMR), which is defined as the ratio of the session arrival rate to the movement rate. In this paper, we propose an adaptive mobility management scheme that determines the paging policy according to each mobile node's SMR. Each mobile node decides whether paging is applied by comparing its SMR with a threshold; that is, paging is applied to a mobile node when its SMR is less than the threshold. The proposed scheme therefore provides a way to minimize signaling costs according to each mobile node's SMR. We derive the optimal threshold through performance analysis, and show that the proposed scheme reduces the signaling cost compared with the existing SIP and paging schemes in hierarchical SIP networks.

  18. Closed-Loop Autofocus Scheme for Scanning Electron Microscope

    Directory of Open Access Journals (Sweden)

    Cui Le

    2015-01-01

    In this paper, we present a full-scale autofocus approach for the scanning electron microscope (SEM). The optimal focus (in-focus) position of the microscope is achieved by maximizing the image sharpness using a vision-based closed-loop control scheme. An iterative optimization algorithm has been designed using a sharpness score derived from image gradient information. The proposed method has been implemented and validated in real time using a tungsten gun SEM under various experimental conditions, such as varying raster scan speed and magnification. We demonstrate that the proposed autofocus technique is accurate, robust and fast.
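
    A sketch of sharpness-driven focusing: a Tenengrad-style gradient score and a golden-section search over the focus setting, with a simulated blur model standing in for the SEM; the paper's exact sharpness score and closed-loop controller are not reproduced here.

    ```python
    import numpy as np

    def sharpness(img):
        """Tenengrad-style score: mean squared image gradient magnitude."""
        gy, gx = np.gradient(img.astype(float))
        return float(np.mean(gx ** 2 + gy ** 2))

    def autofocus(acquire, lo, hi, tol=1e-3):
        """Golden-section search for the focus maximizing sharpness;
        acquire(z) returns an image at focus setting z."""
        phi = (np.sqrt(5.0) - 1.0) / 2.0
        a, b = lo, hi
        c, d = b - phi * (b - a), a + phi * (b - a)
        fc, fd = sharpness(acquire(c)), sharpness(acquire(d))
        while b - a > tol:
            if fc > fd:                       # maximum lies in [a, d]
                b, d, fd = d, c, fc
                c = b - phi * (b - a)
                fc = sharpness(acquire(c))
            else:                             # maximum lies in [c, b]
                a, c, fc = c, d, fd
                d = a + phi * (b - a)
                fd = sharpness(acquire(d))
        return 0.5 * (a + b)

    # Simulated microscope: blur grows with distance from true focus 0.37.
    rng = np.random.default_rng(4)
    scene = rng.random((64, 64))

    def acquire(z):
        sigma = 1.0 + 40.0 * abs(z - 0.37)
        k = np.exp(-0.5 * (np.arange(-8, 9) / sigma) ** 2)
        k /= k.sum()
        out = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, scene)
        return np.apply_along_axis(lambda c2: np.convolve(c2, k, "same"), 0, out)

    print("estimated focus:", round(autofocus(acquire, 0.0, 1.0), 3))
    ```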

  19. Optimal control of nonlinear continuous-time systems in strict-feedback form.

    Science.gov (United States)

    Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani

    2015-10-01

    This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.

  20. Risk:reward sharing contracts in the oil industry: the effects of bonus:penalty schemes

    International Nuclear Information System (INIS)

    Kemp, A.G.; Stephen, L.

    1999-01-01

    Partnering and alliancing among oil companies and their contractors have become common in the oil industry in recent years. The risk:reward mechanisms established very often incorporate bonus/penalty schemes in relation to agreed base values. This paper examines the efficiency requirements of such schemes. The effects of project cost and completion risks on the risk:reward positions of field investors and contractors, with and without bonus/penalty schemes, are examined with the aid of Monte Carlo simulation analysis. The schemes increase the total risk for contractors and have consequences for their cost of capital and for optimal risk-bearing arrangements within the industry. (author)
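
    A toy Monte Carlo comparison of a fixed-fee contract against a bonus/penalty band around an agreed base cost shows how such schemes widen the spread of contractor outcomes; the cost distribution, sharing fraction and cap below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000

    base_cost = 100.0                    # agreed base value (million $)
    actual = rng.lognormal(mean=np.log(base_cost), sigma=0.15, size=n)

    fee = 8.0                            # contractor's fixed fee
    share = 0.5                          # fraction of over/underrun shared
    cap = 15.0                           # bonus/penalty capped at +/- cap

    plain = np.full(n, fee)                                   # no scheme
    bp = fee + np.clip(share * (base_cost - actual), -cap, cap)

    for name, x in [("fixed fee", plain), ("bonus/penalty", bp)]:
        print(f"{name:>13}: mean={x.mean():6.2f}  std={x.std():5.2f}  "
              f"P(loss)={np.mean(x < 0):.3f}")
    ```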

  1. Optimal model-free prediction from multivariate time series

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V.; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors, which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since the right combination of predictors often matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here, a prediction scheme that overcomes this strong limitation is introduced, utilizing a causal preselection step that drastically reduces the number of possible predictors to the most predictive set of causal drivers, making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework for applying the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of the El Niño Southern Oscillation.

  2. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    Science.gov (United States)

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet, yet this is not the optimal use of MC in this problem. In fact, some beamlets have very small intensities after the plan optimization problem is solved; for those beamlets, it may be possible to use fewer particles in the dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for the beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization scheme in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result within 3% difference in fluence map and 1% difference in dose from the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet. Correspondingly, the computation

  3. Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo

    2016-11-01

    We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for the regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study.

  4. Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples

    Directory of Open Access Journals (Sweden)

    Hyunok Oh

    2003-05-01

    In multimedia and graphics applications, data samples of non-primitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model, given the execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and by 40% for an H.263 encoder compared with unshared versions, and by 22% compared with a previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirement.

  5. Improved QRD-M Detection Algorithm for Generalized Spatial Modulation Scheme

    Directory of Open Access Journals (Sweden)

    Xiaorong Jing

    2017-01-01

    Generalized spatial modulation (GSM) is a spectrally and energy efficient multiple-input multiple-output (MIMO) transmission scheme. Directly applying the original QR-decomposition with M algorithm (QRD-M) to the GSM scheme leads to imperfect detection performance with relatively high computational complexity. In this paper, an improved QRD-M algorithm is proposed for GSM signal detection, which achieves near-optimal performance with relatively low complexity. Based on the QRD, the improved algorithm first transforms the maximum likelihood (ML) detection of the GSM signals into a search over an inverted tree structure. Then, in the search over the M branches, the branches corresponding to illegitimate transmit antenna combinations (TACs) or to invalid numbers of active antennas are pruned, exploiting the characteristics of GSM signals to improve the validity of the surviving branches at each level. Simulation results show that the improved QRD-M detection algorithm provides performance similar to ML with reduced computational complexity compared to the original QRD-M algorithm, and that the optimal value of the parameter M of the improved QRD-M algorithm for detection of the GSM scheme is equal to the modulation order plus one.

  6. How to resolve the factorization- and the renormalization-scheme ambiguities simultaneously

    International Nuclear Information System (INIS)

    Nakkagawa, H.; Niegawa, A.

    1982-01-01

    A combined investigation of both the factorization- and renormalization-scheme dependences of perturbative QCD calculations is reported. Applying Stevenson's optimization method, we obtain a remarkable result, which forces us to exponentiate 'everything' with uncorrected subprocess cross sections. (orig.)

  7. Research on crude oil storage and transportation based on optimization algorithm

    Science.gov (United States)

    Yuan, Xuhua

    2018-04-01

    At present, optimization theory and methods are widely used in the optimal scheduling and operation of complex production systems. The theoretical results are implemented on the C++Builder 6 development platform, and a simulation and intelligent decision system for crude oil storage and transportation inventory scheduling is designed. The system includes modules for project management, data management, graphics processing, and simulation of oil depot operation schemes, and it can optimize the scheduling scheme of a crude oil storage and transportation system. A multi-point temperature measuring system for monitoring the temperature field of a floating roof oil storage tank is also developed. The results show that by optimizing operating parameters such as tank operating mode and temperature, the total transportation scheduling costs of the storage and transportation system can be reduced by 9.1%. Therefore, this method can realize safe and stable operation of a crude oil storage and transportation system.

  8. Decoupling Scheme for a Cryogenic Rx-Only RF Coil for 13C Imaging at 3T

    DEFF Research Database (Denmark)

    Sanchez, Juan Diego; Søvsø Szocska Hansen, Esben; Laustsen, Christoffer

    In this study we evaluate the different active decoupling schemes that can be used to drive an Rx-only coil, in order to determine the optimal design for 13C MRI at 3T. Three different circuit schemes are studied: two known ones (with regular series and parallel tuning, respectively) and a novel one which we found to be optimal for this case. The circuits have been cooled to 77 K to reduce coil noise. Preliminary tests with the preamplifier cooled to 77 K, for reduction of the noise figure, are also reported.

  9. Quality control scheme for thyroid related hormones measured by radioimmunoassay

    International Nuclear Information System (INIS)

    Kamel, R.S.

    1989-09-01

    A regional quality control scheme for thyroid related hormones measured by radioimmunoassay is being established in the Middle East. The scheme started in January 1985 with eight laboratories, all of them from Iraq. At present, nineteen laboratories from Iraq, Jordan, Kuwait, Saudi Arabia and the United Arab Emirates (Dubai) are participating in the scheme, which is supported by the International Atomic Energy Agency. All participants receive three freeze-dried quality control samples monthly for assay. Results for T3, T4 and TSH received from participants are analysed statistically batch by batch and returned to the participants. Laboratories reporting markedly biased results were contacted to check the assay performance for that particular batch and to define the weak points. Clinical interpretations for certain well-defined samples were reported. A regular case study report has recently been introduced to the scheme and will be distributed regularly as one of the guidelines in establishing a trouble-shooting programme throughout the scheme. The overall between-laboratory performance was good for T4, moderate but acceptable for T3, and poor for TSH. The statistical analysis of the results is based on the concept of a "target" value derived from the believed correct value, the median. The overall mean bias values (ignoring signs) for the low, normal and high concentration samples, respectively, were 18.0 ± 12.5, 11.2 ± 6.4 and 11.2 ± 6.4 for T4; 28.8 ± 23.5, 11.2 ± 8.4 and 13.4 ± 9.0 for T3; and 46.3 ± 50.1, 37.2 ± 28.5 and 19.1 ± 12.1 for TSH. The scheme proved to be effective not only in improving the overall performance but also in developing awareness of the need for internal quality control programmes and in giving confidence in the participants' results. The scheme will continue and will be expanded to involve more laboratories in the region. Refs, fig and tabs
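
    The "target value" concept lends itself to a one-line computation: each laboratory's percent bias relative to the all-laboratory median for a control sample, as sketched below with made-up T4 results.

    ```python
    import numpy as np

    def bias_vs_median(results):
        """Percent bias of each laboratory from the robust target (median).
        results: dict mapping lab -> reported value for one QC sample."""
        target = np.median(list(results.values()))
        return {lab: 100.0 * (v - target) / target
                for lab, v in results.items()}

    # One T4 control sample (nmol/L) as reported by five laboratories.
    batch = {"lab1": 98.0, "lab2": 105.0, "lab3": 96.0,
             "lab4": 120.0, "lab5": 101.0}
    for lab, b in bias_vs_median(batch).items():
        print(f"{lab}: {b:+.1f} %")
    ```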

  10. Continuous quality control of the blood sampling procedure using a structured observation scheme

    DEFF Research Database (Denmark)

    Seemann, Tine Lindberg; Nybo, Mads

    2016-01-01

    INTRODUCTION: An observational study was conducted using a structured observation scheme to assess compliance with the local phlebotomy guideline, to identify necessary focus items, and to investigate whether adherence to the phlebotomy guideline improved. MATERIALS AND METHODS: The questionnaire...

  11. Determination of total concentration of chemically labeled metabolites as a means of metabolome sample normalization and sample loading optimization in mass spectrometry-based metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2012-12-18

    For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting material from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach for determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting material for MS analysis as well as the optimization of the sample loading amount into a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at a detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with 12C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure that an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with 13C-dansylation. Equal amounts of the 12C

  12. Co-ordination of renewable energy support schemes in the EU

    Energy Technology Data Exchange (ETDEWEB)

    Grenaa Jensen, S.; Morthorst, P.E. [Risoe National Lab., Roskilde (Denmark)

    2007-05-15

    This paper illustrates the effect that can be observed when support schemes for renewable energy are regionalised. Two theoretical examples are used to explain interactive effects on, e.g., the price of power, conditions for conventional power producers, and changes in the import and export of power. The results are based on a deterministic partial equilibrium model in which two cases are studied. The first case covers countries with regional power markets that also regionalise their tradable green certificate (TGC) support schemes; the second, countries with separate national power markets that regionalise their TGC support schemes. The main findings indicate that the almost ideal situation exists if the region already has a common liberalised power market prior to regionalising its RES-E support scheme. In this case, introduction of a common TGC support scheme for renewable technologies will lead to more efficient siting of renewable plants, improving the economic and environmental performance of the total power system. But if no such common power market exists, regionalising the TGC schemes might, due to interactions, introduce distortions in the conventional power system. Thus, contrary to intentions, we might in this case end up with a system that is far from optimal with regard to efficiency and emissions. (au)

  14. Evaluation of the readsorption of plutonium and americium in dynamic fractionations of environmental solid samples

    DEFF Research Database (Denmark)

    Petersen, Roongrat; Hou, Xiaolin; Hansen, Elo Harald

    2008-01-01

    A dynamic extraction system exploiting sequential injection (SI) for sequential extractions, incorporating a specially designed extraction column, is developed to fractionate radionuclides in environmental solid samples such as soils and sediments. The extraction column can contain a large amount of soil sample (up to 5 g) and, under optimal operational conditions, does not give rise to back pressure. Attention has been placed on studies of the readsorption problems during sequential extraction using a modified Standards, Measurements and Testing (SM&T) scheme with 4-step sequential...

  15. An Interference Cancellation Scheme for High Reliability Based on MIMO Systems

    Directory of Open Access Journals (Sweden)

    Jae-Hyun Ro

    2018-03-01

    This article proposes a new interference cancellation scheme for a half-duplex two-path relay system. In the conventional two-path relay system, inter-relay interference (IRI), which severely degrades the error performance at the destination, occurs because the source and a relay transmit signals simultaneously at a specific time. Unlike the conventional relay system, which removes the IRI at the destination, the proposed scheme removes the IRI at the relay, yielding a higher signal-to-interference-plus-noise ratio (SINR) so that an interference-free signal is received at the destination. To handle the IRI, the proposed scheme uses multiple-input multiple-output (MIMO) signal detection at the relays, which keeps the signal processing at the destination, usually a mobile user, of low complexity. At the relays, the proposed scheme uses the low-complexity QR-decomposition with M algorithm (QRD-M) to optimally remove the IRI. To obtain diversity gain, the proposed scheme also uses cyclic delay diversity (CDD) for the transmissions at the source and the relays. Simulation results show that, unlike the conventional scheme, the error performance of the proposed scheme improves when the distance between the relays is small, because the QRD-M detects the received signals in order of decreasing post-detection signal-to-noise ratio (SNR).

  16. Optimizing the Betts-Miller-Janjic cumulus parameterization with Intel Many Integrated Core (MIC) architecture

    Science.gov (United States)

    Huang, Melin; Huang, Bormin; Huang, Allen H.-L.

    2015-10-01

    Cumulus parameterization schemes account for the sub-grid-scale effects of convective and/or shallow clouds, and are intended to represent the vertical fluxes due to unresolved updrafts and downdrafts and the compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. All of the schemes provide the convective component of surface rainfall. Betts-Miller-Janjic (BMJ) is one scheme that fulfills these purposes in the Weather Research and Forecasting (WRF) model. The National Centers for Environmental Prediction (NCEP) has tried to optimize the BMJ scheme for operational application. As there are no interactions among horizontal grid points, this scheme is very suitable for parallel computation. The Intel Xeon Phi Many Integrated Core (MIC) architecture, with its efficient parallelization and vectorization essentials, allows us to optimize the BMJ scheme. Compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, the MIC-based optimization of this scheme running on a Xeon Phi 7120P coprocessor improves performance by 2.4x and 17.0x, respectively.

  17. Energy efficient scheme for cognitive radios utilizing soft sensing

    KAUST Repository

    Alabbasi, AbdulRahman; Rezki, Zouheir; Shihada, Basem

    2014-01-01

    In this paper we propose an energy efficient cognitive radio system. Our design considers underlay resource allocation combined with soft sensing information to achieve a sub-optimum energy efficient system. The sub-optimality comes from optimizing over a channel inversion power policy instead of a water-filling power policy. We use an energy-per-goodbit (EPG) metric both to express the energy-efficiency objective function of the system and to evaluate its performance. Since our optimization problem is not a known convex problem, we prove its convexity to guarantee feasibility. We evaluate the proposed scheme against a benchmark system through both analytical and numerical results.

  19. Green-Frag: Energy-Efficient Frame Fragmentation Scheme for Wireless Sensor Networks

    KAUST Repository

    Daghistani, Anas H.

    2013-01-01

    that is optimized to be energy efficient, which originates from the chosen frame fragmentation scheme. This new energy-efficient frame fragmentation protocol is called Green-Frag. Green-Frag uses an algorithm that gives sensor nodes the ability to transmit data

  20. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    Science.gov (United States)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or select portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until the desired number of alternatives has been generated. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because the search at each iteration is confined to the hit line; the algorithm can move in one
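
    A simplified hit-and-run move over a box-bounded, non-convex near-optimal region is sketched below; the null-space transform for equality constraints and the slice-sampling step along the line are replaced by plain rejection, so this illustrates the move structure rather than the paper's full algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def hit_and_run(x0, feasible, lo, hi, n_samples, tries=50):
        """Sample near-optimal alternatives: from the current hit point,
        pick a random direction, intersect the line with the box bounds,
        then draw points on that segment until one satisfies feasible()."""
        x, out = np.asarray(x0, float), []
        while len(out) < n_samples:
            d = rng.normal(size=x.size)
            d /= np.linalg.norm(d)
            with np.errstate(divide="ignore"):
                t1, t2 = (lo - x) / d, (hi - x) / d
            t_lo = np.max(np.minimum(t1, t2))
            t_hi = np.min(np.maximum(t1, t2))
            for _ in range(tries):             # rejection along the line
                cand = x + rng.uniform(t_lo, t_hi) * d
                if feasible(cand):
                    x = cand
                    out.append(cand)
                    break
        return np.array(out)

    # Near-optimal region: [0,1]^2 intersected with a non-convex ring that
    # stands in for "within tolerance of the optimal objective value".
    feas = lambda x: 0.2 < np.hypot(x[0] - 0.5, x[1] - 0.5) < 0.45
    pts = hit_and_run([0.8, 0.5], feas, np.zeros(2), np.ones(2), 500)
    print(len(pts), "alternatives, centroid", pts.mean(axis=0).round(3))
    ```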

  1. Integrated optical 3D digital imaging based on DSP scheme

    Science.gov (United States)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme for integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is based on a parallel hardware structure that uses the DSP and a field programmable gate array (FPGA) to realize 3-D imaging, and adopts phase measurement profilometry. To realize pipelined processing of fringe projection, image acquisition and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system), whose preemptive kernel and powerful configuration tool enable real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.

  2. Positivity-preserving CE/SE schemes for solving the compressible Euler and Navier–Stokes equations on hybrid unstructured meshes

    KAUST Repository

    Shen, Hua; Parsani, Matteo

    2018-01-01

    The schemes use an a posteriori limiter to prevent negative densities and pressures, based on the premise of preserving optimal accuracy. The limiter enforces a constraint on the spatial derivatives and does not change the conservative property of CE/SE schemes.

  3. Airfoil shape optimization using non-traditional optimization technique and its validation

    Directory of Open Access Journals (Sweden)

    R. Mukesh

    2014-07-01

    Full Text Available Computational fluid dynamics (CFD) is one of the computer-based solution methods widely employed in aerospace engineering. The computational power and time required to carry out the analysis increase as the fidelity of the analysis increases. Aerodynamic shape optimization has become a vital part of aircraft design in recent years. Generally, to optimize an airfoil we must first describe it, and that requires at least a hundred points of x and y coordinates; optimizing over such a large number of coordinates is difficult. Many parameterization schemes, such as B-spline and PARSEC, are therefore used to describe a general airfoil. The main goal of these parameterization schemes is to reduce the number of parameters needed to as few as possible while still controlling the important aerodynamic features effectively. Here the work has been done on the PARSEC geometry representation method. The objective of this work is to introduce the knowledge of describing a general airfoil using twelve parameters by representing its shape as a polynomial function, and to apply a genetic algorithm to optimize the aerodynamic characteristics of a general airfoil for specific conditions. A MATLAB program has been developed to implement PARSEC, the panel technique, and the genetic algorithm. This program has been tested for a standard NACA 2411 airfoil and optimized to improve its coefficient of lift. Pressure distribution and coefficient of lift for airfoil geometries have been calculated using the panel method. The optimized airfoil has an improved coefficient of lift compared to the original one, and is validated using wind tunnel data.
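
    A sketch of one common PARSEC formulation in Python (NumPy): each surface is z(x) = sum of a_n x^(n-1/2) for n = 1..6, with the six coefficients solved from six geometric conditions; the twelve parameters of the abstract correspond to one such set per surface. The numeric values below are hypothetical, not taken from the paper:

    ```python
    import numpy as np

    def parsec_coefficients(r_le, x_c, z_c, z_xx, z_te, theta_te):
        """Solve the 6 coefficients of one PARSEC surface, z(x) = sum a_n x^(n-1/2)
        for x in [0, 1] chord units, from 6 geometric conditions."""
        p = np.arange(1, 7) - 0.5                 # exponents 0.5, 1.5, ..., 5.5
        A, b = np.zeros((6, 6)), np.zeros(6)
        A[0, 0], b[0] = 1.0, np.sqrt(2.0 * r_le)  # a1 set by leading-edge radius
        A[1], b[1] = np.ones(6), z_te             # z(1) = trailing-edge height
        A[2], b[2] = x_c ** p, z_c                # z(x_c) = crest height
        A[3], b[3] = p * x_c ** (p - 1.0), 0.0    # z'(x_c) = 0 at the crest
        A[4], b[4] = p * (p - 1.0) * x_c ** (p - 2.0), z_xx  # crest curvature
        A[5], b[5] = p, np.tan(theta_te)          # z'(1) = trailing-edge slope
        return np.linalg.solve(A, b)

    # Hypothetical upper-surface parameters (not from the paper)
    a = parsec_coefficients(r_le=0.01, x_c=0.4, z_c=0.06, z_xx=-0.45,
                            z_te=0.0, theta_te=np.deg2rad(-8.0))
    x = np.linspace(0.0, 1.0, 100)
    z_upper = sum(a[n] * x ** (n + 0.5) for n in range(6))
    ```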

  4. Development and optimization of the determination of pharmaceuticals in water samples by SPE and HPLC with diode-array detection.

    Science.gov (United States)

    Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra

    2013-09-01

    This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges in extracting the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, the elution solvent, and the sorbent mass were optimized. In addition to optimization of the SPE procedure, the optimal HPLC column was selected from columns with different stationary phases from different manufacturers. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases except ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good precision (intra- and interday) with RSD below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Analysis of energy efficiency retrofit schemes for heating, ventilating and air-conditioning systems in existing office buildings based on the modified bin method

    International Nuclear Information System (INIS)

    Wang, Zhaoxia; Ding, Yan; Geng, Geng; Zhu, Neng

    2014-01-01

    Highlights: • A modified bin method is adopted to propose and optimize the EER schemes. • A case study is presented to demonstrate the analysis procedures of EER schemes. • Pertinent EER schemes for HVAC systems are proposed for the object building. - Abstract: Poor thermal performance of the building envelope and low efficiencies of heating, ventilating and air-conditioning (HVAC) systems are common in existing office buildings with large energy consumption. This paper adopts a modified bin method to propose and optimize energy efficiency retrofit (EER) schemes. An existing office building in Tianjin was selected as an example to demonstrate the procedure of formulating the design scheme. Pertinent retrofit schemes for the HVAC system were proposed after the retrofit of the building envelope. With comprehensive consideration of energy efficiency and economic benefits, the recommended scheme, which could improve the overall energy efficiency by 71.20%, was determined.

  6. An implementation of particle swarm optimization to evaluate optimal under-voltage load shedding in competitive electricity markets

    Science.gov (United States)

    Hosseini-Bioki, M. M.; Rashidinejad, M.; Abdollahi, A.

    2013-11-01

    Load shedding is a crucial issue in power systems, especially in a restructured electricity environment. Market-driven load shedding in restructured power systems, associated with security as well as reliability, is investigated in this paper. A technoeconomic multi-objective function is introduced to reveal an optimal load shedding scheme considering maximum social welfare. The proposed optimization problem includes maximum profits for GENCOs and loads as well as the maximum loadability limit under normal and contingency conditions. Particle swarm optimization (PSO), a heuristic optimization technique, is utilized to find an optimal load shedding scheme. In a market-driven structure, generators offer their bidding blocks while the dispatchable loads bid their price-responsive demands. An independent system operator (ISO) derives a market clearing price (MCP) while rescheduling the amount of generating power in both pre-contingency and post-contingency conditions. The proposed methodology is developed on a 3-bus system and then applied to a modified IEEE 30-bus test system. The obtained results show the effectiveness of the proposed methodology in implementing optimal load shedding that satisfies social welfare while maintaining the voltage stability margin (VSM) through technoeconomic analyses.
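
    A minimal PSO sketch in Python for context. The toy objective stands in for the much richer technoeconomic objective of the paper, and the inertia and acceleration coefficients are conventional textbook values, not the authors' settings:

    ```python
    import numpy as np

    def pso(objective, lb, ub, n_particles=30, n_iter=200,
            w=0.7, c1=1.5, c2=1.5, seed=None):
        """Minimal particle swarm optimizer (minimization) over box bounds."""
        rng = np.random.default_rng(seed)
        lb, ub = np.asarray(lb, float), np.asarray(ub, float)
        x = rng.uniform(lb, ub, size=(n_particles, lb.size))  # positions
        v = np.zeros_like(x)                                  # velocities
        p_best = x.copy()
        p_val = np.apply_along_axis(objective, 1, x)
        g_best = p_best[p_val.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, *x.shape))
            # Inertia + cognitive pull to own best + social pull to global best
            v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
            x = np.clip(x + v, lb, ub)
            val = np.apply_along_axis(objective, 1, x)
            better = val < p_val
            p_best[better], p_val[better] = x[better], val[better]
            g_best = p_best[p_val.argmin()].copy()
        return g_best, p_val.min()

    # Stand-in for the negated social-welfare objective of the paper
    best_x, best_f = pso(lambda z: np.sum((z - 0.3) ** 2),
                         lb=np.zeros(4), ub=np.ones(4), seed=1)
    ```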

  7. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    Science.gov (United States)

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with the PopED software. Individual area under the curve estimates were generated by Bayesian estimation using the full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling times (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98). The proposed strategy promises to achieve the target area under the curve as part of precision dosing.

  8. Simultaneous analysis in renormalization and factorization scheme dependences in perturbative QCD

    International Nuclear Information System (INIS)

    Nakkagawa, Hisao; Niegawa, Akira.

    1983-01-01

    Combined and thorough investigations of both the factorization and the renormalization scheme dependences of perturbative QCD calculations are given. Our findings are that (i) by introducing a multiscale-dependent coupling, the simultaneous parametrization of both scheme dependences can be accomplished, (ii) Stevenson's optimization method works quite well, giving a remarkable prediction that forces us to exponentiate "everything" with uncorrected subprocess cross sections, and (iii) the perturbation series in QCD may converge when Stevenson's principle of minimal sensitivity is taken into account at each order of perturbative approximation. (author)

  9. Dynamic optimization of dead-end membrane filtration

    NARCIS (Netherlands)

    Blankert, B.; Betlem, Bernardus H.L.; Roffel, B.; Marquardt, Wolfgang; Pantelides, Costas

    2006-01-01

    An operating strategy aimed at minimizing the energy consumption during the filtration phase of dead-end membrane filtration has been formulated. A method allowing fast calculation of trajectories is used so that the strategy can be incorporated in a hierarchical optimization scheme. The optimal trajectory can be

  10. Sampling designs and methods for estimating fish-impingement losses at cooling-water intakes

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-01-01

    Several systems for estimating fish impingement at power plant cooling-water intakes are compared to determine the most statistically efficient sampling designs and methods. Compared to a simple random sampling scheme, the stratified systematic random sampling scheme, the systematic random sampling scheme, and the stratified random sampling scheme yield higher efficiencies and better estimators for the parameters in two models of fish impingement as a time-series process. Mathematical results and illustrative examples of the application of the sampling schemes to simulated and real data are given. Some sampling designs applicable to fish-impingement studies are presented in appendixes.
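
    To illustrate why the structured designs win on data with a time-series pattern, here is a small Python simulation, entirely synthetic: hourly counts with a diurnal cycle, comparing simple random sampling against systematic sampling with a random start (one of the design families compared above) at equal sample size:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic hourly impingement counts with a strong diurnal cycle
    hours = np.arange(24 * 60)                       # 60 days of hourly counts
    counts = rng.poisson(50 + 40 * np.sin(2 * np.pi * hours / 24))

    def simple_random(n):
        """Estimate the mean from n hours drawn completely at random."""
        return counts[rng.choice(counts.size, size=n, replace=False)].mean()

    def systematic(n_per_day):
        """Systematic design: every (24 / n_per_day)-th hour, random start."""
        step = 24 // n_per_day
        return counts[rng.integers(0, step)::step].mean()

    reps = 2000
    srs = [simple_random(240) for _ in range(reps)]       # 4 h/day equivalent
    sys_ = [systematic(4) for _ in range(reps)]
    print(f"true mean          {counts.mean():6.2f}")
    print(f"simple random      sd {np.std(srs):5.2f}")
    print(f"systematic         sd {np.std(sys_):5.2f}   (much smaller)")
    ```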

  11. Numerical study of read scheme in one-selector one-resistor crossbar array

    Science.gov (United States)

    Kim, Sungho; Kim, Hee-Dong; Choi, Sung-Jin

    2015-12-01

    A comprehensive numerical circuit analysis of read schemes for a one-selector one-resistor (1S1R) resistance change memory crossbar array is carried out. Three schemes (the ground, V/2, and V/3 schemes) are compared with each other in terms of sensing margin and power consumption. Without the aid of a complex analytical approach or SPICE-based simulation, a simple numerical iteration method is developed to simulate the entire current flows and node voltages within a crossbar array. Understanding such phenomena is essential in successfully evaluating the electrical specifications of selectors for suppressing intrinsic drawbacks of crossbar arrays, such as sneak current paths and series line resistance problems. This method provides a quantitative tool for the accurate analysis of crossbar arrays and provides guidelines for developing an optimal read scheme, array configuration, and selector device specifications.

  12. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10⁸ ≤ M* ≤ 3 × 10¹¹ M⊙ h⁻² and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  13. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of the Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of a qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of NDT reliability is necessary, and a POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to practitioners of inspection reliability. Manufacturing test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive, so there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the derived POD curve. Not much guidance on the correct sample size can be found in the published literature, where qualitative statements are often given with no further justification. The aim of this paper is to summarise the findings of such work. (author)

  14. Outcome-Dependent Sampling Design and Inference for Cox’s Proportional Hazards Model

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P.; Zhou, Haibo

    2016-01-01

    We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small-sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with real data from the Cancer Incidence and Mortality of Uranium Miners Study. PMID:28090134

  15. Performance tuning Weather Research and Forecasting (WRF) Goddard longwave radiative transfer scheme on Intel Xeon Phi

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2015-10-01

    The next-generation mesoscale numerical weather prediction system, the Weather Research and Forecasting (WRF) model, is designed for dual use in forecasting and research. WRF offers multiple physics options that can be combined in any way; one of these options is radiance computation. The major source of energy for the earth's climate is solar radiation, so it is imperative to accurately model the horizontal and vertical distribution of the heating. The Goddard solar radiative transfer model includes the absorption due to water vapor, ozone, oxygen, carbon dioxide, clouds, and aerosols. The model computes the interactions among absorption and scattering by clouds, aerosols, molecules, and the surface. Finally, fluxes are integrated over the entire longwave spectrum. In this paper, we present our results of optimizing the Goddard longwave radiative transfer scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques, which are discussed in this paper. The optimizations improved the performance of the original Goddard longwave radiative transfer scheme on the Xeon Phi 7120P by a factor of 2.2x. Furthermore, the same optimizations improved the performance on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 2.1x compared to the original Goddard longwave radiative transfer scheme code.

  16. Optimally cloned binary coherent states

    Science.gov (United States)

    Müller, C. R.; Leuchs, G.; Marquardt, Ch.; Andersen, U. L.

    2017-10-01

    Binary coherent state alphabets can be represented in a two-dimensional Hilbert space. We capitalize on this formal connection between the otherwise distinct domains of qubits and continuous-variable states to map binary phase-shift keyed coherent states onto the Bloch sphere and to derive their quantum-optimal clones. We analyze the Wigner function and the cumulants of the clones, and we conclude that optimal cloning of binary coherent states requires a nonlinearity above second order. We propose several practical and near-optimal cloning schemes and compare their cloning fidelity to the optimal cloner.

  17. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  18. Efficient Secure and Privacy-Preserving Route Reporting Scheme for VANETs

    Science.gov (United States)

    Zhang, Yuanfei; Pei, Qianwen; Dai, Feifei; Zhang, Lei

    2017-10-01

    A vehicular ad-hoc network (VANET) is a core component of intelligent traffic management systems and can support various applications such as accident prediction and route reporting. Given the problems caused by traffic congestion, route reporting is a promising application that can help a driver obtain an optimal route and save travel time. Before the convenience of route reporting can be enjoyed, however, security and privacy-preserving issues need to be addressed. In this paper, we propose a new secure and privacy-preserving route reporting scheme for VANETs. In our scheme, only an authenticated vehicle can use the route reporting service provided by the traffic management center. Further, a vehicle receives the response from the traffic management center with low latency and without violating its privacy. Experimental results show that our scheme is much more efficient than the existing one.

  19. Multicore-Optimized Wavefront Diamond Blocking for Optimizing Stencil Updates

    KAUST Repository

    Malas, T.; Hager, G.; Ltaief, Hatem; Stengel, H.; Wellein, G.; Keyes, David E.

    2015-07-02

    The importance of stencil-based algorithms in computational science has focused attention on optimized parallel implementations for multilevel cache-based processors. Temporal blocking schemes leverage the large bandwidth and low latency of caches to accelerate stencil updates and approach theoretical peak performance. A key ingredient is the reduction of data traffic across slow data paths, especially the main memory interface. In this work we combine the ideas of multicore wavefront temporal blocking and diamond tiling to arrive at stencil update schemes that show large reductions in memory pressure compared to existing approaches. The resulting schemes show performance advantages in bandwidth-starved situations, which are exacerbated by the high bytes per lattice update case of variable coefficients. Our thread groups concept provides a controllable trade-off between concurrency and memory usage, shifting the pressure between the memory interface and the CPU. We present performance results on a contemporary Intel processor.

  1. Application of Nontraditional Optimization Techniques for Airfoil Shape Optimization

    Directory of Open Access Journals (Sweden)

    R. Mukesh

    2012-01-01

    Full Text Available The choice of optimization algorithm is one of the most important factors that influence the fidelity of the solution in an aerodynamic shape optimization problem. Nowadays, various optimization methods, such as the genetic algorithm (GA), simulated annealing (SA), and particle swarm optimization (PSO), are widely employed to solve aerodynamic shape optimization problems. In addition to the optimization method, the geometry parameterization is an important factor to be considered during the aerodynamic shape optimization process. The objective of this work is to introduce the knowledge of describing general airfoil geometry using twelve parameters by representing its shape as a polynomial function and coupling this approach with flow solution and optimization algorithms. An aerodynamic shape optimization problem is formulated for the NACA 0012 airfoil and solved using simulated annealing and a genetic algorithm for a 5.0 deg angle of attack. The results show that the simulated annealing scheme is more effective at finding the optimum among the various possible solutions. It is also found that SA exhibits stronger exploitation characteristics, whereas the GA is the more effective explorer.
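
    For reference, a minimal simulated-annealing loop in Python. The quadratic stand-in objective replaces the panel-method lift evaluation of the paper, and the step size, starting temperature, and cooling rate are generic choices, not the authors' values:

    ```python
    import numpy as np

    def simulated_annealing(objective, x0, sigma=0.05, t0=1.0, cool=0.995,
                            n_iter=5000, seed=None):
        """Minimal continuous simulated annealing (minimization)."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        fx, t = objective(x), t0
        best, f_best = x.copy(), fx
        for _ in range(n_iter):
            cand = x + rng.normal(0.0, sigma, x.size)   # random perturbation
            fc = objective(cand)
            # Always accept downhill; accept uphill with Boltzmann probability
            if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < f_best:
                    best, f_best = x.copy(), fx
            t *= cool                                   # geometric cooling
        return best, f_best

    # Stand-in for "negative lift from the panel solver" over 12 PARSEC params
    best, val = simulated_annealing(lambda z: float(np.sum((z - 0.2) ** 2)),
                                    x0=np.zeros(12), seed=0)
    ```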

  2. Corrections to the General (2,4) and (4,4) FDTD Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Meierbachtol, Collin S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Smith, William S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shao, Xuan-Min [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-29

    The sampling weights associated with two general higher order FDTD schemes were derived by Smith, et al. and published in an IEEE Transactions on Antennas and Propagation article in 2012. Inconsistencies between the governing equations and their resulting solutions were discovered within the article. In an effort to track down the root cause of these inconsistencies, the full three-dimensional, higher order FDTD dispersion relation was re-derived using Mathematica™. During this process, two errors were identified in the article; both are highlighted in this document, and the corrected sampling weights are provided. Finally, the original stability limits provided for both schemes are corrected and presented in a more precise form. It is recommended that any future implementations of the two general higher order schemes from the Smith, et al. 2012 article use the sampling weights and stability conditions listed in this document.

  3. Optimization of a fuel bundle within a CANDU supercritical water reactor

    International Nuclear Information System (INIS)

    Schofield, M.E.

    2009-01-01

    The supercritical water reactor is one of six nuclear reactor concepts being studied under the Generation IV International Forum. Generation IV nuclear reactors will improve on the metrics of economics, sustainability, safety and reliability, and physical protection and proliferation resistance over current nuclear reactor designs. The supercritical water reactor has specific benefits in the areas of economics, safety and reliability, and physical protection. This work optimizes the fuel composition and bundle geometry to maximize the fuel burnup and minimize the surface heat flux and the form factor. In optimizing these factors, improvements can be achieved in the economics, safety and reliability of the supercritical water reactor. The WIMS-AECL software was used to model a fuel bundle within a CANDU supercritical water reactor, and the Gauss steepest-descent method was used to optimize the above-mentioned factors. Initially the fresh fuel composition was optimized within a 43-rod CANFLEX bundle and a 61-rod bundle; in both scenarios an online refuelling scheme and a non-refuelling scheme were studied. The geometry of the fuel bundles was then optimized. Finally, a homogeneous mixture of thorium and uranium fuel was studied in a 60-rod bundle. Each optimization process showed definitive improvements in the factors being studied, with the most significant improvement being an increase in the fuel burnup. The 43-rod CANFLEX bundle was the most successfully optimized. There was little difference in the final fresh fuel content between the online refuelling scheme and the non-refuelling scheme. Throughout each optimization scenario the ratio of the fresh fuel content between the annuli was a significant determinant of the improvements in the factors being optimized. The geometry optimization showed that improvement in the design of a fuel bundle is indeed possible, although it would be more advantageous to pursue it

  4. A Novel Iterative Scheme for the Very Fast and Accurate Solution of Non-LTE Radiative Transfer Problems

    Science.gov (United States)

    Trujillo Bueno, J.; Fabiani Bendicho, P.

    1995-12-01

    Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n / (2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel
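
    The relative convergence behaviour is easy to reproduce on a toy linear system. The Python sketch below applies Jacobi, Gauss-Seidel, and SOR to a 1-D diffusion-type matrix, which merely stands in for a transfer operator; the over-relaxation factor 1.8 is close to optimal for this particular matrix, not a value from the paper:

    ```python
    import numpy as np

    def solve(A, b, method="gs", omega=1.0, tol=1e-8, max_iter=20000):
        """Jacobi ('jac'), Gauss-Seidel ('gs'), or SOR ('gs' with omega > 1)."""
        x = np.zeros_like(b, dtype=float)
        for k in range(1, max_iter + 1):
            x_old = x.copy()
            for i in range(len(b)):
                left = x_old[:i] if method == "jac" else x[:i]  # GS reuses updates
                s = A[i, :i] @ left + A[i, i + 1:] @ x_old[i + 1:]
                gs_value = (b[i] - s) / A[i, i]
                x[i] = x_old[i] + omega * (gs_value - x_old[i])  # relaxation
            if np.linalg.norm(x - x_old) < tol:
                return k
        return max_iter

    # 1-D diffusion-type matrix as a crude stand-in for a transfer operator
    n = 30
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    for label, method, omega in [("Jacobi", "jac", 1.0),
                                 ("Gauss-Seidel", "gs", 1.0),
                                 ("SOR", "gs", 1.8)]:
        print(f"{label:13s} iterations: {solve(A, b, method, omega)}")
    ```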

  5. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes

    Directory of Open Access Journals (Sweden)

    Lotz Meredith J

    2008-01-01

    Full Text Available Background: Gene expression data frequently contain missing values; however, most downstream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information in the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. Results: We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Conclusion: Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA

  6. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes.

    Science.gov (United States)

    Brock, Guy N; Shaffer, John R; Blakesley, Richard E; Lotz, Meredith J; Tseng, George C

    2008-01-10

    Gene expression data frequently contain missing values; however, most down-stream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information in the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity.
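
    The entropy measure itself is not spelled out in the abstract; a common way to realize the idea is an entropy over the singular-value spectrum, which is what this hypothetical Python sketch computes (values near 0 indicate low-complexity, easily compressible matrices; values near 1 indicate high complexity):

    ```python
    import numpy as np

    def svd_entropy(X):
        """Normalized entropy of the singular-value spectrum: near 0 when a few
        components explain the matrix (low complexity), near 1 when variance is
        spread over many components (high complexity)."""
        s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
        p = s ** 2 / np.sum(s ** 2)               # variance fractions
        p = p[p > 1e-12]
        return float(-(p * np.log(p)).sum() / np.log(X.shape[1]))

    rng = np.random.default_rng(0)
    low = rng.normal(size=(200, 1)) @ rng.normal(size=(1, 20))  # ~rank-1 data
    low += 0.05 * rng.normal(size=low.shape)
    high = rng.normal(size=(200, 20))                           # unstructured
    print(svd_entropy(low), svd_entropy(high))                  # small vs ~1
    ```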

  7. Incorporating prior knowledge into beam orientation optimization in IMRT

    International Nuclear Information System (INIS)

    Pugachev, Andrei M.S.; Lei Xing

    2002-01-01

    Purpose: Selection of the beam configuration in currently available intensity-modulated radiotherapy (IMRT) treatment planning systems is still based on trial-and-error search. Computer-based beam orientation optimization has the potential to improve the situation, but its practical implementation is hindered by the excessive computing time associated with the calculation. The purpose of this work is to provide an effective means to speed up beam orientation optimization by incorporating a priori geometric and dosimetric knowledge of the system, and to demonstrate the utility of the new algorithm for beam placement in IMRT. Methods and Materials: Beam orientation optimization was performed in two steps. First, the quality of each possible beam orientation was evaluated using the beam's-eye-view dosimetrics (BEVD) developed in our previous study. A simulated annealing algorithm was then employed to search for the optimal set of beam orientations, taking into account the BEVD scores of the different incident beam directions. During the calculation, sampling of gantry angles was weighted according to the BEVD score computed before the optimization: a beam direction with a higher BEVD score had a higher probability of being included in the trial configuration, and vice versa. The inclusion of the BEVD weighting in the stochastic beam angle sampling process made it possible to avoid spending valuable computing time unnecessarily at 'bad' beam angles. An iterative inverse treatment planning algorithm was used for beam intensity profile optimization during the optimization process. The BEVD-guided beam orientation optimization was applied to an IMRT treatment of a paraspinal tumor. The advantage of the new optimization algorithm was demonstrated by comparing the calculation with the conventional scheme without the BEVD weighting in the beam sampling. Results: The BEVD tool provided useful guidance for the selection of potentially good directions for beam incidence and was used
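
    A Python sketch of the weighted-sampling idea, with hypothetical BEVD scores (random placeholders here) driving the probability that an angle enters a trial configuration; the scoring and acceptance test of the annealing loop are omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical BEVD scores on a 10-degree gantry grid (placeholders only)
    angles = np.arange(0, 360, 10)
    bevd_scores = rng.random(angles.size)
    weights = bevd_scores / bevd_scores.sum()   # sampling probabilities

    def propose(config):
        """One stochastic move: replace a random beam with a new gantry angle
        drawn with probability proportional to its BEVD score."""
        trial = config.copy()
        i = rng.integers(trial.size)
        free = ~np.isin(angles, trial)          # avoid duplicate beam angles
        w = weights[free] / weights[free].sum()
        trial[i] = rng.choice(angles[free], p=w)
        return trial

    config = rng.choice(angles, size=5, replace=False)   # initial 5-beam plan
    trial = propose(config)   # would then be scored and accepted/rejected by SA
    ```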

  8. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented for performing reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, referred to as the 'failure probability function (FPF)'. The approach expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis; the computational effort required for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology
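
    The weighted-sum idea can be illustrated with a one-dimensional toy problem in Python. The limit state g(x) = 3 - x, the normal design variable, and the reference design are all hypothetical; a single Monte Carlo run at the reference design is reused, through importance weights, to estimate the failure probability at any other design:

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(3)

    # Single Monte Carlo reliability run at a reference design mu0:
    # X ~ N(mu0, 1), failure when the hypothetical limit state g(x) = 3 - x < 0
    mu0 = 0.0
    x = rng.normal(mu0, 1.0, size=200_000)
    fail = x > 3.0

    def norm_pdf(v, mu):
        return np.exp(-0.5 * (v - mu) ** 2) / math.sqrt(2.0 * math.pi)

    def failure_probability(mu):
        """FPF estimate at design mu, reusing the single sample set through
        importance weights f(x | mu) / f(x | mu0) -- a weighted sum of the
        failure indicators, in the spirit of the approach described above."""
        w = norm_pdf(x, mu) / norm_pdf(x, mu0)
        return float(np.mean(w * fail))

    for mu in (0.0, 0.5, 1.0):
        exact = 0.5 * math.erfc((3.0 - mu) / math.sqrt(2.0))
        print(f"mu={mu:.1f}  estimate={failure_probability(mu):.5f}  exact={exact:.5f}")
    ```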

  9. An adaptive Cartesian control scheme for manipulators

    Science.gov (United States)

    Seraji, H.

    1987-01-01

    An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of the auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.

  10. Optimal Joint Liability Lending and with Costly Peer Monitoring

    NARCIS (Netherlands)

    Carli, Francesco; Uras, R.B.

    2014-01-01

    This paper characterizes an optimal group loan contract with costly peer monitoring. Using a fairly standard moral hazard framework, we show that the optimal group lending contract could exhibit a joint-liability scheme. However, optimality of joint-liability requires the involvement of a group

  11. Fast rerouting schemes for protected mobile IP over MPLS networks

    Science.gov (United States)

    Wen, Chih-Chao; Chang, Sheng-Yi; Chen, Huan; Chen, Kim-Joan

    2005-10-01

    Fast rerouting is a critical traffic engineering operation in MPLS networks. To implement Mobile IP service over an MPLS network, one can use the fast rerouting operation to enhance availability and survivability: MPLS can protect the critical LSP tunnel between the Home Agent (HA) and the Foreign Agent (FA) using a fast rerouting scheme. In this paper, we propose a simple but efficient algorithm to address the triangle routing problem for Mobile IP over MPLS networks. We formulate this routing issue as a link weighting and capacity assignment (LW-CA) problem, and the derived solution is used to plan the fast restoration mechanism that protects against link or node failure. We first model the LW-CA problem as a mixed integer optimization problem whose goal is to minimize the call blocking probability on the most congested working trunk for the Mobile IP connections. Many existing network topologies are used to evaluate the performance of our scheme. Results show that our proposed scheme obtains the best performance, i.e., the smallest blocking probability, compared to other schemes.

  12. Optimized Skip-Stop Metro Line Operation Using Smart Card Data

    Directory of Open Access Journals (Sweden)

    Peitong Zhang

    2017-01-01

    Full Text Available Skip-stop operation is a low-cost approach to improving the efficiency of metro operation and the passenger travel experience. This paper proposes a novel method to optimize the skip-stop scheme for bidirectional metro lines so that the average passenger travel time is minimized. Different from the conventional "A/B" scheme, the proposed Flexible Skip-Stop Scheme (FSSS) can better accommodate spatially and temporally varied passenger demand. A genetic algorithm (GA) based approach is then developed to efficiently search for the optimal solution. A case study is conducted on a real-world bidirectional metro line in Shenzhen, China, using the time-dependent passenger demand extracted from smart card data. It is found that the optimized skip-stop operation is able to reduce the average passenger travel time, and that transit agencies may benefit from this scheme through energy and operational cost savings. Analyses are made to evaluate the effects of the fact that a certain number of passengers fail to board the right train (due to skip operations). Results show that FSSS always outperforms the all-stop scheme, even when most passengers of the skipped OD pairs are confused and cannot get on the right train.
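
    A compact GA sketch in Python for a binary skip-stop pattern (1 = stop, 0 = skip). The surrogate cost function, demand vector, and GA settings are all invented for illustration; the paper's objective is the average passenger travel time computed from smart card demand:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_stations, pop_size, n_gen = 20, 40, 100
    demand = rng.uniform(0.0, 1.0, n_stations)        # synthetic station demand

    def cost(pattern):
        """Toy surrogate: every stop adds dwell time; every skipped station
        penalizes the demand that loses direct service."""
        return 0.5 * pattern.sum() + 2.0 * (demand * (1 - pattern)).sum()

    pop = rng.integers(0, 2, (pop_size, n_stations))
    pop[:, [0, -1]] = 1                               # terminals always served
    for _ in range(n_gen):
        order = np.argsort([cost(ind) for ind in pop])
        parents = pop[order[: pop_size // 2]]         # truncation selection
        cuts = rng.integers(1, n_stations - 1, len(parents))
        kids = np.array([np.concatenate((parents[i, :c],
                                         parents[(i + 1) % len(parents), c:]))
                         for i, c in enumerate(cuts)])  # one-point crossover
        kids = np.where(rng.random(kids.shape) < 0.02, 1 - kids, kids)  # mutate
        kids[:, [0, -1]] = 1
        pop = np.vstack((parents, kids))
    best = min(pop, key=cost)                         # best skip-stop pattern
    ```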

  13. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    Directory of Open Access Journals (Sweden)

    Arnaud Chapon

    Full Text Available Searching for low-energy β contamination in industrial environments requires Liquid Scintillation Counting (LSC). This indirect measurement method demands fine control from sampling to the measurement itself. In this paper we therefore focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters, and definition of energy windows through the maximization of a Figure of Merit. Detection limits are then calculated using these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except for those relative to parameters specific to PerkinElmer, most of the results presented here can be extended to other counters. Keywords: Liquid Scintillation Counting (LSC), PerkinElmer, Tri-Carb, Smear, Swipe
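
    A small Python illustration of energy-window optimization by Figure of Merit, here taken as FOM = E²/B (a common LSC convention; the abstract does not specify its exact definition). The spectra are synthetic placeholders, not instrument data:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic source and background spectra for a hypothetical counter
    channels = np.arange(200)
    source = rng.poisson(1000.0 * np.exp(-((channels - 40.0) / 25.0) ** 2))
    background = rng.poisson(5, channels.size)

    best_fom, best_window = 0.0, None
    for lo in range(0, 150, 5):
        for hi in range(lo + 10, 200, 5):
            eff = 100.0 * source[lo:hi].sum() / source.sum()  # efficiency, %
            bkg = max(background[lo:hi].sum(), 1)             # background counts
            fom = eff ** 2 / bkg                              # FOM = E^2 / B
            if fom > best_fom:
                best_fom, best_window = fom, (lo, hi)
    print("optimal window:", best_window, " FOM:", round(best_fom, 1))
    ```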

  14. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a complete picture of the interaction of all parameters, which yields a two-stage method to estimate the impact of the relevant parameters when the parameters are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  15. A more accurate scheme for calculating Earth's skin temperature

    Science.gov (United States)

    Tsuang, Ben-Jei; Tu, Chia-Ying; Tsai, Jeng-Lin; Dracup, John A.; Arpe, Klaus; Meyers, Tilden

    2009-02-01

    The theoretical framework of the vertical discretization of a ground column for calculating Earth's skin temperature is presented. The suggested discretization is derived from the evenly-heat-content discretization with the optimal effective thickness for layer-temperature simulation. For the same number of levels, the suggested discretization is more accurate in skin temperature as well as surface ground heat flux simulations than those used in some state-of-the-art models. A proposed scheme ("op(3,2,0)") can reduce the normalized root-mean-square error (or RMSE/STD ratio) of the calculated surface ground heat flux of a cropland site significantly to 2% (or 0.9 W m⁻²), from 11% (or 5 W m⁻²) with a 5-layer scheme used in ECMWF, from 19% (or 8 W m⁻²) with a 5-layer scheme used in ECHAM, and from 74% (or 32 W m⁻²) with a single-layer scheme used in the UCLA GCM. Better accuracy can be achieved by including more layers in the vertical discretization. Similar improvements are expected for other locations with different land types, since the numerical error is inherited by the models for all land types. The proposed scheme can be easily implemented into state-of-the-art climate models for the temperature simulation of snow, ice and soil.

  16. An Effective Approach Control Scheme for the Tethered Space Robot System

    Directory of Open Access Journals (Sweden)

    Zhongjie Meng

    2014-09-01

    Full Text Available The tethered space robot system (TSR), which is composed of a platform, a gripper and a space tether, has great potential in future space missions. Given the relative motion among the platform, tether, gripper and target, an integrated approach model is derived. Then, a novel coordinated approach control scheme is presented, in which the tether tension, thrusters and the reaction wheel are all utilized. It contains open-loop trajectory optimization, feedback trajectory control and attitude control. The numerical simulation results show that the rendezvous between the TSR and the target can be realized by the proposed coordinated control scheme, and that the propellant consumption is efficiently reduced. Moreover, the control scheme performs well in the presence of initial-state perturbations, actuator characteristics and sensor errors.

  17. Economic optimization of heat pump-assisted distillation columns in methanol-water separation

    International Nuclear Information System (INIS)

    Shahandeh, Hossein; Jafari, Mina; Kasiri, Norollah; Ivakpour, Javad

    2015-01-01

    Finding an efficient alternative to the CDiC (Conventional Distillation Column) for methanol-water separation has been an attractive field of study in the literature. In this work, five heat pump-assisted schemes are proposed and compared with each other to find the optimal one: (1) VRC (Vapor Recompression Column), (2) external HIDiC (Heat-Integrated Distillation Column), (3) intensified HIDiC with feed preheater, (4) double compressor intensified HIDiC-1, and (5) double compressor intensified HIDiC-2. GA (Genetic Algorithm) is then implemented for optimization of the schemes with TAC (Total Annual Cost) as the objective function. During optimization, two new variables are added so that only an appropriate amount of the overhead stream is used in the VRC and the double compressor intensified HIDiCs, and a new binary variable is used to consider feed preheating. Although the TAC of the intensified HIDiC with feed preheater is found to be 25.0% higher than that of the CDiC, the optimal VRC, external HIDiC, and double compressor intensified HIDiC schemes reach TACs lower by 3.1%, 27.2%, 24.4%, and 34.2%, respectively. Introduced for the first time, the optimal scheme is the double compressor intensified HIDiC-2, with 34.2% TAC savings and a 70.4% TEC (Total Energy Consumption) reduction at a payback period of 3.30 years. - Highlights: • Study of an industrial distillation unit in methanol-water separation. • Optimization of different heat pump-assisted distillation columns. • Implementation of genetic algorithm during optimization. • Economic and thermodynamic comparisons of optimal results with the industrial case.

  18. Research on a New Control Scheme of Photovoltaic Grid Power Generation System

    Directory of Open Access Journals (Sweden)

    Dong-Hui Li

    2014-01-01

    Full Text Available A new control scheme for photovoltaic grid power generation systems is presented to solve the problems of conventional photovoltaic grid power generation systems. To address the oscillation and misjudgment of the traditional perturb-and-observe method, an improved perturb-and-observe method that compares the power expected at the next moment is proposed and combined with a BOOST step-up circuit to realize maximum power point tracking. To counter the harmonic pollution problem in photovoltaic grid power generation systems, a deadbeat control scheme in the fundamental-wave synchronous rotating coordinate frame of the power grid is presented. A parameter optimization scheme based on positive-feedback active frequency shift island detection is proposed to solve problems such as the non-detection zone caused by the disturbance introduced in traditional island detection methods. Finally, MATLAB/Simulink simulation results and experimental results verify the validity and superiority of the proposed scheme.
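
    For orientation, a classic perturb-and-observe loop in Python; the improvement described above (predicting the next-moment power before committing a perturbation) is noted in a comment but not implemented, and the PV power curve is a toy stand-in for a real panel model:

    ```python
    def perturb_and_observe(measure_power, v0, dv=0.5, steps=200):
        """Classic P&O MPPT: perturb the operating voltage, observe the power,
        and reverse direction when power drops. The improved scheme described
        above would first predict the next-moment power and only then commit
        the perturbation, reducing oscillation and misjudgment."""
        v = v0
        p_prev = measure_power(v)
        direction = +1
        for _ in range(steps):
            v += direction * dv          # perturb
            p = measure_power(v)         # observe
            if p < p_prev:               # power dropped: reverse next time
                direction = -direction
            p_prev = p
        return v

    # Toy PV power curve with its maximum power point near 30 V (hypothetical)
    pv_power = lambda v: max(0.0, 900.0 - (v - 30.0) ** 2)
    print(perturb_and_observe(pv_power, v0=20.0))   # settles, oscillating, near 30
    ```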

  19. Probability approaching method (PAM) and its application on fuel management optimization

    International Nuclear Information System (INIS)

    Liu, Z.; Hu, Y.; Shi, G.

    2004-01-01

    For the multi-cycle reloading optimization problem, a new solution scheme is presented. The multi-cycle problem is decoupled into a number of relatively independent mono-cycle problems; the resulting non-linear programming problem with complex constraints is then solved by a new algorithm, the probability approaching method (PAM), which is based on probability theory. Results on a simplified core model show the effectiveness of this new multi-cycle optimization scheme. (authors)

  20. Performance analysis of switch-based multiuser scheduling schemes with adaptive modulation in spectrum sharing systems

    KAUST Repository

    Qaraqe, Marwa

    2014-04-01

    This paper focuses on the development of multiuser access schemes for spectrum sharing systems whereby secondary users are allowed to share the spectrum with primary users under the condition that the interference observed at the primary receiver is below a predetermined threshold. In particular, two scheduling schemes are proposed for selecting a user among those that satisfy the interference constraint and achieve an acceptable signal-to-noise ratio level. The first scheme focuses on optimizing the average spectral efficiency by selecting the user that reports the best channel quality. In order to alleviate the relatively high feedback required by the first scheme, a second scheme based on the concept of switched diversity is proposed, where the base station (BS) scans the secondary users in a sequential manner until a user whose channel quality is above an acceptable predetermined threshold is found. We develop expressions for the statistics of the signal-to-interference and noise ratio as well as the average spectral efficiency, average feedback load, and the delay at the secondary BS. We then present numerical results for the effect of the number of users and the interference constraint on the optimal switching threshold and the system performance and show that our analysis results are in perfect agreement with the numerical results. © 2014 John Wiley & Sons, Ltd.